
Uniform approximation rates and metric entropy of shallow neural networks

Publisher
Springer Journals
Copyright
Copyright © The Author(s), under exclusive licence to Springer Nature Switzerland AG 2022
eISSN
2197-9847
DOI
10.1007/s40687-022-00346-y

Abstract

We study the approximation properties of the variation spaces corresponding to shallow neural networks with respect to the uniform norm. Specifically, we consider the spectral Barron space, which consists of the convex hull of decaying Fourier modes, and the convex hull of indicator functions of half-spaces, which corresponds to shallow neural networks with sigmoidal activation function. Up to logarithmic factors, we determine the metric entropy and nonlinear dictionary approximation rates for these spaces with respect to the uniform norm. Combined with previous results with respect to the $L^2$-norm, this also gives the metric entropy up to logarithmic factors with respect to any $L^p$-norm with $1 \le p \le \infty$. In addition, we study the approximation rates for high-order spectral Barron spaces using shallow neural networks with ReLU$^k$ activation function. Specifically, we show that for a sufficiently high-order spectral Barron space, ReLU$^k$ networks are able to achieve an approximation rate of $n^{-(k+1)}$ with respect to the uniform norm.
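
For orientation, here is a minimal LaTeX sketch of the standard objects behind these statements. It is not taken from the paper: the network form $f_n$, the activation $\sigma_k$, and the spectral Barron norm $\|\cdot\|_{\mathcal{B}^s}$ are generic notation from the shallow-network approximation literature, and the paper's precise definitions and normalizations may differ.

% A shallow network with n neurons; sigma_k(t) = max(0,t)^k is the ReLU^k activation.
f_n(x) = \sum_{j=1}^{n} a_j \, \sigma_k(\omega_j \cdot x + b_j), \qquad \sigma_k(t) = \max(0, t)^k .
% A common definition of the spectral Barron space of order s: functions with a
% polynomially weighted, absolutely integrable Fourier transform.
\|f\|_{\mathcal{B}^s} = \int_{\mathbb{R}^d} (1 + |\xi|)^{s} \, |\hat{f}(\xi)| \, d\xi < \infty .
% In this notation, the final result of the abstract reads: for s sufficiently large,
\inf_{a_j, \omega_j, b_j} \bigl\| f - f_n \bigr\|_{L^\infty} \lesssim \|f\|_{\mathcal{B}^s} \, n^{-(k+1)} .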

Journal

Research in the Mathematical Sciences, Springer Journals

Published: Sep 1, 2022

Keywords: Neural networks; Approximation rates; Metric entropy; Finite element methods
