Title
Sobolev-Type Embeddings for Neural Network Approximation Spaces
Abstract
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated (with error measured in $L^p$) by ReLU neural networks with an increasing number of coefficients, subject to bounds on the magnitude of the coefficients and the number of hidden layers. We prove embedding theorems between these spaces for different values of p. Furthermore, we derive sharp embeddings of these approximation spaces into Hölder spaces. We find that, analogous to the case of classical function spaces (such as Sobolev or Besov spaces), it is possible to trade “smoothness” (i.e., approximation rate) for increased integrability. Combined with our earlier results in Grohs and Voigtlaender (Proof of the theory-to-practice gap in deep learning via sampling complexity bounds for neural network approximation spaces, 2021. arXiv preprint arXiv:2104.02746), our embedding theorems imply a somewhat surprising fact related to “learning” functions from a given neural network space based on point samples: if accuracy is measured with respect to the uniform norm, then an optimal “learning” algorithm for reconstructing functions that are well approximable by ReLU neural networks is simply given by piecewise constant interpolation on a tensor product grid.
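To illustrate the reconstruction procedure mentioned at the end of the abstract, the following is a minimal sketch (not taken from the paper) of piecewise constant interpolation from point samples on a uniform tensor product grid over [0, 1]^d. The function names, the grid resolution n_per_dim, and the toy target function are hypothetical choices made for this example only.

```python
# Illustrative sketch (assumption, not the authors' code): piecewise constant
# interpolation of a function on [0, 1]^d from samples on a uniform tensor
# product grid.
import numpy as np

def fit_piecewise_constant(f, d, n_per_dim):
    """Sample f at the lower-left corner of each of the n_per_dim**d grid cells."""
    corners = np.linspace(0.0, 1.0, n_per_dim, endpoint=False)   # cell anchor coordinates
    grids = np.meshgrid(*([corners] * d), indexing="ij")
    points = np.stack([g.ravel() for g in grids], axis=-1)       # shape (n_per_dim**d, d)
    return np.apply_along_axis(f, 1, points).reshape([n_per_dim] * d)

def evaluate_piecewise_constant(values, x):
    """Return the stored sample of the grid cell containing x (constant on each cell)."""
    n_per_dim = values.shape[0]
    idx = np.minimum((np.asarray(x) * n_per_dim).astype(int), n_per_dim - 1)
    return values[tuple(idx)]

if __name__ == "__main__":
    f = lambda x: np.sin(2 * np.pi * x[0]) * x[1]                # toy target function
    vals = fit_piecewise_constant(f, d=2, n_per_dim=64)
    x = np.array([0.3, 0.7])
    print(evaluate_piecewise_constant(vals, x), f(x))            # approximation vs. true value
```

The point of the sketch is only that the reconstruction uses the sampled values directly, cell by cell, with no trained neural network involved; the paper's result concerns the optimality of this simple scheme in the uniform norm, not its implementation details.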
Keywords
Deep neural networks
Approximation spaces
Hölder spaces
Embedding theorems
Optimal learning algorithms
Object type
Language
English [eng]
Persistent identifier
phaidra.univie.ac.at/o:1942011
Published in
Title
Constructive Approximation
Volume
57
Issue
2
ISSN
0176-4276
Publication date
2023
First page
579
Last page
599
Publisher
Springer Science and Business Media LLC
Publication date
2023
Accessibility
Rights statement
© The Author(s) 2023
