Abstract
The universal approximation theorem is generalised to uniform convergence on the (noncompact) input space R^n. All continuous functions that vanish at infinity can be uniformly approximated by neural networks with one hidden layer, for all activation functions φ that are continuous, nonpolynomial, and asymptotically polynomial at ±∞. When φ is moreover bounded, we exactly determine which functions can be uniformly approximated by neural networks, with the following unexpected results. Let N̄_φ^l(R^n) denote the vector space of functions that are uniformly approximable by neural networks with l hidden layers and n inputs. For all n and all l≥2, N̄_φ^l(R^n) turns out to be an algebra under the pointwise product. If the left limit of φ differs from its right limit (for instance, when φ is sigmoidal), the algebra N̄_φ^l(R^n) (l≥2) is independent of φ and l, and equals the closed span of products of sigmoids composed with one-dimensional projections. If the left limit of φ equals its right limit, N̄_φ^l(R^n) (l≥1) equals the (real part of the) commutative resolvent algebra, a C*-algebra used in mathematical approaches to quantum theory. In the latter case, the algebra is independent of l≥1, whereas in the former case N̄_φ^2(R^n) is strictly bigger than N̄_φ^1(R^n).
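The one-hidden-layer case can be illustrated numerically. The sketch below (not from the paper) approximates a Gaussian, a continuous function vanishing at infinity, by a single hidden layer of sigmoids: the hidden weights and biases are drawn at random and only the output layer is fitted by least squares, with a finite grid on [-8, 8] standing in for R. All parameter choices (100 hidden units, the grid, the weight scale) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on R that vanishes at infinity.
f = lambda x: np.exp(-x**2)

# Bounded sigmoidal activation: left limit 0, right limit 1.
sigma = lambda t: 1.0 / (1.0 + np.exp(-t))

x = np.linspace(-8.0, 8.0, 2001)        # finite grid standing in for R
W = rng.normal(scale=2.0, size=100)     # random hidden weights (assumption)
b = rng.uniform(-16.0, 16.0, size=100)  # random hidden biases (assumption)
H = sigma(np.outer(x, W) + b)           # hidden-layer activations, shape (2001, 100)

# Fit only the output layer by least squares.
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)

# Sup-norm error of the network H @ c against f on the grid.
err = np.max(np.abs(H @ c - f(x)))
print(f"sup-norm error on grid: {err:.2e}")
```

This only demonstrates uniform approximation on a compact grid; the theorem's content is that the approximation can be made uniform on all of R^n, which no finite plot verifies.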
| Original language | English |
|---|---|
| Article number | 106181 |
| Number of pages | 11 |
| Journal | Neural Networks |
| Volume | 173 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' – Taverne project (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords
- Deep learning
- Feedforward ANN
- Functional analysis
- Ridge functions
- Uniform convergence
- Universal approximation theorem