We analyse spaces of deep neural networks with a fixed architecture. We demonstrate that, when interpreted as sets of functions, these spaces exhibit many unfavourable properties: they are highly non-convex, and they are not closed with respect to the L^p-norms, for 0 < p < ∞ and all commonly used activation functions. They are also not closed with respect to the L^∞-norm for almost all practically used activation functions; here, the (parametric) ReLU is the only exception. Finally, we show that for every practically used activation function, the map that sends a family of network weights to the function realized by the associated network is not inverse stable; that is, small perturbations of the realized function cannot, in general, be matched by small perturbations of the weights.
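To make the first two claims concrete, here is a minimal worked sketch; the specific one- and two-neuron examples are illustrative choices in the spirit of the results above, not constructions quoted from the paper.

% Non-convexity: consider shallow ReLU networks with a single hidden neuron,
% realizing exactly the functions x \mapsto a\,\varrho(wx + b) with
% \varrho(t) = \max\{0, t\}. The two realizable functions
\[
  f(x) = \varrho(x), \qquad g(x) = \varrho(-x)
\]
% have the midpoint
\[
  \tfrac{1}{2}\,f(x) + \tfrac{1}{2}\,g(x) = \tfrac{1}{2}\,\lvert x \rvert,
\]
% which is not realizable with one neuron: every function a\,\varrho(wx + b)
% is constant on a half-line, while |x|/2 is non-constant on both sides of 0.
% Hence the set of one-neuron realizations is not convex.
%
% Non-closedness: for a smooth activation \sigma, the difference quotients
\[
  h_n(x) = n\Bigl(\sigma\bigl(x + \tfrac{1}{n}\bigr) - \sigma(x)\Bigr)
\]
% are realizable with two hidden neurons (output weights \pm n) and converge
% locally uniformly to \sigma'. Whenever \sigma' is not itself realizable by
% the same architecture, the limit lies outside the set, so the set is not
% closed; note that the weights necessarily blow up along such a sequence.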