88 — On the Complexity of Learning Neural Networks

Song et al. (arXiv:1707.04615)

Read on 16 November 2017
#neural-network  #machine-learning  #computational-complexity  #lower-bound  #proof  #complexity  #computation 

If you model learning as making "statistical queries", asking for the expectation of some function of a labeled example over the target distribution, up to a tolerance, then you can bound how quickly an algorithm can capture that distribution in terms of how many queries it needs and how precise they have to be. For example, if the target function is computed by a small enough network, it can be learned with a modest number of fairly coarse queries.
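To make the model concrete, here is a minimal sketch of a statistical query oracle: the learner hands over a bounded function of a labeled example and gets back its expectation to within the tolerance. The names (`sq_oracle`, `sampler`) and the finite-sample approximation are my own illustration, not the paper's construction.

```python
import numpy as np

def sq_oracle(query, sampler, tolerance, n_samples=100_000, rng=None):
    """Answer a statistical query: return E[query(x, y)] over the
    target distribution, up to an additive error of `tolerance`.
    Here the expectation is approximated from a finite sample and
    the allowed error is injected as bounded noise."""
    rng = np.random.default_rng() if rng is None else rng
    xs, ys = sampler(n_samples)
    estimate = np.mean([query(x, y) for x, y in zip(xs, ys)])
    return estimate + rng.uniform(-tolerance, tolerance)

# Toy target distribution: x ~ N(0, I_5), y = sign(x_0)
def sampler(n, rng=np.random.default_rng(0)):
    xs = rng.standard_normal((n, 5))
    return xs, np.sign(xs[:, 0])

# A query measuring correlation between the label and sign(x_0):
print(sq_oracle(lambda x, y: y * np.sign(x[0]), sampler, tolerance=0.01))
# prints ~1.0, up to tolerance and sampling error
```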

But the authors show that even when the target is computed by a small neural network, any statistical query algorithm needs an exponential number of queries to correctly model the target distribution. They combine this query complexity with the observation that most practical learning algorithms, SGD included, interact with the data only through statistical queries, in order to establish a lower bound on the complexity of learning a net.
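The reason the lower bound bites is that gradient-based training only touches data through expectations, so each gradient coordinate is itself a statistical query. Here is a rough sketch of that framing for a linear model with squared loss; the setup and names are my illustration, not the paper's notation:

```python
import numpy as np

def gradient_via_sq(w, oracle, dim):
    """Population gradient of the squared loss, recovered one
    coordinate at a time using only statistical queries. Each
    coordinate is an expectation of a function of one example,
    which is exactly what an SQ oracle answers:
        d/dw_i E[(w.x - y)^2] = E[2 * (w.x - y) * x_i]
    """
    return np.array([
        oracle(lambda x, y, i=i: 2.0 * (w @ x - y) * x[i])
        for i in range(dim)
    ])

# Usage with an idealized (noise-free) oracle over a toy distribution:
rng = np.random.default_rng(0)
xs = rng.standard_normal((50_000, 3))
ys = xs @ np.array([1.0, -2.0, 0.5])               # ground-truth linear labels
oracle = lambda q: np.mean([q(x, y) for x, y in zip(xs, ys)])
print(gradient_via_sq(np.zeros(3), oracle, dim=3))  # approx -2 * [1, -2, 0.5]
```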

They first prove this lower bound for networks with a single hidden layer, and then show that it applies to a net as a whole by applying the same technique to each layer.
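For scale, the "single layer" networks in question are depth-2: one hidden layer of smooth (e.g. logistic) units feeding a linear output. A minimal sketch of that function class, with placeholder weights and my own notation:

```python
import numpy as np

def depth2_net(x, W, b, v):
    """f(x) = v . sigmoid(W x + b): a one-hidden-layer network with
    logistic activations, the small architecture the lower bound
    already applies to."""
    return v @ (1.0 / (1.0 + np.exp(-(W @ x + b))))

# A random instance with n inputs and k hidden units (placeholders):
rng = np.random.default_rng(0)
n, k = 8, 16
W, b, v = rng.standard_normal((k, n)), rng.standard_normal(k), rng.standard_normal(k)
print(depth2_net(rng.standard_normal(n), W, b, v))
```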

Understanding the computational complexity of learning neural network families is important, especially as tasks and training sets get larger and the resources required to train or run a single network grow by orders of magnitude.