This paper demonstrates that efficient Turing computability implies the existence of compositionally sparse circuit representations and corresponding neural approximants. Specifically, it shows that any efficiently Turing-computable function can be represented by a bounded-fan-in Boolean circuit of polynomial size and, in turn, approximated by a deep neural network of polynomial size and depth.
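To make the circuit-to-network step concrete, here is a minimal illustrative sketch (our own construction, not the paper's specific encoding): each bounded-fan-in Boolean gate can be exactly emulated on {0, 1} inputs by a single ReLU unit, and composing such units gate by gate yields a deep network whose size and depth track those of the circuit.

```python
def relu(z):
    """Rectified linear unit."""
    return max(z, 0.0)

# One ReLU unit per gate; exact on Boolean inputs {0, 1}.
# (Gate encodings chosen for illustration.)
def AND(x, y):
    return relu(x + y - 1.0)

def OR(x, y):
    return 1.0 - relu(1.0 - x - y)

def NOT(x):
    return relu(1.0 - x)

# Composing gates layer by layer mirrors the circuit's depth:
# XOR(x, y) = OR(x, y) AND NOT(AND(x, y)) -- a depth-2 circuit
# becomes a depth-2 stack of ReLU units.
def XOR(x, y):
    return AND(OR(x, y), NOT(AND(x, y)))

for x in (0.0, 1.0):
    for y in (0.0, 1.0):
        print(int(x), int(y), "->", int(XOR(x, y)))
```

Because each gate has bounded fan-in, every ReLU unit reads only a constant number of inputs, which is the compositional sparsity the paper's argument exploits.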
This result provides fundamental theoretical insight into the capabilities of deep learning, and may help guide the design of more efficient and powerful neural network architectures.