
On efficiently computable functions, deep networks and sparse compositionality

📄 Abstract

We show that efficient Turing computability at any fixed input/output precision implies the existence of compositionally sparse (bounded-fan-in, polynomial-size) DAG representations and of corresponding neural approximants achieving the target precision. Concretely: if $f:[0,1]^d\to\mathbb{R}^m$ is computable in time polynomial in the bit-depths, then for every pair of precisions $(n,m_{\mathrm{out}})$ there exists a bounded-fan-in Boolean circuit of size and depth $\mathrm{poly}(n+m_{\mathrm{out}})$ computing the discretized map; replacing each gate by a constant-size neural emulator yields a deep network of size/depth $\mathrm{poly}(n+m_{\mathrm{out}})$ that achieves accuracy $\varepsilon=2^{-m_{\mathrm{out}}}$. We also relate these constructions to compositional approximation rates [MhaskarPoggio2016b, poggio_deep_shallow_2017, Poggio2017, Poggio2023HowDS] and to optimization viewed as hierarchical search over sparse structures.
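To make the gate-replacement step concrete, here is a minimal sketch (an illustration, not taken from the paper) of constant-size ReLU emulators for AND, OR, and NOT. Each Boolean gate of the bounded-fan-in circuit can be replaced by such a unit; exactness on {0,1} inputs is all the construction needs, so the resulting network inherits the circuit's poly(n + m_out) size and depth.

```python
# Sketch (assumption for illustration): constant-size ReLU "gate emulators".
# Replacing every gate of a bounded-fan-in Boolean circuit by one of these
# units yields a ReLU network whose size and depth track the circuit's.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def neural_and(a, b):
    # AND(a, b) = relu(a + b - 1) is exact for a, b in {0, 1}
    return relu(a + b - 1.0)

def neural_or(a, b):
    # OR(a, b) = 1 - relu(1 - a - b) = min(a + b, 1) for a, b in {0, 1}
    return 1.0 - relu(1.0 - a - b)

def neural_not(a):
    # NOT(a) = 1 - a (a purely affine layer)
    return 1.0 - a

# Verify exactness on all Boolean inputs.
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert neural_and(a, b) == float(int(a) & int(b))
        assert neural_or(a, b) == float(int(a) | int(b))
    assert neural_not(a) == float(1 - int(a))
print("constant-size ReLU emulators reproduce AND/OR/NOT exactly on {0,1}")
```

Since correctness is only required on the discretized (bit-level) inputs, no approximation error is introduced at the gate level; the overall accuracy is set by the output precision m_out.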
Authors (1)
Tomaso Poggio
Submitted
October 13, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

This paper shows that efficient Turing computability, at any fixed input/output precision, implies the existence of compositionally sparse (bounded-fan-in, polynomial-size) circuit representations and of corresponding neural approximants: every efficiently computable function can be represented by a bounded-fan-in Boolean circuit and approximated to the target precision by a deep network whose size and depth are polynomial in the number of precision bits. A numeric illustration of the precision target follows below.
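As a quick check of the accuracy target ε = 2^(-m_out) (an illustrative sketch under assumed notation, not the paper's construction): computing the discretized map at output precision m_out bits leaves at most 2^(-m_out) error per output coordinate, which is exactly the accuracy the circuit-to-network construction guarantees.

```python
# Sketch (assumption for illustration): quantizing outputs to m_out bits
# bounds the per-coordinate error by 2**(-m_out), the paper's accuracy target.
import numpy as np

def quantize(y, m_out):
    """Round y in [0, 1] down to m_out bits (the discretized output)."""
    scale = 2 ** m_out
    return np.floor(y * scale) / scale

rng = np.random.default_rng(0)
f = lambda x: 0.5 * (np.sin(2 * np.pi * x) + 1.0)  # any map into [0, 1]

for m_out in (4, 8, 16):
    x = rng.random(1000)
    err = np.max(np.abs(f(x) - quantize(f(x), m_out)))
    assert err <= 2.0 ** (-m_out)
    print(f"m_out = {m_out:2d}: max quantization error {err:.2e} <= 2^-{m_out}")
```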

Business Value

Provides fundamental theoretical insights into the capabilities of deep learning, potentially guiding the design of more efficient and powerful neural network architectures.