This paper systematically clarifies the different definitions of "linear region" used for ReLU networks and analyzes the computational complexity of counting them. It proves that counting is NP- and #P-hard even for networks with a single hidden layer, establishes strong hardness-of-approximation results for deeper networks, and shows that polynomial-space counting algorithms exist under some of the definitions.
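To see why exact counting is costly in practice, here is a minimal sketch (not from the paper) that counts one common flavor of region, the non-empty activation regions of a single-hidden-layer ReLU network, by brute force. Each hidden unit defines a hyperplane, and each 0/1 activation pattern is checked for feasibility with a linear program; the 2^n enumeration illustrates the exponential blow-up that the hardness results formalize. The weights and the helper name `count_activation_regions` are illustrative, not from the paper.

```python
# Minimal sketch: brute-force count of non-empty (open) activation regions
# of a 1-hidden-layer ReLU network. An activation pattern s in {0,1}^n is
# realized iff the corresponding system of strict linear inequalities on the
# input x is feasible, which we check with an LP over a bounded box.
import itertools
import numpy as np
from scipy.optimize import linprog

def count_activation_regions(W, b, eps=1e-6):
    """Count activation patterns realizable by some input x.

    W: (n_units, d) hidden-layer weights; b: (n_units,) biases.
    """
    n, d = W.shape
    count = 0
    for pattern in itertools.product([0, 1], repeat=n):
        # Encode as A x <= ub:  active unit  -> w.x + b >= eps
        #                       inactive one -> w.x + b <= -eps
        A, ub = [], []
        for i, active in enumerate(pattern):
            if active:
                A.append(-W[i]); ub.append(b[i] - eps)
            else:
                A.append(W[i]); ub.append(-b[i] - eps)
        res = linprog(c=np.zeros(d), A_ub=np.array(A), b_ub=np.array(ub),
                      bounds=[(-1e3, 1e3)] * d, method="highs")
        if res.status == 0:  # feasible pattern -> one non-empty region
            count += 1
    return count

# Example: 3 random hyperplanes in the plane. Three lines in general
# position cut R^2 into at most 1 + 3 + C(3,2) = 7 regions, so one of
# the 2^3 = 8 patterns is infeasible.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), rng.normal(size=3)
print(count_activation_regions(W, b))
```

For one hidden layer the activation regions coincide with the pieces of a hyperplane arrangement; the paper's point is that even in this simplest setting, counting exactly is already NP- and #P-hard, and the distinctions between region definitions matter once the network gets deeper.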
The results provide a fundamental theoretical understanding of neural-network complexity, which can inform the design of more efficient and interpretable models and guide research into approximation algorithms.