
The Computational Complexity of Counting Linear Regions in ReLU Neural Networks

Abstract

An established measure of the expressive power of a given ReLU neural network is the number of linear regions into which it partitions the input space. There exist many different, non-equivalent definitions of what a linear region actually is. We systematically assess which papers use which definitions and discuss how they relate to one another. We then analyze the computational complexity of counting the number of such regions for the various definitions. Generally, this turns out to be an intractable problem. We prove NP- and #P-hardness results even for networks with a single hidden layer, and strong hardness-of-approximation results for networks with two or more hidden layers. Finally, on the algorithmic side, we demonstrate that counting linear regions can at least be achieved in polynomial space for some common definitions.
Authors (3)
Moritz Stargalla
Christoph Hertrich
Daniel Reichman
Submitted
May 22, 2025
arXiv Category
cs.CC
arXiv PDF

Key Contributions

This paper systematically clarifies the different, non-equivalent definitions of a linear region in ReLU networks and analyzes the computational complexity of counting such regions under each definition. It proves NP- and #P-hardness even for networks with a single hidden layer, and strong hardness-of-approximation results for networks with two or more hidden layers, while showing that polynomial-space counting algorithms exist for some common definitions. A brute-force baseline that makes the counting problem concrete is sketched below.
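
To illustrate what is being counted, here is a minimal Python sketch (not the paper's algorithm) for one common definition the paper discusses: the full-dimensional activation regions of a one-hidden-layer network x ↦ ReLU(Wx + b). It enumerates all 2^n activation patterns and tests each with a linear program via SciPy's linprog. The function name count_regions_one_layer, the bounding box, and the tolerance eps are illustrative assumptions. The exponential pattern enumeration also shows why brute force is infeasible and why polynomial space, rather than polynomial time, is the natural algorithmic target here.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def count_regions_one_layer(W, b, box=10.0, eps=1e-7):
    """Count full-dimensional activation regions of x -> ReLU(Wx + b)
    by brute-force enumeration of activation patterns.

    W: (n, d) weight matrix, b: (n,) bias vector.
    Each of the 2^n patterns needs one LP, so the method uses memory
    polynomial in the input size but exponential time. Regions are
    intersected with a large box (an assumption for illustration)
    to keep every LP bounded.
    """
    n, d = W.shape
    count = 0
    for pattern in itertools.product([0, 1], repeat=n):
        # Region of this pattern: sign(W_i x + b_i) agrees with pattern_i.
        # Encode as A x <= c and ask for a point with slack t > 0,
        # i.e. maximize t subject to A x + t <= c and |x_j| <= box.
        signs = np.where(np.array(pattern) == 1, -1.0, 1.0)
        A = signs[:, None] * W                   # (n, d)
        c = -signs * b                           # (n,)
        A_ub = np.hstack([A, np.ones((n, 1))])   # last column is slack t
        obj = np.zeros(d + 1)
        obj[-1] = -1.0                           # linprog minimizes, so max t
        bounds = [(-box, box)] * d + [(0, None)]
        res = linprog(obj, A_ub=A_ub, b_ub=c, bounds=bounds, method="highs")
        if res.status == 0 and res.x[-1] > eps:
            count += 1          # pattern realized on a full-dimensional set
    return count

# Tiny example: 3 random hyperplanes in the plane. Three lines in general
# position cut R^2 into at most C(3,0) + C(3,1) + C(3,2) = 7 regions.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
b = rng.normal(size=3)
print(count_regions_one_layer(W, b))
```

The slack variable t makes the LP test strict feasibility, so only patterns realized on a set with nonempty interior are counted, matching the full-dimensional-region notion; the other definitions surveyed in the paper would change exactly this test.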

Business Value

Provides a fundamental theoretical understanding of neural-network expressivity and the cost of measuring it, which can inform the design of more efficient and interpretable models and guide research into approximation algorithms.