Abstract: Neural networks have become standard tools in many areas, yet important
statistical questions about them remain open. This paper studies how much
data are needed to train a ReLU feed-forward neural network. Our theoretical
and empirical results suggest that the generalization error of ReLU
feed-forward neural networks scales at the rate $1/\sqrt{n}$ in the sample size
$n$ rather than the usual "parametric rate" $1/n$. Thus, broadly speaking, our
results underpin the common belief that neural networks need "many" training
samples.
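
The central claim is a scaling rate: the generalization error decays like $1/\sqrt{n}$ rather than $1/n$. A minimal sketch of how such a rate can be checked empirically is given below: fit a line to test error versus sample size on the log-log scale and read off the exponent. The sample sizes, the error model, and the constant are purely illustrative assumptions, not quantities taken from the paper's experiments.

```python
import numpy as np

# Hypothetical illustration: estimate the exponent alpha in
# error ~ C * n^(-alpha) from (sample size, test error) pairs via a
# least-squares fit on the log-log scale. A slope near 0.5 matches the
# 1/sqrt(n) rate discussed in the abstract; a slope near 1.0 would
# match the "parametric" 1/n rate.

rng = np.random.default_rng(0)
sample_sizes = np.array([500, 1000, 2000, 4000, 8000, 16000])

# Synthetic errors generated to follow a 1/sqrt(n) law up to small
# multiplicative noise, standing in for measured test errors.
test_errors = 10.0 / np.sqrt(sample_sizes) * np.exp(
    rng.normal(0.0, 0.05, size=sample_sizes.shape)
)

# Fit log(error) = log(C) - alpha * log(n); the slope equals -alpha.
slope, intercept = np.polyfit(np.log(sample_sizes), np.log(test_errors), deg=1)
alpha = -slope

print(f"estimated scaling exponent alpha ≈ {alpha:.2f}")  # close to 0.5 here
```

On real experiments one would replace the synthetic `test_errors` with measured test errors at each training-set size; an estimated exponent near 0.5 would be consistent with the $1/\sqrt{n}$ rate suggested by the paper.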