📄 Abstract
The start of deep neural network training is characterized by a brief yet
critical phase that lasts from the beginning of training until the accuracy
reaches approximately 50%. During this phase, disordered representations
rapidly transition toward ordered structure, and we term this phase the
Enlightenment Period. Through theoretical modeling based on phase transition
theory and experimental validation, we reveal that applying Mixup data
augmentation during this phase has a dual effect: it introduces a Gradient
Interference Effect that hinders performance, while also providing a beneficial
Activation Revival Effect to restore gradient updates for saturated neurons. We
further demonstrate that this negative interference diminishes as the sample
set size or the model parameter size increases, thereby shifting the balance
between these two effects. Based on these findings, we propose three strategies
that improve performance by solely adjusting the training data distribution
within this brief period: the Mixup Pause Strategy for small-scale scenarios,
the Alpha Boost Strategy for large-scale scenarios with underfitting, and the
High-Loss Removal Strategy for tasks where Mixup is inapplicable (e.g., time
series and large language models). Extensive experiments show that these
strategies achieve superior performance across diverse architectures such as
ViT and ResNet on datasets including CIFAR and ImageNet-1K. Ultimately, this
work offers a novel perspective on enhancing model performance by strategically
capitalizing on the dynamics of the brief and crucial early stages of training.
Code is available at https://anonymous.4open.science/r/code-A5F1/.
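The following is a minimal sketch, not the authors' released implementation (see the code link above), illustrating standard Mixup together with the Mixup Pause idea described in the abstract: Mixup is skipped while accuracy remains below roughly 50%, i.e., during the Enlightenment Period. The ~0.5 accuracy threshold, and names such as `mixup_batch` and `enlightenment_over`, are illustrative assumptions.

```python
# Hypothetical sketch of Mixup with an early-phase pause (not the authors' code).
import numpy as np
import torch


def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Standard Mixup: convex combination of a batch with a shuffled copy of itself."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0), device=x.device)
    mixed_x = lam * x + (1.0 - lam) * x[index]
    return mixed_x, y, y[index], lam


def train_epoch(model, loader, optimizer, criterion, running_acc, alpha=0.2):
    """One epoch; Mixup is applied only after accuracy passes ~50%
    (the Mixup Pause Strategy for small-scale scenarios)."""
    enlightenment_over = running_acc >= 0.5  # illustrative switch
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        if enlightenment_over:
            mx, ya, yb, lam = mixup_batch(x, y, alpha)
            out = model(mx)
            loss = lam * criterion(out, ya) + (1.0 - lam) * criterion(out, yb)
        else:
            loss = criterion(model(x), y)  # plain training during the pause
        loss.backward()
        optimizer.step()
```

The Alpha Boost Strategy mentioned above would correspond, in this sketch, to raising the `alpha` parameter of the Beta distribution in large-scale, underfitting settings rather than pausing Mixup.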
Authors (4)
Tiantian Liu
Meng Wan
Jue Wang
Ningming Nie