
FastBoost: Progressive Attention with Dynamic Scaling for Efficient Deep Learning

Abstract

We present FastBoost, a parameter-efficient neural architecture that achieves state-of-the-art performance on CIFAR benchmarks through a novel Dynamically Scaled Progressive Attention (DSPA) mechanism. Our design establishes new efficiency frontiers:

- CIFAR-10: 95.57% accuracy (0.85M parameters) and 93.80% accuracy (0.37M parameters)
- CIFAR-100: 81.37% accuracy (0.92M parameters) and 74.85% accuracy (0.44M parameters)

The gains stem from three innovations in DSPA: (1) Adaptive Fusion: learned channel-spatial attention blending with dynamic weights; (2) Phase Scaling: training-stage-aware intensity modulation (from 0.5 to 1.0); (3) Residual Adaptation: self-optimized skip connections (gamma from 0.5 to 0.72). By integrating DSPA with enhanced MBConv blocks, FastBoost achieves a 2.1× parameter reduction over MobileNetV3 while improving CIFAR-10 accuracy by 3.2 percentage points. The architecture features dual attention pathways with real-time weight adjustment, cascaded refinement layers (increasing gradient flow by 12.7%), and a hardware-friendly design (0.28G FLOPs). This co-optimization of dynamic attention and efficient convolution operations yields strong parameter-accuracy trade-offs, enabling deployment on resource-constrained edge devices without accuracy degradation.
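The paper does not include an implementation, but the three DSPA components described in the abstract map naturally onto a small attention module. Below is a minimal PyTorch sketch assuming SE-style channel attention, a single-convolution spatial pathway, a learned fusion weight, an externally scheduled phase scale, and a learnable residual gamma initialized at 0.5; layer shapes, names, and the exact fusion and scaling rules are illustrative assumptions, not the authors' design.

```python
# Illustrative sketch of a DSPA-style block. The paper publishes no code;
# this is a reconstruction of the three described mechanisms, not FastBoost itself.
import torch
import torch.nn as nn


class DSPA(nn.Module):
    """Dynamically Scaled Progressive Attention (hypothetical reconstruction)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-attention pathway (squeeze-and-excitation style).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial-attention pathway.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # (1) Adaptive fusion: learned blending weight between the two pathways.
        self.fusion_logit = nn.Parameter(torch.zeros(1))
        # (3) Residual adaptation: learnable skip-connection scale, initialized
        # at 0.5 (the abstract reports gamma drifting to roughly 0.72).
        self.gamma = nn.Parameter(torch.tensor(0.5))
        # (2) Phase scaling: training-stage-aware intensity, set externally.
        self.register_buffer("phase_scale", torch.tensor(0.5))

    def set_phase(self, progress: float) -> None:
        """Ramp attention intensity from 0.5 to 1.0 over training progress in [0, 1]."""
        self.phase_scale.fill_(0.5 + 0.5 * float(progress))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = self.channel_att(x) * x          # channel-refined features
        sa = self.spatial_att(x) * x          # spatially refined features
        w = torch.sigmoid(self.fusion_logit)  # dynamic blend weight in (0, 1)
        att = w * ca + (1.0 - w) * sa         # adaptive fusion of both pathways
        # Phase-scaled attention plus gamma-weighted residual (skip) path.
        return self.phase_scale * att + self.gamma * x
```

Calling set_phase(epoch / total_epochs) once per epoch would realize the 0.5 to 1.0 intensity ramp described in the abstract; the actual schedule used by the authors is not specified.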
Authors: JunXi Yuan
Submitted: November 2, 2025
arXiv Category: cs.CV

Key Contributions

Introduces FastBoost, a parameter-efficient neural architecture featuring a novel Dynamically Scaled Progressive Attention (DSPA) mechanism. DSPA incorporates adaptive fusion, phase scaling, and residual adaptation to achieve state-of-the-art accuracy on CIFAR benchmarks with significantly reduced parameter counts compared to existing models like MobileNetV3.
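To make the contribution concrete, the sketch below shows one way the DSPA module sketched earlier could sit inside an MBConv-style inverted-residual block; the expansion ratio, kernel size, and placement of the attention after the depthwise convolution are assumptions for illustration, not the paper's published block design.

```python
# Hypothetical MBConv-style block with DSPA attention on the expanded features.
# Assumes the DSPA class from the earlier sketch is in scope.
import torch.nn as nn


class MBConvDSPA(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, expand: int = 4, stride: int = 1):
        super().__init__()
        mid = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),                        # pointwise expansion
            nn.BatchNorm2d(mid),
            nn.SiLU(inplace=True),
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),   # depthwise conv
            nn.BatchNorm2d(mid),
            nn.SiLU(inplace=True),
            DSPA(mid),                                                   # attention on expanded features
            nn.Conv2d(mid, out_ch, 1, bias=False),                       # pointwise projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out
```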

Business Value

Enables the deployment of high-performance computer vision models on resource-constrained devices (e.g., mobile phones, edge devices), leading to more efficient and powerful AI applications.