StretchySnake: Flexible SSM Training Unlocks Action Recognition Across Spatio-Temporal Scales

Abstract

State space models (SSMs) have emerged as a competitive alternative to transformers in various tasks. Their linear complexity and hidden-state recurrence make them particularly attractive for modeling long sequences, where attention becomes quadratically expensive. However, current training methods for video understanding are tailored towards transformers and fail to fully leverage the unique attributes of SSMs. For example, video models are often trained at a fixed resolution and video length to balance the quadratic cost of attention against performance. Consequently, these models suffer degraded performance when evaluated on videos with spatial and temporal resolutions unseen during training, a property we call spatio-temporal inflexibility. In the context of action recognition, this severely limits a model's ability to retain performance across both short- and long-form videos. We therefore propose a flexible training method that leverages and improves the inherent adaptability of SSMs. Our method samples videos at varying temporal and spatial resolutions during training and dynamically interpolates model weights to accommodate any spatio-temporal scale. This instills our SSM, which we call StretchySnake, with spatio-temporal flexibility and enables it to seamlessly handle videos ranging from short, fine-grained clips to long, complex activities. We introduce and compare five variants of flexible training and identify the most effective strategy for video SSMs. On short-action (UCF-101, HMDB-51) and long-action (COIN, Breakfast) benchmarks, StretchySnake outperforms transformer and SSM baselines alike by up to 28%, with strong adaptability to fine-grained actions (SSV2, Diving-48). Our method thus provides a simple drop-in training recipe that makes video SSMs more robust, resolution-agnostic, and efficient across diverse action recognition scenarios.
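The core of the recipe, as the abstract describes it, is to draw a different spatio-temporal scale at each training step and resize the clip to match. A minimal PyTorch sketch of that sampling loop follows; the candidate sizes, the helper names (resample_clip, train_step), and the choice of trilinear resizing are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of varying-resolution sampling during SSM training.
# Candidate scales, helper names, and the trilinear resize are assumptions.
import random
import torch
import torch.nn.functional as F

SPATIAL_SIZES = [112, 160, 224]   # assumed candidate frame resolutions
CLIP_LENGTHS = [8, 16, 32, 64]    # assumed candidate clip lengths (frames)

def resample_clip(video: torch.Tensor, t: int, s: int) -> torch.Tensor:
    """Resize a (B, C, T, H, W) clip to t frames at s x s resolution."""
    return F.interpolate(video, size=(t, s, s),
                         mode="trilinear", align_corners=False)

def train_step(model, video, labels, optimizer, loss_fn):
    # Draw a random spatio-temporal scale for this step.
    t = random.choice(CLIP_LENGTHS)
    s = random.choice(SPATIAL_SIZES)
    clip = resample_clip(video, t, s)
    logits = model(clip)  # backbone must accept variable (T, H, W)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The abstract mentions five variants of flexible training; a uniform random draw per step, as above, is only the simplest possibility.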
Authors: Nyle Siddiqui, Rohit Gupta, Sirnam Swetha, Mubarak Shah
Submitted: October 17, 2025
arXiv Category: cs.CV

Key Contributions

Proposes a flexible training method for state space models (SSMs) in video understanding that addresses spatio-temporal inflexibility: models trained at a fixed resolution and clip length degrade on spatial and temporal scales unseen during training. The method samples videos at varying spatio-temporal resolutions during training and interpolates model weights to match each scale (see the sketch below), leveraging SSMs' linear complexity so a single model handles both short- and long-form videos.
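The phrase "dynamically interpolates model weights" is not spelled out on this page; one common instance of the idea is resizing learned positional embeddings to whatever token grid the sampled clip produces. The sketch below assumes that instance, and every shape and name in it is hypothetical.

```python
# Hedged sketch: resampling learned positional embeddings to a new
# spatio-temporal token grid. One plausible reading of the paper's
# weight interpolation, not the authors' code; shapes are assumptions.
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos: torch.Tensor,
                          old_grid: tuple, new_grid: tuple) -> torch.Tensor:
    """pos: (1, T*H*W, D) embeddings on an old (T, H, W) token grid."""
    t0, h0, w0 = old_grid
    t1, h1, w1 = new_grid
    d = pos.shape[-1]
    # (1, N, D) -> (1, D, T, H, W) so trilinear resampling applies
    grid = pos.reshape(1, t0, h0, w0, d).permute(0, 4, 1, 2, 3)
    grid = F.interpolate(grid, size=(t1, h1, w1),
                         mode="trilinear", align_corners=False)
    # back to (1, T'*H'*W', D) token layout
    return grid.permute(0, 2, 3, 4, 1).reshape(1, t1 * h1 * w1, d)

# e.g., adapt embeddings trained on an 8 x 14 x 14 grid to 16 x 28 x 28:
# new_pos = interpolate_pos_embed(pos_embed, (8, 14, 14), (16, 28, 28))
```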

Business Value

Enables more robust video analysis systems that handle diverse video inputs, from short clips to long recordings, without significant performance degradation; relevant to applications such as content moderation, surveillance, and autonomous systems.