📄 Abstract
Recently, Video-Language Models (VideoLMs) have demonstrated remarkable
capabilities, offering significant potential for flexible and powerful video
query systems. These models typically rely on Vision Transformers (ViTs), which
process video frames individually to extract visual embeddings. However,
generating embeddings for large-scale videos requires ViT inferencing across
numerous frames, posing a major hurdle to real-world deployment and
necessitating solutions for integration into scalable video data management
systems. This paper introduces Déjà Vu, a video-language query engine that
accelerates ViT-based VideoLMs by reusing computations across consecutive
frames. At its core is ReuseViT, a modified ViT model specifically designed for
VideoLM tasks, which learns to detect inter-frame reuse opportunities, striking
an effective balance between accuracy and reuse. Although ReuseViT
significantly reduces computation, these savings do not directly translate into
performance gains on GPUs. To overcome this, Déjà Vu integrates
memory-compute joint compaction techniques that convert the FLOP savings into
tangible performance gains. Evaluations on three VideoLM tasks show that
Déjà Vu accelerates embedding generation by up to 2.64x within a 2% error
bound, dramatically enhancing the practicality of VideoLMs for large-scale
video analytics.
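
To make the inter-frame reuse idea concrete, below is a minimal, illustrative sketch in Python. It caches per-patch outputs from the previous frame and recomputes only patches whose content changed beyond a threshold. The threshold criterion, per-patch granularity, and function names here are assumptions for illustration only; they are not the paper's ReuseViT design, which learns where reuse is safe rather than using a fixed rule.

```python
import numpy as np

def embed_frame_with_reuse(patches, prev_patches, prev_outputs,
                           compute_fn, threshold=0.05):
    """Reuse cached per-patch outputs across consecutive frames.

    patches, prev_patches: (num_patches, patch_dim) arrays for the current
    and previous frame. prev_outputs: cached per-patch outputs from the
    previous frame. compute_fn: the expensive per-patch computation whose
    results we try to reuse (a stand-in for a ViT block, not the real model).
    """
    outputs = prev_outputs.copy()
    # Hypothetical reuse criterion: relative change of each patch vs. last frame.
    diffs = np.linalg.norm(patches - prev_patches, axis=1)
    changed = diffs > threshold * (np.linalg.norm(prev_patches, axis=1) + 1e-8)
    if changed.any():
        # Recompute only the patches that changed; keep cached outputs elsewhere.
        outputs[changed] = compute_fn(patches[changed])
    return outputs, changed.mean()  # fraction of patches actually recomputed


# Example usage with synthetic data: only 20 of 196 patches change,
# so roughly 90% of the per-patch work is skipped for this frame.
rng = np.random.default_rng(0)
prev = rng.normal(size=(196, 768)).astype(np.float32)
curr = prev.copy()
curr[:20] += 0.5
expensive = lambda x: np.tanh(x)  # placeholder for the real per-patch compute
prev_out = expensive(prev)
out, recompute_frac = embed_frame_with_reuse(curr, prev, prev_out, expensive)
```

As the abstract notes, skipping FLOPs in this way does not by itself yield GPU speedups; Déjà Vu additionally applies memory-compute joint compaction so that the sparse recomputation maps onto dense, efficient GPU kernels.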