NaViL: Rethinking Scaling Properties of Native Multimodal Large Language Models under Data Constraints

📄 Abstract

Compositional training has been the de facto paradigm for existing Multimodal Large Language Models (MLLMs), in which pre-trained vision encoders are connected to pre-trained LLMs through continued multimodal pre-training. However, the multimodal scaling properties of this paradigm remain difficult to explore because its components are trained separately. In this paper, we focus on the native training of MLLMs in an end-to-end manner and systematically study their design space and scaling properties under a practical setting, i.e., data constraints. Through a careful study of the various design choices in MLLMs, we obtain an optimal meta-architecture that best balances performance and training cost. We then explore the scaling properties of native MLLMs and identify a positively correlated scaling relationship between vision encoders and LLMs. Based on these findings, we propose a native MLLM called NaViL, together with a simple and cost-effective training recipe. Experimental results on 14 multimodal benchmarks confirm the competitive performance of NaViL against existing MLLMs. Beyond that, our findings and results provide in-depth insights for future studies of native MLLMs.

Key Contributions

Systematically studies the design space and scaling properties of native, end-to-end trained Multimodal Large Language Models (MLLMs) under data constraints. Proposes NaViL, a native MLLM built on a meta-architecture that balances performance and training cost, and demonstrates a positively correlated scaling relationship between vision encoders and LLMs.
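
To make the scaling claim concrete, the sketch below fits a power law between LLM size and the compute-optimal vision-encoder size in log-log space. This is purely illustrative: the data points, the power-law form, and the variable names are hypothetical placeholders, not values or formulas reported in the paper.

```python
# Illustrative sketch only: NaViL reports a positive correlation between the
# optimal vision-encoder size and the LLM size; the numbers below are made-up
# placeholders used to show how such a relationship could be quantified.
import numpy as np

# Hypothetical (LLM params, optimal vision-encoder params), both in billions.
llm_params = np.array([0.5, 1.0, 2.0, 7.0, 13.0])
vis_params = np.array([0.1, 0.2, 0.35, 1.0, 1.8])

# Fit a power law vis = a * llm^b via linear regression in log-log space.
b, log_a = np.polyfit(np.log(llm_params), np.log(vis_params), 1)
a = np.exp(log_a)
print(f"fitted power law: vis ~= {a:.3f} * llm^{b:.3f}")

# A positive exponent b corresponds to the positively correlated scaling
# relationship: larger LLMs pair with proportionally larger vision encoders.
```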

Business Value

Enables the development of more capable and cost-effective multimodal AI systems, accelerating innovation in areas requiring joint understanding of vision and language.