Abstract: We present VLMEvalKit: an open-source toolkit for evaluating large
multi-modality models based on PyTorch. The toolkit aims to provide a
user-friendly and comprehensive framework for researchers and developers to
evaluate existing multi-modality models and publish reproducible evaluation
results. In VLMEvalKit, we implement more than 200 large multi-modality
models, including both proprietary APIs and open-source models, as well as more
than 80 multi-modal benchmarks. New models can be added to the toolkit by
implementing a single interface, after which the toolkit automatically handles
the remaining workload, including data preparation, distributed inference,
prediction post-processing, and metric calculation. Although the
toolkit is currently mainly used for evaluating large vision-language models,
its design is compatible with future updates that incorporate additional
modalities, such as audio and video. Based on the evaluation results obtained
with the toolkit, we host the OpenVLM Leaderboard, a comprehensive leaderboard
that tracks the progress of multi-modality learning research. The toolkit is
released at https://github.com/open-compass/VLMEvalKit and is actively maintained.