Abstract
The original CLIP text encoder is limited by a maximum input length of 77
tokens, which hampers its ability to process long texts effectively and to capture fine-grained semantics. In addition, the CLIP text encoder lacks support for multilingual inputs. These limitations significantly restrict its applicability across a broad range of tasks. Recent studies have attempted to replace the CLIP text encoder with an LLM-based embedder to improve long-text processing, multilingual understanding, and fine-grained semantic comprehension. However, because the representation spaces
of LLMs and the vision-language space of CLIP are pretrained independently
without alignment priors, direct alignment using contrastive learning can
disrupt the intrinsic vision-language alignment in the CLIP image encoder,
leading to an underutilization of the knowledge acquired during pre-training.
To address this challenge, we propose ProCLIP, a curriculum learning-based
progressive vision-language alignment framework to effectively align the CLIP
image encoder with an LLM-based embedder. Specifically, ProCLIP first distills
knowledge from CLIP's text encoder into the LLM-based embedder to leverage
CLIP's rich pretrained knowledge while establishing initial alignment between
the LLM embedder and CLIP image encoder. Subsequently, ProCLIP further aligns
the CLIP image encoder with the LLM-based embedder through image-text
contrastive tuning, employing self-distillation regularization to avoid
overfitting. To achieve more effective alignment, an instance semantic alignment loss and an embedding structure alignment loss are adopted during both the representation inheritance and contrastive tuning stages. The code is available at https://github.com/VisionXLab/ProCLIP.
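The abstract names the two stage-1 (representation inheritance) losses without giving their formulas. The PyTorch sketch below is a minimal illustration of how such losses could look; the function names, the cosine and MSE formulations, and the `lambda_struct` weight are assumptions for illustration, not the authors' exact definitions.

```python
# Hypothetical sketch of the stage-1 "representation inheritance" losses:
# distilling CLIP's text embeddings into the LLM-based embedder.
import torch
import torch.nn.functional as F


def instance_semantic_loss(student_emb: torch.Tensor,
                           teacher_emb: torch.Tensor) -> torch.Tensor:
    """Pull each LLM-embedder output toward its paired CLIP text embedding."""
    student = F.normalize(student_emb, dim=-1)
    teacher = F.normalize(teacher_emb, dim=-1)
    # 1 - cosine similarity, averaged over the batch.
    return (1.0 - (student * teacher).sum(dim=-1)).mean()


def embedding_structure_loss(student_emb: torch.Tensor,
                             teacher_emb: torch.Tensor) -> torch.Tensor:
    """Match the pairwise similarity structure of the teacher within a batch."""
    student = F.normalize(student_emb, dim=-1)
    teacher = F.normalize(teacher_emb, dim=-1)
    sim_student = student @ student.t()  # student-student similarity matrix
    sim_teacher = teacher @ teacher.t()  # teacher-teacher similarity matrix
    return F.mse_loss(sim_student, sim_teacher)


def stage1_distillation_loss(llm_text_emb: torch.Tensor,
                             clip_text_emb: torch.Tensor,
                             lambda_struct: float = 1.0) -> torch.Tensor:
    """Stage 1: distill CLIP's pretrained text space into the LLM-based embedder."""
    return (instance_semantic_loss(llm_text_emb, clip_text_emb)
            + lambda_struct * embedding_structure_loss(llm_text_emb, clip_text_emb))
```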
Authors (9)
Xiaoxing Hu
Kaicheng Yang
Ziyang Gong
Qi Ming
Zonghao Guo
Xiang An
+3 more
Submitted
October 21, 2025
Key Contributions
ProCLIP proposes a curriculum learning-based approach to progressively align LLM-based text embedders with the CLIP vision-language space. This approach overcomes the limitation of direct alignment, which can disrupt CLIP's existing vision-language alignment, by preserving pretrained knowledge and enabling better processing of long texts and multilingual inputs.
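In the second stage, the image encoder is aligned with the LLM-based embedder through image-text contrastive tuning, with self-distillation regularization to avoid overfitting. The sketch below illustrates one plausible form of that objective; the symmetric InfoNCE loss, the MSE self-distillation toward a frozen copy of the pretrained image encoder, and the `beta` weight are illustrative assumptions rather than the paper's exact formulation.

```python
# Hypothetical sketch of stage 2: contrastive tuning with self-distillation.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          logit_scale: float) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = logit_scale * image_emb @ text_emb.t()
    labels = torch.arange(image_emb.size(0), device=image_emb.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))


def stage2_loss(image_emb: torch.Tensor,
                text_emb: torch.Tensor,
                frozen_image_emb: torch.Tensor,
                logit_scale: float,
                beta: float = 1.0) -> torch.Tensor:
    """Contrastive alignment plus a self-distillation regularizer that keeps the
    tuned image encoder close to the frozen pretrained CLIP image encoder."""
    contrastive = clip_contrastive_loss(image_emb, text_emb, logit_scale)
    self_distill = F.mse_loss(F.normalize(image_emb, dim=-1),
                              F.normalize(frozen_image_emb, dim=-1))
    return contrastive + beta * self_distill
```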
Business Value
Enhances the capabilities of vision-language models for applications requiring understanding of complex, long, or multilingual text descriptions of images, leading to more sophisticated search, analysis, and interaction tools.