Abstract: Modern Vision-Language Models (VLMs) exhibit remarkable visual and linguistic
capabilities, achieving impressive performance in various tasks such as image
recognition and object localization. However, their effectiveness in
fine-grained visual tasks remains an open question. In everyday scenarios, individuals
encountering design materials, such as magazines, typography tutorials,
research papers, or branding content, may wish to identify aesthetically
pleasing fonts used in the text. Given their multimodal capabilities and free
accessibility, VLMs are often regarded as potential tools for font
recognition. This raises a fundamental question: Do VLMs truly possess the
capability to recognize fonts? To investigate this, we introduce the Font
Recognition Benchmark (FRB), a compact and well-structured dataset comprising
15 commonly used fonts. FRB includes two versions: (i) an easy version, in which
10 sentences are rendered in different fonts, and (ii) a hard version, in which
each text sample consists of the names of the 15 fonts themselves, introducing
a Stroop effect that challenges model perception. Through extensive evaluation
of various VLMs on font recognition tasks, we arrive at the following key
findings: (i) Current VLMs exhibit limited font recognition capabilities, with
many state-of-the-art models failing to achieve satisfactory performance and
being easily affected by the Stroop effect introduced by textual information.
(ii) Few-shot learning and Chain-of-Thought (CoT) prompting provide minimal
benefits in improving font recognition accuracy across different VLMs. (iii)
Attention analysis sheds light on the inherent limitations of VLMs in capturing
semantic features.
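
To make the hard version's Stroop-style construction concrete, the following is a minimal sketch, assuming a Pillow-based rendering step (the paper does not specify its tooling): the displayed string names one font while the glyphs are drawn in another, so correct recognition requires judging letterforms rather than reading the text. The font paths and helper name here are hypothetical, not the authors' actual pipeline.

```python
# Illustrative sketch of a hard-version (Stroop-style) sample: the written
# word names one font while the glyphs come from another typeface.
# Font file paths below are assumptions and depend on the local system.
from PIL import Image, ImageDraw, ImageFont

def render_stroop_sample(label_text: str, render_font_path: str,
                         out_path: str, size: int = 48) -> None:
    """Render `label_text` (a font name) using the typeface at `render_font_path`."""
    font = ImageFont.truetype(render_font_path, size)
    img = Image.new("RGB", (900, 160), "white")
    draw = ImageDraw.Draw(img)
    draw.text((20, 50), label_text, font=font, fill="black")
    img.save(out_path)

# Example: the text reads "Times New Roman" but is drawn in Arial, so a model
# must classify the glyph style rather than read the word.
render_stroop_sample("Times New Roman",
                     "/usr/share/fonts/truetype/arial.ttf",
                     "stroop_sample.png")
```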