Latent Sketchpad: Sketching Visual Thoughts to Elicit Multimodal Reasoning in MLLMs

📄 Abstract

While Multimodal Large Language Models (MLLMs) excel at visual understanding, they often struggle in complex scenarios that require visual planning and imagination. Inspired by how humans use sketching as a form of visual thinking to develop and communicate ideas, we introduce Latent Sketchpad, a framework that equips MLLMs with an internal visual scratchpad. The internal visual representations of MLLMs have traditionally been confined to perceptual understanding; we repurpose them to support generative visual thought without compromising reasoning ability. Building on frontier MLLMs, our approach integrates visual generation directly into their native autoregressive reasoning process, allowing the model to interleave textual reasoning with the generation of visual latents. These latents guide the internal thought process and can be translated into sketch images for interpretability. To realize this, we introduce two components: a Context-Aware Vision Head that autoregressively produces visual representations, and a pretrained Sketch Decoder that renders these into human-interpretable images. We evaluate the framework on our new dataset, MazePlanning. Experiments across various MLLMs show that Latent Sketchpad delivers comparable or even superior reasoning performance to their backbones. It further generalizes across distinct frontier MLLMs, including Gemma3 and Qwen2.5-VL. By extending the model's textual reasoning to visual thinking, our framework opens new opportunities for richer human-computer interaction and broader applications. More details and resources are available on our project page: https://latent-sketchpad.github.io/.
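
As a rough illustration of the interleaved reasoning loop described in the abstract, the toy sketch below alternates greedy text decoding with the autoregressive production of visual latents that are fed back into the context. Everything here is an assumption made for illustration, not the paper's implementation: the `ToyBackbone` (a GRU standing in for a frontier MLLM such as Gemma3 or Qwen2.5-VL), the `BOS_SKETCH`/`EOS_SKETCH` sentinel tokens that switch between text and sketch modes, the hidden and latent widths, and the fixed number of latents per sketch segment.

```python
import torch
import torch.nn as nn

# Hypothetical sizes and sentinel IDs, chosen only to make the sketch runnable;
# none of these values come from the paper summary above.
HIDDEN_DIM = 256      # backbone hidden width
VISUAL_DIM = 64       # width of one visual latent
VOCAB_SIZE = 1000
BOS_SKETCH = 999      # assumed token that switches the model into visual-thinking mode
EOS_SKETCH = 998      # assumed token that returns the model to textual reasoning


class ContextAwareVisionHead(nn.Module):
    """Toy stand-in for the Context-Aware Vision Head: maps the current
    hidden state to the next visual latent."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(HIDDEN_DIM, HIDDEN_DIM), nn.GELU(),
            nn.Linear(HIDDEN_DIM, VISUAL_DIM),
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden)


class ToyBackbone(nn.Module):
    """Minimal autoregressive stand-in for a frontier MLLM; a GRU replaces
    the transformer purely to keep the example short."""
    def __init__(self):
        super().__init__()
        self.embed_text = nn.Embedding(VOCAB_SIZE, HIDDEN_DIM)
        self.embed_visual = nn.Linear(VISUAL_DIM, HIDDEN_DIM)  # latents re-enter the context
        self.core = nn.GRU(HIDDEN_DIM, HIDDEN_DIM, batch_first=True)
        self.lm_head = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)


@torch.no_grad()
def interleaved_generate(backbone, vision_head, prompt_ids,
                         max_steps=20, latents_per_sketch=4):
    """Alternate greedy text decoding with the emission of visual latents.
    The trigger for a sketch segment (sampling BOS_SKETCH) is an assumption
    made for illustration, not the paper's stated mechanism."""
    out, h = backbone.core(backbone.embed_text(prompt_ids).unsqueeze(0))
    hidden = out[:, -1, :]                                    # (1, HIDDEN_DIM)
    text_ids, visual_latents = [], []

    for _ in range(max_steps):
        next_id = backbone.lm_head(hidden).argmax(dim=-1)     # greedy text decoding

        if next_id.item() == BOS_SKETCH:
            # Visual-thinking segment: latents are produced autoregressively
            # and fed back into the running context to guide later reasoning.
            for _ in range(latents_per_sketch):
                latent = vision_head(hidden)                  # (1, VISUAL_DIM)
                visual_latents.append(latent.squeeze(0))
                out, h = backbone.core(backbone.embed_visual(latent).unsqueeze(1), h)
                hidden = out[:, -1, :]
            # Close the sketch segment with the assumed end-of-sketch sentinel.
            out, h = backbone.core(
                backbone.embed_text(torch.tensor([EOS_SKETCH])).unsqueeze(1), h)
            hidden = out[:, -1, :]
            continue

        text_ids.append(next_id.item())
        out, h = backbone.core(backbone.embed_text(next_id).unsqueeze(1), h)
        hidden = out[:, -1, :]

    return text_ids, visual_latents


# Usage: an untrained toy, so the outputs are meaningless, but the control
# flow mirrors the interleaving of text and visual latents described above.
backbone, vision_head = ToyBackbone(), ContextAwareVisionHead()
text_ids, latents = interleaved_generate(backbone, vision_head,
                                         torch.randint(0, VOCAB_SIZE, (5,)))
```

In a real system the vision head would condition on the full multimodal context of a transformer backbone rather than a single recurrent state, and the decision to open or close a sketch segment would be learned rather than hard-coded.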
Authors (12)
Huanyu Zhang
Wenshan Wu
Chengzu Li
Ning Shang
Yan Xia
Yangyu Huang
+6 more
Submitted
October 28, 2025
arXiv Category
cs.CV
arXiv PDF

Key Contributions

Latent Sketchpad equips Multimodal Large Language Models (MLLMs) with an internal visual scratchpad that supports visual planning and imagination. By repurposing internal visual representations for generative visual thought and integrating visual latent generation directly into the native autoregressive reasoning process, the framework lets the model interleave textual reasoning with visual generation, producing interpretable sketch outputs while matching or exceeding the reasoning performance of the underlying backbones.
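
The interpretable sketch outputs come from the pretrained Sketch Decoder, which translates accumulated visual latents into a human-readable image. The toy decoder below is purely illustrative: the latent width (`VISUAL_DIM`), the mean-pooling of the latent sequence, and the transposed-convolution upsampler are all assumptions, since this summary does not describe the real decoder's architecture.

```python
import torch
import torch.nn as nn

VISUAL_DIM = 64   # must match the latent width produced by the vision head (assumed)


class SketchDecoder(nn.Module):
    """Toy stand-in for the pretrained Sketch Decoder: upsamples a sequence of
    visual latents into a small grayscale sketch image."""
    def __init__(self):
        super().__init__()
        self.to_grid = nn.Linear(VISUAL_DIM, 16 * 4 * 4)      # latent -> 4x4 feature map
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, kernel_size=4, stride=4),              # 8x8 -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: (num_latents, VISUAL_DIM); pool the sketch segment, then render.
        pooled = latents.mean(dim=0, keepdim=True)            # (1, VISUAL_DIM)
        grid = self.to_grid(pooled).view(1, 16, 4, 4)
        return self.upsample(grid).squeeze()                  # (32, 32) image in [0, 1]


# Example: render four random stand-in latents into an inspectable sketch.
decoder = SketchDecoder()
fake_latents = torch.randn(4, VISUAL_DIM)
sketch = decoder(fake_latents)
print(sketch.shape)   # torch.Size([32, 32])
```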

Business Value

Empowers AI systems to engage in more sophisticated visual reasoning and creative tasks, opening possibilities for AI-assisted design, content creation, and more intuitive human-AI collaboration.