📄 Abstract
Food image classification plays a vital role in intelligent food quality
inspection, dietary assessment, and automated monitoring. However, most
existing supervised models rely heavily on large labeled datasets and exhibit
limited generalization to unseen food categories. To overcome these challenges,
this study introduces MultiFoodChat, a dialogue-driven multi-agent reasoning
framework for zero-shot food recognition. The framework integrates
vision-language models (VLMs) and large language models (LLMs) to enable
collaborative reasoning through multi-round visual-textual dialogues. An Object
Perception Token (OPT) captures fine-grained visual attributes, while an
Interactive Reasoning Agent (IRA) dynamically interprets contextual cues to
refine predictions. This multi-agent design allows flexible and human-like
understanding of complex food scenes without additional training or manual
annotations. Experiments on multiple public food datasets demonstrate that
MultiFoodChat achieves superior recognition accuracy and interpretability
compared with existing unsupervised and few-shot methods, highlighting its
potential as a new paradigm for intelligent food quality inspection and
analysis.
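The abstract describes the framework only at a high level, and the authors' implementation is not reproduced here. The sketch below is a minimal, assumption-laden illustration of how a dialogue-driven, multi-round VLM+LLM reasoning loop for zero-shot classification could be wired together. Every name in it (the `AskVLM`/`AskLLM` callables, `zero_shot_food_dialogue`, the prompts, and the round structure) is a hypothetical stand-in, not the paper's actual OPT or IRA components.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical interfaces: any VLM that answers questions about an image,
# and any LLM that reasons over text. Placeholders for illustration only,
# not the paper's OPT/IRA components.
AskVLM = Callable[[str, str], str]   # (image_path, question) -> answer
AskLLM = Callable[[str], str]        # (prompt) -> response


@dataclass
class DialogueTurn:
    question: str
    answer: str


def zero_shot_food_dialogue(
    image_path: str,
    candidate_labels: List[str],
    ask_vlm: AskVLM,
    ask_llm: AskLLM,
    max_rounds: int = 3,
) -> str:
    """Multi-round visual-textual dialogue ending in a label prediction.

    Round 1 collects fine-grained visual attributes; later rounds let the
    language model ask follow-up questions before committing to a label.
    """
    history: List[DialogueTurn] = []

    # Round 1: elicit fine-grained visual attributes from the VLM.
    first_q = "Describe the visible ingredients, colors, textures, and plating."
    history.append(DialogueTurn(first_q, ask_vlm(image_path, first_q)))

    for _ in range(max_rounds - 1):
        transcript = "\n".join(f"Q: {t.question}\nA: {t.answer}" for t in history)
        follow_up = ask_llm(
            "You are classifying a food image among: "
            + ", ".join(candidate_labels)
            + ".\nDialogue so far:\n" + transcript
            + "\nIf you can decide, reply 'FINAL: <label>'. "
              "Otherwise ask one short follow-up question about the image."
        )
        if follow_up.startswith("FINAL:"):
            return follow_up.removeprefix("FINAL:").strip()
        # Ask the follow-up question back to the VLM and extend the dialogue.
        history.append(DialogueTurn(follow_up, ask_vlm(image_path, follow_up)))

    # Fall back to a forced decision once the round budget is spent.
    transcript = "\n".join(f"Q: {t.question}\nA: {t.answer}" for t in history)
    final = ask_llm(
        "Based on this dialogue, choose exactly one label from: "
        + ", ".join(candidate_labels) + ".\n" + transcript
    )
    return final.strip()
```

With stub callables (e.g. lambdas returning canned strings) the function runs end to end, which makes the dialogue control flow easy to test before plugging in real vision-language and language model clients.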
Submitted: October 14, 2025
Key Contributions
Introduces MultiFoodChat, a novel dialogue-driven multi-agent reasoning framework for zero-shot food recognition. It integrates VLMs and LLMs to enable collaborative reasoning through visual-textual dialogues, allowing for flexible understanding of complex food scenes without additional training or manual annotations.
Business Value
Enhances food safety, quality control, and supply chain efficiency through automated, intelligent inspection systems that can adapt to new food items without retraining.