Abstract: We present a system that uses Multimodal LLMs (MLLMs) to analyze a large database
containing tens of millions of images captured at different times, with the aim of
discovering patterns in temporal changes. Specifically, we aim to capture
frequent co-occurring changes ("trends") across a city over a certain period.
Unlike previous visual analyses, our analysis answers open-ended queries (e.g.,
"what are the frequent types of changes in the city?") without any
predetermined target subjects or training labels. These properties render prior
learning-based or unsupervised visual analysis tools unsuitable. We identify
MLLMs as a novel tool for this task, owing to their open-ended semantic understanding capabilities.
Yet our datasets are four orders of magnitude too large for an MLLM to ingest as
context, so we introduce a bottom-up procedure that decomposes the massive
visual analysis problem into more tractable sub-problems. We carefully design
MLLM-based solutions to each sub-problem. In experiments and ablation studies,
we find that our system significantly outperforms baselines and can discover
interesting trends from images captured in large cities (e.g., "addition of
outdoor dining", "overpass was painted blue"). See more
results and interactive demos at https://boyangdeng.com/visual-chronicles.
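To make the bottom-up procedure concrete, here is a minimal sketch of the two-stage idea the abstract describes: query an MLLM for an open-ended change description on each co-located image pair, then aggregate the descriptions corpus-wide into frequent trends. Everything here is illustrative, not the paper's API: `mllm.query`, `describe_change`, and `discover_trends` are hypothetical names, and the paper's actual aggregation is presumably more sophisticated than the simple string counting used below.

```python
# Hypothetical sketch of the bottom-up decomposition; the MLLM interface
# (mllm.query) is a placeholder, not a real library call.
from collections import Counter
from typing import Optional

def describe_change(mllm, image_before, image_after) -> Optional[str]:
    """Ask the MLLM for a short, open-ended description of what changed
    between two co-located images, or None if nothing changed."""
    prompt = ("These two photos show the same place at different times. "
              "Describe the change in a short phrase, or answer 'none'.")
    answer = mllm.query(prompt, images=[image_before, image_after]).strip()
    return None if answer.lower() == "none" else answer

def discover_trends(mllm, image_pairs, top_k=20):
    """Bottom-up analysis: per-pair local change detection, then
    corpus-level aggregation of the descriptions into frequent 'trends'."""
    changes = []
    for before, after in image_pairs:
        desc = describe_change(mllm, before, after)
        if desc:
            # Normalize wording so near-duplicate phrasings group together;
            # a real system would cluster semantically (e.g., via embeddings).
            changes.append(desc.lower().rstrip("."))
    return Counter(changes).most_common(top_k)
```

The sketch shows why the decomposition sidesteps the context-length limit: each MLLM call sees only one image pair, and the millions of resulting short text descriptions are aggregated outside the model.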