Abstract: In this paper, we propose a training-free framework for vision-and-language
navigation (VLN). Existing zero-shot VLN methods are mainly designed for
discrete environments or involve unsupervised training in continuous simulator
environments, which makes them difficult to generalize and deploy in
real-world scenarios. To enable a training-free approach in continuous
environments, our framework formulates navigation guidance as a graph constraint
optimization problem by decomposing instructions into explicit spatial constraints. The
constraint-driven paradigm decodes spatial semantics through constraint
solving, enabling zero-shot adaptation to unseen environments. Specifically, we
construct a spatial constraint library covering all types of spatial
relationships mentioned in VLN instructions. The human instruction is decomposed
into a directed acyclic graph with waypoint nodes, object nodes, and edges,
which serve as queries to retrieve entries from the library and build the graph
constraints. The graph constraint optimization is solved by a constraint
solver to determine the positions of waypoints, obtaining the robot's
navigation path and final goal. To handle cases with no solution or multiple
solutions, we construct a navigation tree with a backtracking mechanism.
Extensive experiments on standard benchmarks demonstrate significant
improvements in success rate and navigation efficiency compared to
state-of-the-art zero-shot VLN methods. We further conduct real-world
experiments to show that our framework can effectively generalize to new
environments and instruction sets, paving the way for a more robust and
autonomous navigation framework.
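
As a rough illustration of the constraint-driven idea described above, the sketch below decomposes a toy instruction into two waypoint nodes with object-grounded spatial constraints and one edge constraint, then searches candidate positions by backtracking. Everything in it is hypothetical: the relations near and left_of, the hard-coded object map, the grid of candidates, and the solve routine are stand-ins for the paper's constraint library, perception, and constraint solver, not the authors' implementation.

from itertools import product

# Toy 2D scene: detected object positions (would come from perception in practice).
objects = {"chair": (2.0, 1.0), "table": (4.0, 3.0)}

# Candidate waypoint positions: a coarse grid over free space.
candidates = [(x * 0.5, y * 0.5) for x, y in product(range(12), range(10))]

# A tiny "constraint library": spatial relations as predicates over positions.
def near(p, q, r=1.0):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= r ** 2

def left_of(p, q, margin=0.3):
    return p[0] <= q[0] - margin

# Instruction "walk past the chair, then stop to the left of the table",
# decomposed into a DAG: waypoint nodes w1 -> w2 with object-grounded constraints.
constraints = {
    "w1": [lambda p: near(p, objects["chair"])],
    "w2": [lambda p: left_of(p, objects["table"]),
           lambda p: near(p, objects["table"], 1.5)],
}
order = ["w1", "w2"]                                  # topological order of waypoints
edges = {("w1", "w2"): lambda a, b: near(a, b, r=3.0)}  # consecutive waypoints stay close

def solve(idx=0, assignment=None):
    """Depth-first backtracking over candidate positions for each waypoint node."""
    assignment = assignment or {}
    if idx == len(order):
        return assignment                              # all waypoints placed: a valid path
    node = order[idx]
    for pos in candidates:
        node_ok = all(check(pos) for check in constraints[node])
        edge_ok = all(rel(assignment[u], pos)
                      for (u, v), rel in edges.items()
                      if v == node and u in assignment)
        if node_ok and edge_ok:
            result = solve(idx + 1, {**assignment, node: pos})
            if result is not None:
                return result
    return None                                        # dead end: backtrack to the caller

path = solve()
print(path if path else "no feasible placement; relax constraints or replan")

In the paper's setting, the waypoint and object nodes, edges, and predicates would be produced from the LLM-decomposed instruction and the spatial constraint library, and a failed or ambiguous solve would trigger the navigation-tree backtracking rather than simply reporting that no placement exists.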