Abstract
To enable embodied agents to operate effectively over extended timeframes, it
is crucial to develop models that form and access memories to stay
contextualized in their environment. In the current paradigm of training
transformer-based policies for embodied sequential decision-making tasks,
visual inputs often overwhelm the context limits of transformers, while humans
can maintain and utilize a lifetime of experience compressed as memories.
Significant compression is possible in principle, as much of the input is
irrelevant and can be abstracted. However, existing approaches predominantly
focus on either recurrent models with fixed-size memory or transformers that
rely on the full context. In this work, we propose Memo, a transformer-based
architecture and training recipe for reinforcement learning (RL) on
memory-intensive, long-horizon tasks. Memo incorporates the creation and
retrieval of memory by interleaving periodic summarization tokens with the
inputs of a model during training. We demonstrate Memo's effectiveness on a
gridworld meta-RL benchmark and a multi-object navigation task in
photo-realistic indoor settings. Memo outperforms naive long-context
transformer baselines while being more compute- and storage-efficient.
Additionally, Memo generalizes better to longer contexts at inference time and
remains robust in streaming settings, where historical context must be
truncated to fit inference constraints.
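The core mechanism described in the abstract, inserting learned summarization tokens at fixed intervals so that later computation can attend to compact summaries rather than the full raw history, can be sketched in a few lines. The snippet below is a minimal illustration and not the authors' implementation: the module name MemoStyleEncoder, the hyperparameters chunk_len and n_summary, and the use of a plain bidirectional PyTorch encoder (rather than the causal, policy-conditioned transformer an RL agent would actually need) are all assumptions made for exposition.

```python
# Hypothetical sketch (not the paper's code): interleave learned summary
# tokens into an embedded observation stream every `chunk_len` steps, so a
# transformer can later attend to compact summaries instead of raw history.
import torch
import torch.nn as nn


class MemoStyleEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4,
                 chunk_len=32, n_summary=4):
        super().__init__()
        self.chunk_len = chunk_len
        # Learned summary tokens, repeated at every chunk boundary.
        self.summary_tokens = nn.Parameter(torch.randn(n_summary, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def interleave(self, obs_tokens):
        # obs_tokens: (batch, T, d_model) embedded observations.
        b = obs_tokens.shape[0]
        summ = self.summary_tokens.unsqueeze(0).expand(b, -1, -1)
        pieces = []
        for chunk in obs_tokens.split(self.chunk_len, dim=1):
            pieces.append(chunk)
            pieces.append(summ)  # summarization slots after each chunk
        return torch.cat(pieces, dim=1)

    def forward(self, obs_tokens):
        # Encode observations with summary slots spliced in; downstream
        # layers can read the summary positions as a compressed memory.
        return self.encoder(self.interleave(obs_tokens))
```

At inference time, a streaming variant of this sketch would cache the encoder outputs at the summary-token positions and drop the raw observation tokens behind them, which is one plausible reading of the abstract's claim that Memo remains robust when historical context must be truncated.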
Authors (5)
Gunshi Gupta
Karmesh Yadav
Zsolt Kira
Yarin Gal
Rahaf Aljundi
Submitted
October 22, 2025
Key Contributions
Memo is a novel transformer-based architecture and training recipe for memory-intensive, long-horizon tasks in embodied RL. It addresses the context limits of transformers by interleaving periodic summarization tokens with inputs, enabling efficient memory creation and retrieval.
Business Value
Enables the development of more capable and persistent AI agents for tasks requiring long-term memory and context, such as autonomous robots in complex environments or sophisticated virtual assistants.