Temporal Blindness in Multi-Turn LLM Agents: Misaligned Tool Use vs. Human Time Perception

📄 Abstract

Large language model agents are increasingly used in multi-turn conversational settings to interact with and execute tasks in dynamic environments. However, a key limitation is their temporal blindness: by default, they operate with a stationary context, failing to account for the real-world time that elapses between messages. This becomes a critical liability when an agent must decide whether to invoke a tool based on how much time has passed since the last observation. Without temporal awareness, agents often either over-rely on previous context (skipping necessary tool calls) or under-rely on it (unnecessarily repeating tool calls). To study this challenge, we introduce TicToc-v1, a test set of multi-turn user-agent trajectories across 34 scenarios with varying time sensitivity. Each trajectory ends with a user question where the need for a tool call depends on the amount of time elapsed since the last message. To give LLMs temporal context, we augment dialogue messages with explicit timestamps, bridging the gap between static dialogue and evolving environments. We then collected human preferences for these samples, creating two subsets: one where humans preferred relying on the previous observation (prefer-noTool) and another where they preferred a new tool call (prefer-Tool). We evaluated how well LLM tool-calling decisions align with human preferences under varying time intervals on TicToc-v1. Our analysis shows that without time information, most models perform only slightly better than random, with the top alignment rate just over 60%. Adding timestamps yields a slight improvement, particularly for larger models, but the gain is modest, peaking at around 65%. We also show that naive, prompt-based alignment has limited effectiveness. Our findings highlight the need for targeted post-training to align multi-turn LLM tool use with human temporal perception.
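
As a rough illustration of the timestamp augmentation the abstract describes, the sketch below prepends a wall-clock timestamp to each dialogue message before it reaches the model. The message schema and field names (`role`, `content`, `sent_at`) and the timestamp format are assumptions for illustration, not the paper's actual implementation.

```python
from datetime import datetime, timezone

def augment_with_timestamps(messages):
    """Prepend an explicit timestamp to each dialogue message.

    `messages` is assumed to be a list of dicts with "role", "content",
    and a "sent_at" datetime -- hypothetical field names, not the
    paper's data schema.
    """
    augmented = []
    for msg in messages:
        stamp = msg["sent_at"].strftime("[%Y-%m-%d %H:%M:%S]")
        augmented.append({
            "role": msg["role"],
            "content": f"{stamp} {msg['content']}",
        })
    return augmented

# Example: two user turns six hours apart -- long enough that a
# time-sensitive question may warrant a fresh tool call rather than
# reuse of the earlier observation.
history = [
    {"role": "user",
     "sent_at": datetime(2025, 10, 27, 9, 0, tzinfo=timezone.utc),
     "content": "What's the weather in Boston right now?"},
    {"role": "user",
     "sent_at": datetime(2025, 10, 27, 15, 0, tzinfo=timezone.utc),
     "content": "Is it still raining?"},
]
for m in augment_with_timestamps(history):
    print(m["content"])
```

With the timestamps made explicit, the model can in principle condition its tool-call decision on the elapsed interval rather than treating the dialogue history as a static context.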
Authors (7)
Yize Cheng
Arshia Soltani Moakhar
Chenrui Fan
Kazem Faghih
Parsa Hosseini
Wenxiao Wang
+1 more
Submitted: October 27, 2025
arXiv Category: cs.CL

Key Contributions

This paper identifies and analyzes 'temporal blindness' in multi-turn LLM agents: the failure to account for the real-world time elapsed between messages, which leads to misaligned tool use. It introduces the TicToc-v1 benchmark to study this issue and proposes augmenting dialogue messages with explicit timestamps so that tool-call decisions can take elapsed time into account.
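
For intuition on the alignment rate reported in the abstract, here is a minimal sketch of how model tool-call decisions could be scored against the human prefer-Tool / prefer-noTool labels. The boolean encoding and function interface are assumptions for illustration, not the paper's released evaluation code.

```python
def alignment_rate(model_decisions, human_preferences):
    """Fraction of trajectories where the model's decision matches the
    human-preferred choice.

    Both arguments are lists of booleans: True = make a new tool call
    (prefer-Tool), False = reuse the last observation (prefer-noTool).
    This encoding is a hypothetical simplification of the benchmark.
    """
    assert len(model_decisions) == len(human_preferences)
    matches = sum(m == h for m, h in zip(model_decisions, human_preferences))
    return matches / len(model_decisions)

# Toy example: 4 of 5 decisions agree with the human label -> 0.8.
# The paper's headline numbers (~60% without timestamps, ~65% with)
# would correspond to values of 0.60 and 0.65 on this scale.
print(alignment_rate([True, False, True, True, False],
                     [True, False, False, True, False]))
```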

Business Value

Enables the development of more reliable and context-aware AI agents that can operate effectively in dynamic environments, improving automation and user interaction quality.