Abstract
Modelling pedestrian-driver interactions is critical for understanding human
road user behaviour and developing safe autonomous vehicle systems. Existing
approaches often rely on rule-based logic, game-theoretic models, or
'black-box' machine learning methods. However, these models typically lack
flexibility or overlook the underlying mechanisms, such as sensory and motor
constraints, which shape how pedestrians and drivers perceive and act in
interactive scenarios. In this study, we propose a multi-agent reinforcement
learning (RL) framework that integrates both visual and motor constraints of
pedestrian and driver agents. Using a real-world dataset from an unsignalised
pedestrian crossing, we evaluate four model variants: one without constraints,
two with either motor or visual constraints, and one with both, across
behavioural metrics of interaction realism. Results show that the combined
model with both visual and motor constraints performs best. Motor constraints
lead to smoother movements that resemble human speed adjustments during
crossing interactions. The addition of visual constraints introduces perceptual
uncertainty and field-of-view limitations, leading the agents to exhibit more
cautious and variable behaviour, such as less abrupt deceleration. In this
data-limited setting, our model outperforms a supervised behavioural cloning
model, demonstrating that our approach can be effective without large training
datasets. Finally, our framework accounts for individual differences by
modelling parameters controlling the human constraints as population-level
distributions, a perspective that has not been explored in previous work on
pedestrian-vehicle interaction modelling. Overall, our work demonstrates that
multi-agent RL with human constraints is a promising modelling approach for
simulating realistic road user interactions.
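The two technical ideas in the abstract, per-agent sensory and motor constraints and population-level distributions over the parameters that control them, can be sketched in a few lines of code. The sketch below is illustrative only and not taken from the paper: the parameter names (visual noise standard deviation, field-of-view half-angle, maximum acceleration), their distributions, and the toy policy are assumptions chosen to show where sampled constraint parameters would plug into an agent's observation and action processing.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population-level distributions for the constraint parameters.
# Each simulated agent draws its own parameters, so individual differences
# arise from the distribution rather than from a single fixed setting.
POPULATION = {
    "visual_noise_std": (0.3, 0.1),               # perceptual noise on distance (assumed units: m)
    "fov_half_angle":   (np.pi / 3, np.pi / 12),  # field-of-view half-angle in radians (assumed)
    "max_accel":        (2.0, 0.5),               # acceleration limit in m/s^2 (assumed)
}

def sample_agent_params():
    """Draw one agent's constraint parameters from the population distributions."""
    return {
        "visual_noise_std": abs(rng.normal(*POPULATION["visual_noise_std"])),
        "fov_half_angle": abs(rng.normal(*POPULATION["fov_half_angle"])),
        "max_accel": abs(rng.normal(*POPULATION["max_accel"])),
    }

def constrained_observation(true_bearing, true_distance, params):
    """Visual constraint: the other road user is unobserved outside the field of view,
    and the observed distance is corrupted by perceptual noise."""
    if abs(true_bearing) > params["fov_half_angle"]:
        return None  # other road user not currently visible
    return true_distance + rng.normal(0.0, params["visual_noise_std"])

def constrained_action(desired_accel, params):
    """Motor constraint: clip the policy's desired acceleration to the agent's limit."""
    return float(np.clip(desired_accel, -params["max_accel"], params["max_accel"]))

# Toy rollout: a pedestrian agent reacting to an approaching vehicle.
params = sample_agent_params()
distance, speed = 30.0, 0.0
for step in range(10):
    obs = constrained_observation(true_bearing=0.2, true_distance=distance, params=params)
    # Placeholder "policy": slow down if the vehicle looks close, otherwise speed up.
    desired = -1.0 if (obs is not None and obs < 15.0) else 1.0
    accel = constrained_action(desired, params)
    speed = max(0.0, speed + 0.1 * accel)
    distance -= 1.0  # vehicle approaches at a fixed rate in this toy example

In the actual framework, constraints of this kind would be applied inside a multi-agent RL training loop, with pedestrian and driver policies learned jointly; this sketch only shows how parameters sampled from population-level distributions could shape an individual agent's observations and actions.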
Authors (3)
Yueyang Wang
Mehmet Dogar
Gustav Markkula
Submitted
October 31, 2025
Key Contributions
This study proposes a multi-agent reinforcement learning (RL) framework that integrates both visual and motor constraints of pedestrian and driver agents to model their interactions realistically. Evaluating different constraint combinations, it demonstrates that incorporating both visual and motor constraints yields the most realistic interaction behaviours.
Business Value
Enables the development of safer autonomous vehicle systems by providing more realistic simulations of complex human-road user interactions, reducing risks during testing and deployment.