📄 Abstract
Non-prehensile pushing to move and reorient objects to a goal is a versatile
loco-manipulation skill. In the real world, the object's physical properties
and its friction with the floor are subject to significant uncertainty, which
makes the task challenging for a mobile manipulator. In this paper, we develop a
learning-based controller for a mobile manipulator to move an unknown object to
a desired position and yaw orientation through a sequence of pushing actions.
The proposed controller for the robotic arm and the mobile base motion is
trained using a constrained Reinforcement Learning (RL) formulation. We
demonstrate its capability in experiments with a quadrupedal robot equipped
with an arm. The learned policy achieves a success rate of 91.35% in simulation
and at least 80% on hardware in challenging scenarios. Extensive hardware
experiments show that the approach is highly robust to unknown objects of
different masses, materials, sizes, and shapes. It
reactively discovers the pushing location and direction, thus achieving
contact-rich behavior while observing only the pose of the object.
Additionally, we demonstrate the adaptive behavior of the learned policy
towards preventing the object from toppling.
Authors (4)
Ioannis Dadiotis
Mayank Mittal
Nikos Tsagarakis
Marco Hutter
Submitted
February 3, 2025
Key Contributions
This paper develops a learning-based controller for mobile manipulators to perform dynamic object goal pushing despite significant uncertainty in object properties. The controller is trained with a constrained Reinforcement Learning formulation and demonstrated on a quadrupedal robot equipped with an arm, achieving high success rates both in simulation and on hardware.
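Constrained RL formulations of this kind are commonly solved via a Lagrangian relaxation: the policy maximizes task reward minus a multiplier-weighted constraint cost (e.g., a penalty signal for risky pushes that could topple the object), while the multiplier is adapted to keep the expected cost below a limit. The sketch below is illustrative only; the function names, cost signal, and thresholds are assumptions for exposition, not the paper's actual implementation.

```python
def lagrangian_objective(reward_return, cost_return, lam, cost_limit):
    """Penalized objective the policy maximizes: task reward minus a
    multiplier-weighted constraint violation (hypothetical cost signal,
    e.g., accumulated toppling risk)."""
    return reward_return - lam * (cost_return - cost_limit)


def dual_update(lam, cost_return, cost_limit, lr=0.1):
    """Gradient-ascent step on the Lagrange multiplier: grow it while the
    constraint is violated, shrink it (never below zero) once satisfied."""
    return max(0.0, lam + lr * (cost_return - cost_limit))


# Toy usage: one training iteration's returns.
reward_return, cost_return, cost_limit = 5.0, 2.0, 1.0
lam = 0.0
obj = lagrangian_objective(reward_return, cost_return, lam, cost_limit)
lam = dual_update(lam, cost_return, cost_limit)  # constraint violated -> lam rises
```

In practice the multiplier update runs alongside the policy-gradient update, so the effective penalty strength is learned rather than hand-tuned.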
Business Value
Enables robots to perform complex manipulation tasks in unstructured environments, increasing automation possibilities in logistics, manufacturing, and hazardous material handling.