Abstract: While deep learning-based robotic grasping technology has demonstrated strong
adaptability, its computational complexity has also significantly increased,
making it unsuitable for scenarios with stringent real-time requirements. We
therefore propose VMGNet, a robotic grasping model with low computational
complexity and high accuracy. For the first time, we introduce the Visual State Space
into the robotic grasping field to achieve linear computational complexity,
thereby greatly reducing the model's computational cost. Meanwhile, to improve
the accuracy of the model, we propose an efficient and lightweight multi-scale
feature fusion module, named Fusion Bridge Module, to extract and fuse
information at different scales. We also present a new loss calculation
method that emphasizes the differences in importance among the subtasks,
improving the model's fitting ability. Experiments show that VMGNet requires
only 8.7 GFLOPs and achieves an inference time of 8.1 ms on our devices.
VMGNet also achieved state-of-the-art performance on the Cornell and Jacquard
public datasets. To validate VMGNet's effectiveness in practical applications,
we conducted real grasping experiments in multi-object scenarios, where
VMGNet achieved a 94.4% success rate. A video of the real-world robotic
grasping experiments is
available at https://youtu.be/S-QHBtbmLc4.
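
The abstract does not spell out how the subtask losses are weighted. As context only, grasp-detection networks commonly regress grasp quality, angle (encoded as cos/sin), and gripper width maps as separate subtasks and combine their losses with per-task weights. The sketch below is a minimal, hypothetical illustration of such weighting; the head names, shapes, and weight values are assumptions and it is not VMGNet's actual loss formulation.

```python
# Minimal sketch (PyTorch) of weighting grasp subtask losses differently.
# Assumptions: the network outputs quality, cos, sin, and width maps of shape
# (B, 1, H, W), and the weights are hand-chosen hyperparameters. This is an
# illustration of the general idea, not VMGNet's published method.
import torch
import torch.nn.functional as F

def weighted_grasp_loss(pred, target, weights=(1.0, 0.5, 0.5, 0.25)):
    """pred/target: dicts with 'quality', 'cos', 'sin', 'width' tensors."""
    w_q, w_cos, w_sin, w_w = weights
    loss_q = F.smooth_l1_loss(pred["quality"], target["quality"])
    loss_cos = F.smooth_l1_loss(pred["cos"], target["cos"])
    loss_sin = F.smooth_l1_loss(pred["sin"], target["sin"])
    loss_w = F.smooth_l1_loss(pred["width"], target["width"])
    # Subtasks with larger weights dominate the gradient signal.
    return w_q * loss_q + w_cos * loss_cos + w_sin * loss_sin + w_w * loss_w

# Usage example with random tensors standing in for predictions and labels.
shape = (2, 1, 224, 224)
pred = {k: torch.rand(shape) for k in ("quality", "cos", "sin", "width")}
target = {k: torch.rand(shape) for k in ("quality", "cos", "sin", "width")}
print(weighted_grasp_loss(pred, target).item())
```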