GraspM3: Dexterous Grasp Motion Generation at Million Scale with Semantic Labelling

Explore the groundbreaking GraspM3 dataset with visualizations of dexterous hand grasping simulations.

[Dataset]

Introduction

The GraspM3 dataset is a large-scale dataset for dexterous hand grasping, featuring over 8,000 objects and more than 1,000,000 grasping motion trajectories. It includes comprehensive semantic annotations, such as object categories, grasp quality, and contact details. All grasp trajectories were validated in NVIDIA Isaac Gym, which supports efficient large-scale parallel simulation.
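As a rough illustration of how such per-trajectory semantic annotations might be consumed, the sketch below parses a single annotation record and filters by grasp quality. The field names (`object_id`, `grasp_quality`, `contact_links`, etc.) are illustrative assumptions, not the actual GraspM3 schema.

```python
import json

# Hypothetical annotation record; field names are assumptions for
# illustration only, not the real GraspM3 file format.
record_json = """
{
  "object_id": "mug_0042",
  "object_category": "mug",
  "grasp_quality": 0.87,
  "num_steps": 120,
  "contact_links": ["thumb_tip", "index_tip", "middle_tip"]
}
"""

record = json.loads(record_json)

def is_high_quality(rec, threshold=0.8):
    """Keep only trajectories whose annotated grasp quality passes a threshold."""
    return rec["grasp_quality"] >= threshold

print(record["object_category"], is_high_quality(record))
```

Filtering on labels like category and quality is the typical first step before sampling trajectories for training or visualization.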

Collective Grasping Visualization

This section demonstrates collective grasping simulations performed with multiple objects and dexterous hands simultaneously.

Individual Grasping Visualization

This section showcases detailed visualizations of single-object grasping, highlighting the interaction between the dexterous hand and individual objects.

HTML-Based Visualization

Interactive HTML visualizations allow users to explore simulated grasping processes directly in the browser, including the MANO hand (top row) and the Shadow Hand (bottom row).
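A simple way to browse such visualizations locally is to generate an index page linking to the individual HTML files. The file names below (`mano_grasp.html`, `shadow_grasp.html`) are hypothetical placeholders, not files shipped with the dataset.

```python
# Minimal sketch: build an index page linking hypothetical visualization files.
pages = {
    "MANO Hand Grasping": "mano_grasp.html",      # assumed file name
    "Shadow Hand Grasping": "shadow_grasp.html",  # assumed file name
}

links = "\n".join(
    f'<li><a href="{path}">{title}</a></li>' for title, path in pages.items()
)
index_html = f"<html><body><ul>\n{links}\n</ul></body></html>"

print(index_html)
```

Serving the directory with `python -m http.server` then makes the index and the linked visualizations viewable in any browser.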

Human Hand Grasping | Shadow Hand Grasping