GravMAD: Grounded Spatial Value Maps Guided Action Diffusion for Generalized 3D Manipulation


GravMAD can reason about new instructions, understand visual information, adapt to environmental changes, and generalize to novel tasks.
All videos are at 2x speed.

Abstract

Robots' ability to follow language instructions and execute diverse 3D manipulation tasks is vital in robot learning. Traditional imitation learning-based methods perform well on seen tasks but struggle with novel, unseen ones due to task variability. Recent approaches leverage large foundation models to assist in understanding novel tasks, thereby mitigating this issue. However, these methods lack a task-specific learning process, which is essential for accurately understanding 3D environments; this often leads to execution failures. In this paper, we introduce GravMAD, a sub-goal-driven, language-conditioned action diffusion framework that combines the strengths of imitation learning and foundation models. Our approach breaks tasks into sub-goals based on language instructions, providing auxiliary guidance during both training and inference. During training, we introduce Sub-goal Keypose Discovery to identify key sub-goals from demonstrations. At inference, no demonstrations are available, so we instead use pre-trained foundation models to identify sub-goals for the current task. In both phases, GravMaps are generated from the sub-goals, providing more flexible 3D spatial guidance than fixed 3D positions. Empirical evaluations on RLBench show that GravMAD significantly outperforms state-of-the-art methods, with a 28.63% improvement on novel tasks and a 13.36% gain on tasks encountered during training. These results demonstrate GravMAD's strong multi-task learning and generalization in 3D manipulation.
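As a rough illustration of the training-time side, the sketch below shows one common way such sub-goal keyposes could be flagged in a demonstration: timesteps where the gripper's open/close state flips or the end-effector comes to rest. The function name, arguments, and threshold are our own illustrative assumptions, not GravMAD's exact discovery criterion.

```python
import numpy as np

def discover_subgoal_keyposes(ee_positions, gripper_open, vel_eps=1e-3):
    """Flag sub-goal keyposes in one demonstration (illustrative heuristic).

    A timestep is treated as a sub-goal when the gripper open/close state
    changes or the end-effector is (nearly) at rest. These are common
    keypose criteria; GravMAD's exact discovery rule may differ.
    """
    ee_positions = np.asarray(ee_positions)            # (T, 3) positions
    step_dist = np.linalg.norm(np.diff(ee_positions, axis=0), axis=1)
    keyposes = []
    for t in range(1, len(ee_positions)):
        state_change = gripper_open[t] != gripper_open[t - 1]
        at_rest = step_dist[t - 1] < vel_eps
        if state_change or at_rest:
            # Each sub-goal carries a target position and gripper state,
            # mirroring the paper's g_pos and g_open.
            keyposes.append({"t": t,
                             "g_pos": ee_positions[t],
                             "g_open": bool(gripper_open[t])})
    return keyposes
```

Note that consecutive resting frames would all be flagged here; a practical implementation would typically deduplicate neighboring keyposes.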

GravMAD Overview



(a) GravMap Synthesis: During training, Sub-goal Keypose Discovery extracts the sub-goals $g^\text{pos}$ and $g^\text{open}$ from demonstrations. During inference, the Detector, Planner, and Composer pipeline interprets visual observations and language instructions to derive $g^\text{pos}$ and $g^\text{open}$. In both phases, the sub-goals are processed into a GravMap and encoded as a GravMap token.
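To make the GravMap idea concrete, here is a minimal sketch of how a sub-goal pair $(g^\text{pos}, g^\text{open})$ could be rendered into a voxelized value map plus gripper map. The Gaussian falloff, grid resolution, and function names are assumptions for illustration, not the paper's exact synthesis procedure.

```python
import numpy as np

def synthesize_gravmap(g_pos, g_open, ws_min, ws_max,
                       resolution=32, sigma=0.1):
    """Render a sub-goal (g_pos, g_open) into a voxelized GravMap (sketch).

    The value map peaks at g_pos and decays smoothly with distance,
    giving soft spatial guidance instead of a single fixed 3D point;
    the gripper map broadcasts the target open/close state.
    """
    axes = [np.linspace(ws_min[d], ws_max[d], resolution) for d in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (R, R, R, 3)
    dist = np.linalg.norm(grid - np.asarray(g_pos), axis=-1)
    value_map = np.exp(-dist ** 2 / (2 * sigma ** 2))   # Gaussian falloff
    gripper_map = np.full_like(value_map, float(g_open))
    return value_map, gripper_map
```

A downstream encoder would then compress these two voxel grids into the single GravMap token used to condition the policy.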




(b) GravMaps Guided Action Diffusion: The policy network perceives the scene and denoises noisy actions guided by the GravMap token. After $K$ denoising steps, the clean actions are executed by the robot.
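The denoising loop can be pictured as a standard DDPM-style reverse process conditioned on the GravMap token. The sketch below assumes an epsilon-prediction policy network and a linear beta schedule; both are illustrative stand-ins rather than GravMAD's actual architecture.

```python
import torch

@torch.no_grad()
def denoise_actions(policy, scene_tokens, gravmap_token, instr_tokens,
                    horizon=8, action_dim=8, K=100):
    """Run K reverse-diffusion steps over a noisy action trajectory
    (standard DDPM sketch). `policy` is assumed to predict the noise
    residual given the conditioning tokens."""
    betas = torch.linspace(1e-4, 2e-2, K)               # linear schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    actions = torch.randn(1, horizon, action_dim)       # start from pure noise
    for k in reversed(range(K)):
        eps = policy(actions, k, scene_tokens, gravmap_token, instr_tokens)
        coef = betas[k] / torch.sqrt(1.0 - alpha_bars[k])
        mean = (actions - coef * eps) / torch.sqrt(alphas[k])
        noise = torch.randn_like(actions) if k > 0 else torch.zeros_like(actions)
        actions = mean + torch.sqrt(betas[k]) * noise   # posterior sample
    return actions  # clean trajectory for the robot to execute
```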



Interactive Visualization

[Interactive viewer: visualize GravMaps for a selected task (Interactive GravMap 1 and Interactive GravMap 2).]

Performance on 12 Base Tasks

Evaluated on 12 base tasks:

close the silver jar. (success)

Performance on 8 Novel Tasks

Evaluated on 8 unseen tasks:


close bottom drawer.


VoxPoser (fail)

Act3D (fail)

3D Diffuser Actor (fail)

GravMAD (success)

Prompts

Prompts in Simulation Environments:
Planner | Composer | Detector | Get Cost Maps | Get Gripper Maps | More Detailed Prompts