Reproducing the Unitree G1 acrobatics seen at the 2025 CCTV Spring Festival Gala (春晚).
This repo contains retargeted motion capture data for the Unitree G1 29-DOF humanoid robot
and a lightweight visualization tool — no Isaac Sim required to preview motions (see gif/ and tools/visualize_motion.py).
Motion data is produced by GMR (General Motion Retargeting) from SMPL-X mocap sequences and retargeted to the G1 joint space. The training pipeline uses unitree_rl_lab.
Four representative clips from gif/ (3D point cloud, Z-up; see Visualization legend for colors).
More sequences live in gif/ alongside data/*.csv.
| 88_05 | 88_08 |
|---|---|
| ![]() | ![]() |

| 85_07 | 143_34 |
|---|---|
| ![]() | ![]() |
```
g1-motion-data/
├── data/                        # Retargeted motion CSVs (G1 SDK joint order, 30 fps)
│   ├── 88_03_stageii_unitree_g1.csv
│   ├── 88_05_stageii_unitree_g1.csv
│   └── ... (16 motions total)
├── raw_npz_before_retargeting/  # Source AMASS/SMPL-X NPZ files (before GMR retargeting)
│   ├── 88_03_stageii.npz
│   ├── 88_05_stageii.npz
│   └── ... (16 sequences total)
├── gif/                         # Preview GIFs (3D point-cloud animation, Z-up)
│   ├── 88_03_stageii_unitree_g1.gif
│   └── ... (16 GIFs total)
└── tools/
    ├── visualize_motion.py      # NPZ → 3D GIF / interactive viewer
    └── requirements.txt
```
Each CSV row is one frame (no header). Column layout:
| Cols | Content |
|---|---|
| 0–2 | Root position x, y, z (metres, Z-up world frame) |
| 3–6 | Root quaternion qx, qy, qz, qw |
| 7–35 | 29 joint positions (rad), Unitree G1 SDK order |
```
 0 left_hip_pitch      6 right_hip_pitch      12 waist_yaw
 1 left_hip_roll       7 right_hip_roll       13 waist_roll
 2 left_hip_yaw        8 right_hip_yaw        14 waist_pitch
 3 left_knee           9 right_knee           15 left_shoulder_pitch
 4 left_ankle_pitch   10 right_ankle_pitch    16 left_shoulder_roll
 5 left_ankle_roll    11 right_ankle_roll     17 left_shoulder_yaw
18 left_elbow         22 right_shoulder_pitch 26 right_wrist_roll
19 left_wrist_roll    23 right_shoulder_roll  27 right_wrist_pitch
20 left_wrist_pitch   24 right_shoulder_yaw   28 right_wrist_yaw
21 left_wrist_yaw     25 right_elbow
```
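As a quick sanity check on the column layout above, here is a minimal Python sketch that splits a loaded frame array into root pose and joint angles. The function name and synthetic input are illustrative; real data would come from `np.loadtxt` on a file in `data/`.

```python
import numpy as np

NUM_JOINTS = 29  # Unitree G1 29-DOF

def parse_motion_csv(frames: np.ndarray):
    """Split raw frames of shape (T, 36) into root pose and joint angles."""
    root_pos = frames[:, 0:3]                # world-frame root position (m, Z-up)
    root_quat = frames[:, 3:7]               # root quaternion (qx, qy, qz, qw)
    joint_pos = frames[:, 7:7 + NUM_JOINTS]  # 29 joint angles (rad), G1 SDK order
    return root_pos, root_quat, joint_pos

# Synthetic 2-frame example; real use:
# frames = np.loadtxt("data/88_05_stageii_unitree_g1.csv", delimiter=",")
frames = np.zeros((2, 36))
root_pos, root_quat, joint_pos = parse_motion_csv(frames)
left_knee = joint_pos[:, 3]  # index 3 in SDK order
```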
The viewer requires an NPZ file with body_pos_w (T, B, 3) — world-frame
forward-kinematics body positions generated by Isaac Sim.
```shell
pip install -r tools/requirements.txt
```

Use unitree_rl_lab to generate the NPZ:

```shell
python scripts/mimic/csv_to_npz.py \
    -f data/88_05_stageii_unitree_g1.csv \
    --input_fps 30 \
    --headless
```

The repo already ships preview GIFs under gif/; to customize the camera angle or batch-export, regenerate them locally from the NPZ with the commands below.
```shell
# Save GIF
python tools/visualize_motion.py -f data/88_05_stageii_unitree_g1.npz

# Interactive window
python tools/visualize_motion.py -f data/88_05_stageii_unitree_g1.npz --show

# Custom camera
python tools/visualize_motion.py -f data/88_05_stageii_unitree_g1.npz \
    --elev 20 --azim -45

# Batch all NPZ files
for f in data/*.npz; do
    python tools/visualize_motion.py -f "$f" -o "${f%.npz}.gif"
done
```

| Color | Meaning |
|---|---|
| 🔵 Blue dots | All body links |
| 🔴 Red dots | Feet (left/right ankle_roll) |
| 🟢 Green dots | Hands (left/right wrist_yaw) |
| Gray plane | Ground (Z = 0) |
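For a custom view outside the bundled viewer, a single frame of `body_pos_w` can be rendered directly with matplotlib. This is a minimal sketch using synthetic data in place of a real NPZ; the blue scatter and the Z = 0 ground convention match the legend above, while the array sizes and output filename are illustrative.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Synthetic stand-in for the viewer input: body_pos_w of shape (T, B, 3).
# Real use: body_pos_w = np.load("data/88_05_stageii_unitree_g1.npz")["body_pos_w"]
rng = np.random.default_rng(0)
body_pos_w = rng.uniform(0.0, 1.5, size=(4, 30, 3))

frame = body_pos_w[0]  # one frame: (B, 3) world positions, Z-up
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(frame[:, 0], frame[:, 1], frame[:, 2], c="tab:blue", s=12)  # all links
ax.set_zlim(0.0, 2.0)  # ground plane sits at Z = 0
fig.savefig("frame0.png")
```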
These CSVs are designed for use with the mimic task in unitree_rl_lab.
Pipeline:
```
CSV (this repo)
  → csv_to_npz.py (Isaac Sim, generates body_pos_w for training)
  → NPZ (training input for mimic RL)
  → npz_to_deploy_csv.py (convert to deploy format, DFS joint order)
  → deploy CSV (runtime motion reference for C++ controller)
```
Note on joint order: The CSVs here use SDK order (hardware motor numbering). The deploy pipeline reorders joints to Isaac DFS order automatically.
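Mechanically, such a reorder is just a column permutation. The toy sketch below uses a hypothetical 3-joint permutation purely for illustration; the real SDK→DFS mapping is applied inside the deploy pipeline, not listed here.

```python
import numpy as np

# Hypothetical 3-joint permutation (placeholder indices, NOT the real G1 mapping):
# dfs column i takes its value from sdk column sdk_to_dfs[i].
sdk_to_dfs = np.array([2, 0, 1])

sdk_frame = np.array([[0.10, 0.20, 0.30]])  # one frame of joint angles, SDK order
dfs_frame = sdk_frame[:, sdk_to_dfs]        # same angles, reordered
# dfs_frame[0] is [0.30, 0.10, 0.20]
```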
The raw_npz_before_retargeting/ folder contains the original SMPL-X motion sequences
from the AMASS dataset (CMU mocap subset) that were used
as input to GMR for retargeting to the G1 joint space.
These files are handpicked for their relevance to the 2025 CCTV Spring Festival Gala (春晚) acrobatic sequences — flips, jumps, and high-dynamic movements.
If you want to reproduce the retargeting yourself, you can feed these NPZ files directly
into GMR and use the CSVs in data/ as reference outputs.
| Key | Description |
|---|---|
| `trans` | Root translation, shape (T, 3) |
| `root_orient` | Root orientation (axis-angle), shape (T, 3) |
| `pose_body` | Body joint poses (axis-angle), shape (T, 63) |
| `betas` | Shape coefficients, shape (16,) |
| `mocap_frame_rate` | Frame rate (float) |
| `gender` | Subject gender ('male' / 'female') |
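A small sketch of working with these keys, using a synthetic stand-in dict with the shapes from the table (a real sequence would come from `np.load("raw_npz_before_retargeting/88_05_stageii.npz")`; the frame count here is illustrative):

```python
import numpy as np

T = 120  # illustrative frame count
seq = {
    "trans": np.zeros((T, 3)),        # root translation
    "root_orient": np.zeros((T, 3)),  # root orientation, axis-angle
    "pose_body": np.zeros((T, 63)),   # 21 body joints x 3 axis-angle params
    "betas": np.zeros(16),            # SMPL-X shape coefficients
    "mocap_frame_rate": np.float64(120.0),
}

n_body_joints = seq["pose_body"].shape[1] // 3  # 63 / 3 = 21 SMPL-X body joints
duration_s = seq["trans"].shape[0] / float(seq["mocap_frame_rate"])  # 120 / 120 = 1.0 s
```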
Motion sequences are sourced from the CMU Motion Capture Database, distributed as part of the AMASS dataset:
Mahmood, N., Ghorbani, N., Troje, N. F., Pons-Moll, G., & Black, M. J. (2019). AMASS: Archive of motion capture as surface shapes. ICCV 2019.
Please cite AMASS and CMU MoCap if you use these sequences in your work.
MIT