Yunrui Lian | 连允睿

Email | Google Scholar | Github

I am a first-year M.S. student at IIIS, Tsinghua University, advised by Prof. Li Yi. My research is supported by GALBOT.

Prior to that, I received my Bachelor's degree from Fudan University. I also spent time as a research intern at Carnegie Mellon University.

My research lies at the intersection of robotics and machine learning. I am currently interested in developing control algorithms that enable robots to perform highly dynamic movements and tasks. I am also exploring the potential of leveraging human videos to scale up perceptive loco-manipulation systems.

Email: lianyunrui [AT] gmail.com


  News
  • [03/2026] We introduced LATENT: Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data, featuring dynamic movements, agile whole-body coordination, and rapid reactions. A step toward athletic humanoid sports skills.

  Publications

Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data
Zhikai Zhang*, Haofei Lu*, Yunrui Lian*, Ziqing Chen, Yun Liu, Chenghuai Lin, Han Xue, Zicheng Zeng, Zekun Qi, Shaolin Zheng, Qing Luan, Jingbo Wang, Junliang Xing, He Wang, Li Yi
arXiv, 2026

webpage | arXiv | abstract | bibtex | code (LATENT)

Human athletes demonstrate versatile and highly dynamic tennis skills to successfully conduct competitive rallies with a high-speed tennis ball. However, reproducing such behaviors on humanoid robots is difficult, partially due to the lack of perfect humanoid action data or human kinematic motion data in tennis scenarios as references. In this work, we propose LATENT, a system that Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa. The imperfect human motion data consist only of motion fragments that capture the primitive skills used when playing tennis, rather than precise and complete human-tennis motion sequences from real-world tennis matches, thereby significantly reducing the difficulty of data collection. Our key insight is that, despite being imperfect, such quasi-realistic data still provide priors about human primitive skills in tennis scenarios. With further correction and composition, we learn a humanoid policy that can consistently strike incoming balls under a wide range of conditions and return them to target locations, while preserving natural motion styles. We also propose a series of designs for robust sim-to-real transfer and deploy our policy on the Unitree G1 humanoid robot. Our method achieves surprising results in the real world and can stably sustain multi-shot rallies with human players.

    @misc{zhang2026learningathletichumanoidtennis,
          title={Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data}, 
          author={Zhikai Zhang and Haofei Lu and Yunrui Lian and Ziqing Chen and Yun Liu and Chenghuai Lin and Han Xue and Zicheng Zeng and Zekun Qi and Shaolin Zheng and Qing Luan and Jingbo Wang and Junliang Xing and He Wang and Li Yi},
          year={2026},
          eprint={2603.12686},
          archivePrefix={arXiv},
          primaryClass={cs.RO},
          url={https://arxiv.org/abs/2603.12686}, 
    }
  

Humanoid Generative Pre-Training for Zero-Shot Motion Tracking
Zekun Qi*, Xuchuan Chen*, Jilong Wang*, Chenghuai Lin*, Yunrui Lian, Zhikai Zhang, Yu Guan, Wenyao Zhang, Xinqiang Yu, He Wang, Li Yi
CVPR, 2026

webpage | abstract

We introduce Humanoid-GPT, a GPT-style Transformer with causal attention trained on a billion-scale motion corpus for whole-body control. Unlike prior shallow MLP trackers constrained by scarce data and an agility–generalization trade-off, Humanoid-GPT is pre-trained on a 2B-frame retargeted corpus that unifies all major mocap datasets with large-scale in-house recordings. Scaling both data and model capacity yields a single generative Transformer that tracks highly dynamic behaviors while achieving unprecedented zero-shot generalization to unseen motions and control tasks. Extensive experiments and scaling analyses show that our model establishes a new performance frontier, demonstrating robust zero-shot generalization to unseen tasks while simultaneously tracking highly dynamic and complex motions.

    @article{humanoidgpt25,
        title={Humanoid Generative Pre-Training for Zero-Shot Motion Tracking},
        author={Qi, Zekun and Chen, Xuchuan and Wang, Jilong and Lin, Chenghuai and Lian, Yunrui and Zhang, Zhikai and Zhang, Wenyao and Yu, Xinqiang and Wang, He and Yi, Li},
        journal={arXiv preprint arXiv:25xx.xxxxx},
        year={2025}
      }
  

Collision-Free Humanoid Traversal in Cluttered Indoor Scenes
Han Xue*, Sikai Liang*, Zhikai Zhang*, Zicheng Zeng, Yun Liu, Yunrui Lian, Jilong Wang, Qingtao Liu, Xuesong Shi, Li Yi
arXiv, 2026

webpage | arXiv | abstract | bibtex | code (Click-and-Traverse)

We study the problem of collision-free humanoid traversal in cluttered indoor scenes, such as hurdling over objects scattered on the floor, crouching under low-hanging obstacles, or squeezing through narrow passages. To achieve this goal, the humanoid needs to map its perception of surrounding obstacles with diverse spatial layouts and geometries to the corresponding traversal skills. However, the lack of an effective representation that captures humanoid–obstacle relationships during collision avoidance makes directly learning such mappings difficult. We therefore propose the Humanoid Potential Field (HumanoidPF), which encodes these relationships as collision-free motion directions, significantly facilitating RL-based traversal skill learning. We also find that HumanoidPF exhibits a surprisingly negligible sim-to-real gap as a perceptual representation. To further enable generalizable traversal skills across diverse and challenging cluttered indoor scenes, we propose a hybrid scene generation method that incorporates crops of realistic 3D indoor scenes and procedurally synthesized obstacles. We successfully transfer our policy to the real world and develop a teleoperation system in which users can command the humanoid to traverse cluttered indoor scenes with just a single click. Extensive experiments are conducted in both simulation and the real world to validate the effectiveness of our method.

    @misc{xue2026collisionfreehumanoidtraversalcluttered,
      title={Collision-Free Humanoid Traversal in Cluttered Indoor Scenes}, 
      author={Han Xue and Sikai Liang and Zhikai Zhang and Zicheng Zeng and Yun Liu and Yunrui Lian and Jilong Wang and Qingtao Liu and Xuesong Shi and Li Yi},
      year={2026},
      eprint={2601.16035},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2601.16035}, 
    }
  

Track Any Motions under Any Disturbances
Zhikai Zhang*, Jun Guo*, Chao Chen, Jilong Wang, Chenghuai Lin, Yunrui Lian, Han Xue, Zhenrong Wang, Maoqi Liu, Jiangran Lyu, Huaping Liu, He Wang, Li Yi
ICRA, 2026

webpage | arXiv | abstract | bibtex | code (OpenTrack)

A foundational humanoid motion tracker is expected to track diverse, highly dynamic, and contact-rich motions. More importantly, for general practical use it needs to operate stably in real-world scenarios against various dynamics disturbances, including terrains, external forces, and physical property changes. To achieve this goal, we propose Any2Track (Track Any motions under Any disturbances), a two-stage RL framework to track various motions under multiple disturbances in the real world. Any2Track reformulates dynamics adaptability as an additional capability on top of basic action execution and consists of two key components: AnyTracker and AnyAdapter. AnyTracker is a general motion tracker with a series of careful designs to track various motions within a single policy. AnyAdapter is a history-informed adaptation module that endows the tracker with online dynamics adaptability to overcome the sim-to-real gap and multiple real-world disturbances. We deploy Any2Track on Unitree G1 hardware and achieve successful sim-to-real transfer in a zero-shot manner. Any2Track performs exceptionally well in tracking various motions under multiple real-world disturbances.

    @misc{zhang2025trackmotionsdisturbances,
      title={Track Any Motions under Any Disturbances}, 
      author={Zhikai Zhang and Jun Guo and Chao Chen and Jilong Wang and Chenghuai Lin and Yunrui Lian and Han Xue and Zhenrong Wang and Maoqi Liu and Jiangran Lyu and Huaping Liu and He Wang and Li Yi},
      year={2025},
      eprint={2509.13833},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2509.13833}, 
    }
  

  Honors and Awards
🏆 2025, Outstanding Graduate, Fudan University
🏆 2023, First Prize, Scholarship for Hong Kong, Macau, and Overseas Chinese Students (Top 5%)
🏆 2022, National Scholarship (Top 0.2% nationwide)


Website template from here and here