Jinrui Han (韓金睿)

class ContactInformationCard:
    def __init__(self):
        self.dept = "me @ sjtu"
        self.lab = ""
        self.email = "jrhan82@sjtu.edu.cn"
        self.phone = "+86 182 7314 4065"

    def flipCard(self):
        print("tap on the card to flip.")

    def closeCard(self):
        print("tap outside to close it.")


I am a final-year master's student majoring in Mechanical Engineering at Shanghai Jiao Tong University (SJTU). Before that, I earned my Bachelor's degree from SJTU in 2023.

Currently, I am a research intern at the Institute of Artificial Intelligence (TeleAI), China Telecom, working with Dr. Chenjia Bai to advance research in humanoid whole-body control.

I do research in robotics, mainly focusing on applying reinforcement learning to endow robots with human-like behaviors.

Humanoid Locomotion: reinforcement learning, whole-body control, motion imitation
Loco-Manipulation: learning humanoid-object interaction skills
Contact: You can reach me for inquiries or collaborations via email (jrhan82@sjtu.edu.cn) or WeChat (Bw_Rooney).

💬 updates


📄 publications

KungfuBot2: Learning Versatile Motion Skills for Humanoid Whole-Body Control
In submission
Jinrui Han, Weiji Xie, Jiakun Zheng, Jiyuan Shi, Weinan Zhang, Ting Xiao, Chenjia Bai†
[PDF] | [CODE] | [DEMO]
Abstract: Learning versatile whole-body skills by tracking various human motions is a fundamental step toward general-purpose humanoid robots. This task is particularly challenging because a single policy must master a broad repertoire of motion skills while ensuring stability over long-horizon sequences. To this end, we present VMS, a unified whole-body controller that enables humanoid robots to learn diverse and dynamic behaviors within a single policy. Our framework integrates a hybrid tracking objective that balances local motion fidelity with global trajectory consistency, and an Orthogonal Mixture-of-Experts (OMoE) architecture that encourages skill specialization while enhancing generalization across motions. A segment-level tracking reward is further introduced to relax rigid step-wise matching, enhancing robustness when handling global displacements and transient inaccuracies. We validate VMS extensively in both simulation and real-world experiments, demonstrating accurate imitation of dynamic skills, stable performance over minute-long sequences, and strong generalization to unseen motions. These results highlight the potential of VMS as a scalable foundation for versatile humanoid whole-body control.
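
The OMoE idea is easiest to picture with a small sketch. Below is a hypothetical, illustrative orthogonality-regularized mixture-of-experts action head, not the paper's implementation: all names (OrthogonalMoE, obs_dim, num_experts, the ELU widths) are assumptions for illustration. A softmax gate mixes several expert MLPs, and an auxiliary loss pushes normalized expert outputs toward orthogonal directions so each expert specializes.

# Hypothetical sketch (not the paper's code): an orthogonality-regularized
# mixture-of-experts action head for a humanoid control policy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrthogonalMoE(nn.Module):
    def __init__(self, obs_dim, act_dim, num_experts=4, hidden=256):
        super().__init__()
        # Each expert is a small MLP mapping observations to actions.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ELU(),
                          nn.Linear(hidden, act_dim))
            for _ in range(num_experts)
        ])
        # The gate assigns a mixing weight to each expert per observation.
        self.gate = nn.Linear(obs_dim, num_experts)

    def forward(self, obs):
        # Every expert proposes an action; the softmax gate mixes them.
        expert_out = torch.stack([e(obs) for e in self.experts], dim=1)  # (B, K, act_dim)
        weights = F.softmax(self.gate(obs), dim=-1).unsqueeze(-1)        # (B, K, 1)
        action = (weights * expert_out).sum(dim=1)                       # (B, act_dim)

        # Orthogonality regularizer: penalize overlap between normalized
        # expert outputs (off-diagonal Gram entries driven toward zero).
        normed = F.normalize(expert_out, dim=-1)
        gram = normed @ normed.transpose(1, 2)                           # (B, K, K)
        eye = torch.eye(gram.size(-1), device=gram.device)
        ortho_loss = ((gram - eye) ** 2).mean()
        return action, ortho_loss

In training, the returned ortho_loss would typically be added to the policy loss with a small weight so that specialization does not dominate the tracking objective.
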
KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills
NeurIPS 2025 (San Diego, United States)
Weiji Xie*, Jinrui Han*, Jiakun Zheng*, Huanyu Li, Xinzhe Liu, Jiyuan Shi, Weinan Zhang, Chenjia Bai†, Xuelong Li.
[PDF] | [CODE] | [DEMO] | [TALK]
Abstract: Humanoid robots show promise in acquiring various skills by imitating human behaviors. However, existing algorithms are only capable of tracking smooth, low-speed human motions, even with delicate reward and curriculum design. This paper presents a physics-based humanoid control framework, aiming to master highly-dynamic human behaviors such as Kungfu and dancing through multi-step motion processing and adaptive motion tracking. For motion processing, we design a pipeline to extract, filter out, correct, and retarget motions, while ensuring compliance with physical constraints to the maximum extent. For motion imitation, we formulate a bi-level optimization problem to dynamically adjust the tracking accuracy tolerance based on the current tracking error, creating an adaptive curriculum mechanism. We further construct an asymmetric actor-critic framework for policy training. In experiments, we train whole-body control policies to imitate a set of highly-dynamic motions. Our method achieves significantly lower tracking errors than existing approaches and is successfully deployed on the Unitree G1 robot, demonstrating stable and expressive behaviors.
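
As a rough illustration of the adaptive tracking tolerance, here is a deliberately simplified stand-in rather than the paper's bi-level formulation: the class AdaptiveTolerance and all of its parameters are hypothetical. The tracking reward is shaped by a tolerance sigma that is slowly pulled toward the recent average tracking error, so the effective curriculum tightens automatically as tracking improves.

# Hypothetical sketch (not the paper's formulation): an adaptive
# tracking-tolerance curriculum for a motion-imitation reward.
import numpy as np

class AdaptiveTolerance:
    def __init__(self, sigma_init=0.5, sigma_min=0.05, ema=0.999, lr=0.01):
        self.sigma = sigma_init      # current tolerance on tracking error
        self.sigma_min = sigma_min   # lower bound keeps the reward well-scaled
        self.ema = ema               # smoothing for the running error estimate
        self.lr = lr                 # how fast sigma follows the running error
        self.running_err = sigma_init

    def reward(self, tracking_err):
        # Tolerance-shaped tracking reward in (0, 1]; looser sigma is more forgiving.
        return float(np.exp(-tracking_err / self.sigma))

    def update(self, tracking_err):
        # Track the recent error and move sigma toward it, never below sigma_min,
        # so the tolerance tightens as the policy's tracking error shrinks.
        self.running_err = self.ema * self.running_err + (1 - self.ema) * tracking_err
        self.sigma += self.lr * (self.running_err - self.sigma)
        self.sigma = max(self.sigma, self.sigma_min)

# Usage at each environment step:
#   tol = AdaptiveTolerance()
#   r = tol.reward(err); tol.update(err)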