
Motion Retargeting Techniques for High‑Quality Virtual Character Driving

The article surveys motion retargeting, describing its importance for virtual characters, outlining traditional geometric methods and recent deep‑learning approaches, presenting a customized Interaction Mesh‑based solution from Kuaishou Y‑tech, and discussing performance, limitations, and future research directions.


From traditional CG animation and special effects to today's virtual idols and the metaverse, high‑quality virtual characters are a core component of immersive content. Motion retargeting is a key technology for preserving animation quality when driving diverse character models.

Motion retargeting aims to map source motion data onto any virtual character while preserving semantic features and natural fluidity. The main challenges are the wide variety of character body shapes and the complex interactions within motions, such as self‑contacts where limbs touch each other or the body.

In animation pipelines, motion capture provides joint rotations and positions that are mapped to a standard human model. However, when target characters differ significantly from human proportions, manual refinement becomes time‑consuming, limiting real‑time applications such as virtual live streaming.

Traditional methods abstract motions as point sets in 3D space and use geometric descriptors (e.g., Interaction Mesh, Relation Descriptors, Egocentric Mesh, Aura Mesh) to measure similarity and drive new characters via optimization or iterative solvers.
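To make the Interaction Mesh idea concrete, here is a minimal sketch of its core descriptor, assuming a Delaunay tetrahedralization over joint positions and a uniform Laplacian; the function names and the numpy/scipy tooling are illustrative choices, not the article's implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def interaction_mesh_laplacians(joints: np.ndarray):
    """joints: (N, 3) world-space joint positions for one pose."""
    tets = Delaunay(joints)                      # volumetric connectivity over the point set
    neighbors = [set() for _ in range(len(joints))]
    for simplex in tets.simplices:               # each tetrahedron links 4 joints
        for i in simplex:
            neighbors[i].update(j for j in simplex if j != i)
    # Laplacian coordinate: each vertex's offset from the centroid
    # of its mesh neighbors, encoding its spatial relationships.
    lap = np.stack([
        joints[i] - joints[list(nbrs)].mean(axis=0)
        for i, nbrs in enumerate(neighbors)
    ])
    return tets, lap
```

Retargeting would then solve for target joint positions whose Laplacian coordinates stay close to the source's, typically via least‑squares optimization subject to constraints such as bone lengths.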

Deep‑learning methods encode motions into latent vectors using networks such as the RNN‑based Neural Kinematic Networks (NKN), Skeleton‑Aware Networks (SAN), and Contact‑Aware Retargeting (CAR), which combines PointNet encoding of mesh geometry with recurrent motion encoders, enabling semantic transfer and contact preservation.
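As a rough illustration of the encoder-decoder pattern these methods share (a hedged sketch, not the actual NKN, SAN, or CAR architectures), a recurrent network can compress a motion clip into a latent code that a decoder, conditioned on the target skeleton's proportions, unrolls into retargeted poses; all dimensions and module names below are assumptions:

```python
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    def __init__(self, pose_dim=72, latent_dim=128):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, latent_dim, batch_first=True)

    def forward(self, motion):          # motion: (batch, frames, pose_dim)
        _, h = self.rnn(motion)         # h: (1, batch, latent_dim)
        return h.squeeze(0)             # latent motion code

class MotionDecoder(nn.Module):
    def __init__(self, skel_dim=24, pose_dim=72, latent_dim=128):
        super().__init__()
        # Condition on target-skeleton features (e.g., bone lengths).
        self.fc = nn.Linear(latent_dim + skel_dim, latent_dim)
        self.rnn = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, pose_dim)

    def forward(self, z, skel, frames):
        h = torch.relu(self.fc(torch.cat([z, skel], dim=-1)))
        seq = h.unsqueeze(1).repeat(1, frames, 1)   # repeat code per frame
        y, _ = self.rnn(seq)
        return self.out(y)              # retargeted pose per frame

# Usage: encode a source clip, decode for a differently proportioned target.
enc, dec = MotionEncoder(), MotionDecoder()
motion = torch.randn(2, 60, 72)         # 2 clips, 60 frames, 24 joints x 3
skel = torch.randn(2, 24)               # target bone lengths (assumed input)
retargeted = dec(enc(motion), skel, frames=60)   # (2, 60, 72)
```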

Both families have trade‑offs: traditional approaches preserve basic shape but miss semantic nuances, while deep models capture semantics but require large datasets and may lack generality. Hybrid methods like CAR integrate local geometric cues with learned latent spaces to improve results.

The proposed technical solution from Kuaishou Y‑tech adopts an improved Interaction Mesh that incorporates sparse point‑cloud modeling of character geometry and leverages skeletal structure information to handle large body‑proportion differences. The pipeline includes sparse point‑cloud construction, geometric feature extraction, and either inverse kinematics or forward kinematics for final pose generation.
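For the final pose‑generation step, forward kinematics accumulates local joint rotations down the skeleton hierarchy to obtain world‑space joint positions. The sketch below uses a toy rig with parent indices, rest‑pose offsets, and rotation matrices; it illustrates the general FK step rather than the Y‑tech pipeline's actual solver:

```python
import numpy as np

def forward_kinematics(parents, offsets, local_rots):
    """
    parents:    (N,) parent index per joint, -1 for the root
    offsets:    (N, 3) bone offset from the parent in the rest pose
    local_rots: (N, 3, 3) local rotation matrix per joint
    returns:    (N, 3) world-space joint positions
    """
    n = len(parents)
    world_rot = np.zeros((n, 3, 3))
    world_pos = np.zeros((n, 3))
    for i in range(n):                  # parents must precede children
        p = parents[i]
        if p < 0:                       # root joint
            world_rot[i] = local_rots[i]
            world_pos[i] = offsets[i]
        else:                           # compose with the parent's transform
            world_rot[i] = world_rot[p] @ local_rots[i]
            world_pos[i] = world_pos[p] + world_rot[p] @ offsets[i]
    return world_pos

# Tiny 3-joint chain (root -> elbow -> wrist) with identity rotations:
parents = np.array([-1, 0, 1])
offsets = np.array([[0, 0, 0], [0, 1, 0], [0, 1, 0]], dtype=float)
rots = np.tile(np.eye(3), (3, 1, 1))
print(forward_kinematics(parents, offsets, rots))  # straight chain along +y
```

Inverse kinematics runs the problem in the other direction, solving for the local rotations that place end effectors (hands, feet) at positions dictated by the geometric features.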

Optimizations reduce computational load, enabling real‑time driving of numerous virtual avatars without extensive parameter tuning. Visual comparisons show the method outperforming existing software in preserving limb contacts and handling extreme body‑shape variations.

In conclusion, the in‑house motion retargeting algorithm effectively transfers single‑person motions across diverse models, maintains critical contact information, and runs efficiently for real‑time scenarios. Future work includes extending to multi‑person interactions, environmental constraints, and further improving generalization.

Tags: animation, deep learning, game development, computer graphics, motion retargeting, virtual characters
Written by Kuaishou Tech

Official Kuaishou tech account, providing real-time updates on the latest Kuaishou technology practices.
