About
Cyanpuppets is a Chinese AI technology company behind the CYAN.AI platform, a markerless motion capture solution that uses deep learning to convert ordinary 2D video into professional-grade 3D animation data. Built on Convolutional Neural Networks (CNN) and Deep Neural Networks (DNN) with 900 million parameters, the platform simultaneously tracks 208 keypoints, including 140 facial points, 21 finger points, and 30 body points, at sub-0.1-second latency.

Unlike traditional optical or inertial motion capture, CYAN.AI requires no wearable suits or specialized hardware. It automatically calibrates the scene, isolates subjects from backgrounds and obstacles, and exports FBX files directly to local storage, even in complex, small, or cluttered environments. Its proprietary retargeting algorithm maps motion data to virtually all major 3D skeleton systems, including Metahuman, Unreal Engine Skeleton Assets, Daz, CC4, iClone, Unity Avatar, VRM, MMD, Mixamo, and more, eliminating the fragmentation between 3D platforms.

Designed for both independent creators and professional film and game studios, CYAN.AI runs at 30+ FPS on consumer-grade NVIDIA GeForce RTX 3060 GPUs. Backed by the NVIDIA Inception Program, Intel Partner Alliance, and Unity Black Horse Program, it has accelerated 3D content creation workflows for 200+ enterprise clients, including Tencent Games, Bilibili, Microsoft Xbox, major Chinese universities, and international partners in Japan and South Korea.
Key Features
- Markerless AI Motion Capture: Converts standard 2D video into full-body 3D motion data using CNN and DNN algorithms — no wearable suits, markers, or specialized cameras required.
- 208-Keypoint Full-Body Tracking: Simultaneously tracks 208 keypoints, including 140 facial points, 21 finger points, and 30 body points, at sub-0.1-second latency for cinema-quality facial expressions, hand gestures, and body movement.
- Universal Skeleton Retargeting: Proprietary retargeting algorithm maps captured motion to all major 3D skeleton systems including Metahuman, Unreal, Unity Avatar, VRM, MMD, Mixamo, Daz, iClone, Maya, and 3ds Max Biped.
- Consumer GPU Compatible: Runs at 30+ FPS on a standard NVIDIA GeForce RTX 3060, making professional motion capture accessible to independent creators without expensive dedicated hardware.
- Automated Scene Calibration & FBX Export: Automatically segments subjects from backgrounds and obstacles in any environment, then exports motion data as industry-standard FBX files with one click.
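CYAN.AI's retargeting algorithm is proprietary, but the core idea behind skeleton retargeting, remapping one rig's bone names and motion channels onto another's, can be sketched in a few lines. The following is a minimal, hypothetical illustration (the bone names and the `retarget` helper are made up for this sketch, not CYAN.AI's actual API; real retargeting also adjusts rest poses, bone roll, and proportions between rigs):

```python
# Minimal sketch of skeleton retargeting as a bone-name remap.
# Hypothetical map from a Mixamo-style rig to a VRM-style humanoid rig.
MIXAMO_TO_VRM = {
    "mixamorig:Hips": "hips",
    "mixamorig:Spine": "spine",
    "mixamorig:LeftArm": "leftUpperArm",
    "mixamorig:LeftForeArm": "leftLowerArm",
    "mixamorig:LeftHand": "leftHand",
}

def retarget(frames, bone_map):
    """Rename bones in per-frame rotation data; drop unmapped bones."""
    out = []
    for frame in frames:
        out.append({bone_map[b]: rot for b, rot in frame.items() if b in bone_map})
    return out

# One frame of illustrative per-bone rotation data (Euler angles, degrees).
source = [{"mixamorig:Hips": (0.0, 5.0, 0.0), "mixamorig:LeftArm": (45.0, 0.0, 0.0)}]
print(retarget(source, MIXAMO_TO_VRM))
# -> [{'hips': (0.0, 5.0, 0.0), 'leftUpperArm': (45.0, 0.0, 0.0)}]
```

In practice this remap is only the first step; a production retargeter like CYAN.AI's must also reconcile differing rest poses and limb proportions so the motion reads correctly on the target character.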
Use Cases
- Game studios generating high-quality character animations from actor video footage without traditional mocap suits or stage setups
- VTubers and virtual influencers driving full-body 3D avatars in real time using only a webcam or smartphone camera
- Film and animation production companies accelerating 3D content pipelines by converting existing video references directly into animation-ready FBX data
- XR and metaverse developers building full-body interactive virtual social experiences with low-latency avatar control
- Universities and medical institutions analyzing human movement, posture, and biomechanics using AI-extracted 3D skeletal data
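For the biomechanics use case above, once 3D skeletal keypoints have been extracted, quantities such as joint angles reduce to simple vector math. A self-contained sketch (the shoulder/elbow/wrist coordinates below are invented for illustration; CYAN.AI exports FBX, from which joint positions like these can be read in a DCC tool or script):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c."""
    u = tuple(ai - bi for ai, bi in zip(a, b))  # vector b -> a
    v = tuple(ci - bi for ci, bi in zip(c, b))  # vector b -> c
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(dot / norm))

# Illustrative elbow angle from shoulder, elbow, and wrist positions (meters).
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.3, 1.1, 0.0), (0.3, 1.1, 0.3)
print(round(joint_angle(shoulder, elbow, wrist), 1))  # -> 90.0
```

Applied per frame across a capture, the same calculation yields range-of-motion curves for gait or posture analysis.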
Pros
- No Wearable Equipment Needed: Eliminates the high cost and physical complexity of traditional optical or inertial mocap suits, making professional motion capture accessible to solo creators and small studios.
- Broad 3D Platform Compatibility: Supports retargeting to virtually every major engine and tool out of the box, integrating seamlessly into existing production pipelines without format conversion headaches.
- Proven at Enterprise Scale: Trusted by 200+ clients including Tencent Games, Bilibili, Microsoft Xbox, top-tier universities, and international studios, with backing from NVIDIA, Intel, and Unity programs.
Cons
- Primarily Chinese-Language Interface: The platform, documentation, and customer support are predominantly in Chinese, which may present a significant barrier for non-Chinese-speaking users and international teams.
- Dedicated NVIDIA GPU Required: Requires at minimum an NVIDIA GeForce RTX 3060 for smooth operation, limiting use on non-NVIDIA workstations, older hardware, or purely cloud-based setups.
- Limited Public Pricing Transparency: Pricing is not publicly listed on the website; prospective customers must contact the sales team directly, making budget planning harder for smaller studios.
Frequently Asked Questions
Do I need a motion capture suit or markers to use CYAN.AI?
No. CYAN.AI uses AI computer vision to analyze ordinary 2D video, so no wearable suits, markers, or specialized cameras are required. Any standard video camera or webcam serves as the input.
Which 3D skeleton systems does CYAN.AI support for retargeting?
CYAN.AI supports automated retargeting to Metahuman, Unreal Engine Skeleton Assets, Daz, CC4, iClone, Unity Avatar, VRM, MMD, Mixamo, MayaAdv, MayaHumanIK, 3ds Max Biped, and more.
What hardware does CYAN.AI require?
CYAN.AI requires an NVIDIA GeForce RTX 3060 GPU at minimum, which delivers stable 30+ FPS performance at sub-0.1-second latency. Higher-end NVIDIA GPUs improve throughput and handling of complex scenes.
What format does CYAN.AI export motion data in?
CYAN.AI exports captured motion data as industry-standard FBX files, which are compatible with all major 3D animation, game development, and VFX tools.
Who is CYAN.AI for?
CYAN.AI serves a wide range of users, from independent content creators and VTubers to professional animation studios, game developers, XR teams, broadcasters, universities, and medical researchers: anyone who needs 3D motion data without the cost of traditional mocap hardware.