Speech-to-Animation (STA) is a technology developed exclusively by FaceUnity that uses voice input to drive an avatar to speak with emotion and movement. It combines deep-learning neural networks with computer graphics, enabling computers to understand the content of speech and fine-tune the avatar's lip movements, facial expressions, and body gestures to produce realistic avatar animations.
Chinese Input Only
Only .mp3 files of 30 seconds or less can be uploaded.
Accurately analyzes the information in the speech and transforms it into a phoneme script that can drive the avatar.
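As a rough illustration, a phoneme script can be thought of as a list of timed phonemes that downstream stages map to lip shapes. The structure and names below are hypothetical, not FaceUnity's actual format:

```python
# Hypothetical phoneme script: each entry is (phoneme, start_s, end_s).
# A real STA pipeline would produce this from audio analysis.
phoneme_script = [
    ("HH", 0.00, 0.08),
    ("AH", 0.08, 0.20),
    ("L",  0.20, 0.28),
    ("OW", 0.28, 0.45),  # "hello"
]

def phoneme_at(t, script):
    """Return the phoneme active at time t, or None during silence."""
    for ph, start, end in script:
        if start <= t < end:
            return ph
    return None
```

At playback time, the animation system samples this script at the frame rate and looks up the lip shape for the active phoneme.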
It defines dozens of basic lip shapes and lip coefficients to realize smooth lip-shape transitions between word pronunciations.
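A minimal sketch of such a transition, assuming each lip shape is represented as a vector of coefficients and consecutive shapes are blended over time (a common technique; the function name and linear blend are illustrative assumptions, not FaceUnity's implementation):

```python
def blend_lip_shapes(w_prev, w_next, alpha):
    """Linearly interpolate two lip-coefficient vectors.

    w_prev, w_next -- coefficient vectors for the outgoing and incoming
                      lip shapes; alpha in [0, 1] is the transition progress.
    """
    return [(1 - alpha) * a + alpha * b for a, b in zip(w_prev, w_next)]
```

Sampling alpha per frame between two phonemes' shapes yields a continuous mouth motion instead of abrupt pose switches.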
Uses 55 blendshapes and expression coefficients to reproduce all common human facial expressions.
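Blendshape animation generally works by adding weighted per-vertex offsets to a neutral face mesh. The sketch below shows the standard weighted-sum formulation with tiny toy data; it is a generic illustration of the technique, not FaceUnity's internal code:

```python
def apply_blendshapes(neutral, deltas, weights):
    """Compute vertices = neutral + sum_i weights[i] * deltas[i].

    neutral -- flat list of vertex coordinates for the neutral face
    deltas  -- one offset list per blendshape (same length as neutral)
    weights -- one coefficient per blendshape, typically in [0, 1]
    """
    out = list(neutral)
    for w, delta in zip(weights, deltas):
        for i, d in enumerate(delta):
            out[i] += w * d
    return out
```

With 55 blendshapes, the expression coefficients produced from the audio are simply the 55 weights fed into this sum each frame.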
Built on mature specifications for skeleton rigging and motion production, models can be fully driven by audio content, including both the head and the body.
Supports rendering and driving many types of avatars, including cartoon characters, animal styles, realistic human styles, and high-fidelity photorealistic styles.
Combined with NLP and TTS technology, STA can provide customers with an AI virtual assistant capable of real-time conversation and rich facial-expression feedback, making human-machine interaction more vivid and engaging.
With STA technology, editors can quickly generate AI virtual anchor videos with synchronized lip shapes, rich facial expressions, and model actions simply by entering text.