Avatar Interaction
Avatars can interact in several ways:
Lip-sync
Avatars generated with AvatarLabs support real-time lip-sync driven by audio provided to the SDK.
To make your avatar start talking, you first need to create an AudioContext and connect it to the SDK with connectAudioContext.
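A minimal sketch of this step. Only connectAudioContext comes from the section above; the import path, the Avatar constructor, and its options are placeholders, not the documented AvatarLabs API:

```ts
// Hypothetical import and constructor, used here only for illustration.
import { Avatar } from "avatarlabs-sdk";

const avatar = new Avatar({
  container: document.getElementById("avatar")!, // element hosting the avatar
});

// Create a Web Audio context and hand it to the SDK so the
// lip-sync system can process audio routed through it.
const audioContext = new AudioContext();
avatar.connectAudioContext(audioContext);
```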
Connect Audio
After connecting the AudioContext, create an HTMLAudioElement or AudioNode and connect it to the SDK so the audio is synced with the lip-sync system.
Once connected, playing audio from the HTMLAudioElement makes the avatar lip-sync automatically.
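Continuing the sketch above, this is roughly what the connection step could look like. The section doesn't name the exact call, so connectAudio is a hypothetical method name; check the SDK reference for the real one:

```ts
// Create an audio element for the speech clip and attach it to the SDK.
// `connectAudio` is a hypothetical method name used for illustration.
const audioElement = new Audio("speech.mp3");
avatar.connectAudio(audioElement);

// Once connected, playing the element drives the lip-sync automatically.
// (Most browsers require a user gesture before audio playback can start.)
audioElement.play();
```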
You can visit here for a complete demo showcasing how our lip-sync system works!
Expression
Avatars created with our creation suite support different expressions.
How many expressions an avatar can make depends on which model you used to generate it.
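For illustration only, triggering an expression might look like the sketch below. Both setExpression and the expression name are assumptions, since the section doesn't document the expression API:

```ts
// Hypothetical call: the real method name and the set of supported
// expressions depend on the model used to generate the avatar.
avatar.setExpression("smile");
```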