Avatars generated by AvatarLabs support real-time lip-sync driven by audio provided to the SDK. To make your avatar start talking, first create an AudioContext and connect it to the SDK with connectAudioContext.
After connecting the AudioContext, create an HTMLAudioElement or AudioNode and connect it to the SDK so the audio is synchronized with the lip-sync system.
Once connected, playing audio through the HTMLAudioElement makes the avatar lip-sync automatically.
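The steps above can be sketched as follows. This is a minimal illustration, not the definitive API: only connectAudioContext comes from this document; the `AvatarSDK` interface shape, the `connectAudioNode` method name, the `avatar` instance, and the `speech.mp3` file are assumptions for the sake of the example.

```typescript
// Hypothetical minimal shape of the SDK surface described above.
interface AvatarSDK {
  connectAudioContext(ctx: AudioContext): void;
  connectAudioNode(node: AudioNode): void; // method name assumed, not from the docs
}

declare const avatar: AvatarSDK; // obtained from AvatarLabs SDK initialization (assumed)

// 1. Create an AudioContext and hand it to the SDK.
const audioContext = new AudioContext();
avatar.connectAudioContext(audioContext);

// 2. Create an audio element and wrap it in an AudioNode the SDK can analyze.
const audioElement = new Audio("speech.mp3"); // placeholder audio source
const sourceNode = audioContext.createMediaElementSource(audioElement);
sourceNode.connect(audioContext.destination); // keep the audio audible
avatar.connectAudioNode(sourceNode);

// 3. Play the audio; the avatar lip-syncs for the duration of playback.
audioElement.play();
```

Note that browsers typically require a user gesture (e.g. a click handler) before an AudioContext may start, so in practice this code usually runs inside an event listener.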