Avatars can interact in several ways:

Lip-sync

Avatars generated with AvatarLabs support real-time lip-sync driven by audio you provide to the SDK.

To make your avatar start talking, create an AudioContext and connect it to the SDK with connectAudioContext.

import { useRef } from "react";

function App() {
    const {
        avatarDisplay,
        connectAudioElement,
        connectAudioContext,
        connectAudioNode
    } = useAvatar({
        //...
    });

    // Keep one AudioContext for the lifetime of the component.
    const audioContextRef = useRef<AudioContext>(new AudioContext());

    return (
        <>
            {avatarDisplay}
            <button
                onClick={() => {
                    connectAudioContext(audioContextRef.current);
                }}
            >
                Connect AudioContext
            </button>
        </>
    );
}
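In most browsers a freshly created AudioContext starts out suspended until a user gesture, so the same click handler is a convenient place to resume it before handing it to the SDK. A minimal sketch using the standard Web Audio resume() call (not part of the SDK):

onClick={async () => {
    // Autoplay policies usually suspend a newly created AudioContext;
    // resume it from the user gesture before connecting it.
    if (audioContextRef.current.state === "suspended") {
        await audioContextRef.current.resume();
    }
    connectAudioContext(audioContextRef.current);
}}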

Connect Audio

After connecting the AudioContext, create an HTMLAudioElement or AudioNode and connect it to the SDK so the audio is synced with the lip-sync system.

Once connected, playing audio from the HTMLAudioElement makes the avatar start lip-syncing automatically.

const { avatarDisplay, context, connectAudioElement } = useAvatar({
  //...
});

// The HTMLAudioElement that will drive lip-sync, plus the
// AudioContext created in the previous step.
const audioPlayer = useRef(new Audio());
const audioContextRef = useRef<AudioContext>(new AudioContext());

//...

return (
  <>
    {avatarDisplay}
    <button
      onClick={() => {
        connectAudioElement(audioPlayer.current, audioContextRef.current);
      }}
    >
      Connect
    </button>

    <button
      onClick={() => {
        audioPlayer.current.src = "audio.mp3";
        audioPlayer.current.play();
      }}
    >
      Play
    </button>
  </>
);

You can visit here for a complete demo showcasing how our lip-sync system works!
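If your audio lives in the Web Audio graph rather than an HTMLAudioElement (for example a microphone stream), connectAudioNode from the first example can be used instead. Its exact signature isn't shown above, so the sketch below assumes it takes the source node and the AudioContext, mirroring connectAudioElement:

// Hypothetical microphone example; the connectAudioNode argument
// order is an assumption, mirroring connectAudioElement above.
const startMicLipSync = async () => {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const micNode = audioContextRef.current.createMediaStreamSource(stream);
  connectAudioNode(micNode, audioContextRef.current);
};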

Expression

Avatars created with our creation suite support different expressions.

How many expressions an avatar can make depends on which model you used to generate it.

const { avatarDisplay, availableEmotions, setEmotion } = useAvatar({
  //...
});
return (
  <button
    onClick={() => {
      setEmotion(availableEmotions[0]);
    }}
  >
    Set emotion
  </button>
);
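A slightly fuller sketch, assuming availableEmotions is an array of string identifiers, renders one button per expression the current model supports:

const { avatarDisplay, availableEmotions, setEmotion } = useAvatar({
  //...
});

return (
  <>
    {avatarDisplay}
    {/* One button per expression reported for the current model
        (assumes each entry is a plain string). */}
    {availableEmotions.map((emotion) => (
      <button key={emotion} onClick={() => setEmotion(emotion)}>
        {emotion}
      </button>
    ))}
  </>
);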