Before we start

Before proceeding, make sure you have read the ‘Getting Started’ guide. It is assumed that you have already set up the project structure.

If you haven’t yet, here is the link: Getting Started

Make your avatar alive

To bring your avatar to life, three different types of services are needed: a text generation service (such as Langchain), an audio service (such as ElevenLabs), and a blendshapes service for mouth sync.

An example of connecting a Rive character to Langchain is provided below.

Example

To get started, you need to provide an audio service API key and pass it to the avatar (in this example, through the ElevenLabVoiceService handed to useAvatar) to make your avatar start talking.

You should also provide an avatar that supports mouth sync, supplying its URL, file, or ID (in this example, the avatarId passed to useAvatar).

import { useState } from 'react'
import { useAvatar } from '@avatechai/avatars/react'
import {
  defaultAvatarLoaders,
  defaultBlendshapesService,
} from '@avatechai/avatars/default-loaders'
// Import path for the voice service is an assumption; adjust it to match
// your version of the package.
import { ElevenLabVoiceService } from '@avatechai/avatars/voice'

// You can replace the avatarId below with any Rive model that supports mouth sync.

const elevenLabs = new ElevenLabVoiceService(
  '<apiKey>', // your ElevenLabs API key
  'eleven_monolingual_v1', // model ID
  '<voiceId>', // the voice the avatar speaks with
)

function App() {
  const [text, setText] = useState('');
  const { avatarDisplay, handleFirstInteractionAudio } = useAvatar({
    // Avatar State
    text: text,
    avatarId: 'af3f42c9-d1d7-4e14-bd81-bf2e05fd11a3',

    // Loader + Plugins
    avatarLoaders: defaultAvatarLoaders,
    blendshapesService: defaultBlendshapesService,
    audioService: elevenLabs,

    // Style Props
    scale: 4,
  })
  return (
    <>
      {avatarDisplay}
      <button
        onClick={() => {
          handleFirstInteractionAudio()
          setText('hi')
        }}
      >
        send it
      </button>
    </>
  )
}

export default App
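
The example above only speaks a hard-coded string ('hi'). To connect the character to Langchain, generate the reply text with your Langchain chain and feed the result into setText; the avatar will then speak whatever the chain returns. The sketch below is a minimal, assumed setup: the /api/chat route and its { reply } response shape are hypothetical placeholders for wherever you host your chain.

// Minimal sketch of driving the avatar from Langchain output.
// The `/api/chat` endpoint and the `{ reply }` response shape are
// hypothetical; replace them with however you expose your own chain.
async function askLangchain(prompt) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  })
  const { reply } = await res.json()
  return reply
}

Inside the component, swap the hard-coded setText('hi') for the fetched reply:

<button
  onClick={async () => {
    handleFirstInteractionAudio()
    setText(await askLangchain('Introduce yourself'))
  }}
>
  send it
</button>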