Basic demo
Before we start
Before proceeding, make sure you have read the ‘Getting Started’ guide. It is expected that you have already set up the project structure.
If you haven’t yet, here is the link: Getting Started
Bring your avatar to life
To bring your avatar to life, the following types of services are needed:
Large language model
A large language model is needed to generate your avatar’s text responses from user input.
Voice providers
A text-to-speech (TTS) voice generation provider is also needed. The voice provider generates speech from the text input, so that avatars can respond through speech. Currently we support ElevenLabs to generate the voices of your avatars.
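The two services above form a simple pipeline: the LLM turns the user’s message into a text reply, and the TTS provider turns that reply into audio. The sketch below illustrates this flow; the `Llm` and `Tts` types and the `speakReply` function are assumptions for illustration, not part of the SDK, and the stubs stand in for real LLM and ElevenLabs calls.

```typescript
// Assumed shapes, for illustration only: an LLM maps user text to a reply,
// and a TTS provider (e.g. ElevenLabs) maps reply text to audio bytes.
type Llm = (userText: string) => Promise<string>;
type Tts = (replyText: string) => Promise<Uint8Array>;

// Compose the two services: user text in, speech audio out.
async function speakReply(llm: Llm, tts: Tts, userText: string): Promise<Uint8Array> {
  const reply = await llm(userText);
  return tts(reply);
}

// Stubs so the flow can run without real API keys.
const stubLlm: Llm = async (text) => `You said: ${text}`;
const stubTts: Tts = async (text) => new TextEncoder().encode(text);

(async () => {
  const audio = await speakReply(stubLlm, stubTts, "hello");
  console.log(audio.length); // number of audio bytes produced by the stub
})();
```

In a real setup you would replace the stubs with your LLM client and an ElevenLabs TTS call, keeping the same text-in, audio-out composition.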
An example of connecting a Rive character to LangChain is provided below.
Example
To get started, you need to provide an audio service API key and put the key inside the “AvatarDisplay” tag to make your avatar start talking.
You should also provide a mouth-sync-supported avatar, putting its URL or file inside the “AvatarDisplay” tag.
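Putting the two requirements together, usage might look like the following sketch. The “AvatarDisplay” tag is from this guide, but the prop names (`elevenlabsApiKey`, `avatarSrc`) and the avatar URL are assumptions; check the SDK reference for the actual prop names.

```tsx
// Hypothetical usage sketch; prop names are assumptions.
<AvatarDisplay
  elevenlabsApiKey={process.env.ELEVENLABS_API_KEY} // audio service API key
  avatarSrc="https://example.com/my-avatar.riv"     // mouth-sync-supported Rive avatar
/>
```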