For the complete demo and its GitHub repo, you can check out the repository below.

Next.js

A simple chat-with-avatar experience built with Next.js and the Vercel AI SDK.

Setup

Clone and try out the repo demo!
npx degit avatechgg/avatars-nextjs-demo avatars-nextjs-demo
Install and run the dependencies.
cd avatars-nextjs-demo
pnpm install
pnpm dev
The demo requires an OpenAI API key and an ElevenLabs API key; you can get each key from the respective provider's website. Put them in a `.env.local` file at the project root:
OPENAI_API_KEY=""
NEXT_PUBLIC_ELEVEN_LABS_API_KEY=""

Explanation

This demo shows how to integrate with the Vercel AI SDK. To get started with the SDK, see https://sdk.vercel.ai/docs/guides/openai#guide-chat-bot
Below is a step-by-step explanation of how to integrate with the Vercel AI SDK.
pnpm dlx create-next-app my-ai-app
cd my-ai-app
pnpm install ai openai-edge
Set the OpenAI API key in your environment variables.
OPENAI_API_KEY=xxxxxxxxx
app/api/chat/route.ts
import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'
 
// Optional, but recommended: run on the edge runtime.
// See https://vercel.com/docs/concepts/functions/edge-functions
export const runtime = 'edge'
 
const apiConfig = new Configuration({
  apiKey: process.env.OPENAI_API_KEY!
})
 
const openai = new OpenAIApi(apiConfig)
 
export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json()
 
  // Request the OpenAI API for the response based on the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messages
  })
 
  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response)
 
  // Respond with the stream
  return new StreamingTextResponse(stream)
}
The useAvatar hook creates and sets up the avatar display for you! You only need to provide the avatarId and the corresponding loaders.
chat.tsx
// useChat comes from the Vercel AI SDK (ai/react); useAvatar,
// getAIReplyWithEmotion, defaultAvatarLoaders, defaultBlendshapesService_2,
// and elevenLabs come from the Avatech SDK (@avatechai/avatars)
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, setMessages } = useChat()
  
  const [text, currentEmotion] = getAIReplyWithEmotion(messages, isLoading)

  const { avatarDisplay, handleFirstInteractionAudio, availableEmotions, context } = useAvatar({
    // Avatar State
    text: text,
    currentEmotion: currentEmotion,
    avatarId: 'af3f42c9-d1d7-4e14-bd81-bf2e05fd11a3',

    // Loader + Plugins
    avatarLoaders: defaultAvatarLoaders,
    blendshapesService: defaultBlendshapesService_2,

    audioService: elevenLabs,

    // Style Props
    scale: 4,
    className: 'w-[400px] h-[400px]',
  })

  return <>
   {/* The rest of the UI */}
  </>
}
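The getAIReplyWithEmotion helper above ships with the demo repo and is not shown here. Purely as an illustration, a parser like it could pull an emotion tag out of the latest assistant message; the leading `[emotion]` tag format, the function name, and the `neutral` fallback below are assumptions for the sketch, not the demo's actual implementation:

```typescript
// Hypothetical sketch only: extracts a leading "[emotion]" tag from the
// latest assistant reply. The real getAIReplyWithEmotion in the demo repo
// may use a different tag format.
type ChatMessage = { role: string; content: string }

function parseReplyWithEmotion(
  messages: ChatMessage[],
): [text: string, emotion: string] {
  // Walk backwards to find the most recent assistant message
  const last = [...messages].reverse().find((m) => m.role === 'assistant')
  if (!last) return ['', 'neutral']

  // Match a tag such as "[happy] Hello there!"
  const match = last.content.match(/^\[(\w+)\]\s*([\s\S]*)$/)
  return match ? [match[2], match[1]] : [last.content, 'neutral']
}
```

Splitting the reply this way lets the same streamed text drive both the visible message and the avatar's expression.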
By using buildCharacterPersonaPrompt, the Avatech SDK gives you a very simple way to configure the initial system prompt with emotion tags.
// Set initial prompt
useEffect(() => {
  if (!availableEmotions) return

  setMessages([
    {
      content: buildCharacterPersonaPrompt({
        name: 'Ava',
        context: "I'm Ava, a virtual idol from Avatech.",
        exampleReplies: ['I am ava!', 'I love next js!', 'What are you working on recently?', 'npm i @avatechai/avatars'],
        emotionList: availableEmotions,
      }),
      role: 'system',
      id: '1',
    },
  ])
}, [availableEmotions])
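buildCharacterPersonaPrompt comes from the Avatech SDK, and its exact output is not documented here. Purely to illustrate the idea, a hand-rolled builder might assemble a system prompt along these lines; the function name, prompt wording, and field handling below are assumptions for the sketch, not the SDK's implementation:

```typescript
// Illustrative only: not the Avatech SDK's buildCharacterPersonaPrompt.
interface PersonaOptions {
  name: string
  context: string
  exampleReplies: string[]
  emotionList: string[]
}

function buildPersonaPromptSketch(opts: PersonaOptions): string {
  return [
    `You are ${opts.name}. ${opts.context}`,
    `Reply in character. Example replies: ${opts.exampleReplies.join(' | ')}`,
    // Ask the model to prefix each reply with one of the avatar's emotions,
    // so the client can drive the avatar's expression from the tag.
    `Start every reply with one emotion tag from: ${opts.emotionList
      .map((e) => `[${e}]`)
      .join(' ')}`,
  ].join('\n')
}
```

The key design point is the last line: by constraining the model to the avatar's own emotion list, every streamed reply carries a tag the client can map directly onto an expression.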
Finally, use the values returned by the hook in your UI!
chat.tsx
export default function Chat() {

  // The rest of the hooks from above

  return (
    <>
      <div className="md:max-w-md py-24 flex flex-col stretch ">
        {/* Avatar Display */}
        {avatarDisplay}

        {/* Message Display */}
        {messages.map(
          (m) =>
            m.role !== 'system' && (
              <div key={m.id}>
                {m.role === 'user' ? 'User: ' : 'AI: '}
                {m.content}
              </div>
            ),
        )}
      </div>

      {/* Input */}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          handleSubmit(e);
          handleFirstInteractionAudio();
        }}
      >
        {/* Input */}
      </form>
    </>
  );
}