LiveAvatarSDK - Headless AI Avatar SDK

A framework-agnostic, headless SDK for building custom AI avatar interfaces with Pipecat. Build your own UI while the SDK handles WebRTC and the Pipecat integration.

Why Headless?

The existing <live-avatar> and <live-avatar-rectangular> web components provide complete, ready-to-use UI solutions. The headless SDK is for developers who want:

  • 100% custom UI control - Design your own buttons, layouts, and styles
  • Framework flexibility - Works with React, Vue, Angular, Svelte, vanilla JS
  • Advanced integrations - Build complex UIs with your existing design system
  • Full control - Decide when to connect, what to show, how to animate

When to Use What

| Component | Use Case | UI Control | Effort |
| --- | --- | --- | --- |
| `<live-avatar>` | Quick floating bubble widget | None (pre-styled) | 1 line of HTML |
| `<live-avatar-rectangular>` | Embedded rectangle widget | None (pre-styled) | 1 line of HTML |
| `LiveAvatarSDK` (Headless) | Custom UI from scratch | 100% custom | Build your own |

Installation

Via npm

npm install @iwy/live-widgets

Via CDN

<script type="module">
  import { LiveAvatarSDK } from 'https://unpkg.com/@iwy/live-widgets@latest/dist/headless.esm.js';
</script>

Quick Start

Vanilla JavaScript

import { LiveAvatarSDK } from '@iwy/live-widgets/headless';

// Get your video/audio elements
const videoEl = document.getElementById('my-video');
const audioEl = document.getElementById('my-audio');

// Create SDK instance
const avatar = new LiveAvatarSDK(
  {
    agentId: 'your-agent-id',
    videoElement: videoEl,
    audioElement: audioEl,
  },
  {
    onConnected: () => console.log('Connected!'),
  }
);

// Connect when your custom button is clicked
document.getElementById('connect-btn').addEventListener('click', () => {
  avatar.connect();
});

// Disconnect
document.getElementById('disconnect-btn').addEventListener('click', () => {
  avatar.disconnect();
});

// Toggle mic
document.getElementById('mic-btn').addEventListener('click', () => {
  avatar.toggleMic();
});

React

import { useRef, useState, useEffect } from 'react';
import { LiveAvatarSDK } from '@iwy/live-widgets/headless';

export default function CustomAvatar() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const audioRef = useRef<HTMLAudioElement>(null);
  const avatarRef = useRef<LiveAvatarSDK | null>(null);

  const [isConnected, setIsConnected] = useState(false);

  useEffect(() => {
    const avatar = new LiveAvatarSDK(
      {
        agentId: 'your-agent-id',
        videoElement: videoRef.current || undefined,
        audioElement: audioRef.current || undefined,
      },
      {
        onConnected: () => setIsConnected(true),
        onDisconnected: () => setIsConnected(false),
      }
    );

    avatarRef.current = avatar;
    return () => avatar.destroy();
  }, []);

  return (
    <div className="my-custom-avatar">
      <video ref={videoRef} autoPlay playsInline muted />
      <audio ref={audioRef} autoPlay />

      <button onClick={() => avatarRef.current?.connect()}>
        Connect
      </button>

      <button onClick={() => avatarRef.current?.disconnect()}>
        Disconnect
      </button>
    </div>
  );
}

Vue 3

<template>
  <div class="my-custom-avatar">
    <video ref="videoRef" autoplay playsinline muted />
    <audio ref="audioRef" autoplay />

    <button @click="handleConnect">Connect</button>
    <button @click="handleDisconnect">Disconnect</button>
  </div>
</template>

<script setup>
import { ref, onMounted, onUnmounted } from 'vue';
import { LiveAvatarSDK } from '@iwy/live-widgets/headless';

const videoRef = ref(null);
const audioRef = ref(null);

let avatar = null;

onMounted(() => {
  avatar = new LiveAvatarSDK(
    {
      agentId: 'your-agent-id',
      videoElement: videoRef.value,
      audioElement: audioRef.value,
    },
    {}
  );
});

onUnmounted(() => {
  avatar?.destroy();
});

const handleConnect = () => avatar?.connect();
const handleDisconnect = () => avatar?.disconnect();
</script>

API Reference

Constructor

new LiveAvatarSDK(config: LiveAvatarConfig, callbacks?: LiveAvatarCallbacks)

Configuration (LiveAvatarConfig)

| Property | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| agentId | string | Yes | - | Your Pipecat agent ID |
| videoElement | HTMLVideoElement | No | - | Video element for bot video |
| audioElement | HTMLAudioElement | No | - | Audio element for bot audio |
| enableMic | boolean | No | true | Enable microphone by default |
| enableCam | boolean | No | false | Enable camera (if needed) |
| warmStart | boolean | No | true | Pre-fetch session on init for faster connection |
| warmRestart | boolean | No | true | Pre-fetch new session after each call ends for faster reconnection |

Callbacks (LiveAvatarCallbacks)

| Callback | Parameters | Description |
| --- | --- | --- |
| onConnecting | () | Called when connection starts |
| onConnected | () | Called when connected |
| onDisconnected | () | Called when disconnected |
| onBotConnected | () | Called when bot joins |
| onBotReady | () | Called when bot is ready |
| onError | (error: Error) | Called on errors |
| onVideoTrack | (track: MediaStreamTrack) | Bot video track available |
| onAudioTrack | (track: MediaStreamTrack) | Bot audio track available |
| onLocalAudioTrack | (track: MediaStreamTrack) | Local audio track available |
| onUserTranscript | (data: TranscriptData) | User speech transcript |
| onBotTranscript | (data: TranscriptData) | Bot speech transcript |
| onMicStateChange | (enabled: boolean) | Microphone state changed |
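A common use of onMicStateChange is keeping a custom mic button in sync with the SDK rather than tracking the state yourself. A minimal sketch; `micButtonLabel` is a hypothetical helper, not an SDK export:

```javascript
// Map mic state to a button label (labels are illustrative).
function micButtonLabel(enabled) {
  return enabled ? 'Mute' : 'Unmute';
}

// In a browser, wire it to the callback so the UI never drifts
// from the SDK's actual mic state:
//
// const avatar = new LiveAvatarSDK(
//   { agentId: 'your-agent-id' },
//   {
//     onMicStateChange: (enabled) => {
//       document.getElementById('mic-btn').textContent = micButtonLabel(enabled);
//     },
//   }
// );

console.log(micButtonLabel(true));  // "Mute"
console.log(micButtonLabel(false)); // "Unmute"
```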

Methods

connect(): Promise<void>

Connect to the Pipecat session.

await avatar.connect();

disconnect(): Promise<void>

Disconnect from the session.

await avatar.disconnect();

toggleMic(): void

Toggle microphone on/off.

avatar.toggleMic();

setMicEnabled(enabled: boolean): void

Set microphone state explicitly.

avatar.setMicEnabled(true);  // Enable
avatar.setMicEnabled(false); // Disable

attachVideoElement(element: HTMLVideoElement): void

Attach or change video element dynamically.

const newVideo = document.getElementById('another-video');
avatar.attachVideoElement(newVideo);

attachAudioElement(element: HTMLAudioElement): void

Attach or change audio element dynamically.

const newAudio = document.getElementById('another-audio');
avatar.attachAudioElement(newAudio);

getTracks()

Get current media tracks.

const tracks = avatar.getTracks();
console.log(tracks?.bot?.video); // Bot video track
console.log(tracks?.local?.audio); // Local audio track

destroy(): void

Clean up and destroy the SDK instance.

avatar.destroy();

Properties

| Property | Type | Description |
| --- | --- | --- |
| connectionState | ConnectionState | Current connection state |
| isConnected | boolean | Whether connected |
| isConnecting | boolean | Whether connecting |
| isMicEnabled | boolean | Whether mic is enabled |
| error | Error \| null | Current error (if any) |
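These properties are handy for guarding UI actions. A minimal sketch, assuming an object exposing the properties above (a plain object stands in for an SDK instance here so the guard is easy to test):

```javascript
// Only allow a new connect() when the SDK is idle and error-free.
// `sdk` is any object with the properties from the table above.
function canConnect(sdk) {
  return !sdk.isConnected && !sdk.isConnecting && sdk.error === null;
}

const idle = { isConnected: false, isConnecting: false, error: null };
const busy = { isConnected: false, isConnecting: true, error: null };
console.log(canConnect(idle)); // true
console.log(canConnect(busy)); // false
```

In a real app you would call `canConnect(avatar)` before enabling your connect button.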

Types

ConnectionState

type ConnectionState = 'disconnected' | 'connecting' | 'connected' | 'error';
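In a custom UI you typically map each state to a status label. A minimal sketch; the labels are illustrative, not part of the SDK:

```javascript
// Map every ConnectionState value to user-facing text.
function statusLabel(state) {
  switch (state) {
    case 'disconnected': return 'Offline';
    case 'connecting':   return 'Connecting…';
    case 'connected':    return 'Live';
    case 'error':        return 'Error';
  }
}

console.log(statusLabel('connected')); // "Live"
```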

TranscriptData

interface TranscriptData {
  text: string;
  final?: boolean;
  timestamp?: number;
}
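Since `final` distinguishes interim from completed transcripts, a transcript view usually replaces the in-progress text in place and only appends completed lines. A minimal sketch of that pattern, as you might use inside an onUserTranscript handler (the store is a hypothetical helper, not part of the SDK):

```javascript
// Accumulate TranscriptData events: interim text is replaced,
// final text is appended as a completed line.
function makeTranscriptStore() {
  const lines = [];
  let interim = '';
  return {
    push(data) {
      if (data.final) {
        lines.push(data.text);
        interim = '';
      } else {
        interim = data.text;
      }
    },
    // Completed lines plus the current interim text, if any.
    render() {
      return interim ? [...lines, interim] : [...lines];
    },
  };
}

const store = makeTranscriptStore();
store.push({ text: 'hel' });                       // interim update
store.push({ text: 'hello there', final: true });  // completed line
console.log(store.render()); // ["hello there"]
```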

Complete Examples

Custom Dashboard

import { LiveAvatarSDK } from '@iwy/live-widgets/headless';

const avatar = new LiveAvatarSDK(
  {
    agentId: 'your-agent-id',
  },
  {
    onConnecting: () => {
      updateStatus('Connecting...');
      showSpinner();
    },
    onConnected: () => {
      updateStatus('Connected');
      hideSpinner();
      enableControls();
    },
    onBotConnected: () => {
      console.log('Bot joined!');
      showBotIndicator();
    },
    onError: (error) => {
      updateStatus('Error: ' + error.message);
      showErrorNotification(error);
    },
    onUserTranscript: (data) => {
      if (data.final) {
        addTranscript('user', data.text);
      }
    },
    onBotTranscript: (data) => {
      addTranscript('bot', data.text);
    },
    onVideoTrack: (track) => {
      // Manually attach video if needed
      const video = document.getElementById('custom-video');
      video.srcObject = new MediaStream([track]);
      video.play().catch(console.error);
    },
  }
);

// Your custom UI controls
document.getElementById('start').onclick = () => avatar.connect();
document.getElementById('stop').onclick = () => avatar.disconnect();
document.getElementById('mute').onclick = () => avatar.toggleMic();

Multi-Agent Switcher

import { LiveAvatarSDK } from '@iwy/live-widgets/headless';

let currentAvatar = null;

async function switchAgent(agentId) {
  // Disconnect current
  if (currentAvatar) {
    await currentAvatar.disconnect();
    currentAvatar.destroy();
  }

  // Create new connection
  currentAvatar = new LiveAvatarSDK(
    {
      agentId: agentId,
      videoElement: document.getElementById('video'),
      audioElement: document.getElementById('audio'),
    },
    {
      onConnected: () => {
        console.log(`Connected to agent: ${agentId}`);
      },
    }
  );

  await currentAvatar.connect();
}

// Switch between agents
document.getElementById('agent1-btn').onclick = () => switchAgent('agent-1');
document.getElementById('agent2-btn').onclick = () => switchAgent('agent-2');

Advanced Features

Media Handling

The SDK supports two approaches for handling bot video and audio. Choose one approach for clarity.

Mode 1: SDK-Managed (Recommended)

Pass videoElement and/or audioElement in the config. The SDK automatically attaches tracks when they become available.

const avatar = new LiveAvatarSDK({
  agentId: 'your-agent-id',
  videoElement: document.getElementById('my-video'),
  audioElement: document.getElementById('my-audio'),
});

// That's it! SDK handles track attachment automatically
await avatar.connect();

Mode 2: Manual Control

Don't pass elements in config. Handle track attachment yourself via callbacks.

const avatar = new LiveAvatarSDK(
  { agentId: 'your-agent-id' },
  {
    onVideoTrack: (track) => {
      const video = document.getElementById('my-video');
      video.srcObject = new MediaStream([track]);
      video.play().catch(console.error);
    },
    onAudioTrack: (track) => {
      const audio = document.getElementById('my-audio');
      audio.srcObject = new MediaStream([track]);
      audio.play().catch(console.error);
    },
  }
);

await avatar.connect();

Note: The SDK internally prevents duplicate track attachment, so mixing modes won't cause errors. However, choosing one approach makes your code clearer.

Warm-Start and Warm-Restart (Faster Connections)

The SDK provides two options for optimizing connection speed:

warmStart (default: true) - Pre-fetches the session when the SDK is initialized, reducing latency on the first connect() call.

warmRestart (default: true) - Pre-fetches a new session immediately after each call ends, ensuring fast reconnection for subsequent calls.

// Both enabled (recommended for best UX)
const avatar = new LiveAvatarSDK({
  agentId: 'your-agent-id',
  warmStart: true,      // Pre-fetch on init (default)
  warmRestart: true,    // Pre-fetch after each call ends (default)
});

// First connect is fast (warmStart)
await avatar.connect();
// ... call ends ...
// Next connect is also fast (warmRestart pre-fetched in background)
await avatar.connect();

// Disable warm restart if you don't expect multiple calls per session
const avatar = new LiveAvatarSDK({
  agentId: 'your-agent-id',
  warmStart: true,      // Still pre-fetch on init
  warmRestart: false,   // Don't pre-fetch after disconnect
});

// Disable all pre-fetching (cold start only)
const avatar = new LiveAvatarSDK({
  agentId: 'your-agent-id',
  warmStart: false,
  warmRestart: false,
});

Attaching Elements Dynamically

const avatar = new LiveAvatarSDK({ agentId: 'demo' });

// Later, attach elements
avatar.attachVideoElement(document.getElementById('video-1'));

// Switch to different element
avatar.attachVideoElement(document.getElementById('video-2'));

Conditional Connection

const avatar = new LiveAvatarSDK(
  { agentId: 'demo' },
  {
    onBotReady: async () => {
      // Wait for bot to be ready before showing UI
      document.getElementById('loading').style.display = 'none';
      document.getElementById('avatar-ui').style.display = 'block';
    },
  }
);

// Only connect after user grants permissions
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(() => avatar.connect())
  .catch((err) => console.error('Mic permission denied', err));

TypeScript Support

The SDK is fully typed with TypeScript:

import { LiveAvatarSDK, LiveAvatarConfig, LiveAvatarCallbacks, ConnectionState } from '@iwy/live-widgets/headless';

const config: LiveAvatarConfig = {
  agentId: 'demo',
  enableMic: true,
};

const callbacks: LiveAvatarCallbacks = {
  onConnected: () => console.log('Connected'),
};

const avatar = new LiveAvatarSDK(config, callbacks);

Browser Compatibility

| Browser | Support |
| --- | --- |
| Chrome/Edge | ✅ Full |
| Firefox | ✅ Full |
| Safari | ✅ Full (iOS 11+) |

Requirements:

  • WebRTC
  • ES2020+
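If you target older browsers or embedded webviews, it can be worth feature-detecting WebRTC before constructing the SDK. A minimal sketch; the check takes the global object as a parameter so it can be exercised outside a browser:

```javascript
// Return true when the environment exposes the WebRTC APIs the SDK needs.
function supportsWebRTC(globalObj) {
  return typeof globalObj.RTCPeerConnection === 'function' &&
         !!globalObj.navigator &&
         !!globalObj.navigator.mediaDevices;
}

// In a browser: supportsWebRTC(window)
console.log(supportsWebRTC({})); // false
```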

Troubleshooting

Video not showing

Make sure a video element is attached. In SDK-managed mode, pass it in the config and the SDK handles playback:

const avatar = new LiveAvatarSDK({
  agentId: 'demo',
  videoElement: document.getElementById('my-video'),
});

Or handle it manually:

const avatar = new LiveAvatarSDK(
  { agentId: 'demo' },
  {
    onVideoTrack: (track) => {
      const video = document.getElementById('my-video');
      video.srcObject = new MediaStream([track]);
      video.play().catch(console.error);
    },
  }
);

Audio not playing

Ensure you have an audio element and it's set to autoplay:

<audio id="my-audio" autoplay></audio>

const avatar = new LiveAvatarSDK({
  agentId: 'demo',
  audioElement: document.getElementById('my-audio'),
});

Microphone not working

Check browser permissions and HTTPS:

// Request mic permission first
await navigator.mediaDevices.getUserMedia({ audio: true });

// Then connect
await avatar.connect();

Examples

See the examples directory for complete working examples:

Links

License

MIT License - see LICENSE file

Credits

Built by iwy.ai with: