Gunsmoke3D is a 3D courtroom simulation engine that transforms courtroom transcripts into fully animated scenes — complete with synced audio, emotional expressions, and cinematic camera work.
Built with Next.js, React Three Fiber, and Supabase, it supports transcript-driven playback, live recording, Slack integration, and downloadable chapter files.
💡 AI-Powered: GPT-4 is used to convert raw courtroom transcripts into structured scenes with speaker metadata, camera presets, emotion cues, and line timing.
## Features

- 📜 Transcript-based 3D scene generation
- 🧠 Emotion-aware character expressions
- 👄 Real-time lip sync via viseme + amplitude merging
- 👩‍⚖️ Judge intro walk-in with cinematic camera flythrough
- 🎥 Scene recording (WebM video + audio stream)
- 📁 Downloadable chapter files (`chapters.txt`)
- 🗂️ Scene viewer with metadata and summaries
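The lip sync feature merges two signals: discrete viseme cues and the audio's amplitude envelope. A minimal sketch of one way that merge could work, assuming normalized amplitude samples — the names (`VisemeCue`, `mergeVisemesWithAmplitude`) are illustrative, not the project's actual API:

```typescript
// Hypothetical sketch: scale each viseme's mouth shape by how loudly
// the line is spoken at that moment, so quiet words get subtle motion.
interface VisemeCue {
  time: number;  // seconds into the audio clip
  shape: string; // e.g. "AA", "O", "M"
}

function mergeVisemesWithAmplitude(
  cues: VisemeCue[],
  amplitudes: number[], // normalized 0..1, sampled at `sampleRate` Hz
  sampleRate: number
): { time: number; shape: string; weight: number }[] {
  return cues.map((cue) => {
    // Pick the amplitude sample nearest the cue's timestamp.
    const i = Math.min(
      amplitudes.length - 1,
      Math.round(cue.time * sampleRate)
    );
    return { time: cue.time, shape: cue.shape, weight: amplitudes[i] };
  });
}

// Example: two cues against a 10 Hz amplitude envelope.
const frames = mergeVisemesWithAmplitude(
  [{ time: 0.1, shape: "AA" }, { time: 0.3, shape: "M" }],
  [0.2, 0.9, 0.4, 0.7],
  10
);
```

The merged `weight` can then drive a morph-target influence on the character's mouth each frame.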
**Example scene:** Elizabeth Holmes Testimony (Gunsmoke3D)
Transcript sourced from: SEC.gov
## Setup

Clone the repo:

```bash
git clone https://github.com/cornerbodega/gunsmoke3d.git
```

Install and run the client:

```bash
cd gunsmoke3d/client
yarn install
yarn dev
```

Install and run the server:

```bash
cd ../server
yarn install
yarn dev
```

## Environment Variables

Create a `.env` file in each folder:

`client/.env`

```
NEXT_PUBLIC_SUPABASE_URL=...
NEXT_PUBLIC_SUPABASE_ANON_KEY=...
GOOGLE_APPLICATION_CREDENTIALS_BASE64=...
NEXT_PUBLIC_SLACK_WEBHOOK_URL=...
```

`server/.env`

```
OPENAI_API_KEY=...
NEXT_PUBLIC_SUPABASE_URL=...
NEXT_PUBLIC_SUPABASE_ANON_KEY=...
GCS_APPLICATION_CREDENTIALS_BASE64=...
GCS_BUCKET_NAME=...
SCRIPT_CREATION_LOGS_SLACK_WEBHOOK_URL=...
```

## Project Structure

**client/**

- `pages/index.js` – Entry point and landing UI
- `pages/scenes.js` – Scene browser with summaries
- `pages/create-scene-from-transcript.js` – Upload and convert transcripts
- `pages/courtroom/[sceneId].js` – Scene renderer and playback
- `pages/api/` – `audio-proxy.js`, `create-chapters.js`, `upload-to-server.js`, etc.
- `components/` – `CourtroomScene.js` (main animation engine), `CameraController.js` (camera logic), `CourtroomPrimatives.js` (3D environment: `Floor`, `Ceiling`, `Wall`, `WindowedWall`, `DividerWithGate`, `JudgeTable`, `WitnessStand`, `LawyerTable`, `SingleChair`, `Bench`, `StenographerStation`, `JuryBox`, `CeilingLight`, `Character`)
- `utils/` – `supabase.js`, `slack.js`, `audio.js`, `viseme.js`

**server/**

- `server.js` – Express backend (transcript → scene)
- `routes/` – API endpoints
- `utils/` – `sendToSlack.js`, `saveToSupabase.js`, `supabase.js`
- `ffmpeg-merge-command.txt`, `qc-queries.txt`
- `llm-prompts/` – Prompt templates for GPT-4
- `uploads/`, `pdf-prep/`, `videos/` – Processed data
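To illustrate what `create-chapters.js` produces conceptually, here is a hedged sketch of turning timed scene entries into a `chapters.txt` listing — the real route's output format is not documented here, and the `Chapter` type and function names are hypothetical:

```typescript
// Hypothetical sketch of chapter-file generation: each entry becomes
// an "HH:MM:SS Title" line, one per chapter.
interface Chapter {
  startSeconds: number;
  title: string;
}

function formatTimestamp(totalSeconds: number): string {
  const h = Math.floor(totalSeconds / 3600);
  const m = Math.floor((totalSeconds % 3600) / 60);
  const s = Math.floor(totalSeconds % 60);
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${pad(h)}:${pad(m)}:${pad(s)}`;
}

function toChaptersTxt(chapters: Chapter[]): string {
  return chapters
    .map((c) => `${formatTimestamp(c.startSeconds)} ${c.title}`)
    .join("\n");
}

const txt = toChaptersTxt([
  { startSeconds: 0, title: "Judge walk-in" },
  { startSeconds: 75, title: "Direct examination" },
]);
```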
## How It Works

Transcripts are parsed and structured using GPT-4 with custom system prompts. Each speaker line is assigned metadata:

- `role`, `character_id`, `emotion`, `camera`, `zone`, `eye_target`
- Optionally, `viseme_data` is post-processed from amplitude info

This structured output is rendered in 3D on the client using React Three Fiber.
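An illustrative shape for one structured line, built from the metadata fields listed above — the exact schema the GPT-4 prompts emit may differ, and the `text` field is an assumption:

```typescript
// Illustrative only: field names come from the metadata list above;
// `text` and the viseme entry shape are assumptions, not the real schema.
interface SceneLine {
  role: string;         // e.g. "judge", "witness", "prosecutor"
  character_id: string; // which 3D character speaks
  emotion: string;      // drives the facial expression
  camera: string;       // camera preset for this line
  zone: string;         // courtroom zone the character occupies
  eye_target: string;   // who or what the character looks at
  text: string;         // the spoken transcript line (assumed field)
  viseme_data?: { time: number; shape: string }[]; // optional post-processing
}

const example: SceneLine = {
  role: "witness",
  character_id: "holmes",
  emotion: "defensive",
  camera: "close-up",
  zone: "witness_stand",
  eye_target: "prosecutor",
  text: "I don't recall that conversation.",
};
```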
## Tech Stack

- `next`, `react`, `three`, `@react-three/fiber`, `@react-three/drei`
- `supabase-js`, `uuid`, `ffmpeg`, `formidable`, `dotenv`
- GPT-4 (via OpenAI API)
MIT © Marvin Rhone