Stream LLM output into a terminal using @wterm/react, @wterm/markdown, and the AI SDK. Chat messages are sent to an API route that streams responses via AI Gateway, and the terminal renders incoming Markdown as ANSI in real time.
From the monorepo root:

```sh
pnpm install
zig build
```

Copy the env file and add your API key:

```sh
cp examples/markdown-streaming/.env.example examples/markdown-streaming/.env.local
```

Then start the dev server:

```sh
pnpm --filter markdown-streaming dev
```

Opens at `markdown-streaming-example.wterm.localhost` via portless.
- `@wterm/react` renders the terminal with `<Terminal>` and `useTerminal`
- A `ChatShell` class handles user input and sends messages to `/api/chat`
- The API route uses AI SDK `streamText` with `openai/gpt-4o-mini` (AI Gateway) to stream responses
- Response chunks are piped through `@wterm/markdown`'s `MarkdownRenderer`, converting Markdown to ANSI escape sequences in real time
- Press `Ctrl+C` during a response to abort the stream
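The chunk-piping and abort steps above can be sketched as a small helper. This is an illustrative sketch, not the actual `ChatShell` implementation; the function name and callback shape are assumptions:

```typescript
// Sketch: read a streaming response body chunk by chunk, hand each
// decoded piece to a render callback, and stop early if the given
// AbortSignal fires (the hook a Ctrl+C handler would use).
// Hypothetical helper -- the real ChatShell API may differ.
export async function pipeStream(
  body: ReadableStream<Uint8Array>,
  render: (chunk: string) => void,
  signal?: AbortSignal,
): Promise<void> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  try {
    while (true) {
      if (signal?.aborted) break; // Ctrl+C aborts mid-response
      const { done, value } = await reader.read();
      if (done) break;
      // stream: true keeps multi-byte characters intact across chunks
      render(decoder.decode(value, { stream: true }));
    }
  } finally {
    reader.releaseLock();
  }
}
```

In the example app, the `render` callback would feed each chunk into the Markdown renderer rather than printing it directly.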
| File | Description |
|---|---|
| `src/app/page.tsx` | Terminal page with `ChatShell` that streams responses through `MarkdownRenderer` |
| `src/app/api/chat/route.ts` | API route using AI SDK `streamText` |
| `src/app/layout.tsx` | Root layout |
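The Markdown-to-ANSI conversion done by `MarkdownRenderer` in `page.tsx` can be illustrated with a toy converter. This is not the `@wterm/markdown` implementation (which is streaming-aware and far more complete); it only shows the general idea of mapping Markdown spans to ANSI escape sequences:

```typescript
// Toy illustration: turn **bold** and `code` spans into ANSI
// SGR escape sequences. Not the real @wterm/markdown renderer.
const BOLD = '\x1b[1m';
const CYAN = '\x1b[36m';
const RESET = '\x1b[0m';

export function markdownToAnsi(line: string): string {
  return line
    .replace(/\*\*(.+?)\*\*/g, `${BOLD}$1${RESET}`) // **bold** -> bold text
    .replace(/`(.+?)`/g, `${CYAN}$1${RESET}`);      // `code`  -> cyan text
}
```

A real streaming renderer also has to handle spans that are split across chunk boundaries, which is why the example pipes chunks through a stateful renderer instead of converting line by line.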