# Markdown

Streaming Markdown-to-ANSI renderer for terminals. Designed for rendering LLM output in real time: push text chunks as they arrive and get styled terminal output back.
## Install

```sh
npm install @wterm/markdown
```

## Quick Start

### Vanilla JS
```js
import { WTerm } from "@wterm/dom";
import { MarkdownRenderer } from "@wterm/markdown";
import "@wterm/dom/css";

const term = new WTerm(document.getElementById("terminal"));
await term.init();

const md = new MarkdownRenderer();
const response = await fetch("/api/chat", { method: "POST" });
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // { stream: true } keeps multi-byte characters split across chunks intact
  const rendered = md.push(decoder.decode(value, { stream: true }));
  if (rendered) term.write(rendered);
}
term.write(md.flush());
```

### React
```tsx
import { useCallback, useRef } from "react";
import { Terminal, useTerminal } from "@wterm/react";
import { MarkdownRenderer } from "@wterm/markdown";
import "@wterm/react/css";

function App() {
  const { ref, write } = useTerminal();
  const mdRef = useRef(new MarkdownRenderer());

  const handleReady = useCallback(async () => {
    const response = await fetch("/api/chat", { method: "POST" });
    const reader = response.body!.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      // { stream: true } keeps multi-byte characters split across chunks intact
      const rendered = mdRef.current.push(decoder.decode(value, { stream: true }));
      if (rendered) write(rendered);
    }
    write(mdRef.current.flush());
  }, [write]);

  return <Terminal ref={ref} onReady={handleReady} />;
}
```

## Options
| Option | Type | Default | Description |
|---|---|---|---|
| `width` | `number` | `80` | Terminal width in columns (used for horizontal rules) |
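To illustrate what `width` controls: horizontal rules are drawn across the full configured column count. A minimal sketch of that behavior (the exact escape sequence is an assumption; this is not the library's code):

```typescript
// Sketch: how a configured width might translate into a horizontal rule.
// The dim SGR code (\x1b[2m) is an assumption about the actual styling.
function renderHorizontalRule(width: number = 80): string {
  const DIM = "\x1b[2m";
  const RESET = "\x1b[0m";
  return DIM + "─".repeat(width) + RESET;
}
```

With `width: 100`, a `---` line would span 100 columns instead of the default 80.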
## Methods

| Method | Description |
|---|---|
| `push(delta: string): string` | Feed a chunk of Markdown text. Returns rendered ANSI output for any complete lines; buffers incomplete lines internally. |
| `flush(): string` | Flush remaining buffered content. Call this when the stream ends to render any trailing text and close open code blocks. |
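The `push`/`flush` contract can be modeled with a small line buffer. This sketch is not the library's implementation, only an illustration of the buffering behavior described above (the real renderer also applies ANSI styling):

```typescript
// Illustrative model of the push()/flush() contract: complete lines are
// returned immediately, while a trailing partial line is held back until
// more input arrives or flush() is called.
class LineBuffer {
  private pending = "";

  push(delta: string): string {
    this.pending += delta;
    const lastNewline = this.pending.lastIndexOf("\n");
    if (lastNewline === -1) return ""; // no complete line yet
    const complete = this.pending.slice(0, lastNewline + 1);
    this.pending = this.pending.slice(lastNewline + 1);
    return complete;
  }

  flush(): string {
    const rest = this.pending;
    this.pending = "";
    return rest;
  }
}
```

For example, `push("# He")` returns `""`, `push("llo\nwor")` returns `"# Hello\n"`, and `flush()` returns `"wor"`.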
## Supported Syntax

| Syntax | Rendering |
|---|---|
| `# Heading` through `### Heading` | Bold, bright white for h1–h2; bold for h3+ |
| `**bold**` or `__bold__` | Bold text |
| `*italic*` or `_italic_` | Italic text |
| `` `code` `` | Cyan inline code |
| `[text](url)` | Underlined green link text with dimmed URL |
| Fenced code blocks (```` ``` ````) | Indented with dimmed borders |
| `- item`, `* item`, `+ item` | Unordered list with indented bullets |
| `1. item` or `1) item` | Ordered list with numbered items |
| `> quote` | Blockquote with dimmed vertical bar |
| `---`, `***`, `___` | Dimmed horizontal rule |
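The stylings above correspond to standard SGR escape sequences. As a rough reference, inline spans could be wrapped like this (the specific codes are assumptions for illustration, not necessarily the ones the library emits):

```typescript
// Hypothetical mapping from inline Markdown spans to SGR escape codes.
// SGR parameters: 1 = bold, 3 = italic, 36 = cyan, 4;32 = underlined green.
const RESET = "\x1b[0m";

const styleSpan = {
  bold: (s: string) => `\x1b[1m${s}${RESET}`,
  italic: (s: string) => `\x1b[3m${s}${RESET}`,
  inlineCode: (s: string) => `\x1b[36m${s}${RESET}`,
  linkText: (s: string) => `\x1b[4;32m${s}${RESET}`,
};
```

Any terminal (or terminal emulator component like WTerm) that understands SGR sequences will render these styles.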
## Streaming LLM Output

The renderer is designed for the streaming pattern common with LLM APIs. Here's a complete walkthrough:
```ts
import { MarkdownRenderer } from "@wterm/markdown";

const md = new MarkdownRenderer();

async function streamChat(
  write: (data: string) => void,
  prompt: string,
) {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value, { stream: true });
    const rendered = md.push(chunk);
    if (rendered) write(rendered);
  }

  const remaining = md.flush();
  if (remaining) write(remaining);
}
```

**How it works:**

- Create a `MarkdownRenderer` instance before the stream starts.
- As each chunk arrives, call `push(chunk)`; it buffers incomplete lines and only returns output for complete lines.
- When the stream ends, call `flush()` to render any remaining buffered content and close open code blocks.
- Write each non-empty result to the terminal with `write()`.
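One detail worth calling out in the loop above: `decoder.decode(value, { stream: true })` keeps multi-byte UTF-8 characters intact when a network chunk boundary splits them. A self-contained demonstration (no wterm APIs involved):

```typescript
// "é" is two bytes in UTF-8 (0xC3 0xA9). If a chunk boundary falls between
// them, { stream: true } makes TextDecoder hold the partial byte until the
// next chunk instead of emitting a replacement character.
const decoder = new TextDecoder();
const chunk1 = new Uint8Array([0x63, 0x61, 0x66, 0xc3]); // "caf" + first byte of "é"
const chunk2 = new Uint8Array([0xa9]);                   // second byte of "é"

const out =
  decoder.decode(chunk1, { stream: true }) +
  decoder.decode(chunk2, { stream: true });
// out === "café"
```

Without the `stream` option, the first `decode` call would treat the dangling `0xC3` as a decoding error and emit `U+FFFD`, garbling the rendered output.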