Advanced usage
How it works, optimizing token usage, and more.
syntux is highly optimized to save tokens. To understand how, we must first look at how it works.
How it works
Generated interfaces must be secure, reusable and cacheable.
As such, syntux does not:
- generate code (HTML/JSX), or...
- hardcode the value
Instead, syntux generates a JSON-DSL representation of the UI, known as the React Interface Schema (RIS).
The RIS does not hardcode values. It binds to properties of the value and has built-in iterators, making it reusable and token-efficient for arrays.
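To make the binding concrete, here is a minimal sketch of how a renderer could resolve a `$bind` path such as `authors` or `$item.name` against the value. This is an illustration of the idea, not syntux's actual implementation; the `resolveBinding` name and signature are assumptions.

```typescript
// Illustrative only: resolve a RIS binding path against the supplied value.
// "$item" is assumed to refer to the current element of a __ForEach__ loop.
function resolveBinding(path: string, value: unknown, item?: unknown): unknown {
  const segments = path.split(".");
  let current: any = value;
  if (segments[0] === "$item") {
    // Paths rooted at "$item" resolve against the current loop element.
    current = item;
    segments.shift();
  }
  for (const segment of segments) {
    if (current == null) return undefined; // missing property: bind to nothing
    current = current[segment];
  }
  return current;
}
```

With the RIS lines below, a renderer would first resolve `authors` to obtain the loop's array, then resolve `$item.name` once per element.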
An example of the RIS:
{"id":"loop_1", "parentId":"root", "type":"__ForEach__", "props":{"source":"authors"}}
{"id":"card_1", "parentId":"loop_1", "type":"div", "props":{"className":"card"}, "content": {"$bind": "$item.name"}}
skeletonize property
When you send a generate request, syntux stringifies the value in its entirety and sends it to the LLM.
This is fine for most applications, but not for large arrays, which would consume a lot of tokens.
To solve this problem, generate a skeleton of the input with the skeletonize prop:
const myArrayToDisplay = [ ... 1000 items ... ];
<GeneratedUI value={myArrayToDisplay} skeletonize={true} model={ ... } />

The skeletonize prop works by removing all property values. It only keeps the type information of the object, providing the LLM with just
enough information to bind to properties.
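The transformation can be sketched as follows. This is a minimal illustration of the idea, not syntux's actual skeletonizer; the `skeletonize` function shown here is an assumption about how it could work.

```typescript
// Illustrative sketch: replace every leaf value with its type name,
// and keep only one sample element per array to describe its shape.
function skeletonize(value: unknown): unknown {
  if (Array.isArray(value)) {
    // One representative element is enough for the LLM to bind against.
    return value.length > 0 ? [skeletonize(value[0])] : [];
  }
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, child] of Object.entries(value)) {
      out[key] = skeletonize(child);
    }
    return out;
  }
  return typeof value; // "string" | "number" | "boolean" | ...
}
```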
An example input:
{
"user": { "id": 123, "name": "Grant", "verified": true },
"posts": [
{ "title": "Generative UI", "likes": 45, "tags": ["js", "web"] },
{ "title": "Is Awesome!", "likes": 6, "tags": ["ai"] }
],
"settings": {
"theme": "dark",
"notifications": { "email": true, "sms": false }
}
}

After skeletonization:
{
"user": { "id": "number", "name": "string", "verified": "boolean" },
"posts": [
{ "title": "string", "likes": "number", "tags": ["string"] }
],
"settings": {
"theme": "string",
"notifications": { "email": "boolean", "sms": "boolean" }
}
}

As you can see, the skeleton preserves the structure of the input while discarding every value.
Use skeletonize sparingly!
By stripping out property values, you limit the context and creativity of the model.
In general, only use it for large arrays.
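One way to follow that advice is to gate the prop on the input size. The `shouldSkeletonize` helper and the threshold of 100 below are illustrative assumptions, not part of syntux:

```typescript
// Hypothetical helper: only skeletonize large arrays, so small inputs
// keep their full values (and the model keeps its full context).
const LARGE_ARRAY_THRESHOLD = 100; // assumed cutoff; tune for your token budget

function shouldSkeletonize(value: unknown): boolean {
  return Array.isArray(value) && value.length > LARGE_ARRAY_THRESHOLD;
}
```

You would then pass it through as `<GeneratedUI value={data} skeletonize={shouldSkeletonize(data)} model={ ... } />`.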
Cost estimation
As a TL;DR of the two points above:
syntux is very efficient when it comes to tokens.
Output tokens:
- it does not hardcode values. Instead, it binds to properties of the value.
- it has built-in iterators to avoid repetition, handling large arrays with ease.
Input tokens:
- with skeletonization (see above), it can reduce any input to a trivial size.
Cost estimation table:
| Provider | Model | Generations per $5 | Recommended? |
|---|---|---|---|
| Anthropic | Opus 4.5 | 1,000 | ❌ overkill |
| Anthropic | Sonnet 4.5 | 1,667 | ❌ overkill |
| OpenAI | GPT 5.2 | 2,860 | ❌ overkill |
| OpenAI | GPT 5.2 Mini | 20,000 | ✓ good |
| Google | Gemini Pro Preview | 2,500 | ❌ overkill |
| Google | Gemini Flash | 10,000 | ✓ good |
Assuming you have 20,000 DAUs, each regenerating the UI once a week, you would incur a cost of roughly $15 per month. At that scale, the cost is negligible.
The system prompt is 1k tokens (1.02k to be exact).
You may find that the cost is even less, as some providers (OpenAI, Google, Anthropic) will cache the system prompt.
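To estimate your own bill from the table, a small helper like the following (hypothetical, not part of syntux) converts a "generations per $5" figure into a monthly cost:

```typescript
// Hypothetical cost estimator based on the "Generations per $5" column above.
// Real bills depend on output length and prompt caching, so treat the result
// as a rough upper bound.
function monthlyCostUSD(
  dailyActiveUsers: number,
  regenerationsPerUserPerMonth: number,
  generationsPerFiveDollars: number
): number {
  const generations = dailyActiveUsers * regenerationsPerUserPerMonth;
  return (generations / generationsPerFiveDollars) * 5;
}

// 20,000 DAUs regenerating ~4 times a month on GPT 5.2 Mini:
// monthlyCostUSD(20_000, 4, 20_000) → $20 before caching; provider-side
// caching of the system prompt brings the real figure down further.
```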