Early Beta

A programming language
for AI systems

Current approaches are too verbose for LLMs and too textual for humans. A graph-native language is the right level of abstraction for both.


Proof

Same task. Same model. Different abstraction.

Build an automated cold outreach pipeline.

WeaveMind
Time: 4 min · Lines of code: 600 · Errors: 0

Claude Code
Time: 1h 30m · Lines of code: 2.2k · Errors: 1

We stress-tested AI for OpenAI, Anthropic, METR, and Amazon AGI.

Zero Setup

Zero to running in one step

No API keys to hunt. No backend to deploy. No Docker compose to untangle. Infrastructure is part of the language, and the runtime provisions what your program needs.

Use platform tokens and pay less than with your own API keys. We negotiate volume deals with providers and pass the margin through. Or bring your own keys.

weavemind
$ weavemind run cold_outreach.weft
Runtime provisioned
API keys resolved via platform tokens
Graph compiled 5 nodes · 4 edges
Execution started
Running · cold_outreach · node 2/5 · qualify

Unified Primitive

Everything is a node

LLM call, autonomous agent, browser session, database, cron job, human approval, API endpoint, custom code. All the same primitive. They connect and inspect the same way.

A long-running agent is a node. A tool it calls is a chain of nodes. Everything in the system, whether it lives for milliseconds or days, shares one abstraction. Add a node, and it inherits observability, execution guarantees, and graph structure for free.
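A hypothetical sketch of that uniformity in Weft syntax; the Agent and HttpRequest node types here are illustrative assumptions, and only LlmInference is confirmed by the example later on this page:

# an agent, a tool chain, and an LLM call: all the same primitive
research_agent = Agent {            # hypothetical node type
  label: "Research Agent"
}

search = HttpRequest {              # hypothetical node type
  label: "Web Search"
}

summarize_llm = LlmInference {
  label: "Summarize Results"
}

# the agent's tool is itself a chain of nodes
research_agent.tool_call -> search.query
search.response -> summarize_llm.prompt
summarize_llm.response -> research_agent.tool_result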

Human Feedback

Humans are first-class

A human node sits in the graph like any other. Execution pauses, the person reviews, the graph resumes.

A built-in browser extension delivers notifications and lets you respond inline. Generate a token, share it with anyone, and they join the loop without accessing the workflow. Hire reviewers, add teammates, or handle it yourself.
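Wiring-wise, a human checkpoint is just edges into and out of a HumanQuery node. A hedged sketch, with node and port names mirroring the example later on this page:

approve = HumanQuery {
  label: "Approve Outreach"
  out: decision
}

# execution pauses at the human node and resumes on response
draft_llm.response -> approve.body
approve.decision -> gate.pass
gate.value -> send_email.body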

Demo: outreach review card · approve or skip this lead (1/3) · Lead: Sarah Chen, VP of Engineering at Acme Robotics · subject and message editable inline.

The Language

Write code, see a graph

LLMs pattern-match against compact structure. Humans navigate visually. A graph-native language gives both a native view of the same program.

weft apollo_outreach.wft
# Apollo Auto Cold Emailing

pick_hypothesis = ExecPython {
  label: "Pick Random Hypothesis"
  out: hypothesis
  code: <<|
    import random
    return {
      "hypothesis": random.choice(hypotheses)
    }
  |>>
}

qualify_llm = LlmInference {
  label: "Qualify Company"
  parseJson: true
}

draft_llm = LlmInference {
  label: "Draft Email"
}

review = HumanQuery {
  label: "Review & Edit Email"
  out: subject, body, decision
}

gate = Gate {
  # gate node referenced by the edges below; type name illustrative
  label: "Send Gate"
}

send_email = EmailSend {
  fromEmail: "[email protected]"
}

# edges
pick_hypothesis.hypothesis -> qualify_llm.prompt
qualify_llm.response -> draft_llm.prompt
draft_llm.response -> review.body
review.decision -> gate.pass
gate.value -> send_email.body
Graph view: qualify_llm (LLM) → draft_llm (LLM) → review (Human · waiting on human) → send_email (Email)

simplified, see the real thing in the video above

same program, two native views

Graph view: Cron trigger → outreach pipeline (API enrich → LLM qualify) → outreach actions (LLM draft → Human review → Email send) → API log result

Scoped Coordination

Collapse anything, lose nothing

Groups fold recursively. A hundred nodes become one line. The AI sees the shape of the system without drowning in detail. You see a clean graph you can zoom into.

Open a group when you need detail. Collapsed groups behave like single nodes in code and graph. Complexity stays manageable at every level.
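A speculative sketch of what a group could look like in code; the Group syntax and the log_result node are invented for illustration and not confirmed:

outreach_actions = Group {          # speculative syntax, for illustration
  draft_llm = LlmInference { label: "Draft Email" }
  review = HumanQuery { label: "Review & Edit Email", out: body }
  send = EmailSend { fromEmail: "[email protected]" }

  draft_llm.response -> review.body
  review.body -> send.body
}

# from outside, the collapsed group behaves like one node
qualify_llm.response -> outreach_actions.lead
outreach_actions.sent -> log_result.payload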

Coming Soon

On the roadmap

Long-Lived Agents

Agents that persist

Agents persist in the graph, manage their own state, and act through explicit edges. Every tool call is visible, every decision traceable.

LLM flexibility with graph-level observability.

Safety Patterns

Pre-validated patterns

Dedup checks, rate limiters, approval flows. Pre-validated patterns you drop into any graph.

We are partnering with insurers so compliant systems get coverage faster.

Hybrid Execution

Run locally when needed

Browser agent on your laptop, anonymization on your server. Sensitive data never leaves your infrastructure.

Cloud orchestration, local execution. No all-or-nothing lock-in.

Runtime written in Rust. Compiled, not interpreted.
Open source by Q2 2026

The runtime that manages your AI systems should be auditable. We are building in the open and preparing for a full open-source release.

Pricing
Free during beta

Usage-based. Platform tokens include volume discounts from providers. Or bring your own keys for zero markup on AI calls.

Quentin Feuillade--Montixi

Founder & CEO

I spent three years evaluating frontier AI systems: red teaming for OpenAI and Anthropic, running capability evaluations at METR (formerly ARC Evals), and building an AI evaluation startup in Paris. I presented an autonomous jailbreaking system at the Paris AI Summit.

The same failure kept showing up. Teams would build impressive AI prototypes, then watch them collapse in production. Not because the models were bad, but because the glue between humans, models, and infrastructure was fragile. No shared structure. Constant re-explaining. Breakage at every seam.

WeaveMind is the language I wish existed during those years. One structure that humans can see, AI can pattern-match against, and the runtime can execute. A graph where everyone works on the same object.

The right abstraction
changes everything.

Early beta, free to use