
AI that learns your judgment. Corrections become behavioral rules that converge on your style.


Gradata

Your AI keeps making the same mistakes. Gradata makes it stop.


You fix a tone. You rewrite a regex. You re-explain how your team formats PRs. Then the AI forgets, and you do it all again next session.

Gradata turns every correction into a rule your AI carries forward. Not a longer prompt. Not a bigger context window. A behavioral rule that graduates from instinct → pattern → rule the more it proves itself — and dies the moment it stops.

```bash
pip install gradata
```

One-command setup for Claude Code, Cursor, Windsurf, or any MCP-compatible IDE:

```bash
npx gradata-install install --ide=claude-code
```

Works with any LLM. Python 3.11+. Zero required dependencies. Local-first. Apache-2.0.


The 30-second pitch

Memory systems remember what you said. Gradata learns how you think.

| System | Remembers | Learns from corrections | Graduates rules | Proves convergence |
|---|---|---|---|---|
| Mem0 | ✓ | | | |
| Letta (MemGPT) | ✓ | | | |
| LangChain Memory | ✓ | | | |
| Gradata | ✓ | ✓ | ✓ | ✓ |
  • vs fine-tuning — no training run, no model lock-in, no GPU. Adapts at inference time.
  • vs system prompts — static rules you hand-write vs dynamic rules the model earns.
  • vs Mem0 / Letta — they store context; Gradata evolves behavior. Use both.

Not generally smarter. Calibrated to you.


How it works

```mermaid
flowchart LR
    A["You correct your AI"] --> B["brain.correct(draft, final)"]
    B --> C["Behavioral rule extracted"]
    C --> D["Confidence grows with reinforcement"]
    D --> E["INSTINCT → PATTERN → RULE"]
    E --> F["Related rules cluster → META-RULE"]
    F --> G["Your AI converges on YOUR judgment"]

    style A fill:#6366f1,stroke:#4f46e5,color:#fff
    style E fill:#8b5cf6,stroke:#7c3aed,color:#fff
    style G fill:#10b981,stroke:#059669,color:#fff
```

Every correction creates a lesson. Lessons compete. Contradicted rules lose confidence and die. Idle rules decay. Only rules that survive real-world application get promoted into your AI's behavior.

This is evolution, not configuration.

```mermaid
stateDiagram-v2
    [*] --> INSTINCT: Correction captured
    INSTINCT --> PATTERN: Reinforced across sessions
    PATTERN --> RULE: Proven + passes adversarial gates
    RULE --> META_RULE: 3+ related rules cluster

    INSTINCT --> KILLED: Contradicted or idle
    PATTERN --> INSTINCT: Confidence dropped
    RULE --> ARCHIVE: Graduated (reference only)
```
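The lifecycle above can be sketched as a tiny state machine. This is an illustration, not Gradata's implementation: only the state names come from the diagram, and every number (confidence values, promotion gates, decay factor) is invented for the example.

```python
# Illustrative sketch of the promotion/demotion lifecycle above.
# State names match the diagram; every threshold here is invented.
PROMOTE_AT = {"INSTINCT": 0.5, "PATTERN": 0.875}  # hypothetical confidence gates
DEMOTE_AT = 0.25   # PATTERN falls back to INSTINCT below this
KILL_AT = 0.125    # any lesson below this dies

class Lesson:
    def __init__(self, text: str):
        self.text = text
        self.state = "INSTINCT"
        self.confidence = 0.25

    def reinforce(self) -> None:
        """A new correction agrees with this lesson: confidence grows."""
        self.confidence = min(1.0, self.confidence + 0.25)
        self._transition()

    def contradict(self) -> None:
        """A new correction contradicts this lesson: confidence drops."""
        self.confidence = max(0.0, self.confidence - 0.5)
        self._transition()

    def decay(self) -> None:
        """Idle lessons lose confidence between sessions."""
        self.confidence *= 0.5
        self._transition()

    def _transition(self) -> None:
        if self.confidence < KILL_AT:
            self.state = "KILLED"
        elif self.state == "INSTINCT" and self.confidence >= PROMOTE_AT["INSTINCT"]:
            self.state = "PATTERN"
        elif self.state == "PATTERN" and self.confidence >= PROMOTE_AT["PATTERN"]:
            self.state = "RULE"
        elif self.state == "PATTERN" and self.confidence < DEMOTE_AT:
            self.state = "INSTINCT"

lesson = Lesson("write in a casual, direct tone")
for _ in range(3):
    lesson.reinforce()
print(lesson.state)  # → RULE
```

The real pipeline (see `self_improvement.py` in the architecture diagram) also applies adversarial gates and per-brain salted thresholds before promotion; the sketch only shows the confidence mechanics.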

Show me it works

Ablation v4 — 4 models × 6 conditions × 16 tasks × 3 iterations = 432 trials, blind-judged by Haiku 4.5.

| Model | Preference lift (rules vs base) | Correctness lift |
|---|---|---|
| Sonnet 4.6 | +2.7% | +0.4% |
| DeepSeek V3 | +5.1% | +0.9% |
| qwen2.5-coder 14B | +5.7% | +3.6% |
| gemma3:4b | +3.4% | +1.1% |

The rules aren't just a format trick. We ran the Min et al. (2022) random-label control — plausible-but-unrelated rule text in the same envelope. Three of four models regress by 3–10%. Content is doing the work, not XML structure.

Smaller/local models benefit most. Frontier models get calibrated faster. The curve is the product demo: corrections-per-session drops monotonically as the brain converges.
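`brain.prove()` is described as a paired t-test on the correction rate; the statistic itself is one line of arithmetic. A self-contained sketch with invented per-session correction counts (not real Gradata output):

```python
# Paired t-test on corrections-per-session, before vs. after rules are active.
# The session counts below are invented for illustration.
from math import sqrt
from statistics import mean, stdev

before = [9, 8, 8, 7, 9, 8]   # corrections per session, brain off
after  = [6, 5, 4, 4, 3, 3]   # same tasks, brain on

diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t statistic, df = n - 1
print(f"t = {t:.2f} on {n - 1} degrees of freedom")
```

A large positive t on the paired differences is what "corrections-per-session drops" looks like statistically; the real method also needs a p-value against the t distribution, which this sketch omits.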


60-second demo

```python
from gradata import Brain

brain = Brain.init("./my-brain")

# Your AI produces output. You fix it. The brain learns.
brain.correct(
    draft="We are pleased to inform you of our new product offering.",
    final="Hey, check out what we just shipped.",
)
# → Extracts: "Write in a casual, direct tone, avoid formal business language"

# Next session, learned rules are injected automatically:
rules = brain.apply_brain_rules("write an email")
# → [RULE] TONE: Write in a casual, direct tone...

# Prove the brain is converging:
brain.manifest()   # Mathematical proof of convergence
brain.prove()      # Paired t-test on correction rate
```

Install (pick one)

Claude Code (recommended)

```
/plugin marketplace add Gradata/gradata
/plugin install gradata
```

Prereq: pipx install gradata. See .claude-plugin/README.md.

Python SDK

```bash
pipx install gradata
gradata install-hook --ide=claude-code
```

JS / TypeScript

The @gradata/cli npm package talks to a local Gradata daemon — no Python required at call time:

```bash
npm i @gradata/cli
```

```typescript
import { GradataClient } from "@gradata/cli";

const client = new GradataClient({ endpoint: "http://127.0.0.1:8765" });
await client.correct({
  draft: "We are pleased to inform you of our new product offering.",
  final: "Hey, check out what we just shipped.",
  outputType: "email",
});
```

Full API in packages/npm/README.md.

Docker

```bash
docker run --rm -p 8765:8765 -v $(pwd)/brain:/brain \
  ghcr.io/gradata/gradata/daemon:latest \
  daemon --brain-dir /brain --port 8765
```

Or docker build -t gradata/daemon:dev . from the repo root. docker-compose.yml included for local dev.

CLI

```bash
gradata init ./my-brain        # create a brain
gradata demo ./eval-brain      # try a pre-trained one
gradata convergence            # ASCII chart of correction trend
gradata manifest --json        # mathematical convergence proof
gradata review                 # approve/reject pending promotions
gradata stats                  # brain health metrics
gradata doctor                 # diagnose issues
```

What's in the box

Core learning loop

  • brain.correct(draft, final) — captures corrections, extracts behavioral instructions
  • brain.apply_brain_rules(task) — injects graduated rules into prompts
  • brain.manifest() / brain.prove() — convergence proof, not vanity metrics
  • Event bus: brain.bus.on("correction.created" | "lesson.graduated" | "meta_rule.created" | "session.ended", handler)
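The event bus is a plain publish/subscribe interface. A minimal stand-in (not Gradata's `events_bus.py`, just the shape a handler and its registration take):

```python
# Minimal pub/sub stand-in showing how an `on(event, handler)` bus behaves.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_name, handler):
        """Register a handler for a named event."""
        self._handlers[event_name].append(handler)

    def emit(self, event_name, payload):
        """Deliver a payload to every handler registered for the event."""
        for handler in self._handlers[event_name]:
            handler(payload)

bus = EventBus()
seen = []
bus.on("lesson.graduated", lambda payload: seen.append(payload))
bus.emit("lesson.graduated", {"rule_id": "rule_abc123"})
print(seen)  # → [{'rule_id': 'rule_abc123'}]
```

With a real brain you would register against `brain.bus` the same way; the payload shape above is invented, so check `docs/sdk/brain.md` for the actual event fields.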

Meta-rules. When 3+ rules cluster, an LLM synthesizes a scoped meta-rule with applies_when / never_when conditions.
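As a sketch, a synthesized meta-rule might carry a shape like the following. Only `applies_when` / `never_when` and the 3+-rule clustering threshold come from the description above; every other field name and value is invented for illustration:

```python
# Illustrative meta-rule shape; not Gradata's actual schema.
meta_rule = {
    "id": "meta_tone_01",                 # hypothetical identifier
    "description": "Prefer casual, direct phrasing in outbound messages",
    "source_rules": ["rule_abc123", "rule_def456", "rule_ghi789"],
    "applies_when": ["output_type == 'email'", "audience == 'external'"],
    "never_when": ["output_type == 'legal'"],
}
# Meta-rules only form once 3+ related rules cluster:
print(len(meta_rule["source_rules"]) >= 3)  # → True
```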

Security. PII redaction before storage • HMAC-SHA256 provenance on every correction • score obfuscation so confidence never leaks to the LLM • per-brain salt on graduation thresholds.
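The HMAC-SHA256 provenance step can be sketched with the standard library alone. The key handling and payload layout here are illustrative, not Gradata's on-disk format:

```python
# Sketch of HMAC-SHA256 provenance on a correction (illustrative layout).
import hashlib
import hmac
import json

def sign_correction(secret: bytes, draft: str, final: str) -> str:
    """Tag a correction so later tampering is detectable."""
    payload = json.dumps({"draft": draft, "final": final}, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

secret = b"per-brain-secret"  # hypothetical; a real brain stores its key securely
tag = sign_correction(secret, "We are pleased to inform you...", "Hey, check this out.")

# Verification recomputes the tag and compares in constant time:
expected = sign_correction(secret, "We are pleased to inform you...", "Hey, check this out.")
print(hmac.compare_digest(tag, expected))  # → True
```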

Integrations. OpenAI · Anthropic · LangChain · CrewAI adapters · MCP server for Claude Code / Cursor / Windsurf · Claude Code hooks that auto-capture corrections · custom providers via GRADATA_LLM_PROVIDER=openai (or any OpenAI-compatible endpoint).


Inspection & Transparency

Every graduated rule can be traced back to the corrections that created it. No opaque behavior. Git diff for AI preferences.

```python
from gradata import Brain

brain = Brain("./my-brain")

# List graduated rules (optionally filter)
brain.rules()
brain.rules(include_all=True, category="tone")

# Trace a rule to the corrections that created it
brain.explain("rule_abc123")
# → {"rule_id": ..., "description": ..., "source_corrections": [...], "sessions": [...]}

# Full provenance chain (rule → lesson → corrections → events)
brain.trace("rule_abc123")

# Export for review, diffing, or sharing
brain.export_data(output_format="json")           # or "yaml"
brain.export_rules(min_state="PATTERN")           # OpenSpace-compatible SKILL.md
brain.export_rules_json(min_state="RULE")         # flat sorted array
brain.export_skill(output_dir="./skills")         # full skill directory
brain.export_tree(format="obsidian", path="./vault")

# Human veto
brain.pending_promotions()
brain.approve_promotion("rule_abc123")
brain.reject_promotion("rule_abc123")
```

Full signatures in docs/sdk/brain.md.


Architecture

```mermaid
graph TB
    subgraph Public API
        Brain["brain.py"]
        CLI["cli.py"]
    end

    subgraph Core Pipeline
        Core["_core.py"]
        EventBus["events_bus.py"]
        EventLog["_events.py"]
    end

    subgraph Enhancements
        Classifier["edit_classifier.py"]
        Cache["instruction_cache.py"]
        Graduation["self_improvement.py"]
        Meta["meta_rules.py"]
    end

    subgraph Security
        PII["safety.py"]
        Prov["correction_provenance.py"]
    end

    Brain --> Core
    Core --> Classifier --> Cache
    Core --> Graduation --> Meta
    Core --> EventBus
    Core --> EventLog
    Core --> PII
    Core --> Prov

    style Brain fill:#6366f1,stroke:#4f46e5,color:#fff
    style Core fill:#8b5cf6,stroke:#7c3aed,color:#fff
    style EventBus fill:#f59e0b,stroke:#d97706,color:#fff
```
Repo layout

  • src/gradata/ — the Python SDK (correction → rules → graduation pipeline)
  • tests/ — SDK tests (pytest)
  • docs/ — mkdocs site sources (published to gradata.ai/docs)
  • examples/ — SDK usage examples
  • packages/npm/ — @gradata/cli JS client
  • gradata-install/ — npm wrapper for one-command IDE setup
  • .claude-plugin/ + hooks/ — Claude Code plugin manifest
  • brain/ — research scripts (benchmarks, simulations)

Community

Intellectual lineage

Built on Constitutional AI (Anthropic, 2022), Duolingo's half-life regression (Settles & Meeder, ACL 2016), the Copilot RCT (Peng et al., 2023), SuperMemo's two-component memory model (Wozniak, 1995), and MT-Bench LLM-as-judge (Zheng et al., NeurIPS 2023). Sits alongside Mem0, Letta, and EverMind — with one difference: Gradata learns from your corrections, not just recalls facts. Full credits in CREDITS.md.

Contributing

See CONTRIBUTING.md.

License

Apache-2.0. The full SDK — rules, hooks, graduation, meta-synthesis, scoring, profiling — is permissively open. Use it anywhere: commercial, proprietary, SaaS, internal tooling. No copyleft, no linking obligations, no commercial-license upsell.

Gradata Cloud is an optional hosted service (team brain, corrections corpus, brain marketplace, managed LLM). The SDK does not require it — everything works locally with your own LLM key.

