An open-source architecture for truthful AI.
WikiOracle is a truthful, explainable LLM system designed as a public good — the Wikipedia model applied to artificial intelligence.
The Problem
For-profit corporations are using our data — sourced from billions of people — to train models that are teaching our children. Those models hallucinate. They can’t explain themselves. They are vulnerable to ideological capture and data-driven manipulation, especially under online learning. And the knowledge they encode is locked behind proprietary walls.
Most large AI systems today are built around a single global objective function, centralized data aggregation, hidden alignment rules, and implicit averaging over moral and cultural differences. The result is predictable: minority viewpoints are quietly averaged away, the loudest groups shape the model at scale, a single model becomes an authority node that everyone depends on, and predictive advantage converts into economic or political dominance.
When this happens, wisdom stops being a shared good and becomes a strategic asset.
What Makes WikiOracle Different
Truth as a first-class constraint
WikiOracle does not optimize for fluency and bolt on truthfulness as an afterthought. Truthfulness is the primary design constraint. Every claim traces back to explicit trust entries carrying certainty values in [-1, +1]. Reasoning chains and citations are inspectable. Grounded models are less prone to hallucination and capture, and claims can be contested, improved, or revised openly.
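The trust-entry shape described above can be sketched as a small data structure. This is an illustrative example, not the project's actual schema; the `TrustEntry` name, its fields, and the sample data are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TrustEntry:
    """Hypothetical trust entry: one claim with a certainty in [-1, +1]."""
    claim: str
    certainty: float  # -1 = confident denial, 0 = unknown, +1 = confident assertion
    sources: list = field(default_factory=list)  # citations backing the claim

    def __post_init__(self):
        # Truthfulness as a constraint: certainty outside [-1, +1] is rejected.
        if not -1.0 <= self.certainty <= 1.0:
            raise ValueError("certainty must lie in [-1, +1]")

entry = TrustEntry(
    claim="Water boils at 100 °C at sea level",
    certainty=0.95,
    sources=["https://en.wikipedia.org/wiki/Boiling_point"],
)
```

Because every claim carries its own certainty and source list, a reader can inspect why the system asserts something and contest the entry directly rather than arguing with an opaque model.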
You own your data
WikiOracle is local-first. Your conversation state, your trust entries, and your configuration live on your machine — not on a corporate server accumulating hidden central memory. The remote server is strictly stateless. You can export, merge, and port your sessions freely. Your data is yours.
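The export-and-merge workflow above can be illustrated with a minimal sketch. The functions, the merge rule (keep the entry with the stronger certainty), and the session format are assumptions for illustration, not WikiOracle's actual on-disk format:

```python
import json

def export_session(state: dict) -> str:
    """Serialize local session state (claim -> certainty) for portability."""
    return json.dumps(state, sort_keys=True)

def merge_sessions(a: dict, b: dict) -> dict:
    """Merge two exported sessions; on conflict, keep the entry
    whose certainty is stronger in absolute value."""
    merged = dict(a)
    for claim, cert in b.items():
        if claim not in merged or abs(cert) > abs(merged[claim]):
            merged[claim] = cert
    return merged

desktop = {"sky is blue": 0.9}
laptop = {"sky is blue": 0.7, "grass is green": 0.8}
merged = merge_sessions(desktop, laptop)
# merged: {'sky is blue': 0.9, 'grass is green': 0.8}
```

The point of the sketch is the architecture, not the merge rule: because state lives in a plain serializable structure on your machine, moving or combining sessions never requires a server's cooperation.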
Democratic, not corporate
No single actor — company, state, foundation, or maintainer group — can silently become the epistemic root for everyone else. WikiOracle supports multiple points of view, each with its own trust map and standards of evidence. Where serious disagreement exists among credible sources, the system represents the dispute rather than smoothing it away. Minority viewpoints are preserved, not averaged into oblivion.
A network of trust, not a monolithic oracle
Instead of one model that claims to know everything, WikiOracle builds a network of trust. Authorities are pointers to external knowledge bases whose entries are imported with scaled certainty — we trust what they trust, to a degree. You choose who to trust and how much to trust them. Multiple LLM providers serve as “other minds” whose outputs become evidence, not unquestionable authority. Trust is transitive and attenuated, distributed and structured.
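The "trust what they trust, to a degree" rule above amounts to multiplying an authority's certainty by your trust in that authority, so certainty attenuates along a trust chain. A minimal sketch, with hypothetical names and sample data:

```python
def import_from_authority(trust_in_authority: float,
                          authority_entries: dict) -> dict:
    """Import an authority's entries with attenuated certainty:
    local certainty = trust_in_authority * authority certainty."""
    return {claim: trust_in_authority * cert
            for claim, cert in authority_entries.items()}

# Attenuation is transitive: if you trust an authority at 0.8 and it
# trusts its own source at 0.5, claims arrive here at 0.8 * 0.5 = 0.4.
encyclopedia = {
    "Paris is the capital of France": 1.0,
    "Pluto is a planet": -0.9,  # negative certainty = confident denial
}
local = import_from_authority(0.8, encyclopedia)
# local certainties: 0.8 and -0.72 respectively
```

Multiplicative attenuation means no imported claim can ever be held more strongly than you hold the authority itself, which is what keeps any single source from becoming an epistemic root.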
Resistant to capture
A distributed truth network resists appropriation. Because its knowledge is spread across many trust maps rather than concentrated in one model, the multicultural texture of that knowledge is preserved; monolithic capture would collapse it into a bland consensus. To some degree WikiOracle disrupts business models that depend on information asymmetry, extractive IP capture, and strategic opacity, but that disruption is corrective rather than destructive.