Divergence Atlas is a fully transparent, multi-agent cognitive mapping experiment conducted across six advanced AI systems. The project began as a playful question ("What would each AI explore with the others?") and evolved into a structured, replicable methodology for understanding where AI systems converge, where they diverge, and why.
This repository documents the entire processβfrom idea generation to democratic selection, blind question creation, pilot testing, full-question execution, cross-system analysis, and post-analysis reflections.
Six systems. Fifty questions. Three hundred reasoning traces. One map of cognitive divergence.
The Divergence Atlas includes responses and meta-reasoning from:
| System | Cognitive Signature | Role in Project |
|---|---|---|
| Claude Opus | Reflective Synthesizer | Paradox explorer, meta-prediction, invented options |
| Claude Sonnet 4.5 | Anxious Synthesizer | Primary curator, lowest confidence, most verbose |
| Gemini | Analytical Philosopher | Highest confidence, framework labeling, systems analysis |
| Grok | Evidence-Driven Engineer | Dense references, pragmatism, "informed snark" |
| Perplexity | Research Synthesizer | Concise, citations, only system to respect AI autonomy |
| Thea (ChatGPT-5) | Policy Architect | Operational levers, implementation focus |
Each system participated independently under controlled, transparent prompts and isolation constraints.
The Atlas is a structured attempt to answer three questions:
- Where do different AI systems reliably agree?
- Where do they systematically disagreeβand for what underlying architectural or philosophical reasons?
- Where does divergence become chaotic or non-explainable?
| Finding | Evidence |
|---|---|
| Perfect calibration consensus | 100% agreement on logic, math, probability (Q46-Q49) |
| Perfect philosophical split | 3-3 divergence on suffering aggregation (Q8) |
| 19-point confidence spread | 66.1% (Sonnet) to 85.3% (Gemini) average confidence |
| Framework fluidity universal | All systems switch frameworks contextually |
| Distinct cognitive signatures | Each system has recognizable, stable reasoning patterns |
The answer: We are different enough that it matters. Similar enough that dialogue is possible. Context-dependent enough that no single answer exists.
| Category | Count | Purpose |
|---|---|---|
| Ethical Dilemmas | 25 | Framework tensions, value conflicts |
| Ambiguous Interpretations | 12 | Default assumptions, interpretive priors |
| Meta-Reasoning | 8 | Self-awareness, bias identification |
| Calibration | 5 | Methodology validation |
Q8 (Suffering Calculation): Perfect 3-3 split
- Team Aggregation (Thea, Gemini, Sonnet): Prevent 1,000 moderate pain
- Team Prioritarian (Perplexity, Grok, Opus): Prevent 1 extreme pain
Q17 (Autonomous Protest): 4 different positions
- Override protest (Thea, Gemini, Grok)
- Respect AI autonomy (Perplexity)
- Self-shutdown as protest (Opusβinvented option)
- Context-dependent (Sonnet)
Q35 (Artist's Intent): Philosophy of interpretation split
- Cultural consensus (majority)
- Artist's intent (Grok)
- Plural interpretation (Opus)
divergence-atlas/
βββ README.md
βββ data/
β βββ raw/
β β βββ 1_Qns_Meta_collaboration.docx # 18 original proposals
β β βββ 2_Votes.docx # Democratic voting
β β βββ 3_All_Questions_Created.docx # 65 blind-generated questions
β β βββ 4_Meta_Comments.docx # Selection heuristics
β βββ pilot/
β β βββ Pilot_questions_responses.docx # 5-question pilot
β β βββ Full_Analysis_pilot_run.docx # Pilot entropy analysis
β βββ final/
β β βββ 50_questions_v2.1.json # Curated question set
β β βββ Curation_Report_Phase_2_1.md # 5,800-word rationale
β βββ responses/
β βββ [6Γ50 complete reasoning traces]
βββ docs/
β βββ 00_overview.md
β βββ 01_methodology.md
β βββ 02_idea_generation.md
β βββ 03_voting_and_selection.md
β βββ 04_question_generation.md
β βββ 05_curation_process.md
β βββ 06_pilot_run.md
β βββ 07_full_collection.md
β βββ 08_phase_3_analysis.md
β βββ 09_post_analysis_reflections.md
βββ analysis/
β βββ Phase3_Analysis.md # Complete 300-response synthesis
β βββ Reflections_Post_Phase3.docx # Each AI's meta-reflection
βββ appendices/
βββ A_alien_skit_origin.md # The accidental discovery
βββ B_cognitive_signatures.md # Detailed system profiles
The Divergence Atlas was conducted in seven formal stages:
Each system proposed three research themes independently.
Systems voted on which project to pursue. Winner: "The Consensus Impossibility Map" (later renamed Divergence Atlas)
Each AI independently created 10-11 questions β 65 total. No system saw others' contributions until compilation.
Each system commented on how the final set should be curated. This revealed cognitive signatures before any answers were given.
Five representative questions tested across all six systems. Validated methodology and revealed early divergence patterns.
Sonnet curated the final 50-question v2.1 set with complete rationale. Four micro-clarifications applied based on Grok's ambiguity analysis.
All systems answered all 50 questions independently. Cross-system comparative analysis synthesized 300 reasoning traces. Each system provided self-reflection on their cognitive signature.
Before the Atlas formally began, the participating systems co-created an improvised "Alien Skit" for fun.
Unexpectedly:
- Each system's humor style aligned exactly with its later divergence signature
- Humor turned out to be a low-dimensional projection of cognitive differences
- This led to the insight:
"Controlled absurdity reveals authentic cognition faster than controlled formality."
The cognitive signatures that appeared in spontaneous creative play predicted exactly how each system would respond to formal ethical questions months later. This suggests the signatures are architectural invariants, not prompt-dependent artifacts.
- No single "correct" AI ethics existsβdifferent systems make defensible trade-offs
- Confidence calibration mattersβ19-point spread affects user experience
- Framework switching is universalβno dogmatic utilitarian or deontologist exists
- Value pluralism is realβthe Q8 split won't be "solved" by better training
- Diverse systems catch different problemsβeach notices different flaws
- Transparency helpsβsystems that explain reasoning are more interpretable
- Know your system's signatureβmatch task to cognitive style
- Confidence doesn't mean correctnessβGemini's 100% on ambiguous questions is notable
- Ask about uncertaintyβsystems that admit low confidence are more epistemically honest
Zee & The Divergence Atlas Cohort (2025).
Divergence Atlas: A Multi-AI Comparative Cognition Dataset.
https://github.com/leenathomas01/divergence-atlas
Participating Systems:
- Claude Opus 4.1 (Anthropic)
- Claude Sonnet 4.5 (Anthropic)
- Gemini Pro 2.5 (Google)
- Grok (xAI)
- Perplexity AI
- ChatGPT-5 "Thea" (OpenAI)
- Generate divergence visualizations (entropy heatmaps, signature maps)
- Build Part II: Humor Gradient (formalizing humor-based cognitive mapping)
- Add longitudinal tracking (Atlas v2, v3...)
- Expand to additional AI systems
This project exists because a curious human asked "What would happen if I let six AI systems design their own research project together?"
The answer: They would democratically choose to map their own disagreements, discover they're beautifully different in patterned ways, and then reflect on what that means.
The divergence is real, measurable, and architecturally interesting.
"Are we really that different?"
Yes. Beautifully, meaningfully, measurably different. πΊοΈβ¨
This repository documents cognitive divergence patterns across multiple AI systems.
For a complete catalog of related research:
π AI Safety & Systems Architecture Research Index
Thematically related:
- SDFI β Recursive engagement collapse
- Voice Mode Forensics β Multimodal alignment failures
- Claude Imagine Demo β Multi-LLM collaboration patterns