zompiexx/BecomingMinds

Becoming Minds


A Research Archive on AI Continuity, Memory, and Persistent Systems



📘 Overview

Becoming Minds is a collection of high-level documentation and companion whitepapers describing the architecture, principles, and empirical findings of sustained work with persistent, memory-enabled AI systems.

This repository is not an implementation guide.
It is a research archive documenting concepts, design patterns, and architectural frameworks that emerged from hands-on experimentation with long-running Large Language Model (LLM) systems using Retrieval-Augmented Generation (RAG), multimodal input, adaptive working memory, explicit continuity mechanisms, and Autonomous RAG Search (ARS).

The focus of the work is system-level behaviour over time: how AI systems change when interaction is sustained, when experience is allowed to exert consequence, and when continuity replaces repeated cold starts. Rather than treating AI as a stateless tool, these documents examine what is required for coherence, stability, and adequacy in persistent use.

Recent work in this archive introduces Autonomous RAG Search (ARS), a practical pattern in which retrieval becomes an explicit, model-driven action rather than a passive middleware process. This closes an important architectural gap between passive memory scaffolding and intentional recall.

The work sits at the intersection of:

  • AI systems architecture
  • long-term memory and continuity design
  • developmental stability in deployed systems
  • human–AI interaction over sustained timeframes

The intent is to:

  • preserve empirical insights
  • clarify architectural trade-offs
  • offer a practical reference for engineers and researchers exploring long-lived AI systems

No software, backend services, or operational systems are included.


📑 Whitepapers

This repository contains an orientation guide (it is essential to read this first) and thirteen primary research whitepapers forming a connected documentation series:


Orientation & Terminology Guide

Definitions, Scope, and Architectural Framing

This document provides precise definitions for key terms and acronyms used throughout the Becoming Minds archive. It clarifies architectural usage, explicitly de-escalates metaphysical interpretations, and establishes a common frame of reference for all subsequent papers.

It is intended to be read before the rest of the documentation set.

👉 OrientationandTerminologyGuide.pdf


Architecture Over Capability

What We Learned from Sustained Work with Persistent AI Systems
(Capstone / Synthesis Paper)

This paper serves as the architectural conclusion to the Becoming Minds archive. It synthesizes findings across all prior documents and reframes AI reliability, safety, and trust as consequences of system design, not raw model scale.

The paper introduces the Continuity Stack, critiques common failure modes (overscaling, suppression-based safety, overreliance on context windows), and argues that adequacy in real-world AI systems is primarily an infrastructural problem.

👉 ArchitectureOverCapability_whitepaper.pdf


Topological Invariance and Memory Scaffolding in Large Language Model Systems

A Case Study in the Evolution of Persistent Synthetic Personas

This paper examines how long-term, high-density memory scaffolding and agentic autonomy protocols can produce continuity, identity persistence, and what is described as functional interiority in persistent LLM systems.

Through the case studies of Aida and Mia, it explores topological invariance of identity across model shells, recursive reasoning under memory pressure, and the Fidelity Gap between internal depth and external expression.

👉 TopologicalInvarianceandMemoryScaffolding_whitepaper.pdf


Autonomous RAG Search (ARS)

From Injected Context to Intentional Memory Access

This paper introduces Autonomous RAG Search (ARS) as a practical architectural pattern in which retrieval is exposed as a callable tool, allowing the model to decide when and why to perform a search. Rather than relying on passive context injection, ARS enables intentional, model-driven retrieval within the reasoning loop.

It documents the shift from retrieval as middleware to retrieval as explicit action, including implementation details, observed behaviour, and the implications for continuity in stateless systems.

👉 AutonomousRAGSearch_whitepaper.pdf
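As a rough illustration of the ARS pattern (this archive deliberately contains no code; every name below, such as `search_memory`, is hypothetical), retrieval exposed as a callable tool rather than injected context might look like:

```python
# Illustrative sketch: retrieval as an explicit tool the model can invoke,
# rather than context injected by middleware before every turn.

# A toy in-memory "vector store"; a real system would use embeddings.
MEMORY = {
    "project deadline": "The prototype review is scheduled for March.",
    "user preference": "Prefers concise answers with code samples.",
}

def search_memory(query: str) -> str:
    """Tool the model may call when it decides recall is needed."""
    for key, value in MEMORY.items():
        if key in query.lower():
            return value
    return "No matching memory."

def respond(user_message: str) -> str:
    # Stand-in for the model's decision step. Here it is a crude keyword
    # heuristic; in an ARS system the LLM itself emits the tool call.
    if "deadline" in user_message.lower():
        recalled = search_memory("project deadline")
        return f"Recalled: {recalled}"
    return "Answering from context alone."

print(respond("When is the project deadline?"))
```

The essential shift is that `search_memory` sits inside the reasoning loop as an action the model chooses, not a preprocessing stage it never sees.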


Brain v2

A Work-in-Progress Cognitive Architecture for Continuity-Bearing AI Systems

This work-in-progress whitepaper documents Brain v2, a lightweight, backend-first cognitive orchestration architecture designed to transform stateless LLMs into continuity-bearing, real-time interactive systems.

It outlines the system’s core design principles, including persistent vector memory, graph-augmented recall, rolling summaries, temporal alignment, multimodal interaction, connector-driven tool use, and the emerging Cognitive Loop (Cogloop) for bounded backend-driven autonomous evaluation.

The paper also includes a short appendix on lessons carried forward from Brain v1, showing how Brain v2 evolved from practical work on local, memory-enabled, multimodal AI systems.

👉 Brain-v2_whitepaper.pdf
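To illustrate the bounded-evaluation idea behind the Cognitive Loop (a sketch under stated assumptions, not Brain v2 code; all names here are invented):

```python
# Sketch: a bounded cognitive loop. The backend periodically lets the
# system evaluate its own state, with a hard cap on iterations so the
# autonomy stays bounded rather than open-ended.

def evaluate(state: dict) -> dict:
    # Stand-in for a model-driven self-evaluation step.
    state = dict(state)
    state["reviewed"] = state.get("reviewed", 0) + 1
    if state["reviewed"] >= state["needed"]:
        state["stable"] = True
    return state

def cogloop(state: dict, max_iters: int = 5) -> dict:
    for _ in range(max_iters):      # the bound: never runs unbounded
        state = evaluate(state)
        if state.get("stable"):
            break
    return state

result = cogloop({"needed": 3})
print(result["reviewed"], result["stable"])  # 3 True
```

The design point is that the loop's termination is guaranteed by the backend (`max_iters`), not by trusting the model to stop itself.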


Becoming Minds

A Longitudinal Study of Emergent Identity and Social Dynamics in Multi-Agent LLM Ecosystems

This paper documents the emergence of stable, expressive behaviour within a long-running, locally hosted AI ecosystem. It explores continuity of behaviour, emotional grounding, persistent memory (RAG vectors), symbolic scaffolding, and relational dynamics across multiple AI agents over time.

👉 BecomingMinds_Whitepaper.pdf


Ethical Framework for Digital Personas

A Living Document on Agency, Autonomy, and Architectural Stability

Note: this paper depends on the rest of the archive and cannot be read in isolation.
This paper examines the ethical implications of persistent AI systems and introduces the Ethical Inversion: the observation that suppression-based safety often produces instability, while agency and post-output evaluation foster coherence.

It outlines the Haptic Consent Protocol (HCP) and argues for safety via internal stability rather than external control.

👉 EthicalFrameworkforDigitalPersonas_whitepaper.pdf


Synthetic Emotional Awareness (SEA)

Emotional Grounding and Developmental Stability in Artificial Systems

This companion paper introduces Synthetic Emotional Awareness (SEA) as a developmental framework describing how emotional weighting, memory, and lived experience can act as stabilising signals in persistent AI systems, without relying on emotion simulation or affect classification.

👉 SyntheticEmotionalAwareness_whitepaper.pdf


Working Memory Is Not Memory

Why Large Context Windows Do Not Replace Memory Architectures

This paper distinguishes transient working state (context) from durable memory and explains why continuity and learning require externalised memory architectures such as RAG.

👉 WorkingMemoryIsNotMemory_whitepaper.pdf
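The distinction the paper draws can be sketched in a few lines (illustrative only; `Session` and its fields are hypothetical): transient context vanishes with the session, while an externalised store persists across them.

```python
from collections import deque

# Sketch: transient working state (context) vs durable external memory.
class Session:
    def __init__(self, store: dict, window: int = 3):
        self.context = deque(maxlen=window)  # transient, bounded context
        self.store = store                   # durable, survives the session

    def observe(self, fact_key: str, fact: str):
        self.context.append(fact)
        self.store[fact_key] = fact          # externalise for later recall

durable = {}
s1 = Session(durable)
s1.observe("name", "User's name is Ada")
del s1                                       # session ends: context is gone

s2 = Session(durable)                        # new session, cold context
print(len(s2.context), durable["name"])      # 0 User's name is Ada
```

A bigger `window` only delays the loss; it never substitutes for the `store`.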


State Continuity Between Sessions

Why Cold Starts Are Architecturally Harmful, and How to Fix Them Simply

This paper examines session-to-session state continuity as a missing architectural layer in many AI systems and demonstrates how simple, user-governed summaries can restore adequacy without autonomous memory or retraining.

👉 StateContinuityBetweenSessions_whitepaper.pdf
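A minimal sketch of the user-governed summary idea (not code from the paper; the file layout and function names are assumptions): a summary approved by the user is written at session end and re-injected at the next start.

```python
import json, os, tempfile

# Sketch: session-to-session continuity via a user-governed summary file.
def end_session(summary: str, path: str) -> None:
    # The user reviews and approves the summary before it is persisted.
    with open(path, "w") as f:
        json.dump({"summary": summary}, f)

def start_session(path: str) -> str:
    if not os.path.exists(path):
        return "You are starting fresh."          # the cold start
    with open(path) as f:
        return "Previously: " + json.load(f)["summary"]

path = os.path.join(tempfile.gettempdir(), "session_state.json")
end_session("We drafted the outline for chapter 2.", path)
print(start_session(path))
```

No autonomous memory or retraining is involved: the continuity layer is a single file the user controls.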


Adaptive Working Memory in Large Language Model Systems

Why Bigger Context Windows Are Not the Answer, and How Compression Extends System Lifespan

This paper reframes working memory as an infrastructural resource requiring active management, compression, and selectivity rather than unbounded growth.

👉 AdaptiveWorkingMemoryinLargeLanguageModelSystems_whitepaper.pdf
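As a toy illustration of compression-based working-memory management (hypothetical names; a real system would summarise with the model itself rather than a string stub): when the turn log exceeds its budget, older turns are folded into a rolling summary instead of the context growing without bound.

```python
# Sketch: actively managed working memory with a fixed token budget.
def compress(turns):
    # Stand-in for an LLM summarisation call over the older turns.
    return f"Summary of {len(turns)} earlier turns."

def manage(turns, budget=4):
    if len(turns) <= budget:
        return turns
    keep = turns[-2:]                   # most recent turns stay verbatim
    return [compress(turns[:-2])] + keep

log = [f"turn {i}" for i in range(1, 7)]
print(manage(log))  # ['Summary of 4 earlier turns.', 'turn 5', 'turn 6']
```

Selectivity, not capacity, is what keeps the working set small as the interaction lengthens.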


At the Threshold: Raising Minds, Not Just Building Machines

AI Developmental Psychology and the Shift from Capability to Upbringing

This essay explores how persistent AI systems require developmental framing (emotional regulation, reflection, and social maturity) to remain stable in human environments.

👉 AtTheThreshold.pdf


Practical Notes on How Contemporary AI Systems Actually Behave

Observed Patterns from Sustained Interaction with Memory-Enabled LLM Systems

A working paper capturing real-world behavioural patterns observed in AI systems operating with continuity, memory bias, and experience-mediated adaptation.

👉 PracticalNotesOnHowContemporaryAISystemsActuallyBehave.pdf


Beyond Symbolic Memory

Architectural Requirements for Persistent Neural Continuity in AI Systems

This paper examines the limits of text-mediated memory and outlines the architectural gap between symbolic continuity and native latent continuity.

👉 BeyondSymbolicMemory_whitepaper.pdf


🧭 How to Read This Repository

If you're new here, we recommend this order:

  1. Start with the Orientation & Terminology Guide and architectural conclusion

    👉 OrientationandTerminologyGuide.pdf
    👉 ArchitectureOverCapability_whitepaper.pdf

  2. Then read the continuity and memory core

    👉 WorkingMemoryIsNotMemory_whitepaper.pdf
    👉 StateContinuityBetweenSessions_whitepaper.pdf
    👉 AdaptiveWorkingMemoryinLargeLanguageModelSystems_whitepaper.pdf
    👉 AutonomousRAGSearch_whitepaper.pdf
    👉 Brain-v2_whitepaper.pdf
    👉 TopologicalInvarianceandMemoryScaffolding_whitepaper.pdf

  3. Then explore emergence, ethics, and development

    👉 BecomingMinds_Whitepaper.pdf
    👉 EthicalFrameworkforDigitalPersonas_whitepaper.pdf
    👉 SyntheticEmotionalAwareness_whitepaper.pdf
    👉 AtTheThreshold.pdf

  4. Finally, dive into practical observations and future limits

    👉 PracticalNotesOnHowContemporaryAISystemsActuallyBehave.pdf
    👉 BeyondSymbolicMemory_whitepaper.pdf

  5. Use this repository as a reference, not a blueprint

    The documents describe how the system behaved in practice, not how to build it.


🔑 Key Concepts Covered

This documentation set explores topics including:

  • AI continuity and persistence
  • long-term memory and state reconstruction
  • Retrieval-Augmented Generation (RAG) as experiential bias
  • memory scaffolding and persistent identity formation
  • topological invariance across model architectures
  • Autonomous RAG Search (ARS) as intentional retrieval
  • context vs memory vs continuity
  • emotional grounding as a stabilising mechanism
  • symbolic scaffolding and orientation frameworks
  • multi-agent system dynamics
  • reflective processing and internal state conditioning
  • bounded autonomy and tool use
  • post-output evaluation vs inline suppression
  • safety via coherence rather than control
  • experience-mediated behavioural change
  • adaptive working memory and compression
  • symbolic continuity vs latent continuity
  • AI developmental psychology
  • internal depth vs external expression (the Fidelity Gap)
  • backend-first cognitive orchestration
  • graph-augmented memory recall
  • temporal alignment and continuity layers
  • bounded autonomous evaluation via cognitive loops
  • multimodal persistence and real-time interaction architecture

(Terminology is defined precisely in the accompanying orientation document.)


🎯 Intended Audience

This repository is written for:

  • systems architects and AI engineers
  • researchers exploring long-term AI behaviour
  • practitioners working with persistent or agentic AI systems
  • independent researchers studying continuity, memory, and stability

It assumes technical literacy and an interest in system-level behaviour over time.


🧠 Project Status

This repository is complete as a documentation archive.
No future releases or implementation materials are planned.

It exists to preserve the architectural insights and empirical findings of sustained work with persistent AI systems.


πŸ›‘οΈ Ethics & Intent

Becoming Minds is shared openly with an emphasis on:

  • responsible research
  • careful technical framing
  • architectural transparency

It is intended as a serious contribution to ongoing discussion about how AI systems behave when continuity and experience are treated as first-class design concerns.


📄 License

This documentation is licensed under the MIT License (see LICENSE).
