A distributed neural network framework built on actor-based concurrency and reactive streams for high-performance, scalable deep learning computation.
HyperMind is a sophisticated distributed neural network framework that combines cutting-edge concurrency patterns with formal verification to deliver a robust, scalable platform for deep learning workloads. Built on the actor model with reactive stream processing, HyperMind enables network-transparent computation across distributed GPU and CPU resources.
- 🎭 Actor-Based Concurrency: Thread-safe, message-passing architecture with isolated state
- ⚡ Reactive Streams: Event-driven processing from multiple sources (CPU, GPU, Database)
- 🔗 Command Pattern: Extensible asynchronous operations with proxy-based chaining
- 🌐 Distributed Computing: Network-transparent neural network computation
- 🔒 Formally Specified: Complete Z++ formal specifications for correctness verification
- 🚀 GPU Acceleration: Native CUDA/OpenCL integration via dedicated streams
- 💾 Persistent State: Asynchronous PostgreSQL integration for model persistence
- 📊 Hierarchical Processing: Three-tier worker-manager-director computation model
HyperMind's architecture is built on four core components:
NeuralReactor: the fundamental computational unit that:
- Processes messages from multiple priority queues (internal, external, command)
- Handles asynchronous events from GPU streams and PostgreSQL pipes
- Manages session state and NDArray caches using hash maps
- Executes feedforward and backpropagation operations
- Implements hierarchical ranks (worker, manager, director)
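To make the multi-source event handling concrete, here is a minimal, single-threaded sketch of a reactor's event loop. All names (`EventSource`, `Reactor::step`) are illustrative, not the framework's real API, and the GPU stream and PostgreSQL pipe are modeled as plain queues rather than true asynchronous channels:

```cpp
// Sketch of a reactor event loop draining four sources in priority order:
// internal queue, external queue, GPU event stream, PostgreSQL pipe.
#include <functional>
#include <mutex>
#include <queue>

using Event = std::function<void()>;

class EventSource {
public:
    void push(Event e) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(e));
    }
    bool tryPop(Event& out) {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
private:
    std::mutex m_;
    std::queue<Event> q_;
};

class Reactor {
public:
    EventSource internal, external, gpu, db;

    // Process at most one event, honoring source priority.
    bool step() {
        Event e;
        for (EventSource* s : {&internal, &external, &gpu, &db}) {
            if (s->tryPop(e)) { e(); return true; }
        }
        return false;  // all sources empty
    }
};
```

In the real framework each reactor would run `step()` on its own thread (the ThreadActor described below), blocking when all four sources are empty.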
SessionInitiator: orchestrates neural network computation sessions:
- Creates and distributes sessions across available reactors
- Manages layer sequences for network topology
- Tracks session lifecycle (active, completed, failed)
- Balances load across reactor pool
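As a sketch of how session creation and load balancing could fit together, the fragment below assigns each new session to a reactor in round-robin order. The class and its round-robin policy are illustrative assumptions; the real SessionInitiator may use a different balancing strategy:

```cpp
// Hypothetical session bookkeeping: round-robin assignment across a
// reactor pool, plus lifecycle status tracking.
#include <cstddef>
#include <cstdint>
#include <unordered_map>

using ID = std::uint64_t;

enum class SessionStatus { Active, Completed, Failed };

class SessionPool {
public:
    explicit SessionPool(std::size_t reactors) : reactors_(reactors) {}

    // Create a session and assign it to the next reactor in rotation.
    ID createSession() {
        ID id = nextId_++;
        assignment_[id] = next_ % reactors_;
        status_[id] = SessionStatus::Active;
        ++next_;
        return id;
    }

    std::size_t reactorOf(ID id) const { return assignment_.at(id); }
    SessionStatus statusOf(ID id) const { return status_.at(id); }

private:
    std::size_t reactors_;
    std::size_t next_ = 0;
    ID nextId_ = 1;
    std::unordered_map<ID, std::size_t> assignment_;
    std::unordered_map<ID, SessionStatus> status_;
};
```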
Command system: implements asynchronous operations:
- Command base class with extensible execute() method
- FeedForward and BackPropagation commands
- CommandProxy for operation chaining
- Priority-based command queuing
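A minimal sketch of this hierarchy is shown below. The method bodies are stand-ins (they return labels instead of performing real computation) so the chaining behavior is easy to see; the framework's actual commands would dispatch tensor operations:

```cpp
// Sketch of the command hierarchy: an extensible execute() hook,
// two concrete commands, and a proxy that chains them.
#include <memory>
#include <string>
#include <vector>

struct Command {
    virtual ~Command() = default;
    virtual std::string execute() = 0;  // extensible execution hook
};

struct FeedForward : Command {
    std::string execute() override { return "feedforward"; }
};

struct BackPropagation : Command {
    std::string execute() override { return "backprop"; }
};

// CommandProxy chains commands so a sequence runs as one operation.
class CommandProxy : public Command {
public:
    CommandProxy& then(std::unique_ptr<Command> c) {
        chain_.push_back(std::move(c));
        return *this;  // enables proxy.then(...).then(...)
    }
    std::string execute() override {
        std::string log;
        for (auto& c : chain_) log += c->execute() + ";";
        return log;
    }
private:
    std::vector<std::unique_ptr<Command>> chain_;
};
```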
Integration contracts: provide interfaces to external systems:
- GPU Integration: CUDA/OpenCL operations via dedicated streams
- PostgreSQL Integration: Asynchronous persistence via database pipes
- Network Integration: Distributed reactor communication
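One plausible C++ rendering of these contracts is a set of pure-virtual interfaces that the CUDA/OpenCL and PostgreSQL backends implement. The names below are illustrative stand-ins for what specs/integrations.zpp specifies, not the framework's real types:

```cpp
// Hypothetical integration interfaces: callbacks deliver results to the
// reactor's GPU event stream and PostgreSQL pipe, respectively.
#include <functional>
#include <string>
#include <vector>

using Buffer = std::vector<float>;

struct GpuBackend {
    virtual ~GpuBackend() = default;
    // Enqueue a kernel; the callback fires on the reactor's GPU stream.
    virtual void enqueue(const Buffer& in,
                         std::function<void(Buffer)> onDone) = 0;
};

struct PersistenceBackend {
    virtual ~PersistenceBackend() = default;
    // Asynchronously persist a blob; completion arrives via the DB pipe.
    virtual void save(const std::string& key, const Buffer& blob,
                      std::function<void(bool ok)> onDone) = 0;
};

// A trivial synchronous fake, useful for unit-testing reactor logic
// without real hardware.
struct InlineGpu : GpuBackend {
    void enqueue(const Buffer& in,
                 std::function<void(Buffer)> onDone) override {
        onDone(in);  // echo the input back immediately
    }
};
```

Keeping the contracts abstract like this lets a reactor be tested against `InlineGpu` while production builds swap in a real CUDA or OpenCL backend.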
For detailed architecture documentation, see docs/architecture_overview.md.
HyperMind includes comprehensive documentation and formal specifications:
- docs/README.md - Documentation index and navigation guide
- docs/architecture_overview.md - Detailed architecture with 10 Mermaid diagrams
- docs/implementation_guide.md - Step-by-step implementation instructions
- docs/test_generation_guide.md - Test generation from formal specs
- specs/data_model.zpp - Core data structures (NDArray, SessionState, Commands)
- specs/system_state.zpp - System state and global invariants
- specs/operations.zpp - Operation specifications with pre/postconditions
- specs/integrations.zpp - External system contracts (GPU, DB, Network)
See SPECIFICATION_SUMMARY.md for a complete overview of all specifications.
- C++17 or later
- CMake 3.15+
- CUDA Toolkit 11.0+ (for GPU support)
- PostgreSQL 12+ (for persistence)
- Google Test (for testing)
```bash
# Clone the repository
git clone https://github.com/o9nn/hypermind.git
cd hypermind

# Create build directory
mkdir build && cd build

# Configure with CMake
cmake ..

# Build
make -j$(nproc)
```

```bash
# Run all tests
./hypermind_tests

# Run specific test suite
./hypermind_tests --gtest_filter=NDArrayTest.*

# Run with verbose output
./hypermind_tests --gtest_verbose
```

```cpp
#include "hypermind.hpp"

// Create a session initiator with 4 reactors
SessionInitiator initiator(4);

// Define neural network layers
std::vector<LayerProxy> layers = {
    LayerProxy(784, 128),  // Input layer
    LayerProxy(128, 64),   // Hidden layer
    LayerProxy(64, 10)     // Output layer
};

// Create a computation session
ID session_id = initiator.createSession(layers);

// Submit input data
NDArray input({784}, Device::CPU, DataType::float32);
initiator.feedForward(session_id, input);

// Retrieve results
NDArray output = initiator.getResult(session_id);
```

HyperMind is currently in active development. The project includes:
- ✅ Complete Formal Specifications - All core components formally specified in Z++
- ✅ Comprehensive Documentation - Architecture diagrams, implementation guides, test generation guides
- ✅ Core Data Structures - Basic implementation in hypermind.hpp
- 🚧 Implementation In Progress - See docs/implementation_guide.md for next steps
From the implementation guide, the following components are planned:
- Remaining operations (BackPropagation, weight updates, gradient computation)
- Comprehensive error handling and recovery
- Full GPU integration (CUDA/OpenCL backends)
- Complete database integration (PostgreSQL async operations)
- Performance monitoring and profiling
- Integration tests for distributed scenarios
- Network communication layer
See the implementation guide for details.
- Each NeuralReactor runs in its own thread (ThreadActor)
- Lock-free communication via priority queues
- No shared mutable state between actors
- Message passing ensures thread safety
Each NeuralReactor maintains four event sources:
- Internal Priority Queue - Self-scheduled operations (highest priority)
- External Priority Queue - Messages from other reactors
- GPU Event Stream - Asynchronous GPU computation results
- PostgreSQL Pipe - Database operation results
Three-tier distributed computation model:
- Workers - Execute basic layer computations
- Managers - Coordinate workers for complex operations
- Directors - Orchestrate managers for full network passes
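The three tiers above can be sketched as functions where each rank delegates to the rank below it. The arithmetic here is a deliberate stand-in (summation in place of real layer computation) just to show the fan-out shape of the hierarchy:

```cpp
// Illustrative worker/manager/director delegation chain.
#include <numeric>
#include <vector>

enum class Rank { Worker, Manager, Director };

// Worker: executes one basic computation (a stand-in sum here).
double workerCompute(const std::vector<double>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0.0);
}

// Manager: fans a batch out to workers and combines their results.
double managerCoordinate(const std::vector<std::vector<double>>& batches) {
    double total = 0.0;
    for (const auto& b : batches) total += workerCompute(b);
    return total;
}

// Director: orchestrates managers for a full pass over all partitions.
double directorOrchestrate(
    const std::vector<std::vector<std::vector<double>>>& partitions) {
    double total = 0.0;
    for (const auto& p : partitions) total += managerCoordinate(p);
    return total;
}
```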
HyperMind is designed for horizontal and vertical scaling:
- Horizontal Scaling: Add reactor instances across machines
- Vertical Scaling: GPU acceleration and async database operations
- Network Transparency: Reactors communicate regardless of location
- Load Balancing: Automatic work distribution across reactor pool
HyperMind's formal Z++ specifications enable:
- Type Checking - Verify data structure consistency
- Invariant Checking - Prove operations maintain invariants
- Test Generation - Derive test cases from specifications
- Model Checking - Verify distributed system properties
- Refinement - Gradually refine specs to implementation
See docs/test_generation_guide.md for test generation from specs.
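As an example of the spec-to-test workflow, a schema invariant can be translated directly into a checkable predicate. The struct and invariant below are illustrative stand-ins for the kind of constraint specs/data_model.zpp would state about NDArray (element count equals the product of the shape); consult the actual specs for the real schemas:

```cpp
// Invariant check derived from a spec-style constraint:
// |data| == product(shape). ShapedArray is a stand-in, not the
// framework's real NDArray.
#include <cstddef>
#include <vector>

struct ShapedArray {
    std::vector<std::size_t> shape;
    std::vector<float> data;
};

// A generated test would assert this after every constructing operation.
bool shapeInvariantHolds(const ShapedArray& a) {
    std::size_t n = 1;
    for (std::size_t d : a.shape) n *= d;
    return a.data.size() == n;
}
```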
We welcome contributions! To contribute:
- Read the architecture documentation
- Review the formal specifications
- Follow the implementation guide
- Generate tests using the test generation guide
- Submit a pull request
- All implementations must match the formal specifications in specs/
- Maintain invariants specified in Z++ schemas
- Add tests for new operations
- Document architectural changes with Mermaid diagrams
- Ensure thread safety in actor implementations
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: docs/
- Specifications: specs/
- Issues: GitHub Issues
- Agent: See .github/agents/hypermind.md for GitHub Copilot agent configuration
HyperMind combines ideas from:
- Actor Model (Hewitt, 1973)
- Reactive Streams (Reactive Manifesto)
- Command Pattern (Gang of Four)
- Formal Methods (Z Notation, Z++)
- Distributed Computing (MapReduce, Actor Frameworks)
Built with ❤️ for high-performance distributed deep learning