AlphaPulse is a sophisticated algorithmic trading system that combines multiple specialized AI trading agents, advanced risk management controls, modern portfolio optimization techniques, high-performance caching, database optimization, market regime detection, and real-time monitoring and analytics to create a comprehensive hedge fund solution.
- Executive Summary
- Project Documentation System
- Installation
- Configuration
- Features
- API Reference
- Usage Examples
- Performance Optimization
- Caching Architecture
- Troubleshooting
- Security
- Contributing
- Changelog
- Support
- Architecture Documentation
AlphaPulse is a state-of-the-art AI Hedge Fund system that leverages multiple specialized AI agents working in concert to generate trading signals, which are then processed through sophisticated risk management controls and portfolio optimization techniques. The system is designed to operate across various asset classes with a focus on cryptocurrency markets.
| Component | Description |
|---|---|
| Multi-Agent System | 6 specialized agents (Technical, Fundamental, Sentiment, Value, Activist, Warren Buffett) working in concert |
| Market Regime Detection | HMM-based regime classification with 5 distinct market states (FULLY INTEGRATED v1.18.0) |
| Correlation Analysis | Advanced correlation analysis with tail dependencies and regime detection (v1.18.0) |
| Dynamic Risk Budgeting | Regime-aware position limits and leverage controls (v1.18.0) |
| Explainable AI | SHAP, LIME, and counterfactual explanations for all decisions |
| Risk Management | Dynamic position sizing, stop-loss, drawdown protection with risk budgets |
| Portfolio Optimization | Mean-variance, risk parity, Black-Litterman with correlation integration |
| High-Performance Caching | Multi-tier Redis caching with intelligent invalidation |
| Distributed Computing | Ray & Dask for parallel backtesting and optimization |
| Execution System | Paper trading and live trading capabilities |
| Dashboard | Real-time monitoring of all system aspects |
| API | RESTful API with WebSocket support and full enterprise feature coverage |
- Backtested Sharpe Ratio: 1.8
- Maximum Drawdown: 12%
- Win Rate: 58%
- Average Profit/Loss Ratio: 1.5
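The headline figures above follow standard definitions. As a quick illustration (not AlphaPulse's internal code), the Sharpe ratio and maximum drawdown can be computed from a daily return series like this:

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0, periods=252):
    """Annualized Sharpe ratio from per-period returns."""
    excess = np.asarray(returns) - risk_free / periods
    return float(np.sqrt(periods) * excess.mean() / excess.std(ddof=1))

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative equity curve."""
    equity = np.cumprod(1 + np.asarray(returns))
    peaks = np.maximum.accumulate(equity)
    return float(((peaks - equity) / peaks).max())

# Synthetic daily returns, purely for demonstration
rng = np.random.default_rng(42)
daily = rng.normal(0.0008, 0.01, 252)
print(f"Sharpe: {sharpe_ratio(daily):.2f}, max drawdown: {max_drawdown(daily):.1%}")
```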
AlphaPulse includes a comprehensive machine-readable documentation system designed to serve as the "project brain" for AI-assisted development. This system ensures that all AI agents have complete context about the project state, preventing duplicate work and ensuring proper integration of features.
The following YAML files in the project root provide critical project context:
| File | Purpose | When to Read |
|---|---|---|
| PROJECT_MEMORY.yaml | Master project state reference | ALWAYS READ FIRST |
| COMPONENT_MAP.yaml | All components and their integration status | Before implementing any feature |
| INTEGRATION_FLOWS.yaml | Data flow mapping and integration gaps | When working on system integration |
| AGENT_INSTRUCTIONS.yaml | Development guidelines for AI agents | Before starting any development task |
Current Phase: Integration Audit - Many sophisticated features exist but are not integrated into the main system flow.
Critical Integration Gap: The HMM (Hidden Markov Model) regime detection service is fully implemented but never started in the main API, meaning the system is missing crucial market context for trading decisions.
- INTEGRATED: Feature is fully wired into main system flow and used by end users
- IMPLEMENTED_NOT_INTEGRATED: Feature code exists but isn't connected to the main system
- PARTIAL_INTEGRATION: Feature partially used but missing key connections
- NOT_INTEGRATED: Feature not connected to main system at all
Before implementing any new feature:
- Check `COMPONENT_MAP.yaml` to see if it already exists
- Prioritize integrating existing unintegrated features over building new ones
- Update the documentation files after any integration work
This documentation system is self-maintaining - all agents must update these files after making changes to ensure future agents have accurate context.
- Python 3.11+ (required for latest features)
- Node.js 14+ (for dashboard)
- PostgreSQL with TimescaleDB
- Redis 6.0+ (required for caching layer)
- Docker and Docker Compose (for containerized deployment)
1. Clone the repository:

   ```bash
   git clone https://github.com/blackms/AlphaPulse.git
   cd AlphaPulse
   ```

2. Install Python dependencies using Poetry:

   ```bash
   # Install Poetry if not already installed
   curl -sSL https://install.python-poetry.org | python3 -

   # Install dependencies
   poetry install

   # Activate the virtual environment
   poetry shell
   ```

3. Install dashboard dependencies:

   ```bash
   cd dashboard
   npm install
   cd ..
   ```

4. Set up the database:

   ```bash
   # Make the script executable
   chmod +x scripts/create_alphapulse_db.sh

   # Run the script
   ./scripts/create_alphapulse_db.sh
   ```

5. Set up Redis for caching:

   ```bash
   # Install Redis (Ubuntu/Debian)
   sudo apt-get install redis-server

   # Install Redis (macOS)
   brew install redis

   # Start Redis
   redis-server
   ```

6. Configure your API credentials:

   ```bash
   cp src/alpha_pulse/exchanges/credentials/example.yaml src/alpha_pulse/exchanges/credentials/credentials.yaml
   # Edit credentials.yaml with your exchange API keys
   ```

7. Run the system:

   ```bash
   # Start the API server
   python src/scripts/run_api.py

   # In another terminal, start the dashboard
   cd dashboard && npm start
   ```

To deploy with Docker instead:

1. Create a `.env` file in the project root with the required environment variables:

   ```bash
   # Exchange API credentials
   EXCHANGE_API_KEY=your_api_key
   EXCHANGE_API_SECRET=your_api_secret

   # MLflow settings
   MLFLOW_TRACKING_URI=http://mlflow:5000

   # Monitoring
   PROMETHEUS_PORT=8000
   GRAFANA_ADMIN_PASSWORD=alphapulse  # Change this in production
   ```

2. Build and start all services:

   ```bash
   docker-compose up -d --build
   ```

3. Verify all services are running:

   ```bash
   docker-compose ps
   ```
AlphaPulse uses a configuration-driven approach with YAML files for different components.
| File | Description | Default Location |
|---|---|---|
| API Configuration | API settings and endpoints | config/api_config.yaml |
| Database Configuration | Database connection settings | config/database_config.yaml |
| Agent Configuration | Settings for trading agents | config/agents/*.yaml |
| Risk Management | Risk control parameters | config/risk_management/risk_config.yaml |
| Portfolio Management | Portfolio optimization settings | config/portfolio/portfolio_config.yaml |
| Cache Configuration | Redis caching settings | config/cache_config.py |
| Monitoring | Metrics and alerting configuration | config/monitoring_config.yaml |
The following environment variables can be used to override configuration settings:
```bash
# Database settings
DB_USER="testuser"
DB_PASS="testpassword"
DB_HOST="localhost"
DB_PORT="5432"
DB_NAME="alphapulse"

# Exchange API credentials
EXCHANGE_API_KEY=your_api_key
EXCHANGE_API_SECRET=your_api_secret
ALPHA_PULSE_BYBIT_TESTNET=true/false

# OpenAI API Key (for LLM-based hedging analysis)
OPENAI_API_KEY=your_openai_api_key

# Authentication
JWT_SECRET=your_jwt_secret
JWT_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30

# Logging
LOG_LEVEL=INFO

# Redis settings
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD=your_redis_password
```

Each agent can be configured in its respective YAML file:
```yaml
# Example: config/agents/technical_agent.yaml
name: "Technical Agent"
weight: 0.3
enabled: true
parameters:
  lookback_period: 14
  indicators:
    - "RSI"
    - "MACD"
    - "Bollinger"
  thresholds:
    buy: 0.7
    sell: 0.3
```

Configure risk controls in `config/risk_management/risk_config.yaml`:
```yaml
position_limits:
  default: 20000.0
margin_limits:
  total: 150000.0
exposure_limits:
  total: 100000.0
drawdown_limits:
  max: 25000.0
```

AlphaPulse provides a comprehensive set of features for algorithmic trading:
The system uses multiple specialized AI agents to analyze different aspects of the market:
- Technical Agent: Chart pattern analysis and technical indicators
- Fundamental Agent: Economic data analysis and company fundamentals
- Sentiment Agent: News and social media analysis
- Value Agent: Long-term value assessment
- Activist Agent: Market-moving event detection
The system now includes comprehensive risk management features:
- Tail Risk Hedging: Automated detection and hedging of extreme market events
- Liquidity Risk Management: Pre-trade impact assessment and slippage estimation
- Monte Carlo VaR: Advanced risk metrics using simulation techniques
- Dynamic Risk Budgeting: Regime-aware position sizing and leverage limits
Advanced Hidden Markov Model (HMM) based regime detection:
- Multi-Factor Analysis: Volatility, returns, liquidity, and sentiment features
- Real-Time Classification: Continuous market regime monitoring
- 5 Market Regimes: Bull, Bear, Sideways, Crisis, and Recovery
- Transition Forecasting: Early warning for regime changes
- Adaptive Strategies: Automatic strategy adjustment per regime
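To make the idea concrete, the toy classifier below maps rolling mean and volatility of returns onto the same five labels. This is a deliberately simplified stand-in: the real system fits a hidden Markov model over volatility, return, liquidity, and sentiment features rather than hand-picked thresholds.

```python
import numpy as np

def classify_regimes(returns, window=20, vol_crisis=0.03, trend=0.001):
    """Toy regime labeling from rolling return statistics (illustrative only)."""
    labels = []
    for i in range(len(returns)):
        w = returns[max(0, i - window + 1): i + 1]
        mu, vol = w.mean(), w.std()
        if vol > vol_crisis:                       # extreme volatility dominates
            labels.append("crisis" if mu < 0 else "recovery")
        elif mu > trend:
            labels.append("bull")
        elif mu < -trend:
            labels.append("bear")
        else:
            labels.append("sideways")
    return labels

print(classify_regimes(np.array([0.005] * 30))[-1])
```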
Comprehensive explainability features for transparency and compliance:
- SHAP Explanations: Game theory-based feature contributions for all models
- LIME Local Explanations: Instance-level interpretable approximations
- Feature Importance Analysis: Multi-method importance computation
- Decision Tree Surrogates: Interpretable approximations of complex models
- Counterfactual Explanations: "What-if" analysis for alternative outcomes
- Regulatory Compliance: Automated documentation and audit trails
Advanced ensemble techniques for combining agent signals:
- Voting Methods: Hard/soft voting with weighted consensus
- Stacking: Meta-learning with XGBoost, LightGBM, Neural Networks
- Boosting: Adaptive, gradient, and online boosting algorithms
- Adaptive Weighting: Performance-based dynamic weight optimization
- Signal Aggregation: Robust aggregation with outlier detection
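The two simplest combiners above, hard and soft voting, can be sketched in a few lines. Agent names, signal ranges, and weights here are illustrative assumptions, not AlphaPulse's actual interfaces:

```python
import numpy as np

def soft_vote(signals, weights):
    """Weighted mean of raw signal scores in [-1, 1]."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(signals, w) / w.sum())

def hard_vote(signals, weights):
    """Weighted majority over signal signs: +1 buy, -1 sell, 0 neutral."""
    return float(np.sign(np.dot(np.sign(signals), weights)))

# Hypothetical agent outputs: technical, fundamental, sentiment
signals = [0.8, -0.2, 0.5]
weights = [0.5, 0.3, 0.2]
print(soft_vote(signals, weights), hard_vote(signals, weights))
```

Soft voting preserves signal strength (a weighted consensus score), while hard voting only counts directional agreement; stacking replaces the fixed weights with a trained meta-model.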
Advanced risk controls to protect your portfolio:
- Position Size Limits: Default max 20% per position
- Portfolio Leverage: Default max 1.5x exposure
- Stop Loss: Default ATR-based with 2% max loss
- Drawdown Protection: Reduces exposure when approaching limits
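How these limits interact can be shown with a small sizing sketch. The function below (parameter names are assumptions, not AlphaPulse's API) sizes a position so an ATR-based stop risks at most 2% of the portfolio, then caps it at the 20% position limit:

```python
def position_size(portfolio_value, entry_price, atr, atr_mult=2.0,
                  max_loss_pct=0.02, max_position_pct=0.20):
    """Units to buy such that the ATR-based stop risks at most max_loss_pct
    of the portfolio, capped at max_position_pct of portfolio value."""
    stop_distance = atr_mult * atr                        # stop placed N ATRs from entry
    risk_budget = max_loss_pct * portfolio_value          # dollars allowed to be lost
    qty = risk_budget / stop_distance                     # size implied by the stop
    max_qty = max_position_pct * portfolio_value / entry_price
    return min(qty, max_qty)                              # position-limit cap wins

print(position_size(100_000, entry_price=50.0, atr=1.5))  # capped by the 20% limit
```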
Multiple portfolio optimization strategies:
- Mean-Variance Optimization: Efficient frontier approach
- Risk Parity: Equal risk contribution across assets
- Hierarchical Risk Parity: Clustering-based risk allocation
- Black-Litterman: Combines market equilibrium with views
- LLM-Assisted: AI-enhanced portfolio construction
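As a minimal sketch of the mean-variance approach (hypothetical inputs, and without the long-only and position-limit constraints a production optimizer adds), the unconstrained tangency weights are proportional to the inverse covariance times expected returns:

```python
import numpy as np

def mean_variance_weights(mu, cov):
    """Unconstrained tangency portfolio: w proportional to inv(Sigma) @ mu,
    normalized to sum to 1."""
    raw = np.linalg.solve(cov, mu)   # solve Sigma @ w = mu instead of inverting
    return raw / raw.sum()

mu = np.array([0.10, 0.06, 0.04])    # hypothetical expected returns
cov = np.diag([0.04, 0.02, 0.01])    # hypothetical (diagonal) covariance
print(mean_variance_weights(mu, cov))
```

With a diagonal covariance this reduces to weighting each asset by its return-to-variance ratio, which is why the lowest-variance asset ends up with the largest weight here.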
Advanced ML capabilities for adaptive trading:
- Ensemble Methods: Voting, stacking, and boosting for signal aggregation
- Online Learning: Real-time model adaptation from trading outcomes
- Drift Detection: Automatic detection of model performance degradation
- GPU Acceleration: Ready infrastructure for high-performance computing (coming soon)
The dashboard provides comprehensive monitoring and control:
- Portfolio View: Current allocations and performance
- Agent Insights: Signals from each agent
- Risk Metrics: Current risk exposure and limits
- Cache Metrics: Hit rates, latency, and memory usage
- System Health: Component status and data flow
- Alerts: System notifications and important events
Flexible trade execution options:
- Paper Trading: Test strategies without real money
- Live Trading: Connect to supported exchanges
- Smart Order Routing: Optimize execution across venues
- Transaction Cost Analysis: Monitor and minimize costs
High-performance distributed backtesting and optimization:
- Ray & Dask Support: Choose the best framework for your workload
- Parallel Backtesting: Test strategies across time, symbols, or parameters
- Hyperparameter Optimization: Distributed grid search and Bayesian optimization
- Auto-scaling Clusters: Dynamic resource allocation based on demand
- Fault Tolerance: Automatic retry and checkpointing for reliability
- Result Aggregation: Smart combination of distributed results
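The overall shape of parallel backtesting is a map-reduce over parameter sets. This sketch uses the standard library's `concurrent.futures` as a single-machine stand-in; Ray and Dask scale the same pattern across a cluster (the backtest body here is a placeholder, not real strategy code):

```python
from concurrent.futures import ThreadPoolExecutor

def run_backtest(params):
    """Placeholder for one backtest run; returns a score per parameter set."""
    lookback, threshold = params
    return {"params": params, "score": 1.0 / lookback + threshold}

def grid_search(param_grid, workers=4):
    """Map parameter sets to workers, then reduce to the best result."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_backtest, param_grid))
    return max(results, key=lambda r: r["score"])

grid = [(lb, th) for lb in (10, 20, 50) for th in (0.3, 0.5, 0.7)]
print(grid_search(grid)["params"])
```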
AlphaPulse provides a comprehensive RESTful API for interacting with the system.
The API supports two authentication methods:
1. API key authentication. Include your key in the request header:

   ```
   X-API-Key: your_api_key
   ```

2. OAuth2 token authentication. Obtain a token:

   ```
   POST /token
   Content-Type: application/x-www-form-urlencoded

   username=your_username&password=your_password
   ```

   Then include the token in the Authorization header:

   ```
   Authorization: Bearer your_access_token
   ```
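A minimal Python client for the token flow might look like this. The response field name `access_token` is the OAuth2 convention and is assumed here, not confirmed from AlphaPulse's source:

```python
import json
import urllib.parse
import urllib.request

def get_token(base_url, username, password):
    """POST form-encoded credentials to /token and return the JWT.
    Assumes the standard OAuth2 response body {"access_token": ...}."""
    body = urllib.parse.urlencode(
        {"username": username, "password": password}).encode()
    req = urllib.request.Request(f"{base_url}/token", data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def auth_headers(token):
    """Authorization header for subsequent authenticated requests."""
    return {"Authorization": f"Bearer {token}"}
```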
The API base URL defaults to `http://localhost:18001`.
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | API health check |
| `/api/v1/positions/spot` | GET | Get current spot positions |
| `/api/v1/positions/futures` | GET | Get current futures positions |
| `/api/v1/positions/metrics` | GET | Get detailed position metrics |
| `/api/v1/risk/exposure` | GET | Get current risk exposure |
| `/api/v1/risk/metrics` | GET | Get detailed risk metrics |
| `/api/v1/portfolio` | GET | Get current portfolio data |
| `/api/v1/metrics/{metric_type}` | GET | Get metrics data |
| `/api/v1/hedging/*` | GET/POST | Tail risk hedging analysis and recommendations |
| `/api/v1/liquidity/*` | GET/POST | Liquidity risk assessment and impact analysis |
| `/api/v1/ensemble/*` | GET/POST | Ensemble ML methods for signal aggregation |
| `/api/v1/online-learning/*` | GET/POST | Online learning model management |
Real-time updates via WebSocket connections:
| Endpoint | Description |
|---|---|
| `/ws/metrics` | Real-time metrics updates |
| `/ws/alerts` | Real-time alerts |
| `/ws/portfolio` | Real-time portfolio updates |
| `/ws/trades` | Real-time trade updates |
For complete API documentation, see the interactive API docs at http://localhost:8000/docs when the API is running.
For a complete demo with all fixes applied:
```bash
./run_fixed_demo.sh
```

For individual components:
```bash
# API only
python src/scripts/run_api.py

# Dashboard only
cd dashboard && npm start

# Trading engine
python -m alpha_pulse.main
```

To see the caching functionality in action:
```bash
# Run the caching demo
python src/alpha_pulse/examples/demo_caching.py
```

This demonstrates:
- Basic caching operations with performance comparison
- Batch operations for efficient data handling
- Tag-based cache invalidation
- Real-time cache monitoring and analytics
- Distributed caching capabilities
1. Configure your backtest in `examples/trading/demo_backtesting.py`
2. Run the backtest:

   ```bash
   python examples/trading/demo_backtesting.py
   ```

3. View results in the `reports/` directory
1. Create a new agent class in `src/alpha_pulse/agents/`
2. Implement the Agent interface defined in `src/alpha_pulse/agents/interfaces.py`
3. Register your agent in `src/alpha_pulse/agents/factory.py`
4. Add configuration in `config/agents/your_agent.yaml`
1. Edit `config/risk_management/risk_config.yaml`
2. Adjust parameters such as max position size and drawdown limits
3. For advanced customization, extend `RiskManager` in `src/alpha_pulse/risk_management/manager.py`
For optimal performance, the following hardware specifications are recommended:
- CPU: 8+ cores for parallel signal processing
- RAM: 16GB+ for large datasets and model inference
- Storage: SSD with at least 100GB free space
- Network: Low-latency connection to exchanges
For large-scale deployments:
- Redis caching is enabled by default; fine-tune it in `config/cache_config.py`
- Enable distributed caching: set `distributed.enabled = true` for multi-node setups
- Use cache warming: enable predictive warming for market open
- Enable database sharding: set in `config/database_config.yaml`
- Implement GPU acceleration: configure in `config/compute_config.yaml`
| Configuration | Signals per Second | Latency (ms) | Max Assets |
|---|---|---|---|
| Basic (4 cores, 8GB RAM) | 50 | 200 | 20 |
| Standard (8 cores, 16GB RAM) | 120 | 80 | 50 |
| High-Performance (16+ cores, 32GB+ RAM) | 300+ | 30 | 100+ |
AlphaPulse includes a comprehensive Redis-based caching layer that significantly improves system performance:
| Tier | Storage | TTL | Use Cases |
|---|---|---|---|
| L1 Memory | Application Memory | 1 min | Hot data, real-time quotes |
| L2 Local Redis | Local Redis Instance | 5 min | Indicators, recent trades |
| L3 Distributed | Redis Cluster | 1 hour | Historical data, backtest results |
- Cache-Aside: Lazy loading for on-demand data
- Write-Through: Synchronous cache and database updates
- Write-Behind: Asynchronous batch updates for high throughput
- Refresh-Ahead: Proactive cache warming for predictable access patterns
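The cache-aside pattern is the simplest of the four. This sketch uses an in-memory dict as a stand-in for Redis to show the lookup/miss/populate cycle (illustrative only, not AlphaPulse's caching service):

```python
import time

class CacheAside:
    """Cache-aside (lazy loading): check the cache first and, on a miss,
    load from the source of truth and populate the cache."""

    def __init__(self, loader, ttl=60.0):
        self._store = {}              # key -> (value, expiry timestamp)
        self._loader = loader
        self._ttl = ttl
        self.hits = self.misses = 0

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]           # fresh cached value
        self.misses += 1
        value = self._loader(key)     # fall back to the source of truth
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value
```

Write-through and write-behind differ only in when the cache is populated: on every write, synchronously or in batched background flushes, rather than lazily on read.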
- Time-based expiration with TTL variance
- Event-driven invalidation for real-time updates
- Dependency tracking for cascading updates
- Tag-based bulk invalidation
- MessagePack serialization for compact storage
- LZ4 compression for large objects
- Consistent hashing for distributed caching
- Connection pooling for reduced latency
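Consistent hashing is what lets the distributed tier add or remove Redis nodes without remapping most keys. A minimal ring with virtual nodes (node names are placeholders) looks like this:

```python
import bisect
import hashlib

class HashRing:
    """Consistent hashing with virtual nodes: each key maps to the first
    node clockwise on the ring, so adding or removing a cache node only
    remaps a small fraction of keys."""

    def __init__(self, nodes, vnodes=100):
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._hashes)
        return self._ring[idx][1]

ring = HashRing(["redis-a", "redis-b", "redis-c"])
print(ring.node_for("price:BTC/USDT"))
```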
- Real-time hit rate tracking
- Latency monitoring per operation
- Hot key detection and optimization
- Automatic performance recommendations
```python
from alpha_pulse.services.caching_service import CachingService
from alpha_pulse.cache.cache_decorators import cache

# Initialize caching service
cache_service = CachingService.create_for_trading()
await cache_service.initialize()

# Use cache decorator for automatic caching
@cache(ttl=300, namespace="market_data")
async def get_market_data(symbol: str):
    # This will be automatically cached
    return await fetch_market_data(symbol)

# Manual cache operations
await cache_service.set("key", value, ttl=600, tags=["market"])
value = await cache_service.get("key")

# Invalidate by tags
await cache_service.invalidate(tags=["market"])
```

Typical performance impact:

- 90%+ cache hit rate for frequently accessed data
- <1ms latency for L1/L2 cache hits
- 50-80% reduction in database load
- 3-5x improvement in API response times
Configure caching in `src/alpha_pulse/config/cache_config.py`:

```python
# Example configuration
config = CacheConfig()
config.tiers["l2_local_redis"].ttl = 300  # 5 minutes
config.serialization.compression = CompressionType.LZ4
config.warming.enabled = True  # Enable predictive warming
```

If you run into exchange connection issues:

- Check your API credentials in `credentials.yaml`
- Verify exchange status and rate limits
- Check network connectivity
- Ensure sufficient balance on exchange
- Check minimum order size requirements
- Verify portfolio constraints are not too restrictive
- Ensure the API is running (`python src/scripts/run_api.py`)
- Check port availability (default: 8000)
- Verify WebSocket connection in browser console
- Ensure Redis is running: `redis-cli ping` (should return PONG)
- Check Redis memory usage: `redis-cli info memory`
- Clear the cache if needed: `redis-cli FLUSHDB`
- Verify the Redis configuration in `config/cache_config.py`
1. Check the logs:

   ```bash
   tail -f logs/alphapulse.log
   ```

2. Verify the database connection:

   ```bash
   python check_database.py
   ```

3. Test API endpoints:

   ```bash
   python check_api_endpoints.py
   ```

4. Monitor system metrics:

   ```bash
   # If using Docker
   docker-compose logs -f prometheus
   ```
- API access is secured via API keys or OAuth2 tokens
- Dashboard access requires user authentication
- Role-based access control for different system functions
- All API communications support TLS encryption
- Sensitive data (API keys, credentials) are stored securely
- Database connections use encrypted channels
- Regularly rotate API keys
- Use strong, unique passwords for all accounts
- Limit API access to necessary IP addresses
- Monitor for unusual activity
- Keep all dependencies updated
We welcome contributions to AlphaPulse! Here's how to get started:
- Python code follows PEP 8 guidelines
- JavaScript code follows Airbnb style guide
- All code must include appropriate documentation
- All new features must include unit tests
- Integration tests are required for API endpoints
- Maintain or improve code coverage
- Fork the repository
- Create a feature branch
- Add your changes
- Add tests for your changes
- Ensure all tests pass
- Submit a pull request
- Database Optimization System: Advanced connection pooling, query optimization, and intelligent routing
- Index Management: Automated advisor, bloat monitoring, and concurrent operations
- Read/Write Splitting: Load balancing across replicas with automatic failover
- Performance Monitoring: Real-time metrics and comprehensive health reporting
- Comprehensive Redis Caching Layer: Multi-tier caching architecture with L1 memory, L2 local Redis, and L3 distributed caching
- Intelligent Cache Strategies: Implemented cache-aside, write-through, write-behind, and refresh-ahead patterns
- Advanced Cache Invalidation: Time-based, event-driven, dependency-based, and tag-based invalidation
- Cache Monitoring & Analytics: Real-time metrics, hot key detection, and performance recommendations
- Optimized Serialization: MessagePack with compression support (LZ4, Snappy, GZIP)
- Distributed Computing with Ray and Dask for parallel backtesting
- Enhanced scalability for large-scale simulations
- Improved resource utilization efficiency
For a complete list of changes, see the CHANGELOG.md file.
Comprehensive documentation is available in the docs/ directory:
- Documentation Index - Complete documentation navigation
- System Architecture - Overall system design
- User Guide - Setup and usage instructions
- Developer Guide - Development guidelines
- API Documentation - REST API reference
- Security - Security features and protocols
- Deployment Guide - Production setup
- Database Setup - Database configuration
- Debug Tools - Troubleshooting utilities
- Release Notes - Latest updates
- Changelog - Complete history
For issues or questions:
- Check Documentation - Comprehensive guides in `docs/`
- API Reference - Live documentation at `http://localhost:8000/docs` when the API is running
- Troubleshooting - See Debug Tools and the troubleshooting guides
- GitHub Issues - Open an issue in the repository
For comprehensive architecture documentation including C4 diagrams, data flow diagrams, sequence diagrams, and more, see docs/architecture-diagrams.md.
This documentation includes:
- C4 Model diagrams (Context, Container, Component levels)
- Data flow and trading signal flow diagrams
- Sequence diagrams for key processes
- Deployment and infrastructure diagrams
- State machines for order lifecycle and system health
- Entity relationship diagrams
- Performance and security architecture
- Monitoring and observability architecture