A fault-tolerant, real-time stock price streaming system built with Phoenix LiveView and Elixir/OTP.
(Preview Knowledge Validation Lab for Arionkoder)
- Real-time stock price streaming via WebSockets and LiveView
- Fault-tolerant architecture with supervision trees and circuit breaker pattern
- Distributed PubSub using Phoenix.PubSub with Erlang distribution
- Mock API integration with configurable external stock price API
- Comprehensive testing with Mimic for mocking
- Responsive UI with Tailwind CSS
- StockPriceStreamer: GenServer that periodically fetches stock prices
- SubscriptionManager: Manages client subscriptions to stock symbols
- CircuitBreaker: Protects against external API failures
- StockSupervisor: Fault-tolerant supervision with restart strategies
- LiveView Dashboard: Real-time UI for subscribing and viewing stock updates
You can run this application in two modes:
| Feature | Mode 1: Containerized | Mode 2: Hybrid |
|---|---|---|
| Elixir/Erlang Required | ❌ No | ✅ Yes (1.14+) |
| Setup Complexity | 🟢 Simple | 🟡 Moderate |
| Development Speed | 🟡 Rebuild needed | 🟢 Fast reload |
| Production Ready | ✅ Yes | ❌ Dev only |
| Resource Usage | 🟡 Higher | 🟢 Lower |
| Debugging | 🟡 Container logs | 🟢 Direct access |
No local Elixir/Erlang installation required - everything runs in Docker containers.
Prerequisites:
- Docker and Docker Compose
Setup:
```bash
# Start all services (PostgreSQL, MockServer, and Phoenix app)
docker compose up -d

# View logs (optional)
docker compose logs -f server
```

Access:
- Application: http://localhost:4000
- MockServer: http://localhost:1080
- PostgreSQL: localhost:5432
Management:
```bash
# Stop all services
docker compose down

# Rebuild after code changes
docker compose build server && docker compose up -d

# View service status
docker compose ps
```

Local Phoenix server with containerized database and mock services.
Prerequisites:
- Elixir 1.14+
- Docker and Docker Compose
Setup:
- Start infrastructure services:

```bash
# Start only PostgreSQL and MockServer
docker compose up postgres mockserver -d
```

- Set up the application:

```bash
# Install dependencies
mix deps.get

# Create and migrate database
mix ecto.create
mix ecto.migrate

# Install frontend dependencies
mix assets.setup
```

- Run the application:

```bash
# Start Phoenix server locally
mix phx.server

# Or run in IEx for development
iex -S mix phx.server
```

Access:
- Application: http://localhost:4000
- MockServer: http://localhost:1080
- PostgreSQL: localhost:5432
Mode 1 (Containerized):
```bash
# Run tests inside container
docker compose exec server mix test

# Run with coverage report
docker compose exec server mix coveralls.html
```

Mode 2 (Hybrid):
```bash
# Ensure test database is ready
docker compose up postgres -d
MIX_ENV=test mix ecto.create
MIX_ENV=test mix ecto.migrate

# Run all tests
mix test

# Run with detailed coverage
mix coveralls.html

# Run with coverage in terminal
mix coveralls

# Run code quality checks
mix credo --strict
mix dialyzer
```

- ExCoveralls: Test coverage analysis with HTML reports
- Credo: Static code analysis for code quality
- Dialyxir: Static analysis tool for type checking
- ExUnit: Built-in testing framework with property testing support
After running `mix coveralls.html`, open `cover/excoveralls.html` to view:
- Line-by-line coverage highlighting
- Module coverage percentages
- Overall project coverage metrics
- Uncovered code identification
- Navigate to http://localhost:4000
- Enter a stock symbol (e.g., AAPL, GOOGL, MSFT, TSLA)
- Click "Subscribe" to receive real-time updates
- View live price updates as they stream in
- Click "×" next to subscribed symbols to unsubscribe
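The dashboard accepts free-form symbol input. This README does not document the app's validation rules, but a sanity check matching the common US-ticker convention (1–5 uppercase letters) can be sketched in shell; the `validate_symbol` helper and the 1–5 rule are illustrative assumptions, not part of the codebase:

```bash
# Hypothetical input check — the LiveView's actual validation may differ.
validate_symbol() {
  # Accept 1-5 uppercase ASCII letters, the common US ticker convention
  if printf '%s' "$1" | grep -Eq '^[A-Z]{1,5}$'; then
    echo "ok"
  else
    echo "invalid"
  fi
}

validate_symbol AAPL      # → ok
validate_symbol aapl      # → invalid
validate_symbol 'DROP;--' # → invalid
```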
The system uses WireMock to simulate realistic stock price APIs with dynamic random values.
- Dynamic Price Fluctuations: Each API call returns different values within realistic ranges
- Real-time Timestamps: Current timestamp for every request
- 6 Stock Symbols with Realistic Ranges:
- AAPL: $145-155
- GOOGL: $2,700-2,800
- MSFT: $305-320
- TSLA: $240-260
- AMZN: $180-200
- NFLX: $450-550
- Zero Configuration: Works immediately with `docker-compose up`
API Details:
- Endpoint: http://localhost:1080/api/stock-prices
- Behavior: Returns different prices every 5 seconds for realistic simulation
- Custom Ranges: Edit `mockserver/expectations/stock-prices.json` to adjust price ranges
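The mock's dynamic pricing amounts to drawing a fresh value inside each symbol's configured range on every request. A standalone sketch of that behavior (the `random_price` helper is illustrative, not part of the WireMock configuration):

```bash
# Illustration of "a random price within a realistic range", mimicking in
# shell what the mock expectations produce per request.
random_price() {
  low=$1; high=$2
  # $RANDOM is 0..32767; scale it into [low, high] with two decimals
  awk -v low="$low" -v high="$high" -v r="$RANDOM" \
    'BEGIN { printf "%.2f\n", low + (r / 32767) * (high - low) }'
}

random_price 145 155   # AAPL's range: always a value in [145.00, 155.00]
```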
For multi-node setup:
```bash
# Node 1
iex --sname node1 --cookie secret -S mix phx.server

# Node 2 (different port)
PORT=4001 iex --sname node2 --cookie secret -S mix phx.server

# Connect nodes (from the node2 IEx session)
Node.connect(:node1@hostname)
```

Mode 1 (Containerized):
- Environment variables are configured in `docker-compose.yml`:
  - DATABASE_URL: postgres://postgres:postgres@postgres:5432/stock_stream_dev
  - STOCK_API_URL: http://mockserver:8080 (internal Docker network)
  - PORT: 4000
Mode 2 (Hybrid):
- DATABASE_URL: postgres://postgres:postgres@localhost:5432/stock_stream_dev (default)
- STOCK_API_URL: http://localhost:1080 (default)
- PORT: 4000 (default)
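In Mode 2 these defaults can be overridden per shell before starting the server, assuming the app's runtime configuration reads them from the environment as the variable list above suggests:

```bash
# Pin the Mode 2 defaults explicitly for the current shell session
export DATABASE_URL="postgres://postgres:postgres@localhost:5432/stock_stream_dev"
export STOCK_API_URL="http://localhost:1080"
export PORT=4000

# `mix phx.server` started from this shell would see these values
echo "$STOCK_API_URL"   # → http://localhost:1080
```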
Common variables:
- PHX_HOST: Phoenix host configuration
- SECRET_KEY_BASE: Application secret (auto-generated for development)
Edit `mockserver/expectations/stock-prices.json` to modify mock stock data:

```json
{
  "symbol": "AAPL",
  "price": 150.25,
  "timestamp": "2024-01-01T10:00:00Z"
}
```

- `one_for_one`: Individual process restarts without affecting others
- Circuit Breaker: Protects against cascading failures from external API
- Error Handling: Comprehensive error catching and logging
- Closed: Normal operation
- Open: Failing fast after threshold breaches
- Half-Open: Testing recovery after timeout
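The actual `StockStream.CircuitBreaker` is a GenServer; as a language-neutral illustration of the three states above, here is a toy shell model (the failure threshold of 3 is an assumed value, and the real implementation's timeout handling is omitted):

```bash
# Toy model of the Closed → Open → Half-Open → Closed cycle.
STATE="closed"; FAILURES=0; THRESHOLD=3

record_failure() {
  FAILURES=$((FAILURES + 1))
  if [ "$FAILURES" -ge "$THRESHOLD" ]; then
    STATE="open"            # threshold breached: fail fast from now on
  fi
}

record_success() {
  FAILURES=0
  STATE="closed"            # recovery confirmed: resume normal operation
}

try_half_open() {
  # After the open-state timeout elapses, allow one trial request
  [ "$STATE" = "open" ] && STATE="half_open"
}

record_failure; record_failure; record_failure
echo "$STATE"               # → open
try_half_open
echo "$STATE"               # → half_open
record_success
echo "$STATE"               # → closed
```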
- Unit Tests: Individual module testing with Mimic mocks
- Integration Tests: LiveView and PubSub integration testing
- Fault Tolerance Tests: Supervisor and circuit breaker testing
Tests use `StockStream.MimicSetup.start()` to configure mocks for:
- HTTP requests via Req
- External API calls
- GenServer interactions
- Port: 5432
- Database: stock_stream_dev
- Credentials: postgres/postgres
- Available in both modes
- Port: 1080 (external) / 8080 (internal)
- API Endpoint: http://localhost:1080/api/stock-prices
- Admin UI: http://localhost:1080/__admin
- Expectations: `mockserver/expectations/stock-prices.json`
- Available in both modes
- Port: 4000
- Environment: Production mode with runtime configuration
- Database: Connects to postgres:5432 (internal Docker network)
- Mock API: Connects to mockserver:8080 (internal Docker network)
- `StockStream.StockPriceStreamer`: Core streaming logic
- `StockStream.SubscriptionManager`: Subscription management
- `StockStream.CircuitBreaker`: Fault protection
- `StockStreamWeb.StockDashboardLive`: LiveView interface
- Update mock expectations in `mockserver/expectations/stock-prices.json`
- Restart MockServer: `docker-compose restart mockserver`
- New symbols will be available for subscription
Container won't start:

```bash
# Check container logs
docker compose logs server

# Rebuild if code changed
docker compose build server

# Restart all services
docker compose down && docker compose up -d
```

Database connection issues:

```bash
# Ensure PostgreSQL is running
docker compose ps postgres

# Check database logs
docker compose logs postgres
```

Database connection failed:

```bash
# Ensure PostgreSQL container is running
docker compose up postgres -d

# Verify connection
mix ecto.create
```

MockServer not responding:

```bash
# Check MockServer status
docker compose ps mockserver

# Restart MockServer
docker compose restart mockserver

# Test endpoint
curl http://localhost:1080/api/stock-prices
```

Mix commands fail:

```bash
# Ensure Elixir 1.14+ is installed
elixir --version

# Clean and reinstall dependencies
mix deps.clean --all && mix deps.get
```

Port already in use:

- Change port in docker-compose.yml or environment variables
- Kill processes using the port: `lsof -ti:4000 | xargs kill -9`
Permission denied (Docker):
- Ensure Docker daemon is running
- Check Docker permissions for your user
StockStream uses libcluster for automatic Erlang node discovery and clustering. This enables:
- Distributed PubSub: Real-time messages broadcast across all connected nodes
- Fault Tolerance: If one node fails, others continue serving requests
- Load Distribution: Stock price fetching and processing distributed across nodes
- Horizontal Scaling: Add more nodes to handle increased load
```
┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   Node 1    │◄──►│   Node 2    │◄──►│   Node 3    │
│             │    │             │    │             │
│ Phoenix App │    │ Phoenix App │    │ Phoenix App │
│ Stock Data  │    │ Stock Data  │    │ Stock Data  │
└─────────────┘    └─────────────┘    └─────────────┘
       │                  │                  │
       └──────────────────┼──────────────────┘
                          │
                 Shared PubSub Topics
               (stock:AAPL, stock:all)
```
- Node Discovery: libcluster automatically discovers and connects nodes
- PubSub Distribution: Phoenix.PubSub replicates messages across all nodes
- Data Consistency: Stock price updates propagate to all connected clients
- Fault Recovery: Failed nodes automatically rejoin when healthy
Development (no clustering):

```elixir
# config/dev.exs
config :libcluster, topologies: []
```

Production - Kubernetes:

```bash
# Environment variables
CLUSTER_STRATEGY=Kubernetes
KUBERNETES_SELECTOR=app=stock_stream
KUBERNETES_NAMESPACE=default
```

Production - Static Hosts:

```bash
# Environment variables
CLUSTER_STRATEGY=Epmd
CLUSTER_HOSTS=node1@server1,node2@server2,node3@server3
```

```bash
# Terminal 1 - Start first node
PORT=4000 iex --sname node1 --cookie secret -S mix phx.server

# Terminal 2 - Start second node
PORT=4001 iex --sname node2 --cookie secret -S mix phx.server

# Terminal 3 - Connect nodes manually (development)
iex --sname client --cookie secret
> Node.connect(:node1@hostname)
> Node.connect(:node2@hostname)
> Node.list() # Should show both nodes
```

When nodes are connected:
- Stock price updates appear on all nodes simultaneously
- WebSocket clients on any node receive updates from any other node
- Circuit breaker state is local per node for isolation
Based on comprehensive code analysis, here are categorized improvement opportunities:
- Add missing `@spec` type specifications for all public functions
- Extract stock price parsing logic into a dedicated `StockStream.Parser` module
- Implement a `StockStream.Stocks` context for business logic abstraction
- Add proper module documentation (`@moduledoc`) for all modules
- Create a centralized `StockStream.Configuration` module
- Add structured logging with correlation IDs
- Remove hardcoded `SECRET_KEY_BASE` from docker-compose.yml
- Implement input validation for stock symbols (length, format)
- Add rate limiting using `Hammer` or `ExRated`
- Implement XSS protection for user inputs
- Add authentication system for dashboard access
- Use proper secrets management (HashiCorp Vault, k8s secrets)
- Add HTTP connection pooling with configurable pool sizes
- Implement adaptive fetch intervals based on subscription count
- Add ETS-based caching for hot stock data
- Implement data retention policies with background cleanup
- Add composite database indexes for query optimization
- Configure Phoenix Presence for connection tracking
- Add Prometheus + Grafana integration
- Implement distributed tracing with OpenTelemetry
- Add comprehensive health check endpoints (`/health/ready`, `/health/live`)
- Configure structured JSON logging for production
- Add custom Telemetry metrics for business events
- Implement alerting for circuit breaker state changes
- Add comprehensive integration tests for full request/response cycles
- Implement property-based testing with StreamData
- Add chaos engineering tests for fault tolerance validation
- Create performance benchmarks with `:benchee`
- Add contract testing for external API interactions
- Implement load testing scenarios
- Add Kubernetes manifests with Helm charts
- Implement blue-green deployment strategy
- Configure automated backup and disaster recovery
- Add container resource limits and requests
- Implement graceful shutdown handling
- Add automated rollback capabilities
- Implement historical stock price persistence
- Add price change alerts and notifications
- Create administrative dashboard for system management
- Add WebSocket connection limits and backpressure
- Implement price trend analysis algorithms
- Add support for multiple stock exchanges
- Add pre-commit hooks with code formatting
- Implement CI/CD pipeline with GitHub Actions
- Add OpenAPI/Swagger documentation
- Configure automated security scanning
- Add development Docker Compose override
- Create architectural decision records (ADRs)
- Add background jobs with Oban for async processing
- Implement database read replicas for scaling
- Add database partitioning for time-series data
- Configure automated database migrations
- Add data export/import capabilities
- Implement soft deletes for audit trails
- Configure proper database connection pooling
- Set up monitoring and observability
- Use SSL/TLS for external API calls
- Implement rate limiting for API requests
- Configure proper logging levels
- Set up health checks and metrics
- Use proper secrets management instead of hardcoded values
- Configure container resource limits and health checks
- Enable distributed clustering in production environments
- Configure proper node discovery strategy (Kubernetes DNS or static hosts)
- Set up load balancers with session affinity for WebSocket connections
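For the hardcoded-secret items above, one concrete step is generating `SECRET_KEY_BASE` per environment instead of committing it. Inside the project, `mix phx.gen.secret` does this; without Elixir installed, OpenSSL gives an equivalent result:

```bash
# Generate a random SECRET_KEY_BASE instead of hardcoding it.
# 48 random bytes base64-encode to exactly 64 characters, the minimum
# length Phoenix expects for secret_key_base.
SECRET_KEY_BASE="$(openssl rand -base64 48)"
export SECRET_KEY_BASE

echo "${#SECRET_KEY_BASE}"   # → 64
```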
This project was created for the Arionkoder Technical Test.