Reinforcement learning-powered motion control tuning for FRC robots.
Backstaff is a desktop application that helps robotics teams optimize PID controllers and motion parameters using reinforcement learning. Instead of manually tweaking parameters through trial and error, Backstaff learns optimal settings by analyzing your robot's performance data. It is built with PyQt5 and provides a clean, responsive interface with dark-mode support.
- RL-Based Optimization: Automatically tune PID gains (P, I, D, FF) using reinforcement learning algorithms
- Visual Model Management: Create, edit, and manage multiple controller configurations through an intuitive GUI
- Performance Tracking: Track optimization history and visualize parameter evolution over time
- Flexible Configuration: Set custom bounds, learning rates, and stochastic rates per parameter
- Dark Mode: Professional dark/light theme support for extended tuning sessions
- Data Persistence: All models and training history automatically saved to disk
(Recommended) Download a released version at https://github.com/MaxedPC08/Backstaff/releases/tag/V0.1
```shell
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
python main.py
```

- Click "Add Model" in the sidebar
- Configure parameter names (e.g., P, I, D, FF)
- Set bounds and learning rates
- Start tuning!
- Reinforcement/optimizer.py: Core gradient-free optimizer with configurable learning rates and parameter constraints
- Reinforcement/controller.py: Controller wrapper that evaluates robot motion by analyzing position, velocity, and acceleration data
- Reinforcement/controller_manager.py: Manages multiple controller instances
- Reinforcement/demos/: Example implementations
  - `pid.py`: Standard PID controller
  - `rl_pid.py`: RL-enhanced PID tuner
  - `1d_momentum_sim.py`: Motion simulation framework
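The standard PID demo can be pictured with a minimal sketch like the one below. This is a generic textbook PID with a feed-forward term, not the actual contents of `pid.py`; the class and method names are illustrative only.

```python
class PID:
    """Minimal textbook PID controller with a feed-forward term."""

    def __init__(self, kp, ki, kd, ff=0.0):
        self.kp, self.ki, self.kd, self.ff = kp, ki, kd, ff
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        """Return the control output for one timestep of length dt."""
        error = setpoint - measurement
        self.integral += error * dt
        # No derivative term on the very first sample (no previous error yet).
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative
                + self.ff * setpoint)
```

With `kp=0.5` and the other gains zeroed, an error of 2.0 produces an output of 1.0, which is the kind of single-step behavior the tuner evaluates across a whole motion profile.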
- ui/main_window.py: Main application window with split-pane layout
- ui/main_content.py: Central content area for parameter visualization and control
- ui/model_manager.py: Backend logic for model CRUD operations and controller instantiation
- ui/widgets.py: Custom widgets including controller list and parameter displays
- ui/dialogs.py: Modal dialogs for adding/editing models and application settings
- ui/settings_manager.py: Persistent settings management
- ui/styles.py: Theme and stylesheet management
- Models: backstaff_data/models.json stores all model configurations
- History: Individual `*_history.json` files track optimization progress per model
- Notes: User annotations saved in `*_notes.json` files
- Settings: `backstaff_settings.json` stores UI preferences
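Reading the stored models back is plain JSON work. The helper below is a sketch, not part of Backstaff's actual API, and it assumes `models.json` holds a list of model objects using the field names shown later in this README:

```python
import json
from pathlib import Path

def load_models(data_folder):
    """Return the models in <data_folder>/models.json keyed by model name.

    Illustrative helper: assumes models.json is a JSON list of objects,
    each with at least a "name" field, per the schema in this README.
    """
    path = Path(data_folder) / "models.json"
    if not path.exists():
        return {}
    models = json.loads(path.read_text())
    return {m["name"]: m for m in models}
```

Returning `{}` for a missing file keeps first-run behavior simple: the app starts with no models instead of crashing on a fresh data folder.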
- Define Model: Create a model with N parameters (e.g., P, I, D, FF gains) and set bounds
- Collect Data: Run your robot and collect frame data (position, velocity, time)
- Evaluate Performance: Backstaff computes error metrics based on:
- Distance traveled vs. optimal path
- Speed consistency (penalizes jerky motion)
- Target achievement
- Update Weights: RL optimizer adjusts parameters to minimize error
- Iterate: Repeat until performance converges
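The loop above can be sketched as a simple gradient-free hill climber: perturb each parameter by at most its learning rate, keep the candidate only if the error metric improves, and clip to the configured bounds. This is an assumption-laden stand-in for Backstaff's actual optimizer, with an invented `tune` signature:

```python
import random

def tune(evaluate, weights, mins, maxs, learning_rates, iterations=500, seed=0):
    """Gradient-free tuning loop (illustrative, not Backstaff's optimizer).

    evaluate: callable mapping a weight list to a scalar error metric.
    Each iteration perturbs every weight by up to its learning rate,
    clips to [mins, maxs], and keeps the candidate if the error drops.
    """
    rng = random.Random(seed)
    best = list(weights)
    best_err = evaluate(best)
    for _ in range(iterations):
        candidate = [
            min(maxs[i], max(mins[i], best[i] + rng.uniform(-1, 1) * learning_rates[i]))
            for i in range(len(best))
        ]
        err = evaluate(candidate)
        if err < best_err:
            best, best_err = candidate, err
    return best, best_err
```

In practice `evaluate` would run (or replay) a robot motion and combine the distance, speed-consistency, and target-achievement penalties listed above into one scalar.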
Each model in `backstaff_data/models.json` contains:

```json
{
  "name": "shooterpid",
  "optimizer_type": "Standard Optimizer",
  "num_params": 4,
  "param_names": ["P", "I", "D", "FF"],
  "mins": [0.0, 0.0, 0.0, 0.0],
  "maxs": [1.0, 1.0, 1.0, 1.0],
  "learning_rates": [0.0001, 0.0001, 0.0001, 0.0001],
  "initial_weights": [0.5, 0.0, 0.1, 0.2]
}
```

Configure application settings in `backstaff_settings.json`:
```json
{
  "data_folder": "/path/to/backstaff_data",
  "dark_mode": true
}
```

A build script and spec file are included for creating standalone executables:
```shell
./build.sh
```

Output will be in `dist/Backstaff/`.
Backstaff/
├── main.py # Application entry point
├── Reinforcement/ # RL backend
│ ├── optimizer.py # Core optimization engine
│ ├── controller.py # Controller logic
│ ├── controller_manager.py # Multi-controller management
│ └── demos/ # Example implementations
├── ui/ # PyQt5 frontend
│ ├── main_window.py # Main application window
│ ├── model_manager.py # Model CRUD backend
│ ├── dialogs.py # UI dialogs
│ └── widgets.py # Custom components
├── backstaff_data/ # Persistent storage
│ ├── models.json # Model configurations
│ └── *_history.json # Training history
└── requirements.txt # Python dependencies
- PyQt5 (≥5.15.0): Cross-platform GUI framework
- NumPy (≥1.21.0): Numerical computing and optimization
Add your own test suite as needed. Consider testing:
- Optimizer convergence on known functions
- Model save/load integrity
- UI state management
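A save/load integrity check can be self-contained. The test below round-trips a model config through JSON using plain stdlib calls; in a real suite you would swap those calls for the corresponding `ui/model_manager.py` functions (the exact function names are not assumed here):

```python
import json
import tempfile
from pathlib import Path

def test_model_roundtrip():
    """A saved model config should survive a JSON round trip unchanged."""
    model = {
        "name": "shooterpid",
        "num_params": 4,
        "param_names": ["P", "I", "D", "FF"],
        "mins": [0.0, 0.0, 0.0, 0.0],
        "maxs": [1.0, 1.0, 1.0, 1.0],
    }
    with tempfile.TemporaryDirectory() as folder:
        path = Path(folder) / "models.json"
        path.write_text(json.dumps([model]))
        loaded = json.loads(path.read_text())
    assert loaded == [model]

test_model_roundtrip()  # runs standalone; pytest will also collect it
```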
**Application won't start**
- Ensure virtual environment is activated
- Verify PyQt5 installation: `pip list | grep PyQt5`
- Check Python version (3.8+ recommended)
**Models not saving**
- Check write permissions on the `backstaff_data/` directory
- Verify the `data_folder` path in the settings file
**Dark mode not applying**
- Toggle in Settings dialog (gear icon)
- Check `backstaff_settings.json` for `"dark_mode": true`
See LICENSE for details.
This is a robotics team tool designed for FRC competition preparation. Contributions are welcome, especially around:
- Additional optimizer algorithms
- Real-time robot integration examples
- Performance visualization improvements
- Multi-objective optimization support