Python package for fitting two-state models of sensorimotor adaptation using the Expectation-Maximization (EM) algorithm.
This package provides a complete, high-performance Python implementation of the Expectation-Maximization algorithm for fitting two-state models of motor learning. The implementation uses Numba JIT compilation to achieve C-like performance without requiring manual compilation of C++/MEX code.
Original Work:
- Author: Scott Albert
- Institution: Johns Hopkins University
- Lab: Laboratory for Computational Motor Control
- Advisor: Reza Shadmehr
- Date: July 25, 2017
- ✨ High Performance: Numba-optimized likelihood function provides near-C performance
- 📦 Easy Installation: Install as a package via pip or pixi
- 🔬 Scientific Computing: Built on NumPy, SciPy, and Numba
- 📊 Visualization: Optional matplotlib integration for plotting
- 🧪 Well-Tested: Includes benchmarks and convergence tests
- 📚 Comprehensive Docs: Type hints and detailed documentation
```bash
# Clone the repository
git clone https://github.com/Motor-Learning-Lab/albert-em-port.git
cd albert-em-port

# Install with pixi
pixi install

# Run example
pixi run example

# Run tests
pixi run test
```

Install directly from the GitHub repository using a PEP 508 URL. The import name is `albert_em`:
```bash
pip install "albert-em @ git+https://github.com/Motor-Learning-Lab/albert-em-port@main"
```

Then in Python:

```python
import albert_em
```

You can also add this project as a dependency in your own `pyproject.toml`:

```toml
[project]
dependencies = [
    "albert-em @ git+https://github.com/Motor-Learning-Lab/albert-em-port@main",
]
```

For development, clone the repository and install in editable mode:

```bash
# Clone the repository
git clone https://github.com/Motor-Learning-Lab/albert-em-port.git
cd albert-em-port

# Install the package (editable)
pip install -e .

# Or with visualization support
pip install -e ".[viz]"

# Or for development
pip install -e ".[dev]"
```

A minimal usage example:

```python
import numpy as np
from albert_em import generalized_expectation_maximization

# Define experimental paradigm (120 trials total)
r = np.concatenate([np.zeros(20), 30 * np.ones(50),
                    np.full(20, np.nan), np.zeros(30)])
EC = np.concatenate([np.zeros(70), np.ones(20), np.zeros(30)])
EC_value = np.concatenate([np.full(70, np.nan), np.zeros(20),
                           np.full(30, np.nan)])
c = np.array([1.0, 1.0])

# Your behavioral data
y = ...  # motor output data, one value per trial

# Set up EM
initial_params = np.array([0.95, 0.7, 0.05, 0.30, 0, 0, 2, 2, 5])
search_space = np.array([
    [0, 1.1], [0, 1.1], [0, 1], [0, 1],
    [-30, 30], [-30, 30],
    [1e-7, 10], [1e-7, 10], [1e-7, 10],
])
constraints = np.array([0.001, 0.001])

# Run EM algorithm
parameters, likelihoods = generalized_expectation_maximization(
    initial_params, y, r, EC, EC_value, c,
    search_space, constraints, num_iterations=100,
)
```

Project layout:

```
albert-em-port/
├── src/albert_em/                # Main package
│   ├── __init__.py
│   ├── em.py                     # Main EM coordinator
│   ├── kalman_smoother.py        # E-step
│   ├── m_step.py                 # M-step
│   ├── expected_complete_log_likelihood.py  # Numba-optimized with fallback
│   ├── incomplete_log_likelihood.py
│   └── simulation.py
├── ipynb/
│   └── tutorial.ipynb            # End-to-end demo
├── tests/
│   └── test_smoke.py             # Fast, clear smoke test
├── Matlab/                       # Original MATLAB code
├── pixi.toml                     # Pixi environment (dev + notebook)
├── pyproject.toml                # Python package metadata
└── README.md
```
The Python implementation uses Numba JIT compilation to achieve near-C performance:
- ✅ No manual compilation required (unlike MEX)
- ✅ Automatic optimization on first run
- ✅ Comparable performance to C++/MEX
- ✅ Pure Python with scientific libraries
- Ease of use: No build toolchain required
- Performance: JIT compilation provides 50-100x speedup over pure Python
- Maintainability: Keep everything in Python
- Cross-platform: Works on Windows, Linux, macOS without recompilation
- Scientific ecosystem: Seamless NumPy integration
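As an illustration of the technique (not the package's actual likelihood function), the kind of tight numeric loop below is what Numba compiles to machine code on first call; the `try`/`except` fallback keeps the sketch runnable even without Numba installed:

```python
import numpy as np

try:
    from numba import njit
except ImportError:
    # Fallback: run as plain Python if Numba is unavailable
    def njit(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda f: f

@njit(cache=True)
def gaussian_nll(residuals, sigma2):
    # Negative log-likelihood of i.i.d. Gaussian residuals -- an
    # explicit loop that Numba turns into machine code
    n = residuals.shape[0]
    total = 0.0
    for i in range(n):
        total += residuals[i] * residuals[i]
    return 0.5 * (n * np.log(2.0 * np.pi * sigma2) + total / sigma2)

print(gaussian_nll(np.zeros(4), 1.0))  # ≈ 3.6758
```

The first call pays a one-time compilation cost; subsequent calls run at compiled speed, which is where the reported 50-100x speedup over pure Python comes from.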
The two-state model has 9 parameters:
- `aS` - Slow state retention factor (0-1)
- `aF` - Fast state retention factor (0-1)
- `bS` - Slow state error sensitivity (0-1)
- `bF` - Fast state error sensitivity (0-1)
- `xS1` - Initial slow state
- `xF1` - Initial fast state
- `sigmax2` - State update variance (>0)
- `sigmau2` - Motor output variance (>0)
- `sigma12` - Initial state variance (>0)
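The parameter vector follows the ordering above (inferred from this list and the `initial_params` in the usage example); a small sketch of unpacking it by name:

```python
import numpy as np

# Parameter ordering as listed above (assumed to match the EM routine)
PARAM_NAMES = ["aS", "aF", "bS", "bF", "xS1", "xF1",
               "sigmax2", "sigmau2", "sigma12"]

initial_params = np.array([0.95, 0.7, 0.05, 0.30, 0, 0, 2, 2, 5])
params = dict(zip(PARAM_NAMES, initial_params))
print(params["aS"], params["bF"])  # 0.95 0.3
```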
The two-state model assumes motor adaptation is governed by two hidden states:
- Slow state: High retention (aS ≈ 0.98), low learning rate (bS ≈ 0.1)
- Fast state: Low retention (aF ≈ 0.6), high learning rate (bF ≈ 0.3)
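To build intuition, here is a minimal noise-free sketch of those update equations, using the illustrative values above (`simulate_two_state` is not the package API; the package ships its own `simulation.py`):

```python
import numpy as np

def simulate_two_state(r, aS=0.98, aF=0.6, bS=0.1, bF=0.3):
    """Noise-free two-state simulation (illustrative only).

    Each trial: output y = xS + xF, error e = r - y, then
    xS <- aS*xS + bS*e and xF <- aF*xF + bF*e.
    """
    xS = xF = 0.0
    y = np.zeros(len(r))
    for t, rt in enumerate(r):
        y[t] = xS + xF            # motor output is the sum of both states
        e = rt - y[t]             # error driving adaptation
        xS = aS * xS + bS * e     # slow: retains well, learns slowly
        xF = aF * xF + bF * e     # fast: forgets quickly, learns fast
    return y

# Null trials, then a 30-unit perturbation, then washout
r = np.concatenate([np.zeros(20), 30 * np.ones(50), np.zeros(30)])
y = simulate_two_state(r)
```

Plotting `y` reproduces the classic signatures: rapid initial learning driven by the fast state, slower saturation driven by the slow state, and incomplete washout.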
The EM algorithm iteratively:
- E-step: Estimates hidden states using Kalman smoothing
- M-step: Updates model parameters via constrained optimization (SLSQP)
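In state-space terms (notation inferred from the parameter list above; a sketch, with the exact noise structure defined in the source), the model the E-step smooths is:

```math
\begin{aligned}
x_{t+1} &= A x_t + B e_t + \eta_t, &\quad \eta_t &\sim \mathcal{N}(0,\ \sigma_x^2 I) \\
y_t &= c^\top x_t + \epsilon_t, &\quad \epsilon_t &\sim \mathcal{N}(0,\ \sigma_u^2)
\end{aligned}
```

where $A = \mathrm{diag}(a_S, a_F)$, $B = (b_S, b_F)^\top$, and $e_t$ is the experienced error on trial $t$.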
Open the tutorial notebook for a complete, reproducible workflow:
```bash
pixi run notebook
```

To run the tests:

```bash
# Run all tests
pixi run test

# Or with pytest directly
pytest tests/
```

- Notebook: See `ipynb/tutorial.ipynb`
- API: Docstrings in source code
- MATLAB reference: `Matlab/` directory
If you use this code, please cite:
Albert, S. T., & Shadmehr, R. (2016). The neural feedback response to error as a teaching signal for the motor learning system. Journal of Neuroscience, 36(17), 4832-4845.
MIT License - See LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
- Original MATLAB implementation: Scott Albert (Johns Hopkins University)
- Python port: 2025
- Lab: Laboratory for Computational Motor Control
- Advisor: Reza Shadmehr