albert-em

Python package for fitting two-state models of sensorimotor adaptation using the Expectation-Maximization (EM) algorithm.


Overview

This package provides a complete, high-performance Python implementation of the Expectation-Maximization algorithm for fitting two-state models of motor learning. The implementation uses Numba JIT compilation to achieve C-like performance without requiring manual compilation of C++/MEX code.

Original Work:

  • Author: Scott Albert
  • Institution: Johns Hopkins University
  • Lab: Laboratory for Computational Motor Control
  • Advisor: Reza Shadmehr
  • Date: July 25, 2017

Features

✨ High Performance: Numba-optimized likelihood function provides near-C performance
📦 Easy Installation: Install as a package via pip or pixi
🔬 Scientific Computing: Built on NumPy, SciPy, and Numba
📊 Visualization: Optional matplotlib integration for plotting
🧪 Well-Tested: Includes benchmarks and convergence tests
📖 Comprehensive Docs: Type hints and detailed documentation

Installation

Using pixi (recommended)

# Clone the repository
git clone https://github.com/Motor-Learning-Lab/albert-em-port.git
cd albert-em-port

# Install with pixi
pixi install

# Run example
pixi run example

# Run tests
pixi run test

Using pip (from GitHub)

Install directly from the GitHub repository using a PEP 508 URL. The import name is albert_em.

pip install "albert-em @ git+https://github.com/Motor-Learning-Lab/albert-em-port@main"

Then in Python:

import albert_em

You can also add this project as a dependency in your own pyproject.toml:

[project]
dependencies = [
    "albert-em @ git+https://github.com/Motor-Learning-Lab/albert-em-port@main",
]

Using pip (editable clone)

# Clone the repository
git clone https://github.com/Motor-Learning-Lab/albert-em-port.git
cd albert-em-port

# Install the package (editable)
pip install -e .

# Or with visualization support
pip install -e ".[viz]"

# Or for development
pip install -e ".[dev]"

Quick Start

import numpy as np
from albert_em import generalized_expectation_maximization

# Define experimental paradigm
r = np.concatenate([np.zeros(20), 30*np.ones(50), 
                   np.full(20, np.nan), np.zeros(30)])
EC = np.concatenate([np.zeros(70), np.ones(20), np.zeros(30)])
EC_value = np.concatenate([np.full(70, np.nan), np.zeros(20), 
                          np.full(30, np.nan)])
c = np.array([1.0, 1.0])

# Your behavioral data (replace the placeholder with real measurements)
y = ...  # motor output data, one value per trial

# Set up EM
initial_params = np.array([0.95, 0.7, 0.05, 0.30, 0, 0, 2, 2, 5])
search_space = np.array([
    [0, 1.1], [0, 1.1], [0, 1], [0, 1],
    [-30, 30], [-30, 30],
    [0.0000001, 10], [0.0000001, 10], [0.0000001, 10]
])
constraints = np.array([0.001, 0.001])

# Run EM algorithm
parameters, likelihoods = generalized_expectation_maximization(
    initial_params, y, r, EC, EC_value, c, 
    search_space, constraints, num_iterations=100
)
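The Quick Start leaves y to be supplied from your own experiment. If you just want something runnable, you can simulate a plausible y from the same paradigm arrays. The sketch below assumes the standard two-state update with veridical error on ordinary trials and the clamped error on error-clamp trials; the package's simulation module may already provide a function for this, so check there first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same paradigm arrays as in the Quick Start above
r = np.concatenate([np.zeros(20), 30 * np.ones(50),
                    np.full(20, np.nan), np.zeros(30)])
EC = np.concatenate([np.zeros(70), np.ones(20), np.zeros(30)])
EC_value = np.concatenate([np.full(70, np.nan), np.zeros(20),
                           np.full(30, np.nan)])

# Illustrative "ground truth" parameters (typical two-state values)
aS, bS = 0.98, 0.1
aF, bF = 0.6, 0.3

xS = xF = 0.0
y = np.empty(len(r))
for n in range(len(r)):
    y[n] = xS + xF + rng.normal(0.0, 0.5)           # motor output + noise
    # Error-clamp trials feed back the clamped error; otherwise error is r - y
    e = EC_value[n] if EC[n] == 1 else r[n] - y[n]
    xS, xF = aS * xS + bS * e, aF * xF + bF * e
```

The resulting y rises during the 30-unit perturbation block, holds during the zero-error clamp, and washes out afterward, which is the qualitative pattern the EM fit should recover.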

Repository Structure

albert-em-port/
├── src/albert_em/              # Main package
│   ├── __init__.py
│   ├── em.py                   # Main EM coordinator
│   ├── kalman_smoother.py      # E-step
│   ├── m_step.py               # M-step
│   ├── expected_complete_log_likelihood.py  # Numba-optimized with fallback
│   ├── incomplete_log_likelihood.py
│   └── simulation.py
├── ipynb/
│   └── tutorial.ipynb          # End-to-end demo notebook
├── tests/
│   └── test_smoke.py           # Fast smoke test
├── Matlab/                     # Original MATLAB code
├── pixi.toml                   # Pixi environment (dev + notebook)
├── pyproject.toml              # Python package metadata
└── README.md

Performance

The Python implementation uses Numba JIT compilation to achieve near-C performance:

  • ✅ No manual compilation required (unlike MEX)
  • ✅ Automatic optimization on first run
  • ✅ Comparable performance to C++/MEX
  • ✅ Pure Python with scientific libraries

Why Numba instead of C++/Rust?

  1. Ease of use: No build toolchain required
  2. Performance: JIT compilation can yield 50-100x speedups over equivalent pure-Python loops
  3. Maintainability: Keep everything in Python
  4. Cross-platform: Works on Windows, Linux, macOS without recompilation
  5. Scientific ecosystem: Seamless NumPy integration
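As a minimal illustration of this pattern (not the package's actual likelihood code), the hypothetical per-trial loop below is exactly the kind of hot spot Numba compiles well, and a no-op decorator fallback keeps it working when Numba is absent, mirroring the fallback noted in the repository structure above:

```python
import math

import numpy as np

try:
    from numba import njit            # JIT-compile when Numba is available
except ImportError:                   # pure-Python fallback: a no-op decorator
    def njit(func=None, **kwargs):
        if func is None:
            return lambda f: f
        return func

@njit(cache=True)
def gaussian_nll(residuals, sigma2):
    # Per-trial Gaussian negative log-likelihood, summed in an explicit loop;
    # Numba turns loops like this into machine code on first call.
    total = 0.0
    for res in residuals:
        total += 0.5 * (math.log(2.0 * math.pi * sigma2) + res * res / sigma2)
    return total
```

The first call pays a one-time compilation cost; subsequent calls run at compiled speed, which is what "automatic optimization on first run" refers to.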

Model Parameters

The two-state model has 9 parameters:

  1. aS - Slow state retention factor (0-1)
  2. aF - Fast state retention factor (0-1)
  3. bS - Slow state error sensitivity (0-1)
  4. bF - Fast state error sensitivity (0-1)
  5. xS1 - Initial slow state
  6. xF1 - Initial fast state
  7. sigmax2 - State update variance (>0)
  8. sigmau2 - Motor output variance (>0)
  9. sigma12 - Initial state variance (>0)
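Because the package passes these nine values as a flat vector (as in the Quick Start's initial_params), it can help to keep the ordering in one place. The PARAM_NAMES list here is a convenience of this sketch, not part of the package API:

```python
import numpy as np

# Ordering follows the 9-parameter list above
PARAM_NAMES = ["aS", "aF", "bS", "bF", "xS1", "xF1",
               "sigmax2", "sigmau2", "sigma12"]

initial_params = np.array([0.95, 0.7, 0.05, 0.30, 0, 0, 2, 2, 5])
params = dict(zip(PARAM_NAMES, initial_params))

# Two-state structure: the slow process retains more and learns less
assert params["aS"] > params["aF"]
assert params["bS"] < params["bF"]
# Variances must be strictly positive
assert all(params[k] > 0 for k in ("sigmax2", "sigmau2", "sigma12"))
```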

Model Description

The two-state model assumes motor adaptation is governed by two hidden states:

  • Slow state: High retention (aS ≈ 0.98), low learning rate (bS ≈ 0.1)
  • Fast state: Low retention (aF ≈ 0.6), high learning rate (bF ≈ 0.3)
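In the common formulation of this model (the package's exact sign conventions may differ), the trial-by-trial dynamics are:

```
x_s(n+1) = aS * x_s(n) + bS * e(n)      # slow state
x_f(n+1) = aF * x_f(n) + bF * e(n)      # fast state
y(n)     = x_s(n) + x_f(n) + noise      # motor output
e(n)     = r(n) - y(n)                  # performance error
```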

The EM algorithm iteratively:

  1. E-step: Estimates hidden states using Kalman smoothing
  2. M-step: Updates model parameters via constrained optimization (SLSQP)

Example: notebook

Open the tutorial notebook for a complete, reproducible workflow:

pixi run notebook

Testing

# Run all tests
pixi run test

# Or with pytest directly
pytest tests/

Documentation

  • Notebook: See ipynb/tutorial.ipynb
  • API: Docstrings in source code
  • MATLAB reference: Matlab/ directory

Citation

If you use this code, please cite:

Albert, S. T., & Shadmehr, R. (2016). The neural feedback response to error as a teaching signal for the motor learning system. Journal of Neuroscience, 36(17), 4832-4845.

License

MIT License - See LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgments

  • Original MATLAB implementation: Scott Albert (Johns Hopkins University)
  • Python port: 2025
  • Lab: Laboratory for Computational Motor Control
  • Advisor: Reza Shadmehr
