This project demonstrates two brain-inspired alternatives to standard backpropagation for training neural networks on the MNIST dataset. Both methods learn without backprop's core machinery:
- No chain rule through the full network
- No global backpropagation of error signals
In standard backpropagation (BP), the weights used in the backward pass are the transpose of the weights used in the forward pass. This requirement is known as the "weight transport" (or weight symmetry) problem. Feedback Alignment relaxes the constraint by using a fixed, random matrix in the backward pass instead of the transpose of the forward weights.
Surprisingly, the network "aligns" its forward weights with these random feedback weights, allowing it to learn effective representations without the weight symmetry required by BP.
- Implementation: `feedback-alignment.py`
- Key features: a custom autograd `Function`, fixed random feedback weights, LayerNorm, and SELU activation.
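To make the substitution concrete, here is a minimal NumPy sketch of the core idea: the backward pass uses a fixed random matrix `B` where BP would use `W2.T`. The layer sizes, learning rate, and `tanh` activation are illustrative choices for this sketch, not the repo's PyTorch implementation (which uses LayerNorm and SELU).

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: x -> h -> y (sizes are illustrative)
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))

# Fixed random feedback matrix: plays the role of W2.T in the backward pass
B = rng.normal(0, 0.1, (n_hid, n_out))

def train_step(x, t, lr=0.01):
    global W1, W2
    # Forward pass
    h = np.tanh(W1 @ x)
    y = W2 @ h
    # Output error under a squared-error loss
    e = y - t
    # Backward pass: B @ e replaces W2.T @ e -- the FA substitution
    dh = (B @ e) * (1 - h**2)  # tanh derivative
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)
    return float(0.5 * np.sum(e**2))
```

Over training, the forward weights `W2` tend to align with `B.T`, which is why the random feedback still delivers a useful teaching signal.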
Predictive Coding (PC) is a theory of brain function where the brain constantly generates and updates a mental model of the environment. In neural networks, PC can be implemented as a local learning rule where each layer tries to predict the activity of the layer below it.
The learning process involves two phases:
- Inference Phase: Hidden states are updated to minimize the prediction error between layers.
- Learning Phase: Weights are updated to better predict the (now more accurate) hidden states.
- Implementation: `predictive_coding.py`
- Key features: local prediction-error minimization, an iterative inference phase, and weight updates without global backpropagation.
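The two phases above can be sketched in NumPy for a tiny two-layer model in which each layer predicts the one below it. The inference loop relaxes the hidden state to reduce prediction errors, then the learning phase applies purely local, Hebbian-style weight updates. All sizes, learning rates, and the linear predictions are illustrative assumptions of this sketch, not the repo's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n0, n1, n2 = 8, 16, 4                  # input, hidden, and top-layer sizes
W1 = rng.normal(0, 0.1, (n0, n1))      # layer 1 predicts layer 0
W2 = rng.normal(0, 0.1, (n1, n2))      # layer 2 predicts layer 1

def pc_step(x0, x2, n_inf=20, lr_x=0.1, lr_w=0.01):
    """One PC training step: clamp x0 (input) and x2 (target),
    infer the hidden state x1, then update the weights."""
    global W1, W2
    x1 = W2 @ x2                        # initialise hidden state from the top-down prediction
    for _ in range(n_inf):              # --- inference phase ---
        e0 = x0 - W1 @ x1               # prediction error at layer 0
        e1 = x1 - W2 @ x2               # prediction error at layer 1
        x1 += lr_x * (W1.T @ e0 - e1)   # move x1 to reduce the total error
    # --- learning phase: local updates using only adjacent-layer quantities ---
    e0 = x0 - W1 @ x1
    e1 = x1 - W2 @ x2
    W1 += lr_w * np.outer(e0, x1)
    W2 += lr_w * np.outer(e1, x2)
    return float(np.sum(e0**2) + np.sum(e1**2))
```

Note that each weight update depends only on the error and activity of the two layers it connects; no error signal is propagated through the whole network.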
- Dataset: MNIST
- Framework: PyTorch
- Environment Management: uv
Use the Makefile to trigger training and testing:

```
make
# or
make run-fa
make run-pc
```

To remove data and caches:

```
make clean
```