Project investigating how inhibitory wiring motifs change the learned representations of the excitatory layer
- Winning Contribution for the Spiking Neural Networks Hackathon at the University of Osnabrück 2025
- I used BindsNET to simulate the SNNs
- Check out the presentation
How can inhibitory wiring motifs change the learned representations of the excitatory layer?
➢ The connections between the inhibitory and excitatory layer can help reduce redundancy in the weights and increase sparsity in the excitatory layer’s activity. When the strength of the inhibitory connections increases with distance, the learned representations can be pushed into clusters.
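The two motifs differ only in the fixed I→E weight matrix. A minimal NumPy sketch of the idea (layer size and inhibition strengths are illustrative assumptions, not the values used in the experiments):

```python
import numpy as np

n = 100                 # number of E and I neurons (illustrative)
side = int(np.sqrt(n))  # neurons arranged on a 10x10 grid

# Diehl & Cook (2015)-style motif: each I neuron inhibits every E neuron
# except its one-to-one partner, with a uniform strength.
w_dc = -1.0 * (np.ones((n, n)) - np.eye(n))

# Distance-dependent motif (Hazan et al., 2018): inhibition strength
# grows with the grid distance between the I and E neuron.
coords = np.array([(i // side, i % side) for i in range(n)], dtype=float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
w_dist = -dist / dist.max()  # 0 on the diagonal, strongest for far pairs
```

With `w_dc` every excitatory neuron competes equally with all others; with `w_dist` nearby neurons barely inhibit each other, which is what pushes the learned representations into spatial clusters.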
| Control | Diehl & Cook, 2015 wiring motif | Hazan et al., 2018 wiring motif |
|---|---|---|
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
I chose a two-layer setup, which also complies with Dale’s law since there is a separate excitatory (red) and inhibitory (blue) population. The independent variable is the wiring motif between the excitatory and inhibitory layer, which is fixed for each experiment.
- E-Layer: `DiehlAndCookNodes()`
- I-Layer: `LIFNodes()`
- fixed E→I, I→E connections
- train E-Layer with STDP
- Data: MNIST encoded with a `PoissonEncoder()`
Install BindsNET with

```
pip install numpy scipy matplotlib git+https://github.com/BindsNET/bindsnet.git
```

(in a notebook, prefix the command with `!`; torch should be installed automatically)