BEHEMA (BEHavioral EMergent Automaton) is a spiking neural network library inspired by cellular automata.
Behema borrows concepts such as grid layout and kernels from cellular automata to boost efficiency in highly parallel environments. The implementation aims to mimic a biological brain as closely as possible without sacrificing performance.
The learning process of a Behema neural network is continuous, with no distinction between training, validation, and deployment. The network is continuously reshaped by its inputs and can therefore produce unexpected (emergent) results.
The CPU version automatically scales across all available CPU cores in order to parallelize the workload.
Run the installation with `make install` or `make std-install`, using the following options:
- `CCOMP`
  - Defines the C compiler to use.
  - Default value: `gcc`
  - Example: `make install CCOMP=gcc-14`
- `COMPILE_MODE`
  - Enables compiler optimizations or debug capabilities.
  - Default value: `release`
  - Possible values:
    - `debug`: enables debug functionality and removes all optimizations.
    - `release`: disables all debug functionality and enables all optimizations.
  - Example: `make install COMPILE_MODE=debug`
- `INSTALL_MODE`
  - Defines whether to install the library as static or dynamic.
  - Default value: `dynamic`
  - Possible values:
    - `static`: installs as a static library.
    - `dynamic`: installs as a dynamic library.
  - Example: `make install INSTALL_MODE=static`
- `HDR_DST_DIR`
  - Sets the headers' installation directory. Library headers will be installed here.
  - Default value: `/usr/include` on Linux, `/usr/local/include` on macOS.
  - Can be set to any directory the installing user has access to.
  - Example: `make install HDR_DST_DIR=/home/user/headers`
- `LIB_DST_DIR`
  - Sets the library binary installation directory. Library binaries will be installed here.
  - Default value: `/usr/lib` on Linux, `/usr/local/lib` on macOS.
  - Can be set to any directory the installing user has access to.
  - Example: `make install LIB_DST_DIR=/home/user/lib`
Run `make cuda-install` to install the CUDA parallel (GPU) package as a system-wide dynamic or static library.
The following options are available:
- `CUDA_ARCH`
  - Defines which CUDA architecture to target during compilation.
  - Possible values: any architecture listed in the official CUDA documentation.
  - Example: `make cuda-install CUDA_ARCH=sm_61`
- `COMPILE_MODE`
  - Enables compiler optimizations or debug capabilities.
  - Default value: `release`
  - Possible values:
    - `debug`: enables debug functionality and removes all optimizations.
    - `release`: disables all debug functionality and enables all optimizations.
  - Example: `make cuda-install COMPILE_MODE=debug`
- `INSTALL_MODE`
  - Defines whether to install the library as static or dynamic.
  - Default value: `dynamic`
  - Possible values:
    - `static`: installs as a static library.
    - `dynamic`: installs as a dynamic library.
  - Example: `make cuda-install INSTALL_MODE=static`
Warnings:
- The CUDA version only works on NVIDIA GPUs.
- The CUDA version requires the CUDA SDK and APIs to work.
- The CUDA SDK and APIs are not included in any install_deps.sh script.
Coming soon...
Run `make uninstall` to remove any previous installation.
WARNING: Every time you install a new version, the previously installed one is overwritten.
Once the installation is complete you can include the library with `#include <behema/behema.h>` and directly use every function in the packages you compiled.
During linking, specify `-lbehema` to link against the compiled library.
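For example, assuming the library was installed to a standard system location, a build could look like the following (illustrative compiler invocation; adjust file names and paths as needed):

```shell
# Compile a program and link it against the installed Behema library.
# -lbehema resolves the library binary installed by `make install`.
gcc main.c -o my_network -lbehema
```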
The first step is to create and initialize two cortices:
```c
// Define starting parameters.
cortex_size_t cortex_width = 100;
cortex_size_t cortex_height = 60;
nh_radius_t nh_radius = 2;

// Define the sampling interval, used later.
ticks_count_t sampleWindow = 10;

// Create the cortices.
cortex2d_t even_cortex;
cortex2d_t odd_cortex;

// Initialize the two cortices.
c2d_create(&even_cortex, cortex_width, cortex_height, nh_radius);
c2d_create(&odd_cortex, cortex_width, cortex_height, nh_radius);
```
This will set up two identical 100x60 cortices with default values.
Optionally, before copying the first cortex to the second, its properties can be set:
```c
c2d_set_evol_step(&even_cortex, 0x20U);
c2d_set_pulse_window(&even_cortex, 0x3A);
c2d_set_syngen_beat(&even_cortex, 0.1F);
c2d_set_max_touch(&even_cortex, 0.2F);
c2d_set_sample_window(&even_cortex, sampleWindow);
```
The updated cortex then needs to be copied to the second:

```c
odd_cortex = *c2d_copy(&even_cortex);
```
The two cortices will be updated alternately, one at each iteration step.
The cortex can already be deployed at this point, but it is often useful to set up its inputs and outputs:
```c
// Support variable for input sampling.
ticks_count_t samplingBound = sampleWindow - 1;

// Define an input rectangle (the area of neurons directly attached to domain inputs).
// Since even_cortex and odd_cortex are 2D cortices, inputs are arranged on a 2D surface.
// inputsCoords contains the bound coordinates of the input rectangle as [x0, y0, x1, y1].
cortex_size_t inputsCoords[] = {10, 5, 40, 20};

// Allocate inputs according to the defined area.
ticks_count_t* inputs = (ticks_count_t*) malloc((inputsCoords[2] - inputsCoords[0]) * (inputsCoords[3] - inputsCoords[1]) * sizeof(ticks_count_t));

// Support variable used to keep track of the current step in the sampling window.
ticks_count_t sample_step = samplingBound;
```
Input mapping defines how numerical input values are mapped to spike trains.
Every cortex has a `pulse_window` field: the number of timesteps over which an input value is mapped to a spike train. Spike trains therefore repeat every `pulse_window` timesteps, and their shape depends on the input mapping algorithm in use.
In the following plots, numerical inputs (pre-mapped to a [0.0, 1.0] domain) are on the y-axis and time is on the x-axis. White means a spike, black means no spike.
(Plots of the resulting spike trains for the linear, floored proportional, and rounded proportional mappings, for `pulse_window` values of 10, 20, 100, and 1000.)
Coming soon...
Refer to the API Reference for details on functions and structures.
The examples directory contains some useful use cases.