A unified PyTorch library providing easy access to state-of-the-art Linear RNN architectures for sequence modeling. The technical report for this system was accepted to the EACL Student Research Workshop 2026. We recommend reading the report before using or contributing to the library.
```bash
# standard installation
pip install lrnnx
# with optional causal-conv1d
pip install "lrnnx[conv1d]"
# for development
pip install "lrnnx[dev]"
```

We recommend installing PyTorch first, matching your specific CUDA version. After that, install our library using `--no-build-isolation`:

```bash
pip install lrnnx --no-build-isolation
```

We recommend installation with uv, though standard pip is also supported.
```bash
git clone https://github.com/SforAiDl/lrnnx.git
cd lrnnx
# standard installation
uv sync
# with optional causal-conv1d
uv sync --extra conv1d
# for development
uv sync --extra dev
```

```bash
git clone https://github.com/SforAiDl/lrnnx.git
cd lrnnx
# standard installation
pip install -e . --no-build-isolation
# with optional causal-conv1d
pip install -e ".[conv1d]" --no-build-isolation
# for development
pip install -e ".[dev]" --no-build-isolation
```

Note that because our library builds several custom CUDA kernels, the installation can take time to finish.
Along with causal-conv1d, the full installation can take about 30 minutes, depending on the number of CPUs available.
Our library provides implementations of the following Linear RNN architectures:
- S4
- S4D
- S5
- Event-SSM (inside S5, use by passing `integration_timesteps`)
- LRU
- S6 (we implement other discretizations as well)
- STREAM (inside S6, use by passing `integration_timesteps`)
- RG-LRU
- S7
- aTENNuate
We expose several levels of API for each model, including a scan, a recurrent step, and a full layer API matching the paper. For S5 we implement both a convolution-based approach and a parallel-scan approach. The latter is more stable and faster for most use cases, but the convolution-based approach can be faster for very long sequences.
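The parallel scan works because a linear recurrence composes associatively: each pair `(a, b)` represents the affine map `h -> a*h + b`, and two such maps combine into one. A minimal, framework-free sketch of this idea (pure Python for illustration; these are not lrnnx's actual kernels, and all names here are made up):

```python
# Illustrative sketch: the linear recurrence h_t = a_t * h_{t-1} + b_t
# can be computed with an associative scan, because affine maps compose:
#   (a2, b2) after (a1, b1)  ==  (a2 * a1, a2 * b1 + b2)

def combine(left, right):
    a1, b1 = left
    a2, b2 = right
    return (a2 * a1, a2 * b1 + b2)

def sequential_scan(a, b, h0=0.0):
    # Plain left-to-right recurrence: O(n) sequential steps.
    h, out = h0, []
    for at, bt in zip(a, b):
        h = at * h + bt
        out.append(h)
    return out

def parallel_style_scan(a, b, h0=0.0):
    # Inclusive scan via repeated pairwise combination (Hillis-Steele):
    # O(log n) combination rounds, each fully parallelizable.
    elems = list(zip(a, b))
    n, step = len(elems), 1
    while step < n:
        elems = [
            combine(elems[i - step], e) if i >= step else e
            for i, e in enumerate(elems)
        ]
        step *= 2
    # After the scan, elems[t] maps the initial state directly to h_t.
    return [at * h0 + bt for at, bt in elems]

a = [2, 3, 1, 2]
b = [1, -1, 3, 5]
assert sequential_scan(a, b) == parallel_style_scan(a, b)  # both [1, 2, 5, 15]
```

The same combine rule underlies scan-based SSM layers such as S5; the real implementations vectorize it over batch, state, and channel dimensions on the GPU.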
It is easy to instantiate a model from our library:
```python
import torch

from lrnnx.models.lti import LRU
from lrnnx.models.ltv import Mamba

model_lti = LRU(d_model, d_state).cuda()
x = torch.randn(batch_size, seq_len, d_model, dtype=torch.float32, device="cuda")
output = model_lti(x)

model_ltv = Mamba(d_model, d_state).cuda()
x = torch.randn(batch_size, seq_len, d_model, dtype=torch.float32, device="cuda")
output = model_ltv(x)
```

Linear RNNs in PyTorch require special handling during inference. Following Mamba, we also implement CUDA-graph-based inference, which reduces CPU overheads and yields a more than 10x speedup compared to a simple for loop over the sequence length. The main file is `generation.py`, which provides a simple API for autoregressive generation with any of the models in our library. You can see a simple way to use it in our benchmarking script.
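To see why the naive loop is so slow, note that autoregressive decoding repeats one tiny, fixed-shape state update per token, so per-step launch and Python overhead dominates on a GPU; CUDA graphs record that step once and replay it cheaply. A toy, pure-Python sketch of the per-token step for a diagonal linear RNN (names are illustrative, not the lrnnx API):

```python
# Toy diagonal linear RNN decode step: h <- a * h + b * x,  y = c . h
# (pure Python for illustration; the real step runs as a GPU kernel).

def step(state, x, a, b, c):
    # One fixed-shape state update per generated token.
    new_state = [ai * hi + bi * x for ai, hi, bi in zip(a, state, b)]
    y = sum(ci * hi for ci, hi in zip(c, new_state))
    return new_state, y

def decode(xs, a, b, c):
    # Naive decoding: one small step per token. This per-step overhead
    # is exactly what CUDA-graph capture-and-replay amortizes away.
    state = [0.0] * len(a)
    ys = []
    for x in xs:
        state, y = step(state, x, a, b, c)
        ys.append(y)
    return ys

# With a = b = c = [1.0], the output is a running sum of the inputs.
assert decode([1.0, 2.0, 3.0], [1.0], [1.0], [1.0]) == [1.0, 3.0, 6.0]
```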
This script will run both training and inference benchmarks:

```bash
python -m benchmarks.run_all
```

We also implement some common architectures based on the models in our library, such as a U-Net (inspired by aTENNuate) and a hierarchical classifier (inspired by Event-SSM). Additionally, there is a language model architecture inspired by Mamba and RG-LRU, with replaceable LRNN and attention layers, which can be used for language modeling tasks. This can be used as follows:
```python
from lrnnx.models.language_model import LRNNLMHeadModel

model = LRNNLMHeadModel(
    d_model, d_state, num_layers, vocab_size, mixer_types=["s5", "s6", "attn"]
)
input_ids = torch.randint(0, vocab_size, (batch_size, seq_len))
logits = model(input_ids)
```

Based on these architectures, there are tutorials on how to use them for two very popular use cases:
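A language model head like the one above is typically driven by a greedy (or sampled) decoding loop: feed the sequence, pick the next token from the logits, append, repeat. A self-contained sketch of that loop with a stand-in scoring function (both function names here are hypothetical; the library's `generation.py` provides the real, CUDA-graph-backed version):

```python
# Greedy decoding loop with a toy stand-in for the model's logits.
# In practice the logits would come from an LM head such as LRNNLMHeadModel.

def toy_next_token_logits(ids, vocab_size=5):
    # Toy scoring rule, chosen only to make the loop deterministic:
    # score candidate v as (last_id + v) mod vocab_size.
    return [(ids[-1] + v) % vocab_size for v in range(vocab_size)]

def greedy_generate(prompt_ids, num_new_tokens, vocab_size=5):
    ids = list(prompt_ids)
    for _ in range(num_new_tokens):
        logits = toy_next_token_logits(ids, vocab_size)
        # Greedy choice: take the argmax over the vocabulary.
        ids.append(max(range(vocab_size), key=lambda v: logits[v]))
    return ids

assert greedy_generate([1], 3) == [1, 3, 1, 3]
```

With a recurrent model, each iteration only needs the per-token step (plus the carried state) rather than re-running the full sequence, which is what makes linear RNN decoding cheap.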
Please check out our Contributing Guide for details on how to contribute to this project.
If you use lrnnx in your research, please cite:
```bibtex
@misc{bania2026textttlrnnxlibrarylinearrnns,
      title={$\texttt{lrnnx}$: A library for Linear RNNs},
      author={Karan Bania and Soham Kalburgi and Manit Tanwar and Dhruthi and Aditya Nagarsekar and Harshvardhan Mestha and Naman Chibber and Raj Deshmukh and Anish Sathyanarayanan and Aarush Rathore and Pratham Chheda},
      year={2026},
      eprint={2602.08810},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.08810},
}
```

This project is licensed under the MIT License - see the LICENSE file for details.
This library builds upon the excellent work of researchers who developed the individual LRNN models. Please see individual model documentation for proper citations of the original papers.