We've been very happy to see Mamba adopted by many organizations and research labs to speed up their training and inference. This page contains a partial list of places where Mamba is being used. If you'd like to add links to your organization, product, or codebase, please open a PR or email us. We'd very much like to hear from you!
- vLLM
- Nvidia's TensorRT-LLM
- Nvidia GPUs