lalamo

A set of tools for adapting Large Language Models to on-device inference using the uzu inference engine.

Quick Start

To get the list of supported models, run:

uv run lalamo list-models

To convert a model, run:

uv run lalamo convert MODEL_REPO
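
For example, to convert the Gemma-3-1B-Instruct model referenced in the spec below:

uv run lalamo convert google/gemma-3-1b-it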

Note: on some CPU platforms you may get an error saying The precision 'F16_F16_F32' is not supported by dot_general on CPU. This is caused by a bug in XLA that makes matmuls inside jax.jit produce incorrect results on CPUs. The workaround is to set the environment variable JAX_DISABLE_JIT=1 when running the conversion.
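
For example, on a Unix-like shell:

JAX_DISABLE_JIT=1 uv run lalamo convert google/gemma-3-1b-it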

After that, you can find the converted model in the models folder. For more options, see uv run lalamo convert --help.

Model Support

To add support for a new model, write the corresponding ModelSpec, as shown in the example below:

ModelSpec(
    vendor="Google",                       # model vendor / organization
    family="Gemma-3",                      # model family
    name="Gemma-3-1B-Instruct",            # human-readable model name
    size="1B",                             # parameter count label
    quantization=None,                     # pre-quantized variant, if any
    repo="google/gemma-3-1b-it",           # Hugging Face repository ID
    config_type=HFGemma3TextConfig,        # parser for the vendor's HF config
    weights_type=WeightsType.SAFETENSORS,  # checkpoint format
)
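
Once the spec is registered (the exact location depends on the codebase), the new model should show up in the supported list and convert through the standard flow:

uv run lalamo list-models
uv run lalamo convert google/gemma-3-1b-it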

Optional Features

PyAudio

PyAudio enables audio playback for TTS models. It is an optional lalamo feature because it requires PortAudio to be installed on the system.

How to: first install PortAudio (for example, brew install portaudio on macOS, or sudo apt-get install portaudio19-dev on Debian/Ubuntu), then run:

uv run --with pyaudio lalamo path/to/model --replay
