
Why LFM2?
Built on a new hybrid architecture, LFM2 sets a new standard in quality, speed, and memory efficiency.
3x Faster Training — New hybrid architecture accelerates training and inference
State-of-the-art Quality — Outperforms similar-sized models on benchmarks
Memory Efficient — Optimized for resource-constrained environments
Deploy Anywhere — Compatible with major inference frameworks and platforms
Get Started
Explore Models
Browse our collection of language models and their capabilities
Inference Guides
Learn how to run models for different use cases and platforms
Fine-tuning
Customize models for your specific requirements
Examples
End-to-end examples for mobile, laptop, and web