# tinyllama

Here are 101 public repositories matching this topic...

onenm_local_llm is a Flutter plugin that simplifies on-device language model inference on Android using llama.cpp. It handles native runtime setup, model loading, and the inference pipeline, so developers can integrate local AI into their apps through a simple API.

  • Updated Mar 19, 2026
  • C++
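The plugin itself exposes a Dart API, but the underlying llama.cpp workflow it wraps can be sketched from Python via the community llama-cpp-python bindings. Everything below (the bindings, the model file name, the parameters) is illustrative and not part of onenm_local_llm:

```python
# Minimal sketch of local llama.cpp inference with a TinyLlama GGUF model,
# using llama-cpp-python as a stand-in for the plugin's Dart API.
# The model path is illustrative; point it at any local GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
    n_ctx=2048,    # context window
    n_threads=4,   # CPU threads used for inference
)

out = llm(
    "Q: What is on-device inference? A:",
    max_tokens=64,
    stop=["Q:"],   # stop before the model invents a follow-up question
)
print(out["choices"][0]["text"])
```

Quantized GGUF weights are what make this practical on mobile-class hardware: a 4-bit TinyLlama fits in well under a gigabyte of memory.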

The LLM FineTuning and Evaluation project πŸš€ enhances FLAN-T5 models for tasks like summarizing Spanish news articles πŸ‡ͺπŸ‡ΈπŸ“°. It features detailed notebooks πŸ“š on fine-tuning and evaluating models to optimize performance for specific applications. πŸ”βœ¨

  • Updated Oct 6, 2025
  • Jupyter Notebook
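Notebooks like these generally follow the standard Hugging Face seq2seq fine-tuning recipe. Here is a minimal sketch of that recipe, assuming the transformers and datasets libraries; the MLSUM Spanish split, the model size, and all hyperparameters are chosen purely for illustration and are not taken from this repository:

```python
# Illustrative FLAN-T5 fine-tuning sketch for Spanish news summarization.
# Dataset, model size, and hyperparameters are assumptions, not the repo's.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# MLSUM's Spanish split has "text" and "summary" columns; a small slice
# keeps this sketch quick to run.
dataset = load_dataset("mlsum", "es", split="train[:1%]")

def preprocess(batch):
    # The T5-style task prefix steers the model toward summarization.
    inputs = tokenizer(
        ["summarize: " + t for t in batch["text"]],
        max_length=512, truncation=True,
    )
    labels = tokenizer(
        text_target=batch["summary"], max_length=128, truncation=True,
    )
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset.column_names
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-es-summarizer",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        learning_rate=3e-4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

Evaluation in such projects typically generates summaries on a held-out split and scores them with ROUGE.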
