Local AI anywhere, for everyone — LLM inference, chat UI, voice, agents, workflows, RAG, and image generation. No cloud, no subscriptions.
This is a mirror of the Strix Halo HomeLab wiki; to browse the wiki, click the link below.
Bare-metal AI platform for AMD Strix Halo. One script. Everything works. Lego blocks — snap in what you need.
Experimental support for many TTS/STT models wrapped in a Wyoming API for consumption via Home Assistant
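The Wyoming protocol that Home Assistant's integration speaks is newline-delimited JSON over TCP: the client sends a "describe" event and the service answers with an "info" event advertising its TTS/STT capabilities. A minimal sketch of that handshake is below; the host and port are assumptions, not values from this repository.

```python
# Minimal sketch: probe a Wyoming TTS/STT service the way Home Assistant's
# Wyoming integration does on setup -- send a "describe" event and read back
# the "info" reply. Host and port are hypothetical; adjust for your service.
import json
import socket

HOST, PORT = "127.0.0.1", 10300  # hypothetical Wyoming service address

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # Wyoming events are newline-terminated JSON headers (JSONL).
    sock.sendall(json.dumps({"type": "describe"}).encode() + b"\n")
    reply = sock.makefile("rb").readline()
    info = json.loads(reply)
    print(info.get("type"))              # expected: "info"
    print(json.dumps(info, indent=2))    # advertised ASR/TTS capabilities
```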
Sixunited AXB35 EC control & monitoring for Windows
The definitive Strix Halo LLM guide — 65 t/s on a $2,999 mini PC. Live benchmarks, tested optimizations, and everything that doesn't work.
A comprehensive guide to running Linux (Omarchy/Arch) on the 2025 ASUS ROG Flow Z13 (AMD Strix Halo). Includes CachyOS Kernel setup, Tablet Mode fixes, and Power Management for the Ryzen AI Max
Tools and documentation related to the AMD Strix Halo APU family (Ryzen AI Max 395) of systems. Tested on GMKtec EVO-2
Simple installer script which takes a download (if newer) and installs it globally. Sets up Vulkan support
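The "download only if newer" idea can be sketched as follows; the release URL and destination path are placeholders, not this script's real values.

```python
# Sketch of the "download only if newer" check behind such an installer script.
# The URL and destination path are placeholders.
import email.utils
import os
import urllib.request

URL = "https://example.com/releases/llama-vulkan.tar.gz"   # hypothetical release URL
DEST = "/tmp/llama-vulkan.tar.gz"

# Ask the server for the remote file's timestamp without downloading it.
head = urllib.request.urlopen(urllib.request.Request(URL, method="HEAD"))
last_modified = head.headers.get("Last-Modified")
remote_ts = (
    email.utils.parsedate_to_datetime(last_modified).timestamp()
    if last_modified else None
)

local_ts = os.path.getmtime(DEST) if os.path.exists(DEST) else None

if local_ts is None or (remote_ts is not None and remote_ts > local_ts):
    print("Newer build available, downloading...")
    urllib.request.urlretrieve(URL, DEST)
    # A real installer would now unpack DEST and install it globally.
else:
    print("Local copy is up to date.")
```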
Ansible playbook to configure AMD Strix Halo machines (e.g. Framework Desktop or GMKtec EVO-X2) as local AI inference servers running Fedora 43. Sets up llama.cpp with llama-swap and Open WebUI and downloads GGUF models. With NGINX reverse proxy and TLS via ACME or self-signed certificate.
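Once such a stack is up, llama-swap and llama.cpp expose an OpenAI-compatible API behind the NGINX reverse proxy. A sketch of listing the configured GGUF models is below; the hostname and the self-signed-certificate handling are assumptions about a typical deployment, not values from the playbook.

```python
# Sketch: list the GGUF models llama-swap can serve via the OpenAI-compatible
# /v1/models endpoint behind NGINX. Hostname and TLS handling are assumptions.
import json
import ssl
import urllib.request

BASE_URL = "https://ai-server.lan"   # hypothetical reverse-proxy address

# Accept a self-signed certificate for this sketch; use proper CA trust in practice.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(f"{BASE_URL}/v1/models", context=ctx) as resp:
    models = json.load(resp)

for entry in models.get("data", []):
    print(entry["id"])
```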
Claude Code skill for AMD Strix Halo (Ryzen AI MAX+ 395) ML setup. Handles PyTorch installation (official wheels don't work with gfx1151), GTT memory config, and environment setup. Enables 30B parameter models.
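A quick sanity check that a ROCm build of PyTorch actually sees the Strix Halo iGPU can look like the sketch below. The HSA_OVERRIDE_GFX_VERSION value is an assumption: it is a commonly cited workaround when installed wheels lack gfx1151 kernels, not necessarily what this skill configures.

```python
# Sketch: verify a ROCm PyTorch build can see and use the Strix Halo iGPU.
# HSA_OVERRIDE_GFX_VERSION is an assumed workaround for missing gfx1151 kernels;
# it must be set before torch is imported.
import os

os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "11.0.0")  # assumption: report as gfx1100

import torch

print("torch:", torch.__version__)
print("HIP available:", torch.cuda.is_available())   # ROCm builds report through the CUDA API
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    # Run a small matmul on the GPU to confirm kernels actually execute.
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul ok:", (x @ x).shape)
```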
llama.cpp setup on dedicated AMD Strix Halo machine
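With llama.cpp's `llama-server` running on the dedicated machine, clients talk to it over its OpenAI-compatible API. A minimal sketch is below; the host, port, and model name are assumptions.

```python
# Sketch: send a chat completion to a llama.cpp `llama-server` instance on the
# Strix Halo box via its OpenAI-compatible API. Host, port, and model name are
# hypothetical.
import json
import urllib.request

URL = "http://strix-halo.local:8080/v1/chat/completions"   # hypothetical address

payload = {
    "model": "local-model",   # placeholder; a single-model server largely ignores this
    "messages": [{"role": "user", "content": "Say hello from Strix Halo."}],
    "max_tokens": 64,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```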
Talos-O (Omni): A sovereign, embodied agentic organism forged on AMD Strix Halo. Integrating the Chimera Kernel (Linux 7.0), Zero-Copy Introspection, and the Phronesis Engine. Built from First Principles.
Local LLM benchmarks on AMD Strix Halo — 26+ models tested across RADV, AMDVLK, and ROCm with llama.cpp
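A RADV vs AMDVLK vs ROCm comparison of this kind can be driven with llama.cpp's `llama-bench`. The sketch below is one way to do it, not this project's harness: binary and model paths are placeholders, and selecting the Vulkan driver per run via the ICD environment variable is an assumption about the local setup.

```python
# Sketch: run llama-bench once per backend to compare RADV, AMDVLK, and ROCm.
# Paths are placeholders; the Vulkan driver is chosen per run via VK_ICD_FILENAMES.
import os
import subprocess

MODEL = "models/llama-3.1-8b-instruct-q4_k_m.gguf"   # hypothetical GGUF path
RUNS = {
    # name: (llama-bench binary, extra environment)
    "radv":   ("build-vulkan/bin/llama-bench",
               {"VK_ICD_FILENAMES": "/usr/share/vulkan/icd.d/radeon_icd.x86_64.json"}),
    "amdvlk": ("build-vulkan/bin/llama-bench",
               {"VK_ICD_FILENAMES": "/usr/share/vulkan/icd.d/amd_icd64.json"}),
    "rocm":   ("build-rocm/bin/llama-bench", {}),
}

for name, (binary, extra_env) in RUNS.items():
    print(f"=== {name} ===")
    env = {**os.environ, **extra_env}
    # -p/-n set prompt and generation lengths; -ngl 99 offloads all layers to the GPU.
    subprocess.run(
        [binary, "-m", MODEL, "-p", "512", "-n", "128", "-ngl", "99"],
        env=env, check=True,
    )
```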
Monitoring app showing important ROCm-related metrics in a browser window. Provides a /metrics endpoint
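The /metrics endpoint can be scraped programmatically as well as by Prometheus. A sketch is below; the host, port, and the Prometheus text exposition format are assumptions.

```python
# Sketch: scrape the app's /metrics endpoint and print the metric samples.
# Host, port, and the Prometheus text format are assumptions.
import urllib.request

URL = "http://strix-halo.local:9090/metrics"   # hypothetical address

with urllib.request.urlopen(URL) as resp:
    text = resp.read().decode()

for line in text.splitlines():
    # Skip "# HELP" / "# TYPE" comment lines; print only name/value samples.
    if line and not line.startswith("#"):
        print(line)
```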
Sample application showing use of Farscape bindings for Linux/AMD generated binding libraries