iowolf.dev
Cloud Native Dev Pack
- 2 followers
- United States of America
- @iowolfdev
- b0id.lonewolf.dev@gmail.com
Popular repositories
- forgeram-llama.cpp-rs-backend (Public · forked from ggml-org/llama.cpp)
  Multi-GPU inference runtime for large LLMs on AMD GPUs. Memory-virtualized, layer-sharded, and built for quantized GGUF models.
  Language: C++
Repositories
Showing 3 of 3 repositories
- forgeram-llama.cpp-rs-backend (Public · forked from ggml-org/llama.cpp)
People
This organization has no public members.