• 18 Posts
  • 305 Comments
Joined 3 years ago
Cake day: June 17th, 2023


  • I suggest using llama.cpp instead of ollama; you can easily squeeze ~10% more inference speed, plus other memory optimizations, out of llama.cpp. With hardware prices nowadays, every % saved on resources matters. Here is a simple ansible role to set up llama.cpp; it should give you a good idea of how to deploy it.
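
    A deployment along those lines mostly boils down to running llama-server as a system service. A minimal sketch of such a unit (the binary path, model file, user and port are placeholders; adjust to your setup):

    ```ini
    # /etc/systemd/system/llama-server.service
    [Unit]
    Description=llama.cpp inference server
    After=network.target

    [Service]
    # Unprivileged user that owns the models directory
    User=llama
    ExecStart=/usr/local/bin/llama-server \
        --model /srv/models/gemma-3-4b-it-Q4_K_M.gguf \
        --host 127.0.0.1 --port 8080
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    ```

    Then `systemctl enable --now llama-server` and point a reverse proxy at it.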

    A dedicated inference rig is not gonna be cheap. What I did, since I needed a gaming rig anyway, was get 32GB of DDR5 (this was before the current RAMpocalypse; if I had known, I would have bought 64) and an AMD 9070 (16GB VRAM - again, if I had known how crazy prices would get, I'd probably have bought a 24GB VRAM card). The home server runs the usual/non-AI stuff, and llama.cpp runs on the gaming desktop (the home server just has a proxy to it). Yeah, the gaming desktop has to be powered up when I want to run inference, but it's my main desktop so it's powered on most of the time - no big deal.




  • Email

    Most applications/services offer mail as a notification channel. Even old-school unix utilities such as cron support sending mail (through the system MTA). I use msmtp. Then configure K-9 Mail or any decent mail client on your phone, and set up filters so that mail from your services ends up in a high-priority folder in your mailbox with notifications enabled.
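
    A minimal msmtp configuration sketch for this kind of setup (the account name, SMTP host and addresses below are placeholders):

    ```ini
    # ~/.msmtprc (or /etc/msmtprc for system-wide use); chmod 600
    defaults
    auth           on
    tls            on
    logfile        ~/.msmtp.log

    account        homelab
    host           smtp.example.org
    port           587
    from           alerts@example.org
    user           alerts@example.org
    passwordeval   "cat ~/.msmtp-password"

    account default : homelab
    ```

    With msmtp installed as the system sendmail (the msmtp-mta package on Debian), cron jobs deliver their output as mail automatically; a MAILTO=admin@example.org line at the top of the crontab controls the recipient.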

    I want to be able to receive notifications both on mobile and desktop; this is the only reasonable option I found, and I have been running with it for over 10 years.











  • There are better alternatives. podman is daemonless and rootless by default, comes with a docker-compatible CLI, and has far better container network implementations. It is also provided as a stable/LTS package in the Debian repositories, so you won't have to upgrade your container runtime every two days (causing downtime), as running the upstream Docker package does.

    The only reason to keep using Docker nowadays is if you have a lot of legacy apps that depend on Docker-specific features (e.g. require rootful containers). For most workflows it is just a matter of alias docker=podman. If you use docker-compose, you do need to port your setup to podman quadlets (systemd-managed containers), though. For me it was worth it.
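
    For reference, a quadlet is just a small unit file; a sketch of a rootless one (the image and ports are example values):

    ```ini
    # ~/.config/containers/systemd/whoami.container
    # Picked up by `systemctl --user daemon-reload`
    [Unit]
    Description=Example web container

    [Container]
    Image=docker.io/traefik/whoami:latest
    PublishPort=8080:80

    [Service]
    Restart=on-failure

    [Install]
    WantedBy=default.target
    ```

    Quadlet generates a whoami.service from this, so `systemctl --user start whoami` takes the place of `docker compose up -d`.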


    • Small 4B models like gemma3 will run on anything (I have it running on a 2020 laptop with integrated graphics). Don’t expect superintelligence, but it works for basic classification tasks, writing/reviewing/fixing small scripts, basic chat and writing, etc.
    • I use https://github.com/ggml-org/llama.cpp in server mode, pointing to a directory of GGUF model files downloaded from Hugging Face. I access it from the built-in web interface or the API (I wrote a small assistant script).
    • To load larger models you need more RAM (preferably fast VRAM on a GPU, but DDR5 on the motherboard will work - it will be noticeably slower). My gaming rig with an AMD 9070 (16GB VRAM) runs 20-30B models at decent speeds. You can grab quantized (lower precision, lower output quality) versions of those larger models if the full-size/unquantized models don’t fit. Check out https://whatmodelscanirun.com/
    • For image generation I found https://github.com/vladmandic/sdnext, which works extremely well and fast with Z-Image Turbo, FLUX.1-schnell, Stable Diffusion XL and a few other models.
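
    A rough rule of thumb for whether a model fits in memory: parameter count × bytes per weight, plus some headroom for the KV cache and activations. A back-of-the-envelope sketch (the ~20% overhead factor is my own rough assumption, not an exact figure):

    ```python
    def model_footprint_gb(params_billion: float, bits_per_weight: float,
                           overhead: float = 1.2) -> float:
        """Rough RAM/VRAM estimate: weight bytes plus ~20% for KV cache etc."""
        weight_bytes = params_billion * 1e9 * bits_per_weight / 8
        return weight_bytes * overhead / 1e9

    # A 30B model at full 16-bit precision vs. a 4-bit quantization:
    print(round(model_footprint_gb(30, 16), 1))  # ~72.0 GB - out of reach
    print(round(model_footprint_gb(30, 4), 1))   # ~18.0 GB - borderline on 16GB VRAM
    ```

    That borderline 4-bit case is why 20-30B quantized models are about the ceiling on a 16GB card, with some layers spilling over to system RAM.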

    As for prices… well, the rig I bought for ~1500€ in September is now up to ~2200€ (a once-in-a-decade investment). It’s not a beast, but it works; the primary use case was general computing and gaming, and I’m glad it also works for local AI. Costs for a dedicated, performant AI rig are ridiculously high right now, and it’s not yet economically competitive against commercial LLM services for complex tasks - but that’s not the point. Check https://old.reddit.com/r/LocalLLaMA/ (yeah, reddit, I know): 10k€ of hardware to run ~200-300B models, not counting electricity bills.




  • I’m in the same boat, running a Gitlab Mattermost instance for a small team.

    Gitlab has not yet announced what will happen with the bundled Mattermost, but I guess it will either be dropped entirely or hit by the new limitations. What will hit us the hardest is the 10000-most-recent-messages limit: anything older will be hidden behind a paywall, including messages sent before the new limitations come into effect - borderline ransomware if you ask me.

    I know there are forks that remove the limitation; I may end up using one of those if the migration path is not too rough.

    I used to run a Rocket.Chat instance for another org; it became open-core bullshit as well. I’m done with this stuff.

    I have a small, non-federated personal Matrix + Element instance that barely gets any use (but lets me get a feeling for what it can do) - and I don’t like it one bit. The tech stack is weird, the Element frontend receives constant updates/new releases that are painful to keep up with, and, more importantly, the UX is confusing and bad.

    So I think I’ll end up switching this one for an XMPP server. I haven’t decided precisely which server or which components around it. I used to run prosody with thick clients a whiiille ago and it was OK. Nextcloud Talk might also work.

    My needs are simple: group channels, 1-to-1 chat, posting files to a channel, ideally temporary many-to-many chats, and a decent web UI.

    Voice capabilities would be a bonus (I run and use a Mumble server and it absolutely rules once you’ve configured the client, but it doesn’t integrate properly with anything else and has no web UI), as would some kind of integration with my Jitsi Meet instance. E2E encryption would be nice but isn’t mandatory. Semi-decent mobile clients would be nice too.

    For now, wait and see.








  • A full-blown Samba domain is complete overkill if you don’t have a fleet of Windows machines.

    You can get centralized user management with a simple LDAP server or similar; no need for a domain.

    Also, snapshot-based backups have limited uses (you can’t easily restore a single file, and they eat quite a bit of storage). The only times I actually needed backups were because I fucked up a single application or database; I don’t want to roll back the whole OS/data drive for that.