• 3 Posts
  • 25 Comments
Joined 3 years ago
Cake day: August 5th, 2023

  • That’s a cool docker compose setup and is definitely competitive with a single node k8s deployment I run for hobby projects (k3s). The simplicity of this docker compose setup is an advantage, but it is missing some useful features compared to k8s:

    • No automatic Let’s Encrypt certificates for Nginx (on k8s, cert-manager handles this).
    • No GitOps support; the deployment shell script is run manually.
    • No management UIs such as Lens or k9s.
    • Will not scale to multiple machines without switching technology to k8s or Nomad.

    That said, I would be happy to run their setup if I hadn’t gone through the pain of learning k8s.
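    As a point of comparison, the cert-manager feature mentioned above usually amounts to one ClusterIssuer plus an annotated Ingress. A minimal sketch (the email address and issuer name are placeholders, and the exact solver fields depend on your cert-manager version):

    ```yaml
    # Minimal cert-manager ClusterIssuer for Let's Encrypt (HTTP-01 via nginx).
    # The email below is a placeholder.
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: admin@example.com
        privateKeySecretRef:
          name: letsencrypt-prod-key
        solvers:
          - http01:
              ingress:
                ingressClassName: nginx
    ```

    An Ingress then just references the issuer via the cert-manager.io/cluster-issuer annotation and a tls section.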







  • I do this for sites where I don’t care at all about security. One minor tip that will protect against automated attacks if the password is cracked: add part of the website name into the password (e.g. “mystrongp4ss!lemworld”).

    A human could easily spot the pattern, but automated systems that replay the password on different sites would probably not bother to work it out.



  • alienscience to Rust: Kellnr has a new UI (1 year ago)

    At $work we write closed source Rust but we do not use Kellnr.

    Instead we use a mono-repo with a Cargo workspace that contains most of our applications and libraries.

    Our setup is mostly OK but needs some workarounds for problems we have hit:

    • Slow cargo clean && cargo build. To speed this up we use sccache.
    • Very slow Docker builds. To speed these up we use cargo chef.
    • Slow CI/CD. To speed this up we use AWS instances as GitHub runners that we shut down, but do not destroy, after use. This lets us cache build dependencies for faster builds.

    I am generally happy with our setup, but then I am a fan of mono-repos. If it ever becomes too difficult to keep compile times reasonable, I think we would definitely look at Kellnr.
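    For anyone curious about the cargo chef trick mentioned above, it is a multi-stage Dockerfile that builds dependencies in a cacheable layer before copying in the source. A sketch along the lines of the cargo-chef README (image tags and the binary name "myapp" are placeholders):

    ```dockerfile
    # Sketch of the cargo-chef pattern for caching Rust dependencies in Docker.
    FROM rust:1 AS chef
    RUN cargo install cargo-chef
    WORKDIR /app

    FROM chef AS planner
    COPY . .
    # Produce a dependency "recipe" from Cargo.toml/Cargo.lock.
    RUN cargo chef prepare --recipe-path recipe.json

    FROM chef AS builder
    COPY --from=planner /app/recipe.json recipe.json
    # Build only dependencies; this layer is cached until deps change.
    RUN cargo chef cook --release --recipe-path recipe.json
    COPY . .
    RUN cargo build --release

    FROM debian:bookworm-slim
    COPY --from=builder /app/target/release/myapp /usr/local/bin/myapp
    CMD ["myapp"]
    ```

    Because the cook step only depends on the recipe, source-only changes reuse the cached dependency layer.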




  • LOL, yes. Just in case it is of interest:

    • ESP32-S3 is the chip; this family usually comes with a CPU + Bluetooth + Wi-Fi.
    • Reverse TFT means a small display mounted on the opposite side of the circuit board from the chip.
    • w.FL antenna refers to the connector for the Wi-Fi antenna.

    I like these small boards; they are tiny and I need a magnifying glass for soldering. It’s mind-blowing that these tiny boards are more powerful than the mainframe computers that used to fill a room while supporting 20 users.






  • Just to add to this point. I have been running a separate namespace for CI and it is possible to limit total CPU and memory use for each namespace. This saved me from having to run a VM. Everything (even junk) goes onto k8s isolated by separate namespaces.

    If limits and namespaces like this are interesting to you, the k8s resources to read up on are ResourceQuota and LimitRange.
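    A minimal sketch of those two resources for a CI namespace (the namespace name and the numbers are placeholders, not a recommendation):

    ```yaml
    # Cap the total CPU/memory the "ci" namespace can request and use.
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: ci-quota
      namespace: ci
    spec:
      hard:
        requests.cpu: "4"
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi
    ---
    # Give containers that don't declare resources sensible defaults,
    # so the quota can actually be enforced.
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: ci-defaults
      namespace: ci
    spec:
      limits:
        - type: Container
          default:
            cpu: 500m
            memory: 512Mi
          defaultRequest:
            cpu: 100m
            memory: 128Mi
    ```

    Note that a ResourceQuota on cpu/memory rejects pods that don't declare requests and limits, which is why the LimitRange with defaults is useful alongside it.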


  • I am not sure if it is best practice, but this is what I do and it might provide some inspiration:

    • Bootstrap from a private gitlab.com repository with a base Ansible setup, executed from a laptop.
    • The bootstrap sets up k8s and installs a bare-bones git repository docker container based on https://codeberg.org/al13nsc13nc3/gitsrv.
    • Flux CD is installed into the bare-bones git repository and k8s.
    • Flux CD is used to install Forgejo and Woodpecker CI, using the bare-bones git repository as the GitOps source of truth.

    This has the advantage that GitOps and normal git repositories are separate. I think a similar principle would work with docker compose instead of k8s.
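    The Flux CD wiring described above boils down to a GitRepository source plus a Kustomization that applies a path from it. A sketch, assuming the Flux controllers are already installed (the URL, secret name, and path are placeholders):

    ```yaml
    # Point Flux at the bare-bones git server as the source of truth.
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: gitops-source
      namespace: flux-system
    spec:
      interval: 1m
      url: ssh://git@gitsrv.internal/gitops.git
      ref:
        branch: main
      secretRef:
        name: gitops-ssh-key
    ---
    # Reconcile the manifests under ./apps (e.g. Forgejo, Woodpecker CI).
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      path: ./apps
      prune: true
      sourceRef:
        kind: GitRepository
        name: gitops-source
    ```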




  • I looked at Tekton, but the complexity of doing simple things put me off. I have been running Woodpecker, which now has Kubernetes support.

    Installing the Helm chart for the Woodpecker agent gives k8s support with no special configuration needed. My needs are simple, but I have been really impressed with how easy it has been.
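    For a rough idea of what the agent needs, the configuration is driven by environment variables. A hypothetical Helm values sketch (the hostnames and secret are placeholders; check the chart's own values.yaml for the actual keys and defaults):

    ```yaml
    # Hypothetical values for the Woodpecker agent: where the server is,
    # the shared agent secret, and the Kubernetes backend.
    agent:
      env:
        WOODPECKER_SERVER: woodpecker-server:9000
        WOODPECKER_AGENT_SECRET: change-me
        WOODPECKER_BACKEND: kubernetes
    ```

    With the Kubernetes backend, each pipeline step runs as a pod in the cluster rather than in a local Docker daemon.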