• 0 Posts
  • 445 Comments
Joined 3 years ago
Cake day: June 30th, 2023




  • [I]f you throw a frog in a pot of boiling water, he’ll jump right out. But if you stick that same frog in the water when it’s at room temperature, he’ll just sit there. He won’t move because everything’s fine. Then you put the pot on the heat. The temperature goes up, and still the frog doesn’t jump because it’s only a degree hotter than before. Eventually, the frog dies, boiled alive. When the frog was thrown in the boiling water, he immediately knew he was in danger. But because of the incrementalism of the heat from room temperature, he didn’t realize he was in danger until it was too late.

    If it used to be okay, but it’s not okay anymore, then maybe you should do something about it. Don’t compare your circumstances with how they were yesterday. Look at how they were years ago. We’re supposed to be making the world… The universe… A better place for our children. If it’s not better, if you’re dealing with cruelty, with neglect, then you should do something about it. So, yeah. Fuck 'em. Fuck [dictator’s name] and his asshole children. If you’re unhappy with your government, then kick them out and set up your own, one that represents the people’s best interests. You shouldn’t have to put up with some loser who’s going to take the people’s money and waste it on games, especially when those games entail killing people weaker than him with little or no real danger to himself. What a pussy. That’s my opinion.

    - Dungeon Crawler Carl



  • sloppy_diffuser@sh.itjust.works to Linux@lemmy.ml · *Permanently Deleted* · 26 days ago

    NixOS. Started with Yellow Dog Linux in 1998.

    I don’t do everything through nix’s derivation system.

    Many of my configs are just an outOfStoreSymlink to my configs in the same dotfiles repo. I don’t need every change wrapped in a derivation. Neovim is probably the largest. For a few Node projects for automations, I’m fine using pnpm to manage them. Nix still places everything, but I can tweak those configs and commit them in the same repo without a full-blown activation.
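For reference, the out-of-store-symlink pattern looks roughly like this under home-manager (the path and file names here are purely illustrative, not my actual layout):

```nix
# Sketch only: expose ~/dotfiles/nvim as the live Neovim config, so
# edits apply immediately instead of requiring a rebuild/activation.
# Path and attribute names are examples.
{ config, ... }:
{
  xdg.configFile."nvim".source =
    config.lib.file.mkOutOfStoreSymlink
      "${config.home.homeDirectory}/dotfiles/nvim";
}
```

The trade-off is purity: the store only holds a symlink, so those config contents are no longer pinned by a derivation.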





  • With these sorts of tasks, models really seem to suffer from not knowing which packages or conventions have been deprecated. This is really obvious in an immature ecosystem like Nix.

    This is where custom setups will start to shine.

    https://github.com/upstash/context7 - Pulls version-specific package documentation.

    https://github.com/utensils/mcp-nixos - Similar to the above, but for Nix (including version-specific queries) and with more sources.

    https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking - Break down problems into multiple steps instead of trying to solve it all at once. Helps isolate important information per step so “the bigger picture” of the entire prompt doesn’t pollute the results. Sort of simulates reasoning. Instead of finding the best match for all keywords, it breaks the queries down to find the best matches per step and then assembles the final response.

    https://github.com/CaviraOSS/OpenMemory - Long conversations tend to suffer as the working memory (context) fills up: it gets compressed and details are lost. With this (and many other similar tools) you can have it remember and recall things, with or without a human in the loop to validate what’s stored. Great for complex planning or recalling details. I essentially have a loop set up with global instructions to periodically emit reinforced, codified instructions to a file (e.g., AGENTS.md) with human review. Combined with sequential thinking, it will identify contradictions and prompt me to resolve any ambiguity.

    The quality of the output is like going from 80% to damn near 100% as your knowledge base grows from external memory and codified instructions in files. I’m still lazy sometimes and will use something like Kagi assistant for a quick question or web search, but they have a pretty good baseline setup with sequential thinking in their online tooling.



  • It’s really not that different from a traditional web search under the hood. It’s basically a giant index and my input navigates the results based on probability of relevance. It’s not “thinking” about me or deciding what I should see. When I say a good assistant setup, I mean I don’t use Gemini or ChatGPT or any of the prepackaged stuff that tries to build a profile on you. I run my own setup, pick my own models, and control what context they get. If you check my post history I’m heavily privacy conscious, I’m not handing that over to Google or OpenAI.

    The summary helps me evaluate whether my input was good and the results are actually relevant to what I’m after, without wading through 20 minutes of SEO garbage to get there. For me it’s like getting the quality results you used to get before search got enshittified. It actually surfaces stuff that doesn’t even show up on the front page of a traditional search anymore.


  • I’m in software development and land on both sides of this argument.

    Having to review or maintain AI slop is infuriating.

    That said, it has replaced traditional web searching for me. A good assistant setup can run multiple web searches for me, distill the useful info while cutting through the blog spam and ads, run follow-up searches for additional info if needed, and summarize the results in seconds, with references if I want to validate its output.

    There was a post a couple of days ago about one solving a hard math problem with guidance from a mathematician. It sparked a discussion about AI being a powerful tool in the right hands.



  • Totally agree with your overall point.

    That said, I have to come to the defense of my terminal UI (TUI) comrades with some anecdotal experience.

    I’ve got all the same tools in Neovim as my VSCode/Cursor colleagues, with a deeper understanding of how it all works under the hood.

    They have no idea what an LSP is. They just know the marketing buzzword “IntelliSense.” As we build out our AI toolchains, it doesn’t even occur to them that an agent can talk to an LSP to improve code generation because all they know are VSCode extensions. I had to pick and evaluate my MCP servers from day one as opposed to just accepting the defaults, and the quality of my results shows it. The same can be done in GUI editors, but since you’re never forced to configure these things yourself, the exposure is just lower. I’ve had to run numerous trainings explaining that MCPs are traditionally meant to be run locally, because folks haven’t built the mental model that comes with wiring it all up yourself.

    Again, totally agree with your overall point. This is more of a PSA for any aspiring engineers: TUIs are still alive and well.



  • Firefox Nightly + arkenfox userjs + uBlock Origin + Bitwarden as my daily driver.

    It’s been a couple of years since I checked whether arkenfox is still in good shape. I get flagged as a bot all the time and constantly get popups about WebGL (GPU fingerprinting), so I assume it’s working as intended for my threat model.

    Tails when I really care.

    Mullvad VPN as my regular VPN with ProtonVPN for torrents.

    GrapheneOS / NixOS as my OS.

    Proton Visionary for most cloud services, except passwords, and I don’t really use Proton Drive. I do use Proton Pass for unique email aliases for every provider.

    Kagi for searches / AI.

    Etesync for contacts because Proton didn’t sync with the OS last I checked.

    Backblaze B2 for cloud storage, with my own encryption via rclone (Round Sync on GrapheneOS).

    KeePass for a few things like my XMR wallets and master passwords I don’t even trust to Bitwarden.

    https://jmp.chat/ for my mobile provider.

    Pihole with encrypted DNS to Quad9.

    https://onlykey.io/ for the second half of my sensitive passwords (Bitwarden, LUKS, KeePass, OS login). The first half is memorized.

    It’s a lot. I burned myself out a couple of years ago keeping up with optimizing privacy, and this setup has served me well for two years without really changing anything. The cloud services are grey areas in terms of privacy, but the few ads that leak through uBlock have zero relevance to anything about me.
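For anyone curious, the B2-plus-rclone piece above can be sketched like this (remote names, bucket, and credentials are placeholders, not my actual config):

```shell
# Sketch only: a raw B2 remote wrapped in a crypt remote, so Backblaze
# only ever sees encrypted file names and contents.
rclone config create b2raw b2 account YOUR_KEY_ID key YOUR_APP_KEY
rclone config create b2crypt crypt \
    remote b2raw:my-bucket/backups \
    password 'a-long-random-passphrase' --obscure

# Sync a local directory up through the encrypting remote.
rclone sync ~/documents b2crypt:documents
```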



  • So, I don’t use OpenWRT (for my main router), but generally, in each vlan you will need:

    • The WG interface in that vlan, so all hosts can send their traffic to it.
    • A DHCP server that hands out the WG interface’s local-side IP as the default gateway. This can also be set statically on every device. When a device on that vlan wants to send a packet to the internet, it does an ARP request for that local vlan IP and then forwards the IP packet to the router.
    • Some NAT, since the many private IPs of your devices in the vlan map to the one IP assigned through WG. Packets that hit the WG interface should be forwarded down the tunnel with their source address translated to the local WG IP and whatever ports are in use publicly. Return packets reverse this operation.
    • Repeat for additional vlans.
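A minimal sketch of the NAT step on a generic Linux router (the interface name and subnet are made up for illustration, not taken from any real config):

```shell
# Sketch only: masquerade one vlan's subnet out through a WireGuard
# interface named wg0. Names and addresses are illustrative.

# Allow the router to forward packets between interfaces.
sysctl -w net.ipv4.ip_forward=1

# Source-NAT vlan traffic leaving via wg0 to wg0's tunnel address;
# conntrack reverses the translation for return packets.
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat; }'
nft add rule ip nat postrouting ip saddr 10.10.20.0/24 oifname "wg0" masquerade

# Repeat the rule with each additional vlan's subnet.
```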