• 4 Posts
  • 944 Comments
Joined 1 year ago
Cake day: February 15, 2025

  • The problem could be anywhere in between the internet and your server.

    Of course it could be your router. But I think the following is more likely:

    It might also be your internet service provider that doesn’t allow those ports for inbound connections.

    Or you’re behind CGNAT, so your real external IP is different from the one you think it is (look up online how to test this).
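
    To test for CGNAT: compare the WAN IP shown on your router’s status page with the IP the internet actually sees for you (e.g. from an online “what is my IP” site, or `curl -s https://ifconfig.me`). If they differ, you’re most likely behind CGNAT. A minimal sketch of the comparison, with made-up placeholder addresses:

```shell
#!/bin/sh
# CGNAT check sketch. Both addresses are made-up placeholders:
#   ROUTER_WAN_IP - copy this from your router's status page
#   PUBLIC_IP     - what the internet sees, e.g. `curl -s https://ifconfig.me`
ROUTER_WAN_IP="100.72.15.8"   # 100.64.0.0/10 is the CGNAT shared address range
PUBLIC_IP="203.0.113.45"

if [ "$ROUTER_WAN_IP" != "$PUBLIC_IP" ]; then
  echo "IPs differ: you are likely behind CGNAT"
else
  echo "IPs match: probably no CGNAT"
fi
```

    (A WAN IP inside 100.64.0.0/10 is itself a strong hint of CGNAT.)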


  • Sounds good.

    Hmm, next you should probably confirm that ports 80 and 443 are actually reachable from the internet.

    Use an online port checker like https://canyouseeme.org/

    After that you should check your Apache config, like somebody else already suggested. I haven’t used Apache in a while, but if I remember correctly:

    Ensure it says:

    Listen 80

    NOT:

    Listen 127.0.0.1:80

    (and same with 443)

    Also check your VirtualHost — it should look something like:

    <VirtualHost *:80>
        ServerName yourdomain.com
        DocumentRoot /var/www/wordpress
        # ... other settings
    </VirtualHost>
    

    (and same with 443)
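
    Roughly, the 443 block might look like this once Certbot has issued the certificate (`certbot --apache` normally writes it for you; the paths below are Certbot’s default Let’s Encrypt locations, so adjust the domain and paths to your setup):

```apache
<VirtualHost *:443>
    ServerName yourdomain.com
    DocumentRoot /var/www/wordpress
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/yourdomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
    # ... other settings
</VirtualHost>
```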


  • Njalla’s default TTL for DNS records is 3600 seconds (1 hour). If you just created or modified the A record, it can take up to that full hour for the change to propagate across the internet, which would perfectly explain why Certbot is connecting to the right IP but failing to fetch the file (the request might be hitting an old IP or a cached null response).

    Before changing any more configurations, you should verify what the rest of the internet is actually seeing for your domain right now.

    Check the current DNS record

    You can use dig to see exactly what IP your domain is resolving to and, importantly, the remaining TTL on that record.

    From your local machine (or any computer), run:

    dig yourdomain.com +noall +answer
    

    This will output something like:

    yourdomain.com.    3412    IN      A       203.0.113.45
    

    The second column (3412) is the remaining TTL in seconds. If that number is counting down from 3600, the record is still propagating. If the IP address shown there doesn’t match your server’s current public IP, the change hasn’t taken effect yet for that DNS server.

    Check from a different perspective

    To ensure it’s not just your local ISP or router cache serving an old record, query an external public DNS server directly:

    dig yourdomain.com @1.1.1.1 +noall +answer
    dig yourdomain.com @8.8.8.8 +noall +answer
    

    If these external servers show the correct IP but Certbot still fails, the DNS is fine, and the problem is somewhere in your network routing or web server config. If they show a wrong IP or no record at all, you simply need to wait for the TTL to expire.


  • there’s prep and glam I like to do or tend to set up but that I would prefer to explicitly set up instead of it being done automatically

    Back in the X11 days I had a script that would take a config file and open multiple programs in a specified arrangement across my displays.

    I used KDE Activities, one per task, and had such a config for each task. KDE Activities can run arbitrary scripts on being started. So when I opened the “work” activity, for example, all my work apps would open up in my preferred arrangement. When I opened the gaming activity, Steam would start on my side monitor and the main monitor had all of the other gaming-related shortcuts on it, etc.
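
    The placement part of such a script can be sketched in a few lines. The config format, path, and window titles here are invented for illustration; `wmctrl -r <title> -e 0,x,y,w,h` is the real X11 call for moving/resizing a window. This version just prints the commands as a dry run:

```shell
#!/bin/sh
# Hypothetical layout config: one entry per app, "window_title|x,y,width,height".
cat > /tmp/work-layout.conf <<'EOF'
firefox|0,0,1280,1440
konsole|1280,0,1280,720
EOF

# Dry run: print the wmctrl call for each entry instead of executing it.
while IFS='|' read -r app geom; do
  echo "wmctrl -r \"$app\" -e 0,$geom"
done < /tmp/work-layout.conf
```

    Drop the `echo` (and quote handling to taste) to actually move the windows.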

    Together with the preload daemon or a custom vmtouch setup (I switched from one to the other at some point) it was blazingly fast and very comfy. (Again, I overprovisioned my RAM, so I used it by filling it post-boot with a cache of the pages my apps load on startup.)

    Then Wayland came and broke it, and I haven’t bothered to fix it yet.

    But everybody has their own preferred workflows; I’m not saying one is better than the other. Just wanted to share.



  • Sorry, I can’t help you with your problem.

    But just in case you were serious about “We don’t shutdown.”:

    In my case, a clean boot takes 25s. Waking up from hibernation takes over 60 seconds because of the huge amount of RAM. And sleep is broken due to some USB interface shenanigans. Soooo yeaaah, I fully shut down and power on every day.

    Oh and btw, by default Windows doesn’t do a full shutdown but a sneaky hibernate (Fast Startup). You can see that, for example, if you “shut down” Windows (not reboot), then power on the PC and boot into Linux: trying to access the Windows drive, you’ll see an error that Windows “didn’t shut down properly” and is still claiming access to the drive. Because it’s hibernating, and changing content on the drive might break the wakeup.


  • HelloRoot to LocalLLaMA@sh.itjust.works · In search for a new self-hosted LLM · edited 15 days ago
    I think people are sleeping on GLM.

    Tried it out recently and I like the results a lot so far.

    GLM 4.5 and 4.7 were good already, and now they’ve released 5 and 5.1: https://github.com/zai-org/GLM-5

    It says it’s for vibecoding, but I use it like I would use ChatGPT, and it gives usable answers to all of my varied questions. (Of course you always have to check for correctness, even if it’s correct most of the time, which I do because I’m paranoid.)

    I guess the only downside is how frigging huge it is.


  • We’re not.

    Huh? You linked me an article where server cost is the lion’s share of Signal’s operating expenses.

    Most users doesn’t even donate 1€ when using free messengers.

    How does Signal operate, then?

    They don’t offer ANY “nice-to-have” features

    If they don’t have high server costs, unlike the example from the article you brought up, then they should hire cheaper software engineers from a different country or scale down development and have a longer runway.

    Like I said, them having this problem is probably due to poor planning.


    Well, when talking about server costs: Threema has somehow been running on a 5€ lifetime license and business-customer subscriptions for over a decade.

    While Briar and SimpleX are peer-to-peer and have nearly no ops costs.

    Sure, it can be made to be very expensive, but I’m arguing that doing so is a business/design decision.

    Servers can help improve the UX, but they are expensive. Threema, for example, only stores media on their servers temporarily, so they have way lower storage costs, with a small tradeoff in user-friendliness (having to migrate the old media files you want to keep when you get a new phone). And so on.

    If your nonprofit only has 65k, don’t hire multiple devs and provide nice-to-have features that lead to high ops expenses in servers and storage. It’s called a minimum viable product for a reason.