LinuxPizza https://blogs.linux.pizza/ Personal notes and occasional posts - 100% human, 0% AI generated Sat, 21 Mar 2026 05:33:59 +0000 How to issue 7 days certificate with Lets Encrypt and Certbot https://blogs.linux.pizza/how-to-issue-7-days-certificate-with-lets-encrypt-and-certbot <![CDATA[It is actually pretty simple, for example with NGINX:

certbot --nginx --required-profile shortlived

As you can see, use the option --required-profile shortlived. It also works with DNS validation, the Apache plugin and so on. For example, a wildcard cert via the Bunny.Net plugin with an ECC certificate:

certbot certonly --key-type ecdsa --required-profile shortlived --authenticator dns-bunny --dns-bunny-credentials /var/lib/private/bunny.ini -d *.linux.pizza -d linux.pizza
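To confirm the short-lived profile took effect, you can inspect the validity window of the issued certificate. A quick check (the path assumes certbot's default live directory and this domain):

```shell
# Print notBefore/notAfter; a shortlived-profile cert should span roughly 7 days
openssl x509 -in /etc/letsencrypt/live/linux.pizza/cert.pem -noout -dates
```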

Have fun!

#linux #certbot #letsencrypt

]]>
https://blogs.linux.pizza/how-to-issue-7-days-certificate-with-lets-encrypt-and-certbot Tue, 03 Feb 2026 07:30:55 +0000
Secure your API with Client-certificate authentication in NGINX https://blogs.linux.pizza/secure-your-api-with-client-certificate-authenatication-in-nginx <![CDATA[After a few hours trying to make it work with my current CA, where the Root is stored offline, AIA, OCSP, CRL and all that stuff is done by the book – I gave up. Somehow, the Open Source variant of Nginx does not really like my OCSP setup, no idea why and I have no idea how to troubleshoot that.

Solution? KISS-principle!

I'll write this down, quick and dirty. But hopefully it helps someone.

Let's start by creating the private key for the CA:

openssl genpkey -algorithm RSA -out CA_ROOT.key -aes256

This creates a private key encrypted with AES-256. You will be prompted for a password – write that down. The following command then creates a certificate from the private key, valid for 10 years:

openssl req -x509 -new -nodes -key CA_ROOT.key -sha256 -days 3650 -out CA_ROOT.crt

Fill in the information the command asks for, like country code and so on. After that, your CA is done. A crude, ugly and honestly boring CA – but it'll work for this use case.

Let's create the client-certificate!

First, we'll create the private key and the .csr:

openssl genpkey -algorithm RSA -out client-cert.key
openssl req -new -key client-cert.key -out client-cert.csr
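Before handing the .csr over, you can sanity-check what's in it (file name as above):

```shell
# Prints the subject and verifies the CSR's self-signature
openssl req -in client-cert.csr -noout -subject -verify
```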

And again, fill out the information openssl wants to populate the .csr. Make it look pretty. Ideally, these commands should be run on the client only, so the private key never leaves the client. The .csr is what the CA needs in order to sign and create a valid certificate.

Bring the .csr to the CA, and sign it:

openssl x509 -req -in client-cert.csr -CA CA_ROOT.crt -CAkey CA_ROOT.key -CAcreateserial -out client-cert.crt -days 365 -sha256

This gives you a signed certificate for your client named “client-cert.crt” – you may bring that to the client machine and install it.
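Before shipping the certificate to the client, it's worth verifying that it actually chains to the new CA (file names as used above):

```shell
# Should print "client-cert.crt: OK" if the signature chains to the CA
openssl verify -CAfile CA_ROOT.crt client-cert.crt
```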

Firefox wants a .pfx:

To import the certificate into Firefox, you'll need to convert it to PKCS#12 (.p12/.pfx) format:

openssl pkcs12 -export -out client-cert.pfx -inkey client-cert.key -in client-cert.crt -certfile CA_ROOT.crt

Please note that you'll also need the CA_ROOT.crt file you created earlier.

Configure NGINX to do client-certificate authentication

Navigate to the virtualhost you want to enable client-certificate authentication on, and add the following:

    ssl_client_certificate /etc/ssl/private/CA_ROOT.crt;
    ssl_verify_client on;
    ssl_verify_depth 2;

Please note that you have to place the CA_ROOT.crt file in /etc/ssl/private/.
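If the API behind NGINX also needs to know who connected, you can forward details from the verified client certificate to the upstream. A hypothetical location block ($ssl_client_s_dn is a standard NGINX ssl-module variable; the upstream address is made up):

```nginx
location /api/ {
    # $ssl_client_s_dn holds the subject DN of the verified client certificate
    proxy_set_header X-Client-DN $ssl_client_s_dn;
    proxy_pass http://127.0.0.1:8080;
}
```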

Restart NGINX and try to visit the site. You'll likely be prompted to choose a client certificate.

#linux #openssl #nginx #pki

]]>
https://blogs.linux.pizza/secure-your-api-with-client-certificate-authenatication-in-nginx Sat, 15 Nov 2025 20:32:09 +0000
Linux.Pizza Matrix-server is (re)launching https://blogs.linux.pizza/linux-pizza-matrix-server-is-re-launching <![CDATA[After 5 years, the Linux.Pizza Matrix-server is relaunching. Last time, we housed over 3k active accounts.

However, 3k active accounts is not something we aim to achieve this time – rather, a complement to your social.linux.pizza Mastodon account.

We achieve this by simply enabling social.linux.pizza as an OIDC provider on the Matrix server – the same functionality that is already used when you authenticate your mobile application.

To log in with your social.linux.pizza account, just use the Matrix client you prefer (Element(X), SchildiChat/SchildiChat Next, Cinny or even Thunderbird) – set “synapse.linux.pizza” as your “Homeserver”, and the option to log in with social.linux.pizza should appear.

Image showing the login-process to the Linux.Pizza Matrix-server


Worth noting is that this service launches as a beta, so every tester is welcome :)

]]>
https://blogs.linux.pizza/linux-pizza-matrix-server-is-re-launching Sat, 04 Jan 2025 23:17:35 +0000
Enable SNMP on Cisco SG350XG-2F10 https://blogs.linux.pizza/enable-snmp-on-cisco-sg350xg-2f10 <![CDATA[Writing this down so people – and future me – can easily find this solution.

The Cisco docs are incomplete; this is the correct way of enabling SNMP on the SG350 series:

configure term
snmp-server community public RO
snmp-server community private RW
snmp-server server
snmp-server location hackerspace

Thanks to @[email protected] for telling me about the “snmp-server server” step.

#cisco #networking #switching #snmp #observium

]]>
https://blogs.linux.pizza/enable-snmp-on-cisco-sg350xg-2f10 Tue, 09 Apr 2024 07:27:38 +0000
Replace the default certificate on a Unifi Dream Router with your own https://blogs.linux.pizza/replace-the-default-certificate-on-a-unifi-dream-router-with-your-own <![CDATA[I don't claim responsibility for anything done to your router. This short TODO is written for myself – don't follow it if you are not familiar with certificates and PKI.

1. SSH into your machine
2. Navigate to /data/unifi-core/config
3. Replace unifi-core.key with your private key
4. Replace unifi-core.crt with your TLS certificate
5. Restart Unifi Core:

systemctl restart unifi-core

Done!

A screenshot showing a valid certificate on udr.selea.se, located on a Unifi Dream Router

#linux #pki #certificates #unifi

]]>
https://blogs.linux.pizza/replace-the-default-certificate-on-a-unifi-dream-router-with-your-own Sun, 24 Mar 2024 15:51:35 +0000
Random stuff cheat-sheet https://blogs.linux.pizza/random-stuff-cheat-sheet <![CDATA[LVM stuff
WARNING: PV /dev/sda2 in VG vg0 is using an old PV header, modify the VG to update.

Update the metadata with the vgck command – where “vg0” is your own volume group:

vgck --updatemetadata vg0

curl stuff

Curl a specific IP with another Host header

curl -H "Host: subdomain.example.com" http://172.243.6.40/
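A related trick: --resolve pins the hostname to an IP for the whole request, so SNI and certificate validation also use the hostname – something a Host header alone can't do. A sketch (hostname and IP are placeholders):

```shell
# Connects to the given IP but speaks TLS/HTTP as if resolving the hostname
curl --resolve subdomain.example.com:443:172.243.6.40 https://subdomain.example.com/
```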

git stuff

tell git.exe to use the built-in CA-store in Windows

git config --global http.sslBackend schannel

random stuff

See which process is using a file

fuser file

Import RootCert into Java-keystore example

sudo /usr/lib/java/jdk8u292-b10-jre/bin/keytool -import -alias some-rootcert -keystore /usr/lib/java/jdk8u292-b10-jre/lib/security/cacerts -file /usr/share/ca-certificates/extra/someRoot.crt

Apache2 configs example

Enable AD-authentication for web-resources

<Location />
   AuthName "AD authentication"
   AuthBasicProvider ldap
   AuthType Basic
   AuthLDAPGroupAttribute member
   AuthLDAPGroupAttributeIsDN On
   AuthLDAPURL "ldap://IP:389/OU=Users,OU=pizza,DC=linux,DC=pizza?sAMAccountName?sub?(objectClass=*)"
   AuthLDAPBindDN cn=tomcat7,ou=ServiceAccounts,ou=Users,OU=pizza,dc=linux,dc=pizza
  AuthLDAPBindPassword "exec:/bin/cat /etc/apache2/ldap-password.conf"
  Require ldap-group CN=some_group,OU=Groups,OU=pizza,DC=linux,DC=pizza
  ProxyPass "http://localhost:5601/"
  ProxyPassReverse "http://localhost:5601/"

</Location>

Insert Matomo tracking script in Apache using mod_substitute

AddOutputFilterByType SUBSTITUTE text/html
Substitute "s-</head>-<script type=\"text/javascript\">var _paq = _paq || [];_paq.push(['trackPageView']);_paq.push(['enableLinkTracking']);(function() {var u=\"https://matomo.example.com/\";_paq.push(['setTrackerUrl', u+'matomo.php']);_paq.push(['setSiteId', '1']);var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0];g.type='text/javascript'; g.async=true; g.defer=true; g.src=u+'matomo.js'; s.parentNode.insertBefore(g,s);})();</script></head>-n"

Load balance backend-servers

<Proxy balancer://k3singress>
	BalancerMember http://x.x.x.1:80
	BalancerMember http://x.x.x.2:80
	BalancerMember http://x.x.x.3:80
	BalancerMember http://x.x.x.4:80
	ProxySet lbmethod=bytraffic
	ProxySet connectiontimeout=5 timeout=30
	SetEnv force-proxy-request-1.0 1
	SetEnv proxy-nokeepalive 1
</Proxy>
       ProxyPass "/" "balancer://k3singress/"
       ProxyPassReverse "/" "balancer://k3singress/"
       ProxyVia Full
       ProxyRequests On
       ProxyPreserveHost On

Basic Apache-config for PHP-FPM

<VirtualHost *:80>
  ServerName www.example.com
  DocumentRoot /srv/www.example.com/htdocs
  <Directory /srv/www.example.com/htdocs>
    AllowOverride All
    Require all granted
    DirectoryIndex index.html index.htm index.php
    <FilesMatch "\.php$">
      SetHandler proxy:unix:/run/php/www.example.com.sock|fcgi://localhost
    </FilesMatch>
  </Directory>
  SetEnvIf x-forwarded-proto https HTTPS=on
</VirtualHost>

Basic PHP-fpm pool

[www.example.com]
user = USER
group = GROUP

listen = /var/run/php/$pool.sock

listen.owner = www-data
listen.group = www-data

pm = ondemand
pm.process_idle_timeout = 10
pm.max_children = 1

chdir = /

php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
php_admin_value[mail.log] = /srv/ftp.selea.se/log/mail.log
php_admin_value[open_basedir] = /srv/ftp.selea.se:/tmp
php_admin_value[memory_limit] = 64M
php_admin_value[upload_max_filesize] = 64M
php_admin_value[post_max_size] = 64M
php_admin_value[max_execution_time] = 180
php_admin_value[max_input_vars] = 1000

php_admin_value[disable_functions] = passthru,exec,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,mail

Netplan – use device MAC instead of /etc/machine-id for DHCP

network:
  ethernets:
    eth0:
      dhcp4: true
      dhcp-identifier: mac
  version: 2

HPs apt repo for various utilities for proliant machines

deb http://downloads.linux.hpe.com/SDR/repo/mcp buster/current non-free

psql stuff

CREATE DATABASE yourdbname;
CREATE USER youruser WITH ENCRYPTED PASSWORD 'yourpass';
GRANT ALL PRIVILEGES ON DATABASE yourdbname TO youruser;
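Note: on PostgreSQL 15 and newer, ordinary users no longer get CREATE on the public schema by default, so the grant above may not be enough to let the user create tables. A likely extra step (schema name assumes the default):

```sql
GRANT ALL ON SCHEMA public TO youruser;
```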

Get the passwd entry for an AD/SMB-based user so you can put it in /etc/passwd:

getent passwd USERNAME
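To check that the lookup works at all, resolve a local account first – root exists everywhere:

```shell
# Prints the passwd(5)-formatted entry: name:x:uid:gid:gecos:home:shell
getent passwd root
```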

Nicely shutdown NetApp cluster

system node autosupport invoke -node * -type all -message "MAINT=48h Power Maintenance"
system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true

Allow a process to listen on privileged ports (below 1024) in a systemd .service file

[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
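In context, a minimal unit using this could look like the sketch below – the User= and ExecStart= values are made-up examples, only the capability line is the point:

```ini
[Service]
User=www-runner
# Grants only the bind-to-privileged-port capability, not full root
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/usr/local/bin/myserver --listen :443
```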

#linux #kubernetes #netplan #php-fpm #apache #LVM

]]>
https://blogs.linux.pizza/random-stuff-cheat-sheet Fri, 30 Jun 2023 07:53:42 +0000
Where the /¤"# is /var/log/syslog in Debian 12????? https://blogs.linux.pizza/where-the-is-var-log-syslog-in-debian-12 <![CDATA[Imagine my surprise when I could not tail the syslog anymore..

Debian 12 no longer ships a syslog file by default – the logs now live in the journal. So just run journalctl -f and you will be greeted with the logs running through the screen :)

If you want to check the logs from for example apache:

journalctl -u apache2.service

If you want to format the logs as json, just append -o json-pretty

#linux #debian #logging

]]>
https://blogs.linux.pizza/where-the-is-var-log-syslog-in-debian-12 Fri, 16 Jun 2023 09:59:02 +0000
A note regarding mirror.linux.pizza https://blogs.linux.pizza/a-note-regarding-mirror-linux-pizza <![CDATA[8 years ago, I saw a post somewhere about a pretty small niche distro that was looking for a mirror for its packages.
That got me thinking about the possibility to provide a public mirror for Linux packages for various distros.

It started back then in my home office, with redundant ISPs and the two HP MicroServers and the Supermicro box that I had running. My ambitions did not stop there, and in the weeks after I applied to be an official mirror for Debian, Ubuntu, Parabola, Linux-libre and more.

One year after that, I got access to a nice environment that my friends had. With 100TB of storage and unlimited bandwidth – I moved the mirror there, and it has been living there ever since.

Fast forward a couple of years...

The small distros that mirror.linux.pizza was the sole mirror for have disappeared, and the other projects such as Parabola, EndeavourOS and PureOS – where I was the first to start mirroring them – have gained plenty more mirrors to help out.

I've decided to shut mirror.linux.pizza down. The reason is financial, and I want to focus my effort on the community that is social.linux.pizza instead.

I've already notified the different projects about the shutdown, and I will take steps to ensure that systems do not break after the mirror goes offline, such as HTTP redirects to other mirrors in the Nordics.

I've also reached out to the hosting providers that have been using the mirror exclusively to notify them about the upcoming change, so they can prepare for that as well.

I am thankful that I have been able to give something back to the community by hosting this mirror – around 100k unique IP-addresses connect to it every day. So it did definitely help out!

#linux #mirror #mirrorlinuxpizza #sunset #debian #ubuntu #pureos

]]>
https://blogs.linux.pizza/a-note-regarding-mirror-linux-pizza Mon, 27 Mar 2023 16:33:51 +0000
Kubectl cheat-sheet https://blogs.linux.pizza/kubectl-cheat-sheet <![CDATA[Just some random #kubectl commands for myself. I have tested these on 1.20 through 1.25.

Get all ingress logs (if your ingress is nginx)

kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx

Get all logs from Deployment

kubectl logs deployment/<deployment> -n <namespace> --watch

Why is the pod stuck in “ContainerCreating”?

kubectl get events --sort-by=.metadata.creationTimestamp --watch

Restart your deployment, nice and clean

kubectl rollout restart deployment/<deployment> -n <namespace>

Check which namespaces are producing the most log output (a rough proxy for disk use)

kubectl get namespace --no-headers | awk '{print $1}' | while read ns; do
  bytes=$(kubectl get pods -n "$ns" --no-headers | awk '{print $1}' | xargs -r -I POD sh -c "kubectl logs POD -n $ns | wc -c" | awk '{s+=$1} END {print s+0}')
  echo "$ns $((bytes/1024/1024)) MB"
done | sort -k2 -n -r | head

Check if any pods are using a lot of disk space

kubectl get pods --all-namespaces -o json | jq '.items[].spec.containers[].resources.requests.storage' | grep -v null

Check the Kubernetes event logs for any disk-related errors

kubectl get events --field-selector involvedObject.kind=Node,reason=OutOfDisk

I'll add more when I find more useful stuff.

#linux #k8s #kubernetes #kubectl #ingress #nginx #deployment #logs

]]>
https://blogs.linux.pizza/kubectl-cheat-sheet Tue, 28 Feb 2023 08:04:47 +0000
Download all files in a remote catalogue over SFTP with lftp https://blogs.linux.pizza/download-all-files-in-a-remote-catalogue-over-sftp-with-lftp <![CDATA[Hopefully this will save some of you a lot of time and energy, and save your day.

I recently had troubles getting a job to work. The short story is:

Download all files in a remote catalogue, over SFTP, on certain times.

I had a working solution with curl, but when the naming of the files changed (whitespace crept in) – the function broke.

lftp – the savior

After having spent a couple of hours trying to grasp lftp via the manpage, I came up with a solution:

lftp -c '
open sftp://USER:[email protected]:22
mirror --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/
'

And if you want to remove the source-files after download:

lftp -c '
open sftp://USER:[email protected]:22
mirror --Remove-source-files --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/
'

This downloads all files in the specified remote catalogue to the specified local one, then exits.
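Since the job has to run "on certain times", the one-liner drops neatly into cron – a sketch, with a made-up schedule and the placeholder credentials/paths from above (lftp -c accepts ;-separated commands):

```crontab
# m h dom mon dow  command — here: run the mirror job every night at 02:30
30 2 * * * lftp -c 'open sftp://USER:[email protected]:22; mirror --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/'
```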

#linux #bash #sftp #lftp

]]>
https://blogs.linux.pizza/download-all-files-in-a-remote-catalogue-over-sftp-with-lftp Wed, 11 Jan 2023 08:58:22 +0000