certbot --nginx --required-profile shortlived
As you can see, the trick is the option --required-profile shortlived.
It can also be used with DNS validation, the Apache plugin, and so on.
For example, a wildcard ECC certificate with the Bunny.net DNS plugin:
certbot certonly --key-type ecdsa --required-profile shortlived --authenticator dns-bunny --dns-bunny-credentials /var/lib/private/bunny.ini -d '*.linux.pizza' -d linux.pizza
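One thing to keep in mind: shortlived certificates are only valid for about a week, so automatic renewal has to work. Certbot saves the options you used (including the profile) in the renewal config, so a dry-run should confirm that everything renews cleanly:
certbot renew --dry-run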
Have fun!

I'll write this down, quick and dirty – but hopefully it helps someone.
Let's start by creating the private key for the CA that we will create:
openssl genpkey -algorithm RSA -out CA_ROOT.key -aes256
With this command, we have created a private key encrypted with AES-256. You will be prompted for a passphrase – write that down. The following command will create a certificate from the private key, valid for 10 years:
openssl req -x509 -new -nodes -key CA_ROOT.key -sha256 -days 3650 -out CA_ROOT.crt
Fill in the information that the above command asks for, like the country code and so on. After that, your CA is done. A crude, ugly and honestly boring CA – but it'll work for this use case.
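If you want to sanity-check the new CA certificate:
openssl x509 -in CA_ROOT.crt -noout -subject -dates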
Next, we'll create the private key and the .csr for the client:
openssl genpkey -algorithm RSA -out client-cert.key
openssl req -new -key client-cert.key -out client-cert.csr
And again, fill out the information openssl asks for – it populates the .csr. Make it look pretty. Ideally, these commands should be run on the client only, so the private key never leaves the client. The .csr is what the CA needs in order to sign and issue a valid certificate.
Bring the .csr to the CA, and sign it:
openssl x509 -req -in client-cert.csr -CA CA_ROOT.crt -CAkey CA_ROOT.key -CAcreateserial -out client-cert.crt -days 365 -sha256
This will give you a signed certificate for your client named “client-cert.crt” – you may bring that to the client machine and install it.
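Before shipping it off, you can verify that the certificate actually chains to your CA:
openssl verify -CAfile CA_ROOT.crt client-cert.crt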
In order to import the certificate into Firefox, you'll need to convert it to PKCS#12 (p12/pfx) format:
openssl pkcs12 -export -out client-cert.pfx -inkey client-cert.key -in client-cert.crt -certfile CA_ROOT.crt
Please note that you'll also need the CA_ROOT.crt file that you created earlier.
Navigate to the virtual host you want to enable client-certificate authentication on, and add the following:
ssl_client_certificate /etc/ssl/private/CA_ROOT.crt;
ssl_verify_client on;
ssl_verify_depth 2;
Please note that you have to place the CA_ROOT.crt file in /etc/ssl/private/.
Restart NGINX and visit the site. You should be prompted to pick a client certificate.
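You can also test it without a browser – curl can present the client certificate directly (https://site.example being a placeholder for your vhost):
curl --cert client-cert.crt --key client-cert.key https://site.example/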

We achieve this by enabling social.linux.pizza as an OIDC provider on the Matrix server – the same functionality that is already used when you authenticate your mobile application.
To log in with your social.linux.pizza account, just use the Matrix client you prefer (Element(X), SchildiChat/SchildiChat Next, Cinny or even Thunderbird) – set “synapse.linux.pizza” as your “Homeserver”, and the option to log in with social.linux.pizza should appear.
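For reference, the relevant part of a Synapse homeserver.yaml could look roughly like this. This is a sketch, not our actual config – the endpoints and claims assume a Mastodon-style OAuth2 provider, and CLIENT_ID/CLIENT_SECRET are placeholders:
oidc_providers:
  - idp_id: social_linux_pizza
    idp_name: "social.linux.pizza"
    discover: false
    issuer: "https://social.linux.pizza/"
    client_id: "CLIENT_ID"
    client_secret: "CLIENT_SECRET"
    authorization_endpoint: "https://social.linux.pizza/oauth/authorize"
    token_endpoint: "https://social.linux.pizza/oauth/token"
    userinfo_endpoint: "https://social.linux.pizza/api/v1/accounts/verify_credentials"
    user_profile_method: "userinfo_endpoint"
    scopes: ["read"]
    user_mapping_provider:
      config:
        subject_claim: "id"
        localpart_template: "{{ user.username }}"
        display_name_template: "{{ user.display_name }}"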
Worth noting is that this service will launch as a beta, so every tester is welcome :)

The Cisco docs are incomplete; this is the correct way of enabling SNMP on the SG350 series:
configure term
snmp-server community public RO
snmp-server community private RW
snmp-server server
snmp-server location hackerspace
Thanks to @[email protected] for telling me about the “snmp-server server” step.
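To verify from a Linux machine, an snmpwalk against the read-only community should return the system subtree (192.0.2.10 being a placeholder for the switch's IP):
snmpwalk -v2c -c public 192.0.2.10 system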

1. SSH into your machine
2. Navigate to /data/unifi-core/config
3. Replace unifi-core.key with your private key
4. Replace unifi-core.crt with your TLS-certificate
5. Restart Unifi Core:
systemctl restart unifi-core
Done!
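For the record, copying the files over from another machine could look something like this – privkey.pem/fullchain.pem are whatever your CA handed you, and the IP is a placeholder:
scp privkey.pem root@192.0.2.1:/data/unifi-core/config/unifi-core.key
scp fullchain.pem root@192.0.2.1:/data/unifi-core/config/unifi-core.crt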

WARNING: PV /dev/sda2 in VG vg0 is using an old PV header, modify the VG to update.
Update the metadata with the vgck command, where “vg0” is the name of your own volume group:
vgck --updatemetadata vg0
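Once that's done, re-running any LVM command should no longer print the warning:
pvs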
Curl a specific IP with a different Host header:
curl -H "Host: subdomain.example.com" http://203.0.113.40/
Tell git.exe to use the built-in Windows CA store:
git config --global http.sslBackend schannel
See which process is using a file
fuser file
Import a root certificate into a Java keystore:
sudo /usr/lib/java/jdk8u292-b10-jre/bin/keytool -import -alias some-rootcert -keystore /usr/lib/java/jdk8u292-b10-jre/lib/security/cacerts -file /usr/share/ca-certificates/extra/someRoot.crt
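To verify the import afterwards (the default cacerts password is “changeit”):
sudo /usr/lib/java/jdk8u292-b10-jre/bin/keytool -list -keystore /usr/lib/java/jdk8u292-b10-jre/lib/security/cacerts -alias some-rootcert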
Apache: LDAP/AD authentication in front of an app reverse-proxied on localhost:5601:
<Location />
AuthName "AD authentication"
AuthBasicProvider ldap
AuthType Basic
AuthLDAPGroupAttribute member
AuthLDAPGroupAttributeIsDN On
AuthLDAPURL ldap://IP:389/OU=Users,OU=pizza,DC=linux,DC=pizza?sAMAccountName?sub?(objectClass=*)
AuthLDAPBindDN cn=tomcat7,ou=ServiceAccounts,ou=Users,OU=pizza,dc=linux,dc=pizza
AuthLDAPBindPassword "exec:/bin/cat /etc/apache2/ldap-password.conf"
Require ldap-group CN=some_group,OU=Groups,OU=pizza,DC=linux,DC=pizza
ProxyPass "http://localhost:5601/"
ProxyPassReverse "http://localhost:5601/"
</Location>
Inject a Matomo tracking snippet into proxied HTML pages with mod_substitute:
AddOutputFilterByType SUBSTITUTE text/html
Substitute "s-</head>-<script type=\"text/javascript\">var _paq = _paq || [];_paq.push(['trackPageView']);_paq.push(['enableLinkTracking']);(function() {var u=\"https://matomo.example.com/\";_paq.push(['setTrackerUrl', u+'matomo.php']);_paq.push(['setSiteId', '1']);var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0];g.type='text/javascript'; g.async=true; g.defer=true; g.src=u+'matomo.js'; s.parentNode.insertBefore(g,s);})();</script></head>-n"
Apache as a load balancer in front of a k3s ingress:
<Proxy balancer://k3singress>
BalancerMember http://x.x.x.1:80
BalancerMember http://x.x.x.2:80
BalancerMember http://x.x.x.3:80
BalancerMember http://x.x.x.4:80
ProxySet lbmethod=bytraffic
ProxySet connectiontimeout=5 timeout=30
SetEnv force-proxy-request-1.0 1
SetEnv proxy-nokeepalive 1
</Proxy>
ProxyPass "/" "balancer://k3singress/"
ProxyPassReverse "/" "balancer://k3singress/"
ProxyVia Full
# Off: this is a reverse proxy; "On" would turn Apache into an open forward proxy
ProxyRequests Off
ProxyPreserveHost On
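On Debian-based systems, the modules this setup needs can be enabled with:
a2enmod proxy proxy_http proxy_balancer lbmethod_bytraffic slotmem_shm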
Apache vhost handing PHP over to PHP-FPM via a unix socket:
<VirtualHost *:80>
ServerName www.example.com
DocumentRoot /srv/www.example.com/htdocs
<Directory /srv/www.example.com/htdocs>
AllowOverride All
Require all granted
DirectoryIndex index.html index.htm index.php
<FilesMatch "\.php$">
SetHandler proxy:unix:/run/php/www.example.com.sock|fcgi://localhost
</FilesMatch>
</Directory>
SetEnvIf x-forwarded-proto https HTTPS=on
</VirtualHost>
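This requires mod_proxy_fcgi; on Debian-based systems:
a2enmod proxy_fcgi setenvif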
The matching PHP-FPM pool:
[www.example.com]
user = USER
group = GROUP
listen = /var/run/php/$pool.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.process_idle_timeout = 10
pm.max_children = 1
chdir = /
php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
php_admin_value[mail.log] = /srv/www.example.com/log/mail.log
php_admin_value[open_basedir] = /srv/www.example.com:/tmp
php_admin_value[memory_limit] = 64M
php_admin_value[upload_max_filesize] = 64M
php_admin_value[post_max_size] = 64M
php_admin_value[max_execution_time] = 180
php_admin_value[max_input_vars] = 1000
php_admin_value[disable_functions] = passthru,exec,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,mail
Netplan: DHCP using the MAC address as the DHCP client identifier:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp-identifier: mac
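Apply it with:
sudo netplan apply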
APT source for the HPE Management Component Pack on Debian buster:
deb http://downloads.linux.hpe.com/SDR/repo/mcp buster/current non-free
Create a PostgreSQL database and user:
CREATE DATABASE yourdbname;
CREATE USER youruser WITH ENCRYPTED PASSWORD 'yourpass';
GRANT ALL PRIVILEGES ON DATABASE yourdbname TO youruser;
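Note: on PostgreSQL 15 and newer, the public schema is no longer writable by all users, so you will probably also need this (run while connected to yourdbname):
GRANT ALL ON SCHEMA public TO youruser;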
Get the passwd entry for an AD/SMB-based user so you can put it in /etc/passwd:
getent passwd USERNAME
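For example, appending it straight to the file:
getent passwd USERNAME | sudo tee -a /etc/passwd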
Nicely shut down a NetApp cluster:
system node autosupport invoke -node * -type all -message "MAINT=48h Power Maintenance"
system node halt -node * -skip-lif-migration-before-shutdown true -ignore-quorum-warnings true -inhibit-takeover true
Allow a process to listen on privileged ports (below 1024) from a systemd service file:
[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
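In context, a minimal unit might look like this – the daemon name, path and flags are placeholders:
[Unit]
Description=Example daemon on a privileged port

[Service]
User=exampleuser
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/usr/local/bin/exampled --listen 443

[Install]
WantedBy=multi-user.target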

Debian 12 no longer ships a separate syslog daemon by default – the logs live in the systemd journal. So just run journalctl -f and you will be greeted with the logs running through the screen :)
If you want to check the logs from, for example, Apache:
journalctl -u apache2.service
If you want to format the logs as JSON, just append -o json-pretty.
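Combined, for example:
journalctl -u apache2.service -o json-pretty -n 10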
It started back in my home office, with redundant ISPs and the two HP Microservers and the Supermicro box that I had running. My ambitions did not stop there, and in the weeks after I applied to become an official mirror for Debian, Ubuntu, Parabola, Linux-Libre and more.
One year after that, I got access to a nice environment that my friends had. With 100TB of storage and unlimited bandwidth – I moved the mirror there, and it has been living there ever since.
Fast forward a couple of years...
The small distros that mirror.linux.pizza was the sole mirror for have disappeared, and the other projects such as Parabola, EndeavourOS and PureOS – where I was the first one to start mirroring them – have gotten plenty more mirrors to help out.
I've decided to shut mirror.linux.pizza down. The reasons are financial, and I want to focus my effort on the community that is social.linux.pizza instead.
I've already notified the different projects about the shutdown, and I will take steps to ensure that systems do not break after the mirror goes offline, such as HTTP redirects to other mirrors in the Nordics.
I've also reached out to the hosting providers that have been using the mirror exclusively, to notify them about the upcoming change so they can prepare for it as well.
I am thankful that I have been able to give something back to the community by hosting this mirror – around 100k unique IP addresses connect to it every day. So it definitely helped out!
#linux #mirror #mirrorlinuxpizza #sunset #debian #ubuntu #pureos

Tail the ingress-nginx controller logs:
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
Follow the logs of a deployment:
kubectl logs deployment/<deployment> -n <namespace> --follow
Watch cluster events as they happen:
kubectl get events --sort-by=.metadata.creationTimestamp --watch
Restart a deployment:
kubectl rollout restart deployment/<deployment> -n <namespace>
List the pods producing the most log data:
kubectl get pods --all-namespaces --no-headers | while read ns pod rest; do echo "$ns/$pod $(kubectl logs "$pod" -n "$ns" 2>/dev/null | wc -c)"; done | awk '{print $1" "($2/1024/1024)" MB"}' | sort -k2 -n -r | head
Show containers that define storage resource requests:
kubectl get pods --all-namespaces -o json | jq '.items[].spec.containers[].resources.requests.storage' | grep -v null
Find nodes that have reported out-of-disk events:
kubectl get events --field-selector involvedObject.kind=Node,reason=OutOfDisk
I'll add more when I find more useful stuff.
#linux #k8s #kubernetes #kubectl #ingress #nginx #deployment #logs

I recently had trouble getting a job to work. The short story:
Download all files in a remote directory, over SFTP, at certain times.
I had a working solution with curl, but when the naming of the files changed (whitespace crept in, for example) – it broke.
After having spent a couple of hours trying to grasp lftp via the manpage, I came up with a solution:
lftp -c '
open sftp://USER:PASSWORD@HOST:22
mirror --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/
'
And if you want to remove the source-files after download:
lftp -c '
open sftp://USER:PASSWORD@HOST:22
mirror --Remove-source-files --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/
'
This downloads all files in the specified remote directory to the specified local one, then exits.
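Since the job should run at certain times, the whole thing fits in a crontab entry – a sketch, with the schedule and credentials as placeholders:
30 2 * * * lftp -c 'open sftp://USER:PASSWORD@HOST:22; mirror --verbose --use-pget-n=8 -c /remote/catalogue/ /local/catalogue/'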