Anatoli Nicolae, Technology Connoisseur
https://anatolinicolae.com

Failed to download metadata for repo 'appstream': relocated CentOS mirror
https://anatolinicolae.com/failed-to-download-metadata-for-repo-appstream/
Tue, 19 Apr 2022 22:05:09 +0000

You've installed an older CentOS version and dnf most likely fails to update or install packages. This error occurs because the CentOS mirrors have been relocated to the vault.

[root@av1 ~]# dnf install python3-librepo -y
Failed loading plugin "osmsplugin": No module named 'librepo'
CentOS Linux 8 - AppStream                                                                      426  B/s |  38  B     00:00    
Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist

To fix this error, point the repo files at the vault instead of the retired mirrors:

sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*

Running any dnf command to update or install packages should work just fine now.
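If you want to try the two sed edits safely before touching the real repo files, you can run them against a copy in a temp directory first. The repo file contents below are illustrative, not copied from a real CentOS install:

```shell
# Apply the mirror fix to a throwaway copy of a repo file and verify the result.
set -eu
tmp=$(mktemp -d)
cat > "$tmp/CentOS-Linux-AppStream.repo" <<'EOF'
[appstream]
name=CentOS Linux $releasever - AppStream
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=AppStream
#baseurl=http://mirror.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
EOF
# Same two edits as above, pointed at the temp copy:
sed -i 's/mirrorlist/#mirrorlist/g' "$tmp"/CentOS-Linux-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' "$tmp"/CentOS-Linux-*
# The baseurl line should now be uncommented and point at the vault:
result=$(grep '^baseurl=' "$tmp/CentOS-Linux-AppStream.repo")
echo "$result"
rm -rf "$tmp"
```

Once the output shows an active baseurl on vault.centos.org, the same two commands are safe to run against /etc/yum.repos.d.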

Photo by Jason Dent on Unsplash

New Oracle Cloud instance: failed loading plugin osmsplugin, no module named librepo?
https://anatolinicolae.com/failed-loading-plugin-osmsplugin-no-module-named-librepo/
Tue, 19 Apr 2022 18:31:39 +0000

It's the nth time I've deployed a CentOS instance on Oracle Cloud only to see osmsplugin nagging about a missing librepo module. It's not a huge deal, for sure, but it annoys the hell out of you on every dnf run, since it doesn't actually provide a hint about how to fix the error.

[root@av1 ~]# dnf clean all
Failed loading plugin "osmsplugin": No module named 'librepo'
93 files removed

The fix for this error is pretty simple. All you have to do is install the librepo Python module:

dnf install python3-librepo -y

After a successful install, the nag should simply disappear:

[root@av1 ~]# dnf clean all
93 files removed
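You can also confirm the root cause directly: the plugin just needs the librepo Python module to be importable. A minimal check, runnable anywhere python3 is installed:

```shell
# Is the librepo Python module importable? "missing" means osmsplugin
# will keep nagging on every dnf run until python3-librepo is installed.
if python3 -c 'import librepo' 2>/dev/null; then
  status="present"
else
  status="missing"
fi
echo "librepo: $status"
```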

Photo by Luke Zhang on Unsplash

The best apps right now are hybrid apps: why?
https://anatolinicolae.com/le-migliori-app-del-momento-sono-quelle-ibride-perche/
Wed, 28 Jul 2021 20:48:00 +0000

Why are Android apps often better than the ones developed for iOS?

Quality-wise it's often the opposite: iOS apps are better than Android ones. This comes from the fact that in the Apple ecosystem it's much easier to build a performant app, because both the device and the software it runs are made by the same company. With Android, there is an ever-growing number of devices to support, each with different characteristics.

Some Android devices may well bring brand-new features to the market, but adoption by the general public is very slow, given the heavy fragmentation across operating system versions and the many devices stuck on older releases.

Of course, the platform itself matters relatively little: what counts is how the app is developed and optimized to perform better on one platform rather than another.

Which type of app would you recommend to a local business that wants to increase sales of its products?

Nowadays there are plenty of ways to build an app. Quality and speed should always be the goal, but investing in native app development quickly becomes very expensive for small businesses. Hybrid apps, which use a single implementation across multiple platforms at once, are increasingly affordable instead.

An example? React Native and Flutter. Hybrid apps let us reach a wider audience immediately, publishing the app on both the App Store and Google Play.

The next step, as both the business and the app development budget grow, is to improve the performance of the individual apps, splitting and optimizing the versions for each platform.

How much does developing a business app cost?

App development starts with the client, who needs a clear idea of the goals to achieve with the app; those requirements are then split into three areas of work. The three steps of app development are: first, planning the user experience, to define the screens a user will see and the actions they can perform,

followed by the user interface, to define the app's visual style, and finally the actual translation of the graphic mockups into code.

In more structured companies these three steps are often handled by dedicated teams, but that doesn't mean the whole job can't be done by a smaller team to speed up the process and spend far less on the final product.

The cost of developing an app is therefore calculated from the client's goals and requirements, also taking into account whether a native or hybrid app is wanted, since with a hybrid app the platform-specific work isn't duplicated.

What are the most important SEO techniques?

The most important SEO techniques, and the ones with the best return on investment, are mainly three:

  • Long-tail keywords: choosing keywords specific to the topic and the business, which lets you target your niche directly and gives far better conversion odds than a "generic" keyword.
  • Consistency: committing to creating content at well-defined intervals (a post every day, or every week) to build reader loyalty and "be there" for your audience.
  • Quality: choosing topics that are genuinely useful and interesting from your audience's point of view: by helping others, you help yourself.

Of course, don't underestimate on-page optimization either: titles, links, image alt text, and page speed all affect the final ranking.

Content extracted from an interview with ProntoPro about the latest trends in apps. Read the full interview »

Replacing heavily worn SSDs
https://anatolinicolae.com/replacing-heavily-worn-ssds/
Mon, 13 Apr 2020 00:00:02 +0000

During the second half of last December we decided to move our production to a new server on Hetzner, moving away from our old friend Plesk. After it had been set up and running for a while, we noticed that MySQL operations were taking a huge amount of time.

After some time spent debugging MySQL, switching versions, and testing dumps, we came to disk checks. It turned out the dedicated server we got had seen relatively intensive use before, and the disks were worn out.

We’ve then arranged with Hetzner’s guys to work out a step by step replacements of RAIDed disks. We would swap one disk at a time, first removing it from the array, physically swapping it, re-adding it to the array and syncing.

Swapping a disk

To detach and replace a disk, we first have to mark it as faulty.

mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md1 --fail /dev/sda2
mdadm --manage /dev/md2 --fail /dev/sda3

We can then proceed to remove the partitions from the array.

mdadm --manage /dev/md0 --remove /dev/sda1
mdadm --manage /dev/md1 --remove /dev/sda2
mdadm --manage /dev/md2 --remove /dev/sda3

When asking your provider to swap a disk, you may find it useful to communicate the serial number of the disk to be swapped, so the process is coordinated and you can be sure the correct disk, the one you previously marked as faulty, gets swapped.

udevadm info --query=all --name=/dev/sda | grep ID_SERIAL
ID_SERIAL=Crucial_CT256MX100SSD1_000000000000
ID_SERIAL_SHORT=000000000000

Proceed with the physical disk swap, then boot the system again and start partitioning the fresh disk.

The first thing to do is partition the new disk yourself. The partitioning should match the previous disk's partitioning schema; this can be done manually or, even better, by copying the existing disk's partition table (/dev/sdb) to the new one (/dev/sda).

sfdisk -d /dev/sdb | sfdisk /dev/sda

Once the partition schema matches the one expected by the array, we can add the disk back to it.

mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2
mdadm --manage /dev/md2 --add /dev/sda3

Wait for mdadm to finish the synchronization, then you can proceed with the other disk.

# Mark disk as failed
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3
# Remove disk from array
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3
# Fetch disk infos
udevadm info --query=all --name=/dev/sdb | grep ID_SERIAL
# ID_SERIAL=Crucial_CT256MX100SSD1_111111111111
# ID_SERIAL_SHORT=111111111111
# Copy partition
sfdisk -d /dev/sda | sfdisk /dev/sdb
# Add disk back to array
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2
mdadm --manage /dev/md2 --add /dev/sdb3
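The per-partition fail/remove sequence above is mechanical, so it can be generated with a small loop. It is printed here as a dry run (echo only) so nothing gets touched by accident; the sda1-to-md0, sda2-to-md1, sda3-to-md2 mapping matches this post's layout, so adjust it to yours:

```shell
# Dry run: generate the fail/remove commands for every partition of one disk.
# Change disk= (and the md numbering below) to match your actual layout.
disk=sdb
cmds=""
for i in 1 2 3; do
  md="md$((i - 1))"   # partition N lives in array md(N-1) in this setup
  cmds="${cmds}mdadm --manage /dev/$md --fail /dev/${disk}$i
mdadm --manage /dev/$md --remove /dev/${disk}$i
"
done
printf '%s' "$cmds"
```

Pipe the output to `sh` only once you have double-checked the disk name against the serial reported by udevadm.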

Bootable partitions

If the RAIDed disks also act as boot disks, make sure to mark them bootable and run grub-install after adding them to the array, or you may run into boot issues.

NVMe or SSD disks

When using NVMe or SSD disks, with or without software RAID on top, always make sure you're also TRIMming data on the disks, or they may slow down over time. Most distributions already ship tools to help with that; the easiest one to use is the fstrim service.

systemctl enable fstrim.timer

Conclusions

You should now have two brand new working disks back in your mdadm array. Here are some other commands you may find useful.

# Synchronize data on disk with memory
sync
# Watch mdadm synchronization process
watch cat /proc/mdstat
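If you would rather script around the resync than watch it, the progress figure can be pulled straight out of /proc/mdstat. The snippet below parses a captured sample (the numbers are illustrative) so it can be tried anywhere; on the real server, replace the sample with the output of `cat /proc/mdstat`:

```shell
# Extract the resync percentage from a captured /proc/mdstat sample.
sample='md0 : active raid1 sda1[0] sdb1[1]
      16760832 blocks super 1.2 [2/2] [UU]
      [==>..................]  resync = 12.6% (2113024/16760832) finish=8.2min speed=29642K/sec'
progress=$(printf '%s\n' "$sample" | grep -o 'resync = [0-9.]*%')
echo "$progress"
```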
Get things done with GitLab Runners
https://anatolinicolae.com/get-things-done-with-gitlab-runners/
Mon, 20 Jan 2020 00:00:00 +0000

Most projects now have some kind of automation, whether it's building a Docker image or bundling a website with webpack. There are also a lot of free CI/CD solutions that let you build your projects, but most of them are limited, hard to figure out, or simply not enough. Here's where GitLab comes to help, providing a solid infrastructure to run your pipelines using their GitLab Runners.

What are GitLab Runners?

GitLab Runners are agents that run your CI jobs through a shell or a Docker daemon, either on your own servers or on GitLab's shared runners. That's right, you can host the Runners yourself!

So what’s the plan?

We want to be able to use, let's say, a local server running 2 VMs to build our projects. We're planning to support both Windows and Linux, so we'll run an Ubuntu Bionic box as well as a Windows Server 2019 box. Both of them will have Docker on board, letting us use whatever image we want.

How to install them?

GitLab's documentation on this is pretty easy to understand and apply on Linux; what we're going to dive into is the Windows part, which isn't that difficult either, so here's a copy-pastable block:

# Create GitLab runner home
New-Item -ItemType Directory -Force -Path C:\GitLab-Runner
# Exclude it from antivirus scans
Add-MpPreference -ExclusionPath C:\GitLab-Runner
# Download gitlab-runner.exe
(New-Object Net.WebClient).DownloadFile("https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-windows-amd64.exe", "C:\GitLab-Runner\gitlab-runner.exe")
# Register shell executor
Start-Process -FilePath C:\GitLab-Runner\gitlab-runner.exe -Argumentlist register,"--url https://gitlab.com/","--executor shell"
# Register Docker executor
Start-Process -FilePath C:\GitLab-Runner\gitlab-runner.exe -Argumentlist register,"--url https://gitlab.com/","--executor docker-windows","--tag-list docker,windows,server-2019","--docker-image mcr.microsoft.com/windows/servercore"
# Install Service
Start-Process -FilePath C:\GitLab-Runner\gitlab-runner.exe -Argumentlist install
# Clear existing service dependencies, then add Docker as a dependency
cmd.exe /c "sc config gitlab-runner depend= /"
cmd.exe /c "sc config gitlab-runner depend= docker"
# Reboot
Restart-Computer
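For completeness, here is what the registration looks like on the Linux box. The flags come from gitlab-runner's register command; the token, image, and tags are placeholders, and the command is printed rather than executed so nothing gets registered by accident:

```shell
# Dry run of a Linux runner registration. YOUR_TOKEN stands in for the
# registration token from your GitLab project's CI/CD settings.
cmd='gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token YOUR_TOKEN \
  --executor docker \
  --docker-image ubuntu:18.04 \
  --tag-list docker,linux,bionic'
echo "$cmd"
```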

How to use them now?

To use your freshly created GitLab Runners, you'll simply have to write your own .gitlab-ci.yml relying on Docker images and run builds inside them, or even use Visual Studio via shell runners.

The catch to using your brand new runners successfully is to tag them correctly, or your builds may fail due to platform or executor incompatibilities. The way we tagged our runners is pretty straightforward; here are some examples of what you could find useful:

  • os (server-2019, bionic, xenial)
  • platform (windows, linux, mac)
  • executor (docker, docker-windows, shell, cmd, powershell)
  • specific apps (visual-studio, sql-server)
  • environment (development, testing, production)

You can now use the tags in your .gitlab-ci.yml as follows:

job:
  tags:
    - bionic
    - docker
    - development

or

job:
  tags:
    - windows
    - visual-studio
    - sql-server

Conclusion

Even though it requires a bit of configuration, and maybe porting your old pipeline definitions over, GitLab CI/CD is a great solution compared to other tools such as Drone CI or Jenkins. You are free to build inside Docker, or build Docker images, which can be tricky if your project already runs on Docker (DIND is hard and awful), and everything is easier to understand thanks to pretty straightforward .gitlab-ci.yml files.

What do you use? How do you manage multi-platform builds? Is your pipeline clean and easy to catch up on?

PowerDNS master-slave cluster
https://anatolinicolae.com/powerdns-master-slave-cluster/
Sun, 29 Dec 2019 22:12:00 +0000

This guide's purpose is to help you set up a replicated PowerDNS cluster using AXFR notifications between servers instead of full DB replication, which can sometimes be tricky to set up.

Prerequisites

This guide assumes we have the following 3 servers running on CentOS 7:

Hostname                 IP           Type    Operation Mode
hostmaster.example.com   172.16.0.1   Master  supermaster
ns01.example.com         192.168.0.1  Slave   superslave
ns02.example.com         192.168.0.2  Slave   superslave

Note that these local IPs are purely explanatory; you should use your servers' public IPs instead if they're not on the same network.

Operation mode on the slave servers is superslave, which allows them to automatically create zones and sync records, while just slave mode will not create new zones. Learn more about superslaves.

Make sure that domains on the master node have their type set to MASTER, since other values will not notify the slaves. Use NATIVE instead when performing DB replication.

Install repos

The first step is to enable the EPEL and PowerDNS repos to access all the packages we need. We'll also run an update to bring the system fully up to date.

yum install -y epel-release yum-plugin-priorities
curl -o /etc/yum.repos.d/powerdns-auth-42.repo https://repo.powerdns.com/repo-files/centos-auth-42.repo
yum update -y

Install MariaDB and pdns

We can now install in one shot both MariaDB and PowerDNS’ packages running the following command.

yum install -y mariadb mariadb-server pdns pdns-backend-mysql

Enable firewall, MariaDB and pdns services

Enable all three services and start them using --now. PowerDNS may give an error on startup since no backend is configured yet, but that's not an issue at this point. We also add DNS to our firewall rules to accept connections on port 53, then reload the firewall.

systemctl enable --now firewalld mariadb pdns
firewall-cmd --add-service=dns --permanent
firewall-cmd --reload

MariaDB setup

The DB should be up and running now, but we first need to finish the setup by running:

mysql_secure_installation

We can now log in as root using the password we set during the secure installation.

mysql -u root -p

Now that we're in the DB, we can create the powerdns database and the user we'll use to connect to it.

CREATE DATABASE powerdns;
GRANT ALL ON powerdns.* TO 'powerdns'@'localhost' IDENTIFIED BY 'powerdns';
FLUSH PRIVILEGES;

Install schema

While still in the DB, we can run the following command to use our newly created DB:

USE powerdns;

We now have to create the schema that PowerDNS runs on. You can find more about the configuration, and a copy-pastable schema, on PowerDNS’ documentation at the following link: configuring database connectivity.

Add supermasters

On each of the superslaves we have to define our supermaster (172.16.0.1). To do this, INSERT a new row on ns01 as follows:

INSERT INTO `powerdns`.`supermasters` (`ip`, `nameserver`) VALUES ('172.16.0.1', 'ns01.example.com');

and the following one on ns02:

INSERT INTO `powerdns`.`supermasters` (`ip`, `nameserver`) VALUES ('172.16.0.1', 'ns02.example.com');

PowerDNS slave setup

We can now configure pdns to send notifications from the master server and to receive them on the slave servers.

If you already have anything in your configuration, make a copy of it first and merge it back later if needed.

cp /etc/pdns/pdns.conf /etc/pdns/pdns.conf.original

Here’s a master boilerplate configuration which should work, but you may change it to fit your own setup.

cat > /etc/pdns/pdns.conf <<'EOF'
# Master
daemon=no
guardian=no
setgid=pdns
setuid=pdns
cache-ttl=20
webserver-port=8081
webserver-allow-from=127.0.0.1,::1
api-key=powerdns123
expand-alias=no
webserver=no
api=True
include-dir=/etc/pdns/local.d
resolver=no
version-string=anonymous
webserver-address=127.0.0.1
launch=gmysql
gmysql-host=localhost
gmysql-dbname=powerdns
gmysql-user=powerdns
gmysql-password=powerdns
gmysql-dnssec=no
default-ttl=60
dnsupdate=yes
master=yes
EOF

Here’s a slave boilerplate configuration which should work, but you may change it to fit your own setup.

cat > /etc/pdns/pdns.conf <<'EOF'
# Slave
daemon=no
guardian=no
setgid=pdns
setuid=pdns
cache-ttl=20
expand-alias=no
webserver=no
resolver=no
version-string=anonymous
launch=gmysql
gmysql-host=localhost
gmysql-dbname=powerdns
gmysql-user=powerdns
gmysql-password=powerdns
gmysql-dnssec=no
allow-axfr-ips=172.16.0.1/32
allow-dnsupdate-from=172.16.0.1/32
allow-notify-from=172.16.0.1/32
dnsupdate=yes
master=no
slave=yes
superslave=yes
EOF

Note that the master could also expose an active API to receive updates via REST, which is the default configuration in ApisCP; you should definitely check it out.

Restarting the service shouldn't give an error anymore, and if you did everything correctly by following this guide, it will all run smoothly!

systemctl restart pdns

Conclusion

Congratulations! You've successfully set up a PowerDNS cluster. You can verify the replication by keeping an eye on the slave databases, which should automatically create new zones and insert records upon master notification.
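Another quick replication check is to compare SOA serials across the servers: if the master and both slaves answer with the same serial for a zone, the notifications are getting through. The IPs below are the illustrative ones from the prerequisites table, and the dig commands are printed as a dry run rather than executed:

```shell
# Dry run: dig queries to compare SOA serials on master and slaves.
# Matching serials on all three servers means replication is working.
queries=""
for ns in 172.16.0.1 192.168.0.1 192.168.0.2; do
  queries="${queries}dig +short SOA example.com @$ns
"
done
printf '%s' "$queries"
```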

Here are two one-liners which you may find helpful:

  • all zone renotify:
pdns_control list-zones --type master | sed '$d' | xargs -L1 pdns_control notify
  • zone cleanup:
pdns_control list-zones --type slave | sed '$d' | xargs -I {} sh -c "host -t SOA {} 172.16.0.1 | tail -n1 | grep -q 'has SOA record' || pdnsutil delete-zone {}"

Are you a PowerDNS guru? Share your insights on similar cluster configurations, or any tips you think could help, in the comments! It's not easy to get this right, and it would be awesome if you could help!

Hunting down the best home server
https://anatolinicolae.com/hunting-down-the-best-home-server/
Tue, 11 Jun 2019 12:00:00 +0000

Some time ago I went out and searched for months for an HP Microserver Gen8 to replace my day-to-day HDDs, which had been failing one too many times already.

As they've been discontinued for some years now, people sell them for a lot of money, but after a long search and plenty of Google Alerts I found a used one in good condition that was also cheap. 🙌

It turns out that pumping it up with a better CPU, 16GB of RAM and 2 SSDs is not enough for running a lot of VMs and Docker stuff. The space of a mini-ATX is not enough; we have to go bigger.

Procrastination

Then there's the YouTube algorithm, which kinda knows what you're into and somehow guessed my next build. I saw a Jarrod's Tech video of a 16-core, 128GB RAM beast suggested in my feed, and I knew it was going to be my next build. I had to have that shiny beast.

Looks like procrastinating is sometimes actually good.

This build is actually way better than my HP Microserver, mainly because you're not bound by the manufacturer's hardware limits, which for example wouldn't allow more than 8GB per memory slot.

Research

After enjoying Jarrod's amazing video I immediately sent it to a friend; it seemed to be the best upgrade I could possibly go for. And so I did.

I started researching the components, but demand seemed to be much the same as for the Microserver, which more and more people had been hunting for as a nice NAS that could handle Plex or similar software.

The motherboard was the main piece of the build, and unfortunately nobody was selling it on eBay or similar sites. A similar build with CPU and RAM was available on the site Jarrod suggested, but it was too expensive.

Continuing to monitor the market, I found a listing for the exact board I was looking for on an Italian website. I bought it for about €150 shipped, and the search for the other parts resumed.

CPU

The motherboard has two LGA2011 sockets, so I needed to find a CPU that supports dual-socket environments. There were a lot of comparison tables, a lot of opinions, and whatever else you can throw at it. Choosing one wasn't easy at all.

I wanted a CPU that could handle a lot of work, since the main use case was to have multiple VMs running on it. I also wanted it not to draw much power, so the final choice was an eBay listing of used Intel Xeon E5-2650s, at €50 each.

Power supply

I hadn't built a custom PC in a long time, so I thought any kind of PSU would be a good fit. WRONG! I found out later that you should actually add up the power consumption of all the system's components and get a PSU that can handle them all. I also wanted a better PSU with the best 80 Plus rating, so I went for a Seasonic SSR-650TR, a 650W Titanium-rated PSU that could handle my setup well enough.
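The sizing exercise is simple arithmetic. The wattages below are rough assumptions (TDPs and typical draws, not measurements): two E5-2650s at 95W TDP each, around 60W for board and RAM, 25W for the disks, 15W for fans, plus 30% headroom:

```shell
# Back-of-the-envelope PSU sizing for this build (all figures are rough
# assumptions, not measured values).
cpus=$((2 * 95))    # two Xeon E5-2650, 95W TDP each
board=60            # motherboard + RAM, estimated
disks=25            # 2 SSDs + spinning drives, estimated
fans=15             # case and CPU fans, estimated
total=$((cpus + board + disks + fans))
recommended=$((total * 130 / 100))   # add ~30% headroom
echo "estimated draw: ${total}W, recommended PSU: >= ${recommended}W"
# → estimated draw: 290W, recommended PSU: >= 377W
```

The 650W Seasonic sits comfortably above that estimate, with room for more drives later.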

Disks

Safe data was the main goal here. I already had 2x 500GB SSDs, but I still needed larger drives.

Seagate's IronWolf 8TB drives were my choice for the best quality/capacity ratio, and fortunately they had come down to about €200 each.

Case and cooling

Jarrod's case choice was really inspiring. I WANTED it to be lit, but again nobody was selling it for a relatively good price. eBay came to the rescue again and I was able to get it for around €150. It was awesome.

The case already came with a lot of fans, but I needed two more for the CPUs, since this wasn't a Gen8 with its kinda 👌 passive cooling. Again, Jarrod's choice of two Noctua NH-U12DX coolers was more than great.

Assembling

All the parts arrived with relatively fast shipping, and in a few weeks I had everything for the assembly.

It didn't take much to figure out where everything fit, and the system was up and running in about 2 hours.

It wasn't lighting up though, so I bought a LED strip. While mounting it, I noticed there was another Molex connector to plug in, whose purpose was to, guess what, power the integrated LEDs. 🤦‍♂️

Next steps

After powering everything up there’s another choice to make: the OS (coming soon).
