The post Server Security Hardening for Running WordPress appeared first on SpinupWP.
This is article 10 of 10 in the series “Hosting WordPress Yourself”
In chapter 3 you learned how to add HTTPS sites to your server, but there is more we can do to improve its security. In this chapter, we’ll continue strengthening our server by implementing best practices that protect both network access and web services from common attack vectors. This includes hardening SSH access, improving TLS configuration, configuring browser security headers, and reducing the risk of attacks such as XSS, clickjacking, and MIME sniffing.
Disabling PermitRootLogin and PasswordAuthentication already goes a long way towards protecting your server from unauthorized access. However, bad actors can still attempt to compromise your server through other means such as exploiting weak key exchange algorithms and ciphers. To help us protect ourselves against these sorts of exploits, we’ll use SSH-Audit to scan our server for potential SSH vulnerabilities.

A Standard Audit will suffice for our purposes, but you may consider using a custom Policy Audit should your environment require it.

As you can see, our server didn’t do too well based on the scan. Thankfully, SSH-Audit also provides us with all the necessary steps to address the issues highlighted in the audit. However, for your convenience, we’ll go through those steps below.
Start by removing the default SSH host keys from the server:
sudo rm /etc/ssh/ssh_host_*
Then, re-generate the server’s ED25519 and RSA SSH host keys:
sudo ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ""
sudo ssh-keygen -t rsa -b 4096 -f /etc/ssh/ssh_host_rsa_key -N ""
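If you’d like to see what these commands produce without touching /etc/ssh, here’s a throwaway run in /tmp. The `-N ""` flag generates the key without a passphrase, and `ssh-keygen -l` prints the fingerprint of a key (the file names here are purely for demonstration):

```shell
# Start clean, then generate a throwaway ED25519 key pair in /tmp
# (NOT the real host key)
rm -f /tmp/demo_host_ed25519_key /tmp/demo_host_ed25519_key.pub
ssh-keygen -t ed25519 -f /tmp/demo_host_ed25519_key -N "" -q

# Print the fingerprint of the newly generated public key
ssh-keygen -lf /tmp/demo_host_ed25519_key.pub
```

The same `-l` flag, pointed at `/etc/ssh/ssh_host_*.pub`, lets you verify the regenerated host keys on your server.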
Next, remove the small Diffie-Hellman keys from the moduli file:
sudo awk '$5 >= 3071' /etc/ssh/moduli | sudo tee /etc/ssh/moduli.safe > /dev/null
sudo mv /etc/ssh/moduli.safe /etc/ssh/moduli
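To make the awk filter above less magical: field 5 of each line in /etc/ssh/moduli is the Diffie-Hellman group size in bits (stored as the bit length minus one, which is why a 4096-bit group appears as 4095 and the cutoff for 3072-bit groups is 3071). A sketch against a throwaway file with made-up modulus values:

```shell
# Fake moduli file: timestamp, type, tests, trials, size, generator, modulus
cat > /tmp/moduli.demo <<'EOF'
20240101000000 2 6 100 1535 2 aa
20240101000000 2 6 100 2047 2 bb
20240101000000 2 6 100 3071 2 cc
20240101000000 2 6 100 4095 2 dd
EOF

# Keep only groups of 3072 bits and larger, as in the hardening step
awk '$5 >= 3071' /tmp/moduli.demo
# → only the 3071 and 4095 lines remain
```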
Now, create a new SSH configuration file inside /etc/ssh/sshd_config.d/:
sudo nano /etc/ssh/sshd_config.d/ssh_hardening.conf
And add the following lines to it:
# Restrict key exchange, cipher, and MAC algorithms, as per ssh-audit.com hardening guide.
KexAlgorithms sntrup761x25519-sha512@openssh.com,gss-curve25519-sha256-,curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256,gss-group16-sha512-,diffie-hellman-group16-sha512
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-gcm@openssh.com,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com
RequiredRSASize 3072
HostKeyAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
CASignatureAlgorithms sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
GSSAPIKexAlgorithms gss-curve25519-sha256-,gss-group16-sha512-
HostbasedAcceptedAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
PubkeyAcceptedAlgorithms sk-ssh-ed25519-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256
Lastly, restart the service for the changes to take effect:
sudo systemctl restart ssh.service
Running the scan again produces the following result:

Let’s first figure out where we’re at and see what we have to improve by having a couple of free security scanning services scan our site. You can check the status of your site’s security headers using SecurityHeaders.com, which is an excellent free resource created by cybersecurity expert Scott Helme. Our site did not do so well here either:

The SSL Server Test by Qualys SSL Labs gives us a good idea of how we might improve our SSL configuration. Our site did pretty well here, but there’s still room for improvement:

Although your site is configured to only handle HTTPS traffic via your SSL certificate from Let’s Encrypt, it still allows the client to attempt further HTTP connections. Adding the Strict-Transport-Security header to the server response will ensure all future connections enforce HTTPS. An article by Scott Helme gives a thorough overview of the Strict-Transport-Security header.
Let’s configure Nginx by opening your virtual host file:
sudo nano /etc/nginx/sites-available/globex.turnipjuice.media
Add the following directive below add_header Alt-Svc 'h3=":443"; ma=86400' always;:
##
# Security Headers
##
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
Hit CTRL + X followed by Y to save the changes.
You may be wondering why the 301 redirect is still needed if this header enforces HTTPS. The header only takes effect once the browser has seen it on a previous response, and it isn’t supported by older browsers such as IE 10 and below, so the redirect remains as a fallback.
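The max-age value of 31536000 used above is simply one year expressed in seconds, and you can confirm the header is being served with curl (substitute your own domain):

```shell
# One year in seconds — the max-age used in the header above
echo $((365 * 24 * 60 * 60))
# → 31536000

# Check a live site's response headers (hypothetical domain):
# curl -sI https://globex.turnipjuice.media | grep -i strict-transport-security
```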
Now let’s update our SSL configuration as per the recommendations of Mozilla’s SSL Configuration Generator on the “Intermediate” setting. Open the main nginx.conf file:
sudo nano /etc/nginx/nginx.conf
Find the ssl_protocols directive and replace that line with the following three lines:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
ssl_dhparam /etc/nginx/dhparam;
This ensures we aren’t allowing the use of old, insecure protocols and ciphers.
Now find the ssl_prefer_server_ciphers directive and update it to off:
ssl_prefer_server_ciphers off;
This allows the client to choose the most performant cipher suite for their hardware configuration from our list of supported ciphers above.
Hit CTRL + X followed by Y to save the file.
Now let’s download that dhparam file that we referenced in the SSL configuration update above and save it to the server:
sudo sh -c 'curl https://ssl-config.mozilla.org/ffdhe4096.txt > /etc/nginx/dhparam'
It may seem strange to download a key from the web instead of generating our own, but these are the standardized ffdhe4096 parameters published in RFC 7919, and there’s a good discussion that details why this is OK. Suffice it to say that you should use this key file.
Before reloading the Nginx configuration, ensure there are no syntax errors.
sudo nginx -t
If no errors are shown, reload the configuration.
sudo systemctl reload nginx.service
HTTPS connections are a lot more resource-hungry than regular HTTP connections, due to the additional handshake required when establishing a connection. However, it’s possible to cache the SSL session parameters, which avoids the full SSL handshake for subsequent connections. Just remember that security is the name of the game, so you don’t want cached sessions to remain reusable for too long. The one-day timeout recommended by Mozilla is a good starting point.
Open the main nginx.conf file:
sudo nano /etc/nginx/nginx.conf
Add the following directives within the http block under the SSL Settings:
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
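One point of possible confusion: the 10m in shared:SSL:10m is 10 megabytes of shared memory, not 10 minutes. The Nginx documentation estimates that one megabyte stores around 4,000 sessions, so this cache can hold roughly:

```shell
# Approximate session capacity of a 10 MB shared SSL cache,
# assuming ~4,000 sessions per megabyte per the Nginx docs
echo $((10 * 4000))
# → 40000
```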
Before reloading the Nginx configuration, ensure there are no syntax errors.
sudo nginx -t
If no errors are shown, reload the configuration.
sudo systemctl reload nginx.service
The most effective way to deal with XSS is to ensure that you correctly validate and sanitize all user input in your code, including that within the WordPress admin areas. But most input validation and sanitization is out of your control when you consider third-party themes and plugins. You can however reduce the risk of vulnerability to XSS attacks by configuring Nginx to provide an additional response header.
Let’s assume an attacker has managed to embed a malicious JavaScript file into the source code of your site or web application, maybe through a comment form or something similar. By default, the web browser will unknowingly load this external file and allow its contents to execute. Enter the Content Security Policy header, which allows you to define a whitelist of sources that are approved to load assets (JS, CSS, etc.). If the script isn’t on the approved list, it doesn’t get loaded.
Creating a Content Security Policy can require some trial and error, as you need to be careful not to block assets that should be loaded such as those provided by Google or other third party vendors. As such, we’ll define a fairly relaxed policy as a starting point.
Open the virtual host file:
sudo nano /etc/nginx/sites-available/globex.turnipjuice.media
Add the following to the Security Headers section:
add_header Content-Security-Policy "default-src 'self' https: data: 'unsafe-inline' 'unsafe-eval';" always;
This will block any non-HTTPS assets from loading.
As an example of a stricter policy, you could add the following instead:
add_header Content-Security-Policy "default-src 'self' https://*.google-analytics.com https://*.googleapis.com https://*.gstatic.com https://*.gravatar.com https://*.w.org data: 'unsafe-inline' 'unsafe-eval';" always;
This will only allow assets of the current domain and a few sources from Google and WordPress.org to load on the site.
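Wildcard sources like https://*.googleapis.com match any subdomain of that host but nothing else. Real CSP matching is done by the browser, but as a rough sketch of the idea, a shell glob behaves similarly (the hosts below are hypothetical):

```shell
# Rough illustration of CSP wildcard-source matching using a shell glob
# (this is just the intuition, not a real CSP evaluator)
csp_allows() {
  case "$1" in
    https://*.googleapis.com) echo allowed ;;
    *)                        echo blocked ;;
  esac
}

csp_allows https://fonts.googleapis.com   # → allowed
csp_allows https://evil.example.com       # → blocked
```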
While we’re on the topic of XSS protection, you may come across the X-XSS-Protection header in older security guides or configuration examples. In the past, it was common to see this header configured as follows:
X-XSS-Protection "1; mode=block"
This header controlled the built-in XSS filters that some browsers once shipped. Those filters have since been deprecated and removed from modern browsers, and in some cases could themselves be exploited to introduce vulnerabilities. For this reason, it’s important to explicitly disable the X-XSS-Protection header to prevent legacy browsers from attempting to apply outdated or unsafe filtering mechanisms. You can do this by adding the following line to the Security Headers section of the virtual host configuration:
add_header X-XSS-Protection "0" always;
This ensures that the header is set to 0, effectively disabling the deprecated filter and avoiding potential security risks.
Clickjacking is an attack which fools the user into performing an action which they did not intend to, and is commonly achieved through the use of iframes. An article by Troy Hunt has a thorough explanation of clickjacking attacks.
The most effective way to combat this attack vector is to completely disable frame embedding from third party domains. To do this, add the following directive below the X-XSS-Protection header:
add_header X-Frame-Options "SAMEORIGIN" always;
This will prevent all external domains from embedding your site directly into their own through the use of the iframe tag:
<iframe src="http://mydomain.com"></iframe>
MIME sniffing can expose your site to attacks such as “drive-by downloads.” The X-Content-Type-Options header counters this threat by ensuring only the MIME type provided by the server is honored. An article by Microsoft explains MIME sniffing in detail.
To disable MIME sniffing add the following directive:
add_header X-Content-Type-Options "nosniff" always;
The Referrer-Policy header allows you to control which information is included in the Referrer header when navigating from pages on your site. While referrer information can be useful, there are cases where you may not want the full URL passed to the destination server, for example, when navigating away from private content (think membership sites).
In fact, since WordPress 4.9, any requests from the WordPress dashboard automatically send a blank referrer header to external destinations. This makes those requests impossible to track when navigating away from the dashboard, and avoids broadcasting the fact that your site runs WordPress by never passing /wp-admin URLs to external domains.
We can take this a step further by restricting the referrer information for all pages on our site, not just the WordPress dashboard. A common approach is to pass only the domain to the destination server, so instead of:
https://myawesomesite.com/top-secret-url
The destination would receive:
https://myawesomesite.com
You can achieve this using the following policy:
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
A full list of available policies can be found over at MDN.
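The trimming described above amounts to stripping everything after the origin. A quick sketch in shell of what the destination sees under strict-origin-when-cross-origin, using the hypothetical URL from above:

```shell
url="https://myawesomesite.com/top-secret-url"

# Reduce a full URL to its origin (scheme + host), as the policy does
# for cross-origin navigation
scheme=${url%%://*}
host=${url#*://}
host=${host%%/*}
echo "$scheme://$host"
# → https://myawesomesite.com
```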
The Permissions-Policy header allows a site to enable and disable certain browser features and APIs. This allows you to manage which features can be used on your own pages and anything that you embed.
A Permissions Policy works by specifying a directive and an allowlist. The directive is the name of the feature you want to control and the allowlist is a list of origins that are allowed to use the specified feature. MDN has a full list of available directives and allowlist values. Each directive has its own default allowlist, which will be the default behavior if they are not explicitly listed in a policy.
You can specify several features at the same time by using a comma-separated list of policies. In the following example, we allow geolocation across all contexts, we restrict the camera to the current page and the specified domain, and we block the microphone across all contexts:
add_header Permissions-Policy 'geolocation=*, camera=(self "https://example.com"), microphone=()';
Download the complete set of Nginx config files including these security directives
That’s all of the suggested security headers implemented. Save and close the file by hitting CTRL+X followed by Y. Before reloading the Nginx configuration, ensure there are no syntax errors.
sudo nginx -t
If no errors are shown, reload the configuration.
sudo systemctl reload nginx.service
After reloading your site you may see a few console errors related to external assets. If so, adjust your Content-Security-Policy as required.
Now if we rescan our site’s security headers, we get a much better result:

And if we rescan our SSL configuration (you may need to click the little “Clear cache” link to rescan), we also see a small improvement:

That concludes this chapter. In the next chapter of our installing WordPress on Ubuntu 24.04 guide, we’ll move a WordPress site from one server to another with minimal downtime.
The post Configure Redis Object Cache and Nginx FastCGI Page Cache for WordPress appeared first on SpinupWP.
This is article 9 of 10 in the series “Hosting WordPress Yourself”
In the previous chapter, I walked you through the process of obtaining an SSL certificate, configuring Nginx for HTTPS, and creating your first database and WordPress site on your Linux server and LEMP stack. However, we need to do more if we want our sites to feel snappy. In this chapter I will guide you through the process of caching a WordPress site on the backend. Caching will increase throughput (requests per second) and decrease response times (improve load times).
I want to show you how this setup handles traffic prior to any caching. It’s difficult to simulate real web traffic, but it is possible to send a large number of concurrent requests to a server and track response times. This gives you a rough indication of the amount of traffic a server can handle, and provides a baseline for measuring the performance gains once you’ve implemented the optimizations.
The VPS I’ve set up for this series is a 2 GB / 1 Regular CPU DigitalOcean Droplet running Ubuntu 24.04 LTS. I’m using Loader.io to send an increasing number of concurrent users to the server over a 60-second period, starting with 1 concurrent user and scaling to 50 concurrent users by the end of the test.

The server was able to handle a total of 674 requests. You’ll see that as concurrent users increase, so does the site’s response time. This means the more visitors on the site at the same time, the slower it will load, which could eventually lead to timeouts. Based on the results, the server can theoretically handle 970,560 requests a day with an average response time of 2,144ms.
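To show where the daily figure comes from: the test window is 60 seconds, and there are 1,440 such windows in a day, so:

```shell
# 674 requests per 60-second window, extrapolated to 24 hours
# (60 * 24 = 1440 windows per day)
echo $((674 * 1440))
# → 970560
```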
Monitoring the server’s resource usage using the htop command, you can see that PHP and MySQL are using all of the CPU.

It’s time to optimize!
An object cache stores database query results so that instead of running the query again the next time the results are needed, the results are served from the cache. This greatly improves the performance of WordPress as there is no longer a need to query the database for every piece of data required to return a response.
Valkey is a fully open-source, high-performance key-value store and a drop-in alternative to Redis for object caching. Some providers (e.g. DigitalOcean) have switched from Redis to Valkey due to licensing changes and so we’re doing the same.
To install Valkey, simply run the following command:
sudo apt install valkey-server valkey-redis-compat -y
Now let’s make sure that Valkey starts when the server is rebooted:
sudo systemctl enable --now valkey-server.service
You could also install the Nginx Redis module on your server and have Valkey perform simple page caching, but for WordPress to use Valkey as an object cache, you need to install an object cache plugin. We’ll use Redis Object Cache by Till Krüss, one of the most popular object cache plugins, which works with Valkey as a drop-in Redis replacement.

Once installed and activated, go to Settings > Redis.

Click the Enable Object Cache button.

This is also the screen where you can flush the cache if required.
I’m not going to run the benchmarks again as the results won’t dramatically change. Although object caching reduces the average number of database queries on the front page from 22 to 2 (theme and plugin dependent), the database server is still being hit. Establishing a MySQL connection on every page request is one of the biggest bottlenecks within WordPress.
The benefit of object caching can be seen when you look at the average database query time, which has decreased from 2.1ms to 0.3ms. The average query times were measured using Query Monitor.
To see a big leap in performance and a big decrease in server resource usage, we must avoid a MySQL connection and PHP execution altogether.
Although an object cache can go a long way to improving your WordPress site’s performance, there is still a lot of unnecessary overhead in serving a page request. For many sites, content is rarely updated. It’s therefore inefficient to load WordPress, query the database, and build the desired page on every single request to the web server. Instead, you should serve a static HTML version of the requested page.
Nginx FastCGI cache allows you to automatically cache a static HTML version of a page using the FastCGI module. Any subsequent requests to the page will receive the cached HTML page version without ever hitting PHP or MySQL.
Setup requires a few changes to your Nginx server block. If you would find it easier to see the whole thing at once, feel free to download the complete Nginx config kit now. Otherwise, open your virtual host file:
sudo nano /etc/nginx/sites-available/globex.turnipjuice.media
Add the following line before the server block, changing the fastcgi_cache_path and keys_zone values to match your own site. You’ll notice that I store the cache within the site’s directory, on the same level as the logs and public directories.
fastcgi_cache_path /home/abe/globex.turnipjuice.media/cache levels=1:2 keys_zone=globex.turnipjuice.media:100m inactive=60m;
You need to instruct Nginx to not cache certain pages. The following will ensure admin screens and pages for logged in users are not cached, plus a few others. This should go above the add_header Alt-Svc 'h3=":443"; ma=86400' always; directive.
set $skip_cache 0;
# POST requests should always go to PHP
if ($request_method = POST) {
set $skip_cache 1;
}
# URLs containing query strings should always go to PHP
if ($query_string != "") {
set $skip_cache 1;
}
# Don't cache uris containing the following segments
if ($request_uri ~* "/wp-admin/|/wp-json/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
set $skip_cache 1;
}
# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
set $skip_cache 1;
}
add_header Fastcgi-Cache $upstream_cache_status;
The last directive adds an extra header to server responses so that you can easily determine whether a request is being served from the cache. Next, within the second location block (i.e. the one with PHP-FPM) add the following directives:
fastcgi_cache globex.turnipjuice.media;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache_valid 60m;
Download the complete set of Nginx config files
Notice how the fastcgi_cache directive matches the keys_zone set before the server block. In addition to changing the cache location, you can also specify the cache duration by replacing 60m with the desired duration in minutes. The default of 60 minutes is a good starting point for most people.
If you modify the cache duration, you should consider updating the inactive parameter in the fastcgi_cache_path line as well. The inactive parameter specifies the length of time cached data is allowed to live in the cache without being accessed before it is removed.
Hit CTRL + X followed by Y to save the changes.
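Nginx’s ~* operator performs a case-insensitive regular-expression match, so you can sanity-check the URI exclusions above locally with grep -Ei before reloading (the sample URIs are hypothetical):

```shell
# The same pattern used in the skip-cache conditional above
pattern='/wp-admin/|/wp-json/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml'

check() {
  if printf '%s' "$1" | grep -Eiq "$pattern"; then
    echo "skip cache"
  else
    echo "cacheable"
  fi
}

check /wp-admin/options-general.php   # → skip cache
check /sitemap_index.xml              # → skip cache
check /2024/05/hello-world/           # → cacheable
```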
Next, you need to modify your nginx.conf file:
sudo nano /etc/nginx/nginx.conf
Add the following below the gzip settings:
##
# Cache Settings
##
fastcgi_cache_key "$scheme$request_method$http_host$request_uri";
This directive instructs the FastCGI module on how to generate key names, which will be used to serve and store the cache.
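Each cached page ends up on disk as a file named after the MD5 hash of this key, and the levels=1:2 setting from the fastcgi_cache_path directive splits that hash into subdirectories. A sketch with a hypothetical key (requires bash and md5sum):

```shell
# Build a cache key the way fastcgi_cache_key does:
# $scheme$request_method$http_host$request_uri
key="httpsGETglobex.turnipjuice.media/sample-page/"

# Nginx stores the page under the MD5 of the key ...
hash=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)

# ... in directories taken from the end of the hash: levels=1:2 means
# the last character, then the two characters before it
echo "cache/${hash: -1}/${hash: -3:2}/${hash}"
```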
Hit CTRL + X followed by Y to save the changes. Now restart Nginx:
sudo systemctl restart nginx.service
Now when you visit the site and inspect the response headers, you should see the extra Fastcgi-Cache header.

The possible return values are MISS, BYPASS, EXPIRED, STALE, UPDATING, REVALIDATED, and HIT.
The final step is to install the Nginx Cache plugin, also by Till Krüss. This will automatically purge the FastCGI cache of specific cache files whenever specific WordPress content changes. You can manually purge the entire cache from the top bar in the WordPress dashboard.
You can also purge the entire cache by SSH’ing into your server and removing all the files in the cache folder:
sudo rm -Rf /home/abe/globex.turnipjuice.media/cache/*
This is especially handy when your WordPress dashboard becomes inaccessible, like if a redirect loop has been cached.
Once installed, navigate to Tools > Nginx Cache and define your cache zone path. This should match the value you specified for the fastcgi_cache_path directive in your Nginx hosts file.

Although page caching is desired for the majority of front-end pages, there are times when it can cause issues, particularly on ecommerce sites. For example, in most cases you shouldn’t cache the shopping cart, checkout, or account pages as they are generally unique for each customer. You wouldn’t want customers seeing the contents of other customers’ shopping carts!
Additional cache exclusions can be added using conditionals and regular expressions (regex). The following example will work for the default pages (Cart, Checkout, and My Account) created by WooCommerce:
# Don't use the cache for WooCommerce pages
if ($request_uri ~* "/cart/|/checkout/|/my-account/") {
set $skip_cache 1;
}
You may also want to skip the cache when a user has items in their cart:
# Don't use the cache when cart contains items
if ($http_cookie ~* "woocommerce_items_in_cart") {
set $skip_cache 1;
}
Open the Nginx configuration file for your site:
sudo nano /etc/nginx/sites-available/globex.turnipjuice.media
Add these new exclusions to the server block, directly below the existing conditionals. Once you’re happy, hit CTRL + X followed by Y to save the changes. Then test the configuration:
sudo nginx -t
If all is good, reload Nginx:
sudo systemctl reload nginx.service
You should now see that the “fastcgi-cache” response header is set to “BYPASS” when visiting any of the WooCommerce pages and/or if you have an item in your cart.
WooCommerce isn’t the only plugin to create pages that you should exclude from the FastCGI cache. Plugins such as Easy Digital Downloads, WP eCommerce, BuddyPress, and bbPress all create pages that you will need to exclude. Each plugin should have documentation on how to add caching rules to exclude its pages from caching.
For some strategies on how to continue using page caching on pages containing just a little bit of content that’s personalized to the user, check out Full Page Caching With Personalized Dynamic Content.
With the caching configured, it’s time to perform a final benchmark. This time I’m going to up the maximum concurrent users from 50 to 750.

Not bad at all! The server was able to handle a total of 162,797 requests with an average response time of 136ms.
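Compared with the 674 requests the uncached server managed in the first benchmark, that’s roughly a 240-fold increase in throughput:

```shell
# Cached requests handled, divided by the uncached baseline
echo $((162797 / 674))
# → 241
```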
The server’s resource usage looks a little different too. Nginx is now solely causing the increased CPU usage spikes.

Performance optimization is much more difficult on highly dynamic sites where the content updates frequently, such as those that use bbPress or BuddyPress.
In these situations it’s required to disable page caching on the dynamic sections of the site (the forums for example). This is achieved by adding additional rules to the skip cache section within the Nginx server block. This will force those requests to always hit PHP and generate the page on the fly. Doing so will often mean you have to scale hardware sooner, thus increasing server costs. Another option is to implement micro caching.
At this point you may be wondering why we chose this route instead of installing a plugin such as WP Rocket, W3 Total Cache or WP Super Cache. First, not all plugins include an object cache. For those that do, you will often need to install additional software on the server (Redis for example) in order to take full advantage of the feature. Second, caching plugins don’t perform as well as server-based caching.
One significant way to reduce server requests is to use a plugin like WP Offload Media to move files that you upload to the server through the WordPress Media Library to cloud storage. The plugin will automatically rewrite the media URLs to serve the files from cloud storage.
WP Offload Media also allows you to configure a CDN to serve your media much faster, which means your pages load faster. This can lead to increased conversions and may even help improve your Google search engine rankings. Offloading your media will also mean your site’s media files don’t use up your server disk space.
Once you install the WP Offload Media Lite plugin, configure your storage provider settings. The plugin will guide you through this for the cloud storage providers it supports (Amazon S3, DigitalOcean Spaces, and Google Cloud Storage).

After configuring your storage settings, you can adjust your Delivery settings to take advantage of the benefits of a CDN.

That concludes this tutorial on caching and speed improvements. In the next chapter we’ll dig into cron and email sending.
The post Automated Backups to Amazon S3 appeared first on SpinupWP.
This is article 8 of 10 in the series “Hosting WordPress Yourself”
In the previous chapter, I walked you through how to configure a WordPress server-level cron and set up outgoing email for your Ubuntu server. In this chapter, we’ll configure and automate backups for your sites.
Performing backups on a regular basis is essential. It’s inevitable that at some point in the future you will need to restore backup data – whether that’s due to user error, corruption, or a security breach. You never know what could go wrong, so having a recent backup on hand can really make your life easier as a systems administrator.
There are generally two types of backups we recommend you perform. The first is a full system backup and the second is a backup of each individual site hosted on the server.
Full system backups are best performed by your VPS provider, but they are not usually enabled by default. Most VPS providers, including DigitalOcean, Akamai/Linode, Google Cloud, and AWS, offer this service for a fee.
A full system backup is generally reserved for situations where you need to recover an entire server. For example, in the event of a rare, catastrophic failure where all the data on your server was lost. You won’t want to restore the entire system if only a single site needs restoration.
A single site backup saves the database and all files of the site, allowing you to restore just that site. For a WordPress site, you might think all you need to back up are the database and the uploads directory. After all, WordPress core files, themes, and plugins can be re-downloaded as needed. Maybe you’re even thinking of skipping backups for your uploads directory if you’re using a plugin like WP Offload Media, as the files are automatically sent to your configured cloud storage provider when added to the media library. This approach to backups can lead to trouble down the line.
There are two reasons we recommend including all data and files in a single site backup.
First, some WordPress plugins may have functionality that stores files to the uploads directory, often in a separate location from the WordPress Media Library directory structure. A common example is forms plugins that allow users to upload files from the frontend instead of the backend. Your media offloading solution won’t move these files to the offsite storage provider. If you exclude the uploads directory from your backup, you won’t be able to restore them.
Second, if you only back up your database and uploads directory, you’ll have to manually download the WordPress core files and any themes or plugins. This is not ideal if you are hosting high traffic sites, like ecommerce, membership, or community sites. You need to recover from a failure quickly, or you will lose business.
A weekly backup should suffice for sites that aren’t updated often, but you may want to perform them more frequently. For example, you may want to perform backups for an ecommerce site every few hours, depending on how often new orders are received.
To set up backups for a site, first, create a new backups directory in the site’s root directory. This will store all your backup files.
cd /home/abe/globex.turnipjuice.media
mkdir backups
If you’ve been following the rest of this guide, the backups directory will sit alongside the existing cache, logs, and public directories.
abe@pluto:~/globex.turnipjuice.media$ ls -l
total 16
drwxrwxr-x 2 abe abe  4096 Sep 28 20:50 backups
drwx------ 5 abe root 4096 Sep 28 20:21 cache
drwxr-xr-x 2 abe abe  4096 Sep 23 02:41 logs
drwxr-xr-x 5 abe abe  4096 Sep 28 20:38 public
Next, we’ll create a new shell script called backup.sh.
nano backup.sh
Paste the following contents into the file.
#!/bin/bash
NOW=$(date +%Y%m%d%H%M%S)
SQL_BACKUP=${NOW}_database.sql
FILES_BACKUP=${NOW}_files.tar.gz
DB_NAME=$(sed -n "s/define( *'DB_NAME', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_USER=$(sed -n "s/define( *'DB_USER', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_PASSWORD=$(sed -n "s/define( *'DB_PASSWORD', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_HOST=$(sed -n "s/define( *'DB_HOST', *'\([^']*\)'.*/\1/p" wp-config.php)
# Backup database
mysqldump --add-drop-table -u"$DB_USER" -p"$DB_PASSWORD" -h"$DB_HOST" "$DB_NAME" > ../backups/"$SQL_BACKUP"
# Compress the database dump file
gzip ../backups/"$SQL_BACKUP"
# Backup the entire public directory
tar -zcf ../backups/"$FILES_BACKUP" .
What this script does:
- Creates a timestamp variable (NOW), a SQL filename variable (SQL_BACKUP) which includes the current date in the file name, and an archive filename variable (FILES_BACKUP), which also includes the current date.
- Extracts the database credentials from the wp-config.php file and sets them up as variables to use in the mysqldump command, which exports the database to the SQL_BACKUP file in the backups directory. It also ensures that the SQL file includes the drop table MySQL command. This is useful when using this file to restore one database over another that has existing tables with the same name.
- Uses gzip to compress the SQL file so that it takes up less space. The resulting compressed filename looks something like this: 20240428191120_database.sql.gz.
- Creates a compressed archive of the entire public directory in the backups directory. The resulting archive filename looks something like this: 20240428191120_files.tar.gz.

You will also notice that any time we’re referring to the location of the backups directory, we’re using ../. This Linux file system syntax effectively means ‘go one directory above the current directory’, which we’re doing because we’re running the script from inside the public directory. We’ll also need to be aware of this when we set up the scheduled cron job later on.
Hit CTRL + X followed by Y to save the file.
The next step is to ensure the newly created script has execute permissions so that it can be run by a server cron job.
chmod u+x backup.sh
The last step is to schedule the backup script to run at a designated time. Begin by opening the crontab file for the current user.
crontab -e
Add the following line to the end of the file.
0 5 * * 0 cd /home/abe/globex.turnipjuice.media/public/; /home/abe/globex.turnipjuice.media/backup.sh >/dev/null 2>&1
This cron job will change the current directory to the site’s public directory, and then run the backup.sh script in the context of that directory, every Sunday morning at 05:00, server time.
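As a quick refresher, a crontab entry has five date/time fields followed by the command to run. A commented sketch of the schedule used above:

```
# ┌───────── minute (0–59)
# │ ┌─────── hour (0–23)
# │ │ ┌───── day of month (1–31)
# │ │ │ ┌─── month (1–12)
# │ │ │ │ ┌─ day of week (0–7, where both 0 and 7 mean Sunday)
# │ │ │ │ │
  0 5 * * 0  <command to run>
```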
If you would prefer to run the backup daily, you can edit the last cron date/time field.
0 5 * * * cd /home/abe/globex.turnipjuice.media/public/; /home/abe/globex.turnipjuice.media/backup.sh >/dev/null 2>&1
Just remember, whichever option you use, you’ll need to add this crontab entry for each individual site you wish to back up.
A little note about WP-CLI. You probably know that you could use the WP-CLI wp db export command, especially as we installed WP-CLI back in Chapter 2 and we use it in many of our WordPress tutorials.
However, it’s better to use mysqldump instead of WP-CLI, because it reduces dependencies and risk. For example, let’s say you update to a new version of PHP, but WP-CLI doesn’t work with that version. Your backups will be broken.
Over time, this backup process is going to create a bunch of SQL and file archives in the backups directory, which can be a common reason for running out of server disk space. Depending on the data on your site, and how often it’s updated, you probably aren’t going to need to keep backups older than a month. So it would be a good idea to clean up old site backups you don’t need.
To remove old backups, add a line to the bottom of the backup.sh script.
# Remove backup files that are a month old
rm -f ../backups/"$(date +%Y%m%d --date='1 month ago')"*.gz
This line uses a date command to get the date one month ago and creates a filename string with the wildcard character *. This will match any filename starting with the date of one month ago and ending in .gz, and removes those files. For example, if the script is running on July 24th, it will remove any backup files created on June 24th. So long as your script runs every day, it will always remove backups from a month ago.
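You can sanity-check this removal pattern locally with a throwaway directory, assuming GNU date (as used in the script). The filenames here are hypothetical:

```shell
# Simulate the cleanup in a scratch directory (filenames are hypothetical).
tmp=$(mktemp -d)
old=$(date +%Y%m%d --date='1 month ago')
now=$(date +%Y%m%d)
touch "$tmp/${old}120000_database.sql.gz" "$tmp/${now}120000_database.sql.gz"
# The same removal pattern as the script, pointed at the scratch directory:
rm -f "$tmp/${old}"*.gz
ls "$tmp"   # only today's backup remains
```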
The updated backup script looks like this:
#!/bin/bash
NOW=$(date +%Y%m%d%H%M%S)
SQL_BACKUP=${NOW}_database.sql
FILES_BACKUP=${NOW}_files.tar.gz
DB_NAME=$(sed -n "s/define( *'DB_NAME', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_USER=$(sed -n "s/define( *'DB_USER', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_PASSWORD=$(sed -n "s/define( *'DB_PASSWORD', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_HOST=$(sed -n "s/define( *'DB_HOST', *'\([^']*\)'.*/\1/p" wp-config.php)
# Backup database
mysqldump --add-drop-table -u"$DB_USER" -p"$DB_PASSWORD" -h"$DB_HOST" "$DB_NAME" > ../backups/"$SQL_BACKUP" 2>&1
# Compress the database dump file
gzip ../backups/"$SQL_BACKUP"
# Backup the entire public directory
tar -zcf ../backups/"$FILES_BACKUP" .
# Remove backup files that are a month old
rm -f ../backups/"$(date +%Y%m%d --date='1 month ago')"*.gz
One problem with the WordPress site backups we’ve just set up is that the backup files still reside on your VPS. If the server goes down, it will take the backups with it. Therefore, it’s a good idea to store your individual site backups somewhere other than your server. One great option for this is to move them to an Amazon S3 bucket.
First, we’ll need to create a new S3 bucket to hold our backups.
Log in to the AWS Console and navigate to Services => S3. Ensure the region selected in the navigation bar at the top of the screen is the region where you’d like to create the bucket. Click the Create bucket button and enter a name for the bucket. You can leave the rest of the settings as their defaults.

Scroll down and click the Create bucket button to create the bucket.
Now that we have a bucket, we need a user with permission to upload to it. For step-by-step instructions with screenshots, see our Amazon S3 Storage Provider doc, but the short version is:
Be sure to hang onto your Access Keys as you will need them later.
Amazon offers an official set of command line tools for working with all of its services, including S3. They also provide detailed installation instructions, but the easiest way to install the AWS CLI on Ubuntu is with the following command:
sudo snap install aws-cli --classic
Once the AWS CLI is installed you can run aws from your command line terminal.
To upload your backups to Amazon S3, we first need to configure the AWS CLI with the Access Keys of the user we created earlier, by running the aws configure command. Set the default region to the same region you chose for the S3 bucket and leave the default output format.
aws configure
abe@pluto:~$ aws configure
AWS Access Key ID [None]: AKIA3BPKIAF3MEJJNHXQ
AWS Secret Access Key [None]: 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
Default region name [None]: us-east-2
Default output format [None]:
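Behind the scenes, aws configure saves these values to plain-text files in your home directory. A sketch of the result, assuming the default profile (the paths and file layout are the AWS CLI defaults):

```ini
# ~/.aws/credentials (created by `aws configure`)
[default]
aws_access_key_id = AKIA3BPKIAF3MEJJNHXQ
aws_secret_access_key = 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

# ~/.aws/config
[default]
region = us-east-2
```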
Once this is done, it’s straightforward to upload a file to our S3 bucket, using the aws s3 cp command:
aws s3 cp ../backups/20240428191120_database.sql.gz s3://backups-globex-turnipjuice-media/ --storage-class STANDARD_IA
Now we need to add this to our backup script. At the bottom of the file, add the following to upload both the SQL backup and the files backup to S3 storage:
# Copy the files to the S3 bucket
aws s3 cp ../backups/"$SQL_BACKUP".gz s3://backups-globex-turnipjuice-media/ --quiet --storage-class STANDARD_IA
aws s3 cp ../backups/"$FILES_BACKUP" s3://backups-globex-turnipjuice-media/ --quiet --storage-class STANDARD_IA
Now that the basics of the backup script are in place, let’s review the script and see if we can improve it. It would be great if the script was more generic and could be used for any site.
To do that, we’ll make two improvements: accept the S3 bucket name as an argument passed to the script, and check that the backups folder exists before doing anything else. Here is the updated version of the backup script, with those additions in place.
#!/bin/bash
# Get the bucket name from an argument passed to the script
BUCKET_NAME=${1-''}
if [ ! -d ../backups/ ]
then
echo "This script requires a 'backups' folder 1 level up from your site files folder."
exit 1
fi
NOW=$(date +%Y%m%d%H%M%S)
SQL_BACKUP=${NOW}_database.sql
FILES_BACKUP=${NOW}_files.tar.gz
DB_NAME=$(sed -n "s/define( *'DB_NAME', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_USER=$(sed -n "s/define( *'DB_USER', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_PASSWORD=$(sed -n "s/define( *'DB_PASSWORD', *'\([^']*\)'.*/\1/p" wp-config.php)
DB_HOST=$(sed -n "s/define( *'DB_HOST', *'\([^']*\)'.*/\1/p" wp-config.php)
# Backup database
mysqldump --add-drop-table -u"$DB_USER" -p"$DB_PASSWORD" -h"$DB_HOST" "$DB_NAME" > ../backups/"$SQL_BACKUP" 2>&1
# Compress the database dump file
gzip ../backups/"$SQL_BACKUP"
# Backup the entire public directory
tar -zcf ../backups/"$FILES_BACKUP" .
# Remove backup files that are a month old
rm -f ../backups/"$(date +%Y%m%d --date='1 month ago')"*.gz
# Copy files to S3 if bucket given
if [ ! -z "$BUCKET_NAME" ]
then
aws s3 cp ../backups/"$SQL_BACKUP".gz s3://"$BUCKET_NAME"/ --quiet --storage-class STANDARD_IA
aws s3 cp ../backups/"$FILES_BACKUP" s3://"$BUCKET_NAME"/ --quiet --storage-class STANDARD_IA
fi
Finally, it would be useful to move the backup.sh script out of the site directory. Because the script no longer depends on where it’s located, you could even move it to the server’s /usr/local/bin directory and make it available across the entire server. For our purposes, we’ll just move it to a scripts directory in the current user’s home directory.
mkdir /home/abe/scripts
mv /home/abe/globex.turnipjuice.media/backup.sh /home/abe/scripts/
In the cron job, we’ll update the path to the script and include the bucket name to copy the files to S3 like this:
0 5 * * * cd /home/abe/globex.turnipjuice.media/public/; /home/abe/scripts/backup.sh backups-globex-turnipjuice-media >/dev/null 2>&1
If you don’t want to copy files to S3, you would omit the bucket name:
0 5 * * * cd /home/abe/globex.turnipjuice.media/public/; /home/abe/scripts/backup.sh >/dev/null 2>&1
In our S3 commands above, you may have noticed --storage-class STANDARD_IA. This tells S3 to use the Standard-Infrequent Access storage class, which is intended for infrequently accessed data and is cheaper than the Standard storage class.
If you’re planning to keep backups for at least 90 days, you may also consider the Glacier Instant Retrieval (GLACIER_IR) storage class. And if you’re planning to keep backups beyond 180 days, you might consider the Glacier Deep Archive (DEEP_ARCHIVE) storage class. Be mindful of the retrieval times for each storage class, however; it can take up to 12 hours to retrieve a backup from Deep Archive storage, for example. If you read the documentation for the aws s3 cp command, you will see that all you need to do to use a Glacier storage class is change the --storage-class option.
Check out the Amazon S3 Storage Classes page on the AWS site for more details and more storage class options.
Wouldn’t it be great if you could keep the most recent backups in Standard-IA storage, and then move them to Glacier after a set number of days?
With Amazon S3 Lifecycle rules, you can configure your S3 bucket to transition your backup files from one storage class to another and even set expiration dates on them. The expiration option is great for cleaning outdated backups from your S3 bucket, saving you the cost of keeping these files around forever.
We’re going to configure an S3 Lifecycle rule that transitions the backup files to Glacier after 30 days and deletes them after one year. You might want to increase/decrease these values, depending on your requirements. It’s also worth noting that once an object has been moved to the Glacier storage class, there is a minimum storage duration of 90 days. This means if you delete any item in Glacier storage that’s been there for less than 90 days, you will still be charged for the 90 days.
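If you prefer the command line, the same rule can also be expressed as a Lifecycle configuration document and applied with aws s3api put-bucket-lifecycle-configuration --bucket your-bucket --lifecycle-configuration file://lifecycle.json. A sketch of the equivalent JSON (the rule ID is hypothetical):

```json
{
  "Rules": [
    {
      "ID": "backups-transition-and-expire",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER_IR" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

The empty Filter applies the rule to all objects in the bucket, matching the scope we’ll choose in the console below.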
To create an S3 Lifecycle rule, access your bucket in the AWS management console. If you have quite a few buckets, you can use the search box to filter by bucket name.

Click on the bucket name in the list to view the bucket details, then click on the Management tab. Click on either of the Create lifecycle rule buttons.

Give the rule a name, and then choose the Apply to all objects in the bucket scope. Tick the checkbox to acknowledge that you understand that this rule will apply to all objects in the bucket.
Under Lifecycle rule actions, tick to select the specific actions you want to apply. We want to use the Move current versions of objects between storage classes action and the Expire current versions of objects action.

We’re configuring both actions in one Lifecycle rule. However, there is nothing stopping you from creating one rule for the transition and another for the expiration.
The final step is to configure each of the actions.

For the transition rule I’ve selected Glacier Instant Retrieval for the “Choose storage class transitions” field and 30 for the “Days after object creation” field. This configuration will move the backup files to the Glacier storage class 30 days after they are copied to the bucket.
For the expiration rule I’ve set 365 as the value for the “Days after object creation” field, which means it will expire any objects in the bucket after one year.

The bottom of the Lifecycle rule configuration page shows an overview of the actions you’ve configured. As you can see, current versions of objects are uploaded on day 0, moved to Glacier Instant Retrieval on day 30, and expired on day 365.
Click the Save button once you’re happy with your rules. If you’ve configured the rule correctly, after 30 days, you’ll see your backup files have moved from the Standard-IA storage class to Glacier-IR.
So there you have it, a fairly straightforward setup to backup your WordPress site and store it remotely. You may also want to consider using the WP Offload Media plugin to copy files to S3 as they are uploaded to the Media Library. Not only can you save disk space by storing those files in S3 instead of your server, but you can configure Amazon CloudFront or another CDN to deliver them very fast. You can also enable versioning on the bucket so that all your files are restorable in case of accidental deletion.
That concludes this chapter. In the next chapter, we’ll improve the security of our server with tweaks to the Nginx configuration.
The post Automated Backups to Amazon S3 appeared first on SpinupWP.
The post Configure Nginx to Serve WordPress Over HTTPS appeared first on SpinupWP.
This is article 7 of 10 in the series “Hosting WordPress Yourself”
In the previous chapter, I showed you how to install PHP 8.4, Nginx, WP-CLI, and MySQL, which together form the foundation of a working LEMP stack. In this chapter, I will guide you through the process of deploying your first HTTPS-enabled WordPress site with HTTP/3 support.
HTTPS is the secure version of the HTTP protocol, adding encryption to protect communication between a server and a client. It ensures that all data sent between the devices is encrypted and that only the intended recipient can decrypt it. Without HTTPS any data transmitted will be sent in plain text, allowing anyone who might be eavesdropping to read the information.
HTTPS is especially important on sites that process credit card information, but it has gained widespread adoption and very few sites now run without it. Google also considers it a ranking factor in search results.
In the previous chapter, you may recall we configured a basic “catch-all” server block to handle unmatched requests:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 444;
}
Now that we’re enabling HTTPS, we’ll need to extend this default block to also handle secure connections. To do that, add the following lines below listen [::]:80 default_server;:
listen 443 ssl default_server deferred;
listen [::]:443 ssl default_server deferred;
This ensures that Nginx can gracefully handle incoming HTTPS traffic, even for requests to unknown domains.
Because this default server isn’t tied to any particular domain, we’ll generate a self-signed SSL certificate for it. First, create a dedicated directory to store the certificate and key:
sudo mkdir -p /etc/nginx/ssl/default
Then generate a long-lived self-signed SSL certificate using the following command:
sudo openssl req -x509 -newkey rsa:4096 -days 36500 -keyout /etc/nginx/ssl/default/privkey.pem -out /etc/nginx/ssl/default/cert.pem -subj "/CN=default" -nodes
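If you want to confirm what was generated, you can inspect the certificate with openssl. The sketch below does this against a throwaway certificate in a scratch directory, assuming the openssl CLI is installed (it ships with Ubuntu); it uses a smaller 2048-bit key so it runs quickly, whereas the real command above uses rsa:4096:

```shell
# Generate a throwaway self-signed certificate in a scratch directory.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -days 36500 \
  -keyout "$tmp/privkey.pem" -out "$tmp/cert.pem" \
  -subj "/CN=default" -nodes 2>/dev/null
# Inspect the subject and validity period of the generated certificate:
openssl x509 -in "$tmp/cert.pem" -noout -subject -dates
```

On the server, you would point the same x509 inspection command at /etc/nginx/ssl/default/cert.pem instead.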
Finally, reference the certificate and key by adding the following lines below server_name _;:
ssl_certificate /etc/nginx/ssl/default/cert.pem;
ssl_certificate_key /etc/nginx/ssl/default/privkey.pem;
Your completed default server block should look like this:
server {
listen 80 default_server;
listen [::]:80 default_server;
listen 443 ssl default_server deferred;
listen [::]:443 ssl default_server deferred;
server_name _;
ssl_certificate /etc/nginx/ssl/default/cert.pem;
ssl_certificate_key /etc/nginx/ssl/default/privkey.pem;
return 444;
}
This configuration allows Nginx to properly accept and terminate HTTPS connections that don’t match any existing site, returning a silent 444 response rather than triggering SSL handshake errors. It helps prevent confusing browser warnings and ensures unmatched HTTPS requests are handled securely and silently.
HTTP/3 is the latest version of the Hypertext Transfer Protocol, designed to improve performance, reliability, and security on the web. Unlike HTTP/2, which runs over TCP, HTTP/3 is built on QUIC, a transport protocol that uses UDP to reduce latency and speed up connections, especially on unreliable or high-latency networks.
Because QUIC includes built-in encryption and faster connection establishment through 0-RTT (Zero Round Trip Time) handshakes, HTTP/3 enables quicker page loads and smoother browsing experiences. It also handles network changes more gracefully, like switching from Wi-Fi to mobile data without interrupting active connections.
To enable HTTP/3 support in Nginx, we’ll need to extend the default catch-all server block to also listen for QUIC traffic on UDP port 443. Simply add the following lines below listen [::]:443 ssl default_server deferred;:
listen 443 quic reuseport default_server;
listen [::]:443 quic reuseport default_server;
Your completed default server block should now look like this:
server {
listen 80 default_server;
listen [::]:80 default_server;
listen 443 ssl default_server deferred;
listen [::]:443 ssl default_server deferred;
listen 443 quic reuseport default_server;
listen [::]:443 quic reuseport default_server;
server_name _;
ssl_certificate /etc/nginx/ssl/default/cert.pem;
ssl_certificate_key /etc/nginx/ssl/default/privkey.pem;
return 444;
}
In Chapter 1, we chose a domain name for our server and set it up as a hostname by configuring the DNS to point at the IP address of our server. Now we need to do something similar for the site we’re about to set up.
I’ve chosen the domain name globex.turnipjuice.media for my site and have created CNAME records for both globex.turnipjuice.media and www.globex.turnipjuice.media in my DNS, pointing to the server’s hostname.

It’s good practice to use CNAME records here instead of A records so that if you ever need to update the IP address of your server in the future, you only need to update one record.
Now let’s install Certbot, the free, open source tool for managing Let’s Encrypt certificates:
sudo apt install software-properties-common
sudo add-apt-repository universe
sudo apt update
sudo apt install certbot python3-certbot-nginx
To obtain a certificate, you can now use the Nginx Certbot plugin, by issuing the following command. The certificate can cover multiple domains (100 maximum) by appending additional -d flags.
sudo certbot --nginx certonly -d globex.turnipjuice.media -d www.globex.turnipjuice.media
After entering your email address and agreeing to the terms and conditions, the Certbot client will generate the requested certificate. Make a note of where the certificate file fullchain.pem and key file privkey.pem are created, as you will need them later.
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/globex.turnipjuice.media/fullchain.pem
Key is saved at: /etc/letsencrypt/live/globex.turnipjuice.media/privkey.pem
Certbot will handle renewing all your certificates automatically, but you can test automatic renewals with the following command:
sudo certbot renew --dry-run
Now we need to set up a server block so that Nginx knows how to deal with requests for these domains. By default, our Nginx configuration will drop any unmatched connections it receives, as you created a catch-all server block in the previous chapter. This ensures that the server only handles traffic for domain names that you explicitly define.
When we went through the process of installing Nginx, you may remember we created an info.php file in the /var/www/html directory, as that is the default document root that Nginx configures. However, we want a more manageable directory structure for our WordPress sites.
If you’re not already there, navigate to your home directory.
cd ~/
For simplicity’s sake, all of the sites that you host are going to be located in your home directory and have the following structure:
abe@pluto:~$ ls -l ~/globex.turnipjuice.media/
total 8
drwxr-xr-x 2 abe abe 4096 Apr  6 14:02 logs
drwxr-xr-x 2 abe abe 4096 Apr  6 14:02 public
The logs directory is where the Nginx access and error logs will be stored, and the public directory will be the site’s root directory, which will be publicly accessible.
Begin by creating the required directories and setting the correct permissions:
mkdir -p globex.turnipjuice.media/logs globex.turnipjuice.media/public
chmod -R 755 globex.turnipjuice.media
With the directory structure in place it’s time to create the server block in Nginx. Navigate to the sites-available directory:
cd /etc/nginx/sites-available
Create a new file to hold the site configuration. Naming this the same as the site’s root directory will make server management easier when hosting a number of sites:
sudo nano globex.turnipjuice.media
Copy and paste the following configuration, ensuring that you change the server_name, access_log, error_log, and root directives to match your domain and file paths. You will also need to replace the file paths to the certificate and certificate key obtained in the previous step. The ssl_certificate directive should point to the fullchain.pem file, and the ssl_certificate_key directive should point to the privkey.pem file. Hit CTRL + X followed by Y to save the changes.
server {
listen 443 ssl;
listen [::]:443 ssl;
listen 443 quic;
listen [::]:443 quic;
http2 on;
server_name globex.turnipjuice.media;
ssl_certificate /etc/letsencrypt/live/globex.turnipjuice.media/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/globex.turnipjuice.media/privkey.pem;
access_log /home/abe/globex.turnipjuice.media/logs/access.log;
error_log /home/abe/globex.turnipjuice.media/logs/error.log;
root /home/abe/globex.turnipjuice.media/public/;
index index.php;
add_header Alt-Svc 'h3=":443"; ma=86400' always;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php8.4-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
listen 443 quic;
listen [::]:443 quic;
http2 on;
server_name www.globex.turnipjuice.media;
ssl_certificate /etc/letsencrypt/live/globex.turnipjuice.media/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/globex.turnipjuice.media/privkey.pem;
add_header Alt-Svc 'h3=":443"; ma=86400' always;
return 301 https://globex.turnipjuice.media$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name globex.turnipjuice.media www.globex.turnipjuice.media;
return 301 https://globex.turnipjuice.media$request_uri;
}
Download the complete set of Nginx config files
This is a bare-bones server block that informs Nginx to serve the globex.turnipjuice.media domain over HTTPS. The www subdomain will be redirected to globex.turnipjuice.media and HTTP requests will be redirected to HTTPS.
The two location blocks essentially tell Nginx to pass any PHP files to PHP-FPM for interpreting. Other file types will be returned directly to the client if they exist, or passed to PHP if they don’t.
By default Nginx won’t load this configuration file. If you take a look at the nginx.conf file you created in the previous chapter, you will see the following lines:
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
Only files within the sites-enabled directory are automatically loaded. This allows you to easily enable or disable sites by adding or removing a symbolic link (or symlink) in the sites-enabled directory, linked to the configuration file in sites-available.
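If symlinks are new to you, this throwaway demo shows the enable/disable pattern outside of /etc/nginx. The paths here are scratch directories standing in for sites-available and sites-enabled, not real config:

```shell
# Demonstrate the enable/disable pattern with scratch directories
# (these paths are throwaway stand-ins for /etc/nginx/sites-*).
tmp=$(mktemp -d)
mkdir "$tmp/sites-available" "$tmp/sites-enabled"
printf 'server { }\n' > "$tmp/sites-available/example.com"
# "Enable" the site by symlinking it into sites-enabled:
ln -s "$tmp/sites-available/example.com" "$tmp/sites-enabled/example.com"
readlink "$tmp/sites-enabled/example.com"
# "Disable" it again; removing the symlink leaves the real config untouched:
rm "$tmp/sites-enabled/example.com"
```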
To enable the newly created site, symlink the file that you just created into the sites-enabled directory, using the same filename:
sudo ln -s /etc/nginx/sites-available/globex.turnipjuice.media /etc/nginx/sites-enabled/globex.turnipjuice.media
In order for the changes to take effect, you must reload Nginx. However, before doing so you should check the configuration for any errors:
sudo nginx -t
If the test fails, recheck the syntax of the new configuration file. If the test passes, reload Nginx:
sudo systemctl reload nginx.service
With Nginx configured to serve the new site, it’s time to create the database so that WordPress can be installed.
When hosting multiple sites on a single server, it’s good practice to create a separate database and database user for each individual site. You should also lock down the user privileges so that the user only has access to the databases that they require.
Log into MySQL with the root user.
mysql -u root -p
You’ll be prompted to enter the password which you created when setting up a MySQL database.
abe@pluto:~$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.43-0ubuntu0.24.04.2 (Ubuntu)

Copyright (c) 2000, 2025, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
Once logged in, create the new database, replacing globex with your chosen database name:
CREATE DATABASE globex CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci;
Next, create the new user using the following command, remembering to substitute globex and password with your own values:
CREATE USER 'globex'@'localhost' IDENTIFIED BY 'password';
You then need to add the required privileges. To keep things simple, you can grant all privileges but restrict them to your database only, like so:
GRANT ALL PRIVILEGES ON globex.* TO 'globex'@'localhost';
Alternatively, you can have more granular control and explicitly define the privileges the user should have:
GRANT SELECT, INSERT, UPDATE, DELETE ON globex.* TO 'globex'@'localhost';
Be careful not to overly restrict permissions. Some plugins and major WordPress updates require heightened MySQL privileges (CREATE, DROP, ALTER, etc.), therefore revoking them could have adverse effects. The WordPress Codex has more information on MySQL privileges.
For the changes to take effect you must flush the MySQL privileges table:
FLUSH PRIVILEGES;
Finally, you can exit MySQL:
exit;
Now that you have a new database, it’s time to install WordPress.
You could install WordPress manually by using something like cURL or wget to download the latest.zip or latest.tar.gz archive, extract it, and then follow the WordPress installer in a web browser. But since we installed WP-CLI in the previous chapter, we’ll be using that instead.
Start by navigating to the site’s public directory:
cd ~/globex.turnipjuice.media/public
Then, using WP-CLI, download the latest stable version of WordPress into the working directory:
wp core download
You now need to create a wp-config.php file. Luckily, WP-CLI has you covered with its wp config create command. Make sure to use the database details you set up in the previous step:
wp config create --dbname=globex --dbuser=globex --dbpass='password'
Finally, with the wp-config.php file in place, you can install WordPress and set up the admin user in one fell swoop:
wp core install --skip-email --url=https://globex.turnipjuice.media --title='Globex Corporation' --admin_user=abe --admin_email=[email protected] --admin_password='password'
You should see the following message:
Success: WordPress installed successfully.
You should now be able to visit the domain name in your browser and be presented with a default WordPress installation:

Additional sites can be added to your server using the same procedure as above and you should be able to fire up new sites within a couple of minutes. Here’s a quick breakdown of how to add additional sites:
- Configure the DNS records for the new domain name.
- Create the site’s directory structure in your home directory (logs and public).
- Obtain a Let’s Encrypt certificate for the new domain.
- Navigate to the sites-available directory within Nginx and copy an existing config file for the new server block. Ensure you change the relevant directives.
- Symlink the config file into the sites-enabled directory to enable the site and restart Nginx.
- Create a new database and database user.
- Install WordPress.
You’re free to add as many sites to your server as you like; the only limiting factors are available system resources (CPU, memory, and disk space) and bandwidth restrictions imposed by your VPS provider, both of which can be overcome by upgrading your server. Caching will also greatly reduce system resource usage, which is a tutorial that I will guide you through in the next chapter.
The post 2025 Year in Review: A New Chapter appeared first on SpinupWP.
This is my eleventh year in review post since I started writing them: 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024.
This past year was a continuation of our 2024 strategy of staying lean and focusing on things that work.
We welcomed a talented Laravel developer, Vincent, to the team in March, but had to say goodbye to James just a couple of weeks later. James had to move back to the United States from Canada and was unfortunately no longer eligible for the government program that we take advantage of. His excellent work and great sense of humour will be missed (see the recommendation I left on his LinkedIn profile).
It has been wonderful to see Vincent ramp up as quickly as he has and ship lots of great stuff over the past eight months. He has been an excellent addition to the team and I couldn’t be happier with his contributions.
Our dev team of George, James/Vincent, and Lewis cranked through our roadmap in 2025 and I was very happy with their work and what we shipped (see the Product section below). I’ve been grateful to have a superb team of talented folks to work with.
Our support team of Jaime and Andre did an excellent job in 2025 as well, frequently praised by customers who were surprised by their helpfulness and how far they are willing to go to resolve issues. Customer support has been a weakness of ours in the past, so it’s such a relief to have talented, dependable folks helping customers over the past couple of years.
Jaime and Andre also started working more closely with the dev team this year. Andre has been helping test features before and after they’re shipped, while Jaime has been researching configuration changes to our server software (e.g. Nginx), testing them, and providing developers with implementation instructions. He has also been updating our Install WordPress on Ubuntu guide. Both Andre and Jaime have been a huge help to the developers, saving them loads of time.
Currently there are no plans to hire in the next 12 months.
I’m very proud of what our tiny team shipped in 2025.
Probably the biggest release of the year was the SpinupWP Assistant. Way back in 2017, when we started SpinupWP, I had a vision of what server management could be like with good software and this felt like the realization of that vision. It felt great to get it shipped. We followed this up with the ability to remove outdated versions of PHP from the server, bringing the Assistant closer to being able to do all maintenance tasks. It’s really just the ability to run apt upgrade that’s left.
Another big release was the recent page cache improvements, allowing a page to still be cached and served from the cache when specific query strings are present. This was long overdue and was celebrated in the comments on the community post.
Another long time thorn in the side of our customers has been the server crons all running at the same time and spiking the CPU. Our new cron system took care of that and has been working great.
We started supporting quite a few services as backup storage providers: Vultr, Hetzner, Cloudflare R2, and SFTP.
We also stayed on top of the latest software, shipping PHP 8.4 early in the year and then PHP 8.5 just a few weeks ago. We also started offering MySQL 8.4 for new servers.
Here’s the complete list of what we shipped in 2025:
Technically, we also shipped something that we’ve been working on for a long time. It’s live, in production, and running, but no customers have access to it yet. I’m talking about the new dashboard. We plan to enable it for select customers this month, then once we’re confident we’ve found all the major issues, we’ll be opening it up to all customers. It’s going to be a massive improvement for those who have 10+ servers and sites.
We also have three more big projects that are very close to the finish line and will be shipping early this year. The first is a new object cache system. We’ve completely revamped our Redis configuration to greatly improve reliability issues that have plagued customers in the past. It also enhances the security isolation between sites sharing a server.
The second large project is SpinupWP subdomains. We’ve reworked the new site creation flow, removing HTTPS options and the need to update your DNS. Instead, we generate a complementary SpinupWP subdomain (e.g. kfh4mfvj34d.xyz.spinupwp.site) and you can enable HTTPS after the site is created if you wish. This will greatly simplify and speed up the site creation process. No more wrestling with DNS issues when trying to quickly create a site.
The third major project that’s close is the PHP settings project. Workers, Upload Max File Size, Post Max Size, Memory Limit, etc. You’ll be able to change all the PHP settings you commonly have to tweak right from the site dashboard.
Exciting stuff lined up for early 2026!
For later in the year, we’re planning to work on Cloudflare DNS integration, HTTP/3 support, the ability to define default settings for new servers and sites, and the Assistant’s ability to run non-security server software updates.
As I mentioned above, our customer support has never been stronger. Jaime and Andre have been doing an incredible job helping customers, taking feedback, distilling it, and relaying it to the team for improvements to the product. They also started reviewing each other’s work and offering feedback in an effort to help each other improve. Exactly what’s needed to keep raising the bar.
In the past, I’ve mentioned adding to our support team to cover more of the clock, and I still think that will be great in the future, but it’s not planned for 2026.
I did quite a bit of marketing work in the second half of 2024 and as it was paying off going into 2025, I was energized to do more. I was looking forward to working on the Install WordPress on Ubuntu guide, the VPS Control Panel Comparison Tool, and tidying up existing content. Unfortunately, none of that happened. I did hardly anything on the marketing front in 2025.
Late in the year, I finally realized that I wasn’t going to do anything myself, so I asked Jaime to do some updates to the Install WordPress on Ubuntu guide. I’m very happy to say that the guide has never been in better shape. Lots of very nice updates. Jaime also added a number of new docs and updated existing ones.
Lewis also took it upon himself to refresh parts of our site and keep things up-to-date, so the site is also in good shape. There’s still lots that needs to be done though. Articles need updating and some need to be purged. The VPS Control Panel Comparison Tool needs to be updated and more control panels added.
Given that we haven’t done much marketing, it probably comes as no surprise that traffic is down 29%, free trials are down 20%, and new subscribers are down 26% compared to 2024. I wish it were as simple as a lack of effort. In the good ol’ days, we could just get to work and turn things around. But we’re in the AI era now.
Many of my entrepreneur friends whose businesses depend on SEO are all seeing similar declines despite their continued marketing efforts. AI is really throwing a wrench in the gears and it seems no one knows how to turn things around. The whole situation has been bothering me so much that I wrote an article about it: No Clicks, No Content: The Unsustainable Future of AI Search.
At the moment, the best idea seems to be to just operate as we have before: publish great content and hope that Google rewards us for it. Hopefully it works.
At this point, you may be wondering, what happened? Why did I do so little marketing in 2025?
In January, my priorities shifted. I started to prioritize my health, my family, my friends, and my community. I did the bare minimum for SpinupWP. I ran the weekly meeting and made sure the teams had what they needed, but marketing just wasn’t a priority.
About mid year, I realized that my commitment to the company wasn’t fair to the team or our customers. They deserved better. It was time for a change. SpinupWP needed a new owner.
I started talking to potential buyers in my network and let George and Lewis know that I was looking for a new owner for the company. To my surprise, George was interested and in just a few weeks we worked out a way he could buy the company. We closed the deal on October 31st and George has been the new owner of SpinupWP ever since. Everything has been transitioned over to him at this point and my role is now as an advisor.
I couldn’t be happier about this. I was only selling SpinupWP to someone who would do right by the team and the customers and George fits this mold perfectly. He has been a senior developer on the team for 5 years and knows all the ins and outs of the app and the company. Plus he has a world-class team behind him. SpinupWP is in great hands and I’m confident it will thrive going forward.
I’d like to thank my team for all the awesome work they’ve done not just in the past year, but since 2017. I’m very proud of SpinupWP. We’ve built a great product together, I’m a huge fan, and I can’t wait to see what ships next.
It has been my pleasure working with the fine folks at SpinupWP these past 8 years and I wish George, Lewis, Andre, Vincent, and Jaime the best of luck going forward.
Here’s to the SpinupWP team and here’s to 2026! 
The post 2025 Year in Review: A New Chapter appeared first on SpinupWP.
This is article 6 of 10 in the series “Hosting WordPress Yourself”
In the previous chapter we enhanced security and performance with tweaks to the Nginx configuration. In this article, I’ll walk you through migrating an existing WordPress website to a new server, step by step.
There are lots of reasons to migrate a site. Perhaps you’re moving from an old host to a new hosting provider. If you’re moving a site to a server you’ve set up with SpinupWP, the following guide will work, but I recommend using our documentation on migrating a site to a SpinupWP server for more specific instructions. I promise it will save you time and headaches.
Another good reason to migrate a site is to retire a server. We don’t recommend upgrading a server’s operating system (OS). That is, we don’t recommend upgrading Ubuntu even though Ubuntu might encourage it. The truth is a lot can go wrong upgrading the OS of a live server and it’s just not worth the trouble.
A much safer approach is to spin up a fresh server, migrate existing sites, and shut down the old server. This approach allows you to test that everything is working on the new server before switching the DNS and directing traffic to it.
If you haven’t already completed the previous chapters to fire up a fresh new server, you should start at the beginning. (Interested in a super quick and easy way to provision new servers for hosting WordPress lightning fast? Check out SpinupWP.) Let’s get started!
Before we begin migrating files, we need to figure out the best way to copy them to the new server. You could use free file manager software like FileZilla to copy the files to your computer and then on to the next server over SFTP, but it’s quite a bit slower having to download then upload. Here we’ll use SCP.
SCP will allow us to copy the files server-to-server, without first downloading them to our local machine. Under the hood, SCP uses SSH; therefore step 1 is to generate a new SSH key so that we can connect to our old server from the new server. On the newly provisioned server, create a new SSH key using the following command:
ssh-keygen -t ed25519 -C "your_server_ip_or_hostname"
Then step 2 is to copy the public key to your clipboard. You can view the public key, like so:
cat ~/.ssh/id_ed25519.pub
For step 3, on the old server add the public key to your authorized_keys file:
echo "public_key" >> ~/.ssh/authorized_keys
Then for step 4, verify that you’re able to connect to the old server from the new server using SSH.
ssh abe@old_server_ip
If you’re unable to connect, go back and verify the previous steps before continuing.
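A common culprit when the connection fails is file permissions: sshd silently ignores authorized_keys if the ~/.ssh directory or the file itself is too permissive. The following commands are safe to (re)apply on the old server (a sketch; the mkdir and touch are no-ops if everything already exists):

```shell
# Create the directory and key file if missing (no-ops otherwise),
# then lock the permissions down to what sshd requires.
mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh                 # only the owner may enter the directory
chmod 600 ~/.ssh/authorized_keys # only the owner may read/write the keys
```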
We’ll start by migrating the site’s files, which includes WordPress and any other files in the web root. Issue the following command from the new server. Remember to substitute your old server’s IP address and the path to the site’s web root.
scp -r abe@old_server_ip:~/globex.turnipjuice.media ~/globex.turnipjuice.media
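After a large transfer, it’s worth spot-checking that nothing was lost or corrupted in transit. One approach (a sketch, not part of the original guide; the helper name is mine) is to compute a single checksum over the whole tree on each server and compare the two values:

```shell
# checksum_tree DIR — print one checksum summarizing every file under DIR.
# Run it on both servers; matching output means the file names and
# contents are identical.
checksum_tree() {
  (cd "$1" && find . -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum)
}

# e.g. on each server (guarded so it's a no-op if the directory is absent):
if [ -d ~/globex.turnipjuice.media ]; then
  checksum_tree ~/globex.turnipjuice.media
fi
```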
With the site’s files taken care of, it’s time to add the site to Nginx.
There are a couple of ways you can add the site to Nginx: copy the existing virtual host configuration from the old server, or start afresh with a new one.
I recommend copying the existing configuration, as you know it works. However, starting afresh can be useful, especially if your virtual host file contains a lot of redundant directives. You can download a zip file of complete Nginx configs as a fresh starting point.
In this example I’m going to copy the existing configuration. As we did with the site data, copy the file using SCP. Because the virtual host file shares its name with the web root directory we copied earlier, copy it to a temporary name to avoid a clash:
scp abe@old_server_ip:/etc/nginx/sites-available/globex.turnipjuice.media ~/globex.vhost
Next, move the file into place and ensure the root user owns it:
sudo mv ~/globex.vhost /etc/nginx/sites-available/globex.turnipjuice.media
sudo chown root:root /etc/nginx/sites-available/globex.turnipjuice.media
The last step is to enable the site in Nginx by symlinking the virtual host into the enabled-sites directory:
sudo ln -s /etc/nginx/sites-available/globex.turnipjuice.media /etc/nginx/sites-enabled/globex.turnipjuice.media
Before testing if our configuration is good, we should copy over our SSL certificates.
Certificate file permissions are more locked down, so you will need to SSH to the old server and copy them to your home directory first.
sudo cp /etc/letsencrypt/live/globex.turnipjuice.media/fullchain.pem ~/
sudo cp /etc/letsencrypt/live/globex.turnipjuice.media/privkey.pem ~/
Then, ensure our SSH user has read/write access:
sudo chown abe ~/*.pem
Back on the new server, copy the certificates.
scp abe@old_server_ip:~/*.pem ~/
We’re going to generate fresh certificates using Let’s Encrypt once the DNS has switched over (see Finishing Up), so we’ll leave the certificate files in our home directory for the time being and update the Nginx configuration to reflect the new paths.
sudo nano /etc/nginx/sites-available/globex.turnipjuice.media
You’ll need to update the ssl_certificate and ssl_certificate_key directives.
ssl_certificate /home/abe/fullchain.pem;
ssl_certificate_key /home/abe/privkey.pem;
To confirm the directives are correct, once again test the Nginx config:
sudo nginx -t
If everything looks good, reload Nginx:
sudo systemctl reload nginx.service
It’s a good idea to test the new server as we go. We can do this by spoofing our local DNS, which will ensure the old server remains active for your visitors but allow you to test the new server. On your local machine add an entry to your /etc/hosts file, which points the new server’s IP address to the site’s domain name:
46.101.3.65 globex.turnipjuice.media
Once updated, if you refresh the site you should see “Error establishing a database connection” because we haven’t imported the database yet. Let’s handle that next.
Before continuing, remember that the domain now points to the new server’s IP address. If you usually SSH to the server using the hostname, this will no longer work. Instead, you should SSH to each server using their IP addresses until the migration process is complete.
Before we can perform the import, we need to create the MySQL database and its database user. On the new server, log in to MySQL using the root user:
mysql -u root -p
Create the database:
CREATE DATABASE globex CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci;
Then, create the database user with privileges for the new database:
CREATE USER 'globex'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON globex.* TO 'globex'@'localhost';
FLUSH PRIVILEGES;
EXIT;
With that taken care of, it’s time to export the data. We’re going to use mysqldump to perform the database export. If you need to do anything more complex, like exclude post types or perform a find and replace on the data, I would recommend using WP Migrate.
To export the database, issue the following command from the old server, replacing the database credentials with those found in your wp-config.php file:
mysqldump --no-tablespaces -u DB_USER -p DB_NAME > ~/export.sql
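If you’d rather not open wp-config.php by hand to look up the credentials, a small helper can pull them out (an illustrative sketch; it assumes the standard `define( 'NAME', 'value' );` format and the helper name is my own):

```shell
# wp_config_value FILE CONSTANT — print the value of a define()'d
# constant (e.g. DB_NAME, DB_USER) from a wp-config.php file.
wp_config_value() {
  sed -n "s/^define( *['\"]$2['\"], *['\"]\(.*\)['\"] *);.*/\1/p" "$1"
}

# e.g.:
# wp_config_value ~/globex.turnipjuice.media/wp-config.php DB_NAME
```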
Switch back to your terminal of the new server and transfer the database export file:
scp abe@old_server_ip:~/export.sql ~/
Finally, import the database:
mysql -u DB_USER -p DB_NAME < export.sql
If any of the database connection information is different from that of the old server you will need to update your wp-config.php file to reflect those changes. Refresh the site to confirm that the database credentials are correct. If everything is working, you should now see the site.
You now have an exact clone of the live site running on the new server. It’s time to test that everything is working as expected.
For ecommerce sites, you should confirm that the checkout process and any other critical paths are working. Remember, this is only a clone of the live site, so anything saved to the database won’t persist, as we’ll be re-importing the data shortly.
Once you’re happy that everything is working as expected, it’s time to perform the migration.
On busy sites, it’s likely that the database will have changed since performing the previous export. To ensure data integrity, we need to prevent the live site from modifying the database while we carry out the final export and import.
To stop the live site from modifying the database we’re going to show the following ‘Back Soon’ page:
<!doctype html>
<html>
<head>
<title>Back Soon</title>
<style>
body { text-align: center; padding: 150px; }
h1 { font-size: 50px; }
body { background-color: #e13067; font: 20px Helvetica, sans-serif; color: #fff; line-height: 1.5 }
article { display: block; width: 650px; margin: 0 auto; }
</style>
</head>
<body>
<article>
<h1>Back Soon!</h1>
<p>
We're currently performing server maintenance.<br>
We'll be back soon!
</p>
</article>
</body>
</html>
We’ll save this as an index.html page, upload it to the web root and update Nginx to serve this file, instead of index.php.
On the old server, modify your site’s virtual host file:
sudo nano /etc/nginx/sites-available/globex.turnipjuice.media
Ensure that the index directive looks like below, which will ensure that our ‘Back Soon’ page is loaded for all requests instead of WordPress:
index index.html index.php;
Once done, reload Nginx. Your live site will now be down. If you’re using Nginx FastCGI caching, any cached pages will continue to be served from the cache. However, requests to admin-ajax.php and the WordPress REST API will fail. Therefore, you will not be able to use WordPress migration plugins such as WP Migrate to perform the migration.
Before continuing, you should confirm that your live site is indeed showing the ‘Back Soon’ page by checking it from another device or removing the entry from your /etc/hosts file, which we added earlier.
Now that the live site is down it’s time to export and import the database once more (as we did above) so that any changes that occurred to the database while we were testing are migrated. However, this time you won’t need to create a database or database user.
Once the export/import is complete you may want to add the entry back into your /etc/hosts file (if you removed it) so that you can quickly check that the database migration was successful. Once you’re confident that everything is working as expected, log into your DNS control panel and update your A records to point to the new server. Modifying your DNS records will start routing traffic to your new server. However, keep in mind that DNS queries are cached, so anyone who has visited your site recently will likely still be routed to the old server and see the ‘Back Soon’ page. Once the user’s machine re-queries for the domain’s DNS entries they should be forwarded to the new server.
We use Cloudflare as our DNS provider, with a TTL of 300 seconds. This means that most users are routed to the new server quickly when we make a DNS change. However, if your DNS TTL is higher, I would recommend lowering it a few days prior to performing the migration. This will ensure DNS changes propagate more quickly to your users.
Now that the new server is live, there are a few loose ends to take care of, such as generating fresh Let’s Encrypt certificates, but fortunately we’ve already covered them in previous chapters.
That’s everything there is to know about migrating a WordPress site to a new server. If you follow the steps outlined here, you should have a smooth WordPress migration with little downtime. In the final chapter of our Install WordPress on Ubuntu 24.04 tutorial, we’ll cover how to keep your server and sites operational with ongoing maintenance and monitoring.
The post Migrating WordPress to a New Server appeared first on SpinupWP.
The post Install Nginx, PHP 8.4, WP-CLI, and MySQL 8.4 appeared first on SpinupWP.
This is article 5 of 10 in the series “Hosting WordPress Yourself”
In chapter 1 of this guide, I took you through the initial steps of setting up and securing a VPS on DigitalOcean using Ubuntu 24.04. In this chapter I will guide you through the process of setting up Nginx, PHP-FPM, and MySQL — which on Linux is more commonly known as a LEMP stack — that will form the foundations of a working web application and server.
Before moving on with this tutorial, you will need to open a new SSH connection to the server, if you haven’t already:
ssh abe@your_server_ip
Welcome to Ubuntu 24.04.3 LTS (GNU/Linux 6.8.0-83-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

 System information as of Wed Sep 17 13:56:57 UTC 2025

  System load:  0.0                Users logged in:       0
  Usage of /:   4.5% of 47.39GB    IPv4 address for eth0: 178.62.70.190
  Memory usage: 16%                IPv4 address for eth0: 10.50.0.5
  Swap usage:   0%                 IPv6 address for eth0: 2604:a880:5:1::d04:0
  Processes:    99

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

Last login: Wed Sep 17 13:57:30 2025 from 190.140.118.55
abe@pluto:~$
Nginx has become the most popular web server software used on Linux servers, so it makes sense to use it rather than Apache. Although the official Ubuntu package repository includes Nginx packages, they’re often very outdated. Instead, we use the package repository maintained by Ondřej Surý that includes the latest Nginx stable packages.
First, add the repository and update the package lists:
sudo add-apt-repository ppa:ondrej/nginx -y
sudo apt update
There may now be some packages that can be upgraded, let’s do that now:
sudo apt dist-upgrade -y
Then install Nginx:
sudo apt install nginx -y
Once complete, you can confirm that Nginx has been installed with the following command:
nginx -v
abe@pluto:~$ nginx -v
nginx version: nginx/1.28.1
Now you can try visiting the domain name pointing to your server’s IP address in your browser and you should see an Nginx welcome page. Make sure to type in http:// as browsers default to https:// now and that won’t work as we have yet to set up SSL.
Now that Nginx has successfully been installed it’s time to perform some basic configuration. Out-of-the-box Nginx is pretty well optimized, but there are a few basic adjustments to make. However, before opening the configuration file, you need to determine your server’s open file limit.
Run the following to get your server’s open file limit and take note, as we’ll use it in a minute:
ulimit -n
Next, open the Nginx configuration file, which can be found at /etc/nginx/nginx.conf:
sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
I’m not going to list every configuration directive but I am going to briefly mention those that you should change. If you would find it easier to see the whole thing at once, feel free to download the complete Nginx config kit now.
Start by setting the user to the username that you’re currently logged in with. This will make managing file permissions much easier in the future, but this is only acceptable security-wise when running a server where only a single user has access.
The events block contains two directives. The first, worker_connections, should be set to your server’s open file limit. This tells Nginx how many simultaneous connections can be opened by each worker_process. Therefore, if you have two CPU cores and an open file limit of 1024, your server can handle 2048 simultaneous connections. However, the number of connections doesn’t directly equate to the number of users that can be handled by the server, as the majority of web pages and browsers open at least two connections per request. The multi_accept directive should be uncommented and set to on. This informs each worker_process to accept all new connections at a time, as opposed to accepting one new connection at a time.
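That arithmetic can be sanity-checked on your own server with a quick calculation (worker_processes auto spawns one worker per CPU core; as above, we use the open file limit for worker_connections):

```shell
# Theoretical connection ceiling = CPU cores x worker_connections.
cores=$(nproc)
worker_connections=$(ulimit -n)
echo "Theoretical max simultaneous connections: $((cores * worker_connections))"
```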
Moving down the file you will see the http block. The first directive to add is keepalive_timeout. The keepalive_timeout determines how many seconds a connection to the client should be kept open before it’s closed by Nginx. This directive should be lowered, as you don’t want idle connections sitting there for up to 75 seconds if they can be utilized by new clients. I have set mine to 15. You can add this directive just above the sendfile on; directive:
http {
##
# Basic Settings
##
keepalive_timeout 15;
sendfile on;
For security reasons, you should uncomment the server_tokens directive and ensure it is set to off. This will disable emitting the Nginx version number in error messages and response headers.
Underneath server_tokens add the following line to set the maximum upload size you require in the WordPress Media Library.
client_max_body_size 64m;
I chose a value of 64m but you can increase it if you run into issues uploading large files.
Further down the http block, you will see a section dedicated to gzip compression. By default, gzip is enabled but you should tweak these values further for better handling of static files. First, you should uncomment the gzip_proxied directive and set it to any, which will ensure all proxied request responses are gzipped. Secondly, you should uncomment the gzip_comp_level and set it to a value of 5. This controls the compression level of a response and can have a value in the range of 1 – 9. Be careful not to set this value too high, as it can have a negative impact on CPU usage. Finally, you should uncomment the gzip_types directive, leaving the default values in place. This will ensure that JavaScript, CSS, and other file types are gzipped in addition to the HTML file type which is always compressed by the gzip module.
That’s the basic Nginx configuration dealt with. Hit CTRL + X followed by Y to save the changes.
You must restart Nginx for the changes to take effect. Before doing so, ensure that the configuration files contain no errors by issuing the following command:
sudo nginx -t
If everything looks OK, go ahead and restart Nginx:
sudo systemctl restart nginx.service
If it’s not already running, you can start Nginx with:
sudo systemctl enable --now nginx.service
abe@pluto:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
abe@pluto:~$ sudo systemctl enable --now nginx.service
Synchronizing state of nginx.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable nginx
abe@pluto:~$
Brotli is a modern, high-performance compression algorithm developed by Google. It offers significantly better compression ratios than gzip, resulting in smaller file sizes and faster load times for users. Brotli is supported by all major browsers and can be safely enabled for most web applications.
To enable Brotli support in Nginx, you’ll first need to install the Brotli dynamic modules:
sudo apt install libnginx-mod-http-brotli-filter libnginx-mod-http-brotli-static -y
Once installed, you can enable and configure Brotli in the Nginx configuration file. Open the main configuration file:
sudo nano /etc/nginx/nginx.conf
Then add the following lines right below the Gzip Settings section:
##
# Brotli Settings
##
brotli on;
brotli_comp_level 5;
brotli_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
It should end up looking like this:
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
gzip_proxied any;
gzip_comp_level 5;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Brotli Settings
##
brotli on;
brotli_comp_level 5;
brotli_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
Once configured, test your Nginx configuration for syntax errors:
sudo nginx -t
If all goes well, reload the service:
sudo systemctl reload nginx.service
Brotli will now automatically compress eligible responses before they are sent to clients, further reducing bandwidth usage and improving page performance.
Just as with Nginx, the official Ubuntu package repository does contain PHP packages. However, they are not the most up-to-date. Again, I use one maintained by Ondřej Surý for installing PHP. Add the repository and update the package lists as you did for Nginx:
sudo add-apt-repository ppa:ondrej/php -y
sudo apt update
Then install PHP 8.4, as well as all the PHP packages you will require:
sudo apt install php8.4-fpm php8.4-common php8.4-mysql \
php8.4-xml php8.4-intl php8.4-curl php8.4-gd \
php8.4-imagick php8.4-cli php8.4-dev php8.4-imap \
php8.4-mbstring php8.4-opcache php8.4-redis \
php8.4-soap php8.4-zip -y
You’ll notice php-fpm in the list of packages being installed. FastCGI Process Manager (FPM) is an alternative PHP FastCGI implementation with some additional features that plays really well with Nginx. It’s the recommended process manager to use when installing PHP with Nginx.
After the installation has completed, test PHP and confirm that it has been installed correctly:
php-fpm8.4 -v
abe@pluto:~$ php-fpm8.4 -v
PHP 8.4.12 (fpm-fcgi) (built: Aug 29 2025 06:48:12) (NTS)
Copyright (c) The PHP Group
Built by Debian
Zend Engine v4.4.12, Copyright (c) Zend Technologies
with Zend OPcache v8.4.12, Copyright (c), by Zend Technologies
Once Nginx and PHP are installed, you need to configure the user and group that the service will run under. In this setup we run a single PHP pool under your user account, which does not provide security isolation between sites via separate PHP pools. If security isolation between sites is required, we don’t recommend this approach; instead, use SpinupWP to provision your servers.
Open the default pool configuration file:
sudo nano /etc/php/8.4/fpm/pool.d/www.conf
Change the following lines, replacing www-data with your username:
user = abe
group = abe
listen.owner = abe
listen.group = abe
Hit CTRL + X and Y to save the configuration.
Next, you should adjust your php.ini file to increase the WordPress maximum upload size. Both this and the client_max_body_size directive within Nginx must be changed for the new maximum upload limit to take effect. Open your php.ini file:
sudo nano /etc/php/8.4/fpm/php.ini
Change the following lines to match the value you assigned to the client_max_body_size directive when configuring Nginx:
upload_max_filesize = 64M
post_max_size = 64M
While we’re editing the php.ini file, let’s also enable the OPcache file override setting. When this setting is enabled, OPcache will serve the cached version of PHP files without checking if the file has been modified on the file system, resulting in improved PHP performance.
Hit CTRL + W and type file_override to locate the line we need to update. Now uncomment it (remove the semicolon) and change the value from zero to one:
opcache.enable_file_override = 1
Hit CTRL + X and Y to save the configuration. Before restarting PHP, check that the configuration file syntax is correct:
sudo php-fpm8.4 -t
abe@pluto:~$ sudo php-fpm8.4 -t
[21-Sep-2025 03:58:04] NOTICE: configuration file /etc/php/8.4/fpm/php-fpm.conf test is successful
If the configuration test was successful, restart PHP using the following command:
sudo systemctl restart php8.4-fpm.service
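As an aside, if you prefer to script php.ini edits like the one above rather than making them in nano, sed can do the same job (a hedged sketch; the helper name is mine, and you’d point it at the php.ini path used in this guide):

```shell
# enable_file_override INI — uncomment opcache.enable_file_override
# (if needed) and set it to 1 in the given ini file.
enable_file_override() {
  sed -i 's/^;*[[:space:]]*opcache\.enable_file_override[[:space:]]*=.*/opcache.enable_file_override = 1/' "$1"
}

# e.g. (run as root): enable_file_override /etc/php/8.4/fpm/php.ini
```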
Now that Nginx and PHP have been installed, you can confirm that they are both running under the correct user by issuing the htop command:
htop
If you hit SHIFT + M, the output will be arranged by memory usage which should bring the php-fpm processes into view. If you scroll to the bottom, you’ll also find a couple of nginx processes.
Both processes will have one instance running under the root user. This is the main process that spawns each worker. The remainder should be running under the username you specified.
If not, go back and check the configuration, and ensure that you have restarted both the Nginx and PHP-FPM services.
To check that Nginx and PHP are working together properly, enable PHP in the default Nginx site configuration and create a PHP info file to view in your browser. You are welcome to skip this step, but it’s often handy to check that PHP files can be correctly processed by the Nginx web server.
First, you need to uncomment a section in the default Nginx site configuration which was created when you installed Nginx:
sudo nano /etc/nginx/sites-available/default
Find the section which controls the PHP scripts.
# pass PHP scripts to FastCGI server
#
#location ~ \.php$ {
# include snippets/fastcgi-php.conf;
#
# # With php-fpm (or other unix sockets):
# fastcgi_pass unix:/run/php/php8.4-fpm.sock;
# # With php-cgi (or other tcp sockets):
# fastcgi_pass 127.0.0.1:9000;
#}
As we’re using php-fpm, we can change that section to look like this:
# pass PHP scripts to FastCGI server
location ~ \.php$ {
include snippets/fastcgi-php.conf;
# With php-fpm (or other unix sockets):
fastcgi_pass unix:/run/php/php8.4-fpm.sock;
}
Save the file by using CTRL + X followed by Y. Then, as before, test to make sure the configuration file was edited correctly.
sudo nginx -t
If everything looks okay, go ahead and restart Nginx:
sudo systemctl restart nginx.service
Next, create an info.php file in the default web root, which is /var/www/html.
cd /var/www/html
sudo nano info.php
Add the following PHP code to that info.php file, and save it by using the same CTRL + X, Y combination.
<?php phpinfo();
Lastly, because you set the user directive in your nginx.conf file to the user you’re currently logged in with, give that user permissions on the info.php file.
sudo chown abe info.php
Now, if you visit the info.php file in your browser, using the domain name you set up in chapter 1, you should see the PHP info screen, which means Nginx can process PHP files correctly.
Once you’ve tested this, you can go ahead and delete the info.php file.
sudo rm /var/www/html/info.php
Currently, when you visit the server’s domain name in a web browser you will see the Nginx welcome page. However, it would be better if the server returned an empty response for domain names that have not been configured in Nginx.
Begin by removing the following two default site configuration files:
sudo rm /etc/nginx/sites-available/default
sudo rm /etc/nginx/sites-enabled/default
Now you need to add a catch-all block to the Nginx configuration. Edit the nginx.conf file:
sudo nano /etc/nginx/nginx.conf
Towards the bottom of the file you’ll find a line that reads:
include /etc/nginx/sites-enabled/*;
Underneath that, add the following block:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 444;
}
Hit CTRL + X followed by Y to save the changes and then test the Nginx configuration:
sudo nginx -t
If everything looks good, restart Nginx:
sudo systemctl restart nginx.service
Now when you visit your domain name you should receive an error:

Here’s my final nginx.conf file, after applying all of the above changes. I have removed the mail block, as this isn’t something that’s commonly used.
user abe;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 1024;
multi_accept on;
}
http {
##
# Basic Settings
##
keepalive_timeout 15;
sendfile on;
tcp_nopush on;
types_hash_max_size 2048;
server_tokens off;
client_max_body_size 64m;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3 (ref: POODLE) and deprecated TLS 1.0/1.1
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
gzip_proxied any;
gzip_comp_level 5;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Brotli Settings
##
brotli on;
brotli_comp_level 5;
brotli_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 444;
}
}
Download the complete set of Nginx config files
If you have never used WP-CLI before, it’s a command-line tool for managing WordPress installations, and greatly simplifies the process of downloading and installing WordPress (plus many other tasks).
Navigate to your home directory:
cd ~/
Using cURL, download WP-CLI:
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
You can then check that it works by issuing:
php wp-cli.phar --info
The command should output information about your current PHP version and a few other details.
To access the command-line tool by simply typing wp, you need to move it into your server’s PATH and ensure that it has execute permissions:
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp
You can now access the WP-CLI tool by typing wp.
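The move works because /usr/local/bin is on the default PATH. The sketch below recreates the same chmod-and-move pattern using a stub script in a temporary directory (the stub and its output are illustrative only, not real WP-CLI):

```shell
# Stand-in for wp-cli.phar: a tiny script made executable and moved onto PATH.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho "WP-CLI stub"\n' > "$bindir/wp-cli.phar"
chmod +x "$bindir/wp-cli.phar"
mv "$bindir/wp-cli.phar" "$bindir/wp"
PATH="$bindir:$PATH"
command -v wp   # shows the path the shell now resolves "wp" to
wp              # runs the stub
```

On your server, the same two commands place the real wp-cli.phar at /usr/local/bin/wp.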
NAME
wp
DESCRIPTION
Manage WordPress through the command-line.
SYNOPSIS
wp
SUBCOMMANDS
cache      Adds, removes, fetches, and flushes the WP Object Cache object.
cap        Adds, removes, and lists capabilities of a user role.
cli        Reviews current WP-CLI info, checks for updates, or views defined aliases.
comment    Creates, updates, deletes, and moderates comments.
config     Generates and reads the wp-config.php file.
core       Downloads, installs, updates, and manages a WordPress installation.
The final package to install is MySQL. By default, Ubuntu provides MySQL packages through its own repository, but these are often one or more major versions behind the official MySQL releases. To ensure access to the latest stable and long-term supported (LTS) versions, we’ll configure Ubuntu to use MySQL’s official APT repository instead. This approach guarantees timely security patches, newer features, and better alignment with upstream support.
To access MySQL’s repository, we’ll first need to download MySQL’s apt configuration release package:
wget https://dev.mysql.com/get/mysql-apt-config_0.8.36-1_all.deb
Once downloaded, we can go ahead and install it with:
sudo dpkg -i mysql-apt-config_0.8.36-1_all.deb
Next, select Ok to complete the installation:

Lastly, proceed to update the repository with the following command:
sudo apt update
We’re now ready to install MySQL on the server. Simply run the following command:
sudo apt install mysql-server -y
Note: if you skip the MySQL APT repository configuration above, sudo apt install mysql-server -y will automatically select packages from the Ubuntu repository instead.
You’ll be prompted to set a password for MySQL’s root user:

Enter a password and select Ok.
Finally, to complete the setup process, we’ll go ahead and run MySQL’s secure installation script:
sudo mysql_secure_installation
Follow the instructions and answer the questions. You’ll enter the password that you just set. Here are my answers:
abe@pluto:~$ sudo mysql_secure_installation

Securing the MySQL server deployment.

Enter password for user root:

VALIDATE PASSWORD COMPONENT can be used to test passwords and improve security. It checks the strength of password and allows the users to set only those passwords which are secure enough. Would you like to setup VALIDATE PASSWORD component?

Press y|Y for Yes, any other key for No: y

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 2
Using existing password for root.

Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n

... skipping.

By default, a MySQL installation has an anonymous user, allowing anyone to log into MySQL without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.

Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.

By default, MySQL comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
- Dropping test database... Success.
- Removing privileges on test database... Success.

Reloading the privilege tables will ensure that all changes made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

All done!
abe@pluto:~$
That’s all for this chapter. In the next chapter I will guide you through the process of setting up your first WordPress site and how to manage multiple WordPress installs.
The post Install Nginx, PHP 8.4, WP-CLI, and MySQL 8.4 appeared first on SpinupWP.
The post Set Up and Secure a VPS on DigitalOcean appeared first on SpinupWP.
This is article 4 of 10 in the series “Hosting WordPress Yourself”
You will need a domain name to follow along in this guide. A subdomain is perfectly fine; in fact, we will be using pluto.turnipjuice.media for this tutorial. You will also need access to update your domain’s DNS. We highly recommend using Cloudflare’s DNS service, and they’re a pretty good place to buy domains too.
In this tutorial I’m not going to go into detail on the initial VPS creation process, as DigitalOcean has their own documentation. However, here are some things you should keep in mind when creating your new DigitalOcean Droplet:

Before we can install the web server software (e.g., PHP, MySQL database, etc) required for a WordPress installation, we first need to configure a few things on the server. We’ll start by logging into the server via SSH. If you’ve never SSH’ed into a server before, you may want to check out our Beginner’s Guide to SSH before proceeding.
ssh root@your_server_ip
Replace your_server_ip with your Droplet’s IP address. You’ll be asked to enter a password. Enter the password you provided when creating the Droplet in the previous step.
Welcome to Ubuntu 24.04.3 LTS (GNU/Linux 6.8.0-71-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

System information as of Tue Aug 5 12:59:21 UTC 2025

  System load:    1.09       Processes:              27
  Usage of /home: unknown    Users logged in:        0
  Memory usage:   5%         IPv4 address for eth0:  10.10.10.2
  Swap usage:     0%

Expanded Security Maintenance for Applications is not enabled.

0 updates can be applied immediately.

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status

The list of available updates is more than a week old.
To check for new updates run: sudo apt update

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@pluto:~#
Now that you’re logged into the server, let’s set the hostname and fully qualified domain name (FQDN). The hostname should be unique but doesn’t require any relationship to the sites that will be hosted, for example, some people opt to name their servers after astronomical objects.
Correctly setting the hostname and FQDN will make connecting to your server much easier in the future as you won’t have to remember the IP address each time. To set the hostname, issue the following command (altered for your chosen domain name):
hostnamectl hostname pluto.turnipjuice.media
In order to connect to the server using your hostname you need to update your domain name’s DNS settings. Log into your DNS control panel and create a new A record:

Make sure that the A record matches the hostname you configured on your web server and that the IP address of the web server is associated with your domain name. You may need to wait a while for the DNS settings to propagate.
If you’re using Cloudflare for your DNS, make sure to toggle OFF the proxy switch.
Once the DNS settings have propagated, if you exit out of the current SSH session you should be able to connect to the server using the new hostname.
ssh root@pluto.turnipjuice.media
DigitalOcean may default the new server’s timezone to that of the data center region. Setting the timezone explicitly through the timedatectl utility ensures that the system log files show the correct date and time. You can list the available timezones with:
timedatectl list-timezones
Then set your chosen timezone (altered as needed):
timedatectl set-timezone UTC
Once set, the new timezone can be displayed along with the current date and time by issuing the following command:
timedatectl
Although you have only just provisioned your new server, it is likely that some software packages are out of date. Let’s ensure you are using the latest software by pulling in updated package lists:
apt update
Once completed, let’s update all of the currently installed packages.
apt dist-upgrade
It is recommended to use apt dist-upgrade rather than apt upgrade because it intelligently handles changing dependencies, installing or removing packages as required.
You will be shown a list of the packages that will be updated, how much disk space will be used, and a prompt asking if you’d like to continue with the updates. Hit Enter to continue with the updates.
When the upgrades have completed you will be shown which packages have been installed, and also which packages are no longer required by the system.
You can remove the outdated packages by issuing the following command:
apt autoremove
It’s a good idea to reboot the server at this point. Run the following command:
reboot now
This will disconnect you from the server. You will need to wait until the server reboots before you can connect again via SSH:
ssh root@pluto.turnipjuice.media
It’s vitally important that you keep your server software updated so that software vulnerabilities are patched. Thankfully, Ubuntu can automatically perform software updates, keeping your server secure. It’s important to remember that this convenience can be quite dangerous and it’s recommended that you only enable security updates. This will automatically patch new vulnerabilities as they are discovered.
Non-security software updates should be tested on a staging server before installing them so as not to introduce breaking changes, which could inadvertently take your WordPress websites offline.
On some systems, this feature may automatically be enabled. If not, or you’re unsure, follow the steps below:
Install the unattended-upgrades package:
apt install unattended-upgrades
Create the required configuration files:
dpkg-reconfigure unattended-upgrades
You should see the following screen:

Choose “Yes” and hit Enter. Then, edit the configuration file:
nano /etc/apt/apt.conf.d/50unattended-upgrades
If you’re not familiar with editing files on the command line with nano, you might want to check out our tutorial How to Easily Edit Files Over SSH with Nano. If you’re already proficient with vim or some other command line editor, by all means use that instead.
Ensure that the security origin is allowed and that all others are removed or commented out. It should look like this:
// Automatically upgrade packages from these (origin:archive) pairs
//
// Note that in Ubuntu security updates may pull in new dependencies
// from non-security sources (e.g. chromium). By allowing the release
// pocket these get automatically pulled in.
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}";
"${distro_id}:${distro_codename}-security";
// Extended Security Maintenance; doesn't necessarily exist for
// every release and this system may not have it installed, but if
// available, the policy for updates is such that unattended-upgrades
// should also install from here by default.
"${distro_id}ESMApps:${distro_codename}-apps-security";
"${distro_id}ESM:${distro_codename}-infra-security";
// "${distro_id}:${distro_codename}-updates";
// "${distro_id}:${distro_codename}-proposed";
// "${distro_id}:${distro_codename}-backports";
};
Save the file using CTRL + X and then Y.
You may also wish to configure whether or not the system should automatically restart if it’s required for an update to take effect. The default behavior is to restart the server immediately after installing the update. To disable this completely, find the following line and uncomment it:
Unattended-Upgrade::Automatic-Reboot "false";
You can also replace false with a time if you’d like the server to be restarted automatically at a specific time:
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
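If you’d rather keep automatic reboots but schedule them for a quiet hour, the pair of directives in 50unattended-upgrades would look like this (the 04:00 time is just an example):

```
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```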
If your server does restart you must remember to start all critical services. By default Nginx, PHP and MySQL will automatically restart, but check out this Stack Overflow thread on how to add additional services if needed.
Finally, set how often the automatic updates should run:
nano /etc/apt/apt.conf.d/20auto-upgrades
Ensure that Unattended-Upgrade is in the list.
APT::Periodic::Unattended-Upgrade "1";
The number indicates how often the upgrades will be performed in days. A value of 1 will run upgrades every day.
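For reference, a minimal 20auto-upgrades usually contains both the package list refresh and the upgrade schedule — a sketch of a typical file:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```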
Once you’ve finished editing, save the file using CTRL + X and then Y and restart the service to have the changes take effect:
systemctl restart unattended-upgrades.service
We’ve finished configuring the web server basics and security updates. The next step in this tutorial is adding a new user to your server. This is done for two reasons:
This new user will be added to the sudo group so that you can execute commands which require heightened permissions, but only when required.
First, create the new user:
adduser abe
You’ll be prompted to enter a password, then some basic user information. As mentioned previously, this password should be complex:
root@pluto:~# adduser abe
Adding user `abe' ...
Adding new group `abe' (1000) ...
Adding new user `abe' (1000) with group `abe' ...
Creating home directory `/home/abe' ...
Copying files from `/etc/skel' ...
New password:
Retype new password:
passwd: password updated successfully
Changing the user information for abe
Enter the new value, or press ENTER for the default
Full Name []: Abe
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n]
Next, you need to add the new user to the sudo group:
usermod -aG sudo abe
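To confirm the group change took effect, you can list a user’s groups with id. The sketch below checks the current user (on your server you’d run id -nG abe, which should now include sudo); note that an existing session must log out and back in before new group membership applies:

```shell
# List the groups the current user belongs to.
groups_list=$(id -nG "$(whoami)")
echo "$groups_list"
```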
Now ensure your new account is working by logging out of your current SSH session and initiating a new one:
logout
Then login with the new account:
ssh abe@pluto.turnipjuice.media
At this point, your new user is ready to use. For enhanced security, you are going to set up public key authentication. As you’re planning to run WordPress on this server, it’s going to be publicly accessible and therefore a possible target for attackers, so it’s important to lock it down as best we can.
First, we’re going to need an SSH key pair: a public and a private key. You may already have generated one previously. If you haven’t generated an SSH key pair before, you might want to check out our Beginner’s Guide to SSH for an in-depth explanation.
To create a key pair, enter the following command in your computer’s terminal (not the remote server):
ssh-keygen -t ed25519 -C "abe@laptop"
Replace “abe@laptop” with something to help you identify this SSH key (it doesn’t have to be an email address).
You should receive a message as I have below, just hit return to accept the default location. You’ll then be prompted to enter a passphrase (optional), which will require you to enter a password every time you log in with this key pair:
abe@Abes-MBP:~$ ssh-keygen -t ed25519 -C "abe@laptop"
Generating public/private ed25519 key pair.
Enter file in which to save the key (/Users/bradt/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/bradt/.ssh/id_ed25519
Your public key has been saved in /Users/bradt/.ssh/id_ed25519.pub
The key fingerprint is:
SHA256:6zrqae1MT26zBVOHzGWVWJxH8xjrWr8TWM8io6Qsdx8 abe@laptop
The key's randomart image is:
+--[ED25519 256]--+
|            o=+=.|
|         o +. +++|
|          = . o..|
|         . . . . |
|        S     =..|
|         +. o+.oo|
|     ...oo..Eo .o|
|    .++=*.o .  ..|
|   o+o+*=+ ..  ..|
+----[SHA256]-----+
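If you prefer to script key generation, ssh-keygen can also run non-interactively. The sketch below writes a throwaway key pair to a temporary directory (the path and empty passphrase are assumptions for illustration; for your real key, keep the default location and consider setting a passphrase):

```shell
# Generate an ed25519 key pair without prompts:
# -f sets the output path, -N "" sets an empty passphrase, -q silences output.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -C "abe@laptop" -N "" -f "$tmp/id_ed25519" -q
# The public half is what gets copied to the server's authorized_keys file.
cat "$tmp/id_ed25519.pub"
```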
Now that you have your SSH key pair, you need to copy the public key to your server. First, let’s create a place for it on the server. Go back to the SSH session to your remote server, ensuring you are logged in with the newly created user. Now create the .ssh directory and set the correct permissions:
mkdir ~/.ssh
chmod 700 ~/.ssh
Within the .ssh directory create a new file called authorized_keys:
nano ~/.ssh/authorized_keys
Now switch back to your computer’s terminal (not the remote server). Assuming you saved the key in the default location, the following command will copy the key to your clipboard:
cat ~/.ssh/id_ed25519.pub | pbcopy
Switch back to the remote server terminal and paste your public key into the authorized_keys file. Save the file using CTRL + X and then Y. Finally, set the correct permissions on the file:
chmod 600 ~/.ssh/authorized_keys
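These permission bits matter: with OpenSSH’s default StrictModes setting, the server may ignore authorized_keys entirely if the file or the .ssh directory is group- or world-accessible. A quick sketch of the pattern, using a temporary directory in place of ~/.ssh:

```shell
# Recreate the ~/.ssh permission layout in a temp dir and verify the modes.
demo=$(mktemp -d)/.ssh
mkdir -p "$demo"
chmod 700 "$demo"                      # only the owner may enter the directory
touch "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"      # only the owner may read/write the file
stat -c '%a %n' "$demo" "$demo/authorized_keys"
```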
Now if you log out of the current SSH session and try reconnecting, you should no longer have to enter your user password. Remember, if you set a passphrase when creating the SSH key, you will need to enter it when prompted.
For the rest of this tutorial, you’ll notice I’m using sudo in front of each command to elevate privileges. This allows my ‘normal’ user to make ‘root’-level changes, but only when required.
With your new user created, it’s time to further secure the server by configuring SSH. The first thing you are going to do is disable SSH access for the root user, which will no longer let you log into the server via SSH using the root user. Open the SSH configuration file using nano:
sudo nano /etc/ssh/sshd_config
Find the line that reads PermitRootLogin yes and change it to PermitRootLogin no. Hit CTRL + X then Y to save the changes. In order for the changes to take effect you must restart the SSH service:
sudo systemctl restart ssh.service
Now if you exit out of the current SSH session and try connecting with the root user you should receive a permission denied error message after entering the correct password for the root user.
The final step to securing SSH is to disable user login using a password. This ensures that you need your private SSH key to log into the server. Remember, if you lose your private key you will be locked out of the server, so keep it safe! Most virtual machine server providers like DigitalOcean do have other means of logging in, but it’s best not to rely on those methods:
sudo nano /etc/ssh/sshd_config
Find the line that reads #PasswordAuthentication yes and change it to PasswordAuthentication no. Hit CTRL + X then Y to save the changes. Once again, you must restart the SSH service for the changes to take effect.
sudo systemctl restart ssh.service
Now, before you log out of your server, you should test your new configuration. To do this open a new terminal window, without closing the current SSH session and attempt to connect:
ssh abe@pluto.turnipjuice.media
You should log in to the server successfully. To further test that password authentication is disabled, temporarily rename the SSH key located in your .ssh directory. When attempting to log into the server this time you should receive a Permission denied (publickey) error.
If you’re still able to login with a password, there could be an included configuration file that’s overriding the PasswordAuthentication setting. Check the /etc/ssh/sshd_config.d folder to see if there are any configuration files in there:
abe@pluto:~$ sudo ls -la /etc/ssh/sshd_config.d
total 12
drwxr-xr-x 2 root root 4096 Apr  5 14:10 .
drwxr-xr-x 4 root root 4096 Apr  5 14:09 ..
-rw------- 1 root root   27 Apr  5 12:33 50-cloud-init.conf
In this case, there’s one configuration file containing one line PasswordAuthentication yes. Comment out that line or delete the file, restart the SSH service again, and hopefully you get the Permission denied (publickey) error.
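One quick way to track down which file sets the directive is to grep every sshd config file for it. The sketch below recreates the situation with temporary files standing in for /etc/ssh (on the real server you’d run sudo grep -r PasswordAuthentication /etc/ssh/sshd_config /etc/ssh/sshd_config.d/):

```shell
# Simulate a main config whose setting is overridden by an included drop-in file.
cfg=$(mktemp -d)
mkdir "$cfg/sshd_config.d"
echo "PasswordAuthentication no"  > "$cfg/sshd_config"
echo "PasswordAuthentication yes" > "$cfg/sshd_config.d/50-cloud-init.conf"
# grep -r lists every file that sets the directive, making the conflict obvious.
grep -r "PasswordAuthentication" "$cfg"
```

On Ubuntu the Include line sits at the top of sshd_config and sshd uses the first value it reads, which is why the drop-in wins.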
The firewall provides an additional layer of network security by blocking unwanted inbound traffic to your server. I’m going to demonstrate the iptables firewall, which is the most commonly used across Linux distributions and is installed by default. To simplify the process of adding rules to the firewall, we’ll use a package called ufw, which stands for Uncomplicated Firewall. The ufw package is usually installed by default, but if it isn’t, go ahead and install it using the following command:
sudo apt install ufw
Now you can begin adding to the default rules, which deny all incoming traffic and allow all outgoing traffic. For now, add the ports for SSH (22), HTTP (80), and HTTPS (443):
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
To review which rules will be added to the firewall, enter the following command:
sudo ufw show added
You should see the following output:
abe@pluto:~$ sudo ufw show added
Added user rules (see 'ufw status' for running firewall):
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443
Before enabling the firewall rules, ensure that the port for SSH is in the list of added rules – otherwise, you won’t be able to connect to your server! The default port is 22. If everything looks correct, go ahead and enable the configuration:
sudo ufw enable
To confirm that the new rules are active, enter the following command:
sudo ufw status verbose
You will see that all inbound traffic is denied by default except on ports 22, 80, and 443 for both IPv4 and IPv6, which is a good starting point for most servers.
abe@pluto:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
443                        ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
80/tcp (v6)                ALLOW IN    Anywhere (v6)
443 (v6)                   ALLOW IN    Anywhere (v6)
Fail2ban is a tool that works alongside your firewall. It monitors intrusion attempts on your server and blocks the offending host for a set period of time by adding any IP address that shows malicious activity to your firewall rules. It’s highly recommended to install something like Fail2ban on any server that will be running WordPress, especially if you intend to install third-party plugins.
The Fail2ban program isn’t installed by default, so let’s install it now:
sudo apt install fail2ban
The default configuration should suffice, which will ban a host for 10 minutes after 6 unsuccessful login attempts via SSH. To ensure the fail2ban service is running enter the following command:
sudo systemctl enable --now fail2ban.service
And to check that it’s running, run the status command:
abe@pluto:~$ sudo systemctl status fail2ban.service
● fail2ban.service - Fail2Ban Service
Loaded: loaded (/usr/lib/systemd/system/fail2ban.service; enabled; preset: enabled)
Active: active (running) since Mon 2025-09-15 01:49:26 UTC; 14min ago
Docs: man:fail2ban(1)
Main PID: 12792 (fail2ban-server)
Tasks: 5 (limit: 2318)
Memory: 20.9M (peak: 21.1M)
CPU: 1.391s
CGroup: /system.slice/fail2ban.service
└─12792 /usr/bin/python3 /usr/bin/fail2ban-server -xf start
Sep 15 01:49:26 pluto.turnipjuice.media systemd[1]: Started fail2ban.service - Fail2Ban Service.
Sep 15 01:49:26 pluto.turnipjuice.media fail2ban-server[12792]: 2025-09-15 01:49:26,222 fail2ban.configreader [12792]: WARNING 'allowipv6' not defined in 'Definition'. Using default one: 'auto'
Sep 15 01:49:26 pluto.turnipjuice.media fail2ban-server[12792]: Server ready
You may have noticed the ‘allowipv6’ warning in the log output above. To silence it, create a fail2ban.local file within the /etc/fail2ban/ directory:
sudo nano /etc/fail2ban/fail2ban.local
Add the following lines:
[DEFAULT]
allowipv6 = auto
Then restart the service once more:
sudo systemctl restart fail2ban.service
Job done! You now have a good platform to begin building your WordPress web server and have taken the necessary steps to prevent unauthorized access. However, it’s important to remember that security is an ongoing process and you should keep in mind the following points:
That’s all for chapter 1. Later on in this guide, we’ll cover things like obtaining a Let’s Encrypt SSL certificate and setting up automated remote backups among other things. However, in the next chapter, I will guide you through installing Nginx, PHP-FPM, and MySQL.
The post Complete Nginx Configuration Kit for WordPress appeared first on SpinupWP.
This is article 3 of 10 in the series “Hosting WordPress Yourself”
In the previous chapter we set up server monitoring and discussed ongoing maintenance for our Ubuntu web server. In this final chapter I offer a complete Nginx configuration optimized for hosting WordPress websites.
In addition to amalgamating all information from the previous 9 chapters, I will be drawing upon best practices from my experience and various sources I’ve come across over the years. The following example domains are included, each demonstrating a different scenario:
single-site.com – WordPress on HTTPS
single-site-with-caching.com – WordPress on HTTPS with FastCGI page caching
multisite-subdomain.com – WordPress Multisite using subdomains
multisite-subdirectory.com – WordPress Multisite using subdirectories
Before diving into this configuration, we recommend double-checking that you have the latest version of MySQL by referencing the tutorial in chapter 2 of this guide. Once that’s confirmed, you’ll see that the configuration files contain inline documentation throughout and are structured in a way that reduces duplicate directives, which are common across multiple WordPress configurations. This should allow you to quickly create new WordPress sites with sensible defaults out of the box, which can be customized as required.
You can use these configs as a reference for creating your own configuration, or use them directly by copying them into your /etc directory. Follow the steps below to replace your existing Nginx server configuration.
Back up any existing config with the following command:
sudo mv /etc/nginx /etc/nginx.backup
Copy one of the example configurations from sites-available to sites-available/yourdomain.com:
sudo cp /etc/nginx/sites-available/single-site.com /etc/nginx/sites-available/yourdomain.com
Edit the config as necessary, paying close attention to the server name and server paths. You will also need to create any directories used within the configuration and configure Nginx to have read/write permissions.
To enable the site, symlink the configuration into the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/yourdomain.com
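If the sites-available/sites-enabled convention is new to you: the enabled directory holds nothing but symlinks back to the real files, so disabling a site is just deleting the link. A sketch of the pattern, with temp directories standing in for /etc/nginx:

```shell
# Demonstrate the symlink pattern: the real config lives in sites-available;
# sites-enabled only points at it.
root=$(mktemp -d)
mkdir "$root/sites-available" "$root/sites-enabled"
echo "# server block" > "$root/sites-available/yourdomain.com"
ln -s "$root/sites-available/yourdomain.com" "$root/sites-enabled/yourdomain.com"
# Reading through the link returns the real file's contents.
cat "$root/sites-enabled/yourdomain.com"
# Removing the link "disables" the site without touching the real config.
rm "$root/sites-enabled/yourdomain.com"
```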
Test the configuration:
sudo nginx -t
If the configuration passes, reload Nginx:
sudo systemctl reload nginx.service
The following is a preview of the single-site.com Nginx configuration file that’s contained in the package. It should give you a good idea of what it’s like to use our configs.
server {
# Ports to listen on
listen 443 ssl;
listen [::]:443 ssl;
listen 443 quic;
listen [::]:443 quic;
http2 on;
# Server name to listen for
server_name single-site.com;
# Path to document root
root /sites/single-site.com/public;
# Paths to certificate files.
ssl_certificate /etc/letsencrypt/live/single-site.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/single-site.com/privkey.pem;
# File to be used as index
index index.html index.php;
# Overrides logs defined in nginx.conf, allows per site logs.
access_log /sites/single-site.com/logs/access.log;
error_log /sites/single-site.com/logs/error.log;
# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /\.(?!well-known\/) {
deny all;
}
# Prevent access to certain file extensions
location ~\.(ini|log|conf)$ {
deny all;
}
# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
deny all;
}
# Hide Nginx version in error messages and response headers.
server_tokens off;
# Don't allow pages to be rendered in an iframe on external domains.
add_header X-Frame-Options "SAMEORIGIN" always;
# MIME sniffing prevention
add_header X-Content-Type-Options "nosniff" always;
# The X-XSS-Protection header has been deprecated by modern browsers and its use can introduce additional security issues on the client side.
# As such, it is recommended to set the header as X-XSS-Protection: 0 in order to disable the XSS Auditor, and not allow it to take the default behavior of the browser handling the response.
# Please use Content-Security-Policy instead.
add_header X-XSS-Protection "0" always;
# Whitelist sources which are allowed to load assets (JS, CSS, etc). The following will block
# only non-HTTPS assets, but check out https://scotthelme.co.uk/content-security-policy-an-introduction/
# for an in-depth guide on creating a more restrictive policy.
# add_header Content-Security-Policy "default-src 'self' https: data: 'unsafe-inline' 'unsafe-eval';" always;
    # Don't cache appcache, document html and data.
    location ~* \.(?:manifest|appcache|html?|xml|json)$ {
        expires 0;
    }

    # Cache RSS and Atom feeds.
    location ~* \.(?:rss|atom)$ {
        expires 1h;
    }

    # Cache images, icons, video, audio, HTC, etc.
    location ~* \.(?:jpg|jpeg|gif|png|avif|webp|ico|cur|gz|svg|mp4|mp3|ogg|ogv|webm|htc)$ {
        expires 1y;
        access_log off;
    }

    # Cache svgz files, but don't compress them.
    location ~* \.svgz$ {
        expires 1y;
        access_log off;
        gzip off;
    }

    # Cache CSS and JavaScript.
    location ~* \.(?:css|js)$ {
        expires 1y;
        access_log off;
    }

    # Cache WebFonts.
    location ~* \.(?:ttf|ttc|otf|eot|woff|woff2)$ {
        expires 1y;
        access_log off;
        add_header Access-Control-Allow-Origin *;
    }

    # Don't record access/error logs for robots.txt.
    location = /robots.txt {
        try_files $uri $uri/ /index.php$is_args$args;
        access_log off;
        log_not_found off;
    }
    # Don't use outdated SSL/TLS protocols. Protects against BEAST and POODLE attacks.
    ssl_protocols TLSv1.2 TLSv1.3;

    # Use secure ciphers.
    ssl_ecdh_curve X25519:prime256v1:secp384r1;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
    ssl_dhparam /etc/nginx/dhparam;
    ssl_prefer_server_ciphers off;
    ssl_session_tickets off;

    # Define the size of the SSL session cache in MB.
    ssl_session_cache shared:SSL:10m;

    # Define how long to cache SSL sessions.
    ssl_session_timeout 1h;

    # Use HTTPS exclusively for 1 year. Uncomment the second line instead to also
    # apply the policy to subdomains.
    add_header Strict-Transport-Security "max-age=31536000;";
    # add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

    # Advertise support for HTTP/3.
    add_header Alt-Svc 'h3=":443"; ma=86400';
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include global/fastcgi-params.conf;

        # Use the PHP pool defined in the upstream variable.
        # See global/php-pool.conf for definition.
        fastcgi_pass $upstream;
    }
}
# Redirect HTTP to HTTPS.
server {
    listen 80;
    listen [::]:80;
    server_name single-site.com www.single-site.com;
    return 301 https://single-site.com$request_uri;
}

# Redirect www to non-www.
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    listen 443 quic;
    listen [::]:443 quic;
    http2 on;
    server_name www.single-site.com;

    # Advertise support for HTTP/3.
    add_header Alt-Svc 'h3=":443"; ma=86400';

    return 301 https://single-site.com$request_uri;
}
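Before relying on the blocking rules above, it can be worth sanity-checking their regular expressions outside of Nginx. The sketch below approximates the uploads/files PHP-blocking pattern with grep: the `(?:...)` non-capturing group is rewritten as a plain group (grep -E doesn't support PCRE syntax), `-i` mirrors the case-insensitive `~*` modifier, and the example paths are purely illustrative.

```shell
#!/bin/sh
# Approximation of the Nginx location regex:  ~* /(?:uploads|files)/.*\.php$
# grep -E has no (?:...) syntax, so a plain group is used instead.
re='/(uploads|files)/.*\.php$'

check() {
    if printf '%s\n' "$1" | grep -qiE "$re"; then
        echo "blocked: $1"
    else
        echo "allowed: $1"
    fi
}

check "/wp-content/uploads/evil.php"    # matches the pattern, so Nginx would deny it
check "/wp-content/uploads/photo.png"   # no match, served normally
```

This is only a quick approximation for eyeballing the patterns; the authoritative test is still reloading Nginx and requesting the URLs directly.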
Job done! I encourage you to explore the config files further and read through the documented configuration to get a feel for what’s going on. It should feel familiar as it follows the same conventions used throughout this guide.
Over time I will improve the configuration and add new best practices as they emerge. If you have any improvements, please let me know.
That concludes this chapter and the guide as a whole. It’s been quite a journey, but hopefully you’ve learned a lot and are more confident managing a server than when you started.
The post Complete Nginx Configuration Kit for WordPress appeared first on SpinupWP.
This is article 2 of 10 in the series “Hosting WordPress Yourself”
In the previous chapter, I walked you through WordPress caching. In this chapter, I will demonstrate how to configure WordPress cron and set up outgoing email.
WordPress has built-in support for task scheduling, which allows certain processes to be performed in the background at designated times. Out of the box, WordPress uses this to perform scheduled tasks such as checking for core, plugin, and theme updates and publishing scheduled posts.
However, the cron system built into WordPress isn’t the most performant or precise on its own. Scheduled tasks in WordPress are triggered during the lifecycle of a page request, so if your WordPress site doesn’t receive any visits for a period of time, no cron events will be triggered.
This is especially true of sites that use page caching, such as the Nginx FastCGI cache introduced in the previous chapter. With page caching enabled, WordPress no longer processes a request when the page cache is hit, which means cron will not fire until the page cache expires. If you have configured the cache to expire after 60 minutes, this may not be an issue; however, if you are caching for longer periods of time, it can become problematic.
Using page requests to execute cron is also problematic on high-traffic sites without page caching. Checking whether cron needs to run on every page request is hard on server resources, and several simultaneous requests could cause the same cron event to execute multiple times.
To overcome these issues, cron should be configured using the operating system’s cron daemon (background process), which is available on Linux and all Unix-based systems. Because cron runs as a daemon, it runs based on the server’s system time and no longer requires a user to visit the WordPress site.
Before configuring cron it’s recommended that you disable WordPress from automatically handling cron. Add the following line to your wp-config.php file:
define('DISABLE_WP_CRON', true);
Scheduled tasks on a server are added to a text file called the crontab, and each line within the file represents one cron job. If you’re hosting multiple sites on your server, you will need one cron job per site, and you should consider staggering their execution to avoid running them all at the same time and overwhelming your CPU.
Begin by connecting to your server.
ssh [email protected]
Open the crontab using the following command. If this is the first time you have opened the crontab, you may be asked to select an editor. Nano is usually the easiest.
crontab -e

I’m not going to go into detail on the crontab syntax, but adding the following to the end of the file will trigger WordPress cron every 5 minutes. Remember to update the file path to point to your WordPress installation and to repeat the entry for each site.
*/5 * * * * cd /home/abe/globex.turnipjuice.media/public; /usr/local/bin/wp cron event run --due-now --quiet
Some articles suggest using wget or cURL to trigger cron, but WP-CLI is the recommended approach. Both wget and cURL make requests through Nginx and are therefore subject to the same timeout limits as web requests, which is a problem if your cron jobs need to run for longer. There is no timeout limit when running WordPress cron via WP-CLI; it will execute until complete.
The --quiet flag ensures that no emails are sent to the Unix user account initiating the WordPress cron job scheduler.
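If you host several sites, the staggering mentioned earlier can be as simple as offsetting each entry’s minute field. A sketch (the site names and paths below are illustrative, not from this guide):

```
# m    h dom mon dow  command
*/5    * *   *   *    cd /home/abe/site-one.com/public; /usr/local/bin/wp cron event run --due-now --quiet
1-59/5 * *   *   *    cd /home/abe/site-two.com/public; /usr/local/bin/wp cron event run --due-now --quiet
2-59/5 * *   *   *    cd /home/abe/site-three.com/public; /usr/local/bin/wp cron event run --due-now --quiet
```

The `1-59/5` syntax shifts the five-minute schedule to start at minute 1, so the second job runs at :01, :06, :11, and so on, while the third runs at :02, :07, :12, keeping the jobs from piling up at the same moment.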
Save the file by hitting CTRL + X followed by Y.
Cron is now configured using the Unix system cron tool. We’ll check that it’s running correctly later on.
Email servers are notoriously difficult to set up. Not only do you need to ensure that emails successfully hit recipient inboxes, but you also have to consider how you’ll handle spam and viruses (sent as email attachments). Installing the required software to run your own mail server can also eat up valuable system resources and potentially open up your server to more security vulnerabilities. This DigitalOcean article discusses in more detail why you may not want to host your own mail server.
I do not recommend that you configure your server to handle email. Instead, use a solid service provider, such as Google Workspace. However, WordPress still needs to send outgoing emails, such as password resets, new user notifications, and comment moderation notices.
And that’s just WordPress core. Add new plugins to the mix and the volume and importance of emails sent from your site can balloon. Think WooCommerce and order receipts.
We recommend choosing a solid WordPress plugin for email sending and pairing it with your favorite transactional email service. Look for a plugin with a sending queue that handles failures more gracefully than simply adding an entry to your error log.
WP Offload SES is a good choice, as is WP Mail SMTP. If your site sends very little mail, configuring it to send email via SMTP using Gmail (or whichever provider you use for email) isn’t a bad option.
In order to test that both cron and outgoing emails are working correctly, I have written a small plugin that will send an email to the admin user every 5 minutes. This isn’t something that you’ll want to keep enabled indefinitely, so once you have established that everything is working correctly, remember to disable the plugin!
Create a new file called cron-test.php within your plugins directory, with the following code:
<?php
/**
 * Plugin Name: Cron & Email Test
 * Plugin URI: https://spinupwp.com/hosting-wordpress-yourself-cron-email-automatic-backups/
 * Description: WordPress cron and email test.
 * Author: SpinupWP
 * Version: 1.0
 * Author URI: http://spinupwp.com
 */

/**
 * Register a custom five-minute cron schedule.
 *
 * @param array $schedules
 *
 * @return array
 */
function db_crontest_schedules( $schedules ) {
    $schedules['five_minutes'] = array(
        'interval' => 300, // 5 minutes, in seconds.
        'display'  => 'Once Every 5 Minutes',
    );

    return $schedules;
}
add_filter( 'cron_schedules', 'db_crontest_schedules', 10, 1 );

/**
 * Schedule the test event on plugin activation.
 */
function db_crontest_activate() {
    if ( ! wp_next_scheduled( 'db_crontest' ) ) {
        wp_schedule_event( time(), 'five_minutes', 'db_crontest' );
    }
}
register_activation_hook( __FILE__, 'db_crontest_activate' );

/**
 * Clear the scheduled event on plugin deactivation.
 */
function db_crontest_deactivate() {
    wp_unschedule_event( wp_next_scheduled( 'db_crontest' ), 'db_crontest' );
}
register_deactivation_hook( __FILE__, 'db_crontest_deactivate' );

/**
 * Send a test email to the site admin.
 */
function db_crontest() {
    wp_mail( get_option( 'admin_email' ), 'Cron Test', 'All good in the hood!' );
}
add_action( 'db_crontest', 'db_crontest' );
Shortly after activating the plugin, you should receive an email. If not, check your crontab configuration and your WP Offload SES settings.
That concludes this chapter. In the next chapter we’ll look at configuring automatic backups for your WordPress websites.
The post WordPress Cron and Email Sending appeared first on SpinupWP.