DevMaverick https://devmaverick.com/ Building robust web solutions Sat, 14 Mar 2026 09:13:09 +0000 en-US hourly 1 https://devmaverick.com/wp-content/uploads/2017/05/cropped-logo-small-full-thick-border-square-gold-150x150.png DevMaverick https://devmaverick.com/ 32 32 WordPress 7.0 and My WordPress https://devmaverick.com/wordpress-7-0-and-my-wordpress/ https://devmaverick.com/wordpress-7-0-and-my-wordpress/#respond Sat, 14 Mar 2026 09:05:43 +0000 https://devmaverick.com/?p=7774 WordPress 7.0 In April we’ll be getting a new version of WordPress, a major update. As I’m writing this article, WordPress 7.0 is on beta 5. Here’s the complete list of […]

The post WordPress 7.0 and My WordPress appeared first on DevMaverick.

WordPress 7.0

In April we’ll be getting a new version of WordPress, a major update.

As I’m writing this article, WordPress 7.0 is on beta 5.

Here’s the complete list of features planned for WordPress 7.0:

A few of the features I’m looking forward to:

  • Collaboration
    This seems like an incredibly useful feature. Many of us are already familiar with this kind of collaboration from Figma, where you can leave comments on a design, tag someone, and mark an issue as resolved. Having this available in Gutenberg opens up a lot of possibilities. The marketing team can work on drafts and leave suggestions and comments until they reach the final version. Much more natural than making changes in a document.
  • WP Client AI API
A centralized API that plugins and themes can call, so they all use the AI model you've configured in WP instead of each wiring up their own.
  • Creating blocks and patterns on the server
    The ability to create blocks directly from PHP. There are some limitations around the fields that can be used with these blocks, but at least now this option exists for those who don’t want to use React.
    https://make.wordpress.org/core/2026/03/03/php-only-block-registration/
  • Navigation Block
    Better control over navigation, including on mobile.
  • Responsive editing mode
    In my opinion, this is the most important update for editing in Gutenberg. A lot of people prefer to use Elementor or Kadence simply because Gutenberg doesn’t offer the ability to control core blocks based on the device’s breakpoint. I hope it will be implemented in 7.0, with support not just for hiding certain blocks at different screen sizes, but also for applying different values for padding, margin, background-color, and so on.

My WordPress

You can now run WordPress directly in your browser by visiting my.wordpress.net.

It's a WordPress instance that runs entirely in your browser, locally, and is saved between sessions. So if you come back later and visit the address, everything you worked on will still be there.

It isn't hosted anywhere and can't be accessed from the outside. It's a local instance that even supports installing themes and plugins.

Its purpose seems to be more geared toward keeping a personal journal or organizing information, with a strong focus on privacy, since everything is stored on your device.

Not a development tool like Studio or Local.

If it becomes popular, I think we’ll see a lot of hosting services or plugins that let you sync from this local instance directly to the server where your site lives.

One example would be working on your articles directly in this instance, and when you’re ready, triggering a sync that pushes everything to your production site.

The same could apply across devices — if you want to open it on your laptop or phone, you’d always have the latest version synced between them.

For now, it comes with a few available apps, different from what we’re used to in WP:

  • Personal CRM
  • Personal RSS Reader
  • AI Workspace and Knowledge Base

My WordPress Apps available

More details:

Issue with autoplay video element on iPhone using webm https://devmaverick.com/issue-with-autoplay-video-element-on-iphone-using-webm/ https://devmaverick.com/issue-with-autoplay-video-element-on-iphone-using-webm/#respond Tue, 14 Oct 2025 14:50:46 +0000 https://devmaverick.com/?p=7714 iPhones don’t autoplay WEBM videos, even when muted and looped. Swapping from MP4 to WEBM broke autoplay, forcing a revert to MP4.

The post Issue with autoplay video element on iPhone using webm appeared first on DevMaverick.

When something doesn’t work as it is supposed to, I swear to God, it’s an Apple device all the time.

Most recently, I had a problem with autoplay of video elements in the browser for mobile. Instead of using gifs, we decided to use muted, looped videos and it worked fine.

At some point, we switched them from MP4 to WebM. Nothing crazy.

Only to find out a few days later that iPhone devices were displaying a blank space instead of the video.

I went over the settings again, making sure it’s muted, it has autoplay, it has the poster in place, all that stuff.

It turns out that iPhones will not autoplay WebM files, so we had to revert to the MP4 version.

The example below works; it will autoplay the muted video in a loop on your iPhone, as long as it's not a WebM.

<video poster="video-poster.png" autoplay playsinline loop muted>
    <source src="video.mp4" type="video/mp4" />
    Your browser does not support the video tag.
</video>
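If you'd still like to serve the smaller WebM to browsers that handle it, one option worth testing is listing both formats and letting the browser pick. Browsers use the first source whose type they report as playable, so devices that can't play WebM fall back to the MP4. Be aware that a browser that claims WebM support but refuses to autoplay it will still pick the WebM source, so verify this on a real iPhone (file names here are placeholders):

```html
<video poster="video-poster.png" autoplay playsinline loop muted>
    <!-- listed first: used by browsers that can play WebM -->
    <source src="video.webm" type="video/webm" />
    <!-- fallback for browsers that can't play WebM -->
    <source src="video.mp4" type="video/mp4" />
    Your browser does not support the video tag.
</video>
```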

How to translate a custom WordPress Gutenberg block https://devmaverick.com/how-to-translate-a-custom-wordpress-gutenberg-block/ https://devmaverick.com/how-to-translate-a-custom-wordpress-gutenberg-block/#respond Tue, 30 Sep 2025 14:22:16 +0000 https://devmaverick.com/?p=7665 A quick guide on how to translate custom WordPress Gutenberg blocks using JSON files for JS with explanations on why you got stuck.

The post How to translate a custom WordPress Gutenberg block appeared first on DevMaverick.



Intro

I’m writing this article because I just ran into this issue today.

I've created a custom Gutenberg block that uses view.js to render the content on the front end.
Usually I use render.php for that, but in this case the front-end block rendered a React application.

This now had to be translated and it was a bit annoying until I understood all the details of how to do it.

I’m going to walk you through the steps of adding translation to a custom WordPress Gutenberg block that relies on JS to render the front-end.

Setup

I'm using npx @wordpress/create-block to get the scaffolding and the initial setup for my blocks.

You should end up with a folder structure similar to the one below:

├── src
	├── block-name
		├── block.json
		├── edit.js
		├── editor.scss
		├── index.js
		├── render.php
		├── style.scss
		├── view.js

Since in most cases I only need to translate the front-end render.php using the __(), it works fine and there is no issue.

But in this case I’m not using the render.php, but instead the view.js.
My block.json looks something like this.

{
	"$schema": "https://schemas.wp.org/trunk/block.json",
	"apiVersion": 3,
	"name": "custom-block",
	"version": "0.1.0",
	"title": "Custom Block",
	"category": "dm-block-general",
	"description": "Some description goes in here",
	"keywords": [ "custom" ],
	"supports": {
		"anchor": true,
		"color": {
			"text": true,
			"background": true
		}
	},
	"example": {},
	"textdomain": "my-text-domain",
	"editorScript": "file:./index.js",
	"editorStyle": "file:./index.css",
	"style": "file:./style-index.css",
	"viewScript": "file:./view.js"
}

Internationalization

What you need to do is make sure you add

import { __ } from "@wordpress/i18n";

in your view.js. This way you can start using the internationalization with your text-domain inside the JS file.

view.js will look like this:

import { __ } from "@wordpress/i18n";

export default function View() {
	return (
		<section className="custom-block">
			{ __( "Here is some text that needs to be translated", "my-text-domain" ) }
		</section>
	);
}

Generate PO/POT and JSON files

If you don't have a PO or POT file, you'll need to generate one, because we'll need it later.
You can run wp i18n make-pot . languages/my-text-domain.pot (adjust the source and destination paths to your project).

Next you need to generate the JSON file for the JS translation.
For your PHP you use the PO/MO files, but for JS you need to use JSON.

You can run the command wp i18n make-json.

If you already have a PO file, you can run

wp i18n make-json es_ES.po

Add --no-purge at the end if you want the PO to remain untouched.

wp i18n make-json es_ES.po --no-purge

This will generate a JSON file based on the PO that you indicated.

The new JSON file might have a name like this:
my-text-domain-es_ES-1234556789.json
(the number at the end is an MD5 hash of the translated script's file path). Make sure you rename it to:
[text-domain]-[language]-[script-handle].json

We'll get to the actual naming a bit lower on the page, with two different examples.

Load JSON translation

Next we need to make sure the text-domain is registered and that the script translation is loaded correctly.

TBH, I don't think you need to load the text domain explicitly, since it's already declared in block.json and WordPress seems to pick it up automatically, but I'm going to add it here anyway.

function dm_load_theme_textdomain() {
    load_theme_textdomain( 'my-text-domain', get_template_directory() . '/languages' );

    // Load the JSON translation for the block's view script
    wp_set_script_translations( 'custom-block-view-script', 'my-text-domain', __DIR__ . '/languages/' );
}
add_action( 'init', 'dm_load_theme_textdomain' );

As you can see in the code above, I’m using /languages/ as the path for all translation files. That is where they are expected to be.

And now for the part that confused me, at least: loading the translation JSON file for the block's JS.

It requires

wp_set_script_translations( string $handle, string $domain = 'default', string $path = '' )

  • $handle is the script handle the textdomain will be attached to.
  • $domain is the text-domain
  • $path is the full file path to the directory containing translation files.

Determine script handle for Gutenberg blocks

For our example with a custom Gutenberg block, the handle is the thing that confused me.
The handle is composed from the block name (in block.json) plus -view-script if you're referencing view.js, or -editor-script if you're referencing edit.js.

The name will be custom-block, which makes the handle custom-block-view-script or custom-block-editor-script.

And if you used a block name that has a grouping / namespace, like the one below:

{
	"$schema": "https://schemas.wp.org/trunk/block.json",
	"apiVersion": 3,
	"name": "dm-blocks/custom-block"
}

the name will be dm-blocks-custom-block, which makes the handle dm-blocks-custom-block-view-script or dm-blocks-custom-block-editor-script.

For me, the confusion started from the fact that these handle suffixes are mentioned nowhere obvious.

Correct JSON file naming

Now, with this information in mind we can rename the language JSON correctly.
It will be my-text-domain-es_ES-custom-block-view-script.json or if you use a grouping name, it will be my-text-domain-es_ES-dm-blocks-custom-block-view-script.json.
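The naming rules above can be sketched as a small helper (the function names here are hypothetical; the handle logic mirrors how WordPress builds block asset handles, i.e. the block name with slashes replaced by dashes, plus the field suffix):

```javascript
// Hypothetical helpers illustrating the naming scheme described above.

// View-script handle: the block name (slashes become dashes) + "-view-script".
function viewScriptHandle(blockName) {
	return blockName.replace(/\//g, "-") + "-view-script";
}

// JSON translation file name: [text-domain]-[language]-[script-handle].json
function translationJsonName(textDomain, locale, handle) {
	return `${textDomain}-${locale}-${handle}.json`;
}

console.log(viewScriptHandle("custom-block"));
// custom-block-view-script
console.log(translationJsonName("my-text-domain", "es_ES", viewScriptHandle("dm-blocks/custom-block")));
// my-text-domain-es_ES-dm-blocks-custom-block-view-script.json
```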

Once you've figured this out and it's all in place, the translation works great.

Resources

Implement Multilingual in a WordPress Gutenberg / FSE https://devmaverick.com/implement-multilingual-in-a-wordpress-gutenberg-fse/ https://devmaverick.com/implement-multilingual-in-a-wordpress-gutenberg-fse/#comments Thu, 07 Aug 2025 16:30:04 +0000 https://devmaverick.com/?p=7595 Add multilingual support to your WordPress FSE with Polylang Pro and Loco Translate. Easy guide for translating blocks and templates.

The post Implement Multilingual in a WordPress Gutenberg / FSE appeared first on DevMaverick.



Intro

The purpose of this article is to give you a list of steps you can follow when implementing multilingual support in your custom WordPress FSE (Full Site Editing).

In my scenario, we have a custom WordPress theme and a plugin that contains a lot of custom Gutenberg blocks we’ve created. We want to translate the pages, the custom blocks, template parts, patterns, and everything else that comes to mind.

My approach is using Polylang. I assume something similar can be achieved with WPML.

You will need Polylang Pro in order to translate template parts. This is very important, as this is one of the key parts in WordPress FSE and the thing that I struggled with.

Prepare your Gutenberg Blocks

Before moving on, make sure you add language support to your Gutenberg blocks.

I’m using the @wordpress/scripts and the scaffolding has: edit.js and render.php.

In the plugin that we’re using for all the custom blocks, go through the render.php file and wrap your static strings in __() with a dedicated text domain. This will help retrieve the translation we’re going to generate with Loco Translate.

__('String that needs translation','my-custom-plugin')

Make sure you go through all the blocks and wrap the static strings in the translation function.

You can also translate the strings that are for the back-end, the ones in edit.js, but only if you think it makes sense. Your editors might not need those bits translated into their language. More importantly, it’s the front end that needs to be translated.

<div className="dm-related-product-wrap text-center">
	<p>
		{__(
			"No related products found for the current product.",
			"my-custom-plugin",
		)}
	</p>
</div>

Polylang

As mentioned in the introduction, you will need Polylang Pro to be able to translate template parts in WordPress FSE.

Some other extra options that you want from Polylang Pro are:

  • Share slugs: lets you share the same URL slug across languages for posts and terms.
  • Translate slugs: lets you translate custom post type and taxonomy slugs in URLs.

We're going to use Polylang to manage the content in each language, detect which language your visitor is browsing in, and keep delivering content in that language.

Loco Translate

Since we have custom Gutenberg blocks in our implementation, we need to translate the static strings that are present in those blocks.

A couple of steps ago, we added translation support for all those static strings. Now we need to use Loco Translate to generate the .po files that will hold the string translations.

Go to: Dashboard > Loco Translate > Home

Find the text domain that you used in the second step and click on the bundle name.

Click on New language > select the language you want (ideally the same one you defined in Polylang) > under Choose location, leave it on Custom.

Loco Translate add new language

Once you do that, the Spanish version in this case will appear, and you can go in and edit the strings that were identified.

If you didn’t map out all the strings you can sync again and it will grab any new strings that have that text domain.

Once you've translated the strings, the .po files are generated and added to the plugin folder, so you can also commit them to your repository via git.

Loco translate string translation example

Convert patterns into template parts

UPDATE:
Seems like patterns are translatable now using Polylang Pro.

With Polylang Pro, you can translate template parts from the Site Editor in different languages, but you can’t do the same with patterns.

What I ended up doing was to convert the patterns that I use into template parts. This way, I could translate them into Spanish.

We had a CTA section that was appearing in multiple places on the site, and we needed to translate it into Spanish. Converting it to a template part was the easiest solution.

Switching to Spanish

Now that template part has the option to switch languages and edit the content in Spanish. No matter where I use it, I know it will be rendered in the correct language.

Translate content

Translating the content is pretty straightforward.

You can create the translation version of an existing post/page/custom post type and add the content in there.

I recommend using the bulk selector, as it gives you the option to generate the Spanish (or whatever language you use) version by duplicating the English content (original language).

This made things easier for me, as some of the pages were very heavy with multiple different Gutenberg blocks that I didn’t want to copy-paste or recreate from scratch. Now all you need to do is change the English text into the one you want, and that’s it.

Polylang bulk translate duplicate content

I recommend translating the slugs and the custom taxonomies as well. This is also easily done with Polylang Pro, and it will be good for SEO.

Troubleshooting

Here are a few things to be aware of:

  • Do not restore a template part because it will delete the translation.
    If you made changes to the template via your .html files in the theme, do not restore them on the site as it will delete the translation.
  • Be sure to generate the .po files when you install Loco Translate in case you don’t have them.
  • If you don’t see your strings in Loco Translate make sure you added the text domain correctly and wrapped your static strings in the correct function mentioned in Prepare your Gutenberg blocks.
    Hit the “Sync” button inside the Loco Translate text domain language.

Conclusion

Getting multilingual support up and running in a custom WordPress FSE setup might feel a bit overwhelming at first, but it’s totally doable.

Using Polylang Pro to handle the structure and Loco Translate to take care of the static strings means you can translate everything from template parts to custom Gutenberg blocks and make sure your site looks great in every language.

Optimizing WordPress Performance with WP Rocket, Imagify & Bunny CDN https://devmaverick.com/optimizing-wordpress-performance-with-wp-rocket-imagify-bunny-cdn/ https://devmaverick.com/optimizing-wordpress-performance-with-wp-rocket-imagify-bunny-cdn/#respond Sun, 13 Jul 2025 17:16:34 +0000 https://devmaverick.com/?p=7568 Streamlined WordPress optimization setup using WP Rocket, Imagify & Bunny CDN for faster performance, easy scaling, and EU-based premium tools.

The post Optimizing WordPress Performance with WP Rocket, Imagify & Bunny CDN appeared first on DevMaverick.



Intro

Disclaimer: all these services are paid (they might have some free versions). 

In the past, I’ve used all sorts of combinations when it came to WordPress optimization plugins:

  • WP Fastest Cache with Autoptimize and reSmush.it
  • W3 Total Cache with reSmush.it
  • LiteSpeed Cache
  • WP Rocket and reSmush.it

And while it was fun to tweak and adjust them based on my needs, it got tiring at some point.

When you have multiple clients who need ongoing maintenance, you want a setup that shares common elements across websites. It makes it easier to troubleshoot issues, apply updates, maintain consistency, and quickly implement it for new clients.

Some of the solutions mentioned above still have their place. For example, W3 Total Cache works great with an ElastiCache server, allowing the same cache to be shared across multiple servers hosting the same WordPress instance. LiteSpeed Cache is ideal if you're running a LiteSpeed server. However, these setups are niche-specific and limited in scope.

WP Rocket Cache

WP Rocket doesn’t need any introduction.

It's a well-known WordPress plugin that has been tested and used for many years in the WordPress community. It has a great track record when it comes to reviews and support.

What I appreciate most is its simplicity: it makes it easy to enable performance features without diving deep into a complex list of options. Advanced configurations are handled behind the scenes.

I purchased the 50-site license, which fits our current needs. One helpful detail is that subdomains under a domain are covered by the same license, so you won’t need an extra license for your staging or development environments.

Imagify

Imagify comes from the same family of products as WP Rocket, which made the decision even easier.

I opted for the Infinite plan, so I don’t need to worry about file limits or media size.

I can slap it on any of the WordPress sites that I’m working on, run the whole media library through Imagify, and be done with it.

You can also use the web interface to optimize images not hosted on your WordPress site, or leverage their API to integrate with other projects.

No complaints so far, it follows the same philosophy as WP Rocket: install, activate, and it works. You can tweak the settings, but 90% of the time you won’t have to. I’ve left mine with the default configuration.

Bunny CDN

While working on my personal website earlier this year, I discovered Bunny CDN.

I wanted a CDN and storage solution that wasn’t based in the U.S., something outside the realm of the big cloud providers. This was around the time Trump-era tariffs sparked conversations about EU-based alternatives.

Bunny offers more than just CDN services—they also provide storage, edge scripts, DNS, performance optimization, and security tools. For this setup, we’re only focusing on the CDN.

Bunny CDN is a paid service.

Pricing depends on which regions serve your content. It starts at $0.01/GB for Europe and North America, and goes up to $0.06/GB in the Middle East and Africa. You can control which regions are enabled to manage costs.

For reference, in June 2025 we transferred 143 GB across 4.4 million requests (3.9 million cached) with all regions enabled. The total cost? $2.30 USD. I’m happy with that.

Does Bunny CDN work with Cloudflare?

Yes, Bunny CDN works fine with Cloudflare.

If you’re already using Cloudflare for DNS management and security, you can still use Bunny CDN to serve static assets.

You’ll need to create a Pull Zone in Bunny and optionally configure a CNAME in Cloudflare. This isn’t mandatory but is useful for branding and integration.
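As an example, the optional record in Cloudflare might look like this (hostnames are placeholders; Bunny generates a *.b-cdn.net hostname for each Pull Zone, and the record is typically set to DNS-only so Cloudflare doesn't proxy the CDN traffic):

```
; Branded CDN hostname pointing at the Bunny Pull Zone
cdn.example.com.    CNAME    example-zone.b-cdn.net.
```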

My recommendation is to continue using Cloudflare for DNS, DDoS protection, and security features, while using Bunny for optimized content delivery.

How do they all work together?

WP Rocket optimization settings

After activating WP Rocket, there are a few settings you need to configure.

WP Rocket > File Optimization

  • Minify CSS Files – YES
  • Minify JavaScript Files – YES
  • Load JavaScript deferred – YES
  • Excluded JavaScript Files – add in here what you want to exclude. I recommend jquery and jquery.min.js
  • Delay JavaScript execution – YES
  • Delay JavaScript execution, Exclude JavaScript Files – add in here any scripts you want to exclude.

WP Rocket - File optimization settings

WP Rocket > Media

  • LazyLoad Enable for images – YES
  • LazyLoad Enable for iframes and videos – YES
  • Image Dimensions Add missing image dimensions – YES
  • Fonts Preload fonts – YES
  • Fonts Self-host Google fonts – YES

WP Rocket - Media optimization settings

WP Rocket > CDN

  • CDN Enable Content Delivery Network – YES
  • Add the CDN CNAME from Bunny CDN

WP Rocket - CDN optimization settings

Imagify settings

Leave the defaults on. Only change them if you really need something different.

  • Auto-Optimize images on upload – YES
  • Backup original images – YES
  • Lossless compression – NO (unless working with text-heavy or professional photo images)
  • Next-gen image format – WEBP
  • Resize larger images – YES

Bunny CDN settings

Again, simple setup.

Go and create a Pull Zone with the following settings:

  • Name: keep it simple, match your domain;
  • Origin URL: your site URL
  • Tier: Standard
  • Pricing Zones: Select what you need, I recommend all.

Bunny CDN create Pull Zone

A Pull Zone will be generated; go into WP Rocket > CDN and add the zone in the CDN CNAME field.

Each Pull Zone has additional configuration options, but we'll keep it basic here; Bunny does a great job handling sensible defaults.

Conclusion

Without realizing it, I’ve built a WordPress optimization stack entirely based in the EU: WP Rocket and Imagify from France, Bunny from Slovenia.

This setup offers everything we need:

  • Ease of use
  • Standardization across multiple clients
  • Premium, reliable services
  • Flexibility with advanced configuration options

Use case for DIRECTORY_SEPARATOR constant in PHPUnit tests https://devmaverick.com/use-case-for-directory_separator-constant-in-phpunit-tests/ https://devmaverick.com/use-case-for-directory_separator-constant-in-phpunit-tests/#respond Thu, 05 Jun 2025 16:36:53 +0000 https://devmaverick.com/?p=7574 Using DIRECTORY_SEPARATOR constant in your PHPUnit tests might save you the trouble if your team is using different OS on their devices.

The post Use case for DIRECTORY_SEPARATOR constant in PHPUnit tests appeared first on DevMaverick.

It's been a month since I started using Linux Mint daily for work. I want to get more familiar with it as a consumer OS, not just interact with it through a console on our VPS instances.

Last week we ran into an issue with one of our repositories.

The problem was caused by the PHPUnit tests that were written on Windows. In some of those specific tests, we needed to check the existence of a file in a specific place, or look for a certain line inside that file.

The path used to get to that file was written for Windows.

When I tried to run the tests, they failed badly and I got F after F.

The solution was pretty simple, we ended up updating all those paths using the DIRECTORY_SEPARATOR constant that PHP offers. This makes the path OS-agnostic and fixes all those issues.

The initial code that looked like this:

$seederFilesFolderPath = $this->seederFilesFolderPath('\images\default-covers');

Turned into this:

$seederFilesFolderPath = $this->seederFilesFolderPath(DIRECTORY_SEPARATOR . 'images' . DIRECTORY_SEPARATOR . 'default-covers');
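The same path can also be built with implode(), which reads a bit cleaner when there are several segments (a sketch; seederFilesFolderPath() is this project's own helper):

```php
// The leading empty string produces the leading separator,
// matching the original '\images\default-covers' path on any OS.
$relativePath = implode(DIRECTORY_SEPARATOR, ['', 'images', 'default-covers']);
$seederFilesFolderPath = $this->seederFilesFolderPath($relativePath);
```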

This issue probably would’ve been caught earlier if we’d run the tests in the pipeline, but we run the tests on our machines before deploying versions. And yes, those were Windows, until now.

How to set the correct number of PHP-FPM child processes for your application https://devmaverick.com/how-to-set-the-correct-number-of-php-fpm-child-processes-for-your-application/ https://devmaverick.com/how-to-set-the-correct-number-of-php-fpm-child-processes-for-your-application/#respond Wed, 25 Sep 2024 15:55:24 +0000 https://devmaverick.com/?p=7453 A quick guide that will help you fine tune your PHP-FPM performance, how to determine the correct number of child processes based on your application.

The post How to set the correct number of PHP-FPM child processes for your application appeared first on DevMaverick.



Issue

We ran into a memory issue with one of our AWS Lightsail instances: memory usage kept climbing to 80%-90% and stayed there. It only went down again when we restarted Apache and PHP.

Since it was an 8 GB instance and the application running on it had no business using that much memory, I started to investigate the issue.

I quickly got to the PHP-FPM settings file where all the pm options were generated by Lightsail when the instance was deployed. Those values didn’t mean much to me, but compared to other servers they seemed pretty high.

The instance had pm set to dynamic, with the number of child processes set too high relative to the available memory and the memory used by each PHP-FPM process, which resulted in random crashes after a few months.

PHP-FPM settings

pm (process manager) has the following options:

  • static
  • dynamic
  • ondemand

Static

This will always keep the same number of child processes alive/available to handle requests.

The only setting that you’re going to configure is the pm.max_children. That will spawn the number of child processes you indicated and always keep them alive.

pm=static
pm.max_children=12

Dynamic

  • pm.max_children – the maximum number of child processes allowed.
  • pm.start_servers – the number of child processes when PHP-FPM starts.
  • pm.min_spare_servers – the minimum number of idle child processes kept around. If the number of idle child processes drops below this value, more will be created.
  • pm.max_spare_servers – the maximum number of idle child processes allowed. If the number of idle child processes rises above this value, some will be killed.
  • pm.max_requests – the number of requests a child process will execute before being killed and spawned again.
  • pm.process_idle_timeout – the idle time (in seconds) after which a child process is killed.

The pm.max_requests and pm.process_idle_timeout exclude each other in most cases, so pick one of them.

If you go with pm.max_requests, use values like 500 or 1000, not something ridiculously high. You want processes to be terminated and respawned regularly, to prevent memory leaks for example.

pm=dynamic
pm.max_children=12
pm.start_servers=2
pm.min_spare_servers=1
pm.max_spare_servers=3
pm.max_requests=1000

ondemand

No processes are spawned by default; they are created on demand, up to the maximum you set.

They are killed off again when there is no load, and you might end up with no processes at all. This can affect users who happen to visit at a moment when no process exists, since one has to be spawned first (it depends on the complexity of the request).

pm=ondemand
pm.max_children=12
pm.max_requests=1000

Determine the PHP-FPM memory usage

In order to determine the correct number of child processes, you will need to estimate how much memory your PHP-FPM processes use.

Luckily there are some simple ways of doing that.

This lists your PHP-FPM processes with some details; the RSS column shows the amount of memory used by each process:

ps -ylC php-fpm --sort=rss

This one will output an average already calculated from the processes mentioned above

ps --no-headers -o "rss,cmd" -C php-fpm | awk '{ sum+=$1 } END { printf ("%d%s\n", sum/NR/1024,"Mb") }'

Let’s say the value is 100 MB to make it easy for us to do the math.

Calculate the correct number of child processes

Now that you know a process on average will use 100 MB of memory you can calculate the max number of child processes you can use.

If you run a 4 GB instance, the OS and all the other programs might take between 700 MB and 1.2 GB, depending on your setup. That will leave you with around 2.8 GB of memory for your application.

2800 MB / 100 MB = 28 child processes

That would be the maximum amount: pm.max_children.

Given that this figure is an average, I would go with a max of 25, just to keep some spare memory.
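The arithmetic above can be sketched as a tiny helper (hypothetical function; plug in your own measurements):

```javascript
// max_children = (total memory - OS/other overhead) / average memory per PHP-FPM child
function maxChildren(totalMemMb, overheadMb, avgChildMb) {
	return Math.floor((totalMemMb - overheadMb) / avgChildMb);
}

// 4 GB instance, ~1.2 GB overhead, ~100 MB per child process
console.log(maxChildren(4096, 1200, 100)); // 28
```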

Based on that value, you can now configure your PHP-FPM settings.

static

pm=static
pm.max_children=25

dynamic

pm=dynamic
pm.max_children=25
pm.start_servers=8
pm.min_spare_servers=2
pm.max_spare_servers=4
pm.max_requests=1000

ondemand

pm=ondemand
pm.max_children=25
pm.max_requests=1000

What option should I choose?

If you're running multiple sites/apps on a VPS where all the resources are shared, I would suggest going with ondemand or dynamic. This way you don't destroy the experience for the other apps running.

Make sure to experiment with the dynamic settings to reach the right balance. For example, pm.start_servers can have a big impact on how much memory is constantly in use.
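For reference, when pm.start_servers is left out, PHP-FPM derives a default from the spare-server settings as min_spare + (max_spare − min_spare) / 2, using integer division. A quick check of that formula with example values (not tied to any particular pool config):

```shell
min_spare=4
max_spare=12
# Default pm.start_servers PHP-FPM would pick for these spare-server values
echo $(( min_spare + (max_spare - min_spare) / 2 ))
```

For these values the default works out to 8.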

If you’re running a VPS dedicated to a single site/app, then you can go with static. That will give you the best performance by keeping the maximum number of child processes always available, capable of handling even a spike in traffic.

Resources

The post How to set the correct number of PHP-FPM child processes for your application appeared first on DevMaverick.

]]>
https://devmaverick.com/how-to-set-the-correct-number-of-php-fpm-child-processes-for-your-application/feed/ 0
Monitor multiple Lightsail instances at once using Prometheus and Grafana https://devmaverick.com/monitor-multiple-lightsail-instances-at-once-using-prometheus-and-grafana/ https://devmaverick.com/monitor-multiple-lightsail-instances-at-once-using-prometheus-and-grafana/#comments Fri, 31 May 2024 09:39:02 +0000 https://devmaverick.com/?p=7394 This step by step guide will help you set-up a Grafana dashboard feeding from multiple Lightsail instances using a single Prometheus data source.

The post Monitor multiple Lightsail instances at once using Prometheus and Grafana appeared first on DevMaverick.

]]>


Intro

In this article, you’re going to learn how to monitor multiple Lightsail instances at once in a single Grafana dashboard.

You can check the article regarding a single instance, Monitor a Lightsail instance using Prometheus and Grafana if you want to get familiar with the process. Most of the steps will be explained here again.

By default, you can monitor the Lightsail instances from the AWS account, but this requires you to log into AWS and open the Lightsail services section to check each instance individually. That is kind of annoying, and it’s hard to get a good overview.

That’s where the Grafana dashboards come into play.

The flow is like this:

  • We’ll install Node Exporter on all the Lightsail instances that you want to monitor
  • Install Prometheus on a separate machine and set those Lightsail instances as targets
  • Grafana will read the Prometheus info as the data source and put it into a Dashboard.

In this setup, we have multiple Lightsail instances monitored in a single Grafana dashboard.
This would be a typical setup if you have your application running behind a load balancer.

As showcased above, the Node Exporter will run on each of the Lightsail instances we want to monitor, while Prometheus and Grafana will run on a different machine.

For a single Lightsail instance, check the Monitor a Lightsail instance using Prometheus and Grafana.

Requirements

  • Multiple Lightsail instances that use Ubuntu 20.04
  • Static IP attached to each instance
  • Port 9100 opened on the instances
    Lightsail >  Instance > Networking
    9100 is used by Node Exporter.
  • A different instance/server/machine that will host Prometheus and Grafana
  • Port 9090 opened on this separate instance
    9090 is used by Prometheus

Install Node Exporter

1. Connect to your Lightsail instance using SSH

Lightsail connect via SSH

 

2. Create an account named exporter for Node Exporter

sudo useradd --no-create-home --shell /bin/false exporter

 

3. Get the download link for the node_exporter binary package

Select the operating system to be linux and architecture amd64.

Right-click the latest node_exporter version, choose “Copy link”, and save the link in a notepad for later.

 

4. Connect via SSH to your Lightsail instance again

5. Go to your home directory

 

6. Download the node_exporter binary package to your instance

curl -LO node_exporter_url_address_you_saved_in_notepad

Example:

curl -LO https://github.com/prometheus/node_exporter/releases/download/v1.8.1/node_exporter-1.8.1.linux-amd64.tar.gz

 

7. Extract the contents of the downloaded package for Node Exporter
Make sure the archive name matches the one in the link you copied in step 3.

tar -xvf node_exporter-1.8.1.linux-amd64.tar.gz

 

8. Copy the node_exporter file from ./node_exporter* to /usr/local/bin

sudo cp -p ./node_exporter-1.8.1.linux-amd64/node_exporter /usr/local/bin

 

9. Change ownership of the file to the exporter user

sudo chown exporter:exporter /usr/local/bin/node_exporter

Configure Node Exporter

1. Connect via SSH to your Lightsail instance again

2. Create a file that will hold the basic_auth for Node Exporter

First, create a folder for the file in /etc/

sudo mkdir /etc/node_exporter

Quickly edit with Vim:

  • Press i to enter insert mode and make changes to the file
  • Press ESC to exit insert mode
  • Press : to enter command mode, then type wq! to save the file and quit

Here are some more essential commands for Vim.

Create and edit the file

sudo vim /etc/node_exporter/web.yml

Add the following text to the file and save it.

basic_auth_users:
  node_exporter_uberadmin: $2a$12$GxinAvcHwkGQclBiJmz6ce5is67Bxj7mvnKyY4L0DxJooqFgA7hvu

Note: the password needs to be generated using a bcrypt hash of the string you want to use.
Use an online bcrypt tool for that if it’s easier.

The example above is the hash for password123.

 

3. Save the file

Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

 

4. Create a systemd service for node_exporter in Vim

sudo vim /etc/systemd/system/node_exporter.service

 

5. Press the i key to enter insert mode in Vim

6. Edit the file as follows to collect all the available data from the server

[Unit]
Description=NodeExporter
Wants=network-online.target
After=network-online.target


[Service]
User=exporter
Group=exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter \
--web.config.file="/etc/node_exporter/web.yml"

[Install]
WantedBy=multi-user.target

 

7. Save the file

Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

 

8. Reload systemd process

sudo systemctl daemon-reload

 

9. Start the node_exporter

sudo systemctl start node_exporter

 

10. Check node_exporter status

sudo systemctl status node_exporter

 

11. Press Q to exit the status command

12. Enable Node Exporter on instance boot

sudo systemctl enable node_exporter

 

13. Repeat the Install Node Exporter and Configure Node Exporter steps on the other available instances that you want to monitor

Or you can also snapshot this current instance and deploy it multiple times.

Install Prometheus

This part will happen on a different machine/instance.

Make sure to open port 9090 to make Prometheus accessible from outside, in this case for Grafana to use it.

1. Connect to your Lightsail instance using SSH
Lightsail connect via SSH

 

2. Create an account named prometheus for Prometheus

sudo useradd --no-create-home --shell /bin/false prometheus

 

3. Create local system directories

sudo mkdir /etc/prometheus /var/lib/prometheus

sudo chown prometheus:prometheus /etc/prometheus

sudo chown prometheus:prometheus /var/lib/prometheus

 

4. Get the download link for the Prometheus binary package

Select the operating system to be linux and architecture amd64.

Then right-click the latest prometheus version, choose “Copy link”, and save it in a notepad for later.

Prometheus grab binary package URL

 

5. Connect via SSH to your Lightsail instance again

6. Go to your home directory

 

7. Download the prometheus binary package to your instance

curl -LO prometheus_url_address_you_saved_in_notepad

Example:

curl -LO https://github.com/prometheus/prometheus/releases/download/v2.52.0/prometheus-2.52.0.linux-amd64.tar.gz

 

8. Extract the contents of the downloaded package for Prometheus
Make sure the archive name matches the one in the link you copied in step 4.

tar -xvf prometheus-2.52.0.linux-amd64.tar.gz

 

9. Copy the extracted prometheus and promtool to the /usr/local/bin directory

sudo cp -p ./prometheus-2.52.0.linux-amd64/prometheus /usr/local/bin

sudo cp -p ./prometheus-2.52.0.linux-amd64/promtool /usr/local/bin

 

10. Change ownership of the prometheus and promtool binaries to the prometheus user we created at the start

sudo chown prometheus:prometheus /usr/local/bin/prom*

 

11. Copy the consoles and console_libraries directories to /etc/prometheus. Use -r to recursively copy all directories within the hierarchy

sudo cp -r ./prometheus-2.52.0.linux-amd64/consoles /etc/prometheus

sudo cp -r ./prometheus-2.52.0.linux-amd64/console_libraries /etc/prometheus

 

12. Change ownership of the copied folders to prometheus user

sudo chown -R prometheus:prometheus /etc/prometheus/consoles

sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries

 

13. Copy the configuration file prometheus.yml to /etc/prometheus and change ownership to prometheus user

sudo cp -p ./prometheus-2.52.0.linux-amd64/prometheus.yml /etc/prometheus

sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml

Configure Prometheus

1. Connect via SSH to your Lightsail instance again

2. Create a backup copy to your prometheus.yml config file

sudo cp /etc/prometheus/prometheus.yml /etc/prometheus/prometheus.yml.backup

 

3. Open the prometheus.yml config file in Vim

sudo vim /etc/prometheus/prometheus.yml

 

4. Edit the prometheus.yml 

  • scrape_interval – this defines how often Prometheus will scrape/collect data from the available targets
  • job_name – This is used to identify the exporters available for Prometheus
  • targets – These will point to the actual machines from where you’re going to scrape data. They are written in the format IP:PORT.

We’re going to edit the default prometheus.yml file by:

  • Setting the correct static IP on the prometheus job_name
  • Adding basic_auth for prometheus
  • Creating a job_name for node_exporter that we’re going to call node_lightsail_targets
  • Grouping all the Lightsail instances under the node_lightsail_targets job
  • Adding basic_auth for node

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
       - targets:
         # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["your_prometheus_static_ip:9090"]

    basic_auth:
      username: prometheus_uberadmin
      password: password123

  - job_name: "node_lightsail_targets"

    static_configs:
      - targets: ["your_lightsail_static_IP:9100","your_lightsail_static_IP:9100","your_lightsail_static_IP:9100"]

    basic_auth:
      username: node_exporter_uberadmin
      password: password123
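If you’d rather see friendly names instead of raw IP:PORT pairs in Prometheus and Grafana, static_configs entries can carry labels. This is an optional variation of the job above; the instance_name values are placeholders you would replace with your own:

```yaml
  - job_name: "node_lightsail_targets"

    static_configs:
      - targets: ["your_lightsail_static_IP:9100"]
        labels:
          instance_name: "web-1"
      - targets: ["your_lightsail_static_IP:9100"]
        labels:
          instance_name: "web-2"

    basic_auth:
      username: node_exporter_uberadmin
      password: password123
```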

 

5. Save the file

Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

 

6. Create a file that will hold the basic_auth for Prometheus

sudo vim /etc/prometheus/web.yml

Add the following text to the file and save it.

basic_auth_users:
  prometheus_uberadmin: $2a$12$GxinAvcHwkGQclBiJmz6ce5is67Bxj7mvnKyY4L0DxJooqFgA7hvu

Note: the password needs to be generated using a bcrypt hash of the string you want to use.
Use an online bcrypt tool for that if it’s easier.

The example above is the hash for password123.

 

7. Save the file

Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

 

8. Start Prometheus

sudo -u prometheus /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries

 

9. After the running service is validated, press Ctrl+C to stop it.

10. Open systemd configuration file in Vim

sudo vim /etc/systemd/system/prometheus.service

 

11. Insert the following lines into the file

[Unit]
Description=PromServer
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.config.file /etc/prometheus/web.yml \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

The instructions above are going to be used by Linux to start Prometheus on the server.

 

12. Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

13. Load the new information into the systemd

sudo systemctl daemon-reload

 

14. Restart Prometheus

sudo systemctl start prometheus

 

15. Check Prometheus status

sudo systemctl status prometheus

Prometheus status in console

 

16. Press Q to exit the status

17. Enable Prometheus on instance boot

sudo systemctl enable prometheus

 

18. Check Prometheus in the browser

Go to http://your_prometheus_static_ip:9090

You will be prompted to add a username and password. Use the ones you set in the basic_auth.

Afterwards, you will be redirected to the Prometheus dashboard.

Prometheus dashboard

 

19. Check the targets

Go to Status > Targets

In there, you should now see the node job listed.

 

Configure Grafana

I assume you already have Grafana installed. In this tutorial, we’re skipping the part about installing Grafana.

You can check the Resources at the bottom where we have some links on how to install Grafana.

I recommend hosting Grafana on a different machine, since you will probably use it to monitor other servers in the future.

1. Add Data source
Go to Data sources > Add new data source

Grafana list of Data sources

2. Select a Prometheus data source

Grafana add a Prometheus data source

3. Add URL and credentials

Complete the Prometheus server URL with http://your_prometheus_static_ip:9090

Then, under Authentication, select Basic Authentication and add the credentials we created earlier. In our case, prometheus_uberadmin and password123.

Grafana add Prometheus data source details

Scroll down and click Save & Test.

4. Create a Dashboard

Go to Dashboards > New > New Dashboard

Grafana add dashboard

5. Click on Import a dashboard

You can go to this page and find a dashboard that matches your needs: https://grafana.com/grafana/dashboards

I recommend one of these two to get a good overview of your instance in Grafana:

Paste the ID of the dashboard in the import section and load it.

Grafana import dashboard based on ID

6. Select the Data source

Add a name for the Dashboard and select the Prometheus data source we created in step 3.

Grafana select data source for dashboard

Once you click Import a new dashboard will be generated with the details pulled from Prometheus.

Conclusion

Following these steps will allow you to monitor multiple Lightsail instances in a single Grafana dashboard using Node Exporter and Prometheus.

This is a very useful setup for an application running on multiple Lightsail instances behind a load balancer.

You can check Grafana’s documentation on how you can do more, or tweak the existing dashboard.

Resources

The starting point for this article was the tutorial provided by AWS on how to get started with Prometheus on Lightsail.

The post Monitor multiple Lightsail instances at once using Prometheus and Grafana appeared first on DevMaverick.

]]>
https://devmaverick.com/monitor-multiple-lightsail-instances-at-once-using-prometheus-and-grafana/feed/ 1
Monitor a Lightsail instance using Prometheus and Grafana https://devmaverick.com/monitor-a-lightsail-instance-using-prometheus-and-grafana/ https://devmaverick.com/monitor-a-lightsail-instance-using-prometheus-and-grafana/#comments Mon, 27 May 2024 14:00:57 +0000 https://devmaverick.com/?p=7311 This is a tutorial on how you can monitor your Lightsail instance Prometheus and Grafana. The step by step guide will help you set-up a Grafana dashboard feeding from a Prometheus data source in Lightsail.

The post Monitor a Lightsail instance using Prometheus and Grafana appeared first on DevMaverick.

]]>


Intro

The purpose of this article is to monitor the Lightsail instance using Grafana. By default, you can monitor the Lightsail instance from the AWS account, but this will require you to log into AWS and get on the Lightsail services section.

Lightsail’s monitoring info offered in AWS is not very detailed.

That is the main reason why I needed to use Grafana.

The flow is like this:

  • We’ll install Prometheus with Node Exporter, to read all the data and create an endpoint for those values to be displayed
  • Prometheus will act as our data source
  • Grafana will read the Prometheus info as the data source and put it into a Dashboard.

Diagram on how node exporter, Prometheus and Grafana communicate

We’ll follow the official AWS tutorial to add Prometheus to Lightsail but with some changes.

In this setup, we have 1 instance of Lightsail. If you have a setup with multiple instances of Lightsail using a Load Balancer, check the Monitor multiple Lightsail instances at once using Prometheus and Grafana.

This simple setup allows Node Exporter and Prometheus to run on the same machine.
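Because Prometheus and Node Exporter share the machine in this setup, the node target you will configure later could equally point at localhost instead of the static IP; a sketch of that variation (the basic_auth section from the full config still applies):

```yaml
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]
```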

Requirements

  • A Lightsail instance that uses Ubuntu 20.04
  • A static IP attached to the instance
  • Ports 9090 and 9100 opened on the instance
    Lightsail >  Instance > Networking
    9090 is used by Prometheus, 9100 is used by Node Exporter.

Lightsail open ports for Prometheus and Node Exporter

Install Prometheus with Node Exporter

1. Connect to your Lightsail instance using SSH

Lightsail connect via SSH

 

2. Create an account named exporter for Node Exporter and prometheus for Prometheus

sudo useradd --no-create-home --shell /bin/false exporter

sudo useradd --no-create-home --shell /bin/false prometheus

 

3. Create local system directories

sudo mkdir /etc/prometheus /var/lib/prometheus

sudo chown prometheus:prometheus /etc/prometheus

sudo chown prometheus:prometheus /var/lib/prometheus

 

4. Get the download link for the Prometheus binary package

Select the operating system to be linux and architecture amd64.

Then right-click the latest prometheus version, choose “Copy link”, and save it in a notepad for later.

Scroll down and do the same for node_exporter.

Prometheus grab binary package URL

 

5. Connect via SSH to your Lightsail instance again

6. Go to your home directory

 

7. Download the prometheus binary package to your instance

curl -LO prometheus_url_address_you_saved_in_notepad

Example:

curl -LO https://github.com/prometheus/prometheus/releases/download/v2.52.0/prometheus-2.52.0.linux-amd64.tar.gz

 

8. Download the node_exporter binary package to your instance

curl -LO node_exporter_url_address_you_saved_in_notepad

 

9. Extract the contents of the downloaded packages, one by one, for Prometheus and Node Exporter
Make sure the archive names match the ones in the links you copied in step 4.

tar -xvf prometheus-2.52.0.linux-amd64.tar.gz

tar -xvf node_exporter-1.8.1.linux-amd64.tar.gz

 

10. Copy the extracted prometheus and promtool to the /usr/local/bin directory

sudo cp -p ./prometheus-2.52.0.linux-amd64/prometheus /usr/local/bin

sudo cp -p ./prometheus-2.52.0.linux-amd64/promtool /usr/local/bin

 

11. Change ownership of the prometheus and promtool binaries to the prometheus user we created at the start

sudo chown prometheus:prometheus /usr/local/bin/prom*

 

12. Copy the consoles and console_libraries directories to /etc/prometheus. Use -r to recursively copy all directories within the hierarchy

sudo cp -r ./prometheus-2.52.0.linux-amd64/consoles /etc/prometheus

sudo cp -r ./prometheus-2.52.0.linux-amd64/console_libraries /etc/prometheus

 

13. Change ownership of the copied folders to prometheus user

sudo chown -R prometheus:prometheus /etc/prometheus/consoles

sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries

 

14. Copy the configuration file prometheus.yml to /etc/prometheus and change ownership to prometheus user

sudo cp -p ./prometheus-2.52.0.linux-amd64/prometheus.yml /etc/prometheus

sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml

 

15. Copy the node_exporter file from ./node_exporter* to /usr/local/bin

sudo cp -p ./node_exporter-1.8.1.linux-amd64/node_exporter /usr/local/bin

 

16. Change ownership of the file to the exporter user

sudo chown exporter:exporter /usr/local/bin/node_exporter

 

Configure Prometheus

1. Connect via SSH to your Lightsail instance again

2. Create a backup copy to your prometheus.yml config file

sudo cp /etc/prometheus/prometheus.yml /etc/prometheus/prometheus.yml.backup

 

3. Open the prometheus.yml config file in Vim

sudo vim /etc/prometheus/prometheus.yml

Quickly edit with Vim:

  • Press i to enter insert mode and make changes to the file
  • Press ESC to exit insert mode
  • Press : to enter command mode, then type wq! to save the file and quit

Here are some more essential commands for Vim.

 

4. Edit the prometheus.yml 

  • scrape_interval – this defines how often Prometheus will scrape/collect data from the available targets
  • job_name – This is used to identify the exporters available for Prometheus
  • targets – These will point to the actual machines from where you’re going to scrape data. They are written in the format IP:PORT.

We’re going to edit the default prometheus.yml file by:

  • Setting the correct static IP on the prometheus job_name
  • Adding basic_auth for prometheus
  • Creating a job_name for node_exporter that we’re going to call node
  • Adding basic_auth for node

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
       - targets:
         # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["your_static_IP:9090"]

    basic_auth:
      username: prometheus_uberadmin
      password: password123

  - job_name: "node"

    static_configs:
      - targets: ["your_static_IP:9100"]

    basic_auth:
      username: node_exporter_uberadmin
      password: password123

 

5. Save the file

Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

 

6. Create a file that will hold the basic_auth for Prometheus

sudo vim /etc/prometheus/web.yml

Add the following text to the file and save it.

basic_auth_users:
  prometheus_uberadmin: $2a$12$GxinAvcHwkGQclBiJmz6ce5is67Bxj7mvnKyY4L0DxJooqFgA7hvu

Note: the password needs to be generated using a bcrypt hash of the string you want to use.
Use an online bcrypt tool for that if it’s easier.

The example above is the hash for password123.

 

7. Save the file

Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

 

8. Start Prometheus

sudo -u prometheus /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries

 

9. After the running service is validated, press Ctrl+C to stop it.

10. Open systemd configuration file in Vim

sudo vim /etc/systemd/system/prometheus.service

 

11. Insert the following lines into the file

[Unit]
Description=PromServer
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.config.file /etc/prometheus/web.yml \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

The instructions above are going to be used by Linux to start Prometheus on the server.

 

12. Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

13. Load the new information into the systemd

sudo systemctl daemon-reload

 

14. Restart Prometheus

sudo systemctl start prometheus

 

15. Check Prometheus status

sudo systemctl status prometheus

Prometheus status in console

 

16. Press Q to exit the status

17. Enable Prometheus on instance boot

sudo systemctl enable prometheus

 

18. Check Prometheus in the browser

Go to http://your_static_ip:9090

You will be prompted to add a user and password. Use the ones you set in the basic_auth.

Afterwards, you will be redirected to the Prometheus dashboard.

Prometheus dashboard

 

19. Check the targets

Go to Status > Targets

In there, you should now see the node job listed.

Configure Node Exporter

1. Connect via SSH to your Lightsail instance again

2. Create a file that will hold the basic_auth for Node Exporter

First, create a folder for the file in /etc/

sudo mkdir /etc/node_exporter

Create and edit the file

sudo vim /etc/node_exporter/web.yml

Add the following text to the file and save it.

basic_auth_users:
  node_exporter_uberadmin: $2a$12$GxinAvcHwkGQclBiJmz6ce5is67Bxj7mvnKyY4L0DxJooqFgA7hvu

Note: the password needs to be generated using a bcrypt hash of the string you want to use.
Use an online bcrypt tool for that if it’s easier.

The example above is the hash for password123.

 

3. Save the file

Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

 

4. Create a systemd service for node_exporter in Vim

sudo vim /etc/systemd/system/node_exporter.service

 

5. Press the i key to enter insert mode in Vim

6. Edit the file as follows to collect all the available data from the server

[Unit]
Description=NodeExporter
Wants=network-online.target
After=network-online.target


[Service]
User=exporter
Group=exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter \
--web.config.file="/etc/node_exporter/web.yml"

[Install]
WantedBy=multi-user.target

 

7. Save the file

Press the Esc key to exit insert mode, and type :wq! to save your changes and quit Vim.

 

8. Reload systemd process

sudo systemctl daemon-reload

 

9. Start the node_exporter

sudo systemctl start node_exporter

 

10. Check node_exporter status

sudo systemctl status node_exporter

 

11. Press Q to exit the status command

12. Enable Node Exporter on instance boot

sudo systemctl enable node_exporter

Configure Grafana

I assume you already have Grafana installed. In this tutorial, we’re skipping the part about installing Grafana.

You can check the Resources at the bottom where we have some links on how to install Grafana.

I recommend hosting Grafana on a different machine, since you will probably use it to monitor other servers in the future.

1. Add Data source
Go to Data sources > Add new data source

Grafana list of Data sources

2. Select a Prometheus data source

Grafana add a Prometheus data source

3. Add URL and credentials

Complete the Prometheus server URL with http://your_static_ip:9090

Then, under Authentication, select Basic Authentication and add the credentials we created earlier. In our case, prometheus_uberadmin and password123.

Grafana add Prometheus data source details

Scroll down and click Save & Test.

4. Create a Dashboard

Go to Dashboards > New > New Dashboard

Grafana add dashboard

5. Click on Import a dashboard

You can go to this page and find a dashboard that matches your needs: https://grafana.com/grafana/dashboards

I recommend one of these two to get a good overview of your instance in Grafana:

Paste the ID of the dashboard in the import section and load it.

Grafana import dashboard based on ID

6. Select the Data source

Add a name for the Dashboard and select the Prometheus data source we created in step 3.

Grafana select data source for dashboard

Once you click Import a new dashboard will be generated with the details pulled from Prometheus.

Grafana Dashboard

Conclusion

Following these steps will allow you to install Prometheus + Node Exporter on a Lightsail instance from AWS and feed that data into a Grafana dashboard.

You can check Grafana’s documentation on how you can do more, or tweak the existing dashboard.

As mentioned before, this is an example with a single Lightsail instance.

In a future article, I will explain how you can do it on multiple Lightsail instances and feed that information on a single Grafana dashboard. That would be very useful if you have a setup with a Load balancer in Lightsail.

Resources

The starting point for this article was the tutorial provided by AWS on how to get started with Prometheus on Lightsail.

The post Monitor a Lightsail instance using Prometheus and Grafana appeared first on DevMaverick.

]]>
https://devmaverick.com/monitor-a-lightsail-instance-using-prometheus-and-grafana/feed/ 1