{"version":"https://jsonfeed.org/version/1.1","title":"Max Dietrich - Technical Product Owner GIS","description":"Technical Product Owner GIS at Bayernwerk (E.ON). I ride my mountain bike in the alps, code and design my website and publish new content whenever I can.","home_page_url":"https://mxd.codes","feed_url":"https://mxd.codes/feed.json","language":"en","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}],"items":[{"id":"https://mxd.codes/articles/self-hosted-analytics-with-next-js-and-postgresql","url":"https://mxd.codes/articles/self-hosted-analytics-with-next-js-and-postgresql","title":"Self-hosted Analytics with Next.js and PostgreSQL","summary":"How I built a privacy-first analytics system for mxd.codes using Next.js API routes, PostgreSQL and MaxMind GeoLite2 -without any third-party tracking service.","content_html":"
For a long time I used Plausible Analytics to track pageviews on this site. It is a great product, but I was already running a PostgreSQL instance for comments, webmentions and location data. Adding another service just for analytics felt unnecessary. So I built my own.
\nThe result is a simple, self-hosted analytics system built entirely on Next.js API routes and PostgreSQL. No cookies, no third-party scripts, no vendor lock-in. All collected data is displayed publicly on /about-this-site.
\nEverything is stored in a single pageviews table:
CREATE TABLE IF NOT EXISTS pageviews (\n id SERIAL PRIMARY KEY,\n path TEXT NOT NULL,\n referrer TEXT,\n visitor_hash TEXT NOT NULL,\n country TEXT,\n city TEXT,\n latitude REAL,\n longitude REAL,\n user_agent TEXT,\n device_type TEXT,\n browser TEXT,\n os TEXT,\n language TEXT,\n screen_width INT,\n created_at TIMESTAMPTZ DEFAULT NOW()\n);\n\nEach row represents a single pageview. There are no sessions, no persistent user IDs and no cookies. Unique visitors are identified by a daily rotating hash described below.
\nTo keep queries fast I added partial indexes on the columns used most often:
\nCREATE INDEX ON pageviews (created_at);\nCREATE INDEX ON pageviews (path);\nCREATE INDEX ON pageviews (visitor_hash, created_at);\nCREATE INDEX ON pageviews (latitude, longitude) WHERE latitude IS NOT NULL AND longitude IS NOT NULL;\n\nThe tracking component is a small React component included in the root layout:
\n\"use client\";\n\nimport { usePathname } from \"next/navigation\";\nimport { useEffect, useRef } from \"react\";\n\nexport default function PageviewTracker() {\n const pathname = usePathname();\n const lastTrackedPath = useRef<string | null>(null);\n\n useEffect(() => {\n if (pathname === lastTrackedPath.current) return;\n if (localStorage.getItem(\"notrack\") === \"1\") return;\n lastTrackedPath.current = pathname;\n\n const payload = JSON.stringify({\n path: pathname,\n referrer: document.referrer || null,\n screenWidth: window.screen?.width || null,\n });\n\n if (navigator.sendBeacon) {\n const blob = new Blob([payload], { type: \"application/json\" });\n navigator.sendBeacon(\"/api/pageview\", blob);\n } else {\n fetch(\"/api/pageview\", {\n method: \"POST\",\n headers: { \"Content-Type\": \"application/json\" },\n body: payload,\n keepalive: true,\n }).catch(() => {});\n }\n }, [pathname]);\n\n return null;\n}\n\nIt uses usePathname to detect route changes in the Next.js App Router and fires on every navigation. navigator.sendBeacon is preferred over fetch because it is non-blocking and survives page unloads reliably.
The API route at /api/pageview handles each incoming event. It does several things before writing to the database:
Rate limiting rejects more than 30 requests per minute from the same IP. The limiter is a plain Map in memory. Each IP gets a counter and a reset timestamp. No Redis, no external dependency:
const rateLimitMap = new Map<string, { count: number; resetTime: number }>();\n\nconst now = Date.now();\nconst entry = rateLimitMap.get(ip);\n\nif (!entry || now > entry.resetTime) {\n rateLimitMap.set(ip, { count: 1, resetTime: now + interval });\n return { success: true, remaining: limit - 1 };\n}\nif (entry.count >= limit) {\n return { success: false, remaining: 0 };\n}\nentry.count++;\nreturn { success: true, remaining: limit - entry.count };\n\nThe client IP is read from x-real-ip first (set by nginx), falling back to the last entry in x-forwarded-for. Using the last entry rather than the first prevents clients from spoofing the header by prepending a fake IP.
Bot filtering tests the User-Agent against a regex of known crawler patterns before doing any database work.
\nSame-origin validation rejects requests that did not originate from the site. It parses the Origin or Referer header with new URL() and compares the host against the request Host header. A simple substring check would allow evil-mxd.codes to pass, so exact host matching is important. Requests with no Origin or Referer are allowed through. Those are same-site form submissions or direct navigations where the browser does not send either header.
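The exact-host comparison boils down to something like the following sketch (the helper name is illustrative, not the actual route code):

```typescript
// Minimal sketch of the same-origin check. A substring test such as
// origin.includes("mxd.codes") would accept "evil-mxd.codes";
// comparing URL.host for exact equality does not.
function isSameOrigin(originHeader: string | null, hostHeader: string): boolean {
  // No Origin/Referer header: allow (direct navigation or same-site submit)
  if (!originHeader) return true;
  try {
    return new URL(originHeader).host === hostHeader;
  } catch {
    return false; // unparseable header: reject
  }
}
```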
Geolocation looks up the visitor's IP using the MaxMind GeoLite2 City database. The .mmdb file is loaded via @maxmind/geoip2-node on the first request and cached in memory, so every subsequent lookup is served locally with no external API call and no network latency. Coordinates are rounded to one decimal place to reduce precision:
latitude: Math.round(response.location.latitude * 10) / 10\n\nUser-Agent parsing is done with plain regex functions instead of a library. The browser, OS and device type are each extracted with a short chain of pattern checks. The order matters. Edge and Opera both include chrome in their User-Agent string, so they have to be matched first.
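The order-sensitive matching can be sketched like this (the function name and exact patterns are illustrative, not the site's actual code):

```typescript
// Edge UAs contain "Edg/" and Opera UAs contain "OPR/", but both also
// contain "Chrome/", so the generic Chrome pattern must come last.
function detectBrowser(ua: string): string {
  if (/Edg\//.test(ua)) return "Edge";
  if (/OPR\//.test(ua)) return "Opera";
  if (/Firefox\//.test(ua)) return "Firefox";
  if (/Chrome\//.test(ua)) return "Chrome";
  if (/Safari\//.test(ua)) return "Safari";
  return "Other";
}
```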
Visitor hashing creates a daily identifier without storing any persistent user data. The hash is a SHA-256 of the IP address, a server-side secret and the current date. The same visitor gets the same hash all day, which makes unique visitor counting possible. On the next day the hash is different, so there is no cross-day tracking.
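In code, the daily hash amounts to something like this sketch (the function name and separator format are assumptions):

```typescript
import { createHash } from "node:crypto";

// SHA-256 over IP, a server-side secret and the current UTC date.
// The same visitor hashes identically all day, but the hash changes
// at midnight, so visits cannot be linked across days.
function dailyVisitorHash(ip: string, secret: string, date = new Date()): string {
  const day = date.toISOString().slice(0, 10); // e.g. "2026-03-19"
  return createHash("sha256").update(`${ip}:${secret}:${day}`).digest("hex");
}
```

The server-side secret ensures the hash cannot be reversed by brute-forcing the (small) IPv4 address space.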
\nAll analytics queries live in src/lib/analytics.ts. The functions cover the most common dimensions:
getAnalyticsStats() returns total pageviews and total unique visitor-days
getCurrentVisitors() counts distinct hashes seen in the last 5 minutes
getTopPages(limit) groups by path and orders by view count
getTopReferrers(limit) filters out self-referrals and empty referrers
getTopBrowsers, getTopOS, getTopLanguages, getTopCountries, getTopCities each group by their column
getDeviceBreakdown() splits into mobile, tablet and desktop
getScreenWidthDistribution() buckets screen widths into four categories
getPageviewsOverTime(days) returns a daily count for the last N days for the sparkline chart
getVisitorLocations() returns grouped coordinates for the visitor map

The /api/stats endpoint handles the overall pageview and visit counts, current visitor count and a few additional counts from other tables (comments, webmentions, emoji reactions, subscribers). It caches the result in memory for 24 hours. The more detailed breakdown queries (top pages, referrers, device types, countries and so on) are called directly from the about-this-site page on the server at request time.
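As one concrete example, the screen-width bucketing can be sketched as a plain function; note the four boundaries below are my illustration, the article does not state the actual cut-offs:

```typescript
// Hypothetical buckets; getScreenWidthDistribution() groups widths
// into four categories, but the real boundaries may differ.
function screenWidthBucket(width: number): string {
  if (width < 640) return "< 640px";
  if (width < 1024) return "640-1023px";
  if (width < 1920) return "1024-1919px";
  return ">= 1920px";
}
```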
Everything is displayed at /about-this-site, a server-rendered page with dynamic = \"force-dynamic\" so it always shows fresh data. The page fetches all the analytics functions in parallel on the server and passes the results down as props.
The 30-day pageview trend is rendered as an SVG sparkline chart built without any charting library:
\nconst w = 400;\nconst h = 80;\nconst padding = 4;\nconst range = maxViews - minViews || 1;\n\nconst coords = data.map((d, i) => {\n const x = (i / (data.length - 1)) * (w - padding * 2) + padding;\n const y = h - padding - ((d.views - minViews) / range) * (h - padding * 2);\n return { x, y };\n});\n\nconst polyline = coords.map((c) => `${c.x},${c.y}`).join(\" \");\nconst areaPath = `M${coords[0].x},${coords[0].y} ${coords\n .slice(1)\n .map((c) => `L${c.x},${c.y}`)\n .join(\" \")} L${w - padding},${h - padding} L${padding},${h - padding} Z`;\n\nThe area fill uses an SVG <path> element and the line is a <polyline>. Normalizing against the range (maxViews - minViews) rather than the absolute max keeps the chart readable even on low-traffic days. Colors come from the CSS custom property --secondary-color so the chart respects the site's light and dark theme automatically. Here is the live chart for this site:
Visitor locations are rendered on an interactive map using OpenLayers with cluster styling that scales logarithmically with the number of visits from each location.
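A logarithmic scale keeps a location with thousands of visits from dwarfing the map. A minimal sketch of such a radius function (base and scale values are made up, not Colota's or the site's actual styling constants):

```typescript
// Log-scaled cluster radius: multiplying the visit count by 10 adds a
// constant amount of radius instead of growing the circle linearly.
function clusterRadius(visits: number, base = 6, scale = 4): number {
  return base + scale * Math.log10(Math.max(visits, 1));
}
```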
\nThe rest of the page uses simple HTML lists and inline percentage bars rendered with CSS width set proportionally to the maximum value in each group.
No cookies are set. No data is shared with third parties. The IP address is used only to look up a rough location and to create the daily hash, then it is discarded. Coordinates are stored with reduced precision.
\nThe full analytics dashboard is publicly visible at /about-this-site, which I think is a reasonable trade-off: if I am collecting data, I should be transparent about what it shows.
","date_published":"2026-03-19T00:00:00.000Z","date_modified":"2026-03-19T00:00:00.000Z","tags":["next-js","react","selfhosted","data-privacy","indie-web","javascript"],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/colota-v1-1-0-native-maps-tracking-profiles-pause-zones","url":"https://mxd.codes/articles/colota-v1-1-0-native-maps-tracking-profiles-pause-zones","title":"Colota v1.1.0: Native Maps, Tracking Profiles, and Pause Zones","summary":"After releasing Colota 1.0 as an open-source Android GPS tracker, I spent the last months rebuilding the map engine, adding automatic tracking profiles, geofence-based pause zones, and a bunch of smaller improvements.","content_html":"I have been using Colota daily since the closed testing phase and kept running into things that bothered me. The WebView-based maps felt sluggish, switching GPS settings between walking and driving was annoying, and I did not want to record my location while sitting at my desk all day. Version 1.1.0 is the result of fixing all of that.
\nThe biggest change is the map engine. In v1.0 the maps were rendered using OpenLayers inside a WebView. Panning had noticeable lag, pinch-to-zoom was not smooth, and memory usage was higher than it should be.
\nI replaced the entire map stack with MapLibre GL Native via @maplibre/maplibre-react-native. The maps now render on the GPU. Panning and zooming that used to stutter are instant now, even with geofence overlays and accuracy circles drawn on top.
The tile source is OpenFreeMap, which provides free vector tiles based on OpenStreetMap data without requiring an API key. This keeps Colota fully FOSS-compatible, which matters for the F-Droid build.
\n
With vector tiles and MapLibre I could add a proper dark mode for the map. The app fetches the OpenFreeMap style JSON once, transforms the paint properties to dark colors, and caches the result. No extra network request after the first load.
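The transform can be sketched as a walk over the style's layers that swaps selected paint colors; the layer ids and color values below are illustrative, not Colota's actual palette or full logic:

```typescript
type StyleLayer = { id: string; paint?: Record<string, unknown> };

// Replace the fill color of selected layers with a dark-theme override,
// leaving all other layers and paint properties untouched.
function darkenStyle(
  layers: StyleLayer[],
  overrides: Record<string, string>
): StyleLayer[] {
  return layers.map((layer) =>
    overrides[layer.id]
      ? { ...layer, paint: { ...layer.paint, "fill-color": overrides[layer.id] } }
      : layer
  );
}
```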
\n
The color palette uses a navy/indigo family that fits well with the rest of the dark theme. Water is almost black, buildings are a subtle purple-gray, and text labels use a light gray with a dark halo for readability.
\nThis was the feature I wanted most for my own use. The GPS settings that work well for walking (high frequency, small distance threshold) drain the battery when driving. And the settings that work for driving miss too many points when walking.
\nTracking profiles solve this by automatically switching GPS settings based on conditions, such as your current speed or a connected device.\nFor example, I have a \"Driving\" profile that activates when Android Auto connects. It sets the GPS interval to 4 seconds with a 20m distance threshold. When I disconnect from the car, it switches back to my default settings (2 second interval, 2m threshold).
\nThe profile system uses priority-based resolution when multiple profiles match. It also has a deactivation delay to prevent rapid toggling when your speed fluctuates around the threshold.
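The priority resolution can be sketched like this; the type shape and function names are assumptions, not Colota's actual schema:

```typescript
type Profile = { name: string; priority: number; matches: () => boolean };

// Of all profiles whose condition currently matches, the one with the
// highest priority wins; with no match, the default settings apply.
function resolveProfile(profiles: Profile[]): Profile | null {
  const active = profiles.filter((p) => p.matches());
  if (active.length === 0) return null;
  return active.reduce((best, p) => (p.priority > best.priority ? p : best));
}
```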
\nSometimes you do not want to record your location at all. I do not need a GPS point every second while sitting at my desk at home or at the office.
\nPause zones are geofences that automatically stop location recording when you enter them. You define a center point and a radius on the map, give it a name, and the app handles the rest. When you leave the zone, recording resumes automatically.
\n
The distance calculation uses the haversine formula. The geofence check runs on every GPS fix inside the foreground service, so it works even when the React Native UI is not active. The zones are also visible on the dashboard map as colored circles with labels.
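For reference, the haversine distance between two coordinates is a few lines of code (this is the standard formula, not Colota's exact implementation):

```typescript
const EARTH_RADIUS_M = 6371000; // mean Earth radius in meters

// Great-circle distance in meters between two lat/lng points.
function haversine(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}
```

A point is inside a pause zone when its haversine distance to the zone's center is less than the zone's radius.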
\nThe location history map got a visual upgrade. Track segments are now colored by speed using a green-to-yellow-to-red gradient. This makes it easy to see at a glance where you were walking, cycling, or driving.
\nEach point on the track is tappable. A popup shows the exact coordinates, speed, accuracy, altitude, and timestamp. I also added a daily distance counter that shows how far you moved on any given day.
\nSetting up the app with all the server details (endpoint URL, auth credentials, sync settings) is tedious to do manually. Colota now supports a colota://setup deep link that lets you encode the entire configuration in a base64 payload.
The URL format looks like this:
\ncolota://setup?config=eyJlbmRwb2ludCI6Imh0dHBzOi8vZXhhbXBsZS5jb20vYXBpL2xvY2F0aW9ucyIsInVzZXJuYW1lIjoidXNlciJ9\n\nThe base64 payload decodes to a JSON object with all configuration fields. You can generate a setup link on your server and share it. Scanning or tapping it on the phone configures everything in one step.
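Decoding such a link is straightforward; this sketch assumes the payload is plain base64-encoded JSON, as in the example above (the helper name is illustrative):

```typescript
// Extract and decode the config payload from a colota://setup link.
// The custom scheme is swapped for https:// so the standard URL parser
// can read the query string.
function decodeSetupConfig(url: string): Record<string, unknown> {
  const b64 = new URL(url.replace("colota://", "https://")).searchParams.get("config");
  if (!b64) throw new Error("missing config parameter");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}
```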
\nThe settings screen got a cleanup. Sync presets (Instant, Balanced, Power Saver) make it easier to pick the right tradeoff between freshness and battery life without touching individual values.
\n
The app has a new icon. The old one was a placeholder I threw together in five minutes. The new one follows the Android adaptive icon guidelines and actually looks decent on both light and dark launchers.
\nThe app is available on Google Play, F-Droid (pending review), and as a direct APK download on GitHub. The full source code is AGPL-3.0 licensed.
\nFor setup instructions with different backends (Traccar, Home Assistant, OwnTracks, Dawarich, PhoneTrack) check the documentation. If you run into issues with background tracking being killed by your phone manufacturer, have a look at the battery optimization guide.
\nIf you have been following along from my earlier post about location tracking with OwnTracks and Node.js, Colota is basically the evolution of that setup. The tracking app is now my own, fully open-source, and does not depend on OwnTracks anymore. The server-side stack (PostgreSQL, GeoServer, MapProxy) still works the same way. You just point Colota to your webhook endpoint and it sends the same kind of location payloads.
","date_published":"2026-02-21T00:00:00.000Z","date_modified":"2026-02-21T00:00:00.000Z","tags":["android","react-native","gps","maplibre","open-source","selfhosted","data-privacy"],"image":"https://mxd.codes/content/posts/published/colota-v1-1-0-native-maps-tracking-profiles-pause-zones/cover.png","banner_image":"https://mxd.codes/content/posts/published/colota-v1-1-0-native-maps-tracking-profiles-pause-zones/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/gran-canaria-2026","url":"https://mxd.codes/photos/gran-canaria-2026","title":"Gran Canaria 2026","content_html":"","date_published":"2026-02-12T12:00:00.000Z","image":"https://mxd.codes/content/photos/gran-canaria-2026/photo-21.jpg","attachments":[{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-21.jpg","mime_type":"image/jpeg","title":"PXL_20260206_185057008.MP.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-1.jpg","mime_type":"image/jpeg","title":"IMG-20260124-WA0013.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-2.jpg","mime_type":"image/jpeg","title":"IMG-20260127-WA0024.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-3.jpg","mime_type":"image/jpeg","title":"IMG-20260127-WA0033.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-4.jpg","mime_type":"image/jpeg","title":"IMG-20260127-WA0058.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-5.jpg","mime_type":"image/jpeg","title":"IMG-20260128-WA0003.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-6.jpg","mime_type":"image/jpeg","title":"IMG-20260128-WA0008.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-7.jpg","mime_type":"image/jpeg","title":"IMG-20260129-WA0031.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-8.jpg","mime_type":"image/jpeg","title":"IMG-20260129-WA0039.jpg"},{"url":"https://mxd.codes/content/ph
otos/gran-canaria-2026/photo-9.jpg","mime_type":"image/jpeg","title":"IMG-20260129-WA0042.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-10.jpg","mime_type":"image/jpeg","title":"IMG-20260201-WA0005.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-11.jpg","mime_type":"image/jpeg","title":"IMG-20260203-WA0017.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-12.jpg","mime_type":"image/jpeg","title":"IMG-20260204-WA0008.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-13.jpg","mime_type":"image/jpeg","title":"IMG-20260204-WA0017.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-14.jpg","mime_type":"image/jpeg","title":"IMG-20260205-WA0001.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-15.jpg","mime_type":"image/jpeg","title":"IMG-20260207-WA0003.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-16.jpg","mime_type":"image/jpeg","title":"IMG-20260212-WA0004.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-17.jpg","mime_type":"image/jpeg","title":"PXL_20260130_144952315.MP.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-18.jpg","mime_type":"image/jpeg","title":"PXL_20260201_172433358.MP~2.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-19.jpg","mime_type":"image/jpeg","title":"PXL_20260202_145723748.MP.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-2026/photo-20.jpg","mime_type":"image/jpeg","title":"PXL_20260205_152639299.MP.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/croatia-and-italy-2026","url":"https://mxd.codes/photos/croatia-and-italy-2026","title":"Croatia and Italy 
2026","content_html":"","date_published":"2025-09-22T14:44:59.840Z","image":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-1.jpg","mime_type":"image/jpeg","title":"IMG-20250913-WA0012.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-2.jpg","mime_type":"image/jpeg","title":"PXL_20250911_144141402.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-3.jpg","mime_type":"image/jpeg","title":"PXL_20250914_122609684.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-4.jpg","mime_type":"image/jpeg","title":"PXL_20250914_105430904.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-5.jpg","mime_type":"image/jpeg","title":"IMG-20250914-WA0055.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-6.jpg","mime_type":"image/jpeg","title":"IMG-20250913-WA0013.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-7.jpg","mime_type":"image/jpeg","title":"IMG-20250913-WA0056.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-8.jpg","mime_type":"image/jpeg","title":"PXL_20250911_120114106.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-9.jpg","mime_type":"image/jpeg","title":"PXL_20250915_170617328.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-10.jpg","mime_type":"image/jpeg","title":"PXL_20250914_131127865.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-11.jpg","mime_type":"image/jpeg","title":"PXL_20250911_145818877.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-12.jpg","mime_type":"image/jpeg","title":"PXL_20250913_170135015.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-13.jpg","mime_type":"image/jpeg","title":"PXL_20250914_122836912.MP.jpg"},{"u
rl":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-14.jpg","mime_type":"image/jpeg","title":"PXL_20250911_122347690.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-15.jpg","mime_type":"image/jpeg","title":"PXL_20250914_114208122.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-16.jpg","mime_type":"image/jpeg","title":"PXL_20250915_162540439.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-17.jpg","mime_type":"image/jpeg","title":"PXL_20250914_163734297.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-18.jpg","mime_type":"image/jpeg","title":"PXL_20250910_181629542.MP.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-19.jpg","mime_type":"image/jpeg","title":"PXL_20250916_143217989.MP~2.jpg"},{"url":"https://mxd.codes/content/photos/croatia-and-italy-2026/photo-20.jpg","mime_type":"image/jpeg","title":"PXL_20250912_183430558.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/effortless-wildcard-ssl-secure-your-domain-with-let-s-encrypt-nginx-docker-and-cloudflare-dns","url":"https://mxd.codes/articles/effortless-wildcard-ssl-secure-your-domain-with-let-s-encrypt-nginx-docker-and-cloudflare-dns","title":"Effortless Wildcard SSL: Secure Your Domain with Let's Encrypt, Nginx, Docker and Cloudflare DNS","summary":"Learn how to generate and automate Let's Encrypt wildcard SSL certificates for Nginx using Docker and Cloudflare DNS API. Secure all your subdomains with easy setup, automatic renewal, and zero-downtime Nginx reloads.","content_html":"Securing web applications with HTTPS is a must, and Let’s Encrypt makes it easy by offering free SSL certificates. But what if you want a wildcard certificate to cover all subdomains under a domain? 
Fortunately, Let’s Encrypt supports wildcard certificates via the DNS-01 challenge, which requires updating DNS TXT records.
\nThis guide is specific to using Cloudflare as your DNS provider, using their API to automate DNS updates during certificate issuance and renewal. Let’s walk through the process step by step.
\nBefore we dive in, make sure you have:
\nA server with Docker and Docker Compose installed, and a domain managed by Cloudflare DNS (e.g. example.com)\n\nIf you use other DNS providers like DigitalOcean or AWS Route 53, you’ll need different DNS plugins and API credentials. This guide is tailored specifically for Cloudflare.
\n
Create a directory for your setup:
\nmkdir nginx-wildcard-ssl && cd nginx-wildcard-ssl\n\nCreate a docker-compose.yml file with the following content:
\nversion: '3'\nservices:\n nginx:\n image: nginx:latest\n container_name: nginx\n restart: unless-stopped\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n ## Config\n - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro\n - ./nginx/sites-available:/etc/nginx/sites-enabled:ro\n ## SSL\n - /etc/ssl:/etc/ssl\n - /data/containers/nginx/ssl/dhparam.pem:/etc/ssl/dhparam.pem:ro\n - /data/containers/certbot/conf:/etc/letsencrypt:ro\n ## Logs (optional)\n #- /data/containers/nginx/logs:/var/log/nginx:rw\n command: /bin/sh -c \"while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g 'daemon off;'\"\n networks:\n - web\n - internal\n\n certbot:\n container_name: certbot\n image: certbot/dns-cloudflare\n restart: unless-stopped\n volumes:\n - /data/containers/certbot/conf:/etc/letsencrypt:rw\n - /data/containers/certbot/www:/var/www/certbot:rw\n entrypoint: \"/bin/sh -c 'trap exit TERM; while :; do certbot renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/.secrets/cloudflare.ini; sleep 48h & wait $${!}; done;'\"\n networks:\n - internal\n\nnetworks:\n web:\n external: true\n name: nginx\n internal:\n driver: bridge\n\nNote that the certbot container only renews certificates; nginx -s reload cannot run inside it, so the nginx container's command reloads Nginx every 6 hours to pick up renewed certificates.\n\nTo allow Certbot to update DNS TXT records automatically for the DNS-01 challenge, you need a Cloudflare API token with DNS edit permissions.
\nHow to create the API token:
\nIn the Cloudflare dashboard, open My Profile → API Tokens → Create Token, start from the Edit zone DNS template, scope it to your domain's zone, and give it a recognizable name (e.g. Certbot DNS Token). Save the token securely; it is displayed only once.
\nCreate a file /data/containers/certbot/conf/.secrets/cloudflare.ini with:
dns_cloudflare_api_token = your_cloudflare_api_token_here\n\nImportant: This file contains sensitive credentials!
\nRestrict the file's permissions:
\nchmod 600 /data/containers/certbot/conf/.secrets/cloudflare.ini\n\nThis sets the file so only the owner can read and write it, preventing other users on the system from reading your API token.
\nRequest your wildcard certificate by running:
\ndocker run --rm \\\n -v /data/containers/certbot/conf:/etc/letsencrypt \\\n -v /data/containers/certbot/www:/var/www/certbot \\\n certbot/dns-cloudflare certonly \\\n --dns-cloudflare \\\n --dns-cloudflare-credentials /etc/letsencrypt/.secrets/cloudflare.ini \\\n --email your-email@example.com \\\n --agree-tos \\\n --no-eff-email \\\n -d example.com \\\n -d \"*.example.com\"\n\nWhat this command does:
\nIt runs a one-off certbot/dns-cloudflare container, requests a certificate covering both example.com and *.example.com via the DNS-01 challenge, authenticates against Cloudflare with the credentials file, and stores the result under /etc/letsencrypt/live/example.com/.
\nIf successful, you’ll see something like:
\nIMPORTANT NOTES:\n - Congratulations! Your certificate and chain have been saved at:\n /etc/letsencrypt/live/example.com/fullchain.pem\n Your key file has been saved at:\n /etc/letsencrypt/live/example.com/privkey.pem\n - Your certificate will expire on 2025-09-15. To obtain a new or tweaked\n version of this certificate in the future, simply run certbot again.\n - If you like Certbot, please consider supporting our work by:\n\n Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate\n\nCreate a virtual host config, for example ./nginx/sites-available/example.conf:
server {\n listen 443 ssl;\n server_name example.com *.example.com;\n\n ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;\n ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;\n ssl_dhparam /etc/ssl/dhparam.pem;\n\n location / {\n proxy_pass http://your_backend;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n }\n}\n\nserver {\n listen 80;\n server_name example.com *.example.com;\n return 301 https://$host$request_uri;\n}\n\nYou’ll also need a basic nginx.conf to include your sites:
user nginx;\nworker_processes auto;\n\nerror_log /var/log/nginx/error.log warn;\npid /var/run/nginx.pid;\n\nevents {\n worker_connections 1024;\n}\n\nhttp {\n include /etc/nginx/mime.types;\n default_type application/octet-stream;\n\n # Logging\n access_log /var/log/nginx/access.log;\n error_log /var/log/nginx/error.log;\n\n # Performance\n sendfile on;\n tcp_nopush on;\n tcp_nodelay on;\n keepalive_timeout 65;\n types_hash_max_size 2048;\n\n # Gzip Compression\n gzip on;\n gzip_disable \"msie6\";\n gzip_vary on;\n gzip_proxied any;\n gzip_comp_level 6;\n gzip_buffers 16 8k;\n gzip_min_length 1024;\n gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;\n\n # Security Headers (can be overridden in virtual hosts)\n add_header X-Frame-Options \"SAMEORIGIN\";\n add_header X-Content-Type-Options \"nosniff\";\n\n # SSL Defaults (override per-site)\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_prefer_server_ciphers on;\n\n # Include Virtual Hosts\n include /etc/nginx/sites-enabled/*.conf;\n}\n\nRestart the stack or reload Nginx to apply changes:
\ndocker-compose up -d\n\nThe Certbot container is configured to:
\nattempt certbot renew every 48 hours\nauthenticate the DNS-01 challenge with the stored Cloudflare credentials\nwrite renewed certificates to /etc/letsencrypt, where Nginx picks them up on its next reload
\nThis is handled by the command in docker-compose.yml:
entrypoint: \"/bin/sh -c 'trap exit TERM; while :; do certbot renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/.secrets/cloudflare.ini; sleep 48h & wait $${!}; done;'\"\n\nNote that nginx -s reload cannot run inside the certbot container; the nginx container's own command handles the periodic reload.\n\nYou can test renewal manually with:
\ndocker run --rm \\\n -v /data/containers/certbot/conf:/etc/letsencrypt \\\n certbot/certbot renew --dry-run\n\nIf you use another DNS provider, look for the appropriate Certbot DNS plugin and adjust the API credentials accordingly.
","date_published":"2025-06-15T10:49:10.361Z","date_modified":"2025-06-15T11:06:15.971Z","tags":["selfhosted","docker"],"image":"https://mxd.codes/content/posts/published/effortless-wildcard-ssl-secure-your-domain-with-let-s-encrypt-nginx-docker-and-cloudflare-dns/cover.png","banner_image":"https://mxd.codes/content/posts/published/effortless-wildcard-ssl-secure-your-domain-with-let-s-encrypt-nginx-docker-and-cloudflare-dns/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-integrate-plausible-analytics-in-a-next-js-app-without-getting-blocked","url":"https://mxd.codes/articles/how-to-integrate-plausible-analytics-in-a-next-js-app-without-getting-blocked","title":"How to Integrate Plausible Analytics in a Next.js App (Without Getting Blocked)","summary":"Learn how to integrate Plausible Analytics into a Next.js App Router project with route tracking and ad blocker-resistant proxying. A complete guide for privacy-focused, cookie-free analytics.\n","content_html":"Plausible Analytics is a lightweight, privacy-focused, cookie-free analytics solution. In this article, we’ll implement it inside a Next.js App Router project in a way that bypasses ad blockers using proxying.\nBy proxying the tracking script and API requests through your own domain, you significantly reduce the chance of them being blocked by common ad-blocking extensions.
\nWe’ll also ensure that page views are correctly tracked on every route change in a client-side rendered app.
\n\n\nThis guide assumes you are self-hosting Plausible on a custom subdomain such as
\nanalytics.yourdomain.com.
To reduce the chance of being blocked by ad blockers, we'll proxy Plausible's script and API through your own domain.
\nIn your next.config.js, add the following:
async rewrites() {\n return [\n {\n source: \"/js/script.js\",\n destination:\n \"https://analytics.yourdomain.com/js/script.file-downloads.hash.outbound-links.pageview-props.revenue.tagged-events.js\",\n },\n {\n source: \"/api/event\",\n destination: \"https://analytics.yourdomain.com/api/event\",\n },\n ];\n}\n\nNext.js uses this to route requests internally without triggering a redirect (i.e., the user sees the original URL in their browser).
\nWhen the browser requests /js/script.js or /api/event, the request is internally proxied to the external URL. From the visitor's (and the ad blocker's) point of view, everything is served from your own domain, not from analytics.yourdomain.com.\n\nWhat is this script variant?
\nThe URL points to a self-hosted and enhanced version of the Plausible tracking script:
\nscript.file-downloads.hash.outbound-links.pageview-props.revenue.tagged-events.js
This version includes support for:
\nfile download tracking (file-downloads)\nhash-based routing (hash)\noutbound link tracking (outbound-links)\ncustom pageview props (pageview-props)\nrevenue tracking (revenue)\ntagged events (tagged-events)
\nThis is ideal if you want richer insights without modifying your app further.
\nCreate a new file: app/RouteTracker.tsx
'use client';\n\nimport { usePathname } from 'next/navigation';\nimport { useEffect } from 'react';\n\nconst RouteTracker = () => {\n const pathname = usePathname();\n\n useEffect(() => {\n // 1. Add the Plausible script to the DOM if not already added\n if (!document.getElementById(\"next-p\")) {\n const script = document.createElement(\"script\");\n script.id = \"next-p\";\n script.async = true;\n script.defer = true;\n script.setAttribute(\"data-domain\", \"yourdomain.com\");\n script.src = \"/js/script.js\"; // Note: this uses the proxied script\n document.head.appendChild(script);\n }\n\n // 2. Add Plausible's minimal global initializer\n if (!document.getElementById(\"next-p-init\")) {\n const initScript = document.createElement(\"script\");\n initScript.id = \"next-p-init\";\n initScript.innerHTML =\n \"window.plausible = window.plausible || function() { (window.plausible.q = window.plausible.q || []).push(arguments) }\";\n document.head.appendChild(initScript);\n }\n\n // 3. Manually track a pageview when route changes\n const trackPageview = (url: string) => {\n const eventData = {\n name: \"pageview\",\n url,\n domain: window.location.hostname,\n ...(document.referrer && { referrer: document.referrer }),\n };\n\n fetch(\"/api/event\", {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n },\n body: JSON.stringify(eventData),\n }).catch((err) => console.error(\"Error tracking pageview:\", err));\n };\n\n trackPageview(pathname);\n }, [pathname]);\n\n return null;\n};\n\nexport default RouteTracker;\n\nWhat this does
\nThe component injects the proxied Plausible script once, registers the standard window.plausible queue stub, and sends a pageview event to /api/event every time usePathname reports a route change.
\nOpen your root layout file (app/layout.tsx) and import the tracker:
import RouteTracker from \"@/app/RouteTracker\"; // adjust path to match your project\n\nThen include it in your layout:
\n<body>\n <RouteTracker />\n {children}\n</body>\n\nFinal Notes
\nBy combining script proxying and client-side tracking, you get powerful, privacy-compliant analytics without sacrificing usability or insight.
","date_published":"2025-06-10T19:25:05.698Z","date_modified":"2025-06-11T19:30:58.878Z","tags":["analytics","next-js","react","data-privacy","selfhosted"],"image":"https://mxd.codes/content/posts/published/how-to-integrate-plausible-analytics-in-a-next-js-app-without-getting-blocked/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-integrate-plausible-analytics-in-a-next-js-app-without-getting-blocked/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-create-a-backup-script-for-postgre-sql-and-maria-db-containers-on-a-server","url":"https://mxd.codes/articles/how-to-create-a-backup-script-for-postgre-sql-and-maria-db-containers-on-a-server","title":"How to Create a Backup Script for PostgreSQL and MariaDB Containers on a Server","summary":"Learn how to automate backups for PostgreSQL and MariaDB running in Docker containers on a server. Protect your data with this simple, customizable backup script, ensuring regular backups with minimal effort.","content_html":"When running databases like PostgreSQL and MariaDB on a server, ensuring regular backups is crucial for protecting your data from unexpected events such as crashes, human error, or system failure. While there are several ways to create backups, scripting a backup solution gives you complete control and automation.
\nIn this article, we'll show you how to create a simple yet effective backup script for PostgreSQL and MariaDB running in Docker containers on a server. We'll automate the process to ensure that your databases are regularly backed up without you needing to manually intervene.
\nBefore diving into the script, let’s take a moment to highlight why having regular backups is essential:
\nBefore creating the backup script, make sure you have:
\nFor the purpose of this tutorial, let’s assume your PostgreSQL and MariaDB containers are named postgres_container and mariadb_container.
Start by creating a directory on your server where the backups will be stored. This will help keep everything organized.
\nmkdir -p /home/youruser/backup\n\nReplace /home/youruser/backup with the location where you'd like to store your backups.
Now let’s create a bash script that will run daily backups for both PostgreSQL and MariaDB databases. Open your favorite text editor and create a file named backup_databases.sh.
#!/bin/bash\nset -euo pipefail\n\n# Back up PostgreSQL and MariaDB databases into daily files.\n\nBACKUP_DIR=\"/data/backups\"\nLOG_FILE=\"/var/log/db_backup.log\"\nDAYS_TO_KEEP=30\nPOSTGRESDATABASES=(\"db1\" \"db2\") # PostgreSQL DBs to back up\nMARIADBDATABASES=(\"db1\") # MariaDB DBs to back up\nPOSTGRESCONTAINER=\"postgres_container\"\nMARIADBCONTAINER=\"mariadb_container\"\nPOSTGRESUSER=\"postgres\" # database user for pg_dump\n\n# Create necessary directories\nmkdir -p \"${BACKUP_DIR}\"\nmkdir -p \"$(dirname \"${LOG_FILE}\")\"\n\n# Function to log messages\nlog() {\n local level=\"$1\"\n local message=\"$2\"\n local timestamp\n timestamp=$(date +\"%Y-%m-%d %H:%M:%S\")\n\n # Log to both stdout and the log file\n echo -e \"${timestamp} [${level}] ${message}\" | tee -a \"${LOG_FILE}\"\n}\n\n## PostgreSQL backup\nfor DATABASE in \"${POSTGRESDATABASES[@]}\"; do\n TIMESTAMP=$(date +\"%Y%m%d%H%M\")\n FILE=\"${TIMESTAMP}_${DATABASE}.sql.gz\"\n OUTPUT_FILE=\"${BACKUP_DIR}/${FILE}\"\n\n log \"INFO\" \"Starting backup for database: ${DATABASE}\"\n\n # Perform the backup and compress the output\n if docker exec -i \"${POSTGRESCONTAINER}\" /usr/bin/pg_dump -U \"${POSTGRESUSER}\" \"${DATABASE}\" | gzip -9 > \"${OUTPUT_FILE}\"; then\n log \"SUCCESS\" \"Backup created: ${OUTPUT_FILE}\"\n ls -l \"${OUTPUT_FILE}\" | tee -a \"${LOG_FILE}\"\n else\n log \"ERROR\" \"Backup failed for database ${DATABASE}\" >&2\n continue\n fi\n\n # Prune old backups\n find \"${BACKUP_DIR}\" -maxdepth 1 -mtime +\"${DAYS_TO_KEEP}\" -name \"*${DATABASE}.sql.gz\" -exec rm -f {} \\; \\\n && log \"INFO\" \"Old backups deleted for database ${DATABASE}\" \\\n || log \"ERROR\" \"Failed to delete old backups for ${DATABASE}\" >&2\ndone\n\n## MariaDB backup\nfor DATABASE in \"${MARIADBDATABASES[@]}\"; do\n TIMESTAMP=$(date +\"%Y%m%d%H%M\")\n FILE=\"${TIMESTAMP}_${DATABASE}.sql.gz\"\n OUTPUT_FILE=\"${BACKUP_DIR}/${FILE}\"\n\n log \"INFO\" \"Starting backup for database: ${DATABASE}\"\n\n # Perform the database backup (dump)\n if docker exec \"${MARIADBCONTAINER}\" /usr/bin/mariadb-dump -u root --password=yourpassword \"${DATABASE}\" | gzip -9 > \"${OUTPUT_FILE}\"; then\n log \"SUCCESS\" \"Backup created: ${OUTPUT_FILE}\"\n ls -l \"${OUTPUT_FILE}\" | tee -a \"${LOG_FILE}\"\n else\n log \"ERROR\" \"Backup failed for database ${DATABASE}\" >&2\n continue\n fi\n\n # Prune old backups\n find \"${BACKUP_DIR}\" -maxdepth 1 -mtime +\"${DAYS_TO_KEEP}\" -name \"*${DATABASE}.sql.gz\" -exec rm -f {} \\; \\\n && log \"INFO\" \"Old backups deleted for database ${DATABASE}\" \\\n || log \"ERROR\" \"Failed to delete old backups for ${DATABASE}\" >&2\ndone\n\nlog \"INFO\" \"Finished database backups!\"\n\nExplanation:
\nUses docker exec to run pg_dump inside the PostgreSQL container to dump the previously defined databases.\nUses docker exec to run mariadb-dump inside the MariaDB container to dump the previously defined databases.\nCompresses every dump with gzip to save space.\nDeletes backups older than DAYS_TO_KEEP days to prevent disk space issues.\nCustomizing the Script:
\nSet BACKUP_DIR to the directory where you want to store your backups.\nSet DAYS_TO_KEEP to the number of days you want to keep backups.\nSet POSTGRESDATABASES to the PostgreSQL databases to back up.\nSet MARIADBDATABASES to the MariaDB databases to back up.\nReplace POSTGRESCONTAINER and MARIADBCONTAINER with the names of your PostgreSQL and MariaDB containers.\nReplace yourpassword with the password for the root user in MariaDB.\nAfter saving the script, make it executable:
\nchmod +x /path/to/backup_databases.sh\n\nTo schedule automatic backups, set up a cron job.
\ncrontab -e\n\n0 2 * * * /path/to/backup_databases.sh\n\nThis will execute the backup script every day at 2:00 AM.
\nMake sure to adjust the path /path/to/backup_databases.sh to the correct location of your script.
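Cron runs jobs silently by default, so redirecting the script's output makes failed backups easier to diagnose later. A possible variant of the crontab entry above (the log path is just a suggestion):

```crontab
# Same schedule as above, but append stdout and stderr to a log file.
0 2 * * * /path/to/backup_databases.sh >> /var/log/db_backup_cron.log 2>&1
```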
It’s always a good idea to manually run the backup script once to ensure everything is working correctly.
\n/path/to/backup_databases.sh\n\nCheck the backup directory to ensure that the backup files have been created and compressed.
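Since the dumps are gzip-compressed, a quick integrity check catches truncated or corrupted files early. A minimal sketch, assuming your backups follow the *.sql.gz naming used above (the helper name is illustrative, not part of the script):

```shell
# Check every compressed dump in a directory with gzip -t.
# Usage: verify_backups /home/youruser/backup
verify_backups() {
  local dir="$1" f status=0
  for f in "$dir"/*.sql.gz; do
    # Handle the case where the glob matched nothing
    [ -e "$f" ] || { echo "no backups found in $dir"; return 1; }
    if gzip -t "$f" 2>/dev/null; then
      echo "OK: $f"
    else
      echo "CORRUPT: $f" >&2
      status=1
    fi
  done
  return $status
}
```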
\nIn case you need to restore a backup, you can use the following commands to load the backups back into your PostgreSQL and MariaDB containers.
\nPostgreSQL Restore:
\ngunzip -c /home/youruser/backup/DATE_DATABASE.sql.gz | docker exec -i postgres_container psql -U postgres -d database\n\nMariaDB Restore:
\ngunzip -c /home/youruser/backup/DATE_DATABASE.sql.gz | docker exec -i mariadb_container mariadb -u root --password=yourpassword database\n\nReplace DATE and DATABASE with the appropriate backup file’s date and database name.
By following these steps, you've created a simple and automated backup solution for your PostgreSQL and MariaDB databases running inside Docker containers. Regular backups are essential for protecting your data, and this script ensures that your backups run smoothly without manual intervention. You can also use this script to back up databases e.g. in Unraid with the User Scripts plugin.
You can further enhance this backup strategy by sending notifications, backing up to remote storage (e.g., AWS S3 or Google Cloud), or setting up encryption for additional security.
\nWith your databases securely backed up, you can rest easy knowing your data is safe and easily recoverable in case of an emergency.
","date_published":"2025-02-02T15:59:44.506Z","date_modified":"2025-02-16T22:08:27.915Z","tags":["d-1","selfhosted","docker"],"image":"https://mxd.codes/content/posts/published/how-to-create-a-backup-script-for-postgre-sql-and-maria-db-containers-on-a-server/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-create-a-backup-script-for-postgre-sql-and-maria-db-containers-on-a-server/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/farewell-to-winter-and-the-cold","url":"https://mxd.codes/photos/farewell-to-winter-and-the-cold","title":"Farewell to winter and the cold!","content_html":"","date_published":"2025-02-02T13:02:25.702Z","image":"https://mxd.codes/content/photos/farewell-to-winter-and-the-cold/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/farewell-to-winter-and-the-cold/photo-1.jpg","mime_type":"image/jpeg","title":"PXL_20250124_152413591.MP.jpg"},{"url":"https://mxd.codes/content/photos/farewell-to-winter-and-the-cold/photo-2.jpg","mime_type":"image/jpeg","title":"PXL_20250123_155110483.MP.jpg"},{"url":"https://mxd.codes/content/photos/farewell-to-winter-and-the-cold/photo-3.jpg","mime_type":"image/jpeg","title":"PXL_20250124_152601419.MP.jpg"},{"url":"https://mxd.codes/content/photos/farewell-to-winter-and-the-cold/photo-4.jpg","mime_type":"image/jpeg","title":"PXL_20250124_161215756.MP.jpg"},{"url":"https://mxd.codes/content/photos/farewell-to-winter-and-the-cold/photo-5.jpg","mime_type":"image/jpeg","title":"PXL_20250121_155002681.MP.jpg"},{"url":"https://mxd.codes/content/photos/farewell-to-winter-and-the-cold/photo-6.jpg","mime_type":"image/jpeg","title":"PXL_20250117_180657303.RAW-01.COVER.jpg"},{"url":"https://mxd.codes/content/photos/farewell-to-winter-and-the-cold/photo-7.jpg","mime_type":"image/jpeg","title":"PXL_20250129_161324994.MP.jpg"}],"authors":[{"name":"Max 
Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/enhancing-social-interactions-implementing-webmentions-with-next-js","url":"https://mxd.codes/articles/enhancing-social-interactions-implementing-webmentions-with-next-js","title":" Enhancing Social Interactions: Implementing Webmentions with Next.js and PostgreSQL","summary":"Learn how to integrate Webmentions into your Next.js site using PostgreSQL. Enhance engagement, foster social interactions, and build a dynamic web community with this step-by-step guide.","content_html":"Webmentions are a powerful tool for adding decentralized social interactions, such as comments, likes, reposts, and replies, directly on your website. If you're building a dynamic site with Next.js, integrating Webmentions can help encourage cross-site conversations, boost SEO, and enhance user engagement. In this guide, I will show you how to implement Webmentions into your Next.js project with PostgreSQL for storing and displaying them.
\nWebmention is an open web standard (W3C Recommendation) that enables decentralized cross-site interactions.
\nIn simpler terms, Webmentions allow users to interact with your content across the web by leaving comments, likes, reposts and other responses on other sites. These interactions enrich your site’s user experience, and they help establish meaningful connections with others.
\nWhen you link to a webpage, you can send a Webmention notification. If the receiving site supports Webmentions, it may display your post as a comment, like, or response—enabling rich cross-site conversations.
\nWhy Should You Use Webmentions?
\nHere’s an example of how Webmentions appear on my site:
\n
You can check a live version of this in the Webmentions section of this article: [/articles/fetching-and-storing-activities-from-garmin-connect-with-strapi-and-visualizing-them-with-next-js#replies].
\nA typical webmentions structure in JSON looks like this:
\n{\n \"type\": \"entry\",\n \"author\": {\n \"type\": \"card\",\n \"name\": \"Some Name\",\n \"photo\": \"URL to author image\",\n \"url\": \"URL to author profile\"\n },\n \"url\": \"Webmention URL\",\n \"wm-received\": \"Date of Webmention\",\n \"wm-id\": 1876563,\n \"wm-source\": \"Source URL\",\n \"wm-target\": \"Target URL\",\n \"wm-property\": \"Type of mention (e.g., like-of, repost-of)\",\n \"wm-private\": false\n}\n\nTo keep your Webmentions accessible even if an external service is discontinued, it’s a good idea to store them locally. In this tutorial, we’ll guide you through setting up a PostgreSQL database to store Webmentions and display them dynamically in your Next.js app.
\nBefore we dive into Webmentions, ensure you have PostgreSQL installed on your server. If not, check one of these guides.
\nOnce PostgreSQL is ready:
\n# Create a new database for storing Webmentions\ncreatedb personalwebsite\n\n-- Define a table structure to store Webmentions\nCREATE TABLE public.webmentions (\n id serial4 NOT NULL,\n wm_id int8 NOT NULL,\n wm_source text NOT NULL,\n wm_target text NOT NULL,\n wm_property text NOT NULL,\n url text NULL,\n author_name text NULL,\n author_photo text NULL,\n author_url text NULL,\n content_html text NULL,\n content_text text NULL,\n published_at timestamp NULL,\n received_at timestamp DEFAULT CURRENT_TIMESTAMP NULL,\n CONSTRAINT webmentions_pkey PRIMARY KEY (id),\n CONSTRAINT webmentions_wm_id_key UNIQUE (wm_id)\n);\n\n-- Log table to track when Webmentions were last fetched\nCREATE TABLE public.webmention_fetch_log (\n id serial4 NOT NULL,\n last_fetch timestamptz NOT NULL,\n CONSTRAINT webmention_fetch_log_pkey PRIMARY KEY (id)\n);\n\nBefore we can store and display webmentions, we have to receive them somewhere. If you don't want to implement your own Webmention receiver, I recommend Webmention.io, a service that makes it easy to receive webmentions.
\nSteps:
\n_app.tsx:<Head>\n ...\n <link rel=\"webmention\" href=\"https://webmention.io/username/webmention\" />\n ...\n </Head>\n\nFrom here on, Webmention.io will collect all the Webmentions for your site. Now, let’s create a script that fetches and stores them every ten minutes in the PostgreSQL database
\nTo keep Webmentions up-to-date, we'll fetch them periodically. Here’s the logic of the script.
\n
Create a script src/utils/fetch-webmentions.ts to fetch and store Webmentions:
import fetch from \"node-fetch\"\nimport { Pool } from \"pg\"\n\nconst pool = new Pool({\n user: process.env.PGUSER,\n host: process.env.PGHOST,\n database: process.env.PGDATABASE,\n password: process.env.PGPASSWORD,\n port: process.env.PGPORT,\n})\n\nfunction isNotOlderThanTenMinutes(date: Date) {\n if (!(date instanceof Date) || isNaN(date.getTime())) return false\n return Date.now() - date.getTime() <= 10 * 60 * 1000\n}\n\nexport async function fetchAndStoreWebmentions() {\n const client = await pool.connect() // Use a client for transaction safety\n try {\n console.log(\"🔄 Checking last webmention fetch...\")\n\n // Get the latest fetch timestamp\n const { rows } = await client.query(\n `SELECT last_fetch FROM webmention_fetch_log ORDER BY last_fetch DESC LIMIT 1`\n )\n const lastFetchDate = rows[0]?.last_fetch\n\n if (isNotOlderThanTenMinutes(lastFetchDate)) {\n console.log(\"✅ Webmentions are already updated!\")\n return\n }\n\n // Insert new fetch timestamp\n const now = new Date().toISOString()\n await client.query(`INSERT INTO webmention_fetch_log (last_fetch) VALUES ($1)`, [now])\n console.log(\"📌 Updated Webmentions fetch log\")\n\n // Generate Webmention API URL\n const baseUrl = `https://webmention.io/api/mentions.jf2?domain=mxd.codes&per-page=1000&page=0&token=${process.env.WEBMENTION_IO_TOKEN}`\n const webmentionsUrl =\n lastFetchDate instanceof Date && !isNaN(lastFetchDate.getTime())\n ? 
`${baseUrl}&since=${lastFetchDate.toISOString()}`\n : baseUrl\n\n // Fetch new Webmentions from Webmention.io\n console.log(\"🔄 Fetching webmentions from Webmention.io...\")\n const response = await fetch(webmentionsUrl)\n const { children: webmentions } = await response.json()\n\n if (!Array.isArray(webmentions) || webmentions.length === 0) {\n console.log(\"⚠️ No new webmentions found.\")\n return\n }\n\n console.log(`📥 Processing ${webmentions.length} webmentions...`)\n\n // Prepare batch insert query\n const insertQuery = `\n INSERT INTO webmentions (\n wm_id, wm_source, wm_target, wm_property, url,\n author_name, author_photo, author_url, content_html, content_text, published_at, received_at\n ) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)\n ON CONFLICT (wm_id) DO NOTHING;\n `\n\n for (const mention of webmentions) {\n const values = [\n mention[\"wm-id\"],\n mention[\"wm-source\"],\n mention[\"wm-target\"],\n mention[\"wm-property\"],\n mention[\"url\"],\n mention.author?.name || null,\n mention.author?.photo || null,\n mention.author?.url || null,\n mention.content?.html || null,\n mention.content?.text || null,\n mention.published ? new Date(mention.published) : null,\n new Date(mention[\"wm-received\"]),\n ]\n await client.query(insertQuery, values)\n }\n\n console.log(`✅ Stored ${webmentions.length} webmentions successfully!`)\n } catch (error) {\n console.error(\"❌ Error fetching or storing webmentions:\", error)\n } finally {\n client.release() // Ensure client is released back to the pool\n }\n}\n\n// Run the function\nfetchAndStoreWebmentions()\n\nThis function can now be called every time before webmentions are queried for a page from the PostgreSQL database.\nIdeally, you should abstract the logic for determining whether new webmentions need to be fetched into an API layer. This prevents unnecessary database queries with every request, but this is out of scope for this article.
\nTo retrieve Webmentions dynamically for a page, we create an API route in pages/api/get-webmentions.js. This route allows us to fetch mentions for a specific target URL stored in our PostgreSQL database.
import { Pool } from \"pg\"\nimport { fetchAndStoreWebmentions } from \"@/src/utils/fetch-webmentions\"\n\nconst pool = new Pool({\n user: process.env.PGUSER,\n host: process.env.PGHOST,\n database: process.env.PGDATABASE,\n password: process.env.PGPASSWORD,\n port: process.env.PGPORT,\n})\n\nexport default async function handler(req, res) {\n if (req.method !== \"GET\")\n return res.status(405).json({ error: \"Method not allowed\" })\n\n const { target } = req.query\n if (!target) return res.status(400).json({ error: \"Missing target URL\" })\n\n // Update Webmentions before selecting them for the page URL\n await fetchAndStoreWebmentions()\n\n // Use a parameterized query to avoid SQL injection via the target param\n const query = `SELECT wm_id, wm_source, wm_target, wm_property, url, author_name, author_photo, author_url, content_text, published_at FROM webmentions WHERE wm_target LIKE $1 ORDER BY received_at DESC;`\n const result = await pool.query(query, [`%${target}%`])\n\n res.json(result.rows)\n}\n\nNow you can call this API route and pass a query param target with the URL to get all Webmentions for a page.
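The API route can be exercised from the command line. This sketch assumes the Next.js dev server is running on localhost:3000; the target URL is just an example:

```shell
# Percent-encode the target page URL, then query the local endpoint.
target="https://mxd.codes/articles/my-post"
encoded=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$target")

# Returns a JSON array of webmentions for the page
# (ignore the failure if the dev server is not running).
curl -s "http://localhost:3000/api/get-webmentions?target=${encoded}" || true
```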
To visually display Webmentions on our website, we create a dedicated React component components/Webmentions.js. This component fetches the Webmentions from our API and renders them.
import { useEffect, useState } from \"react\";\n\n// React component to display Webmentions for a given page\nconst Webmentions = ({ targetUrl }) => {\n const [mentions, setMentions] = useState([]);\n\n useEffect(() => {\n // Fetch Webmentions for the target URL from the API route\n fetch(`/api/get-webmentions?target=${encodeURIComponent(targetUrl)}`)\n .then((res) => res.json())\n .then((data) => setMentions(data));\n }, [targetUrl]);\n\n return (\n <div>\n <h3>Webmentions</h3>\n {mentions.length === 0 ? (\n <p>No webmentions yet.</p>\n ) : (\n mentions.map((mention) => (\n <div className=\"vcard h-card p-author\" key={mention.wm_id} style={{ border: \"1px solid #ddd\", padding: \"10px\", marginBottom: \"10px\" }}>\n {mention.author_photo && (\n // Display author profile picture if available\n <img src={mention.author_photo} alt={mention.author_name} className=\"u-photo\" style={{ width: \"40px\", height: \"40px\", borderRadius: \"50%\" }} />\n )}\n <p>\n <strong>{mention.author_name}</strong> {mention.wm_property.replace(\"-\", \" \")}\n {mention.wm_property === \"like-of\" && \" ❤️\"}\n {mention.wm_property === \"repost-of\" && \" 🔁\"}\n {mention.wm_property === \"in-reply-to\" && \" 💬\"}\n </p>\n <a className=\"u-url\" href={mention.url || mention.wm_source} target=\"_blank\" rel=\"noopener noreferrer\">View Mention</a>\n </div>\n ))\n )}\n </div>\n );\n};\n\nexport default Webmentions;\n\nI highly recommend periodically verifying the authenticity of Webmention sources to prevent spam.
\nTo ensure your Webmentions setup works correctly, use the following tools:
\nBy integrating Webmentions into your Next.js site, you can create an interactive and engaging web community. Whether you're running a blog, portfolio, or e-commerce site, Webmentions provide a powerful way to enhance content, boost SEO, and encourage meaningful connections.
\nIf you have created a response to this post you can send me a webmention and it will appear below the post.
\nMany developers rely on Google’s Static Maps API to generate map images, but this has limitations such as:
\nFor a long time I was looking for a self-hosted alternative to Google's Maps Static API, but I couldn't find anything that seemed to fit my needs.
\nHowever, I found staticmaps, a Node.js library for creating map images with markers, polylines, polygons and text. The library doesn't provide a web interface, so I decided to build one on top of it with Express and containerize the staticmaps API.
\ndocker-staticmaps is a containerized web API for staticmaps, built with Express.
\nIt provides a self-hosted alternative that allows you to generate static map images on your own server without relying on third-party APIs.
To get a static map from the endpoint /staticmaps, several parameters have to be provided.
center - Center coordinates of the map in the format lon,lat\nzoom - Set the zoom level for the map\nwidth - default 300 - Width in pixels of the final image\nheight - default 300 - Height in pixels of the final image\nformat - default png (e.g. png, jpg or webp)\nbasemap - default osm - Map base layer\nFor different basemaps docker-staticmaps is using existing tile services from various providers. Be sure to check their Terms of Use for your use case or use a custom tileserver with the tileUrl parameter!
basemap - default \"osm\" - Select the basemap\nosm - default - Open Street Map\nstreets - Esri street basemap\nsatellite - Esri's satellite basemap\nhybrid - Satellite basemap with labels\ntopo - Esri topographic map\ngray - Esri gray canvas with labels\ngray-background - Esri gray canvas without labels\noceans - Esri ocean basemap\nnational-geographic - National Geographic basemap\notm - OpenTopoMap\nstamen-toner - Stamen Toner black and white map with labels\nstamen-toner-background - Stamen Toner map without labels\nstamen-toner-lite - Stamen Toner Light with labels\nstamen-terrain - Stamen Terrain with labels\nstamen-terrain-background - Stamen Terrain without labels\nstamen-watercolor - Stamen Watercolor\ncarto-light - Carto - free usage for up to 75,000 mapviews per month, non-commercial services only\ncarto-dark - Carto - free usage for up to 75,000 mapviews per month, non-commercial services only\ncarto-voyager - Carto - free usage for up to 75,000 mapviews per month, non-commercial services only\ncustom - Pass through the tile URL using parameter tileurl\nWith the parameter polyline you can add a polyline to the map in the following format:
polyline=polylineStyle|polylineCoord1|polylineCoord2|...
polylineCoord - required - in format lat,lon and separated by |. At least two locations are needed to draw a polyline.\nThe polylineStyle consists of the following two parameters separated by |.
weight - default 5 - Weight of the polyline in pixels, e.g. weight:5\ncolor - default blue - 24-bit color hex value, e.g. color:0000ff\nIf no center is specified, the polyline will be centered.
Polyline with no zoom, weight:6 and color:0000ff
http://localhost:3000/staticmaps?width=600&height=600&polyline=weight:6|color:0000ff|48.726304979176675,-3.9829935637739382|48.72623035828412,-3.9829726446543385|48.726126671101639,-3.9829546542797467|48.725965124843256,-3.9829070729298808|48.725871429380568,-3.9828726793245273|48.725764250990267,-3.9828064532306628|48.725679557682362,-3.9827385375789146|48.72567025076134,-3.9827310750289113|48.725529844164292,-3.9826617613709225|48.725412537198615,-3.9826296635284164|48.725351694726704,-3.9826201452878531|48.725258599474508,-3.9826063049230411|48.725157520450125,-3.9825900299314232|48.725077863838543,-3.9825779905509102|48.724930435729831,-3.9825514102373938|48.724815578113535,-3.9825237355887291|48.724760905376989,-3.9825013965800564|48.724677938456551,-3.9824534296566916|48.724379435330384,-3.9822469276001118|48.724304509274596,-3.9821850264836076|48.7242453124599,-3.9821320570321772|48.724206187829317,-3.9821063430223207|48.724117073204575,-3.9820862134785551
\n\n
With the parameter polygon you can add a polygon to the map in the following format:
polygon=polygonStyle|polygonCoord1|polygonCoord2|...
polygonCoord - required - in format lat,lon and separated by |. First and last locations have to be the same to close the polygon.\nThe polygonStyle consists of the following three parameters separated by |.
color - default blue - 24-bit color hex value, e.g. color:4874db\nweight - default 5 - Weight of the polygon outline in pixels, e.g. weight:5\nfill - default green - 24-bit color hex value, e.g. fill:eb7a34\nIf no center is specified, the polygon will be centered.
Polygon with no zoom, color:4874db, weight:7 and fill:eb7a34:\nhttp://localhost:3000/staticmaps?width=600&height=600&polygon=color:4874db|weight:7|fill:eb7a34|41.891169,12.491691|41.890633,12.493697|41.889012,12.492989|41.889467,12.490811|41.891169,12.491691

With the parameter markers you can draw one or multiple markers, depending on how many pairs of coordinates you pass to the parameter.
markers=markerCoord1|markerCoord2|...
markerCoord - required - in format lat,lon and separated by |. At least one coordinate is needed to draw a marker.\nIf no center is specified, the markers will be centered.
Markers
\n
http://localhost:3000/staticmaps?width=600&height=600&markers=48.726304979176675,-3.9829935637739382|48.724117073204575,-3.9820862134785551
\n\n
With the parameter circle you can add a circle to the map in the following format:
circle=circleStyle|circleCoord
circleCoord - required - in format lat,lon and separated by |. At least one location is needed to draw a circle.\nThe circleStyle consists of the following parameters separated by |.
radius - required - Circle radius in meters, e.g. radius:500\ncolor - default #0000bb - Stroke color of the circle, e.g. color:#0000bb\nwidth - default 3 - Stroke width of the circle, e.g. width:3\nfill - default #AA0000 - Fill color of the circle, e.g. fill:#AA0000\nIf no center is specified, the circle will be centered.
Circle with no zoom
\n
http://localhost:3000/staticmaps?width=600&height=600&basemap=osm&circle=radius:100|48.726304979176675,-3.9829935637739382
\n\n
Minimal example:
\n center and zoom
http://localhost:3000/staticmaps?center=-119.49280,37.81084&zoom=9
\n\n
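These example URLs can also be assembled programmatically and fetched with curl. A small sketch, where the helper function and the localhost endpoint are illustrative assumptions for a locally running instance:

```shell
# Illustrative helper: build a minimal /staticmaps URL from center and zoom.
staticmaps_url() {
  printf 'http://localhost:3000/staticmaps?center=%s&zoom=%s' "$1" "$2"
}

# Fetch the minimal example above and save it as map.png:
# curl -s -o map.png "$(staticmaps_url -119.49280,37.81084 9)"
```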
\n width=500, height=500, center=-73.99515,40.76761, zoom=10, format=webp, basemap=carto-voyager
http://localhost:3000/staticmaps?width=500&height=500&center=-73.99515,40.76761&zoom=10&format=webp&basemap=carto-voyager
\n\n
Markers and Polyline
\n
http://localhost:3000/staticmaps?width=600&height=600&polyline=weight:6|color:0000ff|48.726304979176675,-3.9829935637739382|48.72623035828412,-3.9829726446543385|48.726126671101639,-3.9829546542797467|48.725965124843256,-3.9829070729298808|48.725871429380568,-3.9828726793245273|48.725764250990267,-3.9828064532306628|48.725679557682362,-3.9827385375789146|48.72567025076134,-3.9827310750289113|48.725529844164292,-3.9826617613709225|48.725412537198615,-3.9826296635284164|48.725351694726704,-3.9826201452878531|48.725258599474508,-3.9826063049230411|48.725157520450125,-3.9825900299314232|48.725077863838543,-3.9825779905509102|48.724930435729831,-3.9825514102373938|48.724815578113535,-3.9825237355887291|48.724760905376989,-3.9825013965800564|48.724677938456551,-3.9824534296566916|48.724379435330384,-3.9822469276001118|48.724304509274596,-3.9821850264836076|48.7242453124599,-3.9821320570321772|48.724206187829317,-3.9821063430223207|48.724117073204575,-3.9820862134785551&markers=48.726304979176675,-3.9829935637739382|48.724117073204575,-3.9820862134785551
\n\n
with Docker
\ndocker run -d \\\n --name='static-maps-api' \\\n -p '3003:3000/tcp' \\\n 'mxdcodes/docker-staticmaps:latest'\n\nwith Node.js
\ngit clone https://github.com/dietrichmax/docker-staticmaps\ncd docker-staticmaps\nnpm i\nnpm run start\n\nLinks
\nIn general there are two possibilities to use Google AdSense on your GatsbyJS website:
\nDepending on whether you want to place AdSense ads in specific spots or leave that job to Google's AI, you can choose one or the other, or combine both.
\nWith Auto ads, a Google AI determines the optimal positions for ad banners and automatically places display ads there. All you have to do is place the following AdSense code in html.js.
<script data-ad-client=\"ca-pub-0037698828864449\" async src=\"https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js\"></script>\n\n
\nand activate Auto ads in Adsense.
On GIS-Netzwerk.com I used Auto ads and I'm honestly surprised how well it works.\nAds are displayed within the text every few paragraphs and are also responsive.
\nYou also have the option of increasing or decreasing the number of ads in the settings.\nUnfortunately, you cannot specify a specific number of ads.
\n
In my opinion, a lot of ads are shown even when you set it to \"min\".\nYou can play around with the ad load and find out the best setting for your purposes.\nSometimes it can take a few minutes until the new ad load takes effect.
\nYou can also influence the ad formats.\nBasically there are:
\nI have only deactivated anchor ads, because I personally find them very annoying.
\n
In addition, you can also completely exclude individual pages from advertisements.
\nIf you want to use Auto ads on your GatsbyJS page, you can do it super easily with the plugin gatsby-plugin-google-adsense.
\nInstall
\nnpm install --save gatsby-plugin-google-adsense\n\nor
\nyarn add gatsby-plugin-google-adsense\n\nmodify gatsby-config.js
\n// In your gatsby-config.js file\nplugins: [\n {\n resolve: `gatsby-plugin-google-adsense`,\n options: {\n publisherId: `ca-pub-xxxxxxxxxx`\n },\n },\n]\n\nThe remaining settings can then be adjusted on Adsense.
\nIn addition to Auto ads, there is also the \"classic\" option of inserting individual ad units at specific positions.\nWith the React component react-adsense you can insert Google AdSense and Baidu ads anywhere.
\nnpm install --save react-adsense\n\nor
\nyarn add react-adsense\n\nIn order for the components to be rendered, you still need the AdSense script code. You can either insert this manually in the html.js file or, if you want to combine individual ad units with Auto ads, you can also use the plug-in already mentioned to insert the script.
\n\nWhen auto ads and individual ad units are combined, the individual ad units always have a higher \"priority\". This means that all ad units that are inserted manually are usually also rendered and, if the text / ads ratio permits, additional ads from Auto ads are automatically inserted.
\n
If the script has been integrated and react-adsense has been installed, you can use
\nimport React from 'react';\nimport AdSense from 'react-adsense';\n\n// ads with no set-up\n<AdSense.Google\n client='ca-pub-7292810486004926'\n slot='7806394673'\n/>\n\n// ads with custom format\n<AdSense.Google\n client='ca-pub-7292810486004926'\n slot='7806394673'\n style={{ width: 500, height: 300, float: 'left' }}\n format=''\n/>\n\n// responsive and native ads\n<AdSense.Google\n client='ca-pub-7292810486004926'\n slot='7806394673'\n style={{ display: 'block' }}\n layout='in-article'\n format='fluid'\n/>\n\n// auto full width responsive ads\n<AdSense.Google\n client='ca-pub-7292810486004926'\n slot='7806394673'\n style={{ display: 'block' }}\n format='auto'\n responsive='true'\n layoutKey='-gw-1+2a-9x+5c'\n/>\n\nto insert components for the ad units.
\nThe respective client id
\nclient='ca-pub-7292810486004926'\n\n\nand the ad slot
\nslot='7806394673'\n\nmust always be specified.
\nThe rest is optional.
\nOptional props:\n className:\n style:\n layout:\n layoutKey:\n format:\n responsive:\n\nIf you have more questions, there is also an AdSense community where you can get answers:\nGoogle AdSense Help Community
","date_published":"2024-02-22T14:19:02.793Z","date_modified":"2025-02-01T16:46:28.539Z","tags":["react","gatsby","a-1"],"image":"https://mxd.codes/content/posts/published/using-google-adsense-with-gatsby-js/cover.png","banner_image":"https://mxd.codes/content/posts/published/using-google-adsense-with-gatsby-js/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-create-a-web-map-with-open-layers-and-react","url":"https://mxd.codes/articles/how-to-create-a-web-map-with-open-layers-and-react","title":"Mastering React and OpenLayers Integration: A Comprehensive Guide","summary":"Unlock the full potential of interactive maps in your React applications by delving into the seamless integration of OpenLayers.","content_html":"Maps have long been a fundamental element in web development, transforming static websites into dynamic, location-aware applications. Whether you're navigating through the bustling streets of a city, planning a route for your next adventure, or visualizing data in a geographic context, maps play a crucial role in enhancing user experiences.
\nOpenLayers, a robust open-source JavaScript library, stands at the forefront of enabling you to seamlessly integrate interactive maps into web applications. Its versatile and feature-rich nature makes it a go-to choice for projects requiring dynamic geospatial visualizations.
\nAt its core, OpenLayers provides a comprehensive set of tools to manipulate maps, overlay data, and interact with geographic information. Its capabilities extend from simple map displays to complex GIS applications, offering you the flexibility to create compelling and interactive mapping solutions. OpenLayers supports a modular and extensible architecture, allowing you to tailor your maps precisely to project requirements.
\nUnderstanding the key concepts within OpenLayers is fundamental to harnessing its full potential:
\nMap
\nIn OpenLayers, a map is a container for various layers and the view, serving as the canvas where geographical data is displayed. You can create multiple maps within an application, each with its own set of layers and views.
\nThe markup below could be used to create a <div> that contains your map.
<div id=\"map\" style=\"width: 100%; height: 400px\"></div>\n\nThe script below constructs a map that is rendered in the <div> above, using the map id of the element as a selector.
import Map from 'ol/Map.js';\n\nconst map = new Map({target: 'map'});\n\nAPI Doc: ol/Map
\nView
\nThe view in OpenLayers determines the center, zoom and projection of the map. It acts as the window through which users observe the geographic data. You can configure different views to represent varying perspectives or zoom levels within a single map.
import View from 'ol/View.js';\n\nmap.setView(new View({\n center: [0, 0],\n zoom: 2,\n}));\n\nThe projection determines the coordinate system of the center and the units for map resolution calculations. If not specified (like in the above snippet), the default projection is Spherical Mercator (EPSG:3857), with meters as map units.
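To make the default projection concrete, here is a small sketch of the Spherical Mercator forward projection math. This is a plain helper illustrating the arithmetic, not part of the OpenLayers API (in practice you would use `fromLonLat` from ol/proj):

```javascript
// Sketch of the EPSG:3857 (Spherical Mercator) forward projection:
// converts [lon, lat] in degrees to map units (meters).
const EARTH_RADIUS = 6378137; // WGS84 semi-major axis in meters

function lonLatToWebMercator(lon, lat) {
  // x is linear in longitude
  const x = (EARTH_RADIUS * Math.PI * lon) / 180;
  // y stretches toward the poles
  const y = EARTH_RADIUS * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI) / 360));
  return [x, y];
}

console.log(lonLatToWebMercator(0, 0)); // origin of the map, [0, ~0]
```

This is why `center: [0, 0]` in the snippet above points at the Gulf of Guinea: the view expects coordinates in meters, not degrees.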
\nThe available zoom levels are determined by maxZoom (default: 28), zoomFactor (default: 2) and maxResolution (default is calculated in such a way that the projection's validity extent fits in a 256x256 pixel tile).
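The relationship between those three options can be sketched as plain arithmetic (this is an illustration of the defaults described above, not an OpenLayers API call):

```javascript
// Resolution at a zoom level: maxResolution divided by zoomFactor^zoom.
function resolutionForZoom(maxResolution, zoomFactor, zoom) {
  return maxResolution / Math.pow(zoomFactor, zoom);
}

// Default maxResolution for EPSG:3857: the projection's validity extent
// (~40075016.686 m wide) fits into a single 256x256 pixel tile.
const maxResolution = 40075016.68557849 / 256; // ~156543.03 m/px

console.log(resolutionForZoom(maxResolution, 2, 0)); // ~156543.03 m/px at zoom 0
console.log(resolutionForZoom(maxResolution, 2, 10)); // ~152.87 m/px at zoom 10
```

Each zoom step with the default `zoomFactor` of 2 therefore halves the resolution (doubles the detail).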
API Doc: ol/View
\nSource
\nSources provide the data for layers. OpenLayers supports different sources, including Tile sources for raster data, Vector sources for vector data, and Image sources for static images. These sources can fetch data from various providers or be customized to handle specific data formats.
\nTo get remote data for a layer you can use the ol/source subclasses.
import OSM from 'ol/source/OSM.js';\n\nconst source = new OSM();\n\nAPI Doc: ol/source.
\nLayer
\nLayers define the visual content of the map. OpenLayers supports various layer types, such as Tile layers for raster data, Vector layers for vector data, and Image layers for rendering images. Layers can be stacked to combine different types of information into a single, coherent map.
\nol/layer/Tile - Renders sources that provide tiled images in grids that are organized by zoom levels for specific resolutions.\nol/layer/Image - Renders sources that provide map images at arbitrary extents and resolutions.\nol/layer/Vector - Renders vector data client-side.\nol/layer/VectorTile - Renders data that is provided as vector tiles.\n\nimport TileLayer from 'ol/layer/Tile.js';\n\n// ...\nconst layer = new TileLayer({source: source});\nmap.addLayer(layer);\n\nAPI Doc: ol/layer.
\nTo start your journey into the world of interactive maps with OpenLayers and React, the first step is to install OpenLayers using your preferred package manager – npm or yarn. Open a terminal and execute one of the following commands:
\nnpm install ol\n# or\nyarn add ol\n\nThis command fetches the latest version of OpenLayers and installs it as a dependency in your project. With the library now available, you're ready to embark on the next steps of integrating OpenLayers with React.
\nNow that OpenLayers is part of your project, the next crucial step is to create a React component that will serve as the container for your interactive map.
\nIf you try to render the map before the component has mounted (meaning outside of useEffect), like in the following example, you will get an error.
\nconst MapComponent = () => {\n const mapRef = useRef()\n\n // Incorrect: Rendering content before the component has mounted\n const map = new Map({\n target: mapRef.current\n ...\n })\n return <div ref={mapRef} style={{ width: '100%', height: '400px' }}></div>;\n};\n\nSolution:\nEnsure that you only render content when the component has properly mounted. You can use lifecycle methods like componentDidMount in class components or useEffect in functional components.
const MapComponent = () => {\n const mapRef = useRef()\n\n useEffect(() => {\n // Code here runs after the component has mounted\n const map = new Map({\n target: mapRef.current,\n ...\n })\n return () => map.setTarget(undefined)\n }, []);\n\n return <div ref={mapRef} style={{ width: '100%', height: '400px' }}></div>;\n};\n\nThe returned function is responsible for cleaning up the map's resources when the component unmounts.
\nSo a basic OpenLayers React example could look like the following:
\n// MapComponent.js\nimport React, { useEffect, useRef } from \"react\"\nimport { Map, View } from \"ol\"\nimport TileLayer from \"ol/layer/Tile\"\nimport OSM from \"ol/source/OSM\"\nimport \"ol/ol.css\"\n\nfunction MapComponent() {\n const mapRef = useRef(null)\n\n useEffect(() => {\n const osmLayer = new TileLayer({\n preload: Infinity,\n source: new OSM(),\n })\n\n const map = new Map({\n target: mapRef.current,\n layers: [osmLayer],\n view: new View({\n center: [0, 0],\n zoom: 0,\n }),\n })\n return () => map.setTarget(undefined)\n }, [])\n\n return (\n <div\n style={{ height: \"300px\", width: \"100%\" }}\n ref={mapRef}\n className=\"map-container\"\n />\n )\n}\n\nexport default MapComponent\n\nIn this example, the MapComponent initializes an OpenLayers map with a simple OpenStreetMap layer and the useEffect hook ensures that the map is created when the component mounts.
To ensure the correct styling and functionality of OpenLayers, it's crucial to import the necessary CSS and modules. In the MapComponent.js file, notice the import statement for the OpenLayers CSS:
import 'ol/ol.css'; // Import OpenLayers CSS\n\nThis line imports the essential stylesheets required for OpenLayers to render properly. \nAdditionally, other modules from OpenLayers, such as Map, View, TileLayer, and OSM, are imported to create the map instance and layers.
\nBy following these steps, you've successfully set up a basic React component housing an OpenLayers map. You're now ready to delve deeper into the capabilities of OpenLayers and explore advanced features for creating dynamic and interactive maps within your React applications.
\nI also created two examples for React and OpenLayers:
\nMarkers, popups, and custom overlays enhance the visual storytelling capabilities of a map, providing users with valuable context. OpenLayers simplifies the process of adding these elements:
\n

Here's a simplified example demonstrating the addition of a marker with a popup:
\n// MarkerPopupMap.js\nimport { useEffect, useRef } from \"react\"\nimport \"ol/ol.css\"\nimport Map from \"ol/Map\"\nimport View from \"ol/View\"\nimport Overlay from \"ol/Overlay\"\nimport OSM from \"ol/source/OSM\"\nimport { toLonLat } from \"ol/proj.js\"\nimport { toStringHDMS } from \"ol/coordinate.js\"\nimport { Icon, Style } from \"ol/style.js\"\nimport Feature from \"ol/Feature.js\"\nimport { Vector as VectorSource } from \"ol/source.js\"\nimport { Tile as TileLayer, Vector as VectorLayer } from \"ol/layer.js\"\nimport Point from \"ol/geom/Point.js\"\n\nconst MarkerPopupMap = () => {\n const mapRef = useRef()\n const popupRef = useRef()\n\n const osm = new TileLayer({\n preload: Infinity,\n source: new OSM(),\n })\n\n const iconFeature = new Feature({\n geometry: new Point([0, 0]),\n name: \"Null Island\",\n population: 4000,\n rainfall: 500,\n })\n\n const iconStyle = new Style({\n image: new Icon({\n anchor: [0.5, 46],\n anchorXUnits: \"fraction\",\n anchorYUnits: \"pixels\",\n src: \"https://openlayers.org/en/latest/examples/data/icon.png\",\n }),\n })\n\n iconFeature.setStyle(iconStyle)\n\n const vectorSource = new VectorSource({\n features: [iconFeature],\n })\n\n const vectorLayer = new VectorLayer({\n source: vectorSource,\n })\n\n useEffect(() => {\n const overlay = new Overlay({\n element: popupRef.current,\n autoPan: {\n animation: {\n duration: 250,\n },\n },\n })\n\n const map = new Map({\n target: mapRef.current,\n layers: [osm, vectorLayer],\n view: new View({\n center: [0, 0],\n zoom: 3,\n }),\n overlays: [overlay],\n })\n\n /**\n * Add a click handler to the map to render the popup.\n */\n map.on(\"singleclick\", function (evt) {\n // Get the coordinates of the click\n const coordinate = evt.coordinate;\n const hdms = toStringHDMS(toLonLat(coordinate));\n\n if (popupRef.current) {\n popupRef.current.innerHTML = `<p>You clicked here:</p><code>` + hdms + `</code>`;\n }\n\n // Show the popup at the clicked position\n overlay.setPosition(coordinate)\n })\n\n return () => map.setTarget(undefined)\n }, [])\n\n return (\n <div>\n <div ref={mapRef} style={{ width: \"100%\", height: \"400px\" }} />\n <div ref={popupRef} className=\"ol-popup\" style={popupStyle} />\n </div>\n )\n}\n\nconst popupStyle = {\n position: \"absolute\",\n backgroundColor: \"white\",\n padding: \"5px\",\n borderRadius: \"5px\",\n border: \"1px solid black\",\n transform: \"translate(-50%, -100%)\",\n pointerEvents: \"none\",\n width: \"220px\",\n color: \"black\"\n};\n\nexport default MarkerPopupMap\n\nClick anywhere on the map to create a popup:\n
Interactive maps come to life when you handle events and user interactions effectively. OpenLayers simplifies this process by providing robust event handling mechanisms. Consider the following example demonstrating how to capture a click event on the map:
\n // Handle a click event on the map\n map.on('click', (event) => {\n const clickedCoordinate = event.coordinate;\n console.log('Clicked Coordinate:', clickedCoordinate);\n });\n\nThis example demonstrates OpenLayers' event handling by logging the coordinates of a click event on the map. You can extend this functionality to respond to various user interactions, such as dragging, zooming, or even custom gestures.
\nuseState for Managing State: Use the useState hook to manage state within the React component. This is particularly useful for dynamic changes to the map, such as updating the center or zoom level based on user interactions.
\nconst [mapCenter, setMapCenter] = useState([0, 0]);\n\n// Update the map's center based on user interaction\nconst handleMapInteraction = (event) => {\n const newCenter = event.map.getView().getCenter();\n setMapCenter(newCenter);\n};\n\nAdding Vector Layers and Working with GeoJSON Data
\nVector layers in OpenLayers allow you to display and interact with vector data, opening up possibilities for intricate and detailed map representations. Leveraging GeoJSON, a popular format for encoding geographic data, is a common practice. Below is an example of incorporating a vector layer with GeoJSON data into a React component:
\n// VectorLayerMap.js\nimport { useEffect, useRef } from \"react\"\nimport \"ol/ol.css\"\nimport Map from \"ol/Map\"\nimport View from \"ol/View\"\nimport TileLayer from \"ol/layer/Tile\"\nimport OSM from \"ol/source/OSM\"\nimport VectorLayer from \"ol/layer/Vector\"\nimport VectorSource from \"ol/source/Vector\"\nimport GeoJSON from \"ol/format/GeoJSON\"\nimport {getCenter} from 'ol/extent';\n\nconst VectorLayerMap = () => {\n const mapRef = useRef()\n\n // read geojson feature\n const geoJSONFeatures = new GeoJSON().readFeatures(geojsonObject)\n\n // create vector source\n const vectorSource = new VectorSource({\n features: geoJSONFeatures,\n })\n\n // create vector layer with source\n const vectorLayer = new VectorLayer({\n source: vectorSource,\n })\n\n // default view\n const view = new View({\n center: [0, 0],\n zoom: 2,\n })\n\n useEffect(() => {\n const map = new Map({\n target: mapRef.current,\n layers: [\n new TileLayer({\n source: new OSM(),\n }),\n vectorLayer,\n ],\n view: view\n })\n\n // fit view to geometry of geojson feature with padding\n view.fit(geoJSONFeatures[0].getGeometry().getExtent(), { padding: [100, 100, 100, 100]});\n\n return () => map.setTarget(undefined)\n }, [])\n\n return (\n <div\n ref={mapRef}\n style={{ position: \"relative\", width: \"100%\", height: \"400px\" }}\n ></div>\n )\n}\n\nexport default VectorLayerMap\n\nconst geojsonObject = {\n type: \"Feature\",\n geometry: {\n type: \"MultiLineString\",\n coordinates: [\n [\n [-1e6, -7.5e5],\n [-1e6, 7.5e5],\n ],\n [\n [1e6, -7.5e5],\n [1e6, 7.5e5],\n ],\n [\n [-7.5e5, -1e6],\n [7.5e5, -1e6],\n ],\n [\n [-7.5e5, 1e6],\n [7.5e5, 1e6],\n ],\n ],\n },\n}\n\n1. Addressing Rendering Performance Concerns:
\nEfficient rendering is paramount in any web application, and integrating OpenLayers with React requires careful consideration of performance concerns. Here are some strategies to address rendering performance:
\nDebouncing and Throttling: When handling events that trigger frequent updates, such as map movements or zoom changes, implement debouncing or throttling techniques. This prevents excessive re-renders and ensures that updates are processed at a controlled rate.
Batched State Updates: Use React's setState batching mechanism to group multiple state updates into a single render cycle. This reduces the number of renders triggered by multiple state changes, resulting in a more efficient rendering process.
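As a sketch of the debouncing idea, here is a minimal hand-rolled helper (an assumption on my part, not an OpenLayers or React API): a burst of calls collapses into a single trailing call once the events stop arriving for a given delay.

```javascript
// Minimal debounce helper: collapses rapid repeated calls into one
// trailing call after `delayMs` of silence. Useful for expensive
// handlers attached to frequent map events like 'pointermove'.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer); // cancel the previously scheduled call
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Hypothetical usage with an OpenLayers map instance named `map`:
// map.on("pointermove", debounce((evt) => {
//   console.log("cursor at", evt.coordinate);
// }, 200));
```

Throttling works the same way in spirit, but guarantees at most one call per interval instead of waiting for silence; pick whichever matches how fresh the UI needs to be.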
2. Implementing Lazy Loading for Map Components:
\nTo enhance overall application performance, especially in scenarios where maps are not initially visible or are part of larger applications, consider implementing lazy loading for map components. This ensures that the OpenLayers library and associated map components are only loaded when needed.
\nUse React.lazy to load OpenLayers and map components lazily. This approach allows you to split your code into smaller chunks that are loaded on demand, reducing the initial page load time.\n\n// Example using React.lazy\nconst LazyLoadedMap = React.lazy(() => import('./LazyLoadedMap'));\n\nconst App = () => (\n <div>\n {/* Other components */}\n <React.Suspense fallback={<div>Loading...</div>}>\n <LazyLoadedMap />\n </React.Suspense>\n </div>\n);\n\n3. Memoization Techniques Using React Hooks:
\nMemoization is a powerful technique to optimize expensive calculations and prevent unnecessary renders. React provides hooks like useMemo and useCallback for effective memoization.
useMemo: Use useMemo to memoize the result of a computation and ensure that it is only recalculated when dependencies change. This is particularly useful when dealing with derived data or complex computations within your map components.
const MyMapComponent = ({ center, zoom }) => {\n // expensiveComputation is a placeholder for your own costly derivation;\n // useMemo re-runs it only when center or zoom change\n const memoizedData = React.useMemo(() => expensiveComputation(center, zoom), [center, zoom]);\n\n // Component logic using memoizedData...\n};\n\nuseCallback: When passing functions as props to child components, use useCallback to memoize those functions. This ensures that the same function reference is maintained across renders unless its dependencies change.
const MyMapComponent = ({ onMapClick }) => {\n const handleClick = React.useCallback(() => {\n // Handle map click...\n onMapClick();\n }, [onMapClick]);\n\n // Component logic using handleClick...\n};\n\nThese practices contribute to a more responsive and optimized integration of OpenLayers within React applications, enhancing the overall user experience.
\nFor additional inspiration and examples, explore the OpenLayers API Documentation. You can also find valuable examples specific to React and OpenLayers at https://codesandbox.io/examples/package/react-openlayers.
\nResources:
\n","date_published":"2024-02-22T14:18:16.469Z","date_modified":"2026-02-04T22:39:03.494Z","tags":["web-mapping","open-layers","react"],"image":"https://mxd.codes/content/posts/published/how-to-create-a-web-map-with-open-layers-and-react/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-create-a-web-map-with-open-layers-and-react/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/geography-and-gis-blogs","url":"https://mxd.codes/articles/geography-and-gis-blogs","title":"Geography and GIS Blogs","summary":"Here you will find a list of interesting and informative geographic and gis blogs.","content_html":"Of course, to find out new things, I take a look at one or the other website that deals with GIS Or geoinformatics in general. You can find them here:
\nMaps and GIS by Caitlin Dempsey Morais. She has been blogging about GIS for more than 20 years.
\n\nA blog about GIS and geography.
\n\nAnita Graser's blog about QGIS, open source, analysis and simulation.
\n\nHow does location localization affect us?
\nhttps://www.geospatialworld.net
\nBlog about GIS, geodata and everything that goes with it.
\n\nGIStimes covers everything that happens in the geodata market.
\n\nGIS news and articles about GNSS, Big Data, Addressing, BIM, and Smart Cities.
\nhttps://www.gis-professional.com/news
\nhttp://geospatial-solutions.com/
\nGoogle Maps blog.
\nhttps://www.blog.google/products/maps/
\nSaaS provider CartoDB also runs a very interesting GIS blog.
\n\nA Reddit community about geographic information systems.
\n\nGeodata, analysis, programming.
\nhttps://www.benjaminspaulding.com/
\nGIS and technology news for mapping experts.
\n\nEsri's blog.
\nhttps://www.esri.com/about/newsroom/blog
\nGIS themes for .NET developers.
\n\nThere is also a much larger list of links to GIS Blogs on Wiki.GIS (http://wiki.gis.com/wiki/index.php/ListofGIS-related_Blogs). By the way, Wiki.GIS is a very extensive GIS encyclopedia.
","date_published":"2024-02-22T12:05:03.422Z","date_modified":"2024-02-22T14:15:24.357Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/geography-and-gis-blogs/cover.png","banner_image":"https://mxd.codes/content/posts/published/geography-and-gis-blogs/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/docker-ci-cd-for-nextjs-with-github-actions","url":"https://mxd.codes/articles/docker-ci-cd-for-nextjs-with-github-actions","title":"Dockerizing a Next.js Application with GitHub Actions","summary":"In this article, we'll explore how to Dockerize a Next.js application and automate its deployment using GitHub Actions, thereby simplifying the deployment workflow and enhancing development productivity.","content_html":"In this article, we'll explore how to Dockerize a Next.js application and automate its deployment using GitHub Actions, thereby simplifying the deployment workflow and enhancing development productivity.
\nBefore we dive into Dockerizing our Next.js application and setting up GitHub Actions for deployment, ensure you have the following prerequisites:
\nDocker allows you to package your application and its dependencies into a container, ensuring consistency across different environments. Start by creating a Dockerfile in the root of your Next.js project:
FROM node:18-alpine AS base\n\n# Install dependencies only when needed\nFROM base AS deps\n# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.\nRUN apk add --no-cache libc6-compat\nWORKDIR /app\n\n# Install dependencies based on the preferred package manager\nCOPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./\nRUN \\\n if [ -f yarn.lock ]; then yarn --frozen-lockfile; \\\n elif [ -f package-lock.json ]; then npm ci; \\\n elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \\\n else echo \"Lockfile not found.\" && exit 1; \\\n fi\n\n\n# Rebuild the source code only when needed\nFROM base AS builder\nWORKDIR /app\nCOPY --from=deps /app/node_modules ./node_modules\nCOPY . .\n\n# Next.js collects completely anonymous telemetry data about general usage.\n# Learn more here: https://nextjs.org/telemetry\n# Uncomment the following line in case you want to disable telemetry during the build.\n# ENV NEXT_TELEMETRY_DISABLED 1\n\nRUN \\\n if [ -f yarn.lock ]; then yarn run build; \\\n elif [ -f package-lock.json ]; then npm run build; \\\n elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm run build; \\\n else echo \"Lockfile not found.\" && exit 1; \\\n fi\n\n# Production image, copy all the files and run next\nFROM base AS runner\nWORKDIR /app\n\nENV NODE_ENV production\n# Uncomment the following line in case you want to disable telemetry during runtime.\n# ENV NEXT_TELEMETRY_DISABLED 1\n\nRUN addgroup --system --gid 1001 nodejs\nRUN adduser --system --uid 1001 nextjs\n\nCOPY --from=builder /app/public ./public\n\n# Set the correct permission for prerender cache\nRUN mkdir .next\nRUN chown nextjs:nodejs .next\n\n# Automatically leverage output traces to reduce image size\n# https://nextjs.org/docs/advanced-features/output-file-tracing\nCOPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./\nCOPY 
--from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static\n\nUSER nextjs\n\nEXPOSE 3000\n\nENV PORT 3000\n# bind to all interfaces\nENV HOSTNAME \"0.0.0.0\"\n\n# server.js is created by next build from the standalone output\n# https://nextjs.org/docs/pages/api-reference/next-config-js/output\nCMD [\"node\", \"server.js\"]\n\nThis is the default Dockerfile provided by Vercel: it sets up a Node.js environment, installs dependencies, builds the Next.js application and exposes port 3000.
\nYou have to ensure you are using output: \"standalone\" in your next.config.js.
const nextConfig = {\n output: \"standalone\",\n}\n\nThe standalone output mode in Next.js builds a self-contained application that includes all files, libraries and dependencies required to run the application. This contrasts with the default build output, which expects the full node_modules directory to be available at runtime and therefore produces much larger deployment artifacts.
Before proceeding further, it's crucial to test our Dockerized Next.js application locally to ensure everything functions as expected. Open a terminal in the project directory and execute the following commands:
\n# Build the Docker image\ndocker build -t my-nextjs-app .\n\n# Run the Docker container\ndocker run -p 3000:3000 my-nextjs-app\n\nVisit http://localhost:3000 in your web browser to verify that your Next.js application is running within the Docker container.
GitHub Actions automates the CI/CD pipeline directly from your GitHub repository. Basically, the pipeline looks like this:
\nname: Build and Deploy Next.js\n\non:\n push:\n branches:\n - main # Triggers when code is pushed to the main branch\n\nOnce the workflow is triggered, the following steps occur:
\njobs:\n build:\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout repository\n uses: actions/checkout@v3\n\n - name: Install dependencies\n run: npm install\n\n - name: Build Next.js app\n run: npm run build\n\n - name: Run tests\n run: npm run test\n\n - name: Build Docker Image\n run: docker build -t myapp:latest .\n\n - name: Log in to Docker Hub\n run: echo \"${{ secrets.DOCKER_PASSWORD }}\" | docker login -u \"${{ secrets.DOCKER_USERNAME }}\" --password-stdin\n\n - name: Push\n\nPutting it together: Create a .github/workflows/pipeline.yml file with the following content if you want to publish your Docker images to both Docker Hub and GitHub. If you only want to use one of them, remove the corresponding login step and tags.
name: Docker Build & Publish\n\non:\n push:\n branches: [main]\n\njobs:\n push_to_registries:\n name: Push Docker image to multiple registries\n runs-on: ubuntu-latest\n permissions:\n packages: write\n contents: read\n attestations: write\n id-token: write\n\n steps:\n - name: Check out repository code 🛎️\n uses: actions/checkout@v4\n\n - name: Set up Docker Buildx 🚀\n uses: docker/setup-buildx-action@v3\n\n - name: Login to Docker Hub 🚢\n uses: docker/login-action@v3\n with:\n username: ${{ secrets.DOCKER_HUB_USERNAME}}\n password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN}}\n\n - name: Log in to the Container registry\n uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1\n with:\n registry: ghcr.io\n username: ${{ github.actor }}\n password: ${{ secrets.GITHUB_TOKEN }}\n\n - name: Build and push 🏗️\n uses: docker/build-push-action@v2\n with:\n context: .\n file: ./Dockerfile\n push: true\n tags: |\n ${{ secrets.DOCKER_HUB_USERNAME}}/{docker_repository}:${{ github.sha }}\n ${{ secrets.DOCKER_HUB_USERNAME}}/{docker_repository}:latest\n ghcr.io/${{ github.repository }}:${{ github.sha }}\n ghcr.io/${{ github.repository }}:latest\n\nIf you want to publish to Docker Hub, you have to store the secrets ${{ secrets.DOCKER_HUB_USERNAME }} and ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }} in your repository under Settings -> Secrets and variables -> Actions -> Repository secrets.
This workflow will build your container image from your GitHub repository and push it to your container registries with two tags:
\n:latest and :{github.sha}\n\nIn case you need some environment variables, you have to adjust the Dockerfile with additional parameters. To use environment variables that are stored as secrets in your repository, you need to mount and export each of them before the npm run build command, like the following:
RUN --mount=type=secret,id=NEXT_PUBLIC_CMS_URL \\\n export NEXT_PUBLIC_CMS_URL=$(cat /run/secrets/NEXT_PUBLIC_CMS_URL) && \\\n npm run build\n\nYou can have a look at the Dockerfile for my site for an example: personal website Dockerfile.
\nYou will also need to modify the Build and push step in the workflow like this:
- name: Build and push 🏗️\n uses: docker/build-push-action@v2\n with:\n context: .\n file: ./Dockerfile\n push: true\n tags: |\n ${{ secrets.DOCKER_HUB_USERNAME}}/personal-website:${{ github.sha }}\n ${{ secrets.DOCKER_HUB_USERNAME}}/personal-website:latest\n secrets: |\n \"NEXT_PUBLIC_CMS_URL=${{ secrets.NEXT_PUBLIC_CMS_URL }}\"\n\nWith this setup, every push to the main branch of your GitHub repository triggers the CI/CD pipeline. Continuous Integration and Continuous Deployment for Dockerized Next.js applications provide a streamlined and efficient development process, ensuring that your application is always in a deployable state. By combining GitHub Actions with Docker, you can automate the deployment process and focus on building and improving your Next.js application.
","date_published":"2024-02-06T20:00:01.998Z","date_modified":"2025-04-16T09:44:56.069Z","tags":["docker","ci-cd","next-js","react"],"image":"https://mxd.codes/content/posts/published/docker-ci-cd-for-nextjs-with-github-actions/cover.png","banner_image":"https://mxd.codes/content/posts/published/docker-ci-cd-for-nextjs-with-github-actions/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/setting-up-map-proxy-with-docker-and-serving-cached-tiles-via-nginx","url":"https://mxd.codes/articles/setting-up-map-proxy-with-docker-and-serving-cached-tiles-via-nginx","title":" Setting Up MapProxy with Docker and Serving Cached Tiles via Nginx","summary":"MapProxy is a powerful open-source proxy for geospatial data that allows for efficient caching and serving of map tiles. Combining MapProxy with Docker and Nginx can provide a scalable and easily manageable solution for serving cached map tiles.","content_html":"MapProxy is a powerful open-source proxy for geospatial data that allows for efficient caching and serving of map tiles. Combining MapProxy with Docker and Nginx can provide a scalable and easily manageable solution for serving cached map tiles. This guide will walk you through the process of setting up MapProxy using Docker and configuring Nginx to serve cached tiles.
\nPrerequisites:
\nIf you don't meet these prerequisites yet, I recommend having a look at the following guides first:
\n\nStart by creating a Network in Docker with:
\nsudo docker network create nginx\n\nBy attaching the network nginx to both the Nginx container and the MapProxy container, the two containers can communicate with each other without exposing ports on the server.
\nCreate a Docker Compose file docker-compose.yml with the following content:
networks:\n default:\n external: true\n name: nginx\n\nservices:\n mapproxy:\n image: kartoza/mapproxy\n container_name: mapproxy\n restart: always\n environment:\n PRODUCTION: true\n PROCESSES: 4\n CHEAPER: 2\n THREADS: 8\n MULTI_MAPPROXY: true\n MULTI_MAPPROXY_DATA_DIR: /multi_mapproxy/configurations\n ALLOW_LISTING: true\n volumes:\n - /data/containers/mapproxy/data:/multi_mapproxy\n\nSave the file and run:
\ndocker-compose up -d\n\nThis will pull the MapProxy Docker image and start a container. MapProxy listens on port 8080 inside the container; thanks to the shared network, Nginx can reach it at http://mapproxy:8080 without any published ports.
Afterwards you need to create a Docker Compose file docker-compose.yml for Nginx. Here's an example:
version: '3'\n\nnetworks:\n default:\n external: true\n name: nginx\n\nservices:\n nginx:\n image: nginx:latest\n container_name: nginx\n restart: always\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n ## Config\n - /data/containers/nginx/config/:/etc/nginx/\n ## SSL\n - /etc/letsencrypt/:/etc/letsencrypt/\n - /etc/ssl/:/etc/ssl/\n ## Logs\n - /data/containers/nginx/logs/:/var/log/nginx\n ## Cache\n - /data/containers/nginx/cache:/var/cache/nginx\n\nThen you can create a virtual host for mapproxy under /data/containers/nginx/config/sites-available/mapproxy with:
sudo nano /data/containers/nginx/config/sites-available/mapproxy\n\nCopy and paste the following virtual host configuration for your MapProxy container:
\nupstream mapproxy_upstream {\n server mapproxy:8080;\n}\n\nserver {\n\n server_name mapproxy.domain.com;\n\n listen 80;\n\n ## Mapproxy default\n location / {\n proxy_pass http://mapproxy_upstream/;\n proxy_set_header Host $http_host;\n }\n}\n\nAfter you have created your configuration, create a symlink to /data/containers/nginx/config/sites-enabled/ with:
sudo ln -s /data/containers/nginx/config/sites-available/mapproxy /data/containers/nginx/config/sites-enabled/\n\nFor your domain to resolve, you need to create an A-record for the domain under which you want to publish MapProxy, pointing to your server's IP.\nThen restart your Nginx container, and you should be able to access http://mapproxy.domain.com, where you will be greeted by your MapProxy instance.
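As a side note, the symlink flag is a lowercase -s (symbolic link). A quick sanity check of this step in a throwaway directory looks like this (the /tmp paths are scratch examples, not the real Nginx config directory):

```shell
# Demonstrate the sites-available -> sites-enabled symlink with scratch paths
mkdir -p /tmp/nginx-demo/sites-available /tmp/nginx-demo/sites-enabled
echo "server {}" > /tmp/nginx-demo/sites-available/mapproxy
# -s = symbolic link, -f = replace an existing link on re-run
ln -sf /tmp/nginx-demo/sites-available/mapproxy /tmp/nginx-demo/sites-enabled/
readlink /tmp/nginx-demo/sites-enabled/mapproxy
```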
\n
So far you have set up Nginx and MapProxy with Docker, and with the default configuration MapProxy will cache all tiles it serves.\nHowever, this caching process has a limitation: each time a tile is requested, MapProxy saves it to its cache, but by default it won't check whether the cached tile is stale and a newer tile could be served.
\nFor example, I serve tiles from MapProxy that visualize all the locations I have ever been to. Since this data probably changes every day, I don't want to cache the tiles indefinitely.
\nYou can, however, force MapProxy to refresh tiles from the source while serving them if they are found to be expired.\nThe validity conditions are the same as for seeding:
\n#Explanation\n # absolute as ISO time\n refresh_before:\n time: 2010-10-21T12:35:00\n # relative from the time of the tile request\n refresh_before:\n weeks: 1\n days: 7\n hours: 4\n minutes: 15\n # modification time of a given file\n refresh_before:\n mtime: path/to/file\n\nSo, to stay with my example, I added:
\nrefresh_before:\n days: 1\n\nThis way MapProxy will refresh tiles every day. This of course only makes sense for data where you always want the latest state.
\nSo edit your mapproxy.yaml like this:
services:\n demo:\n tms:\n use_grid_names: true\n # origin for /tiles service\n origin: 'nw'\n kml:\n use_grid_names: true\n wmts:\n wms:\n md:\n title: MapProxy WMS Proxy\n abstract: This is a minimal MapProxy example.\n\nlayers:\n - name: osm\n title: Omniscale OSM WMS - osm.omniscale.net\n sources: [osm_cache]\n\ncaches:\n osm_cache:\n grids: [webmercator]\n sources: [osm_wms]\n refresh_before:\n days: 1\n\nsources:\n osm_wms:\n type: wms\n req:\n url: https://maps.omniscale.net/v2/demo/style.default/service?\n layers: osm\n\ngrids:\n webmercator:\n base: GLOBAL_WEBMERCATOR\n\nglobals:\n\nSave the file and restart your MapProxy container.
\nNow MapProxy will refresh cached tiles once a day.
\nVisit http://mapproxy.domain.com in your browser, and you should see MapProxy serving tiles through Nginx.
By following these steps, you've successfully set up MapProxy with Docker and configured Nginx to serve cached map tiles. This scalable solution allows for efficient geospatial data delivery with the added benefit of easy container management. Adjust the configurations based on your specific requirements and integrate this setup into your mapping projects.
\nIf you decide to make use of MultiMapProxy (scroll down to the MultiMapProxy section) you can simply create more configuration files for MapProxy in /data/containers/mapproxy/data/configurations and add cache path and location blocks to your existing Nginx configuration for MapProxy.
🏖️💻
","date_published":"2024-01-25T09:47:34.071Z","image":"https://mxd.codes/content/photos/gran-canaria-workation/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-1.jpg","mime_type":"image/jpeg","title":"cover_PXL_20231230_180051640_8d02a6095e.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-2.jpg","mime_type":"image/jpeg","title":"cover_PXL_20240107_112559115_e2495a1030.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-3.jpg","mime_type":"image/jpeg","title":"cover_PXL_20240107_113638116_43cd01625a.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-4.jpg","mime_type":"image/jpeg","title":"cover_PXL_20240107_114045885_PORTRAIT_ec64fb3806.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-5.jpg","mime_type":"image/jpeg","title":"cover_PXL_20240107_115500999_bebaf4443d.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-6.jpg","mime_type":"image/jpeg","title":"cover_PXL_20240107_120149906_9dee5cf19c.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-7.jpg","mime_type":"image/jpeg","title":"cover_PXL_20231230_181821247_423dcc90a2.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-8.jpg","mime_type":"image/jpeg","title":"cover_PXL_20240103_171155620_1bbf70aa8a.jpg"},{"url":"https://mxd.codes/content/photos/gran-canaria-workation/photo-9.jpg","mime_type":"image/jpeg","title":"cover_PXL_20231229_181431459_f69af010ab.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/optimizing-images-for-next-js-sites-with-imgproxy-and-docker","url":"https://mxd.codes/articles/optimizing-images-for-next-js-sites-with-imgproxy-and-docker","title":"Optimizing images for Next.js sites with imgproxy and docker","summary":"How to transform and optimize images with imgproxy hosted with docker for your Next.js 
application.","content_html":"The Next.js Image Component (next/image) was introduced in Next.js 10.0.0 to optimize images and improve the performance of your web application.
\nWhen you use the Next.js Image Component, it automatically optimizes and serves images in modern image formats that improve the performance of your web application. It supports various image sources, such as local images, images from the web, and third-party sources.
\nHowever, you cannot transform images, e.g. crop them, which is why I was looking for a solution that lets my personal website mxd.codes resize images to my needs.
\nimgproxy is an open-source image processing server designed to simplify the resizing, cropping, and manipulation of images on the fly. It is often used as part of a web application's infrastructure to ensure efficient delivery of images with optimized sizes and quality.
\nKey features of imgproxy include:
\nOn-the-Fly Image Processing: Imgproxy allows you to resize, crop, rotate, and perform other image manipulations on the fly, based on the URL parameters. This enables efficient delivery of images in various sizes and formats without having to store multiple versions of the same image.
Security: Imgproxy provides security features such as URL signature generation. This helps prevent unauthorized access and abuse of the image manipulation service.
Performance: Imgproxy is designed to be performant and can efficiently handle high loads of image processing requests.
Integration with Existing Storage: Imgproxy can be integrated with various storage solutions, including Amazon S3, Google Cloud Storage, and more.
While searching for a way to deploy imgproxy with Docker, I found an imgproxy Docker Compose project on GitHub, where I changed minor things like the volumes and the web server configuration.
\nYou can copy this docker-compose.yml file and paste it into Portainer or save it manually in a folder on your server.
version: '3'\n\n################################################################################\n# Ultra Image Server\n# A production grade image processing server setup powered by imgproxy and nginx\n#\n# Author: Mai Nhut Tan <shin@shin.company>\n# Copyright: 2021-2023 SHIN Company https://code.shin.company/\n# URL: https://shinsenter.github.io/docker-imgproxy/\n################################################################################\n\nnetworks:\n################################################################################\n default:\n driver: bridge\n\n\nservices:\n################################################################################\n web:\n image: nginx:alpine\n container_name: imgproxy-nginx\n restart: always\n volumes:\n - /data/containers/imgproxy:/var/www/html:ro\n - /etc/imgproxy/imgproxy-nginx.conf:/etc/nginx/conf.d/default.conf:ro\n ports:\n - 8080:80\n links:\n - imgproxy:imgproxy\n environment:\n NGINX_ENTRYPOINT_QUIET_LOGS: 1\n\n################################################################################\n imgproxy:\n restart: unless-stopped\n image: darthsim/imgproxy:${IMGPROXY_TAG:-latest}\n container_name: imgproxy_app\n security_opt:\n - no-new-privileges:true\n volumes:\n - /data/containers/imgproxy:/var/www/html:ro\n expose:\n - 8080\n healthcheck:\n test: [\"CMD\", \"imgproxy\", \"health\"]\n environment:\n ### See:\n ### https://docs.imgproxy.net/configuration/options\n\n ### log and debug\n IMGPROXY_LOG_LEVEL: \"warn\"\n IMGPROXY_ENABLE_DEBUG_HEADERS: \"false\"\n IMGPROXY_DEVELOPMENT_ERRORS_MODE: \"false\"\n IMGPROXY_REPORT_DOWNLOADING_ERRORS: \"false\"\n\n ### timeouts\n IMGPROXY_READ_TIMEOUT: 10\n IMGPROXY_WRITE_TIMEOUT: 10\n IMGPROXY_DOWNLOAD_TIMEOUT: 10\n IMGPROXY_KEEP_ALIVE_TIMEOUT: 300\n IMGPROXY_MAX_SRC_FILE_SIZE: 33554432 # 32MB\n IMGPROXY_MAX_SRC_RESOLUTION: 48\n\n ### image source\n IMGPROXY_TTL: 2592000 # client-side cache time is 30 days\n IMGPROXY_USE_ETAG: \"false\"\n IMGPROXY_SO_REUSEPORT: \"true\"\n 
IMGPROXY_IGNORE_SSL_VERIFICATION: \"true\"\n IMGPROXY_LOCAL_FILESYSTEM_ROOT: /home\n IMGPROXY_SKIP_PROCESSING_FORMATS: \"svg,webp,avif\"\n\n ### presets\n IMGPROXY_AUTO_ROTATE: \"true\"\n #IMGPROXY_WATERMARK_PATH: /home/noimage_thumb.jpg\n IMGPROXY_PRESETS: default=resizing_type:fit/gravity:sm,logo=watermark:0.5:soea:10:10:0.15,center_logo=watermark:0.3:ce:0:0:0.3\n\n ### compression\n IMGPROXY_STRIP_METADATA: \"true\"\n IMGPROXY_STRIP_COLOR_PROFILE: \"true\"\n IMGPROXY_FORMAT_QUALITY: jpeg=80,webp=70,avif=50\n IMGPROXY_JPEG_PROGRESSIVE: \"false\"\n IMGPROXY_PNG_INTERLACED: \"false\"\n IMGPROXY_PNG_QUANTIZATION_COLORS: 128\n IMGPROXY_PNG_QUANTIZE: \"false\"\n IMGPROXY_MAX_ANIMATION_FRAMES: 64\n IMGPROXY_GZIP_COMPRESSION: 0\n IMGPROXY_AVIF_SPEED: 8\n\n ### For URL signature\n IMGPROXY_KEY: IMGPROXY_KEY_KEY\n IMGPROXY_SALT: IMGPROXY_KEY_SALT\n IMGPROXY_SIGNATURE_SIZE: 32\n network_mode: \"host\" \n\nYou will also need a nginx-configuration file for imgproxy which should be saved to /etc/imgproxy/imgproxy-nginx.conf. Of course you can also store the file anywhere else but be sure to change the volume in the docker-compose.yml.
upstream upstream_imgproxy {\n server imgproxy:8080;\n keepalive 16;\n}\n\nserver {\n server_name _;\n\n location / {\n proxy_pass http://upstream_imgproxy;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_set_header Host $host;\n }\n\n}\n\nNow you can deploy the stack with
\ndocker-compose up -d --build --remove-orphans --force-recreate\n\nor on Portainer.
\nYour imgproxy instance should now be running on http://localhost:8080 and can already be used.
\n
But I wanted to integrate it into my personal site built with Next.js, so I also had to modify the Nginx configuration for my site.\nI took the existing configuration Nginx reverse proxy with caching for Next.js, adapted it for imgproxy and copied it to /etc/nginx/sites-available/default.
\n# Based on https://steveholgado.com/nginx-for-nextjs/\n\n# - /var/cache/nginx sets a directory to store the cached assets\n# - levels=1:2 sets up a two‑level directory hierarchy as file access speed can be reduced when too many files are in a single directory\n# - keys_zone=STATIC:10m defines a shared memory zone for cache keys named “STATIC” and with a size limit of 10MB (which should be more than enough unless you have thousands of files)\n# - inactive=7d is the time that items will remain cached without being accessed (7 days), after which they will be removed\n# - use_temp_path=off tells NGINX to write files directly to the cache directory and avoid unnecessary copying of data to a temporary storage area first\nproxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:10m inactive=7d use_temp_path=off;\n\nupstream nextjs_upstream {\n server localhost:3000;\n}\n\nupstream imgproxy_upstream {\n server localhost:8080;\n}\n\nserver {\n listen 80 default_server;\n\n server_name _;\n\n server_tokens off;\n\n gzip on;\n gzip_proxied any;\n gzip_comp_level 4;\n gzip_types text/css application/javascript image/svg+xml;\n\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_set_header Host $host;\n proxy_cache_bypass $http_upgrade;\n\n # Imgproxy paths can contain multiple slashes (e.g. 
local:///image/file.jpg)\n merge_slashes off;\n\n location /img/ {\n\n proxy_cache STATIC;\n\n proxy_pass http://imgproxy_upstream/;\n\n # For testing cache - remove before deploying to production\n add_header X-Cache-Status $upstream_cache_status;\n }\n\n location /_next/static {\n proxy_cache STATIC;\n proxy_pass http://nextjs_upstream;\n\n # For testing cache - remove before deploying to production\n add_header X-Cache-Status $upstream_cache_status;\n }\n\n location /static {\n proxy_cache STATIC;\n\n # Ignore cache control for Next.js assets from /static, re-validate after 60m\n proxy_ignore_headers Cache-Control;\n proxy_cache_valid 60m;\n\n proxy_pass http://nextjs_upstream;\n\n # For testing cache - remove before deploying to production\n add_header X-Cache-Status $upstream_cache_status;\n }\n\n location / {\n proxy_pass http://nextjs_upstream;\n }\n}\n\nWith this configuration, all requests under the path /img/ are proxied to the imgproxy instance and all other paths to my personal website.
You can test the configuration with sudo nginx -t and, when the test is successful, restart Nginx with sudo systemctl restart nginx.
Now requests to https://mxd.codes/img/ are routed to the imgproxy instance, while requests to https://mxd.codes reach my personal website.
\nThe last missing piece is a custom image loader for the Next.js site.
\nYou can configure a custom loaderFile in your next.config.js like the following:
images: {\n loader: \"custom\",\n loaderFile: \"./src/utils/loader.js\",\n}\n\nThis must point to a file relative to the root of your Next.js application. The file must export a default function that returns a string:
\nexport default function imgproxyLoader({ src, width, height, quality }) {\n\n const path =\n `/size:${width ? width : 0}:${height ? height : 0}` +\n `/resizing_type:fill` +\n (quality ? `/quality:${quality}` : \"\") +\n `/sharpen:0.5` +\n `/plain/${src}` +\n `@webp`\n\n const host = process.env.NEXT_PUBLIC_IMGPROXY_URL\n\n const imgUrl = `${host}/insecure${path}`\n\n return imgUrl\n}\n\nNow all images you serve with next/image will use your custom loader which will be using imgproxy to transform and optimize your images for your Next.js site.
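To see what such a loader emits, here is the same logic in isolation. The host value is hard-coded purely as a stand-in for process.env.NEXT_PUBLIC_IMGPROXY_URL, and the function name mirrors the article's loader:

```javascript
// Isolated sketch of the loader above; "https://img.example.com" is a
// placeholder for the real NEXT_PUBLIC_IMGPROXY_URL environment value.
function imgproxyLoader({ src, width, height, quality }, host = "https://img.example.com") {
  const path =
    `/size:${width ? width : 0}:${height ? height : 0}` +
    `/resizing_type:fill` +
    (quality ? `/quality:${quality}` : "") +
    `/sharpen:0.5` +
    `/plain/${src}` +
    `@webp`
  return `${host}/insecure${path}`
}

// Example: a 640px-wide request for a local asset
console.log(imgproxyLoader({ src: "/content/cover.png", width: 640, quality: 75 }))
```

Note that a src beginning with a slash produces a double slash after /plain/, which is exactly why the Nginx configuration above sets merge_slashes off.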
Recently I also started to deploy my personal site with Docker, so the whole docker-compose.yml now looks like the following, while the Nginx configuration file remains the same:
\nversion: \"3\"\n\nservices:\n nextjs:\n image: mxdcodes/personal-website:latest\n container_name: personal-website\n restart: always\n ports:\n - \"3000:3000\"\n environment:\n NODE_ENV: production\n network_mode: \"host\" \n\n imgproxy:\n restart: unless-stopped\n image: darthsim/imgproxy:${IMGPROXY_TAG:-latest}\n container_name: imgproxy_app\n security_opt:\n - no-new-privileges:true\n volumes:\n - /data/containers/imgproxy/www:/home:cached\n ports:\n - \"8080:8080\"\n healthcheck:\n test: [\"CMD\", \"imgproxy\", \"health\"]\n environment:\n ### See:\n ### https://docs.imgproxy.net/configuration/options\n\n ### options\n IMGPROXY_ALLOWED_SOURCES: https://mxd.codes/\n\n ### log and debug\n IMGPROXY_LOG_LEVEL: \"warn\"\n IMGPROXY_ENABLE_DEBUG_HEADERS: \"false\"\n IMGPROXY_DEVELOPMENT_ERRORS_MODE: \"false\"\n IMGPROXY_REPORT_DOWNLOADING_ERRORS: \"false\"\n\n ### timeouts\n IMGPROXY_READ_TIMEOUT: 10\n IMGPROXY_WRITE_TIMEOUT: 10\n IMGPROXY_DOWNLOAD_TIMEOUT: 10\n IMGPROXY_KEEP_ALIVE_TIMEOUT: 300\n IMGPROXY_MAX_SRC_FILE_SIZE: 33554432 # 32MB\n IMGPROXY_MAX_SRC_RESOLUTION: 48\n\n ### image source\n IMGPROXY_TTL: 2592000 # client-side cache time is 30 days\n IMGPROXY_USE_ETAG: \"false\"\n IMGPROXY_SO_REUSEPORT: \"true\"\n IMGPROXY_IGNORE_SSL_VERIFICATION: \"false\"\n IMGPROXY_LOCAL_FILESYSTEM_ROOT: /home\n IMGPROXY_SKIP_PROCESSING_FORMATS: \"svg,webp,avif\"\n\n ### presets\n IMGPROXY_AUTO_ROTATE: \"true\"\n #IMGPROXY_WATERMARK_PATH: /home/noimage_thumb.jpg\n IMGPROXY_PRESETS: default=resizing_type:fit/gravity:sm,logo=watermark:0.5:soea:10:10:0.15,center_logo=watermark:0.3:ce:0:0:0.3\n\n ### compression\n IMGPROXY_STRIP_METADATA: \"true\"\n IMGPROXY_STRIP_COLOR_PROFILE: \"true\"\n IMGPROXY_FORMAT_QUALITY: jpeg=80,webp=70,avif=50\n IMGPROXY_JPEG_PROGRESSIVE: \"false\"\n IMGPROXY_PNG_INTERLACED: \"false\"\n IMGPROXY_PNG_QUANTIZATION_COLORS: 128\n IMGPROXY_PNG_QUANTIZE: \"false\"\n IMGPROXY_MAX_ANIMATION_FRAMES: 64\n IMGPROXY_GZIP_COMPRESSION: 0\n IMGPROXY_AVIF_SPEED: 8\n\n 
### For URL signature\n IMGPROXY_KEY: KEY\n IMGPROXY_SALT: SALT\n IMGPROXY_SIGNATURE_SIZE: 32\n network_mode: \"host\" \n","date_published":"2024-01-12T12:14:29.108Z","date_modified":"2025-02-01T16:43:43.228Z","tags":["docker","next-js","cloud","selfhosted","react"],"image":"https://mxd.codes/content/posts/published/optimizing-images-for-next-js-sites-with-imgproxy-and-docker/cover.png","banner_image":"https://mxd.codes/content/posts/published/optimizing-images-for-next-js-sites-with-imgproxy-and-docker/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/usa-roadtrip","url":"https://mxd.codes/photos/usa-roadtrip","title":"USA Roadtrip","content_html":"","date_published":"2023-07-23T11:12:31.971Z","image":"https://mxd.codes/content/photos/usa-roadtrip/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-1.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230701_031551845_b167e3d5e0.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-2.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230623_122024152_23e98fa803.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-3.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230705_204610196_09aeb4abac.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-4.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230626_222900844_6324406675.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-5.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230625_164112783_abc47ccdf8.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-6.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230627_185234307_bb6e85914f.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-7.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230629_040505247_227be9aa9d.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-8.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230701_01180477
2_cb6ff5bd09.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-9.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230629_231727178_565d95855b.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-10.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230702_004836254_13d2b71c79.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-11.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230702_183556611_acbd3882af.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-12.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230619_140058494_b8fbb695cc.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-13.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230619_002545764_82f034c7cb.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-14.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230625_151604653_502f42decb.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-15.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230625_151646409_25b9aebec4.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-16.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230621_010200163_88c3eeb801.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-17.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230702_200434150_89f0ce71cc.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-18.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230705_182544393_8159193908.jpg"},{"url":"https://mxd.codes/content/photos/usa-roadtrip/photo-19.jpg","mime_type":"image/jpeg","title":"cover_PXL_20230704_174757943_dca62dc665.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/building-a-table-of-contents-toc-from-markdown-for-your-react-blog","url":"https://mxd.codes/articles/building-a-table-of-contents-toc-from-markdown-for-your-react-blog","title":"Building a Table of Contents (TOC) from markdown for your React 
blog","summary":"How to create a Table of Contents (TOC) from markdown for your React blog with Javascript without any third party dependencies.","content_html":"Since I store blog posts in a self-hosted version of strapi, I've been looking for a way to automatically generate a table of contents from Markdown for all posts in my Next.js site.
\nThe idea is that during the build process all headings are extracted from the article content (I use getStaticProps for all articles) and then displayed in a fixed position next to the content using a separate component.
\nAfter some research and trial and error I decided to use regex to extract the headers from the markdown text using the hash symbol.
\nSince links and code blocks in the markdown text can also contain hash symbols, which would be misinterpreted as headings, these are removed from the text first.
\n const regexReplaceCode = /(```.+?```)/gms\n const regexRemoveLinks = /\\[(.*?)\\]\\(.*?\\)/g\n\n const markdownWithoutLinks = markdown.replace(regexRemoveLinks, \"\")\n const markdownWithoutCodeBlocks = markdownWithoutLinks.replace(regexReplaceCode, \"\")\n\nThen, using the hash symbol, the headings h1 to h6 are filtered from the text and added to an array named titles.
const regXHeader = /#{1,6}.+/g\n const titles = markdownWithoutCodeBlocks.match(regXHeader)\n\nNext, using the headings, levels of headings, titles, and anchor links are created and added to an array toc so that the headings can later be nested with child headings and anchor links can be added. The anchor links can then be used to jump from the table of contents to a heading.
let globalID = 0\ntitles.forEach((tempTitle) => {\n const level = tempTitle.match(/#/g).length - 1\n const title = tempTitle.replace(/#/g, \"\").trim()\n const anchor = `#${title.replace(/ /g, \"-\").toLowerCase()}`\n if (level === 1) globalID += 1\n\n toc.push({\n level: level,\n id: globalID,\n title: title,\n anchor: anchor,\n })\n })\n\nThe array toc is returned, and I pass it, for example, as post.toc to the respective post, where post.toc in turn is passed as props to the ToC component.
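The snippets above can be assembled into one self-contained function. This is my own assembly of the article's pieces (names match the article where possible), assuming ATX-style headings on their own lines:

```javascript
// Sketch: full ToC extraction pipeline in one function
function getToc(markdown) {
  // 1. Strip links and fenced code blocks so stray "#" characters
  //    inside them are not mistaken for headings
  const regexReplaceCode = /(```.+?```)/gms
  const regexRemoveLinks = /\[(.*?)\]\(.*?\)/g
  const cleaned = markdown.replace(regexRemoveLinks, "").replace(regexReplaceCode, "")

  // 2. Collect every ATX heading (h1-h6)
  const titles = cleaned.match(/#{1,6}.+/g) || []

  // 3. Build {level, id, title, anchor} entries
  const toc = []
  let globalID = 0
  titles.forEach((tempTitle) => {
    const level = tempTitle.match(/#/g).length - 1
    const title = tempTitle.replace(/#/g, "").trim()
    const anchor = `#${title.replace(/ /g, "-").toLowerCase()}`
    if (level === 1) globalID += 1
    toc.push({ level, id: globalID, title, anchor })
  })
  return toc
}
```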
export async function getStaticProps({ params }) {\n const content = (await data?.posts[0]?.content) || \"\"\n const toc = getToc(content)\n\n return {\n props: {\n post: {\n content,\n toc\n },\n },\n }\n}\n\nEach element from the toc array is now added to the table of contents component. The levels variable is used to dynamically create indentation for subordinate headings with margin and the anchor is used for links.
import styled from \"styled-components\"\n\nconst ToCListItem = styled.li`\n list-style-type: none;\n margin-bottom: 1rem;\n padding-left: calc(var(--space-sm) * 0.5);\n border-left: 3px solid var(--secondary-color);\n margin-left: ${(props) => (props.level > 1 ? `${props.level * 10}px` : \"0\")};\n`\n\nexport default function TableOfContents({ toc }) {\n function TOC() {\n return (\n <ol className=\"table-of-contents\">\n {toc.map(({ level, id, title, anchor }) => (\n <ToCListItem key={id} level={level}>\n <a href={anchor}>{title}</a>\n </ToCListItem>\n ))}\n </ol>\n )\n }\n\n return (\n <>\n <p>Table of contents</p>\n <div>\n <TOC />\n </div>\n </>\n )\n}\n\nHowever, the anchor links do not work yet, since the corresponding section IDs still have to be added to the titles in the Markdown content.
\nFor rendering the actual post content I use react-markdown. With the help of custom renderers you can now edit all html elements in react-markdown. To add anchor links to the titles I use custom renderers for h1 to h6.
const renderers = {\n h2: ({ children }) => {\n const anchor = `${children[0].replace(/ /g, \"-\").toLowerCase()}`\n return <h2 id={anchor}>{children}</h2>\n },\n h3: ({ children }) => {\n const anchor = `${children[0].replace(/ /g, \"-\").toLowerCase()}`\n return <h3 id={anchor}>{children}</h3>\n },\n h4: ({ children }) => {\n const anchor = `${children[0].replace(/ /g, \"-\").toLowerCase()}`\n return <h4 id={anchor}>{children}</h4>\n },\n h5: ({ children }) => {\n const anchor = `${children[0].replace(/ /g, \"-\").toLowerCase()}`\n return <h5 id={anchor}>{children}</h5>\n },\n h6: ({ children }) => {\n const anchor = `${children[0].replace(/ /g, \"-\").toLowerCase()}`\n return <h6 id={anchor}>{children}</h6>\n },\n}\n\nLastly, I added a little scroll effect with the CSS property scroll-behavior: smooth;
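One closing note on the anchors: the ToC builder and the heading renderers must generate identical slugs, or the links silently break. Extracting the slug logic into one shared helper avoids the two drifting apart (slugify is a name I'm introducing here; it is not in the original code):

```javascript
// Hypothetical shared helper so the ToC and the react-markdown renderers
// can never disagree on how an anchor id is built.
const slugify = (text) => text.replace(/ /g, "-").toLowerCase()

// Example renderer using it (sketch):
// h2: ({ children }) => <h2 id={slugify(children[0])}>{children}</h2>
```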
Recently I ran out of storage for my homelab, so I bought a used NAS (Synology DS214play) to have some more capacity for Proxmox backups and OpenStreetMap. I still had a 1 TB HDD lying around at home, which I now use for Proxmox backups.
\nTo have some redundancy (and to learn something new) I decided to copy the Proxmox backups to the cloud, specifically to an Azure Storage Account using AzCopy, and in the following I will describe in more detail how I did it.
\nOverall this article will cover the following information:
\nFirst of all you need an active Azure subscription and a storage account to be able to store your backups. In the Azure Portal you can search for the service \"Storage Accounts\".
\n
In the service \"Storage Accounts\" you can create a new storage account. For the storage account you will need
\n
\n
You can keep all the other settings as default. After your Storage Account has been deployed you can add a lifecycle rule from \"Lifecycle Management\" which will move files from the \"cold\" access tier to the archive storage.
\n
For example I created a rule which moves all new files after one day to the archive storage tier.
\n

By storing files in the archive tier instead of the regular \"cold\" access tier you can save about 82% on storage costs. But keep in mind that accessing data in the archive tier is more expensive than in the cold (or any other) access tier.
\nYou could also create another rule which will, for example, delete all blobs that are older than 365 days.
\n
Please have a look at https://azure.microsoft.com/en-us/pricing/details/storage/blobs/ for uptodate Azure Storage pricing.
\nAfter the storage account has been configured you will need to create a Container where the actual files will be stored.\nGo to \"Data storage\" -> \"Containers\" and create a Container.\n
\nAgain name it however you want.
Because the current version of AzCopy v10 does not support Azure AD authorization in cron jobs, I used a SAS token to be able to upload files to the container. You can create a SAS token in the container under \"Shared access tokens\".
\n
For the shared access token you need to select the Add/Create/Write permissions and, for security reasons, set an expiry date. Then you can generate the SAS token and URL. Copy the Blob SAS URL, because you will need it for the upload script.
\n\n\nAzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This article helps you download AzCopy, connect to your storage account, and then transfer data. (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10)
\n
To get AzCopy for Linux you have to download a tar file and decompress the tar file anywhere you like. You can then just use AzCopy because it's an executable file, so nothing has to be installed.
\n#Download AzCopy\ncd ~\nwget https://aka.ms/downloadazcopy-v10-linux\n \n#Expand Archive\ntar -xvf downloadazcopy-v10-linux\n \n#(Optional) Remove existing AzCopy version\nrm /usr/bin/azcopy\n \n#Move AzCopy to the destination you want to store it\ncp ./azcopy_linux_amd64_*/azcopy /usr/bin/\n\n# Remove the downloaded archive and extracted folder\nrm -r downloadazcopy-v10-linux\nrm -r azcopy_linux_amd64_10.16.2/\n\nSince the commands above copy azcopy to /usr/bin, which is usually already on your PATH, you can run azcopy from any directory. If you stored the binary somewhere else instead, add that directory to your PATH:
\nnano ~/.profile\n\nand then adding these lines:
\nexport PATH=$PATH:/path/to/your/azcopy-directory\n\nLastly, update your environment:
\nsource ~/.profile\n\nThe only piece missing now is the script which will upload the Proxmox backup files to the previously created Azure storage container after the backup task has finished.\nFor copying the backups to Azure we will use azcopy copy, because it uses less memory and incurs lower billing costs than azcopy sync: a copy operation does not need to index the source or destination before moving files.\nBecause the upload script passes --overwrite=false, files that already exist in the container are skipped, so only new backup files are uploaded, which reduces bandwidth usage and works perfectly with the previously created lifecycle rule.
\nTo start the upload automatically after the backup has finished, we can use a hook script for vzdump. To do so, add the following line to the end of the \"/etc/vzdump.conf\" file.
\nscript: /home/youruser/scripts/upload-backups-to-azure.sh\n\nAfterwards you can create the script which will upload the files with:
\ncd ~\nmkdir scripts\ncd scripts\nnano upload-backups-to-azure.sh\n\nThen copy and paste the following content into the file and replace the src value with the location of your dumps. Note the \"/*\" at the end of src, so that only the files inside the directory are copied. Also replace token with the Blob SAS URL.
\n#!/bin/bash\n# Script to upload Proxmox backups to Azure Storage\n\nsrc=\"/mnt/pve/xyz/dump/*\"\ntoken=\"Blob SAS URL\"\n\ndobackup(){\n echo \"Uploading Proxmox backups from $src to Azure...\"\n azcopy copy \"$src\" \"$token\" --overwrite=false\n echo \"Finished Uploading!\"\n}\n\nif [ \"$1\" == \"job-end\" ]; then\n dobackup\nfi\n\nexit 0\n\nClose the file and make it executable for the user with:
\nchmod +x ~/scripts/upload-backups-to-azure.sh\n\nThe next time your backup task finishes, the files will automatically be uploaded to your Azure storage container.\nThanks to the hook script, you can check the status of the copy process in the Proxmox UI.
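Before relying on the hook in production, the dispatch logic can be dry-run locally. In this sketch the azcopy call is replaced with echo, so nothing is actually uploaded (purely for illustration):

```shell
# Stand-in for the hook: vzdump passes the phase name as the first argument,
# and only the "job-end" phase should trigger an upload.
simulate_hook() {
  if [ "$1" = "job-end" ]; then
    echo "would run azcopy now"
  fi
}

simulate_hook "job-start"   # prints nothing
simulate_hook "job-end"     # prints: would run azcopy now
```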
","date_published":"2023-01-01T15:17:30.963Z","date_modified":"2024-12-27T13:02:25.307Z","tags":["selfhosted","cloud","power-shell"],"image":"https://mxd.codes/content/posts/published/synchronizing-your-proxmox-backups-with-az-copy-to-azure-storage-containers/cover.png","banner_image":"https://mxd.codes/content/posts/published/synchronizing-your-proxmox-backups-with-az-copy-to-azure-storage-containers/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/links/doing-more-with-docker","url":"https://mxd.codes/links/doing-more-with-docker","title":"Doing More with Docker","external_url":"https://blog.gurucomputing.com.au/doing-more-with-docker/","content_html":"This guide is designed for people who want to learn docker infrastructure. Not just how to use docker, but how to stand up docker services in a robust and secure manner. This guide is about standing up a single node docker environment with backups, IP whitelisting, non-root containers (where supported), single sign-on and reverse proxying.
","date_published":"2022-10-09T09:04:28.580Z","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/trans-dolomiti-mtb-tour","url":"https://mxd.codes/photos/trans-dolomiti-mtb-tour","title":"Trans Dolomiti MTB Tour","content_html":"","date_published":"2022-10-08T14:43:02.201Z","image":"https://mxd.codes/content/photos/trans-dolomiti-mtb-tour/photo-1.webp","attachments":[{"url":"https://mxd.codes/content/photos/trans-dolomiti-mtb-tour/photo-1.webp","mime_type":"image/jpeg","title":"IMG_20220714_214010_081_e67e420473.webp"},{"url":"https://mxd.codes/content/photos/trans-dolomiti-mtb-tour/photo-2.webp","mime_type":"image/jpeg","title":"IMG_20220714_214009_999_9cbf2d8159.webp"},{"url":"https://mxd.codes/content/photos/trans-dolomiti-mtb-tour/photo-3.webp","mime_type":"image/jpeg","title":"IMG_20220714_214009_965_1aa4ba0927.webp"},{"url":"https://mxd.codes/content/photos/trans-dolomiti-mtb-tour/photo-4.webp","mime_type":"image/jpeg","title":"IMG_20220714_214010_020_d29e370477.webp"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-create-web-maps-with-leaflet-react-and-functional-components","url":"https://mxd.codes/articles/how-to-create-web-maps-with-leaflet-react-and-functional-components","title":"Understanding Leaflet and React: A Guide to Web GIS Applications","summary":"In this article I will explain how you can create a basic web map with Leaflet and React by using functional components without any third party packages. So i will strongly recommend to have a look at the Leaflet API reference.","content_html":"In this article I will explain how you can create a basic web map with Leaflet and React by using functional components without any third party packages. So i will strongly recommend to have a look at the Leaflet API reference.
\nLeaflet stands out as a versatile and free JavaScript library, empowering developers to craft seamless Web GIS applications. Leveraging HTML5 and CSS3, Leaflet is compatible with all major web browsers, providing a user-friendly platform for integrating raster and vector data from diverse sources.
\nDiving deeper into the integration of React and Leaflet components, this article explains the process of creating a web map with fundamental features:
\nFirst of all you need a React app, which you can create with:
\nnpx create-react-app leaflet-react\ncd leaflet-react\n\nand you will need to install Leaflet in your project with:
\nnpm install leaflet\n\nAfter you have installed the package you can import it with import L from \"leaflet\" into your App.js. Importing the leaflet.css is also important because without it the map tiles will be misplaced.
//App.js\nimport React, { useEffect } from \"react\"\nimport L from \"leaflet\"\nimport \"leaflet/dist/leaflet.css\"\n\nconst App = () => {\n const mapStyles = {\n width: \"100%\",\n height: \"300px\",\n }\n const layer = L.tileLayer(\n `https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png`,\n {\n attribution:\n '© <a href=\"https://www.openstreetmap.org/copyright\">OpenStreetMap</a> contributors',\n }\n )\n\n const mapParams = {\n center: [52, 4],\n zoom: 4,\n layers: [layer],\n }\n\n // This useEffect hook runs when the component is first mounted,\n // similar to componentDidMount() lifecycle method of class-based\n // components:\n useEffect(() => {\n const map = L.map(\"map\", mapParams)\n }, [])\n\n return (\n <div>\n <div id=\"map\" style={mapStyles} />\n </div>\n )\n}\n\nexport default App\n\nSince Leaflet doesn't support server-side rendering, the useEffect hook ensures the map is only created after the component has mounted.
useEffect(() => {\n L.map(\"map\", mapParams);\n}, []);\n\nThe \"map\" parameter is the id of the HTML element in which the map will be rendered. With mapParams you can pass some basic options to the Leaflet map. These options are simply defined in an object (see Leaflet API: Map Creation):
\nconst mapParams = {\n center: [0, 0],\n zoom: 0,\n layers: [layer]\n};\n\nTileLayers with OpenStreetMap Data are created with L.tileLayer(url, options) (Leaflet API: TileLayer).
const layer = L.tileLayer(`https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png`, {\n attribution: '© <a href=\"https://www.openstreetmap.org/copyright\">OpenStreetMap</a> contributors'\n });\n\nAlso some basic CSS-in-JS is created for the map container, which makes the map fullscreen and is passed as style props:
\nconst mapStyles = {\n width: \"100%\",\n height: \"100vh\"\n };\n\nIn the end you just need an HTML element in which the map will be rendered:
\n return (\n <div>\n <div id=\"map\" style={mapStyles} />\n </div>\n )\n\nIn case something didn't work out as expected you can just clone the following repository:
\nGitHub repository: https://github.com/dietrichmax/leaflet-react-functional-component\nLive demo: https://dietrichmax.github.io/leaflet-react-functional-component/
\nTo add GeoJSON to the map, you first need to create a GeoJSON object:
\nfunction getGeoJson() {\n return {\n type: \"GeometryCollection\",\n geometries: [\n {\n type: \"Polygon\",\n coordinates: [\n [\n [6.000000248663241, 56.000000155530984],\n [7.000000192318055, 56.000000155530984],\n [8.000000135973096, 56.000000155530984],\n [9.000000247266257, 56.000000155530984],\n [10.000000190921071, 56.000000155530984],\n [11.000000134576112, 56.000000155530984],\n [12.000000245869273, 56.000000155530984],\n [12.000000245869273, 55.000000211876],\n [12.000000245869273, 54.00000010058284],\n [12.000000245869273, 53.00000015692797],\n [12.000000245869273, 52.00000021327298],\n [12.000000245869273, 51.00000010197982],\n [12.000000245869273, 50.00000015832478],\n [12.000000245869273, 49.00000004703179],\n [12.000000245869273, 48.000000103376806],\n [11.000000134576112, 48.000000103376806],\n [10.000000190921071, 48.000000103376806],\n [9.000000247266257, 48.000000103376806],\n [8.000000135973096, 48.000000103376806],\n [7.000000192318055, 48.000000103376806],\n [6.000000248663241, 48.000000103376806],\n [6.000000248663241, 49.00000004703179],\n [6.000000248663241, 50.00000015832478],\n [6.000000248663241, 51.00000010197982],\n [6.000000248663241, 52.00000021327298],\n [6.000000248663241, 53.00000015692797],\n [6.000000248663241, 54.00000010058284],\n [6.000000248663241, 55.000000211876],\n [6.000000248663241, 56.000000155530984],\n ],\n ],\n },\n ],\n }\n}\n\nThe object is wrapped in a function which will return the GeoJSON. You could also fetch a GeoJSON object from somewhere else here.
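Because the GeoJSON above is just plain nested arrays, you can also derive useful values from it directly. As a small sketch (my own helper, not part of Leaflet), here is a function that computes the bounding box of the polygon's outer ring. Note that GeoJSON stores positions as [lng, lat], while Leaflet expects [lat, lng], so the returned corners are already swapped for use with map.fitBounds():

```javascript
// Sketch: derive Leaflet-style bounds from a GeoJSON Polygon geometry.
// GeoJSON positions are [lng, lat]; Leaflet bounds are [[lat, lng], [lat, lng]].
function geoJsonBounds(geometry) {
  const ring = geometry.coordinates[0] // outer ring of the polygon
  let minLng = Infinity, minLat = Infinity
  let maxLng = -Infinity, maxLat = -Infinity
  for (const [lng, lat] of ring) {
    if (lng < minLng) minLng = lng
    if (lng > maxLng) maxLng = lng
    if (lat < minLat) minLat = lat
    if (lat > maxLat) maxLat = lat
  }
  // [[southWest lat, lng], [northEast lat, lng]]
  return [[minLat, minLng], [maxLat, maxLng]]
}

// Usage inside the useEffect, after the map has been created:
// map.fitBounds(geoJsonBounds(getGeoJson().geometries[0]))
```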
\nThen you will need to add the GeoJSON object to the map with:
\n useEffect(() => {\n const map = L.map(\"map\", mapParams)\n L.geoJSON(getGeoJson()).addTo(map)\n }, [])\n\nAnd that's it! Your map should now look like this:
\nThe code for this component looks like this:
\nimport React, { useEffect } from \"react\"\nimport L from \"leaflet\"\nimport \"leaflet/dist/leaflet.css\"\n\nconst Map = () => {\n const mapStyles = {\n width: \"100%\",\n height: \"300px\",\n }\n const layer = L.tileLayer(\n `https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png`,\n {\n attribution:\n '© <a href=\"https://www.openstreetmap.org/copyright\">OpenStreetMap</a> contributors',\n }\n )\n\n // This useEffect hook runs when the component is first mounted,\n // similar to componentDidMount() lifecycle method of class-based\n // components:\n useEffect(() => {\n const map = L.map(\"map\", mapParams)\n L.geoJSON(getGeoJson()).addTo(map)\n }, [])\n\n return (\n <div>\n <div id=\"map\" style={mapStyles} />\n </div>\n )\n}\n\nexport default Map\n\nfunction getGeoJson() {\n return {\n type: \"GeometryCollection\",\n geometries: [\n {\n type: \"Polygon\",\n coordinates: [\n [\n [6.000000248663241, 56.000000155530984],\n [7.000000192318055, 56.000000155530984],\n [8.000000135973096, 56.000000155530984],\n [9.000000247266257, 56.000000155530984],\n [10.000000190921071, 56.000000155530984],\n [11.000000134576112, 56.000000155530984],\n [12.000000245869273, 56.000000155530984],\n [12.000000245869273, 55.000000211876],\n [12.000000245869273, 54.00000010058284],\n [12.000000245869273, 53.00000015692797],\n [12.000000245869273, 52.00000021327298],\n [12.000000245869273, 51.00000010197982],\n [12.000000245869273, 50.00000015832478],\n [12.000000245869273, 49.00000004703179],\n [12.000000245869273, 48.000000103376806],\n [11.000000134576112, 48.000000103376806],\n [10.000000190921071, 48.000000103376806],\n [9.000000247266257, 48.000000103376806],\n [8.000000135973096, 48.000000103376806],\n [7.000000192318055, 48.000000103376806],\n [6.000000248663241, 48.000000103376806],\n [6.000000248663241, 49.00000004703179],\n [6.000000248663241, 50.00000015832478],\n [6.000000248663241, 51.00000010197982],\n [6.000000248663241, 52.00000021327298],\n [6.000000248663241, 
53.00000015692797],\n [6.000000248663241, 54.00000010058284],\n [6.000000248663241, 55.000000211876],\n [6.000000248663241, 56.000000155530984],\n ],\n ],\n },\n ],\n }\n}\n\nIf you are curious how to add some more features like vector layers, some controls or markers, have a look at the Leaflet API Reference, and have fun playing around with your Leaflet web map built with React and functional components.
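As a quick taste of those extras, here is a hedged sketch of adding a marker with a popup inside the same useEffect; the coordinates are an arbitrary example (Munich), not from the article:

```javascript
// Sketch: extend the existing useEffect with a marker and popup.
useEffect(() => {
  const map = L.map("map", mapParams)
  L.geoJSON(getGeoJson()).addTo(map)

  // A marker with a popup; [48.137, 11.575] is just an example position
  L.marker([48.137, 11.575])
    .bindPopup("Hello from Leaflet!")
    .addTo(map)
}, [])
```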
","date_published":"2022-09-23T16:38:42.053Z","date_modified":"2025-01-22T15:53:27.116Z","tags":["gis","web-mapping","react","leaflet"],"image":"https://mxd.codes/content/posts/published/how-to-create-web-maps-with-leaflet-react-and-functional-components/cover.webp","banner_image":"https://mxd.codes/content/posts/published/how-to-create-web-maps-with-leaflet-react-and-functional-components/cover.webp","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-create-a-custom-cookie-banner-for-your-react-application","url":"https://mxd.codes/articles/how-to-create-a-custom-cookie-banner-for-your-react-application","title":"Implementing a Custom Cookie Banner in Next.js","summary":"Recently I implemented a custom cookie banner solution on my Next.js site which you probably have seen a few seconds before. There are a lot of prebuilt cookie banners you can use for React or Next js sites but i wanted to create a custom cookie banner which also has some personal touch and keeps the design with the website in line.","content_html":"When setting up my Next.js site, I opted to build a custom cookie banner instead of using prebuilt solutions. This approach allowed me to maintain a consistent design and add personal touches. The process took time as I had to implement features such as opt-in functionality and conditional rendering based on the current page.
\n
This article outlines the steps I took to implement and design the cookie banner, which may help you build your own custom solution for a Next.js or React application.
\nThe first step involves declaring a visible state variable, which is initially set to false:
const CookieBanner = ({ debug }) => {\n const [visible, setVisible] = useState(false);\n\nUsing useEffect(), the component checks if the cookie consent is undefined or if debug mode is enabled. If so, the cookie banner becomes visible, and scrolling is disabled to ensure the user interacts with the banner first.
useEffect(() => {\n // If cookie is undefined or debug is true, show the banner\n if (Cookie.get(\"consent\") === undefined || debug) {\n document.body.style.overflow = \"hidden\";\n setVisible(true);\n }\n }, []);\n\nHowever, when users navigate to pages like /privacy-policy or /site-notice to get some information about the website, the banner should not obstruct content. To achieve this, the component conditionally renders the banner only if the user has not visited one of these pages:
\n // Don't render if the banner should not be visible\n if (\n !visible ||\n window.location.href.includes(\"privacy-policy\") ||\n window.location.href.includes(\"site-notice\") ||\n window.location.href.includes(\"sitemap\")\n ) {\n return null;\n }\n\nAdditionally, scrolling is re-enabled when users visit these pages. However, if they navigate elsewhere without accepting cookies, the banner reappears, and scrolling is disabled again:
\n useEffect(() => {\n // Handle page load and visibility\n if (\n window.location.href.includes(\"privacy-policy\") ||\n window.location.href.includes(\"site-notice\")\n ) {\n document.body.style.overflow = \"scroll\";\n } else if (Cookie.get(\"consent\") === undefined || debug) {\n document.body.style.overflow = \"hidden\";\n }\n }, []);\n\nNow the user finally has to decide whether they are fine with (third-party) cookies. For that reason the cookie banner contains a short explanation of how the cookies are used, along with two buttons: 'Accept required and optional cookies' and 'Accept required cookies'.
\n
The banner provides two options: 'Accept required and optional cookies' and 'Accept required cookies'. The first option is highlighted more prominently to encourage users to accept optional cookies.
\n<Button onClick={() => handleConsent(true)}>Accept required and optional cookies</Button>\n\nClicking an option triggers the handleConsent() function, which:
sets a consent cookie, sets the visible variable to false and finally re-enables scrolling:\n\nconst handleConsent = (accepted) => {\n Cookie.set(\"consent\", accepted, { sameSite: \"strict\", expires: 365 });\n setVisible(false);\n document.body.style.overflow = \"scroll\";\n if (accepted) {\n // enableGoogleAnalytics();\n // enableGoogleAdsense();\n }\n };\n\nCookies are created with the help of the js-cookie library. If you are using Server Components you can also use the cookies function from Next.js.
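For the Server Component route, a minimal sketch could read the consent cookie like this. It assumes the Next.js App Router, where cookies from "next/headers" is available in Server Components; the page path is hypothetical:

```javascript
// app/example/page.js — hypothetical Server Component reading the consent cookie
import { cookies } from "next/headers"

export default function Page() {
  // cookies() gives read access to the incoming request's cookies
  const consent = cookies().get("consent")?.value === "true"
  return <p>{consent ? "Optional cookies accepted" : "Required cookies only"}</p>
}
```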
\nThe enableGoogleAnalytics() and enableGoogleAdsense() functions are stored separately because they will also be needed in the _app.js file, which wraps the whole application.
The reason is that the analytics and ad scripts are only injected into the page where the user accepted third-party cookies. As soon as the user navigates to another page after accepting, the injected scripts no longer exist on that page.
\nTo ensure scripts persist across page changes, the _app.js file rechecks the consent state and reinjects scripts as needed:
const MyApp = ({ Component, pageProps }) => {\n useEffect(() => {\n if (window.location.href.includes(config.domain)) {\n if (Cookie.get(\"consent\") === \"true\") {\n enableGoogleAnalytics();\n enableGoogleAdsense();\n }\n }\n }, []); // Runs once on mount\n\n return <Component {...pageProps} />;\n};\n\nAnd that's it. That's how I created a cookie banner with two options which will be rendered conditionally depending on a consent cookie and depending on the current page the user is visiting. The whole CookieBanner component looks like the following:
import styled from \"styled-components\"\nimport Link from \"next/link\"\nimport media from \"styled-media-query\"\nimport Image from \"next/legacy/image\"\nimport Logo from \"@/components/logo/logo\"\nimport { Button } from \"@/styles/templates/button\"\nimport { FaLinkedin } from \"@react-icons/all-files/fa/FaLinkedin\"\nimport { FaInstagram } from \"@react-icons/all-files/fa/FaInstagram\"\nimport { FaGithub } from \"@react-icons/all-files/fa/FaGithub\"\nimport { FaBluesky } from \"@react-icons/all-files/fa6/FaBluesky\"\nimport { FaXing } from \"@react-icons/all-files/fa/FaXing\"\nimport { SiStrava } from \"@react-icons/all-files/si/SiStrava\"\n//import { enableGoogleAnalytics } from \"@/components/google-analytics/google-analytics\"\n//import { enableGoogleAdsense } from \"@/components/google-adsense/google-adsense\"\nimport config from \"@/src/data/internal/SiteConfig\"\n//import { push } from \"@socialgouv/matomo-next\"\nimport { useState, useEffect } from 'react';\nimport Cookie from 'js-cookie'; \n\nconst Background = styled.div`\n position: fixed;\n z-index: 9997;\n right: 0;\n bottom: -200px;\n top: 0; \n left: 0;\n background-color: rgba(0, 0, 0, 0.5);\n`\n\nconst CookieContainer = styled.div`\n position: fixed;\n right: 0;\n bottom: 0;\n top: 0;\n left: 0;\n z-index: 9998;\n vertical-align: middle;\n white-space: nowrap;\n max-height: 100%;\n max-width: 100%;\n overflow-x: auto;\n overflow-y: auto;\n text-align: center;\n -webkit-tap-highlight-color: transparent;\n font-size: 14px;\n overflow-y: scroll;\n`\n\nconst CookieInnerContainer = styled.div`\n width: var(--content-width);\n height: auto;\n max-width: none;\n border-radius: var(--border-radius);\n display: inline-block;\n z-index: 9999;\n background-color: var(--body-bg);\n white-space: normal;\n box-shadow: 0 2px 10px 0 rgb(0 0 0 / 20%);\n position: relative;\n line-height: 1.65;\n border: 1px solid var(--body-bg);\n vertical-align: middle;\n top: 20%;\n ${media.lessThan(\"medium\")`\n width: 
90%;\n `}\n`\n\nconst Wrapper = styled.div`\n max-height: 100%;\n height: auto;\n max-width: none;\n text-align: left;\n border-radius: 16px;\n display: inline-block;\n white-space: normal;\n`\n\nconst CookieHeader = styled.div`\n padding: var(--space);\n display: flex;\n justify-content: space-between;\n`\n\n\nconst CookieContentBlock = styled.div`\n margin-top: var(--space);\n margin-bottom: var(--space-sm)\n`\n\nconst CookieTextList = styled.ul`\n margin: 0;\n padding: 0;\n padding-inline-start: 1rem;\n`\n\nconst CookieTextItem = styled.li`\n margin: var(--space-sm) 0;\n`\n\nconst CookieBannerText = styled.div`\n padding: 0 var(--space);\n`\n\nconst CookieHeadline = styled.h1`\n font-size: 24px;\n font-weight: 400;\n margin-bottom: var(--space);\n`\n\nconst Text = styled.div`\n margin-bottom: var(--space-sm);\n`\n\nconst CookieLink = styled.a`\n border-bottom: 1px solid var(--text-color);\n &:hover {\n border-bottom: none;\n }\n cursor: pointer;\n margin-right: var(--space-sm);\n`\n\nconst TextLink = styled.a`\n border-bottom: 1px solid var(--text-color);\n &:hover {\n text-decoration: none;\n border-bottom: none;\n }\n`\n\nconst List = styled.ol`\n list-style: none;\n padding-inline-start: 0;\n display: flex;\n`\n\nconst SocialItem = styled.li`\n margin: var(--space-sm) var(--space-sm) var(--space-sm) 0;\n transition: 0.2s;\n background-color: var(--content-bg);\n padding: 8px 10px 4px 10px;\n &:hover {\n color: var(--secondary-color);\n cursor: pointer;\n }\n`\n\nconst ButtonContainer = styled.div`\n margin: var(--space);\n display: flex;\n justify-content: space-between;\n ${media.lessThan(\"medium\")`\n flex-direction: column;\n gap: var(--space-sm);\n `}\n`\n\n\nconst CookieBanner = ({ debug }) => {\n const [visible, setVisible] = useState(false);\n\n useEffect(() => {\n const consent = Cookie.get(\"consent\");\n if (!consent || debug) {\n document.body.style.overflow = \"hidden\";\n setVisible(true);\n } else {\n document.body.style.overflow = 
\"scroll\";\n }\n }, [debug]);\n\n const handleConsent = (accepted) => {\n Cookie.set(\"consent\", accepted, { sameSite: \"strict\", expires: 365 });\n setVisible(false);\n document.body.style.overflow = \"scroll\";\n };\n\n if (!visible || [\"privacy-policy\", \"site-notice\", \"sitemap\"].some((page) => window.location.href.includes(page))) {\n return null;\n }\n\n const socialLinks = [\n { href: config.socials.bluesky, title: \"@mmxdcodes on Bluesky\", icon: <FaBluesky /> },\n { href: config.socials.github, title: \"mxdietrich on GitHub\", icon: <FaGithub /> },\n { href: config.socials.strava, title: \"Max Dietrich on Strava\", icon: <SiStrava /> },\n { href: config.socials.xing, title: \"Max Dietrich on Xing\", icon: <FaXing /> },\n { href: config.socials.linkedin, title: \"Max Dietrich on Linkedin\", icon: <FaLinkedin /> }\n ];\n\n return (\n <>\n <Background />\n <CookieContainer>\n <CookieInnerContainer>\n <Wrapper>\n <CookieHeader>\n <Logo />\n <Image\n src=\"/logos/android/android-launchericon-48-48.png\"\n width=\"48\"\n height=\"48\"\n title=\"Max Dietrich\"\n alt=\"Photo of Max Dietrich\"\n className=\"profile u-photo\"\n />\n </CookieHeader>\n\n <CookieBannerText>\n <CookieHeadline>Hi, welcome on mxd.codes 👋</CookieHeadline>\n <CookieContentBlock>\n <p>You can easily support me by accepting optional (third-party)\n cookies. 
These cookies will help with the following:</p>\n <CookieTextList>\n <CookieTextItem>\n <b>Collect audience interaction data and site statistics</b>\n </CookieTextItem>\n <CookieTextItem>\n <b>Deliver advertisements and measure the effectiveness of\n advertisements</b>\n </CookieTextItem>\n <CookieTextItem>\n <b>Show personalized content (depending on your settings)</b>\n </CookieTextItem>\n </CookieTextList>\n </CookieContentBlock>\n <Text>\n <p>\n If you prefer not to share data but still want to support, visit <TextLink href=\"/support\">mxd.codes/support</TextLink> or connect via socials:\n <List>\n {socialLinks.map(({ href, title, icon }) => (\n <SocialItem key={href} title={title}>\n <a href={href} title={title}>{icon}</a>\n </SocialItem>\n ))}\n </List>\n </p>\n <p>\n For more information about cookies and how they are used\n please have a look at the Privacy Policy.\n </p>\n </Text>\n\n <Link href=\"/privacy-policy\" legacyBehavior>\n <CookieLink>Privacy Policy</CookieLink>\n </Link>\n <Link href=\"/site-notice\" legacyBehavior>\n <CookieLink>Site Notice</CookieLink>\n </Link>\n </CookieBannerText>\n\n <ButtonContainer>\n <Button onClick={() => handleConsent(false)} backgroundColor=\"var(--content-bg)\" color=\"#70757a\">\n Accept required cookies\n </Button>\n <Button onClick={() => handleConsent(true)}>Accept required and optional cookies</Button>\n </ButtonContainer>\n </Wrapper>\n </CookieInnerContainer>\n </CookieContainer>\n </>\n );\n};\n\nexport default CookieBanner;\n\nIf you also want to know how the previously mentioned enableGoogleAnalytics and enableGoogleAdsense() functions work keep reading.
To enable Google Analytics, three functions are used:
\naddGoogleAnalytics() - Injects the analytics script into the document head.\ninitializeGoogleAnalytics() - Configures and initializes Google Analytics.\ntrackGoogleAnalytics() - Tracks page views when users navigate.\n\nexport function enableGoogleAnalytics () {\n addGoogleAnalytics().then((status) => {\n if (status) {\n initializeGoogleAnalytics()\n trackGoogleAnalytics()\n }\n })\n}\n\nFirst of all, the Analytics script will be created and appended to the head element with the individual GA_TRACKING_ID.
export function addGoogleAnalytics () {\n return new Promise((resolve) => {\n const head = document.getElementsByTagName('head')[0]\n const scriptElement = document.createElement(`script`)\n scriptElement.type = `text/javascript`\n scriptElement.async = true\n scriptElement.defer = true\n scriptElement.src = `https://www.googletagmanager.com/gtag/js?id=${process.env.NEXT_PUBLIC_GA_TRACKING_ID}`\n scriptElement.onload = () => {\n resolve(true)\n }\n head.appendChild(scriptElement);\n });\n}\n\nAfter the script has been added to the site it needs to be initialized. I am also anonymizing IP addresses there and tracking an initial page view.
\nexport function initializeGoogleAnalytics () {\n window.dataLayer = window.dataLayer || [];\n window.gtag = function(){window.dataLayer.push(arguments);}\n window.gtag('js', new Date())\n window.gtag('config', process.env.NEXT_PUBLIC_GA_TRACKING_ID, {\n 'anonymize_ip': true,\n 'allow_google_signals': true\n })\n const pagePath = location ? location.pathname + location.search + location.hash : undefined\n window.gtag(`event`, `page_view`, { page_path: pagePath })\n}\n\nTo also track a user changing pages we will use the Next.js router (\"next/router\"). It will track a page_view event every time a route change has completed (a different page has been visited).
\nexport function trackGoogleAnalytics () {\n Router.events.on('routeChangeComplete', (url) => {\n window.gtag(`event`, `page_view`, { page_path: url })\n });\n}\n\nSo by calling the function enableGoogleAnalytics(), the Google Analytics script will be added to the page, Google Analytics will be initialized and all page changes will be tracked.
You can also have a look at https://github.com/dietrichmax/google-analytics-next which shows how you can integrate Google Analytics in Next.js.
\nThe enableGoogleAdsense() function is similar to the enableGoogleAnalytics() function. It will also create the default Google Adsense script and place it into the head of your React application.
export function enableGoogleAdsense () {\n const head = document.getElementsByTagName('head')[0]\n const scriptElement = document.createElement(`script`)\n scriptElement.type = `text/javascript`\n scriptElement.async = true\n scriptElement.src = `https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=${process.env.NEXT_PUBLIC_ADSENSE_ID}`\n scriptElement.crossOrigin = \"anonymous\"\n head.appendChild(scriptElement);\n}\n\nAfterwards you just need to place ad containers with the corresponding client and slot IDs.
\nimport styled from 'styled-components';\nimport { useEffect } from 'react';\n\nexport function GoogleAdsenseContainer ( { client, slot }) {\n\n useEffect(() => {\n (window.adsbygoogle = window.adsbygoogle || []).push({});\n }, []);\n\n const AdLabel = styled.span`\n font-size: 12px;\n `\n\n return (\n <div \n style={{textAlign: 'left',overflow: 'hidden'}}\n >\n <AdLabel>Advertisement</AdLabel>\n <ins\n className=\"adsbygoogle\"\n style={{ display: \"block\" }}\n data-ad-client={client}\n data-ad-slot={slot}\n data-ad-format=\"auto\"\n data-full-width-responsive=\"true\"\n ></ins>\n\n </div>\n ); \n}\n\nIn case I missed some important information you would add, please let me know, and if you liked the article feel free to share it.
","date_published":"2022-09-23T16:33:29.179Z","date_modified":"2025-06-09T18:19:02.377Z","tags":["react","next-js","gatsby","data-privacy"],"image":"https://mxd.codes/content/posts/published/how-to-create-a-custom-cookie-banner-for-your-react-application/cover.webp","banner_image":"https://mxd.codes/content/posts/published/how-to-create-a-custom-cookie-banner-for-your-react-application/cover.webp","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-build-a-related-posts-component-for-your-react-blog","url":"https://mxd.codes/articles/how-to-build-a-related-posts-component-for-your-react-blog","title":"How to build a related posts component for your Next.js blog","summary":"Some blogs have these related articles or posts sections where visitors can have a preview at more content after they just read a post. That's what I wanted to create for my personal website which is built with React (Nextjs) and in this article I want to show you how you also can do it for any other react application.","content_html":"Some blogs have these related articles or posts sections where visitors can have a preview at more content after they just read a post. That's what I wanted to create for my personal website which is built with Nextjs and in this article I want to show you how you also can do it for your own Next.js site or any other react application.
\nThe key point for showing related posts is that you somehow have to create a relation between the posts, which doesn't exist yet. All my posts have frontmatter similar to this:
\n---\ntitle: \"Post about Web-Development with React\"\ndescription: \"This is a sample description for the post.\"\ndate: \"2022-05-02\"\ntags: [\"React\", \"Web-Development\"]\nimage: \"../image.jpg\"\n---\n\nI decided to use the tags to create a relation between the posts because it's the only information which all posts can have in common and which is related to the actual topic of the post. Therefore I needed data about all posts as well as data about the current post. The data from the current post is just passed as props to the component. All post data for my website is created in a CMS which can be accessed via GraphQL. The query to get allPosts looks like this.
\nexport async function getAllPosts() {\n const data = await fetchStrapiAPI(\n `\n {\n posts(sort: \"published_at:desc\") {\n id\n published_at\n title\n slug\n content\n excerpt\n tags {\n name\n }\n coverImage {\n url\n }\n }\n }\n `\n )\n return data?.posts\n}\n\nThe only relevant information here is the slug and the tags with their names.
\nNow the current post gets filtered out of the posts array, and a variable maxPosts is created for the maximum number of posts that should be displayed.
\n// filter out current post\nlet posts = allPosts.filter((aPost) => aPost.slug !== post.slug);\n\n// define maxPosts to display\nconst maxPosts = 3\n\nFor better readability I assigned the tags of the current post to a variable called currentTags
\n// get tags of current post\nconst currentTags = post.tags.map((tag) => {\n return tag.name\n})\n\nNow you have to map through posts and the tags post.tags of these posts to check if one of these tags is the same as one of the currentTags. If a tag matches, we increment a new relevance variable.
\n // rate posts depending on tags\n posts.forEach((post) => {\n post.relevance = 0\n post.tags.forEach((tag) => {\n if (currentTags.includes(tag.name)) {\n post.relevance++\n }\n })\n })\n\nThe post with the highest relevance is the post with the most tags in common and therefore the most related post. If you are also using categories you can of course adjust the relevance depending on both the categories and the tags. For example you could add two relevance points for a matching category and one relevance point for a matching tag.
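That weighted variant could look like this. It is a sketch only, since my posts carry just tags; the categories field here is a hypothetical addition to the post data shown earlier:

```javascript
// Sketch: weighted relevance where a shared category counts double.
// `post` and `currentPost` are assumed to carry `tags` and (hypothetically)
// `categories` arrays of { name } objects, like the CMS data above.
function computeRelevance(post, currentPost) {
  const currentTags = currentPost.tags.map((tag) => tag.name)
  const currentCategories = (currentPost.categories || []).map((c) => c.name)

  let relevance = 0
  for (const tag of post.tags) {
    if (currentTags.includes(tag.name)) relevance += 1 // one point per shared tag
  }
  for (const category of post.categories || []) {
    if (currentCategories.includes(category.name)) relevance += 2 // two points per shared category
  }
  return relevance
}
```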
\nThen you can sort the array of all posts descending by relevance.
\n // sort posts by relevance\n const sortedPosts = posts.sort(function(a, b) {\n return b.relevance - a.relevance;\n });\n\nIn the end you can slice them with maxPosts and finally render them.
\nimport PostPreview from 'src/components/article/article-preview/article-preview'\n\nexport default function RecommendedPosts({ post, allPosts }) {\n\n // filter out current post\n let posts = allPosts.filter((aPost) => aPost.slug !==post.slug);\n\n // define maxPosts to display\n const maxPosts = 3\n\n // get tags of current posts\n const tags = post.tags.map((tag) => {\n return tag.name\n })\n\n // rate posts depending on tags\n posts.forEach((post) => {\n post.relevance = 0\n post.tags.forEach((tag) => {\n if (tags.includes(tag.name)) {\n post.relevance ++\n }\n })\n })\n\n // sort posts by relevance\n const sortedPosts = posts.sort(function(a, b) {\n return b.relevance - a.relevance;\n });\n\n return (\n <>\n {sortedPosts.slice(0,maxPosts).map((post, i) => (\n <PostPreview\n key={i} \n postData={post}\n />\n ))}\n </>\n )\n }\n","date_published":"2022-09-23T16:25:27.329Z","date_modified":"2025-02-01T16:49:08.742Z","tags":["react","next-js","gatsby"],"image":"https://mxd.codes/content/posts/published/how-to-build-a-related-posts-component-for-your-react-blog/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-build-a-related-posts-component-for-your-react-blog/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-add-google-adsense-to-next-js-applications","url":"https://mxd.codes/articles/how-to-add-google-adsense-to-next-js-applications","title":"How to add Google Adsense to Next.js applications","summary":"In this article I am going to explain, how you can implement Google Adsense in Next.js applications (or any other react applications). There are several approaches for implementing Adsense on a react site and I want to show you how you can add Adsense with privacy in mind.","content_html":"In this article I am going to explain, how you can implement Google Adsense in Next.js applications (or any other react applications). 
There are several approaches for implementing Adsense on a React site and I want to show you how you can add Adsense with privacy in mind.
\nAs soon as you have signed up your site on Adsense and it has been approved, you have to place the Adsense code (or ad unit code) in your pages. This code generally consists of three parts.
\nThe first part will load the actual Adsense script. This script is typically placed inside the <head></head> or <body></body> section.
We will not place it there directly, because we want the script to be inserted only after a user has given consent to third-party cookies and services. So, for example, I moved it into a separate function which is triggered when cookies are accepted.
\nexport function enableGoogleAdsense () {\n const head = document.getElementsByTagName('head')[0]\n const scriptElement = document.createElement(`script`)\n scriptElement.type = `text/javascript`\n scriptElement.async = true\n scriptElement.src = `https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=${process.env.NEXT_PUBLIC_ADSENSE_ID}`\n scriptElement.crossOrigin = \"anonymous\"\n head.appendChild(scriptElement);\n}\n\nBy clicking the 'Accept required and optional cookies' button on this site, this function will be triggered, which will then place the Adsense script into the <head></head> section.
If you want to use Auto ads you are actually already done, as long as Auto ads are enabled for your site in your Adsense account.
\nOtherwise, if you want to place ad units individually you can do this now like the following.
\nI would recommend creating a separate component for ad units. It could look like the following:
\nimport styled from 'styled-components';\nimport { useEffect } from 'react';\n\nexport function GoogleAdsenseContainer ( { client, slot }) {\n\n useEffect(() => {\n (window.adsbygoogle = window.adsbygoogle || []).push({});\n }, []);\n\n const AdLabel = styled.span`\n font-size: 12px;\n `\n\n return (\n <div \n style={{textAlign: 'left',overflow: 'hidden'}}\n >\n <AdLabel>Advertisement</AdLabel>\n <ins\n className=\"adsbygoogle\"\n style={{ display: \"block\" }}\n data-ad-client={client}\n data-ad-slot={slot}\n data-ad-format=\"auto\"\n data-full-width-responsive=\"true\"\n ></ins>\n\n </div>\n ); \n}\n\nIn this component you will find the other two parts of the original Adsense script. Here the actual ad unit element is placed together with a small ad label. You can load this component on every page and in every position you like, with your individual client and slot IDs. After the ad unit is placed, the window.adsbygoogle call in the useEffect hook will fill the ad unit with the actual advertisement graphics.
Thanks to the useEffect hook, ads are also requested again when a user navigates to another page client-side, without a full page reload.
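To make the consent wiring explicit, the pattern can be reduced to a small gate that runs the script loader exactly once, and only after consent has been granted. This is a minimal sketch of the idea; createConsentGate is a hypothetical helper for illustration, not code from this site:

```javascript
// Hypothetical helper: run a loader (e.g. enableGoogleAdsense) exactly once,
// and only after the user has granted consent.
function createConsentGate(loader) {
  let loaded = false;
  return {
    grant() {
      // Idempotent: a second click on the accept button must not
      // inject the Adsense script a second time.
      if (!loaded) {
        loaded = true;
        loader();
      }
    },
    isLoaded: () => loaded,
  };
}

// Usage sketch, wired to the cookie banner's accept handler:
// const gate = createConsentGate(enableGoogleAdsense);
// acceptButton.addEventListener('click', () => gate.grant());
```

The gate keeps the "has the script been injected yet?" state in one place instead of spreading it across event handlers.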
","date_published":"2022-09-23T16:20:33.302Z","date_modified":"2025-01-20T18:45:53.013Z","tags":["react","next-js","a-1"],"image":"https://mxd.codes/content/posts/published/how-to-add-google-adsense-to-next-js-applications/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-add-google-adsense-to-next-js-applications/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-deploy-your-gatsby-site-on-your-own-server","url":"https://mxd.codes/articles/how-to-deploy-your-gatsby-site-on-your-own-server","title":"How to deploy your GatsbyJS site on your own server","summary":"With Gatsby 4 bringing in Server-Side Rendering (SSR) and Deferred Static Generation (DSG) you need an alternative method to just hosting static files. Each page using SSR or DSG will be rendered after a user requests it, so there has to be a server in the background which will handle these requests and build the pages if needed.","content_html":"With Gatsby 4 bringing in Server-Side Rendering (SSR) and Deferred Static Generation (DSG) you need an alternative method to just hosting static files. Each page using SSR or DSG will be rendered after a user requests it, so there has to be a server in the background which will handle these requests and build the pages if needed.
\nIn this post I will show you how to deploy your Gatsby site with SSR and/or DSG on your own server, with a CI/CD pipeline via PM2 and Github webhooks.
\nFor this I will be using:
\nFirst of all you need a server with root access. I strongly recommend having a look at the guide \"Initial Server Setup with Ubuntu 18.04\" from the DigitalOcean community, which will lead you through the process of:
\nAfter you have done that you can continue by installing all necessary dependencies on your server.\nInstall Node.js
\nAgain there is a guide by DigitalOcean which will help you install Node.js using a PPA.
\nAfter completing
\nyou will have to change npm's default directory.
\nCreate a .npm-global directory and set the path to this directory for node_modules:
cd ~\nmkdir ~/.npm-global\nnpm config set prefix '~/.npm-global'\n\nCreate (or modify) a ~/.profile and add the following line:
sudo nano ~/.profile\n\n# set PATH so global node modules install without permission issues\nexport PATH=~/.npm-global/bin:$PATH\n\nNow you have to update your system variables:
\nsource ~/.profile\n\nNow you should be able to check your installed Node.js version with:
\nnode -v\n\nCheck if git is already installed with:
\ngit --version\n\nIf it isn't installed yet you can install it with
\nsudo apt install git\n\nand configure Git with
\ngit config --global user.name \"Your Name\"\ngit config --global user.email \"youremail@domain.com\"\n\nAfter git is installed and configured you can deploy your Gatsby site by cloning it from Github.
\nIt is important that you are logged in as a non-root user for the following steps.
\ncd ~\ngit clone https://github.com/your-githubuser/your-gatsby-repo.git your-gatsby-site\n\nAfter you have deployed your project (optionally with environment variables) you can install all dependencies and build your Gatsby site with:
\ncd ./your-gatsby-site/\nnpm install\nnpm run build\n\nNow you should have a copy of your local project/Gatsby site on your remote server.
\nNext you are going to set up PM2, which will be used to keep your site alive and restart it with every reboot.
\nYou can install PM2 with:
\nnpm install pm2@latest -g\n\nYou will need to create/configure an ecosystem.config.js file which will start the default Gatsby server.
cd ~\npm2 init\nsudo nano ecosystem.config.js\n\nCopy/paste the template and replace the content.
\nmodule.exports = {\n apps: [\n {\n name: 'gatsby-site',\n cwd: '/home/your-name/my-gatsby-site',\n script: 'npm',\n args: 'run serve',\n env: {\n //NODE_ENV: 'production',\n },\n },\n // optionally a second project\n ],\n};\n\nWith
\ncd ~\npm2 start ecosystem.config.js\n\nyou can start your server, which will run on port 9000.
\nYou can always check the status with:
\npm2 status\n\nAfter a reboot, PM2 should always restart automatically. For that you are going to need a small startup script which you can also copy/paste.\nGenerate and configure a startup script to launch PM2:
\ncd ~\npm2 startup systemd\n\n[PM2] Init System found: systemd\n[PM2] To setup the Startup Script, copy/paste the following command:\nsudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u your-name --hp /home/your-name\n\nsudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u your-name --hp /home/your-name\n\n[PM2] Init System found: systemd\nPlatform systemd\n\n. . .\n\n\n[PM2] [v] Command successfully executed.\n+---------------------------------------+\n[PM2] Freeze a process list on reboot via:\n $ pm2 save\n\n[PM2] Remove init script via:\n $ pm2 unstartup systemd\n\npm2 save\n\n[PM2] Saving current process list...\n[PM2] Successfully saved in /home/your-name/.pm2/dump.pm2\n\nIf you reboot your server now with sudo reboot, the script should automatically restart your Gatsby site. Give it a try!
One thing still missing is a continuous integration and continuous delivery (CI/CD) pipeline, which you will set up using Github webhooks.
\nFor this you need to create a new webhook in your repository.
\nThe following articles provide additional information to the steps below:
\n\nYou need to create a server script which will do something if it is triggered by the Github webhook.
\ncd ~\nmkdir NodeWebHooks\ncd NodeWebHooks\nsudo nano webhook.js\n\nThe script is going to create a server running on port 8100. (Your Github webhook should, of course, point to something like http://server-ip:8100.)
\nIf it gets triggered by a webhook it will
\nconst secret = \"your-secret-key\";\nconst repo = \"~/my-gatsby-site/\";\n\nconst http = require('http');\nconst crypto = require('crypto');\nconst exec = require('child_process').exec;\n\nconst BUILD_CMD = 'npm run build';\nconst PM2_CMD = 'pm2 restart gatsby-site';\n\nhttp.createServer(function (req, res) {\n req.on('data', function(chunk) {\n let sig = \"sha1=\" + crypto.createHmac('sha1', secret).update(chunk.toString()).digest('hex');\n\n if (req.headers['x-hub-signature'] == sig) {\n exec('cd ' + repo + ` && git pull && npm install && ${BUILD_CMD} && ${PM2_CMD}`);\n }\n });\n\n res.end();\n}).listen(8100);\n\nYou will need to allow communication on port 8100 with:
\nsudo ufw allow 8100/tcp\nsudo ufw enable\n\nCommand may disrupt existing ssh connections. Proceed with operation (y|n)? y\nFirewall is active and enabled on system startup
\nEarlier you set up PM2 to restart your Gatsby site whenever the server reboots or is started. You will now do the same for the webhook script.
\nRun echo $PATH and copy the output for use in the next step.
\necho $PATH\n\n/home/your-name/.npm-global/bin:/home/your-name/bin:/home/your-name/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin\n\nCreate a webhook.service file:
\ncd ~\nsudo nano /etc/systemd/system/webhook.service\n\nIn the editor, copy/paste the following script, but make sure to replace your-name in two places with your username. Copy the output of echo $PATH from earlier into the Environment=PATH= variable, then save and exit:
[Unit]\nDescription=Github webhook\nAfter=network.target\n\n[Service]\nEnvironment=PATH=your_path\nType=simple\nUser=your-name\nExecStart=/usr/bin/nodejs /home/your-name/NodeWebHooks/webhook.js\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n\nEnable and start the new service so it starts when the system boots:
\nsudo systemctl enable webhook.service\nsudo systemctl start webhook\n\nCheck the status of the webhook:
\nsudo systemctl status webhook\n\nYou can test your webhook with these instructions.
\nThe Gatsby server is now running on your-ip:9000 and you have implemented a CI/CD pipeline via PM2 and Github webhooks, but you still can't access your website via a domain. For that you need to configure a web server like Nginx.
\nI am using Cloudflare to manage DNS for my domains, but you can do this with any other provider as well.
\nCreate two A Records which will point your-domain.com and www.your-domain.com to the IP-adress of your server.
After that you will need to configure Nginx.
\nThe following instructions are based on How To Install Nginx on Ubuntu 18.04 [Quickstart].
\nInstall Nginx:\n\nsudo apt update\n\nsudo apt install nginx\n\nand adjust the firewall:
\nsudo ufw allow 'Nginx Full'\nsudo ufw delete allow 'Nginx HTTP'\n\nYou should now be able to see the Nginx landing page on http://your_server_ip.
Create the directory for your-domain.com, using the -p flag to create any necessary parent directories:
\nsudo mkdir -p /var/www/your-domain.com/html\n\nAssign ownership of the directory:
\nsudo chown -R $USER:$USER /var/www/your-domain.com/html\n\nThe permissions of your web roots should be correct if you haven’t modified your umask value, but you can make sure by typing:
\nsudo chmod -R 755 /var/www/your-domain.com\n\nMake a new server block at /etc/nginx/sites-available/your-domain.com:
\nsudo nano /etc/nginx/sites-available/your-domain.com\n\nCopy/Paste the following Gatsby-nginx configuration and update the server_name sections:
\nserver {\n # Listen HTTP\n listen 80;\n listen [::]:80;\n\n server_name your-domain.com www.your-domain.com;\n\n # Redirect HTTP to HTTPS\n return 301 https://$host$request_uri;\n}\n\nserver {\n # Listen HTTPS\n listen 443 ssl;\n listen [::]:443 ssl;\n\n server_name your-domain.com www.your-domain.com;\n\n # SSL config\n include snippets/self-signed.conf;\n include snippets/ssl-params.conf;\n\n # Proxy Config\n location / {\n proxy_pass http://localhost:9000;\n proxy_http_version 1.1;\n proxy_set_header X-Forwarded-Host $host;\n proxy_set_header X-Forwarded-Server $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header Host $http_host;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"Upgrade\";\n proxy_pass_request_headers on;\n }\n location ~ /.well-known {\n allow all;\n }\n}\n\nSave the file and close it when you are finished.
\nEnable the file by creating a link from it to the sites-enabled directory:
\nsudo ln -s /etc/nginx/sites-available/your-domain.com /etc/nginx/sites-enabled/\n\nTest for syntax errors:
\nsudo nginx -t\n\nand finally enable the changes:
\nsudo systemctl restart nginx\n\nNginx should now be serving your Gatsby site on your domain. If you have a look at http://your-domain.com you should see your Gatsby site.
Finally, you should deny traffic to port 9000, because Nginx is now handling the requests:
\ncd ~\nsudo ufw deny 9000\n\nTo set up SSL, you will need to install and run Certbot by Let's Encrypt.
","date_published":"2022-09-23T16:16:38.402Z","date_modified":"2025-02-01T16:45:09.771Z","tags":["react","gatsby","selfhosted","ci-cd","cloud"],"image":"https://mxd.codes/content/posts/published/how-to-deploy-your-gatsby-site-on-your-own-server/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-deploy-your-gatsby-site-on-your-own-server/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/cap-formentor-mallorca","url":"https://mxd.codes/photos/cap-formentor-mallorca","title":"Cap Formentor, Mallorca","content_html":"","date_published":"2022-09-23T14:42:31.292Z","image":"https://mxd.codes/content/photos/cap-formentor-mallorca/photo-1.webp","attachments":[{"url":"https://mxd.codes/content/photos/cap-formentor-mallorca/photo-1.webp","mime_type":"image/jpeg","title":"content_IMG_20220321_114604_268e9efe5e_156a44cc2a.webp"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/alcudia-illes-balears-mallorca-spanien","url":"https://mxd.codes/photos/alcudia-illes-balears-mallorca-spanien","title":"Alcúdia, Illes Balears, Mallorca, 
Spanien","content_html":"","date_published":"2022-09-23T14:41:00.148Z","image":"https://mxd.codes/content/photos/alcudia-illes-balears-mallorca-spanien/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/alcudia-illes-balears-mallorca-spanien/photo-1.jpg","mime_type":"image/jpeg","title":"cover_IMG_20220321_133245_1_626060b19d.jpg"},{"url":"https://mxd.codes/content/photos/alcudia-illes-balears-mallorca-spanien/photo-2.webp","mime_type":"image/jpeg","title":"content_IMG_20220321_114604_268e9efe5e_156a44cc2a.webp"},{"url":"https://mxd.codes/content/photos/alcudia-illes-balears-mallorca-spanien/photo-3.jpg","mime_type":"image/jpeg","title":"IMG_20220321_114725_1_4276068e14.jpg"},{"url":"https://mxd.codes/content/photos/alcudia-illes-balears-mallorca-spanien/photo-4.jpg","mime_type":"image/jpeg","title":"cover_IMG_20220322_124235_1_695b74a3aa.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-create-a-mapnik-stylesheet-for-displaying-any-data-from-postgre-sql-post-gis","url":"https://mxd.codes/articles/how-to-create-a-mapnik-stylesheet-for-displaying-any-data-from-postgre-sql-post-gis","title":"How to create a Mapnik stylesheet for displaying any data from PostgreSQL/PostGIS","summary":"In this article I want to show you how you can build your own Mapnik stylesheet for displaying any data from PostgreSQL/PostGIS. The Mapnik Stylesheet XML can be used for a tile-server with your custom style.","content_html":"On 2021-03-02 I started tracking my current location with OwnTracks and Strapi (How i constantly track my location and display a web-map with all the locations) and created a /map which shows all the locations I have ever been to.
\nRecently I reached one million records, and unfortunately that meant every user had to download about 50MB of location data before the map could be rendered. \nSo I definitely needed a much faster solution and decided to render tiles server-side and serve them for displaying the locations. \nIn the end I built a tile server following Manually building a tile server (20.04 LTS) with a custom Mapnik stylesheet.
\nIn this article I want to show you how you can build your own Mapnik stylesheet for displaying any data from PostgreSQL/PostGIS.
\nThe Mapnik stylesheet XML is not very handy to write by hand, but fortunately there are some tools which will help you create one. After some research I decided to go with TileMill, an open-source studio for designing maps. It offers a simple UI and (more importantly) the possibility to export the created map style as a Mapnik stylesheet.
\nActually I couldn't get the latest version running, so I decided to go with TileMill v0.10.1, which offers everything you will need to create a stylesheet. At https://tilemill-project.github.io/tilemill/docs/win-install/ you can download and install TileMill.
\nAfter you have installed TileMill you can create a new project and uncheck 'default data'. Otherwise TileMill will create some kind of basemap with a default style.
\n
Now you will have to add some data to the project. To do that, click the layer button and add a new layer.
\n
Switch to the PostGIS tab and fill out ID, Connection, Unique key field, Geometry field and SRS. It's actually pretty straightforward. \nKeep in mind that TileMill doesn't like large datasets, so I would recommend setting an extent.
\nIn SRS you have to specify the PROJ.4 projection string. If you don't know it, have a look at https://epsg.io/[epsg-code], where you replace [epsg-code] with the EPSG code of your coordinate system, and scroll down to PROJ.4, where you can copy the projection string.
\n
Afterwards save the layer and the project. You won't see your data/features yet because you need to define a style for them. You can style the map with CartoCSS, which is very similar to CSS. For example, I specified my layer id as locations and I am styling points, so the CartoCSS properties could look like the following.
\n#locations {\n [vel >= 0] { marker-width:4; marker-fill: #f45; marker-line-color: #813; marker-allow-overlap: true; }\n [vel >= 50] { marker-width:6; marker-fill: #f45; marker-line-color: #813; marker-allow-overlap: true; }\n [vel >= 100] { marker-width:8; marker-fill: #f45; marker-line-color: #813; marker-allow-overlap: true; }\n}\n\nWith CartoCSS you can also dynamically style your features depending on attribute values: here the marker width grows with the vel (velocity) attribute. For more information about CartoCSS have a look at Styling data from TileMill.
\nWhen you are happy with your style you can export it as Mapnik XML and use it, for example, for your tile server.
\n
At /map you can see my current stylesheet in 'action'.
","date_published":"2022-01-25T23:29:14.045Z","date_modified":"2025-02-01T16:42:47.864Z","tags":["gis","web-mapping","d-1"],"image":"https://mxd.codes/content/posts/published/how-to-create-a-mapnik-stylesheet-for-displaying-any-data-from-postgre-sql-post-gis/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-create-a-mapnik-stylesheet-for-displaying-any-data-from-postgre-sql-post-gis/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/links/tutorials-for-making-3-d-looking-maps-with-blender-and-qgis","url":"https://mxd.codes/links/tutorials-for-making-3-d-looking-maps-with-blender-and-qgis","title":"Tutorials for making 3D-looking maps with Blender and QGIS","external_url":"https://github.com/joewdavies/geoblender","content_html":"This guide will help you prepare DEM data using QGIS in order to render 3D looking shaded-relief maps in Blender.
","date_published":"2021-10-04T23:04:11.679Z","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/location-tracking-with-colota-postgresql-martin-and-maplibre","url":"https://mxd.codes/articles/location-tracking-with-colota-postgresql-martin-and-maplibre","title":"Location Tracking and Visualization with Colota, PostgreSQL, Martin Tile Server and MapLibre","summary":"How I track my location continuously with Colota, my self-developed Android app, store it in PostgreSQL with PostGIS, serve vector tiles via Martin and render an interactive map with MapLibre GL JS.","content_html":"Inspired by Aaron Parecki who has been tracking his location since 2008 with an iPhone app and a server-side tracking API, I decided to build a similar system, but entirely with tools I control.
\nMy goal: continuously track my location using my Android phone, store the data in a PostgreSQL database, and visualize all historical locations on a web map. Over time the stack evolved significantly. The original setup relied on OwnTracks, a Node.js webhook, GeoServer, MapProxy and OpenLayers. Today I use:
\nTo install PostgreSQL with PostGIS support, first add the repository and install the packages:
\nsudo apt update\nsudo apt install gnupg2 wget vim\nsudo sh -c 'echo \"deb https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main\" > /etc/apt/sources.list.d/pgdg.list'\ncurl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg\nsudo apt update\nsudo apt-get -y install postgresql postgresql-contrib postgis\n\nStart and enable the service:
\nsudo systemctl start postgresql\nsudo systemctl enable postgresql\n\nConnect and create a database and user:
\nsudo su postgres\npsql\n\nCREATE DATABASE locations;\nCREATE USER <username> WITH ENCRYPTED PASSWORD '<password>';\nGRANT ALL PRIVILEGES ON DATABASE locations TO <username>;\n\nCreate the locations table:
\nCREATE TABLE public.locations (\n id bigserial NOT NULL,\n created_at timestamptz NULL DEFAULT CURRENT_TIMESTAMP,\n lat float8 NULL,\n lon float8 NULL,\n acc int4 NULL,\n alt int4 NULL,\n batt int4 NULL,\n bs int4 NULL,\n cog numeric(10, 2) NULL,\n rad int4 NULL,\n t varchar(255) NULL,\n tid varchar(255) NULL,\n tst int4 NULL,\n vac int4 NULL,\n vel int4 NULL,\n p numeric(10, 2) NULL,\n conn varchar(255) NULL,\n topic varchar(255) NULL,\n inregions jsonb NULL,\n ssid varchar(255) NULL,\n bssid varchar(255) NULL\n);\n\nThe key columns are lat, lon and alt. The others (velocity, battery level, connection type) are used on my /now page to show what I am currently up to.
Enable the PostGIS extension and create a view that exposes a proper geometry column:
\n\\c locations\nCREATE EXTENSION postgis;\n\nCREATE OR REPLACE VIEW public.locations_geom AS\nSELECT\n id,\n lat,\n lon,\n alt,\n vel,\n ST_SetSRID(ST_MakePoint(lon, lat, alt::double precision), 4326) AS geom\nFROM locations;\n\nThis view is what Martin will query to generate vector tiles.
\nColota is the Android app I built to replace OwnTracks. It is written in React Native (TypeScript + Kotlin) and sends location payloads in the OwnTracks HTTP format, which makes it compatible with the webhook described below. It supports tracking profiles, geofencing and multiple backends including custom endpoints.
\nTo receive location payloads from Colota and write them to PostgreSQL, I run a small Node.js HTTP server. Colota sends a JSON POST request for each location update in the OwnTracks format. The server parses the body and inserts the relevant fields into the locations table.
const http = require(\"http\");\nconst { Pool } = require(\"pg\");\n\nconst pool = new Pool({\n user: \"username\",\n database: \"locations\",\n password: \"password\",\n port: 5432,\n host: \"localhost\",\n});\n\nasync function insertData(body) {\n try {\n await pool.query(\n \"INSERT INTO locations (lat, lon, acc, alt, batt, bs, tst, vac, vel, conn, topic, inregions, ssid, bssid) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)\",\n [body.lat, body.lon, body.acc, body.alt, body.batt, body.bs, body.tst, body.vac, body.vel, body.conn, body.topic, body.inregions, body.ssid, body.bssid]\n );\n } catch (error) {\n console.error(error);\n }\n}\n\nconst server = http.createServer((request, response) => {\n let body = [];\n if (request.method === \"POST\") {\n request.on(\"data\", (chunk) => body.push(chunk)).on(\"end\", () => {\n insertData(JSON.parse(Buffer.concat(body).toString()));\n });\n }\n response.end();\n});\n\nserver.listen(9001);\n\nThe server listens on port 9001. Point Colota at http://yourserverip:9001 and location data will start flowing into the database. In production, add an API key check in the request handler to restrict access to authorized clients only.
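Since the handler inserts whatever JSON it receives, it can also help to validate the payload before writing it to the database. A minimal sketch based on the OwnTracks-style fields shown above; isValidLocation is a hypothetical helper, not part of my server:

```javascript
// Hypothetical sanity check for an OwnTracks-format location payload
// before it is written to the locations table.
function isValidLocation(body) {
  return (
    body !== null &&
    typeof body === 'object' &&
    typeof body.lat === 'number' && body.lat >= -90 && body.lat <= 90 &&
    typeof body.lon === 'number' && body.lon >= -180 && body.lon <= 180 &&
    Number.isInteger(body.tst) && body.tst > 0 // Unix timestamp of the fix
  );
}
```

In the request handler, insertData would then only be called when isValidLocation(parsedBody) returns true, so malformed or truncated requests never reach PostgreSQL.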
Martin is a Rust-based tile server that reads directly from PostGIS and serves vector tiles (MVT) with no heavy backend required.
\nRun Martin with Docker Compose:
\nservices:\n martin:\n image: ghcr.io/maplibre/martin:latest\n container_name: martin\n restart: always\n ports:\n - \"3000:3000\"\n environment:\n DATABASE_URL: postgresql://<username>:<password>@<host>/locations\n command:\n - --listen-addresses=0.0.0.0:3000\n\nMartin will automatically detect all tables and views with a geometry column and expose them as tile endpoints. The locations_geom view created earlier becomes available at:
https://your-server/martin/locations_geom/{z}/{x}/{y}\n\nYou can verify it is working by opening the tilejson endpoint:
\nhttps://your-server/martin/locations_geom\n\nMapLibre GL JS is an open-source fork of Mapbox GL JS that renders vector tiles using WebGL. I use OpenFreeMap as the basemap, which provides free hosted vector tiles based on OpenStreetMap.
\nThe map component for my website reads the current theme (data-theme attribute on <html>) and switches between light and dark basemap styles accordingly:
import { useEffect, useRef } from \"react\";\nimport maplibregl from \"maplibre-gl\";\nimport \"maplibre-gl/dist/maplibre-gl.css\";\n\nconst STYLE_LIGHT = \"https://tiles.openfreemap.org/styles/bright\";\nconst STYLE_DARK = \"https://tiles.openfreemap.org/styles/dark\";\n\nfunction getStyle(): string {\n const attr = document.documentElement.getAttribute(\"data-theme\");\n const prefersDark = window.matchMedia(\"(prefers-color-scheme: dark)\").matches;\n return attr === \"dark\" || (!attr && prefersDark) ? STYLE_DARK : STYLE_LIGHT;\n}\n\nconst LiveMap = ({ coords }: { coords?: { lat: number; lon: number } }) => {\n const mapElement = useRef<HTMLDivElement>(null);\n\n useEffect(() => {\n if (!mapElement.current) return;\n\n const getPrimaryColor = () =>\n getComputedStyle(document.documentElement)\n .getPropertyValue(\"--primary-color\")\n .trim() || \"#39b5e0\";\n\n const addLocationsLayer = (map: maplibregl.Map) => {\n map.addSource(\"locations\", {\n type: \"vector\",\n tiles: [\"https://your-martin-server/locations_geom/{z}/{x}/{y}\"],\n minzoom: 0,\n maxzoom: 16,\n });\n map.addLayer({\n id: \"locations\",\n type: \"circle\",\n source: \"locations\",\n \"source-layer\": \"locations_geom\",\n paint: {\n \"circle-radius\": 3,\n \"circle-color\": getPrimaryColor(),\n \"circle-opacity\": 0.7,\n },\n });\n };\n\n const map = new maplibregl.Map({\n container: mapElement.current,\n style: getStyle(),\n center: [coords?.lon ?? -15.439457, coords?.lat ?? 
28.128124],\n zoom: 10,\n });\n\n map.on(\"load\", () => addLocationsLayer(map));\n\n // Switch style when theme changes\n const observer = new MutationObserver(() => {\n map.setStyle(getStyle());\n map.once(\"styledata\", () => {\n if (!map.getSource(\"locations\")) addLocationsLayer(map);\n map.setPaintProperty(\"locations\", \"circle-color\", getPrimaryColor());\n });\n });\n\n observer.observe(document.documentElement, {\n attributes: true,\n attributeFilter: [\"data-theme\"],\n });\n\n return () => {\n observer.disconnect();\n map.remove();\n };\n }, []);\n\n return <div style={{ height: \"100%\", width: \"100%\" }} ref={mapElement} />;\n};\n\nexport default LiveMap;\n\nThe MutationObserver watches for data-theme changes and swaps the basemap style on the fly, then re-adds the locations layer once the new style has loaded.
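The branch inside getStyle is the part most likely to regress when the theme logic changes, so it can be useful to extract it as a pure function. A sketch under the same assumptions as the component above (resolveStyle is a hypothetical helper, not part of the component):

```javascript
const STYLE_LIGHT = "https://tiles.openfreemap.org/styles/bright";
const STYLE_DARK = "https://tiles.openfreemap.org/styles/dark";

// Pure version of the style selection: attr is the data-theme attribute
// value (or null), prefersDark the result of the prefers-color-scheme query.
// An explicit attribute wins; the system preference is only a fallback.
function resolveStyle(attr, prefersDark) {
  return attr === "dark" || (!attr && prefersDark) ? STYLE_DARK : STYLE_LIGHT;
}
```

getStyle then reduces to reading the attribute and media query and delegating to resolveStyle, which can be unit-tested without a DOM.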
The result is the interactive map on /map. It is limited to Gran Canaria for privacy reasons.
\nTyler Sticka published an article on Cloud Four where he presents an awesome library he made which allows you to create SVG placeholders for img elements.
","date_published":"2021-07-30T20:54:53.706Z","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/bruennstein-1","url":"https://mxd.codes/photos/bruennstein-1","title":"Brünnstein","content_html":"Went on top of the Brünnstein via the Dr.-Julius-Mayr-Weg and took a rest at the Brünnsteinhaus 🍰.
","date_published":"2021-07-21T18:31:27.825Z","image":"https://mxd.codes/content/photos/bruennstein-1/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/bruennstein-1/photo-1.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210721_120725_76dd2f7513.jpg"},{"url":"https://mxd.codes/content/photos/bruennstein-1/photo-2.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210721_101803_07142b724d.jpg"},{"url":"https://mxd.codes/content/photos/bruennstein-1/photo-3.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210721_113524_eaff00f9e9.jpg"},{"url":"https://mxd.codes/content/photos/bruennstein-1/photo-4.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210721_114207_17e0a4e1da.jpg"},{"url":"https://mxd.codes/content/photos/bruennstein-1/photo-5.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210721_103939_21012d7c1e.jpg"},{"url":"https://mxd.codes/content/photos/bruennstein-1/photo-6.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210721_114159_85779758d6.jpg"},{"url":"https://mxd.codes/content/photos/bruennstein-1/photo-7.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210721_125420_e03ab1f8a0.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/hiking-up-the-bruennstein","url":"https://mxd.codes/photos/hiking-up-the-bruennstein","title":"Hiking up the Brünnstein","content_html":"","date_published":"2021-07-20T12:14:59.306Z","image":"https://mxd.codes/content/photos/hiking-up-the-bruennstein/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/hiking-up-the-bruennstein/photo-1.jpg","mime_type":"image/jpeg","title":"35618167_943146972529735_8494513286404898816_n_23cc6300b7.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/links/checklist-the-a11-y-project","url":"https://mxd.codes/links/checklist-the-a11-y-project","title":"Checklist - The A11Y Project","external_url":"https://www.a11yproject.com/checklist/","content_html":"The 
creator/s of a11yproject.com have created a checklist for reviewing a website in terms of accessibility.
","date_published":"2021-07-17T09:31:51.230Z","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/kranzhorn-05-21","url":"https://mxd.codes/photos/kranzhorn-05-21","title":"Kranzhorn 05/21","content_html":"Pretty easy climb with phenomenal view. ⛰️
","date_published":"2021-05-28T08:34:27.743Z","image":"https://mxd.codes/content/photos/kranzhorn-05-21/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/kranzhorn-05-21/photo-1.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210530_100622_66c9699a84.jpg"},{"url":"https://mxd.codes/content/photos/kranzhorn-05-21/photo-2.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210530_092227_59e6de9300.jpg"},{"url":"https://mxd.codes/content/photos/kranzhorn-05-21/photo-3.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210530_085208_2f701b9c36.jpg"},{"url":"https://mxd.codes/content/photos/kranzhorn-05-21/photo-4.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210530_100628_ccc50fa4bf.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/bike-trip-in-the-chiemgau-alps","url":"https://mxd.codes/photos/bike-trip-in-the-chiemgau-alps","title":"Bike trip in the Chiemgau Alps","content_html":"Bike trip in the Chiemgau Alps.\nHad to carry our bikes for a bit because of too much snow.
","date_published":"2021-05-18T08:37:52.975Z","image":"https://mxd.codes/content/photos/bike-trip-in-the-chiemgau-alps/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/bike-trip-in-the-chiemgau-alps/photo-1.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210404_130242_3dd824b3af.jpg"},{"url":"https://mxd.codes/content/photos/bike-trip-in-the-chiemgau-alps/photo-2.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210404_140535_6709e5924c.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/after-work-session-with-my-new-scott-spark","url":"https://mxd.codes/photos/after-work-session-with-my-new-scott-spark","title":"After work session with my new Scott Spark","content_html":"After work session with my new Scott Spark 😎
\nFirst ride with my new spark 🌄 #ridemore #scottspark #scottbikes #thatshowweroll
","date_published":"2021-04-16T08:36:28.141Z","image":"https://mxd.codes/content/photos/first-ride-with-my-new-spark/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/first-ride-with-my-new-spark/photo-1.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210324_170922_5f046148ed.jpg"},{"url":"https://mxd.codes/content/photos/first-ride-with-my-new-spark/photo-2.jpg","mime_type":"image/jpeg","title":"cover_IMG_20210324_165804_585304ff9e.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/fetching-and-storing-activities-from-garmin-connect-with-strapi-and-visualizing-them-with-next-js","url":"https://mxd.codes/articles/fetching-and-storing-activities-from-garmin-connect-with-strapi-and-visualizing-them-with-next-js","title":"Fetching and storing activities from Garmin Connect with Strapi and visualizing them with NextJS","summary":"Step-by-step guide explaining how to fetch data from Garmin Connect, store it in Strapi and visualize it with NextJS and React-Leaflet.","content_html":"With getting into the IndieWeb I started to reflect about myself and how I can actually own my data instead of giving it to so-called silos.
\nAs a passionate (mountain) bike rider I was thinking about how I could get my data back from tracking apps: whenever I go for a ride and track the route with Strava and/or Komoot, the data is stored by them.\nConsidering that every route is tracked on a Garmin device anyway and then synchronised to the apps, I decided to have a look at the Garmin Connect/Activity API.
\nUnfortunately the official Garmin Activity API is only available to approved business developers.
\nBut after some searching I found the npm package garmin-connect, which allows you to connect to Garmin Connect for sending and receiving activity data.
\nYou can install the package with
\nnpm install garmin-connect\n\nor
\nyarn add garmin-connect\n\nand use it like
\nconst { GarminConnect } = require('garmin-connect');\n// Create a new Garmin Connect Client\nconst GCClient = new GarminConnect();\n// Uses credentials from garmin.config.json or uses supplied params\nawait GCClient.login('my.email@example.com', 'MySecretPassword');\nconst userInfo = await GCClient.getActivities());\n\nI stored the email and the password for the login in environment variables and used them with
\nconst { GarminConnect } = require('garmin-connect');\nconst GCClient = new GarminConnect();\nawait GCClient.login(process.env.GARMIN_EMAIL, process.env.GARMIN_PWD);\nconst activities = await GCClient.getActivities();\n\nAfterwards I experimented a bit with Garmin Connect and found out that there are some very strict rate limits. \nAfter roughly ~50 requests in one minute I couldn't get any data anymore and had to wait for some time (maybe an hour? I am not sure) until requests were successful again.
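To stay under that observed limit when fetching many activity details, requests can be spaced out. This is a minimal sketch, not part of the original script; `client` stands in for the GCClient instance, and the 1500 ms delay is an assumption based on the roughly 50 requests per minute observed above.

```javascript
// Resolve after the given number of milliseconds
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Fetch details for each activity sequentially with a pause between calls
const fetchDetailsThrottled = async (client, activities, delayMs = 1500) => {
  const details = [];
  for (const activity of activities) {
    // getActivity is the same call used later in this article
    details.push(await client.getActivity({ activityId: activity.activityId }));
    await sleep(delayMs); // ~40 requests per minute at 1500 ms
  }
  return details;
};
```

A slower loop is less convenient than `Promise.all`, but it keeps the account from being temporarily locked out.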
\nIn general you can probably do way more with Garmin Connect than you will need, for example:
\nI used only GCClient.getActivities(); to get all activities and GCClient.getActivity({ activityId: id }); to get the details of an activity (like the spatial data representing the route, start point and end point).
To be able to store the data in Strapi I created a new content type collection activities with the following fields/attributes:
\n
Afterwards new entries for activities can be created.
\n\n\nStrapi has documentation which explains how to fetch external data and create entries with it: Fetching external data.
\n
To get the data from Garmin Connect into Strapi I created a function getGarminConnectActivities.js for Strapi (https://gist.github.com/dietrichmax/306b36abd5a9d1ac0c938adcd15f2f69)
The function will take care of:
\nand basically looks like this:
\nmodule.exports = async () => {\n await GCClient.login(process.env.GARMIN_USERNAME, process.env.GARMIN_PWD)\n const activities = await GCClient.getActivities()\n const existingActivities = await getExistingActivities()\n if (activities) {\n activities.map((activity) => {\n const isExisting = existingActivities.includes(activity.activityId)\n isExisting ? console.log(activity.activityId + \" already exists\") : createEntry(activity)\n })\n } else {\n console.log(\"no activities found\")\n }\n}\n\nAfter all activities from Garmin are fetched, I map through them to
\nThe existing activities in my CMS are fetched with
\nconst getExistingActivities = async () => {\n const existingActivityIds = []\n const activities = await axios.get(`https://strapi.url/activities`)\n\n activities.data.map((activity) => {\n existingActivityIds.push(activity.activityID)\n })\n return existingActivityIds\n}\n\nand the activityIds (originally from Garmin Connect) are returned to be able to check if an entry already exists.\nIf the entry doesn't exist, details for the missing activity are fetched and a new entry is created with:
\nconst createEntry = async (activity) => {\n const details = await GCClient.getActivity({ activityId: activity.activityId });\n await strapi.query('activity').create({\n activityID: activity.activityId,\n activityName: activity.activityName,\n beginTimestamp: activity.beginTimestamp,\n activityType: activity.activityType,\n distance: activity.distance,\n duration: activity.duration,\n elapsedDuration: activity.elapsedDuration,\n movingDuration: activity.movingDuration,\n elevationGain: activity.elevationGain,\n elevationLoss: activity.elevationLoss,\n minElevation: activity.minElevation,\n maxElevation: activity.maxElevation,\n sportTypeId: activity.sportTypeId,\n averageSpeed: activity.averageSpeed * 3.6, // m/s -> km/h\n maxSpeed: activity.maxSpeed * 3.6, // m/s -> km/h\n startLatitude: activity.startLatitude,\n startLongitude: activity.startLongitude,\n endLatitude: activity.endLatitude,\n endLongitude: activity.endLongitude,\n details: details\n })\n}\n\nYou could save way more, but I tried to cut it down to the fields I really need or eventually will need.
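A small optional aside: the existence check above calls Array.includes() once per activity, which is O(n) each time. Building a Set of ids first makes each lookup O(1). `filterNewActivities` is a hypothetical helper, not part of the original function:

```javascript
// Return only the activities whose ids are not yet in the CMS.
// A Set gives constant-time membership checks instead of scanning
// the whole array of existing ids for every activity.
const filterNewActivities = (activities, existingIds) => {
  const idSet = new Set(existingIds);
  return activities.filter((activity) => !idSet.has(activity.activityId));
};
```

For a handful of activities this makes no practical difference, but it keeps the sync fast as the archive grows.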
\nThe only thing missing now is some automatic triggering. For this you can use cron jobs in Strapi (/config/functions/cron.js).
\nmodule.exports = {\n// Add your own logic here (e.g. send a queue of email, create a database backup, etc.).\n\n '0 0 18 * * *': () => {\n strapi.config.functions.getGarminConnectActivities();\n },\n};\n\nI decided to trigger the function every day at 6 pm so I can have a look at my activities in the evening. 😎
\nThat's it for the 'backend' part.
\nNext step is to visualize the data in NextJS.
\nI really like the look of Komoot's embeddable tours, so I decided to create a similar looking preview for my activities in the posts feed and activities feed.
\nSo the preview should consist of
\nThe component looks like this at the moment:
\n
In the activityType object you can find typeId, which corresponds to the type of the activity, e.g. cycling, running etc.\nI created a small function which returns an icon from react-icons visualizing the activity type.
import { FaRunning, FaBiking } from 'react-icons/fa';\n\nconst getTypeIcon = (activity) => {\n if (activity.activityType.typeId == 5) {\n return <FaBiking/>\n } else if (activity.activityType.typeId == 15) {\n return <FaRunning/>\n }\n}\n\ngetTypeIcon(activity)\n\nSince the duration is given in seconds and I wanted it displayed like 1h 10m 12s, a small helper is needed, which looks like the following:
\nconst secondsToHms = (s) => {\n const hours = Math.floor(s / 3600)\n const minutes = Math.floor((s % 3600) / 60)\n const seconds = s % 60\n return (`${hours}h ${minutes}min ${seconds}s`)\n}\nsecondsToHms(activity.duration)\n\nThen I created a small map with react-leaflet displaying
\nTherefore i created a new map-component:
\nimport React, { useEffect, useState } from \"react\"\nimport { Marker, MapContainer, TileLayer, LayersControl, Polyline } from \"react-leaflet\";\n\nconst Map = (data) => {\n const geo = data.data\n const style = { \n color: '#11a9ed',\n weight: 5\n }\n\n const bounds = [[geo.maxLat, geo.maxLon], [geo.minLat, geo.minLon]]\n return (\n <MapContainer\n style={{ height: \"500px\", width: \"100%\" }}\n bounds={bounds} \n scrollWheelZoom={false}\n >\n <LayersControl position=\"topright\">\n <LayersControl.BaseLayer checked name=\"OpenStreetMap.Mapnik\">\n <TileLayer \n url='https://{s}.basemaps.cartocdn.com/rastertiles/voyager/{z}/{x}/{y}{r}.png'\n attribution ='© <a href=\"https://www.openstreetmap.org/copyright\">OpenStreetMap</a> contributors © <a href=\"https://carto.com/attributions\">CARTO</a>'\n\n />\n </LayersControl.BaseLayer>\n <LayersControl.BaseLayer name=\"Esri World Imagery\">\n <TileLayer\n attribution='Tiles © Esri — Source: Esri, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, UPR-EGP'\n url=\"https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}\"\n />\n </LayersControl.BaseLayer>\n\n <Marker id=\"start\" position={geo.startPoint}/>\n <Polyline pathOptions={style} positions={geo.polyline} />\n <Marker id=\"end\" position={geo.endPoint}/>\n\n </LayersControl>\n </MapContainer>\n );\n};\n\nexport default Map;\n\n\nLuckily the data from Garmin Connect already has exactly the structure we need to create the map: the coordinates for the polyline and the two points.
\nThe coordinates can be found in geoPolylineDTO in the activity details
\n
With maxLat, maxLon, minLat and minLon i created the bounds which will set the default view for the map when passed to the MapContainer.
\nconst bounds = [[geo.maxLat, geo.maxLon], [geo.minLat, geo.minLon]]\n\n<MapContainer\n style={{ height: \"200px\", width: \"100%\" }}\n bounds={bounds} \n scrollWheelZoom={false}\n >\n.\n.\n\nThen I added the LayersControl to be able to toggle between the two tile layers
\nAfter that I just created two markers and a polyline with the existing objects startPoint, endPoint and polyline.
.\n.\n <Marker id=\"start\" position={geo.startPoint}/>\n <Polyline pathOptions={style} positions={geo.polyline} />\n <Marker id=\"end\" position={geo.endPoint}/>\n.\n.\n\nYou can find several other tilelayers in leaflet-providers-preview.
\nFor an example of the preview head over to /activities.\nYou can find the code for the activity preview in my GitHub repository.
\nFor the actual post of the activity I made the map a bit larger and added some more metrics for now.
\n(osm)\n
(Aerial view)\n
In this article Francesco Schwarz explains how he fetches data from Garmin Connect and uses it to display runs on his website.
","date_published":"2021-03-21T09:47:22.428Z","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/links/creating-accessible-forms","url":"https://mxd.codes/links/creating-accessible-forms","title":"Creating Accessible Forms","external_url":"https://webaim.org/techniques/forms/controls","content_html":"WebAIM provides some very interesting documentation on how to create a web with web accessiblity in mind.
","date_published":"2021-03-01T07:14:35.402Z","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/links/using-webmentions-in-eleventy","url":"https://mxd.codes/links/using-webmentions-in-eleventy","title":"Using Webmentions in Eleventy","external_url":"https://mxb.dev/blog/using-webmentions-on-static-sites/","content_html":"A brilliant post by Max Böck on how to get incoming and outgoing Webmentions for example via Bridgy and webmention.io and how to handle and filter the data to be able to finally implement webmentions on your website.
","date_published":"2021-02-14T21:36:46.226Z","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/syntax-highlighting-with-prism-and-next-js","url":"https://mxd.codes/articles/syntax-highlighting-with-prism-and-next-js","title":"Syntax Highlighting with Prism.js and Next.js","summary":"Prism.js is a compact, expandable syntax highlighter that was developed with modern web standards in mind.","content_html":"Integrating syntax highlighting into a Next.js project has evolved with the latest Next.js versions (12–14) and React Server Components. This guide shows a modern, performant approach to syntax highlighting using Prism.js, including lazy highlighting, a copy-to-clipboard button, and MDX integration.
\nInstall Prism.js and a tree-shakable icon library for copy buttons:
\nnpm install prismjs\nnpm install https://github.com/react-icons/react-icons/releases/download/v5.4.0/react-icons-all-files-5.4.0.tgz\n\nThe @react-icons/all-files package allows importing only the icons you need, keeping bundle size small.
In your app/layout.tsx (or _app.tsx if using the Pages Router), import your Prism CSS:
import \"@/styles/prism.css\";\n\nDownload a custom Prism CSS theme here: Prism Download. Save it under styles/prism.css.
Highlight code only when visible using IntersectionObserver:
// SyntaxHighlighter.tsx\n\"use client\";\n\nimport Prism from \"prismjs\";\nimport { useEffect, useRef } from \"react\";\n\n// Import only needed Prism languages\nimport \"prismjs/components/prism-bash\";\nimport \"prismjs/components/prism-jsx\";\nimport \"prismjs/components/prism-tsx\";\nimport \"prismjs/components/prism-python\";\nimport \"prismjs/components/prism-sql\";\nimport \"prismjs/components/prism-yaml\";\nimport \"prismjs/components/prism-nginx\";\nimport \"prismjs/components/prism-git\";\nimport \"prismjs/components/prism-json\";\nimport \"prismjs/components/prism-docker\";\nimport \"prismjs/components/prism-powershell\";\n\ninterface SyntaxHighlighterProps {\n language?: string;\n code?: string;\n}\n\nconst SyntaxHighlighter = ({ language, code }: SyntaxHighlighterProps) => {\n const ref = useRef<HTMLDivElement>(null);\n\n useEffect(() => {\n const observer = new IntersectionObserver(\n (entries) => {\n entries.forEach((entry) => {\n if (entry.isIntersecting) {\n Prism.highlightAllUnder(entry.target);\n }\n });\n },\n { rootMargin: \"100%\" }\n );\n\n if (ref.current) observer.observe(ref.current);\n return () => {\n if (ref.current) observer.unobserve(ref.current);\n };\n }, []);\n\n return (\n <div ref={ref}>\n <pre className={`language-${language}`} tabIndex={0}>\n <code className={`language-${language}`}>{code?.trim() ?? \"\"}</code>\n </pre>\n </div>\n );\n};\n\nexport default SyntaxHighlighter;\n\nProvide instant copy feedback with icons:
\n// CopyCodeButton.tsx\n\"use client\";\n\nimport { FaCopy } from \"@react-icons/all-files/fa/FaCopy\";\nimport { FaCheck } from \"@react-icons/all-files/fa/FaCheck\";\nimport { useState } from \"react\";\nimport styles from \"./CopyCodeButton.module.css\";\n\nexport default function CopyCodeButton({ children }) {\n const [copied, setCopied] = useState(false);\n\n const handleClick = () => {\n navigator.clipboard.writeText(children.props.children);\n setCopied(true);\n setTimeout(() => setCopied(false), 2000);\n };\n\n return (\n <div className={styles.copyButton} onClick={handleClick} title=\"Copy code\">\n <div className={styles.copyWrapper}>\n {copied ? (\n <>\n <FaCheck className={`${styles.icon} ${styles.iconCopied}`} /> Copied!\n </>\n ) : (\n <>\n <FaCopy className={`${styles.icon} ${styles.iconCopy}`} /> Copy code\n </>\n )}\n </div>\n </div>\n );\n}\n\nOverride the code component in your MDX renderer:
// renderers.tsx\nimport SyntaxHighlighter from \"./SyntaxHighlighter\";\nimport CopyCodeButton from \"./CopyCodeButton\";\nimport styles from \"./Markdown.module.css\";\n\nexport const markdownComponents = {\n code: ({ inline, className, children, ...props }) => {\n const match = /language-(\\w+)/.exec(className || \"\");\n if (!inline && match) {\n return (\n <div className={styles.codeBlock}>\n <SyntaxHighlighter language={match[1]} code={children} />\n <CopyCodeButton>{children}</CopyCodeButton>\n </div>\n );\n }\n return (\n <code className={styles.defaultCode} {...props}>\n {children}\n </code>\n );\n },\n};\n\nHere’s how to pass your custom renderers to MDX using next-mdx-remote:
// mdxWrapper.tsx\n\"use client\";\n\nimport { MDXRemote, MDXRemoteProps } from \"next-mdx-remote\";\nimport { markdownComponents as renderers } from \"../renderers/renderers\";\nimport styles from \"./mdxWrapper.module.css\";\n\nconst MDXWrapper: React.FC<{ content: MDXRemoteProps }> = ({ content }) => {\n return (\n <div className={`${styles.contentWrapper} markdown`}>\n <MDXRemote {...content} components={renderers} />\n </div>\n );\n};\n\nexport default MDXWrapper;\n\nThis ensures:
\nSyntaxHighlighter, CopyCodeButton) are applied.rehype-prism-plus or rehype-pretty-code) for build-time performance if desired.This approach is fully Next.js 14 / React 18 ready, client-friendly, and keeps bundle size minimal.
","date_published":"2020-09-22T20:48:44.913Z","date_modified":"2025-09-24T17:01:07.834Z","tags":["react","next-js"],"image":"https://mxd.codes/content/posts/published/syntax-highlighting-with-prism-and-next-js/cover.png","banner_image":"https://mxd.codes/content/posts/published/syntax-highlighting-with-prism-and-next-js/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/hosting-next-js-private-server-pm2-github-webhooks-ci-cd","url":"https://mxd.codes/articles/hosting-next-js-private-server-pm2-github-webhooks-ci-cd","title":"Hosting NextJS on a private server using PM2 and Github webhooks as CI/CD","summary":"This article shows you how can host your Next.js site on a (virtual private) server with Nginx, a CI/CD pipeline via PM2 and Github Webhooks.","content_html":"This article shows you how can host your Next.js site on a (virtual private) server with Nginx and a CI/CD pipeline via PM2 and Github Webhooks.
\nFirst of all you need a server with root access.\nI strongly recommend having a look at the guide \"Initial Server Setup with Ubuntu 18.04\" from the DigitalOcean community, which will lead you through the process of:
\nAfter you have done that you can continue by installing all necessary dependencies on your server.
\nAgain there is a guide by DigitalOcean which will help you install Node.js using a PPA.
\nAfter completing
\nyou will have to change npm's default directory.
\n.npm-global directory and set the path to this directory for node_modules:cd ~\nmkdir ~/.npm-global\nnpm config set prefix '~/.npm-global'\n\n~/.profile and add the following line:sudo nano ~/.profile\n\n# set PATH so global node modules install without permission issues\nexport PATH=~/.npm-global/bin:$PATH\n\nNow you have to update your system variables:
\nsource ~/.profile\n\nNow you should be able to check your installed Node.js version with:
\nnode -v\n\nCheck if git is already installed with:
\ngit --version\n\nIf it isn't installed yet you can install it with
\nsudo apt install git\n\nand configure Git with
\ngit config --global user.name \"Your Name\"\ngit config --global user.email \"youremail@domain.com\"\n\nAfter git is installed and configured you can deploy your project by cloning it from Github.
\nIt is important that you are logged in as a non-root user for the following steps.
\ncd ~\ngit clone https://github.com/your-name/your-project-repo.git path\n\nCreate a .env on the server if you are using one locally and copy/paste your content.
After you have deployed your project (optionally with environment variables) you can install all dependencies and build your Next.js site with:
\ncd ./my-project/\nnpm install\nNODE_ENV=production npm run build\n\nNow you should have a copy of your local project/Next.js site on your remote server.
\nNext you are going to setup PM2 which will be used to keep your site alive and restart it after every reboot.
\nYou can install PM2 with:
\nnpm install pm2@latest -g\n\nYou will need to create/configure an ecosystem.config.js file which will restart the default Next.js server.
cd ~\npm2 init\nsudo nano ecosystem.config.js\n\nCopy/paste the template and replace the content.
\nmodule.exports = {\n apps: [\n {\n name: 'next-site',\n cwd: '/home/your-name/my-nextjs-project',\n script: 'npm',\n args: 'start',\n env: {\n NEXT_PUBLIC_...: 'NEXT_PUBLIC_...',\n },\n },\n // optionally a second project\n ],\n};\n\nWith
\ncd ~\npm2 start ecosystem.config.js\n\nyou can start your server, which will run on port 3000 (the Next.js default).
You can always check the status with:
\npm2 status next-site\n\nAfter the server reboots this PM2 should be always automatically be restarted. For that you are going to need a small Startup script which you can also copy/paste.
\ncd ~\npm2 startup systemd\n\n[PM2] Init System found: systemd\n[PM2] To setup the Startup Script, copy/paste the following command:\nsudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u your-name --hp /home/your-name\n\nsudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u your-name --hp /home/your-name\n\n[PM2] Init System found: systemd\nPlatform systemd\n\n. . .\n\n\n[PM2] [v] Command successfully executed.\n+---------------------------------------+\n[PM2] Freeze a process list on reboot via:\n $ pm2 save\n\n[PM2] Remove init script via:\n $ pm2 unstartup systemd\n\npm2 save\n\n[PM2] Saving current process list...\n[PM2] Successfully saved in /home/your-name/.pm2/dump.pm2\n\nIf you reboot your server now with sudo reboot, the script should automatically restart your Next.js site. Give it a try!
One thing missing now is a continuous integration and continuous delivery (CI/CD) pipeline, which you will set up using Github webhooks.
\nTherefore you need to create a new Webhook in your repository.
\nThe following articles provide additional information to the steps below:
\n\nYou need to create a server script which will do something if it is triggered by the Github webhook.
\ncd ~\nmkdir NodeWebHooks\ncd NodeWebHooks\nsudo nano webhook.js\n\nThe script is going to create a server running on Port 8100. \n(Your Github webhook should be of course sending the webhook to something like http://server-ip:8100.)
If it gets triggered by a webhook it will
\n~/my-nextjs-project/, \nconst secret = \"your-secret-key\";\nconst repo = \"~/my-nextjs-project/\";\n\nconst http = require('http');\nconst crypto = require('crypto');\nconst exec = require('child_process').exec;\n\nconst BUILD_CMD = 'npm install && NODE_ENV=production npm run build';\nconst PM2_CMD = 'pm2 restart next-site';\n\nhttp.createServer(function (req, res) {\n req.on('data', function(chunk) {\n let sig = \"sha1=\" + crypto.createHmac('sha1', secret).update(chunk.toString()).digest('hex');\n\n if (req.headers['x-hub-signature'] == sig) {\n // BUILD_CMD already runs npm install, so it is not repeated here\n exec('cd ' + repo + ` && git pull && ${BUILD_CMD} && ${PM2_CMD}`);\n }\n });\n\n res.end();\n}).listen(8100);\n\nYou will need to allow communication on port 8100 with:
sudo ufw allow 8100/tcp\nsudo ufw enable\n\nCommand may disrupt existing ssh connections. Proceed with operation (y|n)? y\nFirewall is active and enabled on system startup\n\nEarlier you setup PM2 to restart the services (your Next.js site) whenever the server reboots or is started. You will now do the same for the webhook script.
\necho $PATH\n\n/home/your-name/.npm-global/bin:/home/your-name/bin:/home/your-name/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin\n\ncd ~\nsudo nano /etc/systemd/system/webhook.service\n\n[Unit]\nDescription=Github webhook\nAfter=network.target\n\n[Service]\nEnvironment=PATH=your_path\nType=simple\nUser=your-name\nExecStart=/usr/bin/nodejs /home/your-name/NodeWebHooks/webhook.js\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n\nsudo systemctl enable webhook.service\nsudo systemctl start webhook\n\nsudo systemctl status webhook\n\nYou can test your webhook with these instructions.
\nThe Next.js server is now running on your-ip:3000 and you have implemented a CI/CD pipeline via PM2 and Github Webhooks, but you still can't access your website via a domain because you need to configure a webserver like Nginx.
\nI am using Cloudflare to manage DNS for my domains but you can do this with any other provider as well.
\nA Records which will point your-domain.com and www.your-domain.com to the IP address of your server.\nAfter that you will need to configure Nginx.
\nThe following instructions are based on How To Install Nginx on Ubuntu 18.04 [Quickstart].
\nsudo apt update\n\nsudo apt install nginx\n\nsudo ufw allow 'Nginx Full'\nsudo ufw delete allow 'Nginx HTTP'\n\nYou should now be able to see the Nginx landing page on http://your_server_ip.
your-domain.com, using the -p flag to create any necessary parent directories:sudo mkdir -p /var/www/your-domain.com/html\n\nsudo chown -R $USER:$USER /var/www/your-domain.com/html\n\numask value, but you can make sure by typing:sudo chmod -R 755 /var/www/your-domain.com\n\n/etc/nginx/sites-available/your-domain.com: sudo nano /etc/nginx/sites-available/your-domain.com\n\nserver {\n # Listen HTTP\n listen 80;\n listen [::]:80;\n\n server_name your-domain.com www.your-domain.com;\n\n # Redirect HTTP to HTTPS\n return 301 https://$host$request_uri;\n}\n\nserver {\n # Listen HTTPS\n listen 443 ssl;\n listen [::]:443 ssl;\n\n server_name your-domain.com www.your-domain.com;\n\n # SSL config\n include snippets/self-signed.conf;\n include snippets/ssl-params.conf;\n\n # Proxy Config\n location / {\n proxy_pass http://localhost:3000;\n proxy_http_version 1.1;\n proxy_set_header X-Forwarded-Host $host;\n proxy_set_header X-Forwarded-Server $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header Host $http_host;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"Upgrade\";\n proxy_pass_request_headers on;\n }\n location ~ /.well-known {\n allow all;\n }\n}\n\nSave the file and close it when you are finished.
\nsudo ln -s /etc/nginx/sites-available/your-domain.com /etc/nginx/sites-enabled/\n\nsudo nginx -t\n\nsudo systemctl restart nginx\n\nNginx should now be serving content on your domain name. That means if you have a look at http://your-domain.com you should see your Next.js site.
In the end you should deny traffic to port 3000 with:
cd ~\nsudo ufw deny 3000\n\nThis guide is also using parts of Strapi Deployment on DigitalOcean which helped me a lot setting up Strapi and Next.js on a server in a proper way.
","date_published":"2020-09-13T23:00:55.440Z","date_modified":"2025-01-22T15:42:39.278Z","tags":["next-js","ci-cd","selfhosted","cloud"],"image":"https://mxd.codes/content/posts/published/hosting-next-js-private-server-pm2-github-webhooks-ci-cd/cover.png","banner_image":"https://mxd.codes/content/posts/published/hosting-next-js-private-server-pm2-github-webhooks-ci-cd/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/build-and-deploy-your-gatsby-site-with-google-cloud-build-to-firebase","url":"https://mxd.codes/articles/build-and-deploy-your-gatsby-site-with-google-cloud-build-to-firebase","title":"Build and deploy your Gatsby site with Google Cloud Build to Firebase","summary":"Ultimate guide to automate your Gatsby builds with Google Cloud Build, deploying to Firebase and optional Cloud Scheduler.","content_html":"With Google Cloud (Build) you can automate your whole workflow from building your Gatsby site up to deploying your site to Firebase hosting.
\nWhat will you need?
\nTo set up Firebase you will need a Google Cloud account which has billing enabled and at least one project.\nYou can add the Firebase SDK with:
\nnpm install --save firebase\n\nand configure it with
\nfirebase init\n\nIn the process you can setup a new project which will create a new .firebaserc file if it doesn't exist yet and enable Hosting.

As public directory you have to set your public folder. After you have set it you can decline all remaining prompts.
\n\n\nYou can also have a look at the Google Firebase Docs for setting up Firebase.
\n
In the end it will also create a firebase.json, where you can copy/paste the following to optimize your hosting for a Gatsby site.
\n{\n \"hosting\": {\n \"public\": \"public\",\n \"ignore\": [\n \"firebase.json\", \n \"**/.*\", \n \"**/node_modules/**\"],\n \"headers\": [\n {\n \"source\": \"**/*\",\n \"headers\": [\n {\n \"key\": \"cache-control\",\n \"value\": \"cache-control: public, max-age=0, must-revalidate\"\n }\n ]\n },\n {\n \"source\": \"static/**\",\n \"headers\": [\n {\n \"key\": \"cache-control\",\n \"value\": \"public, max-age=31536000, immutable\"\n }\n ]\n },\n {\n \"source\": \"**/*.@(css|js)\",\n \"headers\": [\n {\n \"key\": \"cache-control\",\n \"value\": \"public, max-age=31536000, immutable\"\n }\n ]\n },\n {\n \"source\": \"sw.js\",\n \"headers\": [\n {\n \"key\": \"cache-control\",\n \"value\": \"cache-control: public, max-age=0, must-revalidate\"\n }\n ]\n },\n {\n \"source\": \"page-data/**\",\n \"headers\": [\n {\n \"key\": \"cache-control\",\n \"value\": \"cache-control: public, max-age=0, must-revalidate\"\n }\n ]\n }\n ],\n }\n}\n\nWhen you have build your Gatsy site and public folder is present you can upload it to your Firebase Hosting with:
\nfirebase deploy\n\nIf you want to use your custom domain (for example mxd.codes) you have to go to Hosting in the Firebase console where you will find DNS-records to point your domain to your Firebase Hosting.\nYou can also create a second domain (for example www.gis-netzwerk.com) to redirect automatically to your root domain.
\nIn your project settings you can find your Firebase configurations which look like:
\nconst firebaseConfig = {\n apiKey: \"apiKey\",\n authDomain: \"{.firebaseapp.com\",\n databaseURL: \"https:// projectId.firebaseio.com\",\n projectId: \" projectId\",\n storageBucket: \" projectId.appspot.com\",\n messagingSenderId: \"1\",\n appId: \"2\",\n measurementId: \"G-123\"\n};\n\nYou probably want to save these as environment variables.
\nTo create a CI-/CD-Pipeline you have to activate Cloud Build for your account. The console itself is quite clear in comparison to AWS CodeBuild.\nYou will see Dashboard which is displaying some basic informations, the history of your Cloud Builds, Triggers and Options.
\nYou just have to create a new trigger which will start a new build every time new content is pushed to your linked GitHub repository.
\nFirst of all you have to connect your repository. After that you can create the trigger.
\n
Important options are:
\nAfter that you can create all environment variables as Substitution variables.\nYou will notice that all variables have to start with an underscore.\nBecause of that we will need a small workaround in the cloudbuild.yaml configuration file.
\nFor now you can just create your substitution variables and add the underscore to your default variable names.
\nTo be able to deploy via Firebase you will need to authorize Firebase with a '$_TOKEN'.\nYou can retrieve this token on your local machine with:
\nfirebase login:ci\n\nA new page will be opened in your prefered browser where you will have to login with your Google account to get the token.\nOnce you have the token you can also add it as substition variable.
\n_TOKEN : {TOKEN VALUE}\n\nIf you have created all substitution variables you can check again that they are inserted correctly and create the new trigger.
\nCloud Build needs the cloudbuild.yaml to know what it should do.\nIf you have entered the path as mentioned above you will need a cloudbuild.yaml in your root directory.
\nYou can copy the following into it:
\nsteps: \n# Install dependencies\n - name: node:10.16.0\n id: Installing dependencies...\n entrypoint: npm\n args: [\"install\"] \n waitFor: [\"-\"] # Begin immediately\n\n# Install Firebase \n - name: node:10.16.0 \n id: Installing Firebase...\n entrypoint: npm \n args: [\"install\", \"firebase-tools\"]\n waitFor:\n - Installing dependencies...\n\n# Create file with env-variables\n - name: node:10.16.0\n id: Creating environment variables...\n entrypoint: npm\n args: [\"run\", \"create-env\"]\n env:\n - \"CLIENT_EMAIL=${_CLIENT_EMAIL}\"\n - \"PRIVATE_KEY=${_PRIVATE_KEY}\"\n - \"MAIL_CHIMP=${_MAIL_CHIMP}\"\n - \"GA_ID=${_GA_ID}\"\n - \"GA_VIEW_ID=${_GA_VIEW_ID}\"\n - \"IG_TOKEN=${_IG_TOKEN}\"\n - \"FIREBASE_API_KEY=${_FIREBASE_API_KEY}\"\n - \"FIREBASE_APP_ID=${_FIREBASE_APP_ID}\"\n - \"FIREBASE_AUTH_DOMAIN=${_FIREBASE_AUTH_DOMAIN}\"\n - \"FIREBASE_DB_URL=${_FIREBASE_DB_URL}\"\n - \"FIREBASE_MEASUREMENT_ID=${_FIREBASE_MEASUREMENT_ID}\"\n - \"FIREBASE_MESSAGE_SENDER_ID=${_FIREBASE_MESSAGE_SENDER_ID}\"\n - \"FIREBASE_PROJECT_ID=${_FIREBASE_PROJECT_ID}\"\n - \"FIREBASE_STORAGE_BUCKET=${_FIREBASE_STORAGE_BUCKET}\"\n - \"GATSBY_EXPERIMENTAL_PAGE_BUILD_ON_DATA_CHANGES=true\"\n waitFor: [\"-\"] # Begin immediately\n\n# Gatsby build\n - name: node:10.16.0\n id: Building Gatsby site...\n entrypoint: npm\n args: [\"run\", \"build\"]\n waitFor:\n - Installing dependencies...\n - Creating environment variables...\n\n# Deploy\n - name: node:10.16.0 \n id: Deploying to Firebase...\n entrypoint: \"./node_modules/.bin/firebase\" \n args: [\"deploy\", \"--project\", \"$PROJECT_ID\", \"--token\", \"$_TOKEN\"]\n waitFor:\n - Installing Firebase...\n - Building Gatsby site...\n\ntimeout: 30m0s\n\nThe cloudbuild.yaml is basically divided into six parts which will
\nAs you can see, the create-env step creates a file with the environment variables, mapping the substitution variables to your \"default\" variables.\nEverything else should be pretty self-explanatory.
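The `create-env` script itself is not shown in this article. As a minimal sketch (the variable list mirrors the cloudbuild.yaml above; the file name and script are hypothetical, my own was a small Node script doing the same thing):

```python
import os

# Variable names mirror the env entries in the cloudbuild.yaml above.
KEYS = [
    "CLIENT_EMAIL", "PRIVATE_KEY", "MAIL_CHIMP", "GA_ID", "GA_VIEW_ID",
    "IG_TOKEN", "FIREBASE_API_KEY", "FIREBASE_APP_ID",
]

def render_env(environ):
    """Render .env file contents for every known key present in environ."""
    lines = ["{}={}".format(key, environ[key]) for key in KEYS if key in environ]
    return "\n".join(lines) + "\n"

# The npm "create-env" script would then do something like:
# open(".env.production", "w").write(render_env(os.environ))
```

Gatsby then picks these variables up from the .env file at build time.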
\n\n\nCloud Build times out builds after 10 minutes by default. So if your build takes longer, you have to set a custom timeout like in the cloudbuild.yaml above. You can also set a timeout for each step.
\n
Another important point is that you have to add the plugin for the Google Analytics Reporting API as a dynamic plugin, for example like the following, because otherwise you will get errors during your build.
\nconst dynamicPlugins = []\n\nif (\n process.env.CLIENT_EMAIL &&\n process.env.PRIVATE_KEY &&\n process.env.GA_VIEW_ID\n) {\n const startDate = new Date()\n startDate.setMonth(startDate.getMonth() - 3)\n dynamicPlugins.push(\n /*{\n resolve: `gatsby-plugin-guess-js`,\n options: {\n GAViewID: process.env.GA_VIEW_ID,\n jwt: {\n client_email: process.env.CLIENT_EMAIL,\n private_key: process.env.PRIVATE_KEY.replace(/\\\\n/g, \"\\n\"),\n },\n period: {\n startDate,\n endDate: new Date(),\n },\n }\n },*/\n {\n resolve: `gatsby-source-google-analytics-reporting-api`,\n options: {\n email: process.env.CLIENT_EMAIL,\n key: process.env.PRIVATE_KEY.replace(/\\\\n/g, \"\\n\"),\n viewId: process.env.GA_VIEW_ID,\n startDate: `2009-01-01`,\n }\n }\n )\n}\n\nmodule.exports = {\n plugins: [\n // ...your static plugins\n ].concat(dynamicPlugins),\n};\n\nBecause Cloud Build uses Linux machines, upper and lower case matter in file names (Windows doesn't care).\nThat means if you import a component like\n\nimport MyComponent from \"../Mycomponent\"
\nand the actual folder name is ```MyComponent``` your build will fail.\n\n## Speeding up your builds\n\n+ **Image optimization**\n\nIf you aren't using preoptimized images yet, you should consider cropping and resizing images **before** you build the site, because it can save a lot of time (depending on the amount of images).\n\nBefore I optimized my images, a build on Google Cloud took about ~ 1800 sec.\nAfter I optimized all my images for posts with Python ([Image optimization with Python](/articles/scaling-and-cropping-images-using-python \"Image optimization with Python\")) the build time went down to ~ 620 sec. So roughly a third of the original time.\n\n## Set Cloud Scheduler (optional)\n\nWith Cloud Scheduler you can trigger a build automatically at a specific time.\nThe first 3 jobs per month are free; after that, each additional job costs $0.10.\n\n(You can also use Cloud Scheduler with your default trigger (Push to branch) or without).\n\nThe trigger you created before has an ID, and with a POST request you can start the trigger anytime you want.\nTo get the ID of the trigger, open the Cloud Shell, type ```gcloud beta builds triggers list``` and search for **id**. Copy that.\n\nThe URL for the POST request looks like (without []):\n`https://cloudbuild.googleapis.com/v1/projects/[PROJECT_ID]/triggers/[TRIGGER_ID]:run`\nNow you have to create a new job in Cloud Scheduler.\n\n\n\n0 3 * * *
\nwill trigger a build every day at 3 am CEST.\n\nAs the request body you will need\n\n{\n \"branchName\": \"master\"\n}
\nand you will authorize the job with your service account.
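For illustration, the request Cloud Scheduler sends can be sketched in a few lines of Python; the actual POST is commented out because it needs a valid OAuth token from the service account:

```python
import json

def build_trigger_request(project_id, trigger_id, branch="master"):
    """Assemble URL and body for manually running a Cloud Build trigger,
    exactly as the Cloud Scheduler job does."""
    url = ("https://cloudbuild.googleapis.com/v1/projects/{}/triggers/{}:run"
           .format(project_id, trigger_id))
    body = json.dumps({"branchName": branch})
    return url, body

# Sending it would then be e.g. (token obtained from the service account):
# requests.post(url, data=body, headers={"Authorization": "Bearer " + token})
```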
","date_published":"2020-08-31T21:07:00.598Z","date_modified":"2025-02-01T16:20:54.658Z","tags":["gatsby","react","cloud","ci-cd"],"image":"https://mxd.codes/content/posts/published/build-and-deploy-your-gatsby-site-with-google-cloud-build-to-firebase/cover.png","banner_image":"https://mxd.codes/content/posts/published/build-and-deploy-your-gatsby-site-with-google-cloud-build-to-firebase/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/scaling-and-cropping-images-using-python","url":"https://mxd.codes/articles/scaling-and-cropping-images-using-python","title":"Scaling and Cropping images using Python","summary":"This articles shows you how to edit, crop and resize your pictures with a little Python script.","content_html":"Since I used a lot of pictures (and also very large ones in the beginning), this had a huge impact on the speed of the page.\nSince PageSpeed is a not unimportant ranking factor for search engines like Google and Co, you should of course make a page as fast as possible.
\nThe images used here are mainly satellite images from ESA, which are licensed under CC BY-SA 3.0 IGO and may therefore also be used for your own purposes under certain conditions.
\nThese pictures are often ~ 30MB, which is a bit too big for a website.\nSince I didn't want to crop all pictures manually, I decided to solve this problem with Python and Pillow.
\nPillow is a Python library for image processing that can be installed with (assuming you have Python already installed)
\npip install Pillow\n\nFinally you can import the library with
\nfrom PIL import Image\n\ninto your Python script.
\nAll images for posts are in a separate \"images/\" folder in the root directory of the project.
\nFirst, all \".jpg\" files in a certain directory are opened with Pillow and all file names are saved in an array. In addition, a counter variable is required so that the matching name in the array can be accessed later.
\nimport glob, os\n\ncount = 0\nimage_list = []\n\nfor file in glob.iglob('path/to/images/*.jpg'):\n    im = Image.open(file)\n    image_list.append(os.path.basename(file))\n\nNow you need to decide to which sizes the pictures should be cut and whether, for example, the proportions should be retained.\nFor all \"PostCover\" images (pictures in posts) the aspect ratio is ignored and the picture is simply cropped to a certain size, which is declared in a global variable.
\nsize = (1903,453) #(width,height)\n\nWith all \"PostThumbnails\" (picture preview) the aspect ratio should be retained and only scaled smaller. A global standard width is defined for this.
\nbasewidth = 500\n\nThen the original width and height of the images are determined, as we need them to be able to calculate and maintain the aspect ratio.\nOnly the new height is needed here, as the standard width has already been predefined.
\n width, height = im.size\n wpercent = (basewidth / float(im.size[0]))\n hsize = int((float(im.size[1]) * float(wpercent)))\n\nNow you can cut the images with Image.crop or scale them with Image.resize. The new width \"basewidth\" and the calculated height \"hsize\" are now used as parameters for scaling.
imThumbnail = im.resize((basewidth, hsize), Image.LANCZOS)\n    imCover = im.crop(((width-size[0])//2, (height-size[1])//2, (width+size[0])//2, (height+size[1])//2))\n\nThen I renamed the thumbnail and saved both new files under static/assets (quality 85 for the cover, 90 for the thumbnail).\nWith the additional parameter \"optimize = True\" a few KB can be saved.
newCover = 'static/assets/{}'.format(image_list[count])\n    newThumbnail = 'static/assets/{}_thumbnail.jpg'.format(image_list[count].replace(\".jpg\", \"\"))\n    imCover.save(newCover, optimize=True, quality=85)\n    imThumbnail.save(newThumbnail, optimize=True, quality=90)\n    count += 1\n\nComplete script:
\nfrom PIL import Image\nimport glob, os\n\ncount = 0\nimage_list = []\nbasewidth = 500\nsize = (1903, 453)\n\nfor file in glob.iglob('path/to/images/*.jpg'):\n    im = Image.open(file)\n    image_list.append(os.path.basename(file))\n    width, height = im.size\n    wpercent = (basewidth / float(im.size[0]))\n    hsize = int((float(im.size[1]) * float(wpercent)))\n    imThumbnail = im.resize((basewidth, hsize), Image.LANCZOS)\n    imCover = im.crop(((width-size[0])//2, (height-size[1])//2, (width+size[0])//2, (height+size[1])//2))\n    newCover = 'static/assets/{}'.format(image_list[count])\n    newThumbnail = 'static/assets/{}_thumbnail.jpg'.format(image_list[count].replace(\".jpg\", \"\"))\n    imCover.save(newCover, optimize=True, quality=85)\n    imThumbnail.save(newThumbnail, optimize=True, quality=90)\n    count += 1\n\nIn order not to have to run the script manually every time, you can add the following to \"package.json\".
\n \"img-optimize\": \"py ./src/utils/resize_images.py\"\n\nSo you can optimize all images automatically with npm run img-optimize.
Navigation devices, smartphones and weather forecasts all depend on satellites; without them we would have to do without many services that make our everyday life easier.
\nImages of the Earth from satellites or aircraft are constantly being recorded. These remote sensing data often have a resolution of up to 30 cm, are recorded in a range from 450 nm to 2273 nm and are usually georeferenced by the operators of the satellites.
\nThese pictures are then sold by many providers, or even provided free of charge.
\nOn the Sentinel Open Access Hub you can find free products from the Copernicus Program, which is run by the European Union and whose satellites are operated by the European Space Agency (ESA).
\nThe Copernicus program basically comprises six satellites (Sentinel-1, Sentinel-2, Sentinel-3, Sentinel-4 (planned start in 2021), Sentinel-5 and Sentinel-6 (planned start in late 2020)).
\nAll of these satellites perform different tasks and help to observe land, sea and the atmosphere.
\nIn the Copernicus Open Access Hub you can download all the data provided, after registering free of charge. All available data is displayed for download via search criteria and a desired image section, which you can simply draw as a rectangle.
\nArcGIS currently supports level 1C products.
\nThese image files are relatively large and it may take a while (depending on the internet bandwidth) before the ZIP file is downloaded.
\nThese multispectral bands can then be integrated in ArcGIS or QGIS.
\nA classification now consists of two main components.
\nFirst, a classification method must be selected (e.g. supervised and pixel-based). This classification method divides individual pixels of the satellite image into thematic classes:
\nHere the system is taught (machine learning) that, for example, a green pixel stands for the forest class, blue for water, light green for meadow and gray for settlement.
\nIn order to get the most realistic result possible, one should choose areas/pixels that are as unambiguous as possible for these samples. That is, there shouldn't be, for example, a gray pixel among the forest samples.
\nWhen the assignment of samples for each class is done, the actual classification now begins.
\nThere are various classification algorithms that can be used, e.g. maximum likelihood. The method now only has to be selected and the classification can then be started.
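As a toy illustration of the idea (not the exact implementation a GIS package uses), a single-band Gaussian maximum-likelihood classifier can be sketched in a few lines of Python:

```python
import math

def train(samples):
    """samples: {class_name: [pixel values]} -> per-class mean and variance."""
    stats = {}
    for name, values in samples.items():
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values) or 1e-6
        stats[name] = (mean, var)
    return stats

def classify(pixel, stats):
    """Assign the pixel to the class with the highest Gaussian log-likelihood."""
    def log_likelihood(mean, var):
        return -0.5 * (math.log(2 * math.pi * var) + (pixel - mean) ** 2 / var)
    return max(stats, key=lambda name: log_likelihood(*stats[name]))
```

For example, trained on bright forest samples and dark water samples, a dark pixel ends up in the water class. Real classifiers do the same with multi-band feature vectors and covariance matrices.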
\nIn order to check the quality of the classification, the results are usually validated with, for example, the ground-truthing method.
\nThen it is checked how many points were classified into the correct class (for example, a settlement was recognized as a settlement). One should not be too stingy with the validation samples here, in order to achieve a meaningful validation of the image classification.
\nThe wrong and correct classifications can be noted in an Excel list, from which the overall accuracy of the classification can be easily calculated.
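Instead of Excel, the overall accuracy can also be computed with a few lines of Python (the class names below are just examples): it is simply the number of correctly classified validation points divided by all validation points.

```python
def overall_accuracy(pairs):
    """pairs: list of (predicted_class, ground_truth_class),
    one tuple per validation point."""
    correct = sum(1 for predicted, actual in pairs if predicted == actual)
    return correct / len(pairs)

# e.g. 3 of 4 validation points classified correctly -> 0.75
```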
\nAnd that's it.
\nIn short, remote sensing is the extraction of geodata or satellite images by satellites and the subsequent methodology (image classification) for evaluating this remote sensing data about the nature of the earth's surface.
","date_published":"2020-08-31T20:53:39.052Z","date_modified":"2024-02-22T18:28:40.371Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/remote-sensing-and-image-classification/cover.png","banner_image":"https://mxd.codes/content/posts/published/remote-sensing-and-image-classification/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/free-fme-licence-for-private-use","url":"https://mxd.codes/articles/free-fme-licence-for-private-use","title":"Free FME-licence for private use","summary":"FME (Feature Manipulation Engine) is a powerful and the most used spatial ETL tool for the migration and processing of spatial data and non-spatial data.","content_html":"\n\nSafe is at the moment not offering Free FME licences for private use anymore!
\n
FME (Feature Manipulation Engine) is a powerful and widely used spatial ETL tool for migrating and processing spatial and non-spatial data. The software is very flexible and can handle even very large amounts of data without problems.
\nThe Feature Manipulation Engine supports over 300 different data sources, such as GIS databases (PostGIS, MySQL, Oracle, and of course most non-spatial databases), CAD files (DWG, DXF), raster data, web services, coordinate lists, XML, KML, GML, GeoJSON and much more.
\nThe software is very easy to use and comes with a nice graphical user interface in which the source and target data model or format is specified. This also keeps complex processing workflows very clear.
\nIn between the readers and writers, countless so-called transformers can be used, with which the data can be processed before being imported into a new data source. This workflow can also be supplemented with Python or SQL scripts.
\nSafe Software offers FME in three versions:
\nThat sounds great, but you have never used it?
\nYou can apply for a free (home) license for FME Desktop at Free Licenses for Home Use. If you are a student you can apply for a separate license here.
\nThis license is of course only for personal and not commercial projects.
\nSubmitting the application is very easy. All you have to do is enter your name, your email address, your company if applicable, and how you will use the license. Here it is enough to simply write that you want to get to know the program and of course want to learn it.
\nAs soon as the application has been accepted, you will receive an email with the license key.
\nOn the page Downloads you can then download the desktop version and enter the license key you received after the installation process. The license is valid for one year (four months for students) and can be renewed as required.
\nThere is a Knowledge Base where you can find thousands of tutorials.
\nFor more complex problems, it is also advisable to take a look at gis-stackexchange.com.
","date_published":"2020-08-31T20:49:43.880Z","date_modified":"2025-01-22T19:20:42.734Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/free-fme-licence-for-private-use/cover.webp","banner_image":"https://mxd.codes/content/posts/published/free-fme-licence-for-private-use/cover.webp","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/gatsbyjs-codebuild-ci-cd-pipeline","url":"https://mxd.codes/articles/gatsbyjs-codebuild-ci-cd-pipeline","title":"GatsbyJS with CI/CD Pipeline via Codebuild ","summary":"With the free tier for AWS you always get one active AWS code pipeline per month and 100 minutes of AWS code build per month with which you can create a CI / CD pipeline for a GatsbyJS site.","content_html":"With the free contingent for AWS you always get an active AWS code pipeline per month and 100 minutes AWS codebuild per month.
\nSo you can set up a continuous integration and continuous delivery pipeline for free (or, with more than 100 build minutes a month, for relatively little money) which triggers a build on every push to a GitHub repository. The result is automatically deployed to S3, and optionally the CloudFront cache is invalidated as well.
\nFirst of all you need a new build project in CodeBuild.\nIn the project configuration you can assign a name for it and select GitHub as the source provider under Source.
\nDepending on whether the repository is public or not, you then select \"Public Repository\" and enter the repository URL or you link your GitHub account and give CodeBuild the necessary rights to access the repository.
\nAn environment image must now be selected under Environment. For GatsbyJS this would be a \"managed image\" with the operating system \"Amazon Linux 2\". \"Standard\" is selected as the runtime and \"aws/codebuild/amazonlinux2-x86_64-standard:2.0\" as the image.
\n
Now a new service role can be created automatically (which is required) so that CodeBuild has the necessary rights for the AWS account.
\n\n\nThis service role can also be assigned rights for CodePipeline, so that it can be used for both CodeBuild and CodePipeline.\nIf environment variables are used, these can be specified under \"Additional configuration\" in the environment. Also make sure that \"3GB RAM, 2vCPUs\" is really selected, since only this option is included in the free tier.
\n
Buildspec now uses a buildspec file in YAML format. For a Gatsby site this should look something like the following:
\nversion: 0.2\nphases:\n install:\n runtime-versions:\n nodejs: 12\n commands:\n - 'touch .npmignore'\n - 'npm install -g gatsby'\n pre_build:\n commands:\n - 'npm install'\n build:\n commands:\n - 'npm run build'\n post_build:\n commands: \n - 'find public -type f -regex \".*\\.\\(htm\\|html\\|txt\\|text\\|js\\|css\\|json\\)$\" -exec gzip -f -k {} \\;' ## only needed if CloudFront does not compress the files automatically\nartifacts:\n base-directory: public\n files:\n - '**/*'\n discard-paths: no\ncache:\n paths:\n - '.cache/*'\n - 'public/*'\n\nThe buildspec.yml file only needs to be placed in the root directory so that CodeBuild can find it.\nIn addition, the build script must of course still be available in \"package.json\".
\n\"build\": \"gatsby build\",\n\nThe default settings can be retained under Artifacts.\nWith CloudWatch you have the possibility to save logs for CodeBuild in an S3 bucket.
\n\n\nThere may be additional costs!
\n
If all settings have been entered correctly, the build project can be created.\nThe only thing missing now is the pipeline that triggers a build and deploys the result to the S3 bucket.
\nFor this you switch to CodePipeline and create a new pipeline in which a name and a service role are selected first.
\nAt Source you can now log in with a GitHub account and link the respective repository with a branch.\nYou now have two options to trigger a build.
\nThen you choose the build provider \"AWS CodeBuild\" and the previously created project or (if you have not already done this) create a new project.
\nAfter a build, the public/ folder can also be automatically deployed to an S3 bucket with AWS CodeDeploy.\nAlternatively you can skip this step and use gatsby-plugin-s3, which also optimizes caching.
\nnpm i gatsby-plugin-s3\n\nor
\nyarn add gatsby-plugin-s3\n\nNow only the configuration in gatsby-config.js and the deployment script are missing:
\nplugins: [\n {\n resolve: `gatsby-plugin-s3`,\n options: {\n bucketName: 'my-website-bucket'\n },\n },\n]\n\n\"scripts\": {\n ...\n \"deploy\": \"gatsby-plugin-s3 deploy --yes\"\n}\n\nThe deployment script \"npm run deploy\" must then of course be added to the buildspec file under post-build commands.\nThe CodePipeline should now look something like this:
\n
Experience shows that a build for around 100 pages takes around 10 minutes if you have some pictures per page.
\nEvery time CodePipeline detects a push to the GitHub repository, a build is automatically triggered and made available on an S3 bucket.
\nWith aws cloudfront create-invalidation --distribution-id DISTRIBUTION_ID --paths \"/*\" the CloudFront cache can also be invalidated.
As the volume of data continues to surge, effective management becomes paramount. Geographic Information System (GIS) databases emerge as powerful solutions, facilitating the storage, management, and querying of geodata. Here, we delve into both free and open-source as well as proprietary GIS databases to aid in your data management endeavors.
\nArangoDB Community Edition
\nPostGIS / PostgreSQL
\nMariaDB
\nMySQL
\nOrientDB
\nAn open-source NoSQL database written in Java, combining features of document-oriented and graph databases.
Website: https://www.orientdb.com
\nSQLite / SpatialLite
SpatialLite is an open-source library extending SQLite with comprehensive spatial SQL functions.
Website: https://www.sqlite.org / https://www.gaia-gis.it/fossil/libspatialite
Oracle Spatial
\nWhether you opt for the flexibility of open-source GIS databases or the robust features of proprietary solutions like Oracle Spatial, the choice depends on your project's specific needs.
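To make the querying part concrete, here is a toy bounding-box query against plain SQLite (table and values are made up); a real spatial database such as PostGIS or SpatiaLite would use indexed spatial functions like ST_Within instead of plain column comparisons:

```python
import sqlite3

# In-memory toy database: places with plain lat/lon columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (name TEXT, lat REAL, lon REAL)")
conn.executemany(
    "INSERT INTO places VALUES (?, ?, ?)",
    [("Munich", 48.14, 11.58), ("Hamburg", 53.55, 9.99), ("Vienna", 48.21, 16.37)],
)

def in_bbox(min_lat, max_lat, min_lon, max_lon):
    """Bounding-box filter; spatial databases accelerate exactly this kind
    of predicate with spatial indexes (R-trees, GiST, ...)."""
    rows = conn.execute(
        "SELECT name FROM places WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
        (min_lat, max_lat, min_lon, max_lon),
    )
    return [name for (name,) in rows]
```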
","date_published":"2020-08-31T20:37:34.710Z","date_modified":"2024-02-19T19:23:18.965Z","tags":["gis","d-1"],"image":"https://mxd.codes/content/posts/published/gis-and-geo-database-management-system-options/cover.png","banner_image":"https://mxd.codes/content/posts/published/gis-and-geo-database-management-system-options/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/gis-applications-which-gis-applications-are-there","url":"https://mxd.codes/articles/gis-applications-which-gis-applications-are-there","title":"GIS Applications - Which GIS Applications are there?","summary":"In order to be able to work with digital maps or information geodata, a geographic information system is used. With GIS, geodata can be recorded, edited, analyzed and displayed appropriately.","content_html":"In order to be able to work with digital maps or information (geodata), a geographic information system (GIS) is used. With GIS, geodata can be recorded, edited, analyzed and displayed appropriately. There are now many good providers of geographic information systems, the two best known of which are probably QGIS (Open Source) and ArcGIS from Esri.
\nNow you have decided on a GIS, but the question still arises which additional GIS applications (also called industry models) are required or which are available at all. I would like to go into this further in this article.
\nBasically there are GIS applications for the following industries:
\nFor each of these industries, GIS service providers offer different GIS applications and also adapt them individually to the needs of the customers.
\nIn the following, I would like to go into more detail about a few applications, especially for municipalities (GIS).
\nA tree cadastre supports municipalities and tree care companies in the collection, control and management of tree stands. Trees can be divided into groups of trees and various material data or media can also be added to the trees:
\nThe trend here is towards mobile solutions. This means apps that allow you to enter data into the GIS directly on a tablet using controls or maintenance measures. This data is stored online and can then be corrected or revised later in the office.
\nDevelopment plans / land use plans
\nDevelopment and land use plans can be easily managed and evaluated in a GIS.
\nDepending on the legal validity, areas of validity can be displayed in different colors, changes can be linked to the main plan, and text files, such as additions to the articles of association, can be added.
\nIn addition, analog development plans can be prepared (scanned, georeferenced) and displayed in the GIS. This can be done with PDF or CAD files, for example.
\nAs a result, you get a data record for each development plan, to which all associated files and changes are linked, and can present them in an appealing or clear manner.
\nReal estate cadastre
\nFor example, in the case of a construction project in a certain area, all affected citizens can be identified and written to very easily by selecting all citizens in this area in the GIS and automatically creating letters from report templates.
\nAll sealed areas of a property are determined in a sealing register. This takes place via a previously carried out aerial survey, in which high-resolution images are taken, or via digitization of satellite images.
\nAll parcels are combined into one plot and the sealed areas of these parcels are linked to the plot. This enables municipalities to determine the split rainwater fee.
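The fee calculation itself is then simple arithmetic once the sealed areas are linked to the plot; a sketch (the rate per square metre is a made-up figure):

```python
def rainwater_fee(sealed_areas_m2, rate_per_m2=0.5):
    """Split rainwater fee for one plot: total sealed area times the rate.
    sealed_areas_m2: sealed area of each parcel belonging to the plot."""
    return sum(sealed_areas_m2) * rate_per_m2

# Plot with three parcels: 120 + 45 + 80 = 245 m2 sealed area
```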
\nWater supply networks can be managed digitally in a GIS. Hydrant plans can be created automatically, which can be very helpful, for example, for the local fire brigade in an emergency.
\nRepairs carried out on pipes can be stored digitally, so that you always have an overview of which pipes have already been renovated and which should be renovated in the near future.
\nMany municipalities are legally obliged to keep a sewer cadastre.
\nA sewer cadastre is created either from analog data such as plans, which are digitized, or from a survey that has been carried out beforehand.
\nIn a sewer cadastre, data such as pipe section length, depth, pipe diameter, material, etc. can be saved, managed and analyzed.
\nLarge corporations now also have wastewater registers for their company premises.
\nEnables the acquisition, management and analysis of all supply networks.
\nAllows the construction of a tree and green area register for further planning and maintenance of the inventory.
\nEspecially in municipalities, the importance of a geographic information system is beyond question. A lot of time is saved due to less administrative work.
\nThe various industry models can be easily combined with each other, making work processes much more efficient.
\nBut also in the private sector, e.g. in the real estate market, in agriculture or in archeology, the advantages of a geographic information system (GIS) are increasingly recognized and used. Mobile apps that are combined with a GIS are particularly popular.
\nIf you want to see more applications, check out GIS-Geography. There you will find 1000 GIS Applications & Uses - How GIS Is Changing the World.
","date_published":"2020-08-31T20:29:09.791Z","date_modified":"2024-02-19T19:23:06.498Z","tags":["gis","open-source","web-mapping"],"image":"https://mxd.codes/content/posts/published/gis-applications-which-gis-applications-are-there/cover.png","banner_image":"https://mxd.codes/content/posts/published/gis-applications-which-gis-applications-are-there/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/gis-volunteering-make-the-world-a-better-place-with-gis","url":"https://mxd.codes/articles/gis-volunteering-make-the-world-a-better-place-with-gis","title":"GIS volunteering - Make the world a better place with GIS","summary":"Volunteering offers a good opportunity to develop personally and professionally. You can also get involved in a good cause. You can later pack the projects into a pretty portfolio and thus stand out from the competition with extra points when applying.","content_html":"GIS Volunteering offers a good opportunity to develop personally and professionally.
\nYou can also get involved in a good cause. You can later pack the projects into a pretty portfolio and thus stand out from the competition with extra points when applying.
\nWhat's even better is that you can join these organizations with a PC at home and don't have to travel the world.
https://www.openstreetmap.org/
\nOpenStreetMap is an international project that was founded in 2004.
\nThe aim of OSM is to create a free world map and make it available to everyone free of charge. The data is collected by volunteers (also known as **mappers**). Data on roads, railways, rivers, forests, houses, etc. are collected.
\nThere are many different ways to contribute to OpenStreetMap, from reporting small errors on the map, completing existing data, drawing new buildings from aerial photographs and recording routes and points of interest with the GPS device. Our instructions will help you use the right programs and enter data. (OpenStreetMap).
\n\nThe Humanitarian OpenStreetMap Team (HOT) is an international team dedicated to mapping in support of humanitarian action and community development. The data are used to reduce disaster risks and to work on sustainable development.
\nAs a mapping volunteer you can collect data for maps, as with OpenStreetMap. **Humanitarian and GIS professionals** also help in additional areas, such as data processing and validation of maps, or create completely new maps and visualizations.
\n\n
https://www.standbytaskforce.org/
\nStandby Task Force is a global network of trained and experienced volunteers who work together online.
\nThe Standby Task Force is a non-profit organization founded in 2010.
\nThe Standby Task Force has been involved in many natural disasters since then, and the volunteers have assisted many humanitarian organizations in election observation and other projects.
\nYou should already have professional experience in the areas of GIS management, disaster management and other technical areas.
\n\n
https://www.giscorps.org/\nGISCorps coordinates short-term, voluntary GIS services for disadvantaged communities.
\nThe projects vary according to the needs of the partner agency and can include all aspects of GIS, including analysis, cartography, app development, needs analysis, technical workshops etc.
\nThe service areas include humanitarian aid, disaster protection, environmental protection, health and health personnel services, GIS training and crowdsourcing of experts. GISCorps is supported by individual donations, companies and other non-profit groups with similar goals.
\nAt GISCorps there are several ways to get involved. You can apply as a volunteer and actively support GIS projects. You can get a one-year ArcGIS license for free here, provided you are accepted.
\nYou can of course also support the project with donations.
\n","date_published":"2020-08-31T20:24:44.651Z","date_modified":"2024-02-19T19:22:34.511Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/gis-volunteering-make-the-world-a-better-place-with-gis/cover.png","banner_image":"https://mxd.codes/content/posts/published/gis-volunteering-make-the-world-a-better-place-with-gis/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/geo-and-gis-podcasts-to-stay-up-to-date","url":"https://mxd.codes/articles/geo-and-gis-podcasts-to-stay-up-to-date","title":"Geo and GIS Podcasts to stay up to date","summary":"Podcasts are a great way to keep up to date with current developments. Best of all, you can listen to podcasts practically anywhere.","content_html":"Podcasts are a great way to keep up to date with current developments. Best of all, you can listen to podcasts practically anywhere.
\nThe podcast is hosted by Jesse Rouse, Sue Bergeron and Frank Lafone. An established podcast that discusses geography, geographic information technologies and the impact of GIS on everyday digital life.
\n\n
New podcast. Mainly interviews with people from the GIS and geo industry.
\nhttps://mapscaping.com/blogs/the-mapscaping-podcast
\n
Reports and news about OpenStreetMap, the free wiki world map.
\nhttps://podcast.openstreetmap.de/
\n
A podcast that can contain everything and everyone about the geodata world.
\n\n
Geographers and geo-types who talk about how incredible their job is after work.
\nhttp://www.themappyisthour.com/
\n
A podcast by Kurt Towler. The podcast includes interviews with other geospatialists and reviews of conferences.
\n\n
A podcast with a view of the world of modern remote sensing and earth observation. Driven by their passion for all grid and geodata, they strive for a mix of news, opinions, discussions and interviews.
\n\n
Every six weeks, new location-based podcasts are released by another geographic information branch.
\nhttps://player.fm/series/directions-magazine-podcasts
\n
This monthly podcast by James Fee and Bill Dollins is about how you can use spatial technologies in your workflow.
\n\n
Geographical Imaginations Expedition & Institute is a growing public geography initiative for multimedia media that aims to bring together academic and everyday geographic or spatial thinking.
\nhttps://podcasts.apple.com/us/podcast/geographical-imaginations/id1386704057?mt=2
\n
A podcast that serves to make women in the UAS industry better known around the world.
\nhttp://womenanddrones.libsyn.com/
\n
In this post you will find a list of free and/or open-source and proprietary GIS-software options.
\n1. QGIS
\nQGIS is a free and open-source geographic information system.\nWith QGIS you can create, modify, visualize and analyze spatial data on Windows, Mac, Linux, BSD and Android.
\n2. GRASS GIS
\nGRASS GIS is a hybrid, modular geographic information system software with raster- and vector-oriented functions.
\n3. SAGA GIS
\nSoftware for automated geoscientific analysis. The SAGA project is mainly developed at the Department of Geography at the University of Hamburg.
\n\nJava based open-source GIS.
\n5. GeoDa
\nGeoDa is a free and open-source GIS-software and serves as an introduction to geodata analysis.
\n6. gvSIG
\nOpen-source desktop, online and mobile GIS.
\n7. MapmakerPro
\nMapMaker is aimed at specialists who need to create maps, for example foresters, archaeologists, emergency services, etc.
\n8. DIVA GIS
\nDIVA-GIS is a free GIS for mapping and analyzing geographic data.
\n9. TerraLib
\nTerraLib is an open source GIS software library that supports the development of custom geographic applications.
\n10. Kalypso
\nKalypso is an open source modeling program. The focus is on numerical simulations in water management and ecology.
\n11. OrbisGIS
\nOrbisGIS is a cross-platform open source GIS developed by and for research.
\n12. OzGIS
\nOzGIS is a GIS for analyzing and displaying spatial statistics.
\n13. FalconView
\nFalconView is a GIS developed by the Georgia Tech Research Institute.
\n14. ILWIS
\nThe Integrated Land and Water Information System (ILWIS) is a desktop-based GIS and remote sensing software that was developed by ITC up to Release 3.3 in 2005.
\n15. MapWindow GIS
\nMapWindow GIS is an open source GIS desktop application that is used by a large number of users and organizations around the world.
\n16. Whitebox GAT
\nWhitebox Geospatial Analysis Tools is an open source, cross-platform geospatial information system and remote sensing software package.
\n17. Capaware
\n3D-world-viewer.
\n18. Generic Mapping Tools\n\nThe Generic Mapping Tools are a collection of free software for creating geological or geographical maps and diagrams.
\n19. ArcGIS, ArcView
\nArcGIS is the generic term for various GIS software products from Esri.
\n20. AutoCAD Map3D
\nAutoCAD Map3D from Autodesk is a GIS software solution and offers extensive access to all CAD and GIS data and enables its creation and editing.
\n21. Aquaveo GIS
\nAquaveo is a GIS software for modeling environmental and water resources.
\n22. Bentley Map
\n23. Cadcorp
\nCadcorp's GIS and Web Mapping Software are GIS software products for the creation, analysis and data management of geodata.
\n24. Conform
\nConform is a GIS software for merging, visualizing, editing and exporting 3D environments for urban planning, games and simulations.
\n25. Dragon / ips
\nDragon / ips is a remote sensing image processing software.
\n26. ENVI
\nThe ENVI image analysis software is used by GIS experts, remote sensing scientists and image analysts to extract meaningful information from images to help them make better decisions.
\n27. ERDAS IMAGINE
\nERDAS IMAGINE is software for evaluating remote sensing data, especially graphics and photos.
\n28. Field-Map
\nField-Map is a proprietary integrated tool for programmatic field data acquisition by IFER - Monitoring and Mapping Solutions, Ltd. It is mainly used for mapping forest ecosystems and for data collection during field analysis.
\n29. Geosoft
\nGEOSOFT is one of Germany's leading developers of geodetic computing and organizational software for private and public surveying agencies.
\n30. GeoTime
\nGeoTime is a geodata analysis software that enables the visual analysis of events over time. The third dimension adds time to a two-dimensional map so that users can see changes in time series data.
\n31. Global Mapper
\nGeographic information system with distance and area calculation; offers an integrated scripting language, 3D display and GPS tracks.
\n32. Golden Software
\nSurfer and Mapviewer are two software solutions with a variety of mapping and adaptation options and support any geodata format (including LiDAR data), 3D visualization, as well as volume / distance / area calculations.
\n33. Intergraph
\nGeoMedia is a GIS software from Intergraph. GeoMedia is a software product family with desktop GIS, web GIS and is mainly aimed at municipalities.
\n34. Manifold System
\nManifold System is software for the management of digital maps. Digital maps and remote sensing data can be easily edited.
\n35. MapInfo
\nMapInfo Professional is a geographic information system software from the US company MapInfo Corporation.
\n36. Maptitude
\nMaptitude is a mapping software program created by Caliper Corporation that allows users to view, edit, and integrate maps. The software and technology are designed to facilitate the geographic visualization and analysis of contained data or user-defined external data.
\n37. Netcad
\nNETCAD GIS is a CAD and GIS software that supports international standards and was designed for users of engineering and geographic information systems.
\n38. RegioGraph
\nRegioGraph is a geomarketing software specializing in questions in the areas of marketing, sales, controlling, logistics and corporate strategy.
\n39. RIWA GIS Zentrum
\nThe RIWA GIS Zentrum is a powerful, web-based geographic information system that has been used in numerous municipal administrations and industrial companies for many years.
\n40. Smallworld
\nSmallworld GIS is the professional geographic information system for network operators in the energy and water industries.
\n41. TNTmips
\nTNTmips is a geospatial data analysis system that offers a fully featured GIS, RDBMS and automated image processing system with CAD, TIN, surface modeling, map layout and innovative data publishing tools.
\n42. TerrSet ( IDRISI )
\nTerrSet is an integrated geographic information system and remote sensing software for monitoring and modeling the Earth system.
\n43. Google Earth Pro
\nGoogle Earth Pro Desktop is free and intended for users who need advanced features. You can import and export GIS data and go on a journey through time with the help of historical images.
\n44. Bing Maps
\nBing Maps is an online map service from Microsoft, through which various spatial data can be viewed and spatial services can be used. It is a further development of the MSN Virtual Earth and is part of the Bing search engine.
\n45. Google Maps
\nGoogle Maps is an online map service from the US company Google LLC. The surface of the earth can be viewed as a road map or as an aerial or satellite image, with locations of institutions or known objects also being displayed. The service started on February 8, 2005.
\n46. NASA World Wind
\nNASA World Wind is an open source software that enables satellite and aerial images to be displayed on a virtual globe combined with elevation data and to be zoomed in anywhere in the world in 3D graphics and viewed freely from all sides.
\n47. OpenStreetMap
\nOpenStreetMap is a free project, which collects freely usable geodata, structures it and keeps it in a database for everyone to use. This data is under a free license, the Open Database License.
\n48. Wikimapia
\nWikimapia is a web interface that combines maps with a restricted wiki system without hypertext functions. It allows the user to add information in the form of a note to any position on the earth.
\n49. GDAL/OGR
\nThe Geospatial Data Abstraction Library (GDAL / OGR) provides command line-based auxiliary programs. A large number of raster and vector geodata formats can be converted and processed using these.
\n50. Leaflet
\nLeaflet is the leading open-source JavaScript library for mobile-friendly interactive maps.
\n51. OpenLayers
\nOpenLayers makes it easy to put a dynamic map in any web page. It can display map tiles, vector data and markers loaded from any source.
\n52. R
\nR is a free programming language for statistical calculations and graphics.
\n53. Blender
\nBlender is a free, GPL-licensed 3D graphics suite with which bodies can be modeled, textured and animated.
","date_published":"2020-08-31T20:14:09.355Z","date_modified":"2024-02-19T19:22:12.103Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/gis-software-options-free-open-source-and-proprietary/cover.png","banner_image":"https://mxd.codes/content/posts/published/gis-software-options-free-open-source-and-proprietary/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-create-a-mailchimp-newsletter-sign-up-form-for-your-gatsby-site","url":"https://mxd.codes/articles/how-to-create-a-mailchimp-newsletter-sign-up-form-for-your-gatsby-site","title":"How to create a Mailchimp newsletter sign-up-form for your Gatsby Site","summary":"Managing your own newsletter is crucial for creating a sustainable online business. With emails you can build a relationship with your audience and engage with them, so they will drive some nice traffic to your new post or whatever you have just published and want to promote.","content_html":"Managing your own newsletter is crucial for creating a sustainable online business. With emails you can build a relationship with your audience and engage with them, so they will drive some nice traffic to your new post or whatever you have just published and want to promote.
\nIf you are using Mailchimp you can use the plugin gatsby-plugin-mailchimp to manage your e-mail list.
\nSimply add the plugin to your package.json with
\nnpm install gatsby-plugin-mailchimp\n\nor
\nyarn add gatsby-plugin-mailchimp\n\nand implement it in your gatsby-config.js like
\n{\n resolve: 'gatsby-plugin-mailchimp',\n options: {\n endpoint: '', // string; add your MC list endpoint here; see instructions below\n timeout: 3500, // number; the amount of time, in milliseconds, that you want to allow mailchimp to respond to your request before timing out. defaults to 3500\n },\n },\n\nIf you don't have your Mailchimp endpoint yet, I would suggest having a look at the README of gatsby-plugin-mailchimp. It describes every step with images, so it's really easy to get your endpoint URL.
\n\n\nOnce you have your Mailchimp endpoint you should save it as environment variable in your project.
\n
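One way to do this is to load the variable in your Gatsby config. This is only a minimal sketch: the variable name MAILCHIMP_ENDPOINT, the `.env.development` file and the explicit dotenv call are my assumptions, not part of the plugin itself.

```javascript
// gatsby-config.js -- sketch: read the Mailchimp endpoint from the environment
// instead of hard-coding it. MAILCHIMP_ENDPOINT and the .env.* file name are
// assumed names, not prescribed by gatsby-plugin-mailchimp.
require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`, // e.g. .env.development
})

module.exports = {
  plugins: [
    {
      resolve: "gatsby-plugin-mailchimp",
      options: {
        endpoint: process.env.MAILCHIMP_ENDPOINT, // loaded from the environment
        timeout: 3500,
      },
    },
  ],
}
```

This keeps the endpoint out of version control; the `.env.development` file then contains a single line such as `MAILCHIMP_ENDPOINT=https://...`.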
The only thing you need to do is import the addToMailchimp method into your newsletter sign-up component, where it works like this:
\nimport addToMailchimp from 'gatsby-plugin-mailchimp'\n\n(I am actually working with styled components which are stored in a separate file in the same folder. This file is imported with
\nimport * as S from './styled'\n\nand the components are then used like S.NewsletterWrapper. But to make it a bit clearer, I declared everything in the same file for this post.)
\nSo now you need some styled (and responsive) components which will create your actual form like:
\n<NewsletterWrapper>\n <DescriptionWrapper>\n <p>\n Do you want to know when I post something new? <br/> \n Then subscribe to my newsletter.\n 🚀\n </p>\n </DescriptionWrapper>\n <InputWrapper>\n <Input\n type=\"email\"\n name=\"email\"\n id=\"mail\"\n label=\"email-input\"\n placeholder=\"Your e-mail address\"\n onChange={(e) => setEmail(e.target.value)}\n />\n </InputWrapper>\n <ButtonWrapper>\n <Button\n type=\"button\"\n aria-label=\"Subscribe\"\n onClick={() => handleSubmit()}\n >\n Subscribe\n </Button>\n </ButtonWrapper>\n</NewsletterWrapper>\n\nIn this component you will show a different message from the default one after a user has successfully subscribed to your newsletter.\nTo do so you need a variable which will store the current state (submitted = true or submitted = false). \nThis variable will have the default value false, which will be set to true after a user has subscribed successfully.
\nSo if a user clicks the \"Subscribe\" button, the function handleSubmit is executed, which does the following:
\naddToMailchimp(email)\n\nThe Mailchimp API will always return an object with the properties result and msg.
\n
{\n result: string; // either `success` or `error` (helpful to use this key to update your state)\n msg: string; // a user-friendly message indicating details of your submissions (usually something like \"thanks for subscribing!\" or \"this email has already been added\")\n}\n\nFinally you just have to check the value of submitted and render the relevant content like the following:
\n return (\n <>\n {submitted ? (\n <NewsletterWrapper>\n <DescriptionWrapper>\n <h2>\n 🎉 Successfully subscribed! 🎉\n </h2>\n <p>\n Thank your for your interest in my content.\n </p>\n </DescriptionWrapper>\n </NewsletterWrapper>\n ) : (\n <NewsletterWrapper>\n <DescriptionWrapper>\n <p>\n Do you want to know when I post something new? <br/> \n Then subscribe to my newsletter.\n 🚀\n </p>\n </DescriptionWrapper>\n <InputWrapper>\n <Input\n type=\"email\"\n name=\"email\"\n id=\"mail\"\n label=\"email-input\"\n placeholder=\"Your e-mail address\"\n onChange={(e) => setEmail(e.target.value)}\n />\n </InputWrapper>\n <ButtonWrapper>\n <Button\n type=\"button\"\n aria-label=\"Subscribe\"\n onClick={() => handleSubmit()}\n >\n \"Subscribe\"\n </Button>\n </ButtonWrapper>\n </NewsletterWrapper>\n )}\n </>\n )\n}\n\nIf you want to learn more about sign-up forms i suggest to have a look at Non-Invasive Sign Up Forms from Slarsen Disney. He is creating super UX-friendly websites and is sharing the code for it.
\nimport addToMailchimp from \"gatsby-plugin-mailchimp\"\nimport React, { useState } from \"react\"\nimport ConfettiAnimation from \"../Animations/ConfettiAnimation\"\nimport { trackCustomEvent } from \"gatsby-plugin-google-analytics\"\nimport styled from 'styled-components';\n\nexport const NewsletterWrapper = styled.form`\n display: flex;\n flex: 0 1 auto;\n flex-direction: row;\n flex-wrap: wrap;\n box-sizing: border-box;\n max-width: 750px;\n justify-content: center;\n`\nexport const DescriptionWrapper = styled.div`\n text-align: center;\n flex-grow: 0; \n flex-shrink: 0;\n flex-basis: 100%; \n max-width: 100%;\n`\n\nexport const InputWrapper = styled.div`\n flex-direction: column;\n justify-content: center;\n display: flex;\n flex-grow: 0;\n flex-shrink: 0;\n flex-basis: 50%;\n max-width: 66.66667%;\n`\n\nexport const Input = styled.input`\n padding-top: 15px!important;\n padding-bottom: 15px!important;\n padding: 12px 20px;\n margin: 8px 0;\n box-sizing: border-box;\n border: 2px solid hsla(0,0%,90.2%,.95);\n :invalid {\n border: 1px solid red;\n }\n`\n\nexport const ButtonWrapper = styled.div`\n flex-direction: column;\n justify-content: center;\n display: flex;\n flex-grow: 0;\n flex-shrink: 0;\n flex-basis: 50%;\n max-width: 33.33333%;\n`\n\nexport const Button = styled.button`\n box-sizing: border-box;\n border: 2px solid ${props =>\n props.background ? props.background : 'white'};\n color: white;\n text-transform: uppercase;\n position: relative;\n padding-top: 15px!important;\n padding-bottom: 15px!important;\n outline: none;\n overflow: hidden;\n width: 100%;\n transition: all .2s ease-in-out;\n text-align: center;\n background: ${props =>\n props.background ? 
props.background : 'hsla(0,0%,90.2%,.95)'};\n :hover {\n box-shadow: rgba(0, 0, 0, 0.5) 0px 8px 16px 0px;\n transform: translateY(0) scale(1);\n }\n`\n\nexport default () => {\n const [email, setEmail] = useState(\"\")\n const [submitted, setSubmitted] = useState(false) \n\n\n function errorHandling(data) {\n // your error handling\n }\n\n const handleSubmit = () => {\n addToMailchimp(email).then((data) => {\n\n if (data.result === \"error\") {\n errorHandling(data)\n } else {\n trackCustomEvent({\n category: \"Newsletter\",\n action: \"Click\",\n label: `Newsletter Click`,\n })\n setSubmitted(true)\n }\n })\n }\n\n return (\n <>\n {submitted ? (\n <NewsletterWrapper>\n <DescriptionWrapper>\n <h2>\n 🎉 Successfully subscribed! 🎉\n </h2>\n <p>\n Thank you for your interest in my content.\n </p>\n </DescriptionWrapper>\n </NewsletterWrapper>\n ) : (\n <NewsletterWrapper>\n <DescriptionWrapper>\n <p>\n Do you want to know when I post something new? <br/> \n Then subscribe to my newsletter.\n 🚀\n </p>\n </DescriptionWrapper>\n <InputWrapper>\n <Input\n type=\"email\"\n name=\"email\"\n id=\"mail\"\n label=\"email-input\"\n placeholder=\"Your e-mail address\"\n onChange={(e) => setEmail(e.target.value)}\n />\n </InputWrapper>\n <ButtonWrapper>\n <Button\n type=\"button\"\n aria-label=\"Subscribe\"\n onClick={() => handleSubmit()}\n >\n Subscribe\n </Button>\n </ButtonWrapper>\n </NewsletterWrapper>\n )}\n </>\n )\n}\n","date_published":"2020-08-31T20:11:17.540Z","date_modified":"2025-01-22T15:35:50.897Z","tags":["gatsby","react"],"image":"https://mxd.codes/content/posts/published/how-to-create-a-mailchimp-newsletter-sign-up-form-for-your-gatsby-site/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-create-a-mailchimp-newsletter-sign-up-form-for-your-gatsby-site/cover.png","authors":[{"name":"Max 
Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/how-to-create-a-web-map-with-open-layers","url":"https://mxd.codes/articles/how-to-create-a-web-map-with-open-layers","title":"How to create a web-map with OpenLayers","summary":"OpenLayers is a JavaScript library which allows you to visualize easily geodata in web applications (Web GIS).","content_html":"OpenLayers is a JavaScript library that makes it relatively easy to visualize geodata in web applications (Web-GIS).
\nOpenLayers is a programming interface that allows client-side development independent of the server. Map tiles, vector data and markers from various data sources can be displayed.
\nOpenLayers was developed to promote the use of geodata of all kinds. OpenLayers is also free, open-source and published under the \"2-clause BSD License\".
\nTo be able to create a map with OpenLayers, all you need is some basic programming knowledge. The missing pieces of the puzzle can easily be found in the detailed OpenLayers documentation.
\nFirst of all, an HTML file is required as the basic framework. The basic structure usually looks like this:
\n <html>\n <head>\n <title>OpenLayers Demo</title>\n </head>\n <body>\n </body>\n </html>\n\nYou can now copy this code and paste it into a file that you name, for example, \"jsmap.html\".
\nIf you want to learn more about HTML, you can find a few useful tutorials on w3schools.
\nNow the OpenLayers JavaScript is integrated into the HTML. To do that, copy
\n <script src=\"http://www.openlayers.org/api/OpenLayers.js\"></script>\n\nbetween </title> and </head>. Right after that, also wrapped in <script> tags, add the following code:
function init() {\n map = new OpenLayers.Map(\"basicMap\"); //create a new map\n var mapnik = new OpenLayers.Layer.OSM(); //add an OpenStreetMap layer to have some data in the mapview\n map.addLayer(mapnik); //add the OSM layer to the map\n\n var markers = new OpenLayers.Layer.Markers( \"Markers\" ); //add a layer where markers can be put\n map.addLayer(markers); //add the markers layer to the current map\n\n var lonLat = new OpenLayers.LonLat( 13.0 ,47.8 ) //define a new location with these coordinates in WGS84\n .transform( //transform the location to the coordinate system of our OpenLayers map\n new OpenLayers.Projection(\"EPSG:4326\"), // transform from WGS 1984\n map.getProjectionObject() // to Spherical Mercator Projection\n );\n markers.addMarker(new OpenLayers.Marker(lonLat)); //add the newly created marker to the markers layer\n\n map.setCenter(lonLat, 15); // Use marker to center the map and set zoom level to 15\n}\n\nHere a function named init is created, which creates the map, adds an OpenStreetMap base layer and places a marker at the given coordinates.
\nNow this function must also be executed and the map placed. To do that, replace
\n <body>\n </body>\n\nwith
\n <body onload=\"init();\">\n <div style=\"width: 100%; height: 60%;\" id=\"basicMap\"></div>\n </body>\n\nWith onload=\"init();\" the function is executed when the HTML file is loaded, and the map is inserted into the div with the id \"basicMap\".
Your complete file should now look like this:
\n <html>\n <head>\n <title>OpenLayers Demo</title>\n <script src=\"http://www.openlayers.org/api/OpenLayers.js\"></script>\n <script>\n function init() {\n map = new OpenLayers.Map(\"basicMap\"); //create a new map\n var mapnik = new OpenLayers.Layer.OSM(); //add an OpenStreetMap layer to have some data in the mapview\n map.addLayer(mapnik); //add the OSM layer to the map\n\n var markers = new OpenLayers.Layer.Markers( \"Markers\" ); //add a layer where markers can be put\n map.addLayer(markers); //add the markers layer to the current map\n\n var lonLat = new OpenLayers.LonLat( 13.0 ,47.8 ) //define a new location with these coordinates in WGS84\n .transform( //transform the location to the coordinate system of our OpenLayers map\n new OpenLayers.Projection(\"EPSG:4326\"), // transform from WGS 1984\n map.getProjectionObject() // to Spherical Mercator Projection\n );\n markers.addMarker(new OpenLayers.Marker(lonLat)); //add the newly created marker to the markers layer\n\n map.setCenter(lonLat, 15); // Use marker to center the map and set zoom level to 15\n }\n </script>\n </head>\n <body onload=\"init();\">\n <div style=\"width: 100%; height: 60%;\" id=\"basicMap\"></div>\n </body>\n </html>\n\nIf you save the file and open it in the browser of your choice, your JavaScript web map will be displayed with OpenLayers.
\nIf you now want to change the position of the marker, all you have to do is change the coordinates.
\nNormally, you often want to display several points and not just one on the map.\nIn theory you could
\n var lonLat = new OpenLayers.LonLat( 13.0 ,47.8 ) //define a new location with these coordinates in WGS84\n .transform( //transform the location to the coordinate system of our OpenLayers map\n new OpenLayers.Projection(\"EPSG:4326\"), // transform from WGS 1984\n map.getProjectionObject() // to Spherical Mercator Projection\n );\n markers.addMarker(new OpenLayers.Marker(lonLat)); //add the newly created marker to the markers layer\n\nnow copy for each additional marker you want to create and simply change the coordinates. But since the whole thing becomes relatively confusing, we will solve it differently.
\nFirst, we create an array of arrays.
\n var poi = [ // create array with point of interests\n [ 11.557617 ,48.092757 ],\n [ 8.558350, 50.028917 ],\n [ 6.701660, 51.289406 ],\n [ 13.337402, 52.496160 ]\n ];\n\nAll coordinates for the markers to be displayed are now stored in this array. Now we create a function that can be called to create markers and add them to the map.
\nfunction createmarker (lon,lat) {\n var feature = new OpenLayers.LonLat( lon, lat ) // create features (locations) out of arrays in points\n .transform( //transform the location to the coordinate system of our OpenLayers map\n new OpenLayers.Projection(\"EPSG:4326\"), // transform from WGS 1984\n map.getProjectionObject() // to Spherical Mercator Projection\n );\n markers.addMarker(new OpenLayers.Marker(feature)); // Add new features to markers layer\n} \n\nThis function should now be carried out for each pair of coordinates in the \"poi\" array. This can be solved with a \"for .. of\" loop.
\nfor (var x of poi) { // for each array(object) in array \n createmarker (x[0],x[1]) // create markers\n}\n\nIn this loop, a marker is now created for each coordinate pair in the array, transformed and added to the map. The \"poi\" array can now be expanded as required and the additional markers are automatically added to the map.
\nIn order to make the whole thing more user-friendly and to avoid having to change the code manually every time, we will now create a simple user interface for adding additional markers.
\nThe new coordinates should be entered via two input fields and added to the map with a click on a button.
\nThe HTML framework can look like this and should be placed somewhere in the \"body\" area:
\n <div class=\"add_markers\">\n <div class=\"input_markers\"> \n Add new markers with coordinates in WGS84!\n <div class=\"row\">\n <div class=\"col-25\">\n <label for=\"lat\">Latitude:</label>\n </div>\n <div class=\"col-75\">\n <input type=\"text\" id=\"lat\" name=\"lat\" placeholder=\"48.060614\">\n </div>\n </div>\n <div class=\"row\">\n <div class=\"col-25\">\n <label for=\"lon\">Longitude:</label>\n </div>\n <div class=\"col-75\">\n <input type=\"text\" id=\"lon\" name=\"lon\" placeholder=\"12.190876\">\n </div>\n </div>\n <button id=\"add_marker\" class=\"button\">Add marker!</button>\n <div id=\"poi_added\" class=\"poi_added\"></div>\n </div>\n </div>\n\nThe most important parts here are the ids, through which the values of the input fields are read later.
\nTo create these markers we use a function that is called every time the button \"Add Marker!\" is clicked.
\nThe complete function looks like this:
\nfunction addFeature() {\n var lat = parseFloat(document.getElementById(\"lat\").value); // get value of input lat and parse to float\n var lon = parseFloat(document.getElementById(\"lon\").value); // get value of input lon and parse to float\n\n var newFeature = [ lon, lat ] // create array \"newFeature\" with lon , lat\n poi.push(newFeature) // add NewFeature to array \"poi\"\n\n createmarker (lon,lat) // create marker for input lat, lon \n document.getElementById('poi_added').innerHTML = \"Added marker for \" + \"latitude: \" + lat + \"; longitude: \" + lon; // visual feedback for added marker\n}\n\n var lat = parseFloat(document.getElementById(\"lat\").value); // get value of input lat and parse to float\n var lon = parseFloat(document.getElementById(\"lon\").value);\n\nThe first two lines are references to the elements with the ids \"lat\" and \"lon\", i.e. the two input fields. Here the two variables lat and lon are created, and the values from the input fields are assigned to them.
\nThen they are merged into an array, since a marker always consists of two coordinates and is added to the \"poi\" array.
\n var newFeature = [ lon, lat ] // create array \"newFeature\" with lon , lat\n poi.push(newFeature) // add NewFeature to array \"poi\"\n\nAdding it to the \"poi\" array is not functionally necessary, but it can be useful if, for example, you want to create popovers that show the coordinates of each marker.
\nThe coordinates are now saved in \"lat\" and \"lon\" and they only have to be transferred to the previously created function \"createmarker\", which creates the markers and adds them to the map.
\n createmarker (lon,lat) // create marker for input lat, lon \n\nIt would be nice if the user received feedback about what happened after clicking the button. This can be done with
\n document.getElementById('poi_added').innerHTML = \"Added marker for \" + \"latitude: \" + lat + \"; longitude: \" + lon; // visual feedback for added marker\n\nThe last thing that is missing is that the function is executed with a click on the button.
\n document.getElementById('add_marker').addEventListener('click', addFeature); // execute function \"addFeature\" when button with id \"add_marker\" is clicked
\nWith
\n var extent = map.zoomToExtent(markers.getDataExtent()); // get extent of markers layer\n\nthe extent of the \"markers\" layer is determined, the map zooms to it, and the result is assigned to the variable extent.
\nIf you now save and open your file again, you should see your map with all markers and be able to add additional markers via a graphical user interface.
\n <html>\n <head>\n <title>OpenLayers Demo</title>\n <script src=\"http://www.openlayers.org/api/OpenLayers.js\"></script>\n <script>\n function init() {\n map = new OpenLayers.Map(\"basicMap\"); //create a new map\n var mapnik = new OpenLayers.Layer.OSM(); //add an OpenStreetMap layer to have some data in the mapview\n map.addLayer(mapnik); //add the OSM layer to the map\n\n var markers = new OpenLayers.Layer.Markers( \"Markers\" ); //add a layer where markers can be put\n map.addLayer(markers); //add the markers layer to the current map\n\n function createmarker (lon,lat) {\n var feature = new OpenLayers.LonLat( lon, lat ) // create features (locations) out of arrays in points\n .transform( //transform the location to the coordinate system of our OpenLayers map\n new OpenLayers.Projection(\"EPSG:4326\"), // transform from WGS 1984\n map.getProjectionObject() // to Spherical Mercator Projection\n );\n markers.addMarker(new OpenLayers.Marker(feature)); // Add new features to markers layer\n } \n\n var poi = [ // create array with point of interests\n [ 11.557617 ,48.092757 ],\n [ 8.558350, 50.028917 ],\n [ 6.701660, 51.289406 ],\n [ 13.337402, 52.496160 ]\n ];\n\n for (var x of poi) { // for each array(object) in array \n createmarker (x[0],x[1]) // create markers\n }\n\n var extent = map.zoomToExtent(markers.getDataExtent()); // get extent of markers layer\n\n function addFeature() {\n var lat = parseFloat(document.getElementById(\"lat\").value); // get value of input lat and parse to float\n var lon = parseFloat(document.getElementById(\"lon\").value); // get value of input lon and parse to float\n\n var newFeature = [ lon, lat ] // create array \"newFeature\" with lon , lat\n poi.push(newFeature) // add NewFeature to array \"poi\"\n\n createmarker (lon,lat) // create marker for input lat, lon \n document.getElementById('poi_added').innerHTML = \"Added marker for \" + \"latitude: \" + lat + \"; longitude: \" + lon; // visual feedback for added marker\n }\n\n 
document.getElementById('add_marker').addEventListener('click', addFeature); // execute function \"addFeature\" when button with id \"add_marker\" is clicked\n }\n\n // popover coordinates markers \n </script> \n <style>\n /*your style*/\n </style>\n </head>\n <body onload=\"init();\">\n <div id=\"wrapper\" >\n <div style=\"width: 100%; height: 80%\" id=\"basicMap\"></div>\n <div class=\"add_markers\">\n <div class=\"input_markers\"> \n Add new markers with coordinates in WGS84!\n <div class=\"row\">\n <div class=\"col-25\">\n <label for=\"lat\">Latitude:</label>\n </div>\n <div class=\"col-75\">\n <input type=\"text\" id=\"lat\" name=\"lat\" placeholder=\"48.060614\">\n </div>\n </div>\n <div class=\"row\">\n <div class=\"col-25\">\n <label for=\"lon\">Longitude:</label>\n </div>\n <div class=\"col-75\">\n <input type=\"text\" id=\"lon\" name=\"lon\" placeholder=\"12.190876\">\n </div>\n </div>\n <button id=\"add_marker\" class=\"button\">Add marker!</button>\n <div id=\"poi_added\" class=\"poi_added\"></div>\n </div>\n </div>\n </div>\n </body>\n </html>\n","date_published":"2020-08-31T19:50:29.045Z","date_modified":"2025-01-22T15:34:40.165Z","tags":["open-layers","web-mapping","javascript"],"image":"https://mxd.codes/content/posts/published/how-to-create-a-web-map-with-open-layers/cover.png","banner_image":"https://mxd.codes/content/posts/published/how-to-create-a-web-map-with-open-layers/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/open-source-web-gis-applications","url":"https://mxd.codes/articles/open-source-web-gis-applications","title":"Open-Source Web-GIS Applications","summary":"Would you like to know which open source web GIS applications are used to share geodata over the Internet? Then you can find out more here.","content_html":"You want to know which Open-Source Web-GIS applications are used to share geospatial data over the Internet?
\nGeoServer is an open source server for sharing geospatial data.
\n\ndeegree is an open source software for geodata infrastructures and the geospatial web.
\n\nFeatureServer is an implementation of a RESTful Geographic Feature Service.
\n\nMapGuide Open Source is a web-based platform that enables users to develop and deploy web mapping applications and geospatial services.
\n\nMapServer is an open-source platform for publishing geodata and interactive map applications on the web.
\n\nOpenLayers is a JavaScript library for displaying geospatial data in the web browser. It provides a client-side programming interface, so map development can happen independently of any particular server.
\n\nLeaflet is a free JavaScript library that can be used to create Web-GIS applications. The library uses HTML5 and CSS3 and therefore supports most modern browsers.
\n","date_published":"2020-08-30T20:53:38.058Z","date_modified":"2024-02-19T19:45:54.118Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/open-source-web-gis-applications/cover.png","banner_image":"https://mxd.codes/content/posts/published/open-source-web-gis-applications/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/download-open-street-map-data-as-shapefiles","url":"https://mxd.codes/articles/download-open-street-map-data-as-shapefiles","title":"Download OpenStreetMap data as Shapefiles","summary":"OpenStreetMap is the largest international project that aims to create a free world map.","content_html":"OpenStreetMap is the largest international project that aims to create a free world map. Voluntary \"mappers\" collect data about roads, railways, rivers, forests and houses and make them available online.
\nIf you also want to get involved in the OpenStreetMap project, you can find further information here: https://www.openstreetmap.de/faq.html#wie_mitmachen.
\nThe data is freely available to everyone. You can also use OpenStreetMap data commercially, because it is published under the Open Data Commons Open Database License (ODbL).
\nThe data is offered by OSM as XML or PBF, which is a \"compact\" data format for the raw data from OpenStreetMap. The file Planet.osm contains the entire planet that has been recorded so far and the full history planet version even contains all version histories of all objects. This file is usually updated once a week.
\nWith tools such as Osmosis or Osm2pgsql, this geodata can then be imported into a PostGIS database. However, since the full planet file is very large (76GB), most of you will probably not want to start with it.\nInstead of using the file for the entire planet, it is more useful to extract just the part you need. You can do this yourself or use a service such as Geofabrik.
\nFortunately, there are providers such as Geofabrik that process OSM files and make many of them available free of charge.
\nAt https://download.geofabrik.de/ you will find download links for specific regions, where you can finally download OpenStreetMap data as shapefiles. There is also a small map at the top right of the website that shows the area of the selected data.
\nThe data can also be downloaded as .pbf or bz2 files.
\nClicking on a region takes you to its sub-regions, where data for individual countries can be downloaded. In Europe, shapefiles of the OSM data are available for almost all countries.
\nFor Germany, unfortunately, shapefiles can only be downloaded for the individual federal states.
\nIn addition, polygons with the outlines of the individual federal states can be downloaded.
\nThere is of course a unique ID for each object. When looking at street objects, there are so-called \"other_tags\" in addition to the name and type of street (residential, tertiary, secondary, unclassified, etc.).
\nThere you will find all additional attributes that describe the object in more detail. In the case of a street, these include the maximum permitted speed, the maximum allowed weight, the postal code of the municipality, the surface material and more.
\nWith special queries you can access these \"other_tags\" and, for example, only show all paved roads in QGIS.
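\nTo make this concrete, here is a small illustrative Python sketch (not from the original article) that parses the hstore-style \"other_tags\" string into a dictionary; the sample tag string is made up:
```python
import re

def parse_other_tags(other_tags):
    """Parse an hstore-style OSM "other_tags" string into a dict."""
    # matches simple "key"=>"value" pairs (escaped quotes are not handled)
    pairs = re.findall(r'"([^"]+)"=>"([^"]*)"', other_tags or "")
    return dict(pairs)

tags = parse_other_tags('"maxspeed"=>"50","surface"=>"asphalt"')
print(tags["surface"])  # asphalt
```
In QGIS itself, a feature filter such as \"other_tags\" LIKE '%\"surface\"=>\"asphalt\"%' achieves a similar selection directly on the layer.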
\nWith the usual \"OSM Basemap\" all these objects are rendered and displayed in the usual OpenStreetMap design.
\n","date_published":"2020-08-30T20:51:51.650Z","date_modified":"2024-02-19T19:46:05.859Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/download-open-street-map-data-as-shapefiles/cover.png","banner_image":"https://mxd.codes/content/posts/published/download-open-street-map-data-as-shapefiles/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/export-qgis-layers-as-images-with-py-qgis","url":"https://mxd.codes/articles/export-qgis-layers-as-images-with-py-qgis","title":"Export QGIS layers as images with PyQGIS","summary":"With the help of PyQGIS processes such as the export of images for all layers from a map can be automated.","content_html":"PyQGIS is a powerful tool that enables the automation of various processes, including the seamless export of images for all layers from a map.
\nTo start the automation, you'll need one or more layers containing raster and/or vector data.
\nIn the initial step, if all your files reside in the same folder, you can use a \"for .. in loop\" to read them in. By adding .endswith(\".gpkg\"), for instance, you can specifically target files with the \".gpkg\" extension. The layer names are then stored in an array for future reference.
import os, sys\nfrom PyQt5.QtCore import QTimer\n\n# folder to look for files in\npath = \"folder/subfolder/\"\n# list the directory contents\ndirs = os.listdir(path)\n# array for storing layer names\nlayer_list = []\n# index of the layer currently being exported\ncount = 0\n# look for files in path\nfor file in dirs:\n    # only process \".gpkg\" files\n    if file.endswith(\".gpkg\"):\n        # add the vector layer, using the file name as the layer name\n        # (unique names are needed later for mapLayersByName)\n        vlayer = iface.addVectorLayer(path + file, file, \"ogr\")\n        layer_list.append(vlayer.name())\n\nThe newly added vector layers will then appear in the QGIS layer tree.
\nOnce you are satisfied with the display, you can use two functions to export a georeferenced image for each layer.
\n def prepareMap():\n # make all layers invisible\n iface.actionHideAllLayers().trigger()\n # get layer by layer_name\n layer_name = QgsProject.instance().mapLayersByName(layer_list[count])[0]\n # select layer\n iface.layerTreeView().setCurrentLayer(layer_name)\n # set selected layer visible\n iface.actionShowSelectedLayers().trigger()\n # Wait a second and export the map\n QTimer.singleShot(1000, exportMap) \n\nThe \"prepareMap ()\" function first deactivates all layers. A layer is then selected from the \"layer_list\" array using its layer name and then displayed again. The QTimer class is particularly important here. Before an image is created, there must always be a short wait before the selected layer is really visible. Without QTimer, the script would run so quickly that the result would be loud images with the same content. After waiting a second, the \"exportMap\" function is called.
\n def exportMap(): \n global count\n # save current view as image\n iface.mapCanvas().saveAsImage( path + layer_list[count] + \".png\" )\n # feedback for printed map\n print('{}.png exported sucessfully'.format(layer_list[count]))\n # get map for every layer in layer_list\n if count < len(layer_list)-1:\n # Wait a second and prepare next map (timer is needed because otherwise all images have the samec content \n # the script excecutes faster then the mapCanvas can be reloaded\n QTimer.singleShot(1000, prepareMap) \n count += 1\n\nNow the current map, in which only one level is shown, is saved as a PNG image in the source directory. Ultimately, you \"land\" in a loop that goes through all the layers that are in the \"layer_list\" array and calls the \"prepareMap\" function for each layer again.
","date_published":"2020-08-30T20:46:25.818Z","date_modified":"2025-01-22T15:32:55.882Z","tags":["gis","python"],"image":"https://mxd.codes/content/posts/published/export-qgis-layers-as-images-with-py-qgis/cover.png","banner_image":"https://mxd.codes/content/posts/published/export-qgis-layers-as-images-with-py-qgis/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/satellite-imagery-download-high-resolution","url":"https://mxd.codes/articles/satellite-imagery-download-high-resolution","title":"Access High-Resolution Satellite Imagery with Ease - Satellite Imagery Download Options","summary":"For all those who do not yet know the relevant contact points for current satellite images, there are a few links here where you can download satellite images from around the world, mostly free of charge.","content_html":"For individuals seeking access to high-resolution satellite imagery, numerous options are available for convenient and free downloads. Explore the following key sources to download global satellite images effortlessly:
\nThe Copernicus Open Access Hub (Sentinels Scientific Data Hub) facilitates free and open access to Sentinel-1, Sentinel-2, and Sentinel-5P products. Sentinel data is accessible through Copernicus Data and Information Access Services (DIAS) on various platforms. Find more information on Copernicus DIAS.
\nThe GEOSS Portal, operated by the European Space Agency (ESA), offers a map-based online interface for downloading earth observation data globally.
\nNASA Worldview provides an interactive user interface to search for high-resolution and global satellite images. Explore thematic images related to forest fires, air quality, flood monitoring, and more.
\nEuropean Space Imaging is a leading provider of Very High Resolution (VHR) satellite images for Europe, North Africa, and the CIS countries.
\nAccess remote sensing data through the USGS Global Visualization Viewer (GloVis), available since 2001. The platform was redesigned in 2017 to adapt to changing Internet technologies, providing users with easy-to-use navigation tools for instant viewing and downloading of scenes.
\nThe GeoStore, operated by AIRBUS, allows users to order high-resolution and current satellite images.
\nThe EOWEB GeoPortal (EGP) by DLR is a multi-mission web portal providing interactive access to the DLR earth observation database.
","date_published":"2020-08-30T16:28:00.367Z","date_modified":"2025-05-17T07:54:26.740Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/satellite-imagery-download-high-resolution/cover.png","banner_image":"https://mxd.codes/content/posts/published/satellite-imagery-download-high-resolution/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/host-a-static-website-with-your-own-domain-aws-s3-and-cloud-front","url":"https://mxd.codes/articles/host-a-static-website-with-your-own-domain-aws-s3-and-cloud-front","title":"Host a static website with your own domain, AWS S3 and CloudFront","summary":"With AWS (and in particular the free AWS contingent) you have the option of a static website with a custom domain for a few Hosting cents a month including CDN via CloudFront and CI/CD integration.","content_html":"With AWS (and in particular the free AWS tier) you can host a static website with a custom domain for a few cents a month, including a CDN via CloudFront and CI/CD integration.
\nBefore I switched completely to AWS, I had a common shared hosting option that cost me around € 72 a year. With this option I had
\nOn the whole, much more than I need to run my static GatsbyJS website.
\nSo why shouldn't I only use and pay for resources that I ultimately need and also get some cloud computing experience?
\nThe basis for hosting on AWS is formed by S3 Buckets. \nBuckets are \"containers\" on the web where you can save files.\nIn order for redirects from subdomains such as www.mxd.codes to mxd.codes to work, you need a bucket for each domain.
\nFirst of all, create an S3 bucket for the root domain. In my case the bucket name is the domain name mxd.codes; then select a region (for example EU (Frankfurt)).\nThe default settings can be kept under Options, unless you want to enable versioning or access logging.\nSo that everyone can access the website content later, remove the tick that is set by default at \"Block all public access\", check the bucket settings again and finally create the bucket.
\nIn each bucket you can or should enter a bucket policy that further defines access.\nTo do this, click on the name of the bucket and go to \"Permissions\" -> \"Bucket Policy\".
\nThe following policy must then be saved to allow public read access.
\n{\n \"Version\": \"2008-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowPublicRead\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"*\"\n },\n \"Action\": \"s3:GetObject\",\n \"Resource\": \"arn:aws:s3:::mxd.codes/*\"\n }\n ]\n}\n\n\"mxd.codes\" has to be replaced with your bucket name!
\nIf everything was done correctly, the permissions should now look something like this:
\n
In the bucket properties you have to enable \"Static website hosting\" and specify an index document and an error document.\nFor GatsbyJS these are index.html and 404.html.
\n
Now the S3 bucket for the subdomain www.mxd.codes is still missing.\nSo create a new bucket with the name of the subdomain www.mxd.codes with public access and add the bucket policy.
\nIn the settings for \"hosting a static website\" you use \"redirect requests\" and enter the target bucket mxd.codes and you can enter https as a protocol, because later on the content of the static website is delivered via CloudFront, which can be encrypted with SSL certificates.
\n
The buckets are now created and correctly configured for the operation of a static website including redirect.
\nWith the free AWS tier, 50 GB of data transfer per month is included for CloudFront.
\nWith a generous page size of 4 MB, that is enough for 12,500 page views per month and should be more than sufficient for a website with average traffic. So why not take the free CDN with you?
\nIf the costs after the free year put you off, you still have the option to switch to another CDN provider such as Cloudflare.
\nIn CloudFront you have to create a web distribution for each bucket.\nAs the origin domain name, do not pick the bucket from the dropdown list; instead, copy the bucket's static website endpoint from S3.
\n
In the distribution for the bucket mxd.codes, for example, \"mxd.codes.s3-website.eu-central-1.amazonaws.com\" is specified as the origin.\n\"Origin ID\" is then filled in automatically. For \"Viewer Protocol Policy\", select \"Redirect HTTP to HTTPS\", because users should only be able to access the website via HTTPS. For \"Compress Objects Automatically\", select \"Yes\", so that CloudFront compresses all files automatically.\nUnder \"Alternate Domain Names (CNAMEs)\" you have to enter the domain the distribution serves, for example mxd.codes for the root-domain distribution.
\nAt \"SSL-Certifacte\" you can now create for the two domains mxd.codes and www.mxd.codes two free Amazon SSL certificate via the Certificate Manager (ACM ). To do this, add your two domains in ACM.
\n
You can now have this validated using a DNS or email method. If you have included your domain in Route 53, you can do it more or less automatically by simply following the instructions.
\nBack in the CloudFront distribution creation, you only have to specify a \"Default Root Object\": index.html.\nIf you don't do this, CloudFront always shows an \"Access Denied\" message in XML format when you access your domain (at least that was the case for me).
\nFinally, the distribution must of course still be activated under \"Distribution State\".
\nThe first distribution is finished. Now repeat the same procedure for the subdomain www.mxd.codes with the corresponding \"Origin Domain Name\" (the bucket's website endpoint!).
\nDeploying the distribution can take up to 20 minutes.\n(If you want to clear your CloudFront cache and have already installed the AWS CLI, you can do this with the following command:
\naws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths \"/*\"\n\nIn the meantime, you can create the redirects for CloudFront in Route 53.
\nIn [Route 53](\"Route 53\") you need a hosted zone (= 0.50€ per month). Then A (and provided that in the CLoudFrontDistribution IPv6 is\nactivated (which it is by default), as well as an AAAA) data record can be created.
\nThat means you basically need four \"alias\" records:
\nHere, for once, you can select the CloudFront URL from the dropdown list.\nFor \"Routing Policy\" and \"Evaluate Target Health\" you can leave the default settings, unless you want to experiment.
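\nIf you prefer the AWS CLI over the console, such an alias record can also be created with aws route53 change-resource-record-sets and a change batch like the sketch below. This is illustrative only: the DNSName of the distribution is a placeholder, and Z2FDTNDATAQYW2 is, to my knowledge, the fixed hosted zone ID used for all CloudFront alias targets:
```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mxd.codes",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```
The same batch with \"Type\": \"AAAA\" covers the corresponding IPv6 record.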
\nNow, if you wait a little, you should be redirected from
\nUnderstanding the fundamental disparities between Geographic information systems (GIS) and Computer-Aided Design (CAD) is crucial for anyone delving into spatial data and digital modeling. Let's explore the key differences that set these systems apart:
\nA geographic information system (GIS) serves as a comprehensive system designed for the display and processing of geodata. This includes data enriched with spatial positions, allowing for the structured presentation, representation, and analysis of complex issues. Key differentiators include:
\nCAD systems, on the other hand, are geared towards the creation and graphical modeling of digital content. These systems commonly handle plans, drawings, and 3D models, prioritizing precision in representation. Key attributes of CAD systems include:
\nWhile GIS and CAD serve distinct purposes, they can complement each other effectively. The synergy between these systems arises from their ability to address different aspects of spatial data management. GIS excels in handling diverse geospatial information, while CAD ensures meticulous precision in digital modeling. Together, they form a powerful combination, offering a comprehensive solution for diverse applications.
\nIn conclusion, the choice between GIS and CAD depends on the specific needs of a project. Understanding their unique features enables professionals to leverage the strengths of each system, ultimately enhancing the overall efficiency and effectiveness of spatial data utilization.
","date_published":"2020-08-30T13:50:00.708Z","date_modified":"2024-02-19T19:46:53.764Z","tags":["cad","gis"],"image":"https://mxd.codes/content/posts/published/gis-vs-cad-the-difference-between-gis-and-cad/cover.png","banner_image":"https://mxd.codes/content/posts/published/gis-vs-cad-the-difference-between-gis-and-cad/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/what-is-a-shapefile-shp-dbf-and-shx","url":"https://mxd.codes/articles/what-is-a-shapefile-shp-dbf-and-shx","title":"What is a shapefile? .shp, .dbf and .shx","summary":"The shapefile format is a general format for storing vector data.","content_html":"The shapefile format is a widely used standard for storing vector GIS data. Developed by Esri, it has become an open format and a preferred choice for data transfer, compatible with major GIS software programs such as ArcGIS and QGIS.
\nDespite the singular name, a shapefile is a collection of three essential files: .shp, .shx, and .dbf. These files, residing in the same directory, collectively enable visualization. Additional files like .prj may contain projection information, and the entire package is often compressed in a ZIP file for easy transmission via email or download links on websites.
\nAll files within a shapefile share the same name but have different formats. Three core files constitute a shapefile:
\nOptional files may include .atx, .sbx, .sbn, .qix, .aih, .ain, .shp.xml, .prj, and .cpg, each serving specific functions.
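\nAs a small illustrative sketch (not from the original article), the mandatory trio can be checked in Python before loading a shapefile; the file names here are hypothetical:
```python
from pathlib import Path

REQUIRED = (".shp", ".shx", ".dbf")  # the mandatory members of a shapefile

def missing_companions(shp_path):
    """Return the names of required companion files missing next to a .shp."""
    base = Path(shp_path)
    return [base.with_suffix(ext).name
            for ext in REQUIRED
            if not base.with_suffix(ext).exists()]

# e.g. missing_companions("roads.shp") reports ["roads.shx", "roads.dbf"]
# if only roads.shp itself is present
```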
\nShapefiles store elements of a single geometry type, such as
\npoints,
lines,
surfaces,
polygons or
multi-points
\nHowever, a data record doesn't necessitate an associated geometry; pure factual data can be stored as a shapefile.
For those seeking more robust options, GIS databases, like PostGIS (PostgreSQL) and GeoPackages, emerge as superior alternatives. Databases offer limitless file sizes, support various geometry types, and allow topological creations. Data can be effortlessly shared as geopackages, streamlining the transfer process into a single, convenient file. Shapefiles remain a staple, but exploring these alternatives ensures flexibility and enhanced capabilities in GIS data management.
","date_published":"2020-08-30T13:47:06.733Z","date_modified":"2025-02-01T18:13:45.722Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/what-is-a-shapefile-shp-dbf-and-shx/cover.png","banner_image":"https://mxd.codes/content/posts/published/what-is-a-shapefile-shp-dbf-and-shx/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/articles/what-are-geodata","url":"https://mxd.codes/articles/what-are-geodata","title":"Understanding Geodata","summary":"Geodata is information with a spatial reference that can be used in a GIS, among other things.","content_html":"Geodata, also known as GIS data, is information with a spatial reference utilized in Geographic Information Systems (GIS). These data play a crucial role in various applications, providing valuable insights into geographical elements.
\nAll geodata comprise two fundamental components: object attributes and object geometries.
\nGeodata is further categorized based on its nature.
\nGeographic information can be stored in various formats:
\nGeodata can be obtained from both public and private providers. Public entities, like the State Office for Digitization, Broadband, and Surveying, may offer aerial photos and topographic maps with legal restrictions and varying costs.
\nThe quality of geodata is critical, depending on its application:
\nConclusion: Geodata as a Decision-Making Medium
\nIn conclusion, geodata represents real-world objects with attributes and geometries. Its primary and secondary forms are stored in various formats, serving as the backbone for GIS analysis. Understanding the quality considerations is paramount for effective decision-making in diverse fields. Geodata is not just information; it is a powerful tool for spatial analysis and decision support.
","date_published":"2020-08-30T13:45:32.795Z","date_modified":"2024-02-19T19:47:14.733Z","tags":["gis"],"image":"https://mxd.codes/content/posts/published/what-are-geodata/cover.png","banner_image":"https://mxd.codes/content/posts/published/what-are-geodata/cover.png","authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/motorbike-tour-through-south-tyrol","url":"https://mxd.codes/photos/motorbike-tour-through-south-tyrol","title":"Motorbike tour through South Tyrol","content_html":"","date_published":"2020-08-18T08:33:19.058Z","image":"https://mxd.codes/content/photos/motorbike-tour-through-south-tyrol/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/motorbike-tour-through-south-tyrol/photo-1.jpg","mime_type":"image/jpeg","title":"cover_IMG_20200825_134355_3bd4309570.jpg"},{"url":"https://mxd.codes/content/photos/motorbike-tour-through-south-tyrol/photo-2.jpg","mime_type":"image/jpeg","title":"cover_IMG_20200825_133745_1_826b2d5171.jpg"},{"url":"https://mxd.codes/content/photos/motorbike-tour-through-south-tyrol/photo-3.jpg","mime_type":"image/jpeg","title":"cover_IMG_20200826_125748_1_92b7f34304.jpg"},{"url":"https://mxd.codes/content/photos/motorbike-tour-through-south-tyrol/photo-4.jpg","mime_type":"image/jpeg","title":"cover_IMG_20200827_160308_72330be24a.jpg"},{"url":"https://mxd.codes/content/photos/motorbike-tour-through-south-tyrol/photo-5.jpg","mime_type":"image/jpeg","title":"cover_IMG_20200826_153522_1_7075d3b9b9.jpg"},{"url":"https://mxd.codes/content/photos/motorbike-tour-through-south-tyrol/photo-6.jpg","mime_type":"image/jpeg","title":"cover_IMG_20200827_160221_02c5ad81f2.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/ski-touring-in-inntal","url":"https://mxd.codes/photos/ski-touring-in-inntal","title":"Ski 
touring","content_html":"","date_published":"2019-02-18T08:31:08.050Z","image":"https://mxd.codes/content/photos/ski-touring-in-inntal/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/ski-touring-in-inntal/photo-1.jpg","mime_type":"image/jpeg","title":"49335092_1413215778813731_4012934939283981891_n_784f4d9dd8.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/motorcycle-tour-to-walchensee","url":"https://mxd.codes/photos/motorcycle-tour-to-walchensee","title":"Motorcycle tour to walchensee","content_html":"","date_published":"2017-07-18T08:32:28.590Z","image":"https://mxd.codes/content/photos/motorcycle-tour-to-walchensee/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/motorcycle-tour-to-walchensee/photo-1.jpg","mime_type":"image/jpeg","title":"11910185_1649480532000854_210557428_n_7a928a6eb1.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/road-cycling","url":"https://mxd.codes/photos/road-cycling","title":"Road Cycling","content_html":"","date_published":"2017-03-18T08:31:46.422Z","image":"https://mxd.codes/content/photos/road-cycling/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/road-cycling/photo-1.jpg","mime_type":"image/jpeg","title":"13696541_149612202133462_1520651288_n_6466a6c40e.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/highfield-festival-2016","url":"https://mxd.codes/photos/highfield-festival-2016","title":"Highfield Festival 2016","content_html":"","date_published":"2016-07-18T08:35:00.038Z","image":"https://mxd.codes/content/photos/highfield-festival-2016/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/highfield-festival-2016/photo-1.jpg","mime_type":"image/jpeg","title":"14063491_1637639389882292_91922205_n_b95d5bd633.jpg"}],"authors":[{"name":"Max 
Dietrich","url":"https://mxd.codes"}]},{"id":"https://mxd.codes/photos/summer-sunset","url":"https://mxd.codes/photos/summer-sunset","title":"Summer sunset","content_html":"","date_published":"2014-07-18T08:30:22.014Z","image":"https://mxd.codes/content/photos/summer-sunset/photo-1.jpg","attachments":[{"url":"https://mxd.codes/content/photos/summer-sunset/photo-1.jpg","mime_type":"image/jpeg","title":"11918004_1601369753456626_80774012_n_e98e9fdd6c_d9d566254b.jpg"}],"authors":[{"name":"Max Dietrich","url":"https://mxd.codes"}]}]}