Bleech https://bleech.de/ WordPress Agency Wed, 15 Oct 2025 18:55:45 +0000

Why it's time to upgrade to PHP 8.3 https://bleech.de/blog/warum-es-zeit-ist-auf-php-8-3-zu-upgraden/ Tue, 23 Sep 2025 13:41:33 +0000 A fast and secure WordPress website starts with the PHP version it runs on. WordPress now officially supports PHP 8.3, as the adoption threshold has been reached. Now is the perfect time to upgrade and benefit from the improvements.

The post Warum es Zeit ist auf PHP 8.3 zu upgraden appeared first on Bleech.

WordPress now officially supports PHP 8.3. Until recently, support was still labelled as beta. That's because WordPress waits until at least 10% of all websites run a given PHP version before declaring it fully compatible. That threshold has now been passed.

What's new in PHP 8.3

PHP 8.3 was released on November 23, 2023 and brings a number of improvements. Developers can now define types for class constants, access class constants dynamically, validate JSON directly with the new json_validate() function, and more. There are also performance optimizations, bug fixes, and deprecations that pave the way for the future.

The PHP life cycle

A key reason why regular updates matter is PHP's own life cycle. Each version receives two years of active support, followed by two years of security updates. After that, it's over: no bug fixes, no patches, no safety net.

As of September 2025, PHP 8.1 is on the home stretch and will only receive security updates until December 2025. PHP 8.2 follows one year later. PHP 8.3 has active support until the end of 2025 and receives security updates until December 2027. PHP 8.4, the latest version, extends that window to the end of 2028.
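The life-cycle dates above can be encoded as a quick sanity check. This is a minimal sketch in JavaScript; the object and helper names are my own, and the dates are those stated here (as of September 2025):

```javascript
// Security-support end dates per PHP version, as listed above (assumption:
// end of December for each "December 20xx" date).
const phpSecurityEol = {
  '8.1': '2025-12-31',
  '8.2': '2026-12-31',
  '8.3': '2027-12-31',
  '8.4': '2028-12-31'
}

// Returns true while the given version still receives security updates.
function isSupported(version, today = new Date()) {
  const eol = phpSecurityEol[version]
  return Boolean(eol) && today <= new Date(eol)
}

console.log(isSupported('8.1', new Date('2026-01-15'))) // false
console.log(isSupported('8.3', new Date('2026-01-15'))) // true
```

Checking a planned upgrade window against a future date makes it obvious when a site would be left without security fixes.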

Chart showing lifetime support for PHP 8 versions.

PHP life cycle as of September 2025 (source)

The adoption threshold

With the beta label removed, WordPress 6.8 now supports PHP 8.3 fully and officially. The "beta" label often causes confusion. In fact, it's a precaution:

Support is labelled as "beta support" until at least 10% of all WordPress sites are running that version or later.

make.wordpress.org

Although WordPress itself is already compatible while the beta label is in place, the ecosystem of plugins and themes may not have fully caught up yet. That's why it's recommended to wait for the adoption threshold.

The 10% mark for websites running WordPress 6.8 in combination with PHP 8.3 was reached in July 2025. If you're on WordPress 6.8, you can therefore switch to PHP 8.3 and expect common plugins and themes to be compatible as well.

Table shows which PHP versions are supported by different WordPress versions, including beta support status.

WordPress / PHP compatibility as of September 2025 (source)

What the upgrade brings

There are three main reasons to keep your PHP version up to date:

  1. Security: Outdated PHP versions are a risk; it's only a matter of time until vulnerabilities surface. Your website then becomes attackable, even if WordPress, plugins, and themes are up to date.
  2. Performance: Every new PHP version ships improvements that benefit load times, and with them the user experience in the backend and frontend, conversion rates, and search engine rankings.
  3. Compatibility: The WordPress ecosystem keeps evolving: hosts, plugins, and themes drop support for older versions to keep maintenance manageable. Upgrading ensures your website stays future-proof.

Possible stumbling blocks

Some plugins or themes may not yet be fully compatible with PHP 8.3, especially if their code architecture is outdated. Custom code of your own can also trigger warnings or errors.

In rare cases, your hosting provider may not offer PHP 8.3 yet. If so, it's worth considering a switch.

How to upgrade successfully

The safest approach is to perform the upgrade in a test or staging environment first. Update WordPress, plugins, and themes before changing the PHP version. Then switch to PHP 8.3 and enable debug mode to surface any warnings or errors.

If everything runs smoothly, you can roll the change out to production. Make sure there's a backup plan, and watch the website closely during the first few days after the switch to catch problems early.

FAQ: Upgrading to PHP 8.3

Which WordPress version officially supports PHP 8.3?
WordPress 6.8 is the first version with full compatibility. Earlier versions officially only have beta support (as of September 2025).

My host doesn't offer PHP 8.3 yet. What should I do?
Ask again to be sure. If there's no timeline for rolling out PHP 8.3 support, you should think about switching web hosts. Current PHP versions are hosting basics.

Can I stay on PHP 8.1 or 8.2 for now?
Yes, but don't wait too long. The life cycle of PHP 8.1 ends in December 2025 and that of PHP 8.2 in December 2026. After that, these versions no longer receive security updates!

Should I jump straight to PHP 8.4?
As of September 2025, WordPress 6.8 is still in beta support for PHP 8.4. Better to wait until the 10% adoption threshold is reached and official compatibility is confirmed.

tl;dr

WordPress 6.8 has officially supported PHP 8.3 since July 2025, after the 10% adoption threshold was reached. PHP 8.3 receives security updates until December 2027. Upgrading brings better performance, security, and long-term stability within the WordPress ecosystem. Test the switch in a staging environment, fix warnings and errors, and make your website future-proof.

Obligation or opportunity? Why accessibility always wins. https://bleech.de/blog/european-accessibility-act-eaa-website-barrierefrei-machen/ Tue, 22 Jul 2025 22:59:08 +0000 In June 2025, the European Accessibility Act (EAA) came fully into force, making web accessibility a legal requirement for many companies. But what does that mean for your website? And why is accessibility more than just a box to tick?

The post Pflicht oder Chance? Warum Barrierefreiheit immer gewinnt. appeared first on Bleech.

The EAA from 2025: what you need to know!

The European Accessibility Act (EAA) is an EU directive that aims to remove barriers for people with disabilities and ensure that a broader range of products and services is accessible. In Germany, the BFSG and BITV 2.0 transpose the EAA into national law, contributing to a more inclusive digital space across Europe. For websites and mobile applications, this means they should be usable without difficulty by everyone, regardless of ability.

Why does the EAA matter for my website?

Legal compliance is an important factor, but the real value of web accessibility goes far beyond avoiding penalties. An accessible website creates fair and equal access for all user groups.

  • Increased reach: Roughly 15% of the world's population lives with a disability. Making your website accessible opens digital doors to a huge, often underserved market. This isn't just good ethics; it's good business.
  • Better usability for EVERYONE: Features that benefit users with disabilities, such as clear navigation, good color contrast, and keyboard operability, improve the experience for everyone. Imagine trying to use your phone in bright sunlight or with one hand: accessible design makes this easier for all users.
  • Improved SEO: Many accessibility best practices align with the principles of search engine optimization (SEO). Proper heading structures, descriptive alt texts for images, and clear content help search engines understand and rank your website.
  • Stronger brand reputation: Demonstrating a commitment to inclusion improves your brand image. It shows that you care about all your customers, which builds trust and loyalty.

Take our quick quiz!

Graphic with EU accessibility icon and the text: "Does the EAA apply to my website?"
Do you work for a public institution or a private organization?
Do you sell services or products to consumers (B2C)?
Does your company have more than 9 employees, or does it generate more than 10 million euros in annual revenue?

According to the EU directive (Accessibility Act), you are probably not affected.

You don't have to implement anything, but you can. Here's why:

  • An estimated 80 million people in the EU live with long-term impairments.
  • Temporary impairments (e.g., after eye surgery) affect many people as well.
  • If you use your smartphone in the sun, you need good contrast and a clear structure.

Accessibility means usability.

CONTACT

Write to us for a free consultation!

The EU accessibility regulation is probably binding for your website.

For your website's accessibility, you should follow the internationally recognized Web Content Accessibility Guidelines (WCAG) 2.1 Level AA. Key requirements include:

  • Perceivable: alternative texts for images, color contrast, captions, scalable font sizes, ARIA labels for screen readers
  • Operable: keyboard navigation, avoidance of seizure triggers, navigation and orientation aids, touch target sizes, focus indicators
  • Understandable: clear and simple language, predictable navigation, input assistance and error messages, clear and consistent labels in forms
  • Robust: compatibility with assistive technologies, HTML validation, semantic markup of content
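The contrast requirement in particular is easy to check programmatically. Here is a small sketch of the WCAG 2.1 relative-luminance and contrast-ratio formulas in JavaScript (function names are my own; Level AA requires at least 4.5:1 for normal-size text):

```javascript
// Relative luminance of an 8-bit sRGB color, per the WCAG 2.1 definition.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((c) => {
    const s = c / 255
    // Linearize the gamma-encoded channel
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4)
  })
  return 0.2126 * R + 0.7152 * G + 0.0722 * B
}

// Contrast ratio between foreground and background: (L1 + 0.05) / (L2 + 0.05),
// where L1 is the lighter and L2 the darker luminance.
function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg)
  const l2 = relativeLuminance(bg)
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1]
  return (lighter + 0.05) / (darker + 0.05)
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])) // ≈ 21, the maximum (black on white)
```

A result below 4.5 for body text signals a Level AA failure worth fixing.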



What do I need to do to make my website comply with the EAA?

Good web accessibility isn't about fancy add-on features but about fundamental design and development decisions. Here are some key elements:

  • Keyboard navigation: Users should be able to navigate the entire website with the keyboard alone (Tab, Shift+Tab, Enter), without needing a mouse.
  • Alt text for images: Every meaningful image should have a descriptive "alt text" so screen readers can describe it to visually impaired users.
  • Sufficient color contrast: Text and background colors should contrast strongly enough to remain legible for people with a range of visual impairments.
  • Clear and consistent structure: Use proper heading tags (H1, H2, H3, etc.) to structure content logically, so that it's easy to skim and understand for all users, including those who use screen readers.
  • Captions and transcripts for media: Videos and audio content should include accurate captions and, ideally, full transcripts so that deaf or hard-of-hearing people can benefit from them too.
  • ARIA attributes for dynamic content and complex elements: For elements such as dropdown menus, accordions, tabs, and icon-only buttons, standard HTML alone isn't always enough to convey their state or purpose to screen readers. ARIA (Accessible Rich Internet Applications) attributes provide additional semantic information. For example:
    • aria-label: Provides a text label for an element when no visible text is available (e.g., for a search button that contains only an icon, aria-label="Search").
    • aria-expanded: Indicates whether a collapsible element (like a dropdown) is currently expanded or collapsed.
    • aria-haspopup: Identifies an element that triggers a popup (like a menu or a dialog).
    • role: Defines an element's purpose when standard HTML falls short (e.g., role="navigation" for a navigation landmark).

      Using ARIA correctly ensures that screen readers can announce the purpose, state, and value of interactive components, so users can navigate and interact with complex parts of your website effectively.
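To make the aria-expanded case concrete, here is a hypothetical sketch of a disclosure toggle in JavaScript (the markup, IDs, and function name are assumptions for illustration, not from an actual site):

```javascript
// Toggle a collapsible panel and keep aria-expanded in sync so screen
// readers always announce the correct state of the controlling button.
function toggleDisclosure(button, panel) {
  const expanded = button.getAttribute('aria-expanded') === 'true'
  button.setAttribute('aria-expanded', String(!expanded))
  panel.hidden = expanded // hide when it was expanded, show otherwise
  return !expanded
}

// Hypothetical wiring, assuming <button aria-controls="menu" aria-expanded="false">
// and a panel with id="menu":
// const button = document.querySelector('[aria-controls="menu"]')
// const panel = document.getElementById('menu')
// button.addEventListener('click', () => toggleDisclosure(button, panel))
```

The important detail is that the visual state (hidden or visible) and the ARIA state always change together; updating only one of them is a common accessibility bug.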

How we can help

We can help you meet the requirements and optimize your website for accessibility.

  • Audit: What would need to change on your website to meet the regulation?
  • Implementation: Together with you, we decide which adjustments should be made to the website, and we implement them for you.
  • Continuous monitoring: To make sure your website maintains its level of accessibility, we run automated checks on it.

At Bleech, we specialize in building websites that are not only visually impressive but also technically robust and accessible.

Get in touch!

Custom 3D models in Mapbox: a step-by-step integration guide https://bleech.de/blog/custom-3d-models-in-mapbox/ Tue, 24 Jun 2025 13:40:29 +0000 https://bleech.de/blog/custom-3d-models-in-mapbox-a-step-by-step-integration-guide/ Mapbox already offers great-looking 3D maps out of the box, but sometimes you need something more specific to tell your story. In this post, I show how I built and integrated a custom 3D model using Three.js to highlight a unique landmark and create a more immersive experience.

The post Custom 3D models in Mapbox: a step-by-step integration guide appeared first on Bleech.

The need for unique detail

Imagine you want to showcase a specific building or landmark that isn’t available in the Mapbox Standard 3D dataset. You’re faced with a generic, simplified representation that doesn’t capture the architectural details or significance of the location. How do you bridge this gap and bring your vision to life?

Two 3D map views show the same courtyard location marked with a blue pin.

Pontifical University of St. Thomas Aquinas (Angelicum), Rome, Italy in Mapbox Standard – Before and After.

Solving it piece by piece

I followed several important steps, starting with 3D modeling, then integrating the model into the map, and finally fine-tuning the details for the best result.

Gathering measurements: the foundation of an accurate 3D model

Before diving into 3D modeling, precise measurements are crucial. Since architectural plans aren’t always available, I used a combination of tools to estimate dimensions accurately.

  • Google Maps’ measuring tool: I traced the building’s outline and recorded key dimensions.
Screenshot of Google Maps showing a top-down view of a building with measurement lines overlaid, indicating the dimensions being recorded

Measuring the building footprint in Google Maps (top view) to gather accurate dimensions for the 3D model.

  • Custom ruler method: Using photos with minimal perspective distortion and a graphic editor like Figma, I created visual rulers based on a known width from a top-down satellite view in Google Maps. I divided this reference measurement into equal segments to establish an accurate scale. Then, I rotated the rulers vertically to measure heights and architectural details like doors, windows, ledges, and other elements.
Collage of a historic building and courtyard, each overlaid with measurement lines showing their dimensions.

Measuring Up: Using a custom ruler to determine building dimensions from photos.

3D modeling in Blender: bringing the model to life

Blender, a powerful and free open-source 3D creation suite, was my tool of choice for modeling. I imported the top-view screenshot with measurements into Blender as a reference image. Using meters as units, I modeled the building, focusing on capturing key architectural features while maintaining a low-poly style consistent with Mapbox’s 3D buildings. My goal wasn’t photorealistic accuracy but a visually appealing and performant representation.

After completing the geometry, I applied initial materials with approximate colors, then refined them during the lighting setup to better match the colors of Mapbox buildings and the surrounding environment.

Blender interface with an aerial image, tracing a building footprint with measurements as a 3D model.
Blender window with simple grey 3D building blocks on an aerial map in the working view.
Blender interface with a 3D model of a building complex overlaid on a top-down satellite map.
A 3D model of a building complex being edited and aligned over an aerial map in Blender.
3D model of a large monastery building with a courtyard, rotunda, and church with tower and crosses.
Bright 3D model of a large historic building complex with a courtyard, towers, and attached church.

Exporting the model: preparing for Mapbox integration

Once the 3D model was complete, I exported it as a .glb file. This format is a binary version of .gltf that combines the model, textures, and other assets into a compact file, making it easier to manage. It’s widely supported and optimized for web-based 3D graphics, which makes it ideal for Mapbox integration.

When exporting, I always ensure the model’s scale and orientation are correct to avoid issues during map placement. Applying Blender modifiers beforehand is crucial. For the best results, I use meters as the unit scale, check face orientation to prevent flipped normals, and simplify the geometry where possible to improve performance. If the model includes textures, I make sure they are optimized and lightweight to ensure faster loading times.

If you want to enhance your skills in this area, I highly recommend checking out the Three.js Journey course by Bruno Simon. It’s an excellent resource for learning how to work with Three.js, create and optimize 3D models in Blender, and export them properly for use in Three.js and projects like this one.

Map integration with Mapbox GL JS: where the magic happens

Now for the fun part—digging into the technical details. Integrating the custom model into Mapbox GL JS involves several key steps:

1. Clipping the default building

3D city map showing a highlighted construction site area marked with a blue location pin.

Default building clipped using a custom GeoJSON polygon, making space for the 3D model.

To seamlessly integrate my custom model, I needed to remove the default Mapbox building. I achieved this using Mapbox’s clip layer. First, I created a GeoJSON polygon that precisely outlines the area of the default building, using a tool like geojson.io to draw and export the shape. Then, I added a clip layer to the map, referencing the GeoJSON polygon as its source.

map.addSource('eraser', {
  type: 'geojson',
  data: {
    type: 'FeatureCollection',
    features: [
      {
        type: 'Feature',
        properties: {},
        geometry: {
          // Coordinates exported from geojson.io
          coordinates: [
            [
              [12.487380531583028, 41.89565029206369],
              [12.488045110910008, 41.895709818778926],
              [12.488039595728878, 41.89582681940209],
              [12.487824503664086, 41.895841187884514],
              [12.487846564388576, 41.896062872638026],
              [12.4884973557632, 41.89599513571156],
              [12.488748296504838, 41.89593150398784],
              [12.488902721576324, 41.89542655322492],
              [12.488797933134919, 41.895315709840276],
              [12.488654538424868, 41.89522334020583],
              [12.487518411111182, 41.89510223314949],
              [12.487380531583028, 41.89565029206369]
            ]
          ],
          type: 'Polygon'
        }
      }
    ]
  }
})

map.addLayer({
  id: 'eraser',
  type: 'clip',
  source: 'eraser',
  layout: {
    'clip-layer-types': ['symbol', 'model']
  },
  minzoom: 1
})

2. Setting up the 3D model

To integrate a 3D model, I defined its geographic location and transformation parameters.

// Define the origin (longitude, latitude) of the 3D model
const modelOrigin = [12.488160, 41.895612]
// Set the altitude of the model (0 means at ground level)
const modelAltitude = 0
// Define the rotation of the model in radians [X,Y,Z]
const modelRotate = [Math.PI / 2, -12.43, 0]

// Convert the geographic coordinates to Mercator coordinates
// This is necessary because Mapbox GL JS uses Web Mercator projection
const modelAsMercatorCoordinate = mapboxgl.MercatorCoordinate.fromLngLat(
  modelOrigin,
  modelAltitude
)

// Create a transformation object for the 3D model
const modelTransform = {
  // Set the X, Y, Z translations using the Mercator coordinates
  translateX: modelAsMercatorCoordinate.x,
  translateY: modelAsMercatorCoordinate.y,
  translateZ: modelAsMercatorCoordinate.z,
  // Set the rotations around each axis
  rotateX: modelRotate[0],
  rotateY: modelRotate[1],
  rotateZ: modelRotate[2],
  // Calculate the scale factor
  // This ensures the model is sized correctly relative to the map
  scale: modelAsMercatorCoordinate.meterInMercatorCoordinateUnits()
}

3. Adding a custom 3D model layer to Mapbox

I added a custom layer to the Mapbox map to render the 3D model. This involved setting up a Three.js scene within Mapbox GL JS, loading the model, and configuring the rendering pipeline. Here’s how the custom layer was structured:

map.addLayer({
  id: '3d-model',
  type: 'custom',
  renderingMode: '3d',
  onAdd: function (map, gl) {
    // Set up Three.js scene, camera, lights, and load the 3D model 
  },
  render: function (gl, matrix) {
    // Render the 3D model
  }
})

For a full breakdown, see the official Mapbox GL JS 3D model example.

4. Loading the GLTF/GLB model

I used GLTFLoader from Three.js to load the model.

onAdd: function (map, gl) {
  // ...
  const loader = new GLTFLoader()
  loader.load('path/to/model.glb', (gltf) => {
    // Traverse through the model’s scene graph
    gltf.scene.traverse((child) => {
      if (child.isMesh) {
        // Enable shadows only for meshes
        child.castShadow = true
        child.receiveShadow = true
      }
    })
    this.scene.add(gltf.scene)
  })
}

The gltf.scene.traverse((child) => {...}) function loops through all the meshes within the model, allowing for selective enabling of shadow casting and receiving. This capability is particularly useful when models contain elements that should not cast or receive shadows, such as transparent or background meshes.

5. Adding lights and configuring shadows

Ambient and directional lights

I used an ambient light for general illumination and two directional lights to nicely light up the model and cast shadows:

onAdd: function (map, gl) {
  // ...
  const ambientLight = new THREE.AmbientLight(0xffffff, 2)
  this.scene.add(ambientLight)

  const directionalLight = new THREE.DirectionalLight(0xffffff, 1)
  directionalLight.position.set(-40, 250, 150)
  directionalLight.castShadow = true
  directionalLight.shadow.bias = -0.003
  this.scene.add(directionalLight)

  // Second directional light to illuminate the model from another side
  const secondaryLight = new THREE.DirectionalLight(0xffffff, 0.5)
  secondaryLight.position.set(50, 100, -50)
  this.scene.add(secondaryLight)
}

Shadow configuration

To improve shadow quality and reduce artifacts, I increased the shadow map resolution and adjusted the shadow camera’s near, far, left, right, top, and bottom properties:

directionalLight.shadow.mapSize.width = 1024
directionalLight.shadow.mapSize.height = 1024
directionalLight.shadow.camera.near = 0.1
directionalLight.shadow.camera.far = 500
directionalLight.shadow.camera.left = -100
directionalLight.shadow.camera.right = 100
directionalLight.shadow.camera.top = 100
directionalLight.shadow.camera.bottom = -100

If the shadow camera’s frustum is too large or too small, it can cause precision issues in the shadow map. Therefore, it is important to ensure the frustum tightly fits the area where shadows are needed.
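One way to keep the frustum tight is to derive it from the model's bounding radius. The helper below is a sketch of my own (the function name and margin are assumptions, not a Three.js API); during development, adding a THREE.CameraHelper for the shadow camera to the scene helps verify the fit visually:

```javascript
// Size the directional light's orthographic shadow camera so that its
// frustum just covers a model of the given bounding radius.
function fitShadowFrustum(light, radius, margin = 1.25) {
  const half = radius * margin // small margin so shadows aren't clipped at the edge
  const cam = light.shadow.camera
  cam.left = -half
  cam.right = half
  cam.top = half
  cam.bottom = -half
  cam.updateProjectionMatrix()
}

// Example: derive the radius from the loaded model's bounding sphere, e.g.
// const sphere = new THREE.Box3().setFromObject(gltf.scene).getBoundingSphere(new THREE.Sphere())
// fitShadowFrustum(directionalLight, sphere.radius)
```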

Fixing shadow acne issue
Side-by-side rooftop views of the same building, left with visible tiles, right with smooth surface.

Shadow rendering before and after bias adjustment.

Shadow acne refers to visual artifacts that appear as a "ladder" or striped texture on surfaces. It occurs because the shadow map’s depth values are compared to the scene’s depth values, and small precision errors can cause incorrect shadowing. The shadow.bias property offsets the shadow map slightly to avoid these errors.

directionalLight.shadow.bias = -0.003

Tips:

  • Start with a small value (e.g., -0.001) and adjust as needed.
  • If the bias is too high, shadows may appear detached from objects.

6. Adding shadow cast on the map surface

Side-by-side 3D maps show a courtyard building before and after adding detailed paths and landscaping.

Default grey plane compared with ShadowMaterial applied.

In Blender, I added a plane to the 3D model to serve as a surface for receiving shadows. After importing the model into Three.js, I located the plane by its name in the scene graph and applied a ShadowMaterial to it.

gltf.scene.traverse((child) => {
 //...
 if (child.isMesh && child.name === 'Plane') {
    child.material = new THREE.ShadowMaterial()
    child.material.opacity = 0.1
    child.receiveShadow = true
  }
})

This ensures the shadow of the model is visible on the map surface.

7. Rendering the scene

Finally, I updated the Three.js camera and rendered the scene within Mapbox’s render function of the custom layer:

//...
render: function (gl, matrix) {
  const rotationX = new THREE.Matrix4().makeRotationAxis(
    new THREE.Vector3(1, 0, 0),
    modelTransform.rotateX
  )
  const rotationY = new THREE.Matrix4().makeRotationAxis(
    new THREE.Vector3(0, 1, 0),
    modelTransform.rotateY
  )
  const rotationZ = new THREE.Matrix4().makeRotationAxis(
    new THREE.Vector3(0, 0, 1),
    modelTransform.rotateZ
  )

  const m = new THREE.Matrix4().fromArray(matrix)
  const l = new THREE.Matrix4()
    .makeTranslation(
      modelTransform.translateX,
      modelTransform.translateY,
      modelTransform.translateZ
    )
    .scale(
      new THREE.Vector3(
        modelTransform.scale,
        -modelTransform.scale,
        modelTransform.scale
      )
    )
    .multiply(rotationX)
    .multiply(rotationY)
    .multiply(rotationZ)

  this.camera.projectionMatrix = m.multiply(l)
  this.renderer.resetState()
  this.renderer.render(this.scene, this.camera)
  this.map.triggerRepaint()
}

8. Final touches and enhancements

Adjusting material colors

At this point, I adjusted the material colors of the roof, walls, and windows in Blender to better match the surrounding Mapbox buildings. This created a more cohesive appearance, helping the custom 3D model blend seamlessly into the scene. Using Mapbox’s "Day" light preset along with my lighting setup, I finalized the colors.

Fixing model distortion on resize

When shadows are enabled, resizing the window can cause the model to appear distorted because the shadow calculations aren’t automatically updated. To resolve this, I dispose of the renderer and reinitialize it whenever the window is resized:

onAdd: function (map, gl) {
  // ...
  const handleResize = () => {
    this.renderer.dispose()
    this.renderer = new THREE.WebGLRenderer({
      canvas: map.getCanvas(),
      context: gl,
      antialias: true
    })
    this.renderer.shadowMap.enabled = true
    this.renderer.shadowMap.type = THREE.PCFSoftShadowMap
    this.renderer.autoClear = false
  }

  map.on('resize', handleResize)
}

Simulating Mapbox’s glowing entrance effect

To replicate Mapbox’s signature glowing entrance effect, I started by creating a 3D mesh in the shape of an upside-down square arch. 

Two similar courtyard scenes compare dark shaded doorways on the left with bright sunlit doorways on the right.

Comparison of the entrance mesh before and after applying the "glowing" shader.

Next, I applied a custom shader to the mesh to simulate a soft, glowing effect. The shader creates a gradient that goes from white at the back to full transparency at the edges, creating the illusion of soft light emanating from the entrance.

gltf.scene.traverse((child) => {
  //...
  if (child.isMesh && child.name.startsWith('Door_Glow')) {
    const gradientMaterial = new THREE.ShaderMaterial({
      vertexShader: `
        varying vec2 vUv;
        varying float vDepth;
        varying float vHeight;

        void main() {
          vUv = uv;
          vDepth = position.z;
          vHeight = position.y;
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
      `,
      fragmentShader: `
        varying vec2 vUv;
        varying float vDepth;
        varying float vHeight;

        void main() {
          vec3 color = vec3(1.0, 1.0, 1.0);
          float backToFront = smoothstep(-1.0, 1.0, vDepth);
          float topToBottom = smoothstep(-1.0, 1.0, vHeight);
          float alpha = backToFront - topToBottom;
          gl_FragColor = vec4(color, alpha);
        }
      `,
      transparent: true
    })

    child.material = gradientMaterial
  }
})

For those interested in diving deeper into shaders, I recommend again checking out the Three.js journey by Bruno Simon (https://threejs-journey.com/) and The Book of Shaders (https://thebookofshaders.com/).

Final result

Conclusion

Integrating custom 3D models into Mapbox GL JS maps truly opens up a world of exciting possibilities for visualizing unique locations and creating highly engaging user experiences. As I’ve explored in this detailed breakdown, overcoming the technical challenges involved in this process allows for the creation of truly stunning and informative 3D map applications.

If you’re curious about how advanced 3D map integration could elevate your next project – or simply have questions you’d like to explore – feel free to reach out and see what we can build together.

If there was Astro for WordPress https://bleech.de/blog/if-there-was-astro-for-wordpress/ Tue, 31 Oct 2023 08:04:50 +0000 https://bleech.de/blog/if-there-was-astro-for-wordpress/ Most developers will try to build their own tools and project boilerplates at some time in their career. And so did I, aiming for my ideal dev setup with bleeding edge technologies. This is my story of the struggles I faced along the way and how Flynt became my default choice for developing WordPress websites.

The post If there was Astro for WordPress appeared first on Bleech.

Introduction

The excitement typically revolves around ever-evolving revolutionary technologies, the latest shiny frameworks, or the emergence of new programming languages. While these advancements push web development forward, I believe it’s important, from time to time, to recognise the substantial progress we’ve made and appreciate the well-established tools we rely on every day. One of those tools for me is WordPress.

My web development journey

WordPress was one of my first experiences with web development, back when I used to be a designer, and I have to say I had a rollercoaster of feelings when I started using it. As I was inexperienced with code, I had to lean on the premium WordPress theme ecosystem to create my websites. Some themes were quite powerful and came with a lot of options, although they usually also shipped a ton of extra features that I would rarely use. That led to performance issues and, from time to time, security vulnerabilities as well. At the time I was focused on learning more about CSS and HTML, as I didn’t want to overwhelm myself with too much at once.

After a while, I ended up working at a startup as a designer, where we were developing a pretty complex web app that included, among other things, a real-time global chat, a ranking system used for matchmaking users based on their performance, listing and trading of digital and physical products between users and the platform, leaderboards, and the list goes on. When we decided to do a rewrite of the app, I got involved with the frontend and tried to learn more about web development. That was my first introduction to more advanced tooling like bundlers, package managers and the like.

Creating my own boilerplate for WordPress

With that knowledge in hand, I decided to create my own WordPress theme boilerplate that would include all of those technologies. My goal was to have a starting point for each WordPress project that I would develop in the future. While this might sound easy on paper, getting everything right was a long process of using what I created and realising that I had to structure something differently, fix a bug, and so on. I ended up focusing more on maintaining and improving the boilerplate than actually working on the projects. This made me look for other solutions, and at that time static site generators were the new shiny thing.

The rise of static site generators

I started following the web development world when a lot of the rendering was moving from the server to the client and frameworks like React, Vue and Angular became popular. While they are great solutions for creating dynamic web applications, when it comes to simple SEO-friendly websites nothing can beat static. Thankfully, static site generators were developed to translate the JavaScript code to static HTML. This was, for a while, the way I created websites for my projects, using tools like Gatsby in the beginning and moving to Astro eventually.

Astro website homepage promoting a web framework for building fast, scalable sites and applications.

Lately though, it seems like the path was not a straight line but more of a circle. It started on the server, moved to the client, and now we are moving back to the server. I am personally very excited about these cycles because I think they test the limits of what our current technologies can do, while promoting innovations that make the web a better, more inclusive space for everyone.

And then I was introduced to Flynt

I learned about Flynt just before I joined Bleech. At first it looked strange to me, as I hadn’t heard of Timber or Twig before, but when I started using it, it really made sense. Flynt is basically a WordPress theme starter, although if you get to know it, it is way more powerful than that. The current version of Flynt was 1.4 at the time, and it had already solved CSS and JavaScript scoping, responsive images and, most importantly, developer experience. Then version 2.0 came out with a lot of improvements and new features inspired by the modern development ecosystem.

Website homepage promoting Flynt, a fast WordPress starter theme with download button and performance features.

How does it work?

The basic idea of Flynt is that you create reusable components to construct your layouts and content. Some components can be used to create content, while others serve as common elements across the website, like the header and footer for example. The anatomy of a component looks like this:

Finder window showing the “Flynt Component” folder with several code and image files.

  • index.twig The HTML template
  • _style.scss Scoped styles for the frontend
  • functions.php Can be used to run server code, register custom fields, add translatable content or modify the data of the component
  • script.js Uses web components under the hood to scope styles and lazy load the JavaScript code
  • README.md Used to provide documentation
  • screenshot.png Used as a visual reference both in the codebase and in the WordPress admin
  • _admin.scss / admin.js Used to add styles and JavaScript to the WordPress admin

As you can see, the structure looks similar to what you would find in other popular frameworks. The component will be available to use inside your codebase as soon as you create some or all of these files. There is an extra step, though, if you want to use it as part of the content for posts or pages, for example: registering it in the appropriate file found under inc/fieldGroups.

My favourite features

Some of the coolest features for me are the concept of islands architecture, a reference system to access HTML elements, and a solid way to communicate data between server and client using simple JSON scripts. Let’s take a look at how those features work.

1. Islands architecture

Different loading strategies can be defined for each component independently when using the custom element flynt-component:

  • load:on="load" (default) Initialises immediately when the page loads. Usage example: Elements that need to be interactive as soon as possible.
  • load:on="idle" Initialises after full page load, when the browser enters an idle state. Usage example: Elements that don’t need to be interactive immediately.
  • load:on="visible" Initialises after the element becomes visible in the viewport. Usage example: Elements that go “below the fold” or that you want to load only when the user sees them.
  • load:on:media="(min-width: 1024px)" Initialises when the specified media query matches. Usage example: Elements which may only be visible on certain screen sizes.

This allows you to control how JavaScript is loaded and, when used correctly, can lead to a great balance between user experience and performance.

2. Reference system

The reference system provides a way to access DOM elements with the buildRefs function. It requires that you provide a data-ref attribute for single selection or data-refs for multiple, with a name as the value, data-ref="alertToggle" for example. What this function does under the hood is:

  • When you call the buildRefs function, a new proxy is created that stores an object with the ref name and the corresponding element.
  • Whenever you access a ref, the function checks if the property already exists on the target object.
  • If not, it creates a selector string either from customRefs or constructs one using querySelector or querySelectorAll and the property name.
  • It then attempts to find the element(s) in the DOM.
  • If no element(s) are found and the environment is not production, it logs a warning to the console.
  • Finally, it returns the found element(s) or null if none were found.
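These steps can be sketched in a few lines of plain JavaScript. This is a simplified illustration of the proxy idea, not Flynt’s actual implementation; `root` stands for any object that exposes querySelector, like a DOM element:

```javascript
// Simplified sketch of a proxy-based ref lookup (illustrative, not Flynt's real code).
// `root` is assumed to expose querySelector, like a DOM element does.
function buildRefsSketch (root, customRefs = {}) {
  return new Proxy({}, {
    get (cache, name) {
      if (name in cache) return cache[name]               // reuse a previous lookup
      const selector = customRefs[name] || `[data-ref="${name}"]`
      const found = root.querySelector(selector) || null  // attempt to find the element
      if (found === null) console.warn(`Ref "${name}" not found`)
      cache[name] = found                                 // memoise the result
      return found
    }
  })
}
```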

Let’s take a look at how this works:

For the HTML, I will just create a simple button and give it the data-ref="alertToggle" attribute.

<flynt-component name="BlockAlert">
  <button data-ref="alertToggle">Alert me!</button>
</flynt-component>

Then inside the component script I will use the function buildRefs to access it:

import { buildRefs } from '@/assets/scripts/helpers.js'

export default function (el) {
  const ref = buildRefs(el)
  ref.alertToggle.addEventListener("click", () => alert("I am a ref called alertToggle"))
}

3. Server – Client Sync

There is also a helper function called getJSON. This function will, by default, retrieve the content of a script with type “application/json” from the component’s markup. This can be useful if you want to pass data from the server to the client in JSON format. I find this very useful when I want to persist state from the server to the client on a page reload, for example.
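Under the hood, such a helper needs very little code. Here is a minimal, illustrative sketch of the idea (an assumption for explanation, not the actual Flynt source):

```javascript
// Minimal sketch of a getJSON-style helper (illustrative, not Flynt's real code).
// `el` is assumed to expose querySelector, like a DOM element does.
function getJSONSketch (el) {
  const script = el.querySelector('script[type="application/json"]')
  return script ? JSON.parse(script.textContent) : null  // parse the embedded JSON, if any
}
```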

Here is an example of how this would look. First, in our functions.php, let’s create some data:

add_filter('Flynt/addComponentData?name=BlockServerSynch', function ($data) {
    $data['jsonData'] = [
        "foo" => "bar"
    ];

    return $data;
});

Then, in the Twig file, we add the script with our data.

<flynt-component name="BlockServerSynch">
  <script type="application/json">
    {{ jsonData|json_encode }}
  </script>
</flynt-component>

And then we can grab the data from the server inside the script file:

import { getJSON } from '@/assets/scripts/helpers.js'

export default function (el) {
  const data = getJSON(el)
  console.log(data)
}

I have used this in the past to pass filters, user inputs, translatable strings or user options, among other things, and I really like how simple and to the point it always feels.

Conclusion

As someone who really likes to balance between bleeding-edge and production-ready solutions, I find that Flynt has become my go-to starter theme for WordPress development. It enables me to write maintainable code faster, while it takes care of performance, scoping and reusability in a very robust way. As an official member of the team that develops and maintains Flynt now, I see all the thought and effort that is put into it, and I can’t wait to see what new features are coming.

But don’t take my word for it, go and try it yourself! If you need any help you can always start a discussion on GitHub or reach out on 𝕏, we are always happy to help.

The post If there was Astro for WordPress appeared first on Bleech.

]]>
Custom chatbots made easy: How to build your own ChatGPT agents https://bleech.de/blog/custom-chatbots-made-easy-how-to-build-your-own-chatgpt-agents/ Fri, 13 Oct 2023 22:17:47 +0000 https://bleech.de/blog/custom-chatbots-made-easy-how-to-build-your-own-chatgpt-agents/ Ever wanted to create your own chatbot but felt overwhelmed by the complexity? That's why I built SiegfriedAI – a sleek Node.js script that makes it easy to create and load custom GPT-4 chat agents. Learn how to use it and make it your own!

The post Custom chatbots made easy: How to build your own ChatGPT agents appeared first on Bleech.

]]>
The problem:

Prompts are temporary

ChatGPT can be powerful in helping with very specific tasks like the following:

  • Customer Support: Formulating well written response emails.
  • Development: Helping to answer technical questions.
  • App Support: Helping to find functionalities, like shortcuts.
  • Productivity: Summarising meeting notes and making them actionable.
  • Content Marketing: Creating variations of blog post or video titles.

But to get quality responses, you need to give context to ChatGPT by providing a concise prompt. That’s why I keep a list of my day-to-day prompts in my notes app. Whenever I need a prompt, I open a new conversation in ChatGPT and paste the template. But that’s quite a repetitive process.

There should be a sustainable way to organize, optimize and launch prompt templates as context-aware ChatGPT agents.

Me, Oct. 2023

So I built a tool for that

I developed a lightweight CLI interface for text-based prompt templates. It allows you to create, load and interact with custom ChatGPT agents by providing custom GPT prompts in plain text files. Meet SiegfriedAI. 🧔‍♂️👋

To install the CLI tool and learn a few tips, head over to the GitHub repository:

SiegfriedAI - Usage Example

What you’ll get

SiegfriedAI is free and open source. At just 2 kB in size, it’s a compact script that you can easily adapt and extend to your needs. True to its minimalist nature, the feature list is small and focused:

Task specific chat agents

Tackle specific, recurring tasks with custom chat agents. Whether it’s customer support, technical assistance or content creation, keep your favorite prompts organized and readily available at your fingertips.

Computer folder window showing a “templates” directory with six plain text files listed by name.

Fast template creation

Simply drop your text files with GPT prompts into the templates folder to create your custom chat agents. The file name will automatically be displayed in the selection prompt. Create as many specialized agents as you like.

Text editor window showing a file named “Final Cut Pro.txt” with instructions for a support agent.

A powerful tool belt

Easily build sophisticated AI solutions on top. With just 74 lines, the script is quick to understand and easy to extend. And with langchain, you are prepped with a powerful toolset to load files, crawl websites, generate images and more.

LangChain website homepage promoting tools to build LLM applications, with install commands for Python and JavaScript.


Understanding the tech

While OpenAI provides a wealth of information on their website, I wasn’t sure what I needed to know to get up and running. Here’s what I learned building the tool, hopefully giving you a head start:

The API toolkit

SiegfriedAI is built with langchain, a framework for developing LLM applications. Beyond providing a simple API and great documentation, langchain includes powerful integrations like document loaders, web loaders, vector stores, action agents, output parsers and more to develop sophisticated AI solutions.

Generating chat completions

Sending a message to the OpenAI API and generating a response is simple:

  1. Create an OpenAI API key here.
  2. Set your API key as an environment variable:
    export OPENAI_API_KEY=your_openai_api_key_here
  3. Install the langchain library:
    npm install -S langchain

Create and run your first chat completion:

import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI();
const message = "Name something green.";
const aiResponse = await model.predict(message);

console.log(aiResponse); /* Grass */

Providing chat history for context

To get a proper context-aware chat experience, the AI needs to keep track of the chat history. However, OpenAI doesn’t maintain a history of the chat. Instead, you need to send back the full chat history with every new request. While this seems low-level, it makes it simple to work with.

let messages = [
  ["human",  "Name something green."],
  ["ai",  "Grass."],
  ["human",  "What did you say?"]
];

const aiResponse = await model.predictMessages(messages);

console.log(aiResponse); /* I said "Grass." */

Providing templates as instructions

I’m using Inquirer.js for the template selection and to allow multiline input via editor. It’s easily embeddable and provides a prettier command line interface.

import { select, input } from '@inquirer/prompts';

const template = await select({
  message: 'Select your template:',
  choices: [{
      name: 'ChatGPT',
      value: ''
    }, {
      name: 'Final Cut Pro',
      value: 'Act as a support agent who is an expert in Final Cut Pro for Mac. Only respond with short, precise, helpful messages to my questions.',
    }],
});

The template is passed to OpenAI as a system message, which is then followed by user input. The model will then respond with the specified behavior.

let messages = [
  ["system",  template],
  ["human", await input({ message: 'You:' })]
];

For the full script, please refer to the GitHub repository: steffenbew/siegfried-ai

Conclusion

Developing SiegfriedAI has amazed me – working with AI dev tools is a creator’s dream! Who would’ve guessed chatbot creation could be so simple and fun?

The great developer experience and immense potential that AI developer tools bring to the table are a signal that there’s a big wave of AI advancements coming our way. Witnessing this firsthand, I am buzzing with anticipation for the progress we’ll see in this space.

For anyone looking to get their hands on AI development, I hope to have sparked some curiosity and encouragement! I can’t wait for you to experience it yourself.

PS: I’m still looking for great prompts! Which ones make your life considerably easier? Any that elevate your craft to new heights? Please share your favorite ones by dropping me an email – I’d be excited to hear from you!

PPS: Drop by our YouTube channel for more insights: Siegfried, deploy!

The post Custom chatbots made easy: How to build your own ChatGPT agents appeared first on Bleech.

]]>
Enhancing WordPress Archives with HTMX and View Transitions https://bleech.de/blog/enhancing-wordpress-archives-with-htmx-and-view-transitions/ Fri, 06 Oct 2023 09:08:00 +0000 https://bleech.de/blog/enhancing-wordpress-archives-with-htmx-and-view-transitions/ Making WordPress post archives interactive has never been easier. Learn how to implement HTMX for real-time AJAX filtering and seamless page transitions, all while keeping your site lightning-fast and performant.

The post Enhancing WordPress Archives with HTMX and View Transitions appeared first on Bleech.

]]>
Introduction

In a previous article, I looked into the topic of programmatically filtering post archives in WordPress. Today, I will be using HTMX and the View Transitions API to elevate the user experience. These technologies enable real-time filter updates without requiring a page reload, and offer slick animations with minimal code adjustments.

The benefits of HTMX

HTMX opens the door to a range of modern web features like AJAX, CSS Transitions, WebSockets, and Server-Sent Events via HTML attributes.

Key Advantages:

  • Simplicity & Reduced JavaScript: HTMX lets you implement dynamic web behaviours directly from HTML, thereby reducing dependency on JavaScript or elaborate frontend frameworks.
  • Progressive Enhancement: It encourages a design that works without JavaScript, ensuring base functionality remains accessible, and then enhances the experience when JavaScript is available.
  • Lightweight & Performant: Lighter than most frontend frameworks, HTMX leads to quicker load times and fewer client-side errors.
  • Modern Web Features with Seamless Integration: It offers native support for modern web features and can be seamlessly incorporated into existing tech stacks.

For an in-depth understanding, visit the official HTMX documentation.

The View Transitions API

The View Transitions API provides a mechanism for easily creating animated transitions between different DOM states while also updating the DOM contents in a single step.

Designed primarily to introduce app-like transitions into Multi-Page Applications (MPAs), the View Transitions API is a noteworthy advancement. Unlike specialised libraries in frameworks like React, this API offers the same fluid transitions without the overhead of additional packages. You can learn more from the MDN documentation.
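In its simplest form, the API boils down to wrapping a DOM update in document.startViewTransition. The helper below is a hedged sketch; `doc` is passed in explicitly (instead of using the global document) only to keep the example testable outside a browser:

```javascript
// Sketch of calling the View Transitions API with a graceful fallback.
// `updateDOM` is any function that swaps the page to its new state.
function withViewTransition (doc, updateDOM) {
  if (typeof doc.startViewTransition === 'function') {
    return doc.startViewTransition(updateDOM)  // the browser animates old → new state
  }
  updateDOM()                                  // unsupported browser: update without animation
  return null
}
```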

Starting code

For this example, I’ll use categories as filters and allow sorting by date in ascending or descending order. I’ll also include HTMX directly in the markup as a script, although in real-world applications it’s advisable to include it in the head section or load it via a package manager like npm. This tutorial uses Twig, but you can easily adapt it to vanilla WordPress.

Below is the markup before we introduce HTMX:

<main class="grid-post-archives">
  <div class="container">
    <h1>Post Archives</h1>
    <div data-ref="content">
      <!-- Form -->
      <form method="get">
        <!-- Categories -->
        <fieldset>
          <legend>Category</legend>
          {% for term in terms %}
            <input
              id="{{ term.id }}"
              name="cat[]"
              value="{{ term.id }}"
              type="checkbox"
              {{ term.selected ? "checked" }}
            />
            <label for="{{ term.id }}">{{ term.name }}</label>
          {% endfor %}
        </fieldset>
        <!-- Order -->
        <fieldset>
          <legend>Order</legend>
          <input
            id="order-asc"
            name="order"
            value="ASC"
            type="radio"
          />
          <label for="order-asc">Ascending</label>
          <input
            id="order-desc"
            name="order"
            value="DESC"
            type="radio"
          />
          <label for="order-desc">Descending</label>
        </fieldset>
        <input type="submit">
      </form>
      <!-- Posts Loop -->
      {% if posts|length > 0 %}
        <ul data-ref="posts" class="posts resetList">
          {% for post in posts %}
            <li class="post">
              <img src="{{ post.thumbnail.src }}" alt="">
              <h3>{{ post.title }}</h3>
              <p>{{ post.excerpt }}</p>
            </li>
          {% endfor %}
        </ul>
      {% else %}
        <p class="posts-empty">No posts found</p>
      {% endif %}
    </div>
  </div>
</main>

This is what a regular archive page would look like. If you followed my previous article, you know that when submitting the form, the selected options will be added to the URL as params and the page will reload to reflect our selected filters.

Adding HTMX attributes

Adding HTMX attributes turns this into a dynamic, AJAX-powered form. Let’s look at the final markup and decipher what each attribute accomplishes:

<script src="https://unpkg.com/htmx.org"></script>
<main 
	class="grid-post-archives" 
	hx-boost="true"
>
  <div class="container">
    <h1>Post Archives</h1>
    <div data-ref="content">
      <form
        method="get"
        hx-push-url="true"
        hx-get="{{ post.link }}"
        hx-target="closest [data-ref='content']"
        hx-select="[data-ref='content']"
        hx-swap="outerHTML show:body:top"
        hx-trigger="change"
      >
	    <!-- Categories -->
        <!-- Order -->
        <noscript>
	      <input type="submit">
        </noscript>
      </form>
	  <!-- Posts Loop -->    
    </div>
  </div>
</main>

The attributes

  • hx-boost="true": Transforms anchor tags and forms into AJAX requests.
  • hx-push-url="true": Updates the browser URL to preserve the state across navigation and page refreshes.
  • hx-get="{{ post.link }}": Specifies the content source URL (in this case, the link of our post archive page).
  • hx-target="closest [data-ref='content']": Identifies the element to update. It can also be a class or an id, among other things. I went with a data attribute to make it clearer.
  • hx-select="[data-ref='content']": Determines what content will replace the target.
  • hx-swap="outerHTML show:body:top": Defines the swapping action and scrolls to the top of the page.
  • hx-trigger="change": Provides the event that initiates the swap. I chose the change event in this case to provide instant feedback on the user’s selection.

For users with JavaScript disabled, I wrapped the submit button with a noscript tag that will allow them to still submit the form manually.

And that’s it. We now have fully functional, AJAX-driven filtering for your post archives.

Leveraging View Transitions with HTMX

HTMX offers out-of-the-box support for View Transitions through a single configuration variable. This means no extra code is necessary to leverage this feature.

<script src="https://unpkg.com/htmx.org"></script>
<script>
  htmx.config.globalViewTransitions = true
</script>

This adds a default fade animation to content updates. By adding the view-transition-name property, the browser will animate transitions between shared elements.

<li
  class="post"
  style="view-transition-name: post-{{ post.id }}"
>
  <img src="{{ post.thumbnail.src }}" alt="">
  <h3>{{ post.title }}</h3>
  <p>{{ post.excerpt }}</p>
</li>

You can even create custom CSS animations using the View Transitions API and craft a unique user experience for the visitors of your website.

View transitions as a progressive enhancement

While the View Transitions API is experimental and not universally supported, it serves as a progressive enhancement, ensuring your website remains functional.

By following this guide, you’ll have a WordPress post archive that not only filters posts via AJAX but also offers a polished, app-like user experience.

Conclusion

In this exploration of enhancing post archive filters in WordPress, HTMX and the View Transitions API proved to be game-changers for user experience. By leveraging HTMX, the complexity of the codebase is dramatically reduced and the reliance on frontend frameworks or libraries becomes obsolete. This makes the web page lighter, faster, and less prone to client-side bugs, without compromising on modern functionalities like AJAX requests and seamless transitions.

The View Transitions API further amplifies the experience, enabling fluid transitions between different DOM states. It allows websites to offer an app-like feel, taking user engagement to a new level. Moreover, the progressive enhancement approach ensures that these features act as bonuses for supported browsers, not prerequisites for accessing content.

If you’re a developer interested in modern web technologies and keen to improve your WordPress projects, HTMX and the View Transitions API present a compelling case for further investigation. With minimal code changes, these tools can help you build an enhanced, interactive, and modern UI that stands out.

While the View Transitions API is still experimental, it shows significant promise. The power of these tools is in our hands to explore, integrate, and enhance. Happy coding!

Download the Flynt Component from Github

The post Enhancing WordPress Archives with HTMX and View Transitions appeared first on Bleech.

]]>
The ups and downs of text-wrap: balance and a polyfill https://bleech.de/blog/the-ups-and-downs-of-text-wrap-balance-and-a-polyfill/ Tue, 26 Sep 2023 07:00:00 +0000 https://bleech.de/?p=12349 Balancing text lines in a responsive layout used to be hard. But no longer! With text-wrap: balance, automatic text composition is coming to the browser. Learn its limitations, browser support, and get a first look at its new sibling, text-wrap: pretty.

The post The ups and downs of text-wrap: balance and a polyfill appeared first on Bleech.

]]>
Copying and pasting text from a website design to a WordPress backend rarely yields a flawless result. Designers often balance multi-line headlines, a level of detail that’s hard to replicate on a responsive website without applying clever hacks – until now.

Say hello to text-wrap: balance! It takes you from hand-authoring to full automation, ensuring your text looks just as good online as it does in the design.

What is balanced and unbalanced text?

An unbalanced headline fills the entire container width for each line before breaking onto the next. This often results in the last line of text being shorter than the previous lines, unless you get lucky with perfect alignment.

.unbalanced {
  max-inline-size: 700px;
}
unbalanced text

To balance all lines of text, you’d usually have to manually insert line breaks or adjust the container’s width. However, these methods only work for a predetermined layout width and have their limits with responsive layouts.

That’s where text-wrap: balance comes into play – it automatically aligns the length of text lines across all screen resolutions.

.balanced {
  max-inline-size: 700px;
  text-wrap: balance;
}
balanced text

Technical limitations

Luckily, text-wrap: balance does not require a dictionary for each language – a limitation that can render a feature useless for non-English content. I’m looking at you, hyphens: auto!

Instead, the browser calculates the smallest width for each line without creating additional lines. However, there are at least the following considerations to keep in mind when using this feature:

  1. Performance: The computational load increases with each added line. That’s why Chrome caps support for this feature at six lines per element.
  2. Interference of white-space: It won’t work if a specific white-space value is set. If the element inherits such a value, you should unset it.
  3. Honoring manual line breaks: It will respect inserted <br>-tags, so your intentional line breaks won’t be disrupted.

I consider these prerequisites a plus. Initially, I was concerned that text-wrap: balance might be too “magical,” making it difficult to understand and debug. But especially the fact that it respects manual line breaks eases those worries.

Browser support

As of July 2024, all major browsers support text-wrap: balance in their latest version. The partial support flag refers to the possibility of using the longer syntax text-wrap-style in conjunction with text-wrap-mode.

caniuse text-wrap: balance

Browser support (July 2024)

Use cases: When text-wrap: balance shines

Given the expanding browser support, text-wrap: balance is an ideal candidate for progressive enhancement. I think it’s great when a headline plays a key role in the layout, but the content manager cannot control its line breaks.

This might be the case for an article title that is displayed in a hero section on top of a larger background, especially when the headline is centered.

Blog post hero section, balanced text

After: Balanced text in a blog post hero section, thanks to text-wrap: balance.

Once browser support expands even further – or if you opt for a polyfill – the applications could be extended to any layout-centric headline that is aiming for a block-style aesthetic.

Caution: why it’s not a silver bullet

Before you throw text-wrap: balance on every headline across your website, hear me out. Initially, I thought, “Why not? It can only improve layouts.” But that’s not necessarily the case, and here’s why, in two key points:

Caveat #1:

Losing sight of unsupported browsers

When editing content in a supported browser, you won’t notice how bad a line of text may appear to users on unsupported browsers. And when text composition makes a difference, I’d prefer the opportunity to catch these issues early and make necessary adjustments. This could mean adding an extra line break, tweaking the CSS, or even rewording the text.

Blog post hero section, unbalanced text

Before: The text layout looks significantly different without browser support.

Caveat #2:

Negative space alters the perceived layout

Humans instinctively read negative space as patterns. Our perception automatically frames multiple lines of text in a box, depending on where those lines break. Therefore, changing the length of lines affects the perceived size of a section. This effect becomes increasingly evident when the original text is highly unbalanced, such as when the last line contains only a few characters.

Blog hero with unbalanced text

Before: Unbalanced text fills the width of the parent container.

Blog hero with text-wrap: balance

After: Balanced text causes notable white space on the right hand side.

A lightweight polyfill

The go-to JavaScript polyfill for many is Adobe’s balance-text. However, we found it to be outdated and a bit bloated for our needs. So, Dominik took matters into his own hands and crafted a custom polyfill. He based it on react-wrap-balancer, opting for a lighter, more streamlined algorithm that leverages modern tech like the ResizeObserver.

export default function () {
  if (!window.CSS.supports('text-wrap', 'balance')) {
    const elements = document.querySelectorAll('.textWrapBalance, h1')
    const resizeObserver = new ResizeObserver((entries) => {
      entries.forEach((entry) => {
        relayout(entry.target)
      })
    })
    elements.forEach((element) => {
      relayout(element)
      resizeObserver.observe(element)
    })
    window.addEventListener('resize', () => {
      elements.forEach((element) => {
        relayout(element)
      })
    })
  }
}

function relayout (wrapper, ratio = 1) {
  const container = wrapper.parentElement

  const update = (width) => (wrapper.style.maxWidth = width + 'px')

  wrapper.style.display = 'inline-block'
  wrapper.style.verticalAlign = 'top'
  // Reset wrapper width
  wrapper.style.maxWidth = ''

  // Get the initial container size
  const width = container.clientWidth
  const height = container.clientHeight

  // Synchronously do binary search and calculate the layout
  let lower = width / 2 - 0.25
  let upper = width + 0.5
  let middle

  if (width) {
    // Ensure we don't search widths lower than when the text overflows
    update(lower)
    lower = Math.max(wrapper.scrollWidth, lower)

    while (lower + 1 < upper) {
      middle = Math.round((lower + upper) / 2)
      update(middle)
      if (container.clientHeight === height) {
        upper = middle
      } else {
        lower = middle
      }
    }

    update(upper * ratio + width * (1 - ratio))
  }
}
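The core of relayout() is the binary search over candidate max-widths. Stripped of DOM details, the idea can be sketched as a pure function. Note that measure() here is a hypothetical stand-in for what the browser's layout engine does in the real polyfill, and the scrollWidth clamp from the code above is omitted for brevity:

```javascript
// Sketch of the binary search in relayout(): find the narrowest
// max-width that keeps the text at its original rendered height
// (i.e. the same number of lines).
function findBalancedWidth (measure, containerWidth) {
  const targetHeight = measure(containerWidth)
  let lower = containerWidth / 2
  let upper = containerWidth

  while (lower + 1 < upper) {
    const middle = Math.round((lower + upper) / 2)
    if (measure(middle) === targetHeight) {
      upper = middle // same line count: try narrower
    } else {
      lower = middle // text wrapped onto an extra line: too narrow
    }
  }

  return upper
}

// Mock layout: 600px worth of text at a 20px line height.
const measure = (width) => Math.ceil(600 / width) * 20
console.log(findBalancedWidth(measure, 500)) // → 300 (exactly two 300px lines)
```

With a mocked measure function the search lands on 300px, the narrowest width that still fits the text in two lines, which is exactly the balanced result you see in the browser.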

To integrate the polyfill into your Flynt project, create a new script file in the /assets/scripts/ folder. Paste the above code snippet into it and adjust the selector according to your needs.

/assets/scripts/textWrapBalance-polyfill.js

In order to execute the polyfill code, add the following to your main.js file:

import textWrapBalance from './scripts/textWrapBalance-polyfill.js'
textWrapBalance()

Conclusion

Overall, I’m a big fan of text-wrap: balance and will use it across the board once browser support is broad enough.

But the idea of running intense CSS manipulations in JavaScript on practically every page just to polyfill this feature doesn’t sit well with me at the moment.

So, for now, I’ll keep an eye out for specific use-cases where its native capabilities can progressively enhance the user experience, while holding off on broader implementation until robust browser support arrives.

Bonus tip

Meet its sibling text-wrap: pretty

Beyond the balance value, the spec offers a new pretty value, which is supposed to prevent a single word from ending up alone on the last line.

In a nutshell, if balance is for layout headlines, then pretty is your go-to for content headlines. But it could also be a solid choice for layout headlines where you prefer to keep the negative space largely unchanged, as mentioned above.

Support for text-wrap: pretty shipped in Chromium 117 first. As of July 2024, Safari and Firefox do not support it yet.

Blog hero with text-wrap: pretty

text-wrap: pretty: Prevents a single word in the headline’s last line.

Btw: This page uses text-wrap: balance on the h1 and text-wrap: pretty on long-copy content headlines (.post-main h1-h6).
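Applied as CSS, that split might look like the following sketch. The selectors mirror the setup described above and are assumptions you would adapt to your own markup:

```css
/* Layout headlines: balance line lengths */
h1 {
  text-wrap: balance;
}

/* Long-copy content headlines: just avoid orphan words */
.post-main :is(h1, h2, h3, h4, h5, h6) {
  text-wrap: pretty;
}
```

Browsers that don’t support a value simply ignore the declaration, so this is progressive enhancement by default.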

The post The ups and downs of text-wrap: balance and a polyfill appeared first on Bleech.

]]>
Creating a modal using the dialog HTML element https://bleech.de/blog/creating-a-modal-using-the-dialog-html-element/ Tue, 19 Sep 2023 09:00:23 +0000 https://bleech.de/blog/creating-a-modal-using-the-dialog-html-element/ Making modals used to be hard and complicated, not anymore! Things got a whole lot easier once the dialog element came into play. In this article you'll learn how to create reusable modals with the native dialog HTML element.

The post Creating a modal using the dialog HTML element appeared first on Bleech.

]]>
Introduction

Back in the day, making a modal was a real challenge that needed loads of work and know-how. You had to really think about how to make it easy to use and accessible, deal with where the focus goes, handle keyboard events, and all that jazz. But things got a whole lot easier once the dialog element came into play.

The purpose of the <dialog> element is to simplify the process of creating modals or floating interactive elements that appear on user interaction. A dialog interrupts the typical user flow, usually to display important information, collect user input, or confirm actions.

Here is a simple example of how the dialog works:

<dialog id="dialog">
  <h2>Lorem ipsum</h2>
  <p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Facere mollitia iste, praesentium sint expedita culpa, veniam dolorem ipsam alias iusto labore quas, quia non minima repellendus. Excepturi laboriosam harum sunt.</p>
  <button id="close-dialog">Close</button>
</dialog>

<button id="open-dialog">Open Dialog</button>

<script>
const dialog = document.getElementById("dialog");
const dialogOpen = document.getElementById("open-dialog");
const dialogClose = document.getElementById("close-dialog");

dialogOpen.addEventListener("click", () => {
  dialog.show();
});

dialogClose.addEventListener("click", () => {
  dialog.close();
});
</script>

And that’s it. There are two methods you can call to open the dialog.

  • dialog.show() – Displays the dialog without a backdrop, positioned absolutely within the document flow.
  • dialog.showModal() – Displays a modal-type dialog in a fixed position with a backdrop, blocking interactions with the regular flow.
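A side note worth knowing: the platform also offers a JavaScript-free way to close a modal. A form with method="dialog" closes the dialog on submit, and the submitting button’s value becomes dialog.returnValue. The id and copy below are just placeholders for illustration:

```html
<dialog id="confirm-dialog">
  <form method="dialog">
    <p>Discard your changes?</p>
    <!-- Submitting closes the dialog; the clicked button's
         value is exposed as dialog.returnValue -->
    <button value="cancel">Cancel</button>
    <button value="discard">Discard</button>
  </form>
</dialog>
```

You can then listen for the dialog’s close event and inspect returnValue to decide what to do next.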

Closing the modal on background clicks

When dialogs are used to provide information or an action that is not mandatory for the user experience, it is a good idea to make the dialog easily dismissible. The following example shows how to close the modal when clicking anything outside of the dialog element.

const dialog = document.getElementById("dialog");
const dialogOpen = document.getElementById("open-dialog");
const dialogClose = document.getElementById("close-dialog");

dialogOpen.addEventListener("click", openDialog);
dialogClose.addEventListener("click", closeDialog);

function openDialog() {
  dialog.showModal()
  dialog.addEventListener("click", closeDialogOnClickOutside)
}

function closeDialog() {
  dialog.close()
  dialog.removeEventListener("click", closeDialogOnClickOutside)
}

function closeDialogOnClickOutside (event) {
  event.target === dialog && closeDialog()
}

Wrapping the content of the dialog with a div ensures that the event.target is not the dialog when the user clicks on the content. This is what makes the closeDialogOnClickOutside function a one-liner.

<dialog id="dialog" class="modal">
  <div class="modal-content">
    <h2>Lorem ipsum</h2>
    <p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Facere mollitia iste, praesentium sint expedita culpa, veniam dolorem ipsam alias iusto labore quas, quia non minima repellendus. Excepturi laboriosam harum sunt.</p>
    <button id="close-dialog">Close</button>
  </div>
</dialog>

<button id="open-dialog">
  Open Dialog
</button>

After taking care of the markup we will need to remove any padding around the dialog element and style the .modal-content instead.

/* Dialog Reset */
dialog {
  border: 0;
  padding: 0;
  background: transparent;
  max-inline-size: min(65ch, 100% - 3rem);
}

.modal-content {
  padding: 2rem;
  background-color: #fff;
}

Creating a component

I like looking at web development as one big puzzle. We usually like to create patterns that connect four worlds: HTML, CSS, JavaScript and Data. We build (puzzle) pieces of reusable code that we then connect to create interfaces. One of the most common patterns in the frontend world is Components.

Below, I’ve created an example of how I would structure my code to create a reusable modal component using plain HTML, CSS and JavaScript. You can also find a Flynt Component that you can drop into your project right away: View on GitHub.

Why you should use it

  1. Accessibility: The <dialog> element is designed to be accessible, making it a valuable choice for developers committed to creating inclusive web experiences. It follows best practices for keyboard navigation and screen reader compatibility.
  2. Ease of Use: Implementing a modal dialog with <dialog> is straightforward. You only need to define the dialog’s content within the element, set its ID, and handle its display and interaction through JavaScript.
  3. Focus Management: The <dialog> element automatically manages the focus within the dialog, preventing users from tabbing outside the modal, which is essential for accessibility and a smoother user experience.
  4. No dependencies: No need to load any external JavaScript libraries.

Why you might want to avoid the dialog element

The dialog element can be a useful tool in certain situations, but there are cases where it might be more suitable to use a JavaScript library or framework instead. Here are some scenarios when you might want to avoid using the <dialog> element and opt for a library or framework:

  1. Legacy Browser Support: If your project requires support for older browsers (mainly Internet Explorer) that do not support the <dialog> element, using a JavaScript library can be the only viable option to implement modal dialogs.
  2. Complex Interactions: If your modal dialogs involve complex interactions, such as multi-step forms, dynamic content loading, or integration with other components, using a library like Solid, React, or Vue.js can help manage these interactions more effectively.
  3. Animations and Transitions: Creating smooth animations and transitions in modal dialogs may be challenging with just the <dialog> element. JavaScript libraries often provide more advanced animation options and transitions that can enhance the user experience.

Conclusion

The <dialog> HTML element is a valuable addition to the web developer’s toolkit. It simplifies the creation of modal dialogs while maintaining accessibility and user-friendliness. By following best practices and integrating it seamlessly into your web applications, you can enhance the overall user experience and make your interactions more intuitive and engaging.

But what about you? Have you used the dialog element? If so, what did you like or dislike about it? Reach out on 𝕏.

The post Creating a modal using the dialog HTML element appeared first on Bleech.

]]>
Introducing our Figma Design Kit for Flynt https://bleech.de/blog/introducing-our-figma-design-kit-for-flynt/ Thu, 17 Aug 2023 15:28:07 +0000 https://bleech.de/?p=12127 At Bleech, we are thrilled to announce the release of our new Figma Design Kit for Flynt. This comprehensive toolkit is designed to empower designers and developers to create stunning layouts and components, while streamlining the web development process.

The post Introducing our Figma Design Kit for Flynt appeared first on Bleech.

]]>
In this blog article, we will explore the benefits of the Figma Design Kit, discuss the challenges of creating design systems, and reveal how you can get started with Flynt’s Design Kit in Figma.

Unleashing the Potential

With our Figma Design Kit, we provide a selection of components that represent Flynt’s complete Base Style and an example page template. All Figma components serve as an excellent starting point for your project, saving you valuable time and effort that would have been spent starting from scratch.

Additionally, we have optimized the current colors and sizes of text and components to follow accessibility standards, ensuring that your designs are inclusive and accessible to users.

All colors, text styles and components can be easily customized and extended to align with your brand’s CI and requirements, empowering you to build your own custom page templates and create a website that truly reflects your unique vision and brand identity.

Preview of Design Kit in Figma

Sneak Peek of the Figma file showing Flynt’s Base Style elements like buttons and text input controls that one can easily customize to match different CIs.

About Flynt

What is Flynt? 

Flynt is a developer-focused WordPress Starter Theme with a component-based architecture. It seamlessly integrates with Advanced Custom Fields Pro (ACF Pro) and Timber, making custom field management and dynamic template creation efficient.

With modern front-end tools like Vite and hot module loading, Flynt offers optimized builds and real-time updates. Its speed architecture with JS Islands ensures faster performance, enabling you to unlock the true potential of WordPress! Check out Flynt’s website and explore the GitHub repository to learn more.

Challenges of Creating a Design System

Building an effective design system presents its own set of challenges. One key challenge we encountered during the development of the Flynt Design Kit is the technical concept of minimal code. To ensure optimal performance and maintain simplicity, we carefully curated only the most necessary variants within the design.

By striking a balance between simplicity and functionality, we provide designers and developers with the essential tools they need, without burdening them with excessive code.

Now Available in Figma

The Flynt Design Kit is built for Figma, a leading design collaboration platform. This integration allows for effortless collaboration among team members and stakeholders. Designers and developers can work together in real-time, making necessary iterations and adjustments, ensuring everyone stays on the same page throughout the design process. Figma’s intuitive interface combined with Flynt’s Design Kit offers a seamless workflow for unparalleled productivity.

Excited to get your hands on the Flynt Design Kit for Figma? You can download it now for free from the Figma Community! Embark on a journey of effortless web development, where you can focus on unleashing your creativity and bringing your vision to life.

Conclusion

The release of the Figma Design Kit for Flynt marks a significant milestone in our commitment to providing designers and developers with the tools they need to streamline their workflow and create captivating websites. By addressing the challenges of creating design systems and offering a comprehensive set of components, we aim to empower you to build exceptional web experiences. 

Download the Flynt Design Kit for Figma today and embark on a journey of seamless design collaboration, enhanced productivity, and unparalleled creativity.

The post Introducing our Figma Design Kit for Flynt appeared first on Bleech.

]]>
Flynt 2.0 – Redefining Performance and Experience https://bleech.de/blog/flynt-2-0-redefining-performance-and-experience/ Tue, 25 Jul 2023 13:33:54 +0000 https://bleech.de/blog/flynt-2-0-redefining-performance-and-experience/ The WordPress Starter Theme comes with a perfect Google PageSpeed score, WCAG 2.1 accessibility, SEO best practices and an intuitive editing interface with Gutenberg support. It's now easier to develop custom themes faster – all while improving the user experience for your visitors and editors.

The post Flynt 2.0 – Redefining Performance and Experience appeared first on Bleech.

]]>
I’m excited to announce the enhancements of Flynt 2.0, our most advanced WordPress Starter Theme yet. This upgrade is not only a major technological overhaul but also delivers unparalleled performance and editing experience in a streamlined package. Your chance to fall in love with WordPress, all over again.

Setting a New Benchmark in WordPress Themes

Flynt 2.0 provides developers with an array of powerful features and performance improvements, elevating the experience of custom built WordPress websites for both frontend and backend users. Take advantage of Flynt 2.0 to:

  • Accelerate development with next generation tooling powered by Vite.
  • Deliver lightning-fast interactive experiences with JavaScript Islands.
  • Create unique page builders with an intuitive component-based architecture.
  • Make editing long-copy a breeze with the new Gutenberg Block Editor.
  • Ensure top-tier accessibility with built-in WCAG optimizations.
  • Maximize organic reach with technical SEO best practices.

Flynt Starter Theme Performance

Flynt 2.0: Google PageSpeed Performance Results

A Natural Feel: The New Editor Experience

We’ve also enhanced the editor experience, introducing a component search and integrated editor styles. The intuitive interface enables effortless creation of beautifully layouted pages, while the new Gutenberg Block Editor simplifies the writing and editing of lengthy blog posts. By streamlining the component options, editors can focus on crafting stunning websites effortlessly.

Flynt 2.0: The Component Search

Page Layouts: The Component Search

Flynt 2.0: The Gutenberg Block Editor

Blog Posts: The Gutenberg Block Editor

Starting with Flynt 2.0

Set the perfect foundation for any WordPress website project. Launch your site with confidence and focus on developing custom features and content, all without compromising performance. With Flynt 2.0, you’ll have more time to concentrate on what truly matters.

For those already familiar with Flynt, you’ll appreciate the streamlined codebase and backend experience. If you haven’t tried it yet, head over to flyntwp.com and get started today! We can’t wait to see the remarkable websites you’ll create with Flynt and we look forward to your feedback on Flynt 2.0!

Get Started with Flynt

Happy coding! 🎉🚀✨

The post Flynt 2.0 – Redefining Performance and Experience appeared first on Bleech.

]]>