The post Warum es Zeit ist auf PHP 8.3 zu upgraden appeared first on Bleech.
PHP 8.3 was released on November 23, 2023 and brings a number of improvements. Developers can now define types for class constants, access class constants dynamically, validate JSON directly with the new json_validate() function, and more. There are also performance optimizations, bug fixes, and deprecations that pave the way for the future.
A key reason why regular updates matter is PHP's own release life cycle. Each version receives two years of active support, followed by two years of security updates. After that, it's over: no bug fixes, no patches, no safety net.
As of this writing (September 2025), PHP 8.1 is already on the home stretch and only receives security updates until December 2025. PHP 8.2 follows a year later. PHP 8.3 has active support until the end of 2025 and receives security updates until December 2027. PHP 8.4, the latest version, extends that window to the end of 2028.
PHP life cycle as of September 2025 (source)
With the removal of the beta label, WordPress 6.8 now officially supports PHP 8.3. The "beta" label often causes confusion. In fact, it is a precaution:
Support is labelled as "beta support" until at least 10% of all WordPress sites are running that version or later.
make.wordpress.org
Although WordPress itself is already compatible under the beta label, the ecosystem of plugins and themes may not have fully caught up yet. That is why it is recommended to wait for the adoption threshold.
The 10% mark for websites running WordPress 6.8 in combination with PHP 8.3 was reached in July 2025. If you run WordPress 6.8, you can therefore switch to PHP 8.3 and assume that common plugins and themes are compatible as well.
WordPress / PHP compatibility as of September 2025 (source)
There are three main reasons to keep your PHP version up to date:
Some plugins or themes may not yet be fully compatible with PHP 8.3, especially if their code architecture is outdated. Your own custom code can also trigger warnings or errors.
In rare cases, your hosting provider may not offer PHP 8.3 yet. In that case, you should consider switching providers.
The safest approach is to perform the upgrade in a test or staging environment first. Update WordPress, plugins, and themes before changing the PHP version. Then switch to PHP 8.3 and enable debug mode to surface any warnings or errors.
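To make those warnings visible during testing, WordPress's debug mode can be switched on in the configuration file. A minimal sketch for a staging environment, using the standard WordPress debug constants:

```php
// wp-config.php (staging only, never on production)
define( 'WP_DEBUG', true );          // enable debug mode
define( 'WP_DEBUG_LOG', true );      // write notices to wp-content/debug.log
define( 'WP_DEBUG_DISPLAY', false ); // keep notices out of the rendered page
```

Deprecation notices collected in the log point to the code that needs attention before switching production to PHP 8.3.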
If everything runs smoothly, you can roll the change out to production. Make sure you have a backup plan, and monitor the website closely during the first days after the switch so you can catch problems quickly.
WordPress 6.8 is the first version with full compatibility. Earlier versions officially only have beta support (as of September 2025).
It's best to ask again. If there is no timeline for rolling out PHP 8.3 support, you should consider switching web hosts. Up-to-date PHP versions are hosting basics.
Yes, but don't wait too long. The PHP 8.1 life cycle ends in December 2025, and PHP 8.2's in December 2026. After that, these versions no longer receive security updates!
As of September 2025, WordPress 6.8 still has beta support for PHP 8.4. It's better to wait until the 10% adoption threshold is reached and compatibility is official.
WordPress 6.8 has officially supported PHP 8.3 since July 2025, after the 10% adoption threshold was reached. PHP 8.3 receives security updates until December 2027. Upgrading brings better performance, security, and long-term stability in the WordPress ecosystem. Test the switch in a staging environment, fix warnings and errors, and make your website future-proof.
The post Pflicht oder Chance? Warum Barrierefreiheit immer gewinnt. appeared first on Bleech.
The European Accessibility Act (EAA) is an EU directive aimed at removing barriers for people with disabilities and ensuring that a broader range of products and services is accessible. The BFSG and BITV 2.0 transpose the EAA into German law and contribute to a more inclusive digital space in Europe. For websites and mobile applications, this means they can be used easily by everyone, regardless of their abilities.
Legal compliance is an important factor, but the true value of web accessibility goes far beyond avoiding penalties. An accessible website creates fair and equal access for all user groups.
You don't have to implement anything, but you can. Because:
Accessibility means usability.
CONTACT
Write to us for a free consultation!
For your website's accessibility, you should follow the internationally recognised Web Content Accessibility Guidelines (WCAG) 2.1 Level AA. The most important requirements include:
Good web accessibility is not a matter of fancy add-on features but of fundamental design and development decisions. Here are some key elements:
- aria-label: Provides a text label for an element when no visible text is available (e.g. for a "Search" button that only contains an icon: aria-label="Search").
- aria-expanded: Indicates whether a collapsible element (such as a dropdown) is currently expanded or collapsed.
- aria-haspopup: Identifies an element that triggers a popup (such as a menu or a dialog).
- role: Defines the purpose of an element when standard HTML is not enough (e.g. role="navigation" for a navigation landmark).

We can support you in meeting these requirements and optimising your website for accessibility.
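A minimal markup sketch showing these attributes together (the icon class and menu label are made up for illustration):

```html
<!-- Icon-only button: aria-label supplies the accessible name,
     aria-expanded/aria-haspopup describe the menu it controls -->
<button aria-label="Search" aria-haspopup="true" aria-expanded="false">
  <svg class="icon-search" aria-hidden="true"><!-- icon only, no visible text --></svg>
</button>

<nav role="navigation" aria-label="Main menu">
  <!-- menu items -->
</nav>
```

Note that the native <nav> element already carries the navigation role implicitly; the explicit role attribute mainly matters when a generic element such as a <div> has to serve that purpose.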
At Bleech, we specialise in creating websites that are not only visually impressive but also technically robust and accessible.
The post Custom 3D models in Mapbox: a step-by-step integration guide appeared first on Bleech.
Imagine you want to showcase a specific building or landmark that isn’t available in the Mapbox Standard 3D dataset. You’re faced with a generic, simplified representation that doesn’t capture the architectural details or significance of the location. How do you bridge this gap and bring your vision to life?
Pontifical University of St. Thomas Aquinas (Angelicum), Rome, Italy in Mapbox Standard – Before and After.
I followed several important steps, starting with 3D modeling, then integrating the model into the map, and finally fine-tuning the details for the best result.
Before diving into 3D modeling, precise measurements are crucial. Since architectural plans aren’t always available, I used a combination of tools to estimate dimensions accurately.
Measuring the building footprint in Google Maps (top view) to gather accurate dimensions for the 3D model.
Measuring Up: Using a custom ruler to determine building dimensions from photos.
Blender, a powerful and free open-source 3D creation suite, was my tool of choice for modeling. I imported the top-view screenshot with measurements into Blender as a reference image. Using meters as units, I modeled the building, focusing on capturing key architectural features while maintaining a low-poly style consistent with Mapbox’s 3D buildings. My goal wasn’t photorealistic accuracy but a visually appealing and performant representation.
After completing the geometry, I applied initial materials with approximate colors, then refined them during the lighting setup to better match the colors of Mapbox buildings and the surrounding environment.
Once the 3D model was complete, I exported it as a .glb file. This format is a binary version of .gltf that combines the model, textures, and other assets into a compact file, making it easier to manage. It’s widely supported and optimized for web-based 3D graphics, which makes it ideal for Mapbox integration.
When exporting, I always ensure the model’s scale and orientation are correct to avoid issues during map placement. Applying Blender modifiers beforehand is crucial. For the best results, I use meters as the unit scale, check face orientation to prevent flipped normals, and simplify the geometry where possible to improve performance. If the model includes textures, I make sure they are optimized and lightweight to ensure faster loading times.
If you want to enhance your skills in this area, I highly recommend checking out the Three.js Journey course by Bruno Simon. It’s an excellent resource for learning how to work with Three.js, create and optimize 3D models in Blender, and export them properly for use in Three.js and projects like this one.
Now for the fun part—digging into the technical details. Integrating the custom model into Mapbox GL JS involves several key steps:
Default building clipped using a custom GeoJSON polygon, making space for the 3D model.
To seamlessly integrate my custom model, I needed to remove the default Mapbox building. I achieved this using Mapbox’s clip layer. First, I created a GeoJSON polygon that precisely outlines the area of the default building, using a tool like geojson.io to draw and export the shape. Then, I added a clip layer to the map, referencing the GeoJSON polygon as its source.
map.addSource('eraser', {
type: 'geojson',
data: {
type: 'FeatureCollection',
features: [
{
type: 'Feature',
properties: {},
geometry: {
// Coordinates exported from geojson.io
coordinates: [
[
[12.487380531583028, 41.89565029206369],
[12.488045110910008, 41.895709818778926],
[12.488039595728878, 41.89582681940209],
[12.487824503664086, 41.895841187884514],
[12.487846564388576, 41.896062872638026],
[12.4884973557632, 41.89599513571156],
[12.488748296504838, 41.89593150398784],
[12.488902721576324, 41.89542655322492],
[12.488797933134919, 41.895315709840276],
[12.488654538424868, 41.89522334020583],
[12.487518411111182, 41.89510223314949],
[12.487380531583028, 41.89565029206369]
]
],
type: 'Polygon'
}
}
]
}
})
map.addLayer({
id: 'eraser',
type: 'clip',
source: 'eraser',
layout: {
'clip-layer-types': ['symbol', 'model']
},
minzoom: 1
})
To integrate a 3D model, I defined its geographic location and transformation parameters.
// Define the origin (longitude, latitude) of the 3D model
const modelOrigin = [12.488160, 41.895612]
// Set the altitude of the model (0 means at ground level)
const modelAltitude = 0
// Define the rotation of the model in radians [X,Y,Z]
const modelRotate = [Math.PI / 2, -12.43, 0]
// Convert the geographic coordinates to Mercator coordinates
// This is necessary because Mapbox GL JS uses Web Mercator projection
const modelAsMercatorCoordinate = mapboxgl.MercatorCoordinate.fromLngLat(
modelOrigin,
modelAltitude
)
// Create a transformation object for the 3D model
const modelTransform = {
// Set the X, Y, Z translations using the Mercator coordinates
translateX: modelAsMercatorCoordinate.x,
translateY: modelAsMercatorCoordinate.y,
translateZ: modelAsMercatorCoordinate.z,
// Set the rotations around each axis
rotateX: modelRotate[0],
rotateY: modelRotate[1],
rotateZ: modelRotate[2],
// Calculate the scale factor
// This ensures the model is sized correctly relative to the map
scale: modelAsMercatorCoordinate.meterInMercatorCoordinateUnits()
}
I added a custom layer to the Mapbox map to render the 3D model. This involved setting up a Three.js scene within Mapbox GL JS, loading the model, and configuring the rendering pipeline. Here’s how the custom layer was structured:
map.addLayer({
id: '3d-model',
type: 'custom',
renderingMode: '3d',
onAdd: function (map, gl) {
// Set up Three.js scene, camera, lights, and load the 3D model
},
render: function (gl, matrix) {
// Render the 3D model
}
})
For a full breakdown, see the official Mapbox GL JS 3D model example.
I used GLTFLoader from Three.js to load the model.
onAdd: function (map, gl) {
// ...
const loader = new GLTFLoader()
loader.load('path/to/model.glb', (gltf) => {
// Traverse through the model’s scene graph
gltf.scene.traverse((child) => {
if (child.isMesh) {
// Enable shadows only for meshes
child.castShadow = true
child.receiveShadow = true
}
})
this.scene.add(gltf.scene)
})
}
The gltf.scene.traverse((child) => {...}) function loops through all the meshes within the model, allowing for selective enabling of shadow casting and receiving. This capability is particularly useful when models contain elements that should not cast or receive shadows, such as transparent or background meshes.
I used an ambient light for general illumination and two directional lights to nicely light up the model and cast shadows:
onAdd: function (map, gl) {
// ...
const ambientLight = new THREE.AmbientLight(0xffffff, 2)
this.scene.add(ambientLight)
const directionalLight = new THREE.DirectionalLight(0xffffff, 1)
directionalLight.position.set(-40, 250, 150)
directionalLight.castShadow = true
directionalLight.shadow.bias = -0.003
this.scene.add(directionalLight)
// Second directional light to illuminate the model from another side
const secondaryLight = new THREE.DirectionalLight(0xffffff, 0.5)
secondaryLight.position.set(50, 100, -50)
this.scene.add(secondaryLight)
}
To improve shadow quality and reduce artifacts, I increased the shadow map resolution and adjusted the shadow camera’s near, far, left, right, top, and bottom properties:
directionalLight.shadow.mapSize.width = 1024
directionalLight.shadow.mapSize.height = 1024
directionalLight.shadow.camera.near = 0.1
directionalLight.shadow.camera.far = 500
directionalLight.shadow.camera.left = -100
directionalLight.shadow.camera.right = 100
directionalLight.shadow.camera.top = 100
directionalLight.shadow.camera.bottom = -100
If the shadow camera’s frustum is too large or too small, it can cause precision issues in the shadow map. Therefore, it is important to ensure the frustum tightly fits the area where shadows are needed.
Shadow rendering before and after bias adjustment.
Shadow acne refers to visual artifacts that appear as a "ladder" or striped texture on surfaces. It occurs because the shadow map’s depth values are compared to the scene’s depth values, and small precision errors can cause incorrect shadowing. The shadow.bias property offsets the shadow map slightly to avoid these errors.
directionalLight.shadow.bias = -0.003
Tips:
Default grey plane compared with ShadowMaterial applied.
In Blender, I added a plane to the 3D model to serve as a surface for receiving shadows. After importing the model into Three.js, I located the plane by its name in the scene graph and applied a ShadowMaterial to it.
gltf.scene.traverse((child) => {
//...
if (child.isMesh && child.name === 'Plane') {
child.material = new THREE.ShadowMaterial()
child.material.opacity = 0.1
child.receiveShadow = true
}
})
This ensures the shadow of the model is visible on the map surface.
Finally, I updated the Three.js camera and rendered the scene within Mapbox’s render function of the custom layer:
//...
render: function (gl, matrix) {
const rotationX = new THREE.Matrix4().makeRotationAxis(
new THREE.Vector3(1, 0, 0),
modelTransform.rotateX
)
const rotationY = new THREE.Matrix4().makeRotationAxis(
new THREE.Vector3(0, 1, 0),
modelTransform.rotateY
)
const rotationZ = new THREE.Matrix4().makeRotationAxis(
new THREE.Vector3(0, 0, 1),
modelTransform.rotateZ
)
const m = new THREE.Matrix4().fromArray(matrix)
const l = new THREE.Matrix4()
.makeTranslation(
modelTransform.translateX,
modelTransform.translateY,
modelTransform.translateZ
)
.scale(
new THREE.Vector3(
modelTransform.scale,
-modelTransform.scale,
modelTransform.scale
)
)
.multiply(rotationX)
.multiply(rotationY)
.multiply(rotationZ)
this.camera.projectionMatrix = m.multiply(l)
this.renderer.resetState()
this.renderer.render(this.scene, this.camera)
this.map.triggerRepaint()
}
At this point, I adjusted the material colors of the roof, walls, and windows in Blender to better match the surrounding Mapbox buildings. This created a more cohesive appearance, helping the custom 3D model blend seamlessly into the scene. Using Mapbox’s "Day" light preset along with my lighting setup, I finalized the following color values:
When shadows are enabled, resizing the window can cause the model to appear distorted because the shadow calculations aren’t automatically updated. To resolve this, I dispose of the renderer and reinitialize it whenever the window is resized:
onAdd: function (map, gl) {
// ...
const handleResize = () => {
this.renderer.dispose()
this.renderer = new THREE.WebGLRenderer({
canvas: map.getCanvas(),
context: gl,
antialias: true
})
this.renderer.shadowMap.enabled = true
this.renderer.shadowMap.type = THREE.PCFSoftShadowMap
this.renderer.autoClear = false
}
map.on('resize', handleResize)
}
To replicate Mapbox’s signature glowing entrance effect, I started by creating a 3D mesh in the shape of an upside-down square arch.
Comparison of the entrance mesh before and after applying the „glowing“ shader.
Next, I applied a custom shader to the mesh to simulate a soft, glowing effect. The shader creates a gradient that goes from white at the back to full transparency at the edges, creating the illusion of soft light emanating from the entrance.
gltf.scene.traverse((child) => {
//...
if (child.isMesh && child.name.startsWith('Door_Glow')) {
const gradientMaterial = new THREE.ShaderMaterial({
vertexShader: `
varying vec2 vUv;
varying float vDepth;
varying float vHeight;
void main() {
vUv = uv;
vDepth = position.z;
vHeight = position.y;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`,
fragmentShader: `
varying vec2 vUv;
varying float vDepth;
varying float vHeight;
void main() {
vec3 color = vec3(1.0, 1.0, 1.0);
float backToFront = smoothstep(-1.0, 1.0, vDepth);
float topToBottom = smoothstep(-1.0, 1.0, vHeight);
float alpha = backToFront - topToBottom;
gl_FragColor = vec4(color, alpha);
}
`,
transparent: true
})
child.material = gradientMaterial
}
})
For those interested in diving deeper into shaders, I recommend again checking out the Three.js journey by Bruno Simon (https://threejs-journey.com/) and The Book of Shaders (https://thebookofshaders.com/).
Integrating custom 3D models into Mapbox GL JS maps truly opens up a world of exciting possibilities for visualizing unique locations and creating highly engaging user experiences. As I’ve explored in this detailed breakdown, overcoming the technical challenges involved in this process allows for the creation of truly stunning and informative 3D map applications.
If you’re curious about how advanced 3D map integration could elevate your next project – or simply have questions you’d like to explore – feel free to reach out and see what we can build together.
The post Custom 3D models in Mapbox: a step-by-step integration guide appeared first on Bleech.
The post If there was Astro for WordPress appeared first on Bleech.
The excitement typically revolves around ever-evolving revolutionary technologies, the latest shiny frameworks, or the emergence of new programming languages. While these advancements push web development forward, I believe it’s important, from time to time, to recognise the substantial progress we’ve made and appreciate the well-established tools we rely on every day. One of those tools for me is WordPress.
WordPress was one of my first experiences with web development, back when I was a designer, and I have to say I had a rollercoaster of feelings when I started using it. As I was inexperienced with code, I had to leverage the premium WordPress theme ecosystem to create my websites. Some themes were quite powerful and offered a lot of options, although they usually came with a ton of extra features that I would rarely use. That led to performance issues and, from time to time, security vulnerabilities as well. At the time I was focused on learning more about CSS and HTML, as I didn’t want to overwhelm myself with too much at once.
After a while, I ended up working at a startup as a designer, where we were developing a pretty complex web app. It included, among other things, a real-time global chat, a ranking system used for matchmaking users based on their performance, listing and trading of digital and physical products between users and the platform, leaderboards, and the list goes on. When we decided to do a rewrite of the app, I got involved with the frontend and tried to learn more about web development. That was my first introduction to more advanced tooling like bundlers, package managers, and so on.
With that knowledge in hand, I decided to create my own WordPress theme boilerplate that would include all of those technologies. My goal was to have a starting point for each WordPress project I would develop in the future. While this might sound easy on paper, getting everything right was a long process of using what I created and realising that I had to structure something differently, fix a bug, and so on. I ended up focusing more on maintaining and improving the boilerplate than actually working on the projects. This made me look for other solutions, and at that time static site generators were the new shiny thing.
I started following the web development world when a lot of the rendering was moving from the server to the client and frameworks like React, Vue, and Angular became popular. While they are great solutions for creating dynamic web applications, when it comes to simple, SEO-friendly websites, nothing beats static. Fortunately, static site generators were developed to translate JavaScript code into static HTML. For a while, this was how I created websites for my projects, using tools like Gatsby in the beginning and eventually moving to Astro.
Lately though, it seems the path was not a straight line but more of a circle. It started on the server, moved to the client, and now we are moving back to the server. I am personally very excited about these cycles, because I think they test the limits of what our current technologies can do while promoting innovation that makes the web a better, more inclusive space for everyone.
I learned about Flynt just before I joined Bleech. At first it looked strange to me, as I hadn’t heard of Timber or Twig before, but when I started using it, it really made sense. Flynt is basically a WordPress theme starter, although once you get to know it, it is way more powerful than that. The current version of Flynt was 1.4 at the time, and it had already solved CSS and JavaScript scoping, responsive images, and, most importantly, developer experience. Then version 2.0 came out with a lot of improvements and new features inspired by the modern development ecosystem.
The basic idea of Flynt is that you create reusable components to construct your layouts and content. Some components are used to create content, while others serve as common elements across the website, like the header and footer. The anatomy of a component looks like this:
- index.twig: The HTML template.
- _style.scss: Scoped styles for the frontend.
- functions.php: Can be used to run server code, register custom fields and translatable content, or modify the data of the component.
- script.js: Uses web components under the hood to scope styles and lazy load the JavaScript code.
- README.md: Used to provide documentation.
- screenshot.png: Used as a visual reference, both in the codebase and in the WordPress admin.
- _admin.scss / admin.js: Used to add styles and JavaScript to the WordPress admin.

As you can see, the structure looks similar to what you would find in other popular frameworks. The component becomes available in your codebase as soon as you create some or all of these files. There is an extra step, though, if you want to use it as part of the content for posts or pages, and that is registering it in the appropriate file under inc/fieldGroups.
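Put together, a component folder (the component name here is hypothetical) might look like this:

```
Components/BlockAlert/
├── index.twig
├── _style.scss
├── functions.php
├── script.js
├── README.md
├── screenshot.png
├── _admin.scss
└── admin.js
```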
Some of the coolest features for me are the islands architecture concept, a reference system to access HTML elements, and a solid way to communicate data between server and client using simple JSON scripts. Let’s take a look at how those features work.
Different loading strategies can be defined for each component independently when using the custom element flynt-component:
- load:on="load" (default): Initialises immediately when the page loads. Usage example: elements that need to be interactive as soon as possible.
- load:on="idle": Initialises after the full page load, when the browser enters an idle state. Usage example: elements that don’t need to be interactive immediately.
- load:on="visible": Initialises once the element becomes visible in the viewport. Usage example: elements "below the fold", or elements you want to load only when the user sees them.
- load:on:media="(min-width: 1024px)": Initialises when the specified media query matches. Usage example: elements which may only be visible on certain screen sizes.

This allows you to control how JavaScript is loaded and, when used correctly, can lead to a great balance between user experience and performance.
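For example, a component that should only initialise its JavaScript once it scrolls into view could be declared like this (the component name is hypothetical; the load:on attribute is the one described above):

```html
<flynt-component name="SliderImages" load:on="visible">
  <!-- component markup; script.js runs only when this element enters the viewport -->
</flynt-component>
```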
The reference system provides a way to access DOM elements with the buildRefs function. It requires that you provide a data-ref attribute for single selection or data-refs for multiple, with a name as the value, e.g. data-ref="alertToggle". What this function does under the hood is:
- When buildRefs is called, a new proxy is created that stores an object with the ref name and the corresponding element.
- On access, it returns the stored element from customRefs or constructs one using a querySelector or querySelectorAll and the property name.
- It returns null if none were found.

Let’s take a look at how this works:
For the HTML, I will just create a simple button and give it the data-ref="alertToggle" attribute.
<flynt-component name="BlockAlert">
<button data-ref="alertToggle">Alert me!</button>
</flynt-component>
Then, inside the component script, I will use the buildRefs function to access it:
import { buildRefs } from '@/assets/scripts/helpers.js'
export default function (el) {
const ref = buildRefs(el)
ref.alertToggle.addEventListener("click", () => alert("I am a ref called alertToggle"))
}
There is also a helper function called getJSON. By default, this function retrieves the content of a script tag with type "application/json" from the component’s markup. This is useful when you want to pass data from the server to the client in JSON format. I find it very handy when I want to persist state from the server to the client on a page reload, for example.
Here is an example of how this would look. First, in our functions.php, let’s create some data:
add_filter('Flynt/addComponentData?name=BlockServerSynch', function ($data) {
$data['jsonData'] = [
"foo" => "bar"
];
return $data;
});
Then, in the Twig file, we add the script with our data.
<flynt-component name="BlockServerSynch">
<script type="application/json">
{{ jsonData|json_encode }}
</script>
</flynt-component>
And then we can grab the data from the server inside the script file:
import { getJSON } from '@/assets/scripts/helpers.js'
export default function (el) {
const data = getJSON(el)
console.log(data)
}
I have used this in the past to pass filters, user inputs, translatable strings, and user options, among other things, and I really like how simple and to the point it always feels.
As someone who likes to balance between bleeding-edge and production-ready solutions, I find that Flynt has become my go-to starter theme for WordPress development. It enables me to write maintainable code faster, while taking care of performance, scoping, and reusability in a very robust way. As an official member of the team that develops and maintains Flynt, I see all the thought and effort that goes into it, and I can’t wait to see what new features are coming.
But don’t take my word for it, go and try it yourself! If you need any help, you can always start a discussion on GitHub or reach out on 𝕏, we are always happy to help.
The post Custom chatbots made easy: How to build your own ChatGPT agents appeared first on Bleech.
ChatGPT can be powerful in helping with very specific tasks like the following:
But to get quality responses, you need to give ChatGPT context by providing a concise prompt. That’s why I keep a list of my day-to-day prompts in my notes app. Whenever I need a prompt, I open a new conversation in ChatGPT and paste the template. But that’s quite a repetitive process.
There should be a sustainable way to organize, optimize and launch prompt templates as context-aware ChatGPT agents.
Me, Oct. 2023
I developed a lightweight CLI interface for text-based prompt templates. It allows you to create, load, and interact with custom ChatGPT agents by providing custom GPT prompts in plain text files. Meet SiegfriedAI. 🧔♂️👋
To install the CLI tool and learn a few tips, head over to the GitHub repository:
SiegfriedAI is free and open source. At just 2 kB, it’s a compact script that you can easily adapt and extend to your needs. True to its simple nature, the feature list is small and focused:
Tackle specific, recurring tasks with custom chat agents. Whether it’s customer support, technical assistance or content creation, keep your favorite prompts organized and readily available at your fingertips.
Simply drop your text files with GPT prompts into the templates folder to create your custom chat agents. The file name will automatically be displayed in the selection prompt. Create as many specialized agents as you like.
Easily build sophisticated AI solutions on top. With just 74 lines, the script is quick to understand and easy to extend. And with langchain, you are prepped with a powerful toolset to load files, crawl websites, generate images and more.
While OpenAI provides a wealth of information on their website, I wasn’t sure what I needed to know to get up and running. Here’s what I learned building the tool, hopefully giving you a head start:
SiegfriedAI is built with langchain, a framework for developing LLM applications. Beyond providing a simple API and great documentation, langchain includes powerful integrations like document loaders, web loaders, vector stores, action agents, output parsers and more to develop sophisticated AI solutions.
Sending a message to the OpenAI API and generating a response is simple:
First, set your OpenAI API key and install langchain:

export OPENAI_API_KEY=your_openai_api_key_here
npm install -S langchain

Create and run your first chat completion:
import { ChatOpenAI } from "langchain/chat_models/openai";
const model = new ChatOpenAI();
const message = "Name something green.";
const aiResponse = await model.predict(message);
console.log(aiResponse); /* Grass */
To get a proper context-aware chat experience, the AI needs to keep track of the chat history. However, OpenAI doesn’t maintain a history of the chat. Instead, you need to send back the full chat history with every new request. While this seems low-level, it makes it simple to work with.
let messages = [
["human", "Name something green."],
["ai", "Grass."],
["human", "What did you say?"]
];
const aiResponse = await model.predictMessages(messages);
console.log(aiResponse); /* I said "Grass." */
I’m using Inquirer.js for the template selection and to allow multiline input via an editor. It’s easily embeddable and provides a prettier command-line interface.
import { select, input } from '@inquirer/prompts';
const template = await select({
  message: 'Select your template:',
  choices: [{
    name: 'ChatGPT',
    value: ''
  }, {
    name: 'Final Cut Pro',
    value: 'Act as a support agent who is an expert in Final Cut Pro for Mac. Only respond with short, precise, helpful messages to my questions.',
  }],
});
The template is passed to OpenAI as a system message, followed by the user input. The model then responds with the specified behavior.
let messages = [
  ["system", template],
  ["human", await input({ message: 'You:' })]
];
For the full script, please refer to the GitHub repository: steffenbew/siegfried-ai
Developing SiegfriedAI has amazed me – working with AI dev tools is a creator’s dream! Who would’ve guessed chatbot creation could be so simple and fun?
The great developer experience and immense potential that AI developer tools bring to the table are a signal that there’s a big wave of AI advancements coming our way. Witnessing this firsthand, I am buzzing with anticipation for the progress we’ll see in this space.
For anyone looking to get their hands on AI development, I hope to have sparked some curiosity and encouragement! I can’t wait for you to experience it yourself.
—
PS: I’m still looking for great prompts! Which ones make your life considerably easier? Any that elevate your craft to new heights? Please share your favorite ones by dropping me an email – I’d be excited to hear from you!
PPS: Drop by our YouTube channel for more insights: Siegfried, deploy!
The post Custom chatbots made easy: How to build your own ChatGPT agents appeared first on Bleech.
The post Enhancing WordPress Archives with HTMX and View Transitions appeared first on Bleech.
In a previous article, I looked into programmatically filtering post archives in WordPress. Today, I will be using HTMX and the View Transitions API to elevate the user experience. These technologies enable real-time filter updates without requiring a page reload and offer slick animations with minimal code adjustments.
HTMX opens the door to a range of modern web features like AJAX, CSS Transitions, WebSockets, and Server-Sent Events via HTML attributes.
Key Advantages:
For an in-depth understanding, visit the official HTMX documentation.
The View Transitions API provides a mechanism for easily creating animated transitions between different DOM states while also updating the DOM contents in a single step.
Designed primarily to introduce app-like transitions into Multi-Page Applications (MPAs), the View Transitions API is a noteworthy advancement. Unlike specialised libraries in frameworks like React, this API offers the same fluid transitions without the overhead of additional packages. You can learn more from the MDN documentation.
For this example, I’ll use categories as filters and allow sorting by date in ascending or descending order. I’ll also include HTMX directly in the markup as a script, although in real-world applications it’s advisable to include it in the head section or load it via a package manager like npm. This tutorial uses Twig, but you can easily adapt it to vanilla WordPress.
Below is the markup before we introduce HTMX:
<main class="grid-post-archives">
  <div class="container">
    <h1>Post Archives</h1>
    <div data-ref="content">
      <!-- Form -->
      <form method="get">
        <!-- Categories -->
        <fieldset>
          <legend>Category</legend>
          {% for term in terms %}
            <input
              id="{{ term.id }}"
              name="cat[]"
              value="{{ term.id }}"
              type="checkbox"
              {{ term.selected ? "checked" }}
            />
            <label for="{{ term.id }}">{{ term.name }}</label>
          {% endfor %}
        </fieldset>
        <!-- Order -->
        <fieldset>
          <legend>Order</legend>
          <input
            id="order-asc"
            name="order"
            value="ASC"
            type="radio"
          />
          <label for="order-asc">Ascending</label>
          <input
            id="order-desc"
            name="order"
            value="DESC"
            type="radio"
          />
          <label for="order-desc">Descending</label>
        </fieldset>
        <input type="submit">
      </form>
      <!-- Posts Loop -->
      {% if posts|length > 0 %}
        <ul data-ref="posts" class="posts resetList">
          {% for post in posts %}
            <li class="post">
              <img src="{{ post.thumbnail.src }}" alt="">
              <h3>{{ post.title }}</h3>
              <p>{{ post.excerpt }}</p>
            </li>
          {% endfor %}
        </ul>
      {% else %}
        <p class="posts-empty">No posts found</p>
      {% endif %}
    </div>
  </div>
</main>
This is what a regular archive page would look like. If you followed my previous article, you know that when the form is submitted, the selected options are added to the URL as query parameters and the page reloads to reflect the selected filters.
Adding HTMX attributes turns this into a dynamic, AJAX-powered form. Let’s look at the final markup and decipher what each attribute accomplishes:
<script src="https://unpkg.com/htmx.org"></script>
<main
  class="grid-post-archives"
  hx-boost="true"
>
  <div class="container">
    <h1>Post Archives</h1>
    <div data-ref="content">
      <form
        method="get"
        hx-push-url="true"
        hx-get="{{ post.link }}"
        hx-target="closest [data-ref='content']"
        hx-select="[data-ref='content']"
        hx-swap="outerHTML show:body:top"
        hx-trigger="change"
      >
        <!-- Categories -->
        <!-- Order -->
        <noscript>
          <input type="submit">
        </noscript>
      </form>
      <!-- Posts Loop -->
    </div>
  </div>
</main>
- hx-boost="true": Transforms anchor tags and forms into AJAX requests.
- hx-push-url="true": Updates the browser URL to preserve the state across navigation and page refreshes.
- hx-get="{{ post.link }}": Specifies the content source URL (in this case, the link of our post archive page).
- hx-target="closest [data-ref='content']": Identifies the element to update. It can also be a class or an id, among other things. I went with a data attribute to make it more clear.
- hx-select="[data-ref='content']": Determines what content will replace the target.
- hx-swap="outerHTML show:body:top": Defines the swapping action and scrolls to the top of the page.
- hx-trigger="change": Provides the event that initiates the swap. I chose the change event in this case, to provide instant feedback on the user’s selection.

For users with JavaScript disabled, I wrapped the submit button in a noscript tag so they can still submit the form manually.
And that’s it. We now have fully functional AJAX-driven filtering for our post archives.
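Because a boosted form with hx-get serializes its fields just like a normal GET submit, the pushed URL carries the same cat[] and order parameters as the full page reload did. A quick sketch of how that query string comes together (the /blog/ path and term IDs are placeholders; the field names come from the markup above):

```javascript
// Sketch: reproduce the query string the boosted form sends via hx-get.
// URLSearchParams percent-encodes the brackets in "cat[]".
const params = new URLSearchParams([
  ["cat[]", "3"],    // selected category term IDs (checkboxes)
  ["cat[]", "7"],
  ["order", "DESC"]  // sort direction (radio)
]);
const url = `/blog/?${params.toString()}`;
console.log(url); // → /blog/?cat%5B%5D=3&cat%5B%5D=7&order=DESC
```

This is also the URL that hx-push-url="true" writes into the browser history, which is why the filtered state survives a refresh.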
HTMX offers out-of-the-box support for View Transitions through a single configuration variable. This means no extra code is necessary to leverage this feature.
<script src="https://unpkg.com/htmx.org"></script>
<script>
  htmx.config.globalViewTransitions = true
</script>
This adds a default fade animation to content updates. By adding the view-transition-name property, the browser will animate transitions between common elements.
<li
  class="post"
  style="view-transition-name: post-{{ post.id }}"
>
  <img src="{{ post.thumbnail.src }}" alt="">
  <h3>{{ post.title }}</h3>
  <p>{{ post.excerpt }}</p>
</li>
You can even create custom CSS animations using the View Transitions API to craft a unique user experience for your website’s visitors.
While the View Transitions API is experimental and not universally supported, it serves as a progressive enhancement, ensuring your website remains functional.
By following this guide, you’ll have a WordPress post archive that not only filters posts via AJAX but also offers a polished, app-like user experience.
In this exploration of enhancing post archive filters in WordPress, HTMX and the View Transitions API proved to be game-changers for the user experience. By leveraging HTMX, the complexity of the codebase is dramatically reduced and the reliance on frontend frameworks or libraries becomes obsolete. This makes the web page lighter, faster, and less prone to client-side bugs, without compromising on modern functionality like AJAX requests and seamless transitions.
The View Transitions API further amplifies the experience, enabling fluid transitions between different DOM states. It allows websites to offer an app-like feel, taking user engagement to a new level. Moreover, the progressive enhancement approach ensures that these features act as bonuses for supported browsers, not prerequisites for accessing content.
If you’re a developer interested in modern web technologies and keen to improve your WordPress projects, HTMX and the View Transitions API present a compelling case for further investigation. With minimal code changes, these tools can help you build an enhanced, interactive, and modern UI that stands out.
While the View Transitions API is still experimental, it shows significant promise. The power of these tools is in our hands to explore, integrate, and enhance. Happy coding!
Download the Flynt Component from Github
The post Enhancing WordPress Archives with HTMX and View Transitions appeared first on Bleech.
The post The ups and downs of text-wrap: balance and a polyfill appeared first on Bleech.
Say hello to text-wrap: balance! It takes you from hand-authoring to full automation, ensuring your text looks just as good online as it does in the design.
An unbalanced headline fills the entire container width for each line before breaking onto the next. This often results in the last line of text being shorter than the previous lines, unless you get lucky with perfect alignment.
.unbalanced {
  max-inline-size: 700px;
}
To balance all lines of text, you’d usually have to manually insert line breaks or adjust the container’s width. However, these methods only work for a predetermined layout width and have their limits with responsive layouts.
That’s where text-wrap: balance comes into play – it automatically aligns the length of text lines across all screen resolutions.
.balanced {
  max-inline-size: 700px;
  text-wrap: balance;
}
Luckily, text-wrap: balance does not require a dictionary for each language – a requirement that might render a feature useless for non-English content. I’m looking at you, hyphens: auto!
Instead, the browser calculates the smallest width for each line without creating additional lines. However, there are at least the following considerations to keep in mind when using this feature:
- It won’t apply if a conflicting white-space value is set. If the element inherits such a value, you should unset it.
- It respects <br> tags, so your intentional line breaks won’t be disrupted.

I consider these prerequisites a plus. Initially, I was concerned that text-wrap: balance might be too “magical,” making it difficult to understand and debug. But especially the fact that it respects manual line breaks eases those worries.
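The underlying idea – finding the smallest width that still fits the text in the same number of lines – can be illustrated with a toy model. This sketch is purely didactic (it assumes fixed-width characters and single-space joins; real browsers measure actual glyph widths, as does the polyfill further below):

```javascript
// Toy greedy word-wrap: how many lines does the text need at a given width?
// Width is measured in characters (monospace simplification).
function lineCount(words, width) {
  let lines = 1;
  let current = 0;
  for (const word of words) {
    const next = current === 0 ? word.length : current + 1 + word.length;
    if (next > width && current > 0) {
      lines += 1;            // wrap: word moves to a new line
      current = word.length;
    } else {
      current = next;
    }
  }
  return lines;
}

// Binary search the smallest width that keeps the original line count.
function balancedWidth(words, containerWidth) {
  const target = lineCount(words, containerWidth);
  let lower = Math.max(...words.map((w) => w.length)); // can't go below the longest word
  let upper = containerWidth;
  while (lower < upper) {
    const middle = Math.floor((lower + upper) / 2);
    if (lineCount(words, middle) === target) {
      upper = middle;        // still the same number of lines: try narrower
    } else {
      lower = middle + 1;
    }
  }
  return upper;
}

const headline = 'The ups and downs of text wrap balance'.split(' ');
console.log(lineCount(headline, 25));     // → 2
console.log(balancedWidth(headline, 25)); // → 20 ("The ups and downs of" / "text wrap balance")
```

At the full 25-character width, the greedy break yields a long first line and a short second one; narrowing to 20 characters keeps two lines but balances their lengths.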
As of July 2024, all major browsers support text-wrap: balance in their latest version. The partial support flag refers to the possibility of using the longer syntax text-wrap-style in conjunction with text-wrap-mode.
Browser support (July 2024)
Given the expanding browser support, text-wrap: balance is an ideal candidate for progressive enhancement. It’s a great fit when a headline plays a key role in the layout, but the content manager cannot control its line breaks.
This might be the case for an article title that is displayed in a hero section on top of a larger background, especially when the headline is centered.
After: Balanced text in a blog post hero section, thanks to text-wrap: balance.
Once browser support expands even further – or if you opt for a polyfill – the applications could be extended to any layout-centric headline that is aiming for a block-style aesthetic.
Before you throw text-wrap: balance on every headline across your website, hear me out. Initially, I thought, “Why not? It can only improve layouts.” But that’s not necessarily the case, and here’s why, in two key points:
Caveat #1:
When editing content in a supported browser, you won’t notice how bad a line of text may appear to users on unsupported browsers. And when text composition makes a difference, I’d prefer the opportunity to catch these issues early and make necessary adjustments. This could mean adding an extra line break, tweaking the CSS, or even rewording the text.
Before: The text layout looks significantly different without browser support.
Caveat #2:
Humans instinctively read negative space as patterns. Our perception automatically frames multiple lines of text in a box, depending on where those lines break. Therefore, changing the length of lines affects the perceived size of a section. This effect becomes increasingly evident when the original text is highly unbalanced, such as when the last line contains only a few characters.
Before: Unbalanced text fills the width of the parent container.
After: Balanced text causes notable white space on the right hand side.
The go-to JavaScript polyfill for many is Adobe’s balance-text. However, we found it to be outdated and a bit bloated for our needs. So, Dominik took matters into his own hands and crafted a custom polyfill. He based it on react-wrap-balancer, opting for a lighter, more streamlined algorithm that leverages modern tech like the ResizeObserver.
export default function () {
  if (!window.CSS.supports('text-wrap', 'balance')) {
    const elements = document.querySelectorAll('.textWrapBalance, h1')
    const resizeObserver = new ResizeObserver((entries) => {
      entries.forEach((entry) => {
        relayout(entry.target)
      })
    })
    elements.forEach((element) => {
      relayout(element)
      resizeObserver.observe(element)
    })
    window.addEventListener('resize', () => {
      elements.forEach((element) => {
        relayout(element)
      })
    })
  }
}

function relayout (wrapper, ratio = 1) {
  const container = wrapper.parentElement
  const update = (width) => (wrapper.style.maxWidth = width + 'px')
  wrapper.style.display = 'inline-block'
  wrapper.style.verticalAlign = 'top'
  // Reset wrapper width
  wrapper.style.maxWidth = ''
  // Get the initial container size
  const width = container.clientWidth
  const height = container.clientHeight
  // Synchronously do binary search and calculate the layout
  let lower = width / 2 - 0.25
  let upper = width + 0.5
  let middle
  if (width) {
    // Ensure we don't search widths lower than when the text overflows
    update(lower)
    lower = Math.max(wrapper.scrollWidth, lower)
    while (lower + 1 < upper) {
      middle = Math.round((lower + upper) / 2)
      update(middle)
      if (container.clientHeight === height) {
        upper = middle
      } else {
        lower = middle
      }
    }
    update(upper * ratio + width * (1 - ratio))
  }
}
To integrate the polyfill into your Flynt project, create a new script file in the /assets/scripts/ folder. Paste the above code snippet into it and adjust the selector according to your needs.
/assets/scripts/textWrapBalance-polyfill.js
In order to execute the polyfill code, add the following to your main.js file:
import textWrapBalance from './scripts/textWrapBalance-polyfill.js'
textWrapBalance()
Overall, I’m a big fan of text-wrap: balance and will use it across the board once browser support is great.
But the idea of running intense CSS manipulations in JavaScript on practically every page just to polyfill this feature doesn’t sit well with me at the moment.
So, for now, I’ll keep an eye out for specific use-cases where its native capabilities can progressively enhance the user experience, while holding off on broader implementation until robust browser support arrives.
Beyond the balance value, the spec offers a new pretty value, which is supposed to prevent a single word from ending up alone on the last line.
In a nutshell, if balance is for layout headlines, then pretty is your go-to for content headlines. But it could also be a solid choice for layout headlines where you prefer to keep the negative space largely unchanged, as mentioned above.
Support for text-wrap: pretty shipped in Chromium 117 first. As of July 2024, Safari and Firefox do not support it yet.
text-wrap: pretty: Prevents a single word in the headline’s last line.
Btw: This page uses text-wrap: balance on the h1 and text-wrap: pretty on long-copy content headlines (.post-main h1-h6).
The post The ups and downs of text-wrap: balance and a polyfill appeared first on Bleech.
The post Creating a modal using the dialog HTML element appeared first on Bleech.
Back in the day, making a modal was a real challenge that needed loads of work and know-how. You had to really think about how to make it easy to use and accessible, deal with where the focus goes, handle keyboard events, and all that jazz. But things got a whole lot easier once the dialog element came into play.
The purpose of the <dialog> element is to simplify the processes of creating modals or floating interactive elements that appear on user interaction. A dialog will interrupt the typical user flow, usually to display important information, collect user input, or confirm actions.
Here is a simple example on how the dialog works:
<dialog id="dialog">
  <h2>Lorem ipsum</h2>
  <p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Facere mollitia iste, praesentium sint expedita culpa, veniam dolorem ipsam alias iusto labore quas, quia non minima repellendus. Excepturi laboriosam harum sunt.</p>
  <button id="close-dialog">Close</button>
</dialog>
<button id="open-dialog">Open Dialog</button>
<script>
  const dialog = document.getElementById("dialog");
  const dialogOpen = document.getElementById("open-dialog");
  const dialogClose = document.getElementById("close-dialog");
  dialogOpen.addEventListener("click", () => {
    dialog.show();
  });
  dialogClose.addEventListener("click", () => {
    dialog.close();
  });
</script>
And that’s it. There are two methods you can call to open the dialog.
- dialog.show() – Displays the dialog without a backdrop, positioned absolutely relative to the document flow.
- dialog.showModal() – Displays a modal-type dialog in a fixed position with a backdrop, blocking interactions with the regular flow.

When dialogs are used to provide information or an action that is not mandatory for the user experience, it is a good idea to make the dialog easily dismissible. The following example shows how to close the modal when clicking anything outside of the dialog element.
const dialog = document.getElementById("dialog");
const dialogOpen = document.getElementById("open-dialog");
const dialogClose = document.getElementById("close-dialog");
dialogOpen.addEventListener("click", openDialog);
dialogClose.addEventListener("click", closeDialog);
function openDialog() {
  dialog.showModal()
  dialog.addEventListener("click", closeDialogOnClickOutside)
}

function closeDialog() {
  dialog.close()
  dialog.removeEventListener("click", closeDialogOnClickOutside)
}

function closeDialogOnClickOutside (event) {
  event.target === dialog && closeDialog()
}
Wrapping the content of the dialog with a div ensures that the event.target is not the dialog when the user clicks on the content. This is what makes the closeDialogOnClickOutside function a one-liner.
<dialog id="dialog" class="modal">
  <div class="modal-content">
    <h2>Lorem ipsum</h2>
    <p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Facere mollitia iste, praesentium sint expedita culpa, veniam dolorem ipsam alias iusto labore quas, quia non minima repellendus. Excepturi laboriosam harum sunt.</p>
    <button id="close-dialog">Close</button>
  </div>
</dialog>
<button id="open-dialog">
  Open Dialog
</button>
After taking care of the markup we will need to remove any padding around the dialog element and style the .modal-content instead.
/* Dialog Reset */
dialog {
  border: 0;
  padding: 0;
  background: transparent;
  max-inline-size: min(65ch, 100% - 3rem);
}

.modal-content {
  padding: 2rem;
  background-color: #fff;
}
I like looking at web development as this big puzzle. We usually like to create patterns that we can use to connect 4 worlds: HTML, CSS, JavaScript and Data. We create (puzzle) pieces of reusable code that we then connect to create interfaces. One of the most common patterns in the frontend world is Components.
Below, I’ve created an example on how I would structure my code to create a reusable modal component using plain HTML, CSS and JavaScript. You can also find a Flynt Component that you can drop into your project right away: View on GitHub.
- The <dialog> element is designed to be accessible, making it a valuable choice for developers committed to creating inclusive web experiences. It follows best practices for keyboard navigation and screen reader compatibility.
- Using <dialog> is straightforward. You only need to define the dialog’s content within the element, set its ID, and handle its display and interaction through JavaScript.
- The <dialog> element automatically manages the focus within the dialog, preventing users from tabbing outside the modal, which is essential for accessibility and a smoother user experience.

The dialog element can be a useful tool in certain situations, but there are cases where it might be more suitable to use a JavaScript library or framework instead. Here are some scenarios when you might want to avoid using the <dialog> element and opt for a library or framework:

- If you need to support older browsers that don’t implement the <dialog> element, using a JavaScript library can be the only viable option to implement modal dialogs.
- If you need complex animations beyond what’s possible with the <dialog> element. JavaScript libraries often provide more advanced animation options and transitions that can enhance the user experience.

The <dialog> HTML element is a valuable addition to the web developer’s toolkit. It simplifies the creation of modal dialogs while maintaining accessibility and user-friendliness. By following best practices and integrating it seamlessly into your web applications, you can enhance the overall user experience and make your interactions more intuitive and engaging.
But what about you? Have you used the dialog element? If so, what did you like or dislike about it? Reach out on 𝕏.
The post Creating a modal using the dialog HTML element appeared first on Bleech.
The post Introducing our Figma Design Kit for Flynt appeared first on Bleech.
With our Figma Design Kit, we provide a selection of components that represent Flynt’s complete Base Style and an example page template. All Figma components serve as an excellent starting point for your project, saving you valuable time and effort that would have been spent starting from scratch.
Additionally, we have optimized the current colors and sizes of text and components to follow accessibility standards, ensuring that your designs are inclusive and accessible to users.
All colors, text styles and components can be easily customized and extended to align with your brand’s CI and match your requirements, empowering you to build your own custom page templates and create a website that truly reflects your unique vision and brand identity.
Sneak Peek of the Figma file showing Flynt’s Base Style elements like buttons and text input controls that one can easily customize to match different CIs.
Flynt is a developer-focused WordPress Starter Theme with a component-based architecture. It seamlessly integrates with Advanced Custom Fields Pro (ACF Pro) and Timber, making custom field management and dynamic template creation efficient.
With modern front-end tools like Vite and hot module loading, Flynt offers optimized builds and real-time updates. Its speed architecture with JS Islands ensures faster performance, enabling you to unlock the true potential of WordPress! Check out Flynt’s website and explore the GitHub repository to learn more.
Building an effective design system presents its own set of challenges. One key challenge we encountered during the development of the Flynt Design Kit is the technical concept of minimal code. To ensure optimal performance and maintain simplicity, we carefully curated only the most necessary variants within the design.
By striking a balance between simplicity and functionality, we provide designers and developers with the essential tools they need, without burdening them with excessive code.
The Flynt Design Kit is built for Figma, a leading design collaboration platform. This integration allows for effortless collaboration among team members and stakeholders. Designers and developers can work together in real-time, making necessary iterations and adjustments, ensuring everyone stays on the same page throughout the design process. Figma’s intuitive interface combined with Flynt’s Design Kit offers a seamless workflow for unparalleled productivity.
Excited to get your hands on the Flynt Design Kit for Figma? You can download it now for free from the Figma Community! Embark on a journey of effortless web development, where you can focus on unleashing your creativity and bringing your vision to life.
The release of the Figma Design Kit for Flynt marks a significant milestone in our commitment to providing designers and developers with the tools they need to streamline their workflow and create captivating websites. By addressing the challenges of creating design systems and offering a comprehensive set of components, we aim to empower you to build exceptional web experiences.
Download the Flynt Design Kit for Figma today and embark on a journey of seamless design collaboration, enhanced productivity, and unparalleled creativity.
The post Introducing our Figma Design Kit for Flynt appeared first on Bleech.
The post Flynt 2.0 – Redefining Performance and Experience appeared first on Bleech.
Flynt 2.0 provides developers with an array of powerful features and performance improvements, elevating the experience of custom built WordPress websites for both frontend and backend users. Take advantage of Flynt 2.0 to:
Flynt 2.0: Google PageSpeed Performance Results
We’ve also enhanced the editor experience, introducing a component search and integrated editor styles. The intuitive interface enables effortless creation of beautifully layouted pages, while the new Gutenberg Block Editor simplifies the writing and editing of lengthy blog posts. By streamlining the component options, editors can focus on crafting stunning websites effortlessly.
Page Layouts: The Component Search
Blog Posts: The Gutenberg Block Editor
Set the perfect foundation for any WordPress website project. Launch your site with confidence and focus on developing custom features and content, all without compromising performance. With Flynt 2.0, you’ll have more time to concentrate on what truly matters.
For those already familiar with Flynt, you’ll appreciate the streamlined codebase and backend experience. If you haven’t tried it yet, head over to flyntwp.com and get started today! We can’t wait to see the remarkable websites you’ll create with Flynt and we look forward to your feedback on Flynt 2.0!
Happy coding! 🎉🚀✨
The post Flynt 2.0 – Redefining Performance and Experience appeared first on Bleech.