timomeh.de https://timomeh.de About software development and other thoughts I wanted to elaborate on. Sat, 07 Feb 2026 19:43:00 GMT https://validator.w3.org/feed/docs/rss2.html timomeh.de en <![CDATA[Starsailor]]> https://timomeh.de/posts/starsailor?utm_source=rss https://timomeh.de/posts/starsailor Wed, 18 Mar 2026 23:14:00 GMT This week I discovered Starsailor for the first time. Based on their age and Spotify numbers, they’re not exactly a hidden gem. But it seems like they never quite broke through to the mainstream either. I had just never heard of them before.

Specifically, their debut album Love Is Here. I’ve listened to it so much this week. It’s such a great record, filled with strong songs. Their music is raw, punchy, expressive, emotional. There’s so much going on in the arrangements, the structure, the lyrics.

I don’t want to go deep into an analysis, or into how some of their songs remind me of Mando Diao’s Give Me Fire, another fantastic album. This post is just a recommendation. Give them a listen. Maybe you, or someone else reading these lines, will be discovering them for the first time as well, and their music will touch you just as much.

]]>
<![CDATA[Waffle House Marker System]]> https://timomeh.de/posts/waffle-house-marker-system?utm_source=rss https://timomeh.de/posts/waffle-house-marker-system Sat, 07 Feb 2026 19:07:00 GMT This video from Waffle House swept into my Recommendations today. It starts kind of unhinged, then quickly gets fully unhinged and just keeps its trajectory towards insanity.

At Waffle House, cooks remember their orders by putting packets and other seemingly random stuff on a plate, and the types and positions of the packets tell you what order it is. Like, putting an upside-down jelly packet with a mustard packet on the bottom of a plate means: scrambled eggs with 3 eggs and wheat bread. The packets are the “markers”. Want some hash browns? Cook sprinkles some hash browns at the top of the plate. Want those hash browns steamed? A butter cup is added.

The customer doesn’t get the markers; they’re removed from the plate before it goes out.

Insanity.

It reminded me of my weird keyboard shortcut “system”, just without the upside-down mustard packets.

A must-watch, in my opinion.

]]>
<![CDATA[I'm not an AI sceptic. I'm a people sceptic.]]> https://timomeh.de/posts/a-people-sceptic?utm_source=rss https://timomeh.de/posts/a-people-sceptic Sat, 17 Jan 2026 13:43:00 GMT I’m currently working on adding an OAuth/OIDC authorization server at work. Claude told me twice, on different occasions, to just implement the OAuth provider from scratch. No library, just raw-dog OAuth and OIDC completely from scratch. It said it would be worth it, because it’s only ~500 LOC.

Sure bro. It’s 500 LOC until it’s not anymore. Until you need to add the next OAuth grant.

I wasn’t interested in building it myself, and I didn’t ask for it. I had already installed node-oidc-provider and simply asked Claude to create the necessary configuration and fill it with some examples so I wouldn’t have to read through the docs. That’s when it told me the library doesn’t support what I wanted (comparing hashed client secrets) and that we should instead build everything ourselves.

Of course the library supports it. It supports freaking everything. node-oidc-provider’s docs aren’t great, but that’s exactly why I asked Claude to research it for me in the first place.

Implementing this yourself is horrible advice. Never implement auth from scratch, especially not an OAuth/OIDC provider, unless you’re doing it for the lulz (to learn), or because you’re creating a standalone project that you intend to maintain responsibly for years.

The first time Claude suggested building it myself was in the initial plan. I explicitly said “wtf no, we won’t build this from scratch” and changed it. It was so convinced that rolling our own OAuth was the better solution that it suggested it again anyway.

I’m experienced enough to know this. I’ve added OAuth flows on multiple occasions in the past.

But what if I wasn’t? What if I were a junior dev? Or a pure vibe coder?

That scares the shit out of me.

I think I’m very good at structuring code. It’s my favorite part of programming. I have a good gut feeling for it. Claude is my bitch. It’s here to do what I don’t like to do. It’s not very good at structuring code. So I provide the structure: files, boundaries, functions, interfaces. It’s fun, I know it will be good, and I can do it fast. Claude does the boring implementation work.

Seeing how Claude structures code on its own scares me. It’s often bound to be short-lived and headed toward a large refactor. That’s where bugs and regressions happen. This concern isn’t new, because humans do this too. But the speed and volume at which we can now do major refactorings have changed dramatically.

I’m scared of a future where the apps and tools I use are unstable, where you constantly encounter bugs and hear about more and more avoidable security issues.

This can be solved with AI. You can run security reviews, architecture reviews, or enforce strong code guidelines with agents. But to do that, you need expertise. You need to understand why certain things are non-negotiable, like never raw-dogging an authorization server. And right now, I don’t see enough people talking about that.

I’m not scared or sceptical of AI. I’m scared and sceptical of people. Sometimes even of myself.

]]>
<![CDATA[Ending the Season Images]]> https://timomeh.de/posts/ending-the-season-images?utm_source=rss https://timomeh.de/posts/ending-the-season-images Fri, 02 Jan 2026 22:50:00 GMT In October I wrote a post about a little experiment I implemented. An experiment I wanted to build since I was a teen. Now I’m removing this experiment again.

To recap briefly: back in the day, when I started creating websites, I was fascinated by the idea that the design of a website (or at least its header image) depicts its own little world, with its own simulated weather and urban scenes and people living there. So I finally did it, with AI, renewing the header images every 6 hours.

Screenshot of an AI-generated header image on the blog, depicting a night scene with a market stand and lots of fairy lights.

The magic wore off kinda quickly. It was neat to see how it added little scenes around Halloween, Christmas and New Year’s; but it was also predictable that it would create those scenes. Those special scenes weren’t actually special anymore. And over time, it felt like uninteresting noise. If an artist had created them, it would’ve been much more interesting. Even though it wasn’t 100% AI slop, because it had an interesting twist to it, it was still: just AI. People don’t just care about the art, they care about the artist.

I never wanted to keep those images here forever, and I’m happy this idea is now out of my head.

Here are a few oopsies that happened during the experiment:

  • To simulate the weather, the AI is prompted with the weather it generated for the previous days, to create new weather for “today”. It then uses this weather to write a new prompt for the actual image. But it started to write small scenes into the weather report itself, and then kept reusing the same scene: for a while, all images were set in a town with cobblestone streets, a church and a riverside.
  • The images need a vignette, which was added in a second step: it first creates the image without the vignette, and then uses this image to generate another image with the vignette. I thought OpenAI would automatically detect the image’s aspect ratio and generate the new image with the same ratio. But that was wrong: it started to create square images. So for a while, the vignetted images were weirdly cut off and too small.
  • During the React2Shell situation, image generation halted because I didn’t redeploy the cron. While I updated the Next.js version in my monorepo, there were no changes to the cron (because it doesn’t use Next.js), so its deployment still had the vulnerable version of Next.js installed, which we no longer allowed to run on Railway.
  • The AI was prompted to create scenes in a German city. But it did not create fireworks at midnight 2025 → 2026, and no streets littered with firework remains, which is just unrealistic af.
]]>
<![CDATA[Intentionally left ugly]]> https://timomeh.de/posts/intentionally-left-ugly?utm_source=rss https://timomeh.de/posts/intentionally-left-ugly Sun, 14 Dec 2025 17:14:00 GMT My favorite process in software engineering, whether I’m building something new or working on an existing feature, is to first focus purely on the functionality and completely ignore the design. Once everything works, I shift my full attention to making it look and feel great.

I learned this process over a decade ago, and I’ve been recommending it ever since to anyone who’s struggling while building something: struggling with refactors, with design collaboration, with pressure from planning and management, or simply with finding creative direction for the UI.

I first experienced this during an Android development course by Ben Jakuben on teamtreehouse.com. At the time, I was enrolled in a mobile app development class at university where we learned iOS development in Objective-C. But as an Android user, I wanted my final project to be an Android app. Having no clue about Android development, I looked around, found a course on Treehouse, and speedran through it over one or two late nights.

Ben did a fantastic job of guiding you through things logically and leaving you in a place where you could confidently keep learning on your own. The end result of the course was a Snapchat clone built with Pusher as a backend. It was refreshing to not just build a throwaway demo app, but a real media-based app that was genuinely relevant at the time.

The course focused heavily on getting the functionality right first: signup, login, creating posts, listing them, viewing them, expiring them so you could only see them once. Only after everything worked did Ben show how to layer a proper design on top, and really demonstrated the merits of this approach. You could see how the app suddenly came to life, without having to refactor everything again and risk breaking things.

That was a real “woah” moment.

Since then, I’ve often advocated for this approach and repeatedly experienced how much it can help.

  • When engineers are frustrated because designs keep changing and they have to keep adjusting their code: focus on the functionality first and don’t get distracted by work-in-progress designs.
  • When there’s friction between designers and engineers because they have different interpretations of how things should look and work: focus on the functionality first to build a shared understanding, so both parties can elevate the solution instead of getting stuck in disagreements.
  • When teams spend too much time tweaking small design details while major functionality is still missing: focus on the functionality first to reduce stress about big parts that are still missing.
  • When engineers are doing the design themselves but can’t quite figure out how it should look: focus on the functionality first so you can later give design your full attention.

Even though I’m such a huge fan of this process, it wasn’t possible to do in every project, and it doesn’t work equally well for every kind of product. Sometimes “functionality first” doesn’t mean “design later”, but something in between: where interactions are already considered in the functionality, while the visual polish is postponed. And sometimes you’re joining an existing team that works design-first, and you can’t just overthrow their approach: that wouldn’t be professional. But you can observe friction and frustration, and when it makes sense, recommend trying to focus on the functionality first.

Once everything works and you’re happy with how it behaves, there’s a good chance the design will naturally start to form in your head. It often just takes a shower, a night of sleep, or a walk. And if it doesn’t, you can still research and ask for help: “How do other apps solve this?”, or “This is how it should work, can you help me make it look and feel great?”.

The main downside of this process is that some people will assume what you’re building is already the final design. You know that it’s obviously not, but they don’t. Not everyone is familiar with this way of working, or with how quickly something ugly can turn into something polished. Not everyone can cleanly separate functionality from appearance in their head, and early visuals tend to stick. You’ll get questions about whether the design is already final. You’ll hear comments about spacing, colors, fonts and alignment.

This can be annoying, but don’t let it distract you. Part of this process is giving people a heads-up… often more than once: I’m focusing on functionality right now. We’ll make it pretty later. This is intentionally left ugly.

]]>
<![CDATA[Why no public buckets on Railway?]]> https://timomeh.de/posts/why-no-public-buckets-on-railway?utm_source=rss https://timomeh.de/posts/why-no-public-buckets-on-railway Sat, 29 Nov 2025 04:00:00 GMT Question: “Why no public buckets on Railway?”

This question came up a few times, and there’s no easy yes-or-no answer. Public buckets will come at some point, with more time. In the end, this was kinda my onboarding project and I had to scope stuff.

Public Buckets are good for static sites, but it’s already easy to host static sites on Railway. Buckets are cheaper, but the added benefit for the platform isn’t as big as with private buckets, so they weren’t high on the list of priorities.

Bucket UX is not super easy to get right, and many providers solve this with a list of configuration options, assuming users know what they all mean and when to use them. Public buckets have lots of footguns. It’s not rare to hear about security incidents caused by misconfigured public buckets. For most things that aren’t just static assets, you want a backend for authorization.

For caching and saving egress on static assets, CDNs work great. I’m using Cloudflare’s CDN for my stuff, and we’re also cooking on making it easier to add a CDN.

Is the solution just a “public” checkbox? Maybe it will be, but it’s not super clear to me that it really is the best way. If I had gone down that path and added it, it would’ve likely resulted in more than just a checkbox. It risks turning into a configuration hell where you don’t really know what those settings actually do, and combined with the security issues, the end result risks causing more problems. Maybe even so many that I’d rather remove it again and solve it better in the future. But you can’t simply take features away.

Ideally we have something that doesn’t require you to know what a bucket actually is, what the difference between a bucket and a CDN is, and what “public” or “private” actually means. Something that works perfectly for what you’re trying to build.

]]>
<![CDATA[About this thing that I always wanted to do]]> https://timomeh.de/posts/about-this-thing-that-i-always-wanted-to-do?utm_source=rss https://timomeh.de/posts/about-this-thing-that-i-always-wanted-to-do Sun, 19 Oct 2025 23:00:00 GMT More than 15 years ago, I had this idea to autogenerate the header image of my website based on the current time of day, season, location, and a simulated weather pattern that naturally progresses. It fascinated me to have a long-running, realistic-feeling, autonomous system that changes itself without my involvement—like its own little world. I liked the idea of opening my own site and being surprised by what I’d see. “Oh, it’s snowing!”, then watching the snow melt away some time later, or observing the bleakness of misty autumn days.

Back then, I tinkered with layering PNGs on top of each other to create “random” scenes, but it looked terrible. I can design websites, but I can’t draw nice pictures.

I never ended up doing it because 1) I didn’t miraculously become a good artist, and 2) who cares.

Well, I care. I’m older now, and the fascination is definitely weaker, but I still thought about it every year. When my favorite time of year starts, I get this itch. And this year, I finally scratched it.

AI made this much easier to solve. Not just for creating the images, but also for simulating the weather progression based on the time of day, the season, and previous days. Everything is now truly unpredictable: there isn’t a single line of code from which I could guess what will happen next.

New scenes get generated four times a day, and I feed the AI with previous days to create a natural progression.

I’m storing all the prompts, images and weather simulations (in a Railway Bucket of course).

]]>
<![CDATA[You can tell me, I can take it]]> https://timomeh.de/posts/you-can-tell-me-i-can-take-it?utm_source=rss https://timomeh.de/posts/you-can-tell-me-i-can-take-it Mon, 25 Aug 2025 23:48:00 GMT I don’t like many of my development-related blog posts for one reason: I often feel like I spend too much time explaining the problem instead of getting straight to the point.

Understanding the problem is just as important as understanding how to solve it. So I don’t want to skip explaining the problem properly. And I want to include readers who might not be familiar with the topic. But whenever I start writing, I imagine readers rolling their eyes in light of the “obviousness” of it all.

I often think back to “Words To Avoid in Educational Writing” by Chris Coyier. He warns against using certain common words that make readers feel dumb. Just because you find something obvious doesn’t mean they do.

I’m certainly guilty of using those words myself sometimes, but the message stuck with me. That’s why I always try to explain problems thoroughly: because the reader might not know this yet, and it could be valuable knowledge for them, and I want them to feel included.

Still, I can’t shake the insecurity. Whenever I explain a problem, I worry readers will think that I’m a bit of a dum-dum for explaining something “everyone” already knows. Even when editing my posts before publishing them, I spend most of my time refining the problem explanation, trying to make it not too long but still explain everything.

Do I explain problems too much? You can tell me, I can take it!

]]>
<![CDATA[Vary for images on Cloudflare CDN for free]]> https://timomeh.de/posts/vary-for-images-on-cloudflare-cdn-for-free?utm_source=rss https://timomeh.de/posts/vary-for-images-on-cloudflare-cdn-for-free Sun, 24 Aug 2025 23:23:00 GMT Let’s rebuild a paid Cloudflare feature … on Cloudflare, for free.

Vary for images is a Cloudflare CDN feature that caches multiple variants of the same URL, based on the browser’s specific capabilities. It’s a paid feature. I want it, but I didn’t want to pay for it. That’s no issue, because we can rebuild this functionality on Cloudflare completely for free, with just a bit more configuration.

What’s the problem that I’m trying to solve?

I have a small image optimization service running. It can generate different image sizes and compresses the image on the fly, to improve loading times for users. It can also turn the image into a different format, like WebP or AVIF, if the browser supports it.

This means that the URL timomeh.de/keyboardcat.png can return an image that’s not really a PNG. When the browser supports AVIF, it returns an AVIF, even though the URL ends with .png. When it supports WebP, it returns a WebP, still using the .png extension. When it supports neither, it actually returns a PNG. We have the same URL, but different files can be returned.

The image optimization service does this by looking at the Accept header to see which image formats the browser supports, and then decides which format to return. This is called content negotiation.
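As a rough sketch of the idea (this is not my actual service’s code; the function name and preference list are made up for illustration), content negotiation boils down to something like:

```javascript
// Illustrative sketch of content negotiation -- not the real service code.
// Pick the best image format the browser advertises in its Accept header,
// in our order of preference.
const PREFERENCE = ['image/avif', 'image/webp']

function pickFormat(acceptHeader) {
  const accepted = (acceptHeader || '')
    .toLowerCase()
    .split(',')
    .map((part) => part.split(';')[0].trim()) // drop q-values like ";q=0.8"

  for (const format of PREFERENCE) {
    if (accepted.includes(format)) return format
  }
  return null // fall back to the original format, e.g. PNG
}

console.log(pickFormat('image/avif,image/webp,image/*;q=0.8')) // "image/avif"
console.log(pickFormat('image/webp,image/*;q=0.8')) // "image/webp"
console.log(pickFormat('image/jpeg,image/png')) // null -> serve the PNG
```

A real implementation would also weigh q-values, but for image formats a simple preference order like this usually does the job.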

But this image optimization service is doing all of this on the fly. It doesn’t cache images that it has already generated. Optimizing each image on every request is a bit slow and wasteful. We could add caching into our image optimization service, but that’s more work. We can also just use a CDN, like Cloudflare, and let it do the caching for us.

But if we just enable Cloudflare’s free CDN for our image optimization service, we’ll run into an issue.

Cloudflare’s CDN caches files by their URL. When someone requests timomeh.de/keyboardcat.png, it fetches the image from my service and caches it. The next time someone tries to load that file, Cloudflare sees that it already has a file cached for that URL, and returns it directly to the user.

So when the first user’s browser supports AVIF, Cloudflare will cache the optimized AVIF file. And when a second browser doesn’t support AVIF, it will also get the cached AVIF file, and it won’t be able to display the image. That’s not great.

To solve that, Cloudflare offers Vary for images, which also does content negotiation, and only returns cached variants that the browser supports. This is a paid feature.

How we can still use content negotiation on Cloudflare’s CDN for free

Cloudflare also has free URL Rewrites, which are actually pretty powerful! We can even rewrite a URL based on an HTTP header, like the Accept header. You might see where this is going: we can use this to rewrite the URL into multiple different URLs for the different image formats, based on what format the browser supports.

Let’s create the first URL Rewrite rule. Go into Cloudflare’s Dashboard, navigate to Rules and create a new rule. Choose URL Rewrite Rule. We’ll use a custom filter expression:

(http.host eq "timomeh.de"
 and any(lower(http.request.headers["accept"][*])[*]
   contains "image/avif"
))

Then, under “Rewrite to”, we’ll use a static query param, like ?format=avif.

Now, when a user requests timomeh.de/keyboardcat.png and the browser supports AVIF, then the URL is rewritten to timomeh.de/keyboardcat.png?format=avif. And this new URL is used to cache the file on the CDN.

My image optimization service does absolutely nothing with this query parameter. It doesn’t look at it; it still just looks at the Accept header. The only reason for this query parameter is to give Cloudflare’s CDN a unique URL.

We can create another rewrite rule for only WebP support:

(http.host eq "timomeh.de"
 and any(lower(http.request.headers["accept"][*])[*]
   contains "image/webp")
 and not any(lower(http.request.headers["accept"][*])[*]
   contains "image/avif")
)

…and we’ll use ?format=webp as rewrite query.

In total, this will automatically get us 3 different URLs:

  • timomeh.de/keyboardcat.png?format=avif
  • timomeh.de/keyboardcat.png?format=webp
  • timomeh.de/keyboardcat.png

They will be separately cached. But we don’t have to link to those 3 different URLs ourselves. We can just continue to link to timomeh.de/keyboardcat.png. Cloudflare does the rest for us.
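The combined effect of the two rules can be sketched as a tiny function. To be clear: this is only a simulation of what Cloudflare does for us, nothing in the actual setup runs this code. And like the filter expressions above, it uses naive substring checks on the Accept header, not full Accept parsing:

```javascript
// Simulation of the two URL Rewrite rules -- Cloudflare runs this logic,
// not us. The if-order encodes the "and not contains image/avif" clause
// of the second rule: AVIF wins whenever the browser supports it.
function cacheUrl(path, acceptHeader) {
  const accept = (acceptHeader || '').toLowerCase()
  if (accept.includes('image/avif')) return `${path}?format=avif`
  if (accept.includes('image/webp')) return `${path}?format=webp`
  return path // no rewrite: cached under the plain URL
}

console.log(cacheUrl('/keyboardcat.png', 'image/avif,image/webp,image/*;q=0.8'))
// -> "/keyboardcat.png?format=avif"
console.log(cacheUrl('/keyboardcat.png', 'image/webp,image/*;q=0.8'))
// -> "/keyboardcat.png?format=webp"
console.log(cacheUrl('/keyboardcat.png', 'image/jpeg,image/png'))
// -> "/keyboardcat.png"
```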

Done

And now we have content negotiation for free!

To make sure everything works, we can quickly check that content negotiation behaves properly by making 3 different curl requests:

$ curl -I \
    -H "Accept: image/avif,image/webp,image/*;q=0.8,*/*;q=0.5" \
    "https://timomeh.de/keyboardcat.png"
HTTP/2 200
content-type: image/avif
content-length: 36665
content-disposition: inline; filename="keyboardcat.avif"
server: cloudflare
cf-cache-status: HIT

$ curl -I \
    -H "Accept: image/webp,image/*;q=0.8,*/*;q=0.5" \
    "https://timomeh.de/keyboardcat.png"
HTTP/2 200
content-type: image/webp
content-length: 35696
content-disposition: inline; filename="keyboardcat.webp"
server: cloudflare
cf-cache-status: HIT

$ curl -I \
    -H "Accept: image/jpeg,image/png,image/*;q=0.8,*/*;q=0.5" \
    "https://timomeh.de/keyboardcat.png"
HTTP/2 200
content-type: image/png
content-length: 59883
content-disposition: inline; filename="keyboardcat.png"
server: cloudflare
cf-cache-status: HIT

We can see it’s working: the first one returns an AVIF, the second one a WebP, and the third returns a PNG. All served by Cloudflare, and we got nice cache hits.

]]>
<![CDATA[A Better Button Component with Composition]]> https://timomeh.de/posts/a-better-button-component-with-composition?utm_source=rss https://timomeh.de/posts/a-better-button-component-with-composition Wed, 02 Jul 2025 18:01:00 GMT Do I really need to write an introduction to button components? We all know them. Buttons are one of the most commonly used components in our user interfaces, and are also some of the messiest components, with crappy interfaces and complex implementations for something as simple as a freaking button, and they only get worse as your codebase ages.

In this post, I’ll briefly explain why button components are a classic problem in software engineering, how composition can solve this problem, and how I implement complex buttons with simple code these days.

What’s our problem with buttons?

Buttons seem so simple at first, but we need a lot from them: Primary buttons, secondary buttons, red buttons for destructive actions, bordered buttons, transparent buttons. In forms we need them to be an actual HTML <button> element, sometimes as type="submit" and sometimes as type="button". Sometimes they aren’t buttons, they are links and should be an <a> element, unless it’s client-side routing where we need to use a special <Link> component. Buttons come in different sizes. They should support icons on the left, icons on the right, or only an icon and no text—but then of course with a tooltip! They can be disabled, but sometimes they can look disabled even though they are not actually disabled. They can have a loading state, so it also needs to support that A͟N͟D͟ T͟H͟E͟N S͟O͟M͟E͟T͟I͟M͟E͟S T̷H̴E̶ ̶B̸U̶T̶T̴O̵N̸S̶ 𝙏𝙃𝙀𝙔 𝘼𝙍𝙀 𝘽𝙐𝙏𝙏𝙊𝙉𝙎 𝘛𝙃𝘼𝙏 𝘼𝙍𝙀𝙉’𝙏¿!? ✖︎ B̷U̸T̶T̷O̴N̸S̶ — THEY REFUSE to exist ╰┈➤ t͞h̸e͟y͞ ͞m͡e͜l͜t͡ ͟i͠n͜t̛o̕ ̴t̕h̨e͝ ͘c̛o͞d͡e͢, THEY TALK <Button prop="kill_me" /> THE PROPS ARE SCREAMING AT ME > Warning: prop 'scream' is not recognized on the DOM element. Did you mean 'screm'? onClick={() => END_MY_SUFFERING()} ⠀⛓ ᴀɴᴅ ʏᴇᴛ ɪ sᴛɪʟʟ ʜᴀᴠᴇ ᴛᴏ ᴅᴇᴀʟ ᴡɪᴛʜ as="a"

It’s a lot.

Buttons have to address many concerns. And our solution is really to try to squeeze all of that into a single component? Really?

You might say: “hang on, we don’t always use a single component!” And you’re right—sometimes you’ll see separate components, such as an <IconButton>. However, I’d argue that this often makes things worse. Because now you have to implement many of those complex concerns twice. While you remove a single concern from your big bad button component (for example, the concern that it only renders an icon and no text), your new special <IconButton> still looks like a button, can be disabled and hovered like a button, and needs to share most of the same styles. You’ll also need to implement the polymorphism twice to render different clickable HTML elements.

This problem sounds familiar! It seems like we’re dealing with separation of concerns, and simply refactoring it into an <IconButton> is the wrong separation. We don’t usually solve separation of concerns like this—we solve it with layering and composition!

The button component is nothing more than a classic software engineering problem that we usually solve with composition… yet I’ve rarely seen it solved with good composition!

Moving the clickable element out of the button

Forget the idea of using a single button component. From now on, we’ll think of every button in our UI as multiple layered components.

The base layer of every button is an unstyled, native interactive element, such as a button element or an anchor, in which we’ll nest a component that looks like a button:

<button>
  <PrimaryButton>Submit form</PrimaryButton>
</button>

<a href="/pricing">
  <PrimaryButton>View our pricing</PrimaryButton>
</a>

function PrimaryButton(props) {
  return (
    <div className="p-4 bg-brand border border-brand-500 rounded">
      {props.children}
    </div>
  )
}
Fig. 1: Basic composition by moving the clickable element out of the button component.

We immediately and completely remove polymorphism from our button components. That’s already a huge improvement. Polymorphic components are a big pain in the ass, and I try to avoid them wherever possible.

Note that we won’t encapsulate our layered button into a single button component again. That would just result in a messy polymorphic component with lots of props and complexity. We will always compose the layers where we need them.

We can now achieve a lot with pure CSS. Adding hover and disabled states is straightforward and elegant:

<button disabled>
  <PrimaryButton>Submit form</PrimaryButton>
</button>

function PrimaryButton(props) {
  return (
    <div
      className="
        p-4 bg-brand border border-brand-500 rounded
 // [!code ++:2]
        not-in-disabled:hover:bg-brand-600
        in-disabled:bg-brand-300 in-disabled:text-gray
      "
    >
      {props.children}
    </div>
  )
}
Fig. 2: Styling button states based on the parent element's state.

It’s great that we can solve this simply by adding CSS, without having to change any existing code. There’s no need to concatenate classes dynamically, nor did we need to add an attribute to our button.

However, if necessary, we can also add a disabled attribute to our button component. This would be useful if we needed a button that looks disabled but isn’t.

<button>
  <PrimaryButton disabled>Submit form</PrimaryButton>
</button>

function PrimaryButton(props) {
  return (
    <div
      data-disabled={props.disabled} // [!code ++]
      className="
        p-4 bg-brand border border-brand-500 rounded
// [!code highlight:2]
        not-data-disabled:hover:bg-brand-600
        data-disabled:bg-brand-300 data-disabled:text-gray
      "
    >
      {props.children}
    </div>
  )
}
Fig. 3: Styling button states explicitly with attributes.

I’ve seen some really messy solutions to this problem, such as two separate visuallyDisabled and disabled attributes, where it’s unclear in which combination they should be used. But with composition, we don’t need to worry about that! We can immediately see from the way it’s used that we have a clickable button that isn’t disabled, and it has a visual child that looks disabled.

I cannot stress enough how powerful this is. We’re eliminating ambiguity and complexity while writing code that documents itself.

Isn’t this against DRY?

In the past, I’ve heard engineers argue that this approach is bad because you end up repeating yourself more; for example, writing <button><Button /></button> many times. It’s often seen as a violation of the “Don’t repeat yourself” (DRY) principle.

But DRY doesn’t mean that you’re never allowed to repeat yourself. It’s not about avoiding repetition at any cost. It’s about avoiding duplication of logic and knowledge.

In our case, the repetition is fine. It’s markup. The alternative, like <Button as="a" href="/pricing">, is the same repetition in a worse syntax.

Adding more layers on top

Now that our base is in place, we can start adding individual components for each concern and layer them into our button.

For example, let’s say we want to add a loading state to our buttons. This means:

  • The button should appear disabled.
  • The button itself should be disabled so that users cannot re-submit a loading form.
  • The button’s content should be replaced with a loading state.

Here’s how we can implement this by separating each concern into its own layer:

<button disabled>
  <PrimaryButton disabled>
    <LoadingLabel label="Saving…" active={isSubmitting}> // [!code ++]
      Save
    </LoadingLabel> // [!code ++]
  </PrimaryButton>
</button>

// [!code ++:13]
function LoadingLabel(props) {
  if (props.active) {
    return (
      <>
        <SpinIcon />
        <span role="status" aria-busy>
          {props.label || 'Loading…'}
        </span>
      </>
    )
  }
  return <>{props.children}</>
}

function PrimaryButton(props) {
  return (
    <div
      data-disabled={props.disabled}
      className="
        p-4 bg-brand border border-brand-500 rounded
        not-data-disabled:hover:bg-brand-600
        data-disabled:bg-brand-300 data-disabled:text-gray
      "
    >
      {props.children}
    </div>
  )
}
Fig. 4: Adding a loading state for buttons with composition.

With composition, we were able to solve it without a single change to the PrimaryButton. That’s a good thing! It shows that our composition is working well.

Another positive sign is that all of our attributes are in the right place. We added a label and active attribute, both of which are obvious in the context of the LoadingLabel component. This new component addresses one specific concern, and all its attributes are related to this concern.

To illustrate a poor design choice, let’s assume that we would add the loading state concern into the PrimaryButton. This would result in something like this:

<button disabled>
  <PrimaryButton
    disabled
    loadingLabel="Saving…"
    isLoading={isSubmitting}
  >
    Save
  </PrimaryButton>
</button>
Fig. 5: Anti-pattern for a loading state by overloading the button component with additional attributes.

This is a whack component interface! The attributes loadingLabel and isLoading are related to each other, but disabled is not. All attributes have to be optional because not every button has a loading state. However, setting loadingLabel makes no sense without isLoading. The relationship between the attributes is not immediately obvious. This interface requires documentation.

If you have a whack component interface like this and your attributes are a mess of unrelated concerns, it’s a clear sign that you need to separate these concerns into different layers.

Different button variants

Until now, we have only used a single PrimaryButton. In reality we often have different button variants, such as a SecondaryButton in a different color, or a transparent button, or a red button for destructive actions.

We could simply create an attribute for this, which would be an acceptable solution. But I’ll explain why we might not want this:

<button>
  <Button variant="primary"> // [!code highlight]
    Submit
  </Button>
</button>

<button type="button">
  <Button variant="secondary"> // [!code highlight]
    Reset
  </Button>
</button>
Fig. 6: Button variants with attributes.

According to what I wrote in the previous section, an attribute is sufficient for the variant. We don’t need to add another layer. It still only controls the visual appearance of the button’s base.

But it’s not the implementation that I would choose. I think it’s fine to have separate components for each variant, all within the same file:

function PrimaryButton(props) {
  return (
    <div
      className="
        p-4 bg-brand border border-brand-500 rounded
        not-in-disabled:hover:bg-brand-600
        in-disabled:bg-brand-300 in-disabled:text-gray
      "
    >
      {props.children}
    </div>
  )
}

function SecondaryButton(props) {
  return (
    <div
      className="
        p-4 bg-gray border border-gray-500 rounded
        not-in-disabled:hover:bg-gray-600
        in-disabled:bg-gray-300 in-disabled:text-gray
      "
    >
      {props.children}
    </div>
  )
}

function TransparentButton(props) {
  return (
    <div
      className="
        p-4 bg-transparent border border-transparent rounded
        not-in-disabled:hover:bg-gray-100
        in-disabled:bg-gray-200 in-disabled:text-gray
      "
    >
      {props.children}
    </div>
  )
}
Fig. 7: Button variants with individual components.

Although we’re repeating some styles multiple times, the repeated code is close together, so I don’t consider it a big problem.

Even with different button sizes, I’d still prefer a <PrimaryButton size="sm"> over a <Button variant="primary" size="sm">.

There are two main reasons why I prefer separate components:

  1. It keeps your design in mind instead of supporting every combination of variants and sizes.
    Some button variants only come in specific sizes. For example, the secondary button may always be medium and never large. One benefit of keeping the variants in separate components is that you don’t have to support all possible combinations of variants and sizes.
  2. It discourages contributors from changing a generic button component for “one-off buttons”.
    Instead, it encourages your peers and contributors to create separate components for new buttons that look slightly different, because there’s a high chance they’re one-off buttons. You might ask: “what’s a one-off button?”

Make one-off buttons not reusable

Not all components in a design will be reused. For instance, we might use a huge button with a shiny effect on the landing page that is never used again.

There’s often the tendency to change the button component and add attributes like size="xxl" and variant="shiny" to it, even though it’s only used once in this exact combination. While it may seem like a small, quick change and we can reuse the existing button, we then make the decision to carry this burden around. Maintaining the button will be more difficult because we’ll always have to check that our special styling won’t break. At some point, the huge shiny button on the landing page is removed, but nobody removed the styles from the reusable button component.

If you encounter a button style that you haven’t seen before and it’s only used in one place, I recommend writing it inline instead of extending the existing button component. This tells all contributors that it is not meant to be reusable and only supports exactly what was needed.

<a href="/signup">
  <div className="p-8 bg-brand effect-shine text-xxl rounded">
    Sign up
  </div>
</a>
Fig. 8: Implementing a one-off button inline. You don't always need to use a button component.

We don’t need to think about the styles when the special button is disabled because it cannot be disabled. And when it’s removed, no remnants are left behind.

Example: icons in buttons

To wrap up with some patterns that I use often: you can add support for icons with just some CSS. This works well for left icons, right icons, and icon-only buttons without text:

function PrimaryButton(props) {
  return (
    <div
      className="
        p-4 bg-brand border border-brand-500 rounded
        not-in-disabled:hover:bg-brand-600
        in-disabled:bg-brand-300 in-disabled:text-gray

// [!code ++:3]
        inline-flex flex-wrap
        gap-1.5 items-center justify-center
        *:[svg]:size-4 *:shrink-0
      "
    >
      {props.children}
    </div>
  )
}

<button>
  <PrimaryButton>
    <FloppyDiskIcon /> // [!code highlight]
    Save
  </PrimaryButton>
</button>

<a href="/page/2">
  <PrimaryButton>
    Next page
    <ArrowRightIcon /> // [!code highlight]
  </PrimaryButton>
</a>

<Tooltip label="Edit">
  <button disabled aria-label="Edit">
    <PrimaryButton>
      <PencilIcon /> // [!code highlight]
    </PrimaryButton>
  </button>
</Tooltip>
Fig. 9: Simple support for icons in buttons.
  • Icons can be added as simple children.
  • The icons are automatically sized.
  • It adds space between icons and text.
  • It ensures that the button wraps nicely.
  • We use composition for the tooltip instead of adding a tooltip into our button.

For more advanced cases—when you feel the need to add an attribute to the PrimaryButton to control the icons—keep in mind that it’s better to separate the concerns and use composition. Don’t overload the button component with weird attributes again. A separate component that only handles the icon layout will be easier to maintain, and we can use it as an overlay on the base button.

<button>
  <PrimaryButton>
    <IconLabel start={<FloppyDiskIcon />}> // [!code highlight]
      Save
    </IconLabel>
  </PrimaryButton>
</button>
Fig. 10: Using composition for icons in buttons.

Bonus: Focus rings

Sadly, focus styles are often overlooked and lack sufficient emphasis. Often, they simply mirror the hover styles, but hover and focus are two very different things. A hover style alone isn’t enough to catch your eye when navigating with a keyboard.

Focus rings are an effective solution because they make the focus state highly visible. And because we separated the button into multiple layers, adding focus rings for all elements is quick and easy:

:focus-visible {
  outline-color: var(--color-brand-300);
  outline-offset: 4px;
  outline-width: 2px;
  border-radius: 2px;
}
Fig. 11: Global styles for focus rings of clickable elements.

The focus ring is now a concern of the native button and anchor elements, so we can style it independently of the button component. It does not affect the button component at all, and the focus rings will always be present and consistent.

Of course, you could still define a focus style in the button component. But you don’t need to. It works great without it.

Tip: avoid className as attribute

Adding a className attribute to a component is one of the easiest ways to shoot yourself in the foot. It opens up the component too much, and quickly becomes hard to control. It invites contributors (and even future-you!) to mess with the internal styles from the outside, making it harder to change anything without breaking stuff.

Just don’t.

Reflection

So, here we are! What started as “I just wanna write a short post about this pattern I’m using for buttons” quickly turned into a long article.

We talked a lot about buttons because, as we all know, they’re often the worst offenders in a codebase. At this point, “the button component” has basically become a meme. But somewhere along the way, it feels like we’ve started to accept that these components are inherently messy.

I asked ChatGPT to “write me a button component in React that implements all the necessary features that I will need”, and it shat out this:

import React from "react";
import { cn } from "@/lib/utils";

export type ButtonProps = {
  asChild?: boolean;
  variant?: "default" | "destructive" | "outline" | "ghost" | "link";
  size?: "default" | "sm" | "lg" | "icon";
  loading?: boolean;
  disabled?: boolean;
  type?: "button" | "submit" | "reset";
  children?: React.ReactNode;
  className?: string;
  onClick?: React.MouseEventHandler<HTMLButtonElement>;
} & React.ButtonHTMLAttributes<HTMLButtonElement>;

const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
  (
    {
      variant = "default",
      size = "default",
      loading = false,
      disabled = false,
      type = "button",
      className,
      children,
      ...props
    },
    ref
  ) => {
    return (
      <button
        ref={ref}
        type={type}
        disabled={disabled || loading}
        className={cn(
          "inline-flex items-center justify-center rounded-md text-sm font-medium transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:opacity-50 disabled:pointer-events-none",
          {
            default: "bg-primary text-white hover:bg-primary/90",
            destructive: "bg-red-500 text-white hover:bg-red-600",
            outline: "border border-input bg-background hover:bg-accent hover:text-accent-foreground",
            ghost: "hover:bg-accent hover:text-accent-foreground",
            link: "underline-offset-4 hover:underline text-primary",
          }[variant],
          {
            default: "h-10 px-4 py-2",
            sm: "h-9 px-3",
            lg: "h-11 px-8",
            icon: "h-10 w-10",
          }[size],
          className
        )}
        {...props}
      >
        {loading ? <span className="animate-spin mr-2 h-4 w-4 border-2 border-current border-t-transparent rounded-full" /> : null}
        {children}
      </button>
    );
  }
);
Button.displayName = "Button";

export { Button };
Fig. 12: A horrible button component written by ChatGPT, as it exists in many projects.

This is why the button component is a meme. It’s a brittle, hard-to-maintain mess. But it also looks like something I’ve seen a lot in codebases—and, honestly, it’s something I’ve contributed to in the past. It shouldn’t be like this.

While I only talked about buttons in this article, the same patterns can and should be used for other UI components as well. Buttons are just the worst offenders, but they are not the only ones. I now use the same layered UI composition patterns for many more things, and I believe it is some of the best frontend code I have ever written.

]]>
<![CDATA[What's that LaCroix flavor?]]> https://timomeh.de/posts/whats-that-lacroix-flavor?utm_source=rss https://timomeh.de/posts/whats-that-lacroix-flavor Tue, 03 Jun 2025 22:10:00 GMT ]]> <![CDATA[I'm using em dashes—but I'm not an AI]]> https://timomeh.de/posts/i-am-using-em-dashes-but-i-am-not-an-ai?utm_source=rss https://timomeh.de/posts/i-am-using-em-dashes-but-i-am-not-an-ai Mon, 19 May 2025 20:48:00 GMT Please don’t make the em dash—the long hyphen I’m using here—an indicator of AI-written text. I recently saw a post on LinkedIn (it pains me to admit it) where someone wrote a script to hide all comments containing an em dash because they’re highly likely to be AI-generated. I think there’s truth to it. But I also hate the fact that I think there’s truth to it.

For a long time, I used en dashes – the shorter version of the em dash. I always used it with a space, then the en dash, then a space. I liked how it structured sentences visually and caused me to read them with a small pause. I think it looks much nicer than parentheses (which always make me feel like I’m doing maths) and I also think it structures sentences better than commas, especially when the sentence contains a list of comma-separated words.

I used en dashes until I saw a video by John Green where he explained that I had been using them incorrectly all this time. The correct usage is an em dash—you don’t even need to add spaces, although you can.

You can use an en dash for ranges, such as 2–3pm. You can also use an en dash to reduce the ambiguity of a regular hyphen. In John Green’s example, it’s ‘New York–style pizza’ and not ‘New York-style pizza’. This makes it clear that it’s a style of pizza from New York, and not a new pizza in York style. Or, as Merriam-Webster defines it, “in compound adjectives when at least one of the elements is a two-word compound”.

However, you shouldn’t use the en dash as a replacement for a comma, semicolon or parentheses. For this, you use the em dash. —

So I started using the em dash more often as a result. And I looove using it. I still think it’s a great way to structure a sentence visually without cluttering it up, and it makes me pause when I read it.

However, more recently, I’ve noticed that I’ve been trying to avoid using the em dash. I’ve caught myself typing it and then removing it again to replace it with a comma. Simply because I noticed that AI uses em dashes a lot, and I’m afraid people will think my text is AI-generated, even though I’m not.

Is this the slow death of em dashes?

I think I’ll just embrace them and use them whenever I want. Maybe I’ll sometimes surround them with spaces, which AI doesn’t seem to do. Maybe I’ll use them more than necessary to shove them into people’s faces. Perhaps people will think my text is AI-generated, but then again, maybe I’ll just write it so badly that no AI could produce such rubbish.

AI might take my job, but it won’t take my em dashes away!

]]>
<![CDATA[Saw a cat]]> https://timomeh.de/posts/saw-a-cat?utm_source=rss https://timomeh.de/posts/saw-a-cat Sat, 03 May 2025 22:24:00 GMT Saw a cat, took some photos. Cat looked like it was having one of those days.

a brown-grey cat sits on a stone wall and looks majestically to the side, paws tucked in the front, green plants in the background the cat is still sitting on the wall, but looks down sadly (or just very tired)]]>
<![CDATA[How and when should I actually migrate my database?]]> https://timomeh.de/posts/how-and-when-should-i-actually-migrate-my-database?utm_source=rss https://timomeh.de/posts/how-and-when-should-i-actually-migrate-my-database Fri, 02 May 2025 15:23:00 GMT For a long time, across different projects throughout my career, I’ve seen database migrations happen during application startup. These migrations usually run as part of a post-deployment hook, just before the new deployment receives any production traffic. But… what’s happening in this short (or sometimes longer) timeframe after the migration is done and before the new app boots, while the old app is still active?

  • What if you’re doing a slow rollout? Won’t the old instances fail because the database changed?
  • What if your new code has an issue and you have to do an immediate rollback? Won’t the old version now also fail until the database is rolled back? Wouldn’t that result in a big disruption of service?
  • What if your migration takes a long time? Will it take a long time until your new code is deployed as well?

I had so many questions, but the root one was: “do I have to worry about this at all?” The answer was yes, because I saw issues and incidents happen where those questions suddenly became reality.

And over time, I built up an understanding and realized the answers to those questions—but I also realized how this seemingly dangerous process can actually be pretty chill. So let’s do a quick dive into when (and how) to seamlessly migrate your database.

How a small change turns into a stressful experience

A popular example is renaming a database column. Let’s take a look at what happens if you just rename a column in a single migration with a simple ALTER TABLE RENAME COLUMN SQL statement:

  1. You change the schema locally, rename the column, and trigger a deployment to production.
  2. At some point during deployment, the migration runs.
  3. The migration will lock the whole table—no reads and writes to it anymore for a short amount of time. In the example of a simple rename, this is a quick operation.
  4. The migration was successful, but the deployment is not fully done, and the old app is still active and receives traffic. The old app will now start to throw errors because it’s trying to access a column that no longer exists. Right now, your users will sometimes see errors popping up.
  5. You sit there, your butt cheeks clenched, and you wait until hopefully all new instances have booted and receive 100% of the production traffic.
  6. All looks good, the new version is active. You feel relieved, and your butt cheeks go back into a relaxed state.
  7. 10 minutes later, you’re alerted to an incident: There’s a high error rate. Your change introduced a critical bug. Luckily, you are prepared for this, because you can roll back to the previous version within seconds with 1 simple click.
  8. But your old code uses the old column name, and the rollback did not revert your migrations. The rolled-back version still throws errors. Maybe it refuses to boot at all.
  9. Shit. What now? Are you prepared to run a script to revert the database migration? Do you push a new commit and wait for multiple minutes until it’s deployed?
illustration of an astronaut falling scared into a spiral, holding his clenched buttocks

Of course, it doesn’t always have to result in an incident. But there’s a short amount of time where users will see errors, and it’s definitely a more risky situation. So what can we do instead?

Be backwards compatible… multiple times

To prevent all those issues, we have to divide our migrations into multiple backwards compatible changes. Let’s continue with the example of renaming a column, and how we would divide this into multiple steps:

  1. Add a new column to the schema and run the migration. Don’t migrate the data yet; just update the schema. Your code will still reference the old column. This new column will be empty and should have no default value—then you’re not locking the table. Simply don’t touch the old column, and you won’t break anything.
  2. Migrate the data separately. Do it in batches to avoid database load and locking the whole table. Depending on your setup, this is likely not even a part of your automated deployment pipeline, but instead just a script that you’re executing manually. I’ll talk more about those manual tasks later in this article.
  3. Update and deploy your code to reference the new column. Depending on what data you’re migrating, it’s likely that you want to read and write from both columns.
  4. You should now be in a state where both columns are always in sync. Be sure to do a sanity check to ensure that’s the case.
  5. Now remove any references in your code to the old column and deploy it. Do not yet delete the old column, just don’t access it anymore.
  6. After the new version is fully rolled out to production, the old column is not used anymore, and you can finally drop the old column from the schema. You can wait a few days before you delete the old column, to increase the time frame of backwards compatible changes.
illustration of a happy astronaut flying away from a spiral, with lines indicating relieve coming from his buttocks

Because we’re doing a bunch of backwards compatible changes, if we fuck up at any stage, we can try again and no one will notice. And we can also do instant rollbacks to a previous version in an emergency. The longer we take to do the last step, the more room we give ourselves to roll back.
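Step 2, migrating the data in batches, can be sketched roughly like this. Everything here is illustrative: the users table, the old name and new full_name columns, and the runQuery function are all made-up stand-ins for whatever schema and database client you actually use.

```javascript
// Sketch of a batched backfill script. "runQuery" stands in for your
// database client; table and column names are hypothetical examples.
async function backfillFullName(runQuery, batchSize = 1000) {
  let migrated = 0
  for (;;) {
    // Copy a small batch of not-yet-migrated rows per statement,
    // so we never hold a long lock on the whole table.
    const { rowCount } = await runQuery(
      `UPDATE users SET full_name = name
       WHERE id IN (
         SELECT id FROM users
         WHERE full_name IS NULL AND name IS NOT NULL
         LIMIT $1
       )`,
      [batchSize]
    )
    migrated += rowCount
    if (rowCount === 0) break // nothing left to migrate
    // Optionally sleep here between batches to reduce database load.
  }
  return migrated
}
```

The loop keeps going until an UPDATE touches zero rows, so the script is safe to re-run if it gets interrupted halfway through.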

Now that we know how and why we safely migrate our database, when and where should we perform these steps?

Treat it as an async process in multiple one-off tasks

You can do all of these steps in an async manner. And that’s what I think is the beauty of it. Those steps don’t have to happen as part of your automated deployment pipeline, and you don’t even have to stress about getting it done quickly during a single workday. You can take your time!

In this setup, you wouldn’t even need to migrate the schema during a deployment. It can totally happen during normal operations. And if you don’t want to block the deployment pipeline with a long-running migration, and potentially block any urgent fixes, maybe it shouldn’t happen as part of a regular deployment.

But it’s also good practice not to raw-dog your production database from your local machine and execute arbitrary SQL.

Prepare your steps as separate scripts, and let your colleagues review them. This isn’t about code style or maintainability—in the end, they’re just one-off scripts. But your colleagues may see some typos, or issues where you’ll accidentally cause high database load.

And then have a way to execute these scripts inside your production infrastructure. For example, a Kubernetes Job which can connect to your database, or any equivalent service in your infrastructure that can connect to your database but is not your production service which handles real traffic. Then you don’t have to directly connect to your database from your local machine or from a deployment machine like GitHub Actions.

Compiled apps, CLI dependencies, and migrations

I’m working a lot with dockerized TypeScript apps. The final docker image only contains bundled code, and dependencies that are referenced in the bundled code. But migrations aren’t part of the bundle—because they aren’t part of my running app. And a CLI to run migrations, like drizzle-kit, also isn’t part of the final bundle!

Knowing that I want to run migrations async in my production environment, I was wondering for some time how the actual fuck I’m supposed to run them within my production docker image—because it doesn’t contain my migrations and the drizzle CLI. I can’t just start that image in a one-off job and execute my tasks. So, should I install drizzle-kit in my production image? Copy all migrations? But that would increase the size of my docker image, and I want to keep it small.

The solution to this is quite simple: just have a separate image. You could, for example, execute a Dockerfile as a one-off script:

FROM node:22-slim
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY ./migrations ./migrations
CMD ["npx", "drizzle-kit", "migrate"]

That’s what I’m doing, with a small tweak that I’m inlining this Dockerfile inside a docker-compose file, as a separate service called “migrate”. I can just start that service on production, it builds itself, runs the migrations, and then exits.

Or you could even not care about migration files at all, use the schema in your code as the source of truth, and use something like drizzle-kit push:

FROM node:22-slim
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY . .
CMD ["npx", "drizzle-kit", "push", "--force"]

Of course: the trade-offs!

As with everything, there are trade-offs, and you don’t need to do this strict approach all the time. Some projects simply don’t have those requirements. And in some projects maybe the few seconds of downtime are okay.

Do you need to do all of this in your side project? Probably not.

Would it be okay if your production docker image is a few MB larger and you execute the migrations, including data migrations, during app boot? Could be okay.

But will you still learn something from practicing it? Absolutely.

]]>
<![CDATA[GitHub Releases as RSS Feed]]> https://timomeh.de/posts/github-releases-as-rss-feed?utm_source=rss https://timomeh.de/posts/github-releases-as-rss-feed Wed, 23 Apr 2025 18:37:00 GMT Quick tip: You can follow GitHub Releases in your feed reader. Just add .atom to the end of the releases URL. For example:

https://github.com/glanceapp/glance/releases
👇
https://github.com/glanceapp/glance/releases.atom

You can do the same with other GitHub resources—like commits, users, tags—if that’s useful to you. But I find the releases feed super handy. Sometimes I just wanna stay in the loop on certain software, like apps, frameworks, major libraries, services, databases.

]]>
<![CDATA[If you're using Next.js' Middleware for authorization, you're doing something wrong]]> https://timomeh.de/posts/if-youre-using-next-js-middleware-for-authorization-youre-doing-something-wrong?utm_source=rss https://timomeh.de/posts/if-youre-using-next-js-middleware-for-authorization-youre-doing-something-wrong Sun, 23 Mar 2025 23:10:00 GMT Whenever the topic of authorization in our Next.js apps came up at Gigs, I had a very strict opinion and rule: we don’t use the middleware for authorization. We can use the middleware for some optimistic UI patterns, like an early redirect when a user is logged out, but never as a means to grant a user access to some data. I’m not saying this because I hate the middleware, or because it’s an easily predictable vulnerability, but because of the way the Next.js middleware sits in an application.

If you were affected by CVE-2025-29927 in the sense that you were vulnerable to an authorization bypass, you should rethink the structure of your application. Reading the responses to this CVE makes me think that people either don’t know how Next.js’ middleware works, or they don’t know where authorization should happen in an application.

The Next.js middleware is very bare-bones: you have a single function for your entire application that gets a NextRequest and you return a NextResponse. The response can be a rewrite() or a redirect() or next(), the latter meaning “continue routing”. But unlike other web frameworks, Next.js’ middleware can’t write to a context object that you can read from later in your routes. Your routes only have access to the NextRequest, which you can’t extend however you want. You can only add headers to the request in the middleware, but an HTTP header is obviously not something you should trust blindly.

In other web frameworks, the middleware writes data to a context object that you can control and can trust; it should be impossible for anyone outside to write to this object. In an Express middleware, you would authenticate the user and then add the current user to req.user, in Koa you would add it to ctx.user. Your routes will then do authorization checks based on this context, or you may even have another middleware layer that returns a 401 Unauthorized if the user context is missing.
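As a rough sketch of that pattern (this is not real Express code; the tiny run helper below stands in for Express’ routing, and the session lookup is made up):

```javascript
// Sketch of the Express-style pattern: an auth middleware writes the
// current user to req.user, and a guard rejects requests without it.
// The session store and the "run" dispatcher are hypothetical.
const sessions = { 'token-123': { id: 1, name: 'Ada' } }

function authenticate(req, res, next) {
  // Attach the current user to a context object the routes can trust.
  req.user = sessions[req.headers.authorization] || null
  next()
}

function requireUser(req, res, next) {
  // If authentication failed or was skipped, the route never runs.
  if (!req.user) return res.status(401).end('Unauthorized')
  next()
}

function profileRoute(req, res) {
  res.status(200).end(`Hello ${req.user.name}`)
}

// Tiny dispatcher standing in for app.use()/app.get() chaining.
function run(req, ...handlers) {
  const res = {
    statusCode: 200,
    body: '',
    status(code) { this.statusCode = code; return this },
    end(body) { this.body = body; return this },
  }
  let i = 0
  const next = () => handlers[i++](req, res, next)
  next()
  return res
}
```

The important property: if authenticate never runs, req.user is missing and requireUser fails closed with a 401, instead of the route silently succeeding.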

In these examples, if your authentication middleware failed or was skipped, the context object would be missing, and the routes would subsequently fail. But if your Next.js middleware can fail or be skipped, and your routes just keep working, there’s something seriously wrong with your software design.

Your authorization logic should happen, with your authentication context (i.e. the current user object), close to where you handle data: either after you fetch some data and before you return it to the user, or, if possible, even before you fetch the data. But whether you do it before or after you fetch data, it happens in or very close to your data layer. However, as you might notice, Next.js’ middleware is nowhere near an authorization check in the data layer. I treat it as part of the presentation layer.

Relying on authorization within Next.js’ middleware is like building a single-page application with a login where the API you call after logging in has no authorization at all. In this example, you also have no shared context that you can fully trust.

In our Next.js apps at Gigs, we split our codebase into a presentation layer and a data layer. The presentation layer (the /app folder) can never access data directly by calling an API or a database, instead it has to call a function in the data layer. Inside the data layer (the /data folder), we have a userContext() function that returns the acting user by reading it from the request’s encrypted HTTP-Only cookie. The userContext() function is only allowed to be called within the data layer, and not from the presentation layer. We use this user context object every time we try to access data in the data layer, before we return any data back to the presentation layer.
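A heavily simplified sketch of that structure (all the names and the cookie handling here are illustrative, not the real Gigs implementation, which reads an encrypted HTTP-Only cookie):

```javascript
// Sketch of the data-layer pattern described above. The cookie
// decryption is faked with a plain lookup; userContext, getInvoices,
// and the db shape are hypothetical names for illustration.
const fakeDecryptedCookies = { 'cookie-abc': { userId: 42 } }

// Only the data layer may call this, never the presentation layer.
function userContext(cookie) {
  const user = fakeDecryptedCookies[cookie]
  if (!user) throw new Error('Not authenticated')
  return user
}

// A data-layer function: the authorization check sits right next to
// the data access, before anything reaches the presentation layer.
function getInvoices(cookie, db) {
  const user = userContext(cookie)
  return db.invoices.filter((inv) => inv.ownerId === user.userId)
}
```

Because every data-layer function resolves the user context itself, there is no code path that returns data without an authorization check, regardless of what the middleware did or didn’t do.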

That’s how we make sure the authorization happens at the right place. How you structure your code may be different, but however you structure it, you should still put authorization close to where you’re accessing data.

]]>
<![CDATA[Fish is a great shell that simplified my dotfiles]]> https://timomeh.de/posts/fish-is-a-great-shell-that-simplified-my-dotfiles?utm_source=rss https://timomeh.de/posts/fish-is-a-great-shell-that-simplified-my-dotfiles Sun, 23 Mar 2025 15:30:00 GMT I’ve been a zsh user for a long time, though I’ve had a bit of a love-hate relationship with it. It’s cool once you get it configured, but there’s so much to configure, and from time to time things would just break or the shell would get annoyingly slow.

I started keeping track of my zsh config in a dotfiles repository, so I can roll back to a previous state if the shell suddenly gets slow or something breaks. The dotfiles cover more than just zsh, but zsh was the biggest part of it. After starting with oh-my-zsh many years ago, I tried several different plugin managers.

I ended up using Zinit with stow for a long time. It was fast and I could modularize my zsh config into local plugins. I liked the simplicity of stow, but Zinit has a syntax and concepts that I had to learn and remember: ice modifiers, turbo mode, some plugins are prefixed with “light”… after a while of not looking at it, I forgot how all that works. So in October 2024 I made a big change: I changed all of my dotfiles to be based on Nix with home-manager.


Nix with home-manager sounded cool in theory. What attracted me the most was that it adds a lot of good zsh tools with just a few lines of configuration. And when things break, it’s easy to roll back to a previous state, and it should work exactly as it did before. However, it also quickly became annoying: every tiny change has to be reapplied, which always takes a few seconds. And since Nix takes full ownership of your tooling, everything has to be installed and configured through it. On top of that, it has a very steep learning curve, and everything feels much more complex and mysterious. I started to feel the same struggles I had with Zinit: if I didn’t touch it for a while, I unlearned how it worked.

Then one day, in the middle of a somewhat stressful debugging session, I had to make a change to my Nix config and suddenly couldn’t apply it anymore. I had no idea why; all I got was a bunch of cryptic error messages that looked like compiler errors. After some googling, I learned that it first needed some major dependency updates, which introduced breaking changes to the configuration, which I then had to google again.

Nix had to go. When shit hits the fan, I can’t risk not knowing how to change my system configuration, or not being able to install or update tools. It’s just so much unnecessary frustration. Even though I liked how home-manager comes with many batteries included, I just don’t want to learn the language and behavior of something that adds a layer of complexity between me and my shell configuration.


What now? Back to Zinit? I also heard some good things about the fish shell. So when I got rid of Nix, I thought this might be a good chance to try out fish after all this time with zsh.

And the fish experience is great! It’s fast, batteries are included, and it needs barely any configuration. Autosuggestions, completions, syntax highlighting—it all works right out of the box. I don’t have to add so much stuff to my dotfiles that I won’t understand in a few months. And because there’s a lot less configuration, I don’t need a management tool to tame my shell configuration.

And with all that already included, it’s easy to add my usual tooling workflows. Besides my custom git aliases, I make heavy use of fzf, mcfly, navi, thefuck, and often rely on information from the starship prompt, all of which were quick to add and configure for the habits that are ingrained in me after so many years of using zsh.

I’m back to using stow again to link files to my home directory. I’m just installing all the tools with brew and mise. And all the config, including fish, is neatly stored in its corresponding ~/.config directory. Everything is nice and modular, with no additional tools required, just conventions. And whenever I need to change something, I feel confident that I know what to do.

If you would like to take a look, here’s my timomeh/dotfiles repository.

]]>
<![CDATA[Werner by Herzog]]> https://timomeh.de/posts/werner-by-herzog?utm_source=rss https://timomeh.de/posts/werner-by-herzog Tue, 18 Mar 2025 00:01:00 GMT

wait a second i know the synthesizer why don’t i use the synthesizer

There are a few things I always wanted to do with AI. This is one of them.

]]>
<![CDATA[URL Fragments and pushState() are weird]]> https://timomeh.de/posts/url-fragments-and-push-state-are-weird?utm_source=rss https://timomeh.de/posts/url-fragments-and-push-state-are-weird Mon, 17 Mar 2025 20:22:00 GMT Yesterday I learned two weird things that happen when you use pushState() to navigate to a page with a URL fragment:

  1. The CSS :target selector doesn’t work. You can use :target to style an element that is the current URL fragment when doing a full document load, but not with pushState! This is also documented on MDN:

    The target element is set at document load and history.back(), history.forward(), and history.go() method calls. But it is not changed when history.pushState() and history.replaceState() methods are called.

    But that’s weird! Why not? I would argue that it makes :target practically unusable in modern web applications, even though it’s such a useful feature!

  2. The hashchange event doesn’t fire. Even though the hash changes, the event doesn’t fire. This is also documented on MDN, but oddly enough it’s documented in the pushState docs, and not in the hashchange docs.

    Note that pushState() never causes a hashchange event to be fired, even if the new URL differs from the old URL only in its hash.

    That’s weird! And unexpected. This note should be included in the hashchange docs. Might be a good opportunity to open a small pull request.


Update: I did indeed open a small pull request and the hashchange docs now also explain this behavior.
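As a workaround for the missing hashchange event, you can dispatch it yourself after calling pushState(). Here’s a minimal sketch; the wrapper name is made up, not a standard API:

```javascript
// Sketch of a workaround: wrap history.pushState() and dispatch the
// hashchange event manually, since pushState() never fires it on its own.
// The function name is illustrative.
function pushStateWithHashchange(url, state = {}) {
  const oldURL = location.href
  history.pushState(state, '', url)
  const newURL = location.href
  // Only dispatch when the hash actually differs, mirroring the
  // native event's behavior on full navigations.
  if (new URL(oldURL).hash !== new URL(newURL).hash) {
    window.dispatchEvent(new HashChangeEvent('hashchange', { oldURL, newURL }))
  }
}
```

Existing window.addEventListener('hashchange', …) listeners then keep firing for pushState() navigations as well. There’s no equivalent trick for :target, since the browser only re-evaluates it on document load and history traversal; a class or data attribute is the usual substitute.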

]]>
<![CDATA[Reading less short message social media]]> https://timomeh.de/posts/reading-less-short-message-social-media?utm_source=rss https://timomeh.de/posts/reading-less-short-message-social-media Wed, 05 Mar 2025 14:03:00 GMT I stopped scrolling through Bluesky for a month, and I really enjoyed that month.

Even though Bluesky and Mastodon are much nicer places than Twitter, I realized that I don’t enjoy the experience of scrolling through short message social media. There are a few posts here and there that are nice or funny or interesting, but if there’s just one post that I don’t want to read or think about, the whole experience is ruined for me.

It’s not about keeping your feed clean of extremism and other bullshit. It was easy to keep my feeds clean on Bluesky and Mastodon. But people think and post about a lot of different things, and I’m just not interested in all of them, nor do I want to spend any amount of mental capacity thinking about them. Scrolling past them is difficult. I have to force myself not to read something I’m not interested in. And that just makes it not a great experience.

I subscribe to a YouTube channel because I’m interested in the kind of content they make, and I assume that future content will be similar. And I follow people on Instagram because I like the pictures they post. But short message services just give me too much variety in what to expect.

So, I will continue what I did for a month. I deleted the Bluesky app from my phone and removed it from my browser bookmarks. I’ll still cross-post my posts there, because of course I want to reach an audience. And maybe I’ll post the occasional meme there. (Why not post them here?) But for now, I don’t want to consume text feeds.

]]>
<![CDATA[Upgrading to Tailwind v4.0]]> https://timomeh.de/posts/upgrading-to-tailwind-v4?utm_source=rss https://timomeh.de/posts/upgrading-to-tailwind-v4 Sat, 25 Jan 2025 18:51:00 GMT On 22 January, 3 days ago at the time of writing, Tailwind v4.0 was released with some major changes. I decided to upgrade today, and the upgrade path was mixed to be honest!

Tailwind provides an automated upgrade tool, which worked okay-ish. One of the major changes is the new CSS-first configuration instead of the old JavaScript-based configuration. Unfortunately, the upgrade tool didn’t migrate the configuration, saying it couldn’t. Maybe because of the typography plugin, maybe because of something else, I don’t know, it didn’t tell me.

/* This is how the new configuration file should look like */
@import "tailwindcss";
@theme {
  --font-display: "Satoshi", "sans-serif";
  --breakpoint-3xl: 1920px;
  --color-avocado-100: oklch(0.99 0 0);
  --color-avocado-200: oklch(0.98 0.04 113.22);
  --color-avocado-300: oklch(0.94 0.11 115.03);
  --color-avocado-400: oklch(0.92 0.19 114.08);
  --color-avocado-500: oklch(0.84 0.18 117.33);
  --color-avocado-600: oklch(0.53 0.12 118.34);
  --ease-fluid: cubic-bezier(0.3, 0, 0, 1);
  --ease-snappy: cubic-bezier(0.2, 0, 0, 1);
}

/* This is what the upgrade tool did */
@config "../../tailwind.config.js";

But the new CSS-based config is a big part of Tailwind 4, and I wanted to use it. The upgrade tool simply loaded the old config in compatibility mode.

So I had to migrate it manually, but the upgrade guide doesn’t really explain how to migrate the old JavaScript-based configuration. I had to figure it out for myself, and while some of it was obvious and easy to find, other things weren’t.

Dark mode variants

I use multiple dark mode variants. This is what it looked like in the old configuration:

module.exports = {
  darkMode: [
    'variant',
    [
      '@media (prefers-color-scheme: dark) { &:not(html[data-theme=light] *, [data-theme=light]) }',
      '&:is([data-theme=dark] *, html[data-theme=dark])',
    ],
  ]
}

The dark: variant can now be configured with the @custom-variant directive. The Dark mode docs have an example of how to override it with a single rule, but how do I use it with multiple rules like in the old configuration? It took me a while to figure out that the docs use a shorthand syntax, and you can use multiple rules with the long syntax:

/* Example in the docs, uses the shorthand syntax */
@custom-variant dark (&:where([data-theme=dark], [data-theme=dark] *));

/* This is how it works with multiple selectors */
@custom-variant dark {
  &:where([data-theme='dark'] *, [data-theme='dark']) {
    @slot;
  }

  @media (prefers-color-scheme: dark) {
    &:not(html[data-theme='light'] *, [data-theme='light']) {
      @slot;
    }
  }
}

Overriding the gray color palette

Tailwind’s color palette has 5 different gray temperatures, ranging from a cool blueish gray called “Slate” to a warm brownish gray called “Stone”. You usually use only one of these grays. I like to use “Stone” and have aliased it to gray.

const colors = require('tailwindcss/colors')

module.exports = {
  theme: {
    extend: {
      colors: {
        gray: colors.stone,
      }
    }
  }
}

How does this aliasing work in the new CSS-based configuration? Well, you have to override all gray shades.

@theme {
  --color-gray-50: var(--color-stone-50);
  --color-gray-100: var(--color-stone-100);
  --color-gray-200: var(--color-stone-200);
  --color-gray-300: var(--color-stone-300);
  --color-gray-400: var(--color-stone-400);
  --color-gray-500: var(--color-stone-500);
  --color-gray-600: var(--color-stone-600);
  --color-gray-700: var(--color-stone-700);
  --color-gray-800: var(--color-stone-800);
  --color-gray-900: var(--color-stone-900);
  --color-gray-950: var(--color-stone-950);
}

Custom background images

To create a utility class with a custom background image (for an image file, not a gradient), you could extend the theme like any other utility:

module.exports = {
  theme: {
    extend: {
      backgroundImage: {
        'dark-grain': "url(dark-grain-pattern.svg)",
        'light-grain': "url(light-grain-pattern.svg)"
      }
    }
  }
}

The new docs for customizing background-image only mention gradients, not images. So I guessed how it should work based on how it used to work, but I guessed wrong. I also couldn’t figure out if there’s still a way to extend it within the theme, so I’m now simply using a utility.

@theme {
  /* This does not work! */
  --bg-dark-grain: url("dark-grain-pattern.svg");
  --bg-light-grain: url("light-grain-pattern.svg");
}

/* This works */
@utility bg-dark-grain {
  background-image: url("dark-grain-pattern.svg");
}

@utility bg-light-grain {
  background-image: url("light-grain-pattern.svg");
}

Using arbitrary transition-property values

I have some places where I only want to transition the transform and opacity properties, and I simply used transition-[transform,opacity] to apply these transitions. But after upgrading to v4, these transitions were broken! I couldn’t find anything in the upgrade guide, and the transition-property docs didn’t look any different at first glance.

Until I saw that transition-transform now covers 4 properties: the usual transform, plus the individual CSS properties translate, scale and rotate. This is not mentioned as a breaking change, and the upgrade tool didn’t catch it either.

You now have to transition those properties as well:

<!-- v3 -->
<div class="transition-[transform,opacity]" />

<!-- v4 -->
<div class="transition-[transform,translate,scale,rotate,opacity]" />

Of course, you don’t need to specify all of these properties if you only need to transition some of them.

Using the Tailwind Typography plugin

Tailwind’s typography plugin is great, and I think you should always use it (or an alternative that gives you good default text styling). While manually migrating the configuration, I finally got to the typography settings and wondered how to move them to the new CSS-based config. The plugin’s documentation didn’t mention the new config, and the upgrade guide didn’t mention anything either. Until I found a note in the functions and directives docs:

@plugin

Use the @plugin directive to load a legacy JavaScript-based plugin:

@plugin "@tailwindcss/typography";

Legacy? Oh no. Let’s hope we get non-legacy plugins in the future.

Anyway, to use the typography plugin, you can use the old configuration and import it into the new configuration.

@config './typography.config.js';

@theme {
  /* ... */
}
// typography.config.js

/** @type {import('tailwindcss').Config} */
module.exports = {
  plugins: [require('@tailwindcss/typography')],
  theme: {
    extend: {
      typography: () => ({
        DEFAULT: {
          css: {
            color: '#333',
            a: {
              color: '#3182ce',
              '&:hover': {
                color: '#2c5282',
              }
            },
            // etc.
          },
        },
      }),
    },
  },
}

Final thoughts

Although the upgrade path has been a bit of a bumpy ride, I like the overall change in direction that Tailwind has taken. It now not only supports the new features of the platform, it embraces them and shows them to you. It’s much more of an extension of the platform, rather than just a collection of utility classes.

]]>
<![CDATA[Favorite Media 2024]]> https://timomeh.de/posts/favorite-media-2024?utm_source=rss https://timomeh.de/posts/favorite-media-2024 Tue, 31 Dec 2024 01:30:00 GMT An incomplete list of my favorite videos, shows, movies and music of 2024, in no particular order.

]]>
<![CDATA[Only deploy on Fridays]]> https://timomeh.de/posts/only-deploy-on-fridays?utm_source=rss https://timomeh.de/posts/only-deploy-on-fridays Sun, 22 Dec 2024 16:27:00 GMT There’s a poster next to my desk with “Only deploy on Fridays” written on it. The poster is visible on Gigs’ Careers page, and from time to time, colleagues and applicants mention it or ask me about it.

the desk in my home office with the "Only deploy on Fridays" poster hanging on the right side of it

Last year, Daniel sent me a photo of some scribbles he made with funny work-related sentences in a very artsy and typographical style. One thing led to another, and sometime later, I received this poster in the mail. It’s a phrase he picked, based on topics that we often discussed.

Of course, the statement to only deploy on Fridays is exaggerated and satirical. It’s an obvious nod to the “Don’t deploy on Fridays” rule—something I encountered a few times in different jobs and projects, but I can’t identify with.

Restricting deployments to certain days or times is a bit of a short-sighted mentality. Some people are very afraid of deploying to production, but that’s not how it should be. If you’re afraid of it, instead of doing it less often and continuing to be afraid of it, do it more often and take the fear out of it until it becomes second nature.

You should always deploy. Do it on weekdays, evenings, Fridays, weekends—whenever you finish an increment of work. But do it with common sense and responsibility. If you’re deploying something that could have an unintended impact and you need to monitor its rollout, it doesn’t make sense to trigger a deployment and immediately close your laptop. It’s not a very responsible thing to do.

Just because you’re deploying on a Thursday instead of a Friday doesn’t give you proper confidence that it will continue to work over the weekend. You still have to be confident in your changes regardless of the day or time. You can’t skimp on writing tests just because it worked on production for a few days. Conversely, when you’re confident in your work, the day or time of deployment doesn’t matter.

Better yet, decouple the act of deploying from the act of releasing. Put your work behind a feature flag, continuously work on your feature behind this flag, and ship multiple smaller increments to production. Users won’t be impacted by your work in progress because it’s not yet released to them. When the time is right, enable the flag and start rolling it out to users. If you notice unintended side effects, like a spike in errors or something that doesn’t work as expected, you can simply disable the feature flag again with a single click, and the crisis is averted.
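The idea of decoupling deploy from release can be sketched in a few lines. Everything here is illustrative: the flag store would normally be a config service or database, and all names are made up.

```javascript
// Minimal feature-flag sketch. `flags` stands in for whatever store
// you actually use (config service, database, environment variables).
const flags = { newCheckoutFlow: false }

function isEnabled(name) {
  return flags[name] === true
}

function checkout(cart) {
  if (isEnabled('newCheckoutFlow')) {
    // New code path: already deployed to production, but only
    // "released" once the flag is flipped on.
    return { version: 'new', items: cart.length }
  }
  return { version: 'old', items: cart.length }
}

// Flipping `flags.newCheckoutFlow` releases the feature, and flipping
// it back instantly rolls it back, without another deployment.
```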

I believe that working with Continuous Deployment is an important tool to improve your skills as a software engineer. And Continuous Deployment can’t stop on a Friday: it’s continuous. Some people are skeptical at first: when I introduced Continuous Deployment at Gigs, some colleagues were skeptical, some new joiners had doubts, and candidates I interview sometimes question how it can work. These doubts always fade away, and everyone comes to see the benefits. I think the initial skepticism comes from having to approach your work from a different angle. It’s different from making larger releases with breaking changes, and not everyone is used to working like this. You have to work in backwards-compatible increments, and you have to increase your confidence in your changes. But all of these are good things. Having the confidence to deploy on a Friday, just like any other day, is a freeing experience.

]]>
<![CDATA[gimme dat snake_case]]> https://timomeh.de/posts/gimme-dat-snake-case?utm_source=rss https://timomeh.de/posts/gimme-dat-snake-case Tue, 10 Dec 2024 19:32:00 GMT Scenario: a magical fairy—the “magical” is important, so you don’t confuse it with a boring non-magical fairy—appears and grants me one wish. What would I wish for?

That JavaScript switches to snake_case overnight. We wake up and all the camelCase is gone, replaced by beautiful snake_case.

This isn’t a matter of personal taste. camelCase is simply more difficult to read. It requires more visual effort. There’s even a study on that.

We’re used to reading words that are visually separated, and the underscore provides that visual separation. You probably had no issues reading this sentence, even though it was mostly lowercase letters. But writingLikeThis madeItLikely aBitMoreDifficult toRead, didn’t it?

JavaScript is not a visually appealing language. Perhaps Java and Smalltalk were not the best influences. And with all the new question marks added to its syntax, it looks like a very insecure “mocking spongebob” meme.

And I won’t buy the argument that camelCase is more efficient because you don’t have to type the underscore all the time! It’s as if developers spend their whole day just typing, non-stop, without autocomplete, constantly having to find those keys to press! Aaaaah help, where’s my underscore key gone now, I can’t find it!!!

Nah. snake_case would have been the better choice.

A man can dream though… a man can dream.

]]>
<![CDATA[User-defined color theme in the browser without the initial flash]]> https://timomeh.de/posts/user-defined-color-theme-in-the-browser-without-the-initial-flash?utm_source=rss https://timomeh.de/posts/user-defined-color-theme-in-the-browser-without-the-initial-flash Sun, 08 Dec 2024 22:38:00 GMT When adding dark and light mode to your site, a common approach is to store the theme in localStorage and read it on the next visit. But our JavaScript usually runs after the page loads, so reading it in JavaScript can cause a flash of the wrong theme—like flashbanging dark mode users with light mode. We can fix this with a small script in the <head>. But wait—isn’t that a blocking script? Aren’t those bad? Let’s take a quick look at why that’s not always true.

First, here’s the small simple script which you can add into the <head> of your document:

<html data-theme="dark">
  <head>
    <!-- all the stuff in your head -->
    <script>
      (function () {
        const theme = localStorage.getItem('theme') || 'dark';
        document.documentElement.setAttribute('data-theme', theme);
      })();
    </script>
  </head>

It reads the preferred theme from local storage and adds it as a data attribute to the <html> element, before the browser renders the site. We can now style our page for light and dark mode based on the html[data-theme=dark] selector.

Your site will only render after the script has run—so it renders with the data-theme attribute already set, and there will be no “flash of inaccurate color theme”, or FART.

…and to let users switch the theme, update both localStorage and the data-theme attribute.

function toggleTheme() {
  const newTheme =
    document.documentElement.getAttribute('data-theme') === 'light'
      ? 'dark'
      : 'light'
  document.documentElement.setAttribute('data-theme', newTheme)
  localStorage.setItem('theme', newTheme)
}

What about cookies? Why not use one? Aren’t blocking scripts bad?

If you have a server and not just a statically hosted site, you can also use a cookie. If you search online for how to avoid the initial flash, you’ll often find this solution: store the theme in a cookie and render the HTML on the server, with the theme already included in the server-rendered HTML.

A cookie gets rid of the blocking script that we use in the example above. A blocking script pauses the browser’s rendering process until the script is fully executed, which can delay the initial page load. We usually want to avoid such delays, which is why blocking scripts are often discouraged. But this doesn’t mean that a blocking script is always worse.

The downside of cookies is that you have to render the HTML on the server for every request, which also takes some time.

Caching is much easier without a cookie, and it significantly speeds up your site. You could even put a CDN in front of your HTML and cache your whole page. You can’t do that with a user-specific cookie. (Unless you’re Valve on a certain unforgettable Christmas morning in 2015.)

The blocking script takes only approximately 0.1ms to run. That’s 100 microseconds. There’s a chance that server-rendering your uncached HTML will take more time—several milliseconds or even seconds, depending on server and network conditions.

Here’s my take on it:

  • If you render user-specific HTML dynamically on the server anyway, just store the user’s preferred theme in your database, where you already store the rest of their data. Then you can render <html data-theme={user.preferredTheme}> on the server. It won’t make a difference. No blocking script necessary.
  • If your site is serving content that doesn’t need to be rendered per request for each user, use the approach above with a blocking script on the client. You might want to do some benchmarking to check whether your server or the client is faster, but it’d also be a good idea to consider whether you’ll need caching in the future.

Use the user’s preferred color scheme as the default

In the example above, we use dark as the default color scheme. If you want to default to the user’s preferred color scheme instead, but still let the user switch the theme, you could change it to something like this:

let theme = localStorage.getItem('theme');
theme ||= window.matchMedia('(prefers-color-scheme: dark)').matches
  ? 'dark'
  : 'light';
document.documentElement.setAttribute('data-theme', theme);

…or you could also add a third value, system, as the default. It’s a bit more verbose, which can be beneficial in certain situations, but it can also make your selectors more complex.
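A sketch of that three-value approach, with the resolution pulled into a small pure helper (the helper name is my own, not from any library):

```javascript
// Resolve a stored preference ('light' | 'dark' | 'system' | null)
// plus the OS-level preference into a concrete theme.
function resolveTheme(stored, systemPrefersDark) {
  if (stored === 'light' || stored === 'dark') return stored
  // 'system', or nothing stored at all: follow the OS preference
  return systemPrefersDark ? 'dark' : 'light'
}

// In the blocking script you would then do something like:
// const theme = resolveTheme(
//   localStorage.getItem('theme'),
//   window.matchMedia('(prefers-color-scheme: dark)').matches
// )
// document.documentElement.setAttribute('data-theme', theme)
```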

I wish there were a native API

It’s a bummer that we have to fall back to CSS selectors when we want to give a user control over the selected theme. The prefers-color-scheme media query is neat, but even more so the new light-dark() CSS color function. We can’t use them with a CSS-selector-based solution.

I like the ability to manually toggle the theme for a specific site. Just because I use dark mode on my system doesn’t mean that I want dark mode on every site. Especially for sites I visit and use a lot.

How about a small native API that allows us to override prefers-color-scheme for the current site?

// This does not exist
window.setUserPreference('color-scheme', 'dark')

It would persist the user’s selection and let us use those nifty CSS features. The browser could even have a native UI to show the override to a user. It could let a user change or reset their preference directly in the browser’s UI, similar to camera and microphone permissions, without relying on toggles and buttons on the site itself.

A man can dream though… a man can dream.

]]>
<![CDATA[Thought Leaders]]> https://timomeh.de/posts/thought-leaders?utm_source=rss https://timomeh.de/posts/thought-leaders Fri, 06 Dec 2024 23:57:00 GMT I have many thoughts about this topic but I’ll try to keep it short: I don’t like this thought leadership in engineering, where it’s mostly about being an influencer and less about having a good influence.

It rubs me the wrong way. You could just say that it annoys me and I should ignore those parts of the internet. But it actually worries me because it feels like small cult-like groups in which engineers won’t grow: They’re caught in an echo chamber, which makes them feel like they’re growing, but instead of growing as a person and engineer, only a single opinion grows within them.

It’s this thought leadership where the same small set of design patterns is repeated over and over again, and held up as the holy grail. It’s no longer just opinionated, it’s dogmatic. It becomes bad-mouthing of different opinions and fighting over minuscule details, instead of taking a step back and seeing all the similarities.

That’s obviously the wrong way to approach leadership.

In the beginning of my professional career, I was surrounded by people I looked up to — now good friends, whom I still look up to. I was often listening in on their discussions, and while of course everyone did their little opinionated comments for fun and the occasional nerdy joke, it always concluded with: everything is a tradeoff. There are upsides and downsides. What worked great this time might not work great the next time.

I strongly believe that this time in my life shaped me into who I am today. Of course there was a time during which I held certain opinions higher than I should’ve, and boy did I boast about them being the best. It was the classic case of the Dunning-Kruger effect, and I was at the peak of Mount Stupid.

But those people around me asked questions. And I didn’t have an answer to them. And then they shared their professional experience and knowledge, which over time expanded my horizon.

These lessons have shaped how I approach leadership today. I’m now in a technical leadership role at work. I’m aware that my opinions can influence my peers, and that my technical decisions have an impact on the business. That awareness, combined with the experience of growing as an engineer myself, guides how I approach technical leadership at work.

I often emphasize to my colleagues that my decisions are centered around consistency, simplicity, and using established patterns. I’m opinionated in the sense that I strongly believe that this is the best approach for the company, based on business needs and risks.

Obviously I also have some opinions—very small opinions in comparison. For example: I think that state machines tend to make a business process more restrictive than the actual process. But I won’t preach about those little things. If I’m going to preach anything, it’s about being more aware of the business impact you have as an engineer.

I don’t want to call out any of the “thought leaders” or influencers I mentioned earlier, which is why I’ve kept my critique intentionally vague. If you read this and you feel like you might need to defend yourself, because you see yourself as such a leader or as part of a group, I encourage you to reflect on how you influence others, the broader community and yourself.

]]>
<![CDATA[Some of my yearly Christmas music]]> https://timomeh.de/posts/some-of-my-yearly-christmas-music?utm_source=rss https://timomeh.de/posts/some-of-my-yearly-christmas-music Sun, 01 Dec 2024 14:40:00 GMT
  • Quiet Company – Winter is Coming (YouTube, Spotify)
    A cozy EP with a small collection of holiday standards. O Holy Night is my favorite arrangement on it. Quiet Company was featured on Grooveshark back in 2011, and a year later they released this winter EP. It’s been on repeat every December since then. This year it’s the 13th time!
  • John Rutter – Magnificat (YouTube, Spotify)
    Rutter’s Magnificat is one of my favorite pieces of contemporary classical music. It begins immediately with a sleigh ride through a snowy landscape. Listening to it is like watching a movie in my head, with a great mix of quiet scenes and energetic highlights. It’s big and cheerful and wintery, and it ends with a big bang. That’s typical of British composers, who are known for their sentimental and audience-friendly choral works. My music teacher once described them as taking all emotions in their hand and shoving them right in your face.
  • Coldplay – Christmas Lights (YouTube, Spotify)
    One of those songs where the music immediately and always plays a little movie in my head. The chorus in the second half is reminiscent of the family singing together on Christmas Eve. You know, when the family sings together and the beginning of the chorus is a little bit slower because everyone is trying to keep up with everyone else.
  • Spectre Morse – Last Christmas Together (Internet Archive)
    I wrote about this song in the intro to my post about backing up YouTube videos. The song has been removed from all platforms, but is still available through the Internet Archive. I like it because it has such a special atmosphere, with the faint street noise in the background that sounds a bit like a snare drum being brushed, the electric piano that sometimes crackles like a fireplace, and the reverb-y voice on top. It’s very happy-sad.
  • Chris Rea – Driving Home for Christmas (YouTube, Spotify)
    Maybe I’m just a cheesy guy, but Driving Home for Christmas is on repeat every year when I’m in my car driving home for Christmas. Of course, it’s not on repeat for the whole 2½ hour drive home, I’m not crazy, but usually at the beginning and end of the drive to my parents.
  • jschlatt – A Very 1999 Christmas (YouTube, Spotify)
    The newest addition because it was just released today. But I’ve already listened to it several times, and it will definitely be on my annual December playlist. Like Winter is Coming, it has the usual holiday standards. Schlatt’s voice is just perfect for these Christmas songs. Have Yourself a Merry Little Christmas is my favorite song on it so far. The arrangements have a very old school Disney feel to them.
]]>
<![CDATA[Lüften]]> https://timomeh.de/posts/lueften?utm_source=rss https://timomeh.de/posts/lueften Tue, 26 Nov 2024 21:39:00 GMT Who would have thought that opening the windows several times a day would have a lasting effect on reducing the humidity in my home? My parents, probably. And probably everyone else except me.

Don’t get me wrong, I did air the rooms in my apartment regularly! But most of the time, not multiple times a day. For example, I aired my bedroom most days, usually right after rolling out of bed, though maybe I forgot once a week. But in the fall, the humidity was always right at the upper limit before becoming too high.

I guess I was a bit ignorant because after opening the window for about 15 minutes, the humidity quickly returned to where it was before. I thought this would happen every time, no matter how many times I opened the window. But somehow I forgot the most logical conclusion: that doing it multiple times a day will incrementally dry out the room more and more.

After only 2 weeks of airing all the rooms several times a day, it’s now at a perfectly average level. And my windows are not fogging up as much overnight. I even bought a dehumidifier last year, which helped a bit, but in retrospect wasn’t necessary. I should’ve just opened the damn windows more than once a day.

To be honest, it’s a bit embarrassing to admit my ignorance. But I have learned something!

]]>
<![CDATA[Don't stress about falling asleep]]> https://timomeh.de/posts/dont-stress-about-falling-asleep?utm_source=rss https://timomeh.de/posts/dont-stress-about-falling-asleep Thu, 21 Nov 2024 21:52:00 GMT For the past three years, there hasn’t been a single night when I couldn’t fall asleep or even struggled to do so. The secret? Simply enjoy the moment.

Three years ago, one night before our company’s Christmas party, I was sitting at a hotel bar in Berlin with a colleague for an evening drink. I had just started my new job at Gigs a few months earlier. We were still a small company, especially the engineering department, which was just a handful of engineers.

As we were chatting, he told me this trick that helped him get his daughter to fall asleep. It’s simple: just focus on how comfortable everything is, and enjoy the fact that you don’t have to do anything. Don’t set expectations to fall asleep. It’s fine. If you think about it, you don’t have to fall asleep. Everything will be fine in the end.

You could also be in a situation where sleep is not an option: It could be a boring lecture, where your eyes start to feel heavy and you have to fight the urge to close them. But you’re not in that situation. Instead, you’re in a bed. A comfy bed. You can snuggle under the blanket, enjoy the warmth, and the sensation of the pillow hugging your head.

And it’s fine to stay awake in bed for a bit longer. There won’t be any consequences just because you’re not falling asleep right now, or maybe at all. Nobody will sue you because you didn’t sleep for long enough. And tomorrow, when you have to get up, you can take it a bit slower. Maybe you’ll be a bit more tired and sluggish, but what gives? A hot drink the next morning will help you to start your day.

Ever since he told me about this, I’ve been able to always fall asleep without any trouble. Before that, I didn’t have any major problems falling asleep, but it sometimes happened. I think it’s something a lot of people experience from time to time. When you find yourself constantly checking the time: 6 hours become 5 hours. 5 hours become 4 hours, 3½, 3, … it’s really not a great feeling.

But not for me anymore. I don’t just do this when I think I’m going to have trouble falling asleep; it’s now my mindset when I go to bed all the time. I enjoy how comfortable it is and how I don’t have to do anything.

It was simple. But it was a life-changing experience, just from meeting my new colleague during an evening drink at a hotel bar in Berlin.

]]>
<![CDATA[Breaking UI Changes]]> https://timomeh.de/posts/breaking-ui-changes?utm_source=rss https://timomeh.de/posts/breaking-ui-changes Tue, 19 Nov 2024 23:20:00 GMT I think some apps I use every day, like Spotify for Desktop or YouTube TV, are being way too careless with seemingly small UI changes. It starts to annoy me a lot, so I put together some examples of sloppy and irresponsible UI changes that I noticed in Spotify and YouTube TV to make you think twice before making such careless changes in your apps.

In software development, we often talk about breaking changes: when we make changes to our code that might cause other systems to break or force them to adapt. We try to minimize those changes because they’re super annoying for the people using the software, and we want everyone to have a smooth experience. But when it comes to UI design, the idea of breaking changes rarely comes up.

The apps I use every day are part of my routine. Over time, using them turns into muscle memory. The design becomes the interface between me and the machine. I rely on that interface. But too often, it feels like apps make changes that look minor, yet have a huge impact on my routine and muscle memory.

Spotify for Desktop

Spotify, oh Spotify. What are y’all even doing over there? Spotify for Desktop has made some seriously bizarre choices with their user experience for no apparent reason. It feels like they’re sometimes just making changes for the lulz, or trying to reinvent common desktop design patterns. Honestly, I could write a whole blog post just about all the weirdness in their UI/UX. For example: this new filter-based navigation in the library sidebar—what even is that? But I wanna talk about one change in particular.

Something happened a few months ago which annoyed the crap out of me: they moved the “Home” button from the sidebar to the top of the app.

The Spotify desktop app as of November 2024

Since 2014, the “Home” button (or “Browse,” as it was called back then) was always the top item in the sidebar. I checked—it’s been that way for a decade. Clicking the first item in the sidebar was muscle memory for me. I didn’t need to think about it; my hand just did the thing. This was a real no-brainer.

But then one day, a few months ago, I opened Spotify and clicked on the first item in the sidebar like I always do. Except now, it triggered a click on “Your Library”, which collapses the sidebar. [Footnote: Big LOL btw on this feature.] At first, I didn’t even see that anything had changed; I just felt that something was off. All I knew was that this wasn’t what usually happened.

Maybe this confusion only lasted 30 seconds, but it felt like an eternity during which I stared at my screen like a monkey, not knowing what I was supposed to do.

The gist is: this change was completely unnecessary and careless in its execution. It looks like a small design change, but it’s a huge breaking change in the way I use this app. Even months later, I still struggle to find my way through the Spotify app.

YouTube TV

I use the YouTube app on my Google TV on a daily basis. Its design is clean, sticks to simple UI conventions, and has no unnecessary bells and whistles—which is especially important in this case, because you’re using this app with a remote control!

A remote is a very interesting input device because, unlike a mouse or touchscreen, you can use the remote faster than the UI can keep up. If you know the layout and where you want to go, you just press the buttons quickly and let the UI catch up.

When I’m watching a video and want to go back to my subscriptions, I have my very own Konami Code ingrained into me:

  • Two times left to get into the sidebar,
  • four times down to go to “Subscriptions”,
  • two times right to get into the list of videos.

But a few weeks ago, I suddenly kept ending up in the library screen instead of my subscriptions. You know, that screen where your watch history and playlists are—not where I often want to go. It took me a few hours, maybe even until the next day, before I realized that they had swapped the positions of “Subscriptions” and “Library”.

Now, to go into the Subscriptions feed, I had to press down five times instead of four. A completely unnecessary change, done without any care. Especially on a TV, these kinds of mistakes always feel so “expensive”. (Maybe because of the latency and the lack of an app frame?)

And just as I was getting used to the new key sequence, YouTube apparently realized that it was a stupid change and reverted it. Which meant that I now again started navigating to the wrong screens all the time, and again had to re-train myself. What an utterly useless and frustrating experience.

Generally, YouTube TV fucks around a lot with the navigation. Most of the time, they don’t even change the order of elements, and instead change which element gets the initial focus!

For example, when you’re watching a video, pressing up twice used to focus on the video title (where you could view the description). From there, pressing right would take you to the channel.

Now? The initial focus is on the channel! But wait, it gets even better: in a recent update, there’s now a new “Subscribe” button on this screen that gets the initial focus.

But now comes the best part: you would think this button only shows up when you’re not subscribed to the channel. Except that’s not how it works! The “Subscribe” button is sometimes there, and sometimes not, even for channels I haven’t subscribed to. Is this a bug? A feature? At this point, why not both!


Breathe, Timo.

Seriously though, all of this—and many more similar experiences—makes me think that some people with decision-making power just aren’t considering the consequences of their changes. This stuff matters a lot. It alienates me as a user. The fact that YouTube reverted the swapped menu items suggests they got enough negative feedback to conclude that the change wasn’t worth keeping.

Design isn’t just about making things look pretty. If breaking changes are treated with caution in code and are discussed so often, why aren’t they in design?

]]>
<![CDATA[dragonflydb wouldn't index my JSON]]> https://timomeh.de/posts/dragonflydb-wouldnt-index-my-json?utm_source=rss https://timomeh.de/posts/dragonflydb-wouldnt-index-my-json Mon, 18 Nov 2024 14:11:00 GMT A short story about a small debugging adventure, entirely within the redis-cli.

I’m caching the content on this site in Dragonfly, a drop-in replacement for Redis with Redisearch support. I do this not only to store my content as JSON, but primarily so I can query it. Very nice for filtering, sorting, searching. Very demure, very mindful.
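
For reference, the setup looks something like this (a sketch, not my actual config: the index name, prefix, and schema fields here are illustrative):

```
$ FT.CREATE posts:index ON JSON PREFIX 1 posts: SCHEMA $.title AS title TEXT $.publishedAt AS publishedAt NUMERIC SORTABLE
> OK

$ FT.SEARCH posts:index "@title:earbuds" SORTBY publishedAt DESC
```

Every key starting with `posts:` gets picked up by the index automatically, which is what makes the filtering and sorting so convenient.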

This worked great in development, but when I deployed it to production, it didn’t work: the page was empty. No content there, even though my logs state that the cache was successfully populated. I opened the redis-cli on production to inspect what’s going on, and to check what’s in the database.

$ KEYS *
> "posts:foo"
> "posts:bar"
> [...]

$ FT.INFO posts:index
> [...]
> num_docs 0

Interestingly enough, listing all stored keys with KEYS * told me that it actually stored the records. My logs didn’t lie. But when inspecting the index with FT.INFO posts:index, the final num_docs 0 told me that it didn’t index anything. Zero documents. So they are stored, but not indexed. Weird!

I spent some time blindly trying a few different things, like changing the keys and prefixes, until I decided to act like a proper engineer and methodically test things in the redis-cli.

$ JSON.SET posts:foo . '{ "title": "lol" }'
> OK

$ FT.INFO posts:index
> [...]
> num_docs 1

I tried to create a new record in the CLI and — Wait… it worked! The post is stored and indexed. But why? Why does it work now? Even re-creating the index made no difference. So I tried the same thing again, just to check if the behavior was at least consistent:

$ JSON.SET posts:bar . '{ "title": "wot" }'
> OK

$ FT.INFO posts:index
> [...]
> num_docs 2

It still works. Okay, it’s consistent. The issue is not the key itself. But why is it working now? Why won’t it work with my actual content? Well, I didn’t test it with my actual content. Maybe I should try and save my actual records instead of just “lol” and “wot”.

$ JSON.SET posts:earbuds . '{
  "title": "Open Ear Earbuds",
  "description": null,
  "frontmatter": {
    "cover": "beans.jpg"
  },
  [...]
}'
> OK

$ FT.INFO posts:index
> [...]
> num_docs 2

Okokok. It didn’t work. The indexed document count didn’t change. I’m onto something. I can reproduce it. It must be the value. But what exactly? Is it the nested JSON?

$ JSON.SET posts:earbuds-2 . '{
  "title": "Open Ear Earbuds",
  "description": null,
  [...]
}'
> OK

$ FT.INFO posts:index
> [...]
> num_docs 2

Removing the nested JSON still didn’t work. So I can rule out nested JSON as the culprit (it was a bit sus at first, because it’s referenced with a funky JSON Path).

What’s left? Oh – oh no. Is it just the null value? Let’s try and remove it:

$ JSON.SET posts:earbuds-3 . '{
  "title": "Open Ear Earbuds",
  "frontmatter": {
    "cover": "beans.jpg"
  },
  [...]
}'
> OK

$ FT.INFO posts:index
> [...]
> num_docs 3

ffs, it is just the null! It prevents the document from being indexed! And all my records had a null value in them, so nothing got indexed!

Sidebar: Kinda odd developer experience that it happily lets you store null values, but then silently refuses to index them.

So I added a little utility to strip out all the nulls from the records, deployed it, and everything worked fine on production.
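
The utility itself can be tiny. Here’s a minimal sketch in TypeScript (the function name is made up; it just recursively drops null-valued object properties before the record gets stored):

```typescript
// Recursively remove null-valued properties from plain objects.
// Arrays are traversed, but null array elements are kept as-is.
function stripNulls<T>(value: T): T {
  if (Array.isArray(value)) {
    return value.map(stripNulls) as T
  }
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([, v]) => v !== null)
        .map(([k, v]) => [k, stripNulls(v)]),
    ) as T
  }
  return value
}
```

With that, a record like `{ title: "Open Ear Earbuds", description: null }` becomes `{ title: "Open Ear Earbuds" }` before it hits `JSON.SET`, and the document gets indexed.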

Now the final question is: why did it work locally on my machine? idk, probably some mismatch in docker image versions, I didn’t care enough to spend more time on it lol.

]]>
<![CDATA[What's the deal with Trash Bags?]]> https://timomeh.de/posts/whats-the-deal-with-trash-bags?utm_source=rss https://timomeh.de/posts/whats-the-deal-with-trash-bags Sun, 17 Nov 2024 18:22:00 GMT I don’t know if it’s just me, but trash bags lately suck. Something very strange must be happening in the trash bag industry. So strange that I find myself constantly thinking about trash bags to the point where buying them gives me anxiety.

It used to be that my only issue with trash bags was that I occasionally bought the wrong size. You know, when you accidentally get bags that are too small, and then you struggle to pull them over the trash can, only to give up and let half the bag flop awkwardly into the can, but you also don’t care enough and then try to buy the right size next time. That’s my fault.

Because really, what else could go wrong with trash bags? They have the most basic set of requirements: 1) contain trash and 2) allow me to take out the trash. That’s the bare minimum functionality for a bag: something you’d think humans would have optimized over tens of thousands of years. And yet, in the last year, most of the trash bags I’ve bought have failed to meet even those basic requirements.

Two years ago, I decided to buy household items like trash bags or sponges in larger quantities so I’d only need to stock up once or twice a year. A very adult move, I thought. And everything was fine for a while. Then, earlier this year, I opened a roll of trash bags where most of the bags had long slits down the sides, making them completely unusable. Okay, I thought, probably just a manufacturing defect. No big deal, I have other rolls. But every roll I opened after that had the same issue. All defective.

With that in mind, I obviously switched to a different brand. And let me tell you, this marked the beginning of my trash-bag-induced anxiety—and the most infuriating experience I’ve had with any product in a long time.

These new bags weren’t defective in the same way. They looked fine at first. But they ripped open constantly. They ripped inside the trash can, spilling garbage everywhere. They ripped when I tried to lift them out, leaving trash in the can. One time, they ripped open at the very last moment, right in front of the dumpster downstairs, dumping my trash all over the sidewalk.

I tried double-bagging: they ripped. I tried triple-bagging: they still ripped. I thought maybe I was overfilling them or making them too heavy, and so I started taking out the trash when the bag was not even half full. They still ripped. I wondered if something sharp in the trash can was puncturing them, so I tried it once outside the can. It still ripped open.

Those bags were the bane of my existence. After some time—I first had to process the fact that a product can really be that shitty—I bought a different brand. This time just two rolls, and I specifically chose bags that advertised themselves as “tear-proof”. I think it’s kinda odd that they have to advertise this at all, considering that holding trash in the bag is a basic requirement for trash bags, but I won’t complain if they do the job.

I arrived at home, weirdly excited about having new trash bags, opened them up, tore off a bag from the roll, and … it had no handle. Confused, I tore off another bag, thinking that I did something wrong. But no—there was no strap, no handle, nothing. They have no handle!? They’re just a plastic tube with one end sealed shut!

I was dumbfounded. I had never seen anything like this. How are you supposed to tie these up? Where do you hold them to carry the trash downstairs? What were they thinking?

While it was still possible to tie them up, you end up with no handle to carry them downstairs with dignity, and you shouldn’t fill them up too much, because you need enough material left to tie them shut.

Even though I still felt anxious every time I took out the trash—bracing myself for the moment the bag’s molecules would spontaneously decide to disintegrate—they didn’t rip open. So, I guess, one out of two requirements met.

Now I’ve bought yet another kind of bag. One that claims to be both tear-proof and has a strap. Maybe this will finally end my misery. I’m still skeptical.

But something feels off with this industry when a product has to boldly claim that it meets the bare minimum of requirements. This isn’t the first time I’m using trash bags, this used to be different!

]]>
<![CDATA[Changelog, November 2024]]> https://timomeh.de/posts/changelog-november-2024?utm_source=rss https://timomeh.de/posts/changelog-november-2024 Thu, 14 Nov 2024 23:29:00 GMT I’ve motivated myself to write shorter posts more frequently! The previous list of headlines (which is now the Archive) added too much friction for writing down some content—not just because I had to come up with a title, but also because a short post on its own page felt a bit … lonely.

After 2 years of writing posts in GitHub Discussions, I’ve now switched to Keystatic. The writing experience is much better: it’s still in my browser, and the media handling is far nicer! Images and videos were a bit cumbersome with GitHub Discussions. It wasn’t impossible, just a bit annoying. And that extra friction became an excuse not to write down some thoughts I had.

I might have been just a wee bit too excited to release this new version. There are some minor tweaks here and there to improve, but pfft, that’s continuous deployment for you!

]]>
<![CDATA[Not waking up at 6:30am]]> https://timomeh.de/posts/not-waking-up-at-6-30?utm_source=rss https://timomeh.de/posts/not-waking-up-at-6-30 Tue, 15 Oct 2024 18:48:00 GMT Early this morning I woke up briefly, it was 6:30am—yes, that’s early morning for me—and heard very faint shower noises. I usually don’t hear anything from my neighbors, but probably because it was so quiet otherwise, I realized that one of my neighbors had to get up now.

And then I remembered that I used to have to get up at about that time when I went to school. To be exact, I always got up at precisely 6:55am. My morning was always timed to the minute to get the most sleep out of it. The whole family had to coordinate who was in the bathroom at what time. I actually went to bed for another 15 minutes after taking a shower, just to make the most out of every minute of sleep.

And as I lay in bed this morning, remembering all of this because of those faint shower sounds, I just felt at ease. Because instead of having to get up at 6:55, I get to sleep for another 2 hours. If I wanted to, even longer. I was never a morning person. I didn’t like going to school, and I struggled to get out of bed every morning, especially in fall or winter: it was still dark outside, and I had to get out into the cold, hop on my bike, and cycle through the cold breeze, only to sit in a classroom that was often just a container in our school yard, because our school was too small for the number of students it had. That container was never very comfortable or cozy. And all that even though I actually really like fall and winter.

It reminded me that I had made it. By “it” I don’t mean everything I want to accomplish in life, but at least this one thing. I no longer have to struggle to get out of bed in the morning. I can enjoy the moody and gloomy mornings. When it gets a little colder at night and you can just wrap yourself up a little more in your blanket. When everything is a bit more quiet and chill. I get up, I’m in my own apartment. I do my morning routine, I make myself an oat milk cappuccino, I sit down in my little home office and start working. Working on things I’m passionate about. I’m doing what I want to do in life and I am enjoying it.

With a smile on my face, wrapped in my blanket, knowing that it’s still dark(ish) and quiet and cold outside, I fell asleep again and just had a really nice, mindful, enjoyable morning.

]]>
<![CDATA[Style elements across nested layouts with Tailwind]]> https://timomeh.de/posts/style-elements-across-nested-layouts-with-tailwind?utm_source=rss https://timomeh.de/posts/style-elements-across-nested-layouts-with-tailwind Sun, 16 Jun 2024 13:38:00 GMT I recently discovered that Tailwind’s group utility is super useful in web frameworks with nested layouts. I only thought of the group utility for smaller UI components, but you actually can use group to style any element based on any other element in the DOM.

This may not seem like a big deal, but consider this example: we hide the header if there’s an element with the CSS class hide-header-signal somewhere in the DOM.

<body class="group/body"> <!-- layout -->
  <header class="group-has-[.hide-header-signal]/body:hidden">...</header>
  <main>
    <h1>Hello world</h1> <!-- page -->
    <div class="hide-header-signal"></div>
  </main>
</body>

This scenario is a common layout pattern: you put your header in your layout and render all your pages within this layout. So now all pages share the same header. But this also means that a page cannot change the header. Except with the group utility, we actually can change the header from within the page! Well, to be precise, we can tell the header to change based on what else is in the document. In fact, we can use the named group group/body to change the style of any element based on any other element.

You might think that you could just conditionally render the header to achieve the same thing, but that’s not always easy to do in every web framework.

Here’s a real-world example: My blog has a header on all pages, so the header can simply be in the root layout. But I also have two pages for visual regression testing of my syntax highlighting and markdown rendering, and I don’t want to show the header on those pages, because the tests shouldn’t fail when I change the header. [Footnote: Fun fact: I decided to just let those tests fail when the header changes lol. Because when the header changes, then I’m gonna see it anyways. But I still apply this pattern in other places.] What other solutions would I have?

  • I don’t have access to the current URL in the root layout, so I can’t conditionally hide the header based on the URL, unless I turned the layout into a client component.
  • I could also move the header into every page, instead of having it in the layout. Then I could simply not include it in the pages where I don’t want to show the header. But then I lose all the advantages of layouts.
  • I could create more nested folders with nested layouts, but I would do it just for those two pages. That’s a significant change for 2 simple test pages.

Although I could do a bunch of things to render the header dynamically for those pages, they all have a comparatively high impact on the codebase. Sometimes this impact is justified and one of these approaches might be a better choice, but clearly not in this case where I just have some unlisted pages for testing purposes.

By using the pattern above with the group utility, it’s now super easy to hide the header on some pages:

// layout.tsx
export default function RootLayout({ children }) {
  return (
    <html>
      <body className="group/body">
        <header className="group-has-[.hide-header-signal]/body:hidden">
          <Header />
        </header>
        {children}
      </body>
    </html>
  )
}

// e.g. regular blog post pages
// posts/[...slug]/page.tsx
export default function Page() {
  return (
    <main>
      <p>The header is visible on this page</p>
    </main>
  )
}

// e.g. pages for visual regression testing
// vrt/[...name]/page.tsx
export default function Page() {
  return (
    <main>
      <p>The header is hidden on this page</p>
      <div className="hide-header-signal" />
    </main>
  )
}

Done. A simple change which just added a few classes and affected 2 files, instead of changing a lot of code and affecting many files.

This example is only using the has-[.hide-header-signal] modifier, but it’s not limited to only this modifier. You can do even more than just hiding one element based on the existence of another element. For example, you could change the position of an element based on the data attribute of another element. Or change some colors in the layout for some very special pages.
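
For instance, here’s a sketch of the data-attribute variant (the `data-layout` attribute and the docked-footer idea are made up for illustration, and I haven’t tested this exact markup): a page marks itself with an attribute, and the layout’s footer reacts to it.

```html
<body class="group/body"> <!-- layout -->
  <!-- pinned to the bottom when any element has data-layout="docked" -->
  <footer class="group-has-[[data-layout=docked]]/body:fixed group-has-[[data-layout=docked]]/body:bottom-0">
    ...
  </footer>
  <main data-layout="docked">...</main> <!-- page -->
</body>
```

Same mechanism as before, just with an attribute selector inside `has-[...]` instead of a class.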

And the nice thing is: it’s just CSS.

…I guess those CSS selectors are slow as hell, but computers are also fast these days.

]]>
<![CDATA[Open Ear Earbuds]]> https://timomeh.de/posts/open-ear-earbuds?utm_source=rss https://timomeh.de/posts/open-ear-earbuds Sat, 25 May 2024 22:44:00 GMT In-ear earbuds are weird, aren’t they? You’ve got those small rubbery mushrooms that you shove into your ear canal to listen to music and you hope there’s no juicy ear gunk on them when you pull ‘em out.

I have the Pixel Buds Pro and the Sony WF-1000XM5. I bought the Sony earbuds because the Pixel Buds are a little annoying. You have to corkscrew the Pixel Buds into your ear, but it’s also one big touch-sensitive surface, and when you twist them in or out of your ear, you inevitably trigger a touch gesture. If they loosen a bit, you push them back, and you trigger another touch gesture. So I tried the Sony earbuds, and they’re a little better at this.

By the way, both have a great sound. But it’s not about the sound, it’s about the feel. And I don’t always like the feeling of clogging my ear canal.

Earbuds used to be different. Back when you had a cord – at least one that connected both buds together – you could just pull one side out and let it dangle. Like when you’re at the checkout counter at the grocery store and you don’t want to be rude to the cashier and you actually want to hear what they’re saying. Even though earbuds now have this transparency mode where you can still hear your surroundings… it just feels rude to me. If someone tries to talk to me, I pull out a bud.

And what do you do with the single earpiece? Hold it awkwardly between your fingers? Throw it in a pocket to fish it out later, accidentally triggering 20 touch gestures in the process? Pulling out the case just to put it away quickly? Pffffft. Ain’t nobody got time for that.


I have found that I like in-ear earbuds for longer, uninterrupted listening sessions, like on a train or plane or a long walk where I want to be buried in music. But I also wear them a lot for short trips, and then I don’t like them. Running errands is always a short walk away, and I spend most of my daily walks listening to or recording voice memos (almost exclusively with Daniel). That’s where in-ear earbuds don’t do it for me.

I recently stumbled upon the concept of Open Ear Earbuds (roll credits!). You don’t stick them in your ear, they just sit very close to your ear. It’s like a private little speaker that follows you and points at your ear. You can still hear the world around you, which I actually like when I’m not immersed in music. And because I don’t have my ears plugged, I don’t feel like I’m being rude by not taking them out in certain situations. I can just pause the music. No taking them out temporarily. Maybe I’m just a special boy, but it feels better.

In particular, I bought the Huawei FreeClip. Not a brand I would normally choose, but the features were too intriguing.

a Huawei FreeClip earbud and its case

The FreeClip is this bean and ball that you just clip onto your ear. Sort of like a big piercing. The bow between them is flexible, and clipping them on and off is super easy. They’re not tight, but they’re in place and won’t move.

What really sold me was the fact that there are no left or right buds. They’re interchangeable and automatically recognize if they’re left or right. It’s a small detail that makes life less annoying. It makes putting them back in the case a breeze, and if I’m only wearing one bud and the batteries are empty, I can use the other bud on the same side.

They have touch gestures, of course, but you don’t have to touch the earbuds, you can just tap somewhere on your ear. Cool, I guess.

They also sound good. The bass is of course less punchy, as is common with open-ear earbuds, but as I said, I don’t wear them because I want to block out the outside world and just be surrounded by music. What I didn’t expect is that when you turn the volume up to 100%, they turn down the bass even more. I don’t know why they do that, maybe because of the drivers, but it’s not a problem for me, I don’t need to use them at full volume.

When I go out to do some quick errands or take a walk to listen to some voice memos, I always use them now, and I really like it.


Two weeks ago I flew to Mallorca for a work trip and took my Pixel Buds to listen to music on the plane. I haven’t used them in a while, and interestingly enough, there was a firmware update and they seem less sensitive now.

I guess that’s my life now. The earbeans for casual everyday use, and the earnubs for active deep listening.

]]>
<![CDATA[Picking those low-hanging fruit in your life]]> https://timomeh.de/posts/picking-those-low-hanging-fruit?utm_source=rss https://timomeh.de/posts/picking-those-low-hanging-fruit Thu, 07 Mar 2024 22:46:00 GMT When I moved into my first apartment over 11 years ago, I learned something that I continue to learn on a regular basis: solve the little problems in your life. Realize that they’re just low-hanging fruit, ready to be picked.

Low-hanging fruit is a term I usually think of only in the context of work. And I still do! Thinking and writing about low-hanging fruit in the context of my personal life feels strange to me, but of course they exist.


I never really liked the faucet in the kitchen of my new apartment. It was a little loose, and when you rotated it, the whole apparatus rotated a few degrees. It was tall, and the water always splashed more than usual when it fell into the sink, splattering water all over the dark granite countertop and stove. You couldn’t pull the faucet down, and it didn’t have a sprayer. It was inconvenient and annoying.

But I didn’t do anything about it because… I don’t know why? Because it’s just a faucet? It didn’t occur to me that this was a low-hanging fruit that I could solve. I just lived with the damn thing for 2 years. Until I realized that a new faucet only costs 50€ and is easy to replace. I could fix this annoyance in my life with the least amount of effort.

Even the little wiggle it had could have been fixed in a few minutes without any tools. Yet I never did.

And I began to avoid using it because it annoyed me so much. Instead of washing my hands in the kitchen, I sometimes went from the kitchen to the bathroom and washed my hands there, just so I wouldn’t have to wipe the counter after the water splashed everywhere.

I wasted 2 years on something I could have fixed with the click of a button on Amazon and 30 minutes of “work”.

That’s just one of many. I’ve started to identify and fix more of them in my life, but it always takes some time to realize “yeah, this has been bothering me for a while, and it’s easy to fix”.

So here’s my call to you: identify those low-hanging fruit and pick them. The big things in life are hard enough, and the little things make a difference.

]]>
<![CDATA[One year with the Aeron Chair]]> https://timomeh.de/posts/one-year-with-the-aeron-chair?utm_source=rss https://timomeh.de/posts/one-year-with-the-aeron-chair Tue, 05 Mar 2024 21:42:00 GMT 1 year ago I wrote about my first two weeks with my new Herman Miller Aeron Remastered chair. So, how is it after a full year?

This chair was the best investment I made last year. My butt never gets tired of this chair.

I still use the tilt mechanism as I configured it a year ago. It’s perfect. I lean back a little, a little more, or not at all. In meetings, I like to lean back a lot, put my hands behind my head and just relax.

It’s a good thing I didn’t get the headrest, because you really don’t need it. When I sit upright, I don’t rest my head. And when I lean back, like I said, I like to put my hands behind my head and lean on them, and it’s really comfortable. It would probably be annoying if there was a headrest.

My back pain never came back! I remember I used to have this constant tension under my right shoulder and looking back it was really not good. But since then it’s just gone. Never had any problems again.

If your job involves sitting at a desk a lot, and sometimes your butt starts to get tired, or you have some tension in your back or shoulders, and you’re not short on money, I would highly recommend buying this chair. The price tag may seem high for a chair, but when you consider how expensive a good mattress and pillow are, and how many hours you’ll spend sitting, it’s worth every penny. #notsponsored

]]>
<![CDATA[Screenshot an HTML element with transparent background]]> https://timomeh.de/posts/screenshot-transparent-html-element?utm_source=rss https://timomeh.de/posts/screenshot-transparent-html-element Sun, 03 Mar 2024 23:14:00 GMT Today I learned that you can capture a screenshot with a transparent background using Safari’s Web Inspector. That’s nice, because Chrome’s Dev Tools can’t do it – they will always capture the background too. Sometimes I just want to quickly paste an HTML element into Figma without recreating it. Now I have at least one reason to open Safari from time to time!

Making a Screenshot of a node in Safari's Web Inspector

A few interesting things to mention:

  • If you zoom in, the screenshot is also more high res, which is nice.
  • If you select text, the text is also selected in the screenshot. Before you ask: the mouse is not included.
  • It only takes a screenshot of whatever is in the bounds of the selected element. If the element has a shadow, the shadow is not in the screenshot. You gotta add some spacing around it if you wanna include the shadow.
  • It also includes any absolute positioned child elements, even if they’re not in the bounds of the element. Which makes sense, you’re capturing the node, which also includes its children.
  • While the screenshot does not include the background, it does include elements that overlap the bounds of screenshot, even if they’re from a totally different node.
  • But it doesn’t include all overlapping elements? It’s a bit difficult to explain, let me demonstrate:
Example of quirky behavior with overlapping elements

I played around with it, and when I remove the inline-block from my name, it is in the screenshot. Interesting, but not interesting enough to try to understand it because it’s likely the most useless knowledge out there.

]]>
<![CDATA[Just give me the answer, ChatGPT]]> https://timomeh.de/posts/just-give-me-the-answer-chatgpt?utm_source=rss https://timomeh.de/posts/just-give-me-the-answer-chatgpt Fri, 29 Dec 2023 00:46:00 GMT Last week, I tinkered with developing a Shopify App. Shopify’s developer guides are… eh – a mixed bag. What I wanted to do just wouldn’t work. So, when I can’t find an answer in their guides and it’s also not easily googled, I turn to ChatGPT. I don’t often use ChatGPT for this kind of stuff – I still mostly use Google because it’s faster – but I’ve found that searching for Shopify’s subscription guides often leads to forum posts with no clear answers.

The issue with ChatGPT is that it can be so annoying. It’s like talking to that one coworker who never responds directly to your questions, trying to hide that they secretly have no idea what they’re doing. Just give me an answer, Chatty!

I know that if I just ask, “Hey beep boop, this no workey, why that?” it would respond with a big list of useless information. I had a hunch that it was because of the payments, but I didn’t want to loop over making configuration changes to see what works. So, I gave it some context about what I was doing and asked my question:

You:
Do shopify subscriptions work with the bogus payment gateway?

ChatGPT:
As of my last update in April 2023, Shopify’s Bogus Gateway (or Bogus Payment Method) is typically used for testing checkout processes and transactions without processing real credit card payments. However, when it comes to testing subscriptions, the functionality can be more limited.

Here are a few key points to consider:
1. Limited Testing Scope: The Bogus Gateway is blah blah more list items

At this point, I’m already annoyed af. I asked a yes or no question. I don’t need all the information in the world. I know it knows the answer; the thing can search the freaking internet. But it always defaults to “Could be, figure it out yourself, here are a few things to consider”. But you’re the computer, you do the work for me! So, I take a deep breath and continue.

You: Please, dude. Just say yes or no.

ChatGPT: [Awkward pause] No.

Thank God. Why not like this in the first place? Okay, while I already have this conversation established, let’s continue with solving the problem.

You:
How do I test subscriptions then? Keep your answer short and concise, you’re using so many unnecessary words

ChatGPT: To test Shopify subscriptions:
1. Use a subscription app’s test mode, if available.
2. Utilize the test mode of your payment gateway.
3. Create test plans with minimal costs.

Again, so much useless content. Stop with your numbered lists. Did they hardcode into ChatGPT’s instructions that it should always respond with bullet points or numbered lists? I actually think they did, but it’s so annoying. And it’s so slow! It pains me to see it typing in real-time.

You:
How do I configure the payment gateway for testing?

ChatGPT:
To configure your payment gateway for testing in Shopify:
1. bullet point list with vague generic instructions for any payment gateway

You:
Which payment gateway should I use?

ChatGPT:
Choosing a payment provider for your Shopify store depends on several factors:
1. Location: Some payment providers are only available in certain countries. Bla bla bla
2. more list items

Uuuugggghhhh!

You:
Dude. It’s for testing. Keep it short

ChatGPT:
For testing subscriptions in Shopify, use Shopify Payments in test mode or select another payment gateway that offers a sandbox or test environment, like Stripe or PayPal’s sandbox mode.

Finally, we’re there. That’s the answer.

And yes, I know about the custom instructions. I already added some to prevent it from being annoying. But it still is. I guess the system message they ship with ChatGPT wires its brain super hard to give long, wordy answers and to always use lists if possible.

Welcome to 2023, where I’m complaining about this crazy AI technology which is actually scarily powerful but annoys me too much.

]]>
<![CDATA[DI-RECT, August 4th 2023, Gebäude 9 Cologne]]> https://timomeh.de/posts/direct-2023-08-04?utm_source=rss https://timomeh.de/posts/direct-2023-08-04 Sat, 05 Aug 2023 13:47:00 GMT

This was one of the best concerts I have ever seen. I was so blown away after the first song that I had to yell “holy fuck”. It sounded so incredible. The drums were super tight, everyone was putting all their energy into the music, and it’s unbelievable how good Marcel Veenendaal sounds live.

If you’ve never heard of DI-RECT (they’re quite popular in the Netherlands but not so much here in Germany), I highly recommend you check them out. This radio concert is how I found them, and I don’t understand how it only has 20k views.

]]>
<![CDATA[On Repeat: Moby – Resound NYC]]> https://timomeh.de/posts/on-repeat-moby-resound-nyc?utm_source=rss https://timomeh.de/posts/on-repeat-moby-resound-nyc Tue, 25 Jul 2023 20:05:00 GMT A few times a year, I come across an album that completely captures me. I get so hooked on it, and end up playing it on repeat, all day long. This summer, that album is Moby’s Resound NYC.

To be honest, I hadn’t really thought about Moby much lately, except for that one How I Met Your Mother episode where they meet a bald guy on New Year’s Eve and mistake him for Moby. Other than that, his music didn’t really grab my attention much. I mean, sure, I knew some of his banger songs, but that was about it.

But then, someone recommended this new album to me, and after the first minute, I knew I’d love it. Resound NYC doesn’t bring any new songs; instead, it’s Moby’s older tracks from 1994 to 2010 in new orchestral arrangements. These versions are all absolute bangers. They’re both familiar and refreshingly new.

Those songs were all influenced by his life in New York City, hence the title: Resound NYC. In a series of short videos on YouTube, Moby shares the stories behind each track. He talks about what life was like in NYC during those times, what he was going through personally, and how each song came to be. How he was living in decimated neighborhoods that were hit hard by the crack epidemic of the 80s and 90s, and even how he was living in an abandoned factory…? It’s super interesting, inspiring and emotional.

There’s also the album Reprise from 2021, which also consists of new versions of his older songs, but Resound NYC is just… a whole new level. Interestingly enough, both albums feature the track “Extreme Ways”, and listening to both, you immediately notice how Resound NYC is much more vibrant and cinematic.

Even though Resound NYC made me listen to the original versions again, I just like the Resound NYC versions much more. And maybe that’s by design. Why would he want to make something worse? He transformed it from the music he wanted to release 25 years ago, to the music he wanted to release in 2023.

]]>
<![CDATA[Haller & Cäthe & Enno Bunger, April 14th 2023, Gloria Cologne]]> https://timomeh.de/posts/lagerfeuer-deluxe-2023-04-14?utm_source=rss https://timomeh.de/posts/lagerfeuer-deluxe-2023-04-14 Sun, 30 Apr 2023 14:51:00 GMT

Lagerfeuer Deluxe, an unplugged concert with 3 artists: Haller, Cäthe and Enno Bunger. Of course, when Enno is in town, I have to go there. Very nice atmosphere, good music and needless to say Enno’s set was just incredible.

]]>
<![CDATA[Portugal, March 2023]]> https://timomeh.de/posts/portugal-2023?utm_source=rss https://timomeh.de/posts/portugal-2023 Sun, 30 Apr 2023 13:54:00 GMT Our semi-annual Gigs offsite was in Guimarães, Portugal this time. Beautiful city. I’m not a good photographer.

flur

Looks like I have a thing for alleyways.

]]>
<![CDATA[Warhaus, March 18th 2023, Gebäude 9 Cologne]]> https://timomeh.de/posts/warhaus-2023-03-18?utm_source=rss https://timomeh.de/posts/warhaus-2023-03-18 Sat, 18 Mar 2023 22:42:00 GMT

Went there with Chris, was a great night. I am a huge fan of Maarten. His solo music is much more chill than Balthazar, and it’s definitely different and slower compared to a Balthazar concert. But the guy is great, an incredibly talented musician. And I love it when he holds his guitar up under his chin.

Maarten’s concerts, both with Balthazar and Warhaus, are never just the album songs played live. Instead, he rethinks the songs and how they could work live, creating a very unique experience.

The first solo was kind of weird? It felt like the guitarist and the drummer had no idea what to do. But maybe they were just being musical geniuses, who knows.

]]>
<![CDATA[Yukno, March 15th 2023, YUCA Cologne]]> https://timomeh.de/posts/yukno-2023-03-15?utm_source=rss https://timomeh.de/posts/yukno-2023-03-15 Thu, 16 Mar 2023 23:00:00 GMT

Found myself getting totally lost in the music, just vibing along, no sense of time or space. (Besides the fact that the sound guy seemed to have a problem which he apparently couldn’t fix until the end)

]]>
<![CDATA[I'm exceptionally bad at taking breaks]]> https://timomeh.de/posts/exceptionally-bad-at-taking-breaks?utm_source=rss https://timomeh.de/posts/exceptionally-bad-at-taking-breaks Mon, 06 Mar 2023 23:36:00 GMT One of my goals for this year is to get better at taking breaks from work. But, honestly, I’m doing a terrible job at it. Whether it’s taking a lunch break, calling it a day, or taking a little coffee break: I don’t stop when I should stop. I keep telling myself I’ll do better, but in the moment, I screw up. And it’s not because work is stressful or too much is expected of me – it’s my own fault.

The problem is, I’m always telling myself that I’ll just finish this one small thing that’ll only take 2 minutes. But then 5 minutes pass, and another 5 minutes, and another, until half an hour has gone by. And suddenly, I’ve only got a short lunch break before the next meeting starts. It’s funny how in this situation I keep looking at the time, seeing how it flies, and I’m just like, “Oh God, I should get going…” but I don’t do it. Even though I should just get up and go. I’m such a dingus.

I know that I don’t have to respond to messages right away. No one expects an immediate response. But it’s just one message, which then turns into a few messages, and then a longer conversation. Or when doing code reviews, I tell myself that I’ll just write those few last comments and finish up the whole review (even though nobody would be mad if I submit only half a review, and do the next half later). Or when I’m coding, just a few more lines of code.

It’s always small tasks that, for some reason, I think will be done in 2 minutes.

But I always feel like I have to get it all done, or else it’s going to occupy my mind all through lunch or the next day. But will it really? Like… really?

Two weeks ago, I took a day off and didn’t check Slack even once. I woke up to a bunch of notifications on my phone, and without reading them, I paused all work-related apps. And you know what? I didn’t check them all day. This surprised me because I usually open Slack automatically, like how you open TikTok, but not while you’re sitting on the toilet. That was a win!

The thing is, I keep telling myself I’ll do better tomorrow, but then I relapse the day after. It feels like an endless cycle. It makes me less motivated to do better. Even though I know I should just take a goddamn break.

]]>
<![CDATA[Fargo, Season 4]]> https://timomeh.de/posts/fargo-season-4?utm_source=rss https://timomeh.de/posts/fargo-season-4 Sun, 05 Mar 2023 22:06:00 GMT I missed the dark humor of seasons 1-3, where it always felt like a mix of “that’s a whoopsie” and “oh geez” from the very first episode. This is what made Fargo so unique to me. Season 4 felt more like “holy…” – and only after a few episodes.

It was still quite compelling in its own way, had some unexpected twists, and there were some elements of dark humor present. Though the dark humor focused mostly on weird deaths, instead of the typical chain of unfortunate circumstances. At least the unnecessary, yet fun, paranormal stuff wasn’t removed.

Score: 68%

]]>
<![CDATA[Automatically backup YouTube Videos I don't want to lose to my Google Drive, using a Raspberry Pi]]> https://timomeh.de/posts/backup-youtube-videos-to-drive-with-raspberry-pi?utm_source=rss https://timomeh.de/posts/backup-youtube-videos-to-drive-with-raspberry-pi Tue, 21 Feb 2023 01:03:00 GMT There’s a Christmas song on YouTube that has a special place in my heart. I’ve been listening to it every holiday season since its release in 2011. Music has a way of taking us back to memories, and whenever I hear this song, I’m transported back in time. I used to take for granted that it would always be available to me on YouTube, but that all changed in Christmas 2021.

All of a sudden, the song was nowhere to be found.

It was gone from YouTube and even the musician’s bandcamp store. I searched for it everywhere but had no luck finding it. I even reached out to the musician but didn’t hear back from him. Then I stumbled upon a tweet from someone who was also looking for the song, and the musician had responded that it was no longer available online. That was the proof. I’m sure he has a reason for unpublishing it. But for me, it was lost, and I was gutted.

Last Christmas, in 2022, a small miracle happened. I couldn’t stop thinking about the song and desperately wanted to hear it again. Then, a ray of hope: I found the direct link to the YouTube video buried in an old tweet. I entered it into the Internet Archive, and voila! There it was. It played through my speakers, just like that. The Internet Archive had saved this YouTube video from 2011. I was so relieved. And when the song ended, I opened my eyes and immediately made sure to save it, so that I’ll never lose it again.

digital AI art of a man with headphones on sitting in front of a desk with speakers on it, a window with a winter city at night outside

Backing up videos I don’t want to lose

I started to notice more videos disappearing from my playlists. YouTube doesn’t even tell you what’s been deleted, so who knows what I’ve lost. Maybe they were important to me. Maybe not. But that got me thinking, I don’t wanna lose the stuff that means something to me, or just the stuff I like to watch from time to time.

Around the same time, Philipp wrote a post about how he downloads videos and songs to his Plex server automatically. It looked easy enough, but the best part is that it’s super user-friendly. You just add videos to an unlisted playlist and they get downloaded automatically. It’s seriously the easiest solution I’ve seen. Adding a video to a playlist takes just two clicks, whether you’re on your phone or your computer.

Storing them on Google Drive

Philipp’s setup involves downloading videos to his Plex server, but I just want to keep a backup of my favorite YouTube videos in case they disappear. Call it a ✨ Sicherheitskopie ✨ – German for “safety copy”.

As an Android user (boooooh!), I already have a Google One subscription to back up my photos, and I barely use 30% of my Google Drive storage, so it’s a no-brainer for me. And if I ever wanna upgrade to 1TB in the future, the pricing is pretty reasonable.

How ethical is it to download a video from one Google server – YouTube – and upload it to Drive – just another Google server? I don’t care. Google isn’t scanning your personal files against Content ID and then blocking them or anything. That would be just bonkers and a scandal waiting to happen.

The Plan

A few years back, I bought a Raspberry Pi, but it has mostly just gathered dust in a box of random cables since then. This seemed like the perfect project to repurpose it for: automatically downloading and backing up my favorite YouTube videos. Here’s the plan:

  1. Mount my Google Drive as a network drive
  2. Download my favorite YouTube videos and save them to the network drive
  3. Set up an automatic schedule to regularly download new videos in my magic Playlist

Teaser: It’s working far better than expected

I will explain the details of how I’ve set everything up in this post – also for me as a reference, in case I have to do it again. But maybe you’re not interested in those details! No worries, let’s take a quick look at the outcome.

When I tried it out the first time, I was a bit blown away by how fast it was. It took less than 5 minutes to download, recode, and upload a single 53-minute 720p video to my Google Drive. I thought that would take much longer – especially on a small Raspberry Pi!

The second time I tried it, I downloaded 11 videos, totaling 77 minutes, and it was done in 20 minutes, including recoding and uploading to my Google Drive. 20 minutes! Downloading, recoding, and uploading it again! I didn’t expect this.

I combined the download script with a small Node.js script that sends me a Telegram message when the download completes or encounters an error – an idea I also borrowed from Philipp. Thanks, Philipp!

Screenshot of the Telegram Bot with messages for all downloaded videos

Philipp implemented the playlist-fetching himself, including remembering which videos were already downloaded. I simply opted to use yt-dlp’s built-in features to download a playlist.

While it was easier for me to quickly hack this together by just using yt-dlp’s included features, there are upsides to Philipp’s approach. He has control over each video download step.

Regardless of the details, I absolutely adore this solution. And what’s even better is that it just keeps working in the background. Whenever I come across a video I want to keep, I add it to the playlist and don’t worry about it again. It’s very satisfying to know that my favorite videos are safe and sound.

Now, let’s dive into the details.

1. Mount Google Drive as a network drive

Rclone allows you to mount your Google Drive as a network drive on your Raspberry Pi. I set it up by following this guide I found, but it was actually so easy that the guide wasn’t really necessary. At least it gave me the confidence that I was doing the right thing.

When you use Rclone for the first time, it will ask you to set up a new remote, which is a simple step-by-step process. The only thing that tripped me up was creating API credentials. You don’t actually need them, but they’re recommended for a higher API quota, so I thought I’d just do it. And even though there’s a good guide to walk you through it, I still did it wrong the first time.

Don’t make the same mistake I did: make sure to configure the OAuth Consent Screen before creating the app credentials. Otherwise Google will just create a client ID without a client secret, and you’ll need the client secret later on. Without the secret, it will look like everything works, but it doesn’t – yikes! But this isn’t the guide’s fault, it’s mine. The guide explicitly tells you to configure the OAuth Consent Screen first, and I didn’t, because I thought I was smarter.

At some point, you’ll be asked which Google Drive permissions to grant your Pi. You can go ahead and give it full access, but I chose to limit it to read and write access for only the files it creates. That way, it can’t see any of my other files on Drive.

And last but not least, mount your Google Drive:

rclone mount mygdrive: ~/gdrive/ --daemon

mygdrive is just the name of your remote, which you set up earlier. ~/gdrive/ is the directory where I mounted my Drive. The --daemon flag just runs the mount in the background.

Now Google Drive should be mounted as a directory, and the first step is completed. To make sure everything’s working, I created a new file in my mount directory and it showed up on the Google Drive web interface. So far, so good!

2. Download videos from YouTube

yt-dlp is a fork of youtube-dl, which allows you to download YouTube videos with a ridiculous amount of options. It requires ffmpeg to function properly.

Installing both yt-dlp and ffmpeg on the Raspberry Pi was very quick and painless. In contrast, installing ffmpeg on my expensive MacBook M1X Pro using Homebrew takes ages, because it often requires updating and installing numerous dependencies – it feels like it first compiles the Linux Kernel from scratch and then mines a bitcoin.

Below is the complete command that I am using to download videos.

yt-dlp \
  -P "/home/pi/Videos" \
  -P "temp:tmp" \
  -o "../gdrive/Video/%(title)s - %(channel)s - %(id)s.%(ext)s" \
  --restrict-filenames \
  --download-archive "~/Videos/downloads_downloaded.txt" \
  -f "bestvideo[height<=1080][ext=mp4]+bestaudio[ext=m4a]/best[height<=1080]" \
  --recode-video mp4 \
  https://www.youtube.com/playlist?list=xxx

This may look intimidating, but here’s an explanation of what each component does:

First: Setting up some folders

  -P "/home/pi/Videos" \
  -P "temp:tmp" \

To begin with, we need to specify the working directory for yt-dlp. yt-dlp generates some intermediate files during the download process, and we don’t want them to be synced to Google Drive via Rclone, because that would slow us down. For this reason, we set the working directory to be /home/pi/Videos.

The temp:tmp option specifies that temporary files should be stored in a separate directory called “tmp”. I actually don’t know if it’s necessary, but it also won’t hurt to have it.

Next, we define the output path for the downloaded video:

  -o "../gdrive/Video/%(title)s - %(channel)s - %(id)s.%(ext)s" \

As we want the downloaded video to be saved in the mounted Google Drive, we point to the relative path inside Google Drive.

According to the docs, the path must be relative; otherwise, the -P option won’t work. A little bit weird, but it does the job.

This will move the downloaded video into a Video/ folder on my Google Drive, and give it a descriptive name. %(title)s will be replaced with the video’s title, %(channel)s with the name of the channel, %(id)s is the ID of the video, and %(ext)s the file extension.

Finally, the last part of the video file setup:

  --restrict-filenames

This will remove any special characters in the video’s name and replace them with underscores.

Next: Remember if a video was already downloaded

We want to execute the above command regularly to download new videos. But we don’t want to download the same videos again and again. We only need to download them once. And of course youtube-dl (and its cool cousin yt-dlp) has a built-in solution for that: the “Download Archive”.

The Download Archive file stores the IDs of all the videos you’ve already downloaded. All you need to do is add this option to your command:

  --download-archive "~/Videos/downloads_downloaded.txt"

And voila! yt-dlp will check the archive before downloading each video and only download it if it’s not already on the list. I store this file in a local directory and not inside the Drive, because… I used a location inside the network drive and it didn’t work. 🤷‍♂️
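In case you’re wondering what’s inside that archive file: it’s just plain text, with one “extractor + video ID” pair per line (these IDs are made up for illustration):

```text
youtube aBcDeFgHiJk
youtube q1W2e3R4t5Y
```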

Then: Configuring the video quality

Now we can configure what video quality we want to download. I actually don’t know a lot about this part, haha. But I asked ChatGPT to create it for me, and it just worked.

  -f "bestvideo[height<=1080][ext=mp4]+bestaudio[ext=m4a]/best[height<=1080]" \
  --recode-video mp4 \

I wanna download videos with a maximum resolution of 1080p, and only as mp4. It would otherwise download webm, and I don’t trust webm as long as it isn’t a commonly shared file format.

  • bestvideo[height<=1080][ext=mp4]+bestaudio[ext=m4a] requests a 1080p video as mp4, and separately downloads the best audio quality with it. The video and audio track will then be merged into a single file after they’re downloaded.
  • /best[height<=1080] means that if a video using the above criteria cannot be found, I just want the best available audio and video with a maximum resolution of 1080p.
  • --recode-video mp4 is used to tell it to recode the video to an mp4 when it isn’t an mp4. Although we configure mp4 above, it could download another format when it can’t find an mp4.

Final part of that command: Configure your Playlist, and that’s it!

The final part of this command is to specify the URL of your playlist. Philipp explains this well in his post: you’ll need to ensure that the playlist is set to unlisted instead of private, and that the videos are sorted from newest to oldest.

Once you’ve set up the playlist, the script will go through it, check which videos haven’t been downloaded yet, and download them. Once the download (and maybe recoding) is complete, the file is transferred to Google Drive, and it’s all done.

3. Run the Download on a regular interval

To run this command automatically, you’ll want to set up a cronjob. The cronjob just runs a command at a specified time interval. You can edit your cronjobs using crontab -e, and paste in something like this:

0 5,11,18,23 * * * sh /home/pi/yt-download.sh

This will run the download script every day at 5:00, 11:00, 18:00 and 23:00. This is what I chose, you could choose whatever schedule tickles your pickle.

I’m not calling this huge command inside the cronjob directly. I copied the yt-dlp command into a shell script called yt-download.sh, and run that shell script in the cronjob. That way, if I need to change anything, I can just update the script instead of messing around with the crontab file.
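For reference, the wrapper can stay tiny. Here’s a sketch of what a yt-download.sh could look like – the log path is just for illustration, the playlist URL stays a placeholder, and the guard against a missing yt-dlp is my own addition for robustness:

```shell
#!/bin/sh
# yt-download.sh -- a sketch of the wrapper the cronjob calls.
# The log path is illustrative; the playlist URL is a placeholder.

LOG="/tmp/yt-download.log"

# Timestamp every run, so the log is easy to scan later.
echo "=== run started: $(date) ===" >> "$LOG"

# Don't blow up on a fresh Pi where yt-dlp isn't installed yet.
if command -v yt-dlp >/dev/null 2>&1; then
  yt-dlp \
    -P "/home/pi/Videos" \
    -P "temp:tmp" \
    -o "../gdrive/Video/%(title)s - %(channel)s - %(id)s.%(ext)s" \
    --restrict-filenames \
    --download-archive "/home/pi/Videos/downloads_downloaded.txt" \
    -f "bestvideo[height<=1080][ext=mp4]+bestaudio[ext=m4a]/best[height<=1080]" \
    --recode-video mp4 \
    "https://www.youtube.com/playlist?list=xxx" >> "$LOG" 2>&1 \
    || echo "run failed, check the log" >> "$LOG"
else
  echo "yt-dlp not found, skipping" >> "$LOG"
fi
```

Since the crontab calls it with sh, the script doesn’t even need to be executable; any change to the flags then happens in one place instead of in the crontab.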

Done. This is what I love about computers

And that’s it! It works!

It’s so satisfying to see how the pieces just work together, like gears that fit perfectly into each other. It’s done, and we just used existing solutions. We didn’t even have to write a single line of code.

To take it a step further, you can add more pieces to it. I’m parsing the output logs, so I can search for new downloads and trigger a Telegram notification when one is found. But that’s just a neat gimmick on top.
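My actual notifier is a small Node.js script, but the core idea fits in a few lines of shell. A minimal sketch, assuming the wrapper wrote yt-dlp’s output to a log file – the bot token, chat ID, and log path are all placeholders you’d replace with your own (create a bot via Telegram’s @BotFather to get a token):

```shell
#!/bin/sh
# Minimal sketch: scan the yt-dlp log for finished downloads and
# send each filename to a Telegram chat via the Bot API.
# BOT_TOKEN, CHAT_ID and LOG_FILE are placeholders.

BOT_TOKEN="123456:replace-me"
CHAT_ID="987654321"
LOG_FILE="/tmp/yt-download.log"

notify() {
  # One HTTP call to Telegram's Bot API sends a message.
  curl -s "https://api.telegram.org/bot${BOT_TOKEN}/sendMessage" \
    --data-urlencode "chat_id=${CHAT_ID}" \
    --data-urlencode "text=$1" > /dev/null
}

if [ -f "$LOG_FILE" ]; then
  # yt-dlp logs "[download] Destination: <file>" for new downloads.
  grep '^\[download\] Destination:' "$LOG_FILE" \
    | sed 's/^\[download\] Destination: //' \
    | while read -r title; do
        notify "Downloaded: ${title}"
      done
fi
```

A real script would want to deduplicate a bit (the Destination line can appear once per stream when video and audio are fetched separately), but the Bot API call is really all there is to it.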

]]>
<![CDATA[Blogging like Chris Coyier]]> https://timomeh.de/posts/blogging-like-chris-coyier?utm_source=rss https://timomeh.de/posts/blogging-like-chris-coyier Sat, 18 Feb 2023 15:39:00 GMT Chris Coyier is well-known for blogging a mind-boggling amount of posts on CSS-Tricks (which is now owned by DigitalOcean). But I’m not going to talk about his stuff on CSS-Tricks; instead, this is about his personal blog chriscoyier.net.

Chris’ blog feels like a stream of consciousness. It’s just what he’s thinking about at the moment, without any deeper meaning or learning. And yet, those thoughts are still interesting and fun to read. Annoyed at System Fonts in macOS? Just blog about it. Orange favicons work great for light and dark themes? Why not write down that thought, then notice that Safari is weird, and end it with “Time for a nap”. It’s refreshing to see someone share their half-baked ideas without feeling the need to make a grand conclusion or provide a takeaway for the reader. You can just end your post if you don’t know what else to write.

It feels like a conversation with a friend. Sometimes you just want to talk about something that’s been on your mind without worrying about whether it’s profound or even fully formed. It’s totally normal in a conversation to suddenly end your thought mid-sentence with a big ol’ shrug. And that’s what his blog feels like to me.

Maybe we sometimes try too hard to offer some kind of education or to present ourselves as experts? At least I sometimes feel that way. But I don’t think that’s necessary. It’s nice to take a break from expert content and just read something that’s relatable and human.

Chris has definitely inspired me to post more half-baked thoughts. And to not think about words and sentences and publishing too much. Just write it down, press the big button, and continue with your day.

]]>
<![CDATA[Two weeks with the Aeron Chair]]> https://timomeh.de/posts/two-weeks-with-the-aeron-chair?utm_source=rss https://timomeh.de/posts/two-weeks-with-the-aeron-chair Thu, 16 Feb 2023 20:26:00 GMT I bought a fully loaded Herman Miller Aeron Remastered Chair, and have been sitting on it for the last two weeks. Fun Fact: Aeron stands for “an expensive Ron”. So you’re asking what it’s like to sit on Ron?

a picture of an Aeron Chair in my home office

Of course it’s good. It has to be good. It’s an expensive chair, it better be good. I’ve sat on a cheap DX Racer for the last 7 years, which isn’t a fair comparison, but whattayawannado.

First of all, this thing is like a hammock for your butt. It doesn’t get uncomfortable even after hours and hours of sitting. And the tilting mechanism – gosh! – it is so well adjusted. When I simply lean back, I sit comfortably straight and it supports my back. And when I use a little more pressure, it tilts back (because I didn’t lock the back rest). But I don’t tilt back until it locks, I just stop wherever feels good to me. And then I comfortably lean back and the thing still supports my back. That’s crazy to me! I don’t need to turn any knobs to change if I want to sit straight or lean back, so I change my sitting position automatically whenever I feel like it.

It’s so comfortable. You can’t really sit wrong on it, you’re automatically in the correct position. I don’t really feel the “PostureFit” lumbar support pads pressing against my back, but I do feel that they support me. The lack of a headrest made me a bit sceptical at first, but I don’t miss it at all. I guess those fancy engineers at Herman Miller were right about that one.

People online reported back pain in the first weeks because they hadn’t sat straight for years, but I only felt it for 2 days or so. And then it actually relieved my most noticeable back pain! I always had this tension next to my right shoulder blade, and now it’s gone.

So is it worth it? This thing has a 12-year warranty, which comes down to a bit more than 100€ a year. And they usually last way longer. That’s worth it to me.

]]>
<![CDATA[Kid Francescoli on arte Concerts]]> https://timomeh.de/posts/kid-francescoli-arte-concert?utm_source=rss https://timomeh.de/posts/kid-francescoli-arte-concert Sat, 11 Feb 2023 23:41:00 GMT There are some really good and well-produced concerts on arte Concert. For the past few weeks, I’ve been totally obsessed with this live session from Kid Francescoli. It’s not only chill music, but also visually very appealing. The setting on a boat makes for some really cool camera shots. Due to the lack of an audience, it has kind of a lonely undertone to it.

Still frame from the video, a scene on the deck of a boat with multiple keyboards with Kid Francescoli standing between them, and singer Andréa Durand in the front

There’s a version on YouTube which cannot be embedded, hence the screenshot of a still frame. 🙃 Or alternatively on the Arte Mediathek

I only realized a few years ago that arte is actually kind of an interesting channel. When I was a teen and still watched a lot of TV, I thought arte was just an artsy experimental TV station for snobs. It always felt odd when they were airing something appealing and interesting, because aren’t they supposed to be weird? Maybe arte is still just an artsy channel for snobs, and I’m the one who became a snob?

Anyways, there are also sessions from Jungle, French 79, Loyle Carner and the Grandbrothers which I can recommend if you’re interested in more.

I also watched this Live Show from the Parcels a few times in the past weeks. It’s not from arte, but I thought I’d just mention it. I’ve watched a lot of those live sessions while coding recently. Don’t know where that’s coming from.

]]>
<![CDATA[New Offtopic Posts]]> https://timomeh.de/posts/offtopic-thoughts?utm_source=rss https://timomeh.de/posts/offtopic-thoughts Tue, 07 Feb 2023 22:59:00 GMT I felt like posting random stuff sometimes. Not only babble about software development tips. So I added this new Offtopic section. Although I called it Stream in the Sidebar because Offtopic is a long word?

Anyways, what started with a new category ended in a switch from Markdown to MDX and a dark mode, mainly because I’m often looking at my blog at night and it hurt me poor eyes. I’m not completely done, there are a few things I’d like to add and improve, but shipping is good.

I imagined some cool aurora borealis vibes going on, but I haven’t yet spent much time in Figma to make it look more like I imagined. I need to change some pictures of previous posts to fit the dark theme. And some old posts were moved into the Offtopic section because it made more sense there. In summary: things happened.

]]>
<![CDATA[Updating to Next.js 13's App Directory]]> https://timomeh.de/posts/updating-to-next-13-app-dir?utm_source=rss https://timomeh.de/posts/updating-to-next-13-app-dir Sun, 08 Jan 2023 00:21:00 GMT Do you remember when I told you the story of how I developed this blog using Next.js, getStaticProps and Incremental Static Regeneration? Well, we’ve got some new features in Next.js, which we’ll explore here. This is the story of migrating to a new, still very much experimental feature.

Update March 2024: Some of the information and implementation in this post has changed in the meantime and everything is now stable without any quirky workarounds. If you are interested in the current implementation, check out the repo: github.com/timomeh/timomeh.de


2 months ago, Next.js 13 was released with a major new feature, sometimes referred to as the “App Directory”. It’s a new routing implementation, which sits at the core of Next.js and changes the way apps are rendered quite a bit. I’m using Next.js for personal projects and at work, so this blog is perfect to play around with those new changes.

Playing around I did… and reverted it after an afternoon of work.

I decided to wait for a few patch versions, gave it another try, and it finally worked. 🥳 So here’s a summary of what had to change:

Dynamic Pages with Server Components

The biggest change is Server Components. Components can now be async and are evaluated on the server, and by default, all Page Components are now Server Components.

Previously you had to fetch data inside an async function called getStaticProps(), which had to return JSON-serializable data, which was then passed as props into your Page Component.

With async Server Components, this is no longer necessary. You can fetch data directly inside the Component. It makes everything incredibly simple, like it should be. This is how I now render the list of all posts:

// app/posts/page.tsx
import Link from 'next/link'

export default async function Posts() {
  const posts = await getBlogPosts()

  return (
    <ul className="space-y-4">
      {posts.map((post) => (
        <li key={post.slug}>
          <Link href={`/posts/${post.slug}`}>{post.title}</Link>
        </li>
      ))}
    </ul>
  )
}

And it’s similarly simple to render a single post:

// app/posts/[slug]/page.tsx
import { notFound } from 'next/navigation'

export async function generateStaticParams() {
  return []
}

export default async function Post({ params }) {
  const post = await getBlogPost(params.slug)
  if (!post) notFound()

  return (
    <>
      {/* @ts-expect-error Server Component */}
      <PostTitle title={post.rawTitle} />
      {/* @ts-expect-error Server Component */}
      <PostBody body={post.rawBody} />
      <PostComments discussionNumber={post.discussionNumber} />
    </>
  )
}

About that @ts-expect-error Server Component

Being an early adopter also means that you have to deal with tools which don’t yet support your fancy new stuff. That’s the case with TypeScript here: it thinks that async functions cannot be rendered as Components, and throws an error.

This will certainly be fixed in the future. You can check the corresponding TypeScript PR microsoft/TypeScript#51328 to see the status of it.

Transforming Markdown to HTML inside a Component

Before Server Components, I transformed the Markdown to HTML directly after fetching the data from GitHub inside getStaticProps(), and passed those HTML-strings into the Components. It was the only way to do that, because the transformation is async, and async code could only be executed before starting to render.

Now this can be done inside the Component, which packages the concern neatly within a single piece of code:

export async function PostTitle({ title }) {
  const htmlTitle = await markdownToHtml(title)
  return <span dangerouslySetInnerHTML={{ __html: htmlTitle }} />
}

You need generateStaticParams()

You might’ve noticed that I have this generateStaticParams function, which just returns an empty array. What’s that all about?

generateStaticParams lets you define a list of URLs which should be generated at build time. I don’t want generation at build time, I want it dynamically during runtime. So I thought I could just omit this function completely. And it seemed like I could, because everything was still working – everything except caching.

Unfortunately, I didn’t connect the caching problems to the missing generateStaticParams function. I debugged this for what felt like an eternity. I thought the API requests to GitHub were causing the cache to be invalidated. [Footnote: Instead of defining a cache duration for a whole page, it’s now possible to define a cache on a per-request basis. The shortest cache duration determines how long the whole page stays cached.] Of course you cannot test caching in dev mode; you have to build the site and run it in production mode, which made this very tedious.

I’m not sure if it was a coincidence or if I found it in a GitHub Issue, but after some time, I added the empty generateStaticParams in there, and it finally worked.
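As an aside, the per-request caching mentioned in the footnote can be illustrated with a toy sketch. pageRevalidate is a made-up name, not a Next.js API – it just models the “shortest duration wins” rule:

```typescript
// Toy model: a page's effective cache duration is the minimum
// revalidate interval among all the requests it makes.
function pageRevalidate(requestIntervals: number[]): number {
  return Math.min(...requestIntervals)
}

// A page fetching posts (cached 3600s) and comment counts (cached 60s)
// stays cached for 60 seconds as a whole.
const effective = pageRevalidate([3600, 60]) // → 60
```

So a single short-lived request quietly shortens the cache lifetime of the whole page – which is exactly what made my debugging session so confusing.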

Rendering the Comments widget only on the Client

The Giscus comments widget is a client-only Component: it initializes asynchronously on the client. PostTitle and PostBody are rendered on the server, which is now the default. But you can opt in to client-side rendering by adding 'use client' to the top of the file.

// app/posts/[slug]/PostComment.tsx
'use client'

import Giscus from '@giscus/react'

export function PostComments({ discussionNumber }) {
  return (
    <Giscus
      repo="timomeh/timomeh.de"
      mapping="number"
      term={discussionNumber.toString()}
    />
  )
}

Components vs. Rendering

With Server- and Client-Components, it’s easy to think that Server-Components are cached and Client-Components aren’t – after all, “they are only on the Client”, aren’t they?

But pre-rendering is still a thing, and server-rendering didn’t replace it. Client-Components are still pre-rendered and hydrated by default.

I think this can be a bit confusing in the beginning? At least I first had to wrap my head around it. There’s a matrix of different Components types and Rendering modes, and then there’s also Caching mixed into it. It certainly introduces a level of complexity into the mental model, not into the code itself.

Conclusion

For smaller projects, go ahead and use the new App Directory. I wouldn’t recommend blindly migrating an important production app, since it’s still in beta and there are some bugs and missing pieces. But all in all, I’m a huge fan of those new features. It reimagines how we write React Apps quite a bit.

]]>
<![CDATA[What belongs into .gitignore (and what doesn't)]]> https://timomeh.de/posts/what-belongs-into-gitignore?utm_source=rss https://timomeh.de/posts/what-belongs-into-gitignore Mon, 26 Dec 2022 16:51:00 GMT There’s a high chance that your .gitignore file includes a bunch of stuff which doesn’t belong in there. In my opinion, it should only include paths which belong to your software project. For example build artifacts, 3rd party packages, and specifics to the language or runtime of your software.

You’ll also often find paths for Editors and Operating Systems inside .gitignore. Let’s say you’re using VS Code on macOS, so you added this to your .gitignore:

.vscode
.DS_Store

The next developer joining the project uses IntelliJ on Windows, so they update the .gitignore to this:

.vscode
.DS_Store
.idea
Thumbs.db

The neat thing about software projects is that they’re completely independent from your local setup. You can use whatever Operating System and Editor you want. So why do we break this by adding details about our local environments into the project?

Define a global .gitignore

Of course Git has a solution for this. You can define a global gitignore file, which applies only locally, just for you.

To do this, create a new file ~/.config/git/ignore and add all paths specific to your local setup into it. For VS Code on macOS, it could look like this:

.vscode
.DS_Store

And you’re done. It will be applied to all projects, in addition to each project’s own .gitignore.
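If you want to double-check that Git actually picks the file up, you can do a dry run in a throwaway repo. This is a sketch – git check-ignore is a real command, but the explicit core.excludesFile setting is only needed when you deviate from the default ~/.config/git/ignore location, and the paths here are examples:

```shell
dir=$(mktemp -d)
printf '.DS_Store\n.vscode\n' > "$dir/global-ignore"
cd "$dir" && git init -q demo && cd demo
# Point this repo at the ignore file (for your real setup you'd use --global)
git config core.excludesFile "$dir/global-ignore"
touch .DS_Store
git check-ignore .DS_Store   # prints ".DS_Store" – the global file applies
```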

At the time of writing, I’m using asdf which has a .tool-versions file, so I can specify which Node.js version I want in a specific project. Of course not everyone is using asdf, so I simply added it to my global gitignore.

Un-ignoring still works

Let’s say your team decided to share some common VS Code configurations, but your global gitignore already contains the .vscode folder, so it won’t be checked in. No worries, overrides still work – only for this project, and not for all. One quirk: Git can’t re-include files inside an ignored directory, so the project’s .gitignore first has to un-ignore the directory itself, then re-ignore its contents, and then un-ignore the specific files:

!.vscode
.vscode/*
!.vscode/tasks.json
!.vscode/launch.json

It’s their fault

A lot of auto-generators and project templates come with a .gitignore file which already contains such common paths for local setups. This might make it more beginner-friendly, but I think it also makes it look like this is the way to deal with those files. It’s not.

]]>
<![CDATA[Everything is a CMS – Building a low-maintenance website for a low-budget trash movie]]> https://timomeh.de/posts/making-of-pottoriginale-de?utm_source=rss https://timomeh.de/posts/making-of-pottoriginale-de Mon, 28 Nov 2022 21:15:00 GMT A few months ago, an email from my good friend Gerrit showed up in my inbox. Gerrit had made a new “Pottoriginale” movie, and he needed a website for his film tour. I knew it would only take me an evening, I had the feeling I’d enjoy the challenge, and I knew I had some time in the coming week. And a few days later, pottoriginale.de was live.

Screenshot of pottoriginale.de

I built a few websites for Gerrit many years ago, so I know what he needs. Gerrit is very good at providing essential info without being asked, which makes the process very nice.

i have 2 posters. + new dates. + a poster for the aftershow party of the premiere.
see here [file sharing link]
tickets for the aftershow party are available here [link]
would be epic to have the trailer on the page [youtube link]
tour dates:
[…]
Director, Idea, Producer: Gerrit Starczewski
Camera: […]
press inquiries, theater inquiries etc [mail]
hugs and many thanks

Everything else – for example the actual design – is a nice greenfield project, and he’s always very grateful for everything you do for him. I took some inspiration from the movie poster: colors, fonts, and for example the red banner at the bottom.

Because I know Gerrit, I also know that he wants his website to include up-to-date info. In the end, it’s a website for a movie, and of course there will be new tour dates. Does this mean I need a database and a server? I just want to build a website!

I excel at choosing a CMS

Tour dates need to be updated frequently. I don’t want to make changes to code and deploy a new version just to insert a new date. It should be as easy as possible for both me and Gerrit to update the dates. Because if, for some reason, I can’t make changes anymore, Gerrit should still be able to add new dates. [Footnote: I’m always trying to minimize the bus factor] I don’t want a situation where someone’s career depends on my availability.

So… what’s a free tool, where you can enter structured data, for which everyone already has an account for, is always available, needs no explanation, and where you can restrict who has access to it? ✨ Google Sheets

Screenshot of Tour Dates in Google Sheets

Yes, I use Google Sheets as a CMS for pottoriginale.de. This is free real estate. The site uses Next.js’ Incremental Static Regeneration, it’s cached, and it works very similarly to how I built my blog. And you know the best thing about it? It just works.

It works like a charm. I told Gerrit that if he wants new dates on the site, he should just shoot me an email and it only takes me a few seconds to update it. I shared the Google Sheet with him and told him that, in case I’m not available, he can feel free to add new dates to the Sheet himself. There’s a link inside the Sheet you can’t miss, clearly labeled to click after making changes (that link invalidates the site’s cache).

Every time Gerrit sends me an email with a new tour date, it just takes a few seconds to do and it’s actually fun. It makes me very happy to see that he gets more and more screenings of his film.

Lesson learned: don’t restrict yourself

A few days later, Gerrit emailed me:

New date
6.10 GELSENKIRCHEN SCHAUBURG
Tickets soon

The date had no link for the ticket presale yet, it should show “Tickets soon”. But I was an idiot, because I did what every developer loves to do, and built an unnecessary restriction into my code: if a row is incomplete, it is ignored. I thought I was smart. I thought that “in case I accidentally write garbled data into the sheet, it won’t break”. But why would I do that? What a stupid decision. There was no reason to restrict myself; it just doesn’t make sense here.

Easy change, I removed that restriction, and added a “Tickets soon” banner for when there’s no link yet.

List of Events with "Tickets soon" banner
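A sketch of what the change boils down to – the row layout and all names here are my guess, not the actual code:

```typescript
// Hypothetical row from the sheet: [date, city, venue, ticketUrl?]
type TourDate = { date: string; city: string; venue: string; ticketUrl?: string }

// Old version: incomplete rows were ignored. New version: keep every
// row – a missing link just means the "Tickets soon" banner is shown.
function parseRow(cells: string[]): TourDate {
  const [date, city, venue, ticketUrl] = cells
  return { date, city, venue, ticketUrl: ticketUrl || undefined }
}

const row = parseRow(['6.10', 'GELSENKIRCHEN', 'SCHAUBURG'])
// row.ticketUrl is undefined → render "Tickets soon"
```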

Next time, I would probably go even further and make everything configurable:

  • If there’s no link, just don’t link it.
  • Make a column for the banner text. Instead of baking “Sold out” and “Tickets soon” hard into the code, just make it completely configurable for each date.
  • Maybe even make the “Tickets” text configurable. Because why not, it won’t hurt, it’s just a row in a spreadsheet! I’m a big boy, I can type.

Lesson learned: Optimizing Websites for TikTok & Instagram Stories

After watching some Pottoriginale Instagram Stories, I noticed that Gerrit is posting screenshots of the website’s section with the upcoming dates, to update his followers on upcoming screenings in or near their city. It blew me away because I never thought of it, yet it makes so much sense. It’s easy for him to do and the screenshots are of course already in the mobile form factor.

Instagram Story with Events List from the website

Next time, I will keep the screenshot-ability for social media in mind. For every section. It’s such a good life hack. And it also encourages others to post screenshots, if the screenshots just look nice.

Some ‘00 design in ‘22

I’m not the best at “designing while you code”, it’s just not a process I like. I also didn’t want to spend a lot of time on an actual design, because I wanted to do everything in one evening. On top of that, I don’t have any practice anymore in actually sitting in front of a design app and designing something. So, even if it’s not my favorite and most productive workflow, I designed it while I coded it.

It was a fun process regardless, because I can do whatever. It’s a website for a trashy film, so I have the chance to make a fittingly trashy looking website. Everything just happened while I did it. Why not put a good old tile table on a website? When do you get the chance to do that?

A tile table on a website

Why not make everything jump and move, get some motion in there? Crediting people – why not in the form of scrolling end credits? Taking the design inspiration from the poster and using the colorful scrolling banners as section separators worked great, in my humble opinion.

All in all, it was a lot of fun. In case I didn’t link to it often enough, you can see the result on pottoriginale.de.

]]>
<![CDATA[How I organize myself (without Productivity Tools)]]> https://timomeh.de/posts/organization-without-productivity-tools?utm_source=rss https://timomeh.de/posts/organization-without-productivity-tools Sat, 12 Nov 2022 17:15:00 GMT I’ve always found Productivity Apps ✨ cool ✨. Nice-looking Apps for Todos and Notes, as a single source of truth for everything going on in my life. But I learned that I don’t work like that. Those Apps always felt wrong. I’m very organized without them. For some reason, I’m still envious of people when they show how they use some App to organize their life, but I don’t try them out anymore.

I mainly use Emails, my Desktop, Calendar, and Post-its. I know it sounds ridiculously oldschool, but it works pretty well for me. Let me explain:

Emails

Everything in my Email Inbox is something I need to do, and when it’s done, the Email is archived. Some Emails only need a reply, and for other Emails I need to do something before I can reply. Some Emails can stay in my Inbox for weeks, until I can finally respond and archive them. Even when it’s an ongoing task or discussion: as soon as I cannot actively do something and I need to wait for a reply, it’s going to the archive.

This is what I use most of the time. When an online service can send me Email Notifications for things I need to act on, I’ll usually activate them – for example GitHub Pull Request Reviews. Sometimes I also send mails to myself.

Desktop

Similar to how I use Emails, but for longer-running tasks. For example doing taxes, invoices, contracts, collecting documents before I can send them off. I’ll just put them on my Desktop until it’s finished and can be sent off via Email or moved to my Google Drive. My goal is to keep my desktop clean. This often plays together with my Emails where I need to collect documents before I can reply.

For physical documents, I’ll scan them and put them on my Desktop.

Calendar

Easy, just to remind me of my schedule, with the addition that I also use it as task reminder when something needs to be done in a timeframe. For example driving my car to the garage before noon is a calendar event from 8:00 - 12:00. Booking train tickets can be a calendar event spanning a few days.

This plays together with my Emails and Desktop: When I buy concert tickets, I download them to my desktop, create a calendar event, and attach them. For business travels, when someone sends me train tickets and a hotel booking, I’ll create a calendar event and attach it.

Post-its

Smaller things don’t fit into the above categories. When I want to remind myself of a task which needs to be done in a few hours or tomorrow, I’ll write it on a sticky note and stick it onto my desk. It often happens during meetings or Slack messages.

Scratch that Itch

Now that I finally wrote this down and thought a little bit more about it, I noticed that all of those places have one thing in common: they make me feel uncomfortable. I don’t like Emails in my Inbox, files on my desktop, notes on my desk, or events in my calendar. With each day, the discomfort increases to finally “clean up”.

Nobody likes doing tedious tasks, and for me, a fancy app would soften the discomfort of unfinished tasks too much. I would feel less of a necessity to actually do them. Maybe this reasoning sounds crazy, like I’m a huge fan of feeling uncomfortable. It’s not that bad. I can relax even when I have some Emails in my Inbox. But I feel even better when they’re finally done.

]]>
<![CDATA[Small changes are the most stressful]]> https://timomeh.de/posts/small-changes-are-stressful?utm_source=rss https://timomeh.de/posts/small-changes-are-stressful Mon, 10 Oct 2022 10:52:00 GMT Last week I got a new sofa, and it was one of the most stressful changes I’ve had in a long time.

Big changes are not an issue for me. 10 years ago, when I moved into my very first apartment and lived alone for the first time: no worries at all. It was exciting!

But I struggled so hard with buying a new sofa. It took me a year to get myself to buy a new one. It’s big and comfortable, looks nice, it’s exactly what I wanted. But all the consequences of that change:

  • I had to move all the furniture in my office, so my old sofa fits in there. Getting rid of my old sofa wasn’t an option. But now I’m sitting in a different direction on my desk. It just feels wrong. The room looks nice and cozy, but it’s so different. My mic boom arm now doesn’t fold away easily because it bangs against the wall. Do I also need to move that thing? So many changes!
  • I always folded my laundry on top of the backrest of my old sofa. It was nice. I could watch TV while folding up my tees. But I cannot do that anymore with my new sofa.

Changing everything? Not an issue.

Changing a small minor detail? Oof.

]]>
<![CDATA[Of Enums and Booleans]]> https://timomeh.de/posts/of-enums-and-booleans?utm_source=rss https://timomeh.de/posts/of-enums-and-booleans Wed, 14 Sep 2022 10:51:00 GMT In programming, we often deal with things being in a state. Users being verified, Modals being open, Posts being published. Yesterday I had an interesting chat with Daniel about rules to decide when you should store state as an enum [Footnote: ”enum” here means an enumerated type, consisting of a set of different values] or as a boolean. We both know that it’s always better to use enums instead of booleans. It’s common advice and nothing new: just search the internet and you’ll find countless articles recommending the use of enums.

But how can we tell that a boolean should rather be an enum?

Having a rule for such decisions not only helps you to write consistently good code – it helps you a lot in Code Reviews. “I think this boolean should be an enum instead” could be a good code improvement, but it’s not a good suggestion because it doesn’t contain an objective reason why it would be better.

For example, when deciding if a value should be a number or a string, there’s an easy rule: if it makes sense to increment or decrement the value, it’s a number. If not, it’s a string which happens to only consist of numbers.

The Rule if State is an Enum or a Boolean

In short: A boolean state is the answer to a question that you ask an enum. It can also be the answer to a combination of different values, and not only enums. But if you don’t have an enum to ask that question, there can’t be an answer, so you need an enum.

When differentiating between different states of a thing, you ask that thing “What are you?”. The answer is always an enum. You cannot answer “What are you?” with “yes”. But you can ask this enum “Are you […]?”, which you can answer with “yes”.

For example, a user could be verified and unverified, and also maybe blocked and deleted. [Footnote: This is only an example for illustration. User states are often much more complex in the real world] You can ask this enum:

  • Are you verified? When it’s verified
  • Are you unverified? When it’s unverified
  • Are you able to login at all? When it’s not blocked or deleted

Let’s say you’ve modelled this as booleans isVerified and isUnverified. Managing both of them independently from each other – not by asking an enum – can lead to both being true, which is an impossible state.

You can never go wrong with storing state in an enum, even if your enum only contains 2 values. It might contain more values in the future, or it might not. But no matter what, your code stays nice and readable, and will have less cognitive load. [Footnote: For example: !user.is_unverified requires you to think for a short moment. The negation ! is far away from the word “unverified”, it doesn’t read well from left to right, and also contains a double negation]
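To make the user example concrete, here’s a small TypeScript sketch – the type and function names are mine, for illustration:

```typescript
// Two independent booleans can represent an impossible state:
type Flags = { isVerified: boolean; isUnverified: boolean }
const broken: Flags = { isVerified: true, isUnverified: true } // compiles fine

// A single enum-like union makes that state unrepresentable:
type UserStatus = 'verified' | 'unverified' | 'blocked' | 'deleted'

// "Are you able to login at all?" – a boolean answer to a question
// asked of the enum
function canLogin(status: UserStatus): boolean {
  return status !== 'blocked' && status !== 'deleted'
}
```

The compiler now rejects any state outside those four values, and the boolean only ever exists as an answer, never as stored state.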

]]>
<![CDATA[Robin Rendle: "Take Care of Your Blog"]]> https://timomeh.de/posts/robin-rendle-take-care-of-your-blog?utm_source=rss https://timomeh.de/posts/robin-rendle-take-care-of-your-blog Mon, 12 Sep 2022 13:12:00 GMT

In his blog post Take Care of Your Blog, Robin Rendle wrote some career advice:

Blog your heart! Blog about something you’ve learned, blog about something you’re interested in. Blog about cameras or HTML or that one browser bug you’ve noticed this morning […]

Ignore the analytics and the retweets though. There will be lonely, barren years of no one looking at your work. There will be blog posts that you adore that no one reads and there’ll be blog posts you spit out in ten minutes that take the internet by storm.

That’s exactly the philosophy I want to follow here. And that’s also the reason why I focused on building a blog which allows me to publish content with as little friction as possible. [Footnote: shameless plug to my first post: How to Build a Blog]

Chris’ blog inspired me to finally start blogging again. His blog is a mixture of smaller and larger development topics, mixed with other thoughts and stories from his life. Sometimes I messaged him with some small development thoughts or things I learned that day, and he often responded with “that would be a nice little blog post”.

And he’s right! You just need to overcome the thoughts “that’s not worthy of publishing” and “nobody cares”. At least I had to overcome them.

]]>
<![CDATA[The Curse of the astray Service Worker]]> https://timomeh.de/posts/astray-service-worker?utm_source=rss https://timomeh.de/posts/astray-service-worker Sun, 11 Sep 2022 12:48:00 GMT When I published my new site last week, I still saw my old site. Initially I thought it was just DNS, but after a short while it dawned on me that my old site used a Service Worker which cached everything and made it available offline. When you visited my old site, it automatically installed the Service Worker and continued to serve the site from its cache. And now, when you visited my new site, it still served my old one. Great.

My old site used Gatsby, and while browsing through Gatsby’s plugins, I saw a plugin to make it offline available and I just installed it, because why not. It sounded cool. I had no real experience with those Service Workers. After using it for a while, I didn’t really like the User Experience of it. Every time I published a new post, it still showed old posts with a banner “New content available, please reload”. But I also kept it because it didn’t bother me too much.

I could just go into the Dev Tools and unregister the Service Worker myself, but that would only solve it for me. I had no real experience with the Service Worker Cache API, so I had no idea where and how to start to fix this. I first thought that I need to add some JavaScript into the old site’s HTML, but of course that didn’t make sense, because that’s the cached file I cannot update anymore.

How to unregister an old Service Worker

In the end, the solution was kinda easy. The old Service Worker from this Gatsby plugin was automatically created under timomeh.de/sw.js. I simply created a new sw.js with some code to unregister itself which I found in this StackOverflow answer.

self.addEventListener('install', function (e) {
  // Don't wait for old clients to close – activate immediately
  self.skipWaiting()
})

self.addEventListener('activate', function (e) {
  // Unregister this Service Worker, then reload every open tab
  // so it fetches the fresh, uncached site
  self.registration
    .unregister()
    .then(function () {
      return self.clients.matchAll()
    })
    .then(function (clients) {
      clients.forEach((client) => client.navigate(client.url))
    })
})

For users who still had the Service Worker installed, the browser refetches the sw.js file and runs it, the worker unregisters itself, the page reloads, and the new site becomes visible. No Service Worker anymore.

Things I’m still unsure about

  • Would this problem have fixed itself after some time if I never created this new sw.js file? I’ve read a few discussions saying that Service Workers update themselves every ~24 hours, but would a missing sw.js also cause the worker to unregister itself after 24 hours?
  • My gut feeling is: no, because that would make the offline-available aspect not so useful anymore.
  • If it never unregisters itself, isn’t this kinda bad? For example, you buy a domain from someone who used a Service Worker, or some malicious code got injected into your page and you cleaned it up: there could still be code running in the background for some users, which you don’t even know about.
  • Even if it unregisters itself after some time: while it’s still registered and you don’t know about it, it could do pretty bad things.
]]>
<![CDATA[Mastering git rebase --onto]]> https://timomeh.de/posts/mastering-git-rebase-onto?utm_source=rss https://timomeh.de/posts/mastering-git-rebase-onto Thu, 08 Sep 2022 00:41:00 GMT git rebase --onto is one of my absolute favorite git features. I don’t use it every day, but when I use it, it’s super helpful. For all those situations where you branched off a branch before it got merged, and then you need to rebase your branch onto main without handling dozens of conflicts.

A very common scenario to use git rebase --onto is when you created a branch (feature-b) from another feature branch (feature-a), and feature-a got merged in the meantime. When you use a regular rebase, without --onto, git will go with you step by step through every commit, and most likely will tell you that there are a lot of conflicts to resolve. When using --onto, it’s a single command without any conflicts.

This scenario looks like this:

feature-a branching from main and then merging back into main. feature-b branching from feature-a

You now want feature-b to be branched from main, so the history looks like you never even branched from feature-a. Make sure your local git is up-to-date (main, feature-a and feature-b are fetched). Then run:

git rebase --onto I F feature-b
  • I is the new parent commit of the branch feature-b
  • F is the old parent commit of the branch feature-b
  • feature-b is … feature-b

After that, there shouldn’t be any conflicts, and you are immediately in your desired state. Force push feature-b and you’re done.

a linear main branch with feature-b branching off at the end

This is very helpful when feature-a is a Pull Request which is currently being reviewed, and before it’s merged you want to start working on a new feature which depends on feature-a. After feature-a is reviewed and merged, you want to create a new Pull Request for feature-b.

Personal Note on waiting for Code Reviews

Code Reviews have one of the highest priorities in a team. When you’re assigned to a review, finish your last thoughts or the few lines you’re currently writing – which should only take a few more minutes – and then start the review. Code that sits there waiting for a review is a lost opportunity to provide immediate user and business value. If you’re constantly blocked and have to wait many hours or even days until your code is reviewed, be assured that this isn’t how it should be. Talk to your team. Send them this paragraph. The only reason not to start a review immediately is that you’re currently not working, or that you’re working on a highly urgent task, in which case someone else should be assigned to the review.

How --onto works

The example above is far from the only reason why you should know about --onto. In general, --onto lets you select any range of commits and move it on top of any other commit.

A second useful scenario, and good example to see how it really works, is removing previous commits. In the example below, we want to remove D and E from our feature branch.

a linear branch history with commits A to H, with the range F to H highlighted

We do this by selecting the range from commit F to H and moving it onto C. To select the range F to H, we use the commit sha just before the range starts (which is E) and the last commit sha of the range (H).

git rebase --onto C E H

If we run this, C becomes the new parent of F, and D and E are detached from the branch’s history – effectively deleting them.

a linear branch history with commits A to H, but D and E are gone

Use --onto with only 2 arguments

The last argument is optional – in the example above, H. If you omit it, git uses HEAD by default. If your HEAD already was H before you ran the command above, you could’ve also used the shorter version:

git rebase --onto C E
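Assuming git is installed, the removal trick can be tried end-to-end in a throwaway repo. This hypothetical demo uses five commits A to E and drops C and D by moving E onto B, using the shorter two-argument form (B is HEAD~3, and D, the commit just before the range, is HEAD~1):

```shell
set -e

# Throwaway demo repo with commits A..E on one branch
dir=$(mktemp -d)
cd "$dir"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name demo

for c in A B C D E; do
  echo "$c" > "$c.txt"
  git add "$c.txt"
  git commit -qm "$c"
done

# Move the range after D (just E) onto B, dropping C and D
git rebase --onto HEAD~3 HEAD~1

git log --format=%s  # prints E, then B, then A
```

Since E only adds its own file, the replay goes through without conflicts, just like in the example above.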

Back to the first example

Looking back at the first example (with our feature-b branched off from feature-a), we now know why this works. The command we ran was:

git rebase --onto I F feature-b
  • We mark the start of the range by using F – the commit just before the first commit we want to move.
  • We select the last commit in the range by using the branch name feature-b directly.
  • With this range selected, we put it onto I.
feature-a branching from main and then merging back into main. feature-b branching from feature-a. feature-b is highlighted

  • If our HEAD already is the latest feature-b, we could’ve omitted it.
  • If our main branch is up-to-date and I is the head of main, we could also write main instead.
  • If we have feature-a checked out locally and its head is F, we could also write feature-a instead.

If all of this is the case, we can simply run:

git rebase --onto main feature-a

And now I’ll never forget how this works.

]]>
<![CDATA[How I Built this Blog with GitHub Discussions]]> https://timomeh.de/posts/how-i-built-this-blog?utm_source=rss https://timomeh.de/posts/how-i-built-this-blog Sun, 04 Sep 2022 21:58:00 GMT In my last post I covered my decision to use GitHub Discussions as CMS for my posts. I’m going to build it using Next.js, because that’s what I already use a lot, and it has some features which will become quite useful. Let’s get cracking!

Update March 2024: A lot has changed in the meantime, and now I use Next.js’ App Router and React Server Components for server-side rendering. A few things have stayed the same though: I still use GitHub Discussions in the same way (now even with labels) and still revalidate posts via webhooks.
If you are interested in the current implementation, check out the repo: github.com/timomeh/timomeh.de

Update November 2024: I no longer use GitHub Discussions, haha whoops. I’m now using Keystatic, which still stores content on GitHub, but as Markdown files in a private repository. Keystatic has an even better writing experience directly in my browser.


Public Repo, Private Posts

Strangers shouldn’t be able to open a new Discussion and poof, it’s visible on my site. That’s my free real estate, not theirs. You could make your Repository private, but I would like it to be public. This way, the Discussion comments could actually be used as comment section.

That’s where GitHub’s “Announcement” format comes in handy. When you create a Discussion category using the Announcement format, only maintainers and admins (that’s me!) can create a new Discussion in this category. Others can still comment on those posts. I created two of those categories, one for actual posts and one for drafts and ideas.

You could also use Issues and a “published” Label to achieve the same security, because only maintainers can set Labels on Issues. Contrary to Issues, Discussion comments are threaded, which I figured could be pretty cool.

The Problem with Slugs

Like Issues and Pull Requests, Discussions use numbers as user-facing identifier, and don’t have native support for slugs. I want URLs with slugs like /posts/how-to-build-a-blog, not /posts/2. So I’m simply using the Discussion’s title as slug. As post title, I’m using the first heading in the actual post body.

You can see it in action on GitHub.

screenshot of GitHub Discussions where the discussion title is a slug called how-to-build-a-blog and the Discussions first Headline is the sentence "How to build a Blog"

Loading Posts via the API

GitHub’s API allows unauthenticated requests to public data with a rate limit of 60 requests per hour and IP. That’s not a lot. But with a personal Access Token, the rate limit is 5000 requests per hour. That should be enough. I configured the Access Token to only have access to public data, which makes the attack vector pretty insignificant if the token ever leaks.

GitHub Discussions aren’t available in their REST API. It’s only in their GraphQL API. While I’m not the biggest fan of GraphQL, it’s also not a problem.
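The snippets below call an api(query, variables) helper. A minimal sketch of what such a helper could look like – assuming Node 18+ with a global fetch and the Access Token in an environment variable (the function name and error handling are my assumptions, not necessarily what’s in my repo):

```javascript
// Minimal GraphQL client for GitHub's API.
// Assumes the Access Token lives in process.env.GITHUB_TOKEN.
async function api(query, variables) {
  const res = await fetch('https://api.github.com/graphql', {
    method: 'POST',
    headers: {
      Authorization: `bearer ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ query, variables }),
  })

  if (!res.ok) {
    throw new Error(`GitHub API responded with ${res.status}`)
  }

  // GraphQL reports errors in the response body, even with HTTP 200
  const { data, errors } = await res.json()
  if (errors) {
    throw new Error(errors.map((e) => e.message).join('; '))
  }

  return data
}
```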

Listing all Posts

… is quite straightforward. You can list up to 100 discussions, filtered by the category ID and sorted by creation date. It will take a few days until I have 100 posts, but I love implementing pagination because it’s one of the only reasons to use do...while loops.
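A hedged sketch of that pagination loop, using the discussions connection from GitHub’s GraphQL schema. The api client is passed in as a parameter here to keep the function easy to test; the owner, repo and category values are placeholders:

```javascript
// Query shape follows GitHub's GraphQL schema: the discussions
// connection supports first/after cursors and a categoryId filter.
const LIST_QUERY = `query posts($owner: String!, $repo: String!, $category: ID, $cursor: String) {
  repository(owner: $owner, name: $repo) {
    discussions(
      first: 100
      after: $cursor
      categoryId: $category
      orderBy: { field: CREATED_AT, direction: DESC }
    ) {
      pageInfo { hasNextPage endCursor }
      nodes { title createdAt body number }
    }
  }
}`

async function listAllPosts(api, { owner, repo, category }) {
  const posts = []
  let cursor = null
  let hasNextPage = false

  // fetch pages of up to 100 discussions until there are none left
  do {
    const { repository } = await api(LIST_QUERY, { owner, repo, category, cursor })
    const { nodes, pageInfo } = repository.discussions
    posts.push(...nodes)
    cursor = pageInfo.endCursor
    hasNextPage = pageInfo.hasNextPage
  } while (hasNextPage)

  return posts
}
```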

Fetching a Single Post

Retrieving a single post is the real fun part. Because this is where the next problem with slugs emerges: there’s no API endpoint to get a single Discussion by its exact title, where I store the slug.

My first idea was to just list all posts, which I already do on the List page, and then filter them in JavaScript by an exact title match. I decided on a different solution, but I still think this would have been totally okay. I won’t have thousands of posts, and computers are fast.

My other idea, and what I’m using now, is to use GitHub’s Search API. It allows you to find Discussions whose title includes a search term. I found some people online saying that the search results aren’t reliable, but for me it has worked without any issues.

This means it could return multiple results when the slugs are very similar, like my-post and my-post-part-2. But again, we can just find the exact title match in JavaScript.

async function getBlogPost(slug) {
  const { search } = await api(
    `query postBySlug($search: String!) {
      search(
        query: $search
        type: DISCUSSION
        first: 100
      ) {
        edges {
          node {
            ... on Discussion {
              title
              createdAt
              body
              number
              category {
                id
              }
            }
          }
        }
      }
    }`,
    { search: `"${slug}" in:title repo:${owner}/${repo}` }
  )

  const post = search.edges.find((result) => {
    const discussion = result.node
    const isMatchingSlug = discussion.title === slug
    const isPostsCategory = discussion.category.id === POSTS_CATEGORY_ID
    return isMatchingSlug && isPostsCategory
  })?.node

  return post
}

Side note: You see those node and edges in the code snippet above? That’s one reason why I’m not a big fan of GitHub’s GraphQL API.

I’ll spare you the details of turning the Markdown body into HTML using remark and rehype, because I hated it. But if you’re interested, everything related to data fetching and transforming is located in the lib/blog.ts file in my repo.

Silly Feature: Custom Published Date

If we can find Discussions which include the slug in their title, we could in theory encode more information into the title. Nothing crazy that would conflict with the slug itself, but I added a small feature: you can optionally append a custom published date to the title, using the format some-slug YYYY-MM-DD.

I could use this if I ever want to show a different published date than the actual Discussion’s creation date. Maybe it will come in handy some day.
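The parsing itself can be as small as a single regex. A sketch of what a parseDiscussionTitle helper could look like (the exact implementation in my repo may differ):

```javascript
// Splits an optional trailing YYYY-MM-DD date off the Discussion
// title; everything before it is the slug.
function parseDiscussionTitle(title) {
  const match = title.trim().match(/^(.*?)(?:\s+(\d{4}-\d{2}-\d{2}))?$/)
  return {
    slug: match[1],
    publishedAt: match[2] ? new Date(match[2]) : null,
  }
}
```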

Caching is Next.js’ Best Friend

One thing I hated about my last site was that I had to rebuild it for every change. I don’t want to deal with that anymore. But it also would be unnecessary to call GitHub’s API on every request. The site would be slow, unresponsive when GitHub’s API is flaky, and a chaos monkey could flood me with requests and exceed my rate limit.

Next.js’ Incremental Static Regeneration to the rescue: it generates the page on the server and caches it, so subsequent requests are served from the cache.

// posts/[slug].ts

export async function getStaticPaths() {
  return {
    paths: [], // don't pre-render anything during build time
    fallback: 'blocking',
  }
}

export async function getStaticProps(context) {
  const post = await getBlogPost(context.params.slug)

  if (!post) {
    return { notFound: true }
  }

  return {
    props: { post },
    revalidate: false // cache it forever (default)
  }
}

You could set revalidate: 60 and Next.js would refetch the data if the cache is older than 60 seconds – or any cache time you want to set. But we can do better.

Using Webhooks to Regenerate Pages On Demand

On-demand Revalidation is a Next.js feature on top of Incremental Static Regeneration, which allows you to regenerate a specific page only when it’s necessary.

All you need to do is to create an API Route, run res.revalidate(path), and let GitHub send a Webhook to this API Route every time a Discussion is created or updated. All affected pages will immediately be regenerated, the site is fast, and you only make a few requests every now and then. API Rate Limits won’t be an issue at all.

// api/github-webhook.ts

import { Webhooks } from '@octokit/webhooks'

export default async function handler(req, res) {
  const data = req.body

  const webhooks = new Webhooks({
    secret: process.env.GITHUB_WEBHOOK_SECRET,
  })

  // Verify that it's actually GitHub sending the Webhook
  const signature = req.headers['x-hub-signature-256']
  const verified = await webhooks.verify(req.body, signature)
  if (!verified) {
    res.status(401).json({ hint: 'Unverified' })
    return
  }

  // Verify that it's only related to Discussions
  if (!data.discussion) {
    res.status(401).json({ hint: 'Not a discussion event' })
    return
  }

  // Verify that it's the correct Posts Category
  if (data.discussion.category.node_id !== POSTS_CATEGORY_ID) {
    res.status(200).json({ hint: 'Only Blog Post Category allowed.' })
    return
  }

  // New Post: regenerate the List of Posts
  if (data.action === 'created') {
    await res.revalidate('/posts') // [!code highlight]
    res.status(200).json({ hint: 'Revalidated pages' })
    return
  }

  // Updated Post: regenerate the Post and the List of Posts
  if (data.action === 'edited') {
    const { slug } = parseDiscussionTitle(data.discussion.title)
    await res.revalidate('/posts') // [!code highlight]
    await res.revalidate(`/posts/${slug}`) // [!code highlight]
    res.status(200).json({ hint: 'Revalidated pages' })
    return
  }
}

Adding Comments

I didn’t add comments yet, but maybe I’ll add them in the future. giscus looks like the perfect fit to embed the Discussion’s comment section below a post.

Feels Good

I’m really happy with this solution. For the previous post, I fixed some typos on my phone yesterday evening and they were immediately updated on my site. The Next.js site is hosted on Vercel’s free tier, I still have my own minimalistic design, with a free CMS which I don’t need to host, maintain, or update.

You can find all code in my timomeh.de GitHub Repository.

]]>
<![CDATA[How to Build a Blog]]> https://timomeh.de/posts/how-to-build-a-blog?utm_source=rss https://timomeh.de/posts/how-to-build-a-blog Sun, 04 Sep 2022 20:50:00 GMT When I was a teen, I learned how to build WordPress themes. During holidays or when school was out, I vividly remember how I sometimes sat in front of my computer for a few days and nights, creating new themes for my blog. Of course I had no blog, but having one was a cool thought. This stuck with me, and I was never really happy with how my blogs worked. Until now!

Flashback

When I was a teen, I learned how to build WordPress themes. During holidays or when school was out, I vividly remember how I sometimes sat in front of my computer for a few days and nights, and created new themes for my blog. Of course I had no blog, but having one was a cool thought. From designing in Photoshop, to programming it in Notepad++ and learning new things about WordPress, to finally dragging everything into FileZilla, and seeing how FTP slowly published my work to the world. It was a glorious feeling.

My themes always supported nested comments and pingbacks, but I hardly had any traffic on my site.

I must’ve created a dozen themes during this time, from simple designs to art-directed content. But I maybe used half of them to actually publish a few posts. The other half never got used, because I created a new theme before actually writing posts. And I always deleted the old content, because I thought it didn’t match the concept, or just because I found it cringe.

During this time, I enjoyed the process of programming and learning, more than I cared about the content.

Bad Habits stick to you

Fast forward to my time after university. When I finally wanted to publish some actual content, I kept making the same mistake. It was no longer WordPress, but I still implemented solutions for problems I didn’t have.

Yes, it would be cool to build support for interactive code examples and a sticky Table of Contents, displaying the section you’re currently reading, and all just generated from simple Markdown files. But when I wanted to write something, everything I built was useless, and nothing I really wanted and needed was there.

What do I need to write content?

I don’t like to write content in my Code Editor. It’s cool to have Posts and Images tracked in Git, but I hated the process of dragging Images into directories, writing ![](../../images/2020/cat.jpg), committing, pushing, waiting for it to rebuild. Ugh.

  • I want a Textbox inside a Browser. Creating or updating content should be simple and fast.
  • I want a simple “Save” button to publish or update content, without going through Git.
  • When I want to write a 2-sentence Post, it should be published in no time.
  • I like writing Markdown, so that’s okay. I just shouldn’t need my Editor.
  • I want a Preview. When I proofread a Post, I just like to do it in a Preview of the Post.
  • I want to drag an Image from my Desktop into the Browser, it gets uploaded and pasted into the Post, and just works.
  • I don’t want to deal with a database.
  • It should be possible to extend in the future, when needed.
  • I still want to build the UI myself, but no unnecessary features until I really need them.

And most importantly: that’s all I need. I don’t need a setup to write long and complex tech articles with a fancy UI.

Making decisions

My immediate thought was to (ab)use GitHub Issues as CMS. It ticks all my boxes. It’s free real estate. I don’t need to worry about database and hosting (except for worrying about API Rate limits), don’t need to make any backups (maybe I will have a GitHub Action to regularly download images), the content stays available even if I create a completely new site (given I don’t decide to delete the repository), and GitHub is totally never down (Ignoring the 11 Incidents last month. Maybe I need to implement a “GitHub is down, Content may not be up-to-date” banner). And it even has a comment section built in!

The idea of using GitHub Issues as a blog isn’t new. Some research, trying things out, finding some difficulties, looking at other implementations, overthinking it again and telling myself not to overthink it, and a few days later: a blog inside GitHub Discussions was born.

Now I can even use the GitHub App to post from my phone! How futuristic.

Read more about the technical details in Part 2.

]]>