<![CDATA[Developer Blog]]>https://ruwe.dev/https://ruwe.dev/favicon.pngDeveloper Bloghttps://ruwe.dev/Ghost 6.22Sun, 22 Mar 2026 01:20:12 GMT60<![CDATA[The Fairbuds XL weren't what I was expecting]]>Back in 2020, when I was starting with my apprenticeship, I started listening to music more often and wanted to get better headphones instead of those cheap 20CHF ones. So I decided to buy a Sony WH-1000XM3 with active noise canceling.

And wow, they were great. Listening to music for

]]>
https://ruwe.dev/the-fairbuds-xl-werent-what-i-was-expecting/692aa8c38d4d5000014c8668Sat, 29 Nov 2025 10:18:25 GMT

Back in 2020, when I was starting with my apprenticeship, I started listening to music more often and wanted to get better headphones instead of those cheap 20CHF ones. So I decided to buy a Sony WH-1000XM3 with active noise canceling.


And wow, they were great. Listening to music for hours without a cable, with good bass and good noise isolation. Even when sitting on trains or the bus, you barely hear the world around you. Microphone quality is decent too, and you can configure them quite a bit in the app.

But after 5 years of usage and occasional exposure to harsh conditions, such as light snow or a crammed backpack, they have degraded to a point where using them has become difficult. The headband has disintegrated significantly, turning porous and flaking off. Additionally, the "outer headband", which houses the sliding mechanism, has broken, so the slider sometimes comes loose and the headphones fall apart:

So repairing this should be easy … right?

Sadly, Sony engineered their headphones specifically to be hard to repair.

  • No replacement parts are sold for the WH-1000XM3 anymore. This means you'll need to go for unofficial (sometimes toxic; 1, 2) third-party parts.
    Some of these parts are in contact with your skin, making chemical contamination even more likely. They are of worse quality, and some countries may not have access to reliable sources.
  • The headband has been engineered to be very thin, making it very susceptible to breaking from material fatigue. It is roughly 2mm thick - so 2mm is all that keeps your headphones from falling apart. Great.
  • The cable running from both sides is thin and has many incredibly small copper cables inside. Since the cable physically runs through the headband without any option to unplug it, you need to cut the cable and resolder it again after replacing the broken part.
  • Replacing wear-and-tear parts on the headphones requires taking them fully apart, breaking glue seals and forcefully breaking open some of the parts.
  • Sony doesn't provide any repair manuals; only iFixit offers unofficial guides, and they do not cover the headband.

Nevertheless, I decided to repair them and ordered a replacement band from AliExpress - the only store shipping to Switzerland at a decent price that sells the specific part I need. Not only did the part cost 11CHF, it is also of significantly worse plastic quality than the original. After spending about half a day of my weekend taking the headphones apart step by step and guessing what to do next (no manual), I finally concluded: repairing is too complicated, and another part will likely fail soon anyway. So I used some electrical tape to hold the broken part together and hopefully keep using them.

No headphones, what now?

So after a fruitless repair attempt, I still need headphones. My work headphones do work, but they have very little bass and poor sound quality - plus they're not mine. So which headphones should I buy now?

Searching on Galaxus returns the same bad design over and over again - over-ear headphones with next to no repair options. I would have the exact same issue again in 3-5 years, resulting in yet another pair of headphones in the electronic waste. While Switzerland has decent recycling, much e-waste still harms the planet and ends up in the incineration plant or the landfill. But then, I remembered something.

A few months ago, I actually bought a new phone because my old one had no working camera and the battery was dying fast - A Fairphone!

(Embedded post preview: Switching to the Fairphone Gen. 6)

And I remembered that Fairphone also sells headphones that are more repairable and sustainable. I also like the design: the green colour with pigments of recycled plastic works really well and looks different from other brands. Plus, the brown cable, the joystick controller and the CO2 neutrality sound awesome. And when they were discounted quite a bit (the lowest price ever in Switzerland) a few days before Black Friday, I decided to pull the trigger and get them.


Unboxing

The headphones arrive in a very simple cardboard box.

Besides the headphones, there are also other accessories inside the box.

  • A quick start manual with descriptions for the buttons
  • The headphones themselves
  • A cheap plastic bag to store the headphones in
Left: Storage bag for headphones, Center: Headphones themselves, Right: Quick start manual

Here is the first disappointment: the accessory bag is one of the cheapest, most flimsy-feeling bags I've ever handled. It has no branding besides the small blue tag on the side. But fine, who uses these anyway? So how about the headphones?

The initial feel is good - but definitely a bit clunky. The plastic feels very solid and of good quality - it probably won't degrade soon. The cushions on both ear pads and the head pad are very firm - firmer than on the Sony or the Jabra headphones. They feel very durable and are thicker than on my other headphones.

The brown cable between the two ear cups is actually an off-the-shelf USB-C cable - seriously! It is also quite thick and feels durable, although it kind of gets in the way when folding the headphones. The change of priorities compared to the Sony is immediately apparent: the part that broke on the Sony is monstrous on the Fairbuds XL, made from aluminium and at least 5 times the thickness of the Sony's plastic alternative. The slider mechanism feels less premium but is definitely more rugged - you have to apply quite some force to move the slider, so the force needed to break it will also be higher.

Charging is done via USB-C. They came pre-charged at about 50% but I decided to charge them fully before turning them on.

Turning them on

Some reviews had complained about the sounds or the ANC - so let's try them out. You turn them on by holding down the joystick for a few seconds. Connecting to my Fairphone Gen. 6 was super easy, even faster than on my Sony WH-1000XM3. And after listening to Crab Rave, I can confirm that the sound is slightly worse than the Sony's, but still quite good.

I also downloaded the official app and updated the firmware, so this review reflects the current software.

The voice announcements

This might be nitpicking, but I dislike the voice prompts when turning the headphones on, connecting devices or doing literally anything with them. The voice on the Sony headphones is more neutral and can be changed to different languages; the Fairbuds XL do not offer multiple languages, and the voice cannot be disabled. That is really annoying, and as of this review it cannot be changed in the app either. On the flip side, the boot and shutdown sounds are refreshingly different and cool.

ANC - active noise cancellation

ANC isn't something I use very often - my ears seem to be sensitive to it, and I dislike the side effects it often comes with. The Sony can deal quite well with all kinds of noise, including cars, trains and wind. The Fairbuds XL are definitely worse in this regard: they fail to filter wind noise from the microphones, so the headphones try to cancel non-existent noise, resulting in an unpleasant sound in ANC mode. They do cancel other noise, such as cars and people talking, quite well - as long as there is no wind. With even a little breeze, ANC starts to fail or becomes counter-productive.

With ANC disabled, they subjectively feel more pleasant to me. On average there might be less noise isolation, but there aren't any unpleasant ANC artifacts. The passive isolation of the over-ear cups is enough for most cases, although sound leakage to the outside seems higher on the Fairbuds XL than on the Sony.

Wearing and comfort

This is a double-edged sword. On one side, the Sony have very high contact pressure, meaning they isolate more sound by nature. However, this is also more exhausting for the head and ears, and I feel uncomfortable after wearing them for some time. The Fairbuds XL have less pressure, but also less isolation. This isn't an issue for me, but it might be for you if you run or do sports with your headphones on.


Wearing the Fairbuds XL is quite comfortable, but definitely less customizable, since the cups cannot swivel (turn to accommodate uneven heads). In cold weather they also kept my ears warm, and I wasn't bothered by the USB-C cable or the thick headband. Finding the joystick or the ANC button can be a bit tricky, since the ear cups have two "levels" and my hand tends to go to the wrong one.

Equalizer and playing music

Using the app, the headphones can be set to different profiles, or a custom equalizer can be configured. I had no trouble setting up the app. I disliked the default EQ a bit, so I switched to a custom EQ with all frequencies set to an equal level, which improved the sound quality for me.

There isn't any significant audio delay, but when pausing or skipping to the next track, there is a noticeable reaction delay until the music actually pauses or skips. The headphones seem to use Android's default media controls, so almost every app worked just fine, including Spotify, Symfonium, Floatplane, YouTube and SRF.

One subtle thing I dislike: the volume steps are odd. I always end up listening at either:

  • One level too quiet, where the music is a bit too quiet for my preference.
  • One level too loud, which is weirdly much louder than the step below it.

It seems the volume steps are not evenly spaced, or they are calibrated wrongly. This is not a huge deal, and I mostly end up at the lower volume, since that is healthier for the ears anyway.

Battery and charging

Battery runtime seems to be very good - just as I expected. Charging is fast too, and they have never run out of battery so far - I can use them for multiple days without charging.

Best of all: if the battery eventually degrades, it can easily be replaced by opening a latch on the side, removing the old one and inserting a new one:

© fairphone.com, Battery can easily be removed

And the battery is actually available as an original part. Since Fairphone just released the Fairbuds XL in the US, we can expect them to sell spare parts for quite a while. Sadly, the battery is not yet available on Galaxus in Switzerland, but I believe Galaxus is to blame here - the Swiss market is probably too small to add it to the catalogue.

The thing about the price

So are these headphones just worse than the Sony? Probably yes, but they're also much cheaper. Additionally, Fairphone is committed to a different goal than Sony: fair and sustainable technology.
I believe they did quite well with these, but they definitely seem less polished than newer products such as the Fairphone Gen 6.

Would I buy the Fairbuds XL at the full price of €229 or 160CHF? Probably not. But 99CHF is more than fair (get it? 😄).

The Fairbuds XL are definitely not for everyone - their design is a statement, and you pay for a product you can keep longer than others, not for the best sound quality on the market. But considering the price, I think it is very impressive for a small company to develop a product like this - while making it sustainable and e-waste neutral.

Conclusion

The Fairbuds XL have been a good experience so far. They are less polished than other products but offer decent sound quality at a good price. The joystick and the modularity are innovative and not an empty promise. Most issues I have with them could be fixed via firmware updates, and I expect Fairphone will deliver some in the future. I can't recommend the Fairbuds XL to everyone, but definitely to most. And if you value sustainability and fairness (e.g. no forced labour, fairly sourced resources), the Fairbuds XL might be for you.

ℹ️
All pictures (unless otherwise stated) were taken on my Fairphone Gen 6 with the default camera app.
ℹ️
No AI or generative model was used to write this blog.
]]>
<![CDATA[Switching to the Fairphone Gen. 6]]>A few months ago, my Xiaomi Mi 11, purchased in 2021, started to malfunction and the camera was no longer able to focus properly. Additionally, the battery's capacity decreased significantly over the years, so that I had to recharge between 2–3 times daily. Repair costs in

]]>
https://ruwe.dev/switching-to-the-fairphone-gen-6/68692e2d3c4bea0001e18f93Sat, 05 Jul 2025 16:57:55 GMT

A few months ago, my Xiaomi Mi 11, purchased in 2021, started to malfunction and the camera was no longer able to focus properly. Additionally, the battery's capacity had decreased significantly over the years, so that I had to recharge 2–3 times daily. Repair costs in 2025 would have been between 150CHF and 200CHF (parts alone), slightly more than the market value of the phone. Hence, I decided I needed a new phone - and while I'm at it, why not buy "that new phone" that had been leaked over the last few months?

So I decided to purchase the Fairphone Gen. 6, which I will refer to as the FP6 in the rest of this post, for 529CHF - and I have some interesting thoughts on it.

Design & Modularity

One of the main reasons I was intrigued by the FP5 & FP6 is the modularity. Not only can the battery be user-replaced; there are 12 modules in total that the user can replace. Fairphone also sells 6 accessories at the time of release, although 1 week after the release they cannot be purchased in Switzerland yet.
The insides can be accessed by removing only two screws on the back:


Both the micro SD card and the nano SIM card can easily be accessed at the bottom by sliding out the SIM holder. On the left side there is the microphone, and on the right side the replaceable USB-C port, which can charge at up to 30W - although none of my third-party chargers reached this speed (only 22W), likely because they use older PD and QC protocols.


Sadly, the USB-C port only supports USB 2.0 speeds, reaching up to ~50-60MB/s. This is disappointing if you're looking to transfer files via cable. Personally, I have been transferring files via WiFi 6 since I owned my Xiaomi Mi 11, which is faster than this USB-C port anyway, so it's not a huge issue for me.
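That ~50-60MB/s figure lines up with the USB 2.0 spec: high-speed signalling runs at 480 Mbit/s, so even the theoretical ceiling (before protocol overhead) is only 60 MB/s:

```shell
# USB 2.0 "high speed" signals at 480 Mbit/s.
# Dividing by 8 bits per byte gives the theoretical ceiling in MB/s;
# real-world transfers come in lower due to protocol overhead.
echo $(( 480 / 8 ))   # prints 60
```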

The corner radius is very pleasant in the hand, but the edges are sharp and not rounded. This makes the display look better, but it hurts my hands when using the phone continuously for an extended time.

Software

Until the FP6, I had been using heavily modified versions of Android, such as Samsung's UI or MIUI from Xiaomi. I was never fully happy with either, since both came with bloatware, ugly first-party apps that cannot be uninstalled and a design that doesn't really work for me. Fairphone uses a more stock version of Android, like the Google Pixel devices. Mine shipped with Android 15, and although a software update was available, I was never prompted to install it, nor was it installed automatically.

The only additional app installed is the "My Fairphone" app, where I later registered my phone for the "extended warranty". However, the page that is supposed to show the hardware specifications seems broken at the moment:

During my usage, Android never hung, but I did have some app crashes (more on this later). The experience was smooth, and the high-refresh-rate display is a joy to use, even when switching between apps. Everything feels fluid and optimised, although the adaptive refresh rate sometimes failed to switch properly and stayed at a low frame rate in the launcher, making swiping feel sluggish. Fairphone has publicly acknowledged this issue, and I'd expect it to be fixed by a software update soon.

Lastly, I want to talk about Fairphone Moments, or the Switch. It is this yellow thingy sticking prominently out of the side:


It can be bound to various actions, and at first I didn't expect to use it very often - it seemed rather gimmicky. However, I love that I can quickly switch to "Do Not Disturb" mode with it and only have access to the most essential apps on my device. My only complaint is that the "Fairphone Moments" app doesn't feel well integrated yet - you quickly notice that it's just another Android app opening on top, and with some tricks you can even escape it. When closing (essential) apps, there is also an ugly animation while Fairphone Moments re-launches, making the experience feel less refined and premium. For now, I've configured the Switch to toggle Do Not Disturb mode instead, since the Moments app is not polished enough.

Camera

The Fairphone 6 features two sensors: a 50MP main sensor and a 13MP wide camera. I expected the Fairphone to take pictures at least as good as my older Xiaomi Mi 11, but sadly the camera setup is slightly disappointing.

Main sensor

The main camera looks very good: it has good, natural colours, the dynamic range is decent, and pictures are taken in HDR at a high resolution. There is no vignetting, and the image processing looks good. Here are some test pictures I took:

I am a big fan of high frame rates, so the maximum of 30fps in 4K is disappointing. 1080p@60fps is possible and looks good, but I wouldn't record any serious footage at that resolution. Videos are compressed to 2MP when recording in 1080p, and it really shows - individual pixels are easy to spot. Colours are nice and natural.


Example video of some ducks near Bern, recorded at 1080p@60fps; the resulting compressed file is about 31MB. The audio channel is stereo. Geotagging must be explicitly enabled in the camera app's settings.
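From those numbers, the average bitrate works out to roughly 31 Mbit/s: 31 MB converted to megabits, divided by the 8-second duration:

```shell
# Average video bitrate: file size in megabits divided by duration in seconds
echo $(( 31 * 8 / 8 ))   # prints 31 (Mbit/s)
```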

The camera app offers up to 10x zoom, although anything beyond 2x is not very usable, and beyond 5x it becomes a blurry mess.

Example image of a cat in a field, 10X zoom

The wide lens camera

The wide camera offers up to 0.6x zoom and takes pictures in HDR. The colours look good and natural, maybe slightly on the warmer side. The compressed result is a 10MP image with a wide field of view. Lens distortion is minimal and barely noticeable, and there isn't any major vignetting either.

However, all the images I've taken with this camera so far look overprocessed. They are pixelated and oversharpened, details in the dark get lost, and worst of all, they are grainy. You can immediately tell which camera a picture was taken with:

Left: wide lens, crushed details and overly harsh shadows. Right: main lens, more natural shadows, less grain and pixelation

If we take the same location as for the main camera, the grain becomes very visible, even on smaller screens. Even at the original size (left), the picture looks too processed for me to want to use it for shots I actually care about. The camera crushes all detail in the shadows (right), resulting in a weird paint-like aesthetic. It looks wrong, because in reality the sunlight makes the shadows far less extreme.

Performance

When buying the FP6, I was afraid that performance would be worse, since the Mi 11's Snapdragon 888 had a big GPU and some serious horsepower. But surprisingly, the FP6's Snapdragon 7s Gen 3 performed better in many scenarios, even though it should be less powerful on paper. I suspect this is due to thermal throttling and optimization: the Xiaomi Mi 11 regularly got very hot - once even to the point of damaging itself, so I had to open a warranty case. Playing games or doing anything intensive on the Mi 11 was not a joy.

But when playing on the FP6, the phone itself was way more comfortable to touch due to its lower temperature and weight and the performance was very usable for most games:


3D Mark "Wild Life"

4467 points, 26.75fps


AnTuTu

790574 points (-4% battery, +3°C)

CPU: 263019
GPU: 206115
MEM: 164394


Genshin Impact

Low, 30-60fps


Zenless Zone Zero

Low, 30-60fps

Playing these games on the FP6 is a joy and a lot of fun. They run fluently, and the phone doesn't heat up as much as the Xiaomi Mi 11 (or other phones I've owned) did. Sadly, I experienced regular crashes in Zenless Zone Zero during testing, most often during loading screens or when new parts of the map are loaded. This could be the fault of either party (Fairphone or miHoYo), and I'd expect it to be resolved soon.

Battery

Wow, is it nice to finally no longer need a power bank once or twice a day. The FP6 easily holds enough charge for a full day. On an average day of mixed usage - a few minutes of gaming, taking a few pictures and making a call - I reached 2 hours and 39 minutes of screen time and 12 hours since the last (full) recharge. That's impressive and plenty for my usage.

Screenshot of the battery statistics: about 12 hours of runtime, 2 hours 39 minutes screen time, always-on display enabled, 90Hz

Fairphone rates the battery for 1,000 charge cycles until it drops to 80% capacity, which is average for most phones - but at least I can replace the battery myself in minutes, without removing any glue or buying sketchy parts on AliExpress, as I would have needed to for my Xiaomi Mi 11. And best of all, the battery only costs €39.95, although it is not yet available. Galaxus will likely offer it too for approx. 40CHF, since they already sell the other accessories: https://www.galaxus.ch/en/brand/fairphone-15803?filter=23659%3D7759295

Audio & Connectivity

The audio setup is worse than on my Xiaomi Mi 11. The stereo speakers don't have a lot of punch or bass, but they are fine for watching anime or videos and for listening to voices. In contrast, when making a call, the audio seems clearer than it was on my Xiaomi Mi 11.

My Sony WH-1000XM5 have been working great with the Fairphone, and I've had no issues with quality or connection stability. 5G also works great, and since the phone ships with Android 15, you can finally disable 2G in the settings - you really should.

WLAN performance seems to be marginally better than on my Xiaomi Mi 11, although this could be within the margin of error.

Direction   Xiaomi Mi 11   Fairphone 6
Upload      334 Mbps       518 Mbps
Download    383 Mbps       418 Mbps

Measured using a Ubiquiti AP 6 LR, 5GHz

NFC has also been working great, and I was able to add my cards to Google Pay. Banking apps, such as PostFinance and TWINT, work great too.

The fingerprint sensor works fast for me - no issues.

Verdict

So far, I'm very satisfied with the overall experience of the Fairphone Gen 6. It is fast, has good battery life, doesn't kill the planet, commits to better standards, feels very light and does everything I need my phone to do. It comes at a competitive price for the Swiss market and offers a wide range of repair parts and accessories. Some software features are not fully polished yet and need further work, but Fairphone communicates much more openly than most other companies - I am certain they can patch these small issues.

Bigger disappointments include the camera, the availability of spare parts and accessories at launch and the half-baked Fairphone Moments. I don't think this phone is for everyone, but it works well for me and I'm happy with it. I also agree with the standards Fairphone is setting and appreciate the long software support & lower environmental impact.

Let me know in the comments 👇 whether you'd consider buying and using such a phone - and if not, why not? And if you're looking to purchase it yourself in Switzerland, you can check it out here (no affiliate link):


Fairphone (Gen. 6)

256 GB, Horizon Black, 6.31", SIM + eSIM, 50 Mpx, 5G

Galaxus Page
]]>
<![CDATA[How I automated my KiCad projects using Continuous Integration]]>As a software engineer, I'm used to having fully automated every step of the deployment process and reducing the manual input as much as possible. There is just something so satisfying about clicking a button and watching it doing all the work.

But for my weather station project

]]>
https://ruwe.dev/how-i-automated-my-kicad-projects-using-continuous-integration/67d34fa2bacb060001b48c2cFri, 21 Mar 2025 06:06:14 GMT

As a software engineer, I'm used to having fully automated every step of the deployment process and reducing the manual input as much as possible. There is just something so satisfying about clicking a button and watching it doing all the work.

But for my weather station project, this didn't seem possible. When I started the project, I used KiCad 6 in an unversioned directory on my development machine and just started designing. I frequently made mistakes or broke parts of the schematic, so I needed a way to roll back and to tag releases. This was also important when ordering PCBs - without tags it is hard to link an order to the exact iteration of your KiCad project. So it was time to implement CI for KiCad!
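As a sketch of the tagging idea (the tag name and message here are made up for illustration), an annotated Git tag per ordered revision is enough to link a PCB order back to the exact project state:

```shell
# Hypothetical example: record which project iteration went to the fab.
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Schematic rev A"
# An annotated tag carries a message, e.g. order details and date
git -c user.name=demo -c user.email=demo@example.com \
    tag -a pcb-order-1 -m "Ordered 5 boards from the fab"
git describe --tags   # prints pcb-order-1
```

Later, `git checkout pcb-order-1` restores exactly the files that were sent to the manufacturer.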

All beginnings are rough

Initially, I searched the internet for tools that would help me with this. I wanted a simple pipeline that performs the DRC and ERC and generates images of the schematic, of individual layers and of all layers combined, a BOM (bill of materials), and the production files required to send the project to a manufacturer.

At first, I didn't find any tools that could do this in a headless environment (e.g. on GitHub Actions runners). But then I came across the KiPlot project, which seemed to do exactly what I wanted:


All hopes are quickly shattered, though, when the last commit was 7 years ago - when KiCad 7 wasn't even on the horizon. Soon after, I found KiBot, an actively maintained fork of the original project - and it's compatible with KiCad 7!
And thanks to the maintainers of this project, it even runs in headless environments and doesn't require a dedicated GPU to render images.

Working with Git and KiCad

Next, I needed a Git server to push my version-controlled files to. But would Git even be suitable for this? It turns out the KiCad files are just text files with their own extensions, such as .kicad_sch:

(kicad_sch
	(version 20250114)
	(generator "eeschema")
	(generator_version "9.0")
	(uuid "3e093a46-43d3-4239-969a-ffefe6fd6b27")
	(paper "A4")
	(lib_symbols
		(symbol "Connector:6P6C"
			(pin_names
				(offset 1.016)
			)
			(exclude_from_sim no)
			(in_bom yes)
			(on_board yes)
			(property "Reference" "J"
				(at -5.08 11.43 0)
				(effects
					(font
						(size 1.27 1.27)
					)
					(justify right)
				)
			)

Perfect, so I created a local repository and pushed it to GitHub. Now let's tackle the CI.
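One thing worth doing before the first push is ignoring the files KiCad regenerates on its own. The entries below are a sketch based on KiCad 6+ defaults (backup archives, the footprint info cache, project-local settings and lock files) - check your own project folder for the exact names:

```shell
# Sketch of a .gitignore for a KiCad 6+ project; the entries are
# assumptions based on common KiCad-generated files - verify locally.
cd "$(mktemp -d)"
cat > .gitignore <<'EOF'
*-backups/
fp-info-cache
*.kicad_prl
*.lck
EOF
cat .gitignore
```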

Building the CI

First, it should be noted that there are different variants & versions of KiBot available. They are all based on Debian and only work with the matching major version of KiCad - meaning that a KiBot image for KiCad 7 won't work with KiCad 8.

There are the auto_full images and the auto images. The full images contain additional dependencies, such as LaTeX and Blender, which can be useful if you want certain outputs - more on this later.

To start, I simply copied the GitHub actions workflow in the documentation to my private repository:

name: Run KiBot

on:
  push:
    paths:
    - '**.sch'
    - '**.kicad_pcb'
  pull_request:
    paths:
      - '**.sch'
      - '**.kicad_pcb'

jobs:
  example:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: INTI-CMNB/KiBot@v2_k7
      with:
        config: config.kibot.yaml
        dir: output
        schema: 'schematic.sch'
        board: 'pcb.kicad_pcb'
    - name: upload results
      if: ${{ always() }}
      uses: actions/upload-artifact@v4
      with:
        name: output
        path: output

And because I didn't read the documentation at first, I didn't understand that I also needed to create a configuration file, which defines how KiBot should process the KiCad files and which outputs to generate.

I started by adding a few simple outputs that I definitely need, such as the IBoM:

kiplot:
  version: 1

preflight:
  run_erc: true
  update_xml: true
  run_drc: true
  check_zone_fills: true
  ignore_unconnected: false

global:
  units: millimeters

outputs:
  - name: iBoM
    comment: Interactive HTML BoM
    type: ibom
    dir: assembly/interactive-bill-of-materials/
    options:
      highlight_pin1: true
      checkboxes: "Placed"
      dark_mode: true

Running this workflow starts KiBot and uploads the results as artifacts to the GitHub Actions run - perfect!

Time to add more outputs

Since the basic concept of the workflow now worked, I started adding more outputs to the config:

kiplot:
  version: 1

preflight:
  run_erc: true
  update_xml: true
  run_drc: true
  check_zone_fills: true
  ignore_unconnected: false

global:
  units: millimeters

outputs:
  - name: iBoM
    comment: Interactive HTML BoM
    type: ibom
    dir: assembly/interactive-bill-of-materials/
    options:
      highlight_pin1: true
      checkboxes: "Placed"
      dark_mode: true
+ - name: 'print_sch'
+   comment: "Print schematic (PDF)"
+   type: pdf_sch_print
+   dir: schema/
+   options:
+     output: full_schematic.pdf
+ - name: 'print_sch_svg'
+   type: svg_sch_print
+   dir: schema/images
+   comment: "Generate SVG from schematics"
+   options:
+     monochrome: false
+     all_pages: true
+     background_color: true
+     output: "%I.%x"
+     frame: false # do not include frame and title block
+ - name: 'print_front'
+   comment: "Print layer 1"
+   type: pdf_pcb_print
+   dir: pcb/layers/
+   options:
+     output_name: layer_1.pdf
+   layers:
+     - layer: F.Cu
+ - name: 'print_back'
+   comment: "Print layer 4"
+   type: pdf_pcb_print
+   dir: pcb/layers/
+   options:
+     output_name: layer_2.pdf
+   layers:
+     - layer: B.Cu
+ - name: basic_position_CSV
+   comment: Components position for Pick & Place
+   type: position
+   dir: positions/csv
+   options:
+     format: CSV
+     only_smd: false
+     output: '%i %f_pos_%D.%x'
+ - name: basic_render_3d_0deg_top
+   comment: 3D view from 0 degrees top
+   type: render_3d
+   dir: renders/top
+   output_id: _view
+   options:
+     output: '%f-%i%I%v-0deg.%x'
+     ray_tracing: true
+     rotate_x: 0
+     rotate_z: 0
+     download: true
+     height: 1440
+     width: 2560
+     no_virtual: true
+ - name: basic_render_3d_30deg_top
+   comment: 3D view from 30 degrees top
+   type: render_3d
+   dir: renders/top
+   output_id: _view
+   options:
+     output: '%f-%i%I%v-30deg.%x'
+     ray_tracing: true
+     rotate_x: 3
+     rotate_z: -1
+     download: true
+     height: 1440
+     width: 2560
+     no_virtual: true
+ - name: basic_render_3d_0deg_bottom
+   comment: 3D view from 0 degrees bottom
+   type: render_3d
+   dir: renders/bottom
+   output_id: _view
+   options:
+     output: '%f-%i%I%v-0deg.%x'
+     ray_tracing: true
+     rotate_x: 0
+     rotate_z: 0
+     download: true
+     view: bottom
+     height: 1440
+     width: 2560
+     no_virtual: true
+ - name: basic_render_3d_30deg_bottom
+   comment: 3D view from 30 degrees bottom
+   type: render_3d
+   dir: renders/bottom
+   output_id: _view
+   options:
+     output: '%f-%i%I%v-30deg.%x'
+     ray_tracing: true
+     rotate_x: 3
+     rotate_z: -1
+     download: true
+     view: bottom
+     height: 1440
+     width: 2560
+     no_virtual: true
+ - name: step
+   type: step
+   comment: Export the PCB as a 3D model
+   dir: 3d_models/step
+ - name: position
+   type: position
+   dir: positions/pos
+   comment: Positions for components
+ - name: virtualRealityModelLanguage
+   type: vrml
+   dir: 3d_models/vrml
+   comment: VRML 3d model
+   options:
+     download: true

This created the following file tree:

  • schema/
    • images.png
    • full_schematic.pdf
    • images/
      • sheet-1.svg
      • sheet-2.svg
  • 3d_models/
    • step/
      • model-3D.step
    • vrml/
      • model-vrml.wrl
      • shapes3D/
        • XXXX.wrl
  • assembly/
    • interactive-bill-of-materials/
      • project-ibom.html
  • pcb/
    • layers/
      • layer_1.pdf
      • layer_2.pdf
  • positions/
    • csv/
      • top_pos_project.csv
      • bottom_pos_project.csv
    • pos/
      • top_pos_project.pos
      • bottom_pos_project.pos
  • renders/
    • top/
      • project-3D_top-view-30deg.png
      • project-3D_top-view-0deg.png
    • bottom/
      • project-3D_bottom-view-30deg.png
      • project-3D_bottom-view-0deg.png

The resulting ZIP archive uploaded to the GitHub artifacts was about 12-16 MB. But there was one issue with the file formats of some of the files - when sending them for review to friends or forums, I prefer to upload JPEG or PNG pictures instead, because PDFs are less convenient to preview and can embed malicious code.

So I updated my workflow file to convert SVG to PNG and PDF to PNG using Inkscape:

+     - name: Install Inkscape
+       run: sudo apt-get install -y inkscape

      - name: Checkout Code
        uses: actions/checkout@v4

      - uses: INTI-CMNB/KiBot@v2_k7
        with:
          config: config.kibot.yaml
          dir: output
          schema: 'schematic.sch'
          board: 'pcb.kicad_pcb'

+     - name: Convert SVG to PNG
+       run: |
+         find ./output/schema/images/ -type f -name "*.svg" -print0 | while IFS= read -r -d '' svg_file; do
+           png_file="${svg_file%.svg}.png"
+           inkscape -d 400 "$svg_file" -o "$png_file"
+         done

+     - name: Convert PDF to PNG
+       run: |
+         find ./output/pcb/layers/ -type f -name "*.pdf" -print0 | while IFS= read -r -d '' pdf_file; do
+           png_file="${pdf_file%.pdf}.png"
+           inkscape -d 300 "$pdf_file" -o "$png_file"
+         done

      - name: upload results
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: output
          path: output

Leveraging the matrix 😎

As my project grew, I started splitting modules of the PCB into their own projects, since KiCad recommends having only one PCB per project.
This also required my workflow to handle multiple project files.
GitHub has a great system for this: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/running-variations-of-jobs-in-a-workflow#using-a-matrix-strategy.

So let's implement it in the workflow:

jobs:
  run-kibot:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        module:
          - name: motherboard
            config: pcbs/motherboard/config.kibot.yaml
            dir: motherboard-outputs
            schema: pcbs/motherboard/motherboard.kicad_sch
            board: pcbs/motherboard/motherboard.kicad_pcb
          - name: module-temperature
            config: pcbs/modules/temperature/config.kibot.yaml
            dir: module-temperature-outputs
            schema: pcbs/modules/temperature/module_temperature.kicad_sch
            board: pcbs/modules/temperature/module_temperature.kicad_pcb

Then, we reference the variables in the steps that will be dynamic depending on the project:

      - name: Run KiBot
        uses: INTI-CMNB/KiBot@v2_dk9
        with:
          config: ${{ matrix.module.config }}
          dir: ${{ matrix.module.dir }}
          schema: ${{ matrix.module.schema }}
          board: ${{ matrix.module.board }}

And let's adjust the upload too, so we don't overwrite the archives with each other:

      - name: Upload Outputs
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.module.name }}-outputs
          path: ${{ matrix.module.dir }}

Conclusion

It turns out Git is perfectly capable of handling KiCad projects, and version control is especially valuable if you are used to software development workflows. The GitHub workflow I presented today shows that automation is possible too - you could even order PCBs automatically.

I think such workflows can be really useful for small projects, where only a few people are involved in developing a PCB and its software. Scaling up will be hard, since manually resolving merge conflicts in the KiCad files is tedious and prone to failure. Another problem could be the runners and storage space on GitHub; rendering 3D images with ray tracing takes up to 2 minutes per image on my project, because it is CPU bound. Using a graphics card would speed up the renders significantly. And since every run creates between 5 and 20 MB of artifacts, the storage quota on the free GitHub tier will fill up quickly. It might be worth considering uploading to external storage or paying for additional storage.

Personally, I'm very satisfied with this solution and will continue to use it in my projects. What do you think? Have you used KiCad yet?

]]>
<![CDATA[What happened after we migrated to Monorepository]]>The typical web application usually consists of the backend part, implementing the business logic, and the web application to interface with your users. This can quickly add some additional complexity, especially if different languages are in use and if they interface with each other. It can also lead to discrepancies

]]>
https://ruwe.dev/what-happened-after-we-migrated-to-monorepository/67cf587657e16e00016b00a1Thu, 13 Mar 2025 09:02:48 GMT

The typical web application usually consists of the backend part, implementing the business logic, and the web application to interface with your users. This can quickly add some additional complexity, especially if different languages are in use and if they interface with each other. It can also lead to discrepancies between the source code. Recently, I had a similar case at work and decided to invest some time to figure out a solution. Here's an in-depth summary of how it went.

The problem with the application

The application in question is built using Angular and ASP.NET. It consisted of three repositories:

  • A repository containing all frontend source code, including a PHP server that runs in a Docker container to serve the static build artifacts.
  • A repository containing all backend source code, including another Docker image to serve the ASP.NET app.
  • A repository for deployment and infrastructure provisioning, which also configures the Ingress and the deployments & services themselves.

One of the main problems is the complexity and the scattering of the source - it becomes hard for developers to coordinate reviews and deployments and to search through the code. As an example, we had an API route in the ASP.NET application to get some data:

public record User
{
    public required string Name { get; init; }

    public required string Email { get; init; }

    public required IReadOnlyCollection<string> Roles { get; init; }
}

public class UserController : Controller
{
    [Route("/")]
    public ActionResult<User?> GetCurrentUser()
    {
        return _currentUserService.GetCurrentUserOrDefault();
    }
}

The frontend then contains the respective code to consume this endpoint:

type User = {
    name: string;
    email: string;
    roles: string[];
}

@Injectable()
export class UserService {
    constructor(private readonly httpClient: HttpClient) {}

    public getCurrentUser(): Observable<User> {
        return this.httpClient.get<User>('BACKEND_URI');
    }
}

There's one obvious flaw with this setup though: the two models are not linked at all, so a developer can change one counterpart without the other failing. Because backend and frontend tests are isolated, no test will ever fail if someone forgets to update the corresponding model.

Another issue with the application was the additional web server dependency. Because we had two individual Git repositories and Pipelines, we also had two separate Docker images. We then configured our Ingress to route depending on the path:

http:
  paths:
  - path: /ui
    pathType: Prefix
    backend:
      service:
        name: frontend
        port:
          number: 8080
  - path: /ui-api
    pathType: Prefix
    backend:
      service:
        name: backend
        port:
          number: 8080

But this also meant that we had two services, two pods and two web servers running. Although this might seem harmless, it actually caused over 2 hours of downtime when a breaking change in the frontend's web server was accidentally deployed. Because our E2E tests only checked that the backend was accessible, our automated deployment did not prevent the faulty release. As you can see, the additional point of failure introduced extra risk and pressure for the developers.

Deployments also need to change

Since we also had two Docker images that needed to be maintained and deployed, we had to invest additional time coordinating deployments, especially when introducing breaking changes in the API. Consider the following example in comparison to the code above:

public record User
{
    public required string Name { get; init; }

    public required string Email { get; init; }

    public required IReadOnlyCollection<Role> Roles { get; init; }
}

We now changed the type of Roles to a type that is serialized differently by System.Text.Json. Hence we also need to make the same adjustment in the frontend:

type User = {
    name: string;
    email: string;
    roles: Role[];
}

But how do we deploy these changes now? If we merge the frontend change first, we break it, since it now expects objects while the API still returns strings. Deploying the backend first doesn't solve the problem either; it will serialize the roles as JSON objects while the frontend still expects a JSON string array.
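
The backend-first failure mode can be sketched in a few lines of TypeScript (the payload shape is an assumption for illustration): the compiler is perfectly happy with the stale User type, and the mismatch only surfaces at runtime.

```typescript
// Stale frontend model: still types roles as plain strings
type OldUser = { name: string; email: string; roles: string[] };

// Assumed payload from the updated backend, now serializing Role objects
const newApiResponse = '{"name":"Ada","email":"ada@example.com","roles":[{"name":"Admin"}]}';

// The cast compiles fine - TypeScript types are erased at runtime
const user = JSON.parse(newApiResponse) as OldUser;

try {
  // Treating the Role object as a string only blows up when the code runs
  console.log(user.roles[0].toUpperCase());
} catch (e) {
  console.log("runtime failure the compiler never saw:", (e as Error).name);
}
```

No compiler, linter or isolated unit test catches this; only running both halves against each other does.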

To solve this, we would need to coordinate both deployment runs and have Kubernetes switch over to the new pods at exactly the same time. That seems like a huge effort! Because of this limitation, we decided to always keep our API backwards compatible, by deprecating old API endpoints instead of deleting them, or by using objects when we know more data will be added later on.

Establishing requirements

I knew that we could solve some of these problems using a Monorepository. The frontend experience in the team was quite limited, so I analyzed our requirements, desires and noted them down:

  • The new form and structure should be easier to navigate for developers
  • Deployments of frontend, backend or a combination should be easy to perform
  • The structure change should not break integration with existing tools, such as our development environments or the deployment in the Kubernetes cluster
  • Remove unnecessary dependencies and point of failures to reduce complexity and risk
  • Make code changes more transparent and build the foundation to further improve on the system (e.g. linking models or expanding the testing system)

Analysis of variants

Until now, all of this was just an idea. Since it would require a significant investment of time, I discussed my suggestion with the architects and the product owner so we could estimate the advantages gained against the effort needed. To do this, we needed concrete examples and a PoC to prove that it actually works.

So I started by creating a documentation page where I noted all my findings, so other developers can understand my thoughts and implement the change on their own. This also spreads the knowledge more broadly across the team.

💡
The following section has been simplified and reduced to the relevant components. It will still apply to most projects though.

Regarding project structure, this was our existing directory structure:

Project structure

  • src/
  • src/Company.Common
  • src/Company.Service
  • src/Company.Service.Dockerfile
  • tests/Company.Component
  • README.md

Many projects merge the source from their Polyrepository to the root of the Monorepository. This would result in the following structure:

Merge Polyrepo contents at root

  • src/
  • src/Company.Common
  • src/Company.Service
  • src/Company.Service.Dockerfile
  • tests/Company.Component
  • web/index.html
  • web/angular.json
  • web/src/index.ts
  • web/src/app.module.ts
  • README.md

However, this will have several disadvantages:

  • We use the debugger inside the Docker container to develop and debug. Because Visual Studio expects the Dockerfile at exactly this location, we cannot move the file. And since the build context would now be outside the src/ directory, it breaks a lot of IDE integrations and the pipeline, and would require additional image changes.
  • The root will be polluted with configuration files from both frontend and backend.
  • Since the frontend files would have been moved to the / of the repository, we would get multiple file conflicts, such as the README.md

Because of that, I decided to apply the existing pattern to the frontend too:

Merge Polyrepo with existing structure

  • src/
  • src/Company.Common
  • src/Company.Service
  • src/Company.Service.Dockerfile
  • src/Company.Web/index.html
  • src/Company.Web/angular.json
  • src/Company.Web/src/index.ts
  • src/Company.Web/src/app.module.ts
  • src/Company.Web/README.md
  • tests/Company.Component
  • README.md

I also prepared the implementation by reading up on the SPA functionality of ASP.NET and experimenting with several approaches. Microsoft also has a guide for Angular projects, but it seemed rather limited and would not work well with our infrastructure setup.

Implementing it

Retain Git History

One criterion for this change was that we retain our commit history. We often document the reasons for changes in commit messages, which would get lost if we squashed them or added the files as untracked new files in the unrelated repository. The problem: since both repositories are separate, neither can integrate the changes of the other, because it doesn't know or track those files.

However, git has us covered here too:

# add the unrelated repository as a new remote
git remote add frontend https://....

# fetch its commit history
git fetch frontend

# checkout the branch of the repository we want to merge into
git checkout master

# merge the unrelated commit history into master
git merge --allow-unrelated-histories frontend/master

# resolve merge conflicts if there are any

# no force push needed, just push the merged history
git push origin master

Serve SPA

When I analyzed Microsoft's SPA NuGet package, I found that UseSpa() came closest to our requirements. However, it did not really work, or broke existing controller methods in the application, because we needed to serve the application from a subpath instead (e.g. /awesome-app/ui). Therefore, I came up with the following extension method:

public static void UseWebUi(this IApplicationBuilder app, IHostEnvironment env)
{
  app.MapWhen(context => context.Request.Path.HasValue && context.Request.Path.Value.StartsWith("/awesome-app/ui"), client =>
  {
    client.UseSpaStaticFiles(new StaticFileOptions
    {
      RequestPath = "/awesome-app/ui"
    });
    client.UseSpa(spa =>
    {
      spa.Options.SourcePath = "wwwroot";
      spa.Options.DefaultPage = "/index.html";
    });
  });
}

It only maps requests that match the frontend subpath /awesome-app/ui and reuses the static files functionality of ASP.NET. This simply means we copy the Angular build artifacts to the webroot on build and ship them with the Docker container. But there's a problem: in local development, no static files are generated and Angular spins up its own HTTP web server with HMR. This means we need a different implementation for the local development environment - hence I came up with this addition to the code:

public static void UseWebUi(this IApplicationBuilder app, IHostEnvironment env)
{
  app.MapWhen(context => context.Request.Path.HasValue && context.Request.Path.Value.StartsWith("/awesome-app/ui"), client =>
  {
    if (env.IsDevelopment())
    {
      client.RunProxy(proxy => proxy.UseHttp("http://localhost:4200"));
    }
    else
    {
      client.UseSpaStaticFiles(new StaticFileOptions
      {
        RequestPath = "/awesome-app/ui"
      });
      client.UseSpa(spa =>
      {
        spa.Options.SourcePath = "wwwroot";
        spa.Options.DefaultPage = "/index.html";
      });
    }
  });
}

The updated version uses AspNetCore.Proxy to proxy requests to the development server of Angular. I prefer this implementation, because it does not affect the JavaScript debugger or the Angular dev tools in any way unlike other options.

Consolidate Deployment

What happened after we migrated to Monorepository

Next, it was time to update the pipeline and deployment. Originally, I wanted to copy the frontend artifacts to the `wwwroot` directory in the Dockerfile. However, this copy turned out to be rather unstable and slow on our CI - so I added a new pipeline step that recursively copies the frontend files from the frontend distribution directory to the wwwroot:

cp -r "$WEB_PROJECT_PATH"/dist/* "$PROJECT_DIR_PATH"/publish/wwwroot

Using docker inspect and the file viewer in Docker Desktop, I then confirmed that the wwwroot directory was present and contained the Angular files.

Remember the Ingress configuration from the beginning of this article? I could now remove the frontend resource, since we no longer had any separate pods or deployments for the frontend:

http:
  paths:
  - path: /ui
    pathType: Prefix
    backend:
      service:
        name: backend
        port:
          number: 8080

Next steps

Whew, that was a big bunch of changes. Everything works now, and we can identify some interesting next steps for our Monorepo:

  • We can optimize our pipeline further, since everything happens in one pipeline run and can be configured more precisely.
  • We can add logging or middlewares to get more information about the frontend being served
  • We can introduce end-to-end tests that spin up the entire frontend and backend, testing them together and making sure the API matches.
  • We can migrate our Playwright tests in the frontend to the .NET version of Playwright if we prefer writing C#.
  • We can autogenerate the API models more easily for the frontend
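
As a sketch of the model-linking idea above (my own illustration, not what the team implemented): a runtime type guard on the frontend turns silent contract drift into an explicit failure that tests can catch.

```typescript
type User = { name: string; email: string; roles: string[] };

// Narrowing guard: validates the raw API payload at runtime before use
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.name === "string" &&
    typeof v.email === "string" &&
    Array.isArray(v.roles) &&
    v.roles.every((r) => typeof r === "string")
  );
}

// A payload with Role objects instead of strings now fails loudly
console.log(isUser({ name: "Ada", email: "ada@example.com", roles: ["Admin"] })); // true
console.log(isUser({ name: "Ada", email: "ada@example.com", roles: [{ name: "Admin" }] })); // false
```

In practice one would generate such guards from the backend models rather than write them by hand, but even the manual version makes a contract break visible in tests.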

Conclusion

Let's wrap up. Here are the learnings & conclusions I've drawn from this migration:

  • We have seen significantly fewer issues with discrepancies in API contracts (e.g. the API controller) since the change
  • Code reviews have become much more transparent and easier
  • We were able to eliminate points of failure that were known to cause downtime for our customers
  • We laid a foundation for the next steps towards broader testing, which further reduces the risk of individual changes and assists the developers
  • The nested project structure can behave slightly differently in different products and IDEs
  • Initial investment can be big, especially if you want to keep your VCS history and must coordinate this change between usual sprint increments
  • We needed quite a bit of troubleshooting, since our team primarily focuses on backend applications. Such migrations could therefore be taxing for less experienced teams.

Overall, I definitely think this change was worth the effort, and we are already profiting from the advantages of this solution. Even though the migration temporarily slows down development, the team's velocity will increase again because unnecessary complexity has been eliminated. Have you made a similar change in your projects too? Comment down below.

]]>
<![CDATA[Reused Hardware; How I built my Steam Big Picture Console]]>Want to build one too? Here's the story how I built a DIY console for less than 100CHF.

Preface

My first contact with Gaming and PCs in general was when I was playing Anno (don't remember which one specifically) on the old and slow desktop PC

]]>
https://ruwe.dev/reused-hardware-how-i-built-my-steam-big-picture-console/67cf587657e16e00016b00a0Mon, 02 Sep 2024 20:49:00 GMT

Want to build one too? Here's the story how I built a DIY console for less than 100CHF.

Preface

My first contact with gaming and PCs in general was playing Anno (I don't remember which one specifically) on my father's old and slow desktop PC. I remember building my city for hours and hours - even though I wasn't very good at the game.

Some years later, I was gifted an ACER gaming desktop. It featured an NVIDIA GTX 960 and an Intel i7-6700K - a huge improvement over my previous old and incredibly slow Acer laptop. I immediately started playing Minecraft on the highest settings daily after school. That was really fun.

After I finished primary school and got my apprenticeship, I also started watching YouTube videos from the English-speaking community. At this time, LTT started gaining traction among the more mainstream consumer channels and I began watching every one of their videos.

Around that time I also started hosting my own Minecraft server with a volunteer team of 7 people. We had a great time together, but slowly the graphics card was reaching its limits (custom shaders, mods, ...). So as soon as I started my apprenticeship in 2019, I began saving up for a bigger and better PC - which I still use today in 2024!

My old PC was stored in a closet though and I rarely ever used it. So what should I do with it? Sell it?

Why build a DIY console?

I have to admit - I am not really a fan of gaming consoles. They seemed rather expensive and limited in their functionality. On a normal PC, I am able to run any program. Even better, I'm not vendor-locked. But after my recent move I did start to see the advantages of having a system that is only used for games. It's rather tiring to start up your PC after a long day of work and wait for it to launch the game while getting the full blast of all distractions (Discord, email, browser, ...).

So when I started building a new TV setup for my living room, I decided to make an easy-to-use gaming setup for my favorite games. I wanted it to be silent and fast-booting, without any bloat on it (only games, no other software). "What system should I buy though?" Wait, I still have my old PC lying around - does it still work?

Behold, it boots

After ripping the motherboard out of the damaged case, I plug in power, put in an SSD and flash Windows. Surprisingly, Windows 11 boots without any issues and this thing even has a TPM.

But is it really enough for what I want to play? I make a list of all the games:

None of them is really demanding. And all of them will look great on my 4K OLED TV (although the GPU doesn't handle 4K well).

Reused Hardware; How I built my Steam Big Picture Console
Genshin Impact, screenshot made by me. Aranara Quest

Steam is your friend

All of these games support controllers. While I usually prefer keyboard, I do see the advantage of not having to hold a mouse and keyboard while sitting in front of the TV. So I buy one of the cheapest and most popular controllers, which I pair via Bluetooth. Next, I 3D-print a new motherboard holder so I can fit the PC into the TV furniture.

To get a console-like experience, let's talk about software. I did some quick research on whether there is a good launcher I could boot instead of the Windows Explorer. Linux is out of the question due to bad game compatibility. But there is really no competition; Steam has you covered. Big Picture is a full-screen mode of Steam which looks very similar to your typical console interface. It supports controllers, looks pretty (even better than desktop Steam 🤣) and can launch third-party games. Perfect!

Reused Hardware; How I built my Steam Big Picture Console

Tweaks and configs

To make Windows feel even more console-like, I also applied the following tweaks to it:

  • Enabled Wake On LAN: Boot the PC from a ZigBee button or HomeAssistant
  • Hid the taskbar permanently
  • Created a local user without any password so login is automatic and no prompt is shown.
  • Changed all backgrounds to black images to avoid any flashbangs in the night
  • Replaced the default Windows Explorer with the Steam Client so no Windows UI is shown anymore
  • Added a "Startup Movie" with a nice sound (about 3K Steam points)
  • Added a UGREEN Bluetooth adapter for more stable Bluetooth - the built in one was very unreliable and wouldn't automatically connect with the controller.
  • Downloaded my favorite games and added Genshin Impact and Zenless Zone Zero as external games.
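
The Wake-on-LAN part is simple enough to script yourself if you don't want extra tooling. Here's a hedged sketch using Node's built-in dgram module; the MAC and broadcast addresses are placeholders:

```typescript
import dgram from "node:dgram";

// A WOL "magic packet" is 6 bytes of 0xFF followed by the target MAC repeated 16 times
export function buildMagicPacket(mac: string): Buffer {
  const macBytes = Buffer.from(mac.replace(/[:-]/g, ""), "hex");
  if (macBytes.length !== 6) throw new Error(`invalid MAC address: ${mac}`);
  return Buffer.concat([Buffer.alloc(6, 0xff), ...Array(16).fill(macBytes)]);
}

// Broadcast the packet on UDP port 9 (placeholder addresses)
export function wake(mac: string, broadcast = "192.168.1.255", port = 9): void {
  const socket = dgram.createSocket("udp4");
  socket.bind(() => {
    socket.setBroadcast(true);
    socket.send(buildMagicPacket(mac), port, broadcast, () => socket.close());
  });
}
```

HomeAssistant's own wake_on_lan integration does the same thing under the hood, so this is only useful if you want to trigger it from custom scripts.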

Not pretty, but it works

Reused Hardware; How I built my Steam Big Picture Console

It doesn't look pretty at all. Some of the components are bent, have been spray painted and the air cooler is slightly louder than I would like it to be. Nevertheless, it works beautifully when playing games and is hidden well using a sheet of white wood in front of it.

Further ideas

Since I now have a "games-only" machine, I can also use it for playing games remotely. Using Sunshine, Moonlight and a remote VPN, I can connect from anywhere in the world (with a stable connection) and play my games. This is great because my small form factor laptop doesn't have the juice to run many of the above-mentioned games without throttling or getting very loud.

I am also thinking about modifying my Windows even further - there is a 5-15 second delay from sign-on to Steam Big Picture - maybe this can be reduced?

Also, replacing the Windows Explorer with the Steam Client doesn't quite work - it makes the sound distorted and seems to break some games. Would be interesting to hear your thoughts on this.

Conclusion

In the end, I saved about 300-500 CHF by reusing my old PC. Sure, it's not the most powerful or energy-efficient setup, but reusing is still much better than buying brand new. It also doesn't run 8 hours daily, nor does it have any extremely hot components. Commercial consoles are much easier to set up - just "plug and play" - but this DIY project was lots of fun and I can now enjoy my games on a shiny & bright display whenever I want. And once I no longer use this PC, I can repurpose it again.

]]>
<![CDATA[Hyperion: Hype for my TV]]>I love lights! I love them so much that I wanted to add some more to my brand new TV. Why? My eyes often had a hard time focusing and adjusting to the high brightness of the display and the dim room light was distracting. So surely I will be

]]>
https://ruwe.dev/hyperion-hype-for-my-tv/67cf587657e16e00016b009dSun, 14 Jul 2024 10:12:40 GMT

I love lights! I love them so much that I wanted to add some more to my brand new TV. Why? My eyes often had a hard time focusing and adjusting to the high brightness of the display and the dim room light was distracting. So surely I will be able to find a cheap and easy solution for it?

A quick search on Digitec reveals that lights are pretty expensive. For example, this 3.8m LED strip is almost 90 CHF and requires me to mount a camera on top of my TV?

Hyperion: Hype for my TV

Govee DreamView T1 TV-Light Strips, 55"-65"

If I wanted to use solutions like these to fill out my room and the back of my TV, the total cost would be >120CHF.

Even worse; almost all of these products use proprietary hardware and software. They often require a Cloud account and presumably send data to the manufacturer (surely in a safe manner 😄?).

So I started searching for alternative options and possibly any open source project that would do the same.

Hyperion Project

After searching on Google & Reddit, I came across Hyperion. It is an open source project that supports a wide variety of lights - including one of my favorite LED controllers (which is also open source!): WLED. With WLED, I can control almost any popular addressable LED strip and tailor it much better to the room size. Hyperion takes a screen capture and transforms it into commands for the lights, so they react to what's happening on the screen.
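
The core idea is simpler than it sounds. As a rough illustration (a simplified sketch, not Hyperion's actual implementation), each LED gets the average color of the screen zone closest to it:

```typescript
type RGB = [number, number, number];

// frame: row-major RGB pixels; compute the mean color of one rectangular zone
function zoneColor(frame: RGB[][], x0: number, y0: number, x1: number, y1: number): RGB {
  let r = 0, g = 0, b = 0, n = 0;
  for (let y = y0; y < y1; y++) {
    for (let x = x0; x < x1; x++) {
      const [pr, pg, pb] = frame[y][x];
      r += pr; g += pg; b += pb; n++;
    }
  }
  return [Math.round(r / n), Math.round(g / n), Math.round(b / n)];
}

// Split the top edge of the frame into `leds` zones, one color per LED;
// `depth` is how many pixel rows of the border each zone samples
function topEdgeColors(frame: RGB[][], leds: number, depth = 4): RGB[] {
  const width = frame[0].length;
  const colors: RGB[] = [];
  for (let i = 0; i < leds; i++) {
    const x0 = Math.floor((i * width) / leds);
    const x1 = Math.floor(((i + 1) * width) / leds);
    colors.push(zoneColor(frame, x0, 0, x1, Math.min(depth, frame.length)));
  }
  return colors;
}
```

Repeat the same for the other three edges, send the colors to the LED controller, and you have ambient lighting; Hyperion adds smoothing, gamma correction and much more on top.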

There isn't much documentation about Hyperion, but luckily I was able to find most of the information I needed in the mostly German-speaking forum. Hyperion can use multiple different sources for the image - mainly USB capture, screen capture (of the same device) and remote protocols (which stream the image bitmap over the network). Since I'm using a Chromecast on my TV, I first thought about streaming the image from it using an app - in fact, there is an official Android app that is compatible with Android TV (a.k.a. Chromecast).

So I set up Hyperion inside a Docker container, installed the Android app on my TV and configured my lights. It works flawlessly out of the box when I'm in the Chromecast start menu, but as soon as I try to play any videos (e.g. YouTube, Twitch, ...), the lights start to flicker furiously. It turns out that streaming DRM-protected content is not possible and will cause your lights to go wild. So I had to come up with Plan B...

Overengineering my ambient lights

Remember that Hyperion also supports USB Input? So after much struggle with DRM and playback issues, I decided to build a DIY alternative.

The most commonly used workaround in the Hyperion forum is a splitter, which clones the signal to a second output device - in this case a USB capture card. This makes it possible to stream 4K content with DRM while copying the output signal to a second device that "records" it. Once I settled on this solution, I went onto AliExpress and bought another of these cheap USB capture cards:

Hyperion: Hype for my TV

These only support low resolutions and low framerates - but that doesn't matter, as we can still extract the dominant colors from the image at a much lower resolution like 720p@15fps. One nice feature of HDMI matrix splitters is that they often come with a few DIP switches for "EDID". These let you force a specific resolution, framerate & protocols (e.g. audio), so the TV doesn't switch down to the specifications of the lower-end device (in this case the USB capture card). The matrix I bought on AliExpress (for 36$) is no longer available:

Hyperion: Hype for my TV
kebidu 4x2 Matrix Switch Splitter with SPDIF and L/R 3.5mm HDR HDMI-compatible Switch 4x2 Support HDCP 2.2 ARC 3D 4K@60Hz

Using this, I can connect up to 4 input devices (e.g. my Chromecast), with the TV on output 1 and the USB capture card on output 2. On a logical level, the complete setup looks like this:

Hyperion: Hype for my TV
Diagram of the components and their connections

In practice, the main part of the setup looks like this:

Hyperion: Hype for my TV
HDMI Matrix glued to the top and Orange Pi 5 glued to the side

Using the Home Assistant Hyperion integration, I can even control whether Hyperion should be controlling my lights from the USB capture or not:

Hyperion: Hype for my TV
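Under the hood, that integration talks to Hyperion's JSON API, which you can also poke at directly. A small sketch, assuming a default installation with the JSON server on TCP port 19444 and the USB capture registered as the "V4L" component (verify both in your own instance; the hostname below is a placeholder):

```python
import json
import socket

def componentstate_message(component: str, state: bool) -> dict:
    """Build a Hyperion JSON-API message toggling a component on or off."""
    return {"command": "componentstate",
            "componentstate": {"component": component, "state": state}}

def send(host: str, message: dict, port: int = 19444) -> None:
    """Send one newline-terminated JSON message to Hyperion's JSON server."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((json.dumps(message) + "\n").encode())

msg = componentstate_message("V4L", False)  # pause the USB capture
# send("hyperion.local", msg)               # uncomment with your actual host
```

Home Assistant wraps exactly this kind of call in a switch entity, which is what makes the on/off toggle in the screenshot possible.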

Conclusion

I'm very happy with how this turned out. The lights make watching anime or movies much more entertaining, and it works with any DRM-protected content. The Hyperion developers have created a great product and I absolutely love the result.


The total cost of the project is close to that of commercial solutions, but it offers more flexibility. Hyperion simply offers more customization than most systems out there, and I have the option to add more lights later on if I want to.

]]>
<![CDATA[Conclusion after 5 months of TrueNAS after I migrated from Unraid]]>In 2019, I started my apprenticeship as a Software Engineer. At that time, I dumped all my data onto a 4TB HDD in my main PC at home. This became tedious after I had to organize my files somehow in the file structure and filled the disk completely with data.

]]>
https://ruwe.dev/conclusion-after-truenas-migration/67cf587657e16e00016b009cMon, 29 Apr 2024 16:09:50 GMT

In 2019, I started my apprenticeship as a Software Engineer. At that time, I dumped all my data onto a 4TB HDD in my main PC at home. This became tedious once I had to somehow organize my files in a folder structure, and eventually I filled the disk completely with data. I decided that I needed to do something about this...

Introducing: Unraid

I knew that I needed an easy-to-use system that would accommodate my data. It needed to be reachable 24/7 so I could access the data while traveling. A friend in vocational school introduced me to Unraid because they were planning to use it too. I figured this would be a much better tool for my data storage, so I started a trial license, bought 3 used HDDs (yikes) and some used PC hardware from Ricardo.

Not only Storage?

After a few months of running this system and making continuous improvements to it (adding SSDs, configuring VPNs, ...), I started experimenting with the Apps. There is a community-maintained catalogue offering thousands of apps that you can start with little to no effort. Naturally, I started downloading tons of apps from it and configured them step by step on my system. A file explorer as a website? Sure! A private Satisfactory server? You bet.

Thanks to the awesome selfhosted list, I found new and interesting applications on a daily basis that I immediately tried out on my system. r/homelab and r/selfhosted were a big help too.

At some point, I was running 20+ apps continuously, 24/7, and had configured reverse proxies and tunnels to access them remotely without exposing them. As I gained experience as a Software Engineer during this period, my interest in applications also shifted, and I wanted to try out more niche applications & products that weren't in the community-provided catalogue. Hence I created my own catalogue items and applied to have them officially added to the list in Unraid.
This enabled me not only to add new things to my Unraid system, but also to share them with the community.

Headache with Unraid

So why did I even bother to migrate? A key reason was the rather clunky Apps system on Unraid, which uses Docker under the hood. Once configured, the apps automatically start up sequentially on boot - this led to startup times of 30+ minutes. And if you accidentally misconfigured an App (i.e. a container), Unraid would remove it permanently and you would lose your configuration.

This caused much frustration on my side and made me lose my patience with Unraid. I started doing backups more frequently, but the whole system still felt incredibly fragile and broken. To be fair, the Apps are entirely made by the community and I do not want to blame them for the problems I had.

Additionally, I also had performance issues with the cache SSD and almost lost my data once, because it is not protected by parity (this may have changed in 7.0?). When I moved to a new apartment at the end of 2023, I decided to finally rebuild my server from the ground up and migrate to TrueNAS.

Why TrueNAS?

TrueNAS seemed to be a really stable and semi-professional option for storage. I first learned about it via LTT videos and slowly came to understand more and more about it. It offered much better data protection and built-in backups, and its UI seemed more polished. Although I was a bit scared of the migration process & Kubernetes, I decided to go ahead and migrate to TrueNAS Scale.

The migration process

By this point I had added even more disks to Unraid since I was running out of storage; the total amount of data was approx. 16TB. I didn't have any single disk big enough to temporarily hold a copy of all that data.

Instead, I rented a Hetzner Storage Box and started uploading a full file-level snapshot of all of my data. This took ages - over a week, to be exact. The upload speed was inconsistent due to many small files & poor throughput (10-30MB/s).
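That duration is easy to sanity-check with some back-of-the-envelope arithmetic:

```python
def transfer_days(total_bytes: float, rate_bytes_per_s: float) -> float:
    """Days a straight copy takes at a sustained transfer rate."""
    return total_bytes / rate_bytes_per_s / 86_400  # 86400 seconds per day

total = 16e12  # ~16 TB of data
fast, slow = transfer_days(total, 30e6), transfer_days(total, 10e6)
# roughly 6.2 days at 30 MB/s and 18.5 days at 10 MB/s
```

With the speed bouncing between those two extremes, "over a week" is exactly what you would expect.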

After this was done, I bought some new hardware for the new system, flashed a TrueNAS Scale image onto an SSD and started configuring it. I added my disks to a RAIDZ3 pool and restored my data from the Storage Box. Then I added TrueCharts to my system and set up my apps step by step. Some apps weren't available in the catalogue, so I used the custom-app chart instead, which accepts an image name.
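For reference, RAIDZ3 sacrifices three disks' worth of capacity per vdev for parity, in exchange for surviving three simultaneous disk failures. A rough calculator (simplified - real ZFS loses a bit more to padding and metadata, and the disk counts below are hypothetical, since I haven't listed my exact layout):

```python
def raidz_usable_tb(n_disks: int, disk_tb: float, parity: int = 3) -> float:
    """Approximate usable capacity of one RAIDZ vdev: n minus parity disks."""
    if n_disks <= parity:
        raise ValueError("need more disks than parity drives")
    return (n_disks - parity) * disk_tb

# e.g. eight 4 TB disks in RAIDZ3:
usable = raidz_usable_tb(8, 4.0)  # -> 20.0 TB usable
```

This overhead is one reason adding single drives later is awkward with ZFS: capacity is planned per vdev, not per disk.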

💡
It should be noted that I wouldn't consider this process viable or scalable for other setups. In my case it worked well because the amount of data was comparatively small and the throughput was acceptable. However, I will handle future migrations differently.

Conclusion

So was this worth the hassle? After I spent 5 months figuring out what I need to do, I would say yes - but maybe not for everyone.

  • The general experience is better and more stable. The UI is more consistently built and less confusing.
  • The barrier to entry is much higher. The documentation helps a lot, but some concepts are completely different compared to Unraid.
  • Adding and removing drives is much harder. You're much more restricted than in Unraid and have to live with the many ZFS limitations. The performance of these pools is great once they work - but don't expect changes to the pools to be easy.
  • The catalogue of apps on TrueNAS is smaller and you'll definitely need TrueCharts for most apps. At the time of writing, TrueCharts has also deprecated their TrueNAS catalogue, and iX Systems is currently developing support for Docker Compose.
  • TrueNAS isn't as configurable as Unraid, which is both a good and a bad thing. In Unraid, I often needed additional third-party plugins for features that should arguably be included in Unraid itself - but those plugins also made Unraid feel fragile and often broke.
  • Startup is much faster, but since TrueNAS Scale currently uses Kubernetes, stopping or creating new pods takes a considerable amount of time.
  • For many basic things I need to use heavyscript or the console (e.g. reading namespace events).
  • Running apps from a plain Docker image is possible and relatively easy. However, image updates will not be shown, and you'll have to delete the image, stop the app and start it again.

Next steps

So what's next? I'll probably stick with TrueNAS and wait for their Docker Compose support. Since TrueCharts has terminated development of their charts for TrueNAS, I also expect some things to break until then. I will also be building my own Kubernetes cluster - stay tuned.

]]>