Crowfunder’s Blog (https://crowfunder.github.io/feed.xml), generated by Jekyll, 2026-02-18T15:22:33+01:00
My personal archive for depositing inexplicable ideas. I do not take responsibility for anything here that transgresses moral and/or professional norms. Have a good read!
Author: Crowfunder

A backup failure and a one-time wonder
2026-02-18T11:49:10+01:00
https://crowfunder.github.io/posts/backup-failure-and-a-wonder

I’ve recently heard a rant on how scientists and engineers seldom publish failed research, and how they should do so regardless of the results - I took it to heart. In the spirit of hubris and learning from failed attempts, let me tell you a story of a backup failure and a one-time wonder.

cover image of the article, a messy desk in the aftermath of the presented story
The aftermath

The Background

For the very first time in the last several months, I had the time (and motivation!) to perform the chore of backing up the data from my smartphone. About a year and a half ago I switched phones and decommissioned my trusty old Redmi Note 7. It was a fine phone. It took photos that looked really good for the time (and the budget), had a pretty good CPU and a sufficient amount of internal storage. (Switching from 8 to 32GB of storage was one hell of a drug.)

Historically, my workflow for making backups was very simple, if not crude. Each of my devices had a separate folder, in which internal storage and SD card contents were kept apart. In there I simply copied the folders that mattered - mostly photos, videos and music. Full partition backups were never really a thing here; app data was not important to me. It only ever mattered for games, and those moved to the cloud anyway.

The backups were incremental - every time I did a backup I simply appended the new data. When switching devices, I did not migrate all the data to the new device, but only the minimum I needed and backed up the decommissioned device for the last time. [1]
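This append-only routine is simple enough to sketch in a few lines of Python - assuming, unlike real-world MTP (see footnote [1]), that the phone's storage is reachable as a regular directory (for instance a copy pulled with adb); all paths here are illustrative:

```python
import shutil
from pathlib import Path

# Append-only incremental backup: copy anything that does not yet exist in
# the destination; never overwrite, never delete. Paths are illustrative.
def incremental_backup(src: Path, dst: Path) -> list[Path]:
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            if not target.exists():
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # copy2 preserves timestamps
                copied.append(target)
    return copied
```

Running it a second time against an unchanged source copies nothing, which is exactly the "simply append the new data" behavior described above.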

Our story begins with the one time I was careless, with sloth overtaking rigor and forgetfulness adding the finishing touch. I did not back up the old device right after the last switch - “I’ll do it later.”

The Beginning

It’s been a while since I switched devices now. The mundane was not merciful - I had very little time for “extracurricular activities”. Just recently, however, fortune smiled upon me and I finally had some free time. The “later” was now - I decided it was a good moment to perform the overdue chore.

I successfully back up my current device. Now it’s time for the old one. I press the power button and the device does not respond. Unfazed, I figure it may simply have discharged, so I plug in the charger - no reaction. Something is wrong.

I start searching for similar issues in the context of phones that were left unattended for some time. The most common explanation is a deep discharge of the Li-Ion/Li-Po battery. Nowadays, devices are never really off - they draw a minimal current even when powered down - and on top of that, it’s in the nature of these batteries to self-discharge over time. When the battery voltage is too low to safely operate, the protection IC will prevent the device from starting, to avoid discharging the battery further, which could be harmful to its health. [2]
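To make the “too low to safely operate” part concrete, here is a rough sketch of how one might bucket a single cell’s voltage. The cutoff values are generic ballpark figures I am assuming for illustration, not this phone’s actual thresholds:

```python
# Rough bucketing of a single Li-Ion/Li-Po cell by open-circuit voltage.
# The cutoffs are generic ballpark figures assumed for illustration,
# NOT the actual thresholds of this phone's power management IC.
def classify_cell_voltage(volts: float) -> str:
    if volts < 2.5:
        return "deep discharge - the protection IC will refuse to start the device"
    if volts < 3.0:
        return "over-discharged - expect the protection circuit to cut power"
    if volts < 3.5:
        return "low - should come back after some trickle charging"
    if volts <= 4.2:
        return "normal operating range"
    return "above the typical charge limit - suspicious reading"

print(classify_cell_voltage(4.0))  # normal operating range
```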

A proposed solution is to plug the discharged device into a weak charger (1-2A at most) and let it charge for anywhere from several minutes to several hours. “Why not,” I think to myself as I turn to other matters and let the device charge.

The Stakes

Naturally I started to worry a bit and think about the “what-ifs” - how big is the potential damage?

The last backup of the device was made about a year - or half that - before decommissioning, meaning I could lose up to a year’s worth of data. Fortunately, nothing immediately critical was stored on the device without having been migrated. The only thing that comes to mind, which I did not migrate, is photos.

Now, you may correctly think to yourself - “Why worry over something as trivial as photos?”. There is a degree of irrationality to this endeavor.

I belong to one of the first generations who were born in the era of digital photography. Obviously it comes with the caveats of having your entire life documented as well as the oh-so-many ways it could be abused (sharenting!), however I will argue that there is something magical to being able to look at a still of the world of any given moment in the past.

Human memory is imperfect and frail; we naturally forget some less meaningful details from our lives. I find joy in reminiscing about a day I had a pleasurable walk, about a challenging time that came to pass, about a time in life that was wildly different. I’m sure you too, my dear reader, can think of a couple of small, simple moments that you cherish deep in your heart; immortalizing at least some of them as a stable point of reference is the reason I care about these photos.

You may now point out, dear reader, that hoarding data just for the sake of it is unhealthy and pointless, and I wholeheartedly agree with you - which is also where a certain ritual comes in. I believe there is something to the oh-so-very-old act of looking through photo albums. In case you never did it - I highly recommend it. Aside from being enjoyable, it helps to put some current and past matters into broader perspective.

Now, to finish this overstretched philosophical tangent: recovering data from the device is an emotional matter, if anything. There is also the potential of learning something interesting in the process of digging into the device, with the risk of breaking something being negligible (at least from the economic standpoint).

The Escalation

Four hours and two USB chargers later, the device is still dead silent. I can feel the cold sweat building up on my nape. “Not everything is lost yet,” I continue to assure myself. Another suggested cause is the complete death of the battery. It was replaced shortly before I switched devices - maybe it was faulty to begin with?

In order to check the state of the battery I have to disassemble the device. Unfortunately for me, it’s one of those phones that have their battery buried under literally everything else, as shown here - mind that I have neither the professional tools nor any experience in disassembling phones. “I have to improvise” is how I ended up warming the back cover glue with a hair dryer and prying with a steel needle.

A few screws, some torn glue and scratched protective covers later, I arrive at the battery. I pull out my multimeter and measure the voltage across the battery terminals. To my complete surprise, it reads a healthy 4 volts - comfortably within this battery’s normal operating range. We’re in trouble - it’s not the battery.

I start to realize that this is definitely not an issue I will be able to fix with my skills - it’s a hardware failure, and the device is as good as dead. Out of the blue, a phone that used to work a few months ago is now an expensive paperweight. I do, however, have one last trump card up my sleeve that could theoretically let me recover the data.

The Obscurity

This phone runs on a Qualcomm Snapdragon 660 SoC. Qualcomm chips come with a special, super-low-level maintenance mode - EDL, the Qualcomm Emergency Download Mode. It can be entered in numerous ways: willingly, by shorting special pads on the motherboard, by rebooting from fastboot, by plugging in a special deep flash cable, or by starting the phone with USB attached to the computer and the battery unplugged. It will also start unwillingly, if the phone encounters a failure that prevents it from booting in the earliest stages.

EDL provides a serial interface for diagnostics and flashing the device. It also allows dumping the memory. [3] In order to use this mode, we need a “Firehose file” - a special, model-specific file that needs to be loaded onto the device for us to access the EDL interface.

As for the EDL client, device manufacturers have their own in-house utilities such as Mi Flash, and Qualcomm has its own Qualcomm Product Support Tool (QPST), but we will attempt to use an open-source solution: Qualcomm Sahara / Firehose Attack Client / Diag Tools by B. Kerler. Vitally, it ships with a collection of Firehose files; you can also find and use alternative collections such as the Droidwin Firehose Collection. For the sake of clarity, from now on I will use “edl”, lowercased, to refer to this specific EDL client, as it’s the name of the command it implements.

I unplug the battery, connect the device to my computer over USB and check Device Manager. The device registers as QUSB__BULK with a warning, as Windows fails to find a suitable driver. The edl README features a guide on how to get the drivers, except it isn’t very clear on how to obtain a QC 9008 serial port driver. We get more info from reading through the automated installation script it also provides.

echo "'edl', 'zadig' installed successfully. You can now open a new PowerShell or Terminal window to use these tools."
echo ""
echo "Don't forget to run 'zadig' to install the WinUSB driver for QHSUSB_BULK devices."

So we can use zadig to install the driver, great.

Now, let’s check if edl detects the device. By the way, should you ever try to run edl, keep in mind that it’s made with Python 3.9 in mind - it will produce errors on newer Python versions.

main - Using loader Loaders/...  
main - Waiting for the device  
...main - Device detected :)  
sahara - Protocol version: 2, Version supported: 1  
main - Mode detected: sahara

It’s detected, but it errors out on “Xiaomi Auth” upon uploading the autodetected Firehose file. It seems like it picked a Firehose file for the same hardware ID, but from a different manufacturer. Fortunately, edl has a Firehose for this hardware ID from Xiaomi, as well as one supporting Xiaomi Auth. Curious what Xiaomi Auth is? It appears Xiaomi has its own spin on EDL, which requires the Firehose file to be signed with a private key generated for each hardware ID - hence requiring explicit authorization from Xiaomi to use EDL. Fun, isn’t it? [4] Fortunately, people on the internet managed to get their hands on some of the secured Firehose files and shared them, so you can look for the patched Firehose files online.

Luckily enough, I managed to get my hands on about 4-5 different patched Firehose files that match; some of them worked and Xiaomi Auth passed successfully. Unluckily though, trying to perform any action, especially one touching the storage, dumped an ocean of read errors.

firehose - [LIB]: ERROR: Failed to open the UFS Device slot 0 partition 0  
firehose  
firehose - [LIB]: ERROR: Failed to open the device 3 slot 0 partition 0  
firehose  
firehose - [LIB]: INFO: Device type 3, slot 0, partition 0, error 0  
firehose  
firehose - [LIB]: WARN: Get Info failed to open 3 slot 0, partition 0, error 0  
firehose  
firehose - [LIB]: storage_device_get_num_partition_sectors FAILED!

And here the trail goes cold. No feasible solution has been proposed for this; some posts suggest changing the QC 9008 serial port driver, but the issue remains mostly unsolved.

The Resolution

Disappointment and dismay start to take hold. Crestfallen, I start to clean up my desk and come to terms with the fact that I will not be able to recover the data, even using the most arcane of methods. Eventually, I realize that even if I were to somehow dump the flash memory, I would probably not be able to do much, as it’s encrypted using both the user password and hardware secrets, which I wouldn’t be able to recover. [5]

A killer headache mixed with fading ire slowly gives way to tiredness as I take a warm shower. Grief apparently has five stages, but I think this is a sixth one - a mix of everything all at once. I’m depressed, I’m shocked, I accept and I start to bargain. An idea, crazy in its naivety, comes to mind.

As I sit down, I open the Gallery app on my current phone and scroll back to the time before the device switch. As expected, there are a few photos there, mostly from other apps I used back then that took photos (such as social media). I chose to migrate those across devices because of their small filesize, unlike the DCIM/Camera directory, which was several GB in size. But wait, something is odd - there are more photos there than I expected. As I pick one of them and view its metadata, I’m left dumbfounded.

These are, in fact, regular photos I took - the ones I was trying to recover. I go back to the moment of the previous backup and only about two (uneventful, anyway) months of photos are missing, from well before the device switch; other than that, not a single photo is lost. How is this possible?

We now enter the realm of plausible speculations as I have no way of confirming what actually happened. I have two hypotheses.

The first one is a camera app switch. I remember switching camera apps, from the stock one to a patched Google Cam, but I’m unsure of the timeframe. Purely theoretically, the alternate camera app could save photos to a different destination than the one I used previously. I would then have migrated that folder to the new device unknowingly.

The second one is an accident. During the device switch I may have unknowingly migrated all of the folders on the device, using the Android device migration feature.

The End

The fact is that I did not lose my data at all, and the effort was, kind of, in vain. Call it fool’s luck; the truth is that I made a grave mistake but at the very least got to tap into some super-cool stuff around Android phone servicing. I may be a bit regretful that I wasted an entire evening and got stressed out, but call it the price I had to pay for negligence. Do your backups, kids.

Footnotes

[1] In case you might be wondering, automating this is no easy feat, as Android allows internal storage access through MTP, which is not traditional filesystem access but rather object storage access - as far as I’ve tried, it’s not possible to e.g. hook it up to robocopy/rsync.

[2] I did not manage to find a lot of details on this one, this post summarized the topic fairly well.

[3] There are quite a few implementations of data extraction using EDL; the general idea is fairly well described in Physical Mirror Extraction on Qualcomm-based Android Mobile Devices, which makes it a pretty tidy option when compared to e.g. chip-off. One company (or more?) even based their entire product on the premise of EDL device manipulation, which has quite some nasty potential, e.g. the Hydra Tool.

[4] It gets even worse when you learn that the only realistic way to get an authorized Firehose file, if it was not leaked, is by buying single-use credits for a flash in some tool. A pretty nifty explanation of how the auth works can be found in many edl issues, such as this one.

[5] Android implements Full Disk Encryption with keys relying on Trusted Execution Environment and Hardware Bound Keys. I recommend reading the linked specs.

Linux is not stable
2025-12-31T14:35:09+01:00
https://crowfunder.github.io/posts/linux-is-not-stable

I started using Linux a good 7 years ago and migrated to it as my daily driver about 2 years ago. It is a great system, fun to tinker with, but sometimes it feels like it’s about to fall apart. This article is a rant - a collection of annoyances and issues I’ve accumulated so far and have not forgotten about. It will be neither very technical nor positive.

Is Linux that bad?

Don’t get me wrong, I find Linux to be a much better OS than Windows [1], in my case mostly because of performance. I have a relatively old HP Elite x2 2-in-1 laptop which is - frankly speaking - atrocious when it comes to performance. 8GB of RAM is barely enough to run modern IDEs, be it IntelliJ or VSCodium [2]. The Intel m5-6Y54 processor has as few as 2 cores at a 1.10 GHz base frequency, and thermal throttling is just insane at times. Back when I was running Windows I had to run ThrottleStop [3] with maximum performance settings to keep the system from hanging on opening Explorer.

Migrating to Linux helped to alleviate these pains, at least partially. I’m currently running PopOS 22.04 and I’m quite happy with it. Running High Performance setting with some custom patches makes the system at the very least usable for basic tasks and workable for more intense ones.

Another reason why I find Linux better is work comfort. It’s much more power-user friendly (duh!) and has better support for some low-level tasks, such as writing drivers for super-niche wireless devices. I got used to the command line; it’s much quicker to look for a command or a config than to hunt for that one specific setting hidden in that one specific window buried under a million clicks. (Thank god that godmode [4] - pun intended - exists for Windows.)

To sum up, the point of this article is to show that, while much more comfortable and efficient, Linux is far from completely stable.

Touchscreen and On-screen keyboard

One of the first issues I noticed after migrating to Linux was the inconsistent behavior of the on-screen “Caribou” keyboard that comes with GNOME DE.

Back when I was using Windows, the drivers worked like this: if the keyboard is attached (it’s detachable), disable the on-screen keyboard; enable it if the keyboard is detached. Simple and fun. On Linux it didn’t work at all - the keyboard just randomly popped up, regardless of whether the physical keyboard was attached or not. It behaved inconsistently and provided no easy way to change its behavior. To make it even worse, disabling it in the settings did next to nothing.

The solution to all my woes was installing a GNOME Extension that completely disables Caribou. As unfortunate as it is, the keyboard is completely unusable on my device.

Tailscale - Software support sucks

Tailscale [5] is pretty cool, I use it to connect to my devices to work remotely and set up a remote home lab. It usually just works, UX is great… until it isn’t. This will be an example of poor software support.

Taildrop is a pretty cool addon to Tailscale - it allows sending files to other devices in the network. On Windows and Android the usage reduces to: click on the file > share > send to device, and it just works. On Linux however…

tailscale docs about taildrop usage, demonstrating commands for sending and receiving files
Tailscale documentation on Taildrop usage on Linux

I can understand that providing a GUI element for sharing may not be that easy - after all, Linux has plenty of GUI shells… but actually no, that’s not an explanation, because Tailscale does provide a systray GUI. And that is not even the worst offender: in order to receive a file, you have to manually invoke the receiving command, every single time.

Another, actually big problem that is far from a nitpick arose recently - suddenly, TPM support started to break for Tailscale on Linux, and a lot of users reported having to re-authenticate their machines on every single reboot, myself included. While it was an accident, it made Tailscale very inconvenient to use for a good while.

(Not only) Ghidra GUI with fractional scaling

With the weird aspect ratio and dpi my screen has, fractional scaling has been a lifesaver, but it comes with caveats.

Trying to use Ghidra with fractional scaling? Good luck. The GUI will be all messed up - some parts properly resized, others not-so-much. It’s a known issue (with a solution).

Similar situation occurred with Bitwarden Linux client, which doesn’t seem to care for DPI scaling at all. This one can be quickly “solved” by using View > Zoom Out, but you have to do it every single time.

Zsh autocomplete

When I migrated to Linux, I wanted to try out an alternate shell - I went for zsh with oh-my-zsh. I’m fairly content with it: it’s much easier to use and highly customizable, with great community support. Zsh stock autocomplete though…

zsh: do you wish to see all 176 possibilities (59 lines)?
Image from [this thread](https://old.reddit.com/r/zsh/comments/gor76p/zsh_do_you_whish_to_see_all_possibilities_how_to/)

This behavior interrupts writing commands by hanging for a good second and requiring a Y/N answer before you can continue - catching you completely off-guard. Thankfully, this one was quickly solved by replacing the stock autocomplete with fzf-tab, which I highly recommend - it works amazingly well and is blazing fast.

My completion configs in .zshrc are as such:

autoload -Uz compinit
compinit
source $ZSH_CUSTOM/plugins/fzf-tab/fzf-tab.plugin.zsh
zstyle ':autocomplete:*' ignored-input 'apt install'
zstyle ':completion:*' list-prompt   ''
zstyle ':completion:*' select-prompt ''

Fingerprint sensor - Proprietary drivers strike again

Proprietary drivers are hardly a new issue for non-Windows/macOS systems. For the most part, hardware companies just do not feel especially compelled to release or develop drivers for Linux.

My laptop uses a VFS495 fingerprint reader, which allegedly had some community support (which no longer seems to be the case). Allegedly, the device manufacturer released some drivers, but I was unable to locate them.

Days of research and multiple attempts at compiling custom drivers resulted in nothing. The fingerprint sensor does not work on my system, no matter the struggle. The furthest I got was compiling some driver, installing it, and running fprint_demo, only for it to merely try to send requests to the reader, hang, and crash afterwards.

Also, tinkering with PAM to enable fingerprint login caused some issues with keystores, which started prompting for a password to unlock them at random moments.

Below are some links, in case you are a soul just as lost as me. I pray that you’re luckier than I was.

Screen tearing

This is probably the worst offender. One of the first and most prominent issues was random screen tearing which forced me to re-log into the system - the screen randomly started to tear, flicker and fill up with scanlines. journalctl showed the following error:

i915 0000:00:02.0: [drm] *ERROR* CPU pipe B FIFO underrun

The error is related to i915, the Linux kernel driver for Intel graphics. Searching for this obscure issue resulted in… workarounds, most of which never worked or merely reduced how often the issue appeared. The solutions were proposed in this thread. Upon applying a fairly specific combination of kernel parameters, it seems to have been mostly alleviated. Mind that these are kernel boot parameters [6], set on the kernel command line (e.g. via GRUB) rather than in /etc/sysctl.conf, since module options like these are not sysctls. Mine are as such:

intel_idle.max_cstate=4
intel_iommu=igfx_off
i915.enable_psr=0

For details on these parameters I recommend reading the linked thread, as well as this Arch Linux wiki article. The issue now appears only sporadically - if at all, it shows up on the login screen and hardly ever during normal usage. It still isn’t truly fixed, though.
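For reference, parameters like these are kernel command-line options, so on a typical GRUB setup they would land in /etc/default/grub roughly like this (a sketch - the quiet splash part is an assumed default, check your own config):

```shell
# /etc/default/grub (illustrative) - then regenerate the config,
# e.g. with `sudo update-grub` on Debian/Ubuntu-based distros
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_idle.max_cstate=4 intel_iommu=igfx_off i915.enable_psr=0"
```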

Also, here is the official issue in i915 repository. As of the time of writing this article - unsolved.

Conclusion

I love Linux, it’s a great OS, fun to tinker with, giving you plenty of freedom. You can learn a lot about its internals just by using it, but sometimes you learn too much. Maybe it’s just my bad luck, maybe it’s niche hardware or weird OS configuration, maybe it’s a bit of everything. Regardless of anything though, I’d never go back to Windows.

Footnotes

[1] - Recently, Windows had a good amount of insane mess-ups, such as breaking “localhost”

[2] - I’m very well aware that I may as well just go full Neovim rice setup, but when I want to code I code, not bother with learning 1 million hotkeys to select and yank text.

[3] - ThrottleStop, a Windows utility for controlling CPU thermal throttling on laptops.

[4] - godmode - A “secret menu” on Windows with almost all system management menus aggregated into a single folder that can be searched through normally.

[5] - Tailscale - A SaaS providing a zero-config VPN based on WireGuard, connecting devices into a full mesh.

[6] - Kernel parameters - options passed to the kernel and its modules on the command line at boot; distinct from sysctl, a tool that sets and reads kernel parameters at runtime.

Making GitHub Profiles Cool - Painful Lessons with GitHub
2025-04-06T17:53:47+02:00
https://crowfunder.github.io/posts/dynamic-github-profiles

As I was preparing for yet another job search, I decided it’s about time I write a proper, neat README for my GitHub profile. It’s common knowledge how professionalism is measured by the number of stars on your repos, how green your activity graph is and how cool your profile looks.

The “coolness” of the profile, while entirely subjective, can be derived from numerous things. For some people, it’s denoted by how many animated stat graphs you can fit on one page; for others, it’s how niche the anime girl on your avatar is. I thought it would be neat to have a “technically impressive” (/s) profile.

TL;DR Use GitHub Actions to modify user profile repository by substituting linked assets, caching shenanigans ensue.

The Idea

Recruiters, just like programmers or other human beings, tend to work at various times of the day. I thought it would be a cool idea to, bear with me, “personalize” the experience a bit. It’s a common thing with (human) languages how you greet people in different ways depending on the time of the day.

Let’s say I want to have a “different” profile in the day and in the night - for starters, different image banners. Well, while GitHub markdown does support HTML tags, it doesn’t support any code execution. The simple way would be a small server that serves different resources based on the time of day, but believe it or not, I’m a broke student and I’d rather not spend money just to have a day/night banner.

EDIT: It’s been a while since I wrote this article and, to my minor dismay, it turns out that this is not an entirely new idea and is quite well documented. Check out this repository for more cool examples of GitHub Actions integrations with your profile.

GitHub Actions to Action

If I know of a free-ish “server” that performs tasks - one that, on top of that, is capable of interacting with GitHub repositories - it’s GitHub Actions.

Love them or not, they have a very generous free tier - 2000 CI minutes per month. For scale, the runners used by this project take up 2*3=6 seconds per day, which results in around 3 minutes of usage per month, out of 2000. Neat, isn’t it? Strictly speaking, the minutes quota applies to private repositories - public ones run on standard runners for free - but for our use case that distinction doesn’t matter, as the “CDN” needs to be public anyway.

github free tier perks
Github free perks, including free CI minutes.

Connect that with the fact that GitHub repositories are essentially free glorified file storage, and we have the stack for our little project ready.

Note: I am, of course, NOT affiliated with GitHub in any way. Even less so since, as mentioned above, I’m looking for a goddamn job.

Implementation

Let’s start by creating a simple animated banner that changes with the time of the day.

The repository that serves as our CDN (coincidentally also the profile README repo, though that isn’t a necessity) has an “assets” subdir with a nested “parts” subdir. The “assets” subdir holds a “banner.gif” file that is linked in the profile and will be overwritten by a GitHub Action with files from the “parts” subdir.

files structure
Repository file tree

Once we have the files ready, let’s figure out how to get an action to run on schedule. A quick Google search (and a dig through AI-generated slop) led me to this amazing blog post by Jason Etcovitch, which gave a detailed answer. It did not, however, mention the caveats - but we’ll get to that later.

cron job syntax
Cron syntax for scheduling actions

As it turns out, Actions can be scheduled like a good old cron job. Great - we can create two actions: one that runs in the morning, the other in the evening. Actions are also capable of executing shell commands on their runners and performing any kind of task on the repository; in our case, committing and pushing.

The Code

name: Set Day Image

on:
  schedule:
    - cron: '0 6 * * *'  # Runs at 6:00 AM UTC every day
  workflow_dispatch:      # Allows manual run from GitHub UI

jobs:
  update-image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4

      - name: Replace with Day Image
        run: cp assets/parts/day.gif assets/banner.gif

      - name: Commit Changes
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add assets/banner.gif
          git commit -m "Update image to day version" || echo "No changes"
          git push

The first, vital element is the scheduling. The on: parameter defines triggers that run the action (check out this page of the Actions documentation for other triggers). In our case, it’s the “schedule” trigger. The other trigger, workflow_dispatch, allows triggering the action manually with the press of a button. (docs)

PS: Thinking back, had I read the documentation instead of relying on just the previously mentioned blog post, I would have known of at least one issue with Actions.

I would also like to avoid spamming my activity graph with regularly scheduled commits, since, let’s be honest, artificial activity doesn’t look very good. I recalled seeing repos that have commits by an Actions bot; a quick look into the checkout action documentation turns up a GitHub Actions bot git identity.

The rest is simple: copy the appropriate file over “banner.gif”, commit and push. What could go wrong?
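The evening counterpart isn’t shown in full here. Assuming a night.gif sits next to day.gif in the “parts” subdir (an assumption - the post only shows the day variant), it would differ from the workflow above only in the schedule and the file copied:

```yaml
# Evening variant (illustrative sketch) - same job as above, except:
on:
  schedule:
    - cron: '0 18 * * *'  # Runs at 6:00 PM UTC every day
# ...
      - name: Replace with Night Image
        run: cp assets/parts/night.gif assets/banner.gif
```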

The “oh no” Moment

The first issue I encountered was fairly apparent: the action failed with a permission error.

Permission to git denied to github-actions[bot]

A quick search gives a quick resolution: apparently Actions are given read-only permissions to the repo by default. All that needs to be done is giving the workflows “Read and write permissions” in the repository settings.

what to click in actions settings
What to click to change Actions permissions

Reasonable, right? With the read-write permissions the action ran smoothly, replacing the file correctly.

Now it was time to see if the scheduling works properly. Evening was nearing, so I decided I could personally verify whether the scheduling really works. The action was scheduled to run at 6PM UTC; the clock showed 6:05, and the action wasn’t running.

6:15. “What the hell?” I muttered as I double-checked my Action configs and files, wondering what it was that I messed up again. I concluded there was no use in waiting; I ran the action manually again and it worked as expected. The file was successfully replaced, the commit authored by the GitHub Actions bot. I started digging.

Surprisingly, it didn’t take long to find a discussion referencing a relevant blog post. It mentions that scheduled GitHub Actions barely ever run on time, with delays of up to several minutes being the norm. As it turns out, the scheduling documentation mentions the same thing - but nobody ever reads the docs, right?

discussion on cron jobs syntax
Reply of a GitHub affiliate, excerpt from the linked post

Honestly, other than making me salty for deepening my paranoia, it doesn’t matter much. This isn’t a time-critical action that needs to run at exact times - it being late by even an hour isn’t really a problem. Still, it’s worth knowing in case you’re doing something that requires the scheduling to be punctual. The article linked above mentions that the only way to ensure actions run on schedule is to employ an external scheduling server that invokes the GitHub API to trigger the actions manually.
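Such an external scheduler would boil down to hitting GitHub’s workflow_dispatch REST endpoint on time. A minimal sketch (owner, repo and workflow filename are placeholders, and actually sending the request requires a real token):

```python
import json
import urllib.request

# Build the REST request an external scheduler would fire to trigger a
# workflow_dispatch event. Owner, repo and workflow names are placeholders.
def build_dispatch_request(owner: str, repo: str, workflow_file: str,
                           ref: str, token: str) -> urllib.request.Request:
    url = (f"https://api.github.com/repos/{owner}/{repo}"
           f"/actions/workflows/{workflow_file}/dispatches")
    return urllib.request.Request(
        url,
        data=json.dumps({"ref": ref}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

req = build_dispatch_request("crowfunder", "crowfunder", "day.yml",
                             "main", "ghp_example")
# urllib.request.urlopen(req)  # actually sending it needs a valid token
```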

I re-scheduled the action to run within an hour for testing and so it did, with a delay of course. Now, it was time to finally see the fruit of my labor.

Cache, oh a double-edged sword

If you’ve ever linked to an external image on GitHub, you may have noticed how, upon hovering over it, its URL shows as something along the lines of camo.githubusercontent.com instead of the link you used. That’s because, by default, GitHub caches static images and serves the cached copy instead of fetching the actual URL, in order to “preserve anonymity”. Generally speaking a great idea, but in our case it’s more of an issue than anything. Read more

That’s exactly the issue I had, but at the time I lacked this information. You can imagine my genuine confusion when, despite confirming the banner got replaced by the action, the old image was still there. I saw a different image on my profile than in the repository, where both were supposed to be the same file. To make matters worse, it didn’t matter where I opened the README - it showed the same thing.

Well, now we’re in quite a pickle. I started digging into why the images in the README were not updating. I found this discussion, which suggested using the PURGE HTTP method, which the GitHub camo endpoint apparently understands (I had not once heard of that method in my entire lifetime). But that solution is a little tedious - manually invoking the API twice a day is, generally speaking, not the best solution. Automating it was also uncertain, as I wasn’t sure whether the cache URLs were really static, especially after purging the cache.
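For illustration, issuing such a PURGE request takes only a few lines - the camo URL below is a made-up placeholder, not a real cached image:

```python
import urllib.request

# Sketch of the purge trick described above: an HTTP request using the
# non-standard PURGE method. The camo URL is a made-up placeholder.
purge_req = urllib.request.Request(
    "https://camo.githubusercontent.com/0123456789abcdef",
    method="PURGE",
)
# urllib.request.urlopen(purge_req)  # would actually hit the endpoint
print(purge_req.get_method())  # PURGE
```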

Fortunately enough, yet another, surprisingly simple solution was proposed: appending a question mark to the end of the image URL (if it isn’t already parameterized; if it is, the image already shouldn’t be cached). As stupidly simple as it is, it forces a cache miss every single time. Upon applying this quick tweak, the image changes were finally visible. The profile was complete… or was it?
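The trick itself is plain string manipulation; a tiny sketch of it (the URL is illustrative):

```python
# Append a cache-busting "?" to an image URL, leaving already
# parameterized URLs alone - mirroring the trick described above.
def cache_bust(url: str) -> str:
    return url if "?" in url else url + "?"

print(cache_bust("https://example.com/assets/banner.gif"))
# https://example.com/assets/banner.gif?
```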

github profile after changes
The "end" result, looks neat, doesn't it?

What’s next?

Knowing that you can freely modify your README.md with scheduled arbitrary code execution, I think you can imagine that anything is possible. For example, another great idea I had in mind was displaying the weather in my city, for recruiters to indulge themselves in (be it out of envy or schadenfreude - mostly the latter, though).

Generous free tiers are amazing, especially for CI/CD which will inevitably be abused, be it for building malware, or for swapping images on a GitHub profile.
