TL;DR: Install BlackHole (brew install --cask blackhole-2ch) and run the script below. Make sure to adjust your input device in the constants and to install sounddevice and numpy.
I recently bought a fancy XLR microphone and an audio interface from Focusrite (Scarlett Duo). After plugging in everything, I wanted to hear my awesome voice via Voice Memos on Mac. I could hear myself only on the left side of my headphones.
Replugging everything, checking settings, turning the mic… Same output, just the left side.
I thought it must be broken, but after a little research I realized that microphones always produce mono output, and the computer just turns it into fake stereo. Teams, Zoom, Google Meet, etc. also provide my input as stereo. So when my coworkers heard me via Teams, all was fine. At least that worked great.
But I couldn’t stop messing around and wanted to figure out how to duplicate my input to a second channel. Can’t be that hard, right?
Yes. Just buy Loopback for 99 dollars and I’m fine! Wow.
Nope. Not going to do that.
So first of all, we need a virtual audio device we can use to route our duplicated mono input. There is a nice open-source project for this:
Blackhole. Just install it via Homebrew: brew install --cask blackhole-2ch.
I wrote a simple Python script to duplicate the mono stream to stereo from my Scarlett audio interface to Blackhole in real time. Just make sure you choose Blackhole as the input device when recording audio.
Here is the code with some comments:
# make sure to install dependencies
# pip install sounddevice
# pip install numpy
import sounddevice as sd
import numpy as np
import time
import multiprocessing

INPUT_CHANNEL = 0  # take the first channel and duplicate it
OUTPUT_CHANNELS = [0, 1]  # map to channels 0 and 1 -> stereo
INPUT_DEVICE = "scarlett"  # change to your audio interface; lowercase, first word is enough


def audio_callback(indata, outdata, frames, time, status):
    if status:
        print(f"Stream status: {status}")
        raise Exception(f"Stream status error {status}")

    outdata[:, OUTPUT_CHANNELS] = np.tile(indata[:, INPUT_CHANNEL], (len(OUTPUT_CHANNELS), 1)).T


def main():
    # adjust your sample rate here
    # do not exceed 96kHz, otherwise the input gets crackly
    samplerate = 96000

    devices = sd.query_devices()
    print("Available devices:")
    for i, d in enumerate(devices):
        print(f"{i}: {d['name']} (in: {d['max_input_channels']}, out: {d['max_output_channels']})")

    input_device_index = next((i for i, d in enumerate(devices) if d['name'].lower().startswith(INPUT_DEVICE)), None)
    # output should be Blackhole (virtual audio interface)
    output_device_index = next((i for i, d in enumerate(devices) if d['name'].lower().startswith('blackhole')), None)

    if input_device_index is None or output_device_index is None:
        print("Could not find Scarlett input or Blackhole output device.")
        return

    print(f"Using input device {input_device_index}: {devices[input_device_index]['name']}")
    print(f"Using output device {output_device_index}: {devices[output_device_index]['name']}")

    with sd.Stream(device=(input_device_index, output_device_index),
                   channels=(INPUT_CHANNEL + 1, max(OUTPUT_CHANNELS) + 1),
                   dtype='float32',
                   samplerate=samplerate,
                   # this should be real time
                   latency="low",
                   callback=audio_callback) as s:
        print(f"Routing audio at {samplerate} Hz... Press Ctrl+C to stop.")
        while s.active:
            time.sleep(1)
        print("Stream is not active anymore.")
    print("Stream closed.")


def run_main_on_core():
    while True:
        # this makes sure that it works even if you unplug and replug the device:
        # if the main function does not run in a separate process,
        # reconnecting does not work because the device stays blocked by the main thread
        # (I tested this a bunch...)
        p = multiprocessing.Process(target=main)
        p.start()
        p.join()
        print("Main function stopped. Waiting 5 seconds before restarting...")
        time.sleep(5)


if __name__ == "__main__":
    run_main_on_core()
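The heart of the script is the np.tile line in the callback: it stacks the mono channel once per output channel and transposes the result so every frame lands in both columns. In isolation, with a dummy four-frame block instead of real microphone data:

```python
import numpy as np

# a dummy mono block of 4 frames, shaped like the callback's indata
indata = np.array([[0.1], [-0.2], [0.3], [-0.4]], dtype=np.float32)
outdata = np.zeros((4, 2), dtype=np.float32)

# same operation as in audio_callback: duplicate channel 0 into both outputs
outdata[:, [0, 1]] = np.tile(indata[:, 0], (2, 1)).T

print(outdata.shape)  # (4, 2): both columns carry the same samples
```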
I also wanted to make sure that when I accidentally unplugged the mic, it would reconnect and reopen the stream after replugging. That was a hell of a ride.
Turns out, you have to run the streaming code inside a new process so when it crashes and the process stops, it “releases” the input. Without multiprocessing, I was not able to reconnect the microphone, no matter how often I closed the stream.
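Stripped down to the pattern itself, here is a sketch with a stand-in worker instead of the real audio stream (the worker just simulates a crash):

```python
import multiprocessing

def worker():
    # stand-in for the audio stream: pretend the device vanished
    raise RuntimeError("device unplugged")

def run_once() -> int:
    # one supervisor step: run the worker in a child process and wait for it;
    # when the child dies, the OS releases everything it held (e.g. the device)
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
    return p.exitcode

if __name__ == "__main__":
    # a non-zero exit code means the worker crashed -> sleep and restart
    print(run_once())
```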
Feel free to reach out if you have any questions :)
Happy coding!
You may already know that I manage my Neovim with Nix since I wrote some articles in the past about my current setup.
Recently, I got really upset about my Neovim startup time. It was too slow … it is always too slow!
Nix always loads all plugins at startup. There is a lazy loading feature, but it is not as mature as lazy.nvim.
New sidequest unlocked: Lazy.nvim with plugins managed by Nix.
Let’s have a look at the world wide web, someone has already done that!
– naive me
No one did it… well at least no one wrote about it.
First, add the lazy.nvim plugin as the only plugin that is managed by Nix and configure it.
The # lua comment is important since it enables Lua syntax highlighting for the string inside a Nix file.
programs.neovim = {
  # ...
  plugins = [ pkgs.vimPlugins.lazy-nvim ];
  extraLuaConfig =
    # lua
    ''
      require("lazy").setup({
        -- disable all update / install features
        -- this is handled by nix
        rocks = { enabled = false },
        pkg = { enabled = false },
        install = { missing = false },
        change_detection = { enabled = false },
        spec = {
          -- TODO
        },
      })
    '';
};
According to the PluginSpec, we can install plugins from a local path with the dir parameter. Let’s leverage this to manage the plugin source with Nix and lazy loading with lazy.nvim.
As an example, we will install nvim-cmp with all dependencies.
programs.neovim = {
  # ...
  extraLuaConfig = with pkgs.vimPlugins;
    # lua
    ''
      require("lazy").setup({
        -- ...
        spec = {
          {
            -- since we used `with pkgs.vimPlugins` this will expand to the correct path
            dir = "${nvim-cmp}",
            name = "nvim-cmp",
            event = { "InsertEnter", "CmdlineEnter" },
            dependencies = {
              -- we can also load dependencies from a local folder
              { dir = "${cmp-nvim-lsp}", name = "cmp-nvim-lsp" },
              { dir = "${cmp-path}", name = "cmp-path" },
              { dir = "${cmp-buffer}", name = "cmp-buffer" },
              { dir = "${cmp-cmdline}", name = "cmp-cmdline" },
            },
            config = function()
              local cmp = require('cmp')
              cmp.setup({
                sources = cmp.config.sources({
                  { name = 'nvim_lsp' },
                  { name = 'path' },
                }),
                snippet = {
                  expand = function(args)
                    vim.snippet.expand(args.body)
                  end,
                },
                mapping = cmp.mapping.preset.insert({}),
              })

              -- Use buffer source for `/` and `?`
              cmp.setup.cmdline({ '/', '?' }, {
                mapping = cmp.mapping.preset.cmdline(),
                sources = {
                  { name = 'buffer' },
                },
              })

              -- Use cmdline & path source for ':'
              cmp.setup.cmdline(':', {
                mapping = cmp.mapping.preset.cmdline(),
                sources = cmp.config.sources({
                  { name = 'path' },
                }, {
                  { name = 'cmdline' },
                }),
                matching = { disallow_symbol_nonprefix_matching = false },
              })
            end,
          },
        },
      })
    '';
};
Et voilà, now you can update your plugins via Nix and lazy load via lazy.nvim.
I split up the lazy config into different files so my Neovim config stays clean and won’t get messy. Just have a look at the repository I linked at the top.
Read this to learn how to install plugins that are not present in the Nix package repository.
Read this to learn how to install Treesitter grammars with Nix as well.
The best part: you don’t even need any fancy tools for this. Here is an implementation with some basic Kubernetes resources.
First, let’s create the workload that requests resources but gets evicted for real jobs:
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
# make sure this value is negative so regular workloads can preempt these pods
value: -100
globalDefault: false
description: "Priority class used by overprovisioning."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
  # make sure to create the namespace
  namespace: overprovisioning
spec:
  replicas: 1
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      nodeSelector:
        # requests a certain type of node if you have multiple node pools
        nodepool: job-executor
      containers:
        - name: overprovisioning
          # make sure to set a proper version!
          image: public.ecr.aws/eks-distro/kubernetes/pause:latest
          resources:
            # request enough resources to fill 1 node
            requests:
              cpu: 28
              memory: 100Gi
      # refers to the priorityclass above
      priorityClassName: overprovisioning
Since one pod of the Deployment above requests enough resources to fill up one full node, the number of replicas is equal to the number of spare nodes you want to have.
Now let’s get into the scheduled part. We use a simple CronJob with kubectl and some RBAC for that:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: overprovisioning
  namespace: overprovisioning
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: overprovisioning
  namespace: overprovisioning
rules:
  - apiGroups:
      - apps
    resources:
      - deployments/scale
      - deployments
    resourceNames:
      - overprovisioning
    verbs:
      - get
      - update
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: overprovisioning
  namespace: overprovisioning
subjects:
  - kind: ServiceAccount
    name: overprovisioning
    namespace: overprovisioning
roleRef:
  kind: Role
  name: overprovisioning
  apiGroup: rbac.authorization.k8s.io
These allow our CronJob to access and scale the Deployment we just created.
The following resources manage the up- and downscaling of our overprovisioning workload:
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: upscale-overprovisioning
  namespace: overprovisioning
spec:
  # upscale in the morning on weekdays
  schedule: "0 6 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          serviceAccountName: overprovisioning
          containers:
            - name: scaler
              # make sure to use a proper version here!
              image: bitnami/kubectl:latest
              # the replicas are your desired spare node count
              command:
                - /bin/sh
                - -c
                - |
                  kubectl scale deployment overprovisioning --replicas=2
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: downscale-overprovisioning
  namespace: overprovisioning
spec:
  # downscale after work
  schedule: "0 17 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          serviceAccountName: overprovisioning
          containers:
            - name: scaler
              # make sure to use a proper version here!
              image: bitnami/kubectl:latest
              # just use 0 replicas to stop overprovisioning
              command:
                - /bin/sh
                - -c
                - |
                  kubectl scale deployment overprovisioning --replicas=0
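If you are not fluent in cron syntax, the five fields of the schedules above can be decoded with a tiny helper; this is just an illustrative sketch, not a full cron parser:

```python
def cron_fields(expr: str) -> dict:
    # split a five-field cron expression into named parts
    minute, hour, day_of_month, month, day_of_week = expr.split()
    return {
        "minute": minute,
        "hour": hour,
        "day_of_month": day_of_month,
        "month": month,
        "day_of_week": day_of_week,
    }

# "0 6 * * 1-5" -> 06:00, any day of month, any month, Monday-Friday
fields = cron_fields("0 6 * * 1-5")
print(fields["hour"], fields["day_of_week"])
```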
That’s it! No Helm chart, no custom operator, just some basic Kubernetes resources.
You can use pkgs.vimPlugins.nvim-treesitter.withAllGrammars to install all compiled grammars at once. Here is a little snippet on how to link all compiled Treesitter grammars into your runtimepath. The Treesitter plugin will automatically pick them up, since it searches for <grammar>.so in your runtimepath (rtp). No further config is needed (of course, you still need to install the Treesitter plugin).
luaConfigSnippet =
  let
    grammarsPath = pkgs.symlinkJoin {
      name = "nvim-treesitter-grammars";
      paths = pkgs.vimPlugins.nvim-treesitter.withAllGrammars.dependencies;
    };
  in
  # lua
  ''
    -- also make sure to append treesitter since it bundles some languages
    vim.opt.runtimepath:append("${pkgs.vimPlugins.nvim-treesitter}")
    -- append all *.so files
    vim.opt.runtimepath:append("${grammarsPath}")
  ''
Just integrate this snippet into your existing Neovim Lua config.
Click here to see how I integrated it into my setup.
Let’s look at how to migrate if you are using the JDBC integration. You might have an application-local.yaml that looks like this:
spring:
  flyway.enabled: true
  datasource:
    url: "jdbc:tc:postgresql:13.8:///testing"
    username: "testing"
    password: "testing"
First, we need to create another profile that is not using the Testcontainers integration. Create a file called application-ci.yaml:
spring:
  flyway.enabled: true
  datasource:
    # postgres:5432 is the url to reach our new database
    url: "jdbc:postgresql://postgres:5432/testing"
    username: "testing"
    password: "testing"
Usually your tests look like this:
@DataJpaTest
// this activates the profile "test" for the current test
@ActiveProfiles("test")
class SomeTest() {}
The profile test is hardcoded here. Let’s implement a resolver that changes the active profile based on an environment variable:
import org.springframework.test.context.ActiveProfilesResolver

class ProfileResolver : ActiveProfilesResolver {
    override fun resolve(testClass: Class<*>): Array<String> {
        val env: Map<String, String> = System.getenv()
        // read out the profile environment variable;
        // use the Testcontainers profile "local" as default (local development)
        val profile: String = env.getOrDefault("PROFILE", "local")
        return arrayOf(profile)
    }
}
This function returns local if the PROFILE environment variable is not set. Otherwise, it returns the value of the variable.
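For illustration, the same lookup sketched in Python (the Kotlin resolver above is what actually runs in the tests):

```python
import os

def resolve_profile() -> str:
    # fall back to the Testcontainers profile "local" when PROFILE is unset
    return os.environ.get("PROFILE", "local")

os.environ.pop("PROFILE", None)
print(resolve_profile())  # local

os.environ["PROFILE"] = "ci"
print(resolve_profile())  # ci
```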
The resolver can be used like this:
@DataJpaTest
// instead of hardcoding the profile we use a resolver here
@ActiveProfiles(resolver = ProfileResolver::class)
class SomeTest() {}
Set the variable in your CI definition. Example in GitLab CI:
test:
  services:
    # use any container image
    - postgres:13.8
  variables:
    PROFILE: ci
  script:
    - ./gradlew test
Sometimes you use the Testcontainers API directly in the code to create containers and use them.
Here is a basic example:
@SpringBootTest
@Testcontainers
@ActiveProfiles("test")
class SomeIntegrationTest() {
    // your tests are here ...

    companion object {
        @Container
        @JvmStatic
        private val kafkaContainer =
            KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))
                .withKraft()

        @DynamicPropertySource
        @JvmStatic
        @Suppress("unused")
        fun registerProperties(registry: DynamicPropertyRegistry) =
            registry.add("spring.kafka.bootstrap-servers") {
                kafkaContainer.bootstrapServers
            }
    }
}
We cannot solve this simply by switching the active profile.
In the first step, you need to implement the resolver from the JDBC section to be able to switch profiles on demand.
Now let’s create a TestConfiguration that will handle the container creation only when the local profile is active:
@TestConfiguration
// IMPORTANT: this configuration only runs when the "local" profile is active
@Profile("local")
class ContainerConfiguration() {
    companion object {
        init {
            val kafkaContainer =
                KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:latest"))
                    .withKraft()
            kafkaContainer.start()
            System.setProperty(
                "spring.kafka.bootstrap-servers",
                kafkaContainer.bootstrapServers,
            )
        }
    }
}
In the last step, use the resolver and configuration in your test case:
@SpringBootTest
@ActiveProfiles(resolver = ProfileResolver::class)
@Import(ContainerConfiguration::class)
class AccountBlockingStartedConsumerIntegrationTest() {
    // tests here...
    // companion object removed!
}
If the PROFILE environment variable is set to local or empty, the container will be created. If it is set to something else, the container won’t be started since the configuration won’t be applied. Make sure to set a correct URL to a running instance in the different application-<profile>.yaml files. Maybe just use GitLab Services to create the container.
All teams reference container images through our pull-through cache, e.g. image: docker-cache.example.com/library/alpine. To remove docker-cache.example.com as a single point of failure, all teams need to change the image name back to image: docker.io/library/alpine or image: alpine.
A possible solution would be to write a mutating Kubernetes webhook that rewrites the image name for every pod. That would work immediately, but it would not change the image in the source code, which would lead to inconsistent Helm charts.
Before enforcing the new image names via OPA, we thought about helping the teams to change the image name instead of blocking their deployments. Manually digging through 200+ services from 40+ teams was not an option. Let the automation begin.
GitLab has a neat CLI tool called glab. With glab you can create issues, merge requests, releases, and much more from the command line.
In order to modify every repository in our GitLab instance, we first need to clone them locally.
glab repo clone -g <group> -a=false -p --paginate
Parameters:
-g lets you specify the group
-a specifies whether to clone archived repositories
-p preserves the namespace and clones repositories into subdirectories
--paginate makes additional requests in order to fetch all repositories

Unfortunately, glab does not let you specify the git clone depth. In general, a shallow clone would be preferable, since the history is not important for us and skipping it would reduce network bandwidth and disk space a lot.
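Since glab cannot do shallow clones for you, one workaround is to clone the repositories yourself with a depth of 1. A small sketch; the URL is a placeholder, and fetching the real repository list (e.g. from the GitLab API) is left out:

```python
def clone_command(url: str, depth: int = 1) -> list[str]:
    # build a shallow git clone invocation; --depth skips the full history
    return ["git", "clone", f"--depth={depth}", url]

# hypothetical repository URL; a real list could come from the GitLab API
cmd = clone_command("https://gitlab.example.com/group/repo.git")
print(cmd)  # pass this to subprocess.run(cmd, check=True) to actually clone
```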
As already explained, we want to replace all occurrences of docker-cache.example.com with docker.io. Since the mirror only applies to containers deployed into Kubernetes, our script should only trigger for Helm chart files.
The replacements let you specify an array in case you have multiple different pull-through proxies defined.
#!/bin/bash
# replace.sh

replacements=(
    # caches
    's/docker-cache.example.com/docker.io/g'
    's/ghcr-cache.example.com/ghcr.io/g'
)

# finds all .yaml and .yml files
# filters out files that include 'gitlab-ci' or 'docker-compose' in their name
for file in $(find "$1" -type f -name "*.y*ml" | grep -v "docker-compose" | grep -v "gitlab-ci"); do
    org=$(cat "$file")
    mod="$org"

    # loop over replacements
    for pattern in "${replacements[@]}"; do
        mod=$(echo "$mod" | sed "$pattern" 2>/dev/null)
    done

    # only modify the actual file if the content changed
    if [[ "$mod" != "$org" ]]; then
        echo "$file"
        echo "$mod" > "$file"
    fi
done
Run the script:
bash replace.sh <folder>
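To sanity-check the patterns without touching any files, the same substitution is easy to reproduce in Python. Note this sketch uses plain string replacement, while sed treats the unescaped dot as a regex wildcard:

```python
replacements = [
    ("docker-cache.example.com", "docker.io"),
    ("ghcr-cache.example.com", "ghcr.io"),
]

def rewrite(content: str) -> str:
    # apply every replacement pair in order
    for old, new in replacements:
        content = content.replace(old, new)
    return content

print(rewrite("image: docker-cache.example.com/library/alpine"))
# image: docker.io/library/alpine
```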
Some repositories now contain changes on our local disk. We do not want to go through every repository manually, check the diff, and push it to GitLab. Even worse: clicking hour after hour in the UI to create hundreds of merge requests.
Let’s write a script:
#!/bin/bash
# traverse.sh

traverse() {
    # iterate over all items inside the folder given as first arg
    for dir in "$1"/*; do
        # if it's not a folder, continue
        if [ ! -d "$dir" ]; then
            continue
        fi

        # if it is not a git repository,
        # recursively call the function again
        if [ ! -d "$dir/.git" ]; then
            echo "Entering $dir"
            traverse "$dir"
            continue
        fi

        # check for git changes
        (cd "$dir" && git diff --quiet)
        git_status=$?

        # just continue if there are no changes
        if [ $git_status -eq 0 ]; then
            continue
        fi

        # enter the folder
        pushd "$dir"

        # push changes to remote
        git checkout -b fix/replace-image-registry
        git add .
        git commit -m "fix: replace image registries" -m "Registry mirrors are set transparent in the Kubernetes containerd configuration."
        git push

        # create a merge request on GitLab
        glab mr create --remove-source-branch --assignee="<YOUR-USERNAME>" --yes --title="feat: replace image registry"

        # leave the folder
        popd
    done
}

traverse "$1"
Run the script:
bash traverse.sh <folder>
Do not blindly execute the script; try it step by step instead. It is easy to comment out some parts and run the script multiple times.
Every merge request will create one GitLab TODO in the UI if you assign yourself with --assignee in the glab command. That lets you go through all merge requests one by one and review them if needed.
I personally did this even though it took about an hour for 100 merge requests. It was still faster than doing every step manually because you only have to make manual changes if needed.
Application launchers are essential tools for power users, allowing you to quickly access and execute applications, commands, and scripts with minimal effort. One remarkable launcher for macOS is Raycast. This tool takes application launching to the next level.
Raycast offers a highly customizable interface, allowing you to set up quicklinks to your most frequently used tools, applications, and actions. Whether it’s managing your passwords with Bitwarden, navigating your GitHub or GitLab repositories, or accessing your clipboard history, Raycast simplifies these tasks with its Quicklinks feature.
Additionally, Raycast supports a variety of plugins available through the Raycast Store, providing endless possibilities for enhancing your workflow.
A window manager controls the layout and arrangement of windows on your screen. Tiling window managers, in particular, are beloved by power users for their efficiency and organization. Two notable options for macOS are Yabai and Amethyst.
Yabai is a popular choice due to its active development community. With Yabai, you can establish rules that prevent it from managing dialog windows or system settings, ensuring that it seamlessly integrates into your workflow. For example, you can set a rule that excludes system preferences from window management.
Efficient keybindings are a hallmark of a power user’s setup. To configure keybindings on macOS, two applications stand out: Karabiner Elements and skhd.
Skhd is especially useful for setting up keybindings for Yabai because it simplifies binding shell commands to keyboard shortcuts. For instance, you can bind a Yabai command like moving a window to a specific workspace with ease.
On the other hand, Karabiner Elements excels at handling complex keybindings, such as remapping keys or creating conditional keybindings for specific contexts, like “Ctrl is Command and Ctrl is Ctrl while in the terminal.”
To streamline your keybinding setup, the Karabiner Event Viewer is a handy tool for debugging keyboard and mouse inputs.
Power users often gravitate towards browsers like Vivaldi and Arc for their unparalleled customization options. These Chromium-based browsers allow users to rebind virtually every shortcut, a level of control not found in mainstream browsers like Chrome or Firefox. Moreover, they support native vertical tab layouts, perfect for managing numerous open tabs.
For even greater control over web navigation, consider the Vimium extension. Vimium lets you navigate the web using keyboard shortcuts, making browsing lightning-fast. For instance, pressing “f” highlights clickable links on a page, and you can activate them with keystrokes like “gh.”
Text editing is a fundamental task for many power users. While Neovim is a popular choice among developers, it does have a steep learning curve. If you’re new to Vim-style editing, start by installing a Vim plugin for your current editor and gradually acclimate to its functionality. Keep in mind that becoming proficient in Vim requires time and dedication, and it may not be for everyone.
Homebrew is a package manager for macOS that simplifies the installation and management of software. Power users love it for its ability to keep track of installed packages and quickly uninstall them. Additionally, you can export a list of all installed brews, making it painless to set up a new Mac with your preferred tools.
Beyond the core applications, several additional tools can enhance your productivity:
While optimizing your workflow is essential, it’s equally important to find balance. You don’t need to be a power user 24/7. Allow yourself time to relax and enjoy your computer for leisure activities. After all, creativity often thrives when you’re not solely focused on productivity.
In conclusion, becoming a power user involves tailoring your computing environment to suit your needs. These applications and tools provide the means to create a highly customized and efficient workspace, but remember that the journey toward becoming a power user is personal and should align with your goals and preferences. So, go forth, explore, and make your digital world truly your own.
In the world of technology, advancements happen at a breakneck pace. From the early days of mechanical typing machines to the sleek and modern computer keyboards we use today, the evolution of keyboard layouts has been nothing short of remarkable. Have you ever wondered why the keys on your keyboard are staggered in the way they are? Let’s delve into the history of keyboard layouts and discover why a matrix split keyboard layout might be the ergonomic solution you’ve been searching for.
The keyboard layout as we know it today has its roots in the past. To understand its design, we need to rewind to the era of mechanical typing machines. These early contraptions couldn’t be constructed with the efficient matrix layout that many contemporary split keyboards adopt. Instead, they featured a staggered arrangement of keys to prevent mechanical jams when two adjacent keys were pressed in quick succession. This legacy design persisted and was carried over to the computer keyboards we use today.
Take a moment to look at your keyboard. Notice how most of your fingers have a significant workload, but your poor thumbs seem to be sitting idly by, responsible for just a fraction of the action. It’s a bit like sending an alpinist up Mount Everest in sneakers – it’s not ideal. In contrast, a split keyboard layout has at least one or two separate keys per thumb. With this layout, you can assign functions like backspace, enter, and layer switches to these additional keys. Suddenly, your thumbs, which happen to be some of the strongest fingers on your hand, find a new purpose and share the load with your other digits.
If you’re a programmer or spend long hours at your keyboard, you’re no stranger to the toll it can take on your body. Standard keyboards force you to keep your hands close together, which can lead to shoulder, back, and wrist pain over time. It’s not a healthy posture for extended use. But with a split keyboard, you have the opportunity to customize your setup, allowing for a more natural hand position. The result? Reduced strain on your body and a more comfortable typing experience. It’s like upgrading from sneakers to proper climbing boots for your hands.
As you evolve and adapt to ergonomic solutions like split keyboards, you may wonder why these innovative designs haven’t become the norm on every desk. The answer lies in the learning curve. Mastering a new keyboard or keyboard layout isn’t as simple as flipping a switch. It’s a gradual process that requires reprogramming your muscle memory and forgetting the familiar layout you’ve used for years.
That’s where the idea of teaching the next generation comes in. If you aspire to have children and want them to thrive in a more ergonomic world, consider introducing them to a human-friendly keyboard layout from the start. By doing so, you’ll spare them the struggle of adapting to new layouts later in life, making their journey into the world of technology smoother and more comfortable.
In conclusion, while transitioning to a split keyboard layout might seem like a daunting task, it’s an investment in your comfort and well-being. The evolution of keyboard layouts, from the days of mechanical typewriters to today’s split keyboards, has been driven by a quest for ergonomics and efficiency. So, why wait? Take the plunge, embrace the matrix split keyboard layout, and unlock a more comfortable and efficient typing experience for yourself and future generations. Your thumbs will thank you.
I haven’t found any minimal example of a working Kubernetes admission webhook made with Kubebuilder, so here is mine.
This example just annotates all created Pods with a nice message.
A lot of code, commands, and comments, as always!
We are going to use Kubebuilder to bootstrap our project.
mkdir pod-webhook
cd pod-webhook
kubebuilder init --domain github.com --repo github.com/breuerfelix/pod-webhook
I didn’t manage to generate valid configuration files with controller-gen when only using webhooks without writing a controller.
Also, I don’t like kustomize, which Kubebuilder uses when generating the manifests, so let’s get rid of all the boilerplate code.
The Makefile won’t make sense anymore either. Dump it and write your own if needed.
rm -rf config hack Makefile
We do not need leader election for a minimal example (you can also remove all Kubebuilder comments since we don’t use the generator anyway).
diff --git a/main.go b/new_main.go
index 9052d2a..18780eb 100644
--- a/main.go
+++ b/new_main.go
@@ -46,13 +46,9 @@ func init() {
func main() {
var metricsAddr string
- var enableLeaderElection bool
var probeAddr string
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
- flag.BoolVar(&enableLeaderElection, "leader-elect", false,
- "Enable leader election for controller manager. "+
- "Enabling this will ensure there is only one active controller manager.")
opts := zap.Options{
Development: true,
}
@@ -66,8 +62,7 @@ func main() {
MetricsBindAddress: metricsAddr,
Port: 9443,
HealthProbeBindAddress: probeAddr,
- LeaderElection: enableLeaderElection,
- LeaderElectionID: "ed15f5f0.github.com",
+ LeaderElection: false,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
go run . should successfully build and run the project.
Modify the Dockerfile so it respects our new project structure and verify your changes with a docker build -t test ..
diff --git a/old_Dockerfile b/Dockerfile
index 456533d..b53359f 100644
--- a/old_Dockerfile
+++ b/Dockerfile
@@ -10,9 +10,7 @@ COPY go.sum go.sum
RUN go mod download
# Copy the go source
-COPY main.go main.go
-COPY api/ api/
-COPY controllers/ controllers/
+COPY *.go .
# Build
-RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go
+RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager .
The Kubebuilder Book references the following example on GitHub.
We are going to strip these files and integrate them into our bootstrapped Kubebuilder project.
Create a file called webhook.go:
package main
import (
"context"
"encoding/json"
"net/http"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)
type podAnnotator struct {
Client client.Client
decoder *admission.Decoder
}
func (a *podAnnotator) Handle(ctx context.Context, req admission.Request) admission.Response {
pod := &corev1.Pod{}
if err := a.decoder.Decode(req, pod); err != nil {
return admission.Errored(http.StatusBadRequest, err)
}
// mutating code start
// the annotations map may be nil on a freshly created pod
if pod.Annotations == nil {
pod.Annotations = map[string]string{}
}
pod.Annotations["welcome-message"] = "i mutated you but that is okay"
// mutating code end
marshaledPod, err := json.Marshal(pod)
if err != nil {
return admission.Errored(http.StatusInternalServerError, err)
}
return admission.PatchResponseFromRaw(req.Object.Raw, marshaledPod)
}
func (a *podAnnotator) InjectDecoder(d *admission.Decoder) error {
a.decoder = d
return nil
}
Add the podAnnotator as a webhook to our manager:
diff --git a/old_main.go b/main.go
index 8db76b2..48544b3 100644
--- a/old_main.go
+++ b/main.go
@@ -12,6 +12,7 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
+ "sigs.k8s.io/controller-runtime/pkg/webhook"
)
var (
@@ -48,6 +49,8 @@ func main() {
os.Exit(1)
}
+ mgr.GetWebhookServer().Register("/mutate-pod", &webhook.Admission{Handler: &podAnnotator{Client: mgr.GetClient()}})
+
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
os.Exit(1)
If you run the project via go run . now, it says that it is missing certificates. CERTIFICATES?? Yes … but wait, you won’t see any certificate here, I promise!
Just make sure that it builds without errors and you should be fine.
Kubernetes is not able to call webhooks that are not secured via HTTPS.
To handle this, we are going to use cert-manager and let it handle all that nasty stuff.
Refer to this guide for the installation of cert-manager; I recommend using Helm.
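If you go the Helm route, the install usually looks something like this (a sketch based on the common chart setup; chart values change between releases, so double-check the cert-manager docs for the current recommended flags):

```shell
# add the jetstack chart repo and install cert-manager including its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```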
First of all, let’s create a namespace for all our stuff:
kubectl create namespace pod-greeter
Create a cert-manager Issuer that handles self-signed certificates and a Certificate itself:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: selfsigned
namespace: pod-greeter
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: pod-greeter
namespace: pod-greeter
spec:
# remember the secretName
secretName: pod-greeter-tls
dnsNames:
# IMPORTANT: the format is: service-name.namespace.svc
- pod-greeter.pod-greeter.svc
issuerRef:
name: selfsigned
Create a Service that matches the DNS name format in our Certificate:
apiVersion: v1
kind: Service
metadata:
# resolves to pod-greeter.pod-greeter.svc
name: pod-greeter
namespace: pod-greeter
spec:
ports:
- name: https
port: 9443
protocol: TCP
selector:
# IMPORTANT:
# this has to match the selector in our Deployment later
app: pod-greeter
Create a Deployment that matches the selector in our Service.
Also, make sure that the secretName matches the one in Certificate.
Cert-manager automatically creates a Secret that contains the generated certificates so we can mount them in our pod.
apiVersion: apps/v1
kind: Deployment
metadata:
name: pod-greeter
namespace: pod-greeter
spec:
selector:
matchLabels:
# IMPORTANT
app: pod-greeter
replicas: 1
template:
metadata:
labels:
# IMPORTANT
app: pod-greeter
spec:
containers:
- name: pod-greeter
image: ghcr.io/breuerfelix/pod-webhook:latest
imagePullPolicy: Always
volumeMounts:
- name: tls
# the tls certificates automatically get mounted into the correct path
mountPath: "/tmp/k8s-webhook-server/serving-certs"
readOnly: true
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
terminationGracePeriodSeconds: 10
volumes:
- name: tls
secret:
# IMPORTANT: has to match from Certificate
secretName: pod-greeter-tls
# the pod only gets created if the secret exists
# so it waits until the cert-manager is done
optional: false
As the last step, we can finally create our MutatingWebhookConfiguration to tell Kubernetes that it should call the correct endpoint of our controller.
Due to the cert-manager annotation, all certificates are going to be injected into this webhook configuration at runtime by cert-manager.
I told you that you won’t see any certs here!
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: pod-greeter
annotations:
# IMPORTANT: has to match the Certificate as namespace/name
cert-manager.io/inject-ca-from: pod-greeter/pod-greeter
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
# has to match the service we created
namespace: pod-greeter
name: pod-greeter
port: 9443
path: "/mutate-pod"
failurePolicy: Fail
name: mpod.kb.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- pods
sideEffects: None
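Assuming you saved the manifests above into files, deploying everything is a single apply (the filenames here are just placeholders, use whatever you named yours):

```shell
# apply Issuer + Certificate, Service, Deployment and the webhook config
# (hypothetical filenames)
kubectl apply \
  -f certificate.yaml \
  -f service.yaml \
  -f deployment.yaml \
  -f webhook.yaml
```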
You are done! Let’s test it out by creating a simple pod:
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
Fetch it and have a look at the annotations. It should have a welcoming message!
kubectl get pod nginx -o jsonpath='{.metadata.annotations}'
I figured out two possible scenarios to develop a mutating webhook:
1. Spin up minikube or kind locally, deploy the controller, and test it
2. Point clientConfig.url in the MutatingWebhookConfiguration at ngrok (or alternatives) to tunnel your local instance into a remote cluster
The second option is the easiest for me, since I don’t have to redeploy the application on every change, and I don’t have to clutter my computer with a local Kubernetes cluster.
Currently, there is no option to start the Kubebuilder webhook server without TLS certificates. First, let’s create self-signed certificates for our webhook server:
mkdir hack certs
touch hack/gen-certs.sh
chmod +x hack/gen-certs.sh
vi hack/gen-certs.sh
Contents of hack/gen-certs.sh:
#!/bin/bash
mkdir -p certs
openssl genrsa -out certs/ca.key 2048
openssl req -new -x509 -days 365 -key certs/ca.key \
-subj "/C=AU/CN=localhost" \
-out certs/ca.crt
openssl req -newkey rsa:2048 -nodes -keyout certs/server.key \
-subj "/C=AU/CN=localhost" \
-out certs/server.csr
openssl x509 -req \
-extfile <(printf "subjectAltName=DNS:localhost") \
-days 365 \
-in certs/server.csr \
-CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial \
-out certs/server.crt
Run the script to generate the certificates into the certs folder. Don’t forget to add that folder to your .gitignore file:
./hack/gen-certs.sh
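To sanity-check the result, you can inspect the server certificate’s SAN. The snippet below is self-contained (it generates a throwaway cert with the same subject and SAN in a temp directory, so it doesn’t touch your certs folder; requires openssl 1.1.1+ for -addext) and should print DNS:localhost:

```shell
# generate a throwaway self-signed cert carrying the localhost SAN
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/C=AU/CN=localhost" \
  -addext "subjectAltName=DNS:localhost" \
  -keyout "$tmp/server.key" -out "$tmp/server.crt" 2>/dev/null
# print the SAN extension; expect "DNS:localhost"
openssl x509 -in "$tmp/server.crt" -noout -ext subjectAltName
rm -rf "$tmp"
```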
Now add the following lines to your main.go in order to allow passing custom paths for your certificates into the application:
// ...
// read in command line flags
var certDir, keyName, certName string
flag.StringVar(&certDir, "cert-dir", "", "Folder where key-name and cert-name are located.")
flag.StringVar(&keyName, "key-name", "", "Filename for .key file.")
flag.StringVar(&certName, "cert-name", "", "Filename for .crt file.")
// ...
// Server uses default values if provided paths are empty
server := &webhook.Server{
CertDir: certDir,
KeyName: keyName,
CertName: certName,
}
// register your webhook
server.Register("/mutate-pod", &webhook.Admission{Handler: &podWebhook{
Client: mgr.GetClient(),
}})
// register the server to the manager
mgr.Add(server)
// ...
Start the server for developing:
go run . --cert-dir certs --key-name server.key --cert-name server.crt
Now we need to tunnel the localhost server to the public. ngrok only tunnels TLS traffic in their paid plan, so I decided to use localtunnel.
localtunnel tries to get the subdomain called webhook-development if it is available. If this is not the case, you have to substitute your subdomain in the MutatingWebhookConfiguration.
npx localtunnel --port 9443 --local-https --local-ca certs/ca.crt --local-cert certs/server.crt --local-key certs/server.key --subdomain webhook-development
Finally, we can create a MutatingWebhookConfiguration for our development setup. Don’t forget to delete it after you are done.
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: webhook-development
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
# choose the correct subdomain here
url: "https://webhook-development.loca.lt/mutate-pod"
failurePolicy: Fail
name: juicefs.breuer.dev
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- pods
sideEffects: None
Success! You should now get traffic on your local machine when updating or creating a new Pod.
ATMs inside the casinos usually charge a 10% surcharge. Your trip will be expensive enough without extra fees. Better to bet this money than pay it to some random ATM company.
Research clubs, parties, or other events beforehand and try to buy tickets if they are available. Guys typically have to spend more money on tickets than girls. Prices also change during the evening, so buying them before your trip is always a good idea to avoid waiting for hours in line just to be broke after.
There are flat-rate parking spots in nearly every big hotel for around 15–25 dollars per day, depending on the day of the week. Don’t forget to park in front of the hotel to unload your baggage! It looks weird wandering around Vegas with a huge bag. Trust me, I’ve been through this…
Expensive. Get drinks in a store and drink them in your room unless you want to pay 30 bucks for a Vodka Energy.
Be prepared to show your ID card every time you want to watch or play at a table. Everybody who looks younger than 35 will be asked for their ID. This is standard procedure, so just hand over your card. You’ll get it back for sure!