<![CDATA[mattorb]]>https://mattorb.com/https://mattorb.com/favicon.pngmattorbhttps://mattorb.com/Ghost 6.22Wed, 18 Mar 2026 13:33:27 GMT60<![CDATA[Keyjam]]>For a long time, I’ve wanted to reduce my reliance on the [computer] mouse. Despite a deep appreciation for the flow state that a good session of non-stop keypresses offers, I often found myself reaching for the mouse as an initial reaction when I didn't know

]]>
https://mattorb.com/keyjam/67d496b719fb631966860a90Fri, 14 Mar 2025 21:02:11 GMT

For a long time, I’ve wanted to reduce my reliance on the [computer] mouse. Despite a deep appreciation for the flow state that a good session of non-stop keypresses offers, I often found myself reaching for the mouse as an initial reaction when I didn't know the keyboard shortcut for something. That’s where Keyjam comes in—I built it to nudge myself to use the keyboard more and the mouse less.

Keyjam is a macOS status app that encourages using the keyboard.

How Keyjam Works

For starters, it counts key presses in real time to track your keyboard streaks – that is, runs of sequential key presses (just counting them). The moment you use the mouse, the streak breaks and you get some immediate feedback.
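The streak logic itself is simple enough to sketch. Keyjam is a native macOS app, so the Python below is only an illustrative sketch of the counting behavior described above; the class and method names are my own, not from the project:

```python
class StreakCounter:
    """Illustrative sketch of Keyjam-style streak counting.

    Only the *count* of key presses is tracked; which key was
    pressed is never recorded.
    """

    def __init__(self):
        self.current = 0  # length of the streak in progress
        self.best = 0     # longest streak seen so far

    def on_key_press(self):
        self.current += 1
        self.best = max(self.best, self.current)

    def on_mouse_event(self):
        # Any mouse use breaks the streak; return its length so the
        # UI can give immediate feedback.
        ended = self.current
        self.current = 0
        return ended
```

The real app hooks these two events up to system-level input monitoring; the counter itself is all the state it needs.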

It also offers a visualization of the history of those streaks over time - the last day, week, and month.

You can constrain it to only track streaks for keyboard-centric apps like your favorite writing or coding app, or go system-wide if, with some discipline, you are a pro who doesn't really need the mouse. If you go big like that, you'll likely want a window manager and some other keyboard-helper apps in the mix.

Safety

I am a huge fan of open source, and given the nature and sensitivity of something like intercepting keystroke events, I want to mention two things here:

  1. Keyjam ONLY COUNTS keystrokes – it discards any information about the actual key presses it receives. One day it may bucket shortcut keys (like Cmd-Shift-P) to count them distinctly, but there is zero intention to ever capture the content of any text/language you are typing.
  2. Given #1, I decided to open source this so that if you have any doubts about granting permission to intercept keystrokes on your system, you can go read the source and see exactly what it is doing, or ask an AI to explain it to you – that last part makes sense as of Q1 2025 😀.

If you’re interested in driving up your keyboard usage, check out Keyjam on GitHub.

]]>
<![CDATA[Model Context Protocol and Local AI]]>What is Model Context Protocol? It could be a game changer for local LLM usage w/tools.

I poked at it some this morning to gain some understanding and sketched out these notes along the way.

And here's a mapped out example of a local LLM Client (goose)

]]>
https://mattorb.com/model-context-protocol-notes-1/679d5f4f1c070b2cef57ce50Fri, 31 Jan 2025 23:44:03 GMT

What is Model Context Protocol? It could be a game changer for local LLM usage w/tools.

I poked at it some this morning to gain some understanding and sketched out these notes along the way.

Model Context Protocol and Local AI

And here's a mapped-out example of a local LLM client (goose) using a locally hosted LLM (Qwen2.5 via Ollama) to access a local MCP server. It is a bare-minimum example which verifies whether a list of items is (lexicographically) sorted. The LLM client receives an instruction from the LLM to invoke the tool (on the MCP server), which executes the TypeScript code on the right side of the screenshot below.

Model Context Protocol and Local AI
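The check the tool performs is a one-liner. Here is roughly the same logic sketched in Python (the example in the screenshot is TypeScript; this translation is mine):

```python
def is_sorted(items: list[str]) -> bool:
    """Return True if items are in lexicographic (string) order."""
    # Compare each adjacent pair; an empty or single-item list is sorted.
    return all(a <= b for a, b in zip(items, items[1:]))
```

The MCP server exposes something like this as a tool; the model decides when to call it and the client relays the result back into the conversation.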
]]>
<![CDATA[Building Software: Risk and Validation]]>The idea of course correcting regularly with an intent to learn can have a huge impact on project success.

Let's see if we can explain this in a drawing without any buzzwords.

Confirm progress throughout a project by delivering value with mindset toward learning and making adjustments along

]]>
https://mattorb.com/on-risk-iteration-and-validation/679806b21c070b2cef57ce20Mon, 27 Jan 2025 22:29:33 GMT

The idea of course correcting regularly with an intent to learn can have a huge impact on project success.

Let's see if we can explain this in a drawing without any buzzwords.

Building Software: Risk and Validation

Confirm progress throughout a project by delivering value with a mindset toward learning and making adjustments along the way.

]]>
<![CDATA[Clarity in Decision Making, from Org to Architecture]]>This morning I was thinking about how the most successful software teams I have worked on had a _very_ shared understanding of how decisions were guided.

Hidden costs explode when there is ambiguity around boundaries, ownership, strategy, goals, and tactics.

]]>
https://mattorb.com/clarity-decision-making-org-arch/679284b11c070b2cef57cdf8Thu, 23 Jan 2025 18:06:52 GMT

This morning I was thinking about how the most successful software teams I have worked on had a _very_ shared understanding of how decisions were guided.

Clarity in Decision Making, from Org to Architecture

Hidden costs explode when there is ambiguity around boundaries, ownership, strategy, goals, and tactics.

]]>
<![CDATA[Aligning Software Teams]]>This morning, I have been thinking about past software projects in terms of what made them successful: balancing risk, learning, achieving results, and good vibes.

Making sure a team has a shared understanding and alignment around the why, what, and how they will approach building software together can have such

]]>
https://mattorb.com/aligning-software-teams/678809b11c070b2cef57cdd7Wed, 15 Jan 2025 19:18:36 GMT

This morning, I have been thinking about past software projects in terms of what made them successful: balancing risk, learning, achieving results, and good vibes.

Making sure a team has a shared understanding and alignment around the why, what, and how they will approach building software together can have such a huge impact on the success of a project and how the contributors feel about it.

How about a bingo card?

Without being too prescriptive, I would say having a stance on all the items in these boxes ASAP starts you off better than not, as many of them can feed into and affect each other.

Aligning Software Teams
]]>
<![CDATA[Automatic Standing Desk anyone?]]>
Several years ago, I tried a manual standing desk and it was a bit of a failed experiment. I tried standing all day, for a half day, for certain intervals, etc, but inevitably I would forget to switch between the sit/stand positions and I took on too much standing,

]]>
https://mattorb.com/automaticstandingdesk1/6171c8e527e7707cf5a43c02Thu, 21 Oct 2021 20:12:50 GMT


Several years ago, I tried a manual standing desk and it was a bit of a failed experiment. I tried standing all day, for a half day, for certain intervals, etc., but inevitably I would forget to switch between the sit/stand positions and I took on too much standing, too quickly. I tried some reminder/notification-centric solutions to encourage me to change positions, but I ended up just ignoring them if I was in the zone writing some code. I half-joked to a coworker at the time that I needed the desk to just automatically switch positions on a regular basis without requiring or even allowing my intervention.

Years later, the combination of open source and some popular mass-market standing desks can deliver what I was half-joking about. You can now buy a standing desk with a Bluetooth LE-enabled controller. Various people have put projects on Github where they have already done the work of reverse engineering the protocol used by some popular standing desks. So now we can write code to change the desk to the sit or stand position. Putting that on a forced schedule is a relatively short hop.

This week, I am experimenting with having my desk switch to standing a few times a day, if I am at the desk -- based on mouse and keyboard activity.
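The scheduling side of that can be sketched in a few lines. In this Python sketch, set_desk_position and the schedule values are hypothetical stand-ins I made up for illustration; the BLE plumbing lives in the repo mentioned at the end of this post:

```python
IDLE_THRESHOLD = 5 * 60           # consider me away after 5 minutes of no input
STAND_TIMES = {"10:30", "14:00"}  # times of day to switch to standing

def maybe_stand(now_hhmm, idle_seconds, set_desk_position):
    """Switch the desk to standing on schedule, but only if I'm at the desk.

    `idle_seconds` is how long since the last mouse/keyboard input;
    `set_desk_position` would issue the BLE command in a real setup.
    """
    if now_hhmm in STAND_TIMES and idle_seconds < IDLE_THRESHOLD:
        set_desk_position("stand")
        return True
    return False
```

Run that from a timer loop and the desk only moves when there has been recent input, so it never stands up an empty chair.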

So far so good!

Hoping to build a sustainable habit this time around. 😃

update: Github repo.

]]>
<![CDATA[Github Action for Lighthouse]]>After using a Github action to assess web page performance with Google PageSpeed, I found out that PageSpeed leverages a tool called Lighthouse for most of what it now provides.  When I was configuring a Github action to check for Javascript library security vulnerabilities, I remember it using a

]]>
https://mattorb.com/github-action-for-lighthouse/5e3977b854d47f26288f126cWed, 05 Feb 2020 14:12:19 GMT

After using a Github action to assess web page performance with Google PageSpeed, I found out that PageSpeed leverages a tool called Lighthouse for most of what it now provides.  When I was configuring a Github action to check for Javascript library security vulnerabilities, I remember it using a package called Lighthouse, too.

🤔

According to Google, the Lighthouse project provides . . .

"Automated auditing, performance metrics, and best practices for the web"

NOTE: This is the same suite of tests that execute when you launch Google Chrome and use the audits tab in Developer Tools, and the same suite of tests that now provide a large portion of what is reported back by Google PageSpeed.

Some quick scouting on Github revealed some existing efforts focused around making Lighthouse easier to plug in to CI/CD and packaging it into a Github Action.

For the purpose of asserting and maintaining a baseline performance level on this site, we're going to try leveraging the Lighthouse Github Action.   Thanks to Aleksey and the contributors!

The Github Action

First step: add a workflow file .github/workflows/lighthouse.yml in the Github repository which should trigger Lighthouse:

name: Lighthouse
on: 
  push:
    branches:
    - master
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Audit URLs using Lighthouse
        uses: treosh/lighthouse-ci-action@v2
        with:
          urls: 'https://mattorb.com/'
      - name: Save results
        uses: actions/upload-artifact@v1
        with:
          name: lighthouse-results
          path: '.lighthouseci'

After committing that workflow file, it executes, and we can go grab the artifact generated by Lighthouse and see what our scores look like.  

Github Action for Lighthouse
Find Github action artifacts here

That zip file contains both a json and html representation of the report.

Inspecting the summary portion of the html report gives us overall scores in various areas and provides much more detail below that fold:

Github Action for Lighthouse
Lighthouse scoring areas, summary. For breakdown and recommendations, see full report.

Based on those numbers, decide where to draw the line and set a threshold not to fall below in each of the categories.  Leave some room for variability in server performance, though.  Nobody wants to be dealing with false alerts regularly.

Create a .github/lighthouse/lighthouserc.json file in the repository with Lighthouse assertions you want to enforce:

{
  "ci": {
    "assert": {
      "assertions": {
        "categories:performance": ["error", {"minScore": 0.90}],
        "categories:accessibility": ["error", {"minScore": 0.80}],
        "categories:best-practices": ["error", {"minScore": 0.92}],
        "categories:seo": ["error", {"minScore": 0.90}]
      }
    }
  }
}

Configure the stanza in the Github Action at .github/workflows/lighthouse.yml to enact those assertions by pointing it to that config file:

✂️ 

    steps:
      - uses: actions/checkout@v1
      - name: Audit URLs using Lighthouse
        uses: treosh/lighthouse-ci-action@v2
        with:
          urls: 'https://mattorb.com/'
          configPath: '.github/lighthouse/lighthouserc.json' # Assertions config file 

✂️ 

Next time the action executes, it will check that we don't drop below these thresholds.  

If one of those assertions fails, it will fail the workflow with a message:

Github Action for Lighthouse
Failing to meet the performance criteria

You can fail the workflow based on overall scores or make even finer grained assertions using items called out in the detail of the report.  Check out the Lighthouse CI documentation on assertions for examples.

Going Further

  • Run a Lighthouse CI server to provide tracking of these metrics over time in a purpose built solution for that.
  • Use multiple runs to average out server performance and draw more manageable thresholds.  
  • Use the concept of a page performance budget to set guide-rails and limits around size and quantity of external (think .js and .css) resources.
  • Shift left and assess a site change while it is still in the PR stage.
  • Design specific test suites and set unique performance/seo requirements based on what kind of page is being probed.
  • Leverage a Lighthouse plugin to assess pages that have opinionated and strict design requirements for their target platforms (i.e. - AMP).

]]>
<![CDATA[Github Action for Javascript Vulnerability Scanning]]>Part of what this web site serves is 3rd party javascript libraries.  The libraries included in a page are a mash-up of libraries and dependencies from a few sources.  

Those libraries occasionally have security vulnerabilities disclosed.  In our last post, we put

]]>
https://mattorb.com/github-action-for-javascript-vulnerability-scan/5e318b3054d47f26288f114dWed, 29 Jan 2020 14:44:17 GMT

Part of what this web site serves is 3rd party javascript libraries.  The libraries included in a page are a mash-up of libraries and dependencies from a few sources.  

Those libraries occasionally have security vulnerabilities disclosed.  In our last post, we put in automatic checks around performance of the site.   Now, let's do something to detect Javascript library vulnerabilities.

The Github Action

I found this project and whipped up the changes necessary to turn it into a Github Action.   Thanks Liran!  Thanks too to Snyk, which provides the vulnerability list.

I adapted an existing Docker container, wrapping a Github action around it.   One way to do this without overly leaking the Github Actions contract into the container design is to map Github Action parameters to environment variables and args that are agnostic and already expected by the container, like so:

env:
     SCAN_URL: ${{ inputs.scan-url }} 

... where inputs.scan-url comes from the Github Action contract (as a 'parameter') and 'SCAN_URL' is an environment variable the existing Docker container already understands.   This is in contrast to having the container need to understand how to look for the 'INPUT_'-prefixed vars that a Github Action provides by default (ref: github docs).  If you don't want to, or can't, modify the existing Docker container, this is an option.  

The container does still need to exit with an error to cause a Github Action to fail though.   I had to modify the underlying Javascript code to accommodate that.  There would not be much value in an automatic check that always passes.
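Both points can be sketched together: read only the generic SCAN_URL variable, and return a nonzero exit code when the scan finds something. The real project is JavaScript, so this Python sketch and its names are illustrative only:

```python
import os
import sys

def exit_code_for_scan(scan, environ=os.environ):
    """Run `scan(url)` against SCAN_URL and return the process exit code.

    The container reads only the generic SCAN_URL variable; the Github
    Action layer maps its 'scan-url' input onto it, so the container never
    needs to know about INPUT_-prefixed variables. A nonzero return value
    (used as the exit status) is what fails the Action step.
    """
    url = environ.get("SCAN_URL")
    if not url:
        print("SCAN_URL is required", file=sys.stderr)
        return 2
    vulnerable = scan(url)  # scan() returns True if vulnerabilities were found
    return 1 if vulnerable else 0
```

An entrypoint would then call `sys.exit(exit_code_for_scan(run_scan))` with whatever scanner function the container actually provides.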

Now that we have the action, we use it by creating a workflow file in .github/workflows/javascript_vulnerability_check.yml :

name: Test site for publicly known js vulnerabilities

on: 
  push:
    branches:
    - master                   # Check on every commit to master
  schedule:
    - cron:  '0 13 * * 6'      # Check once a week regardless of commits
  repository_dispatch:
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - name: Testing for public javascript library vulnerabilities 
        uses: mattorb/is-website-vulnerable@github-action_v1   # until PR to original repo is merged
        with:
          scan-url: "https://mattorb.com"

With that in place, we see the following check run after each commit, and once a week for good measure:

Github Action for Javascript Vulnerability Scanning

Shift security left!

So.  Awesome.

Now we at least have some awareness if any of the following happens:

  • A change we make introduces a library with a vulnerability
  • A change in a 3rd party dependency introduces a library with a vulnerability
  • No change is made at all, but a vulnerability is discovered and published for a JS library we were already using.

Automatic checks for the win!

]]>
<![CDATA[Github Action for Google PageSpeed Insights]]>Lately, I have been making more broad sweeping changes to this site.  I want to ensure that I don't accidentally make a change which slows down the site – especially the homepage.  Keeping a web site performant has many benefits including better user experience and affecting

]]>
https://mattorb.com/github-action-for-google-pagespeed-insights/5e2af69654d47f26288f0e4eMon, 27 Jan 2020 14:38:20 GMT

Lately, I have been making broader, more sweeping changes to this site.  I want to ensure that I don't accidentally make a change which slows down the site – especially the homepage.  Keeping a web site performant has many benefits, including better user experience and improved search engine rankings.

Google provides a tool called PageSpeed Insights to analyze how quickly a page loads and renders on desktops and mobile devices.  According to Google:

PageSpeed Insights analyzes the content of a web page, then generates suggestions to make that page faster.

It also calculates an overall score.

For this site, I want to start by setting a baseline and then regularly measure to make sure I don't do something to drop the score below that baseline performance level.  For now, we're only measuring the homepage.

The Github Action

I found this project which provides a Github Action to run PageSpeed Insights.  Thanks Jake!

I wired it up to my blog's Github repo by defining a workflow file .github/workflows/pagespeedinsights.yml:

name: Check Site Performance with PageSpeed Insights

on: 
  push:
    branches:
    - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Running Page Speed Insights
        uses: JakePartusch/psi-action@v1
        with:
          url: "https://mattorb.com"

It kicks off after each commit to the repo.   Example results below:

Github Action for Google PageSpeed Insights
Desktop PageSpeed Insights for mattorb.com

The overall 'performance' desktop score of 99 out of 100 is really good.  Nice!     I can't take credit for that.   The hard work of the Ghost team and the contributors to the default 'Casper' theme put us there.

Now, let us iterate and see what we put together!  

See the #comments in the snippets below, which mark what is  changing.

Raise the threshold

name: Check Site Performance with PageSpeed Insights

on: 
  push:
    branches:
    - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Running Page Speed Insights
        uses: JakePartusch/psi-action@v1
        with:
          url: "https://mattorb.com"
          threshold: 96                # <--- default was 70

This fails the build if the overall performance score drops below this number.  I'm leaving a tiny bit of wiggle room here for volatility in server response times.

Measure after each blog post

...
on: 
  push:
    branches:
    - master
  repository_dispatch:      # <-- External event hook
...

We have our Ghost-based blog wired up to post a Github repository_dispatch event to trigger a Github action for new/updated posts.  See the end of this post for more detail.  This is relevant when you are not doing static site generation, or when a content management system manifests changes into a page.  In those cases, you need something event-based to retrigger an assessment after the site content changes.

Measure once a week

Our web site has some external dependencies, and the rules we are measuring against can evolve outside our immediate control.   To catch any unexpected changes in those things, we run this once a week, even if we change nothing on our site.

...
on: 
  push:
    branches:
    - master
  schedule:
    - cron:  '0 13 * * 6'     # <---- once a week
  repository_dispatch:
...

Hat tip to crontab.guru re: 0 13 * * 6
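The five fields of that expression can be named explicitly. A tiny Python sketch of the field layout (it only splits the expression; it does not evaluate schedules):

```python
def cron_fields(expr: str) -> dict:
    """Split a 5-field cron expression into named fields."""
    minute, hour, day_of_month, month, day_of_week = expr.split()
    return {"minute": minute, "hour": hour, "day_of_month": day_of_month,
            "month": month, "day_of_week": day_of_week}

# '0 13 * * 6' => minute 0, hour 13, any day-of-month, any month,
# day-of-week 6 (Saturday) -- once a week, Saturday at 13:00.
# Note: Github Actions evaluates schedule crons in UTC.
```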

Measure both desktop and mobile page speed

...
    steps:
      - name: Running Page Speed Insights (Mobile)   # <-- new step
        uses: JakePartusch/psi-action@v1
        id: psi_mobile                     
        with:
          url: "https://mattorb.com"
          threshold: 90                    # <-- distinct threshold
          strategy: mobile                 # <-- different strategy
      - name: Running Page Speed Insights (Desktop)
        uses: JakePartusch/psi-action@v1
        id: psi_desktop                    
        with:
          url: "https://mattorb.com"
          threshold: 96
          strategy: desktop               
...

If you use an action with the same 'uses' clause twice, give each step a unique 'id' so the two runs can be distinguished and referenced.

Wrapping up

Here's the workflow file for my Github Action, in full:

name: Check Site Performance with Page Speed Insights

on: 
  push:
    branches:
    - master
  schedule:
    - cron:  '0 13 * * 6'
  repository_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Running Page Speed Insights (Mobile)
        uses: mattorb/psi-action@v1
        id: psi_mobile
        with:
          url: "https://mattorb.com"
          threshold: 90
          strategy: mobile
      - name: Running Page Speed Insights (Desktop)
        uses: mattorb/psi-action@v1
        id: psi_desktop
        with:
          url: "https://mattorb.com"
          threshold: 96
          strategy: desktop

For every commit, every blog post, every theme change, and once a week, a mobile and desktop PageSpeed assessment measures how the mattorb.com home page performs and alerts us if something degrades significantly.

We also get a history of the performance measurement results stored in the Actions tab on Github, which can be super helpful in understanding which of the several things measured has degraded in performance.

When you browse to examine those results you can see this level of detail:

Github Action for Google PageSpeed Insights
Mobile Page Speed Insights for mattorb.com
Github Action for Google PageSpeed Insights
Desktop Page Speed Insights for mattorb.com

Fun stuff!  Some potential improvements lie in the 'Opportunities' section of the report and making something run this test on several (or all) pages throughout the site.

Performance is one of those things that is a lot easier to keep going once you have a baseline established and some tools to help track and understand what causes changes in it.

]]>
<![CDATA[Dev Team Productivity - Technical Practices]]>For a development team it is critical to have a prioritized list, with tight feedback cycles, while mitigating the right risks.  This article is going to focus on the second part of that.

We want short feedback loops to enable development teams to make changes at a healthy and

]]>
https://mattorb.com/dev-team-productivity-technical-practices/5c77eeea7b071f1e05d002d1Tue, 10 Dec 2019 14:09:07 GMT

For a development team it is critical to have a prioritized list, with tight feedback cycles, while mitigating the right risks.  This article is going to focus on the second part of that.

We want short feedback loops to enable development teams to make changes at a healthy and sustainable pace.   It is important not to go slow (boring! stagnant!) and to avoid boondoggles, but at the same time we don't want to be the overconfident downhill skier who runs into a tree or the irresponsible driver who slides off the road into a ditch because they ignored the slick road conditions on a dark rainy night.  We want to make small course corrections early and quickly: measure, adjust, repeat.

To do that, we need to create an environment which focuses on achieving outcomes while enabling lots of small experiments and autonomy.  That puts a team in the best possible position to react to planned (release/change based) and unplanned (external events) feedback.

To enable lots of small experiments at a sustainable pace, it is critical to minimize surprises and maximize predictability in the areas that are tangential to an intended change/experiment.  However unknown unknowns will always exist, so it is important to counter-balance treading too lightly with tools and practices that allow inspecting and adapting, backing out and fixing forward for the things we will inevitably fail to predict.  This allows you to lean in with some justifiable confidence, but also not let perfection be the enemy of progress.

A number of base practices enable this at the individual and team level across the areas of application, infrastructure, and releases/testing:

  1. Business Rules are codified as source code.  Source code is checked in to version control for collaboration and to enable rollback to prior state.  Good commit and PR comments facilitate learning as a team while the system evolves.   Version Control.
  2. Source code has a reasonably balanced cognitive load requirement for the domain, enabling modifications by team members with varying familiarity and experience levels.   Prudent investments in automated tests around the most impactful application logic enable its [relatively] safe modification.  Software Architecture, Clean Code, and automated tests.
  3. Application builds are reproducible.  The scripted processes which build an application run on any machine, consistently.  That build process is checked in to version control.  Reproducible builds, Version Control.
  4. Application code builds continuously.  Local feedback cycles are quick, small change-sets are pushed to version control, and everything that is committed to version control is built and tested/checked with automation.  That automation protects shared environments and sources of truth such as master/release branches.  Continuous Integration
  5. Application configuration for each environment is in version control or an auditable configuration management tool.  Secrets are secured appropriately.  When it comes to the question of how something is configured differently in one environment vs another, nothing is left to the imagination.  The whole team has a mechanism to inspect.  Team ownership of deployment, DevOps.
  6. Deployment to the highest promotion environment possible is done automatically.      That's possible as a result of following the rest of these practices.    Continuous Delivery
  7. Application changes are rolled out based on rules (i.e. - Canary / AB test) which enable safety around unknown unknowns.  Useful in lower environments, but especially for production because it is unique no matter how much insurance you attempt to put in place.   Progressive Delivery and test in production
  8. There are executable scripted processes to build the infrastructure an application requires.  Again, nothing is left to the imagination.  Everything about how to make the whole thing work is codified and checked in to version control.  Particularly critical here, having this knowledge in code enables building ephemeral 'clone'  environments to vet changes before they are promoted.  Infrastructure as Code
  9. Manual modifications to Infrastructure are actively prevented or discarded.    To modify Infrastructure in a manner that will stick requires a code change in source control.  In conjunction with the other practices listed, this forces knowledge that might otherwise be hidden, to be committed to source control.  Immutable Infrastructure: faster to scale, more secure, predictable, and understood.
  10. Upgrades and changes to infrastructure and server configuration can be vetted and released without significant downtime or maintenance windows.  Scheduling a maintenance window or having to schedule around low-traffic periods hinders a feedback loop.   High Availability is enabled with load balancer rules capable of shifting traffic to new/modified servers, either rolling through them or spinning up dark clusters (i.e. - red/black), vetting them and cutting over as they prove they are behaving as expected.
  11. Source control is the source of truth for every change from Infrastructure to app code/config and is automatically applied from master or per environment branches.  These branches are protected from bad changes by PR checks which prevent merges, often vetting things in ephemeral environments.  GitOps
  12. Ability to gather detailed information around the current and recent state of production via self service tools for: logging, metrics, traces.  Observability, DevOps.
]]>
<![CDATA[Goodbye Broken links: Ghost + Muffet + Github Actions]]>How big of a letdown is it when you are reading a web page, find something interesting enough to click on and are subsequently dropped down the 404 Not Found hole?

Often the maintainer of a site does not even know what is broken – especially for content oriented sites

]]>
https://mattorb.com/broken-links-muffet-github-actions/5dba5703e90de0789070792dMon, 04 Nov 2019 14:36:06 GMT

How big of a letdown is it when you are reading a web page, find something interesting enough to click on and are subsequently dropped down the 404 Not Found hole?

Often the maintainer of a site does not even know what is broken – especially for content oriented sites which have a healthy amount of outbound links.

Much like broken and flaky tests, compiler warnings, test coverage, code quality, and consistent configuration+tooling across promotion environments, the sooner you establish [Picard voice] "the line must be drawn here", the better off you are. Once you have a set standard of what is acceptable and good visibility into what is crossing that line, you have a fighting chance of understanding where to invest to maintain that standard or improve it.

For a blog like this one, hyperlinks can break a few different ways. Internal links can be broken from the start due to unforced errors (i.e. - typos) or software upgrades. Links going out to other sites can drift into a broken state due to an external site making changes or being taken offline.

Muffet

I wanted to see how this site was doing and had recently stumbled on Muffet, a supa-fast, open source, Go-based broken link checker.

The first time I ran Muffet, I discovered an embarrassingly long list of broken links here.

Here are some choice examples from a trimmed down version of that first run:

$ muffet https://mattorb.com

96
https://mattorb.com/swift-2-to-5/amp/
97
	404	https://mattorb.com/swift-2-to-5/swift%20half%20open%20range
103
https://mattorb.com/fuzzy-find-github-repository/
104
	404	https://mattorb.com/fuzzy-find-github-repository/GitHubAPIv3%7CGitHubDeveloperGuide
105
	404	https://mattorb.com/fuzzy-find-github-repository/github.com/shurcooL/githubv4

$ 
How to read this output: By default, Muffet only puts broken links in the output. Hierarchy is expressed through indentation: so #97 above is a link that was walked while parsing the page at #96. The unindented number is a counter of links checked, and the indented numbers are HTTP return codes (404 = not found).
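That format is easy to post-process. A small Python sketch that pulls just the (status, url) pairs out of output shaped like the sample above (the parsing rules here are inferred from this sample, not from Muffet's documentation):

```python
def broken_links(muffet_output: str):
    """Yield (status, url) pairs from tab-indented Muffet result lines."""
    for line in muffet_output.splitlines():
        if line.startswith("\t"):  # indented => a result under the page above it
            status, url = line.strip().split("\t", 1)
            yield status, url
```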

All of the link issues above were unforced errors as far as I can tell. Additionally, the stuff I trimmed out included errors for images that had gone missing and links to external sites that were no longer valid.

Awesome! We now have a way to assess the whole site for broken links. The problem is we just found a whole bunch of broken things all at once, which means a whole bunch of work to fix them.

Prefer small fixes right away

Next time a link breaks, I want to be fixing just that one thing and be done -- rather than looking at a large pile of issues that have accumulated over a longer period of time.

Ideally, I want broken links assessed:

  • Automatically, before publishing new content – to catch unforced errors before they go live
  • Automatically, on configuration changes and software upgrades – to catch unexpected interactions of new software and existing content
  • Automatically, on a schedule – to catch drift in the health of links to external party sites
  • On demand, to confirm I have fixed issues after making changes

A place to trigger the link checker manually or programmatically, record the results, and notify me when things break would hit all my needs.

After my other recent experiment with a Github Action, that seemed like a good candidate.

A Github Action for Muffet

Always Google first, to see if someone else has already done similar work.

I found an archived Github repo from peaceiris that had a Muffet Github action. I have no idea why they archived it, but it seems to work fine, so I forked it to keep a copy.

To use that action from our project [checked in to a Github repo], I added a workflow at .github/workflows/checklinks.yml:

name: checklinks

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Check links on site
      uses: mattorb/actions-muffet@1.3.1
      with: 
        args: >
          --timeout 20 
          https://mattorb.com

This triggers Muffet to check for broken links on every push. It builds the needed action via a Docker build of the Github repo mattorb/actions-muffet, tag 1.3.1.

As noted earlier, sometimes external links go bad due to changes outside our awareness, so below we add a schedule stanza to trigger this check regularly as well:

name: checklinks

on: 
  push:
    branches:
    - master
  schedule:
    - cron:  '0 13 * * 6'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Check links on site
      uses: mattorb/actions-muffet@1.3.1
      with: 
        args: >
          --timeout 20 
          https://mattorb.com

Now, in addition to executing on pushes to master, that cron schedule sets this check to happen automatically once a week. ('0 13 * * 6' == 1pm UTC on Saturdays)

When it fails, Github sends you an e-mail (with default notification settings).

Also, this handy badge can be placed at the top of README.md in the git repo, or on the site itself:

[Image: checklinks workflow status badge]

That badge is live, so hopefully it reads 'passing' when you are reading this article! For how to make one, see the Github docs.

At this point, we have a Github Action in place that will be kicked off for a few scenarios:

  • Manually triggered via the Github web UI
  • Automatically triggered via a push to the git repo's master branch
  • Automatically triggered once a week
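For the on-demand case, a workflow_dispatch trigger can be added to the same on: stanza. This is a sketch, not part of the original workflow, and assumes a reasonably current GitHub Actions feature set:

```yaml
name: checklinks

on:
  push:
    branches:
    - master
  schedule:
    - cron: '0 13 * * 6'
  workflow_dispatch:    # enables a manual "Run workflow" button in the Github web UI
```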

Going Further

There is one GitHub Action trigger I'm particularly interested in, since my blog workflow has not moved over to a static generation approach yet: repository_dispatch. It is still in developer preview but offers a way to trigger a Github Action workflow based on an external event.

Ghost has webhooks that can be triggered for various types of modifications:

[Screenshot: Ghost webhook event options]

Tying one or more of those to the Github Action via a repository_dispatch event will require building something to either receive the Ghost JSON webhook payload and post the Github-expected JSON payload, or extend Ghost itself with a custom webhook integration for repository dispatch – a small future project.
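Whatever does the bridging, the contract on the Actions side is small: the external service POSTs JSON like {"event_type": "post_published"} to the repository's /dispatches API endpoint, and the workflow subscribes to that event type. A sketch of the subscription (the event type name here is illustrative):

```yaml
name: checklinks

on:
  repository_dispatch:
    types: [post_published]    # matches the event_type sent to the /dispatches API
```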

UPDATE: here is a quick stab at that in Go. I point Ghost webhooks at it for the 'New post published' and 'Published post updated' events to trigger broken link checking on those two events. It is a bit naive in that it scans the whole site every time a new post is published.

]]>
<![CDATA[Site Reliability Engineering Book Trio]]>What works for Google, what works for Facebook, and what works for Netflix may not be the right thing for the rest of us.   Putting too much weight behind the opinions of a few large organizations can bite you.  The same is true of charting a path forward

]]>
https://mattorb.com/site-reliability-engineering-book-trio/5bc733317b071f1e05d0025cMon, 16 Sep 2019 13:37:40 GMT

What works for Google, what works for Facebook, and what works for Netflix may not be the right thing for the rest of us.   Putting too much weight behind the opinions of a few large organizations can bite you.  The same is true of charting a path forward based on the experience of a few individuals, without being aware of the broader landscape.  This is why I'm a huge fan of studies that are more broadly based, like Accelerate and the accompanying yearly State of DevOps reports.  Do what works in your context, but stay informed of what is working well for others, to make better and better choices as you go.

One particular area that has been getting more refined is Site Reliability Engineering. There are three great books I read over the last year that provide a peek into some experiences and experiments.

This trio of books is a treasure trove of ideas, techniques, practices, and organizational approaches for improving both the delivery of value to production, and how the teams around that are organized:

  1. Site Reliability Engineering: How Google Runs Production Systems
  2. The Site Reliability Engineering Workbook: Practical Ways to Implement SRE
  3. Seeking SRE: Conversations About Running Production Systems at Scale

SRE is all about applying a software development mindset to infrastructure and operations.

I particularly enjoyed 'Seeking SRE' which is a series of essays.  Each chapter stands on its own, and several are based on years of history and experience reports at well known companies.

]]>
<![CDATA[CI your MacOS dotfiles with GitHub Actions!]]>Dependencies that you can't or won't pin versions of in a reliable persistent cache, have the potential for drift. Such is the life of my dotfiles repo. I don't want to pin most things there. It is an outlier relative to what you might

]]>
https://mattorb.com/ci-your-dotfiles-with-github-actions/5d779ec57b071f1e05d004aeWed, 11 Sep 2019 13:47:19 GMT

Dependencies that you can't or won't pin versions of in a reliable persistent cache have the potential for drift. Such is the life of my dotfiles repo. I don't want to pin most things there. It is an outlier relative to what you might normally set up to produce the most consistent, reliable build process. For dotfiles, I want to lean way in toward making sure they stay current, accepting the occasional breakage in exchange for that posture.

One of the challenges of making sure a dotfiles repo continues to function correctly on a clean machine is that running through them for the first time, in a clean environment, is a rare activity. It generally only happens when I am configuring a new laptop or others are installing it for the first time.

If I could do a clean install on a more regular basis, it would decrease the chance that little breakages are piling up over time. This is where a most basic form of continuous integration comes in.

I am starting with a sanity test . . .

Can the install script execute, start to finish, without error?

I have already seen this break when a dependency is renamed or when I introduce a subtle syntax error that someone finds in a fork when they try to run everything clean for the first time. I want to catch those issues as quickly as possible.

GitHub Actions for MacOS dotfiles CI

When perusing the GitHub Actions documentation, I noticed they included the ability to run an action on a MacOS VM!

wat.

For free.

wat.

For open source repositories.

oh, yah ok.

Still . . . In case you are not aware, getting a hosted MacOS VM, while possible, is generally not cheap relative to other server options. The fact that Github is offering to run Mac workloads for free for open source projects is a cause for celebration.

Ok. Let's get to the meat of it: GitHub Actions are behaviors you define in a yaml file checked in to the repository itself.

To get started I defined a 'Smoke Test CI' workflow in .github/workflows/smoke.yml in my dotfiles repo:

name: Smoke Test CI

on: [push]

jobs:
  build:

    runs-on: macOS-latest
    
    steps:
    - uses: actions/checkout@v1
    - name: Execute full install
      run:  ./setup.sh

Every single push to the dotfiles repo provisions a temporary MacOS VM, checks out the code with git, and executes the full install of the dotfiles (via ./setup.sh).

The results for the builds that are triggered are in the Actions tab for that Repository on Github:

[Screenshot: Actions tab showing triggered builds]

and you can drill in to see the results of each step and the logs:

[Screenshot: results and logs for each step]

That's it!

Getting this set up has already caught some bugs.**

**One caveat: For a small handful of things, I had to detect that I was running inside a GitHub action and skip over them, because they required privileges that were locked down or not available on that VM. Helpfully, Github provides some default environment variables in the VM when running inside a Github Action. I used those as a clue to know when to skip.
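A sketch of what that detection can look like (the function name and messages are illustrative; GITHUB_ACTIONS is one of those default environment variables, set to "true" during a workflow run):

```shell
# Skip privileged steps when running inside a Github Action VM.
in_github_action() {
  [ "${GITHUB_ACTIONS:-}" = "true" ]
}

if in_github_action; then
  mode="ci"
  echo "Inside a Github Action; skipping privileged setup steps."
else
  mode="full"
  echo "Full local install, including privileged steps."
fi
```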

]]>
<![CDATA[Level up keyboard shortcuts - Part 2. Hammerspoon and the Home Row]]>After my initial experiments using Karabiner to bind a keyboard shortcut to an image-based cheatsheet and leverage a hyper key to avoid application level key binding conflicts and finger twister, I was super excited.  I had combed through many blogs and git repositories.

Having an extra meta key with

]]>
https://mattorb.com/level-up-shortcuts-hammerspoon-home-row/5d4acfa47b071f1e05d0044cWed, 04 Sep 2019 13:44:44 GMT

After my initial experiments using Karabiner to bind a keyboard shortcut to an image-based cheatsheet and leverage a hyper key to avoid application level key binding conflicts and finger twister, I was super excited.  I had combed through many blogs and git repositories.

Having an extra meta key with a whole slew of nonconflicting keybinding slots seemed awesome. I like keyboard shortcuts bound to keys based on a mnemonic, so binding my global Spectacle keyboard shortcut cheatsheet to hyper+k ['k' for keys] seemed like a good, quick win that would help me memorize those Spectacle shortcuts.

Incremental and Safe . . . time to test it out!  👍

The Home Row

Then, mid-way through patting myself on the back, I discovered global vi mode with a hyper key and Karabiner.  With this Karabiner recipe enabled, you press a hyper key and then h/j/k/l to move the cursor left/down/up/right.  Additionally, tab can be rigged up to act as a modifier in conjunction with the hyper key to enable quick access to home/end/pgdn/pgup – which is especially awesome when you are on a laptop keyboard.  

In order to try out global vi mode w/Karabiner, I had to sacrifice hyper+k from the Spectacle cheatsheet and switch that to be right-option+k.

Hey, another key I never use.  right-option.  hyper2?

Fast forward a bit.  I dug deeper and deeper into the Hammerspoon ecosystem, looked at more custom Karabiner and Hammerspoon settings, and heard some feedback from my first post.

Hammerspoon is an application you run in the background that loads custom Lua scripts to interact with your system, allowing you to script behaviors to react to system events.   One kind of system event is a key press.  For our purposes here, we're exploring Hammerspoon primarily in the context of using it to react to keyboard shortcuts and trigger something in response.  It has many other uses though.   Keep in mind we're talking about system-wide key bindings here – not something isolated to a single application.  🤯
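To make that concrete, here is a minimal, illustrative init.lua fragment using hs.hotkey.bind and hs.alert.show (real Hammerspoon APIs; the key combination and alert text are just placeholders):

```lua
-- ~/.hammerspoon/init.lua (sketch)
-- React to a system-wide key press: ctrl+alt+cmd+shift+h pops an on-screen alert.
hs.hotkey.bind({ "ctrl", "alt", "cmd", "shift" }, "h", function()
  hs.alert.show("Hello from Hammerspoon")  -- any Lua could run here instead
end)
```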

A unit of re-usable scripting in Hammerspoon is called a 'spoon'.  

After trying out many different spoons and customizations, two projects really caught my eye:

  1. FryJay's MenuHammer - "A Spacemacs inspired menu system".   A quick, customizable, nestable, menu system you can access via a system-wide keyboard shortcut.
  2. Jason Rudolph's Karabiner+Hammerspoon setup which focuses on home-row centric shortcuts for cursor navigation and window management (ht: @gregvaughn)

Thinking about these in terms of the capabilities they enable relative to the caps-lock hyper key and Spectacle cheat sheet via a shortcut that we put together in the previous post, it seems there is a lot of potential.

Let's explore.

Key Chords

Hammerspoon can enable key chords*.

Karabiner can enable key chords too, but I found the nuances of getting it implemented well in Karabiner, for alpha keys, with the current version, to be painful. **  

Wait wait . . . What is a key chord you say?  

So, JR's aforementioned setup has a 'super duper' behavior where pressing the 's' and 'd' keys at the same time acts almost like another new modifier key. waaaaaaaat? That new modifier is then leveraged for some home-row centric key bindings that can substitute for moving your right hand or stretching the right pinky down to the arrow keys.

Here's how it works: while 's' and 'd' are pressed and held with the left hand, you use the right hand to press 'h' to move the cursor left. 'h' for left, 'j' for down, 'k' for up, 'l' for right. You may recognize these commonly used vi cursor movement keys. With the left hand in place holding down 's' and 'd', you can press 'a' to add the option/alt modifier, 'f' to add the cmd modifier, or spacebar to add the shift modifier. Those keys all lie right under the fingertips of where the left hand is already positioned, minimizing the needed movement.

I thought it would be nice to form the muscle memory for those h/j/k/l vi cursor movement keys as a side benefit for when I might occasionally run vi on a remote server.  Mostly though, I'm thinking about not having to pinky stretch down to those tiny cursor keys or reposition the whole right hand each time I want to navigate the cursor from a laptop keyboard.

What really pushed me over the line of I have to try this now was seeing those same home-row keys used for cursor movement -and- window positioning in a consistent way.

Modes and Menus

Where key chords are generally more temporary states, enabling a behavior only if all the right keys are pressed in close proximity spatially and temporally, a 'mode' lets you toggle into a state where keys react differently until it is dismissed. Menus are similar and dismiss automatically upon the selection of an action.

So in the context of JR's setup, when you hit control-S, you enable window layout mode. Then, you press one of the h/j/k/l/i/o/,/. keys while focused on a particular window, and it will resize and move the window to left/down/up/right/top-left/top-right/bottom-left/bottom-right. This re-uses some familiar home row mappings (i.e. - 'h' = left whether it is for cursor movement or shifting a window to the left side of the screen). A few other window resize/move keys are added (i/o/,/.) in a place that makes sense spatially for shifting windows to corners. For example, out of the i/o/,/. keys, 'i' is the top-left key on the keyboard and it arranges the window to the top-left position. After you make a selection, the window layout mode dismisses itself.

There is a 'showHelp' flag built in to the Lua script that drives window layout mode. When you set that flag to true, a built-in cheatsheet displays when in window layout mode. Note: I re-bound this mode to hyper-W to preserve ctrl-S for application-level bindings.

[Image: The built-in 'cheatsheet' for window layout mode via JR's hammerspoon setup (set showHelp=true)]

It's worth noting that since I have had this working, I've been using Spectacle less and less.  At the moment, it does not have feature parity with Spectacle, but it does seem possible to get there. 🤔

Another variation of the idea of a 'mode' is a shortcut driven menu.   Once activated, you get a list of options, along with the keys to quickly navigate and select from those options.  

This is where the Menuhammer project comes in. Menuhammer allows you to set up a totally customizable menu you can access system-wide via shortcut. From that menu, you can trigger any action Hammerspoon can initiate – launching applications, macros, running scripts, laying out multiple windows, etc.

[Image: My current Menuhammer system-wide menu (bound to Hyper-Space)]

It looks like a great place to house things that are launched less frequently, like occasional-use macros or toggling Wi-fi on and off. I'm thinking this is a good landing spot for things which either (1) are not used frequently enough to give up an immediate-action keybinding slot or (2) are not used frequently enough to be worth memorizing. I bound it to hyper-space to try it out.

After using it for a while, I found it useful to bind the application submenu to Hyper-A and the Finder submenu to Hyper-F to jump directly to those as needed.  

[Image: Application submenu, bound to hyper-A]
[Image: Finder submenu, bound to hyper-F]

Note the really cool feature of those hyper-F menu items: several of them launch Finder -and- send specific keys to it, immediately selecting the 'Downloads' directory, for instance.

I'm still experimenting with what feels most comfortable for applications I use regularly vs less often . . . especially in regards to putting applications on the shortlist in MenuHammer vs binding a key to launch them directly via hammerspoon.

Hyper key -and- Super Duper key chord?

Given what we did in part 1 of this series, where caps lock became a hyper key, with tab acting as a modifier when held, how does super duper mode compare to that, and can these two worlds live together? If not, which one is more comfortable and effective?

With very few changes, I was able to fully enable Super Duper mode, and keep in place many of the caps-lock based hyper key things I already had.  This is for both cursor movement and window management (Spectacle vs Hammerspoon scripted).  This allowed me to compare the comfort level of each to see which one feels right.  After using super duper mode for a bit, I'm finding it a lot more comfortable, and have swapped out caps lock to be a ctrl key on hold instead of being my hyper key which maps to ctrl-alt-shift-cmd.   I have moved that hyper key down to the right-command key position, which I'm finding convenient.  

I'm also experimenting with an 'a'h 'f'udge 😜 mode which maps home/pgdn/pgup/end keys to h/j/k/l when the a+f keychord is held. As always, all my current setup can be found in my dotfiles repo.

* This is what I like to call them at the moment.

** Karabiner's key chord pains: dupe keys, missed key presses, missed dropping and enabling of chord state. I tried many solutions, including stuff currently marked as 'working' in Karabiner recipes. All had problems with either messing up my normal typing of words or not working consistently enough.

]]>
<![CDATA[Level Up Shortcuts And The Hyper Key - Part 1]]>Even though typing speed is the least of a developer's bottlenecks, there are two particular speed bumps that can disrupt your flow when you are blazing a trail of fire, thinking and typing your way through solving a problem:

  1. Tapping the breaks to shift your hand to use
]]>
https://mattorb.com/level-up-shortcuts-and-the-hyper-key/5d35c0577b071f1e05d00426Thu, 08 Aug 2019 13:00:00 GMT

Even though typing speed is the least of a developer's bottlenecks, there are two particular speed bumps that can disrupt your flow when you are blazing a trail of fire, thinking and typing your way through solving a problem:

  1. Tapping the brakes to shift your hand to use the mouse.
  2. Pressing a whole lot of keys to do something that you know could be done with fewer keypresses.

It feels like everrrryyyythhhinnnggg sloowwwwws downnnn in those moments.

For that reason, lately I've been looking for ways to better enable memorizing useful keyboard shortcuts. For me, the best way to learn new keyboard shortcuts is to start using them. To start using them, I need a way to quickly reference a list of the ones I'm interested in.

I ruled out paper/printed cheatsheets because I wanted something that travels with me, only requiring the footprint of my laptop. I don't like mousing up to a menubar item or having to fall back to the mouse to open a [reference] file either: too much break in flow.

Commandline keyboard shortcuts + Fish

I do like the idea of a key you can press that pops up a reference list of other keyboard shortcuts. Various developer IDEs have keyboard shortcuts that do this, and they often enable searching lists of available commands. Cmd-shift-p in VS Code and cmd-shift-a in JetBrains' IDEs are two examples. They are typically dealing with a very large list of potential actions/bindings, which is a little different than my case.

There are a handful of shortcuts I'd like to get better at on the commandline. Some of them are defaults that come with fish shell. Others are custom behaviors or things I have added. For my purposes, printing out a list of available bindings in the window seems sufficient. So, I set up Alt-K to print out a list of interesting hotkeys that I would like to get better at and internalize:

[Image: Alt-K for in-line commandline help, retaining cursor position]

One of the cool things about this is that you can also instruct the fish shell to repaint the existing commandline after sending something to stdout, retaining the original cursor position. You can be in the midst of a carefully assembled commandline instruction, think "Hey, what is that key that would help me quickly navigate the cursor over to spot X?" or "What is that key to recursively search a directory under my cursor for a filename to pop it in right here?", access the reference list of keys, find the key you need, and use it, all without losing your spot!
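Here is roughly what that binding looks like in fish config. A sketch: the function name is made up and the listed keys are just a couple of fish defaults, but commandline -f repaint is the real piece that redraws the prompt while keeping your cursor position:

```fish
# ~/.config/fish/config.fish (sketch)
function show_hotkeys
    echo
    echo 'alt-e   edit the current commandline in $EDITOR'
    echo 'alt-l   list contents of the directory under the cursor'
    commandline -f repaint    # redraw the prompt, keeping cursor position
end

bind \ek show_hotkeys    # Alt-K
```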

I'm hoping this will encourage looking up the shortcuts I would like to get better at, since I won't have to break out of the window or the line I am currently modifying to reference the list. Hopefully, the shortcuts become rote as I memorize them.

After realizing the power of having this type of thing at one's [literally] fingertips, you tend to want similar functionality in more places...

Spectacle + Karabiner + Quicklook

I have been trying to use Spectacle (update: 2025, spectacle looks defunct – website gone, so I am removing the link which is now squatted) for OS X window management. There are several keyboard shortcuts to learn to effectively use it. These are global keyboard shortcuts that are active across all applications on OS X. How can I get better at these with as little friction as possible?

While there is a really interesting CheatSheet application for OS X which shows a combination of foreground application and OS level keyboard shortcuts, it doesn't show the 18 system-wide keyboard shortcuts enabled by something like Spectacle.

I really like the idea of an easy to remember global keyboard shortcut that can pop up a cheat sheet of those Spectacle system-wide shortcuts I'm trying to learn.

One low hanging option is capturing an image of the Spectacle shortcut list and then making a keyboard shortcut show that image.

Leveraging Quicklook via commandline:

$ qlmanage -p filename.png

. . . can be used to trigger the OS X preview of our cheatsheet image for Spectacle:

[Image: Spectacle cheatsheet image]

I just needed a way to bind that to a keypress.

The range of options for binding keyboard shortcuts to shell commands is a spectrum of everything from elaborate tools like Alfred, to varied quick-launcher tools, to an open source project for keybinding, to writing quick actions in Automator and using some built-in OS X key binding features.

None of these were quite what I was looking for. I wanted something that I could easily codify into some scripts in my version controlled dotfiles repository. During the search of options I spotted something really interesting though.

One option that rose to the top was Karabiner Elements. It does some other exotic things I did not even know were possible.

The series of steps to bind our quicklook command to a keyboard shortcut, using Karabiner is:

Step 1: Install Karabiner Elements via homebrew

$ brew cask install karabiner-elements

Step 2: Define a relevant key binding snippet in Karabiner json config:

{
    "description": "Ctrl-K for spectacle cheatsheet",
    "from": {
        "key_code": "k",
        "modifiers": {
            "mandatory": [
                "left_control"
            ]
        }
    },
    "to": [
        {
            "shell_command": "qlmanage -p ~/cheatsheets/spectacle.png 1>/dev/null 2>/dev/null"
        }
    ],
    "type": "basic"
}

Step 3: Hit Ctrl-K to pop up the custom cheatsheet image!

Wait a minute . . . Ctrl-K. Some applications use that already for other things, right?

Perhaps the more exotic uses of Karabiner are handy now.

The Hyper Key

What if an application has already bound Ctrl-K for another purpose? For example, our console binds Ctrl-K to delete to end of the current line. That would conflict with the global Ctrl-K shortcut for a Spectacle cheatsheet that we enabled above.

One of the interesting ideas people are using Karabiner Elements for is to create a 'Hyper Key'. Imagine you press caps-lock and the system thinks you just did a simultaneous press of Ctrl+Option+Cmd+Shift. It is rare that any application would establish a keyboard shortcut requiring all of those modifier keys, so that combination plus any other key on the keyboard is not likely to have keybinding conflicts with anything. It's a clean slate of keyboard shortcut slots to define your own custom keybindings without having to be concerned about conflicts!

Dear Caps lock: I AM NOT GOING TO MISS YOU!

Let's define a Hyper Key in Karabiner config:

{
    "description": "Change caps_lock to command+control+option+shift.",
    "from": {
        "key_code": "caps_lock",
        "modifiers": {
            "optional": [
                "any"
            ]
        }
    },
    "to": [
        {
            "key_code": "left_shift",
            "modifiers": [
                "left_command",
                "left_control",
                "left_option"
            ]
        }
    ],
    "type": "basic"
}

⬆️ relevant snippet from a full Karabiner json config

Schweet! Now caps lock is our Hyper Key and acts like a super duper modifier – some sort of alien hand that can twist itself into pressing ctrl, option, cmd, and shift all at once.

You can confirm that key remapping is working as expected by launching the Karabiner-EventViewer app that was installed with Karabiner or turning on something like Keycastr, and then pressing caps lock and viewing the result.

Now, let's adjust our Spectacle cheatsheet to use the Hyper Key:

{
    "description": "Hyper-K for spectacle cheatsheet (Hyper == capslock)",
    "from": {
        "key_code": "k",
        "modifiers": {
            "mandatory": [
                "left_gui",
                "left_control",
                "left_alt",
                "left_shift"
            ]
        }
    },
    "to": [
        {
            "shell_command": "qlmanage -p ~/cheatsheets/spectacle.png 1>/dev/null 2>/dev/null"
        }
    ],
    "type": "basic"
}

(left_gui is the command key on OS X)

Awesome. Now we can press capslock and 'k' together to pop up our global cheatsheet for Spectacle, and not worry about key binding conflicts with other applications. That seems like a good first cut at something to help have quicker access to learn those shortcuts. Now I just have to remember Caps-K to access the key list reference.

Going Further

There are a number of other things people are using Karabiner for. Take a look here for examples. One I immediately jumped at was the chance to try tapping left and right shift to jump words left and right.

This is the tip of the iceberg with Karabiner and led me to discover another tool that is brought up a lot in this context: Hammerspoon. It has a number of interesting uses which have some overlap with my cheatsheet and window management needs. It's a little bit more scripting centric, being driven by Lua scripts. Someone has even implemented Spectacle in Hammerspoon scripts. waaaaaat.

]]>