Chris Arcand - https://chrisarcand.com/

Full Breadth Developers
2025-07-07 - https://chrisarcand.com/full-breadth-developers

A post by Justin Searls on what he calls “full-breadth developers” - those with both technical and product capabilities - and how they are becoming the clear winners in the AI era, as they can effectively leverage generative AI tools to rapidly deliver complete solutions without the communication overhead of traditional role segregation.

As usual, Justin writes as if he’s in some sort of weird mind meld with me, knowing my own thoughts better than I do and articulating them with a clarity and honesty that I’ve admired and emulated over the course of my own career - and have benefitted from enormously, while perhaps only doing it half as well.

Bits that especially resonated with me include a sense of balance between the hype and the cynics that I’ve said is downright surreal to be living through:

A lot of developers are feeling scared and hopeless about the changes being wrought by all this. Yes, AI is being used as an excuse by executives to lay people off and pad their margins. Yes, how foundation models were trained was unethical and probably also illegal. Yes, hustle bros are running around making bullshit claims. Yes, almost every party involved has a reason to make exaggerated claims about AI.

All of that can be true, and it still doesn’t matter.

And I realized that developers I know who’ve embraced AI tend to be more creative, more results-oriented, and have good product taste. Meanwhile, AI dissenters are more likely to code for the sake of coding, expect to be handed crystal-clear requirements, or otherwise want the job to conform to a routine 9-to-5 grind. The former group feels unchained by these tools, whereas the latter group just as often feels threatened by them.

When I take stock of who is thriving and who is struggling right now, a person’s willingness to play both sides of the ball has been the best predictor for success.

As we say goodbye to an era, I’m nervous. And that’s crazy, because I simultaneously understand that I’m personally in one of the best positions to thrive. Never have I ever been more grateful for the many years I put into having a go at being an orchestral musician before I ever executed a single line of code, nor the years after that I spent deep diving into the ‘craft’ of software, when that’s what it was. Never have I been more grateful for years of deep diving on extremely technical bits while also spending years of essentially being a product manager.

I can’t fathom trying to be a junior developer right now. I’m sure it will be fine, folks will adapt, but man I just personally do not have the answers for how to truly grok software in this gross climate of mega ‘productivity’ and delivering extremely quick results with more tooling and less of everything else. My philosophy on learning software early on in my career was spending enormous amounts of time truly digging deep, spending untold amounts of time in a debugger and reading source. Often, this would start with some incredibly silly curiosity that I would just not let go of until I figured it out, learning many bizarre and esoteric things along the way. Is that still valid anymore? Maybe. Maybe not, though. Most likely, as with most things in life, it’s not black and white and the answer is actually “Yes, but also no.” And I think Justin agrees:

Part of me is already mourning the end of the previous era. Some topics I spent years blogging, speaking, and building tools around are no longer relevant. Others that I’ve been harping on for years—obsessively-structured code organization and ruthlessly-consistent design patterns—are suddenly more valuable than ever. I’m still sorting out what’s worth holding onto and what I should put back on the shelf.

I’m still doing this mental sorting as well. If you’re a senior in the industry, doing that sorting for yourself is both an urgent and important task. For example, mastering the intricacies of your text editor (AI agents already code faster than you can think) or knowing every detail of your exact programming language (what Ruby’s $SAFE variable does is not a thing worth retaining, probably) are probably headed to the mental wastebin. Conversely, maintaining balance and being self-aware enough to avoid becoming an expert beginner are more important than ever.


Full-Breadth Developers by Justin Searls


Zero Trust: Securely Accessing Home Assistant with Cloudflare Tunnels
2025-03-24 - https://chrisarcand.com/zero-trust-securely-accessing-home-assistant-with-cloudflare-tunnels

I run home automation with Home Assistant on a Raspberry Pi, and I’d always been a little uneasy about exposing my home network to the web for remote access. I’d set up SSL encryption and all that, but still felt uncomfortable having an open port in my router and having to manage updates and security for such an important and private service.

I recently learned about Cloudflare Tunnel, part of Cloudflare’s Zero Trust offerings. Cloudflare Tunnel provides a secure way to connect your resources to Cloudflare without requiring a publicly routable IP address. Instead of sending traffic to an external IP and opening ports in your firewall, a lightweight daemon in your infrastructure (cloudflared) creates outbound-only connections to Cloudflare’s global network. This is essentially an agent-initiated reverse tunneling service.

Cloudflare Tunnel diagram

The service can connect HTTP web servers, SSH servers, remote desktops, and various other protocols safely to Cloudflare’s edge. All you need is a domain handled by Cloudflare.

When I learned about this technology, it immediately caught my attention as a way better solution for securely accessing my Home Assistant setup. I also immediately discovered that this idea is far from novel and that there’s an extremely awesome Home Assistant add-on by Tobias Brenner that makes this setup a breeze.

With the add-on, the changes necessary to get this running were surprisingly minimal, since I already used Cloudflare for my domains anyway. I spent more time cleaning up my old SSL certificate and renewal setup than I did on this solution.

Cloudflare-side, it’s essentially just creating the tunnel in Cloudflare and generating a secret token to provide to the add-on:

Cloudflare Tunnels UI

Back in Home Assistant, you just need to allow requests from the Cloudflared add-on, which runs in a Docker container:

http:
  use_x_forwarded_for: true
  # The Cloudflared add-on runs locally, so HA has to trust the Docker network it runs on.
  trusted_proxies:
    - 172.30.33.0/24
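
Under the hood, the add-on essentially just runs cloudflared against the tunnel you created. For reference, a hand-rolled named-tunnel configuration (without the add-on) looks roughly like this - the hostname and file paths are placeholders, not from my setup:

```yaml
# /etc/cloudflared/config.yml (illustrative; the add-on manages this for you)
tunnel: home-assistant
credentials-file: /etc/cloudflared/home-assistant.json
ingress:
  # Route the public hostname to the local Home Assistant instance
  - hostname: ha.example.com
    service: http://localhost:8123
  # cloudflared requires a catch-all rule as the final ingress entry
  - service: http_status:404
```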

With this configuration, I then closed all previously forwarded ports in my router. Now my home automation is accessible only through Cloudflare’s secure infrastructure while being completely sealed from direct access from the outside. After putting the cherry on top - enabling 2FA for Home Assistant - I feel much better about it all.

I’ve struggled a bit with finding a way to close off all internal access to Home Assistant (forcing it all through Cloudflare), but I’ll keep at it.

If you run a Home Assistant instance, I highly recommend this setup.


Purposeful Commits
2025-03-19 - https://chrisarcand.com/purposeful-commits

Merge commits, squash commits, rebasing, requiring a linear history…how a team chooses - or does not choose - to manage changes to their codebase is an inevitable discussion at any software company. I’ve had this conversation many times for Git, so I thought it was time to write down my own take on the subject. I’m no pedant: I don’t enforce a strict policy on work I review; I mostly just quietly structure my own work this way and am thrilled when others do the same.

Look at most Git repositories and you’ll find one of two problematic approaches to commit history:

On one end we have merge commits, which introduce a Git history on main with dozens of messy commits like “WIP”, “fix tests”, and “oops forgot this file”.

Diagram: a feature-branch of commits like “Initial commit”, “WIP”, “Fix failing tests”, “Oops forgot this file”, “More WIP”, and “Finally works”, joined to main at a merge point via “Merge pull request #41”

While these accurately document someone’s development journey - complete with mistakes and backtracking - they create noise that makes it difficult to understand the actual changes later. Scrolling through twenty commits to understand what should be a simple feature addition wastes everyone’s time. Seeing a change that never actually saw the light of day, because it was changed five minutes later in the same pull request, is pointless. Finding regressions and reverting them is annoyingly difficult: changes that appear to have been made long in the past may actually be just a few minutes old in a deployment. The dates and times on the commits don’t matter; the changes weren’t part of the branch until the moment of the merge commit they belong to.

Sadly, because it’s the easiest way to merge changes, and likely due to it being the default strategy for GitHub PRs, this is the most common approach while also being the messiest and least useful. In my experience, most of the time this method is used not intentionally but because of indifference.

On the other end of the spectrum, we have the single squash commit approach - clean but lacking detail. The changes from the set of commits included in a pull request are ‘squashed’ into a single commit that is actually merged to the main trunk, which discards valuable context about how the solution evolved.

Diagram: the same messy feature work (“Initial commit”, “WIP”, “Fix failing tests”, “Oops forgot this file”, “More WIP”, “Finally works”) collapsed into a single “Add user authentication feature” squash commit on main

Condensing six hours of work - refactoring an interface to make it pluggable, restructuring some tests to make them easier to add to, and adding a new feature - into a single “Add user authentication” commit tells future developers almost nothing about the decisions made along the way.

While this approach is cleaner and certainly preferable to merge commits, it’s often too clean. Unless you are always merging the tiniest of changes at a time, you’ll likely lose valuable context about the individual decisions that supported the final overarching solution, unless authors are willing to walk through everything in the squash commit’s description.

What happens more often when folks ‘use squash commits’ is the worst of both worlds: with GitHub’s automatic ‘Squash and Merge’ function, developers make 50 commits like ‘typo’ and ‘fix tests’ for a pull request, then use the ‘Squash and Merge’ button to combine them all into a single commit without amending the commit message. Git concatenates all the individual commit messages into the squash commit, so a future developer looking for context on a change finds this very helpful commit message explaining it:

commit 2b2bce0d228763880ba19cd089a166429ab5a118 (HEAD -> main, origin/main)
Author: Jonathan Smith <[email protected]>
Date:   Thu Mar 13 16:27:25 2025 -0500

    change user.go to use new interface

    Update openapi/spec/schemas/my-feature/thing.yml

    Co-authored-by: Betty Sue <[email protected]>

    refactor #execute method

    Merge branch 'main' into jonsmith/user-authentication

    oops, typo

    fix test

Sigh.

Now, if folks are disciplined enough to rewrite the squash commit message at merge time, squash commits are still pretty agreeable and add very little overhead, so they’re the minimum thing I usually actively advocate for. Great! Do that.

Ideally, I prefer something a little different that requires a bit more rigor: a middle ground between merge commits and squash commits. I’ve always just considered it ‘the right way to do it’, as I emulated it from mentors early in my career. Maybe there’s a term for it I’ve never heard; the closest I’ve found is ‘atomic commits’, but that’s not quite right.

So I guess I’ll come up with a term for it.

Purposeful Commits

Purposeful commits are a way of organizing your Git history to tell a clear, coherent story of how a codebase is evolving. Each commit should represent a logical step in the progression of your work, building on the previous one to achieve a specific goal.

“For each desired change, make the change easy (warning: this may be hard), then make the easy change.” –Kent Beck

The quote above from Kent Beck is easily my favorite quote about software. It’s amazing how often it’s applicable, and it’s not by coincidence that my preference for how to organize commits in Git history is really just an embrace of this idea. When I work on a feature or bug fix, I structure my commits to showcase this iterative approach to changes.

Purposeful commits are about breaking down a task into smaller, more manageable steps, each with a clear purpose:

  1. The first commit might refactor existing code to make it more amenable to change
  2. The second commit introduces the actual feature I wanted to add, which is now super easy to see based on the refactor in (1)

Imagine you’re extending your authentication logic beyond the basic password you have today, to accept Google OAuth as well. Good software engineering practice would be to (1) refactor things to allow for a pluggable strategy first, then (2) add the new strategy.

Instead of one massive commit with both intentions wrapped together, or twenty disorganized ones, your PR might contain a few purposeful commits:

Commit 1: “Making the change easy…”

commit 8d7f2c1e4b5a9c8d7f1e2b3a4c5d6e7f8a9b0c1d
Author: Developer <[email protected]>
Date:   Wed Mar 19 09:47:18 2025 -0700

    refactor: Refactor user model to accept multiple auth providers

diff --git a/auth/authentication.go b/auth/authentication.go
index 6e2731a..f9a8b21 100644
--- a/auth/authentication.go
+++ b/auth/authentication.go
@@ -1,9 +1,19 @@
- func Authenticate(email, password string) (*User, error) {
-   var user User
-   // Find user by email...
-   if !ValidatePassword(password, user.PasswordHash) {
-     return nil, ErrInvalidCredentials
-   }
-   return &user, nil
- }
+ type Authenticator interface {
+   Authenticate(credentials map[string]string) (*User, error)
+ }
+
+ var authenticators = map[string]Authenticator{
+   "local": &LocalAuthenticator{},
+ }
+
+ func Authenticate(provider string, credentials map[string]string) (*User, error) {
+   authenticator, exists := authenticators[provider]
+   if !exists {
+     return nil, ErrUnsupportedProvider
+   }
+   return authenticator.Authenticate(credentials)
+ }

diff --git a/auth/authenticators/local.go b/auth/authenticators/local.go
new file mode 100644
index 0000000..a2b5079
--- /dev/null
+++ b/auth/authenticators/local.go
@@ -0,0 +1,6 @@
+ // LocalAuthenticator handles password-based authentication
+ type LocalAuthenticator struct{}
+
+ func (a *LocalAuthenticator) Authenticate(credentials map[string]string) (*User, error) {
+   // Implementation of password validation
+ }

Commit 2: “Making the easy change”

commit 2fd8901ab45c38e7b2c5d7f23cfb98d21e4fa3b2
Author: Developer <[email protected]>
Date:   Wed Mar 19 10:12:45 2025 -0700

    feat: Google OAuth authentication backend

diff --git a/auth/authenticators/google.go b/auth/authenticators/google.go
new file mode 100644
index 0000000..a2b5079
--- /dev/null
+++ b/auth/authenticators/google.go
@@ -0,0 +1,6 @@
+ // GoogleAuthenticator handles OAuth authentication with Google
+ type GoogleAuthenticator struct{}
+
+ func (a *GoogleAuthenticator) Authenticate(credentials map[string]string) (*User, error) {
+   // Implementation of Google OAuth validation
+ }

diff --git a/auth/authentication.go b/auth/authentication.go
index f9a8b21..3e57f91 100644
--- a/auth/authentication.go
+++ b/auth/authentication.go
@@ -2,6 +2,7 @@

 var authenticators = map[string]Authenticator{
   "local": &LocalAuthenticator{},
+  "google": &GoogleAuthenticator{},
 }

Commit 3: Isolate an impactful broader change, with more context

commit d7bf6e05c8a93f2e1a9d4b7c6e5d4c3b2a1f0e9d
Author: Developer <[email protected]>
Date:   Wed Mar 19 10:21:36 2025 -0700

    chore: Increase default session timeout to 2 hours

    The Google OAuth token refresh mechanism requires longer sessions than
    our default 30 minute timeout allows. Without this change, users would be
    logged out while their Google token is still valid, creating a confusing
    user experience.

    We've tested this change extensively and found no security concerns when
    using tokens with proper expiration validation, which our implementation
    handles correctly.

diff --git a/config/settings.go b/config/settings.go
index 1234567..89abcde 100644
--- a/config/settings.go
+++ b/config/settings.go
@@ -12,7 +12,7 @@ var (
     // Other settings...

     // Session timeout in minutes
-    SessionTimeout = 30
+    SessionTimeout = 120

     // Other settings...
 )

You can imagine these changes being in a single pull request - the overarching goal here was to add Google OAuth. But it took three steps:

  • Commit 1: I refactored the authentication logic to allow for multiple providers. A new interface was introduced, a reviewer can easily see how it works, and most importantly I can validate and test that my refactor works before moving on to the next step. The tests at this commit should pass!
  • Commit 2: I added the Google OAuth provider. This is the actual feature I wanted to add. With the refactor in place, this was a very easy change to make. And you guessed it, the tests at this commit should pass!
  • Commit 3: Uh oh! When I was testing the Google OAuth provider, I realized that the application’s SessionTimeout was breaking things. That’s a big change that affects far more than just the Google OAuth provider. So I made that change in a separate commit with a detailed explanation of why it was changed.

Diagram: a linear history on main ending with the three feature-branch commits “refactor: Refactor user model”, “feat: Google OAuth backend”, and “chore: Increase session timeout”

Now, note a few things:

  • This isn’t any different from ‘squash merge’ in the sense that the merge strategy is exactly the same: rebase the changes on top of the main branch, then commit them. The only difference is that the commits used in the feature branch are preserved and all applied to the main branch, without squashing. ‘Squash merge’ and purposefully committing are both examples of linear histories. You can require a linear history (either method!) on GitHub, too!
  • Structuring your commits like this is more artistic than scientific. I could have made the SessionTimeout change in Commit 2, since it was ‘for the Google OAuth provider’, but at the very least I would have explained in that commit message why I was changing it, just like in Commit 3. Someone in the future wondering why we have a 120 minute session timeout would see Commit 3 above, or:

    commit d7bf6e05c8a93f2e1a9d4b7c6e5d4c3b2a1f0e9d
    Author: Developer <[email protected]>
    Date:   Wed Mar 19 10:21:36 2025 -0700
    
        feat: Google OAuth authentication backend
    
        Note that I had to change the application's SessionTimeout value because
        the Google OAuth token refresh mechanism requires longer sessions than
        our default 30 minute timeout allows. Without this change, users would be
        logged out while their Google token is still valid, creating a confusing
        user experience.
    
        We've tested this change extensively and found no security concerns when
        using tokens with proper expiration validation, which our implementation
        handles correctly.
    

Clean. Easy to follow. Tells a story. More than a monolithic squash commit, but far fewer than a pile of meaningless merge commits: you’ll find that this approach almost always yields 1-3 commits per pull request, with an occasional outlier (< 10) for a larger change.
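Mechanically, the merge itself is just a rebase plus a fast-forward. Here’s a scratch-repo sketch; the branch name and commit subjects are invented for illustration:

```shell
# Scratch-repo sketch of a linear, non-squash merge ("rebase and merge")
set -e
cd "$(mktemp -d)"
git init -q
git checkout -q -b main
git config user.name Dev && git config user.email dev@example.com
git commit -q --allow-empty -m "Initial commit"
git checkout -q -b feature-branch
git commit -q --allow-empty -m "refactor: Refactor user model to accept multiple auth providers"
git commit -q --allow-empty -m "feat: Google OAuth authentication backend"
# Replay the purposeful commits on top of main, then fast-forward main.
# History stays linear and every commit survives, unsquashed.
git rebase -q main
git checkout -q main
git merge -q --ff-only feature-branch
git log --oneline
```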

A Note on Conventional Commits

You might have noticed my use of “conventional commits” with their feat:, fix:, and other prefixes. While this format has benefits for automation tools and is compatible with purposeful commits, it addresses a different concern. Conventional commits focus on categorizing changes, while purposeful commits focus on organizing changes into a coherent narrative. Use the prefixes, or don’t! Not terribly important to the point here.

Why This Matters

Purposeful commits aren’t just about keeping your own work organized; they dramatically improve team collaboration:

  1. Easier Code Reviews: Reviewers can understand your approach by following the logical progression of commits, rather than trying to mentally dissect a large, monolithic change.

  2. Safer Merges: When conflicts arise, more granular commits make it easier to understand and resolve them correctly.

  3. Better Bisect Results: When using git bisect to track down when a bug was introduced, purposeful commits provide clearer demarcation points between functional states.

  4. Improved Documentation: Well-structured commits serve as documentation of design decisions and evolution, becoming an invaluable resource for future developers Git spelunking through the codebase years later.

  5. Better Software Thinking Habits: You’ll find that structuring your changes this way reinforces questions you should already be asking when designing the software at all: What logical steps will get me to my goal? How can I break this down into meaningful increments? What parts might need to be refactored first?
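Benefit (3) is easy to see in a toy repo: with one logical change per commit, git bisect run can pinpoint a regression automatically. The file contents and the “bug” condition below are invented for illustration:

```shell
# Scratch-repo sketch of git bisect across small, purposeful commits
set -e
cd "$(mktemp -d)"
git init -q
git config user.name Dev && git config user.email dev@example.com
for i in 1 2 3 4 5; do
  echo "$i" > value.txt
  git add value.txt
  git commit -q -m "step $i"
done
# Pretend "step 4" introduced a regression (value.txt greater than 3).
# bad = HEAD (step 5), good = HEAD~4 (step 1); bisect does the rest:
git bisect start HEAD HEAD~4
git bisect run sh -c 'test "$(cat value.txt)" -le 3'
git bisect log | tail -n 1   # the log ends with the first bad commit found
```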

Pedantry-free Zone

This is usually where some folks scoff and point at GitHub. “Everyone uses GitHub! Just use PR descriptions and GitHub comments!”

Here’s why that argument falls short: ignoring the fact that PR descriptions are sometimes just as neglected as commit messages, Git is the only persistent truth for your codebase. Everything else is temporary. When your company switches from GitHub to another platform, or from Asana to Jira, those external references become dead links. I’ve experienced this transition multiple times in my career - tools change, companies change, repositories move. When projects are open-sourced, external context disappears entirely.1

Well-structured Git history travels with your code wherever it goes.2 It’s the one context that remains accessible regardless of which hosting service or project management tool you use next.

Regardless of wherever else you record context and information, having a clean, well organized, and well annotated history in your version control software isn’t pedantry—it’s practicality.

Tools of the Trade

If you follow the purposeful commit pattern you’ll quickly find that you need to be very, very comfortable with Git history. The many revisions you make to your code both in development and in response to pull request feedback mean you need to know how to view it and how to quickly and constantly revise it.

Take our example from earlier:

$ git log --oneline --graph --all
  
  * d7bf6e0 - (feature-branch) chore: Increase default session timeout to 2 hours (Wed Mar 19 10:21:36 2025) <Developer>
  * 2fd8901 - feat: Google OAuth authentication backend (Wed Mar 19 10:12:45 2025) <Developer>
  * 8d7f2c1 - refactor: Refactor user model to accept multiple auth providers (Wed Mar 19 09:47:18 2025) <Developer>
  * 12e48b1 - (origin/main) The commit at the head of `main` (Wed Mar 19 09:00:00 2025) <Developer>

What if a reviewer asks you to change something in the refactor commit? What if you found a linter error in the Google OAuth commit? You wouldn’t want to add “oops” commits to the branch, or you’re creating noise in the history again.

You’ll want to change the refactor and Google OAuth commits to incorporate the feedback on those changes, respectively. You’ll need to know how to amend things to keep your history clean and your changes organized.

Which is where the concept of rebasing comes in. Rebasing is a powerful tool that I feel gets very misunderstood by junior (and honestly, even senior) engineers as a Very Scary, Very Risky thing - likely because in the context of your main trunk, that’s true! But in the context of your feature branch, rebasing is a safe, powerful tool that allows you to ‘rewrite history’ of your changes before committing that history permanently (merging to main).
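As a small taste of that workflow, here’s a scratch-repo sketch using fixup commits and autosquash; the file names and commit subjects are invented for illustration:

```shell
# Scratch-repo sketch: amending an earlier commit with --fixup and --autosquash
set -e
cd "$(mktemp -d)"
git init -q
git config user.name Dev && git config user.email dev@example.com
git commit -q --allow-empty -m "refactor: Refactor user model to accept multiple auth providers"
git commit -q --allow-empty -m "feat: Google OAuth authentication backend"
# A reviewer requests a change in the refactor commit: stage the fix, then
# record a fixup commit targeting it (':/<text>' finds a commit by message)
echo "reviewer-requested tweak" > authentication.go
git add authentication.go
git commit -q --fixup ':/refactor: Refactor'
# --autosquash folds the fixup into its target commit; GIT_SEQUENCE_EDITOR=true
# accepts the generated todo list without opening an editor
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
git log --oneline   # two clean commits; no "fixup!" noise remains
```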

Rebasing isn’t nearly as complicated as most people think and it’s not hard to have a very efficient workflow for quickly amending your commits to follow the ideas of purposeful commits - but going over committing fixups, rebase, and autosquashing is its own topic for another post.

Happy Git Spelunking

I hope this has given you some ideas on how to structure your Git history in a way that’s helpful for future you and your team. Remember that it’s practicality, not pedantry. It’ll change how you think about approaching software changes in general and make you a better engineer in the process.

RELATED POSTS:

Git Autofix for Amending Pull Requests

  1. When I worked at Red Hat, I worked on ManageIQ, an open-source project that was previously a closed-source one. The commit history needed to be purged of all context as part of the open-sourcing process, so the ‘initial commit’ of the open-source project was a massive commit that included all the changes from the closed-source project. But if you were a Red Hat employee, you’d be given the closed-source Git history to insert into your local repository, to see all prior context. Not the links to the old Bugzilla(?) tickets, though! 

  2. Pragmatically speaking. Sure, there’s a chance that you’ll need to switch to a non-Git VCS in the future, but that’s very rare in my experience, and there are usually ways to convert Git history to other VCSs. Aside: I’ve never used Gerrit, but the concept of patch sets - iterative development on individual commits - sounds exactly like a system built around purposefully committing :) 


RailsConf 2019 Visitor's Guide to Minneapolis
2019-04-28 - https://chrisarcand.com/railsconf2019

RailsConf 2019 is here! No, really - here in Minneapolis, Minnesota! The local Ruby community is thrilled to welcome you to our hometown.

Visiting the Twin Cities for the first time? Wondering what to wear? Where’s the good food and what’s a “Jucy Lucy”? Beer lover? I present to you the RailsConf 2019 Visitor’s Guide to Minneapolis, created with love by myself, Charles Nutter, and others at RubyMN!

  • For up-to-date info on logistics concerning the conference itself (schedule, hotel, childcare, policies, etc) the RailsConf website is the definitive resource that you should reference.
  • This page will likely continue to be updated throughout the conference with additional things as they come to mind or are contributed. Check back!

Quick Reference

  • The Weather/What to Wear
  • Skyway Map
  • Places to Eat
  • Vegetarian/Vegan places to eat
  • Craft Beer
  • Things to Do

Wait, ‘Twin Cities’?

RailsConf 2019 is in Minnesota’s largest city, Minneapolis (pictured left). Minnesota’s capital city, Saint Paul (pictured right), lies a mere 8 miles away (downtown to downtown). Together, the two cities collectively create a single metro area - hence, the Twin Cities.

Both cities have their own flavor and charm, and there’s plenty to see and do in both! This post focuses more on Minneapolis, where you’ll be staying - but remember that St. Paul is connected via the Green Line or it’s a short rideshare trip away.

The Weather

I know, I know - you’re worried about the weather. It does get very cold here, but the summers are quite lovely. Before we get to those summers, however, Minnesota goes through what we like to call ‘2nd Winter’ (aka ‘spring’). It is currently ‘spring’.

Happy ‘spring’.

After a week of beautiful sunny skies and ~75º (24ºC), we’re regrettably going to experience a temperature drop and rain during the conference. But don’t worry! I’m told the weather in the Convention Center is actually quite temperate.

Are you worried about the previous weekend’s forecast of…snow? Don’t. It’ll be gone or plowed (yes, we have plows!) before you get here. Really! It’s ok.

“Soooo do I need to bring a jacket?”

Yup. But you don’t need the world’s heaviest winter jacket. The single easiest way to be comfortably warm in cooler weather than you’re used to is by wearing a hat and gloves. So bring a light jacket and, if you think weather in the 40s sounds super cold, a hat and some gloves for when the sun goes down and you happen to be outside longer than just a few minutes.

Skyway Map

It gets pretty cold here in the winter time. As such, Minneapolis and St. Paul both have extensive skyway systems: a giant string of enclosed pedestrian foot bridges connecting downtown buildings above street level. In fact, the Minneapolis skyway system - which covers 80 city blocks spanning 11 miles (18 km) - is the longest contiguous system in the world.

The skyway conveniently connects the conference hotel to the conference venue. However, if you’re going for a longer stroll through downtown, I recommend just going outside while you’re visiting; it’s much more enjoyable and faster in most cases. The skyway is definitely an option if you’re feeling extra chilly, though!

Places to eat/drink

You’re in the middle of the continent, as Midwest as you can get. The food has to be terrible here, right?

Wrong. The Twin Cities has some really great food. You’ll find plenty of local delicacies here in the Land of 10,000 Lakes, whether it’s Jucy Lucies (where we unabashedly take credit for Wisconsin’s cheese) or some walleye from our 11,842 lakes (see what I did there?). And you won’t find just Midwestern comfort food: the Twin Cities’ great variety of cultures makes for some amazing spots you’d never think were possible up here in the North.

A true list of all the great places to eat would be impossible to fit here, but I’ll throw in some convenient places by the conference hotel and some favorites all around.

Around the conference hotel (Downtown)

There are tons of places to eat within walking distance of the conference hotel. This list is far from exhaustive.

  • Hen House Eatery 114 S 8th St, Minneapolis, MN 55402 Six blocks from the hotel. Cafe, great for breakfast/lunch.

  • Hell’s Kitchen 80 South 9th St. Minneapolis, MN 55402 Four blocks from the hotel. Also solid for breakfast.

  • Brit’s Pub 1110 Nicollet Mall, Minneapolis, MN 55403 Two blocks from the hotel. Large UK-inspired pub/restaurant with a huge rooftop lawn bowling area (which regrettably probably isn’t very useful while you’re here in a Minnesotan April)

  • Butcher & The Boar 1121 Hennepin Ave, Minneapolis, MN 55403 Popular American steak house, very meat-focused; excellent cocktails and bourbons, if that’s your thing.

  • The Local 931 Nicollet Mall, Minneapolis, MN 55402 Multi-roomed Irish pub/restaurant, a long-time downtown staple.

  • Zen Box Izakaya 602 S Washington Ave, Minneapolis, MN 55415 Japanese comfort food - ramen, small plates. A great lunch spot. It’s a long walk from the hotel but a short bus ride/rideshare. Brings you right by the river with Stone Arch Bridge and Mill City ruins for great views of the city while you’re here.

Uptown/Lyn-Lake

~10 minutes away via rideshare

Uptown (at Hennepin and Lake), which is actually south of downtown, is one of the more well-known hang-out spots in the city. I wouldn’t call it a tourist trap - it does have some good spots in it - but note that a lot of locals would consider it synonymous with Times Square in New York (many locals scoff and avoid it). It’s right next to Lyn-Lake (at - you guessed it - Lyndale and Lake), which is great, so although the most pedantic locals would argue against it, I’ve grouped them into one area here for you.

Here’s a great list of good picks for the area.

Some things I’d specifically like to call out:

  • moto-i 2940 Lyndale Ave S, Minneapolis, MN 55408 The sole “craft sake” brewery in Minnesota.

  • World Street Kitchen 2743 Lyndale Ave S #5, Minneapolis, MN 55408 Brick-and-mortar version of a local food truck that makes “street foods from around the world.” The Korean BBQ Bangkok burrito is life changing. And some of the Cities’ best ice cream is right next door at Milkjam Creamery.

  • Blue Door, Lyn-Lake 3006 Lyndale Ave S, Minneapolis, MN 55408 A local pub favorite originally started in St. Paul with multiple locations. Lots of amazing burgers, including their own take on a Jucy Lucy: the ‘Blucy’.

  • Chino Latino 2916 Hennepin Ave S, Minneapolis, MN 55408 “Street food from hot zones around the equator” is how this restaurant describes its unique cuisine, served family-style. “There’s also a sushi bar and imaginative drink list, including the Chinopolitan, a cosmo garnished with dry ice.” (Fodor’s travel review) Don’t forget your outdoor voice if you go here.

  • Volstead’s Emporium “Secret” address The National Prohibition Act, known informally as the Volstead Act, was enacted to carry out the intent of the 18th Amendment (ratified January 1919), which established prohibition in the United States. Its author? Andrew Volstead, a member of Congress from Minnesota. Sorry about that. We corrected it with the rest of y’all in 1933 and then named an old speakeasy after him out of spite.

    The address literally is secret (sort of). You’ll find it in an alleyway near 711 W Lake St, Minneapolis, MN 55408. There’s no sign. It’s literally just a random door. Shhh! Don’t tell anyone, I’m only letting you know as one of the very few ~1,200 friends visiting this week.

North Loop (Warehouse District)

< 10 minutes away via rideshare or a long walk

Here’s a great list of picks in the North Loop. There are tons of great options there, but I’ll share a few favorites.

  • 112 Eatery 112 N 3rd St, Minneapolis, MN 55401 James Beard Award-winning chef Isaac Becker has one of the best restaurants in the city, in my opinion.

  • Bar La Grassa 800 N Washington Ave, Minneapolis, MN 55401 An amazing Italian eatery.

  • Black Sheep Pizza 600 N Washington Ave, Minneapolis, MN 55401 Note: there are multiple locations in both cities.

  • Borough (& Parlour) 730 N Washington Ave, Minneapolis, MN 55401 “Upstairs, Borough is a modern small-plates restaurant that puts up creative but satisfying dishes. Downstairs in the basement, Parlour is a plush, swank cocktail den with a legendary burger.”

  • Red Cow 208 N 1st Ave, Minneapolis, MN 55401 Really great burgers. There’s a location in Uptown and in St. Paul, too.

  • Smack Shack 603 N Washington Ave, Minneapolis, MN 55401 I’d normally not recommend seafood in the Cities - the sea is very far away from here - but this is one of the exceptions.

Northeast

~10-15 minutes away via rideshare

The Northeast neighborhood right across the river has a huge number of taprooms (some mentioned in Craft Beer) and some solid food.

  • Young Joni 165 13th Ave NE, Minneapolis, MN 55413 Be prepared to wait if you want some of this woodfire pizza by James Beard Award-nominated Chef/Owner Ann Kim.

  • Kramarczuk’s 215 E Hennepin Ave, Minneapolis, MN 55414 This place is as quintessential Northeast as it gets. Kramarczuk’s has been serving up smoked sausages and other Eastern European foods in Minneapolis for over 60 years. Walk across the river to this gem of a Polish deli and combine the trip with checking out the St. Anthony Falls area (mentioned in Things to Do).

  • Brasa Premium Rotisserie 600 E Hennepin Ave, Minneapolis, MN 55414 Cooking inspired by the traditions of the southern US, Caribbean, and Mexico. The Yuca fries and magic green sauce (very technical term) are amazingly delicious here. Vegetarian and gluten-free options, too!

  • El Taco Riendo 2412 Central Ave NE, Minneapolis, MN 55418

  • Chimborazo 2851 Central Ave NE, Minneapolis, MN 55418 Delicious Ecuadorian and Andean food.

St. Paul

Did I mention there’s an entire separate city in the Twin Cities? Down the river from Minneapolis lies St. Paul, with all its own picks. The Mpls/St Paul magazine list here is also pretty good. Some highlights/additions:

  • Meritage 410 St Peter St, St Paul, MN 55102 Probably the best French dining in the Cities, located in one of the more historic buildings in St. Paul. Note this is nearing fine dining and getting a reservation ahead of time is advised.

  • Tori Ramen 161 Victoria St, St Paul, MN 55104 This cozy little spot specializes in pork-free ramen.

  • Revival 525 Selby Ave, St Paul, MN 55102 James Beard-nominated chef Thomas Boemer brings you amazing Southern comfort right here in the “Bold North”.

  • The Buttered Tin 237 7th St E, St Paul, MN 55101 Go here for breakfast.

  • The Bulldog, Lowertown 237 6th St E, St Paul, MN 55101 A local pub staple.

  • Mickey’s Dining Car 36 7th St W, St Paul, MN 55102 This is NOT a foodie place, but I’d be remiss not to mention the iconic establishment, which has basically stayed the same since the 1940s and has seen on-location filming of all three Mighty Ducks movies. It’s open 24 hours a day, every day, so it could be a late-night stop for the adventurous (you know, after watching all the Mighty Ducks movies late into the evening, as one does).

Vegetarian/Vegan picks

Some of the closest and best vegetarian options are in Uptown/Lyn-Lake, including:

  • Galactic Pizza 2917 Lyndale Ave S, Minneapolis, MN 55408 Not strictly vegetarian, but locally sourced pizza with vegetarian and vegan versions of every specialty pizza - and many of the specialty pizzas are already vegetarian. Gluten-free crusts available, too.

  • Trio Plant-based 610 W Lake St, Minneapolis, MN 55408 Vegan everything!

  • fig + farro 3001 Hennepin Ave S, Minneapolis, MN 55408 “This vegetarian restaurant serves seasonal small plates and entrees influenced by global cuisines including Mediterranean, Mexican, Italian, and Korean. The brunch menu offers items like biscuits with mushroom-based gravy, breakfast ramen, and a blueberry cinnamon roll sized to share.” (Eater Twin Cities)

In St. Paul:

  • J. Selby’s 169 Victoria St, St Paul, MN 55104 Vegan-friendly, plant-based eatery with familiar favorites re-imagined as plant-based fare and new, exciting dishes.

Craft Beer

Beer lover? The Twin Cities’ craft beer game is strong. Here’s a very small selection of some of our favorites, sorted by distance from the conference hotel/venue:

Close to the conference hotel:

  • Lakes and Legends Brewing Company 1368 Lasalle Ave, Minneapolis, MN 55403 A single block away from the conference hotel.

  • Finnegan’s Brewery and Taproom 817 S 5th Ave, Minneapolis, MN 55404 Less than a mile from the conference hotel.

  • Sisyphus Brewing 712 Ontario Ave W #100, Minneapolis, MN 55403 Also less than a mile away, a 17-minute walk through nearby Loring Park and the Minneapolis Sculpture Garden from the conference hotel will get you here.

North Loop (< 10 rideshare or a long walk)

Northeast (10-15 min rideshare)

Midway (~15 min rideshare)

Midway refers to the area between Minneapolis and St. Paul (which is mostly just a part of St. Paul)

Adventure Time (20 minutes or longer via rideshare)

These are well out of your way, but have to be mentioned as they are the certified Favorite Breweries™ of one of our resident JRubyists and founding members of RubyMN.

  • Wild Mind 6031 Pillsbury Ave S, Minneapolis, MN 55419 In South Minneapolis; about 20 minutes away.

  • HammerHeart Brewing Co. 7785 Lake Dr, Lino Lakes, MN 55014 Far to the north, right outside the metro area. About 30ish minutes to get there.

Things to do

RailsConf Events

As has become tradition, Mike Perham has once again posted a listing of official and unofficial RailsConf related events and parties on his website, and I’ll leave that as a canonical source to reference. Check it out!

I will note here that sadly the RubyMN meetup is already full and we can’t really take on any more RSVPs, purely due to the amount of space we have at the venue. As it turns out, a venue that can support 60ish people fills up pretty quickly with 1200+ Rubyists in town!

On your own

There’s so much to do and see around the Twin Cities - remember, the conference is in Minneapolis but there’s a whole other city just eight miles away (contrary to what some Minneapolitans might purport).

As always, remember that this list is far from exhaustive and undoubtedly will get added to as more things come to mind/are suggested by others.

  • Head to the north side of downtown and walk across the iconic Stone Arch Bridge to stretch your legs and check out one of the most quintessential views of Minneapolis overlooking St. Anthony Falls. See all the mill buildings along the riverside? ‘The Mill City’ developed around these falls. Ever heard of General Mills and Pillsbury?
  • Only a block or two away from the conference hotel is the Minneapolis Sculpture Garden with its famed, quirky Spoonbridge and Cherry, as well as the Walker Art Center right next to it. If you’re joining us for the unofficial RailsConf 5k, we’ll be running right around this area already!
  • Home to over 45 businesses spanning over 22 cultures, the Midtown Global Market - located in the historic Midtown Exchange (listed on the National Register of Historic Places as the old Sears, Roebuck and Company building) - is a vibrant cultural center full of food, locally made art, and goods from around the world. “We exist to promote the economic, social and cultural assets of Minnesota and celebrate the healthy foods, arts, crafts and other aspects of our diverse heritage.”
  • In the same vein as Midtown Global Market above, get a taste of St. Paul and head over to Keg and Case, a brand new culinary and retail marketplace that opened last year on the grounds of another historic location, the old Schmidt Brewery.
  • Check out First Avenue, the historic music venue that Prince frequented and recorded Purple Rain at. If you don’t want to see a show, the 531 stars on the exterior of the building commemorating past venue performers are worth the visit alone.
  • Axe throwing has become a thing in the past few years here. It’s like darts, but…with axes. Check out Bad Axe Throwing in Minneapolis or FlannelJax’s in St. Paul.
  • Enjoy arcade games? The Minneapolis Up/Down in Lyn-Lake is a blast, featuring classic arcade games, skee-ball, and more. It’s right next door to the Blue Door pub, mentioned earlier.
  • The Minnesota Wild are out of the NHL playoffs (as usual) and the Minnesota United FC are sadly not at home in their brand new St. Paul stadium this week. But if you’re into baseball, the Minnesota Twins will be in a homestand at Target Field with the Astros.
  • See a show at the Guthrie Theater.
  • The Minnesota Orchestra is one of the top symphony orchestras in the United States. If you’re still around Friday night after the conference, there’s a concert at Orchestra Hall (near the conference venue) featuring some Haydn, Bernstein, and Mozart.

Further away

These are a trip, but should be mentioned anyway:

  • Paisley Park was Prince’s private estate and recording studio (as well as the name of his record label, as well as a song on his 1985 album). It’s in Chanhassen, a distant suburb of Minneapolis, around a 30+ minute drive from the conference hotel.
  • The Mall of America - near the airport you flew into - is the largest mall in the United States. It’s large enough that there’s an entire amusement park (like, with rides) in the middle of it. The park has worn different branding over the years, but if you ask a local of a certain age it’ll be forever remembered as Camp Snoopy. Really though, it is just a mall - so unless you’re really into shopping, there are way more interesting things to see and do to experience the Twin Cities. Had to mention it, though!

FIN

I’m so stoked you’re here; enjoy your time in the Twin Cities! Do come say hi during the conference! I have a lot of RubyMN stickers to give away.

Thank you to the local Rubyists that contributed to all this content!

]]>
<![CDATA[Interview Your Interviewers]]> 2019-04-08T00:00:00-04:00 https://chrisarcand.com/interview-your-interviewers At this point in my career, I’ve taken a fair number of job interviews in the tech industry. I’ve interviewed at companies ranging from large enterprises to tiny consultancies and small, bootstrapped product teams to high-growth, VC-funded startups.

Interviewing is (regrettably) its own weird skillset, especially in software development. You’re not only expected to demonstrate that you’re technically proficient enough for the role (or can clearly learn the technical skills required) but also that your personality fits well with the culture of the team - all within a few hours, usually. To keep this unique skillset of selling yourself sharp, some people advocate taking at least one interview every year even if you have no intention of actually accepting a new role. While I acknowledge that taking interviews like this requires time and privilege that not everyone has, I think this is a good strategy overall and I’ve followed it for a number of years now.

I often get asked for advice on interviewing by colleagues, and there’s one thing in particular I always share first: Interview your interviewers. An interview is every bit as much an assessment of the interviewer as it is an evaluation of the interviewee.

Interviews are not a one-sided ordeal and you shouldn’t forget to evaluate your potential company as you sit on the candidate’s side of the desk or screen! Companies you want to work for will always offer to honestly answer any questions you have and make sure they reserve enough time in every interview to do so.

Interview your interviewers. An interview is every bit as much an assessment of the interviewer as it is an evaluation of the interviewee.

Treating an interview as a true evaluation of both parties has a ton of benefits:

  • You get more value out of the interview, getting a clearer picture of whether or not the role is a good fit. Job descriptions are more or less bullshit - or at least omit a lot of particulars - even at good companies. They are purposely generic and never tailored to the actual details of a specific requisition (this is intentional - more on that later).

  • You shift the burden of answering questions and give yourself a break. Interviews are a lot of work, especially for anyone who tends to identify as more of an introvert. At many companies, candidates are evaluated by a panel of different people, each interview evaluating the candidate from a different angle. You might have interviews scheduled with plenty of time between them or you might get a few solid hours of interviews all in a row (hopefully not - but I’ve experienced this multiple times). Also consider that you might be interviewing at multiple companies at once!

    Regardless of the length of time, interviews can be socially taxing as you take interview after interview, trying to reintroduce yourself to yet another person for yet another hour. Asking good, open-ended questions actually shifts the burden over to the interviewer, allowing yourself to regain a little composure and mentally recover from constantly answering them.

  • You show more interest in the company and role to your interviewers. As someone who often sits on the interviewer side of the table as well, I’ve known people who saw a candidate not asking any questions at the end of an interview as a red flag. It supposedly signals that the candidate might not be all that interested in the company or role they are interviewing for. I don’t agree with this. Although it gives some small bit of insight into a candidate (e.g. based on what they’re seeking clarification on, what do they seem to value?), blindly assuming the candidate isn’t excited about the role due to a lack of questions is a mistake. Nevertheless, it’s a common interviewer ‘tactic’ to be aware of.

  • You have a better chance of avoiding a bad situation. Just as you’re trying to sell yourself in a few hours’ time, you also only have a few hours of one-on-one time with employees of a company to deduce anything that you’d consider an absolute dealbreaker to accepting an offer; things that don’t show up on paper in an offer letter but that you might discover some months in, realizing the role or team really wasn’t a good fit for you. What sorts of things does the company value? Is there anything that makes you feel ethically uncomfortable that you might not have known about the company before? Do you have a clear picture of the expectations of the role and how your future team works together?

  • You might be creating the beginnings of a good work relationship with your future coworkers. By treating an interview as a two-way street, you’re really making it a conversation. Instead of a sequence of questions to answer and be done with, you might actually find yourself connecting with your future teammates. What better way for either side to evaluate if they’d be a good fit than by actually experiencing how you’d connect as colleagues to begin with?

So now that you know you should ask questions of your interviewers, what should you ask? I like to keep a personal checklist of potential questions to ask. If you don’t know where to start with your own list, Julia Evans has a solid list of questions to get you going in her post on the topic from 2013. Pick out ones that matter to you, add your own, and don’t be afraid to tweak them at any time.

In an actual interview I use a subset of questions depending on the particular concerns I might have for a specific company. The questions you ask can (and should!) vary between companies as each one is different. The company size, the role, and what stage the company is in (is it a privately owned startup? a publicly owned corporation?) will affect the priority of your set of questions.

Always ask a hiring manager why the team needs you in an interview. Job postings are almost universally bad at getting specific about the actual role in mind, even at good companies.

I’d specifically like to call out one question to make sure you ask your hiring manager. Always ask a hiring manager why the team needs you - or someone like you - in an interview. As I mentioned above, job postings are almost universally bad at getting specific about the actual role in mind, even at good companies. Part of that is because you haven’t signed an NDA yet, but another is that companies are usually pretty terrible at keeping job descriptions up to date and specific to a certain new role in mind. So don’t forget to have that question answered very specifically! An example: “What do you need the person in this role to do? If you hired me, what’s something specific that you hope I’ll bring to the company that will make you think ‘Oh wow I am so happy we hired him/her/them’ six months from now?”

The Two Questions I Always Ask

There are two questions, however, that I always ask every single interviewer at the end of every interview - hiring manager, engineer, even the recruiter. If there are multiple interviewers in a single interview, I ask the questions in a round, with each person answering the first and then in reverse order for the second (the person who answers last always has more time to think, so it’s more even this way).

They are:

  1. What do you absolutely love about <company name>? What makes you get to your desk every day and think ‘Oh yeah, this is exactly where I want to be, I’m really happy with where I’m at’?
  2. Conversely, what do you hate about <company name>? Put in a more positive light, what do you think <company name> isn’t so good at and could use some improvement?

I’ve found these questions to be - by far, in my experience - the most revealing things you can ask in an interview to gauge a company and what its employees think of it. You’ll find that these questions immediately put you and the interviewer on the same level, and the first few times you ask them you’ll be blown away by the bluntness and honesty that typically come with the answers.

  • Sometimes an answer to #2 will inform your question(s) for the next interview, so you can pry that subject open and see if you discover a major issue at the company that would make you feel uncomfortable. Sometimes the answer will be repeated by every single interviewer and immediately show a red flag that you’ll be happy to avoid by declining an offer (this has happened to me before!).
  • Other times the answer to #2 will be repeated and won’t be a red flag: every company has at least one problem, even good ones, and there’s always a tradeoff. So if it’s not a dealbreaker for you, you’re now just well informed as to what exactly you’re accepting (and will try to help improve) if you’re made an offer and choose to join the company.
  • The answers to #1 could cement your confidence that you’re making the right choice if you have the chance to accept an offer. What if you know exactly what’s most important to you and every one of your interviewers has the same idea? Answer: you might have just found your people.

Interview your interviewers and try to connect with people you’d hope to be able to connect with in the future anyway. It all sounds so obvious from afar, but sometimes hard to remember as you approach a set of interviews with that-one-company-you’ve-always-wanted-to-work-at. Ask lots of questions and don’t be afraid to really pry things open to be as confident as you can be that all of the effort you’re going through is worth it. Both the company - and you - are far better off for it.


Want to ask me these questions, and hear how confident I was joining my team at HashiCorp? (Spoiler: I was very confident) We’re hiring! Check out our open positions at hashicorp.com/jobs

]]>
<![CDATA[Why GraphQL?]]> 2018-06-14T00:00:00-04:00 https://chrisarcand.com/talks/why-graphql If you don’t know much about GraphQL, you probably just identify it as the hip, shiny new thing companies are adopting to replace their RESTful APIs. Sure, sure – clients can ask for only what they need in a single request – but is that it? What really makes GraphQL so special? In this talk, you’ll be introduced to GraphQL beyond just the usual explanations of the query language and learn why things like GraphQL’s introspection and type system make it an enormous advancement in web technologies.

Slides

]]>
<![CDATA[Embrace Boredom]]> 2018-05-29T00:00:00-04:00 https://chrisarcand.com/embrace-boredom Over 30,000 feet in the sky, in a plane without WiFi (which is more and more of a novelty these days), my eyes caught someone through the seats in front of me staring at the home screen of their smart phone.

They flipped through the app icons, searching for…something. They opened the Facebook app only to find a “No Internet Connection” banner. More flipping through icon screens followed after a few minutes of staring at the blank WiFi connection list in Settings. Then back to the home screen to stare at the icons for a few minutes. They opened a major news app only to find “Cannot connect to server.” After more staring and flipping of icons, they discovered their email app staring right back at them with a similar message. This sort of pattern continued for nearly 10 minutes before the person tried Facebook one last time — just in case — before resigning themselves to the somber purgatory of Airplane Mode.

In our modern era of near-perpetual connectivity and endless notifications vying for every second of our attention, this sort of desperation is really common (I freely admit finding myself in the same situation in the past). We have the ability to gather information about almost anything in real time from anywhere in the world, and because of this we feel a sense of urgency to know everything instantly — even when the information isn’t actually urgent or even pertinent at all. This sense of urgency is why you’ll find most people reaching into their pockets as they enter a line waiting for basically anything. It’s why you’ll suddenly find yourself unlocking your phone as you sit down on the bus or you’re sitting at a table waiting for your colleague to join you for lunch in two minutes.

Our culture is obsessed with busyness and Getting Things Done™. Boredom isn’t sexy. Boredom is a “waste of time.” We therefore fill every second of our time with things that we convince ourselves are Important®. This mindset extends itself into the time when we knowingly aren’t “getting things done,” and the [albeit impressive] technology we are surrounded by makes distraction a constant, numbing norm. BuzzFeed, Reddit, and mobile games are right at our fingertips. With smart watches becoming more of a thing, we don’t even need to reach into our pockets anymore.

It’s now hard to feel bored.

There are countless benefits of boredom and getting lost in one’s own thoughts — a keener awareness of the world around you and creative brainstorming with your own ideas instead of just consuming others’, to name a few — but the one I’d like to mention here is honing the ability to focus.

In the book Deep Work, author Cal Newport explains that constant distraction can actually harm our ability to focus when we need to; even when that distraction is outside of our working hours. He likens mental performance to athletic performance:

Much in the same way that athletes must take care of their bodies outside of their training sessions, you’ll struggle to achieve the deepest levels of concentration if you spend the rest of your time fleeing the slightest hint of boredom.

You wouldn’t expect to be able to run a marathon after eating a steady diet of ice cream and Juicy Lucies. Why would you expect that after desperately distracting yourself every “boring” moment outside of work, you will suddenly be able to sit down and concentrate deeply, ignoring distractions?

The next time you find yourself waiting — for your food in line, your friend at the cafe, or the plane to hurry up and get there — embrace boredom and allow yourself to get lost in whatever thoughts come to you. You’ll be all the more ready to concentrate in the future. You’ll learn a surprising amount about yourself and the world around you, too.


This post was originally written for Software for Good’s Theme of the Week for May 29th, 2018

Link: “Embrace Boredom” on the Software for Good engineering blog

]]>
<![CDATA[Focusing on what's important: Managing GitHub notifications with Octobox]]> 2018-02-22T00:00:00-05:00 https://chrisarcand.com/focusing-on-whats-important-with-octobox I wrote something for devproductivity.io about how I use Octobox to focus my attention on what’s important while still keeping an eye across many different projects.

Link: “Focusing on what’s important: Managing GitHub notifications with Octobox”


Many people wonder how some individuals can be so productive. It might seem that someone you know gets an especially copious amount of work done given the same amount of time as anyone else. There’s a lot of discussion to be had about what ‘productive’ even means and the best methods to go about being productive, but ultimately you can boil it all down to two statements:

  • Spend your time on what’s important
  • Don’t spend your time on what’s not important

These points might seem obvious, but applying them to every aspect of your day is harder than you might think.

As developers we spend a lot of time looking at issues and reviewing code changes, interacting asynchronously with other people on multiple subjects over an indeterminate amount of time. While this is reasonable to manage via email notifications on small projects, as your organization and codebase grow your inbox quickly turns into a water hose of GitHub notifications that all start to look the same. This problem is compounded if you’re active in open source, where you’d like to follow the activity of different projects outside of your own - and further compounded when the projects you’re interested in are very, very large.

Spending too much time figuring out which notifications are actually important in the current moment isn’t productive. Trying to remember which notifications you still need to follow up with, over and over again, is also not productive. Most open source maintainers and GitHub staff end up using a complex combination of filters and labels in Gmail to manage their notifications from their inbox. The productive thing to do, however, is to maximize your time actually addressing those issues!

Enter Octobox.

Octobox screenshot

Octobox is an application that manages your GitHub notifications via GitHub’s V3 REST API, allowing you to filter down by organization, project, notification type, or the reason why you are receiving the notification in the first place.

With GitHub’s notifications UI, when notifications are marked as read they disappear from the list. This makes it very hard to keep on top of which notifications you still need to follow up on. Octobox adds an extra “archived” state to each notification so you can mark it as “done”. If new activity happens on the thread/issue/pull request, the next time you sync your notifications the relevant item will be unarchived and moved back into your inbox.
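The “archived until new activity” behavior described above can be sketched in a few lines of Ruby. This is a hypothetical simplification for illustration - not Octobox’s actual implementation, which persists notifications fetched from GitHub’s REST API in a database - but it shows the core idea: marking a thread “done” hides it until its `updated_at` timestamp moves forward.

```ruby
# Hypothetical sketch of Octobox's archive/unarchive idea, not its real code.
# A notification is "done" (archived) until new activity arrives on its thread.
Notification = Struct.new(:id, :subject, :updated_at, :archived) do
  def archive!
    self.archived = true
  end

  def unarchive!
    self.archived = false
  end
end

class Inbox
  def initialize(notifications)
    @notifications = notifications
  end

  # Anything not yet marked "done" still needs attention.
  def active
    @notifications.reject(&:archived)
  end

  # Sync against fresh data from the API: if a thread saw new activity
  # since we archived it, pull it back into the inbox.
  def sync(fresh)
    fresh.each do |f|
      n = @notifications.find { |existing| existing.id == f.id }
      next unless n

      n.unarchive! if n.archived && f.updated_at > n.updated_at
      n.updated_at = f.updated_at
    end
  end
end
```

Archiving a notification removes it from the active list, but a later `sync` with a newer `updated_at` for the same thread brings it right back - which is exactly why the “done” state is so much more useful than GitHub’s read/unread toggle.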

With Octobox it’s easy to drill down into the issues that matter to you in the current moment while still staying subscribed to projects that you’d like to keep an eye on. You can quickly address threads that you’ve authored yourself or remember that you still need to submit the review that someone requested. You can “star” long-running issues that you know might be important down the road. And anyone refusing to touch their mouse will be pleasantly surprised at the number of keyboard shortcuts for all of these actions. ;)

Working full-time in open source, I have to keep track of a lot of different projects - but I also try and maximize my productivity by paying attention to what’s important at a particular moment. Octobox is a tool I use to do just that.

Octobox is open source and available on GitHub. You can try it immediately at octobox.io

]]>
<![CDATA[An Atypical 'Performance' Talk]]> 2017-10-27T00:00:00-04:00 https://chrisarcand.com/talks/an-atypical-performance-talk A mixture of a talk on programming and a live classical music recital. I talk about parallelisms between music performance and programming - such as complexity, imposter syndrome, and true concentration - with unaccompanied clarinet performances throughout.

Video (Keep Ruby Weird 2017 | Austin, Texas)

Slides

]]>
<![CDATA[Balance]]> 2017-05-02T00:00:00-04:00 https://chrisarcand.com/balance When I was young, I never understood why my dad liked splitting and burning firewood so much.

Every summer, my brother and I would play our parts. We’d haul large logs cut by my dad’s chainsaw from some fallen tree in the area (with permission, of course) in his pickup truck to our backyard. Then we’d split them by axe and hammer and stack the pieces at the bottom of the hill to season. Lastly, we’d take a previous years’ stash from the other side of our wood stack and place them in a large red wheelbarrow and haul them up the hill to then re-stack in neat piles against the side of the house next to the patio door for the coming winter.

When the extreme Minnesotan cold set in, my dad would then burn a fire nearly every evening (and sometimes, morning) in the wood burning stove to heat the house, just needing to lean out the patio door for the season’s wood supply.

Almost all of my friends’ families either didn’t bother utilizing their fireplace (you could just turn up the furnace, of course) or had gas ones. With the latter you could just touch a button and a nice fire magically appeared! No dreaded hauling of firewood all afternoon on summer weekends. No having to build a fire with the previous week’s newspaper. No scooping out old ash from the stove or getting on the roof every year and sending a brush up and down the chimney to clean it out.

What’s more is that my dad looked genuinely happy to be out splitting firewood.

He seemed very pleased in the winter evenings with a roaring fire going and a beer in hand, but that was easy to understand. It was the splitting and the hauling and all the overhead in the summertime that I was baffled about. When I was older (and my brother went off to school and I was left to pick up all his slack…) I finally asked him about it.

He explained that, being in IT, things like wood burning and fishing and the outdoors were the antidote to his technology-filled workweek. After a full day of sitting in front of a computer screen and working with people and their endless computer issues, spending quiet time outside in the sun and fresh air, either alone or with his boys, provided a balance to his life that was important to maintain.


After becoming an experienced software developer myself, I’ve come to understand my dad’s insistence on balance between technology and the simple things in life more and more over the years. It needn’t be burning wood or even the outdoors at all - but I now firmly believe that having something that grounds you and doesn’t involve staring at a liquid crystal display whatsoever is vital to feeling balanced in day-to-day life.

I enjoy bonfires, and when I moved back to Minnesota last summer I had to use some (gasp) purchased firewood from the gas station (Yeah, we have that here. Is that weird?). The wood burned incredibly poorly; it was obvious that it was newly split wood and completely unseasoned. I would have to take time and build the perfect structure to start, and constantly shift things around to get it to burn evenly.

This experience was “frustrating”. But in an amazing, I’m-so-happy-it’s-not-another-software-problem sort of way.

I wasn’t digging through stack traces trying to find where some third-party library accidentally made a breaking change in their latest patch release. I wasn’t mulling over the best approach to make some major change to the core architecture of an application without breaking things and pissing off untold numbers of customers (with said major change being a precondition to implementing the feature those same customers had been asking for, of course).

I wasn’t continuing to ask for minimal examples of that phantom issue that multiple people have reported but no one can narrow down. I wasn’t getting DM’d about remembering some small caveat from code written years ago regarding that one thing that kept me up all night or all weekend and I’d rather just forget about forever.

I wasn’t doing any of those things. But I was debugging! I was debugging something completely and utterly different, applying all the same logical processing of thoughts. I was trying to make some pile of shitty wood burn. Which piece should I add next? Is this one too thick? Does it seem drier than that piece? Maybe I should wait until the fire is hotter to burn this bad one. Where on the fire would it burn the best? Is there enough airflow there? Will the structure collapse if I try to move that corner?

Debugging under the stars with a bottle of bourbon and the sound of crickets and loons as my work playlist. It’s this sort of disconnection from our always-online society that keeps my balance.

I love technology, software, and the communities that support it. I work remotely, and while I try to keep a healthy separation between work and home, I’m too passionate about what I do to completely evict programming from my mind the moment the clock hits 5pm. I might merge pull requests and read emails after a typical work day. I talk shop constantly on the weekends and speak at conferences. I check Twitter too often. I do all these things because I’m fortunate enough to be paid to do something I truly love.

If I do nothing but these things for too long, I become a mess. It doesn’t happen suddenly, and it’s not something I even notice (which is the worst part). I just begin to become this zombie of a person, only caring about getting that low-priority bugfix I wrote merged or some issue closed or some CFP submitted. Slowly but surely, the road to burnout begins.

But disconnecting and concentrating on other things is my antidote. I’ve developed a huge affinity for hiking. I play organized hockey twice a week. I love burning fires and getting lost in the night sky. And as a former orchestral musician, I enjoy reading some old charts every now and then.

Find something that you love that has absolutely nothing to do with software and technology. Learn a skill that uses your hands beyond touching keycaps. Pour yourself into it. I think you’ll find that it will give you a balance that makes you all the more concentrated and effective as a software developer when you open up that text editor in the morning.

]]>