Above all, be unpredictable. Being predictable means your victim can figure you out and respond accordingly. Instead, keep them on their toes, which will preoccupy their mental bandwidth and prevent them from making meaningful progress in your relationship.
Separate cause and effect. Don’t give reasonable feedback in a timely fashion. When your victim does something that bothers you, don’t express your emotions calmly to them at that time. Instead, wait for a related, but otherwise innocuous trigger and blow up at them. They’ll learn never to trust your initial reactions and will never know if something they do or say will blow up in their face.
Love them without liking them. Be obsessed with them, telling them how much they mean to you. But day-to-day, don’t enjoy their company and constantly try to change them. Say you want to spend your life with them, but tear down their hobbies and their lifestyle. Your love will keep them around, but they will feel terrible when enjoying the things that make them happy.
Rapidly oscillate between praise and criticism. Be extreme about it. You don’t just enjoy being around them; they are the best thing that ever happened to you. Then, turn around and tell them exactly how much they are hurting you just by being themselves.
Target their insecurities. Don’t just tell them how you feel when their actions hurt you, attack the specific parts of their psyche they struggle with the most. Compare them to others, attack their sense of morality and tear down their identity. You don’t have to know why something triggers them, whether it was a past experience or their general upbringing, you just have to know how much it hurts.
Finally, create an environment where it’s dangerous to say no. Criticize them constantly for asserting their boundaries. Then, ask for their consent for the things you want them to do. You may even respect their decision if they decline, but they won’t decline, because they don’t know how you’ll react.

I’m no expert on LLMs and coding with AI. In fact, I feel like I’ve fallen behind. I’m still in the initial phases of trying out AI-augmented coding. This blog post is my attempt at addressing my own reservations about this new world by comparing current AI to early compilers. The audience is myself, but maybe it’ll help someone who’s hesitant to use AI in their day-to-day coding.
Whenever I have a gut reaction against AI coding, I remind myself: compilers faced the same backlash, but eventually, compilers (and high-level languages) enabled solving more complex problems faster, to the point of becoming indispensable tools. With that in mind, I shouldn’t dismiss AI coding.
As I sat down to write this blog post, I found someone else had already written a version of it: When Compilers Were the ‘AI’ That Scared Programmers. I’ll rehash some of Vivek’s arguments, but Vivek is definitely pro-AI. I additionally want to explore another angle in my post.
My initial reaction to AI coding could apply almost point-by-point to early compilers:
LLMs produce bad code. “Bad” can mean inefficient, or even buggy. Early compilers also produced bad code. But compilers got better, and so will AI.
You’re giving up control. Same with (high-level) compilers, where you no longer decide exactly how your code maps to the actual execution on your machine. In exchange, you get to think about problems at a higher level, not worrying about… the execution on your machine.
You lose out on understanding the fundamentals of your software, so when things go wrong, you can’t fix it. For many people, being able to solve complex problems is more valuable than the few times when things go catastrophically wrong. Think scientists who are just trying to model something, and they don’t actually care about being expert programmers or computer scientists. Meanwhile, for those of us whose core job is writing software, compilers haven’t changed the fact that learning computer science and understanding low-level programming is still useful, hence the utility of a solid computer science degree.
In my time as a programmer, when the end goal was solving a problem, and writing code was just a means to an end, I reached for a high-level language. (Sometimes the constraints, such as writing for a specific hardware target, made that impossible, but I’m talking about software I’ll run on my own computer or similar.) It’s just more productive. Maybe I’ll get to the point where I reach for an LLM.
As a side note: a lot of these same arguments apply to modern IDEs, with their fancy GUIs and auto-completion!
There is one fundamental difference I see with LLMs that I haven’t seen addressed. The way AI coding works today is that the LLM spits out code that you have to maintain. The original prompts are no longer the source of truth; the generated code is. When you fix a bug, the generated code is an input to the LLM, and the next iteration changes that code incrementally.
That’s like saying the binary output of a compiler is what you check into source control. The binary is what you edit and the machine code is what you debug. But that’s not how things work today. Today, the original source code is the source of truth. As compilers improve, you recompile the source code to produce a better binary. When you have a bug, you look for logical errors in the source code and modify that until the produced binary does what you want. This would be as if you stored only the LLM prompts, and you evolved the prompts incrementally, running the LLM from a blank slate each time you want to execute your software. (I’m ignoring incremental builds, but in general, a clean build is always possible with a compiler.)
One day, AI may become deterministic enough that we would indeed just store the prompts as our source of truth. Even compilers can be non-deterministic when performing optimizations, but as long as they preserve the semantics of the source code, we’re okay with giving up full control over the machine code.
Given all this, you’d think I’m convinced I have to use LLMs all the time. After all, I wouldn’t use a barebones text editor and start writing in assembly, right? I guess I’ll always be an odd one out, because that’s exactly something I like to do for fun! I’ve been writing a compiler for over a decade, and there’s a lot of assembly. In fact, there’s a lot of hand-assembled machine code, and I like it that way. I typically use Vim, with minimal auto-completion and no “go to definition”. Earlier in my career, I made it a point to make one of my work projects “Vim-friendly”: if you couldn’t keep the program in your head and navigate around the codebase by hand, the codebase was too complex.
So even in the 2020s, when you’d think compilers and IDEs are a given, I enjoy artisanal, hand-crafted code after all. Still, a tool is a tool, and I should learn how to use LLMs to enhance my code. I’m not ready to be left behind.
On April 12, 2014, I wrote a quick and dirty interpreter for a Scheme-like language. In the next week, I ripped out that code and laid the foundation for the compiler I’ve been working on for almost eleven years! I didn’t have time to reflect on it at the time of the ten-year anniversary, so I’m writing my thoughts down now.
The language and its compiler are called Garlic, a name I’ll talk about later.
Ultimately: because I enjoy it.
In college, I took a class on compilers with a close friend. We didn’t do well on the second project—static analysis—partly because it was a hard project and partly because we had little time with all the other classes we were taking. The third project—native code generation—was due the week between classes and finals, allowing us to put in extra time. We did amazing on that project. Even with that success, there were many enhancements we didn’t have time for. I remember implementing integers as objects on the heap, while another student used tagged pointers, resulting in a significant speedup in our (admittedly contrived) test programs. I knew I wanted to spend more time in this domain.
That explains why I chose a Scheme-like language, as parsing would not be a significant effort, and I could focus on the code generation. That said, I did want to revisit the static analysis too. Almost two years after graduation, meaning almost three years after the compilers class, I finally sat down to pick up compilers again.
At this point, my motivation comes down to:
Learning. A one-semester class can only go so deep. The class is also (rightly) focused on fundamental concepts, less so on the specifics of individual architectures or file formats.
Exploring different design decisions, instead of picking one and implementing it due to time pressure.
Challenging myself.
I chose to write Garlic in Ruby, hence the name Garlic’s A Ruby Lisp Implementation Compiler. The name wasn’t chosen until almost exactly a year later, and until then, even the repo was simply scheme-compiler.
Ruby is a language I enjoy using and this is my personal project. The tech stack for the main implementation still looks like this:
Ruby as the compiler implementation language.
A mix of C and x86-64 assembly for the runtime. This code is not executed during compilation.
The Parslet library for parsing. Told you I didn’t want to spend much time on the parsing!
The compiler outputs x86-64 assembly as text files that are then fed into GCC or Clang. Those compilers handle the remaining steps of compiling any additional C code, linking everything together, and producing an executable file.
The choice to rely on a C compiler and linker was based on what we did in my compilers class. To be fair, in other classes, I’d written an assembly-to-opcode assembler, so the compilers class was more about focusing on the parts we hadn’t learned before.
As I added more features to the compiler, I ran into some interesting challenges. Some of these do assume some knowledge about compilers to understand.
The garlic_fncall helper

Objective-C is famous for its objc_msgSend function, a tightly-optimized piece of code underlying the entire message-passing, object-oriented nature of the language. Mike Ash, one of the premier experts in the Mac development ecosystem, wrote multiple articles about this function, including Let’s build objc_msgSend. Every method call in the language goes through this function.
I ended up with a similar design, calling my version garlic_fncall (initially scm_fncall before the name change). I think the idea of having shared code coordinate function calls is a common paradigm. For example, in other object-oriented languages, there might be some common code to look up method implementations in a virtual table, allowing for features such as inheritance. My version also went through many iterations, adding support for variadic functions, optimizing the code and eventually allowing the calling of user-defined C code!
One of the real-world problems I ran into was the stack alignment needed to follow the System V Application Binary Interface (ABI). An ABI defines, among other things, a calling convention: rules for how functions are called and what the called functions can expect from their callers. Because the generated code interfaces with code produced by other compilers, I need to follow these conventions to ensure compatibility.
One requirement is that, when a function call is made using the call instruction, the stack must be aligned to a 16-byte boundary. There’s a catch: the call instruction pushes the 8-byte return address, so at the called function’s entry point, the stack is offset from a 16-byte boundary by 8 bytes. The stack has to be aligned before the return address is pushed, and the callee must account for that 8-byte offset when setting up its own frame!
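To make the requirement concrete, the arithmetic can be sketched in C. This is an illustration only: frame_padding and its parameters are made-up names for this post, not code from the compiler.

```c
#include <stdint.h>

/* At a call site, %rsp must be a multiple of 16. The call instruction then
 * pushes an 8-byte return address, so at function entry %rsp % 16 == 8.
 * Given the bytes a function reserves for locals, this computes the extra
 * padding to subtract from %rsp so that its own calls are 16-byte aligned. */
static uint64_t frame_padding(uint64_t rsp_at_entry, uint64_t locals_bytes) {
    uint64_t rsp_after_locals = rsp_at_entry - locals_bytes;
    return rsp_after_locals % 16; /* subtract this many more bytes */
}
```

For example, a function that reserves no locals must still pad by 8 bytes before making a call, because it entered with the stack off by 8.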
When testing on my Linux machine, I ignored this requirement and had no problems. When I tried the compiler on a Mac, the tech debt finally caught up to me. The commit history doesn’t convey the hair-pulling that ensued as I tried to patch the issue incrementally. Finally, I figured out a general approach that has served me well since then. In retrospect, the solution is easy, so maybe I just didn’t have enough years of experience back then.
This was a fun excursion. Instead of looking at the landscape of Scheme implementations at the time, I asked myself what kind of code I wanted to write to define a library or module for reuse, and what the consequences would be for code generation. Certainly, some of this was influenced by my work with Node.js at the time. With that in mind, a few months into the project, I added module support. Here’s how it looks:
; I can write any code I want in the module
(define var-1 ...)
(define (fn-1 ...) ...)

; Then choose what to export. I don't have to export everything.
(module-export
  var-1
  fn-1)
And in the consumer module:
(require "my-module")
(my-module:fn-1 my-module:var-1)
Because this was my pet project, I chose to explore some interesting quality-of-life functionality: the ability to import symbols into the global namespace and to rename modules as I import them. The most interesting part was using my chosen syntax to analyze what symbols are available throughout the program and to provide clear error messages when a symbol is not defined or visible. I felt especially proud of this part because catching references to undefined symbols was an area of much frustration during my college class.
Aside from the C code in the language runtime, I also added support for defining your own Garlic functions in C. The original implementation used a different syntax, using ccall to indicate calling a C function. In turn, the compiler generated different code for calling a Garlic function versus calling a C function. Eventually, I was able to unify the syntax, making it as easy to call a Garlic function written in C as it was to call a Garlic function written in Garlic.
To achieve this, I had to think about developer ergonomics, as well as make some general improvements to my code. One minor improvement was to ensure I was truly following the System V ABI, including passing arguments on the stack in reverse order. The idea was to call Garlic-native functions in the same way you would call a C function.
Still, there were going to be differences in how Garlic-native and C functions were called, hence the ccall syntax. What I finally landed on was to statically analyze the C code to understand what functions it exports, and to use that information to create wrappers around only those functions, setting up the little bit of extra code needed when calling them. This eliminates the ccall syntax because the developer no longer needs to explicitly indicate they are calling a C function. The trade-off is that the way I analyze the C code is limited, forcing developers to write their exports without comments or pre-processor macros. For example, here’s what my string module exports look like in C:
garlic_native_export_t string_exports[] = {
    {"null?", nullp, 1},
    {"concat", garlic_internal_string_concat, 0, 1},
    {"concat-list", concat_list, 1},
    {"string-tail", string_tail, 2},
    {"symbol->str", symbol_to_str, 1},
    {"string=?", string_equalp, 2},
    {"at", character_at, 2},
    {"downcase", downcase, 1},
    0
};
I’m happy with this trade-off, as it enables effective static analysis.
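As an aside on why the table can end in a bare 0: a zero-initialized terminator entry lets both the runtime and the analyzer walk the exports without a separate count. The struct layout below is my assumption for this sketch; the real garlic_native_export_t is defined by the Garlic runtime and may differ.

```c
#include <stddef.h>

/* Hypothetical layout, for illustration only -- field names are guesses,
 * not the Garlic runtime's actual definition. */
typedef struct {
    const char *name;  /* name exposed to Garlic code */
    void       *fn;    /* pointer to the C implementation */
    int         arity; /* number of required arguments */
} export_entry_t;

/* Walk the table until the zero-initialized terminator entry. */
static int count_exports(const export_entry_t *table) {
    int n = 0;
    while (table[n].name != NULL)
        n++;
    return n;
}
```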
The other design decision I needed to make was the API I exposed to the C code. I pulled on my knowledge of famous C APIs, like in the Ruby and Python worlds, and I’ve been happy with the ergonomics of writing C extensions. You can see the full C API at the time of writing.
A year-and-a-half into the project, I was happy with the ability to write interesting programs, like a little web page served by an embedded web server (C wrapper around microhttpd, HTML generation library in Garlic).
I wanted the next step of the compiler journey to be macro support: the ability to run Garlic code at compilation time to more easily extend the language. Unfortunately, I realized this would mean either creating a parallel interpreter for the language to run during compilation or finally emitting raw machine code instead of assembly text. The latter sounded more appealing, but it would be a large effort. In December 2015, I decided that instead of re-implementing the code generation within the Ruby implementation, I might as well rewrite the entire compiler in Garlic! That was the start of the recursive compiler. I promptly neglected the project for almost two years after.
Since then, however, I have been putting my focus into this re-implementation. The goal is to rename the project to Garlic’s A Recursive Lisp Implementation Compiler. In the process, I’ve learned a lot.
One goal I had for Garlic was to make a language that was useful for writing real programs. The language would never be used at a company trying to make money, but I wanted to use the language personally for more than just one-off test programs. Writing a compiler involves a fair amount of code, with some interesting I/O, string manipulation and data processing. All of that means the implementation language should be expressive enough to handle the complexities in this domain.
To that end, I’ve found gaps in the language that, when implemented, added significant expressivity into the language. The one I’m most proud of is destructuring assignments, allowing me to write code like:
(let ((a . b) (fn-returning-pair))
  (if (> a b) a b))
This is super useful for the types of complex data processing needed in a compiler, as it allows for passing around multiple values easily.
I’ve also extended the standard library to include string and file processing. However, both in terms of language features and standard library functionality, I’ve tried to avoid throwaway work:
Any language feature I want has to be added to the Ruby implementation first, and I will have to reimplement that feature in Garlic later.
Any standard library function I write in C may need to be rewritten in Garlic, “somehow”. This depends on whether I end up reimplementing C module support, but without relying on GCC or Clang, I’m not sure what my plan is.
Still, I’m proud of the language I’ve created, because in my experience, it’s good enough to write code that wrangles the complexity of writing a compiler.
Removing the dependency on an existing C compiler means I needed to output an executable that my operating system can load and run. On Linux, this meant constructing an Executable and Linkable Format (ELF) file. In turn, that meant I needed to understand the structure of an ELF file inside and out.
In the first half of 2023, I put some work into understanding ELF files. I didn’t make any commits to the compiler during that time, but two big artifacts from that time are:
These artifacts were invaluable as I dropped and picked up the project over the course of the next year. With this understanding in hand, I was finally able to create a library to generate these files when given some machine code as the contents:
(require "./elf-x86-64-linux-gnu" => elf)
(require file)
(define test-code
  '(0x48 0xc7 0xc0 0x3c 0x00 0x00 0x00 ; mov $60, %rax
    0xbf 0x2a 0x00 0x00 0x00           ; mov $42, %edi
    0x0f 0x05))                        ; syscall

((compose
   (lambda (b) (file:write-bytes "generated-elf" b))
   (lambda (e) (elf:emit-as-bytes e))
   (lambda (e) (elf:add-executable-code e 'main test-code)))
 (elf:empty-static-executable))
The idea is that the machine code will be generated by the code generation module, leaving very little boilerplate to actually wrap that machine code into an executable. Since then, I’ve even been able to dynamically generate the code, and the ELF file generator makes that code executable!
The reason this functionality was so difficult is that ELF files contain many cross-references between different parts of the file. It’s not possible to generate an ELF file in one pass: references between sections have to be resolved, and those resolutions depend on the size and contents of the other sections in the file. I’m very proud of having cracked this problem, and that too in Garlic code that I find understandable. (We’ll see how I feel when I come back to this code after a break!)
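The pattern behind the fix can be sketched generically in C. This illustrates the multi-pass layout idea, not Garlic’s actual ELF code: a section’s file offset depends on the sizes of everything laid out before it, so a first pass resolves all offsets, and only then can a second pass emit headers and contents that reference them.

```c
#include <stddef.h>

typedef struct {
    size_t size;   /* known up front */
    size_t offset; /* resolved in pass one */
} section_t;

/* Pass one: lay sections out one after another, starting past the headers,
 * so that pass two can safely emit cross-references to these offsets. */
static void resolve_offsets(section_t *sections, size_t count,
                            size_t header_bytes) {
    size_t cursor = header_bytes;
    for (size_t i = 0; i < count; i++) {
        sections[i].offset = cursor;
        cursor += sections[i].size;
    }
}
```

A real ELF writer also has to respect per-section alignment, which only makes the cross-dependencies worse, but the two-pass shape stays the same.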
An unexpected benefit of the rewrite was creating infrastructure for better error reporting than even the original Ruby implementation supported. It wouldn’t be hard to improve the error handling in the Ruby implementation, since Parslet gives the necessary information should I choose to use it. However, I’m proud I was able to pass around the necessary information about lines and columns within my hand-written lexer/parser. See this beautiful error report:
Compilation failed (2 errors)
ERROR: undefined variable 'undefined-variable' (test-errors.scm:2:10)
2| (display undefined-variable)
---------^
ERROR: undefined variable 'undefined-variable-again' (test-errors.scm:4:10)
4| (display undefined-variable-again)
---------^
I definitely took inspiration from modern languages like Rust here.
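The caret line itself is simple arithmetic over the reported column. Here’s a C sketch of the idea (Garlic’s actual reporter is written in Garlic; caret_line is a made-up name for illustration):

```c
#include <stddef.h>

/* Build a caret line like "---------^" pointing at 1-based column `col`,
 * writing at most out_size bytes including the terminator. */
static void caret_line(char *out, size_t out_size, int col) {
    size_t i = 0;
    while (i + 2 < out_size && i < (size_t)(col - 1)) {
        out[i] = '-';
        i++;
    }
    out[i] = '^';
    out[i + 1] = '\0';
}
```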
With a flurry of activity in the last few days, I’m happy with the progress I’ve made. I hope Garlic will be a lifelong project for me, and I don’t know if I’ll ever call it finished. Some of the things I see in the future:
Obviously, finish the code generation. I’ll have to think about problems like dynamic linking and relocatable code for any of this to be scalable. That said, I’m excited at the possibility of making Garlic support low-level programming to avoid the need for C-based scaffolding.
Finally adding macro support.
Retiring the Ruby implementation once the recursive implementation is finished.
Maybe one day writing a hobbyist operating system and using Garlic as the system language?
Until then, I’ll keep hacking away at this project that has occupied over a decade of my life.
A few years ago, I published Talk it Out, a series of 1-2 minute mini-podcasts around effective communication on a platform called Jam, or Just a Minute. The service has since shut down, so I’m re-publishing my series as one blog post. Huge thanks to Pete Davies and Chris Pruett for providing the platform and encouraging me to get my thoughts out into the open.
Contrary to the guidance in this post, the sections below may seem a bit disjointed. Initially, they were weekly episodes that were meant to stand alone.
Everyone talks about how important communication is, but why? So many conflicts or missed expectations happen because of poor communication. Think about some of these examples:
You’re arguing with your partner, or a friend. You just can’t understand why the other person doesn’t get it! Poor communication.
You work somewhere that values good ideas. Fantastic! But how do people know your idea is good or that it even exists, if you don’t tell them?
Ever had to convince someone to help you with something? They can’t help you if you don’t tell them what you need in the first place.
Communication happens in all sorts of ways: written, verbal, body language. No matter the medium, you need to know what you want to say, and say exactly that so you don’t drown out your main point. All of this takes practice. As long as you’re working with other people, you need to be an effective communicator. This post is meant to help you be just that.
There are a lot of ways to improve communication, but if I’m in a pinch, the most important tip is the 1-2-3 of effective communication:

1. What does the audience already know?
2. What does the audience want to know?
3. How do you connect the dots?

Let’s break this down a bit.
One, what does the audience already know? This is how you avoid that dreaded presentation where everyone is bored out of their minds because either they know the material, or it’s all going over their heads. I’ve even seen presentations that do both by repeating the basics and skimming over the hard stuff!
Two, what does the audience want to know? If you’ve ever been to a talk that never got to the point, you know what I’m talking about. You have to actually give your audience what they want!
And three, how do you connect the dots? This is your job as a communicator. But since you know the starting and ending points, you have the tools to get there.
Follow these principles, and your communication will be focused, engaging and actually useful to your audience.
It’s easy to think communication is about you, the person doing the communicating. But really, it’s about the audience. If they don’t understand what you’re saying, all your work is wasted.
I’m a software engineer. All the time at work, I collaborate with people who are not as technical as me, because their expertise is elsewhere. Designers and product managers, for example. The legal team. Marketing! These people are smart, they just specialize in different areas than me.
In one job, I had to convince a bunch of non-technical people why investing in making our software bug-free was so hard to do. The thing is, the product integrated with a bunch of external data sources and put it all together for our users. That also means a lot of places for things to go wrong, because communication between the different sources could break down. If you’re a technical person, that description is obvious, but if you’re not technical… well, it still might make sense, kind of. But the scale of the problem doesn’t really sink in.
So what I did was: I got my coworkers to stand in a line and pass sticky notes between each other. One person was the mobile app, another person was the database where the data was stored, and so on. Showing a little bit of data on the app took a lot of steps. That showed how complicated the system was, and how many places things could go wrong.
What the others lacked wasn’t intelligence. They lacked experience working with software systems, and experience seeing things go wrong. By having them act out the system’s behavior, I got them “in the trenches” in a way that was interactive and memorable.
So next time you need to explain something complicated to people with a different set of knowledge than you, think about what’s missing in their understanding and that’s where you want to focus.
Understanding what your audience already knows isn’t enough. You have to understand what they want to know.
You know those app or product websites that are supposed to get you to download or buy the product? Here’s a mistake I see all the time on those pages: nothing but a list of features. Okay, but what problem is the product trying to solve? When someone ends up on your product page, they want to know if you can solve their problem. For that, you need to tell them exactly what problem your product is solving.
Or another scenario is when my manager and my teammate ask me how my project is going. Both are technical enough that I can give them the same answer, but there’s some subtext. My teammate wants to know when my part will be done so she can build on top of it for her part, so I’ll talk about how I’ll have enough done in the next two days that she can move forward while I continue working on the pieces she doesn’t depend on. Meanwhile my manager wants to know if there’s anything he can help me with, so I’ll give a broad overview and highlight the parts where I’m waiting for another team. Same question, but since the other person wants to know something different, I can communicate more effectively anticipating their needs.
Don’t worry, I don’t have to read their minds. If I’m not sure, I should ask them for clarification before I answer. Once I know them well enough, I won’t even need to do that.
So if you want to make sure your communication is actually hitting the spot, understand what your audience wants to know and give them exactly that.
To ensure your communication has the maximum impact, start with a clear thesis statement, tie your supporting points back to that thesis, and do so incrementally.
One of the most effective things you can do to get your point across is to just say it clearly, and right at the beginning. In other words, start with a clear thesis statement.
What’s a thesis statement? A thesis statement is a sentence that references the topic you’re talking about and states your opinion of the topic.
Have you ever read something and by the end, you didn’t really understand what the point was? Or maybe you wrote an email, only to have people get the wrong message from it? What happened was you didn’t tell the reader clearly and decisively what you wanted them to get out of your words. Back when I was writing about hiring practices in the tech industry, a lot of what I was writing about was controversial. But I always made sure to state exactly what my opinion was in one sentence, so if my readers read nothing else, they would at least hear my central argument. Plus, it helped me, as the writer, make sure I even had a point in the first place!
Not only should you have a thesis statement, but you should put it up near the beginning, maybe even as the first sentence. That will put your argument in the reader’s mind, and they’ll keep it in their mind as they read the rest of your article or email. Now, you might be thinking you don’t want to present an opinion without some evidence to back it up, and that’s great! But don’t worry, you’ll still be backing up your opinion. The difference is, the reader will already know what you’re trying to prove with your evidence.
(You might hear about the thesis statement referred to as Bottom Line Up Front, or BLUF.)
Now, here’s my challenge to you: read this section and see if you can find my thesis statement. Here’s a hint: it’s right up front!
With your thesis statement out of the way, the next step is making sure the rest of your communication backs up your thesis statement.
Every section, every paragraph, every sentence ultimately should tie back to your central point and support that point. When I helped people out with their writing, the most common feedback was to ask how each point they’re making ties back to their thesis. Okay, it actually was that they didn’t have a thesis, but it was hard for them to come up with a thesis precisely because the different paragraphs didn’t back up a coherent point. It’s like they were writing two or more separate articles. Even worse, those separate articles contradicted each other!
That’s not to say you should omit any evidence against your main point. You can incorporate data that doesn’t back up your central argument, but tie it back by saying why you still believe in your thesis after all. That’s how you make your argument bulletproof.
At the end of the day, you’re trying to make a point, and the only way to make that point is to consistently provide evidence to back up that point. Everything else is just confusing.
If you want to keep your audience’s attention, one important tool is the Inverted Pyramid Structure. With the Inverted Pyramid Structure, you put the most important information at the beginning, then reveal increasingly minor details over time.
The Inverted Pyramid Structure hooks your audience’s attention and saves them time. Since the most important, high-level summary is right at the beginning, your reader knows right away whether they care about what you’re saying. This is why a thesis statement is so important: it’s the most important point, right up front. But one step further, the Inverted Pyramid Structure means your reader can keep going, exactly to the point where they have enough information. Everything after that is a detail not worth their time.
Aside from saving your audience’s time, the Inverted Pyramid Structure helps them understand you better. Each point you make primes their brain to contextualize what you’re going to say next. No more reading a detail and not even knowing what it’s talking about!
With that, I urge you to look for how I applied the Inverted Pyramid Structure in this section. In fact, once you recognize this structure, you’ll see it everywhere!
Communicating clearly means nothing without the right substance. The next few sections discuss what you should talk about in the first place.
In high school, presentations always required a lot of preparation. The topic was assigned by the teacher, and I’d have to do research to learn about that topic, almost memorizing what I was going to present. In college and later in the workforce, I got to give presentations on projects I was already working on, and the process was way smoother. I had to give a 45 minute presentation on my undergraduate research? Easy!
If you know the topic you’re talking about, you can adapt to questions others might have. You can deep dive into areas people respond to. Above all, you’re spending less energy recalling the basic facts, so you can talk more confidently. There’s even more trust when people understand you’re knowledgeable about the topic. All of these factors make your message more digestible to your audience.
And it’s okay if you’re not an expert. It may be that the topic you’re actually knowledgeable about is being a beginner at learning something! Talking about learning something new, from a beginner’s perspective, is still valid.
Whatever the case, don’t wing it. Talk about what you know.
One of the most common sources of miscommunication I see is when two people are arguing logically about something, just not about the same thing!
In my field of software engineering, one person might be proposing a solution to make the app faster, even if that means there are some errors here and there, while another person is proposing a solution to make the app less error-prone, even if it gets slower. Both perspectives are useful, but maybe the user is complaining about the app taking up too much space on their device! Without knowing what problem is being solved, your logical solutions may make the other person feel like they’re not being heard.
To avoid this, take the time to state the problem early on. What’s wrong that needs solving? What requirements should a good solution meet? And go deep! Don’t just say you want to make your app better; say you’re looking to make the app take up less space on more limited phones. Often, you can find a discrepancy right there, and you don’t waste your time talking about something that doesn’t matter to your audience. Only keep going after everyone involved agrees on the problem.
If you want to end up on the same page, start on the same page.
Another source of miscommunication I’ve seen is having different definitions for the same words. In the previous section, I gave the example of two software engineers wanting to make the app they’re working on “better”. One wanted to make it faster, and the other wanted to make it less buggy. If they both were clear about what improvements they wanted, they could figure out their disagreement right away. Instead, they used the same word “better” without defining it, so they were talking over each other.
This kind of confusion is weaponized all the time, especially in politically charged scenarios. You can never come to an agreement if both sides aren’t acting in good faith, but here’s what you can do on your side. First, be clear about what your terms mean, which can entail choosing less ambiguous terms. Second, understand how others interpret your words. What are their definitions?
And here’s the fun part: sometimes, you might already be agreeing because you’re using different words for the same concept! So define your terms.
I’ve been talking about getting everyone on the same page, and on that theme, my last piece of advice is to make sure everyone has the same starting assumptions and the same value systems.
Let’s bring up that example again of two app developers, one who wants to make the app faster and one who wants to make it less buggy. Both of them might be very reasonable, logical people, but they find themselves coming to completely different conclusions. Why is that? Two things might be happening. First, the developers might have different starting assumptions. One thinks the app isn’t even buggy to begin with, and the other thinks the app isn’t slow. Second, the developers may disagree on what’s important to users. Do users care about that last bit of speed, or those occasional bugs? Two completely logical people who start at different places and use different rules will of course come to different conclusions!
This is why big corporations care so much about measuring user behavior and having a unified culture. They want everyone at the company to start at the same point and head in the same direction. So the next time you have a disagreement, start by examining the other person’s assumptions and their values. You might still disagree, but your conversation will be much more fruitful.
You have the substance and the structure, next up is adopting the right style. While communication style should be personal to you, some basic principles apply.
One of the most powerful tools in your communication toolbox is storytelling. Humans have a rich tradition of storytelling and we respond well to these narratives. Think fables meant to teach people moral lessons and epics documenting history, however embellished they may be. Not every piece of communication will be a story, but don’t discount its place in your arsenal.
Case studies are a common example of storytelling in settings where you wouldn’t think stories belong. Early in my career, I solved a problem for my team and shared my learnings outside the company. To keep people interested, I set the scene by introducing the problem, then talked about how I solved that problem and new ones as they came up. It was a me vs. the problem narrative. This narrative showed why my solution looked the way it did, conveying my point more effectively than if I had just listed out what the solution did.
(Video of the presentation if you’re interested.)
But even if you’re not talking about a real-world account, your communication can have elements of storytelling in it. When you give some background information, you’re setting a scene and introducing the characters. By going from general points to specifics and tying it back to your main point, you’re creating a beginning, middle and end, ensuring your audience remembers what you said.
You don’t have to write literature to tell stories. Get creative!
When you have the luxury to do so, put down your thoughts, then edit them to be cohesive. I’ve given a lot of advice on how to best structure your communication, but if you don’t have something worth communicating, all the structure in the world won’t help you.
Here are some things I do when writing, including when creating this entire series:
Eventually, some of this will become second nature, and your first draft will look closer to your final product. But transforming what’s jumbled up in your head into something that other people can understand always takes some editing.
(I applied these techniques as I republished my scripts in blog form.)
I’m going to keep this section short: cut the fluff.
In the last section, I talked about the importance of putting down your thoughts and editing them later. If you do this, you’ll find your first drafts verbose and unwieldy. That’s okay! But it does mean you have to be ruthless about removing anything that’s unnecessary.
Get rid of words that don’t add to your point. Get rid of sentences, paragraphs, even entire sections. Some flourishes are okay; they’re your style. But ask yourself: is this really something I need to keep?
This process is painful. You don’t really want to get rid of those beautiful words, right? But remember: the more you cut, the more impactful the remaining words will be.
I’ve talked before about not explaining what your audience already knows, because that’s boring, even patronizing. But that doesn’t mean you completely ignore those points. Instead, concisely summarize the background information you expect your audience to know, in order to provide the right context for the new material you’re about to present. The goal isn’t to actually explain that background material, just to reference it so you and your audience are on the same page.
In fact, I’m leading by example here! To set the stage for this section, I quickly referenced an earlier section about knowing your audience before jumping into the current thesis about establishing context.
As a software engineer, I’ve written a lot of technical documents. Each of those documents has a Background section. The section is usually only a paragraph or two, but it contains links to other material. Most of my readers will already know that material, so the section just jogs their memory. Anyone else can follow the links if they need to brush up on the context.
Give your audience a good starting point, and they’ll follow along much more easily!
Even if you follow all the guidance in this post, you need to live and breathe communication for it to be effective. Remember, communication is a collaborative exercise.
You can’t communicate effectively without listening.
There’s an idea floating around that some people want to be listened to, and some people want solutions. The truth is, even those who want solutions need to be listened to. As I mentioned in an earlier section, Agree on the Problem, you have to make sure you’re addressing the right problem. And for that, you have to listen to what the other person actually wants.
So how do you listen effectively? Here’s what you do:
Try to understand the big picture while you listen to the small details. Not everyone knows how to word their problems in a way that makes sense to you, so you’ll have to hear each detail, read between the lines and extract the themes all of their words convey.
Ask clarifying questions. Don’t interrogate them. Ask with curiosity so you can understand better. Reflect that curiosity in your tone so the other person doesn’t get defensive.
Repeat your interpretation back to them so you know you’re on the same page. Make sure they agree you understand them.
There’s a common saying: measure twice, cut once. I say, listen twice, talk once.
I’ve talked a lot about what you can do to communicate clearly and without misunderstandings. I’ve even talked about how to make sure you don’t misunderstand the other person. But all of this assumes all parties are communicating in good faith.
What exactly does “in good faith” mean? It means everyone involved is trying to reach a common conclusion, even if it’s not the position they started with. But not everyone wants a shared understanding. They just want to win at all costs. You can see this in their communication style, which uses tactics such as, but not limited to:
Not being consistent. If you give a counterpoint to something they say, they come back with a counter-counterpoint that contradicts their original argument! You keep going back and forth, but somehow, they just have to have that last word.
Arguing against what they think you said, putting you on the defense for something you didn’t even want to talk about and making you clarify the same points again and again.
Flooding you with tangents and even sources that don’t back up their argument, forcing you to do their research for them. By the time you have a response, they’ve moved on.
The common thread in all these tactics is the other person doesn’t want to communicate effectively. Recognizing that quickly is the key to cutting short that argument. Save your breath for someone who actually wants to talk to you!
That’s not the way I want my documents to operate. I’m not building web applications, just adding isolated interactive demos to an otherwise static medium. In the last year, I’ve discovered a great framework, Astro, that fits that exact niche. Usually, I prefer to avoid frameworks, but I have been happy enough with Astro to document my experience with it.
This is going to sound a bit like I’m writing marketing copy for Astro, but honestly, I found it refreshing that Astro’s philosophy aligned well with mine. Astro promotes content-heavy websites by rendering components on the server, then injecting only the necessary Javascript to make isolated “islands” of interactivity on the client side.
Astro allows you to use any Javascript component library (React, Vue, Svelte, Lit, etc.), or Astro’s own component framework, to build a website. Regardless of what you choose, the Javascript is executed on the server to output static HTML. CSS pre-processors, like Sass, are also supported. The important piece is that no client-side Javascript is shipped. That means, unlike with SPA frameworks, you get multiple pages of static HTML and CSS with links between them. At the same time, I still get to use components, allowing me to factor out common elements when coding.
When I use a third-party component library like React, I can optionally mark a component as a client component. The component is still rendered on the server, but it is “hydrated” (brought to life with Javascript) on the client. I can enable this when the page loads or when the server-rendered HTML for the component is scrolled into view (Astro uses the Intersection Observer API). Either way, only the Javascript needed to enable these client-side components is shipped to the browser. This is the Island Architecture. Note that it is possible to share state between islands, which I do in some limited cases.
(As a side note, I chose to use Astro components for anything server-only and Svelte for anything client-side. For my personal projects, I like Svelte’s approach of using a compiler to emit targeted DOM updates, as if I were using jQuery or vanilla JS.)
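As a concrete sketch of how these pieces fit together (the component and file names here are illustrative, not from one of my actual projects), an Astro page mixing server-only content with a hydrated island looks something like this:

```astro
---
// This frontmatter runs only on the server at build time;
// none of it ships to the client.
import Layout from '../layouts/Layout.astro';
import Visualization from '../components/Visualization.svelte';
---
<Layout>
  <h1>Rendered to plain HTML on the server</h1>

  <!-- No client directive: rendered once on the server, zero client JS. -->
  <Visualization />

  <!-- client:visible: the server-rendered HTML is hydrated with
       Javascript once it scrolls into view (via the Intersection
       Observer API, as described above). -->
  <Visualization client:visible />
</Layout>
```

Only the second `Visualization` pulls any Javascript into the page, and only once the reader actually scrolls to it.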
Both of these pieces of functionality are things I could build myself, which I appreciate conceptually. Doing so in a framework-agnostic way, with Typescript, hot reload, etc., is what Astro brings to the table.
Disclaimer: I haven’t used Astro professionally, though I would totally consider it if I worked at a startup that needed content-heavy microsites with minimal fuss. However, I have thoroughly enjoyed using Astro for two of my personal projects, both focused on teaching.
First, Interactive Computer Science, where I discovered Astro. I like using common server components for a consistent visual treatment across the website (for definitions and study tips, for example). The client components are reserved for the interspersed interactive exercises and visualizations. I had a lot of fun building a full Turing machine simulator and its associated UX. Best of all, I was able to utilize the interactive exercises in my class!
Second, NES development on the web. This is another content-heavy project, but the interactive visualizations are prominent. In particular, I was able to embed a WebAssembly-based 6502 assembler and an NES emulator, allowing readers to write 6502 assembly code and run it right in the browser! Outside of this use case, I’m also using client components for the type of interactive visualizations I wish I had when learning Gameboy Advance programming before college, things like visualizing bit fields and other low-level data representations.
As with any framework, I have spent time wrangling Astro. But overall, Astro, Typescript, SASS and Svelte are all tools that have allowed me to focus on the content of my visualizations, not the infrastructure that powers them.
This part isn’t specific to Astro, but if you ensure your server-rendered HTML is static (no per-user differences, no fetching data dynamically for each request, etc.), you can deploy to any host that supports static HTML. For my pet projects, I’ve been happy using Fastmail’s static website feature. I could also have used Github pages of course.
At this point, it may sound like my blog would be the perfect fit for Astro. To be honest, I think so too, and I’m tempted to rewrite the entire blog using Astro instead of Jekyll. I could get rid of a bunch of hand-rolled Javascript and fully utilize a UI library like Preact as a first-class citizen (instead of just pulling it in via a CDN).
For now, however, I’m going to hold off, for the same reason I’m wary about all-in-one frameworks in general. The more dependencies there are, the more complicated both development and maintenance become. As a practical example, I have an item on my to-do list to upgrade the interactive CS website to the latest Astro, something that’s blocked by a conflict between the latest Typescript and the latest Astro. On the flip side, Jekyll, especially in conjunction with the default Github pages infrastructure, has been mostly set-and-forget. For my blog, I’m going to use “boring” technologies as much as possible. I want my blog to be cold-blooded software.
And if it weren’t for the sheer density of interactivity in my online teaching material, I would consider ditching Astro for those projects too. I’ve enjoyed Astro, but I wish I could use fewer dependencies.
I don’t claim to be an expert, and I’ve been piecing together this knowledge through many online resources. Like the last post, a lot of these are notes for myself.
One thing to note is I’m a very stubborn person, and a running theme is me doing things the non-standard way just on principle 😅
The first controversial decision is to use Podman instead of the industry-standard Docker. Podman attracted me because it doesn’t use a daemon-based architecture, meaning individual containers will run under specific users, instead of a single daemon typically running as root. I could also say I was concerned about Docker’s approach to monetization, but Red Hat (makers of Podman) has generated some controversy lately as well. Mostly, I like the daemon-less architecture and thought this would be a good time to play around with some new technology.
Installing Podman, and the associated Podman Compose for small-scale container orchestration, is easy:
sudo apt install podman podman-compose
Note that the previous stable version of Debian (prior to Bookworm) shipped some pretty old versions of Podman and required installing Podman Compose manually. Moreover, the old version of Podman meant installing an older version of Compose from a branch. With Bookworm, I don’t have this problem.
With this setup, I can usually just use any docker-compose.yml file almost as-is. Instead of running sudo docker-compose -f <filename.yml> up, I just run podman-compose -f <filename.yml> up. Very convenient, thanks to the Open Container Initiative creating industry-wide standards that multiple tools can leverage. But there are two major differences I need to think about when adapting instructions for Docker to use Podman:
A lot of Docker Compose files use image names that are not prefixed with the hostname of any container registry. This is because Docker is configured to default to docker.io, the Docker company’s official registry. I can configure Podman to do the same, but I like being explicit with my code and configuration. This means if an image is referenced without a registry hostname, I just have to prepend docker.io/ to the name.
At least as of Podman Compose 1.0.3, I found .env file handling not where I was expecting it to be. Generally, these files are used in two ways: to substitute values into the Compose file itself, and to pass environment variables into the running containers. Using the env_file directive, you can use a filename other than .env. However, I found that doing so prevents values from being substituted directly in the Compose file. For now, I’m making sure each service I want to configure has its own directory containing a default-name .env file when needed.
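To illustrate both differences (the image and values below are placeholders, not one of my actual services), a Compose file adapted for Podman might look like:

```yaml
# compose.yml -- illustrative only.
services:
  web:
    # Registry hostname spelled out explicitly, instead of relying on a
    # configured default registry:
    image: docker.io/library/nginx:latest
    # ${WEB_PORT} is substituted from a default-named .env file sitting
    # in the same directory as this Compose file:
    ports:
      - "${WEB_PORT}:80"
```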
When trying to set up some more complex applications, I found that containers were not able to resolve each other by container name. I tried a bunch of solutions, only to find that I needed to reboot (or probably run some command, but rebooting did the trick). So, I don’t know if everything below is necessary, and it’s worth trying just the first command to see if that’s enough. Just remember to reboot!
First, install the golang-github-containernetworking-plugin-dnsname package. Theoretically, this should be enough, as it allows containers to DNS resolve each other by container name, as long as they are in the same virtual network:
sudo apt install golang-github-containernetworking-plugin-dnsname
But, when I was trying to figure out the networking prior to rebooting, I saw some errors that prompted me to do the following:
sudo apt install dbus-user-session
sudo systemctl --user start dbus
Another issue I encountered was errors around logging. This was especially relevant when I was trying to debug the inter-container networking issues I described above. I don’t know too much about this, but it seems like the standard journald-based logging requires some extra permissions. The way I ended up fixing the issues was to switch to file-based logging for the user in question (I talk more about the user setup below). For example, when setting up Immich, I updated the container config as follows:
sudo -u immich mkdir ~immich/.config/containers
sudo -u immich cp \
/usr/share/containers/containers.conf \
~immich/.config/containers/containers.conf # copy over the default config
sudoedit -u immich ~immich/.config/containers/containers.conf
In this configuration file, set:
events_logger = "file"
log_driver = "k8s-file"
It looks like I could have just added the user in question to the systemd-journal group. For now, I’m not bothering, but I’m willing to try it out the next time I encounter this problem.
EDIT (May 25, 2025): I tried adding a user to the systemd-journal group, and it worked! The original error I was getting when trying to run something like podman logs was:
Error: initial journal cursor: failed to get cursor: cannot assign requested address
Then, I ran, for one of my later services:
sudo usermod -a -G systemd-journal jitsi
After restarting the service, I was able to get the logs just fine. No need to update the container config as described above.
Building on Podman’s rootless architecture, I decided to run each service as a separate user. Additionally, I wanted these users to be system users. Unlike regular users, system users don’t, by default, have a login shell, so they can’t be logged into. They also don’t show up in a listing of login users, say in the login screen of a graphical installation. This latter point is moot for me because I didn’t install a GUI. Again, I’m making these choices on principle.
First, I added a services group to make it easy to give common permissions to all the service users. By default, system users are placed in the nogroup group, so I wanted a shared group for these users.
sudo addgroup --system services
Next, I added the user. For example, when preparing to set up Forgejo, I created a system user called forgejo. Two things to note: I have to explicitly ask for the user to be added to the services group, and I have to explicitly specify the home directory. By default, system users have their home directory set to /nonexistent, which doesn’t exist and is not created by the adduser command. I was hoping to get away with no home directory, but unfortunately, Podman stores its data in the running user’s home directory.
sudo adduser \
--system \
--comment 'Forgejo system user' \
--home /home/forgejo \
--ingroup services \
forgejo
# The above command should output the user ID of the new user. But if you
# forget, you can check after the fact:
id forgejo # in this case, the ID is 102
Next, I had to set up subuids and subgids for the user. Containers run processes and create files/directories under “virtual users”, so that container-specific processes and data don’t clash with existing users on the system. To support this, subuids and subgids reserve a large range of user and group IDs for the parent user to allocate as needed.
# Check the current range of subuids/subgids
# Format is "username:startid:numids"
cat /etc/subuid
cat /etc/subgid
# Adjust the command to use the next available range
# Format is "startid-endid"
sudo usermod --add-subuids 1001000000-1001999999 forgejo
sudo usermod --add-subgids 1001000000-1001999999 forgejo
# Confirm the subuids/subgids were added
cat /etc/subuid
cat /etc/subgid
Finally, when running containers, I encountered errors related to the fact that the users running the containers were not logged in. The systemd login manager can start up a “user manager” for non-logged in users by enabling lingering:
# Use the user ID of the user
sudo loginctl enable-linger 102
Note that the home directory, the subuids/subgids and lingering would automatically be set up for non-system users. But again, on principle, these users have to be system users!
With this setup, I can already start up a service using Podman Compose. For example, for Forgejo, I would run:
# Run as the forgejo user
# Run in daemon mode (in the background)
sudo -u forgejo podman-compose -f /path/to/forgejo-compose.yml up -d
In fact, I would do exactly this to test out the service works. But, because Podman doesn’t use a global daemon, nothing exists to start up running containers after a system reboot (Docker supports this with the restart directive). Instead, I use systemd to manage the application as a service. I start by creating a service configuration file called forgejo.service. A few things to note about this service are:
The service runs as the forgejo user, under the services group and with the home directory as the working directory.
Using the After and Wants directives, I ensure the service starts up on its own after a reboot, at the right point in the system initialization.
[Unit]
Description=Forgejo self-hosted lightweight software forge
After=network.target
Wants=network.target
[Service]
Type=oneshot
RemainAfterExit=true
User=forgejo
Group=services
WorkingDirectory=/home/forgejo
ExecStart=/usr/bin/podman-compose -f /path/to/forgejo/forgejo.yml up -d
ExecStop=/usr/bin/podman-compose -f /path/to/forgejo/forgejo.yml down
[Install]
WantedBy=multi-user.target
I can install this service by placing the configuration file in the system-wide services directory, enabling the service and starting it up.
# Running this in /path/to/forgejo
sudo cp forgejo.service /etc/systemd/system/forgejo.service
sudo systemctl enable forgejo.service
sudo systemctl start forgejo.service
echo $? # Confirm the service started up correctly
# The return code should be 0
At this point, the service will start up automatically after a reboot. If I want to stop or restart the service myself, I can do that too:
sudo systemctl restart forgejo.service
sudo systemctl stop forgejo.service
Finally, the start and restart commands are a bit of a black box, and you don’t get to see errors or other logs on the command line. Instead, you can use journald to view the logs. Unfortunately, this doesn’t include all the logging, namely the part where the container images are downloaded. Given that this part can take a long time, I suggest running podman-compose manually to download the images before running it via systemd.
sudo journalctl -fxeu forgejo.service
All of this might seem like a disadvantage compared to Docker, but I prefer this system. I think it follows the Unix philosophy, letting Podman focus on containerization and systemd focus on service lifecycle.
There has been a lot of setup, but we’re almost done. The last part is making the service available on the internet, so I can access it when I’m not at home. I could definitely use a self-hosted VPN, and I might do that for some services in the future, but I want to share some of these services with other people.
The basic setup has a few parts:
Here’s the final architecture, which I’ll describe in more detail below:
This part is pretty straightforward. I just log into my domain registrar’s DNS settings and create a new subdomain, set up as an A record. Generally my IP address doesn’t change frequently, but it is technically dynamic, so I want to automatically update the A record when my IP address changes. To do this, I use DDclient.
The exact details of how to set up DDclient will depend on your DNS provider, but you should get a configuration dialog during installation or if you manually reconfigure:
sudo apt install ddclient
# To reconfigure later
sudo dpkg-reconfigure ddclient
# Or manually edit the configuration file
sudoedit /etc/ddclient.conf
# Don't forget to manually refresh
sudo ddclient
What I like to do is set up my subdomain to point to 0.0.0.0, update the configuration to include the new subdomain and refresh. This way, I can verify the subdomain is going to update correctly if my IP address changes.
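For reference, a dyndns2-style configuration might look roughly like this. The protocol, server and credentials depend entirely on your DNS provider; every value here is a placeholder:

```
# /etc/ddclient.conf -- sketch only; all values are placeholders.
daemon=300                     # how often (seconds) to check for IP changes
use=web                        # discover the public IP via an external lookup
protocol=dyndns2               # replace with your provider's protocol
server=dynamicdns.example.com
login=myusername
password=mypassword
myservice.mydomain.com         # the subdomain to keep updated
```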
I want all the services on my server to be available over port 443, instead of having to specify the port when accessing most of the services. Additionally, I don’t want to have individual containers bind to ports like 443, which would require the service users have additional privileges. The way to do this requires a few steps:
Configure my router to forward ports 80 and 443 to my server.
Use Caddy with virtual domains as a reverse proxy to the services. Caddy is the only service on the system listening on ports 80 and 443. I like Caddy for this simple use case because, unlike Nginx, the configuration is simple and I don’t have to separately configure Certbot to provision Let’s Encrypt HTTPS certificates.
Ensure that services only expose non-privileged ports, ones greater than 1024. For example, a service might bind to port 80 inside the container but expose it as port 3000 on the host. This is something I have to check in the Podman Compose configuration files, because a lot of the time, they try to expose privileged ports. I also make sure to not enable HTTPS for that service if that’s an option.
After configuring my router’s port forwarding and starting up a service on a non-privileged port, I installed Caddy:
sudo apt install caddy
Before configuring any specific services, I need to add some global configuration. Opening up /etc/caddy/Caddyfile, I commented out the default configuration and added the following:
{
# Used primarily as the email to associate with Let's Encrypt certificates,
# in case any communications are needed.
email [email protected]
}
Now, I can add service-specific configuration, one block per service. Almost all the services are similar:
mysubdomain.mydomain.com {
# Point to whatever port the internal service exposes
reverse_proxy :3000
}
By default, since I don’t specify a protocol (for example http://), Caddy defaults to HTTPS and provisions a Let’s Encrypt certificate for this domain. This works automatically as long as port 80 on my router is being forwarded to this Caddy instance.
Now, I just restart Caddy and I’m good to go:
sudo systemctl restart caddy
Note that many services allow you to specify what hostname they will run on. This is typically configured as an environment variable or as part of a configuration file. Among other reasons, configuring the hostname is useful for display purposes within the application.
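For example, a service’s .env file might pin the public hostname. The variable name here is made up; each application documents its own:

```
# .env for a hypothetical service; the variable name varies per application.
APP_HOSTNAME=mysubdomain.mydomain.com
```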

Because I have to customize my application installations with details such as file paths, exposed ports and user information, I created some tooling to manage these installations. The tooling is straightforward:
Each service’s configuration is stored in its own directory. The directory typically consists of the Podman Compose file, the systemd service file and optionally, a .env file.
The parent directory for these services contains a script to copy over the systemd service file to the right place and a README with useful commands. All of this serves as a reminder for myself how to install and manage these services.
These files are managed using git and stored on my Forgejo instance. Meta!
I’m not sharing the repo because I don’t want to share all the specific details of my server setup, like file paths and hostnames.
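As a rough sketch, though, the install helper boils down to something like this. The function name and the one-directory-per-service layout are illustrative, not my exact script:

```shell
# install_service SERVICE: copy SERVICE/SERVICE.service into systemd's
# system directory, then enable and start it. Illustrative sketch only.
# Set DRY_RUN=1 to print the commands instead of running them.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi
}

install_service() {
  service="$1"
  run sudo cp "$service/$service.service" "/etc/systemd/system/$service.service"
  run sudo systemctl enable "$service.service"
  run sudo systemctl start "$service.service"
}

# Dry-run example; prints the three commands without executing them:
DRY_RUN=1 install_service forgejo
```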
With these steps, I’m happy with how isolated each service is, how it automatically starts up with the machine and how little extra maintenance is needed once I get a service running. Even getting the service installed in the first place is easy thanks to containerization. I installed two services recently in just a few minutes.
Nothing about these steps is revolutionary; they use off-the-shelf tools combined exactly as they are meant to be. Having this documented here hopefully helps others understand the larger ecosystem of tools and how they can be put together to spin up a useful, low-maintenance home server.
This blog started at the end of 2018 as a way to document how I set up my Raspberry Pi. Some time ago, the Pi finally broke down, and I’ve had terrible luck with Micro SD card corruption. After a few unsuccessful attempts to get the Pi running again, I picked up a used Dell OptiPlex 7040 Micro. Here are my notes on getting this server set up, since I’ve learned a thing or two in the last 4.5 years. Most are notes for myself.
Today’s post will cover getting the server hardware set up and Linux installed. I’ll cover more about the home server capabilities in later posts.
As mentioned above, I’m using a Dell OptiPlex 7040 Micro because that’s what I found on Craigslist. Given that I was happy with a Raspberry Pi 3B+ in the past, this system is overkill. But, it is nice having a modular, upgradeable system compared to a System on a Chip (SoC).
To that point, I picked up a 1TB 2.5” SATA SSD to swap into the system, and I installed an older 512GB M.2 NVMe SSD I had lying around (it was one I thought had died, so I had replaced it on a different machine, but it turned out to be working fine). I’m using the NVMe drive for the OS installation and /home partition, and the SATA drive for the actual storage. For example, I installed Immich to back up my photos, and I’m using the SATA drive for storing the photos.
It’s nice to have these types of hardware slots inside the small form factor, compared to using a USB drive sticking out of my Raspberry Pi.
The Dell OptiPlex 7040 does support wireless, namely WiFi and Bluetooth, but the unit I picked up didn’t have the necessary hardware installed. I did some research on installing an M.2 wireless module. Ultimately, it’s not that important to me because I placed the machine near my router with an ethernet connection, so I passed on adding this hardware.
Having started my Linux journey with Ubuntu, I now run Debian on my personal laptop. I went with Debian for the server as well. The newest stable release, Bookworm, recently came out, so I’m okay with stable for now and will update to testing if I feel anything has gotten too old.
I didn’t install a graphical desktop environment because I wanted the machine to be a headless server. I configured an SSH server during installation, so I can remotely log into my server. Just as importantly, I didn’t configure a web server. Debian’s default is to use Apache, and I prefer to use Caddy or Nginx for my relatively meager needs.
Overall, installing Debian with the graphical installer was straightforward, but there were a few additional things I needed to do.
It seems there’s a bug with the EFI firmware on the OptiPlex specifically related to the NVMe drives (everything worked out of the box when I initially installed Debian on the original SATA drive). Basically, Debian puts its EFI binary in /boot/efi/EFI/debian/grubx64.efi. Even after going into the boot settings on the machine and changing the EFI binary path, the machine seemed to be looking in a default location of /boot/efi/EFI/boot/bootx64.efi, causing the machine to think there was no installed OS.
Once I figured out the issue, the fix was simple, with the following steps.
Start up the Debian graphical installer and open a terminal session. When prompted, mount /dev/nvme0n1p1, as that’s the boot partition. Then, copy the EFI binary from its original location to where the machine expects it:
cd /boot/efi/EFI
mkdir boot
cp debian/grubx64.efi boot/bootx64.efi
Reboot, and Grub should start right up.
I didn’t want to keep switching to a root user for all my administration, so I set up sudo:
avik$ su -
root# apt install sudo
root# usermod -aG sudo avik
Again, this is mostly for myself, and it’s pretty much the same software I make sure to install on any Debian machine I own:
sudo apt install command-not-found tmux
sudo apt install vim
sudo apt install curl git
sudo apt update # generate command-not-found index
Also, adding export EDITOR=vim to my .bashrc ensured that sudoedit (to edit files as root without running your editor as root) uses Vim.
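The relevant line is just:

```shell
# In ~/.bashrc: make sudoedit (and anything else that honors $EDITOR) use Vim
export EDITOR=vim
```

Any tool that respects the convention will pick this up on the next login shell.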
As I’ll talk about in a later post, I will mostly use containers to run software. But historically I’ve used asdf and some plugins for Node.js, Ruby and Python. I installed those too, probably out of habit.
When I install Debian, I always separate the /home partition. If I do this during a new install, then the installer can set up the system to automatically mount the right partition as my /home directory. If I want to preserve an existing /home partition, I’d have to set up the auto-mounting myself.
Either way, I also wanted to auto-mount the SATA storage drive. First, I had to decide which file system to use for the storage drive, and I went with Ext4 for simplicity. I would have enjoyed trying ZFS, but without native support in Linux (that I could find), I stuck with whatever was well supported:
sudo fdisk -l # find the device for the drive
sudo mkfs -t ext4 /dev/sda1 # format that device
Now to automount the drive (and these steps generally apply for a /home partition as well):
sudo mkdir /mnt/storage # create the mount point
sudo blkid # figure out the UUID of the drive
sudoedit /etc/fstab # see below for what to add
sudo systemctl daemon-reload # pick up changes to /etc/fstab
sudo mount -a # mount!
When editing /etc/fstab, add the following line:
# UUID comes from the output of blkid
# defaults - use the default mount options
# dump=0 - disables the legacy dump(8) backup utility for this filesystem
# pass=2 - `man fstab` says use "2" for non-root filesystems
UUID=... /mnt/storage ext4 defaults 0 2
EDIT (Sep 5, 2023): I got a suggestion about an alternate way to identify disks in /etc/fstab, which took me down a rabbit hole. Here’s what I found:
Firstly, this was something I already knew, but you don’t want to use device identifiers like /dev/sda1. There’s no guarantee these will stay the same across boots. That’s why I used the UUID. The UUID is stable, at least until reformatting.
The suggestion was to use the paths inside /dev/disk/by-id. These are symlinks to device files like /dev/sda1, but the filenames are human readable. For example, instead of a UUID like I’m using, my SATA SSD partition would be named ata-INTEL_SSDSC2KB960G8_BTYF92160AB7960CGN-part1. Definitely nicer! This seems like the way to go, and I’ll try it out in the future.
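As a hedged example, an /etc/fstab entry using the by-id path for that partition (with standard mount options) would look like:

```
# Identified by its /dev/disk/by-id symlink instead of a UUID
/dev/disk/by-id/ata-INTEL_SSDSC2KB960G8_BTYF92160AB7960CGN-part1 /mnt/storage ext4 defaults 0 2
```

The last two fields keep the same meaning as before: dump disabled, fsck pass 2 for a non-root filesystem.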
As always, the Arch wiki is fantastic, even for non-Arch users. Note that this wiki page doesn’t talk about the approach mentioned above, but the forums sure discuss it at length!
With all these changes above, I now have a running Debian server that I can start playing around with. Next up is how I installed the right software to make the server useful!
Disclaimer: these are my thoughts after just two semesters of teaching, and I don’t mean for this to be any sort of “words of wisdom”. For that reason, I’ll keep my thoughts light. If anyone with more experience wants to weigh in, I would love to hear your thoughts!
There’s a lot of trial-and-error. I thought teaching required credentials and apprenticeship, the way I saw student teachers practice teaching in high school. Instead, I was given pretty much free rein to teach how I wanted, as long as I submitted grades at the end of the semester. I found it simultaneously freeing to have that autonomy and scary to be trusted to that degree. But, even if I had the credentials, I would still need to adapt my teaching style every semester based on some (informed) trial-and-error. I want to give a huge thanks to my mentor at the same university who guided me on the course design.
Going the extra mile is really expensive. Students appreciated my timely grading, detailed feedback and copious office hours throughout the semester. I wanted students to have as many resources as possible. For example, homework assignments were due as late as possible on the Tuesday before a Thursday exam, and I tried to finish grading by midday on Wednesday so students could use that feedback to study for the exam. Unfortunately, doing this takes a lot of effort, and it was probably the primary source of my burnout on teaching. I don’t blame teachers who prioritize ease of instruction over individualized support.
It’s hard to teach within a broken system. And by system, I mean all of education, not the institution. Students often take a full course load while working full-time due to financial constraints, something I never had to do because of my privilege. They also were not always prepared thoroughly by previous classes, again something I was privileged enough to not worry about because my parents could afford the rent needed to send me to well-funded schools and I had the time to focus on my academics even before college. No matter how much effort I put into teaching, I can’t help someone who doesn’t have the 10-12 hours a week needed to truly learn the material.
Inclusive policies can help decrease the burden. Recordings for all lectures and office hours, open book exams, flexible deadlines if someone asked… all of these prevented the need for additional scrutiny on my part to determine if someone was “worthy” of an accommodation. Sure, if someone had medical documentation, they could request such accommodations via the university, but inclusive policies benefit those who can’t get a formal diagnosis or are afraid of retaliation. If I kept teaching, I would continue finding ways to extend “accommodations” to all students by default, both to make my life easier and because these accommodations are, as the Speech Prof says, just good teaching practices.
I once heard that the first year of teaching is just learning to keep your head above water, and I had to give up before I got into the groove, apparently. That does mean the above reflections are based on very little experience. But to be clear: I loved teaching, and I intend to find my way back to it.
You should read the article to see how gender in our society isn’t clear cut. But, I do want to expand on some areas where we can support men better.
(The title of this post is a quote from Trystan Cotten in the article.)
One of the contributors, Trystan Cotten, talks about how being African American affected his life experiences pre- and post-transition. Cotten says it beautifully: “Life doesn’t get easier as an African American male. The way that police officers deal with me, the way that racism undermines my ability to feel safe in the world, affects my mobility, affects where I go.” This lack of safety is gendered too: pre-transition, he was rarely pulled over, or was let off when he was; post-transition, his more frequent interactions with the police start with him being asked if he has any weapons.
Alex Poon talked about how his genetics as a Chinese man don’t set him up to have a “lumberjack-style” beard, underscoring a fear that his stereotypically feminine facial features will impede his masculine presentation.
Both these stories make it clear why racial equality is needed. Whether it’s Black Lives Matter, or representation of Asian men in media, gender equality can’t be achieved without racial equality. Regardless of your gender identity or your race, if you want to create a society that supports men, support racial justice reform in all its forms.
Cotten also points out how there aren’t spaces for men to share their mental struggles. Contrasting his experience in gay, feminist and women’s circles, “there was a space and place you could talk about your feelings. In the last, you know, 10 years or so [post-transition] I can’t find those spaces necessarily for men, and I don’t know if men necessarily make those spaces for each other.”
And it’s not just a responsibility for those of us who are struggling, because we can’t share if no one is listening. Both of the other contributors, Zander Keig and Chris Edwards, talk about how society became less friendly toward them once they transitioned. Keig sums it up well: “What continues to strike me is the significant reduction in friendliness and kindness now extended to me in public spaces. It now feels as though I am on my own: No one, outside of family and close friends, is paying any attention to my well-being.”
Once again, we can create a better society for all by creating safe and inclusive environments for men:
Some of that is on men who are already creating communities, for example by adopting the right community rules to ban toxicity and allowing (respectful) discussion of topics like mental health. Often, communities for men are taken over by trolls shifting the conversation to blaming women, instead of focusing on the problems men face in an overly gendered society. Community builders need to keep the conversation on a productive discussion of men’s issues.
Some of the responsibility falls on everyone who is trying to create social safety nets. As Keig points out from his experience in social work, “when I would suggest that patient behavioral issues like anger or violence may be a symptom of trauma or depression, it would often get dismissed or outright challenged. The overarching theme was ‘men are violent’ and there was ‘no excuse’ for their actions.” Men who behave violently do need to be held accountable for their actions, but we also need to provide better mental health services that understand how those men end up acting violently in the first place.
The stories make it clear there are societal advantages for men, so I don’t want to suggest women have “made it” in our society. But for many men, especially but not limited to those in marginalized groups, the picture isn’t rosy. We need to create a society that treats everyone as valuable, regardless of other factors like race. We need to create support systems to elevate all those in need, at times taking into account the specific needs men have. Only then can we create a society that supports men.
For what it’s worth, I raise awareness for a men’s health charity called Movember because mental health is really important to me, and men experience mental health struggles in a specific way that’s deep rooted in our culture of tough masculinity. If you want to help, please reach out to a friend, participate in a Movember event to keep the conversations going, or donate to Movember. Let’s save some lives!
The last year and a half have been damaging to all of us. Losing a job makes us feel like less of a provider, and the pandemic has been profoundly isolating. Worse still, for some of us, this isolation has not even been anomalous. Exaggerated maybe, but not anomalous. And I’m sure for many of us, there has been a time in our lives where we latched onto ideologies that ultimately hurt us as we looked for connection. The polarization and echo chambers enabled by social media have made this self-destructive behavior easier than ever.
I’m here to say from personal experience, it’s okay to not be okay. Your need for meaningful connection and personal autonomy is valid. Feeling overwhelmed is valid. Feeling like you’re drowning in the expectations of others is valid. Feeling like things are not going your way is valid. Feeling like no one gives you the attention you need is valid. We don’t have to tough it out. Asking for help and being vulnerable won’t make you less of a man.
That’s it. No solutions right now, no advice on what to do next. Just acknowledgement that your feelings are valid.
I raise awareness for Movember because mental health is really important to me, and men experience mental health struggles in a specific way that’s deep rooted in our culture of tough masculinity. If you want to help, reach out to a friend, participate in a Movember event to keep the conversations going, or donate to Movember. Let’s save some lives!