dustingoodman.dev - Dustin Goodman's Blog
A blog about Engineering Leadership, Software Development, and Personal Growth
https://dustingoodman.dev/
en-us

Building a Multi Platform Community Engagement Tool
https://www.thisdot.co/blog/building-a-multi-platform-community-engagement-tool
In a YouTube show, my team discussed how to build a multi-platform community engagement tool to accompany a theoretical e-commerce business, A Latte Java. In this post, I summarize the discussion from "Build IT Better Architecture: Roundtable with This Dot Labs Developers".
Thu, 08 Apr 2021 00:00:00 GMT
Architecture

Resolving Serverless Webpack Issues
https://dustinsgoodman.medium.com/resolving-serverless-webpack-issues-efae729e0619
Through debugging a critical issue for my team, I learned several important lessons about project and file structure and other key optimizations for teams considering the Serverless Framework. I'll discuss the techniques and tools I used and share some of the lessons I learned along the way.
Fri, 27 Nov 2020 00:00:00 GMT
Architecture, Serverless, AWS

How I managed to encounter and recover from Computer Science's Two Hardest Problems
https://dustinsgoodman.medium.com/how-i-managed-to-encounter-and-recover-from-computer-sciences-two-hardest-problems-83f108b8dd1
Back in August 2019, I managed to encounter both of computer science's hardest problems: (1) cache invalidation and (2) naming things, while trying to do something relatively simple. I wanted to share my experience and the lessons I learned from that situation.
Sun, 22 Nov 2020 00:00:00 GMT
Architecture

Migrating from REST to GraphQL
https://www.thisdot.co/blog/migrating-from-rest-to-graphql
With the rise of GraphQL, I wanted to share a brief overview of the problems with traditional API solutions, the benefits of migrating to GraphQL, and strategies for migrating to a GraphQL solution.
Mon, 15 Feb 2021 00:00:00 GMT
GraphQL, JavaScript, REST

Migrating a classic Express.js to Serverless Framework
https://www.thisdot.co/blog/migrating-a-classic-express-js-to-serverless-framework
My team recently needed an easy deployment of a small Express.js server, and we discovered that Serverless Framework helped us do this at a very low cost.
Mon, 17 Jan 2022 00:00:00 GMT
Serverless, Node.js, AWS, DevOps

Announcing Angular GitHub Clone for starter.dev showcases
https://www.thisdot.co/blog/announcing-angular-github-clone-for-starter-dev-showcases
My team at This Dot Labs released a new project we're calling starter.dev GitHub showcases in collaboration with the Angular team. Learn more about what we did and how we did it.
Tue, 25 Jan 2022 00:00:00 GMT
Angular, GraphQL, JavaScript, TypeScript

Git Strategies for Working on Teams
https://www.thisdot.co/blog/git-strategies-for-working-on-teams
This is a write-up of my thoughts on how teams can best use Git to create an effective, collaborative work environment.
Thu, 01 Sep 2022 00:00:00 GMT
Project Management, Engineering Leadership

Debugging Node Serverless Functions on AWS Lambda
https://dustingoodman.dev/blog/20220420-debugging-serverless-funtions/
Writing and testing serverless functions locally can be a breeze, especially with the Serverless Framework and the serverless-offline plugin. However, once you get to real infrastructure, debugging your functions can sometimes be really challenging. Let's talk about some debugging tips and tricks.
Wed, 20 Apr 2022 00:00:00 GMT

How many times have you written a function locally, tested it, and had it working, only for it to fail when you deployed it to AWS? This is probably more common than you realize, and it's usually caused by a misunderstanding of Node or an issue with Lambda configuration. In this post, I'll cover some of the most common debugging problems you'll encounter when writing serverless functions and how to fix them.

## Improper Use of `async/await`

When I first started writing serverless functions in Node.js, I had a misconception about how asynchronous functions behave. I was under the impression that you could run an asynchronous function as a background process and it would run on its own thread. However, this is not the case. Asynchronous functions are executed in the context of the Node.js event loop and are not run in the background. This means that if you kick off an asynchronous function without awaiting it, your handler may return before the function ever gets a chance to execute. For example:

```javascript
const randomBackgroundFunction = async () => {
  console.log('This function may never run');
};

export const handler = async () => {
  // do some stuff ...
  randomBackgroundFunction(); // <-- this most likely won't run without an await
  await randomBackgroundFunction(); // <-- this function will definitely run
  return goodResponse;
};
```

I say "may" because if no other code is running and the event loop is idle, the function will run, but once your handler returns, it's a race against the CPU clock. The AWS Lambda implementation tries to shut down the Lambda once the response has been returned or the Lambda's timeout has been reached (more to come on this topic!). So it's possible your invocation runs before the shutdown process begins and you get lucky that it ran. Now you may be asking, "Dustin, how do I run my function in the background and ensure execution?" Luckily, there are two great solutions: asynchronous Lambda invocations or AWS's Simple Queue Service (SQS).

### Asynchronous Lambda Invocations

AWS built Lambda with [asynchronous invocations](https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html) as an out-of-the-box feature. This means you can invoke a Lambda from your primary handler and have it run as its own invocation without blocking your main instance. So you can rewrite the example from above like this:

```javascript
// background.js
export const handler = async () => {
  // do our background stuff like we may have before
  console.log('This function will definitely run');
};

// main.js
import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

export const handler = async () => {
  // do some stuff ...
  const client = new LambdaClient(config);
  const command = new InvokeCommand({
    FunctionName: 'background',
    InvocationType: 'Event', // default here is 'RequestResponse'
  });
  await client.send(command); // this acts as a fire and forget
  return resp;
};
```

See the [AWS SDK v3 Docs](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-lambda/classes/invokecommand.html) for more details about the API in use.
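The example above doesn't pass any data to the background function. When you need to, `InvokeCommand` also accepts a `Payload`, which the background handler receives as its `event`. A minimal sketch of building that input — note the helper name and payload shape are mine, not part of the SDK:

```javascript
// Hypothetical helper: builds the InvokeCommand input for a fire-and-forget call.
// The background Lambda receives the JSON-parsed Payload as its `event` argument.
const buildBackgroundInvoke = (functionName, data) => ({
  FunctionName: functionName,
  InvocationType: 'Event', // async invocation; Lambda queues it and returns immediately
  Payload: JSON.stringify(data),
});

const input = buildBackgroundInvoke('background', { userId: 123 });
```

You'd then pass `input` to `new InvokeCommand(input)` exactly as in the example above.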
What we're doing is utilizing the `'Event'` invocation type to tell Lambda to just trigger this function and not wait for a response. From [Lambda's docs](https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html):

> Lambda manages the function's asynchronous event queue and attempts to retry on errors. If the function returns an error, Lambda attempts to run it two more times, with a one-minute wait between the first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts.

With this, we get the benefit of the event queue without having to set it up and manage it ourselves. The downside is that we have to rely on Lambda's default retry behavior to handle errors, which gives us less flexibility.

### AWS SQS

Similar to invoking via another Lambda, we can send a message to an SQS queue and have the queue trigger our function instead. Like the above example, we can generate a message in an inconsequential amount of time and send it to the queue. With this, we get the benefit of configurable retry behavior, but it comes at the cost of having to manage the queue ourselves. It also means our Lambda needs to know how to read event data from the SQS records instead of parsing a Payload directly.

## Lambda Timeouts

Lambda's default timeout settings are the next major hurdle. If your Lambda needs to run for a while or process a lot of data, you might see your function quit suddenly and never reach a later point in your code. **By default, the Serverless Framework deploys Lambdas with a 6 second timeout.** If you're waiting on additional services, long running queries, or a cold-starting Lambda, this could prove problematic. A quick way to check your Lambda's timeout is to load up the AWS Console and look at the Lambda's general configuration at the bottom of the page. In the screenshot below, you'll see the Lambda I'm inspecting has a 5 minute timeout.
![Lambda Console Configuration Tab](./blog-assets/20220420-lambda-configuration.webp)

Lambda timeouts can be configured in one-second intervals up to 15 minutes. When I use the Serverless Framework, I typically set my Lambdas attached to API Gateway triggers to 29 seconds and SQS triggers to 15 minutes via the configuration file. I choose 29 seconds because API Gateway's maximum timeout is 30 seconds, and due to latency between API Gateway and the Lambda, AWS warns when the function timeout is equal to 30 seconds since you won't truly get 30 seconds. Use your deployment configuration method of choice for setting timeouts, but confirm they are what you set them to be.

## Other Things to Look out for

These were two of the larger problems I've faced with relatively easy fixes. The following are some smaller problems that are either easy to fix but specific to your use of Lambda, or things I haven't experimented with yet but am aware of:

- Ensure your Lambda has access to all the resources it interfaces with. You'll need to check the IAM Role attached to your function via the console to see what permissions it has. If you're using the Serverless Framework, you can set the IAM permissions in your serverless configuration file.
- Verify your environment variables are set correctly. A Lambda keeps a copy of the environment variables that it accesses, and you can verify them via the AWS console. Make sure the values match what you're expecting from your configuration.
- If you're doing file I/O or large data operations, make sure you're not running out of memory. If you are, look into Lambda's new ephemeral storage feature.
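With the Serverless Framework, the timeouts discussed above are set per function in the configuration file. A sketch of the pattern I described — the function names, handlers, and queue name here are hypothetical:

```yaml
functions:
  api:
    handler: src/api.handler
    timeout: 29 # just under API Gateway's 30 second maximum
    events:
      - httpApi: '*'
  worker:
    handler: src/worker.handler
    timeout: 900 # 15 minutes, Lambda's maximum
    events:
      - sqs:
          arn: !GetAtt JobsQueue.Arn
```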
## Conclusion

I hope you've found these tips and tricks helpful and that they save you time down the road!

Serverless, AWS, Node.js

Announcing a Serverless Microservices Template with GraphQL
https://dustingoodman.dev/blog/20220414-serverless-template-announcement/
I'm excited to announce a new project starter kit template that I've been developing for a while. Let me walk you through the repository and the decisions that have been made.
Thu, 14 Apr 2022 00:00:00 GMT

Those that know me know I love talking about two things more than anything: Serverless Framework and GraphQL. Today, I'm excited to announce a project starter template that I've been developing that allows developers to build serverless microservices with GraphQL. It is built using Nx monorepos and provides a lot of quality-of-life developer tooling out-of-the-box. I'll discuss what's in the repo and how you can leverage it today for your own projects. If you want to jump into the code, you can [find it on GitHub](https://github.com/dustinsgoodman/serverless-microservices-graphql-template).

## Project Goals

Over the past few years, I've developed multiple serverless projects, and they all tend to follow a similar pattern, especially when using GraphQL as the primary API gateway for the entire application. As the applications have grown, the service architecture had to expand, and my teams discovered a lot of difficulty maintaining the different services and stitching them together in a streamlined way. Another issue was that orchestrating local development through package.json scripts would cause memory issues and, in some cases, system memory leaks. Every team also had to perform deep function analysis to keep bundle sizes down when deploying. This project set out to solve these key issues and provide a first-class developer experience (DX) for those wishing to build serverless microservices with GraphQL.
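For orientation, this is the general shape an Nx-based serverless monorepo like this tends to take, using the service names discussed below. This layout is illustrative only, not the template's exact tree:

```
.
├── services/
│   ├── public-api/        # GraphQL gateway service
│   ├── background-jobs/   # SQS-triggered workers
│   └── example-service/
├── libs/                  # shared code importable by any service
├── tools/generators/      # Nx workspace generators (e.g. the service generator)
├── serverless.common.ts   # shared plugin and port configuration
└── nx.json
```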
### Managing Service Generation

In each project, we always introduced a root-level `serverless.common.yml` configuration file that centralized plugin configuration and acted as a central lookup for the ports used by the [`serverless-offline` plugin](https://www.serverless.com/plugins/serverless-offline). Prior to Serverless v3 and TypeScript configuration files, we had to configure the ports and inject them into the application. This was troublesome for a number of reasons, but mainly, it created problems when adding new services to the stack. Developers would have to create a new service directory, scaffold the service, and configure the global settings, and then, when they were ready to use it, they would have to spell their service's name correctly at invocation time. The biggest issue was when two services were created at the same time and caused merge conflicts in the port configuration.

With this template, we're leveraging the [Nx monorepo toolchain](https://nx.dev/), which gives us the ability to create service generators! This project ships with a service generator that quickly solves the issues seen traditionally. You can simply run:

```shell
yarn workspace-generator service
```

This will:

1. Create a new service directory with the provided name
2. Scaffold the serverless configuration, test suite, linting, TypeScript, and an example Lambda function
3. Register the service to the Nx workspace
4. Update the serverless.common.ts file to add the new service name to the service type and map its ports to the next available ports based on the existing configuration

For example, if you run `yarn workspace-generator service my-service`, you'll see the following changes made for you:

```diff
- export type Service = 'public-api' | 'background-jobs' | 'example-service';
+ export type Service = 'public-api' | 'background-jobs' | 'example-service' | 'my-service';

export const PORTS: PortConfig = {
  'public-api': {
    httpPort: 3000,
    lambdaPort: 3002,
  },
  'background-jobs': {
    httpPort: 3004,
    lambdaPort: 3006,
  },
  'example-service': {
    httpPort: 3008,
    lambdaPort: 3010,
  },
+ 'my-service': {
+   httpPort: 3012,
+   lambdaPort: 3014,
+ },
};
```

This gives you type safety when selecting services with the provided `invoke` function and identifies which local port to use. The default scaffold attempts to make some reasonable defaults for the serverless configuration. If you don't like the defaults, you can customize them by [editing the generator's default template](https://github.com/dustinsgoodman/serverless-microservices-graphql-template/blob/main/tools/generators/service/files/serverless.ts__tmpl__).

### Solving Service Orchestration

Most complex backends require a background jobs worker that relies on a service like SQS, or require a database to be running. Unfortunately, these services are difficult to emulate in a local environment and require custom setup. Previously, my teams would attempt to extend start scripts to initialize the required services. However, the Node process manager wouldn't manage the processes correctly and would leave services running in the background, causing memory issues for those trying to run services. Since a majority of services rely on the same system services, we can leverage Docker to start all the needed project services prior to running services with the offline command.
The template ships with a base `docker-compose.yml` that stands up an instance of [ElasticMQ](https://github.com/softwaremill/elasticmq) to emulate Amazon SQS's API. The Docker setup can be extended to include other services like DynamoDB, Redis, or PostgreSQL, and can be run using `yarn infrastructure:build` or `yarn infrastructure:start`, depending on whether it's your first run or not.

### Function Analysis

One of the most important aspects of serverless development is keeping an eye on your bundle sizes to reduce cold start times on Lambda. Keeping this in mind, the template utilizes [`serverless-esbuild`](https://github.com/floydspace/serverless-esbuild) and [`serverless-analyze-bundle-plugin`](https://github.com/adriencaccia/serverless-analyze-bundle-plugin) to provide function analysis out-of-the-box. I opted for `serverless-esbuild` over [`serverless-bundle`](https://github.com/AnomalyInnovations/serverless-bundle) for a few reasons:

1. `serverless-bundle` provides a lot of functionality that we don't need or use out of the box
2. `esbuild` is generally faster than `webpack` for bundling and requires significantly less configuration
3. I personally had issues with `serverless-bundle` in the past with function analysis, which you can [read more about here](https://dustinsgoodman.medium.com/resolving-serverless-webpack-issues-efae729e0619), and have been reluctant to use it as a result. The plugin has matured since and provides the functionality I fought with, but the analyzer tool they chose isn't my favorite, so the benefits just aren't there for me.

With this project, you can run `yarn analyze <service> --function=<function>` to get an analysis of your bundle size. For example, if you run `yarn analyze public-api --function=graphql`, you'll see the following analysis:

![public-api graphql function analysis](./blog-assets/20220414-bundle-analysis.webp)

## Default Services

For the default template, I've shipped a few services to demonstrate how to utilize the project structure.
The default services are:

- public-api: Contains the GraphQL API setup and structure
- background-jobs: Contains SQS setup and Lambda runners
- example-service: Contains examples of resolver functions for cross-service communication and a simple SQS message generator

Other projects will have more complexity, and this doesn't begin to demonstrate all the different features and functionality you can include in your project. This is solely intended to generate enough of a baseline for you to develop your own applications without having to go through all the verbose setup. If you see a feature that would be useful for others, please drop an issue or pull request on the repo!

## Developer Console

I started my career building Ruby on Rails applications. One of my favorite features was the `rails console`, which allows developers to interact with their application code directly and greatly helps with testing and debugging. As such, I wanted to recreate this for this template. You can run `yarn console <library>` to get an interactive REPL that you can use to interface with that library's code. I only included this for libraries and not services due to the nature of the code structure and the lack of entry points through which you should interact with the Lambda functions in your services. Below you can see how the console works:

![example utils console](./blog-assets/20220414-console-example.webp)

## What's Next?

Obviously, there is a lot here, and I could keep plugging away to continually improve the template and never release, but that wouldn't help anyone. I'll continue to iterate on the template, add useful features, and clean up the codebase. If you have any suggestions or want to help, the [repo's issues and pull requests are open](https://github.com/dustinsgoodman/serverless-microservices-graphql-template)!

## Thank you!

I want to send a special thank you to the following people for their help in getting this project set up!
- [sudokar](https://github.com/sudokar) for the original Nx template from which this project was branched. I frankly could not have solved some of the workspace generator issues I faced without this starting point.
- [Jesse Tomchak](https://github.com/jtomchak) for talking through key issues and helping to make some important architectural decisions.
- [Chris Trześniewski](https://github.com/ktrz) for assisting with some critical-path AST parsing issues. The `serverless.common.ts` automatic updates wouldn't have been possible without his help.
- [Tápai Balázs](https://github.com/TapaiBalazs) for convincing me to use Nx in the first place. I tried yarn workspaces and frankly, I couldn't make anything work right. Nx was the right call, and his suggestion made for a great fit even if I had to modify a lot of its core features!
- [Nacho Vazquez](https://github.com/NachoVazquez) and [Dario Djuric](https://github.com/dariodjuric) for assisting with some Nx structural decisions that I was fighting and helping me come up with a better long-term solution for this template.

Serverless, GraphQL, Microservices

Why you should upgrade to Serverless 3
https://dustingoodman.dev/blog/20220129-serverless3-review/
Serverless 3 was just released, and it has some pretty amazing quality-of-life improvements. Let's check out these new features and how we can leverage them today.
Sat, 29 Jan 2022 00:00:00 GMT

If you haven't read about the release and what's included in this update, I highly recommend you read both the [beta release announcement](https://www.serverless.com/blog/serverless-framework-v3-beta) and the [full release announcement](https://www.serverless.com/blog/serverless-framework-v3-is-live) first, as they give a good overview of all the changes. In this post, I'll highlight a few of those core changes and their significance.
## Upgrading from v2 to v3

The Serverless Framework team did an amazing job of making the migration to v3 as seamless as possible without major breaking changes, and kept up their habit of making things backwards compatible. The upgrade is as easy as follows:

1. Upgrade to the latest version of v2. If you're on v1, you'll have a bit more work to do.
2. Check if you have _any_ deprecation notices.
   - If yes, you'll want to fix those, but the good news is they should all be restricted to your `serverless.yaml` files.
   - Some deprecations may be related to your plugins. The Serverless Framework team worked with many of the most popular plugins to make them backwards compatible. Go read your plugins' documentation and make the needed upgrades.
3. Upgrade serverless to v3. Typically you can just do this on your system with the following:

   ```shell
   npm i -g serverless
   ```

4. Now, you can update your `serverless.yaml` to specify the new version.

   ```yaml
   service:
   plugins:
     -
   frameworkVersion: '3' # This is the line to change from 2 -> 3
   ```

If you find issues when you move to v3, you can easily flip this back to v2, so you should definitely give this a shot.

## 🎉 New Stage Parameters 🎉

TL;DR:

![stage parameters meme](./blog-assets/20220130-stage-params.webp)

This is by far my favorite feature included with this update, as it removes an old hack that many serverless developers implemented to achieve the end goal of this feature. Specifically, this feature allows you to set service configuration settings based on the current stage, or environment. This is incredibly important for larger teams implementing serverless applications, as you'll typically have a production, staging, and/or development environment that may need custom configuration. Before, you might have set up your `serverless.yaml` like this:

```yaml
# ...
custom:
  profile: my-profile
  stage_env:
    dev:
      PROFILE: dev
      SLS_STAGE: dev
    local:
      PROFILE: dev
      SLS_STAGE: local
provider:
  name: aws
  stage: ${opt:stage, "local"}
  region: ${opt:region, "us-east-1"}
  runtime: nodejs14.x
  profile: ${self:custom.profile}-${self:custom.stage_env.${self:provider.stage}.PROFILE, 'dev'}
# ...
```

Notice the really hard to grok field, `profile`, under `provider`. This was what we had to do to configure our service variables per development stage. For example, if I wanted the dev environment setup, I would set `--stage local` when running my serverless commands, and `profile` would be set to `my-profile-dev`. This is pretty hard to keep track of and led to many long configuration debugging sessions. Now, you can use the new stage parameters by setting their values under the `params` key and then accessing them via the `${param:xxx}` syntax. So our above `serverless.yaml` can now be rewritten as:

```yaml
# ...
params:
  dev:
    PROFILE: my-profile-dev
    SLS_STAGE: dev
  local:
    PROFILE: my-profile-dev
    SLS_STAGE: local
provider:
  name: aws
  stage: ${opt:stage, "local"}
  region: ${opt:region, "us-east-1"}
  runtime: nodejs14.x
  profile: ${param:PROFILE}
# ...
```

## Improved CLI

I refer you to the [full release post from the Serverless team](https://www.serverless.com/blog/serverless-framework-v3-is-live) to really see these highlights, as they did a wonderful job showing the before and after of the enhancements. I'll give my quick opinion on these improvements:

- The CLI now only informs you of what you need to know. Before, there was a little extra white noise which _could_ be helpful under the right circumstances but was mostly unhelpful. Now, it tells you the facts you care about: (1) where was my app deployed and (2) what endpoints can I use to access it.
- The `--verbose` flag now provides all the additional information that's otherwise hidden when you need to debug, and presents it in a much more readable format.
- Errors now look like errors!
🛑 Before, errors looked like the rest of the output returned by a command, so they didn't immediately capture your eye. Now, it's very obvious when a problem occurs, with seemingly better messaging. I haven't had too many run-ins with this experience yet, as the rest of my systems upgraded so cleanly, but the few times I've seen an issue, it took me seconds to realize what the problem was instead of minutes.

If you're new to Serverless Framework, trust me when I say that the devs really thought about you with these updates. These were real problems for folks, and now they're a thing of the past. There's always room to improve, so if you see something that could be better, let them know, as they're clearly listening to their users. 🥰

## Onboarding

One of my least favorite things about the old CLI was starting a new project. Previously, it created your new project in your current working directory, so I would accidentally clutter the top-level `development` folder that I keep and would have to manually clean things up. Additionally, the setup only gave you the minimum files needed to start: the README.md, serverless.yaml, and handler.js.

The new CLI project starter made a ton of improvements. Now, it creates a new folder for your project and initializes all your files in there. The CLI offers you a set of standard templates for common usages and guides you to their full list of amazing starter kits for new projects.

![serverless init](./blog-assets/20220130-serverless-init.webp)

They kept the best of the old flow: they help you quickly configure your project for the Serverless Dashboard and do your first deploy on project init, but allow you to opt out as well.

## Plugins

As I mentioned earlier, the Serverless team did a great job helping the community migrate popular plugins to be v3 compatible. I've never created a plugin myself, but the team has put out some great resources and a new plugin API for those looking to create their own tooling for Serverless Framework.
I've gone ahead and experimented with the following plugins and can say they have played really nicely with v3:

- [Serverless Offline](https://www.npmjs.com/package/serverless-offline)
- [Serverless Webpack](https://www.npmjs.com/package/serverless-webpack)
- [Serverless ESBuild](https://www.npmjs.com/package/serverless-esbuild)
- [Serverless Bundle](https://www.npmjs.com/package/serverless-bundle)
- [Serverless S3 Remover](https://www.npmjs.com/package/serverless-s3-remover)
- [Serverless DynamoDB Local](https://www.npmjs.com/package/serverless-dynamodb-local)

This isn't to say they've all been officially upgraded to the latest plugin API and play nice with the new CLI, but they still work. If you're in need of any of these plugins, I'd say they're safe to use, though they're worth your own investigation. 😉

## Conclusion

Serverless Framework continues to be my favorite toolchain for building applications utilizing serverless architectures, and I'm really excited to see the developers continue to improve it in meaningful ways. The other changes I haven't covered here are some specific deprecated syntaxes that I think are uncommon. If you need more info on them though, [read the upgrade guide](https://www.serverless.com/framework/docs/guides/upgrading-v3). I want to send a special thank you to the Serverless Framework team for their hard work in making this great tool even better! You all hit a home run with this one! 🎉

Serverless

GitHub Actions for Serverless Framework Deployments
https://www.thisdot.co/blog/github-actions-for-serverless-framework-deployments
In this article, I discuss deploying applications built with the Serverless Framework and Nx utilizing GitHub Actions, and some of the reasons you may want to consider using this strategy.
Mon, 26 Sep 2022 00:00:00 GMT
Serverless, DevOps, Microservices

How I pick my web technology stack?
https://dustingoodman.dev/blog/20221128-how-i-pick-my-web-tech-stack/
In this post, I explore how I make technology decisions based on use cases and what technologies I think are best suited to solving the problem.
Mon, 28 Nov 2022 00:00:00 GMT

![My decision tree for picking technology stacks](./blog-assets/20221128-how-i-pick-my-tech-stack.webp)

One of my favorite conversations is trying to decide which technologies are best suited to accomplish the desired end result on a particular project. And well, as you may expect, the answer is "well, it depends".

![Simpson's scene where the class pressures Bart to say it depends as if he were a senior engineer](./blog-assets/20221128-it-depends.webp)

In this post, I hope to explore how I make these decisions. Feel free to challenge my thinking, and let's have a great conversation about how we can improve this decision tree.

**A quick disclaimer** - my skillset consists of Ruby & TypeScript/JavaScript, and I mostly use React in my day-to-day frontend work. As such, a lot of my decisions will reflect that, but I'll call out alternatives where applicable.

**Another quick disclaimer** - the current ecosystem is evolving _fast_, and some of the things I'm saying here may become incorrect as they go out of date. What I hope you take away is the thought process and not the exact choices, as those will evolve with time.

Let's jump in!

## Starting with the question of what?

The first thing I like to do with any project is understand the end goal. What are we building? A blog? A docs site? A company's marketing site? An e-commerce site? A highly interactive product application? A low-interactivity product application? Are we displaying rich data? For each of these archetypes, there is a solution that is arguably more appropriate than the other options.
To simplify this post, I'll distill some of these examples down to some arbitrarily simple ones in hopes that the principles will hold up when other factors are introduced into the problem.

## Case 1: Static Content Sites

![Simple static website architecture diagram showing S3 to Cloudfront connected to a user device](./blog-assets/20221128-static-website-architecture.webp)

Are you building a docs site or a personal blog? Is your content relatively static with low-to-no interactive elements? If yes, I strongly recommend using [Astro](https://astro.build/) or an equivalent framework.

### What is Astro?

Astro is an HTML-first framework that utilizes an islands architecture, allowing you to use your preferred component language to create your content and interactive elements. This is extremely powerful in that you ship low-to-no JavaScript to the client, and pages can be served via CDNs, making your website extremely performant, which helps with SEO and the other factors that help these sites succeed.

### Other considerations for Astro

![MDX logo](./blog-assets/20221128-mdx.webp)

Astro is also a winner in my book because it has first-class Markdown and MDX experiences, which is great for content writers who aren't necessarily technical. This makes Astro an excellent option for this classification of projects.

### When to maybe not use Astro?

![Excerpt from Astro docs talking about how they aren't the best use case for applications](./blog-assets/20221128-astro-use-case.webp)

[Astro admits themselves](https://docs.astro.build/en/concepts/why-astro/#content-focused) that there are use cases they aren't the best at, and that's okay! This is part of why I recommend them and prioritize them for this type of project. My website is built on Astro, and I'll say the development experience has been superb.
That being said, if you’re starting to add features like CTAs that capture consumer information and send out emails, or you otherwise need to interact with other services, this type of tech maybe isn’t for you. Of course, you can use web APIs to send requests to a server or serverless functions, but there are better options for this use case that require less tech. ![Architecture diagram demonstrating how to achieve dynamic sites with Astro by introducing a separate API stack](./blog-assets/20221128-separate-architectures-design.webp) ### Alternatives to Astro Historically speaking, people have used [Gatsby](https://www.gatsbyjs.com/), [Docusaurus](https://docusaurus.io/), and WordPress for these types of sites. I think these are all still valid and good options, but it depends on what your priorities are for the project. They each have their own known downsides, so that’s another serious consideration. [11ty](https://www.11ty.dev/) is another great option and, I think, the most comparable to Astro in terms of goals and feature offering. Picking Astro is just a preference for me, so seriously consider this option for your site. Finally, we have some new kids on the block, but I’m specifically going to talk about [Qwik](https://qwik.builder.io/). I don’t think it’s quite production-ready as it’s still in beta, but I think it’s going to shake up this space as it’s focused on delivering minimal JavaScript to the client. I also don’t know what its writer experience is like yet, so do your own homework here. ### TL;DR Find a tech that focuses on shipping HTML and low-to-no JavaScript and that offers, or has a means towards offering, a first-class writer experience without harming the developer experience. ## Case 2: Marketing Sites Are you building your company’s marketing website or an Ecommerce site? You’re definitely going to need some level of interactive elements.
At this point, I’m unquestionably in favor of using [Next.js](https://nextjs.org/) or [Remix](https://remix.run/). [Nuxt.js](https://nuxtjs.org/) absolutely makes this list too, but I’m not a Vue developer. ### Why these frameworks? ![Dynamic website architecture diagram showing most of the ecosystem lives inside of the Next or Remix domain area](./blog-assets/20221128-dynamic-website-architecture.webp) First off, Next.js & Nuxt.js provide a way to build **static** pages, and you can build good content creation experiences for your marketing team using a headless CMS with all three options. Next.js & Nuxt.js also provide a means for creating **dynamic** pages, allowing you to integrate with your marketing team’s tooling, which Remix does by default. Finally, they all provide **API routes**. This gives you the ability to create those CTA features I mentioned above that Astro didn’t solve so well. With all these pieces combined, you can: 1. Keep all your code in one place 2. Deploy to almost any provider 3. Utilize edge functions and other first-class performance APIs 4. Serve lightning-fast static content where applicable 5. Load dynamic content in a way that the client doesn’t pay the cost 6. Support SEO and marketing needs I will also note that, in the case of Remix, they were just acquired by Shopify to enhance its templating systems, so this is maybe a strong signal to consider them, especially for Ecommerce. ### Why not use these tools for blogs and docs sites? ![Violet from Willy Wonka getting bloated just like apps do using the wrong tool](https://media.tenor.com/ce2n2yhf6TsAAAAM/violet-beauregarde-blueberry-inflation.gif) Frankly, you can, and you wouldn’t be wrong to do so. They provide a lot of the same functionality, but there are some gotchas with each.
For Next.js and Nuxt.js, you have to configure them to provide some of those content experiences mentioned earlier and, in some cases, configure them to load less JavaScript to the client, making them harder to get started with. For Remix, there’s a bit more to learn with its loader pattern, and deployment and hosting costs are a little higher as it requires serverless functions for any page to load. These costs are very low, though, and most likely will not significantly impact your monthly bill. Overall, there’s just a bit more to do here where Astro or 11ty would work out of the box for your needs. ### TL;DR Find a solution that offers all the pros of a purely static solution but also provides a means for extending onto the server in a lightweight, zero-server fashion so you can extend features as needed. You’re optimizing for load speed here while keeping the developer experience good as your team’s needs grow. ## Case 3: Product Applications Products cover a lot of different types of applications. You have data-rich dashboards like NewRelic. There are CMS products like Contentful. You have marketing platforms like Hubspot or MailChimp. Each of these is unique in its offering and how it achieves the end goal. As such, there are a lot of questions we should be asking ourselves: - Do we care about SEO? - Are we building a single-page application (SPA) or a multi-page application (MPA)? - Is our product highly interactive? Do we care how much JavaScript is being loaded to the client? - Is our product data rich? Are we looking to implement data visualization? Is the data just formatted in a way that makes it easy for the user to consume? - Are we building for mobile, desktop, or both? Do we want a progressive web application (PWA) instead of a native mobile solution? - What are we building our API with, and how is it structured? Are we looking to have separate technology teams?
These are just some of the starting questions, and as we learn more, our decisions will be impacted further as things change. ### A Quick Assessment of Node.js Depending on our answers, we’re going to need to pick the right toolset for our project, which is a tough challenge as market conditions for developers are continually changing. At the time of this writing, Node.js with TypeScript and React are arguably the most popular technologies for building applications. I challenge whether that means they’re the right choice, though. In my opinion, the Node ecosystem for building backends is not yet mature enough. I’ll defer my full thoughts on this to another article. This isn’t to say you can’t do it, but there’s no singular, proven set of tooling, and everyone is having to decide on their own stacks. This means teams are spending a lot of time doing technology research just to figure out which is the _right_ tool for the job and then doing a lot of work to ensure they’re still using something maintained and supported. This can sometimes lead to having to migrate to new tech that accomplishes the same end goals because of deprecations or dropped support. ### My Preference ![Rails architecture deployed to AWS Elastic Beanstalk with PostgreSQL & Sidekiq/Redis](./blog-assets/20221128-rails-architecture.webp) When I’m building a server solution, I personally reach for Ruby on Rails.
It’s not perfect for all situations, but I think Rails has a lot going for it: - Active Record and Migrations - Server-Side Rendered Content - The Rails Console (this is super underrated, and I wish Node projects had an out-of-the-box solution for this) - First-class packages for authentication and authorization - Convention-over-configuration mentality - The Rails CLI - Active Job - RSpec & Cucumber for testing The above may not seem like a lot or that significant, but considering the current state of the JavaScript ecosystem, replicating some of these features is a multi-day chore. These tools and features definitely come with their own downsides, though, and there are tradeoffs. That being said, you can opt to just use Rails as your API server, and frankly, that would be enough in my opinion. Rails’ frontend solutions are not ideal and leave a lot on the table in terms of what you want, especially with the asset management pipeline and its constant resistance to modern JavaScript frameworks. ![Example of Active Record and the Rails Console in action](./blog-assets/20221128-rails-console.webp) If you’re a .NET or PHP developer, you may be saying, “we have these things, why not us instead? We even solve the frontend problem.” I totally agree and think these are great alternatives. I just personally don’t want to program in C# or PHP, regardless of how good the languages have gotten (and trust me, I’ve tried). But hey - we’ve got something good here. ### So what about the frontend then? ![angular react and vue logos](./blog-assets/20221128-frontend-frameworks.webp) Use what makes the most sense for your product’s use case and what your team is most comfortable with. You need to weigh your priorities again, but I think Angular, React, or Vue are all excellent choices, especially when paired with meta frameworks that enhance them for the better. I also recommend these as they are the currently battle-tested solutions.
SolidJS, Svelte, and Qwik are the new kids on the block and are probably going to be viable solutions to explore in this space, but I’ve been bitten by being on the bleeding edge before, so I refrain from choosing them for my main projects currently. This will change in the coming months and years as new developments and enhancements are made, but at the time of this writing, I’m just not sure they’re ready for this level of project _yet_. ### “What about serverless? It’s super fast and connects to all my infrastructure services already.” I’m a big fan of serverless, and you can see my attempts on the internet to make serverless a first-class choice. Unfortunately, I’ve seen enough of serverless’s downsides firsthand to know I wouldn’t actually recommend it at scale. I think it’s a great tool for singularly focused microservices and ETL APIs. These stacks can handle volume better than most and cost a fraction of the price, but scaling them to larger projects or teams is challenging, and the developer experience is just not there yet. I think we’ll see this change in the coming months, but for now, I think [Dax Raad (@thdxr)](https://twitter.com/thdxr) hits a key point with [this tweet](https://twitter.com/thdxr/status/1592917691194802176): ![Tweet from Dax Raad talking about serverless tooling being trick shot demos and not maintainable applications](./blog-assets/20221128-dax-raad-tweet.webp) Serverless, when used correctly, is truly amazing, but right now, we’re seeing the wrong trends in the space around the tech. I hope this changes. ### REST, GraphQL, or tRPC? I can’t answer this question. Each of these options has an extremely valid use case and purpose. As an organization and development team, you really need to understand what your needs are.
I’d guess that for a majority of the work you’re doing, GraphQL and tRPC are most likely the best choices, but there will be cases where a simple REST endpoint is the best fit for the use case. Keep an open mind and discuss things! ### TL;DR Picking product technology is hard, but look for tools that are mature with a proven track record. If you choose to live on the bleeding edge or try to use the wrong tool for the job, you’re probably going to run into issues or have a bad day along the way. Tread carefully! ## Conclusion Picking technology is a really hard problem. Just because a solution works 99% of the time on your proofs of concept and side projects does not mean it will work at scale. It doesn’t necessarily mean it won’t, either. I challenge you to discuss your technology choices, really understand the end goal of your product, and try to better understand the future you’re building towards. Give yourself options and remember: one size does not fit all. Special thanks to [Jesse Tomchak](https://twitter.com/jtomchak) for feedback on this post!Architecture, Engineering LeadershipStarter.dev: Bootstrap your project with zero configuration!https://www.thisdot.co/blog/starter-dev-bootstrap-your-project-with-zero-configurationWe’re excited to present you with starter.dev, a set of zero-configuration project kits built with your favorite tools. Each kit comes preconfigured so you can focus on building features instead of spending time on configuration. Read here to learn more. Mon, 07 Nov 2022 00:00:00 GMTOpen SourceLeveraging Astro's Content Collectionshttps://www.thisdot.co/blog/leveraging-astros-content-collectionsLet's explore Astro v2's new content collections API and how it really helps improve your developer experience and content management.
Fri, 21 Apr 2023 00:00:00 GMTJavaScript, TypeScriptThe Challenges of the GraphQL Mental Modelhttps://dustingoodman.dev/blog/20230605-the-challenges-of-the-graphql-mental-model/GraphQL can be an amazing tool for teams to implement the APIs powering their different applications that rely on the same source of data. However, the mental model required for it may not be as straightforward as traditional solutions. Let's explore some of these challenges. Mon, 05 Jun 2023 00:00:00 GMT My colleague, [Colum Ferry](https://twitter.com/FerryColum), tweeted asking people for their opinions on GraphQL. The thread had a variety of responses with differing opinions, but the majority consensus was that people were done with or tired of GraphQL and were shifting their practices away from it. <blockquote><p>bad</p>— wes (@wescopeland_) <a href="https://twitter.com/wescopeland_/status/1665032958493831171?ref_src=twsrc%5Etfw">June 3, 2023</a></blockquote> For those who know me, I'm a big fan of GraphQL and think it's a fantastic solution in most situations, but this sentiment got me thinking, "is my opinion wrong?" I pondered the conversation a bit more and chose to respond with a reduction of my thoughts: <blockquote><p>My experience has been the concepts are foreign enough that people don’t know how to adapt to the mental model. It requires some level of understanding data relations and querying which isn’t something most FE devs deal with regularly.</p>— Dustin Goodman (@dustinsgoodman) <a href="https://twitter.com/dustinsgoodman/status/1665043300275945477?ref_src=twsrc%5Etfw">June 3, 2023</a></blockquote> To elaborate, my last 3 organizations all adopted GraphQL, but all 3 teams suffered from a lot of the same common pitfalls and challenges in gaining the benefits of GraphQL. These teams all had smart and capable developers. So why has the transition been such a challenge?
I think the problem is multi-faceted. I'll cover these areas as I see them, but they include: - Legacy impacts of REST & RPC patterns - Backend v Frontend Mentalities - Standards and Best Practices - Depth of Knowledge about the GraphQL Toolchain ## Legacy impacts of REST & RPC patterns As a web community, we've been utilizing REST APIs and RPC data patterns for so long that we've grown accustomed to these patterns, making them our default way of thinking. In REST, we dump full resources as they live in our databases, with some business logic and rules to protect sensitive data, and we have created hacks to expand the basics of these APIs to handle our needs. With RPC patterns, we have access to our data models and server logic, so we just collect what we need to fulfill our request. Another way to think about this is to say that whenever someone wants a piece of data, they effectively just make a `SELECT * FROM table ...`. There's little to no consideration for the consequences of these actions, as we're typically developing on high-performance laptops with high-speed internet access, so the extra data feels inconsequential to the outcomes. To be clear, I am not saying that every app _needs_ to have these considerations, but when customers or consumers start saying, "your site is slow", our solution is to bring the code closer to their devices - but what if the problem isn't proximity but the amount of data? Mobile networks are always improving and our smartphones have access to our WiFi connections, but mobile hardware does have limits to what it can handle processing-wise. Again, yes, this is getting better constantly, but it's because we keep trying to shove more and more data over the wire to these devices that it never feels like we're making progress. Regardless, this is the mental model that exists - I can just fetch my data and let the user's device handle the repercussions because devices are getting better constantly.
GraphQL's model pushes back against this. It makes the developer think about what data they _actually_ need and request it in a meaningful way. Though this seems like a small challenge, it's probably the largest challenge to true adoption of GraphQL. This brings us to the next challenge... ## Backend v Frontend Mentalities As an engineer focused on developing APIs and backend systems, you typically think about how data should be structured and passed around before eventually being exposed to the world through your APIs. Frontend-focused engineers, on the other hand, are thinking about how to make that data appear for users. But as a backend engineer, you don't know what data the end user really needs without your frontend counterparts, and as a frontend engineer, you don't always know exactly what data is available without your backend engineers. As a result, you both just dump as much info at each other as possible, thus exacerbating the data overload and our legacy mental models. This is not purely the fault of teams, as the businesses around them are constantly evolving and needing product to ship faster. This reduces the time these teams have to think about these problems, or these problems get delegated to a handful of extremely overloaded lead engineers who can't keep up with the pace of the team around them. As a result, hard-to-reverse decisions get made by developers who are just trying their best to do their jobs. This has also led to the fullstack engineer who tries to bridge these gaps. Of course, they're caught in the same crossfire as everyone else and fall back to what they know - just dump the data and go. Going back to the GraphQL model again, the idea is that backend engineers can still expose all the data, even if it's not consumed by the frontend team, and the frontend team can select what fields they want from the data available to reduce client impacts.
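To make the contrast concrete, here is a small sketch of the two mental models. The record shape and function names are invented for illustration; real GraphQL servers do the selection per field via resolvers, and this helper only illustrates the idea of the client naming what it needs.

```javascript
// Hypothetical sketch contrasting the two mental models.
// The record and function names below are invented for illustration.
const userRecord = {
  id: 1,
  name: "Ada",
  email: "ada@example.com",
  createdAt: "2019-08-01T00:00:00Z",
  lastLoginIp: "10.0.0.1",
};

// REST-era habit: "SELECT * FROM users" and ship the whole resource.
function restGetUser() {
  return { ...userRecord };
}

// GraphQL-style habit: the client names exactly the fields it needs,
// and only those cross the wire.
function selectFields(record, fields) {
  return Object.fromEntries(fields.map((field) => [field, record[field]]));
}
```

A profile header that only renders a name would ask for `selectFields(userRecord, ["id", "name"])` rather than receiving the entire record, which is the habit shift the rest of this post is about.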
However, truly realizing the power of GraphQL requires these teams to communicate more, which can become another bottleneck for the business and therefore a problem. But if these two teams started to discuss the actual behavior and needs of the data, better GraphQL fields and entities would arise, which could reduce the amount of JavaScript that ships to the client. For instance, a status field with a set of values that aren't human-consumable needs to get mapped somewhere. Should it happen on the frontend or backend? Historically, this problem just gets deferred to the frontend and requires more logic to be shipped to clients. With GraphQL, however, we can now have special formatting resolvers in our API that give the client the data in exactly the shape they want, which alleviates the device's workload. So why aren't developers ready to just do this with GraphQL? ## Standards and Best Practices Well, frankly, GraphQL has introduced new API concerns and challenges that either didn't exist previously or already had good solutions in its REST and RPC counterparts. Cyclic complexity, authorization, sub-query context, data filtering, and rate limiting are examples of these challenges. With REST solutions, there are great tools that just solve many of these problems. With GraphQL, we're finding vendor lock-in challenges, or solutions that solve our problems up to a point before we hit their limitations on these common problems. Several companies are actively trying to solve these challenges, but we're just not getting their tools fast enough for our needs as a community. On top of all of this, we're still not standardized on how certain features of GraphQL should work.
That's not to say everything has to work exactly the same way in every instance of a GraphQL API, but we need resources that we can point our less experienced team members towards that can answer their questions on best practices, so the more senior members can solve the architecture problems, allowing the team to be more effective. [The GraphQL Foundation website includes a document on some best practices](https://graphql.org/learn/best-practices/), but these focus more on API architectural best practices such as pagination, batching, caching, and communication. [How to GraphQL](https://www.howtographql.com/) is a great course that is also recommended for learning the basics of GraphQL, but it doesn't cover some of the techniques for thinking in GraphQL that allow for the building of better APIs. This brings us to the last issue. ## Depth of Knowledge about the GraphQL Toolchain The GraphQL ecosystem and toolchain are vast. With field-level arguments, resolvers for _any_ field, fragments for query reuse, directives, etc., there are now features that aren't common outside of GraphQL that teams probably need in order to get the most out of their APIs. Education and experience are core to alleviating any challenges presented by this issue. Bringing GraphQL to a group of developers who have never used it before will produce negative business productivity for the first several weeks as the teams ramp up on the toolchain. Even the most efficient teams probably take several months to fully acclimate, and any new developer brought onto these teams will have to go through the same learning. We're not at a place yet where linters and other developer experience tooling can solve these problems fully. When a team decides on a standard or technique, they need to come to consensus and agree to adhere to the rules. But unlike formatting standards, there aren't solutions readily available to create these guardrails.
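To make this concrete, here is a sketch of one decision point that has no automated guardrail: GraphQL's union type. The schema and names below are hypothetical, and the resolver map follows common graphql-tools/Apollo conventions rather than any particular codebase.

```javascript
// Hypothetical sketch of a union-type decision point. Type and field
// names are invented; the resolver map shape follows common
// graphql-tools/Apollo conventions.
const typeDefs = /* GraphQL */ `
  union SearchResult = Product | Article

  type Query {
    search(term: String!): [SearchResult!]!
  }
`;

const resolvers = {
  SearchResult: {
    // The server must tell GraphQL which concrete type each result is,
    // and clients must branch on it with inline fragments - shared
    // fields can't simply be lifted to the top level of the query.
    __resolveType(obj) {
      return "price" in obj ? "Product" : "Article";
    },
  },
};
```

Whether that client-side branching cost is worth it, versus issuing multiple queries in one operation, is exactly the kind of decision a team has to settle by convention and then enforce by hand in code review.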
As a result, teams are required to self-police their standards during development and code review. This comes with the challenge of knowing all the potential avenues where new decisions could arise. When I've worked on teams in the past, someone would try something new we hadn't discussed as a group, and we'd have to work backwards through the alternatives to come up with a solution the team would collectively agree to moving forward. An example of this that comes up during later stages of GraphQL API development is the union type. Should you use it? If we use it, what implications does that have on consumers? Shared fields can't be lifted to the top-level query, so is it actually worth utilizing this type over multiple queries in the same operation? Should union types be reserved only for aggregations and searches, or can they be used more generically? These are the types of questions and discussions you might start finding your team having, and without that experience, some decisions might get made without all the info, leading to hard-to-reverse decisions later. ## Conclusions If your team needs an API that can be consumed by multiple applications or serve the general public, GraphQL _may_ be the right solution for it. However, the business needs to understand the implications of adopting this style of development. There will be tradeoffs, like with any solution, but the ones exposed by GraphQL do force a shift in the mental model of many developers. This isn't to say that you should give up on GraphQL in favor of REST or RPC. In fact, REST and RPC both have their own pitfalls that could harm the user experience or limit your business's ability to grow onto more platforms, and you really need to consider them as you have these discussions. I personally enjoy the GraphQL mental model and developing in it, and I have found that a lot of the solutions I've built with it have benefitted enormously from the decision to use it.
The payoffs definitely came much later than many stakeholders would have wanted, and setting up the ecosystem in the organization took longer than desired, but the flexibility and adaptability now that these systems exist are appreciated by the business, the developers, and the consumers of their products. If you want to continue this conversation, please reach out on [Twitter](https://twitter.com/dustinsgoodman) or [Bluesky](https://bsky.app/profile/dustingoodman.dev).GraphQL, ArchitecturePerforming a Layoff with Empathyhttps://dustingoodman.dev/blog/20230905-empathetic-layoffs/My team had to perform a layoff recently. There were several lessons and some interesting insights I gained from the situation. In this post, I'll discuss some of the lessons I learned in handling this layoff and how my organization's leaders did their best to treat everyone humanely. Tue, 05 Sep 2023 00:00:00 GMTLayoffs in the tech industry have been rampant since the end of last year. Unfortunately, my team had to undergo layoffs recently as well. The decision to perform this layoff was difficult to make, and the follow-up decisions on how to execute it were equally challenging. At the end of the day, I think my team handled the situation as well as it could be handled. Out of respect for the company and those involved, I will not speak to the specifics of why these layoffs had to happen or details about who was impacted. This post is to share what I thought were positives from the process and provide insights for those having to make similarly challenging decisions. I want to share how we approached these layoffs from a **place of empathy**. ## Preparing the Team One of the original questions asked by our C-suite was "how can we do these layoffs in a humane way?" At the time, I think I took this question for granted.
This is not to say that I didn't think about ways to be humane for our team, but what I learned a bit later is that what I thought was humane wasn't necessarily humane for another team. To explain, I saw [a video of Dave Portnoy of Barstool Sports sharing his perspective on his recent layoff](https://www.youtube.com/watch?v=lmJAb7Z06xQ) - to be clear, I don't agree with Dave, but his thoughts and the thoughts of those on the show with him were contrary to my beliefs. My immediate reaction was to read the comment section in hopes of finding people blasting Dave to confirm my belief, and to my surprise, the comments were supportive of how Dave and team implemented their layoffs. I was initially flabbergasted by this finding, but as I thought about it, different teams and industries likely prefer to handle things differently. I thought back to the 2011 film [Margin Call](https://www.imdb.com/title/tt1615147/), which starts with a Wall Street firm undergoing layoffs preceding the 2008 financial crisis. The characters talk about the ruthlessness of the layoff, at times referring specifically to certain treatments of characters involving security escorts and the lack of warning for the layoffs that occurred. In both these cases and in ours, the general underlying consensus is "don't surprise the team". Each group and industry requires different handling. For us, this meant highlighting certain facts to the team. Since I started with the company, we've been transparent with our team regarding the work coming in and leaving, which directly impacts our capacity. However, overall insight into all resource allocations isn't regularly shared, which creates some opaqueness. To alleviate this, we held several all-hands meetings preceding the layoff to ensure the team understood that our capacity was lower than needed and how they could help improve the company's state.
Our goal was two-fold: (1) ensure the team knew that we needed to increase our sales opportunities and (2) not create a workplace of fear. ## Executing the Layoff One of the comments I made to the original question was "don't leave people in limbo". At a previous job, the team had to go through layoffs. Frankly, those layoffs were handled rather poorly, and I learned about them during some planned time off. I received a late-evening text message from a few colleagues to the tune of "It was nice working with you". I was freaking out - did these people lose their jobs? Was I publicly fired during time off? I finally got my boss on the phone the next morning, where he explained that my job was safe, that those folks who texted me were no longer with the company, and what had occurred the day before. I'll spare you the details of all these interactions, but the lesson learned was - **make sure people who are keeping their jobs know what's going on**. This is especially true for companies that need to immediately revoke system access due to immediate termination. There are a couple of ways to solve this. Option 1 - hold an all-hands meeting for those that will be staying to let them know that a layoff is coming and that they were not impacted. Option 2 - tell each team member staying individually. Option 1 has the pro of being less time-consuming, but it has the con that everyone impacted may find out before you've had the chance to tell them yourself, leading to unideal results. Option 2 has the major con of being time-consuming; however, it prevents team members from knowing who is impacted ahead of time, allowing you to communicate what is needed to the impacted individuals. Option 2 has the additional con of creating some obscurity around how to manage the transition of the layoff, but this can be mitigated. From here, you have the same decision to make for informing those impacted, which has similar pros and cons.
If you want to work with these individuals again in the future, I strongly recommend option 2. It gives an important personal touch that helps maintain the relationship during a very trying situation. For both situations, I do recommend that you have a "script" to deliver from. I think the following bullets are good high-level ones to consider: - What is happening and how the individual is impacted. - Share wishes for communication to afford everyone an equal opportunity to find out the news meaningfully. - For those staying: - Share when individuals are leaving - Share how the company is treating those impacted - Share what other changes and restructuring are happening that can be shared - Share next steps - For those impacted: - When the individual's last day is - The high-level details of severance, if available - Note any key follow-up meetings that need to occur - If they are given a notice period, set expectations for the notice period - If dismissing immediately, clarify next steps - Leave time to answer questions or allow them to respond - Thank them for all their hard work and commitment to your company ## After the Layoff Now that the news has been delivered, there are a few things to keep in mind as a leader. First, you've had more time to process what's going on, so you may be ready to move on to the next stage of grief - your team still needs some time to catch up. Be there for your team and help them process the events around you. Definitely discuss this ahead of time with other leaders to ensure you are all aligned on messaging, as there will be difficult questions that need to be answered meaningfully and consistently. Second, those staying are still at risk of leaving - just because they survived the layoff doesn't mean they want to stay. You need to create transparency with those remaining and give them reason to continue to trust you and believe in what the organization is doing.
Finally, follow through on any commitments made during the layoff. In our case, we offered continued support to team members in finding their next roles, and we followed through on this by leveraging our networks and offering help where requested in prepping these individuals for interviews and application processes. I'm writing this at a time when we're still processing what's happened and some of the impacted team members still remain. I'm sure there will be more lessons here, but the above are the ones I've learned so far and have appreciated. We're starting to find our new daily rhythm and continue to work closely with our teams to navigate our changed company. ## Closing Thoughts Layoffs are not easy. I hope I'm never again in a position where I need to make decisions around them. However, if I am, I'll look positively on this experience, as it's prepared me to approach the situation with empathy. I'm sure others will disagree with some of the points I made or have different approaches. Ultimately though, I think as long as you're doing your best to treat your team with dignity, you're doing the best you can, regardless of how others may look upon your circumstances.Engineering LeadershipUtilizing API Environment Variables on Next.js Apps Deployed to AWS Amplifyhttps://www.thisdot.co/blog/utilizing-api-environment-variables-on-next-js-apps-deployed-to-aws-amplifyIf you want to deploy your Next.js app to Amplify, you might need to utilize this custom build pattern to get environment variables to work in your API routes. This is a quick explanation of how to achieve a working API.
Fri, 16 Jun 2023 00:00:00 GMTNode.js, ReactJS, AWSChallenges of the SWE Interview Processhttps://dustingoodman.dev/blog/20240101-challenges-swe-interview-process/https://dustingoodman.dev/blog/20240101-challenges-swe-interview-process/The SWE interview process has always been riddled with challenges. Specifically, how we approach technically validating a candidate continues to be a struggle. These are my thoughts on that process and where I hope to see the industry move. Mon, 01 Jan 2024 00:00:00 GMT The software engineering interview process has always been a challenging problem. I spoke on the subject back in 2018 at Women Who Code's We RISE conference, where I discussed the wide array of interview processes companies were employing. I had a unique perspective at the time, as I was actively applying for new roles while also heavily interviewing candidates for my own company. Ultimately, I was looking to simplify the process and make it more practical for both small and large companies alike. Then, right before the holidays, I was scrolling Twitter and noticed this post from [Anthony Mays](https://twitter.com/anthonydmays). <blockquote><p>Why can't the SWE interview process just be:<br /><br />1) I submit a cool project I verifiably worked on<br />2) We do a behavioral interview<br />3) PROFIT!</p>— Anthony D. Mays (@anthonydmays) <a href="https://twitter.com/anthonydmays/status/1737606020090962180?ref_src=twsrc%5Etfw">December 20, 2023</a></blockquote> My immediate reaction was "YES!" This is the type of process we ideally would employ industry-wide, and it would greatly improve the experience for everyone involved. Unfortunately, it has complications in a real-world setting that would need to be resolved.

## My Team's Process

Before we get into the challenges, I want to explain my current team's process and why we operate this way, so I can talk about the complexities of the technical interview. Our steps for engineers are as follows: 1.
Resume review to ensure the candidate is qualified to continue. 2. Candidate responds to several phone screen questions in writing. 3. Candidate completes our take home exercise. 4. The exercise is reviewed by our internal team and graded for level. 5. Candidate does 2 behavioral interviews: 1 with a tech lead or manager, 1 with our CEO. 6. Offer and negotiation phase, including a background check. There are some optimizations that could be made in the first couple of steps. One concern with our process is that you don't actually talk to a human until step 5. Our team does this for a few reasons, including scheduling challenges due to timezones, the volume of candidates when a job is listed, and others. But there are a few elements I really liked as a candidate and continue to appreciate as a manager.

### The Phone Screen Questionnaire

Conducting our phone screen in writing lets individuals take their time to respond rather than making them think on their feet when they're nervous in an interview setting. It also helps establish expectations and identify any key alignment issues early. The first issue we can quickly resolve is salary misalignment: we have a budget for the position, and if the ask is out of band, we can have that conversation quickly via email and determine whether it's worth both parties' time to move forward. We also ask some questions around job interest and values. These help identify early whether a candidate has misaligned intentions or will not be a good long-term fit for our team.

### Take Home Exercise

For our take home, we've curated an exercise that aligns with the types of tasks we'd expect a candidate to perform day-to-day on our team. We cap the exercise at 4 hours but allow a full week to turn it around, as candidate schedules with other responsibilities can be challenging and we want them to have the flexibility to complete the exercise when they're able to do so comfortably.
However, we try to limit them to 4 hours so they don't spend an excessive amount of time on the project. This also serves a secondary purpose: helping us understand how they think and prioritize requirements, which is extremely valuable given our business model. The standardized exercise is great for our team because we can train multiple individuals to grade and rate exercises. This allows our process to scale as needed and provides a level of consistency in the process.

### Behavioral Interviews

These serve as a great time to really get to know the candidate and clarify any outstanding questions from the questionnaire and take home. Because the interviewer wasn't fully involved in the questionnaire or take home review, they bring less bias, which gives the candidate a fair chance.

## Using Existing Code for Take Home Submissions

So coming back to the original challenge: why can't candidates use their own open source projects for submission? My team tried this a few times, and it had some major downsides for our process.

### Finding what the candidate did...

Candidates struggled to find the right project to share with us. Several would point us to large open source projects or group projects they did with friends. This alone presented the challenge of finding what code the candidate actually wrote. We attempted to request pull requests instead, which narrowed their submissions too far to attain any signal from them. Someone might argue this would show their collaboration skills, since there are PRs with feedback and other linked issues. Unfortunately, most candidates were talking to other collaborators in Discord or other messaging apps and managing issues outside of GitHub. This did very little to help us grade them.

### Lack of project context

Another problem these projects present is the amount of context required to get into the codebase.
Let's say we were able to find a good sample of the candidate's code. Depending on their communication skills in the pull requests, we might not understand what the change is or why they made it. This puts extra effort on reviewers to determine what's going on, which can take significantly more time than reviewing a standardized submission.

### Demonstrating the wrong skills

The last challenge is determining what the candidate is demonstrating with their submission. A lot of the time they're showing their ability to manipulate existing code or deliver small algorithms because that's what they think is interesting. Our take home, for instance, is designed to identify the skill sets needed for the role we're hiring for, but these self-submissions may only cover one aspect of our needs. When any of these issues comes to light, we end up asking for our own take home anyway. This has the downside of extending the interview period, which can lead to the candidate not getting the role or finding a different one. It's very frustrating for both interviewer and candidate when this happens, so we've opted to avoid it altogether and stopped accepting existing projects.

## Conclusion

Overall, the idea of receiving existing projects is ideal, but they would need to be standardized in a way the full industry would accept. I don't think this is feasible, especially with emerging generative AI tooling changing how we approach problem solving. I personally like our process, as it seeks to get candidates to a yes or no as quickly as possible (max ~3 weeks). As a candidate, I was put off by the lack of human interaction early on, but that was also a great test for remote work, where interactions can be limited.
There are definitely ways to improve all these processes, and I hope one day we find a better solution as an industry, but for now, I think each company will have to employ what they think is best for their team.Engineering LeadershipExtending GraphQL Schemas with Custom Scalarshttps://dustingoodman.dev/blog/20231027-extending-graphql-schema-with-custom-scalars/https://dustingoodman.dev/blog/20231027-extending-graphql-schema-with-custom-scalars/Out-of-the-box GraphQL is extremely powerful in allowing us to define the shape of our data and allow others to consume it. But what if we could give more guidance and clarity at the field level for consumers? In this post, we'll dive into custom scalars and how you can do just that. Fri, 27 Oct 2023 00:00:00 GMT[Scalars in GraphQL](https://graphql.org/learn/schema/#scalar-types) define how the leaf nodes in our responses will be resolved. Out of the box, GraphQL provides the following types:

- `Int`: signed 32-bit integer
- `Float`: signed double-precision floating point value
- `String`: UTF-8 character sequence
- `Boolean`: `true` or `false`
- `ID`: unique identifier serialized as a string

These are sufficient for providing any data interface layer, but they don't always tell us details about the data itself. This is where custom scalars come in.

## What are Custom Scalars?

Custom scalars allow us to create more explicit rules for leaf node data. For example, let's say we're trying to track inventory for an item in our store's inventory management system. Out of the box, we'd probably define our schema as follows:

```graphql
type Product {
  "Unique product identifier - UUID format"
  id: ID!
  "Product's display name"
  name: String!
  "How many of this item do we have in stock"
  itemsInStock: Int!
}
```

At first glance, you can tell that `itemsInStock` is a number.
When we start to think about this from a real-world perspective though, the `Int` type allows us to set `itemsInStock` to a negative number, which doesn't make sense and should not be allowed in our API. This is where custom scalars step in. We can define a new scalar called `NonNegativeInt` that only allows values of 0 or more. This gives our API more depth and further clarifies the data in these fields. This applies to more than just numbers: we can create custom string or date types too. In my example above, I stated in the doc string that the ID field would be a UUID. [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier) has a defined structure that we can validate, so I can also create a UUID type, and now my schema can be redefined as:

```graphql
type Product {
  "Unique product identifier"
  id: UUID!
  "Product's display name"
  name: String!
  "How many of this item do we have in stock"
  itemsInStock: NonNegativeInt!
}
```

This is much clearer to consumers and allows them to model their data more strongly. So how do I create a custom scalar?

## Creating Custom Scalars

Custom scalars, like other type definitions, require 2 things: (1) the type definition and (2) the resolver. For the type definition, it's as simple as declaring a name in your SDL using the `scalar` keyword. Using our `NonNegativeInt` example, we would do the following:

```graphql
scalar NonNegativeInt
```

This gives us the scalar but doesn't define how it works. This is where the resolver comes in. The resolver is defined differently than other GraphQL resolvers: it's built using the `GraphQLScalarType` class from the `graphql` library and then included in our server configuration.
This code looks like:

```javascript
import { GraphQLScalarType, Kind } from 'graphql';
// createGraphQLError is exported by graphql-yoga (and @graphql-tools/utils)
import { createGraphQLError } from 'graphql-yoga';

export const NonNegativeIntResolver = new GraphQLScalarType({
  name: 'NonNegativeInt',
  description: 'Integers that will have a value of 0 or more.',
  serialize(value) {
    return processValue(value);
  },
  parseValue(value) {
    return processValue(value);
  },
  parseLiteral(ast) {
    if (ast.kind !== Kind.INT) {
      throw createGraphQLError(
        `Can only validate integers as non-negative integers but got a: ${ast.kind}`,
        { nodes: ast }
      );
    }
    return processValue(ast.value);
  },
});

function processValue(value) {
  const parsedValue = parseInt(value, 10);
  if (!Number.isInteger(parsedValue)) {
    throw createGraphQLError(`Value is not an integer: ${value}`);
  }
  if (parsedValue < 0) {
    throw createGraphQLError(`${parsedValue} is less than 0`);
  }
  return parsedValue;
}
```

Let's break this down to understand the different pieces. First, we have the `name` field, which is the display name for this scalar type; it should match the scalar we defined in our SDL. The `description` is our documentation for the type and will appear in a schema explorer when we look deeper. Then we have 3 key functions that are required for every scalar type definition: `serialize`, `parseValue`, and `parseLiteral`. `serialize` takes the value provided by our backend and coerces it into a JSON-compatible format so it can appear in the response. Looking at our example, if the value `'1'` is passed in, `serialize` coerces it into the integer `1`, making it compatible with our schema and with JSON. `parseValue` handles inbound data from the frontend, making it valid on the backend; this is how resolver arguments get parsed on the server. So again looking at our example, if we provide a validly encoded integer as an integer or string, the server parses it into an integer for resolvers to use.
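Because the coercion logic is plain JavaScript, you can exercise it without standing up a server. Here's a minimal standalone sketch mirroring the `processValue` helper above, with plain `Error` subclasses standing in for `createGraphQLError` so the snippet has no dependencies:

```javascript
// Standalone sketch of the NonNegativeInt coercion logic; no GraphQL
// dependency required. Mirrors the processValue helper from the resolver.
function processValue(value) {
  const parsed = parseInt(value, 10);
  if (!Number.isInteger(parsed)) {
    throw new TypeError(`Value is not an integer: ${value}`);
  }
  if (parsed < 0) {
    throw new RangeError(`${parsed} is less than 0`);
  }
  return parsed;
}

// serialize: backend value -> JSON-safe response value
console.log(processValue('1')); // 1

// parseValue: client-provided variable -> server value
console.log(processValue(42)); // 42

// invalid values are rejected rather than silently coerced
let rejected = false;
try {
  processValue(-5);
} catch (e) {
  rejected = true;
}
console.log(rejected); // true
```

This covers the `serialize` and `parseValue` paths; `parseLiteral` additionally checks the AST node's kind before coercing.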
`parseLiteral` takes hard-coded values from an operation document's abstract syntax tree (AST) and handles parsing them as validation. So in our example, if we ran an operation like:

```graphql
query MyQuery {
  products(itemsInStock: "1") {
    id
    name
    itemsInStock
  }
}
```

the `parseLiteral` function would receive an AST node with a kind of `Kind.STRING` and the value `"1"`. Our function rejects this because it expects the kind to be `Kind.INT`, so this isn't a valid value. As you can see, we also created a `processValue` function that parses and validates our value and is reusable in all 3 functions. This example allowed for reusability, but other cases, like a Date scalar, might require different validation and steps. You should check out [the example in the Apollo docs](https://www.apollographql.com/docs/apollo-server/schema/custom-scalars/#example-the-date-scalar) for how they dealt with that specific case.

## Conclusion

We now know how to enhance our API schema using custom scalars and how to create them. Before we go though, The Guild has made an awesome library of reusable scalars called [`graphql-scalars`](https://github.com/Urigo/graphql-scalars), with over 50 commonly used custom scalars. When you're first building your GraphQL API, I strongly recommend starting with these scalars rather than building your own. Eventually, you may find you want to build your own to handle localization or handle certain cases slightly differently, and that's okay! But when you're first starting out, it's faster to just borrow these. Also, not all scalar types need this much validation. These are a tool for well-defined, well-structured scalar types.
If you're just looking to validate the values of a field, I'd recommend just writing validation code into your resolver and adding a docstring comment on the field so people understand what's expected for that field.GraphQL, ArchitectureGetting the most out of your project management toolhttps://www.thisdot.co/blog/getting-the-most-out-of-your-project-management-toolhttps://www.thisdot.co/blog/getting-the-most-out-of-your-project-management-toolDoes your team constantly complain about your project management tool? In this post, we'll talk about why this might be true and how you can get the most out of your project management tools!... Wed, 17 Jan 2024 00:00:00 GMTProject ManagementHow to configure and optimize a new Serverless Framework project with TypeScripthttps://www.thisdot.co/blog/how-to-configure-and-optimize-a-new-serverless-framework-project-withhttps://www.thisdot.co/blog/how-to-configure-and-optimize-a-new-serverless-framework-project-withElevate your Serverless Framework project with TypeScript integration. Learn to configure TypeScript, enable offline mode, and optimize deployments to AWS with tips on AWS profiles, function packaging, memory settings, and more.... Fri, 26 Jan 2024 00:00:00 GMTServerless, TypeScript, AWSConfigure your project with Drizzle for Local & Deployed Databaseshttps://www.thisdot.co/blog/configure-your-project-with-drizzle-for-local-and-deployed-databaseshttps://www.thisdot.co/blog/configure-your-project-with-drizzle-for-local-and-deployed-databasesIf you're using Vercel's Postgres offering, you should check out how to configure your project with Drizzle for local and deployed databases. 
Fri, 08 Mar 2024 00:00:00 GMTArchitecture, Node.js, TypeScriptThoughts on the Future of HTTP APIshttps://dustingoodman.dev/blog/20240506-thoughts-on-the-future-of-http-apis/https://dustingoodman.dev/blog/20240506-thoughts-on-the-future-of-http-apis/The introduction of React Server Components (RSCs) and Actions has started a conversation in the JavaScript community about how we interact with our databases and backend services. When developing a new application, do we need to have an API in front of the database? Mon, 06 May 2024 00:00:00 GMTThe introduction of React Server Components (RSCs) and Actions has started a conversation in the JavaScript community about how we interact with our databases and backend services. When developing a new application, do we need to have an API in front of the database? [Cory House](https://twitter.com/housecor) asked this question on Twitter, and the responses were mixed. <blockquote><p>Poll: Do you believe every web app should have an API (REST/GraphQL/RPC) in front of the DB?<br /><br />If always, why?<br />If it depends, upon what?</p>— Cory House (@housecor) <a href="https://twitter.com/housecor/status/1782402966038679644?ref_src=twsrc%5Etfw">April 22, 2024</a></blockquote> I've been thinking about this question since I saw it. I initially responded with "No, it depends" but didn't elaborate, as I was still putting my thoughts together. I think this question was really brought about by looking at the React and Next.js ecosystem. People are starting to put database calls directly in server components and mutate data directly via server actions. For a lot of JavaScript developers, this is a significant shift in how we think about fetching and mutating data in a system. For years, we've been using REST and GraphQL APIs to interact with our data through tools like the Fetch API, Apollo, Axios, and others, and now we're finding we don't necessarily need these tools.
Looking at these patterns though, while I recognize their differences from Ruby on Rails or PHP Laravel implementations, I can't help but see some of the paradigms those tools established. Looking at a Rails controller, for example, we can see that in its base form it is conceptually similar to this new pattern:

```ruby
# app/controllers/users_controller.rb
class UsersController < ApplicationController
  def index
    @users = User.all
  end
end

# app/views/users/index.html.erb
<div>
  <% @users.each do |user| %>
    <p><%= user.name %></p>
  <% end %>
</div>
```

In this example, our controller fetches all the users from the database and then renders each user's name in a paragraph tag in the DOM. Rails separates this work across files, but all of it is done on the server. With RSCs, we would write something like:

```jsx
// components/Users.server.tsx
export default async function Users() {
  const users = await db.query('SELECT * FROM users');
  return (
    <div>
      {users.map((user) => (
        <p key={user.id}>{user.name}</p>
      ))}
    </div>
  );
}
```

There's a beauty in the React version, as we can clearly see all the code in one file. We'd probably refactor the SQL query out into a utility for reuse across our app. But what happens when we need the same fetching logic outside of our web application? Let's say new business needs arise where we need a mobile application or a third-party developer API. We'd have to rewrite the logic elsewhere, either in an API route or a new API service. In Rails, we'd just enable our API to respond with JSON, e.g.

```ruby
# app/controllers/users_controller.rb
class UsersController < ApplicationController
  def index
    @users = User.all
    respond_to do |format|
      format.html
      format.json { render json: @users }
    end
  end
end
```

In the Next.js version, we'd have to add an API route that leverages a shared utility to respond with the data, adding a specific file to create the route and respond, e.g.
```jsx
// app/api/users/route.ts
import { getUsers } from '../../../utils/db';

export async function GET() {
  const users = await getUsers();
  return Response.json(users);
}
```

In the RSC world, we've had to write additional logic or infrastructure to extend our application to support these additional use cases, whereas in the Rails example, we just enabled the feature with a few lines of code. So what does this mean for the future of HTTP APIs? Generally speaking, I think we're going to see more web applications built without an HTTP API initially, specifically more web-first applications. With Apple finally starting to open the door to Progressive Web Applications (PWAs), I think app developers are going to be less inclined to build native mobile applications due to the complexities of maintaining those codebases and other app store restrictions. This will enable businesses to focus on a single platform for their products and help them streamline their development processes even further. Now, this isn't to say all businesses will stop mobile development, as some rely on native features and will need to go this route. But with the React Native team trying to bring RSCs to React Native, these teams will be able to achieve the same results as web teams, which will decrease the need for HTTP APIs. Businesses that provide third-party developer APIs will likely be the only ones that need to create and maintain HTTP APIs. With a lot of the clients and businesses I'm working with, I'm seeing more and more collaboration among companies, so the need for these APIs will remain prevalent. But for newer companies that won't start from a REST or GraphQL API, it'll be interesting to see how interested in, or averse to, these partnerships they'll be, given the cost of creating and maintaining APIs they didn't previously need.
I think the future of HTTP APIs is going to be driven more by the business needs of the company and less by its technical needs. If a company needs to provide a third-party developer API, it'll build one. If it doesn't, it won't. This will allow companies to focus on their core business and not on the technical debt that comes with maintaining an API they don't need. It's still early in the adoption and rollout of these new tools. I'm curious to see how the framework and open source teams creating this paradigm shift will address this problem as it becomes relevant to them. I don't foresee tools like GraphQL or RPC going away given their existing adoption in key enterprises, but I do think we'll see fewer new applications starting with these tools and fewer HTTP APIs being developed in general. Some organizations may discover the need to split off core functionality into separate services, and that may lead to new HTTP APIs leveraging REST, GraphQL, or RPCs. It'll be interesting to see how those services get re-integrated into the main application and how teams will manage the complexity of these services. I've been sad to see the shift away from GraphQL, as it's a tool I enjoy building with and see great benefit in, but with RSCs and other techniques like Remix's loaders, the benefits it once provided for building frontend applications have definitely lessened.
I think with the rise in popularity of typed languages, we'll likely see continued adoption of GraphQL and RPCs for APIs, as they provide a lot of similar benefits, and there will still be applications built using HTTP APIs, but I think we'll see far fewer of them in the future.Architecture, REST, GraphQLDemystifying React Server Componentshttps://www.thisdot.co/blog/demystifying-react-server-componentshttps://www.thisdot.co/blog/demystifying-react-server-componentsReact Server Components (RSCs) are the latest addition to the React ecosystem, and they've caused a bit of a disruption to how we think about React.... Fri, 02 Feb 2024 00:00:00 GMTReactJS, ArchitectureHow to Leverage Apollo Client Fetch Policies Like the Proshttps://www.thisdot.co/blog/how-to-leverage-apollo-client-fetch-policies-like-the-proshttps://www.thisdot.co/blog/how-to-leverage-apollo-client-fetch-policies-like-the-prosApollo Client's caching and fetch policies are powerful tools that can help you optimize your GraphQL queries. We'll explore how to leverage these features to improve the performance of your application. Fri, 17 May 2024 00:00:00 GMTArchitecture, GraphQLExecuting Expensive Database Changeshttps://dustingoodman.dev/blog/20240529-executing-expensive-database-changes/https://dustingoodman.dev/blog/20240529-executing-expensive-database-changes/One of my engineers asked me a question regarding how to properly roll out a change to a database operation that involved an expensive query in an ecosystem that was poorly documented and had a lack of consistency among deployment environments. This post is a summary of my response. Wed, 29 May 2024 00:00:00 GMT

## The Situation

One of the engineers on my team brought me a question the other day regarding a change they were making, curious how to effectively roll it out. They had built, tested, and validated a new operation that appeared like it would fix the underlying data problem that was reported.
However, once it hit production, the operation was taking too long and negatively impacting the database's performance when run, so they reverted the change until they could devise a better solution. In this particular situation, the operation was an aggregation on a MongoDB database that joined data across multiple collections. We won't debate the merits of SQL vs. NoSQL today; the strategy I advised my colleague toward applies regardless of the underlying database engine. Another important piece of information is that the database was deployed to 4 environments: development, staging, pre-production, and production. None of the environments reflected each other in terms of configuration or dataset size. We'll see why this is important as we walk through the strategy.

## Step 1: Understanding the Problem

Database performance is impacted by a variety of factors: the underlying engine, the hardware it's running on, the size of the dataset, the indexes that are present, and so on. The first step to understanding why the operation was slow is to understand how the database executes the query. Most (if not all) databases provide a function to inspect, or `explain`, how a query will be run. I'll refer you to the manual for your specific database engine, but when you run this operation, you can learn some key insights into how the database will operate. You should see things like:

- The indexes that are being used
- The number of documents being scanned
- The number of documents being returned
- The time it took to execute the query

Each piece of information tells you a different piece of the story. For example, if you see the database is scanning the entire table/collection to find the records you want, you may be missing an index that would reduce the number of items scanned. This is also why it is important to ensure that all your databases are configured the same.
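To make that full-scan example concrete, here's an illustrative check over an explain result. The document shape loosely follows MongoDB's `executionStats` output (`COLLSCAN`, `totalDocsExamined`, and `nReturned` are real MongoDB terms), but the values and the 1% threshold are invented for this sketch:

```javascript
// Hand-written object shaped loosely like a MongoDB explain() result.
// The values are made up for the example.
const plan = {
  queryPlanner: { winningPlan: { stage: 'COLLSCAN' } },
  executionStats: {
    totalDocsExamined: 100000,
    nReturned: 25,
    executionTimeMillis: 840,
  },
};

// A full collection scan that returns a tiny fraction of the documents
// it examined is the classic signal that an index is missing.
function looksLikeMissingIndex(p) {
  const fullScan = p.queryPlanner.winningPlan.stage === 'COLLSCAN';
  const ratio =
    p.executionStats.nReturned / p.executionStats.totalDocsExamined;
  return fullScan && ratio < 0.01;
}

console.log(looksLikeMissingIndex(plan)); // true
```

A healthy plan would instead show an index scan stage and examined/returned counts of the same order of magnitude.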
Without the same configuration, you can't guarantee that the query will perform the same across all environments. This is why it's important to have a consistent deployment strategy for your databases, and why this is the first step in the strategy: without knowing your expected query plan, you can't effectively tune the query.

## Step 2: Planning How to Tune Your Collection

Going back to our developer's situation, it turns out that in production the collection was at least 10x larger than any other database they had access to for testing and did not have the same indexes. At this point, you may be thinking "oh well, just go add the index and move on". Unfortunately, this can have negative ramifications downstream: indexes can take up a lot of disk space and can slow down write operations. Additionally, you could accidentally take your database offline by locking up the table/collection while the index is being built. The first step is ensuring all your indexes are aligned across environments. I'd recommend making the other environments match production first, then finding any indexes that exist in the remaining environments but aren't present in production. Any such index should be further investigated to ensure it's necessary; if it's not, it should be removed. Then, if possible, I would create a replica of the production database for testing purposes. This is where you can test your query against a comparable dataset. If you can't do this, you can try to generate a dataset that is similar in size to production.

## Step 3: Testing Your Query

Now with this, you can run your query and see how it performs. If it's still slow, you can try to add an index and see if that improves the performance. Rinse and repeat this process until you have a query that performs well.

## Step 4: Deploying Your Changes to Production

Now, it's time to roll out your changes.
Before the change is deployed, ensure that you have a rollback plan in place. This could be as simple as a script that removes the index you added. This is important because if the query doesn't perform as expected, you can quickly revert the change. You'll want to apply your indexes in the background. Most databases offer this feature, but it's important to understand how it works. For example, in MongoDB, you can add an index in the background, but the build can still block writes at points. This matters because if you have a high write volume, you could potentially lock up your database. Having analytics that show when your site is under low volume can help you pick the best time to apply the index given these constraints. After the index is in place, you can roll out your code changes and start leveraging the new query. Be sure to monitor your database performance after the change is rolled out to ensure it's having the desired effect.

## Conclusion

Effectively rolling out a complex database operation change involves a detailed, step-by-step approach to ensure performance and stability. First, understand the query's performance by using database tools to inspect execution plans and identify issues like missing indexes. Ensure that all environments mirror production in terms of configuration and data size to provide consistent testing grounds. Align indexes across environments, create or simulate a production-sized dataset for testing, and iteratively refine the query. Before deploying changes, have a rollback plan and schedule updates during low-traffic periods to minimize impact. Finally, monitor the database post-deployment to confirm the improvement. Following these general guidelines can help you avoid critical failures during high-traffic periods and ensure that your database operations are optimized for performance and stability.
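As a closing aside, the index-alignment audit from Step 2 can be sketched as a small script. This is a hypothetical helper: the environment and index names below are invented, and in practice you'd pull these lists from the database itself (e.g. `db.collection.getIndexes()` in MongoDB):

```javascript
// Compare the index names in each environment against production and
// report differences. All names here are invented for illustration.
const indexesByEnv = {
  production: ['event_id_1', 'customer_id_1'],
  staging: ['event_id_1'],
  development: ['event_id_1', 'customer_id_1', 'debug_field_1'],
};

function auditAgainstProduction(envs) {
  const prod = new Set(envs.production);
  const report = {};
  for (const [env, indexes] of Object.entries(envs)) {
    if (env === 'production') continue;
    report[env] = {
      // indexes production relies on that this environment lacks
      missing: [...prod].filter((name) => !indexes.includes(name)),
      // indexes this environment has that production does not
      extra: indexes.filter((name) => !prod.has(name)),
    };
  }
  return report;
}

console.log(auditAgainstProduction(indexesByEnv));
```

Running a report like this before and after a rollout is a cheap way to confirm the environments still match.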
Best of luck as you explore your own rollout!Architecture, DevOpsHandling Has-Many Through Cascading Deletes in Railshttps://dustingoodman.dev/blog/20240608-handling-has-many-through-cascading-deletes-in-rails/https://dustingoodman.dev/blog/20240608-handling-has-many-through-cascading-deletes-in-rails/In our most recent stream, we were updating our tests and discovered that we had a cascading deletion that was failing. We spent a lot of time debugging as there wasn't good documentation on the subject. This is a post describing the solution and how to handle this in Rails. Sat, 08 Jun 2024 00:00:00 GMTOn the stream, we're building an event platform using [Ruby on Rails](https://rubyonrails.org/). On this past week's stream (see below), we ran into a problem with cascading deletes while running our tests. This issue stemmed from a misconfiguration of a "has_many through" relationship in our models. To focus on this problem, we'll only look at the relevant parts of our application: the `Event`, `EventSession`, `EventSpeaker`, and `EventSessionSpeaker` models. In our application, an `Event` has many `EventSessions` and `EventSpeakers`. An `EventSession` has many `EventSessionSpeakers`, and an `EventSpeaker` has many `EventSessionSpeakers`. This means we have two one-to-many relations and one many-to-many relation.
Our reduced schema, focused on the relationships, looks like this:

```ruby
ActiveRecord::Schema[7.1].define(version: 2024_05_31_000328) do
  create_table "event_session_speakers", id: false, force: :cascade do |t|
    t.string "event_session_id", null: false
    t.string "event_speaker_id", null: false
    t.index ["event_session_id"], name: "index_event_session_speakers_on_event_session_id"
    t.index ["event_speaker_id"], name: "index_event_session_speakers_on_event_speaker_id"
  end

  create_table "event_sessions", id: :string, default: -> { "gen_random_uuid()" }, force: :cascade do |t|
    # other fields here
    t.string "event_id"
    t.index ["event_id"], name: "index_event_sessions_on_event_id"
  end

  create_table "event_speakers", id: :string, default: -> { "gen_random_uuid()" }, force: :cascade do |t|
    # other fields here
    t.string "event_id"
    t.index ["event_id"], name: "index_event_speakers_on_event_id"
  end

  create_table "events", id: :string, default: -> { "gen_random_uuid()" }, force: :cascade do |t|
    # other fields here
  end
end
```

Our model definitions were as follows:

```ruby
class Event < ApplicationRecord
  has_many :event_sessions, dependent: :destroy
  has_many :event_speakers, dependent: :destroy
end

class EventSession < ApplicationRecord
  belongs_to :event
  has_many :event_session_speakers, dependent: :destroy
  has_many :event_speakers, through: :event_session_speakers
end

class EventSpeaker < ApplicationRecord
  belongs_to :event
  has_many :event_session_speakers, dependent: :destroy
  has_many :event_sessions, through: :event_session_speakers
end

class EventSessionSpeaker < ApplicationRecord
  belongs_to :event_session
  belongs_to :event_speaker
end
```

When we ran an integration test that deletes an `Event`, we received an error that a column `event_session_speaker` was not found. The problem came from how we defined the `dependent` option on `EventSession` and `EventSpeaker`.
For those unfamiliar, the `dependent` option lets you specify what happens to associated records when the parent record is deleted. The `:destroy` option we used deletes the associated records when the parent record is deleted and _runs any associated callbacks_. In this case, the `EventSession` and `EventSpeaker` models were both attempting to destroy their associated `EventSessionSpeaker` records, which produced this query log:

```sql
TRANSACTION (0.1ms)  begin transaction
EventSession Load (0.1ms)  SELECT "event_sessions".* FROM "event_sessions" WHERE "event_sessions"."event_id" = ?  [["event_id", "30663738"]]
EventSessionSpeaker Load (0.1ms)  SELECT "event_session_speakers".* FROM "event_session_speakers" WHERE "event_session_speakers"."event_session_id" = ?  [["event_session_id", "42880873"]]
EventSessionSpeaker Destroy (0.2ms)  DELETE FROM "event_session_speakers" WHERE "event_session_speakers"."" IS NULL
TRANSACTION (0.0ms)  rollback transaction
```

As you can see, the delete query is trying to delete all `EventSessionSpeakers` where an empty column name (`""`) is `NULL`, which is invalid. Both `EventSession` and `EventSpeaker` try to destroy their associated `EventSessionSpeaker` records, so each join record would be destroyed _twice_, and because the join table was created with `id: false`, there is no primary key for Rails to reference when destroying individual rows - hence the blank column name in the generated query.

To fix this, we need to change the `dependent` option on the `EventSession` and `EventSpeaker` models to `:delete_all`. This deletes the associated records when the parent record is deleted **without running any associated callbacks**, issuing a single bulk `DELETE` instead of destroying each record by primary key. This lets `EventSession` and `EventSpeaker` each handle their cleanup individually.
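To make the distinction concrete, here's a small plain-Ruby sketch (not ActiveRecord — the class is hypothetical and the SQL strings are purely illustrative) of how the two strategies differ: `:destroy` issues one `DELETE` per record by primary key and runs callbacks, while `:delete_all` issues a single bulk `DELETE` with no callbacks:

```ruby
# Toy model of a has_many association's two cleanup strategies.
# Not ActiveRecord -- the class and SQL strings are made up purely
# to illustrate :destroy vs. :delete_all.
class ToyAssociation
  attr_reader :callbacks_run

  def initialize(ids)
    @ids = ids
    @callbacks_run = 0
  end

  # Like dependent: :destroy -- one DELETE per record, by primary key,
  # with callbacks firing for each record.
  def destroy_all
    @ids.map do |id|
      @callbacks_run += 1 # a before_destroy/after_destroy would run here
      %(DELETE FROM "event_session_speakers" WHERE "id" = #{id})
    end
  end

  # Like dependent: :delete_all -- a single bulk DELETE, no callbacks.
  def delete_all
    [%(DELETE FROM "event_session_speakers" WHERE "event_session_id" IN (#{@ids.join(', ')}))]
  end
end

assoc = ToyAssociation.new([1, 2, 3])
puts assoc.destroy_all.length # one statement per record
puts assoc.delete_all.length  # one statement total
```

Note that the `:destroy` path deletes by primary key — exactly the mechanism that breaks when the join table was created with `id: false`.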
So, we updated our models to be:

```ruby
class EventSession < ApplicationRecord
  belongs_to :event
  has_many :event_session_speakers, dependent: :delete_all
  has_many :event_speakers, through: :event_session_speakers
end

class EventSpeaker < ApplicationRecord
  belongs_to :event
  has_many :event_session_speakers, dependent: :delete_all
  has_many :event_sessions, through: :event_session_speakers
end
```

Now, when we run the test, we see the following queries run:

```sql
TRANSACTION (0.1ms)  begin transaction
EventSession Load (0.2ms)  SELECT "event_sessions".* FROM "event_sessions" WHERE "event_sessions"."event_id" = ?  [["event_id", "30663738"]]
EventSessionSpeaker Delete All (0.1ms)  DELETE FROM "event_session_speakers" WHERE "event_session_speakers"."event_session_id" = ?  [["event_session_id", "42880873"]]
EventSession Destroy (0.1ms)  DELETE FROM "event_sessions" WHERE "event_sessions"."id" = ?  [["id", "42880873"]]
EventSessionSpeaker Delete All (0.0ms)  DELETE FROM "event_session_speakers" WHERE "event_session_speakers"."event_session_id" = ?  [["event_session_id", "472205080"]]
EventSession Destroy (0.0ms)  DELETE FROM "event_sessions" WHERE "event_sessions"."id" = ?  [["id", "472205080"]]
EventSessionSpeaker Delete All (0.0ms)  DELETE FROM "event_session_speakers" WHERE "event_session_speakers"."event_session_id" = ?  [["event_session_id", "894887044"]]
EventSession Destroy (0.0ms)  DELETE FROM "event_sessions" WHERE "event_sessions"."id" = ?  [["id", "894887044"]]
EventSessionSpeaker Delete All (0.0ms)  DELETE FROM "event_session_speakers" WHERE "event_session_speakers"."event_session_id" = ?  [["event_session_id", "1020543588"]]
EventSession Destroy (0.0ms)  DELETE FROM "event_sessions" WHERE "event_sessions"."id" = ?  [["id", "1020543588"]]
EventSpeaker Load (0.1ms)  SELECT "event_speakers".* FROM "event_speakers" WHERE "event_speakers"."event_id" = ?  [["event_id", "30663738"]]
EventSessionSpeaker Delete All (0.1ms)  DELETE FROM "event_session_speakers" WHERE "event_session_speakers"."event_speaker_id" = ?  [["event_speaker_id", "668127263"]]
EventSpeaker Destroy (0.0ms)  DELETE FROM "event_speakers" WHERE "event_speakers"."id" = ?  [["id", "668127263"]]
EventSessionSpeaker Delete All (0.0ms)  DELETE FROM "event_session_speakers" WHERE "event_session_speakers"."event_speaker_id" = ?  [["event_speaker_id", "350169046"]]
EventSpeaker Destroy (0.0ms)  DELETE FROM "event_speakers" WHERE "event_speakers"."id" = ?  [["id", "350169046"]]
EventSessionSpeaker Delete All (0.0ms)  DELETE FROM "event_session_speakers" WHERE "event_session_speakers"."event_speaker_id" = ?  [["event_speaker_id", "1066195201"]]
EventSpeaker Destroy (0.0ms)  DELETE FROM "event_speakers" WHERE "event_speakers"."id" = ?  [["id", "1066195201"]]
Event Destroy (0.0ms)  DELETE FROM "events" WHERE "events"."id" = ?  [["id", "30663738"]]
TRANSACTION (0.2ms)  commit transaction
```

Rails now takes each associated session and speaker record, bulk-deletes its join records, and then destroys the record itself before finally deleting the event. This is the behavior we want and lets us perform the cascading delete correctly.

An important note: this synchronous cascading delete is suboptimal and not something you want running in the request path of a production application. Rails 6.1 introduced the `:destroy_async` option, which runs the dependent destroy in a background job. This prevents the cascading delete from blocking the main thread and causing performance issues. It's an optimization we should make and demonstrate on the next stream.

This was a tricky problem to debug, and there wasn't much documentation on the subject. I hope this post helps others who run into similar issues. If you have any questions or comments, please feel free to reach out to me on [Twitter](https://twitter.com/dustinsgoodman), [Twitch](https://twitch.tv/dustinsgoodman), or comment on the above video.
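For reference, here's a hypothetical sketch of the `:destroy_async` variant mentioned above. It assumes Rails 6.1+ with a configured Active Job backend; instead of deleting inline, Rails enqueues an `ActiveRecord::DestroyAssociationAsyncJob` to do the cleanup:

```ruby
class EventSession < ApplicationRecord
  belongs_to :event
  # Enqueues a background job to destroy the join records rather than
  # deleting them synchronously in the request/transaction.
  has_many :event_session_speakers, dependent: :destroy_async
  has_many :event_speakers, through: :event_session_speakers
end
```

Whether this fits depends on your job infrastructure and how quickly the cleanup needs to be visible; it trades immediacy for not blocking the caller.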
You can find the [full source available on GitHub](https://github.com/dustinsgoodman/event-platform-rails).

Ruby on Rails

My Engineering Leadership Recommended Reading List
https://dustingoodman.dev/blog/20241207-engineering-leadership-reading-list/
Sat, 07 Dec 2024 00:00:00 GMT

I was on the Compressed.fm podcast the other day and was asked for my recommended reading list for engineering leaders/managers. I've been recommending these to a lot of people recently, so I threw together my list to share out. I hope you enjoy some of these!

I've had a lot of recent conversations with folks asking me what books I read as an engineering leader. These are my current recommended reads, with a little blurb about why for each.

## Engineering Management for the Rest of Us by Sarah Drasner

Purchase Link: https://www.engmanagement.dev/

This is probably my #1 recommendation to all engineers and engineering managers. I've now read this book twice and gained something new with each read-through. I think this is going to be an annual re-read for me, not just because it's so easy to read but because it has helped center me as a manager. Sarah does a great job helping everyone on a team understand the manager's role and how to get the most out of that relationship. If it were re-titled "Engineering Leadership for the Rest of Us", this book would hold equally good value, because it really does explore how we can all work together to build stronger teams and products.

## The Making of a Manager: What to Do When Everyone Looks to You by Julie Zhuo

Purchase Link: https://a.co/d/1sKoYrs

This one was recommended to me by [Rob Ocel](https://bsky.app/profile/robocel.bsky.social) when we first started working together, because I was still relatively new to management, and it was a great kick-starter.
Julie shares the perspective of someone new to management, from her time joining Facebook as a design intern to becoming a manager of her peers. It talks through navigating some of those challenges and is one I'd recommend for ICs looking to make the jump to manager.

## The Manager's Path: A Guide for Tech Leaders Navigating Growth and Change by Camille Fournier

Purchase Link: https://a.co/d/8VmPmbZ

I just recently finished reading this one and absolutely learned a ton from it. Camille takes us on the journey of growing from a junior IC to a tech executive. She walks us through what the role is at each level, how it transforms from the previous roles, and how your relationship to your team can look, and vice versa. Before this read, I didn't really understand the major differences between Directors and VPs of Engineering compared to the CTO, and now I have a greater appreciation for what each role looks like at different stages of a company.

If you can only pick up one of these books, get Sarah's, but if you can, read all three! These were invaluable to me for different reasons, and I think you would enjoy them all.

## The Culture Map: Breaking Through the Invisible Boundaries of Global Business by Erin Meyer

Purchase Link: https://a.co/d/1niy5qN

If you're going to be managing people from diverse cultures or from around the world, this is a must-read. Erin lays out the information you need to navigate cultural norms that differ from your own. I'm from the South of the USA, where "yes sir", "yes ma'am", and "bless your heart" are common, but these same terms aren't received the same way by people in other regions of the US, much less outside it! This book really helped me clear the hurdles of managing in an international environment.
## No Rules Rules: Netflix and the Culture of Reinvention by Reed Hastings & Erin Meyer

Purchase Link: https://a.co/d/bMggBvi

Erin gets a second slot on this list because she writes exceptionally well, and when paired with Netflix CEO Reed Hastings, she takes all the lessons from "The Culture Map" and gives them a whole new meaning. This book is an insight into how Netflix has created such a standout corporate culture. It's less about engineering and more about operations, but all the same lessons apply, and it really gives you some perspective on what autonomy can mean in the workplace.

## Radical Candor: Fully Revised & Updated Edition: Be a Kick-Ass Boss Without Losing Your Humanity by Kim Scott

Purchase Link: https://a.co/d/fcdSySn

This book exploded throughout workplaces when it came on the market, and for good reason. Learning how to give feedback without being an asshole is an important lesson. As Kim says in the book, radical candor doesn't give you an excuse to be a jerk. It helps you find a means to effectively help your team grow and receive feedback. Honestly, this is a must-read if "radical candor" gets thrown around as a term in your office, because you should hear what its intent is, not just how your workplace has implemented it. I recommend this to individuals at all levels, not just managers!

## Blink: The Power of Thinking Without Thinking by Malcolm Gladwell

Purchase Link: https://a.co/d/69IkNmE

As your teams grow and your business accelerates, making rapid decisions is important, but how rapid is "too rapid" before it leads you to disaster? Malcolm's Blink explores the psychology behind this and also helps you learn to check in with yourself during stressful situations to avoid unnecessary conflict or pain. I recommend this to individuals at all levels.
## Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead by Brené Brown

Purchase Link: https://a.co/d/9se2Lto

If you haven't heard of this book, you have now! Vulnerability can be your best friend in life and will be important as you face new challenges. A former manager of mine recommended this, and it was worth the read. I didn't love the audiobook, if I'm being completely honest, so buy the paperback and take the time to read it. This reshaped how I approach relationships in general. The lessons will be uncomfortable to experience for the first time, but once you work through them, you'll never turn back!

## Extreme Ownership: How U.S. Navy SEALs Lead and Win by Jocko Willink & Leif Babin

Purchase Link: https://a.co/d/bkMwjcH

This may sound like a weird addition to this list, but it was probably my first step toward becoming a manager years ago. A former CTO had all the engineering leads read this book during a period where there were a lot of excuses as to why things weren't getting done. It was a moment of stress and turmoil for our group, and this changed our perspective a bit. After reading it, we each started to take more responsibility for what wasn't going right in our respective areas and started moving toward winning again. It's been a while since I read it, but I remember liking the lessons, even if they were a bit _extreme_.

I'm sure you have some great books of your own, and I'm sure I'll read more in the future, but these are my current picks and recommendations for those looking to get into engineering management. I hope you enjoy these great books!

Some other recommendations akin to this list but not exactly belonging here:

- [The Phoenix Project by Gene Kim](https://a.co/d/3FxuDpo)
- [Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin](https://a.co/d/4spbsO9) - trigger warning, but really good
- [Delivered From Distraction: Getting the Most Out of Life with Attention Deficit Disorder by Edward M. Hallowell M.D. & John J. Ratey M.D.](https://a.co/d/f0OaWia)

Engineering Leadership

From Junior to Staff - What's my actual job?
https://dustingoodman.dev/blog/20250129-from-junior-to-staff-slides/
Wed, 29 Jan 2025 00:00:00 GMT

Slides and speaker notes from a recent talk I gave at a local meetup. This contains the abstract, slides, and speaker notes.

## Abstract

Have you ever wondered what it takes to succeed in each engineering role and how to achieve your career goals as an individual contributor? Let's explore the journey of growing from novice to expert and talk about how expectations evolve along the way. I'll share some of the knowledge I use to coach individuals at each level so that you can adapt and adopt it for your personal growth and needs. Whether you're just starting out or looking to step up to a leadership role, this talk will provide valuable strategies for career development in the tech industry.

## Slides

<a href="https://www.canva.com/design/DAGdPzZAsD8/4xkSZFTR6eGtL_eGB_u_7A/view?utm_content=DAGdPzZAsD8&amp;utm_campaign=designshare&amp;utm_medium=embeds&amp;utm_source=link" target="_blank">From Junior to Staff</a> by Dustin Goodman

## Speaker Notes / Transcript

### Opening Slide

As a software engineer, what is your job? Do you just write code? Do you design the product? Determine what features you should build? Define the roadmap? Set a technical direction? Depending on your company and role, the answer could be yes to all of these or no to most of them. What's more challenging: the more senior your role, the less clear the answers become.

### Agenda

So today, we're going to talk through this for the core individual contributor (IC) roles in a company, what the expectations are at each, and how you can grow in that role.
I'll also touch on some of the key areas you'll need to focus on to reach the next level and some of the things I experienced at these levels.

### About Me

For those that don't know me, my name is Dustin Goodman. I'm currently an Engineering Manager at ClickUp leading their Chat team. I've held tech lead or manager roles for the last 9 years and have helped my reports reach their goals in various environments. Recently, I felt like I was sharing a lot of these notes almost daily with different folks, and when Tracy asked me to talk, I thought it'd be a good opportunity to share this with a larger group.

Quick disclaimer: There's a lot of variation in some of the notes I'm going to give, and they may not hold true for your exact circumstance. I've tried to generalize enough so the advice is applicable, but definitely feel free to pick and choose what's best for you.

### Career Ladder

If your company has been around for a while, they likely have a career ladder. Are you at a startup? If so, this probably doesn't exist for you. The QR code on screen will take you to Rent the Runway's career ladder, which they posted publicly.

Why should you find this? It's your #1 guide to understanding what's expected of you and how to get to the next level. These are usually aligned with your company's values and needs at each level, but they generally follow some common industry trends.

A quick note on promotions too - most companies I've worked with really want you to demonstrate that you can work at the next level consistently before you get it. I'll make some notes on this as we go.

### Junior Level: Need

Let's start with our junior level... Juniors, I hate to tell you, but I think you have the hardest role here. You're relatively new to the industry, and you need to have a wide understanding of a bunch of different programming concepts.
FE engineers need to know HTML/CSS/JS concepts and are probably using a framework like React/Angular/Vue. BE engineers need to know the backend server language, different DBs and their querying techniques, and probably a framework.

The good news is that if you show enough basic understanding and a willingness and capability to learn and be coached, you're going to be fine. The reason is...

### Junior Level: Success

Your success is going to be measured by your ability to adapt. Your leads are going to give you tasks that are extremely well-defined, e.g. "add button to page" or "change text" or "add error handling to this path". The point of these is to help you learn - your job is to take these tasks and try to do them on your own.

However, sometimes you're going to run into something unexpected that a quick search or AI can't help you answer or understand. You need to know how to ask questions. You're not asking someone else to do the work for you; you're asking them to help you understand what you don't know! Ideally, you're never making the same mistake twice and start to see themes and patterns on your own. My suggestion: keep a notebook where you write down the things you learned!

### Junior Level: Growth

Once you get into the groove of doing your tasks, your next goal is to find ways to be more productive, both as an individual and as a team member. Instead of getting extremely well-defined tasks, maybe you let your leads give you slightly less defined tasks, and you start asking the questions needed to resolve the open questions for them.

This is also a great time to find an area of your product that you can own. Have you built a new feature or fixed several issues in a system? These are great candidates: whenever a question arises about that area, you want to be the one answering it. You can also start to pitch improvement ideas for this area - work with your leads on this one, as they'll help you devise a plan, but you'll need to take the lead on the effort.
### Junior Level: Caution

Some words of caution... First, be cautious of "glue" roles. You may see gaps on your team and a way you can fill them. If that role is less technical or doesn't help move your team toward its KPIs or OKRs, flag it to your manager first, but don't jump in head first. These functions can hinder your growth.

My other caution is about your reliance on AI. It's more than okay to use it, and I encourage you to do so, but you need to be able to defend everything the AI outputs. Don't understand why Claude used a certain function? Go read up on it so you can defend it. Your seniors aren't going to read your PRs and go through the same thought process as you, over-analyzing every line you wrote, so this is a great time to own the change you're making. If you take down prod because you checked in AI code without reviewing it, you're putting yourself at risk.

### Junior Level: Next Level

For the next level, keep listening to what I say is expected of mids, but generally you're going to want to be:

- delivering changes on a regular cadence
- designing and implementing small features
- needing less frequent check-ins with your manager
- contributing to the code review process and helping catch issues before they ship

### Mid Level: Need

As a mid, you're going to be expected to contribute consistently. This means you open PRs in an appropriate amount of time for the task, and your PRs are still getting feedback, but probably not the same feedback twice. As an example, maybe your team uses localization on the FE - you might miss this once, but you shouldn't have it come up on another PR in the future, or if you do, it's infrequent. Additionally, you're able to ask questions about your tasks and clarify most assumptions on your own.

### Mid Level: Success

Your success is going to be measured by your output. You're going to start contributing more medium-sized features on your own.
You'll probably handle the tech design and break your work down into reasonably sized chunks. You can manage your work and prioritize the right things while avoiding getting bogged down in trivial details. You may run into conflicts, but this is when you can chat with your manager to find the right path forward.

This is also the point in your career to find a larger area of the product to feel confident contributing to and owning. Ex) One of my mids built a feature we call "launchpad", which allows you to quick-switch to other rooms or start chats on the fly. They now own how we manage user data in our part of the app.

### Mid Level: Growth

Growth as a mid is about mastering your craft. Mids are typically a large producer of features in an org, behind seniors, because most of their time is development. This means your goal is to find better ways to do the work. This could be tuning your personal AI to handle tasks or finding better code abstractions. A gotcha: don't just make these changes without discussing them - at this point, you need to learn to socialize big changes before merging them. Do a POC branch, but get feedback and buy-in from seniors before merging.

This is also a great time to collaborate with senior members and tell them where you and your peers are getting bogged down. Work with them to find solutions to common day-to-day problems. Additionally, you'll want to understand how your work fits into the larger picture of what your team is trying to do.

### Mid Level: Caution

My caution to my mids - you're not a senior yet, and the things that helped you be successful as a junior still apply. Too many times have I worked with mids who forgot to do the learning that comes with the job and just stagnated. This applies to all roles, but our industry is constantly changing, and it's moving FAST! If you don't do a little bit of professional development and learning every month, you won't just stagnate - you'll regress.
An industry challenge we're facing is that what was good enough to be a mid 5 years ago isn't good enough to get a job anymore. You need to stay current and push yourself.

### Mid Level: Next Level

Again - you'll want to listen to what seniors do, but generally speaking you want to:

- start working at the team level to make improvements and make peers and juniors more productive
- help team leaders understand challenges and influence them to make plans that account for solving those challenges - you probably won't change their mind, but anything here can be a win

### Senior Level: Need

Seniors have mastered all the skills that make a good mid-level engineer and need a deep understanding of design choices and tradeoffs. You can effectively communicate choices and decisions across the team. Early seniors are typically the top contributors to a codebase since their time is still predominantly coding. Later seniors are still strong contributors but are sometimes more involved in planning and spike efforts that enable the rest of the team. Seniors are usually experts in an area, or at least know industry trends well enough to lead discussions and solutions.

### Senior Level: Success

The bar for success is much higher now, as you've earned more responsibility. You're going to lead larger feature efforts and be expected to deliver those kinds of results. You're the eyes on the ground and are expected to spot problems before they escalate. This means proactively identifying problematic architectural issues in your code and developing plans to fix them. It also means flagging conflicts with new feature work to prevent production issues such as scalability, performance, or edge cases in UX. You're going to be measured on your ability to reduce issues that derail the roadmap. This means you'll work with your business partners in product, design, and QA to flag and fix issues before they are passed to other engineers.
Ultimately, your success is how effectively your team can operate. Are people constantly reinventing the wheel? That's your failure, because you haven't established a consistent solution.

### Senior Level: Growth

Growth for seniors is an interesting topic - you're in what a lot of companies would consider a career role. You don't need to seek further promotion, and several folks like stopping here. However, you still need to keep up with industry change.

If you want to grow, though, start looking for ways to influence your team's 6-month plan now rather than waiting until 6 months from now to make the change. You want to be solving your future problems today in ways that don't impact deliverables. Think about establishing an architecture pattern that will let your team move faster in the future. Another way to grow is understanding the future of your company's tech needs and finding means to influence that. Example) Your team is using React 19, but you're a SPA without a framework. Do you need to move to a framework to leverage RSC? Is that something your company even needs? Help your leaders understand the pros and cons and set a direction.

### Senior Level: Caution

My biggest word of caution to my seniors is to beware of shiny tech syndrome. This is something I see a lot of newly minted seniors struggle with, not having shaken it off from their mid days. Just because something is new and great doesn't mean it's something you should use. You want to align tech direction with industry standards but also with what will be best for your team. Ex) Your team is an Angular shop, but you're finding it hard to recruit new devs. Do you:

- A) Migrate to something easier to hire for?
- B) Hire React devs and expect them to transition easily?
- C) Develop a training program for onboarding to help those developers adjust?
- D) Change nothing?

The truth is - all of these options are viable and are strategies I've seen different companies take.
You just need to understand your company's specific market and needs to make the right call and influence correctly.

### Senior Level: Next Level

If you are a senior looking to reach staff levels, you're going to start shifting your focus out of your project team and toward the larger dev organization. You want to influence the direction of technical decisions. You want to work across teams to improve their effectiveness. This might mean taking ownership of core systems or figuring out how to make sweeping stability and performance improvements across core business systems.

### Career Paths at Senior+

Before we talk about staff+ levels, I want to take a quick detour. Once you reach the upper echelons of senior status, more opportunities for career growth open up. You can stay on the IC track or move to the management track. That's a separate talk, so I'm not going to go deep into the management side, but on the IC side, as a senior, your main choices are to stay where you are, elevate to staff, or seek to become a tech lead.

### Role Responsibilities

For this pivot, you should understand at a high level how your role changes. As an engineer up to this point, your main job has been to focus on building software. If you go tech lead or staff engineer, your role picks up another major responsibility area. As a tech lead, you take on some people leadership and need those skills. As a staff engineer, you'll need to flex into technical strategy and alignment across the org. This means the 80% of your time that was likely coding before now becomes more like 20-30% of your time, as your other responsibilities will take a significant chunk of it.

### Staff+ Level

For staff and higher, it's less about growth at this point and more about what you need to do to be successful. At this point, you've proven yourself as a senior engineer. Now, you need to showcase how you're more than just another senior.
You're looking ahead at the needs of your team to effect the greatest changes possible. Ex) Prior to my start, my team chose a pattern that didn't encourage reuse across the system, and our system was later flagged to become a central product - we needed to shift to align but couldn't rewrite. We found a way to change how we manage our data to expose it to other systems with minimal rewrite. This has reduced complex rewrites that were deemed essential by months.

This is what I call a 10x engineer - you're not individually contributing 10x the average developer, but you're improving the average output of your entire team so much that it feels like you have 10 more developers than you actually do. You help seniors with opposing views reach consensus faster and help your team be that much more effective.

### Book Recommendation

If you want to explore some of these topics further, I recommend the Pragmatic Engineer newsletter, where I grabbed most of my graphics from. Additionally, I recommend this book. The title is a tad misleading: Camille does a fabulous job talking about your role as an IC as well as the career journey for managers climbing the ladder. The career ladder example I shared earlier is from her team, and she references it in her book. While you might only find personal value in the first half of the book, the second half will be informative so you understand what your boss's boss's job is, and so on.

### Thank you

Thank you all for coming! Let me answer your questions now. Also, if you want to connect with me, this QR code will bring you to my website, which has all my socials. I'm primarily on Bluesky these days. If you send me a LinkedIn request, please know I like to know you first, so come talk to me later.

Engineering Leadership