Matt Gregg — Software Engineer — Journal https://codegregg.com Personal website, portfolio, and journal by Matt Gregg. NAS Learnings: From Zettlab to UnRaid and necessary apps for your setup https://codegregg.com/posts/nas-learnings-from-zettlab-to-unraid-and-necessary-apps-for-your-setup/ Mon, 23 Feb 2026 00:00:00 GMT https://codegregg.com/posts/nas-learnings-from-zettlab-to-unraid-and-necessary-apps-for-your-setup/

The Self-Hosted Journey: What I Learned Building My Home NAS

TL;DR: I backed the Zettlab 4-bay NAS and, while I’m a huge fan of their hardware and mission, I’ve found my stride using Docker containers for most tasks. If I were starting from scratch today, I’d likely build an UnRaid server on a gaming PC to consolidate my gaming and hosting into one machine.

My Must-Have Apps: Jellyfin for video, Navidrome (paired with Arpeggio on iOS) for music, Audiobookshelf for audiobooks, Immich for photo backups, Snapdrop for local file sharing, Tailscale for secure remote access, Cloudflare Tunnels for public access, and Rclone for migrations and backups. Details on each below.

The Hardware: Zettlab and the Community

I jumped into the deep end by backing the Zettlab D4 NAS, a fascinating newcomer featuring built-in machine learning. I truly love what the team is doing; the hardware is sleek, and the Zettlab Discord community is one of the most helpful groups I’ve found in the tech space.

Even though I’ve shifted toward running everything via Docker rather than the native ZettOS apps, I’m still closely following their progress. I’m especially excited to see how they leverage AI and automation tools on the machine as the platform matures. It’s a great piece of kit for anyone who wants a dedicated, community-driven device.

The "Do-Over" Plan:

While I love my D4, if I were doing this again from absolute zero, I’d consider using UnRaid on either an old gaming PC or even my current gaming desktop. With UnRaid, you can run a Windows Virtual Machine with hardware passthrough, meaning you can have a high-end gaming rig that also doubles as your 24/7 home server.


My Essential Self-Hosted Software Stack

The real power of a NAS comes from the services you host. Here is my current setup:

Media and Entertainment

  • Video: Jellyfin is the best open-source alternative to Plex, with great apps for smart TVs and mobile.
  • Music: I run Navidrome on the server paired with Arpeggio on iOS.
  • Audiobooks: Audiobookshelf is the gold standard for self-hosted libraries.

Photos and File Sharing

  • Photos: Immich is incredible for mobile backups. It isn't a photo editor like Adobe Lightroom, but for organizing phone libraries, it’s a must.
  • Local Sharing: Snapdrop provides a lightweight way to share files across devices on your local network.

Infrastructure & Security (The Critical Stuff)

To make your NAS actually useful outside your house, you need these two tools:

Secure Remote Access with Tailscale

Tailscale creates a secure "intranet." It allows you to access your home network from anywhere as if you were on your home Wi-Fi, without the security risks of opening ports on your router.
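
For reference, here's roughly what the NAS side of that looks like. This is a minimal sketch assuming a Debian/Ubuntu-style host and a LAN on 192.168.1.0/24 (both assumptions); the exact flags and admin-console steps may differ for your setup:

# Install the Tailscale client on the NAS and join your tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Optional: advertise the whole LAN so other Tailscale devices can reach
# every machine at home, not just the NAS (approve the route in the admin console)
sudo tailscale up --advertise-routes=192.168.1.0/24

Once the NAS shows up in the Tailscale admin console, you can reach it from your phone or laptop anywhere using its Tailscale IP or MagicDNS name.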

Public Access with Cloudflare Tunnels

For services you want to share—like letting a friend access your Jellyfin library—Cloudflare Tunnels lets you map subdomains (e.g., media.yourdomain.com) to your NAS securely.
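
As a rough idea of what that looks like in practice, here's a sketch assuming your domain is already on Cloudflare, cloudflared is installed on the NAS, and Jellyfin is listening on its default port 8096 (the tunnel name, hostname, and paths are placeholders):

# Authenticate and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create nas

# Point the subdomain at the tunnel
cloudflared tunnel route dns nas media.yourdomain.com

# config.yml: map the public hostname to the local service
tunnel: nas
credentials-file: /root/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: media.yourdomain.com
    service: http://localhost:8096
  - service: http_status:404

# Run it (or run cloudflared as a Docker container / system service instead)
cloudflared tunnel run nas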


Migration: Moving TBs without the Headache

Moving away from Dropbox felt daunting until I used Rclone.

The Pro Tip (from someone who enjoys UIs; I know all you command-line people who call yourselves "pros" are going to balk, I see you, but please calm down...): Install the Rclone UI on your desktop and the Rclone Docker container on your NAS. Connect your desktop UI to the NAS container. This gives you a transparent, visual way to manage transfers and set up cron jobs for automated backups. If you have SSH access to your NAS, running commands directly in the terminal is even faster, but the UI-to-Docker method is great for visibility.
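
If you go that route, the NAS-side container is basically just rclone running its remote-control daemon, which is what rclone GUIs generally talk to. Here's a sketch using the official rclone/rclone image; the ports, paths, and credentials are placeholders, so check your UI's docs for the exact flags it expects:

docker run -d --name rclone \
  -p 5572:5572 \
  -v /mnt/user/appdata/rclone:/config/rclone \
  -v /mnt/user/data:/data \
  rclone/rclone rcd --rc-addr :5572 --rc-user admin --rc-pass change-me

The desktop UI then connects to http://<nas-ip>:5572 with those credentials and runs transfers against the remotes defined in the mounted config.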

For long-term safety, I use Rclone to sync my data to Amazon S3 Glacier for cheap "cold" storage, though Backblaze B2 is a great alternative for data you might need to grab more frequently.
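
The backup job itself is a single rclone command, which makes it easy to drop into a cron schedule. A sketch assuming an S3 remote named s3 and a bucket called nas-backup (both placeholders); the storage-class flag is what keeps the copies in cheap cold storage:

# One-off sync of a share to Glacier-class storage
rclone sync /mnt/user/photos s3:nas-backup/photos --s3-storage-class GLACIER --progress

# Roughly the same thing as a nightly cron entry (2am)
0 2 * * * rclone sync /mnt/user/photos s3:nas-backup/photos --s3-storage-class GLACIER --log-file /var/log/rclone-backup.log

Keep in mind that Glacier-class objects are cheap to store but slow (and not free) to restore, which is why B2 is the friendlier choice for anything you expect to pull back regularly.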


Ready to ditch the cloud and take control of your data? Your first move should be auditing your current storage: list your most-used services (like Google Photos or Dropbox) and see if an open-source alternative like Immich or Jellyfin fits your workflow. If you have an old PC gathering dust, download the UnRaid trial and experiment with a basic array before investing in dedicated hardware. Self-hosting is a steep learning curve, but you don't have to climb it alone—reach out to me directly if you have questions about Docker configs, Rclone migrations, or securing your network. Let's get your home server up and running!

]]>
Side projects https://codegregg.com/posts/side-projects/ Wed, 21 Jan 2026 00:00:00 GMT https://codegregg.com/posts/side-projects/ You know that feeling when you're lying in bed and an idea pops into your head? Maybe it's a tool that would make your life slightly easier, or a way to help people organize, or something that could bring people together. Before, those ideas would just rattle around in my brain until I forgot about them or they got pushed out by work priorities.

When I was laid off from my position at Shopify in August 2025, one of the greatest benefits was suddenly having the time to actually build those ideas. My head has always been filled with concepts, and getting them out into the world as working apps has been an absolute delight. Honestly, it's been incredible.

There's something really special about having the space to let your brain roam free. No sprint planning, no stakeholder meetings, no pressure to hit metrics. Just pure creation. I can wake up with an idea, spend the day exploring it, and by evening have something running that didn't exist that morning. It's the kind of creative freedom I didn't realize I was missing.

But here's what I've learned through this process: it's not just about building things for the sake of building them. I've realized that I want to change the world (maybe I've always wanted this haha). I want to show non-technical people that web apps can be affordable, freeware, and easy to use. You don't have to sell your soul and your data to use good software. We can build a better internet, and that is what I've learned I'm focused on doing.

My goal is to help people come together in person. To build apps that facilitate real connection, not just digital engagement. To create tools that respect users' privacy and don't require them to trade their personal information for functionality. I want to prove that good software doesn't have to come with strings attached.

This is what I am going to build. Apps that help people share with each other, organize, own their data. Apps that are free to use, easy to understand, and don't require you to read a privacy policy that's longer than a novel just to figure out what you're signing away.

I'm testing out ideas and seeing what sticks with people. I'm shipping things quickly, getting them into users' hands, and learning what they actually need. Then I iterate. The tools I'm using help me move faster, but I'm still the one making decisions about what to build, who it's for, and what problems I'm trying to solve.

I'm not sure where this period of freedom will lead, but I've discovered something important: I have a mission. I want to build a better internet, one app at a time. I want to prove that software can be helpful without being extractive. I want to help people organize and connect in ways that matter. And having this time to be creative, to let my ideas flow freely, has shown me exactly what I want to spend my energy on.

]]>
Navigating the Adderall Prescription Maze in 2024: A Monthly Adventure https://codegregg.com/posts/navigating-the-adderall-prescription-maze-in-2024-a-monthly-adventure/ Tue, 22 Oct 2024 00:00:00 GMT https://codegregg.com/posts/navigating-the-adderall-prescription-maze-in-2024-a-monthly-adventure/ Understanding Walgreens’ Adderall Refill Policy

Ah, the intricacies of modern healthcare! If you've ever journeyed through the process of refilling your Adderall XR prescription, you know it’s not a straightforward task. Each month, as my supply dwindles, I brace for another round of challenges—think of it as a quest for a rare treasure rather than a simple chore.

The Automated System Struggle

Most prescriptions can be refilled at the push of a button, but Adderall is classified as a controlled substance. This means at Walgreens, you can't simply tap and go; you must navigate the labyrinth of their automated phone tree. Waiting on hold to speak with a pharmacist can feel like reaching out to a celebrity—only instead of an autograph, you’re hoping to secure the medication you need to manage your ADHD.

Timing is Everything

Once you finally get through, prepare for the possibility of disappointing news. If you're outside the designated refill window—typically just a few days before you run out—Walgreens will inform you that you have to wait. This strict adherence to the insurance timeline can test your patience, leaving you in a delicate dance of timing and persistence.

The Stock Challenge: Is Your Adderall in Stock?

But let’s say you’re within the refill window. Even then, there’s no guarantee that Walgreens will have Adderall XR in stock. Post-COVID, the availability of this medication often feels like finding a needle in a haystack. If your usual pharmacy is out, you can't directly transfer your prescription to another location. Instead, you have to call around, hoping to find a pharmacy that has it available. Once you do, the next hurdle is contacting your doctor to send a new prescription—a logistical nightmare that can evoke Kafkaesque sentiments.

Navigating Refills and Renewals

Let’s not forget the often-frustrating reality of running out of refills. Typically, you only find out that you’ve exhausted your refills when you’re just days away from being out of medication. At this point, you’re in a race against time to book an appointment with your doctor, whose schedule is likely packed. It's often a race that feels endless, as you aim to secure your prescription at a pharmacy that may or may not have it in stock.

Navigating the System: Practical Tips

While I fully acknowledge the need for regulations to prevent abuse of medications like Adderall, the system does seem unnecessarily complex for patients who rely on it. Here are a few tips to make navigating this process a bit easier:

  1. Plan Ahead: Keep a close eye on your supply and refill schedule. Setting reminders a week in advance may save you from last-minute panic.
  2. Communication is Key: Regularly check with your pharmacy about stock levels and keep your doctor in the loop about your concerns regarding refill availability.
  3. Know Your Options: If your primary pharmacy cannot fulfill your prescription, familiarize yourself with other local pharmacies that stock Adderall and have a good relationship with your healthcare provider.

The Ongoing Challenge

In conclusion, while we wade through this monthly adventure with optimism and humor, it raises significant questions: Why isn’t there more Adderall available? Why is the refill process so arduous? Until the U.S. healthcare system becomes more streamlined, we'll continue to navigate these challenges each month, armed with a sense of humor and the realization that sometimes, laughter truly is the best medicine.

]]>
Gatsby Dark Mode with Themes https://codegregg.com/posts/gatsby-dark-mode-with-themes/ Mon, 21 Oct 2024 00:00:00 GMT https://codegregg.com/posts/gatsby-dark-mode-with-themes/ Implementing Dark Mode in GatsbyJS 4.7

With the release of GatsbyJS 4.7, developers now have more tools and features at their disposal to create dynamic and responsive websites. One popular feature that can significantly enhance user experience is the ability to toggle between light and dark modes. This feature not only adds aesthetic appeal but also helps reduce eye strain and conserve battery power. In this blog post, we will explore how to implement a dark mode in GatsbyJS 4.7, using React context, local storage, and CSS transitions.

Setup

We will start by defining the context for our dark mode. In the RootLayout.js file, we add a new context provider DarkModeContextProvider. This provider is used to pass down the dark mode state and the function to toggle it to other components in the application.

import React from 'react';
import { DarkModeContextProvider } from './context/DarkModeContext';

export default function RootLayout({ children }) {
  return (
    <DarkModeContextProvider>
      {children}
    </DarkModeContextProvider>
  );
}

Themes Definition

We define two theme styles for light and dark modes in theme.js. Each theme style includes the text and background colors, among others, as shown below:

import * as colors from './colors';

export const themeLight = {
  theme: 'light',
  text: colors.textDark,
  background: '#fff',
  cardBackground: '#0000001a',
  shadow: '#0000001a',
};

export const themeDark = {
  theme: 'dark',
  text: '#fff',
  background: colors.dark,
  cardBackground: '#ffffff2f',
  shadow: '#FFC836',
};

Creating the Context for Dark Mode

We create a new context DarkModeContext in DarkModeContext.js and use a state variable isDark to keep track of whether the dark mode is enabled. The setDarkMode function changes the isDark state and stores the user's preference in localStorage.

import React, { createContext, useState, useEffect } from 'react';
import { ThemeProvider } from '@emotion/react';
import { themeDark, themeLight } from '../styles/theme';

export const DarkModeContext = createContext();

export function DarkModeContextProvider({ children }) {
  const [isDark, setIsDark] = useState(false);

  const setDarkMode = (darkness) => {
    setIsDark(darkness);
    localStorage.setItem('dark-mode', darkness ? 'dark' : 'light');
  };
//...
}
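
The elided part (//...) is essentially just restoring the saved preference on mount and wrapping the children in Emotion's ThemeProvider with the current theme. Roughly, it looks like this (the exact shape of the context value is an assumption and may differ):

  // Restore the saved preference on first render. localStorage only exists
  // in the browser, so this runs client-side after mount.
  useEffect(() => {
    const saved = localStorage.getItem('dark-mode');
    if (saved) {
      setIsDark(saved === 'dark');
    }
  }, []);

  return (
    <DarkModeContext.Provider value={{ isDark, setDarkMode }}>
      <ThemeProvider theme={isDark ? themeDark : themeLight}>
        {children}
      </ThemeProvider>
    </DarkModeContext.Provider>
  );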

Toggling Between Light and Dark Modes

In the NightDayToggle.js file, we create a component that toggles between light and dark modes. The checked prop determines whether the toggle is in the on (dark mode) or off (light mode) state. The changed prop is a function that handles the user's interaction with the toggle.

import React from 'react';

import './styles.css';

export const NightDayToggle = ({ checked, changed }) => {
  return (
    <div className='toggle'>
      <label className='visually-hidden' htmlFor='dark-toggle'>
        Toggle between dark and light mode
      </label>
      <input
        id='dark-toggle'
        className='toggle-input'
        type='checkbox'
        checked={checked}
        onChange={changed}
      />
      <div className='toggle-bg'></div>
      <div className='toggle-switch'>
        <div className='toggle-switch-figure'></div>
        <div className='toggle-switch-figureAlt'></div>
      </div>
    </div>
  );
};
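
To actually drive the toggle, a parent component (the site header, say) can read isDark and setDarkMode from the context and pass them in as checked and changed. A rough sketch, where the Header name, file paths, and markup are just for illustration:

import React, { useContext } from 'react';
import { DarkModeContext } from '../context/DarkModeContext';
import { NightDayToggle } from './NightDayToggle';

export const Header = () => {
  const { isDark, setDarkMode } = useContext(DarkModeContext);

  return (
    <header>
      {/* checked mirrors the current mode; changed flips it and persists the choice */}
      <NightDayToggle checked={isDark} changed={() => setDarkMode(!isDark)} />
    </header>
  );
};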

Styling the Toggle

Finally, we style the toggle by creating a styles.css file (the one imported by the component above). The CSS in this file is responsible for the appearance of the toggle button and the transition between light and dark modes.

/* Credit to Jason Tyler */

.toggle,
.toggle * {
  box-sizing: content-box;
}

.toggle {
  position: absolute;
  display: inline-block;
  width: 100px;
  padding: 4px;
  border-radius: 40px;
  transform: scale(0.4);
  flex-shrink: 0;
  margin-left: -20px;
  margin-right: -30px;
  top: 75px;
  right: 10px;
}

.toggle-bg {
  position: absolute;
  top: -4px;
  left: -4px;
  width: 100%;
  height: 100%;
  background-color: #c0e6f6;
  border-radius: 40px;
  border: 4px solid #81c0d5;
  transition: all 0.1s cubic-bezier(0.25, 0.46, 0.45, 0.94);
}

.toggle-input {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  border: 1px solid red;
  border-radius: 40px;
  z-index: 2;
  opacity: 0;
}

.toggle-switch {
  position: relative;
  width: 40px;
  height: 40px;
  margin-left: 50px;
  background-color: #f5eb42;
  border: 4px solid #e4c74d;
  border-radius: 50%;
  transition: all 0.1s cubic-bezier(0.25, 0.46, 0.45, 0.94);
}

.toggle-switch-figure {
  position: absolute;
  bottom: -14px;
  left: -50px;
  display: block;
  width: 80px;
  height: 30px;
  border: 8px solid #d4d4d2;
  border-radius: 20px;
  background-color: #fff;
  transform: scale(0.4);
  transition: all 0.12s cubic-bezier(0.25, 0.46, 0.45, 0.94);
}
.toggle-switch-figure:after {
  content: '';
  display: block;
  position: relative;
  top: -65px;
  right: -42px;
  width: 15px;
  height: 15px;
  border: 8px solid #d4d4d2;
  border-radius: 100%;
  border-right-color: transparent;
  border-bottom-color: transparent;
  transform: rotateZ(70deg);
  background-color: #fff;
}
.toggle-switch-figure:before {
  content: '';
  display: block;
  position: relative;
  top: -25px;
  right: -10px;
  width: 30px;
  height: 30px;
  border: 8px solid #d4d4d2;
  border-radius: 100%;
  border-right-color: transparent;
  border-bottom-color: transparent;
  transform: rotateZ(30deg);
  background-color: #fff;
}

.toggle-switch-figureAlt {
  content: '';
  position: absolute;
  top: 5px;
  left: 2px;
  width: 2px;
  height: 2px;
  background-color: #efeeda;
  border-radius: 100%;
  border: 4px solid #dee1c5;
  box-shadow: 42px -7px 0 -3px #fcfcfc, 75px -10px 0 -3px #fcfcfc,
    54px 4px 0 -4px #fcfcfc, 83px 7px 0 -2px #fcfcfc, 63px 18px 0 -4px #fcfcfc,
    44px 28px 0 -2px #fcfcfc, 78px 23px 0 -3px #fcfcfc;
  transition: all 0.12s cubic-bezier(0.25, 0.46, 0.45, 0.94);
  transform: scale(0);
}

.toggle-switch-figureAlt:before {
  content: '';
  position: absolute;
  top: -6px;
  left: 18px;
  width: 7px;
  height: 7px;
  background-color: #efeeda;
  border-radius: 100%;
  border: 4px solid #dee1c5;
}

.toggle-switch-figureAlt:after {
  content: '';
  position: absolute;
  top: 19px;
  left: 15px;
  width: 2px;
  height: 2px;
  background-color: #efeeda;
  border-radius: 100%;
  border: 4px solid #dee1c5;
}

.toggle-input:checked ~ .toggle-switch {
  margin-left: 0;
  border-color: #dee1c5;
  background-color: #fffdf2;
}

.toggle-input:checked ~ .toggle-bg {
  background-color: #484848;
  border-color: #202020;
}

.toggle-input:checked ~ .toggle-switch .toggle-switch-figure {
  margin-left: 40px;
  opacity: 0;
  transform: scale(0.1);
}

.toggle-input:checked ~ .toggle-switch .toggle-switch-figureAlt {
  transform: scale(1);
}

Conclusion

With these steps, we have successfully added a functional dark mode to our GatsbyJS 4.7 application. This not only makes the application more user-friendly but also adds a modern touch to its design. The implementation of the dark mode feature showcases how GatsbyJS 4.7, coupled with React context and local storage, can be used to enhance the user interface and experience of an application.

]]>
Thoughts on Tesla Service: A Seamless User Experience https://codegregg.com/posts/thoughts-on-tesla-service-a-seamless-user-experience/ Thu, 07 Mar 2024 00:00:00 GMT https://codegregg.com/posts/thoughts-on-tesla-service-a-seamless-user-experience/

When you think of car servicing, you might conjure up images of cumbersome paperwork, long waiting times, and a general sense of inconvenience. However, my recent experience with Tesla's service has completely shattered this stereotype. From the convenience of home servicing to the incredible user experience with a loaner car, Tesla truly stands out in the automotive service industry.

Home Servicing

The first aspect that sets Tesla apart is their home servicing option. The convenience of having a technician come to my home for minor service issues is not something that many car companies offer. This is a game changer in terms of customer convenience and satisfaction.

In-Store Service Experience

When in-store service is necessary, Tesla ensures that this process is as smooth and hassle-free as possible. Upon arrival, all I had to do was share my name. My service appointment and my pre-entered service request were immediately accessible to the service representative.

They informed me upfront about the time it may take to service my vehicle, and offered a loaner car for the duration of the service. The process was refreshingly straightforward: they took down my driver's license number and the name of my insurance agency, then handed me a Tesla key card and directed me to the location of the loaner car in the parking lot.

The Loaner Car Experience

The loaner car experience is where Tesla truly shines. After retrieving a few items from my car, which was already remotely set to maintenance mode, I simply walked to the loaner car and hopped in. There were no complicated procedures to follow, nor any paperwork to fill out.

Inside the loaner car, a QR code awaited. Scanning it with the Tesla app immediately added the car to my account. The magic did not stop there. As soon as the car was added, all of my personalized settings were applied to the loaner car. Everything from my Spotify account login to my specific seat, mirror, and steering wheel adjustments were all transferred over. Even my braking, acceleration, and autopilot settings were imported from my profile. This seamless transition from my car to the loaner truly felt like stepping into my own vehicle. The ease and simplicity of this process is a testament to Tesla's dedication to creating an unobtrusive and beautiful user experience.

Picking Up My Car

Once my car is ready, I'll receive a notification on my phone. The process for returning the loaner and retrieving my car is poised to be as effortless as the rest of the experience. I'll simply leave the loaner in the parking lot and hop into my own car at a time that suits me.

In conclusion, Tesla's service process reflects a deep understanding of the customer's needs and a strong commitment to providing an unmatched user experience. It's this sort of innovation and attention to detail that sets Tesla apart in the automotive industry.

]]>
The problem with chronic illness https://codegregg.com/posts/the-problem-with-chronic-illness/ Thu, 30 Mar 2023 22:46:00 GMT https://codegregg.com/posts/the-problem-with-chronic-illness/

It is so difficult for people to comprehend chronic illness.

I was in the cancer clinic today for a routine checkup but dislocated my ankle a week ago, so I have a big splint on my leg. A worker at the clinic walks by, looks at me with a smile, and says "ouch!", pointing at my leg. There are literally people extremely sick from cancer treatment ALL around me and this person points out my silly foot injury. Yes it hurt, that's true, but it was nothing compared to the pain I felt from cancer treatment and the loneliness I felt and still feel from treatment and the uncertainty of death.

How is it so difficult for people to SEE that? They might be bald or they might not, but they all look into empty space with the same empty expression, the same dead eyes.

A broken foot will heal. It will be difficult for a short amount of time. Cancer, and many other long-term or chronic illnesses, will not heal in a short amount of time. Start seeing these people. Continue reminding your friends suffering from long-term illness that their pain does not go unseen. It may be impossible to understand, but acknowledging it does make people feel less alone.

]]>
Easily Password Protect NextJS pages with Iron Session https://codegregg.com/posts/easily-password-protect-nextjs-pages-with-iron-session/ Sat, 22 Oct 2022 15:56:39 GMT https://codegregg.com/posts/easily-password-protect-nextjs-pages-with-iron-session/ Say you want to set up a simple password-protected page (or a bunch of pages), just for yourself, in your NextJS application. It's super simple to do with an encrypted cookie and the help of a little library called iron-session, a Node.js stateless session utility. Most of the tutorials for this library focus on setting up user auth for multiple users, which you may want at some point, but this tutorial will teach you how to lock certain server pages behind a password. I've used this before to create little admin pages that I want to keep private without setting up full user auth.

First things first: let's assume you have a NextJS application. If you don't, follow the instructions to get one set up here.

Then you'll need to install iron-session and swr (you can probably do without swr if you want to cut it out; it's just a nice-to-have).

npm install -S iron-session swr
yarn add iron-session swr

Create a .env.local and .env.local.example file at the root of your project. Make sure .env.local is added to your .gitignore (it should be by default in a vanilla NextJS setup). The file will look like this:

PASSWORD=<anything you want to secure page>
SECRET_COOKIE_PASSWORD=<anything at least 32 characters long>

Create a password 32 characters long for the SECRET (you will not need to remember this), then create a password you can remember for PASSWORD (this is what you'll enter on the page to access your secure route).

You will then create a few new files:

/utils/session.js

import { withIronSessionApiRoute, withIronSessionSsr } from 'iron-session/next';

const sessionOptions = {
  password: process.env.SECRET_COOKIE_PASSWORD,
  cookieName: 'next-iron-session/examples/next.js',
  // secure: true should be used in production (HTTPS) but can't be used in development (HTTP)
  cookieOptions: {
    secure: process.env.NODE_ENV === 'production',
  },
};

export function withSessionRoute(handler) {
  return withIronSessionApiRoute(handler, sessionOptions);
}

export function withSessionSsr(handler) {
  return withIronSessionSsr(handler, sessionOptions);
}

/pages/api/login.js

import { withSessionRoute } from '@utils/session';

export default withSessionRoute(async (req, res) => {
  const { password } = await req.body;

  try {
    if (password === process.env.PASSWORD) {
      const user = { isLoggedIn: true };
      req.session.user = user;
      await req.session.save();
      res.json(user);
    } else {
      const user = { isLoggedIn: false };
      res.json(user);
    }
  } catch (error) {
    const { response: fetchResponse } = error;
    res.status(fetchResponse?.status || 500).json(error.data);
  }
});

/pages/api/logout.js

import { withSessionRoute } from '@utils/session';

export default withSessionRoute(async (req, res) => {
  req.session.destroy();
  res.json({ isLoggedIn: false });
});

/pages/api/user.js

import { withSessionRoute } from '@utils/session';

export default withSessionRoute(async (req, res) => {
  const user = req.session.user;

  if (user) {
    // in a real world application you might read the user id from the session and then do a database request
    // to get more information on the user if needed
    res.json({
      isLoggedIn: true,
      ...user,
    });
  } else {
    res.json({
      isLoggedIn: false,
    });
  }
});

/utils/useUser.js

import { useEffect } from 'react';
import Router from 'next/router';
import useSWR from 'swr';

export default function useUser({
  redirectTo = false,
  redirectIfFound = false,
} = {}) {
  const { data: user, mutate: mutateUser } = useSWR('/api/user');

  useEffect(() => {
    // if no redirect needed, just return (example: already on /dashboard)
    // if user data not yet there (fetch in progress, logged in or not) then don't do anything yet
    if (!redirectTo || !user) return;

    if (
      // If redirectTo is set, redirect if the user was not found.
      (redirectTo && !redirectIfFound && !user?.isLoggedIn) ||
      // If redirectIfFound is also set, redirect if the user was found
      (redirectIfFound && user?.isLoggedIn)
    ) {
      Router.push(redirectTo);
    }
  }, [user, redirectIfFound, redirectTo]);

  return { user, mutateUser };
}

/pages/login.js

import { useState } from 'react';
import useUser from '../utils/useUser';

export default function Login() {
  // here we just check if user is already logged in and redirect to admin
  const { mutateUser } = useUser({
    redirectTo: '/admin',
    redirectIfFound: true,
  });

  const [errorMsg, setErrorMsg] = useState('');

  async function handleSubmit(e) {
    e.preventDefault();

    const body = {
      password: e.currentTarget.password.value,
    };

    const userData = await fetch('/api/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    });

    const user = await userData.json();

    try {
      await mutateUser(user);
    } catch (error) {
      console.error('An unexpected error happened:', error);
      setErrorMsg(error.data.message);
    }
  }

  return (
    <form onSubmit={handleSubmit}>
      <label>
        Enter password
        <input type='password' name='password' required />
      </label>

      <button type='submit'>Login</button>

      {errorMsg && <p>{errorMsg}</p>}
    </form>
  );
}

These pages and API routes create the backbone to log in and log out, and a form page you can enter your password on. You can see the login API route is doing a simple comparison of the password in the request with the password you set in your ENV file.

The last thing you need is the route (or routes) you want to secure. The example here does it with server-side props (getServerSideProps), but you could also call the API route from the client side (there's a sketch of that after the example below). This will redirect to /login if the user is not returned from the withSessionSsr handler, or show the page if you are logged in.

pages/admin.js

import { withSessionSsr } from '@utils/session';

export default function Admin() {
  // Users will never see this unless they're logged in.
  return <h1>Secure page</h1>;
}

export const getServerSideProps = withSessionSsr(async function ({ req, res }) {
  const user = req.session.user;

  if (user === undefined) {
    res.setHeader('location', '/login');
    res.statusCode = 302;
    res.end();
    return { props: {} };
  }

  // You can return data here from a database knowing only authenticated users (you) will see it.
  return { props: {} };
});
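
For the client-side variant mentioned above, the useUser hook does the heavy lifting: ask it to redirect to /login whenever /api/user says you aren't logged in. A rough sketch (the loading copy is just a placeholder):

// pages/admin.js (client-side variant)
import useUser from '../utils/useUser';

export default function Admin() {
  const { user } = useUser({ redirectTo: '/login' });

  // While SWR is still fetching, or while the redirect is happening, show a placeholder
  if (!user || !user.isLoggedIn) {
    return <p>Loading...</p>;
  }

  return <h1>Secure page</h1>;
}

Just remember this only hides the markup; any sensitive data should still come from an API route that checks req.session.user, because client-side checks can be bypassed.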

Encrypted cookies are pretty awesome and the people behind iron-session are insanely smart. This will get you a simple and functional secure page that you can access with your password. Just as an FYI, you are vulnerable to brute force here; you'd have to add something like rate limiting to mitigate that. But if you name your pages something other than login and admin, you can at least get a little security through obscurity.

Don't forget to add the two ENV vars to your server as well when you deploy.

Let me know if you have questions or suggested modifications by contacting me on twitter @itwasmattgregg.

]]>
Connect To fly.io Sqlite Production Database https://codegregg.com/posts/connect-to-flyio-sqlite-production-database/ Wed, 08 Jun 2022 22:46:00 GMT https://codegregg.com/posts/connect-to-flyio-sqlite-production-database/ Gonna give a quick how-to for connecting to and pulling your sqlite database from your production app on fly.io. It can be a bit confusing, but once you have your ssh connection set up and openssh installed on your fly.io server, it's much easier to replicate. In this tutorial we will connect to your fly.io instance via WireGuard and ssh, then install openssh on your instance so we can use scp from your local computer to copy the sqlite db file for viewing locally.

Start by downloading Wireguard here. You'll also need flyctl which I'm assuming you already have, but if you don't it can be downloaded here.

You'll need to connect wireguard to fly.io by creating a wireguard config file with the fly CLI. Use flyctl wireguard create to create a file called wireguard.conf. Then open wireguard and import that file and activate that ssh tunnel.

Next you're going to connect to your fly instance via ssh. Type fly ssh issue --agent then fly ssh console to ssh into your instance.

Once you're in you'll need to install openssh on the instance to be able to securely copy files from your instance to your local computer. Do this with the command: apt-get install openssh-client.

At this stage you'll want to create a backup of your database to copy to your local machine so nothing goes wrong trying to copy the file while a transaction is happening. You can do this in one of two ways:

sqlite3 data/sqlite.db '.backup data/backup.db'

OR

sqlite3 data/sqlite.db "VACUUM INTO 'data/backup.db'"

You may have to change the path to your database, mine was in the data directory.

Exit ssh with exit;

Now you can copy your sqlite db file to your local computer with scp root@[app-name].internal:/data/backup.db fly_backup.db.

Don't forget to fill in the name of your app on fly.io and change the path to wherever you backed up your sqlite db on your server. Mine is at /data/backup.db.

]]>
Sorting One Array With Another in JavaScript https://codegregg.com/posts/sorting-one-array-with-another-in-javascript/ Sun, 13 Mar 2022 09:55:42 GMT https://codegregg.com/posts/sorting-one-array-with-another-in-javascript/ Recently I needed to sort an array I had no control over so that specific categories showed up in a specific order. Any items in the categories array that I hadn't specified in my order array needed to go at the end of the list. No other order mattered.

I solved this with a simple JS sort function with a little extra code for handling items not in the list.

const categoriesArray = [
  { category: 'stuff' },
  { category: 'things' },
  { category: 'unknown' },
  { category: 'important' },
];
const order = ['important', 'things', 'stuff'];

const sortedArray = categoriesArray
  .slice()
  .sort(({ category: categoryA }, { category: categoryB }) => {
    const indexA = order.indexOf(categoryA);
    const indexB = order.indexOf(categoryB);
    return (
      (indexA > -1 ? indexA : Infinity) - (indexB > -1 ? indexB : Infinity)
    );
  });

// Returns:
// ​[
//  { category: "important" },
// ​ { category: "things" },
// ​ { category: "stuff" },
// ​ { category: "unknown" }
// ]
]]>
Angular Dynamic Page Titles with Nested Routes https://codegregg.com/posts/angular-dynamic-page-titles-with-nested-routes/ Thu, 30 Sep 2021 09:21:49 GMT https://codegregg.com/posts/angular-dynamic-page-titles-with-nested-routes/ This is a quick demo of one way you can accomplish dynamic page titles in the html head tags with Angular. This should work in most recent versions of Angular (8+) but let me know on Twitter if it doesn't work or if there are better solutions to this problem. I found it semi-frustrating compared to how easy it is in Vue and React land. This crazy map and filter stuff is needed for nested route structures.

You can define a base title for your application in index.html.

<!-- index.html -->

<title>My Fancy Application</title>

// app.component.ts

import { Component, OnInit } from '@angular/core';
import { Title } from '@angular/platform-browser';
import { ActivatedRoute, NavigationEnd, Router } from '@angular/router';
import { filter, map, mergeMap } from 'rxjs/operators';

@Component({
  selector: 'app-root',
  template: '<router-outlet></router-outlet>',
})
export class AppComponent implements OnInit {
  constructor(
    private titleService: Title,
    private activatedRoute: ActivatedRoute,
    private router: Router,
  ) {}

  ngOnInit() {
    // Grab the base title from index.html
    const appTitle = this.titleService.getTitle();

    this.router.events
      .pipe(
        filter((event) => event instanceof NavigationEnd),
        map(() => this.activatedRoute),
        map((route) => {
          // Walk down to the deepest activated child route
          while (route.firstChild) route = route.firstChild;
          return route;
        }),
        filter((route) => route.outlet === 'primary'),
        mergeMap((route) => route.data),
      )
      .subscribe((event) => {
        if (event.title) {
          return this.titleService.setTitle(`${appTitle} | ${event.title}`);
        }
        // This is necessary to unset the more specific title if it was set by the previous page.
        return this.titleService.setTitle(appTitle);
      });
  }
}

// routes.ts

{
    path: '',
    component: WrapperComponent,
    canActivate: [AuthGuard],
    children: [
      {
        path: 'blog',
        component: BlogComponent,
        data: { title: 'Blog' },
      },
      {
        path: 'docs',
        children: [
          {
            path: '',
            component: DocsComponent,
            data: { title: 'Docs' },
          },
          {
            path: 'examples',
            component: ExamplesComponent,
            children: [
              {
                path: '',
                component: ExamplesListingComponent,
                data: { title: 'Examples' },
              },
              {
                path: ':id',
                component: ExampleDetailsComponent,
                data: { title: 'Examples | Example Details' },
              },
            ],
          },

        ],
      },
    ],
  },
]]>
Apply Auto Setting to Multiple Photos in Lightroom CC https://codegregg.com/posts/apply-auto-setting-to-multiple-photos-in-lightroom-cc/ Wed, 29 Sep 2021 08:58:57 GMT https://codegregg.com/posts/apply-auto-setting-to-multiple-photos-in-lightroom-cc/ This is a relatively simple problem to solve, but I didn't see it posted elsewhere. After reading this tutorial you will know how to copy and paste any settings from one photo to many others in Adobe Lightroom CC (I'm currently on version 4.4). For this tutorial I will be using the "Auto" setting as an example, but you can copy many different settings to other photos. The screenshots are from macOS, but it should work very similarly on Windows.

First select one photo and click the Auto settings button in the Edit tab.

Click the auto button in the edit tab

Then from the Photo menu at the top, select "Choose Edit Settings to Copy..."


Choose edit settings to copy option

You can uncheck all the settings besides Auto Settings unless you want to copy other edits you've made.


Select which settings you want to copy


Then select ALL photos you want to copy the setting to and select the Photo dropdown from the top again and click "Paste Edit Settings".


Select paste edit settings button


Note: it may take a few minutes for Lightroom to apply the settings to all the pictures but it's a good way to bulk edit. You can see the ongoing progress in the upper left of the application window.

Hope this was helpful to someone.

]]>
Destructure Optional Params in Typescript https://codegregg.com/posts/destructure-optional-params-in-typescript/ Thu, 06 May 2021 14:15:00 GMT https://codegregg.com/posts/destructure-optional-params-in-typescript/ Sometimes you have a function with an optional Object argument that you want to destructure in the function. Like so:

interface SomeObject {
  option1: boolean;
  stuff: boolean;
}

function foo(param?: SomeObject) {
  const { stuff } = param;
}

However, you'll get an error because param could be undefined, and TS doesn't like you trying to destructure something that's undefined. There are a couple of ways around this...

Define a fallback in the initializer and don't use the ? optional identifier:

// Note: because SomeObject's properties are required, the {} default needs Partial
function foo(param: Partial<SomeObject> = {}) {
  const { stuff } = param;
}

Use nullish coalescing:

function foo(param?: SomeObject) {
  const { stuff } = param ?? {};
}

Or just access the property on the parameter directly with optional chaining:

function foo(param?: SomeObject) {
  anotherFunction(param?.stuff);
}

All of these work and will handle param being undefined.

]]>
How to find a Database in 2021 https://codegregg.com/posts/how-to-find-a-database-in-2021/ Thu, 14 Jan 2021 18:55:33 GMT https://codegregg.com/posts/how-to-find-a-database-in-2021/

I haven’t been around as long as some, but I’ve been around web development long enough to remember the days when everyone had a cPanel and managed their own personal site on super cheap shared hosting. The days of clicking a few buttons and spinning up a new mysql database; maybe your hosting company even gave you ssh access to your shared environment and allowed you to create some of your own resources. But we for sure didn’t have cloud infrastructure to manage these things or keep them in separate environments from each other, on separate apps with separate security keys and service accounts. It definitely wasn’t the best thing, but most of the time we would host many websites, usually php applications, and many databases all on the same linux box. We had our share of security scares, but honestly it was just a simpler time for working on bare-bones infrastructure. I even spent some time at a company where our entire stack was in a closet I could see from my desk. Barracuda load balancers and nightly tape backups our CTO would literally just take home to his house. I’m not saying those were better days, not at all, just different.

Now we have access to 3 major cloud providers (Google, AWS, and Azure) who pretty much offer the same set of competing services. Not to mention all the independent services that live on top of this infrastructure like Netlify and the literally countless others. And these are really great for almost everything we do but it’s interesting how some things have become slightly more difficult or expensive for demo and dev work.

For example, the other day I was creating a Next.js app and wanted to try out Prisma for connecting to a database and having a nice way to define schema, handle migrations, and work with auth. For this I needed an external database. Mongo and Firebase have always been really nice dev resources for me when testing out app ideas, but again, I really wanted to try out this Prisma thing, which required a relational db. In the old days I would have just spun up a db on my shared hosting service and connected to it, but I don’t have any shared hosting anymore now that things like Netlify and Vercel are so easy and free. I really just wanted somewhere I could spin up a hosted database for very cheap or free. I turned to our old friend Heroku and spun up a postgres database there on the free tier. This seemed like exactly what I needed: free and easy to set up. But with the way Next.js connects to the database with serverless functions, I was running into the max-connection limit Heroku puts on free databases just testing by myself. I was hoping to at least be able to test this with a few other people, and I knew this couldn’t be a viable option. I didn’t want to create many rows in the database and didn’t need much storage, but I definitely needed the concurrent connections for the functions.
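
For what it's worth, part of what makes this sting is that in a serverless setup every function invocation (and, in dev, every hot reload) can spin up its own Prisma client, each holding database connections open. The commonly recommended mitigation is to reuse a single client via a global in development; a rough sketch of that pattern, not something specific to my setup at the time:

// lib/prisma.js: reuse one PrismaClient across hot reloads in dev
import { PrismaClient } from '@prisma/client';

const globalForPrisma = globalThis;

export const prisma = globalForPrisma.prisma ?? new PrismaClient();

if (process.env.NODE_ENV !== 'production') {
  globalForPrisma.prisma = prisma;
}

It doesn't change the limits a free-tier database imposes, but it does stop local testing from chewing through connections quite so quickly.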

I ended up spending about 4 hours researching across AWS and GCP for managed database and compute pricing. I tried looking for databases that could spin down to 0 when not in use to save money, cheap small databases, and cheap hosting where I could add my own container-based database. Everything seemed to be about $15/mo. which was just too much for this little hobby project I wanted to try out. Why aren’t there any services where I can try out a database with a little more heft for free? Is it really that expensive to host a database for testing purposes?

Alright, so I’ve ranted enough. If you’re still reading, I hope you found this at least a semi-fun story to read through. And you’re probably curious: what did I end up choosing? I eventually landed on a Google Compute Engine free-tier f1-micro box with postgres installed on it for about $4/mo. It isn’t free but it works, and I’m not running into the connection error anymore. By the way, I did also try mysql on Heroku thinking I could avoid the connection issue but ran into the same limitation. Maybe Heroku dbs don’t release connections very fast? Who knows. I also set up an AWS RDS serverless db that can spin down to 0 when not in use. I plan on trying that out for a little bit in production to compare costs. It’s expensive if it’s used all the time but “should” be cheap if it’s off a lot.

Thanks for reading. Be peaceful to one another.

]]>
Cancer Imposter Syndrome https://codegregg.com/posts/cancer-imposter-syndrome/ Thu, 23 Jul 2020 17:23:49 GMT https://codegregg.com/posts/cancer-imposter-syndrome/

I know we usually reference "imposter syndrome" when we talk about jobs and the fear that you might be exposed as a fraud who fooled their way into the position, but I've been feeling a different kind of imposter syndrome these last 7 months. I've been undergoing treatments for Acute Myeloid Leukemia, which I was diagnosed with in December. Technically the cancer was cleared out of my body in January, but because of the nature of leukemia it has a nasty tendency to come back without further treatment. I went through 3 rounds of chemo and then a stem cell transplant, all of which required over 2 months of total hospitalization and were not very fun at all.

However, in between treatments I looked really normal. Yes, I had lost my hair, but I could wear a hat, and I had lost a ton of weight, but people who saw me immediately thought everything must be alright. I could literally see people sigh with relief when they saw me, almost as if they had imagined I would barely be held together. The interesting thing is, some days I feel like I'm barely held together even though I don't look it.

That's what I've decided to call cancer imposter syndrome. I feel like a cancer imposter. People looking at me all the time like I'm normal makes me feel like I never even had cancer, even though I've gone through hell. People all see me as healthy, so I feel like I should be healthy, even though I don't feel that way on the inside. I feel like an imposter because I feel like I should look like I have cancer on the outside to match what I feel on the inside. Then maybe other people would understand what I'm going through a little easier. It's a really interesting feeling.

]]>
Automate Firebase 🔥 hosting with Github Actions https://codegregg.com/posts/automate-firebase-hosting-with-github-actions/ Sat, 23 May 2020 12:49:41 GMT https://codegregg.com/posts/automate-firebase-hosting-with-github-actions/

UPDATE 01/21/21: Using the firebase init command you can now automatically set up deployment from Github through the CLI and you shouldn't have to follow any of this tutorial. Try that first. It should set up an action for merge and one for PRs.

This is a tutorial for quickly setting up a Github action to deploy a site to Firebase hosting. This would include sites built with Vue, React, Gatsby, the Next.js static generator or any other client-side site. In a few minutes you can be set up so every time you push code to master it will automatically be built by Github and deployed to Firebase. I'm going to assume you already have your code on Github and a firebase project setup. If your project doesn't have a firebase.json file in the root directory, or if that file doesn't have a hosting section you may need to run firebase init. Also make sure you have a .firebaserc file with your project ID or the deploy function won't work. If you run firebase init and follow the instructions to set up hosting both files should be created for you.

Here's an example of what the files should look like:

// .firebaserc
{
  "projects": {
    "default": "project-id"
  }
}

project-id should be replaced with your project ID from the firebase console. This is all done for you if you use the firebase init command from the CLI.

// firebase.json
{
  "hosting": {
    "site": "site-name",
    "public": "dist",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  }
}

site-name should be replaced with the name of the site you want to deploy to (...if you want to deploy to a custom named site you created in firebase console. If you just want to use the default site then you shouldn't include the site key), and dist should be the directory your site gets built to. Usually it's either dist or public.

Next you'll need to create a file called main.yml at the path .github/workflows/. That file should have these contents:

name: Build and Deploy
on:
  push:
    branches:
      - master

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@master
      - name: Install Dependencies
        run: npm install
      - name: Build
        run: npm run build
      - name: Archive Production Artifact
        uses: actions/upload-artifact@master
        with:
          name: dist
          path: dist
  deploy:
    name: Deploy
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@master
      - name: Download Artifact
        uses: actions/download-artifact@master
        with:
          name: dist
          path: dist
      - name: Deploy to Firebase
        uses: w9jds/firebase-action@master
        with:
          args: deploy --only hosting
        env:
          FIREBASE_TOKEN: ${{ secrets.FIREBASE_TOKEN }}

The things you may need to change here: any reference to dist should point to whatever directory your build script outputs the final code to, and the npm run build line can be changed to whatever your build command is. Note: you can easily swap npm out with yarn, which at this time is available globally on the build environment.

This file basically runs two separate jobs: one to build the site files from your source code, and one to take that artifact and deploy it to Firebase hosting.

The final piece you need here is to add your secret Firebase token to Github so it has permission to deploy the site for you. To get this token run firebase login:ci in your terminal and copy the code generated. You'll paste that code in the settings of your Github project on the secrets page. Create a new secret with the name FIREBASE_TOKEN and paste the code you got from your terminal.

Now you can commit both files and push to Github and the Github action will take care of the rest. After you push you should be able to see the progress in the actions tab in your Github project.

]]>
How to generate a random ID in JavaScript https://codegregg.com/posts/how-to-generate-a-random-id-in-javascript/ Sat, 22 Feb 2020 13:34:15 GMT https://codegregg.com/posts/how-to-generate-a-random-id-in-javascript/

I'm sure this is posted in a ton of places already, but I thought I would share a method I sometimes use to generate random strings of numbers and letters with javascript. This function returns 6 characters of a randomly generated base-36 string (skipping the leading "0."). Passing 36 to the toString method tells it to use numbers 0-9 and every letter in the alphabet; you can adjust the 6 in the substr method if you want a longer or shorter ID.

const id = function() {
  return Math.random()
    .toString(36) // base 36: digits 0-9 plus every letter a-z
    .substr(2, 6); // skip the leading "0." and keep 6 characters
};
]]>
Going Home (Almost) - day 28 https://codegregg.com/posts/going-home-almost-day-28/ Tue, 28 Jan 2020 21:39:06 GMT https://codegregg.com/posts/going-home-almost-day-28/

Update: A weekend pathologist made a mistake and what they thought were leukemia blasts were really just new white blood cells. We did another bone marrow biopsy a week later and they found no leukemia whatsoever. So good news. We were actually able to go home a couple days after I wrote this.


We were so close to being able to go home this week! So close! But alas, it was not to be. After the first round of chemo, we spent the rest of the month waiting for my immune system counts to come back up to an almost normal level. We were days away from being able to go home and live a couple of normal weeks before the stem cell transplant (or a bit of consolidation chemo if needed). However, on Sunday night we were informed that there were traces of leukemia blasts found in my blood work. This was a pretty rough blow after thinking we were about to go home. It means staying in the hospital another month and going through another round of what they call salvage chemo. It will be a different cocktail of drugs to hopefully wipe out leukemia that was resistant to the last chemo regimen.

The good news is that because I'm so healthy right now (and feeling pretty fucking awesome) they decided I could go out on leave for the day today. Even though we have to come back so soon, it was incredible to experience the outside world after being here for exactly a month now and looking at another month stuck in the same room. It was glorious to go home. We napped on our bed, which is about 1000X more comfortable than the stupid hospital bed. Also, my wife has been sleeping on a couch in the hospital for a month, so a real bed had to feel like heaven for her. We got to spend the afternoon with our dog. I got to make myself a latte. We went to a restaurant for lunch. It was a proper day.

Part of me on that first night after hearing I had leukemia thought I would never walk out of this hospital. I now realize I had no fucking clue about what my future would look like and of course, I would walk out of this place and feel the fresh cold air of Minnesota on my face again, but that feeling was still an amazing relief today. I have a feeling this second round is gonna work. I'm going to get a stem cell transplant and grow some new bone marrow and this leukemia will never again grow in my body. I know there's a possibility this won't be my future but I'm choosing to believe it anyways.

As always, thank you for reading, and for your support. I love you all.

Matt

P.S. Our dog might look sad in the above photo but she is a super happy dog. You can follow her on insta @dardarbinksthedog and see all the fun she gets up to.

]]>
The Fight Against Leukemia Begins https://codegregg.com/posts/the-fight-against-leukemia-begins/ Thu, 16 Jan 2020 12:47:26 GMT https://codegregg.com/posts/the-fight-against-leukemia-begins/

Three weeks ago I found myself lying in a hospital bed after a full day in the ER, a blood transfusion dripping into one of my IVs, listening to a doctor tell me I have Acute Myeloid Leukemia. Just that morning I thought I was just getting over a bad case of the flu. I mean, I'm 28 years old. I'm healthy. I can't have cancer, can I? I really wasn't in a good state that night anyway, but even so, the news hit me like a truck. I would be lying if I didn't say I was scared shitless.

So how did I end up in that hospital bed? It started with a basic cold and a little fever. For four days I felt slightly feverish but the Friday night before Christmas things got a lot worse. I thought I had the flu and the best remedy would be for me to spend the weekend in bed drinking water and sleeping. I honestly didn't leave my bed until Monday night when we went to urgent care. I was given some basic antibiotics and sent back to bed. I again assumed the best and thought everything would just go away.

Over Christmas I would spike a fever a few times a day, I had some sort of infection happening in the gums behind my teeth, and I would almost faint every time I got up to get water or go to the bathroom. All of these things I attributed to a really bad flu. But this flu had been going on now for almost a week and a half and even with the antibiotics, it wasn't getting any better. The final straw was when my vision started to get blurry in my left eye. At that point, it was time to go back into the doctor.

My wife and I, still assuming this was some sort of infection/virus, headed to an urgent care hoping to get something stronger to combat whatever was going on. After hearing about my symptoms the doctor immediately sent us to the ER. I guess now I know more about what constitutes an ER visit vs an urgent care visit. They drew some blood to give us a head start at the ER and then we drove to the one they recommended. We ended up being super lucky that they sent us to Methodist hospital. It's pretty close to our house, in between our parents' houses, and they also happen to specialize in oncology. At the time we weren't thinking about any of these things.

They were waiting for me at the ER, which was great because I was barely standing on two feet. To be honest, I was pretty out of it and loopy most of the day. I'm surprised I remember as much as I do. A nurse led me straight to a room and began the process of getting me checked in. A doctor came in pretty quickly and told us that my blood counts (hemoglobin, white blood cells, platelets) were all super low. She also threw out the word leukemia right away just so we could have some warning of what this could be. Of course, I thought there was no way I could have leukemia. It had to just be a bad virus. The doctor said at the least I would be spending the weekend in the hospital to run more tests and monitor my health. Again, I still just thought this was some bad flu. I didn't even really know what leukemia was.

That day was filled with an MRI, CT scan, bone marrow biopsy, and more blood draws than I could count. My arm was bruised and filled with needle holes. Seriously, if one more person tried to draw blood that day I think I would have punched them. Rachel tells me I started to get pretty delirious towards the end of the day and I was feeling it. They told me I would need blood and platelet transfusions but after all the tests I was left sitting in the ER room waiting for a room to open in the hospital so I could be admitted.

As soon as we got upstairs my doctor came in to tell us the news. The results from the bone marrow biopsy determined that I had Acute Myeloid Leukemia (AML) and we would need to begin treatment right away. I had a very vague idea of what leukemia was but I learned a lot pretty fast. This leukemia had probably not been in my body more than a few weeks before I was admitted to the hospital. It being acute meant that it was a fast-moving disease and needed immediate treatment, which is why I had been admitted to the hospital that same day. This was all just a big shock, a huge roadblock to the life I thought I would be living this year. I had so many plans and even just small daily routines that had just come to an end.

Thankfully I received three units of blood that night and two of platelets. In the morning I was looking a lot less like death and felt a lot better too. If you've ever donated blood, I thank you. You probably saved my life that night and have given me the chance to fight this disease. Even though I was feeling more alive over those next couple of days, I could feel the weight of the news I heard that night. I still don't think it's real sometimes. I know there is a real chance I don't make it through this. I think it's more likely that I will, but still the reality of that is impossible to escape. It especially was those first couple of days. Thankfully things just started moving. It helped that we were admitted right away, taken care of, nursed back to health, and told that chemo would be starting in a few days. It gave me a sense that it was as simple as: there is something wrong and we have to do something about it.

The treatment, as far as this first month goes, is fairly straightforward. I would be put on a chemo treatment called 7 + 3. Seven days of a constant chemo drip and three days of a single push of a different kind of chemo. Then I would wait for three weeks to let the chemo do its work and then let my body rebuild my immune system a bit. I would remain in the hospital for that entire month and at the time little else was known about the future beyond the hospital. Since then I've learned that there will be two options after the first month in the hospital. One is a stem cell transplant and the other would be 3-4 rounds of consolidation chemo. Consolidation is necessary because even if we get the leukemia into remission at the end of the month, it has such a high likelihood of returning that further treatment is needed to bash it into the ground. The transplant could come from another adult or umbilical cord blood. It would be a pretty sure way that the leukemia cells wouldn't return, but there are some pretty big risks with going that direction. There are risks with further rounds of chemo as well, so the choices are gonna be difficult and there is no clear winner.

This month so far has been pretty good health-wise. I've handled the chemo better than expected. The nausea was manageable and the worst I went through was just extreme exhaustion. I consider myself very lucky in that regard. Maybe things will just go smoothly and I'll beat the shit out of this cancer. On my best days, it's easy to believe this narrative. Some days though I allow my mind to wander into a pit of despair and think that I'll never make it out of this simply because this crazy rare bad news has already happened to me. I've already rolled the 15,000-sided die and rolled a 1, so why would there be any reason to get out of it? A D&D character doesn't get to get out of it, so why should I? (Although if I was a halfling I could totally reroll the 1.) Thankfully, with all the support I've received from friends and family, I don't often have these thoughts. The treatments are working so far and I'm young. Did you know the average age for adult AML is 68? Not saying that age is everything but it definitely gives me a good shot at handling the therapies.

I had a bone marrow biopsy this week to make sure the chemo had fully wiped out the leukemia cells and it had. That was a reason to celebrate. Barring something really weird my bone marrow cells should be starting to regrow and bring my immune system back into action if only a little bit. Let the record show, I am not a fan of bone marrow biopsies. They are really only painful during the numbing phase but they feel and sound really weird. It's not a pleasant experience.

We've learned that my immune system won't be back to normal for maybe a year, which means I won't be able to attend crowded events or even eat at busy restaurants. I'll have to wait quite a while before I eat sushi even if everything goes well. And there will be many hard decisions in the future. For now, we wait for my counts to come up so I can get out of the hospital. To be able to relax at home will be a huge comfort. I can't wait for that day.

For now, I find myself spending a lot of time sitting in bed or the chair and walking to try to keep my muscles working. I'm listening to books on tape and writing this blog post. I've seen so many good friends and family members who have come to visit me, which makes the time go fast. I appreciate all the support we've been given so far and all the support I'm sure we will need in the months to come. I will continue to write my story here even if no one reads it. I find it cathartic to write my own thoughts publicly and share my feelings. If you've made it this far, congrats! Tweet at me if you have any questions or just want to reach out for any reason.

Thanks for reading.

Matt

Conditional Vue Hash Router https://codegregg.com/posts/conditional-vue-hash-router/ Mon, 02 Dec 2019 16:28:17 GMT https://codegregg.com/posts/conditional-vue-hash-router/

Recently I built a Vue app for a client that lived inside an existing WordPress site. The Vue app was responsible for handling a complex location search section of the site and handled routing for the search, results, favorites, and compare pages. But there was a problem. Even though I was creating the Vue app conditionally based on the existence of root divs on the page, the JS bundle was being loaded everywhere and instantiating Vue Router on every page of the WordPress site. This meant every URL was appended with /#/, which was not ideal for the client.

You might think that at this point you could just switch to the history router instead of the hash router, but that gets insanely tricky, if not impossible, with WordPress. Also, I had two main entry points for the Vue app: a component (that doesn't require routing) that loads on the home page of the WordPress site, and the component I referenced earlier that includes routing for the main Vue search app.

Solution

I got around this issue by dynamically loading the router config only if I found the DOM node for mounting the search app on the page. Here's what the code looks like for main.js:

import Vue from 'vue';

import store from '@/store';

import Search from '@/Search.vue';
import SearchAutocomplete from '@/SearchAutocomplete.vue';

/**
 * Dynamic loader that will look for registered Vue mounting points
 * and mount different Vue components
 */
function ready(fn) {
  if (
    document.attachEvent
      ? document.readyState === 'complete'
      : document.readyState !== 'loading'
  ) {
    fn();
  } else {
    document.addEventListener('DOMContentLoaded', fn);
  }
}

const components = {
  '#vue-search': Search,
  '#vue-search-autocomplete': SearchAutocomplete,
};

ready(() => {
  for (const selector in components) {
    const elements = document.querySelectorAll(selector);

    Array.prototype.forEach.call(elements, async el => {
      const thisComponentProps = el.dataset;

      const config = {
        el,
        store,
        render: h =>
          h(components[selector], {
            props: {
              ...thisComponentProps,
            },
          }),
      };

      // Only initialize the router if we are loading the main vue search app.
      // This is so that the hash tag in the url bar doesn't appear on other wordpress pages.
      if (el.id === 'vue-search') {
        const router = await import('./router');

        config.router = router.default;
      }

      // eslint-disable-next-line no-new
      new Vue(config);
    });
  }
});
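
For reference, the dynamically imported ./router module is just a regular Vue Router setup. Here's a minimal sketch of what it could look like (the placeholder route components below are illustrative, not the client's actual config):

// router.js: a minimal sketch. Placeholder render functions stand in for the real
// search, results, favorites, and compare views.
import Vue from 'vue';
import VueRouter from 'vue-router';

Vue.use(VueRouter);

export default new VueRouter({
  // Hash mode (Vue Router's default) is what puts the /#/ in the URL bar,
  // which is exactly why this module is only loaded on the search page.
  mode: 'hash',
  routes: [
    { path: '/', name: 'search', component: { render: h => h('div', 'Search') } },
    { path: '/results', name: 'results', component: { render: h => h('div', 'Results') } },
    { path: '/favorites', name: 'favorites', component: { render: h => h('div', 'Favorites') } },
    { path: '/compare', name: 'compare', component: { render: h => h('div', 'Compare') } },
  ],
});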
Webpack Image Loading https://codegregg.com/posts/webpack-image-loading/ Mon, 23 Sep 2019 13:23:44 GMT https://codegregg.com/posts/webpack-image-loading/

Sometimes you have to make a custom webpack config. Whether you're building a Vue app inside a server-rendered platform like WordPress or you just decided to roll your own config, this might help you if you're having issues loading images into your components. My example uses Vue but the principle works with other front-end libraries. This assumes you already have a webpack setup and are having difficulties getting images to import and load.

Instructions

Make sure you have file-loader installed along with all your other webpack dev dependencies.

npm i -D file-loader

Then add it to your webpack config file

module: {
  rules: [
    ...

    {
      test: /\.(png|jpg|gif)$/,
      use: [
        {
          loader: 'file-loader',
          options: {}
        }
      ]
    }
  ]
}

The final trick is to make sure you include the publicPath in your webpack output config object. This is honestly the key to this entire post. It will look something like this for WordPress:

output: {
  filename: 'app.js',
  path: path.resolve(__dirname, 'assets/build'),
  publicPath: '/wp-content/themes/my-theme/assets/build/'
},

We have to not only let webpack know where to output assets from the build but also where to access them within the grand scheme of the application (from the root path of the web server). If you're not using WordPress, just make sure that path leads to the directory where the images you import are built. After you call import on an image file in one of your JS files, you should see the images being copied to your build directory. If not, you may have a problem with your file-loader configuration.
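
To make that concrete, here's roughly what the imported value resolves to once publicPath is set (a sketch; the hashed filename below is made up and depends on your file-loader options):

// Illustrative only: file-loader emits the file to the output directory and the
// import resolves to its public URL, prefixed with publicPath.
import icon from '../images/icon.png';

console.log(icon);
// => "/wp-content/themes/my-theme/assets/build/0a1b2c3d4e5f.png"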

With publicPath added in there you can now call things like this in your Vue.js components.

<template>
  <img :src="image" :alt="type" />
</template>

<script>
import icon from '../images/icon.png';

export default {
  data: function() {
    return {
      image: icon,
    };
  },
};
</script>

Good luck out there.

Gratinata Pasta Sauce https://codegregg.com/posts/gratinata-pasta-sauce/ Fri, 20 Sep 2019 11:45:25 GMT https://codegregg.com/posts/gratinata-pasta-sauce/

prep time: 20min / cooking time: 10min

This amazingly delicious pasta sauce is perfect for baked pasta dishes with shrimp or chicken or pretty much any other pasta dish.

Ingredients

  • 3 Tbs butter
  • 2 Tbs minced garlic
  • 3 Tbs (or more) marsala wine
  • 2 C heavy cream
  • 1 C grated parmesan cheese
  • 1/2 C milk
  • 1/2 C chicken broth
  • 1 Tbs cornstarch
  • 1 Tbs Grey Poupon dijon mustard
  • 2 tsp minced fresh rosemary
  • 1/2 tsp salt
  • 1/2 tsp minced fresh thyme
  • 1/4 tsp ground cayenne pepper

Instructions

  1. Melt the butter in a medium saucepan over medium/low heat.

  2. Add the garlic and sweat for about 5 minutes. Make sure it doesn't brown.

  3. Add the marsala wine and cook for another 5 minutes.

  4. Add the remaining ingredients and whisk until smooth.

  5. Bring to a simmer and keep it there for 10 minutes whisking lightly the whole time.

  6. Cover the sauce and remove from heat.

  7. Pour it onto any pasta and mix it in.

  8. (Optional) Place sauced pasta and whatever else you've added into an oven safe dish. Add more parmesan cheese and paprika on top, and bake at 500 degrees for 10 min, or until top is golden brown.

Penne

Netlify Dev + Serverless Functions + Mailchimp Subscribe Form Tutorial https://codegregg.com/posts/netlify-dev-serverless-functions-mailchimp-subscribe-form-tutorial/ Fri, 06 Sep 2019 19:13:46 GMT https://codegregg.com/posts/netlify-dev-serverless-functions-mailchimp-subscribe-form-tutorial/

You've probably heard about Netlify, the amazing static host that literally makes everything about static website hosting feel like a walk on the beach. They've brought us automatic PR branch deploys, server-side analytics, form submissions without a server or even a serverless function, and even identity management. I do not work for Netlify but I would rarely choose to use any other service for hosting static sites. By the end of this tutorial you will have a working serverless function hosted on Netlify, automatically built and deployed every time you push to git, that you can use on your static site to add subscriber emails directly to Mailchimp.

Now they've brought us a tool called Netlify Dev, and in this tutorial I will show you how to use Netlify Dev to build and deploy a simple serverless function on Netlify for adding emails to a Mailchimp subscriber list through their API. The key benefit to using Netlify Dev is that you can be sure what you test locally will be handled exactly the same way by Netlify's service once it's deployed. Through their CLI they give you the exact same deployment tools that Netlify uses on their end when you deploy your site through the dashboard, and of course, they make it super easy. You won't have to run any deployment scripts locally to get your serverless function to Netlify; you'll only need to push the source code to git. Plus, you get logs for your functions right in your terminal.

A quick note here on serverless functions, in case you don't know why we need them (if you do, you can skip this paragraph). When we build static sites with Create React App (CRA), Vue, or Gatsby, we are shipping all our code to the front end. This is great until you need to communicate with a service that requires you to store and use private keys to gain access to their APIs. It would be very bad to ship these to the client side. Very bad. So instead we can use small Node-based serverless (lambda) functions that we can call from our front end, and that relay our information on to the external service. You can host these functions on platforms like AWS, GCP, and many others, but if you're already hosting your site on Netlify, why not keep everything simple and use Netlify Functions?


Getting Started

To start we're going to need a few things:

  • A Netlify account
  • A Mailchimp account
  • The netlify CLI npm i -g netlify-cli
  • An application you want to deploy to netlify along with the function (I used VueJS but CRA, Gatsby and many others will work)

You can then run netlify dev from your project root, which will do a few things:

  • Detects and runs your site generator
  • Makes environment variables from your Netlify dashboard available locally on process.env
  • Uses routing rules from Netlify or a local netlify config file
  • Compiles and runs cloud functions

Since we're trying to create a cloud function to connect an email submission form to Mailchimp, we'll need to create the cloud function locally.


First, create this directory structure in your site root

/functions
  /subscribe
    subscribe.js
...
[other site files]

Then navigate to the folder that contains subscribe.js and run the command:

npm init

Click through the setup process and at the end you will have a package.json file. We will be committing this file because Netlify can be told to recursively run npm install inside function directories, which honestly is pretty cool. Most other serverless function hosts require you to compile your function locally before deploying.

Now we need to install two packages inside our subscribe directory to make our Mailchimp subscribe function work.

npm i -S base-64 node-fetch

And your subscribe.js file should look like this:

const fetch = require('node-fetch');
const base64 = require('base-64');

exports.handler = async (event, context) => {
  // Only allow POST
  if (event.httpMethod !== 'POST') {
    return { statusCode: 405, body: 'Method Not Allowed' };
  }

  const errorGen = (msg) => {
    return { statusCode: 500, body: msg };
  };

  try {
    const { email } = JSON.parse(event.body);

    if (!email) {
      return errorGen('Missing Email');
    }

    const subscriber = {
      email_address: email,
      status: 'subscribed',
    };
    const creds = `any:${process.env.MAILCHIMP_KEY}`;
    const response = await fetch(
      'https://{data_center}.api.mailchimp.com/3.0/lists/{list_id}/members/',
      {
        method: 'POST',
        headers: {
          Accept: '*/*',
          'Content-Type': 'application/json',
          Authorization: `Basic ${base64.encode(creds)}`,
        },
        body: JSON.stringify(subscriber),
      }
    );

    const data = await response.json();

    if (!response.ok) {
      // NOT res.status >= 200 && res.status < 300
      return { statusCode: data.status, body: data.detail };
    }

    return {
      statusCode: 200,
      body: JSON.stringify({
        msg: "You've signed up to the mailing list!",
        detail: data,
      }),
    };
  } catch (err) {
    console.log(err); // output to netlify function log
    return {
      statusCode: 500,
      body: JSON.stringify({ msg: err.message }), // Could be a custom message or object i.e. JSON.stringify(err)
    };
  }
};

You'll need to fill in a few things at this point. Make sure you've generated an API key on your Mailchimp account. Replace the {data_center} part of the fetch URL with the very last part of your API key after the dash. It will be something like us6 but will be different for everyone. Then replace {list_id} with the Audience ID of the list you want to add subscribers to. This can be found in the Audience Name settings page in the Audience section of Mailchimp. The last thing you need to do is add the full API key to your Netlify dashboard, which is under domain settings. Add an environment variable called MAILCHIMP_KEY with your API key as the value.
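
For example, if your API key ended in -us6 and your Audience ID were a1b2c3d4e5 (that list ID is made up here), the finished URL would look like this:

https://us6.api.mailchimp.com/3.0/lists/a1b2c3d4e5/members/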

Next, run this command from your project root to link your site to Netlify and generate the necessary state.json file.

netlify init

There are two last things we need to do. First, create a netlify.toml file in the site root that looks like this:

[build]
  publish = "dist"
  functions = "./functions/"
[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This tells Netlify where your functions will be located and automatically redirects requests from /api/* to that directory. (Note: this redirect is just a convenience and isn't strictly necessary.)

You also need to install netlify-lambda in your project root with npm and add a postinstall script to that package.json so that Netlify knows to run npm install inside function directories at build time. The changes to your package.json will look like this:

"scripts": {
  ...
  "postinstall": "netlify-lambda install",
},
"dependencies": {
  ...
  "netlify-lambda": "^1.6.2",
}

(Note: if you install netlify-lambda from the command line you'll get the latest version. Do that.)

Then run

npm install

Running Netlify locally

Now finally if everything was done right you can run this command to get Netlify up and running locally.

netlify dev

It will build and start serving your site at localhost:8888. You now have your site built the same way Netlify builds it on their servers, and you can call your functions locally as well. The function we wrote above can be called at localhost:8888/api/subscribe with a POST request and a JSON body of:

{
  "email": "[email protected]"
}

Go ahead and test it with Postman and let me know if you have any issues.
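
If you'd rather call it from your site than from Postman, a front-end call might look something like this (a minimal sketch using the browser fetch API; adapt it to your framework of choice):

// Sketch of calling the subscribe function from the front end.
// The /api/subscribe path works because of the redirect we set up in netlify.toml.
async function subscribe(email) {
  const response = await fetch('/api/subscribe', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email }),
  });

  if (!response.ok) {
    // On failure the function returns Mailchimp's error detail as the body.
    throw new Error(await response.text());
  }

  return response.json(); // { msg: "You've signed up to the mailing list!", detail: {...} }
}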

The netlify dev command will also take care of hot reloading both the functions and the build of your site, since it just runs whatever script your app uses to develop locally (serve for Vue). It's pretty awesome running a single command and having it hot reload both the server-side scripts and the front-end application. Hot reloading, though, is only available at whatever port that CLI serves on; on the :8888 server you'll need to refresh the browser to see changes.


Conclusion

That's it! You can now push your code to GitHub (or whatever you have your site connected to Netlify with) and both your API function and your front-end application will be built and hosted on Netlify. Double-check that you have git set up to ignore node_modules and that you aren't committing any API keys. Again, let me know on Twitter if you have any issues or if I need to update any part of this guide. Good luck out there.

Why don't we talk to users? https://codegregg.com/posts/why-dont-we-talk-to-users/ Fri, 28 Jun 2019 15:17:08 GMT https://codegregg.com/posts/why-dont-we-talk-to-users/

Have you ever been in a meeting with a stakeholder where you or someone else suggested interviewing users or looking at user feedback to guide the roadmap, only to be shot down? I have. I’ve often wondered what causes large group meetings to end with everyone agreeing with the opinion of the highest-paid person in the room. Sprint after sprint, we tend to work on sets of features that have more to do with what stakeholders personally want in order to advance their own careers than with what might actually be best for the product’s users.

I’m not saying this is true of all teams on all products, but I would be surprised if you’ve never experienced it. I’ve got a couple of thoughts on why this might happen and some thoughts on what we as production consultants might be able to do about it.

The main reason I think this happens has to do with pride. The people in that meeting room were chosen to lead this project for a reason. Either they care the most about the product, an executive likes them and has put them in charge, or they personally think they are best qualified to guide that project to success. It has to be insanely difficult in those positions to admit that you might not know the answer. That you might need to consult user feedback or conduct a UX research session to discover what is best for your users.

I would argue that this leads to catastrophic results. Just because you’re being paid a lot of money to lead a project doesn’t mean you need to come up with all the answers and I’ve literally seen these people ignore user feedback in favor of completing features at the request of their bosses.

So what can we do about it? I think the best thing to do is set proper expectations with the client at the onset of a project. There is a great quote I like to reference from designer Mike Monteiro:

“You may be hiring us, and that may be your name on the check, but we do not work for you. We’re coming in to solve a problem, because we believe it needs to be solved, and it’s worth solving. But we work for the people being affected by that problem. Our job is to look out for them because they’re not in the room. And we will under no circumstances design anything that puts those people at risk.” [1]

As builders of the web, we should always be thinking about our users first. They are the ones who make a product successful or not. Focusing on the HiPPO (the Highest Paid Person’s Opinion) will only work out for so long.

If you’re already in the position I described above, the best we can do is try to remind stakeholders why we’re building this app in the first place. Ask the question, “How will this benefit the users?” as often as you can and maybe we can make the internet a better place.

mg

Easy Weeknight Chicken Thighs https://codegregg.com/posts/easy-weeknight-chicken-thighs/ Thu, 27 Jun 2019 14:37:55 GMT https://codegregg.com/posts/easy-weeknight-chicken-thighs/ Chicken

prep time: 5min / cooking time: 30-40min

Been really loving how easy and delicious this recipe from cookinglsl is.

Ingredients

  • 2 lb boneless skinless chicken thighs
  • 1/4 cup Dijon mustard
  • 1/4 cup honey
  • 1 tbsp olive oil
  • 1/2 tsp salt
  • 1/4 tsp black pepper
  • 1/2 tsp oregano
  • 1/16 tsp cayenne pepper (optional) or a few dashes of your favorite hot sauce

Instructions

  1. Preheat oven to 350F

  2. In a small bowl, combine Dijon mustard, honey, olive oil, salt, pepper, oregano, and cayenne pepper.

  3. Place chicken on a greased cookie sheet or baking pan. Pour the sauce over it. Make sure it is evenly coated.

  4. Bake chicken uncovered for 30-40 minutes, until the top is golden and the internal temperature of the meat is 165F.

Welcome! https://codegregg.com/posts/welcome/ Thu, 27 Jun 2019 13:40:23 GMT https://codegregg.com/posts/welcome/

Hey there, my name is Matt. I’m a husband, dog dad, software engineer, musician, photographer, coffee enthusiast, life-long learner, backpacker, home chef, volunteer, public speaker, and party planner. I’m probably many other things as well but that’s what came to mind. I consider myself a passionate person in general and I feel those who know me would describe me thus. Now here I am beginning a blog. Primarily in order to share professional learnings in a more public forum but at times to also divulge more personal stories or recipes. My life sometimes seems to revolve around code and food so as of now those will be the primary categories of this blog.

I began web development as a middle schooler, working on my school’s website as a media center admin as well as the website of my local Minneapolis public library. Oddly I spent a lot of time in libraries, which didn’t always amount to as much reading as some of my peers, but I can say I love the feeling (and smell) of being in a library. A large part of me hopes that we will always have physical books and libraries to house them.

I had no idea what I was doing at first and to be honest I wasn’t writing much code but I do remember Netscape… The public library website could only be edited from Netscape. I continued to dabble throughout high school but at that time in no way was I thinking of computers or coding as something I would be doing as a career. I had always tinkered with computers from a very young age but I didn’t consider it something I would get a degree in.

It was in my second year of college that I decided to get a degree in computer science, and it might be a surprise to you but I really didn’t enjoy it. There was so much math, so many equations to memorize, so many tests to fail and information to know. It all seemed too impractical, learning to make GUIs with Java, programming image processors in C++, and nothing that I thought, oh cool that’s what I want to do for the rest of my life! So how did I end up here, where I can genuinely say I love my job? I fought through 3 years of data models, Java, and discrete math simply because I thought it was my best chance of graduating in a normal 4 years and getting a job that could pay off my loans. I was taking 18 credits a semester to finish my double major with Biblical-Theological Studies and I had a hunch that major wasn’t going to be helping pay the bills.

Thankfully I found a job late in the summer after graduation doing Java for a lab resource company nearby. I worked there for a year but quickly found out that my true passion was in developing user interfaces and creating delightful user experiences. I taught myself, in that year, as much as I could about front end technologies. At the time, Sass and FE build tooling were just beginning to come out; everyone still used Bootstrap and Zurb’s Foundation. So I started there, creating websites for my company but also building RSVP sites for parties that I was hosting on the site. I taught myself about sending emails with PHP, I taught myself about hosting providers and Apache, bash scripts and basic AJAX requests. I can’t express enough how little I knew about all of this when I started, but it was the first time in my life I truly became passionate about being a developer. I became driven to learn and to build as much as I possibly could. The first time I used Ember on todomvc.com I was blown away. I’m sure some of you were right there with me.

After a year I left that company to build front end UIs exclusively, although there was a bunch of php mixed in there as well. The agency I was hired at required all of us to pretty much handle full stack responsibilities even though we had defined back end and front end developers. Over the next 4 years at Spyder Trap, the acquisition of our company by a health care startup, and my work now at the Nerdery, I can say that my knowledge has exploded and I can never go back to not building with software. I became stuck in the best of ways.

I would like to note here that while I had decided Java and building back ends were not for me, and while I would say I learned none of what I do now in my computer science degree, I’m still thankful for what I learned there. Tree structures are all over the place in web UIs if you know how to spot them. Knowing what makes an algorithm fast or slow (discrete math) is increasingly useful the more JavaScript we inject into our pages. I wouldn’t say it’s necessary to get a CS degree before getting a developer job, but I’m thankful for the path that has led me here. No regrets.

Now I find myself passionate about quite a few things. A deep love for the UI, empathy with users and their experience, appreciation for good thoughtful design, an understanding of what makes software performant and secure, an appreciation of what goes into creating an API or handling user auth, and even some interest in the business side of things as well. At this point, I’m not quite sure where I’ll go next and you may see some of that come through in the topics I write about on this blog. I hope this explanation of my past helps to give those ramblings some context.

My weird smorgasbord background has led me to create a number of dead-end side projects: a scheduling app for schools, a hall pass app, kitten critic (tinder for cats), food truck finder, a Dungeons & Dragons app, and a number of others. I think I just like coming up with ideas, engineering how I would solve problems with code, what database would be the optimal choice, and learning from all my failures along the way. I think it’s a big part of what makes me who I am.

I’ll stop this ramble for now but I wanted to give an introduction to who I am and why I’m here. I hope that through all the articles I write on this blog I remain in a state of humility and empathy. I don’t know everything, and I absolutely never will. Other people probably know more than me and definitely have ideas better than mine. I am here to listen as much as I am to write. I am here to learn as much as I am to teach.

mg

Research Driven Development https://codegregg.com/posts/research-driven-development/ Thu, 28 Mar 2019 22:58:13 GMT https://codegregg.com/posts/research-driven-development/ Title ideas: success, research, data, user

The Problem

  • Companies create roadmaps of features quarters in advance. Maybe with room for roadblocks but definitely not room for rewrites, refactors, or surprise features.
  • This is because what matters to many businesses is that an initial idea just gets built; success is secondary (or subconsciously desired). What matters most is features checked off the list, not metrics like a 5% revenue increase or a 10% increase in user retention. The business hopes these will happen by default once the product is built, but they aren’t evaluated at every step. They aren’t tested.
  • We end up building software because someone wanted it built and we forget why we decided to build it in the first place.
  • Product owners are disappointed by lack of velocity, where velocity is defined as the number of features completed in a sprint.
  • We get into the cycle of thinking it’s “bad” to remove features.

The Proposed Solution

  • Test with users often. Actually collect data to give back to business.
  • Define success. Which means defining the why of a feature.
  • Every feature should have a business outcome in mind with specific metrics attached to the goals
  • Cut the project before it’s too late

“the single most important factor that will make a service successful, is the end user satisfaction” 1
