Henry Needs Coffee - Terminal Blog
A developer's journey through code, coffee, and terminal adventures. Technical articles, tutorials, and musings from a terminal-loving developer.
https://henryneeds.coffee/en-us
Henry Quinn

How I've Been Driving LLMs - Part 2
https://henryneeds.coffee/blog/driving-claude-code-part-2/
Introducing Jane: My Custom Specs/Stdlib MCP Server
Mon, 07 Jul 2025 00:00:00 GMT

<p>Author Note: This post is Part 2 of a "How I Drive LLMs" series, so consider reading <a href="https://henryneeds.coffee/blog/driving-claude-code-part-1/">Part 1</a> before continuing.</p> <p>So, you might be thinking:</p> <blockquote> <p>Great, Henry. Using spec and stdlib docs sounds great, but if I'm generating those docs with Claude Desktop and you want me to generate code with Claude Code, that's a <em>lot</em> of copy/pasting I need to do for every new project.</p> </blockquote> <p>Excellent instincts! I see you're thinking ahead, and yeah: it <em>is</em> a lot of copying and pasting individual docs over from Claude Desktop artifacts to a new project dir in your IDE. After doing that for exactly one project, I realized I needed to automate that toil away.</p> <p>These new agentic tools could already do <em>a lot</em>, but in the last couple of months they've been updated to support remote <a href="https://modelcontextprotocol.io/introduction">Model Context Protocol</a> servers (which are basically like plug-ins, or LLM-flavored wrappers for different APIs). It's kind of like taking Claude Code and strapping a jetpack to it.</p> <p>Instead of relying on outdated training data, you can configure your LLM client to use the <a href="https://github.com/upstash/context7">Context7</a> MCP server so that you can always pull up-to-date docs for the languages/frameworks you're using.
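</p> <p>Wiring one of these servers up is just a small config change. As a rough sketch (package names and config locations drift over time, so check each server's README before copying), a Claude Desktop <code>claude_desktop_config.json</code> entry for Context7 looks something like this:</p>

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

<p>Claude Code accepts an equivalent <code>mcpServers</code> map (via <code>claude mcp add</code> or a project-level <code>.mcp.json</code>), so the same shape covers both tools.</p> <p>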
Instead of relying on string searches in your codebase, you can set up a <a href="https://github.com/oraios/serena">Serena</a> MCP server so that you have access to an abstract syntax tree, giving your LLM a deeper understanding of your codebase.</p> <p>So, in that vein, let me introduce you to my first MCP server: Jane.</p> <blockquote> <p><strong>🔗 <a href="https://github.com/quinncuatro/jane-mcp-server">Quinncuatro/jane-mcp-server</a></strong> Jane is a Model Context Protocol (MCP) server that provides a knowledge management system for stdlib and specs documents.</p> <p><img src="https://img.shields.io/github/stars/quinncuatro/jane-mcp-server?style=social" alt="GitHub stars" /> <img src="https://img.shields.io/github/forks/quinncuatro/jane-mcp-server?style=social" alt="GitHub forks" /> <img src="https://img.shields.io/github/issues/quinncuatro/jane-mcp-server" alt="GitHub issues" /></p> </blockquote> <p>Jane is, essentially, a handful of CRUD tools sitting on top of a pile of flat markdown files. Jane can create &amp; update documents, search &amp; list those documents, and get specific documents to pull them into an LLM's context window. Those documents live under two predictable subdirectories: <code>./Jane/specs/</code> and <code>./Jane/stdlib/</code>. Jane is self-hostable and can connect to both Claude Desktop and Claude Code.</p> <p>So when I'm working on a new project, I can chat with Claude Desktop to develop a list of languages, frameworks, features, and expected outputs for defined inputs. That process usually starts with something like:</p> <blockquote> <p><code>Hey there! Time for a new project! Please read these two blog posts: https://ghuntley.com/stdlib/ &amp; https://ghuntley.com/specs/ to get an idea of our workflow and then come back to help me plan out spec docs for $project. Please ask me any manner of follow-up or assumption-check questions you might have before actually generating any documentation.
$More-context-about-project...</code></p> </blockquote> <p>After a few back-and-forths, once we come to a good understanding of what we're going to build, Claude can create a new subdir on Jane (<code>./Jane/specs/$project-name</code>) and populate it with everything we discussed in documents like:</p> <ul> <li><code>api-reference.md</code></li> <li><code>deployment.md</code></li> <li><code>document-management.md</code></li> <li><code>technical-decisions.md</code></li> <li><code>testing-strategy.md</code></li> <li>...</li> </ul> <p>I often accomplish that with a prompt like:</p> <blockquote> <p><code>Great! Now that we've come up with what we want to build, please create a new subdir under specs on Jane for this project and populate it with project specification documents. Use as many different categories as you think you need, but keep in mind you want to be thorough in your explanations, but succinct enough to not blow up context windows.</code></p> </blockquote> <p>So, after having a short conversation with my AI intern, they can generate documents that can be loaded into any LLM's context window to steer the session.</p> <p>Similarly, you can spin up a different conversation with Claude Desktop (with similar prompts as the spec docs thread) to generate language/framework-specific stdlib documents that tell an agentic coder not just what to build, but how.
The subdir <code>./Jane/stdlib/python/</code> might look like:</p> <ul> <li><code>dependencies-management.md</code></li> <li><code>error-handling.md</code></li> <li><code>mcp-protocol-standards.md</code></li> <li><code>performance-standards.md</code></li> <li><code>security-rules.md</code></li> <li>...</li> </ul> <p>Here's what a typical Jane directory structure looks like after working on a few projects:</p> <pre><code>Jane/
├── specs/
│   ├── bluetooth-fix-tool/
│   │   ├── requirements.md
│   │   └── testing-strategy.md
│   ├── my-web-app/
│   │   ├── api-reference.md
│   │   ├── deployment.md
│   │   ├── development-workflow.md
│   │   └── technical-decisions.md
├── stdlib/
│   ├── golang/
│   │   └── cli-tool-patterns.md
│   ├── javascript/
│   │   ├── architecture-standards.md
│   │   ├── error-handling.md
│   │   └── testing-patterns.md
│   ├── python/
│   │   ├── dependencies-management.md
│   │   └── mcp-protocol-standards.md
</code></pre> <p>And since I'm creating all those documents with a self-hosted Jane MCP server, they're being stored in a central location: on the file system of one of my home servers. So when I'm ready to build, I launch <code>claude</code> in a new project directory, and I can similarly use Jane to pull those freshly created documents into the context window with:</p> <blockquote> <p><code>Please pull and study all the documents under Jane's specs for $project to understand functional specifications and under Jane's stdlib for $language to understand technical requirements. Implement what is not implemented. Create tests.
Build the project, run it, fix on errors.</code></p> </blockquote> <p>Claude Code is then able to look at the configuration for using remote MCP servers, see that Jane has tools for <code>list_stdlibs</code>/<code>list_specs</code>/<code>get_stdlib</code>/<code>get_spec</code>, and intelligently use those tools to track down the documents I'm asking about in order to pull them into the context window.</p> <p>Presto-change-o, no more copying and pasting artifacts over!</p> <p>Then you can get really fancy by telling Claude Code:</p> <blockquote> <p><code>If you need any additional context ($language, $framework, $problem-domain, $whatever), please reach out to Context7 (another configured and self-hostable MCP server) to get whatever docs you need.</code></p> </blockquote> <p>If you're a couple steps ahead of me, you might be thinking:</p> <blockquote> <p>If you're generating those stdlib docs in a persistent volume, you can reuse the language- and framework-specific ones on other projects, right?</p> </blockquote> <p>And you'd be right! My mountain of stdlib docs keeps building up over time, giving whatever MCP-compatible agent I plug Jane into an ever clearer view of how I like my code written, formatted, and linted. It can keep track of things like naming conventions, idiom usage, testing opinions... all sorts of stuff.</p> <blockquote> <p>Author Note: I'm also working on getting a new Claude feature called <a href="https://docs.anthropic.com/en/docs/claude-code/hooks">Hooks</a> to handle automatically updating those stdlib docs when Context7 comes into play, just to make sure everything stays trued up.</p> </blockquote> <p>In fact, every project I've built with this method (since creating Jane) has hit fewer and fewer roadblocks:</p> <ul> <li> <p><a href="https://github.com/Quinncuatro/HenryNeedsCoffee-Astro">HenryNeeds.Coffee</a> (JavaScript/Astro) - I rebuilt my personal site from the Gatsby framework to Astro.
This first project benefited from stdlib docs about modern JS patterns and architecture preferences for static sites.</p> </li> <li> <p><a href="https://github.com/Quinncuatro/hass-cli">Home Assistant CLI</a> (Golang) - I'd wanted a CLI tool for managing my smart home from a terminal for years. A combination of good project specification and Go-specific stdlib docs brought this project to a working v1 in just a few hours, and convinced me that the specs/stdlib approach actually works.</p> </li> <li> <p><a href="https://github.com/Quinncuatro/mbp-bluetooth-fix">Bluetooth Fix Tool</a> (Shell/JS) - Another tool I've needed for years: a quick utility script to reset my bluetooth headphones so that audio quality doesn't stay awful after leaving a Google Meet call. Even small tools benefit from having consistent patterns and error handling approaches. This particular project went from "idea" to "working solution" in under an hour.</p> </li> </ul> <p>Every project teaches you something new about your coding preferences, and with Jane, those learnings get captured and reused immediately. My JavaScript stdlib docs got better after the website rebuild. Figuring out the Home Assistant tool improved my CLI tool patterns documentation. Each project built with this method and MCP server makes the next one at least a little bit faster.</p> <p>Anecdotally, I've even seen a decrease in the number of "tries" and corrections it takes to spit out something more useful than the often-lambasted "AI slop" we hear so much about online. For instance, Claude Code was able to get most of the Bluetooth Fix Tool right on the first try. It only took three follow-up prompts (mostly passing it error messages) to shore up basic functionality. All of the pre-work of generating the spec and stdlib docs made for an easy implementation.</p> <p>But like I alluded to earlier, you really start cooking with gas when you combine Jane with other MCP servers.
Jane is great for planning and then transitioning to building, but Context7 can grab up-to-date documentation to fill in context gaps, and Serena makes file operations more efficient. Just those three MCP servers tied together have made me worlds more productive - at least with these "starting at 0" projects.</p> <p>Now that we've covered how Jane has helped me solve real-world problems, let's zoom out a little bit. With reliable context engineering tools and agents that can actually implement code based on my preferences, I find myself thinking about problems differently.</p> <p>I'm spending less of my free time working on traditional "side projects" and more time figuring out how to architect systems that can build projects automatically (and with less user input from me).</p> <p>Context engineering definitely scales. I'm curious what will happen when I combine Jane with custom agent workflows. I have some experiments going with LangGraph at the moment, and I can't wait to tell you more about what I can build.</p> <p>Until then, stay frosty and keep building rad things.</p> <p>Tool Links:</p> <ul> <li><a href="https://www.anthropic.com/claude-code">https://www.anthropic.com/claude-code</a></li> <li><a href="https://github.com/upstash/context7">https://github.com/upstash/context7</a></li> <li><a href="https://github.com/oraios/serena">https://github.com/oraios/serena</a></li> <li><a href="https://github.com/quinncuatro/jane-mcp-server">https://github.com/quinncuatro/jane-mcp-server</a></li> </ul> <p>Further Reading:</p> <ul> <li><a href="https://ghuntley.com/stdlib/">https://ghuntley.com/stdlib/</a></li> <li><a href="https://ghuntley.com/specs/">https://ghuntley.com/specs/</a></li> <li><a href="https://www.philschmid.de/context-engineering">https://www.philschmid.de/context-engineering</a></li> <li><a href="https://modelcontextprotocol.io/introduction">https://modelcontextprotocol.io/introduction</a></li> <li><a
href="https://docs.anthropic.com/en/docs/claude-code/hooks">https://docs.anthropic.com/en/docs/claude-code/hooks</a></li> </ul>

How I've Been Driving LLMs - Part 1
https://henryneeds.coffee/blog/driving-claude-code-part-1/
An exploration of Geoffrey Huntley's StdLib & Specs Method
Tue, 01 Jul 2025 00:00:00 GMT

<p>It's been a while since I've written one of these, and I low-key hate how often I've done the "this blog is so back!" dance, so let's jump right into it. To paraphrase Geoffrey Huntley:</p> <blockquote> <p>You are using LLMs incorrectly.</p> </blockquote> <p>Listen, I get it. New tools are exciting! They're exciting to me, too. LLMs have been around for a while, but I'm also a little new to using them for coding. I've had a ChatGPT subscription for about a year now. I've used it for troubleshooting some GitHub Action errors, scaffolding out some ECS JSON boilerplate, and for building tools on top of <a href="https://atuin.sh/">Atuin</a>. It was (and is) a very helpful tool, even in the pre-MCP era.</p> <p>But what we've been seeing come out in the last couple of months (mainly off-the-shelf agentic tools and the seeming advent of Model Context Protocol servers) has really changed the game. These new tools are incredibly powerful. They now have the ability to reach beyond the walls of their apps and take action on filesystems. Instead of generating a code snippet you need to copy and paste somewhere useful, they can just add a file to your codebase and write the script <em>for you</em>.</p> <p>But there are still limits. Context windows. Message lengths. Token rate limiting.</p> <p>How do you drive a top-of-the-line Ferrari to its full potential when it's saddled with limits like an automatic transmission?
I can't put it any better than Philipp Schmid (previously of Hugging Face &amp; now Google DeepMind):</p> <blockquote> <p>The conversation is shifting from "prompt engineering" to a broader, more powerful concept: Context Engineering. With the rise of Agents it becomes more important what information we load into the "limited working memory".</p> </blockquote> <p>A little over a month ago, my boss shared a blog post from Huntley about how he got Claude Code to decompile itself. After reading it, I poked through some of his other posts and got mentally stuck on these two:</p> <ul> <li><a href="https://ghuntley.com/stdlib/">https://ghuntley.com/stdlib/</a></li> <li><a href="https://ghuntley.com/specs/">https://ghuntley.com/specs/</a></li> </ul> <p>You should definitely read them yourself, and Geoffrey says a lot specifically about Cursor (though I'd argue his points apply more broadly in the whole LLM space), but I'll do my best to summarize:</p> <ul> <li> <p>The <code>agentic</code> cat is out of the metaphorical bag, so you may as well lean into it. Agents can actually <em>do work</em> instead of just talking about it and, like the chat apps of the world, can be steered by providing them context about what you want them to do.</p> </li> <li> <p>Instead of using one-off prompts, build up a "stdlib" (standard library) of reusable rules that can be composed together like Unix pipes. You might have stdlib docs about how you like to write JavaScript (architecture choices, error handling, naming schemes), particular frameworks you like to use for different problem domains, or how you like to build Docker images. Any number of things, really. This tells your LLM "how" to build software to your taste.</p> </li> <li> <p>In addition to your ever-growing document store of stdlibs, start each new project (feature, bugfix, etc.) by having a conversation with your LLM of choice about what you want to build. Have your LLM ask you follow-up questions as assumption checks.
Once you iron out what exactly needs to get built, have it generate a set of project specification docs. Things like technical decisions, testing strategies, API references, and deployment methods. This tells your LLM "what" to build.</p> </li> <li> <p>When those stdlib and specs docs are ready and loaded in your project repo, spin up an agent (like Claude Code), and get it into a loopback pattern. You can tell it something like:</p> <p><code>Please go and study all the documents under @specs/ for functional specifications. Study all the documents under @stdlib/ for technical requirements. Implement what is not implemented. Create tests. Build the project, run it, fix on errors.</code></p> </li> <li> <p>And if you ever run out of tokens or hit a context window: spin up a new agent, give it the same prompt, and <code>Implement what is not implemented</code> tells it to pick up and start running again. This lets the LLM run wild in its sandbox.</p> </li> </ul> <p>There's more nuance than that, and this method works a lot better with strongly typed languages that provide good error messages. Like I said, please go and read those posts yourself. They're <em>very</em> helpful. The main idea, though, is that these new tools allow us to build autonomous development systems where we can compose high-level requirements and let the agent do the heavy lifting by leveraging all of the stdlib and spec docs we're writing to keep it aimed in the right direction.</p> <p>Say it with me now:</p> <blockquote> <p>Context Engineering!</p> </blockquote> <p>I used that method to build a Home Assistant CLI/TUI app over the course of a couple of days. That project had been in my backlog for <em>years</em>, and I was able to generate a functioning CLI tool to handle my lights from the command line in about 2-3 hours of active time steering Claude Code.</p> <p>Geoffrey's stdlib and spec blog posts lit a fire in me. They're stuck in my brain.
And those of you who know me know that I live by the phrase "work smarter, not harder". All I can think about lately is how to scale this up.</p> <p>If I could build a tool I've always wanted in a matter of hours by generating some off-the-cuff Golang stdlib docs and defining my project specifications, what could I do with a <em>mountain</em> of stdlibs built up over <em>many</em> projects?</p> <p>Turns out, the mountain already exists and we can use it <em>for free</em>.</p> <p>Part 2 to come soon. I can't wait to introduce y'all to Jane!</p> <p>Tool Links:</p> <ul> <li><a href="https://www.anthropic.com/claude-code">https://www.anthropic.com/claude-code</a></li> </ul> <p>Further Reading:</p> <ul> <li><a href="https://ghuntley.com/stdlib/">https://ghuntley.com/stdlib/</a></li> <li><a href="https://ghuntley.com/specs/">https://ghuntley.com/specs/</a></li> <li><a href="https://www.philschmid.de/context-engineering">https://www.philschmid.de/context-engineering</a></li> </ul>

Website Update - Signal Boost Page
https://henryneeds.coffee/blog/website-1-signal-boost/
I wanted to share how I implemented a signal boost page, so you can too.
Sat, 13 Feb 2021 00:00:00 GMT

<h2>I just finished giving HenryNeeds.Coffee its annual-ish refresh. I updated the menu bar and split up a lot of the homepage into separate content-specific pages, but my favorite addition is the new <a href="https://henryneeds.coffee/signal-boost">Signal Boost</a> page.</h2> <p>When I was looking around for refresh inspiration, I came across <a href="https://christine.website">Christine Dodrill's website</a>. It has a terminal-ish design similar to mine but with way better colors for everything - which honestly is what I wanted to address with the refresh.</p> <p>But in looking through Christine's site, I found the Signal Boost page - designed to put a spotlight on other tech folks looking for work.
After a year where a lot of people lost their jobs, I felt it was a better use of my time to set up a signal boost page of my own rather than making some colors look better.</p> <p>There are so many developers, engineers, ops folks, and all kinds of other tech workers trying to land a new gig right now. I'm lucky enough to still be working, so providing visibility to those who aren't is the least I can do with the small platform I have.</p> <p>I only got to where I am with the help of my friends and colleagues. This is something small I can do to pay that forward, and I encourage y'all to set up something similar.</p> <p>In that vein, I wanted to share how I threw this together so that you can, too.</p> <p>First off, this site's code is available in this <a href="https://github.com/Quinncuatro/Henry-Personal-Website">GitHub repo</a>, but I'll go into specifics about how this particular feature works.</p> <p><a href="https://henryneeds.coffee">HenryNeeds.Coffee</a> was built using <a href="https://www.gatsbyjs.com/">GatsbyJS</a>, and it all sits on top of a hello-world base.</p> <p>In Gatsby sites, data has to come from somewhere, and I already had certain plugins (like <a href="https://www.gatsbyjs.com/plugins/gatsby-source-filesystem/">gatsby-source-filesystem</a> and <a href="https://www.gatsbyjs.com/plugins/gatsby-transformer-yaml/">gatsby-transformer-yaml</a>) installed and my <code>./gatsby-config.js</code> file configured to ingest yaml so that it can be queried with GraphQL.</p> <p>I had that part of Gatsby's content mesh set up to turn <code>./src/resume/resume.yaml</code> into content for my <code>Resume</code> and <code>Talks // Pods</code> pages. 
I expanded that out to power my <code>Blog</code> page with markdown files, and expanded it again to handle <code>Signal Boost</code>.</p> <p>First off, though, I needed to make a new page so that <code>https://henryneeds.coffee/signal-boost</code> would resolve to something:</p> <pre><code class="language-jsx">// Whole of ./src/pages/signal-boost.js
import React from "react"
import Layout from "../components/Layout"
import SignalBoostLogin from "../components/SignalBoostLogin"

export default () =&gt; (
  &lt;Layout&gt;
    &lt;SignalBoostLogin /&gt;
  &lt;/Layout&gt;
);
</code></pre> <p>All this file does is import and render a component named <code>SignalBoostLogin</code>. That component handles things like importing the menu bar and doing some date math for the "Current login" header. But its main job is querying data provided by <code>./src/signalboost/signalboost.yaml</code> (more on that later) and then iterating over those results to set up individual <code>SignalBoost</code> components.</p> <pre><code class="language-jsx">// Selection from ./src/components/SignalBoostLogin/index.js
&lt;StaticQuery
  query={graphql`
    query signalBoostQuery {
      allSignalboostYaml {
        edges {
          node {
            people {
              name
              tech
              github
              twitter
            }
          }
        }
      }
    }
  `}
  render={SignalBoostPage}
/&gt;
</code></pre> <p>This is the GraphQL query that pulls information defined in that <code>./src/signalboost/signalboost.yaml</code> file, then renders the <code>SignalBoostPage</code> component (in the same file), which ingests the GraphQL results as <code>data</code>.</p> <pre><code class="language-jsx">// Selection from ./src/components/SignalBoostLogin/index.js
{data.allSignalboostYaml.edges[0].node.people.map((person) =&gt; (
  &lt;SignalBoost
    name={person.name}
    tech={person.tech}
    github={person.github}
    twitter={person.twitter}
  /&gt;
))}
</code></pre> <p>Like I said earlier, this page is just meant to grab the data provided by the yaml file, iterate over it, and generate individual <code>SignalBoost</code> components for
each entry via that <code>.map()</code> method. The whole <code>name={person.name}</code> bit passes all the individual data points from the GraphQL results as props that can be picked up and used by the child component (<code>SignalBoost</code>).</p> <pre><code class="language-jsx">// Whole of ./src/components/SignalBoost/index.js
import React from "react"

export default (props) =&gt; (
  &lt;div&gt;
    &lt;h3&gt;{ props.name }&lt;/h3&gt;
    &lt;p&gt;{ props.tech }&lt;/p&gt;
    &lt;p&gt;&lt;a href={ props.github } target="_blank" rel="noopener noreferrer"&gt;[ GitHub ]&lt;/a&gt;&amp;nbsp;&lt;a href={ props.twitter } target="_blank" rel="noopener noreferrer"&gt;[ Twitter ]&lt;/a&gt;&lt;/p&gt;
    &lt;hr /&gt;
  &lt;/div&gt;
)
</code></pre> <p>And this (finally) is the template that takes those props, throws the values into the HTML, and renders out individual divs of name/tech/links on the final Signal Boost page.</p> <p>So to recap:</p> <ol> <li>The actual data gets updated in the <code>./src/signalboost/signalboost.yaml</code> file.</li> <li>The page served by the <code>https://henryneeds.coffee/signal-boost</code> URL calls the <code>./src/components/SignalBoostLogin/</code> component.</li> <li>That component queries the data provided by the <code>./src/signalboost/signalboost.yaml</code> file, iterates over it, and calls multiple <code>./src/components/SignalBoost/</code> components.</li> <li>Each of those components takes the data passed to it as props and renders out HTML for each person being signal boosted.</li> </ol> <p>So yaml like this:</p> <pre><code class="language-yaml"># ./src/signalboost/signalboost.yaml
people:
  - name: "John Doe"
    tech: "bash docker devops gatsby javascript kubernetes linux sql web"
    github: "https://github.com/username"
    twitter: "https://twitter.com/username"
  - name: "John Doe 2"
    tech: "aws python pandas golang"
    github: "https://github.com/username2"
    twitter: "https://twitter.com/username2"
</code></pre> <p>Will render this:</p> <p><img
src="/src/assets/blog-images/website-1-signal-boost/signal-boost-yaml-render.jpg" alt="Signal Boost YAML Render" /></p> <p>If anyone wants to add themselves, all they need to do is follow the <a href="https://github.com/Quinncuatro/Henry-Personal-Website/tree/master/src/signalboost">instructions here</a>, edit the YAML file, and submit a pull request.</p> <p>Once I get the notification, check the formatting, and roll the change into my main branch, builds will automatically kick off on Netlify and Fleek to deploy the updated version.</p> <p>It took me a couple of days' worth of free cycles to figure this all out and get it working the way I like it, but the current version works great!</p> <p>So far I've had two folks submit PRs, and it went off without a hitch. They submitted their PRs, I hit the "Merge" button, and the builds kicked off all on their own.</p> <p>Building the feature was pretty painless given that I already had my site built on Gatsby's engine. However, adding something like this to a different static site generator or build process should be fairly easy once you understand the data flow.</p> <p>Lots of folks lost their jobs over the past year, and really, what's the point of having a voice, site, blog, or whatever if we don't help others climb up behind us on the same ladders?</p> <p>Just something to chew on.</p> <p>Stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a> (<a href="https://ipfs.fleek.co/ipfs/bafybeid36rtd2rpjzhz7ll4foruef3vj3n3pbrev53wcwj6pj6q3u4ie7q/blog">IPFS Version</a>)</li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> <li><a href="https://twitter.com/quinncuatro">Twitter</a></li> </ul>

OptiProx - The Prep
https://henryneeds.coffee/blog/optiprox-1-the-prep/
Planning services, acquiring parts, and modding the
case.
Thu, 04 Feb 2021 00:00:00 GMT

<h2>After having some discussions with folks on the <a href="https://reddit.com/r/sleeperbattlestations">/r/sleeperbattlestations</a> and <a href="https://reddit.com/r/sleepingoptiplex">/r/sleepingoptiplex</a> subreddits, I wanted to toss up a quick post on what I'm doing with my server build.</h2> <p>The more I hear bad stories coming out of Silicon Valley regarding user privacy, the more I want to de-Google-ify my life. Having calendars and email services and data stores available to me for free is super convenient, but it's not worth the hassle of having to rely on a small handful of companies to do the right thing (regarding both security and ethics).</p> <p>But I have been running some servers (mostly through DigitalOcean) for the last several years, so I am fairly comfortable running my own services.</p> <p>And running a server in my apartment to have better stewardship of my data is right up my alley.</p> <p>Honestly, I've been wanting to do this for a while. It'll be fairly easy for me to use something like Proxmox to stand up some services like NextCloud, Home Assistant, Gogs, a Minecraft server, and a handful of other things.</p> <p>Just need to build the thing first, you know? I definitely want to build a sleeper. I always love seeing cool builds of great modern hardware in a retro-ish chassis, so picking out a case is obviously where I needed to start.</p> <p>Whatever I chose needed to hold a motherboard with an AM4 socket and remind me of my childhood.</p> <p>Something like the old beige tower from our guest room with a Pentium II in it, or the charcoal Dell Dimension 4500-ish machine I used for my homework in middle school.</p> <p>After a few failed Craigslist attempts to pick up a Dimension, I ultimately settled on an OptiPlex 390.</p> <p><img src="/src/assets/blog-images/optiprox-1-the-prep/optiplox-prep-glamour-shot.jpg" alt="OptiPlex 390 - Front View" /></p> <p>Awww yeah, that's the stuff.
Just like what my grade school computer labs used to run.</p> <p>I managed to snag it for $20 from <a href="https://resourcevt.org">ReSource in Williston, VT</a> while I was up north for a couple of weeks in December (after following all relevant VT state-mandated COVID-19 travel protocols).</p> <p>Huge thank you to the fellow Champlain grad who helped me find and strip the computer from their warehouse!</p> <p>Once I got home I worked on finalizing my parts list:</p> <p><a href="https://pcpartpicker.com/list/cgskmk">PCPartPicker Part List</a></p> <table class="pcpp-part-list"> <thead> <tr> <th>Type</th> <th>Item</th> <th>Price</th> </tr> </thead> <tbody>
<tr> <td class="pcpp-part-list-type">CPU</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/tLCD4D/amd-ryzen-9-3900x-36-ghz-12-core-processor-100-100000023box">AMD Ryzen 9 3900X 3.8 GHz 12-Core Processor</a></td> <td class="pcpp-part-list-price">Purchased For $380.00</td> </tr>
<tr> <td class="pcpp-part-list-type">CPU Cooler</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/gqxbt6/id-cooling-se-914-xt-basic-458-cfm-cpu-cooler-se-914-xt-basic">ID-COOLING SE-914-XT Basic 45.8 CFM CPU Cooler</a></td> <td class="pcpp-part-list-price">Purchased For $28.99</td> </tr>
<tr> <td class="pcpp-part-list-type">Thermal Compound</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/JmYLrH/arctic-mx-4-2019-edition-4-g-thermal-paste-actcp00002b">ARCTIC MX-4 2019 Edition 4 g Thermal Paste</a></td> <td class="pcpp-part-list-price">Purchased For $8.59</td> </tr>
<tr> <td class="pcpp-part-list-type">Motherboard</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/VyBhP6/gigabyte-b550m-ds3h-micro-atx-am4-motherboard-b550m-ds3h">Gigabyte B550M DS3H Micro ATX AM4 Motherboard</a></td> <td class="pcpp-part-list-price">Purchased For $101.02</td> </tr>
<tr> <td class="pcpp-part-list-type">Memory</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/gCCFf7/crucial-ballistix-32-gb-2-x-16-gb-ddr4-3200-memory-bl2k16g32c16u4b">Crucial Ballistix 32 GB (2 x 16 GB) DDR4-3200 CL16 Memory</a></td> <td class="pcpp-part-list-price"><a href="https://pcpartpicker.com/product/gCCFf7/crucial-ballistix-32-gb-2-x-16-gb-ddr4-3200-memory-bl2k16g32c16u4b">$148.99 @ Newegg</a></td> </tr>
<tr> <td class="pcpp-part-list-type">Storage</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/P4ZFf7/samsung-970-evo-500gb-m2-2280-solid-state-drive-mz-v7e500bw">Samsung 970 Evo 500 GB M.2-2280 NVME Solid State Drive</a></td> <td class="pcpp-part-list-price">Purchased For $69.12</td> </tr>
<tr> <td class="pcpp-part-list-type">Storage</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/rkV48d/western-digital-internal-hard-drive-wd40efrx">Western Digital Red 4 TB 3.5" 5400RPM Internal Hard Drive</a></td> <td class="pcpp-part-list-price"><a href="https://pcpartpicker.com/product/rkV48d/western-digital-internal-hard-drive-wd40efrx">$104.99 @ Newegg</a></td> </tr>
<tr> <td class="pcpp-part-list-type">Storage</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/rkV48d/western-digital-internal-hard-drive-wd40efrx">Western Digital Red 4 TB 3.5" 5400RPM Internal Hard Drive</a></td> <td class="pcpp-part-list-price"><a href="https://pcpartpicker.com/product/rkV48d/western-digital-internal-hard-drive-wd40efrx">$104.99 @ Newegg</a></td> </tr>
<tr> <td class="pcpp-part-list-type">Storage</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/rkV48d/western-digital-internal-hard-drive-wd40efrx">Western Digital Red 4 TB 3.5" 5400RPM Internal Hard Drive</a></td> <td class="pcpp-part-list-price"><a href="https://pcpartpicker.com/product/rkV48d/western-digital-internal-hard-drive-wd40efrx">$104.99 @ Newegg</a></td> </tr>
<tr> <td class="pcpp-part-list-type">Storage</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/rkV48d/western-digital-internal-hard-drive-wd40efrx">Western Digital Red 4 TB 3.5" 5400RPM Internal Hard Drive</a></td> <td class="pcpp-part-list-price"><a href="https://pcpartpicker.com/product/rkV48d/western-digital-internal-hard-drive-wd40efrx">$104.99 @ Newegg</a></td> </tr>
<tr> <td class="pcpp-part-list-type">Video Card</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/x7s8TW/msi-geforce-gtx-1050-ti-4gb-video-card-gtx-1050-ti-4gt-oc">MSI GeForce GTX 1050 Ti 4 GB Video Card</a></td> <td class="pcpp-part-list-price">Purchased For $212.69</td> </tr>
<tr> <td class="pcpp-part-list-type">Power Supply</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/mPdxFT/seasonic-focus-sgx-650w-80-gold-certified-fully-modular-sfx-power-supply-ssr-650sgx">SeaSonic FOCUS SGX 650 W 80+ Gold Certified Fully Modular SFX Power Supply</a></td> <td class="pcpp-part-list-price">Purchased For $139.30</td> </tr>
<tr> <td class="pcpp-part-list-type">Case Fan</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/jZn2FT/noctua-case-fan-nfb9redux1600pwm">Noctua B9 redux-1600 PWM 37.85 CFM 92 mm Fan</a></td> <td class="pcpp-part-list-price">Purchased For $10.95</td> </tr>
<tr> <td class="pcpp-part-list-type">Case Fan</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/jZn2FT/noctua-case-fan-nfb9redux1600pwm">Noctua B9 redux-1600 PWM 37.85 CFM 92 mm Fan</a></td> <td class="pcpp-part-list-price">Purchased For $10.95</td> </tr>
<tr> <td class="pcpp-part-list-type">Custom</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/QVFXsY/orico-aluminum-525-inch-to-25-or-35-inch-internal-hard-disk-drive-mounting-kit-with-screws-and-shock-absorption-rubber-washer-black">ORICO Aluminum 5.25 Inch to 2.5 or 3.5 Inch Internal Hard Disk Drive Mounting Kit with Screws and Shock Absorption Rubber Washer - Black</a></td> <td class="pcpp-part-list-price">Purchased For $8.79</td> </tr>
<tr> <td class="pcpp-part-list-type">Custom</td> <td class="pcpp-part-list-item"><a href="https://pcpartpicker.com/product/QVFXsY/orico-aluminum-525-inch-to-25-or-35-inch-internal-hard-disk-drive-mounting-kit-with-screws-and-shock-absorption-rubber-washer-black">ORICO Aluminum 5.25 Inch to 2.5 or 3.5 Inch Internal Hard Disk Drive Mounting Kit with Screws and Shock Absorption Rubber Washer - Black</a></td> <td class="pcpp-part-list-price">Purchased For $8.79</td> </tr>
<tr> <td class="pcpp-part-list-type">Custom</td> <td class="pcpp-part-list-item">Dell Optiplex 390</td> <td class="pcpp-part-list-price">Purchased For $20.00</td> </tr>
<tr> <td></td> <td class="pcpp-part-list-price-note">Prices include shipping, taxes, rebates, and discounts</td> <td></td> </tr>
<tr> <td></td> <td class="pcpp-part-list-total">Total</td> <td class="pcpp-part-list-total-price">$1568.14</td> </tr>
<tr> <td></td> <td class="pcpp-part-list-price-note">Generated by <a href="https://pcpartpicker.com">PCPartPicker</a> 2021-02-03 22:13 EST-0500</td> <td></td> </tr>
</tbody> </table> <p>I'm overbuilding the hell out of this thing, but it should last me a long, long time. Once my new power supply comes in (more on that later), I'll just need to order the RAM and hard drives before I can start building.</p> <p>Once I got the parts figured out, I needed to tackle modding the case a bit to fix the obvious airflow problem.</p> <p><img src="/src/assets/blog-images/optiprox-1-the-prep/optiplox-prep-back.jpg" alt="OptiPlex 390 - Back View" /></p> <p>The back looks fine.
Plenty of slots for the 1050 Ti (potential hardware transcoding), a regular size cutout for rear IO, room for a full-sized PSU, and even a spot for a fan!</p> <p><img src="/src/assets/blog-images/optiprox-1-the-prep/optiplox-prep-inside-clean.jpg" alt="OptiPlex 390 - Clean Inside View" /></p> <p>Inside is super clean once you strip it down. Screw layout for a standard Micro-ATX motherboard, two dedicated slots for drives, two 5.25" bays (surprise, more hard drives), and plenty of room in the front I can take advantage of for a fan intake.</p> <p>The only issue here is that while there's plenty of room for a full-sized ATX power supply in here, I missed that black hook for the side panel latch even after measuring twice. The ATX unit I ordered didn't quite fit, so I had to return it for the SFX form factor one, which SHOULD fit. That'll be coming in the mail by the end of this week - fingers crossed.</p> <p><img src="/src/assets/blog-images/optiprox-1-the-prep/optiplox-prep-front-taped.jpg" alt="OptiPlex 390 - Taped Front View" /></p> <p>The one real bit of modding I had to do was cutting a hole in the front of this thing. One fan wasn't going to cut it, so I wanted to throw an intake in there too.</p> <p>I forgot to take a picture before I taped it up, but the whole front is the same type of metal grille that you can see right below the tape line.</p> <p>Either way: I taped it up, marked out the hole so that there was still room for mounting hardware, and cut the thing out with a rotary tool.</p> <p><img src="/src/assets/blog-images/optiprox-1-the-prep/optiplox-prep-front-hole.jpg" alt="OptiPlex 390 - Hole Cut" /></p> <p>Not the prettiest cut, but not bad for my first job with a new tool.</p> <p><img src="/src/assets/blog-images/optiprox-1-the-prep/optiplox-prep-front-fan.jpg" alt="OptiPlex 390 - Fan Installed" /></p> <p>The fan mounted nicely enough after I gave the mounting holes a second pass to make sure everything lined up.
Plenty of room for airflow.</p> <p><img src="/src/assets/blog-images/optiprox-1-the-prep/optiplox-prep-front.jpg" alt="OptiPlex 390 - Front Panel Installed" /></p> <p>And honestly, once I got the front panel back on, you can't even tell the case has been modded.</p> <p><img src="/src/assets/blog-images/optiprox-1-the-prep/optiplox-prep-inside.jpg" alt="OptiPlex 390 - Inside View" /></p> <p>After using some canned air to get all the dust and stuff out, I tossed the exhaust fan and all the odds &amp; ends that came with the shell back in.</p> <p>Like I said, there are still a few parts to order before I can start the build in earnest.</p> <p>Might even take a day off work for all the fun. ;)</p> <p>That adventure will be worthy of a whole other post.</p> <p>Once that's up, it should be followed by another one on setting up Proxmox.</p> <p>Can't wait to share more with y'all!</p> <p>Stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a> (<a href="https://whateverforever.party">IPFS Version</a>)</li> <li><a href="https://henryneeds.coffee/blog">Blog</a> (<a href="https://whateverforever.party/blog">IPFS Version</a>)</li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> Home ServerIPFS - The Decentralized Web Is Finally Herehttps://henryneeds.coffee/blog/decentralized-01-week-with-ipfs/https://henryneeds.coffee/blog/decentralized-01-week-with-ipfs/Tech I wished for as a kid is finally here and the ramifications are huge.Wed, 27 Jan 2021 00:00:00 GMT<h2>It's the spring of 2011. I've already been accepted to college. 
Senioritis is setting in and 3OH!3 is blaring on my headphones as I sit at my desk at 2 AM on a school night...</h2> <p>Homework long since done, I'm staring at the screen of the heavily used 17" Gateway laptop I managed to get off my boss at The Computer Shack in place of actual payment.</p> <p>I'm trying to test a tool on a Windows XP partition so that I can confidently install it on my high school's network.</p> <p>I had already hidden a handful of games on a shared drive meant for our teachers' math and science tools.</p> <p>My friends and I had plenty there to keep us occupied at the end of long lab periods, but we wanted to be able to chat while we fragged each other in Blood Gulch.</p> <p>And of course, phones weren't allowed.</p> <p>This tool would piggyback on the (since-sunsetted) <code>net send</code> command to effectively let us private-message each other over the local network.</p> <p>Being the "nerd" of the group, it was my responsibility to get this set up. The IT guys may not have loved what I was getting up to, but we had an <em>understanding</em>. I could bend the rules a bit as long as I let them know what parts of their security they needed to shore up.</p> <p><strong>But this tool, though</strong>.</p> <p>Someone wrote an entire chat application on top of a network command, put it online <strong>for free</strong>, and I had the ability (and unintended privileges) to do something rad with it and make the tail end of high school more exciting.</p> <p>I remember explicitly taking a moment to sit back and think about how god damned cool that was.</p> <p>Now, about 10 years later, I'm living that moment all over again.</p> <p>Work was done at 5:00 today. My girlfriend, tired from nursing school, is in bed. Dishes are...
well, sitting in the sink dirty.</p> <p>I'm sitting at my computer: late at night, working on a project, and thinking about how lucky I am to be alive and a few years into my development career <em>right now</em>.</p> <p>Last week, I stumbled onto a <a href="https://www.theverge.com/2021/1/19/22238334/brave-browser-ipfs-peer-to-peer-decentralized-transfer-protocol-http-nodes">Verge article</a> about a new (well, 5 years old) technology that's just starting to pick up steam, called <a href="https://ipfs.io/">InterPlanetary File System</a> (IPFS).</p> <p>Folks who came to this post from a Tweet or something probably know what I'm talking about. But for those who don't:</p> <p>Wikipedia defines IPFS as</p> <blockquote> <p>...a protocol and peer-to-peer network for storing and sharing data in a distributed file system. IPFS uses content-addressing to uniquely identify each file in a global namespace connecting all computing devices.</p> <p>IPFS allows users to not only receive but host content, in a similar manner to BitTorrent. As opposed to a centrally located server, IPFS is built around a decentralized system of user-operators who hold a portion of the overall data, creating a resilient system of file storage and sharing. Any user in the network can serve a file by its content address, and other peers in the network can find and request that content from any node who has it using a distributed hash table (DHT).</p> </blockquote> <p><strong>Long story short, the future of the internet is here.</strong></p> <p>A group of people worked for years to actually make the decentralized internet that Pied Piper was trying to make on the later seasons of <a href="https://en.wikipedia.org/wiki/Silicon_Valley_(TV_series)">Silicon Valley</a>.</p> <p>They built an entirely new protocol for accessing the web. 
It's meant to work alongside (not replace) HTTPS and operates similarly to BitTorrent in that data (for websites, videos, games, apps, anything) is shared amongst peers and not fetched from central servers owned by Amazon or some other Evil Corpβ„’.</p> <p>The ramifications here are HUGE.</p> <h3>1. Have you ever had a resource on the web disappear on you?</h3> <p>That's a dumb question because so have I.</p> <p>There's this mashup called "Ketchup V1" from an artist named The Deaf DJ that I'd love to be able to find again.</p> <p>I'm sure you all have your own white whales.</p> <p>Content that we care about on Web 2.0 (the HTTPS-based one we're on now) only sticks around if the people creating and hosting that content want to keep it available.</p> <p>If some publisher or individual decides they want to take a site down or stop paying for a monthly web server bill, then whatever information they were hosting effectively disappears.</p> <p>The internet is supposed to be the culmination of all human knowledge, but small, easy-to-host files (like sheet music or a set of woodworking instructions) could disappear because someone needs to rein in their budget.</p> <p>IPFS allows anyone to go "Hey, that blog post on how to code this feature was really helpful to me, I'd like to re-host it so that other people will always be able to find it later."</p> <p>The Brave browser, mentioned in that Verge article, implemented IPFS features that let me do that with two mouse clicks.</p> <p>That's a beautiful thing.</p> <h3>2. Speaking of resources disappearing, did you know that for a few years Turkey blocked their citizens from accessing Wikipedia?</h3> <p>Again, the culmination of all human knowledge, but sequestered from citizens.</p> <p>Kind of fucked up, right? Especially when you know the importance of tools like social media sites during times like the Arab Spring.
Being able to block entire portions of the web, like China's Great Firewall, is a dangerous precedent.</p> <p>Turkey was able to block Wikipedia because web addresses are currently based on file locations.</p> <p>You tell your browser to go all the way to the group of servers that the Wikimedia Foundation set up to host Wikipedia and then, once there, pull up the page on water filtration.</p> <p>Since Turkey could see where those web requests were pointing (the addresses assigned to Wikipedia servers), they were able to stop them from completing.</p> <p>Since everything on IPFS is content-addressed (via a hash), whenever you request a page, your browser pulls it from whoever happens to be closest to you and is also hosting that file.</p> <p>With this new model of web browsing, we're not dependent on corporations renting servers from even bigger corporations to keep the information we care about (and that they currently control) on the internet.</p> <p>Instead of relying on publishers like the New York Times keeping articles hosted on their servers, or platforms like Stack Exchange seeing the value in keeping answers to years-old questions online, we can just click a button to rehost web content so that it's always accessible on a massive P2P network.</p> <p>No more going all the way to specific servers that governments, ISPs, or rogue actors may be able to cut off access to in the future.</p> <p>This gives the people the ability to decide what's worth keeping around in perpetuity. Theoretically, the good stuff will be pinned by enough users that it'll be around forever, while the bad stuff will fall to the wayside.</p> <h3>3.
It harkens back to the markedly non-corporate early internet so many of us fell in love with.</h3> <p>I, for one, am tired of <a href="https://www.reddit.com/r/NoStupidQuestions/comments/l2r5da/does_anybody_else_hate_how_the_internet_now_feels/">the internet being mostly a collection of the same seven sites over and over again</a>.</p> <p>So many of our favorite old websites and interesting things we bookmarked just don't exist anymore.</p> <p>All of that effort and content and knowledge is just lost forever, only to be replaced by "community of communities" sites like Facebook and Reddit.</p> <p>I miss the days when we'd go to AddictingGames to play some games, check out some tech news on whichever blog we liked, get involved in separate niche community forums, and generally exist without having to move all our data through massive corporations.</p> <p>This move to a decentralized web has been encouraging early builders to make fun little web pages, build more niche community apps, and just experiment with the new format.</p> <p>It's like the wild west on the web again, and while a lot of the tooling and architecture still needs to be figured out, the possibilities are endless.</p> <p>The new web is weird and that's a good thing.</p> <p>That brings me back to my earlier point:</p> <blockquote> <p>I'm sitting at my computer: late at night, working on a project, and thinking about how lucky I am to be alive and a few years into my development career <em>right now</em>.</p> </blockquote> <p>This kind of decentralized web that lives on all of our devices and frees us from our dependencies on corporations is something my computer lab buddies and I have wanted to exist for a long time.</p> <p>Over the years since college, I've headed up tech communities, led nationwide teams of developers, and traveled around the country giving and listening to talks.</p> <p>All that and I've honestly never been more excited about a new <code>$web_thing</code> crossing my desk.</p>
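<p>That location-versus-content distinction is easy to sketch in a few lines of Python. To be clear, this is <em>not</em> IPFS's actual CID scheme (real CIDs are multihash-encoded and the lookup happens over a DHT); it's just a toy store illustrating the core idea: the address is derived from the bytes themselves, so any peer holding identical bytes can serve the request.</p>

```python
import hashlib

# Toy content-addressed store. In IPFS the address (CID) is a multihash
# of the content; here a plain SHA-256 hex digest stands in for it.
store: dict[str, bytes] = {}

def publish(data: bytes) -> str:
    """Anyone can publish; the address depends only on the bytes."""
    address = hashlib.sha256(data).hexdigest()
    store[address] = data
    return address

def fetch(address: str) -> bytes:
    """Any peer holding the bytes can answer; no canonical host exists."""
    return store[address]

page = b"<h1>Water filtration</h1>"
addr = publish(page)

# A second peer publishing identical bytes derives the identical address.
assert publish(page) == addr
assert fetch(addr) == page
```

<p>Notice there's no hostname anywhere: to censor a page in this model you'd have to block the content itself everywhere at once, not a single set of servers.</p>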
<p>IPFS is the real deal. From nerd dreams of old to HBO jokes to your ears (well, eyes).</p> <p>It's finally here, it's being given away <strong>for free</strong>, and I understand enough modern web development to help build it.</p> <p>That's something 18-year-old me would be absolutely hype about. I just hope I can make him proud.</p> <p>I'm working on a project built on IPFS with a buddy of mine from GitLab.</p> <p>Without giving too much away: it's built with <a href="https://www.gatsbyjs.com/">GatsbyJS</a>, hosted on IPFS with <a href="https://fleek.co/">Fleek</a>, and should help folks adapt to the early days of this new web protocol.</p> <p>I should have another blog post with some more information to share soon.</p> <p>Stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a> (<a>IPFS Version</a>)</li> <li><a href="https://henryneeds.coffee/blog">Blog</a> (<a href="https://ipfs.fleek.co/ipfs/bafybeid36rtd2rpjzhz7ll4foruef3vj3n3pbrev53wcwj6pj6q3u4ie7q/blog">IPFS Version</a>)</li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> Decentralized Web, IPFSElm City Rocks (Project Name TBD)https://henryneeds.coffee/blog/elm-city-rocks/https://henryneeds.coffee/blog/elm-city-rocks/An abstract for the social project I want to take on for New Haven in 2021.Thu, 14 Jan 2021 00:00:00 GMT<p>Someone once told me that:</p> <blockquote> <p>"New Haven is large enough to have a real city feel to it, but small enough that you can make whatever you want to exist in town happen through sheer willpower".</p> </blockquote> <p>I've seen that happen by being made a leader in <a href="https://newhaven.io">NewHaven.IO</a>, creating the short lived (and soon to be resurrected) <a href="https://opensports.net/NewHavenNerdLeague">New Haven Nerd League</a>, and watching people create all kinds of cool things like <a href="https://collabnewhaven.org/">Collab New Haven</a> (an incredible entrepreneurial bootcamp), <a
href="https://elmcitygames.com/">Elm City Games</a> (our local board and tabletop gaming community center), and <a href="https://www.manicpresents.com/">Manic Presents</a> (arguably the most famous talent booker in the state), as well as all sorts of Meetups, clubs, and festivals.</p> <p>But New Haven (and more broadly, Connecticut) seems to be stuck in this chicken &amp; egg problem where young people don't want to move here because there aren't enough exciting companies around, and exciting companies founded here don't want to stay because the hiring pool is small.</p> <p>There are smarter and more well connected people than me working on the "companies don't want to stay" part of the problem via methods like <a href="https://ctnext.com/">financial incentives</a>, building out training/employee pipelines (shoutout <a href="https://www.southernct.edu/academics/computer-science">Lisa &amp; Winnie at SCSU</a>!), and creating the <a href="https://www.districtnhv.com/">kinds of spaces those companies might want to reside in</a>. But we also need to solve the people side of the problem - New Haven needs to appear more cool, livable, and attractive to the people thinking about spending their 20's and 30's here.</p> <p>The problem, as I see it, is twofold:</p> <h2>1.) We have a bit of an image problem. We all know what I'm talking about.</h2> <p>Granted, I think we're at an inflection point. I'm not keeping a super tight eye on how New Haven is portrayed by national news outlets, but I feel like we're about to turn a corner from "Yale and gun crimes" to "culture, science, and tech hub of Connecticut."</p> <p>However, Connecticuters seem dead set on this idea that "there's nothing to do in this state," which simply isn't true. I feel like people are resistant to putting in the work to make CT a rad place to live because NYC is a train ride away.
But I also have a feeling that most people are like me in that even with the MTA station so close, I rarely go into the city.</p> <p>I've seen Burlington, VT and the Capital Region of NY have incredible bursts of grassroots social campaigns that do a great job of showing off just how cool, innovative, and fun those parts of the country are to live in. They show off restaurants, local events, pop-ups, live shows, food festivals, and way more. But in more than a journalistic way - they show off the hippest parts of the areas they represent, have big local followings, and are a great thing to show someone who may be considering moving there.</p> <p>If we can't get excited about what's going on here, how can we expect potential new neighbors to get hyped?</p> <h2>2.) New Haven, as much as I love it, could have even more rad shit going on.</h2> <p>We have the <a href="https://www.clubwaka.com/leagues/new-haven">WAKA kickball league</a>, but we don't have a good drop-in pickup sports league. We have the <a href="https://www.newhavengp.com/">New Haven Grand Prix</a>, but we don't have any alley cats. We have <a href="https://www.visitnewhaven.com/dining">amazing restaurants</a>, but I can't seem to find any kind of underground culinary scene since Homecooked shut down.</p> <p>There are all kinds of things I wished were going on when I moved here in 2016 that aren't any closer to being real things yet.</p> <p>With Yale and our other colleges/universities, we attract some of the best and brightest minds in the world to spend several years of their lives here. Unfortunately, those folks often leave when their schooling or tenure is up, which can make long-term continuity tricky for the teams organizing Meetups and other groups that pop up.
I think that partially explains why we seem to have a lower concentration of high caliber events and groups compared to other cities our size.</p> <p>We townies only have so much effort to pour into things once our families, jobs, and other obligations' needs are met.</p> <p>But other cities face similar problems; we just have to be creative about solving this one. I think the answer is to have more micro-communities spun up to get these ideas in motion.</p> <p>So now that we understand (at least part of) the problem, how do we fix it?</p> <p>Again, I think the solution is twofold, but the two parts really feed into each other:</p> <h2>1.) I want to create a bunch of small event groups under one central umbrella.</h2> <p>Right off the bat I want to get some bike races organized, spin the New Haven Nerd League back up, and start planning an interactive city wide activation, among some other things.</p> <p>The idea is to treat these event series the same way people treat startups. Try a lot of things out, rapidly iterate, and see what sticks. After people vote with their feet and make it clear what they want, just throw more and more effort at it to make sure the idea is the best it can be. When the events really start picking up steam, deputize from within the member group to create admin teams that can carry the momentum forward.</p> <p>I see this working similarly to how <a href="http://NewHaven.IO">NewHaven.IO</a> came to be. It started as a cluster of disparate local tech groups that existed in vacuums until they were pulled under one banner.</p> <p>It was one group overseeing a lot of niche micro-communities and has since grown to be the largest unified tech group in the state. 
That umbrella-style community model made it easier for organizers to plan events, gave members only one calendar to keep track of, deduplicated effort expended on like-kind projects, and generally made for an all-around easier time thanks to having one critical mass of members.</p> <p>Through running IO, I got to meet a lot of nearby folks working on their own social projects. If any of them are down to collaborate, this one-two punch provides a great opportunity to help each other plan and pull off the kind of fun shit we want to see happen.</p> <p>I'd love to partner with <a href="https://bsbc.co/">Bradley Street Bike Co-op</a> to plan some races, or work with <a href="https://www.bestvideo.com/">Best Video Film</a>/<a href="https://www.meetup.com/New-Haven-Screenwriters/">New Haven Screenwriters</a>/<a href="https://www.nutmeginstitute.com/">Nutmeg Institute</a> to create a film viewing club, and maybe now that <a href="https://twitter.com/cookinforpeace?lang=en">Bun</a> has some time on his hands (RIP <a href="http://miyassushi.com/">Miya's</a> as we know it) we could make a dent in that underground food scene.</p> <p>That proven IO model can be used to start with small niche event series, collaborate with other like-minded local groups, then recruit from within to organically grow those event series into a broader local event community.</p> <h2>2.) I want to use those events and micro-communities as the focus of a short video and podcast series on the Greater New Haven area.</h2> <p>Josh Levinson is doing a great job of this with <a href="https://betweentworocks.com">Between Two Rocks</a>, but New Haven needs other locals to step up and help. There are all kinds of cool groups making New Haven special and all sorts of history hidden around us that I'm still learning about four years later.
We should be documenting it all and sharing it with others in a way that makes New Haven seem like a more desirable place to live, rather than putting out more lists of the area's best pizza and beer.</p> <p>With a mix of videos showing off the events and culture in the city and a podcast interviewing the world-class artists, community leaders, and scientists in town, it should be easy to make people want to move and stay here.</p> <p>Similar to how <a href="https://www.youtube.com/user/caseyneistat">Casey Neistat</a> has New York City as the secret star of most of his content, I want to use footage from the events I put on to portray how hip, alive, and innovative New Haven is. Everyone who lives here knows how special this place is; we just need to do a better job of showing it to others.</p> <p>We can use these events, the people who attend them, and the various venues they'll take place in as an in-road to the deeper cultural and historical contexts of the city for an easy marketing win.</p> <p>The events will be put on because they're what people want to see in the city - the audio/video content will all just be a really nice bonus.</p> <p>As vaccines are slowly rolled out, people are going to be itching to start getting back to a post-vaccine "new normal".</p> <p>After spending what's looking to be more than a year cooped up, folks are going to want to get back out in the world, but I think we're going to see a good two years or so of reasonable people not wanting to stray too far from home.</p> <p>We're all excited to be able to grab a beer at <a href="https://barcadenewhaven.com/">Barcade</a>, see a show at <a href="https://www.collegestreetmusichall.com/">College Street Music Hall</a>, then end the night at <a href="https://threesheetsnh.com/">Three Sheets</a> and <a href="https://mamouns.com/">Mamoun's</a>.
But it'll take time for everyone to get the vaccine and for us to feel safe going farther than a couple towns away for trips or events again. Looking at you, <a href="https://www.ctfoodtrucks.com/food-truck-festivals/">CT Food Truck Festival</a> and <a href="http://lawnpass.livenation.com/">Meadows Lawn Pass</a>.</p> <p>Folks are going to want things to do, people to engage with, and experiences to share, but many won't feel safe going too far to scratch that itch.</p> <p><strong>Local communities are going to be more important than we've ever seen them in our lifetimes. This can help fill that void.</strong></p> <p>If I can plan enough of this out ahead of time, I can catch the post-vaccine wave and grow our membership to the critical mass that makes something like this self-sustaining. If there are enough interesting events that members want to bring their friends to, they'll provide the audio and video needed to put together content that makes New Haven seem even more cool to people thinking about coming here.</p> <p>By having the local event series accelerator and the media house under the same umbrella, we can keep creating more and more things for our neighbors to enjoy, whether that's events to partake in or content that keeps them learning about interesting things in the city they love. Having that group of micro-communities built around one umbrella entity lets us take advantage of the network effect IO was able to enjoy: we can plan trial events we're not really sure about and still have a lot of people show up just because of our reputation as "the micro-community group" in town.</p> <p>I think it's a solid solution to our chicken and egg problem, and one that can be solved without the connections and financial capital needed to chip away at the other side of the coin. Like I said, there are people smarter and more well connected than me working on that problem.
I'm going to roll my sleeves up and help out where I best can.</p> CommunitiesHacktoberfest Markdown Editor Challenge, Day -2 (Prep Work)https://henryneeds.coffee/blog/hacktoberfest-markdown-editor-prep/https://henryneeds.coffee/blog/hacktoberfest-markdown-editor-prep/The prep work I did to get ready for building a better markdown editor for Hacktoberfest 2020.Tue, 29 Sep 2020 00:00:00 GMT<p>Real quick, as an aside:</p> <p>In the spirit of Hacktoberfest, y'all should check out <a href="https://2020.allthingsopen.org/">All Things Open</a> - a polyglot technology conference focusing on the tools, processes, and people making open source possible.</p> <p>It's a free virtual conference taking place on October 19th and 20th, with tracks covering topics like DevOps, community leadership, inclusion and diversity, and various workshops.</p> <p>It's going to be a great time, full of excellent talks. I'll be giving a full-length talk on tech debt and an ignite on digital transformations in government myself, but I'm very excited to hear from folks like <a href="https://2020.allthingsopen.org/speakers/liz-fong-jones/">Liz Fong-Jones</a>, <a href="https://2020.allthingsopen.org/speakers/john-papa/">John Papa</a>, and <a href="https://2020.allthingsopen.org/speakers/remy-decausemaker/">Remy DeCausemaker</a>.</p> <p>Sign up and come hang out for a couple of days to get the open source juices flowing! Now back to the blog!</p> <p>If you're reading this, I'm assuming you saw my last post on how I want to bump the Hacktoberfest challenge up a bit above just making four pull requests.</p> <p>I want to build my own open source, cross-platform, cloud-synced desktop markdown editor in just 31 days.</p> <blockquote> <p>Henry, why do you keep doing these things to yourself?</p> </blockquote> <p>Great question. I'm unsure of the answer, though.
I find that when I get stuck on a problem like this for so long (at this point a couple of years of trying to find a markdown editor that fits all my needs), I just need to buckle down and solve it. It's mostly just a fun bonus that it gives me some material to blog about on DEV and to stream on my <a href="https://twitch.com/henryneedscoffee">Twitch channel</a> (Tuesday nights at 6!).</p> <p>Much in the same way that novelists prepare a bit for National Novel Writing Month (NaNoWriMo) by figuring out their characters, sketching out the plot beats of their story, and spending some time thinking about the hell they're going to put themselves through in November... I feel that it's appropriate to prep for this markdown editor a bit.</p> <p>You wouldn't jump into a half marathon without doing a few training runs first, right? Well, I might have in 2018, but that's a story for a different time.</p> <p>Let's get right into it: this is a BIG project to take on in just a month. For a refresher, here are the big bullet points I want to hit by the end of Halloween:</p> <ol> <li>Being cross-platform (Linux/Mac/Windows and eventually Android/iOS)</li> <li>Ability to cloud sync data between those platforms</li> <li>Having one editor pane where markdown syntax is rendered on the spot (like Bear and Typora)</li> </ol> <blockquote> <p>You idiot, you didn't even put "build a markdown editor" on your list.</p> </blockquote> <p>Hey, thanks! That's the first thing I wanted to talk about.</p> <p>Much like writing a book, or running a marathon, building this app is going to be a slog - even if I'm just aiming for an MVP that I can keep iterating on. I'll be proud as hell if I can get through those main three items without getting too far into the other 20+ feature ideas I want to bake into this thing.</p> <p>But the truth is that the actual markdown editor part of it is pretty easy.
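To show just how approachable that core piece is, here's a toy markdown-to-HTML converter in plain JavaScript. It only handles headings, bold, and italics, and is nowhere near a real CommonMark-compliant parser; every name in it is illustrative, not from any library I'm planning to use.

```javascript
// Toy markdown renderer: headings, bold, and italics only.
// A real editor would lean on a battle-tested parser instead.
function renderMarkdown(src) {
  return src
    .split("\n")
    .map((line) => {
      // "# Title" through "###### Title" become <h1>..<h6>.
      const heading = line.match(/^(#{1,6})\s+(.*)$/);
      if (heading) {
        const level = heading[1].length;
        return `<h${level}>${inline(heading[2])}</h${level}>`;
      }
      // Everything else becomes a paragraph (blank lines pass through).
      return line.trim() === "" ? "" : `<p>${inline(line)}</p>`;
    })
    .join("\n");
}

// Inline styles: **bold** first so it isn't eaten by the *italic* rule.
function inline(text) {
  return text
    .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>")
    .replace(/\*(.+?)\*/g, "<em>$1</em>");
}

console.log(renderMarkdown("# Hello\n\nThis is **bold** and *italic*."));
```

Getting from this toy to live, in-pane rendering like Bear or Typora is where the real work lives, but the parsing step itself really is the easy part.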
Part of my prep work for this was Googling around for "electron markdown editor" tutorials. I wanted to get my feet wet both with how Electron apps are put together and with what a markdown editor might look like in JavaScript.</p> <p>After trying a few, I found that one written by Tzahi Vidas was the simplest one that showed me both how to build a basic Electron app and how to parse markdown with JavaScript. I highly recommend y'all give it a shot if you're at all interested in what I'm working on. It's a solid primer.</p> <ul> <li><a href="https://www.freecodecamp.org/news/heres-how-i-created-a-markdown-app-with-electron-and-react-1e902f8601ca/">Tzahi Vidas - Here's how I created a markdown app with Electron and React</a></li> </ul> <p>I did, however, find that I had to use a different command to run Electron apps on my MacBook Pro than the one provided in the above tutorial. In package.json, I had to change the run script from something like <code>electron .</code> to <code>electron-builder build --mac -c.extraMetadata.main=build/main.js --publish never</code> to get the app to actually launch. Odd, and it took me a bit to figure out (<a href="https://medium.com/@johndyer24/building-a-production-electron-create-react-app-application-with-shared-code-using-electron-builder-c1f70f0e2649">source on the fix</a> - thanks John Dyer!), but it was a solvable problem.</p> <p>In a bit of backward thinking, I then went on to check out the Electron docs to see if they had any getting-started material. Turns out they have all kinds of cool nuggets in there, but some of them are a bit buried in an interesting hierarchy of links and pages. Within there, I found two really helpful things:</p> <ol> <li> <p>The Electron "Simple Samples" GitHub repo has a few sample projects already built out that interact with your computer's resource monitor, your app tray, and some other bits of their API.
You can just run <code>npm install</code> and <code>npm start</code> to pull one of the projects up on your local machine and dig around in the code to see how it all fits together. They even give you a set of challenges per sample project to try and add functionality.</p> <p>a. <a href="https://github.com/electron/simple-samples">Electron Simple Samples Repo</a></p> </li> <li> <p>The second helpful thing I found probably would have been better off if it were the first, even before Tzahi's tutorial - the "Electron API Demos" repo. When you <code>npm install &amp;&amp; npm start</code> this bad boy will pull up a window telling you all about the different parts of the Electron API you can use to interact with a user's desktop, has buttons to show on your desktop what each one does, and has code snippets to show you how to use them.</p> <p>a. <a href="https://github.com/electron/electron-api-demos">Electron API Demos Repo</a></p> </li> </ol> <p>Between those, and digging through the Electron docs a little more, I got most of what I needed to get ready for this challenge. I have a cursory understanding of how Electron apps work, how to parse markdown with JavaScript, and feel mostly ready for October. At least as prepared for it as writers are for NaNoWriMo or runners are for a marathon. 
I know the basics of what I'm taking on, but the event itself is going to bring plenty of its own challenges.</p> <p>There are still a handful of things to figure out as I get started in October.</p> <p>Like, am I going to use an existing markdown library, or am I going to make my own parser with slightly altered markdown syntax rules?</p> <p>How do I handle the cloud syncing: through something like PouchDB, or by treating the whole thing as a progressive web app and using service workers to keep local offline changes synced with a SQL database somewhere?</p> <p>On that last point, a buddy gave me some words of wisdom today:</p> <blockquote> <p>"I would start with the simplest possible solution, probably some form of long-polling, evaluate the UX once the app was a further bit along, and reassess then." - Ben J, a SWE and former tech co-founder</p> </blockquote> <p>And I mean, smart. Knowing myself, it would be all too easy for me to forget that I'm just aiming for an MVP within like three days. I'll always have time to add features later.</p> <p>Past that, there are all kinds of things I'll have to figure out as problems pop up all through October, but I'm glad I did this prep work to get myself ready to tackle the whole thing with a little prior knowledge.</p> <p>Tomorrow is day -1. My last "day off" before the development work starts. And I also have a new conference talk (titled The Tech Debt of Monopoly House Rules - it's gonna be a fun time) due in a few weeks for <a href="https://2020.allthingsopen.org/">All Things Open</a>. It's going to be a busy month for sure, but I'm excited to get some work done.
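For what it's worth, Ben's "simplest possible solution" long-polling idea could be sketched roughly like this. Everything here is invented for illustration: the change-feed shape, the cursor field, and the helper names are hypothetical, not PouchDB's API or any real endpoint.

```javascript
// Hypothetical long-polling sync step: ask for anything newer than
// our last-seen cursor, merge it into a local store, advance the cursor.
// fetchChanges is injected so a real HTTP call (or a fake) can be used.
async function pollOnce(fetchChanges, state) {
  const { docs, cursor } = await fetchChanges(state.cursor);
  docs.forEach((doc) => state.local.set(doc.id, doc)); // merge into local store
  state.cursor = cursor;
  return docs.length; // how many docs we applied this round
}

// Demo with a fake in-memory "server" standing in for HTTP:
async function demo() {
  const state = { cursor: 0, local: new Map() };
  const fakeFetch = async (since) => ({
    docs: since < 1 ? [{ id: "note-1", body: "# Hello" }] : [],
    cursor: 1,
  });
  await pollOnce(fakeFetch, state);
  console.log(state.local.get("note-1").body); // "# Hello"
}
demo();
```

A real version would loop with a timeout and back off on errors, but the shape stays this simple, which is exactly Ben's point about starting small.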
I'm going to cook something fun tomorrow, relax a bit, and get back to y'all with an update on October 1st.</p> <p>Until then, stay frosty.</p> <p><a href="https://henryneeds.coffee">https://henryneeds.coffee</a> <a href="https://henryneeds.coffee/blog">Blog</a> <a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></p> ElectronBuilding a Better Markdown Editorhttps://henryneeds.coffee/blog/building-a-better-markdown-editor/https://henryneeds.coffee/blog/building-a-better-markdown-editor/For Hacktoberfest 2020, I want to take on a larger challenge - building a better markdown editor.Sun, 27 Sep 2020 00:00:00 GMT<p>What's up, y'all? Long time, no... blog?</p> <p><a href="https://hacktoberfest.digitalocean.com/">Hacktoberfest</a> is almost upon us and this year I want to take things a little further than just submitting a few pull requests. The event is meant to help people get more into open source development, and in that vein I want to treat October the same way authors treat National Novel Writing Month (NaNoWriMo).</p> <p>I want to start and finish a useful project within those 31 days.</p> <p>I owe a lot of my career to folks putting their open source projects, packages, and products on the internet for everyone to use, and I want to pay part of that back to the community I've gained so much from.</p> <p>In the past, I had a lot of success on <a href="http://DEV.to">DEV.to</a> when writing my series on <a href="https://dev.to/quinncuatro/learning-devops-in-public-c26">Learning DevOps In Public</a> (which itself was inspired by <a href="https://www.swyx.io/writing/learn-in-public/">Shawn Wang's post</a>). Making sure I was able to write clearly about what I was learning and applying turned out to be an incredible way to learn, and it turns out that people vibe with that content! That series had a cumulative 13,500 views!</p> <p>After having a good cadence in getting posts up for a while, I got caught up in the busy season at work and then the world kind of...
blew up. Kind of fell off the grid for a long time and missed out on a lot of moments of good community building and interaction.</p> <p>Honestly, for a while I was doing my best just to keep my head above water. There were projects I wanted to hack on, but between work, cooking, and trying to find new ways to spend time with my friends... I didn't have the energy. I've been wanting to get back into tech writing and project work, but needed the right idea to come along and the right motivation to get back to it.</p> <p>Finally found the next thing I'll learn in public, and just in time for Hacktoberfest:</p> <blockquote> <p>Building a better markdown editor.</p> </blockquote> <p>I know I'm one of thousands of developers (likely more) to take a whack at making the "perfect markdown editor", but hear me out.</p> <p>My buddy, <a href="https://atrost.com">Alex Trost</a> (curator of the <a href="https://frontend.horse">Frontend Horse</a> newsletter - which you should all check out), and I have been trying out different markdown editors over the last year or so, and while 85% of their features have a solid overlap, it's often the other 15% that we love about each individual editor.</p> <ul> <li><a href="https://bear.app/">Bear</a> (what I currently use on my work MBP) has a fantastic layout and organizational system, but doesn't support anything other than macOS and iOS.</li> <li><a href="https://typora.io/">Typora</a> (what this post was written with) has solid cross-platform support, but doesn't have any native cloud syncing functionality.</li> <li>Other editors have WYSIWYG bars (not really markdown), some are web based (not ideal for me), and still others cost money while feeling feature incomplete or having stale codebases.</li> </ul> <p>After doing a survey of a ton of different options, I landed on my dream editor having three main features:</p> <ol> <li>Being cross-platform (Linux/Mac/Windows and eventually Android/iOS)</li> <li>Ability to cloud sync
data between those platforms</li> <li>Having one editor pane where markdown syntax is rendered on the spot (like Bear and Typora)</li> </ol> <p>It seems odd to me that I didn't come across a mainstream markdown editor that covers all three of those points. Maybe I'm getting in over my head with this project, but I feel like this is a solvable problem, you know?</p> <p>I've been wanting to dig into Electron for years now, and I'm sure plenty of other web application developers feel a similar trepidation about making the move over to desktop applications (even if it is the same technology in the background). There are a number of tools I've built with JavaScript for my job, and being able to wrap a GUI around them quickly would make it easier for me to share them with folks who feel less at home on a command line.</p> <p>After running through a couple of Electron tutorials (which I'll write about early on in October), I found that it's a pretty simple technology to use if you already have some familiarity with Node projects. I hope that by writing about my development process I'll be able to help some of y'all make the jump from web to the desktop.</p> <p>A good markdown editor obviously involves more than those three bullet points when it comes to boosting productivity, though.
I whittled my wishlist down to a "top 20" list of other features (in order of importance to me):</p> <ol> <li>Local storage in something like SQLite</li> <li>Left side bar for list of notes (title, first couple lines preview)</li> <li>Auto save</li> <li>Add todo's/task list with Bear's <code>-</code> syntax</li> <li>Code blocks (MarkText uses GFM code fence, syntax highlighting - PrismJS?, line numbers)</li> <li>Syntax support for popular programming languages</li> <li>Word count (word/characters/paragraph/read time)</li> <li>In-line styles (like strong, strikethrough, underline, comment, etc)</li> <li>Table of contents generated by headers</li> <li>Show creation/edit date and last editing device</li> <li>Full in-line image support</li> <li>Table blocks (MarkText uses GFM table block)</li> <li>Shortcut keys for styles</li> <li>Focus mode - new note in Bear</li> <li>Light/dark modes</li> <li>Project bundle support similar to FastAuthor (<a href="https://github.com/ExamProCo/fast-author#The-Anatomy-of-a-Project">https://github.com/ExamProCo/fast-author#The-Anatomy-of-a-Project</a>)</li> <li>Export as different file types (HTML/PDF/MD)</li> <li>Organize notes with hashtags?</li> <li>Ability to cross-link and reference other notes</li> <li>Encrypt individual notes and lock the app</li> </ol> <p>It's an aggressive project to tackle in just a month, but I don't see myself getting too deep into my backlog of wishlist items. 
Figured that having a bigger project to tackle during the month of October would help keep me motivated and make it feel like I actually earn my t-shirt and sticker pack this year - and leave me plenty to do while I try to flesh this app out through the end of the year.</p> <p>Plus, there's the added benefit of me being able to use a tool I've been wanting for a while and getting full creative control over it!</p> <p>I plan on working throughout the month to get at least an MVP put together and want to get a post up every few days on what I've been doing. I learned in the last round of learning in public that posting daily was too lofty a goal. But I'm hoping to use this opportunity to really dig into using Electron to build desktop apps, get back into writing, and hopefully become a better developer while taking y'all on this journey with me!</p> <p>Here's to tackling something big in 2020, and I'll see y'all on October 1st!</p> <p>Until then, stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> ElectronOrganizing Local Tech Sceneshttps://henryneeds.coffee/blog/orgnaizing-local-tech-cenes/https://henryneeds.coffee/blog/orgnaizing-local-tech-cenes/I got to speak on a local podcast about NewHaven.io, a tech group I help run.Mon, 19 Aug 2019 00:00:00 GMT<p>Hey everyone! I haven't forgotten about all of you. 
Just had a wild week and a half at work that we're just now getting on top of.</p> <p>There are still some posts in the pipeline about building, configuring, and running applications on my Raspberry Pi Kubernetes cluster!</p> <p>Until then, I thought it would be worth sharing an episode of a podcast I was on this past weekend.</p> <p><a href="https://betweentworocks.com/episode-22-newhaven-io/">Between Two Rocks Podcast: Episode 22 - NewHaven.io</a></p> <p>In my free time, I help run a local tech group called <a href="https://newhaven.io">NewHaven.io</a>. Several years ago, this group was created to pull all of the disparate Meetup groups in town (different languages, stacks, technologies) under one banner. I've been involved as a board member for about a year now and it's been a wild ride.</p> <p>We were invited onto this podcast by another of our board members and we discussed how we want to grow out our local scene and focus more on inclusivity and diversity.</p> <p>Hopefully, this might provide some nugget of wisdom to any of you out there who might be wanting to get more involved in your local developer community. Whether you want to help organize more events or stand up a Meetup group of your own, rest assured that there are developers in your area who will greatly appreciate the effort you put in to help grow your scene.</p> <p>For all of you already helping run similar groups, what are you doing to provide more value for your members? We're always trying to find ways to do more with less help organizing, so I'm curious about anything and everything you're doing regarding throwing events, growing a sense of community, and fostering a support group.</p> <p>Again, Raspberry Pi cluster articles inbound! 
Until then, stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://blog.henryneeds.coffee">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> PodcastsDay 7 - The Promise Of Docker Containershttps://henryneeds.coffee/blog/day7-the-promise-of-containers/https://henryneeds.coffee/blog/day7-the-promise-of-containers/This is a magazine article I wrote to help people get started with using containers.Wed, 31 Jul 2019 00:00:00 GMT<p>Hey folks! My last post, titled <a href="https://dev.to/quinncuatro/cool-but-what-can-you-actually-use-containers-for-228d">How To Incorporate Containers Into Your Daily Duties</a>, was all about how you can use containers to improve your work as a developer.</p> <p>The problem with that is that it necessitates y'all knowing how to set up, configure, and run Docker containers.</p> <p>I shouldn't have assumed that everyone came to that post with the prerequisite knowledge, so here's an article I wrote a little over a year and a half ago. It covers two things:</p> <ol> <li> <p>What benefits containers provide you, as a developer.</p> </li> <li> <p>A demo/walkthrough of building an image based on a CentOS LAMP stack from scratch and spinning up a container to serve a web application.</p> </li> </ol> <p>I've also updated the additional resources at the end of this one so that they're the current links/documents/installers as of July 31, 2019.</p> <p>Hope you enjoy!</p> <h2>The Promise of Containers</h2> <p>As a web developer, I’m sure you’ve built a server or two. You probably spun up a Linux box and installed some packages like Apache, MySQL, or PHP. 
Maybe you pulled some of your code from GitHub, threw a database together, and edited some config files to bend the server to your liking.</p> <p>(If you haven’t built a server and are interested in learning how, check out my <a href="http://LearnToProgram.TV">LearnToProgram.TV</a> course titled β€œIntroduction to Server Administration”. - <a href="https://www.udemy.com/introduction-to-server-administration/">https://www.udemy.com/introduction-to-server-administration/</a>)</p> <p>But what happens if your server becomes corrupted, your backups have been failing without you noticing, or your site goes viral and traffic starts outpacing what your infrastructure can handle?</p> <p>If you’re new to server administration, any of those events could seem daunting. If one occurred, you would have a bigger problem than manually rebuilding your server. You also have to hope that your documentation is thorough enough to cover every modification you made to your original system. You do have documentation, right?</p> <p>Simply put, a disaster of any magnitude would make for a frustrating day or a late night. Disasters do happen and it’s prudent to be prepared for them. Let me introduce you to Docker, which is a container management system.</p> <blockquote> <p>β€œBut Henry, what in the world are containers and why should I care about them?”</p> </blockquote> <p>Great question. Containers, in the context of software development, are best defined as segmented and sequestered user-space instances existing on top of a kernel.</p> <blockquote> <p>β€œYeah… can you make that simpler?”</p> </blockquote> <p>I sure can. The easiest way to explain containers is to compare them to virtual machines (VM's). One method to run three different apps on one server is to run three different VM's, which consist of an operating system, the app, and whatever dependencies that app needs in order to run. 
A container, in the context of this example, is a bundle of the bare minimum of code and dependencies it takes to run an application. Rather than shipping a full guest operating system with each app, Docker lets any number or type of containers share the host's kernel. The obvious benefit is that it is much more resource-efficient than running several VM's.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/zg0rqa3mvjifcluo3r97.jpg" alt="" /> <em>Source: <a href="https://images.idgesg.net/images/article/2017/06/virtualmachines-vs-containers-100727624-large.jpg">https://images.idgesg.net/images/article/2017/06/virtualmachines-vs-containers-100727624-large.jpg</a></em></p> <blockquote> <p>"Word. This is starting to make sense. Paint me a word picture."</p> </blockquote> <p>You got it. Let's dive into how Docker (and containers in general) is going to take your game to another level. The benefits of using Docker are manifold, but for the sake of brevity, I want to key in on the five I believe are the most important.</p> <h3>1. Speed</h3> <p>The key concept to wrap your head around is that Docker allows you to programmatically build out your infrastructure. That means instead of having to manually build another server every time you need a new one, you can just spin one up based on a predefined image of what you need the resulting container to be. It's a little more time-intensive to build that image at the beginning of a new project, but every time you need to stand up a duplicate environment, it takes minutes instead of hours.</p> <h3>2. Ease of Use</h3> <p>In addition to speed, the ability to quickly spin up instances of infrastructure you need at any given moment can be a great boost to your organization. Let's say the app you're building hits the front page of Reddit and your traffic goes out of control. You could also get hacked and have your site knocked offline. Either way, you need to get back online quickly. Your business very well might depend on it. 
At least nine out of ten times, starting a container is faster than building a fresh server.</p> <h3>3. Security</h3> <p>A basic tenet of containers is that they are walled gardens. An initial benefit is that you can bundle apps with their respective dependencies so that if container A needs PHP5 and container B needs PHP7, that can be handled without having to worry about dependency clashes. The two containers may run side by side, but they won't get in each other's way.</p> <p>The walled garden approach also means you can define exactly how, and whether, data flows in and out of a container. You'll see in the demo that we explicitly tell our container to "EXPOSE 80". That says it's okay to open up port 80 in the container to the broader system we're running it on so that Apache can do its job and host content over that port. When you start building more complicated systems with Docker, you can use that kind of security to link certain containers running applications up to other specific containers holding databases or running specific microservices.</p> <h3>4. Portability</h3> <p>Since containers (in a simplistic view) are meant to be an app packaged with the bare minimum of resources it needs to run correctly, you can run that container on anything that can run Docker. Whether it is on Red Hat Enterprise Linux, your MacBook Pro, or Windows Server 2016, that container will spin up and function exactly as it should.</p> <p>An added benefit is that you can develop in an environment that will eventually be your production environment. Your containers will be based on an image, which is a compiled list of instructions on how to build a specific environment. You can use that image to spin a container up on your laptop and build your app in it. When you're ready to launch, you can use that same image to deploy a container on a public server and be completely confident that it'll run the same there as it did locally.</p> <h3>5. 
Version Controllability</h3> <p>Finally, since Docker lets you build out your infrastructure as code, your infrastructure can be entered into a version control system. One more time for the people in the back. YOUR INFRASTRUCTURE CAN NOW BE VERSION CONTROLLED. If you make a change and something breaks, just roll it back. If your data center somehow burns to the ground, you can pull your images and code from your remote repositories. Future you is going to thank present you for making their life that much easier.</p> <h2>Docker Demo</h2> <p>With all that said, let's build our first container. I'm assuming that I've convinced you of Docker's usefulness enough that you've already downloaded and installed it on your system. If you haven't yet, please go do that now.</p> <p><a href="https://docs.docker.com/install/">https://docs.docker.com/install/</a></p> <p>Just for laughs, let's say I need a basic site running on a CentOS server for a demo. There are two ways I can go about that: I can spend the twenty minutes it can take to create a new server from scratch, or I can use Docker to build one image that can then instantiate innumerable identical containers to run my site.</p> <h3>1 - Create a Dockerfile</h3> <p>First, make a new project directory and create a file named <code>Dockerfile</code>, which is a list of instructions on how to build a system on top of a base image. Base images can be based around languages like PHP, runtimes like Node, or operating systems like Ubuntu. They are generally provided by the maintainers of the source product. That means they're built by the creators of the software specifically to work well with Docker.</p> <p>We then need instructions on how to build on top of that base. There are commands we can use, like <code>RUN</code>, <code>ADD</code>, and <code>CMD</code>. They run shell-level commands, copy files from the host into the image, and specify the command to run on boot, respectively. 
We take the steps of configuring a server and translate them into tasks that Docker can use to automate the process.</p> <p>It's similar to how Git works behind the scenes, in that each commit (or instruction) is a set of changes to be made to the last commit, not the entire codebase at that moment.</p> <p>For the sake of simplicity, we're going to be building a very basic image. Our Dockerfile will be as follows:</p> <pre><code class="language-Dockerfile"># Dockerfile

# Basic Setup
FROM centos:7
LABEL maintainer="[email protected]"

# Update repos and install httpd
RUN yum -y update &amp;&amp; \
    yum -y install httpd &amp;&amp; \
    yum clean all

# Expose a port from the container to Docker and run the startup script on launch
EXPOSE 80
CMD ["/usr/sbin/apachectl", "-DFOREGROUND"]
</code></pre> <p>A Dockerfile reads surprisingly like English. All it is saying is that we want to use CentOS 7 as a base, install Apache, expose port 80 to map to a different host port, and start up Apache on boot.</p> <h3>2 - Build Your Image</h3> <p>Once the Dockerfile is saved, we'll run a "build" command in the terminal. This tells Docker (the utility) to use our Dockerfile (the list of instructions) to build a system.</p> <p>First, we need to open our terminal and enter the directory that contains our Dockerfile. 
For me, that directory is <code>~hquinn/dev/forDevTo/TheFutureIsDocker/DockerDemo/</code>.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/x7vv3ghrxf8zcvxs1fwg.png" alt="" /></p> <p>We then need to tell the Docker utility to build the image being described by the Dockerfile.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/l11fcrekx15c91zumm3z.png" alt="" /></p> <p>The <code>--rm</code> denotes that we want to automatically delete the intermediate containers that are generated during the build process, the <code>-t hquinn/dockerdemo</code> is tagging our build with a name so we can more easily refer to it later, and the <code>.</code> at the end tells Docker to use the present working directory as the build context.</p> <p>This command will log out every part of the build process to the console. It may take a few minutes, which is totally normal. Let it sit; it will tell you when it's finished.</p> <h3>3 - Verify Your Image</h3> <p>Once Docker pulls down the base we chose to use (CentOS 7), installs Apache, and completes all other instructions, we're given a finished image that is named based on the tag in the build command. You can see it (and the CentOS 7 base image) by listing the Docker images on your system.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/9cvvcz9lz4ebzdf08yos.png" alt="" /></p> <h3>4 - Spin Up A Container</h3> <p>Since the image is now built, we can use it to spawn a running container. 
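For reference, the build and image-listing commands behind the screenshots in steps 2 and 3 reconstruct to roughly the following transcript. This is a sketch based on the flags described above, not a verbatim copy of the screenshots:

```bash
# Build an image from the Dockerfile in the current directory (the build context),
# tagging it and removing intermediate containers as it goes.
$ docker build --rm -t hquinn/dockerdemo .

# List local images to confirm the new tag (and the centos:7 base) are present.
$ docker images
```
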
We're going to change directory to the project we want running on this first container and then run a <code>docker run</code> command.</p> <p>(Here, I'm using a sample project from Dave Moran's LTP course titled "Learn SASS and SCSS" - <a href="https://www.udemy.com/learn-sass-and-scss/">https://www.udemy.com/learn-sass-and-scss/</a>)</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/0qromylrljd4jpsqlesm.png" alt="" /></p> <p>This command tells Docker to run a container based on the image that we just built, but with a few other parameters. The <code>-d</code> tells Docker to run the container as a daemon (in the background), the <code>-p 80</code> maps port 80 on the container to a port on the host so the site is viewable in a browser, the <code>--name Project1</code> is again a way to name the resource, and the <code>--mount type=bind,source="$(pwd)"/app/,target=/var/www/html/</code> tells Docker to map our local <code>./app/</code> directory to <code>/var/www/html/</code> in the container so that Apache can host it for us.</p> <h3>5 - Verify Your Container</h3> <p>Now that the container is running with our code loaded into it, we can go see it in a browser. First, we need to figure out what local port the container's exposed port got bound to.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/n0bsjvk986xe43r4zilb.png" alt="" /></p> <p>We can see that the running container named <code>Project1</code> has mapped our local port 32769 to port 80 on the container (the host port may be different on your machine). If we open a browser and head to <code>localhost:32769</code>, we can see our code running live.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/nuitkzsmypz8uuhq62wl.png" alt="" /></p> <h3>6 - Drop Into Your Container's Shell</h3> <p>That's all there is to it. 
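Assembled from the flags described in steps 4 and 5 above, the run-and-verify commands behind those screenshots look roughly like this transcript (a reconstruction, not a verbatim copy of the screenshots):

```bash
# Start a background container from our image, publish container port 80 to a
# random host port, and bind-mount the local ./app/ directory into Apache's web root.
$ docker run -d -p 80 --name Project1 \
    --mount type=bind,source="$(pwd)"/app/,target=/var/www/html/ \
    hquinn/dockerdemo

# Show running containers, including which host port got bound to port 80.
$ docker ps
```
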
We could keep moving to different directories that have basic HTML/CSS/JS applications in a subdirectory called <code>./app/</code> and keep using that same command to spawn more containers. They'll each spin up, map that code to <code>/var/www/html/</code>, and assign a port on <code>localhost</code> to each container.</p> <p>We can verify that the container is pulling in our code by running a Docker command that grants us shell access to it. It'll drop us right into the root of that CentOS system as the root user. If we look in the web directory, we can see that it pulled our files in and uses Apache to host them, the same way it would on a traditional VPS.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/zyuiyfkbg96bb93uov84.png" alt="" /></p> <h3>7 - Clean Up</h3> <p>What demo would be complete without some file cleanup? If we keep spawning containers without doing any sort of maintenance, we will eventually hit the limits of our hardware. That's not ideal. Let's shut down and delete the container we spun up, then remove the image it's based on.</p> <p>Containers are ephemeral. Once one is stopped with the <code>docker stop</code> command, we could spin it right back up, and it would be identical to the first time that we started it. To get rid of a stopped container, we'd run a <code>docker rm</code> command. To get rid of the image it was based on, we would run a <code>docker rmi</code> command.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/2tfha1domsdmke9l9nll.png" alt="" /></p> <h2>Outro</h2> <p>Docker isn't as insurmountable a technology as it seems. It can be tricky to wrap your head around, but once you do, it's pretty easy to see the benefits that it can provide. 
Not only does it let you develop locally in the same environment you'll use for production, but it lets you spin your infrastructure up and down to suit your needs: whether it's scaling to meet viral demand or recovering from a disaster.</p> <p>Furthermore, this is just a very small taste of what Docker can do. Your builds can be as simple or as complicated as you need. For instance, I have one build that consists of a group of three containers: one running a Node application, another container running PostgreSQL with data persistence through the help of a data volume, and the third acting as a query cache.</p> <p>As a developer, Docker is an incredibly powerful tool to have in your arsenal, even if it's meant to solve a problem you haven't had yet. You'll eventually find yourself in a situation where Docker can save your bacon, and when you do, you'll be ready for it. The documentation is incredible and, corny as it sounds, the only real limit is your imagination. Now, get started building your infrastructure as code.</p> <h2>Additional Resources</h2> <ul> <li><a href="https://github.com/Quinncuatro/TheFutureIsDocker.git">Article &amp; Corresponding Code</a></li> <li><a href="https://docs.docker.com/install/">Docker Desktop Edition Installers</a></li> <li><a href="https://docs.docker.com/get-started/">Docker Getting Started Guide</a></li> <li><a href="https://www.katacoda.com/courses/docker">Docker Labs - Katacoda</a></li> <li><a href="https://docs.docker.com/engine/reference/builder/">Dockerfile Documentation</a></li> </ul> <p>Containers can be a hard subject to initially broach, since, from the outside, it can seem that there is just too much in that realm to learn. 
However, once you dig in, it's just some light system administration work that's being automated so you only have to put in a decent amount of effort the first time.</p> <p>Hopefully, this post is enough to help at least one of you start your journey towards container enlightenment!</p> <p>Also, the parts for my Raspberry Pi K8s bramble are all slowly coming in the mail. Just waiting on my cooling rack, which arrives tomorrow afternoon. Can't wait to start putting that together and creating some more content for you all.</p> <p>Until then, stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> DevOpsDay 6 - Cool, but what can you ACTUALLY use containers for?https://henryneeds.coffee/blog/day6-cool-but-what/https://henryneeds.coffee/blog/day6-cool-but-what/An explanation of real-life situations where one might want to use containers.Sun, 28 Jul 2019 00:00:00 GMT<p>We're going to take a bit of a sidestep this week.</p> <p>I was planning on converting a Docker-Compose project into a Kubernetes one, but I had a small project pop up this week. That project involved me setting up a container to mimic a production environment so I could test my script without disrupting any of our clients.</p> <p>Figured a practical example of how a container saved the day would be a better topic than me learning something just to learn something.</p> <p>Every once in a while I'm given a task that, once explained to me, seems like it's going to take a lot of time. Like hours of my attention every time this bit of work comes up.</p> <p>I'll usually ask how often this task comes up in a given quarter. 
If the combined time for a quarter is more than a day or two, I usually throw the <a href="https://xkcd.com/1319/">XKCD comic on automation</a> out the window and just go for it.</p> <p>It's more worth my time to write and maintain an automation script than it is to have me out of "the fight" for a few hours to deal with something mundane while the rest of my team is tackling a production issue.</p> <p>This week I was given a task that requires me to hop on a server and programmatically clean out some files. Sometimes it's just looping over a list and using some simple <code>*</code> matching to nuke some stuff. Sometimes it needs some sed magic to make things work.</p> <p>Once I decided that it was worth it to automate this task, I copied the Confluence page outlining the steps into a new document in VS Code. I chunked up the tasks based on similar work that needed to be done, played around with some Bash scripting, cried a bit, and eventually came up with a more sophisticated version of the following GitHub repo:</p> <blockquote> <p>PROJECT CODE: <a href="https://github.com/Quinncuatro/ReleaseTheHounds">Release The Hounds</a> &lt;-- GitHub Link</p> </blockquote> <p>Now let's dive into what exactly is going on in there. The README is mostly just telling you how to start the container and run the scripts in the <code>./code/</code> directory:</p> <pre><code class="language-bash">$ docker pull centos:5.11
$ docker run -v `pwd`/code/:/var/theHounds/ --name DogHouse -it centos:5.11 /bin/bash
$ cd /var/theHounds/
$ ./SetUpEnv.sh
$ ./ReleaseTheHounds.sh
</code></pre> <blockquote> <p>"But Henry, why are you using a container for this?"</p> </blockquote> <p>Great question! Presently I have three answers for you:</p> <ol> <li> <p>We, like a lot of companies, run Red Hat Enterprise Linux in production. Spinning up a CentOS (downstream project from RHEL) server allows me to closely approximate the environment I want my script to run in. 
This gives me a more similar testing environment than, say, Windows.</p> </li> <li> <p>Containers are meant to be ephemeral. One of the scripts in <code>./code/</code>, <code>SetUpEnv.sh</code>, creates dirs and files for me to test against. If I need to test again, I can restart the container, and my setup script will create the same env every time.</p> </li> <li> <p>Containers don't have access to any local storage that you don't explicitly give them access to. That way, if I mess up my script, I won't accidentally delete a local directory I want to keep.</p> </li> </ol> <pre><code class="language-bash">#!/bin/bash
# EXCERPT FROM SETUPENV.SH:

mkdir -p /opt/hoodlums

touch /opt/hoodlums/2019-03-07_Thief.zip
touch /opt/hoodlums/2019-06-12_Ruffian.zip
touch /opt/hoodlums/2019-07-25_Vandal.zip
</code></pre> <p>So, once I start up my container and run this script, I'll always have the same environment to test my script against. And that's where <code>./code/ReleaseTheHounds.sh</code> comes into play.</p> <p>This script exists solely to help me get rid of some files. Sometimes that can be as easy as looping over a variable and shredding any files that kind of match.</p> <pre><code class="language-bash">#!/bin/bash

# Set up vars
HOODLUMTYPES="Thief Ruffian Hooligan Delinquent Vandal"
...

# A HOODLUMS
cd /opt/hoodlums/
for i in $HOODLUMTYPES; do
  shred -n 7 -u -v *$i.zip;
done;
...

echo "------------------------------";
echo "LEFTOVER FILES IN /opt/Hoodlums/";
ls /opt/hoodlums/;
</code></pre> <p>I can run this script inside of my Docker container to make sure that it's catching any edge cases I set up in <code>SetUpEnv.sh</code> without worrying about accidentally touching my local <code>/opt/</code> dir on my MacBook Pro. Which is good, because I don't want to accidentally nuke Gradle or Vagrant!</p> <p>Sometimes I need to run my data through a sanitizer before I can use it to find files. 
For example, one of our inputs is <code>163Killer</code> and a file we're testing against is <code>0277_Killer.yaml</code>. We need to extract <code>Killer</code> to match that yaml file. That's easy enough with an extra loop and some sed magic:</p> <pre><code class="language-bash">#!/bin/bash

WHALETYPES="123Fin 456_Gray 495Beluga 385_Livyatan 012_Blue 978Bowhead 149_Humpback 163Killer"
SANITIZEDWHALES=""

function whaleSanitizer {
  input=$1
  input=`echo $input | sed 's/^[0-9]*//'`
  input=`echo $input | sed 's/^_*//'`
  echo $input
}

# B WHALES
for i in $WHALETYPES; do
  SANITIZEDWHALES="$SANITIZEDWHALES $(whaleSanitizer $i)"
done;

echo "------------------------------";
echo "LEFTOVER FILES IN /var/lib/docker/";
ls /var/lib/docker/;
</code></pre> <p>Without turning this into a Bash post, we're running the <code>WHALETYPES</code> var through the <code>whaleSanitizer</code> function and into the fresh <code>SANITIZEDWHALES</code> var. We're then using that new var to find files we need to shred. Then, at the end of the script, I <code>ls</code> all of the dirs I touched to make sure the files I wanted to get rid of are no longer there.</p> <p>After I simplified this a bit in order to share it, it ended up being a pretty simple example. I'm spinning up a container, creating a temporary test environment, then running my cleanup script against that test environment. If it doesn't work right, I can make some changes, set my environment back up quickly, and try again.</p> <p>An added benefit is that this is easily repeatable. The <code>README.md</code> file with the instructions on setting up and running the test environment is only 176 bytes. That's right. Bytes. When I inevitably need to update this script, pulling down the GitHub repo will take a few seconds. I can make my changes, and have a container stood up and ready to test against in a matter of minutes. 
Shorter, if I already have the <code>centos:5.11</code> image pulled locally.</p> <p>Hopefully, this makes the benefits of containers seem a little more real. If any of you read this far and have had to spin up containers for a similar kind of project (work or personal), drop it in the comments. Let's get some discussions going and help the newer folks learn when to reach for containers!</p> <p>Also, as I work on my Compose -&gt; Kubernetes project/article, I'm waiting on some fun toys to come in from Amazon. This will for sure be its own article, but I'm hopping on the Raspberry Pi Kubernetes cluster bandwagon.</p> <p><img src="https://thepracticaldev.s3.amazonaws.com/i/cl0eiam0zxfho5c6pa9d.jpg" alt="" /></p> <blockquote> <p>Photo Credit: <a href="https://www.c4labs.com/product/8-slot-stackable-cluster-case-raspberry-pi-3b-and-other-single-board-computers-color-options/">C4Labs</a></p> </blockquote> <p>Until then, stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> DevOpsDay 5 - The One Where It Turns Out I Misunderstood Kuberneteshttps://henryneeds.coffee/blog/day5-misunderstood-kubernetes/https://henryneeds.coffee/blog/day5-misunderstood-kubernetes/Turns out Kubernetes is at least a LITTLE bit tricky - and mostly functions on labels.Fri, 19 Jul 2019 00:00:00 GMT<p>I just finished <a href="https://twitter.com/nigelpoulton">Nigel Poulton's</a> <a href="https://leanpub.com/thekubernetesbook">The Kubernetes Book</a>. I highly recommend it, even though it pointed out some pretty big misconceptions I had about how some important K8s components work.</p> <p>It's a great surface-level intro to the world of Kubernetes and how it can help build projects that can scale and self-heal. Excellent for beginners and intermediate users alike.</p> <p>I also took some time off from posting. 
Turns out that DevOps stuff takes a little longer to learn and internalize than I thought. Going to try and scale down to weekly posts and keep this going longer than a month.</p> <p>Burnout is a really real thing.</p> <p>I've touched on this in some previous posts, but my time with the U.S. Courts was a lot of trial-by-fire, "learn as you go" work. In most work environments that wouldn't be a problem, but there isn't much in the way of traditional Sr./Jr. developer relationships within the judiciary.</p> <p>What that meant, in a practical sense, was that I was responsible for teaching myself any technologies that I didn't start the job already knowing.</p> <p>At a certain point, I was handed a ColdFusion/Informix/MariaDB application and was tasked with getting the application into a certain number of other districts within a certain amount of time.</p> <blockquote> <p>For a good laugh before your weekend starts, take a look through this <a href="https://henryneeds.coffee/files/MyOldCourtProgrammerJob.pdf">job posting for my old position</a> that just went up. I'll be very surprised if they find a developer they're happy with for those job requirements and salary range.</p> </blockquote> <p>I think it's telling about the career path I would eventually take that my first thought wasn't about ColdFusion being a garbage language, but rather about how I would both deploy and maintain this application in multiple courts.</p> <p>Enter: containers.</p> <p>I had to teach myself Docker, and eventually OpenShift, from scratch. There were a couple of folks working for the Administrative Office in D.C. moonlighting as developer advocates who helped me along the way. 
I owe the current state of my career to Jonathan, Carl, and Tom; but for the most part, I was learning these complicated technologies in a silo.</p> <p>I'm unsure if it was real or false competence that made me think I had a handle on everything as we were rushing towards our initial rollout, but I thought I knew everything. Typical 20-something bravado.</p> <p>That is, at least until I read <a href="https://leanpub.com/thekubernetesbook">The Kubernetes Book</a>. I learned that even though I had monkey wrenched our application into a deployable state, I had three big misconceptions about how Kubernetes works under the hood.</p> <h3>1.) How services and deployments differ:</h3> <p>When I was getting our JDash product ready for our initial deployment I was mostly just taking known good .yaml files and modifying them to fit my own needs - basic round peg -&gt; round hole stuff.</p> <p>It was enough to get the job done, but I wasn't understanding what I was doing.</p> <p>I had a basic understanding that containers were wrapped in pods that can optionally be wrapped in deployments, but didn't have a firm understanding of what these first-class Kubernetes objects provided.</p> <p>Turns out that there's a base paradigm of separation of concerns stating that pods should typically (with a few exceptions) only contain one type of container.</p> <p>Then, those pods can be wrapped in deployments. Those deployments can be configured to keep a certain number of pods available at all times (including rolling updates) so that your applications gain the ability to scale and self-heal.</p> <p>Thus, the power of containers (mostly easy scaling) in K8s lies in deployments managing pods, rather than the containers themselves.</p> <p>Understanding that clicked some things into place for me.</p> <h3>2.) 
How PVs and PVCs work:</h3> <p>I had a loose understanding that PVCs (persistent volume claims) were related to PVs (persistent volumes) in that they both facilitated some kind of persistent data storage to power our ephemeral containers.</p> <p>However, I thought that a PVC was always what provisioned the space a PV took up.</p> <p>Turns out, I was half right.</p> <p>Typically, in K8s, you would create a PV that utilized some sort of storage from a storage provider that would then be "claimed" by the PVC. The PVC acts as a ticket that allows a pod to access the storage provisioned in the PV.</p> <p>Additionally, though, there are K8s resources called <code>storage classes</code>. These set up a connection to a storage provisioner that allows for dynamic storage allocation. That way, you can point a pod's PVC to a storage class and have your PV automatically created and connected to your pod. This behavior works across local solutions (NFS, SMB, etc.) and cloud services (AWS, Azure, GCE, etc.).</p> <p>Not having to manually create persistent storage for applications at scale is a HUGE benefit of K8s that I had no solid understanding of.</p> <h3>3.) What exactly labels do:</h3> <p>I had played with labels before, but mainly as adding my name as an author to Dockerfiles.</p> <pre><code class="language-Dockerfile">FROM lucee/lucee5:5.0.1.85
LABEL author="Henry Quinn &lt;[email protected]&gt;"

# TOMCAT CONFIGS --&gt; OPTIONAL to implement if you need custom Tomcat config.
COPY config/lucee/setenv.sh /usr/local/tomcat/bin/
...
</code></pre> <p>That said, I had no idea what kind of insane power they give you with an orchestrator like Kubernetes.</p> <p>Turns out labels are what connect everything. Let's take <code>services</code> for example.</p> <p>Services are what enable easy networking within a cluster. Short story is that services provide a stable IP address, DNS name, and port. 
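To make that service-to-pod label relationship concrete, here's a minimal, illustrative sketch in YAML. All of the names (`my-app-svc`, `my-app`, the ports) are my own inventions for this example, not from any real deployment:

```yaml
# A Service forwards traffic to every pod whose labels match its selector.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app          # must match the pod label below
  ports:
    - port: 80           # stable port the Service exposes
      targetPort: 8080   # port the app listens on inside the pod
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app          # the label the Service's selector is looking for
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 8080
```
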
They handle all of the complicated networking bits of forwarding traffic on to ever-changing numbers of pods, containers, and other resources.</p> <p>That way <code>https://domain.tld</code> always points to your application, regardless of how many pods have been created to accommodate traffic.</p> <p>But, as your service is handling load balancing for you, it needs to know which node(s) it can throw requests to. You may have a six-node cluster with a particular app only running pods on three of those nodes. If a user tries to visit your application, that service is going to be using <code>labels</code> to figure out how to handle it.</p> <p>That service will be configured to have certain label selectors that will match certain labels you provide to your pods in your yaml.</p> <p>In short, those labels appear to be what provide most of the magic in Kubernetes. They're what help your cluster master node draw the complicated map across your entire cluster so that it knows how to navigate any kind of request that comes in.</p> <p>Understanding that is really what made everything click for me.</p> <p>All in all, I think I was able to get 80% of my way to a solid understanding of Kubernetes on my own.</p> <p>And I'm damn proud of that.</p> <p>However, reading this short (160-page) book made the last 20% click into place. And that understanding is going to power a big new project I'm picking up at work.</p> <p>But more on that later. ;)</p> <p>Now that I have a better understanding of K8s, I'm going to <code>$ kubectl apply -f myWeekend.yaml</code>. See y'all next week. 
Until then, stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> DevOpsDay 4 - Scripting Docker Commands With Spinup.shhttps://henryneeds.coffee/blog/day4-scripting-docker-commands/https://henryneeds.coffee/blog/day4-scripting-docker-commands/An explanation of a design pattern I use - using a 'spinup.sh' file to help me manage different Docker-Compose builds.Fri, 05 Jul 2019 00:00:00 GMT<p>In Day 3, I included a blurb from my DevOps-y friends about the natural progression of abstractions on top of containers:</p> <blockquote> <p>You usually start with docker run CLI commands and graduate to tools with more layers of abstraction as you need them. Docker-compose comes next, followed by automating several commands with Bash scripts, which is eventually followed by Kubernetes.</p> </blockquote> <p>I also shared with you a better way to handle switches in Bash scripts. Today I'll show you how I moved from running my own Docker commands to running off of one shell script with a handful of flags.</p> <p>For the first few months of learning how to use Docker, and then how to utilize it in projects, I was running A LOT of individual Docker CLI commands every day. I then moved on to having Docker Compose manage things for me. However, even that started to be a burden for some edge cases I was dealing with.</p> <p>After thinking back on my conversations with other friends in the DevOps field, I remembered they all told me that you'll hit a certain point of wanting to automate your workflows... 
so I started diving into Bash scripts.</p> <p>I knew I would want my script to do a few different things:</p> <ul> <li>Spin up all of my container infrastructure <ul> <li>One version with explicit commands for each piece of infrastructure (dev)</li> <li>One version running off docker-compose (prod)</li> </ul> </li> <li>Tear down all of my container infrastructure (teardown)</li> </ul> <p>The first thing I did was make sure my <code>docker-compose.yaml</code> file was built out. That would be my source of truth - if running <code>docker-compose up -d</code> made everything work correctly, then the rest of this script would be based on what was written in that file.</p> <pre><code class="language-yaml">version: "3.7"

services:
  app:
    container_name: hquinn-app
    image: "hquinn_app:latest"
    networks:
      - hquinn-net
    ports:
      - "8080:8080"
    restart: always
    volumes:
      - type: volume
        source: hquinn_app_home
        target: /var/www/html

networks:
  hquinn-net:

volumes:
  hquinn_app_home:
</code></pre> <blockquote> <p>I felt that I should point out that this is just an example project. <code>hquinn-app</code> isn't a real image, container, or project. Use this as a template to plug in your own information. It's a good training exercise!</p> </blockquote> <p>Running <code>docker-compose up -d</code> works with this YAML configuration. Good! Now we can move on to creating our Bash script. We're going to call this <code>spinup.sh</code>.</p> <p>Let's start by setting up the improved switch statement that we learned about yesterday. We'll include four flags (help, dev, prod, teardown) as well as a catchall for errors.</p> <pre><code class="language-bash">#!/bin/bash

while getopts ":hdpt" opt; do
  case ${opt} in
    h )
      printf "USAGE: ./spinup.sh [OPTION]... \n\n"
      printf "-h for HELP, -d for DEV, -p for PROD, or -t for TEARDOWN \n\n"
      exit 1
      ;;
    d )
      exit 1
      ;;
    p )
      exit 1
      ;;
    t )
      exit 1
      ;;
    \? )
      printf "Invalid option: %s \n" "$OPTARG" 1&gt;&amp;2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

printf "USAGE: ./spinup.sh [OPTION]... \n\n"
printf "-h for HELP, -d for DEV, -p for PROD, or -t for TEARDOWN \n\n"
exit 1
</code></pre> <p>Solid. This switch is going to make it really easy to just plug in the commands we want to run for each flag.</p> <p>Production will probably be the easiest since we'll be leaning on the <code>docker-compose.yaml</code> files we already built out. Let's fill those commands into the <code>p )</code> case:</p> <pre><code class="language-bash">p )
  # Rebuild image
  docker-compose build
  # Spin up container
  docker-compose up -d
  exit 1
  ;;
</code></pre> <p>As you can see, we're really just having this Bash script run the same commands we would run ourselves to start up our containers, volumes, and networks. We're just splitting the different jobs up between different flags so we can use the same script to accomplish a number of different tasks.</p> <p>Our dev case [<code>d )</code>] isn't going to be much different. We're just manually creating a network and running one long <code>docker run</code> command to start up our container:</p> <pre><code class="language-bash">d )
  # Rebuild image hquinn_app
  docker-compose build --no-cache
  # Create hquinn-net bridge network for container(s) to communicate
  docker network create --driver bridge hquinn-net
  # Spin up hquinn-app container
  docker run -d --name hquinn-app --restart always -p 8080:8080 -v hquinn_app_home:/var/www/html --network hquinn-net hquinn_app:latest
  exit 1
  ;;
</code></pre> <blockquote> <p>Henry, what's the actual difference between your dev and prod builds here?</p> </blockquote> <p>Great question, reader! This is part of the fun (pain?) of initially learning about containers. There are a lot of different ways of dealing with the same tasks, and you learn best practices as you go.</p> <p>When I initially wrote this script, I was working on that ColdFusion, Informix, and MySQL project.
Due to the way it was initially built before it was handed to me, we needed to run different sets of commands to spin it up depending on whether we were running it locally for development or in production for actual use by judges.</p> <p>As I dug deeper into Docker, I had all kinds of sources telling me what should have been obvious:</p> <blockquote> <p>One of the main tenets of containers is that your code should run the same everywhere. It's the same containers, just running on different engines.</p> </blockquote> <p>That's to say that I should be running the same commands to run the same containers everywhere. Since I wasn't, I was still falling prey to the whole <code>but, it worked on MY machine</code> gotcha.</p> <p>Since then I've trimmed this script down a bit. I still like having the longer commands in my <code>d )</code> case, though. It lets me quickly test changes to the way I stand up my infrastructure, which I can then solidify in the <code>docker-compose.yaml</code> files I run in production environments. This is another tenet of containers: we can treat our infrastructure as code. Once our <code>docker-compose.yaml</code> is fine-tuned to our liking, we can check it into version control and know that it's safe for all time.</p> <p>Now the <code>t )</code> case is meant to tear down all of our infrastructure. Kill containers, and remove containers, images, volumes, and networks. That way we can get a clean slate to spin up and test out new changes we made to our infrastructure.</p> <p>We're going to accomplish this with a number of <code>if/then</code> blocks:</p> <pre><code class="language-bash"># If hquinn-app container is running, turn it off.
running_app_container=`docker ps | grep hquinn-app | wc -l`
if [ $running_app_container -gt "0" ]
then
  docker kill hquinn-app
fi
</code></pre> <p>For this particular block, we're setting a variable named <code>running_app_container</code> to the output of <code>docker ps | grep hquinn-app | wc -l</code>, which means that if the container <code>hquinn-app</code> is up and running, <code>running_app_container</code> is set to the number of lines returned by that command.</p> <p>The <code>if/then</code> block then checks to make sure the controlling variable is greater than 0. If true, it runs the command <code>docker kill hquinn-app</code> to kill the container.</p> <p>We'll use a series of these blocks to manage our containers, images, volumes, and networks.</p> <p>Let's see the entire <code>spinup.sh</code> script, with all of the parts plugged in:</p> <pre><code class="language-bash">#!/bin/bash

while getopts ":hdpt" opt; do
  case ${opt} in
    h )
      printf "USAGE: ./spinup.sh [OPTION]... \n\n"
      printf "-h for HELP, -d for DEV, -p for PROD, or -t for TEARDOWN \n\n"
      exit 1
      ;;
    d )
      # Rebuild image hquinn_app
      docker-compose build --no-cache
      # Create hquinn-net bridge network for container(s) to communicate
      docker network create --driver bridge hquinn-net
      # Spin up hquinn-app container
      docker run -d --name hquinn-app --restart always -p 8080:8080 -v hquinn_app_home:/var/www/html --network hquinn-net hquinn_app:latest
      exit 1
      ;;
    p )
      # Rebuild image
      docker-compose build
      # Spin up container
      docker-compose up -d
      exit 1
      ;;
    t )
      # If hquinn-app container is running, turn it off.
      running_app_container=`docker ps | grep hquinn-app | wc -l`
      if [ $running_app_container -gt "0" ]
      then
        docker kill hquinn-app
      fi
      # If a turned-off hquinn-app container exists, remove it.
      existing_app_container=`docker ps -a | grep hquinn-app | grep Exit | wc -l`
      if [ $existing_app_container -gt "0" ]
      then
        docker rm hquinn-app
      fi
      # If image for hquinn_app exists, remove it.
      existing_app_image=`docker images | grep hquinn_app | wc -l`
      if [ $existing_app_image -gt "0" ]
      then
        docker rmi hquinn_app
      fi
      # If hquinn_app_home volume exists, remove it.
      existing_app_volume=`docker volume ls | grep hquinn_app_home | wc -l`
      if [ $existing_app_volume -gt "0" ]
      then
        docker volume rm hquinn_app_home
      fi
      # If hquinn-net network exists, remove it.
      existing_hquinnnet_network=`docker network ls | grep hquinn-net | wc -l`
      if [ $existing_hquinnnet_network -gt "0" ]
      then
        docker network rm hquinn-net
      fi
      exit 1
      ;;
    \? )
      printf "Invalid option: %s \n" "$OPTARG" 1&gt;&amp;2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

printf "USAGE: ./spinup.sh [OPTION]... \n\n"
printf "-h for HELP, -d for DEV, -p for PROD, or -t for TEARDOWN \n\n"
exit 1
</code></pre> <p>All in all, this is looking pretty tight. You can add more commands in if you need anything more complicated. You can add more flags to handle more edge cases, too.</p> <p>This script (and a handful of others like it) really helped me through the last six months of my job with the courts. However, with the projects I'm working on now, the number of these scripts I would need to remain productive is going to be a burden to maintain. We need more power and more control over what we're doing.</p> <p>Hence, my deep dive into Kubernetes.</p> <p>I haven't forgotten about it. I'm starting to dig into the books that I bought. As far as Kubernetes The Hard Way goes, <a href="https://dev.to/christiancorbin">Christian Corbin</a> pointed out that the tutorial might be out of date. To that end, I think I'm going to drop K8s The Hard Way and just focus on the books that I bought and the <a href="https://kubernetes.io/docs/tutorials/">Kubernetes.io tutorials</a> when I need some hands-on practice.</p> <p>DevOps is all about iterating on and improving processes. Happy to change things here as better opportunities come up!</p> <p>It's a holiday weekend and I'm headed to Maine.
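</p> <p>One last aside before I go: the <code>grep | wc -l</code> guards in the teardown case work fine, but Docker's CLI can do the filtering itself. This is just a hypothetical variant I've been meaning to try, not what's in the script above, and it reuses the example <code>hquinn-app</code> name:</p>

```shell
# Returns success (0) when a container whose name matches $1 is running.
# `docker ps -q` prints only container IDs, and `--filter name=...` does
# the matching inside Docker, so no grep/wc pipeline is needed.
container_running() {
  [ -n "$(docker ps -q --filter "name=$1")" ]
}

# Usage inside the teardown case would look something like:
# if container_running hquinn-app; then
#   docker kill hquinn-app
# fi
```

<p>One caveat: the <code>name=</code> filter does substring/regex matching, so an anchored pattern like <code>name=^hquinn-app$</code> is safer if you have similarly named containers.</p> <p>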
Time to <code>spinDOWN.sh</code>.</p> <p>/rimshot</p> <p>I'll try to write some more while I'm on vacation, though you might not hear from me until next week.</p> <p>Stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> DevOpsDay 3 - Fun With Flagshttps://henryneeds.coffee/blog/day3-fun-with-flags/https://henryneeds.coffee/blog/day3-fun-with-flags/An explanation as to how I use flags in Bash scripts to make multifunctional CLI programs.Thu, 04 Jul 2019 00:00:00 GMT<p>First off, some housekeeping. I can already tell that coming up with a daily post is going to be harder for me than it was for my buddy Alex with his SVG work. He had one animation to complete every day. I'm spending all month working towards one larger goal.</p> <p>However, I'll do my best to share something I learn every day, even if it's small.</p> <h2>Background</h2> <p>I bought <a href="https://leanpub.com/b/masteringcontainers">Nigel Poulton's book bundle from LeanPub</a> and am starting to dig into those books, but I learned something neat recently that I thought I would share.</p> <p>When I was still at the courts, I was teaching myself how to use containers as quickly as I was implementing them in projects. While learning how to manage multiple sets of containers, I spoke with my college friends who were in DevOps-y roles about the tools they used. It shocked me to hear everyone say the same thing:</p> <blockquote> <p>You usually start with <code>docker run</code> CLI commands and graduate to tools with more layers of abstraction as you need them. <code>Docker-compose</code> comes next, followed by automating several commands with Bash scripts, which is eventually followed by Kubernetes.</p> </blockquote> <p>When I was shoving legacy applications into containers, I had to restart my containers a lot.
That led to me learning how to utilize Bash scripts to automate a lot of that pain out of my life. I taught myself by piecing together different snippets I found on StackOverflow. That's all well and good, but I never read a book or anything to really get the basics down.</p> <p>At my new job, I'm using Bash to automate some processes and I'm learning a LOT from actually being around other developers. [Side note: being a one-man government dev shop gets REALLY lonely.]</p> <h2>Task</h2> <p>I recently tasked myself with building a tool to automate a process that takes our team of three engineers approximately 0.5 - 1.5 man-hours each day. It involves a lot of Git operations as we prepare for production deployments. It needs to have a pair of eyes on it pretty much every step of the way. It's tedious.</p> <p>Luckily, I was able to take a script a coworker had been working on, toss it into a Docker container, and use that as a base. This version still required one of us to run it over and over again until our list of tasks was complete. That wasn't going to fly - this needed to be automated all the way through and it needed actual error handling to let us know when something broke.</p> <p>After building it out, I had a nice little CLI tool with a handful of different flags that would change the way the script ran depending on the task at hand. I had built small things like this for the court and never really ran into any problems, as I was just building tools for my own use.</p> <p>My coworker took one look at it and asked <strong>"why do these flags need to be in a specific order?"</strong></p> <p>Frankly, I had never even considered that. There were six flags and not all of them were used every time. If they weren't added in the correct order, the script would break, and the Git branch commit tree might be compromised.</p> <p>No bueno.</p> <h2>Solution</h2> <p>I very quickly got acquainted with Bash switches and all of the fun that comes with them.
I've been writing code professionally for a few years now, so I knew enough to set up a quick switch after a Google search, but I didn't know the correct way to handle errors.</p> <pre><code class="language-bash">#!/bin/bash

# Basic switch to handle a couple of flags
case "$1" in
  -a )
    printf "Option A \n"
    exit 1
    ;;
  -b )
    printf "Option B \n"
    exit 1
    ;;
esac
</code></pre> <p>And there are all kinds of errors to handle when you use a switch for flags in a CLI tool:</p> <ul> <li>The user may just not know the correct way to call your tool.</li> <li>A user could enter a flag you didn't intend for them to use.</li> <li>A flag may need an extra argument that a user didn't provide.</li> </ul> <p>Well, the first fix is easy. We just add another flag to print out help text:</p> <pre><code class="language-bash">#!/bin/bash

# Basic switch to handle a couple of flags, with added help flag
case "$1" in
  -h | --help )
    printf "HELP MESSAGE \n"
    exit 1
    ;;
  -a )
    printf "Option A \n"
    exit 1
    ;;
  -b )
    printf "Option B \n"
    exit 1
    ;;
esac
</code></pre> <p>Let's make sure this works:</p> <pre><code class="language-shell">hquinn$ ./example.sh -a
Option A
hquinn$ ./example.sh -b
Option B
</code></pre> <p>The next two fixes are actually where my new tidbit comes into play. We need to let our users enter flags in any order they want.
We also need them to be able to pass arguments with their flags.</p> <p>Think about the command <code>yum install git</code>.</p> <p><code>Yum</code> is your tool, <code>install</code> is your "flag", and <code>git</code> is the argument you're passing with your flag to your tool.</p> <p>I learned that after wrapping your case in a while loop, you can use the Bash builtin <code>getopts</code> to limit the types of flags that can even make it into your switch:</p> <pre><code class="language-bash">#!/bin/bash

# Switch to let users place flags in any order they want
while getopts ":hab" opt; do
  case ${opt} in
    h )
      printf "HELP MESSAGE \n"
      exit 1
      ;;
    a )
      printf "Option A \n"
      ;;
    b )
      printf "Option B \n"
      ;;
  esac
done
shift $((OPTIND -1))
</code></pre> <p>What happens when we run that?</p> <pre><code class="language-shell">hquinn$ ./example.sh -h
HELP MESSAGE
</code></pre> <p>Alright, cool. Now we can use the <code>${opt}</code> variable in our switch to drop us into different cases. The <code>":hab"</code> bit in the while loop's condition is what allows us to limit which flags are even allowed to make it into the switch. If the flag isn't in that list, our switch doesn't kick off, and code doesn't get run.
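</p> <p>Here's that same pattern squeezed into a little function you can paste into a terminal and poke at. The <code>-a</code>/<code>-b</code> flags are the same toy ones from the examples above, nothing real:</p>

```shell
#!/bin/bash
# Toy version of the getopts loop above, wrapped in a function so it can
# be called over and over while experimenting.
parse_flags() {
  local OPTIND opt   # reset OPTIND so each call starts at the first arg
  while getopts ":ab" opt; do
    case ${opt} in
      a ) printf "Option A \n" ;;
      b ) printf "Option B \n" ;;
      \? ) printf "Invalid option: %s \n" "$OPTARG" ;;
    esac
  done
}
```

<p>Because <code>getopts</code> walks the arguments itself, <code>parse_flags -a -b</code> and <code>parse_flags -b -a</code> both print both lines, just in the order the flags were given.</p> <p>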
Easy peasy.</p> <p>After any of those letters in that flag-limiting list, we can place another <code>:</code> to denote that a particular flag requires an added argument:</p> <pre><code class="language-bash">#!/bin/bash

# Switch to let users place flags in any order they want
while getopts ":hab:" opt; do
  case ${opt} in
    h )
      printf "HELP MESSAGE \n"
      exit 1
      ;;
    a )
      printf "Option A \n"
      ;;
    b )
      printf "Option B \n"
      newVar=$OPTARG
      printf "Argument is $newVar \n"
      ;;
    : )
      printf "Invalid option: -$OPTARG requires an argument \n" 1&gt;&amp;2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))
</code></pre> <p>Let's check that out.</p> <pre><code class="language-shell">hquinn$ ./example.sh -b whatever
Option B
Argument is whatever
hquinn$ ./example.sh -b
Invalid option: -b requires an argument
</code></pre> <p>You may have caught the extra case in the last example. When we set up a flag to need an added argument, Bash provides us with a way to catch errors. The <code>:</code> case only gets run when a flag that needs an added argument (in this case, <code>-b</code>) isn't actually provided that extra argument.</p> <p>Lastly, we need to be able to print out a helpful error if a user inputs a flag that's not in our list of allowed flags.</p> <blockquote> <p>"Henry, does Bash give us a tool for that, too?"</p> </blockquote> <p>You bet it does.</p> <p>If an invalid option is provided in calling this CLI tool, the <code>opt</code> variable will be assigned the value <code>?</code>. Let's add a case to handle that:</p> <pre><code class="language-bash">#!/bin/bash

# Switch to let users place flags in any order they want
while getopts ":hab:" opt; do
  case ${opt} in
    h )
      printf "HELP MESSAGE \n"
      exit 1
      ;;
    a )
      printf "Option A \n"
      ;;
    b )
      printf "Option B \n"
      newVar=$OPTARG
      printf "Argument is $newVar \n"
      ;;
    \? )
      echo "Invalid option: $OPTARG" 1&gt;&amp;2
      exit 1
      ;;
    : )
      printf "Invalid option: -$OPTARG requires an argument \n" 1&gt;&amp;2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))
</code></pre> <p>Now, if a user were to pass in a flag we don't want them to, they'll get an error message alerting them to that fact.</p> <pre><code class="language-shell">hquinn$ ./example.sh -c
Invalid option: c
</code></pre> <p>Turns out there is a LOT that you can do with flags in Bash. If you have a script that runs even a little differently each time you run it, consider adding some flag logic into it. It takes a few minutes, and more than a couple of tries, to set up correctly. However, the time it saves you later on with helpful console messages is invaluable.</p> <p>I'm Henry Quinn and this has been Fun With Flags. Stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> DevOpsDay 2 - What's the plan?https://henryneeds.coffee/blog/day2-whats-the-plan/https://henryneeds.coffee/blog/day2-whats-the-plan/My outline for what I would like to learn about DevOps this month.Tue, 02 Jul 2019 00:00:00 GMT<p>In the spirit of learning in public, I wanted to share why I picked Kubernetes as the topic of my deep dive, what my defined goals are for the end of the month, and what my learning plan looks like in order to get me over the finish line.</p> <p>As short as a month can feel, this is a marathon, not a sprint.
Let's take these in order:</p> <ol> <li>Why did I pick K8s?</li> </ol> <p>I have a bit of a weird background: computer nerd in a high school with no computer courses -&gt; Champlain College CNIS student -&gt; DBA at the US Courts -&gt; web developer at the US Courts -&gt; web and infrastructure developer for the AO of the US Courts -&gt; DevOps Engineer at Clarity Software Solutions.</p> <p>All of that is to say I have a bit of a background in every bit of technology involved with creating, deploying, and scaling web applications.</p> <p>Jack of all trades, master of none.</p> <p>I was initially hired at the courts to help with a major platform upgrade that, like government work does, got sidelined for two years. I ended up spending that time teaching myself to code. While building some small LAMP apps to help automate tasks for the HR department, I found myself rebuilding servers to manage PHP version conflicts.</p> <p>At a certain point, I realized there had to be a better way than doing that work manually. It was at this point that I got a side job teaching at a local developer bootcamp. As I was getting paid to learn new skills that I could pass down to my students, I stumbled onto Docker.</p> <p>It blew my mind that I could cram so many different containers into so small a space and not have to worry about dependency conflicts anymore. This was the end of my constantly rebuilding servers! I dug into Docker hard and even ran a workshop on its use at GDG Dev Fest New Haven 2017. The Comp Sci Department Chairperson of SCSU actually took what she learned in that workshop and started containerizing up her lab environments for her students. Yet another example of how teaching is one of the best ways to learn.</p> <p>After myself and other developers in the US Courts made enough noise about containers being the future, we finally got the folks in D.C. to build us a national OpenShift environment. Game on. 
This opened the door for small applications built in our own districts to go national. I was being carted around to conferences around the country (Minneapolis, Phoenix, and Washington D.C.) to talk about my projects and show other judiciary developers the promise of implementing container technologies into their workflows. Using OpenShift made it so that we didn't have to worry about uptime, or scaling, or persistent storage. Everything just worked. It was lightning in a bottle.</p> <p>My four years with the judiciary culminated in a morning view dashboard we built for our judges. We were able to save every one of our judges about an hour's worth of time every morning. When you look at their salaries and how many judges we have in the country, this tool serves to save the judiciary tens of millions of dollars' worth of downtime every year. And it wouldn't have been possible without containers and orchestration platforms like OpenShift.</p> <p>After working with OpenShift for a year, and getting the dashboard to a point where other developers can continue to roll it out, I had to leave that job. I actually got hired where I am now due to all of the experience I gained at the courts with shoving legacy technology into containers and scaling them out. (I'll have a fun post later this month about getting Vue, ColdFusion, MySQL, and Node all working together in containers).</p> <p>My new company is just starting their journey of adopting DevOps principles across the organization. We're in a good spot to start putting some of our internal tools and processes into containers to run automatically in order to take some load off of our developers and release engineers.</p> <p>The more we can automate, the more our developers can innovate, and the faster we can grow.</p> <p>In that vein, I want to learn as much as I can about the underpinnings of how container orchestration systems work.
Tools like Kubernetes are going to be key to the success of my organization and I want to be able to roll my sleeves up and help with the nitty-gritty as I go about my DevOps evangelizing over the coming years.</p> <ol start="2"> <li>What do I actually hope to accomplish this month?</li> </ol> <p>I have some containerized tools that I've already built in my time here at Clarity that are already saving my team a lot of time every day. Which is great, because we're still swamped even after getting that time back.</p> <p>However, those are still tools we have to manually spin up, manually kick off, and manually spin down.</p> <p>By the end of July I want to have a built-out Kubernetes cluster (even if it's local on my MBP) that can handle running my various tool containers.</p> <p>I want to understand how to stand the cluster up, how to manage it, and how to debug it when things go wrong. Kubernetes is going to be a key technology in pushing my organization towards automation enlightenment and I want to have an intimate understanding of my tools.</p> <p>Anything else I learn about process automation, Bash, and Go along the way will purely be gravy.</p> <ol start="3"> <li>What is my learning plan?</li> </ol> <p>I initially wanted to work through the book <a href="http://shop.oreilly.com/product/9780596005955.do">Classic Shell Scripting</a> this month, but priorities change.</p> <p>It was only yesterday that I decided I'm doing Kubernetes instead, so I've had to retool a bit.</p> <p>After gathering input from folks on Twitter, /r/DevOps, and my various Slack groups, I landed on <a href="https://leanpub.com/thekubernetesbook">The Kubernetes Book</a> by <a href="https://twitter.com/nigelpoulton">Nigel Poulton</a>. Seems like it's a great way to learn things from the ground up.
I also found this great free eBook from O'Reilly/NGinx called <a href="https://www.nginx.com/resources/library/cloud-native-devops-with-kubernetes/">Cloud Native DevOps With Kubernetes</a>.</p> <p>Those, paired with <a href="https://github.com/kelseyhightower">Kelsey Hightower's</a> tutorial <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way">Kubernetes-The-Hard-Way</a> should give me a REALLY good foundation for understanding the ins and outs of managing my own Kubernetes cluster.</p> <p>I'd like to read both of those books, work through that tutorial, and have my own cluster set up and captured in code by the end of July. The idea is to still share what I learn every day with all of you, in order to pass that knowledge down to those who come after me. Maybe this will turn into a good resource on setting up your first cluster.</p> <p>Who knows how many cups of coffee this will all take. So far, we're up to five in the first two days.</p> <p>I'll be back tomorrow with something actually technical. Until then, stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> DevOpsDay 1 - Learning DevOps In Publichttps://henryneeds.coffee/blog/day1-learning-devops-in-public/https://henryneeds.coffee/blog/day1-learning-devops-in-public/This is the beginning of spending July 2019 learning DevOps in public.Mon, 01 Jul 2019 00:00:00 GMT<p>Hi, my name is Henry Quinn and in July I'll be "learning in public."</p> <p>Last month, my buddy <a href="https://twitter.com/mistertrost">Alex Trost</a> started a learn in public month after reading <a href="https://www.swyx.io/writing/learn-in-public/">this post by Shawn Wang</a>. 
In 30 short days he really upped his game with SVG animations (enough that he was asked to create an eLearning course about it), so I figure there must actually be some benefit to learning, and then teaching, something new every day.</p> <blockquote> <p>"Henry, what does learning in public actually mean?"</p> </blockquote> <p>Great question, reader. Learning in public pretty much means what it sounds like. A lot of people will learn things on their own, with folks at work, or within siloed Slack communities. It's great that they're upping their skills, but they're not necessarily sharing what they found with the greater development community. The learning in public "movement" (lol) is all about sharing what you learn. Writing blogs, creating tutorials, or making videos to pass on what you learn.</p> <p>Imposter syndrome is a really real thing in our industry, but it's important to remember that no matter how little you think you actually know, there's always someone coming up underneath you who knows less. They might be able to glean something from you that sparks an interest and accelerates their career. God knows I wouldn't be in the DevOps field right now without some of the folks over at <a href="https://newhaven.io">New Haven IO</a> pushing me in the right direction.</p> <p>So all of that said, after years of me telling people looking for technical help that "if you can't find the blog post, it's time for you to write the blog post," it's time for me to start actually writing the blog posts.</p> <blockquote> <p>"That's cool and all, but why should I read anything you write?"</p> </blockquote> <p>Again, GREAT QUESTION. I should probably tell you a little about myself. (I'll also drop a link to my resume and LinkedIn at the bottom of this article, in case anyone's curious.)</p> <p>I graduated from Champlain College in 2015 with a degree in Computer Networking &amp; Information Security.
I promptly got a job as a DBA for the United States District Court, District of Connecticut, where I taught myself to build web applications and eventually became a one-man dev shop for the district.</p> <p>After building a few things and getting invited to speak at a couple of judiciary technology conferences, I was eventually brought on half-time with the Administrative Office of the U.S. Courts in Washington, D.C. to scale out a product we built for CT to be used by courts across the country.</p> <p>I left that job for reasons I'll talk about later in the month (burnout is a REAL thing), and found a great job as a DevOps Engineer for Clarity Software Solutions. I'm only five weeks into this new role, but I love it so far. It's a mix of release management and process automation.</p> <p>I'm learning a LOT here and I want to share what I learn with you. I wouldn't be where I am without the folks in my local tech group passing their knowledge down to me, so this is partly me paying that kindness forward. It's part me wanting to keep a log of everything I learned so I can look back on this fondly when I'm older. And it's also part me wanting to get better at what I do. One of the tenets of learning in public is that teaching is one of the best ways to learn. Call me selfish if you must. :)</p> <p>So July, for me, was going to be a deep dive on process automation with Bash and GoLang. However, since this is real life, things sometimes need to change. This month I find myself needing to switch gears to learning more about Kubernetes. We're building out some really cool containerized tools, so having a platform on which we can host all of them and kick off jobs remotely is going to be a big help. By the end of the month, I want to have a cluster set up to hold our tools that I know from the inside out. Though, there will definitely be some automation thrown in there for good measure.</p> <p>With all of that said, I think this is going to be a really fun month.
My plan is to share something new with you every day so that we can communicate and learn together as a community.</p> <p>I'll be back tomorrow with a new post about something DevOps-y, but until then smash that "Like" button, hit "Subscribe", and stay frosty.</p> <ul> <li><a href="https://henryneeds.coffee">https://henryneeds.coffee</a></li> <li><a href="https://henryneeds.coffee/blog">Blog</a></li> <li><a href="https://linkedin.com/in/henryquinniv">LinkedIn</a></li> </ul> DevOpsDeepfakes PSAhttps://henryneeds.coffee/blog/deepfakes-psa/https://henryneeds.coffee/blog/deepfakes-psa/I wanted to write a PSA about Deepfakes and how I feel they will impact the next major election cycle.Mon, 18 Jun 2018 00:00:00 GMT<p>Looks like it's time for another PSA, y'all. Buckle in.</p> <p>Unless you've been living under a rock since the 2012 U.S. presidential election cycle, you know that "fake news" has been running rampant. We're in a real-life age of Nineteen Eighty-Four-esque doublespeak fueled by literal White House Press Secretaries and anyone savvy enough to set up a blog. The very offices and organizations that are supposed to tell us where we stand on the world stage are lying in very public forums about easily verifiable facts.</p> <p>Shit's real and shit's scary.</p> <p>Here's the real rub, though: it's about to get a LOT worse.</p> <p>I think it's fair to say that everyone reading this is familiar with Photoshop, the graphic editing software from Adobe. Whether it be through making long cat memes or looking at airbrushed photos of models in magazines, everyone more or less knows what Photoshop can do.</p> <p>One of Adobe's new products, called VoCo, is basically Photoshop for voice. It will essentially give you the ability to create sound bites of someone saying something they didn't really say, but have it sound like their voice.
And it's as easy as typing.</p> <p>ONE MORE TIME FOR THE PEOPLE IN THE BACK.</p> <p>You can make it sound like people said something they didn't really say just by typing out some words.</p> <blockquote> <p>"But Henry, that can't possibly be real."</p> </blockquote> <p>Oh yeah?</p> <p><a href="https://www.youtube.com/watch?v=I3l4XLZ59iw" title="Adobe Audio Manipulator Sneak Peak"><img src="https://img.youtube.com/vi/I3l4XLZ59iw/0.jpg" alt="Adobe Audio Manipulator Sneak Peak" /></a></p> <blockquote> <p>"Oh, shit."</p> </blockquote> <p>Yeah. In fact, that video was from all the way back in November of 2016. Technology has gotten a lot better since then.</p> <p>Jordan Peele's production company actually used Adobe After Effects and a tool called FakeApp to make a video of Barack Obama moving his mouth in sync with Peele's impersonation of him, copying his mannerisms, reminding us to "stay woke, bitches."</p> <p><a href="https://www.youtube.com/watch?v=cQ54GDm1eL0" title="You Won’t Believe What Obama Says In This Video!"><img src="https://img.youtube.com/vi/cQ54GDm1eL0/0.jpg" alt="You Won’t Believe What Obama Says In This Video!" /></a></p> <blockquote> <p>"Are you serious?"</p> </blockquote> <p>For realsies times a million. Now just imagine if you combined After Effects, FakeApp, AND VoCo. You scared yet?</p> <p>The problem has actually been getting worse in 2018, with a wave of deepfake celebrity porn. People are using artificial intelligence to face-swap celebrities and porn stars. In the wake of Celebgate, where a number of celebrities were embarrassed by having their private photos leaked, there are people in the world actively putting your favorite celebrities' faces on porn stars in the middle of scenes.
And due to artificial intelligence getting better and better, these videos actually look pretty real, to the point where I'm sure a good chunk of people wouldn't be able to tell the difference.</p> <p>If that's not troubling enough, let's look at the implications.</p> <p>People my age have been told our entire lives to be careful about what we put online. We didn't want pictures of us drinking underage (sorry, Mom) to pop up in case we wanted to run for office one day. We needed to be careful about pictures shared with significant others. We were told to keep certain opinions to ourselves to avoid "stirring the pot." Something could pop up decades down the line and compromise a goal we didn't know we would ever want to work towards. One lapse in judgment on something we uploaded willingly as an angsty teenager could completely derail a potential future career.</p> <p>And forget about what other people decide to post. Depending on the platform, it can be near impossible to take down. I've definitely seen pictures online of friends, family, and exes that could put those people in compromised positions somewhere down the line.</p> <p><img src="https://crashthebot.net/blog/wp-content/uploads/2018/06/pencil-300x211-300x211.jpg" alt="" /></p> <p>So what happens when these tools get so simple that everyone can use them? Some really sketchy stuff, that's what.</p> <p>Imagine the 2020 U.S. presidential election cycle with this kind of technology in the mix. Super PACs can make their opponent look like they're lighting a cross on fire, sacrificing a baby goat, and whispering some words to Satan.</p> <p>But let's be really really real for a minute. People would never do something that brash, right? Most other people would understandably realize that something that ridiculously outlandish would probably be fake.
That's to say, going too far outside the box wouldn't work for a completely fabricated smear campaign.</p> <p>/shouting: This is why I wanted to write this post.</p> <p>Fabricated smear campaigns ARE going to happen, but they're going to be incredibly subtle. Just like the supposed Russian involvement (Cambridge Analytica) in our presidential election, whoever ends up taking the helm on putting words in someone else's mouth during a campaign is going to come up with some very focused quips that will affect a very small but focused group of people. They won't need everyone in the country to believe what's put in front of them. Just enough voters in key swing states to turn an election.</p> <p>To put that more succinctly, people will use this technology to spread enough small (but believable) mistruths to make people in key states believe a good candidate may be a poor choice.</p> <p>They'll do this targeting with data. It's happened before. After all of the investigations looking into the 2016 elections, we've seen that it doesn't take much to totally derail an election. This will only make that outcome easier with a lot less effort.</p> <blockquote> <p>"Well, that's scary as shit. Is there a way to tell if those sound/video bites are real or not?"</p> </blockquote> <p>Currently, not really, and that's what really scares me. There are a lot of charts out there (see below) that help you figure out the partisan bias and information quality of different news sources. So if you see something from Patribotics or Infowars, you know that it'll inherently have a huge bias to it, and it might not be factually accurate. Knowing where your source lies on a chart like this can help you gauge how true something might be.</p> <p><img src="https://crashthebot.net/blog/wp-content/uploads/2018/06/Media-Bias-Chart_Version-3.1_Watermark-min.jpg" alt="" /></p> <p>But in a world where any idiot can set up a WordPress blog (whaddup?)
and run a "news" site, it can be difficult to know exactly who we can trust to deliver us factual information. There are people out there working on a solution, however.</p> <p>There's the old way of signing posts with PGP keys. Basically, you'd sign a blog post or article or Facebook update with your private key, and anyone could check against your publicly available public key to verify that it was indeed you who wrote or posted that item. However, this can be tricky for non-technical people.</p> <p>You could post your work on a decentralized (blockchain) application like Steemit. That way, there's a public ledger of who posted what. Instead of keeping all of the information related to the post in a closed off and private database, it would all be publicly available. If you wanted to, you could look at the blockchain and see that a certain author used their private key information to post an article or blog post to Steemit. Basically, you can be sure that a certain politician actually put out a certain video if there's some concern as to authenticity. The only problem is that not a lot of people are using these decentralized platforms yet, and right now the whole crypto space isn't very forgiving to new or non-technical users. There's still a lot of UX optimization to be done there.</p> <p>So really, the only option we're left with at the moment is something we all should have learned in high school. Find and read the primary sources that are being referenced. Look at competing viewpoints. Consider possible bias in the videos you watch and articles you read. Don't take quotes from anyone, even the President of the United States, for granted. Dig into the facts. See if you can find evidence supporting the "truth" you're being told.</p> <p>If we're going to make it out the other side of all of this intact, it's going to take a lot of work from everyone.
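The sign-then-verify flow described above can be sketched in a few lines of Python. Note this is a simplified, hypothetical illustration: real PGP uses an asymmetric key pair (you sign with your private key; readers verify with your public key), while this sketch uses a single shared HMAC key from the standard library just to show how a signature catches tampering.

```python
import hashlib
import hmac

# NOTE: real PGP signing is asymmetric; this shared key is a
# simplified, symmetric stand-in used only to illustrate the flow.
SECRET_KEY = b"authors-signing-key"  # hypothetical key

def sign(post: bytes) -> str:
    """Produce a signature the author attaches to the post."""
    return hmac.new(SECRET_KEY, post, hashlib.sha256).hexdigest()

def verify(post: bytes, signature: str) -> bool:
    """Check that the post hasn't been altered since it was signed."""
    expected = hmac.new(SECRET_KEY, post, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

post = b"I did not say that quote."
sig = sign(post)

print(verify(post, sig))                      # True: untouched post checks out
print(verify(b"I did say that quote.", sig))  # False: tampered post fails
```

With actual PGP tooling the same idea is one command each way: `gpg --clearsign post.md` to sign, and `gpg --verify post.md.asc` for anyone to check.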
People can't just watch the news and assume they're being told absolute truths anymore; they'll have to look into the evidence themselves. Folks like me who are building these technologies need to do a better job of solving the UI/UX problems we're faced with so everyone can feel comfortable using decentralized applications. And honestly, the news organizations need to do a better job of serving you the facts. We, as citizens, deserve better than what they've been providing.</p> <p>But, as always, do your own research. And stay safe out there.</p> PSAFacebook VPN PSAhttps://henryneeds.coffee/blog/facebook-vpn-psa/https://henryneeds.coffee/blog/facebook-vpn-psa/I wanted to write a PSA about why we shouldn't be letting Facebook be a VPN.Wed, 14 Feb 2018 00:00:00 GMT<p>PSA: Don't download Facebook's VPN. They're going to start pushing it soon. Explanation below.</p> <p>So, some of you might (will) start seeing a setting in your mobile Facebook app called "Protect." Hitting it will take you to the app store and prompt you to download an app called "Onavo Protect - VPN Security". It's a Facebook-owned VPN.</p> <blockquote> <p>"Hey Henry, what's a VPN?"</p> </blockquote> <p>Great question. A VPN is a virtual private network. It's essentially a point on the internet that you can route web traffic from your phone/laptop/whatever through before going to the site you want to visit.</p> <blockquote> <p>"Yeah... explain that like I'm 5."</p> </blockquote> <p>You got it. There are times when this is helpful.</p> <p>Let's say your favorite TV show isn't on Netflix anymore; it's only available on Netflix Germany. You could route your <a href="https://netflix.de/your-favorite-show">https://netflix.de/your-favorite-show</a> request through a VPN in Germany so it looks like the request originated there, and pass all the traffic back to a laptop in your bedroom.</p> <blockquote> <p>"That ...
okay."</p> </blockquote> <p>Or let's say you're in an airport and need to check your bank account, but you don't have a 4G signal on your phone. There's only a free and unsecured Wi-Fi access point for you to use. You don't know who's in control of that. You don't want them looking at traffic you're sending your BANK. You could instead connect to a VPN that can establish a more secure connection to your bank's website/app/whatever and log in securely without worrying about some hacker or script kiddie looking over your digital shoulder.</p> <blockquote> <p>"That actually sounds useful."</p> </blockquote> <p>Last example (out of like fifty more): you can use a VPN to stop your ISP from looking at what you're doing. Whether you're downloading a stolen movie or just not wanting Comcast to see what you're up to, you can route your traffic through a VPN. You just keep hitting that one address for the VPN. The VPN is what connects to Facebook, The Pirate Bay, and whatever weird videos you watch in your free time. Comcast can only see you hitting the VPN. It's just a looooong list of you continually surfing to that VPN.</p> <blockquote> <p>"Are ISPs really watching us THAT much?"</p> </blockquote> <p>Absolutely. They look at patterns, see what people are doing, create customer profiles, and then sell all that data to marketing agencies. Fun fact: Facebook does the same thing whenever you're on Facebook or Instagram or whatever new social network they bought this week. They figure out who you are, what you like to do, and what you like to buy so that they can give you more relevant ads all the time.</p> <blockquote> <p>"That's creepy as hell. Facebook is watching what I do on EVERY app they own?"</p> </blockquote> <p>You bet. And Facebook owns a lot. of. companies. <a href="https://en.wikipedia.org/…/List_of_mergers_and_acquisitions">Like, a lot of companies</a>. They're watching what you do on every single one that's still around.
They're in the ad game, and they want to put as many ads that you'll click on in front of you as they can.</p> <blockquote> <p>"Should I just delete Facebook?"</p> </blockquote> <p>I mean, probably. Personally, I think it's outlived its usefulness. But let's get back to this VPN thing. Facebook isn't giving out a free VPN out of the kindness of their heart. People have been saying it for years (specifically about companies like Facebook and Google), but if you're not the customer, you're the product.</p> <p>Again, these companies are gathering as much information about you as they can so that they can sell all the ad space you see on the internet to companies who want to pay to potentially gain you as a customer. If you're fine with that, cool, but it's terrifying to some.</p> <p>Facebook is trying to get you to use their VPN specifically so they can see what you're doing on the rest of the internet that isn't Facebook.</p> <p>One more time for the people in the back.</p> <p>FACEBOOK IS TRYING TO GET YOU TO USE THEIR VPN SPECIFICALLY SO THEY CAN SEE WHAT YOU'RE DOING ON THE REST OF THE INTERNET THAT ISN'T FACEBOOK.</p> <p>That's terrifying.</p> <p>It's the raw end of a shitty deal. If you're cool with them watching everything you do without giving you any real benefit in return and truly feel like you have nothing you don't want them seeing, go ahead and download it.</p> <p>But, if you're like me and my friends who think Facebook has gotten a little too big for their britches, please stay away from this. You're giving up a lot of internet freedom that people have been fighting for, for literally decades, without getting anything of benefit from Facebook besides more ads for dumb mobile games shoved in your face on every platform they own.</p> <p>I absolutely LOATHE when people do this, but please spread this news around. Share this post, copy and paste it in messages to your older relatives, whatever you need to do.
There are going to be plenty of people who don't know any better (people who have Facebook making up a majority of their time on the internet) that will see Facebook trying to "Protect" them and think it could only be a good thing. Help them keep their privacy.</p> <p>Edited To Include:</p> <p>This is just bonkers: "In August of last year, The Wall Street Journal took a look at how Facebook uses Onavo to track what people do on their smartphones outside of the Facebook ecosystem. Using Onavo data, for example, Facebook was able to determine that the Instagram Stories feature was impacting Snapchat's business well ahead of when Snap disclosed slowing user growth." - <a href="https://www.macrumors.com/.../facebook-promoting-onavo-vpn/">https://www.macrumors.com/.../facebook-promoting-onavo-vpn/</a></p> PSAMoment of Truthhttps://henryneeds.coffee/blog/moment-of-truth/https://henryneeds.coffee/blog/moment-of-truth/I wanted to share a 'moment of truth', where I learned that I have what it takes to do well in this career.Thu, 25 Aug 2016 00:00:00 GMT<p>I don't update everyone on what I'm up to very often, and if that upsets or worries anyone, I'm sorry. Those who are close to me know I value face-to-face interaction more than whatever meme posts or video snippets end up on Facebook, but I've had a lot happening recently.</p> <p>I started teaching myself to code in earnest about 14 months ago when I started working for the federal court. In order to progress my skills more quickly, I signed up for a web development boot camp this summer. Our cohort learned a lot.
REALLY QUICKLY.</p> <p>A couple of intranet UI overhauls, several bespoke apps, and a fleshed-out MVP for a local startup later, I've been invited to present one of my applications at a national district courts operations conference at Disney World next month.</p> <p>It turns out that my coworker and I found a problem, extrapolated the core issues, and (mostly I) developed a solution that, to my knowledge, no one in our nationwide organization had come up with yet. That garnered us two separate 45-minute presentations/demos in front of operations staff, IT directors, and Clerks of Court for all 94 federal judicial districts. If enough courts decide to integrate our application into their HR workflows, we get money from DC to help support it, and possibly hire underlings!</p> <p>The code camp I'm involved with teaches about moments of truth: a time when everything you've been working on comes to a head, and you present yourself and your work in a fantastic light.</p> <p>I had such a moment last night, at a small hack event at a local smart light company. The group opened up an API (an interface to interact with a program someone else built) to let us play with the lights all over their training room. Between getting a tour, talking with their developers, and eating some pizza, I built a functional tic-tac-toe game using fluorescent lights in the ceiling. I got in front of the group at the end of the night, wired on caffeine, and demoed what I made. I freaking nailed it.</p> <p>There are still several decades ahead of me in my career, but every once in a while, I need to step back to look at what I've accomplished, and remind myself that I'm only 23.
I've done a lot in the past year or so to push myself forward, and it feels amazing to be recognized for the work I'm doing to make my co-workers' jobs more pleasant.</p> <p>Here's hoping that the presentations next month go even better than I expect.</p> <p>To top it all off, my Soylent Coffiest came in the mail yesterday and I cracked the first bottle for breakfast this morning.</p> <p>Am I a software developer yet?</p> Career$("#Blog").append("Post");https://henryneeds.coffee/blog/blog-append-post/https://henryneeds.coffee/blog/blog-append-post/After dumping some time into FreeCodeCamp, here are some projects I have to share.Wed, 23 Mar 2016 00:00:00 GMT<p>I haven't been updating this nearly as much as I should have been. Truth be told, I've been ignoring some of my yearly goals. Guitar has all but fallen by the wayside, and I'm honestly okay with that. It's one of those things I'd love to be good at, but really don't want to put the effort into.</p> <p>Rock climbing has been in a bit of a lull, but I've been backpacking when I can, and the weather is finally getting right to start climbing outside. I'm planning on spending this weekend at Seven Falls State Park to get some bouldering in. Really psyched for that.</p> <p>I've been using all the extra free time to really focus on web development. I realized that with the skills I had, I really wasn't going to get anywhere fun very fast. After looking around at different code camps, I finally stumbled onto <a href="http://freecodecamp.com/">Free Code Camp</a>. The way they lay out their lessons makes a lot more sense to me than anything else I've found so far. It takes a lot of time and effort, but by the time you're done, you earn Front End, Back End, and Data Visualization certifications. They even have you work with a small team to build functional websites for two non-profits.
At the end, you have a lot more experience, a solid portfolio, and usually some job offers.</p> <p>I've been cranking away on it, and at about 202.33 hours in, I have some pretty cool stuff to show for it:</p> <ul> <li><a href="http://crashthebot.net/freecodecamp/tribute">Paul Rudd Tribute Page</a></li> <li><a href="http://crashthebot.net/freecodecamp/quotes">Random Quote Generator</a></li> <li><a href="http://crashthebot.net/freecodecamp/weather">Weather Checker</a></li> <li><a href="http://crashthebot.net/freecodecamp/wiki">Wikipedia Portal</a></li> <li><a href="http://crashthebot.net/">Personal Webpage</a></li> </ul> <p>Granted, some of those aren't looking the best they could on mobile, but these early stages are more about learning different JavaScript frameworks than responsive web design. It's a lot of fun, and it's actually helping a lot for my current job, so win/win.</p> Check-inFrom Barely Functional PHP To Conference Worthy Web Apps In Under Two Yearshttps://henryneeds.coffee/blog/from-barely-functional/https://henryneeds.coffee/blog/from-barely-functional/Nothing happens overnight - this is how I got conference ready in under two years.Wed, 23 Mar 2016 00:00:00 GMT<p>I've only been in the web development game for a little under two years, but I'm building applications for a federal agency that are getting me invited to speak at conferences. It took me a while to figure out how to get to that point, and I didn't have anyone available to give me a road map when I started. I spent a long time teaching myself, and in retrospect there are definitely easier paths to take. So here's my attempt to set new developers on the right track.</p> <p>I got my start in high school with some basic C and HTML/CSS, then had classes in Java, C++, JavaScript, XML, and SQL in college. Once I graduated, I began using PHP and MySQL for a work project. 
I eventually picked up Meteor (a full-stack JavaScript framework), ECMAScript 6, some NoSQL experience (MongoDB), and now I'm working with Ruby on Rails.</p> <p>However, the way I went about learning didn't work that well, and was honestly kind of sporadic. It took a long time for fundamental concepts to really stick. I jumped between languages a lot, did tutorials, read some books, endlessly trawled through StackOverflow, and even went to a local code camp for funsies to get where I am.</p> <p>It's entirely true that once you learn one language pretty well and get the core concepts down, it's easier to branch out and learn more. They're all pretty similar; the syntax is just a little different. But you <strong>can</strong> learn a lot of marketable skills <strong>very</strong> quickly.</p> <p>In the end, I guess it worked out, but looking back on <strong>how</strong> I learned everything, there's definitely a way to streamline it:</p> <p>The biggest benefit you can give yourself is to get the fundamentals down. Pick a single language (something like Ruby, JavaScript, or Python) and learn the hell out of it. The three I listed are pretty simple and you can see results quickly, but they can also be insanely powerful when you start adding in frameworks and the like later.</p> <h2>Ruby:</h2> <p><a href="https://www.codecademy.com/learn/ruby">Ruby - Codecademy</a>: This gives you an interactive shell (so you don't need to install anything), teaches you the basics of the language (variables, functions, methods), and gives you a quick way to test and see the results of the code you're writing. This is what I used to pick up Ruby, and it helped a ton.</p> <p><a href="https://learnrubythehardway.org/book/">Learn Ruby The Hard Way</a>: Don't let the title intimidate you. It's all pretty straightforward, and the author makes sure you have a thorough understanding of WHY the things you're building work the way they do.
That insight can help a lot later down the road.</p> <h2>JavaScript</h2> <p><a href="https://www.freecodecamp.com/">Free Code Camp</a>: This is an open-source learning framework. It takes you from the very basics of using JavaScript as a language to learn fundamentals (not necessarily for web development, but it walks you through a LOT). It also provides you with a support network of other people to talk to about solving particular problems, and eventually folds more in (HTML, CSS, APIs) to continue building bigger and better things. This is what I used to learn most of what I know now, and I consider it invaluable.</p> <p>It's important to note that you don't need to finish Free Code Camp. They add new material so fast that I'm not sure anyone ever has. It wouldn't hurt to finish one of the modules and get a certificate. Employers like to see that you can commit to something and finish it, but if you want to move to working on personal projects after doing a few of theirs, absolutely make that move.</p> <p><a href="https://www.codecademy.com/learn/javascript">JavaScript - Codecademy</a>: Again, this gives you an interactive shell so that you can write, test, and see the results of your code right in your browser. This really focuses on fundamentals.</p> <h2>Python</h2> <p><a href="https://automatetheboringstuff.com/">Automate The Boring Stuff With Python</a>: This is a fun way to learn a language. You pretty quickly start building some things that help you in real life.</p> <p><a href="https://www.codecademy.com/learn/python">Python - Codecademy</a>: Again, this gives you an interactive shell so that you can write, test, and blah blah blah. Starting to see a pattern here? Codecademy is really good for picking up the syntax of a new language.</p> <p>I know that's ^ a lot. But like I said, pick one for now and roll with it.</p> <p>Once you have the basics of a language down, it's easy enough to jump from one to another.
I find that the Codecademy courses are enough to help me understand the syntax of a new language, and anything they don't teach me, the language's official documentation will. But now you're getting to the point of building things on your own, whether it be scripts, sites, or applications.</p> <p>YOU NEED. TO KNOW. VERSION CONTROL.</p> <p>I recommend Git. It's widely available, open source (created by the guy who made the Linux kernel), and it plays nice with <a href="https://github.com/">GitHub</a> and <a href="https://bitbucket.org/">Bitbucket</a>. It helps you keep track of changes in your code, helps you share code with others (so they can read it and recommend changes, or use it as a basis to hire you), and offers a convenient way to have offsite backups.</p> <h2>Git</h2> <p><a href="https://try.github.io/levels/1/challenges/1">GitHub's Git Introduction</a>: This one is awesome for beginners and mostly teaches the command line (you can do this from any operating system). Do it twice. Do it three times. Just make sure you understand it.</p> <p><a href="https://guides.github.com/activities/hello-world/">GitHub's Git Lesson</a>: Once you have the Git Intro down, give this one a shot. It teaches you how Git interacts with GitHub, and helps the idea of branches and pull requests make a little more sense.</p> <h2>Extra Materials On Git</h2> <p><a href="https://www.youtube.com/watch?v=Y9XZQO1n_7c">Learn Git in 20 Minutes</a>: This is a pretty well-known talk that can help get a lot of the fundamentals down. Git is going to be a little different than anything you're used to, so it's important to understand the concepts. This uses visual aids to help it sink in.</p> <p><a href="https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud">Getting Git Right</a>: This is an incredible resource from Atlassian. It covers everything from basic to advanced Git commands and different workflows, and answers almost any question you'd ever have about Git.
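To get a feel for the basic loop those resources teach, here's a minimal first session. The directory name, identity, and commit message are just placeholders; only `git` itself is assumed to be installed.

```shell
# Create a project folder and turn it into a Git repository.
mkdir demo-project
cd demo-project
git init

# Tell Git who you are (placeholder identity, recorded in commit metadata).
git config user.name "Your Name"
git config user.email "you@example.com"

# Make a change, stage it, and record it as a commit.
echo "# Demo" > README.md
git add README.md
git commit -m "Initial commit"

# See your history: one line per commit.
git log --oneline
```

From there, branching, merging, and pushing to a GitHub or Bitbucket remote are exactly what the tutorials above walk you through.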
Keep this in your bookmarks bar; you're going to need it.</p> <p>From there, I'd say just keep building stuff. Start adding frameworks like <a href="https://www.meteor.com/">Meteor</a>, <a href="https://www.djangoproject.com/">Django</a>, or <a href="http://rubyonrails.org/">Rails</a>. Find something you'd like to use, like a small application to let you and your co-workers keep track of which lunch spots you like, or something to keep track of how much weight you lift in a year.</p> <p>Personally, I built a <a href="http://crashthebot.net/">few small things for FreeCodeCamp</a>, and just kept rolling with it. Once you build a few projects, you'll find a new piece of technology that makes a certain part of development easier. Like using <a href="http://handlebarsjs.com/">Handlebars</a> and realizing that templating engines are a game changer. And then learning about asynchronous database calls with <a href="https://www.meteor.com/">Meteor</a>, being able to make <a href="https://cordova.apache.org/#supported_platforms_section">applications for several platforms using one codebase</a>, and using frameworks like <a href="http://rubyonrails.org/">Rails</a> to abstract away a lot of the boring stuff in web development. You just have to keep going and continuing to build more interesting things.</p> <p>It's a big, wide world.</p> <p>And Reddit user <a href="https://reddit.com/u/eru_melkor">/u/eru_melkor</a> makes a really, really, really good point:</p> <blockquote> <p>Don't suffer from analysis-paralysis.</p> </blockquote> <p>There are going to be points where it seems like there are far too many options out there. Don't let that stop you from building. It's a big hurdle that most of us have to get over.</p> <blockquote> <p>But Henry, how do I know I'm using the right tool?</p> </blockquote> <p>Honestly, in the beginning, you don't. You just build, find a pain point, find something to relieve it, and move on.
I didn't even touch on hosting, testing, containerization, or continuous integration here, but that's because you don't need that stuff yet. Just build using what you know until you come to a logical point to add something else into the mix.</p> <p>And I know this is a lot. I truly get it, but we all have to start somewhere. I mean, just check out <a href="http://crashthebot.net/first_site/">the first web page I built back in 2011</a>.</p> <p>/Cringe.</p> <p>But now I have a <a href="https://henryneeds.coffee/">pretty good resume</a> under my belt, with <a href="https://github.com/Quinncuatro">plenty more projects</a>. You just have to keep chugging away, and in a year or two you'll be looking back on some of the old stuff you wrote, thinking about how impossible it once seemed to get to where you are.</p> <p>Life's funny like that.</p> <p>And truly, if any of you reading this ever need help with any of it (syntax, getting stuff installed, setting up a development environment, getting your first website hosted), absolutely feel free to shoot me a <a href="https://reddit.com/u/quinncuatro">PM on Reddit</a>. I love helping people get their start, so reach out!</p> CareerNART 2013 | Planninghttps://henryneeds.coffee/blog/nart-2013-planning/https://henryneeds.coffee/blog/nart-2013-planning/This outlines the planning stage of a cross-half-country roadtrip I took in the summer of 2013.Wed, 09 Dec 2015 00:00:00 GMT<blockquote> <p>Trust your gut, but do what makes for a better story.</p> </blockquote> <p>None of this was even an idea yet. All I knew was that I was tired of being told by my supervisors in all too many ways that my behavior as a resident advisor had been disappointing. Had they hired me to watch a small dorm full of similarly aged college kids? Sure. Were they paying me money to hang out with residents that I would have been spending time with anyway? Absolutely.
I owed them for that, but after being a camp counselor for so many years and developing a rhythm for how I deal with a disturbance, I wanted to do this job my own way.</p> <p>Rachel set her pen and notebook on her desk before looking me dead in the eye.</p> <p>"Henry, I'm honestly tired of having this conversation every month. You need to fill out the paperwork for the programs you run. If not because your job requires it, know it gives the school an idea of where our funding is going and what students our programs are reaching. If we don't turn in that paperwork, we get less money."</p> <p>Now, Rachel was my immediate supervisor. She was in charge of a group of RAs that made up the west quadrant of campus. We called ourselves the Two Bit Punks.</p> <p>The programs she was referring to were some of my few contractual obligations in exchange for free housing. We were to run two every month: one fun one just to build a sense of community in the dorm, and another that fit an aspect of our "7 C's Social Change Model." They weren't particularly hard to do. Fill out a planning form, run an event, have some fun, teach some stuff, get signatures to prove people were there, and fill out a fun little wrap-up form. I was coming from a camp environment where everything was done super off the cuff, and where we were trusted to run wholesome activities for sometimes over one hundred children at a time. It was actually that job that got me the gig with the college. Why hire us if you wouldn't trust us implicitly with the lives and well-being of our residents and our actual buildings?</p> <p>"I have a copy of your schedule, so I know you don't have anything going on for the next few hours.
I need you to go somewhere you can focus and catch up on your paperwork, or we need to have a bigger discussion about your employment here."</p> <p>So I went to the cafeteria, pulled out my laptop, and started planning a road trip.</p> <p>I had two months of sophomore year left, with a month off between that and my summer job starting up, and just wanted to be in Florida. It had been a cold winter in Vermont and I needed some relaxing time in the sun.</p> <p>The planning for the roadtrip started as these things often do. Pull up a map, mark all of the places one would like to go, and start plotting a route through as many of the marks as one can. I looked up sporting events and concert dates, and started looking up old friends and forgotten distant family members. This is one of the few times my childhood experience of being an Air Force brat has come in handy. My father retired as a Master Sergeant after twenty years of service. In that time, or at least in the span of my first fourteen years leading up to his retirement, I had lived in Connecticut, California, Tokyo, Virginia, Arkansas, and New Jersey, and ended up back in Connecticut.</p> <p>I had moved around as a kid. Quite a lot, actually. It was miserable at the time. Having to leave a group of friends every couple of years only to end up in an entirely new part of the country, dropped into a foreign culture, and told to start the whole process of making new friends over. It was traumatic. It took me a long time after settling down in Connecticut (fourth grade through the end of high school) to be able to make lasting friendships. There was always the thought deep in the back of my head that I might move and lose contact with these people forever. Close friends were in short supply. Keep in mind, this was before social media took off, so keeping in touch with people as a child was a tall order.</p> <p>What came of this, though, is that I met a lot of different people living in a lot of different places.
And whether it happened before or after I left for a new home, parents of my friends would be reassigned and move to different parts of the country too.</p> <p>That realization was when I got the real inspiration for what this roadtrip, and subsequently this chronicle of it, would ultimately turn into. Sitting in the dining hall, with a plate of untouched salad and a slice of pizza next to me, I began hunting down all of my old friends from my childhood. I found Kelsey in Arkansas. Jenna in North Carolina. Matt and Nathan in Ohio. I found friends of friends that I had spoken to online in the past, like Jordan in Michigan. Family scattered around that I hadn't spoken to in a while. My cousin Dylan down in Florida. My aunt and uncle on the Gulf in Alabama. People I'd had brief encounters with. My friend Brit from summer camp in Quebec. A girl I'd shared a crazy, right-out-of-a-movie experience with in the Charlotte Airport had ended up in Toronto.</p> <p>Facebook helped a lot. So did Google, and a few other tools I found online. I called in favors and asked if anyone had friends scattered around the country who would let me crash for a night. I got a rough draft of my itinerary together. I gave myself 32 days to drive around half the United States and part of Canada, all solo. I wanted this to be an opportunity to reconnect with all of these people I had lost touch with. I wanted cool pictures to show everyone. Fun stories to share. Amazing, unique experiences to make me a better human being. Little did I know exactly what I was getting myself into.</p> <p>This road trip is arguably the most interesting thing I've done with my life up to this point. Even more than reconnecting and spending time with people who have meant so much to me in different parts of my youth, it gave me an opportunity to better know myself. There is something oddly calming and zen-like about being alone on the road for that long. There are no obligations other than what you want to do that day.
There are no alarms; you make your own schedule and wake up with the sun. It provides a lot of time for self-reflection. Between college courses, working two part-time jobs, and trying to have some semblance of a social life, that important downtime to think and decompress can slip through the cracks.</p> <p>I had been on a couple of road trips before. I went down the east coast with my best friend and his parents once. We got some professional golfing lessons, had some great local food, and had our first real taste of whiskey, all while we were sophomores in high school. That same friend and I took a roadtrip down to Florida after we graduated. Marathoned it there and back in what I believe was a week. We thought going that far that quickly was a challenge back then, but we've since learned. This solo month-long roadtrip was meant to be a personal challenge to me. I would have to drive, on average, over one hundred miles a day to make it work. I would have to learn to be humble and accept help that was offered to me along the way. But scariest of all, I would have to endure a month of having mostly myself for company. The mind does crazy things when isolated. But that's a story for later in the book.</p> <p>I skipped some classes the day I started planning. I walked into that cafeteria a little after noon and didn't realize it had gotten dark until I stepped outside to head home. I had a rough idea of where I wanted to go, and about how long I had to spend in each city. The goal was Florida, though. The goal has always been Florida. Whenever I've had any good chunk of time off from school, I've found a way to boogie down there. This was it, though. Three thousand miles over the course of a month. Meeting up with friends I hadn't spoken to face-to-face since I was a toddler. Trying new foods. Soaking up as much warm weather as I possibly could. Taking every opportunity that came my way. North American Road Trip 2013.</p> Travel