{ "version": "https://jsonfeed.org/version/1", "title": "", "home_page_url": "https://atevans.com/", "feed_url": "https://atevans.com/feed.json", "description": "It's like StackOverflow, but I answer my own questions", "icon": "https://atevans.com/icons/apple-touch-icon.png", "favicon": "https://atevans.com/favicon.ico", "expired": false, "items": [ { "id": "https://atevans.com/2023/12/23/soelim-binary-macos.html", "title": "Where to find soelim binary in macOS", "summary": null, "content_text": "brew install groff - this contains the soelim binary, and whatever you are trying to build should now build correctly. Good luck!Longer version: you may have encountered an error message like: Entering subdirectory man1PAGES=`cd .; echo *.1`; \\\tfor page in $PAGES; do \\\t\tsed -e \"s%LDVERSION%2.4.44%\" \\...\tdone/bin/sh: soelim: command not found/bin/sh: soelim: command not found.../bin/sh: soelim: command not foundmake[5]: *** [all-common] Error 127make[4]: *** [all-common] Error 1make[3]: *** [all-common] Error 1make[2]: *** [all-common] Error 1make[1]: *** [sbin/slapadd] Error 2make: *** [gitlab-openldap/libexec/slapd] Error 2I encountered this while trying to set up LDAP for local testing using the GitLab Development Kit. It downloads an older version of OpenLDAP, configures, compiles, and installs locally in a pretty sane way. Unfortunately, when it attempts to install the man pages for OpenLDAP, it tries to “glue” all the associated files together using soelim instead of something more universal. Since this was missing on my MacBook Pro, the install failed there.My attempts to bypass man-page-compilation ended up with me digging through a lot of compile-time code and error messages, and it turned out to be a lot simpler to just get the soelim binary locally. The main barrier to this was my own StackOverflow / DuckDuckGo-fu; I didn’t come up with where the heck this soelim binary is. 
Ended up getting help from some awesome colleagues and finding the answer on Slack. Figured I should post this somewhere publicly searchable on the internet.The groff package contains the soelim binary, so now you should have everything you need to find and start fixing the next dependency problem you encounter.", "content_html": "
brew install groff - this contains the soelim binary, and whatever you are trying to build should now build correctly. Good luck!
Longer version: you may have encountered an error message like:
Entering subdirectory man1
PAGES=`cd .; echo *.1`; \\
\tfor page in $PAGES; do \\
\t\tsed -e \"s%LDVERSION%2.4.44%\" \\
...
\tdone
/bin/sh: soelim: command not found
/bin/sh: soelim: command not found
...
/bin/sh: soelim: command not found
make[5]: *** [all-common] Error 127
make[4]: *** [all-common] Error 1
make[3]: *** [all-common] Error 1
make[2]: *** [all-common] Error 1
make[1]: *** [sbin/slapadd] Error 2
make: *** [gitlab-openldap/libexec/slapd] Error 2
I encountered this while trying to set up LDAP for local testing using the GitLab Development Kit. It downloads an older version of OpenLDAP, configures, compiles, and installs locally in a pretty sane way. Unfortunately, when it attempts to install the man pages for OpenLDAP, it tries to “glue” all the associated files together using soelim instead of something more universal. Since this was missing on my MacBook Pro, the install failed there.
My attempts to bypass man-page compilation ended up with me digging through a lot of compile-time code and error messages, and it turned out to be a lot simpler to just get the soelim binary locally. The main barrier was my own StackOverflow / DuckDuckGo-fu; I couldn’t figure out where the heck this soelim binary comes from. I ended up getting help from some awesome colleagues and finding the answer on Slack. Figured I should post this somewhere publicly searchable on the internet.
The groff package contains the soelim binary, so now you should have everything you need to find and start fixing the next dependency problem you encounter.
", "url": "https://atevans.com/2023/12/23/soelim-binary-macos.html", , "date_published": "2023-12-23T06:15:00+00:00", "date_modified": "2023-12-23T06:15:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2023/09/20/aws-textract-results-in-ruby.html", "title": "AWS Textract Results in Ruby", "summary": null, "content_text": "tl;dr: check out this sample code on Github for an AWS Textract client and results parserAt my previous company, we wanted to use Textract to get some table-based data out of PDFs. This was in the medical field, so the only “programmatic” interface we had to the system was to set up an inbox that would receive emails from it, and those emails might contain PDFs with the data we wanted. Medicine be like that.However, the output of Textract can be a little hard to work with. It’s just a big bag of “blocks” - elements it has identified on the page - with a geometry, confidence, and relationships to each other. They’re returned as a paginated list, and you have to reconstruct the element hierarchy in your client. This became critical when trying to visualize where in a given “type” of document the information we wanted was located. Was it the last LINE element on a page? Was it a WORD element located inside some other elements? I wanted to visualize this to get a better look.The result-parsing code in their tutorials is in Python, and is of the most uninspiring big-bag-of-functions type, so I thought about how to manage this in Ruby. Mostly I just wanted some data structure where I could call .parent on particular element and recurse up to the page level, kind of like the DOM in html-land.I ended up with some code that looks like this:class Node attr_reader :block, :parent, :children def initialize(block, parent: nil, blocks_map: {}) @block = block @parent = parent @children = [] return if block.relationships.nil? block.relationships.each do |rel| next unless rel.type == 'CHILD' next if rel.ids.nil? || rel.ids.empty? 
rel.ids.each do |block_id| blk = blocks_map[block_id] next if blk.nil? @children << self.class.new(blk, parent: self, blocks_map: blocks_map) end end endendThis gave me a tree object with a reasonable structure. If I wanted to get fancier, I could add a grep method to search the node text and its children, or other recursive tree-based functionality. If we wanted to get really fancy, we could sort the tree by x * y in the geometry, making it easy to walk the tree from top-left to bottom-right.But since we were writing pretty basic extractors, this was enough to let me walk through, find the element I wanted with the right block.text value, and walk up its parents to see where it lived in the document structure.I added some code to print the whole tree to console so you can easily visualize it: def to_s txt = if block.text.nil? '' elsif block.text.length > 10 \"#{block.text[0..7]}...\" else block.text end \"<#{block.block_type} #{txt} #{block.id}>\" end def print_tree(indent = 0) indent_txt = indent > 0 ? ' ' * (indent * 2) : '' puts \"#{indent_txt}#{to_s}\" children.each {|chld| chld.print_tree(indent + 1) } endThis is in leiu of just making a nicer inspect method and using something like awesome_print or the built-in pp method. While those are great, we don’t really need the Ruby object ids and other properties for this visualization - they just clutter up the terminal. We could overwrite def inspect to show only the info we want, but I feel like that’s a POLA violation, so it’s better to just write this functionlaity where it belongs.If you’d like to run a Textract analysis and play with the results, I’ve got the sample code up on Github. It’s not well-tested or ready for deployment, but it can be a starting point if you want to do a quick integration of Textract into your own Ruby project. Hope this helps someone!", "content_html": "tl;dr: check out this sample code on Github for an AWS Textract client and results parser
At my previous company, we wanted to use Textract to get some table-based data out of PDFs. This was in the medical field, so the only “programmatic” interface we had to the system was to set up an inbox that would receive emails from it, and those emails might contain PDFs with the data we wanted. Medicine be like that.
However, the output of Textract can be a little hard to work with. It’s just a big bag of “blocks” - elements it has identified on the page - with a geometry, confidence, and relationships to each other. They’re returned as a paginated list, and you have to reconstruct the element hierarchy in your client. This became critical when trying to visualize where in a given “type” of document the information we wanted was located. Was it the last LINE element on a page? Was it a WORD element located inside some other elements? I wanted to visualize this to get a better look.
The result-parsing code in their tutorials is in Python, and is of the most uninspiring big-bag-of-functions type, so I thought about how to manage this in Ruby. Mostly I just wanted some data structure where I could call .parent on a particular element and recurse up to the page level, kind of like the DOM in html-land.
I ended up with some code that looks like this:
class Node
  attr_reader :block, :parent, :children

  def initialize(block, parent: nil, blocks_map: {})
    @block = block
    @parent = parent
    @children = []
    return if block.relationships.nil?

    block.relationships.each do |rel|
      next unless rel.type == 'CHILD'
      next if rel.ids.nil? || rel.ids.empty?

      rel.ids.each do |block_id|
        blk = blocks_map[block_id]
        next if blk.nil?

        @children << self.class.new(blk, parent: self, blocks_map: blocks_map)
      end
    end
  end
end
This gave me a tree object with a reasonable structure. If I wanted to get fancier, I could add a grep method to search the node text and its children, or other recursive tree-based functionality. If we wanted to get really fancy, we could sort the tree by x * y in the geometry, making it easy to walk the tree from top-left to bottom-right.
But since we were writing pretty basic extractors, this was enough to let me walk through, find the element I wanted with the right block.text value, and walk up its parents to see where it lived in the document structure.
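To make the shape of that tree concrete, here’s a minimal, self-contained sketch of wiring the Node class up to Textract-style output. The Block and Relationship structs below are hypothetical stand-ins I made up for the objects the AWS SDK returns; the real ones respond to the same readers (id, block_type, text, relationships):

```ruby
# Hypothetical stand-ins for the AWS SDK's block structs (illustration only).
Relationship = Struct.new(:type, :ids, keyword_init: true)
Block = Struct.new(:id, :block_type, :text, :relationships, keyword_init: true)

class Node
  attr_reader :block, :parent, :children

  def initialize(block, parent: nil, blocks_map: {})
    @block = block
    @parent = parent
    @children = []
    return if block.relationships.nil?

    block.relationships.each do |rel|
      next unless rel.type == 'CHILD'
      next if rel.ids.nil? || rel.ids.empty?

      rel.ids.each do |block_id|
        blk = blocks_map[block_id]
        next if blk.nil?

        @children << self.class.new(blk, parent: self, blocks_map: blocks_map)
      end
    end
  end
end

# Fake a tiny PAGE -> LINE -> WORD hierarchy, like Textract's flat block list.
word = Block.new(id: 'w1', block_type: 'WORD', text: 'Total', relationships: nil)
line = Block.new(id: 'l1', block_type: 'LINE', text: 'Total: $10',
                 relationships: [Relationship.new(type: 'CHILD', ids: ['w1'])])
page = Block.new(id: 'p1', block_type: 'PAGE', text: nil,
                 relationships: [Relationship.new(type: 'CHILD', ids: ['l1'])])
blocks_map = { 'w1' => word, 'l1' => line, 'p1' => page }

root = Node.new(page, blocks_map: blocks_map)
leaf = root.children.first.children.first
leaf.parent.parent.block.block_type  # => "PAGE" - walk back up, DOM-style
```

In a real integration the blocks_map would be built from the paginated Textract response, keyed by each block’s id.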
I added some code to print the whole tree to console so you can easily visualize it:
def to_s
  txt = if block.text.nil?
          ''
        elsif block.text.length > 10
          \"#{block.text[0..7]}...\"
        else
          block.text
        end

  \"<#{block.block_type} #{txt} #{block.id}>\"
end

def print_tree(indent = 0)
  indent_txt = indent > 0 ? ' ' * (indent * 2) : ''
  puts \"#{indent_txt}#{to_s}\"
  children.each { |chld| chld.print_tree(indent + 1) }
end
This is in lieu of just making a nicer inspect method and using something like awesome_print or the built-in pp method. While those are great, we don’t really need the Ruby object ids and other properties for this visualization - they just clutter up the terminal. We could overwrite def inspect to show only the info we want, but I feel like that’s a POLA violation, so it’s better to just write this functionality where it belongs.
If you’d like to run a Textract analysis and play with the results, I’ve got the sample code up on Github. It’s not well-tested or ready for deployment, but it can be a starting point if you want to do a quick integration of Textract into your own Ruby project. Hope this helps someone!
", "url": "https://atevans.com/2023/09/20/aws-textract-results-in-ruby.html", , "date_published": "2023-09-20T20:11:00+00:00", "date_modified": "2023-09-20T20:11:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2023/09/06/dependabot-not-playing-nice-with-yarn.html", "title": "Dependabot not playing nice with Yarn", "summary": null, "content_text": "tl;dr - if Dependabot isn’t opening a PR to fix a vulnerable package, especially with npm + yarn, try it manually every so often. Use commands like: npm ls vulnpackage to see where the vulnerable package is in the npm dep tree yarn why --recursive vulnpackage for Yarn’s explanation of all dep trees including the vulnerable package yarn up --recursive vulnpackage to upgrade everything in the dep tree from the vulnerable package, but keeping the version constraints specified in your package.json (only available in Yarn 3.0+)This was an issue we encountered at my last job. We used Github’s Dependabot to track CVE’s in our supply chain. There are similar tools from Snyk and others, but Dependabot comes included with Github, and it’s compatible with a pretty broad array of languages.There was a frustrating vulnerability that was open for some time - vm2 had a sandbox escape vulnerability, and it was required about 7 levels deep by Microsoft AppCenter’s react-native-code-push library. Rather than attempt to mitigate the issue, the vm2 maintainer decided to discontinue the project.Eventually a mitigation was found, since the code path using vm2 in code-push is a very rare use case, and could be removed by using a newer version of superagent . code-push was updated, then react-native-code-push was updated to fix the dependency tree and allay Snyk and Dependabot alerts for AppCenter users.Dependabot should have opened a pull request to update our app. Although react-native-code-push was pinned to version v.8.0.0 , the dependency specification for code-push was at \"code-push@npm:^4.1.0\": in the yarn.lock file. 
When we ran yarn up --recursive code-push, it successfully updated code-push in the yarn.lock file, removed the vm2 dependency, and looked ready to go. But Dependabot was throwing an error and saying that the dependency “could not be upgraded.”I’ve managed to nearly reproduce this situation in a public repo - check out yarn-dependabot-example on my Github. The only difference here is that Dependabot doesn’t seem to be trying to open a pull request to fix the issue. Running yarn up --recursive code-push fixes it immediately, affecting only the lockfile.I wasn’t able to clearly determine why Dependabot considered this issue unfixable, but it seems clear that the dep tree shaker written for Dependabot and the one Yarn uses are slightly different. For any “stubborn” vulnerabilities that aren’t getting fixes auto-generated, it’s worth trying the simple stuff manually now & then to see if a fix is available.", "content_html": "tl;dr - if Dependabot isn’t opening a PR to fix a vulnerable package, especially with npm + yarn, try it manually every so often. Use commands like:
npm ls vulnpackage to see where the vulnerable package is in the npm dep tree
yarn why --recursive vulnpackage for Yarn’s explanation of all dep trees including the vulnerable package
yarn up --recursive vulnpackage to upgrade everything in the dep tree from the vulnerable package, but keeping the version constraints specified in your package.json (only available in Yarn 3.0+)
This was an issue we encountered at my last job. We used Github’s Dependabot to track CVEs in our supply chain. There are similar tools from Snyk and others, but Dependabot comes included with Github, and it’s compatible with a pretty broad array of languages.
There was a frustrating vulnerability that was open for some time - vm2 had a sandbox escape vulnerability, and it was required about 7 levels deep by Microsoft AppCenter’s react-native-code-push library. Rather than attempt to mitigate the issue, the vm2 maintainer decided to discontinue the project.
Eventually a mitigation was found, since the code path using vm2 in code-push is a very rare use case, and could be removed by using a newer version of superagent. code-push was updated, then react-native-code-push was updated to fix the dependency tree and allay Snyk and Dependabot alerts for AppCenter users.
Dependabot should have opened a pull request to update our app. Although react-native-code-push was pinned to version v8.0.0, the dependency specification for code-push was at \"code-push@npm:^4.1.0\": in the yarn.lock file. When we ran yarn up --recursive code-push, it successfully updated code-push in the yarn.lock file, removed the vm2 dependency, and looked ready to go. But Dependabot was throwing an error and saying that the dependency “could not be upgraded.”
I’ve managed to nearly reproduce this situation in a public repo - check out yarn-dependabot-example on my Github. The only difference here is that Dependabot doesn’t seem to be trying to open a pull request to fix the issue. Running yarn up --recursive code-push fixes it immediately, affecting only the lockfile.
I wasn’t able to clearly determine why Dependabot considered this issue unfixable, but it seems clear that the dep tree shaker written for Dependabot and the one Yarn uses are slightly different. For any “stubborn” vulnerabilities that aren’t getting fixes auto-generated, it’s worth trying the simple stuff manually now & then to see if a fix is available.
", "url": "https://atevans.com/2023/09/06/dependabot-not-playing-nice-with-yarn.html", , "date_published": "2023-09-06T16:33:00+00:00", "date_modified": "2023-09-06T16:33:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2023/08/30/roo-xlsx-parsing-invalid-value-for-integer-argumenterror.html", "title": "Roo XLSX Parsing: Invalid value for Integer (ArgumentError)", "summary": null, "content_text": "tl;dr - got a weird error when opening a .xlsx sheet using Roo, wrote a quick fix for it. The error had a stack trace like this:\t12: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:234:in `upto'\t11: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:235:in `block in each'\t10: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:224:in `block (2 levels) in extract_cells'\t 9: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:101:in `cell_from_xml'\t 8: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:234:in `each'\t 7: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:234:in `upto'\t 6: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:235:in `block in each'\t 5: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:114:in `block in cell_from_xml'\t 4: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:172:in `create_cell_from_value'\t 3: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:172:in `new'\t 2: from 
/Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/cell/number.rb:16:in `initialize'\t 1: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/cell/number.rb:27:in `create_numeric'/Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/cell/number.rb:27:in `Integer': invalid value for Integer(): \"\" (ArgumentError)I made a pull request with a fix here, but it hasn’t been addressed or merged. Kind of a bummer, feels like this happens nearly every time I try to contribute to open-source projects. You can implement the fix yourself by monkey-patching like so:# frozen_string_literal: truemodule Roo class Excelx class Cell class Number < Cell::Base def create_numeric(number) return number if Excelx::ERROR_VALUES.include?(number) case @format when /%/ cast_float(number) when /\\.0/ cast_float(number) else (number.include?('.') || (/\\A[-+]?\\d+E[-+]?\\d+\\z/i =~ number)) ? cast_float(number) : cast_int(number, 10) end end def cast_float(number) return 0.0 if number == '' Float(number) end def cast_int(number, base = 10) return 0 if number == '' Integer(number, base) end end end endendIf you’re running Rails, drop that in lib/roo/excelx/cell/number.rb and make sure it’s require -ed in an initializer or on app boot. Your app should now be able to handle the “bad” spreadsheets producing the error.The longer storyThis wasn’t the initial error I was debugging, I think the original error was a NoMethodError on the Roo::Spreadsheet object or something. From seeing the initial error on Bugsnag, I was able to determine which file it came from, but it seemed like the stack trace was truncated. So I downloaded the sheet and tried to reproduce locally. The results were weird: I didn’t get the production error, and didn’t seem to get a “proper” local error, either. 
When running inside a rails console, I just saw the message:> xlsx.each_row_streaming {|row| pp [:row, row] }...lots of output...[:row, [#<Roo::Excelx::Cell::String:0x00007f8982913580 @cell_value=\"Header 1\", @coordinate=[1, 1], @value=\"Header 1\">, #<Roo::Excelx::Cell::String:0x00007f8982913148 @cell_value=\"Header 2\", @coordinate=[1, 2], @value=\"Header 2\">, #<Roo::Excelx::Cell::String:0x00007f8982912ea0 @cell_value=\"Header 3\", @coordinate=[1, 3], @value=\"Header 3\">]]invalid value for Integer(): \"\"> $!nil> $@nil> Like, huh? Where’s the stack trace? Why isn’t the error in $! , the magic Ruby global for “last error message?” I did manage to get a full stack trace by writing a test that tried to parse the “bad” spreadsheet. Maybe IRB or Pry was swallowing the error somehow.Once I got the stack trace above, it looked like something was wrong in Roo itself, and not in our code or the way we were calling it. It seemed like something that should be a simple fix. The “Numbers” app that comes with OSX opened the bad spreadsheet no problem, and uploading to Google Sheets also didn’t pose any issue. So what was going on with Roo? Clearly it was trying to parse something as an Integer when it was an empty string, but how did it get there?Investigating the raw XMLXLSX documents are just XML files zipped up in a particular way. You can unzip them and read the raw XML if you want. Since the each_row_streaming command above indicated which cell was causing the problem, I thought I’d dig in and see if it was some weird Unicode character conversion, mis-encoded file, or what it might be.To unzip your XLSX file, just use your OS’s built-in unzipping tool:$ unzip Example-sheet.xlsx -d Example-sheet-unzip$ find empty_sheetempty_sheetempty_sheet/[Content_Types].xmlempty_sheet/_relsempty_sheet/_rels/.relsempty_sheet/xlempty_sheet/xl/workbook.xmlempty_sheet/xl/worksheetsempty_sheet/xl/worksheets/sheet1.xml...snip...$ open xl/worksheets/sheet1.xmlYup! Sure is XML. 
It’s not, like, super-readable XML, but you can kinda reverse-engineer it. Like reading a restaurant menu in a language you don’t know.<c r=\"K58\" s=\"1\" t=\"s\"> <v>4</v></c>Here’s a cell in the spreadsheet. You can see it’s cell K58 - column K, row 58. The t=\"s\" only seems to appear for cells with string values in them, not the numeric ones, so it probably means “type=string” . The s=\"1\" , I kinda think means “spreadsheet 1” , since each workbook can have multiple tabs of sheets on it that can reference each other. Don’t quote me on that, it was out of scope for this investigation.The value inside the <v>4</v> tags indicates two possible things: for numeric-type cells, it’s the numeric value in the cell, either int or float for string-type cells, it’s a reference to the file sharedStrings.xml , which is just a big array of all the strings in the spreadsheet. Helps reduce filesize by de-duping strings, I guessKnowing all that, and knowing the specific error message invalid value for Integer(): \"\" , I could kind of deduce what might have gone wrong with the cell that seemed to stump the Roo parser:<c r=\"L58\"> <v/></c>I checked some other spreadsheets that didn’t have any errors, and I couldn’t see any instances of a self-closing tag like this. Other spreadsheets had full valid XML tags:<c r=\"N26\"> <v></v></c>Roo didn’t blow a gasket on this type of cell. It seemed like this might be the issue. I did some Googling and figured out how to zip the dir back up as an XLSX file so I could test out my theory:Modifying the XML in an XLSX file for fun & profitI followed this guide to unzipping and re-zipping XLSX files. Re-posting in case the original site goes down or something: unzip Example-sheet.xlsx -d Example-sheet-unzip cd into the extracted zip dir Example-sheet-unzip make edits to the xml as desired, probably in a file like xl/worksheets/sheet1.xml use python2 ../zipdir.py Example-sheet-edited.xlsx . 
(see script below) to compress the objects this should generate an .xlsx file open-able by Numbers, Excel, GSheets#!/usr/bin/python# Name: zipdir.py# Version: 1.0# Created: 2016-11-13# Last modified: 2016-11-13# Purpose: Creates a zip file given a directory where the files to be zipped# are stored and the name of the output file. Don't include the .zip extension# when specifying the zip file name.# Usage: zipdir.py output_filename dir_name# Note: if the output file name and directory are not specified on the# command line, the script will prompt for them.import sys, shutilif len(sys.argv) == 1: dir_name = raw_input(\"Directory name: \") output_filename = raw_input(\"Zip file name: \")elif len(sys.argv) == 3: output_filename = sys.argv[1] dir_name = sys.argv[2]else: print \"Incorrect number of arguments! Usage: zipdir.py output_filename dir_name\" exit()shutil.make_archive(output_filename, 'zip', dir_name)Once I did all that and zipped the file back up, Roo was able to parse it and print the expected results in IRB and in my RSpec test case. Bingo! Scientific proof, basically.I also took a stab at generating some “bad” XML files myself, since I didn’t want to check a real file with production data into our git repo if I could avoid it. I couldn’t get Numbers or Google Sheets to generate a self-closing XML tag in the output. They only generated valid open-and-close-style tags even for empty cells.A little more investigation revealed the file came from an external web site’s reporting feature. The site was using Kendo UI with Angular on the front-end, which had a built-in export-to-Excel feature. Like most problems in web development, the blame falls on a horrible front-end framework reinventing the wheel with JavaScript.Developing the fixSince I couldn’t reproduce the exact path to make a “bad” file, I just wrote one myself. Made a quick, nearly-empty sheet in Numbers, exported it, and followed the steps above to change a valid <v></v> tag into an invalid <v/> tag. 
This I could check in to the repo along with an RSpec test reproducing the error condition. Now we could code something!Since I had the stack trace, I could look at the file directly. The relevant class is here on Github, though I usually just use bundle show roo to inspect the exact code on my machine. Since there’s only one place with Integer() in the file, and this is basically another type of nil-check, it seemed like a pretty easy fix.I’ve had bad luck getting fixes merged into open-source projects. Most maintainers seem overworked, short on time, and extremely particular about the kind of code they want in their repo. Even submitting a full pull request meeting the maintainers’ expectations for code style, testing, documentation, and all the other ancillary considerations, it can take a lot of back-and-forth and usually weeks to get a PR merged. I didn’t think we should wait this long to get our process fixed, so I went with monkey-patching as a first approach.I made the modifications you can see in the monkey-patch 🙈 above. I figured we’d probably want to check for this empty-string, no-format condition for both Float and Integer values, so it’s probably worth factoring out into a separate method. This would also make it easier to extend later with a rescue block, if needed.Added the patch to our repo, ran the specs to make sure it worked, and put up a pull request on our app. A fix is never really done until it’s verified to be working on production, so once I got code review, merged & deployed, I re-ran the file to verify it produced the expected results instead of an error. Good to go!UpstreamingSince I had written an essentially-empty XLSX file to test on locally, I figured it would be easy to make a PR to Roo itself. I cloned their repo, and their test suite was in RSpec and MiniTest and mostly what I expected. 
They had a whole pile of files with a particular naming scheme, and an easy test harness to parse the file and check for expected results.Converted the monkey-patch into a diff on the Roo::Excelx::Cell::Number class, added my empty file as a test case, and added a unit test for the class to boot. Maybe I could have written more tests to cover every possible branch condition, but I figured I’d get the PR up and see what the maintainers thought about the approach.Sadly I can’t end it with “they loved my PR and merged it” – not sure why my PR got ghosted when others are getting reviewed and merged. But at least we got it working for our app, so ¯\\(ツ)/¯Wrap-upHope that helps somebody out there! I learned a lot just debugging this one bad spreadsheet, so I figured it was worth a write-up. If you have any questions or comments, feel free to email me about it, since social media has kinda fallen off a cliff lately. Best of luck, internet programmers!", "content_html": "tl;dr - got a weird error when opening a .xlsx sheet using Roo, wrote a quick fix for it. The error had a stack trace like this:
\t12: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:234:in `upto'
\t11: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:235:in `block in each'
\t10: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:224:in `block (2 levels) in extract_cells'
\t 9: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:101:in `cell_from_xml'
\t 8: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:234:in `each'
\t 7: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:234:in `upto'
\t 6: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/nokogiri-1.15.4-x86_64-darwin/lib/nokogiri/xml/node_set.rb:235:in `block in each'
\t 5: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:114:in `block in cell_from_xml'
\t 4: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:172:in `create_cell_from_value'
\t 3: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/sheet_doc.rb:172:in `new'
\t 2: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/cell/number.rb:16:in `initialize'
\t 1: from /Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/cell/number.rb:27:in `create_numeric'
/Users/agius/.rbenv/versions/2.7.0/lib/ruby/gems/2.7.0/gems/roo-2.10.0/lib/roo/excelx/cell/number.rb:27:in `Integer': invalid value for Integer(): \"\" (ArgumentError)
I made a pull request with a fix here, but it hasn’t been addressed or merged. 
Kind of a bummer, feels like this happens nearly every time I try to contribute to open-source projects. You can implement the fix yourself by monkey-patching like so:
# frozen_string_literal: true

module Roo
  class Excelx
    class Cell
      class Number < Cell::Base
        def create_numeric(number)
          return number if Excelx::ERROR_VALUES.include?(number)

          case @format
          when /%/
            cast_float(number)
          when /\\.0/
            cast_float(number)
          else
            (number.include?('.') || (/\\A[-+]?\\d+E[-+]?\\d+\\z/i =~ number)) ? cast_float(number) : cast_int(number, 10)
          end
        end

        def cast_float(number)
          return 0.0 if number == ''

          Float(number)
        end

        def cast_int(number, base = 10)
          return 0 if number == ''

          Integer(number, base)
        end
      end
    end
  end
end
If you’re running Rails, drop that in lib/roo/excelx/cell/number.rb and make sure it’s require-d in an initializer or on app boot. Your app should now be able to handle the “bad” spreadsheets producing the error.
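The core of the failure is that Kernel#Integer is strict - unlike String#to_i, it raises on an empty string instead of returning 0. A tiny standalone sketch of the failure mode and the guarded cast (the cast_int helper here mirrors the patch, re-stated in isolation for illustration):

```ruby
# Integer() is strict: it raises ArgumentError on an empty string.
err = begin
  Integer('', 10)
rescue ArgumentError => e
  e.message  # describes the invalid empty value
end

# A guarded cast in the spirit of the patch: treat an empty cell as zero.
def cast_int(number, base = 10)
  return 0 if number == ''

  Integer(number, base)
end

cast_int('')        # => 0
cast_int('42')      # => 42
cast_int('ff', 16)  # => 255
```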
This wasn’t the initial error I was debugging; I think the original error was a NoMethodError on the Roo::Spreadsheet object or something. From seeing the initial error on Bugsnag, I was able to determine which file it came from, but it seemed like the stack trace was truncated. So I downloaded the sheet and tried to reproduce locally. The results were weird: I didn’t get the production error, and didn’t seem to get a “proper” local error, either. When running inside a Rails console, I just saw the message:
> xlsx.each_row_streaming {|row| pp [:row, row] }
...lots of output...
[:row, [#<Roo::Excelx::Cell::String:0x00007f8982913580 @cell_value=\"Header 1\", @coordinate=[1, 1], @value=\"Header 1\">, #<Roo::Excelx::Cell::String:0x00007f8982913148 @cell_value=\"Header 2\", @coordinate=[1, 2], @value=\"Header 2\">, #<Roo::Excelx::Cell::String:0x00007f8982912ea0 @cell_value=\"Header 3\", @coordinate=[1, 3], @value=\"Header 3\">]]
invalid value for Integer(): \"\"
> $!
nil
> $@
nil
>
Like, huh? Where’s the stack trace? Why isn’t the error in $! , the magic Ruby global for “last error raised?” I did manage to get a full stack trace by writing a test that tried to parse the “bad” spreadsheet. Maybe IRB or Pry was swallowing the error somehow.
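For what it’s worth, $! is only reliably populated while an exception is actually being handled; once the rescue clause finishes, Ruby restores its previous value, which may be part of why the console showed nil after the fact. A quick illustrative sketch (not from the original session):

```ruby
# $! points at the exception currently being handled -- it is set inside
# a rescue (or ensure) block, but it does not stick around indefinitely.
captured = nil
begin
  Integer('') # the same failure Roo hit
rescue ArgumentError
  captured = $! # non-nil here, while the error is being handled
end

puts captured.class # => ArgumentError
puts captured.message
```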
Once I got the stack trace above, it looked like something was wrong in Roo itself, and not in our code or the way we were calling it. It seemed like something that should be a simple fix. The “Numbers” app that comes with OSX opened the bad spreadsheet no problem, and uploading to Google Sheets also didn’t pose any issue. So what was going on with Roo? Clearly it was trying to parse something as an Integer when it was an empty string, but how did it get there?
XLSX documents are just XML files zipped up in a particular way. You can unzip them and read the raw XML if you want. Since the each_row_streaming command above indicated which cell was causing the problem, I thought I’d dig in and see if it was some weird Unicode character conversion, mis-encoded file, or what it might be.
To unzip your XLSX file, just use your OS’s built-in unzipping tool:
$ unzip Example-sheet.xlsx -d Example-sheet-unzip
$ find empty_sheet
empty_sheet
empty_sheet/[Content_Types].xml
empty_sheet/_rels
empty_sheet/_rels/.rels
empty_sheet/xl
empty_sheet/xl/workbook.xml
empty_sheet/xl/worksheets
empty_sheet/xl/worksheets/sheet1.xml
...snip...
$ open xl/worksheets/sheet1.xml
Yup! Sure is XML. It’s not, like, super-readable XML, but you can kinda reverse-engineer it. Like reading a restaurant menu in a language you don’t know.
<c r=\"K58\" s=\"1\" t=\"s\">
  <v>4</v>
</c>
Here’s a cell in the spreadsheet. You can see it’s cell K58 - column K, row 58. The t=\"s\" only appears on cells with string values in them, not the numeric ones - it means the cell’s type is “shared string” . The s=\"1\" is a style index pointing at formatting info in xl/styles.xml, not a sheet reference - though styling was out of scope for this investigation.
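Resolving a t=\"s\" cell like that one can be sketched with REXML from Ruby’s standard distribution. The shared-string array and row XML below are made up for illustration, not taken from the real sheet:

```ruby
require 'rexml/document'

# Made-up stand-ins for sharedStrings.xml and a fragment of sheet1.xml.
SHARED_STRINGS = ['Header 1', 'Header 2', 'Header 3', 'Snoot', 'Boop']
ROW_XML = <<~XML
  <row>
    <c r='K58' s='1' t='s'><v>4</v></c>
    <c r='K59'><v>12</v></c>
  </row>
XML

# A cell with t='s' stores an index into the shared-string table in <v>;
# a plain numeric cell stores its literal value there.
def resolve(cell, shared_strings)
  raw = cell.elements['v'].text.to_s
  cell.attributes['t'] == 's' ? shared_strings[Integer(raw)] : raw
end

doc = REXML::Document.new(ROW_XML)
doc.root.each_element('c') do |cell|
  puts format('%s -> %s', cell.attributes['r'], resolve(cell, SHARED_STRINGS))
end
# prints K58 -> Boop, then K59 -> 12
```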
The value inside the <v>4</v> tags indicates two possible things:
For a plain numeric cell, it’s the literal value of the cell.
For a t=\"s\" string cell, it’s an index into sharedStrings.xml , which is just a big array of all the strings in the spreadsheet. Helps reduce filesize by de-duping strings, I guess.
Knowing all that, and knowing the specific error message invalid value for Integer(): \"\" , I could kind of deduce what might have gone wrong with the cell that seemed to stump the Roo parser:
<c r=\"L58\">
  <v/>
</c>
I checked some other spreadsheets that didn’t have any errors, and I couldn’t see any instances of a self-closing tag like this. Other spreadsheets had full valid XML tags:
<c r=\"N26\">
  <v></v>
</c>
Roo didn’t blow a gasket on this type of cell. It seemed like this might be the issue. I did some Googling and figured out how to zip the dir back up as an XLSX file so I could test out my theory:
I followed this guide to unzipping and re-zipping XLSX files. Re-posting in case the original site goes down or something:
unzip Example-sheet.xlsx -d Example-sheet-unzip
cd into the extracted zip dir Example-sheet-unzip
edit xl/worksheets/sheet1.xml
python3 ../zipdir.py Example-sheet-edited.xlsx . (see script below) to compress the objects
rename the resulting .zip so you have a .xlsx file open-able by Numbers, Excel, GSheets
#!/usr/bin/python3
# Name: zipdir.py
# Version: 1.0
# Created: 2016-11-13
# Last modified: 2016-11-13
# Purpose: Creates a zip file given a directory where the files to be zipped
# are stored and the name of the output file. Don't include the .zip extension
# when specifying the zip file name.
# Usage: zipdir.py output_filename dir_name
# Note: if the output file name and directory are not specified on the
# command line, the script will prompt for them.
import sys, shutil

if len(sys.argv) == 1:
    dir_name = input('Directory name: ')
    output_filename = input('Zip file name: ')
elif len(sys.argv) == 3:
    output_filename = sys.argv[1]
    dir_name = sys.argv[2]
else:
    print('Incorrect number of arguments! Usage: zipdir.py output_filename dir_name')
    exit()

shutil.make_archive(output_filename, 'zip', dir_name)
Once I did all that and zipped the file back up, Roo was able to parse it and print the expected results in IRB and in my RSpec test case. Bingo! Scientific proof, basically.
I also took a stab at generating some “bad” XML files myself, since I didn’t want to check a real file with production data into our git repo if I could avoid it. I couldn’t get Numbers or Google Sheets to generate a self-closing XML tag in the output. They only generated valid open-and-close-style tags even for empty cells.
A little more investigation revealed the file came from an external web site’s reporting feature. The site was using Kendo UI with Angular on the front-end, which had a built-in export-to-Excel feature. Like most problems in web development, the blame falls on a horrible front-end framework reinventing the wheel with JavaScript.
Since I couldn’t reproduce the exact path to make a “bad” file, I just wrote one myself. Made a quick, nearly-empty sheet in Numbers, exported it, and followed the steps above to change a valid <v></v> tag into an invalid <v/> tag. This I could check in to the repo along with an RSpec test reproducing the error condition. Now we could code something!
Since I had the stack trace, I could look at the file directly. The relevant class is here on Github, though I usually just use bundle show roo to inspect the exact code on my machine. Since there’s only one place with Integer() in the file, and this is basically another type of nil-check, it seemed like a pretty easy fix.
I’ve had bad luck getting fixes merged into open-source projects. Most maintainers seem overworked, short on time, and extremely particular about the kind of code they want in their repo. Even when you submit a full pull request meeting the maintainers’ expectations for code style, testing, documentation, and all the other ancillary considerations, it can take a lot of back-and-forth and usually weeks to get a PR merged. I didn’t think we should wait that long to get our process fixed, so I went with monkey-patching as a first approach.
I made the modifications you can see in the monkey-patch 🙈 above. I figured we’d probably want to check for this empty-string, no-format condition for both Float and Integer values, so it’s probably worth factoring out into a separate method. This would also make it easier to extend later with a rescue block, if needed.
Added the patch to our repo, ran the specs to make sure it worked, and put up a pull request on our app. A fix is never really done until it’s verified to be working on production, so once I got code review, merged & deployed, I re-ran the file to verify it produced the expected results instead of an error. Good to go!
Since I had written an essentially-empty XLSX file to test on locally, I figured it would be easy to make a PR to Roo itself. I cloned their repo, and their test suite was in RSpec and MiniTest and mostly what I expected. They had a whole pile of files with a particular naming scheme, and an easy test harness to parse the file and check for expected results.
Converted the monkey-patch into a diff on the Roo::Excelx::Cell::Number class, added my empty file as a test case, and added a unit test for the class to boot. Maybe I could have written more tests to cover every possible branch condition, but I figured I’d get the PR up and see what the maintainers thought about the approach.
Sadly I can’t end it with “they loved my PR and merged it” – not sure why my PR got ghosted when others are getting reviewed and merged. But at least we got it working for our app, so ¯\\_(ツ)_/¯
Hope that helps somebody out there! I learned a lot just debugging this one bad spreadsheet, so I figured it was worth a write-up. If you have any questions or comments, feel free to email me about it, since social media has kinda fallen off a cliff lately. Best of luck, internet programmers!
", "url": "https://atevans.com/2023/08/30/roo-xlsx-parsing-invalid-value-for-integer-argumenterror.html", , "date_published": "2023-08-30T01:53:00+00:00", "date_modified": "2023-08-30T01:53:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2020/05/08/setting-egpu-preferences-for-epic-games.html", "title": "Setting eGPU Preferences for Epic Games", "summary": null, "content_text": "You plug your shiny new eGPU into your MacBook Pro, and expect a game like Borderlands 3 from the Epic Games Store to run smoothly (30+ FPS) at medium settings on your 4k monitor. It’s a good graphics card, but when running the Benchmark animation, you’re still seeing drops into 10-20fps range, and it stutters and becomes a slideshow. Is it a crappy port? Is your graphics card broken? No! Take heart, and follow these steps: Use a disk space calculator like DaisyDisk, and notice that there’s a huge folder at /Users/Shared/Epic\\ Games/Borderlands3/ , where the Epic Games Store puts all of its games Open it in finder by running:open /Users/Shared/Epic\\ Games/Borderlands3/on the terminal ⌃+click or on the Borderlands3 app in that folder and select “Get Info” from the menu If your eGPU is plugged in, you should see a checkbox that says “Prefer External GPU” , as explained in this Apple Support article . Check that box! Re-launch the game via the Epic Store. Your benchmarks should be markedly improved.For some reason the game was trying to run using the MacBook Pro’s built-in mobile GPU instead of the honking, loud, industrial-strength graphics pipes of the eGPU. Checking that setting fixed it.There’s a few things I don’t know about this setting: will it get clobbered if Epic Games Store updates the game files? is it retained after unplugging and re-plugging the eGPU? is there any way to set it via command-line instead of in Finder? 
can it be passed in as a command-line argument via Epic Game Store’s per-game “Additional Command Line Arguments” setting?
But at least this has solved my slideshow problem for now.", "content_html": "You plug your shiny new eGPU into your MacBook Pro, and expect a game like Borderlands 3 from the Epic Games Store to run smoothly (30+ FPS) at medium settings on your 4k monitor. It’s a good graphics card, but when running the Benchmark animation, you’re still seeing drops into 10-20fps range, and it stutters and becomes a slideshow. Is it a crappy port? Is your graphics card broken? No! Take heart, and follow these steps:
/Users/Shared/Epic\\ Games/Borderlands3/ , where the Epic Games Store puts all of its games
open /Users/Shared/Epic\\ Games/Borderlands3/ on the terminal
For some reason the game was trying to run using the MacBook Pro’s built-in mobile GPU instead of the honking, loud, industrial-strength graphics pipes of the eGPU. Checking that setting fixed it.
There’s a few things I don’t know about this setting:
But at least this has solved my slideshow problem for now.
", "url": "https://atevans.com/2020/05/08/setting-egpu-preferences-for-epic-games.html", , "date_published": "2020-05-08T00:00:00+00:00", "date_modified": "2020-05-08T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2019/11/11/boopable-snoots.html", "title": "Boopable Snoots", "summary": null, "content_text": "Sometimes, you’re testing a chat bot dingus, and you need a couple images so it can respond to @bot boop . I wanted more than 10, quickly, preferably without having to click through Google Image search and manually download them.Giphy and Imgur both require you to sign up and make an oAuth app and blah blah blah before you can start using their API. Not worth it for a trivial one-off.Turns out Reddit exposes any endpoint in JSON as well as for web browsers, and that’s publicly accessible. And Reddit has a whole community called /r/BoopableSnoots .So I threw this command together:curl \"https://www.reddit.com/r/BoopableSnoots.json\" | \\jq -c '.data.children | .[].data.url' | \\xargs -n 1 curl -OWhat is this doing?curl - sends a GET request to the specified URL and forwards the response body to stdout .| - take the output of the previous command on stdout and feed it into the next command as stdinjq -c '...' - jq is a command-line JSON parser and editor. This command drills one level down the object structure returned by reddit and returns the data.url field. 
The -c flag prints each result compactly on its own line (xargs strips the surrounding quotes later), and the .[] iterates across an array, outputting one element per line
xargs - is a meta-command; it says “run the following command for each line of input on stdin.” It can run in parallel, or in a pool of workers, etc.
curl -O - sends a GET request to the specified URL and saves the response to the filesystem using the filename contained in the URL
Putting it all together:
Get the ~25 most recent posts off Reddit’s BoopableSnoots community
Filter down to just the images that people posted
Download those images to the current directory
This quickly got me some snoots for my bot to boop, and I could move on with my work.", "content_html": "Sometimes, you’re testing a chat bot dingus, and you need a couple images so it can respond to @bot boop . I wanted more than 10, quickly, preferably without having to click through Google Image search and manually download them.
Giphy and Imgur both require you to sign up and make an oAuth app and blah blah blah before you can start using their API. Not worth it for a trivial one-off.
Turns out Reddit exposes any endpoint in JSON as well as for web browsers, and that’s publicly accessible. And Reddit has a whole community called /r/BoopableSnoots .
So I threw this command together:
curl \"https://www.reddit.com/r/BoopableSnoots.json\" | \\
jq -c '.data.children | .[].data.url' | \\
xargs -n 1 curl -O
What is this doing?
curl - sends a GET request to the specified URL and forwards the response body to stdout .
| - take the output of the previous command on stdout and feed it into the next command as stdin
jq -c '...' - jq is a command-line JSON parser and editor. This command drills one level down the object structure returned by reddit and returns the data.url field. The -c flag prints each result compactly on its own line (xargs strips the surrounding quotes later), and the .[] iterates across an array, outputting one element per line
xargs - is a meta-command; it says “run the following command for each line of input on stdin.” It can run in parallel, or in a pool of workers, etc.
curl -O - sends a GET request to the specified URL and saves the response to the filesystem using the filename contained in the URL
Putting it all together:
This quickly got me some snoots for my bot to boop, and I could move on with my work.
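If you’d rather stay in Ruby, the same pipeline fits in a few standard-library lines. This is a hedged sketch, not a battle-tested tool: snoot_urls and download_all are names I made up, Net::HTTP.get won’t follow redirects, and Reddit may want a real User-Agent header.

```ruby
require 'json'
require 'net/http'
require 'uri'

# Same walk the jq filter does: .data.children | .[].data.url
def snoot_urls(listing_json)
  JSON.parse(listing_json)
      .fetch('data')
      .fetch('children')
      .map { |child| child['data']['url'] }
end

# Rough equivalent of xargs -n 1 curl -O: save each URL under its basename.
def download_all(urls)
  urls.each do |url|
    uri = URI(url)
    File.binwrite(File.basename(uri.path), Net::HTTP.get(uri))
  end
end

# Uncomment to actually fetch ~25 snoots into the current directory:
# listing = Net::HTTP.get(URI('https://www.reddit.com/r/BoopableSnoots.json'))
# download_all(snoot_urls(listing))
```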
", "url": "https://atevans.com/2019/11/11/boopable-snoots.html", , "date_published": "2019-11-11T00:00:00+00:00", "date_modified": "2019-11-11T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2019/04/26/jenv-and-java-versioning.html", "title": "Jenv and Java Versioning", "summary": null, "content_text": "<rant>I tried to compile our mapbox-java sdk on my Macbook, and ran into a versioning error:$ make build-config./gradlew compileBuildConfigStarting a Gradle Daemon (subsequent builds will be faster)> Task :samples:compileBuildConfig FAILED/Users/username/Workspace/mapbox-java/samples/build/gen/buildconfig/src/main/com/mapbox/sample/BuildConfig.java:4: error: cannot access Objectpublic final class BuildConfig ^ bad class file: /modules/java.base/java/lang/Object.class class file has wrong version 56.0, should be 53.0 Please remove or make sure it appears in the correct subdirectory of the classpath.1 errorI had installed Java via Homebrew Cask, the normal way to install developer things on macOS. Running brew cask install java gets the java command all set up for you, but what version is that?$ java -vUnrecognized option: -vError: Could not create the Java Virtual Machine.Error: A fatal exception has occurred. Program will exit.# c'mon java, really :/ smh$ java --versionopenjdk version \"12\" 2019-03-19OpenJDK Runtime Environment (build 12+33)OpenJDK 64-Bit Server VM (build 12+33, mixed mode, sharing)$ brew cask info javajava: 12.0.1,69cfe15208a647278a19ef0990eea691https://jdk.java.net//usr/local/Caskroom/java/10.0.1,10:fb4372174a714e6b8c52526dc134031e (396.4MB)/usr/local/Caskroom/java/12,33 (64B)From: https://github.com/Homebrew/homebrew-cask/blob/master/Casks/java.rb==> NameOpenJDK Java Development Kit==> Artifactsjdk-12.0.1.jdk -> /Library/Java/JavaVirtualMachines/openjdk-12.0.1.jdk (Generic Artifact)Which is cool. 12.0.1 , 12+33 and 56.0 are basically the same number.So I guess I need a lower version of Java. 
No idea what version of Java will get me this 53.0 “class file,” but let’s try the last release. Multiple versions means you need a version manager, and it looks like jenv is Java’s version manager manager.$ brew install jenv$ eval \"$(jenv init - zsh)\"$ jenv enable-plugin export$ jenv add $(/usr/libexec/java_home)$ jenv versions* system (set by /Users/andrewevans/.jenv/version)12openjdk64-12Jenv can’t build or install Java / OpenJDK versions for you, so you have to do that separately via Homebrew, then “add” those versions via jenv add /Some/System/Directory , because java. Also, the oh-my-zsh plugin doesn’t seem to quite work, as it doesn’t set the JAVA_HOME env var. I had to manually add the “jenv init” and “enable-plugin” to my shell init scripts.Anyway, let’s try Java 11, as 11 is slightly less than 12 and 53 is slightly less than 56.$ brew tap homebrew/cask-versions$ brew cask install java11$ jenv add /Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home$ jenv local 11.0$ jenv shell 11.0Had to add both of the latter jenv commands, as I guess jenv local only creates the .java-version file and doesn’t actually set JAVA_HOME . Sadly 11.0 is not 53.0 , so I still got basically the same error when I ran make build-config .After asking my coworkers, Android and our mapbox-java repo uses JDK 8. You could install this via a cask called, funnily enough, java8 . Except Oracle torpedoed it. Sounds like they successfully ran the “embrace, extend, extinguish” playbook on the “open” OpenJDK, though I am not a Java and thus do not fully understand the insanity of these versions and licensing issues). 
tl;dr Homebrewers had to remove the java8 cask.Homebrewers seemed to prefer AdoptOpenJDK, which is a perfectly cromulent name and doesn’t at all add to the confusion of the dozens of things named “Java.” So let’s get that installed:$ brew cask install homebrew/cask-versions/adoptopenjdk8$ jenv add /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home$ cd ~/Workspace/mapbox-java$ jenv local 1.8$ jenv shell 1.8 # apparently 'jenv local' wasn't enough??$ jenv version1.8 (set by /Directory/.java-version)$ java -vUnrecognized option: -vError: Could not create the Java Virtual Machine.Error: A fatal exception has occurred. Program will exit.# right, forgot about that, jeebus java please suck less$ java --versionUnrecognized option: --versionError: Could not create the Java Virtual Machine.Error: A fatal exception has occurred. Program will exit.# wtf java srsly?!$ java -versionopenjdk version \"1.8.0_212\"OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)# farking thank you finally$ make build-configThis seemed to be the right version, and the make build-config command succeeded this time. JDK 8 and 1.8.0 and 53.0 are pretty similar numbers, so in retrospect this should’ve been obvious. And AdoptOpenJDK has more prefixes before “Java,” so I probably should’ve realized that was the “real” Java.Anyway, now I can compile the SDK without having to have installed IntelliJ IDEA or Android Studio, which both seemed kinda monstrous and who knows what the hell they’d leave around my system. Goooooood times.</rant>", "content_html": "<rant>
I tried to compile our mapbox-java sdk on my Macbook, and ran into a versioning error:
$ make build-config
./gradlew compileBuildConfig
Starting a Gradle Daemon (subsequent builds will be faster)
> Task :samples:compileBuildConfig FAILED
/Users/username/Workspace/mapbox-java/samples/build/gen/buildconfig/src/main/com/mapbox/sample/BuildConfig.java:4: error: cannot access Object
public final class BuildConfig
                   ^
  bad class file: /modules/java.base/java/lang/Object.class
    class file has wrong version 56.0, should be 53.0
    Please remove or make sure it appears in the correct subdirectory of the classpath.
1 error
I had installed Java via Homebrew Cask, the normal way to install developer things on macOS. Running brew cask install java gets the java command all set up for you, but what version is that?
$ java -v
Unrecognized option: -v
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
# c'mon java, really :/ smh
$ java --version
openjdk version \"12\" 2019-03-19
OpenJDK Runtime Environment (build 12+33)
OpenJDK 64-Bit Server VM (build 12+33, mixed mode, sharing)
$ brew cask info java
java: 12.0.1,69cfe15208a647278a19ef0990eea691
https://jdk.java.net/
/usr/local/Caskroom/java/10.0.1,10:fb4372174a714e6b8c52526dc134031e (396.4MB)
/usr/local/Caskroom/java/12,33 (64B)
From: https://github.com/Homebrew/homebrew-cask/blob/master/Casks/java.rb
==> Name
OpenJDK Java Development Kit
==> Artifacts
jdk-12.0.1.jdk -> /Library/Java/JavaVirtualMachines/openjdk-12.0.1.jdk (Generic Artifact)
Which is cool. 12.0.1 , 12+33 and 56.0 are basically the same number.
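For reference, the class-file numbers decode mechanically: the major version is the Java release plus 44, so Java 8 produces 52.0, Java 9 produces 53.0, and Java 12 produces 56.0. A throwaway sketch of that arithmetic (mine, not part of the original error output):

```ruby
# Class-file major version = Java release + 44 (true for Java 1.2 onward).
def java_release_for(classfile_major)
  classfile_major - 44
end

puts java_release_for(56) # => 12, what the Homebrew JDK compiled
puts java_release_for(53) # => 9, what the error message asked for
puts java_release_for(52) # => 8, what the project actually targets
```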
So I guess I need a lower version of Java. No idea what version of Java will get me this 53.0 “class file” (class-file major versions turn out to be the Java release plus 44, so 53.0 is Java 9 and 56.0 is Java 12), but let’s try the previous release. Multiple versions means you need a version manager, and it looks like jenv is Java’s version manager.
$ brew install jenv
$ eval \"$(jenv init - zsh)\"
$ jenv enable-plugin export
$ jenv add $(/usr/libexec/java_home)
$ jenv versions
* system (set by /Users/andrewevans/.jenv/version)
  12
  openjdk64-12
Jenv can’t build or install Java / OpenJDK versions for you, so you have to do that separately via Homebrew, then “add” those versions via jenv add /Some/System/Directory , because java. Also, the oh-my-zsh plugin doesn’t seem to quite work, as it doesn’t set the JAVA_HOME env var. I had to manually add the “jenv init” and “enable-plugin” to my shell init scripts.
Anyway, let’s try Java 11, as 11 is slightly less than 12 and 53 is slightly less than 56.
$ brew tap homebrew/cask-versions
$ brew cask install java11
$ jenv add /Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home
$ jenv local 11.0
$ jenv shell 11.0
Had to add both of the latter jenv commands, as I guess jenv local only creates the .java-version file and doesn’t actually set JAVA_HOME . Sadly 11.0 is not 53.0 , so I still got basically the same error when I ran make build-config .
After asking my coworkers, Android and our mapbox-java repo use JDK 8. You could install this via a cask called, funnily enough, java8 . Except Oracle torpedoed it. Sounds like they successfully ran the “embrace, extend, extinguish” playbook on the “open” OpenJDK (though I am not a Java and thus do not fully understand the insanity of these versions and licensing issues). tl;dr Homebrewers had to remove the java8 cask.
Homebrewers seemed to prefer AdoptOpenJDK, which is a perfectly cromulent name and doesn’t at all add to the confusion of the dozens of things named “Java.” So let’s get that installed:
$ brew cask install homebrew/cask-versions/adoptopenjdk8
$ jenv add /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home
$ cd ~/Workspace/mapbox-java
$ jenv local 1.8
$ jenv shell 1.8 # apparently 'jenv local' wasn't enough??
$ jenv version
1.8 (set by /Directory/.java-version)
$ java -v
Unrecognized option: -v
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
# right, forgot about that, jeebus java please suck less
$ java --version
Unrecognized option: --version
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
# wtf java srsly?!
$ java -version
openjdk version \"1.8.0_212\"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
# farking thank you finally
$ make build-config
This seemed to be the right version, and the make build-config command succeeded this time. JDK 8 and 1.8.0 and 53.0 are pretty similar numbers, so in retrospect this should’ve been obvious. And AdoptOpenJDK has more prefixes before “Java,” so I probably should’ve realized that was the “real” Java.
Anyway, now I can compile the SDK without having to have installed IntelliJ IDEA or Android Studio, which both seemed kinda monstrous and who knows what the hell they’d leave around my system. Goooooood times.
</rant>
I joined Mapbox roughly three months ago as a security engineer. I’d been a full-stack engineer for a little over ten years, and it was time for a change. Just made one CRUD form too many, I guess. I’ve always been mildly paranoid and highly interested in security, so I was delighted when I was offered the position.
It’s been an interesting switch. I was already doing a bunch of ops and internal tool engineering at Hired, and that work is similar to what our security team does. We are primarily a support team - we work to help the rest of the engineers keep everything secure by default. We’re collaborators and consultants, evangelists and educators. From the industry thought leaderers and thinkfluencers I read, that seems to be how good security teams operate.
That said, tons of things are just a little bit different. My coworkers come from a variety of backgrounds; full-stack dev, sysadmin, security researcher; and they tend to come at things from a slightly different angle than I do. Right when I joined, we made an app for our intranet and I thought “It’s an intranet app, no scaling needed!” My project lead corrected me, though: “Oh, some pentester is going to scan this, get a thousand 404 errors per hour and DDoS it.” That kinda thing has been really neat.
I thought it’d be good to list out some of the differences I’ve noticed:
I’ve worked on a lot of multi-year-old Rails monoliths. You read stuff like POODR and watch talks by Searls because it’s important your code scales with the team. Features change, plans change, users do something unexpected and now your architecture needs to turn upside-down. It’s worth refactoring regularly to keep velocity high.
In security, sure, it’s still important the rest of the team can read and maintain your code. But a lot of your work is one-off scripts or simple automations. Whole projects might be less than 200 LoC, and deployed as a single AWS lambda. Even if you write spaghetti code, at that point it’s simple enough to understand, and rewrite from scratch if necessary.
Fixes to larger & legacy projects are usually tweaks rather than overhauls, so there’s not much need to consider the overall architecture there, either.
Definitely less important than in full-stack development. Internal tooling rarely has to scale beyond the size of the company, so you’re not going to need HBase. You might need to monitor logs and metrics, but those are likely going to be handled already.
Teamwork is, if anything, more important in security than it was in full-stack development. Previously I might have to chat with data science, front-end, or a specialist for some features. In security we need to be able to jump in anywhere and quickly get enough understanding to push out fixes. Even if we end up pushing all the coding to the developers, having a long iteration cycle and throwing code “over the wall” between teams is crappy. It’s much better if we can work more closely, pair program, and code review to make sure a fix is complete rather than catch-this-special-case.
You also need a lot of empathy and patience. Sometimes you end up being the jerk who’s blocking code or a deploy. Often you are dumping work on busy people. It can be very difficult to communicate with someone who doesn’t have a lot of security experience, about a vulnerability in a legacy project written in a language you don’t know.
I’m used to technical challenges like “how do we make the UI do this shiny animation?” Or “how do we scale this page to 100k reads / min?” The technical challenges I’ve faced so far have been nothing like that. They’ve been more like: “how do we encode a request to this server, such that when it makes a request to that server it hits this weird parsing bug in Node?” and subsequently “how do we automate that test?” Or “how do we dig through all our git repos for high-entropy strings?”
It’s not all fun and games, though. There’s plenty of work in filling out compliance forms, resetting passwords and permissions, and showing people how & why to use password managers. While not exactly rocket surgery, these are important things that improve the overall security posture of the company, so it’s satisfying that way.
Most deadlines in consumer-facing engineering are fake. “We want to ship this by the end of the sprint” is not a real deadline. I often referred folks to the Voyager mission’s use of a once-per-175-years planetary alignment for comparison. In operations, you get some occasional “the site is slow / down,” but even then the goal was to slowly & incrementally improve systems such that the urgent things can be failed over and dealt with in good time.
In security the urgency feels a bit more real. Working with outside companies for things like penetration tests, compliance audits, and live hacking events means real legal and financial risk for running behind. “New RCE vulnerability in Nginx found” means a scramble to identify affected systems and see how quickly we can get a patch out. We have no idea how long we have before somebody starts causing measurable damage either for malicious purposes or just for the lulz.
In full-stack engineering & ops, I would occasionally need to jump into a different language or framework to get something working. Usually I could get by with pretty limited knowledge: patching Redis into a data science system for caching, or fixing an unhandled edge-case in a frontend UI. I felt like I had a pretty deep knowledge of Ruby and some other core tools, and I could pick up whatever else I needed.
There’s a ton of learning any time you start at a new company: usually a new language, a new stack, legacy code and conventions. But throwing that out, I’ve been learning a ton about the lower-level functioning of computers and networks. Node and Python’s peculiar handling of Unicode, how to tack an EHLO request onto an HTTP GET request, and how to pick particular requests out of a flood of recorded network traffic.
Also seeing some of the methods of madness that hackers use in the real world: that thing you didn’t think would be a big deal because it’s rate-limited? They’ll script a cron job and let it run for months until it finds something and notifies them.
It’s been a blast, and I look forward to seeing what the next three months brings. I’m hopeful for more neat events, more learning, and maybe pulling some cool tricks of my own one of these days.
", "url": "https://atevans.com/2018/04/18/three-months-in-security.html", , "date_published": "2018-04-18T00:00:00+00:00", "date_modified": "2018-04-18T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/10/31/october-event-notes.html", "title": "Notes from October 2017 Events", "summary": null, "content_text": "Security@ NotesI went to HackerOne’s Security@ conference last week, and can vouch that it was pretty cool! Thanks to HackerOne for the invite and to Hired for leave to go around the corner from our building for the day.My notes and main take-aways:The name of the conference comes from an email inbox that every company should theoretically have. Ideally you’d have a real vulnerability disclosure program with a policy that lets hackers safely report vulnerabilities in your software. But not every company has the resources to manage that, so at least having a security@ email inbox can give you some warning.As a company, you probably should not have a bug bounty program unless you are willing to dedicate the resources to managing it. To operate a successful bug bounty, you need to respond quickly to all reports and at least get them triaged. You should have a process in place to quickly fix vulnerabilities and get bounties paid. If hackers have reports sitting out there forever, it frustrates both parties and discourages working with the greater bounty community.I was surprised during the panel with three of HackerOne’s top hackers (by bounty and reputation on their site). Two of them had full-time jobs in addition to pursuing bug bounties. They seemed to treat their hacking like a freelance gig on the side - pursue the quickest & most profitable bounties, and skip over low-rep or slow companies. Personally I find it difficult to imagine having the energy to research other companies for vulnerabilities after a full day’s work. 
But hey, if that’s your thing, awesome!Natalie Silvanovich from Google’s Project Zero had a really interesting talk on how to reduce attack surface. It had a lot of similar themes to good product management in general: consider security when plotting a product’s roadmap, have a process for allocating time to security fixes, and spend time cleaning up code and keeping dependencies up to date. It’s easy to think that old code and old features aren’t hurting anyone: the support burden is low, the code isn’t getting in the way, and 3% of users love this feature, so why take the time to get rid of it? Lowering your attack surface is a pretty good reason.Coinbase’s CSO had an interesting note: the max payout from your bug bounty program is a proxy marker for how mature your program is. If your max bounty is $500, you’re probably paying enough bounties that $500 is all you can afford. They had recently raised their max bounty to $50,000 because they did not expect to be paying out a lot of high-risk bounties.Fukuoka Ruby NightLast Friday I also went to the Fukuoka Ruby Night. I guess the Fukuoka Prefecture is specifically taking an interest in fostering a tech and startup scene, which is pretty cool. They had talks from some interesting developers from Japan and SF, and they also brought in Matz for a talk. Overall a pretty cool evening.Matz and another developer talked a bunch about mruby - the lightweight, fast, embeddable version of Ruby. It runs on an ultra-lightweight VM compiled for any architecture, and libraries are linked in rather than interpreted at runtime. I hadn’t heard much about it, and figured it was a thing for arduinos or whatever. Turns out it’s seen some more impressive use: Yes, arduinos, Raspberry Pi’s, and other IoT platforms; MItamae - a lightweight Chef replacement distributed as a single binary; and Nier Automata, a spiffy game for PS4 and PC. Matz didn’t have as much to say about Ruby 3.
He specifically called out that if languages don’t get shiny new things, developers get bored and move on. I guess Ruby 3 will be a good counter-point to the “Ruby is Dead” meme. Ruby 3 will be largely backwards-compatible to avoid getting into quagmires like Ruby 1.9, Python 3, and PHP 6. They are shooting for 3x the performance of Ruby 2.x - the Ruby 3x3 project.One way the core Ruby devs see for the language to evolve without breaking changes or fundamental shifts (such as a type system) is to focus on developer happiness. Building a language server into the core of Ruby 3 is one example that could drastically improve the tooling for developers.He also talked about Duck Inference - an “80% compile time type checking” system. This could potentially catch a lot more type errors at compile time without requiring type hints, strict typing or other code-boilerplate rigamarole. Bonus: it would be fully backwards-compatible.I’m a little skeptical - I personally find CTAGs and other auto-complete tools get in the way about as often as they help. For duck inferencing Matz mentioned saving type definitions and message trees into a separate file in the project directory, for manual tweaking as needed. Sounds like it could end up being pretty frustrating.Guess we’ll see! Matz said the team’s goal is “before the end of this decade,” but to take that with a grain of salt. Good to see progress in the language and that Ruby continues to have a solid future.", "content_html": "I went to HackerOne’s Security@ conference last week, and can vouch that it was pretty cool! Thanks to HackerOne for the invite and to Hired for leave to go around the corner from our building for the day.
My notes and main take-aways:
The name of the conference comes from an email inbox that every company should theoretically have. Ideally you’d have a real vulnerability disclosure program with a policy that lets hackers safely report vulnerabilities in your software. But not every company has the resources to manage that, so at least having a security@ email inbox can give you some warning.
As a company, you probably should not have a bug bounty program unless you are willing to dedicate the resources to managing it. To operate a successful bug bounty, you need to respond quickly to all reports and at least get them triaged. You should have a process in place to quickly fix vulnerabilities and get bounties paid. If hackers have reports sitting out there forever, it frustrates both parties and discourages working with the greater bounty community.
I was surprised during the panel with three of HackerOne’s top hackers (by bounty and reputation on their site). Two of them had full-time jobs in addition to pursuing bug bounties. They seemed to treat their hacking like a freelance gig on the side - pursue the quickest & most profitable bounties, and skip over low-rep or slow companies. Personally I find it difficult to imagine having the energy to research other companies for vulnerabilities after a full day’s work. But hey, if that’s your thing, awesome!
Natalie Silvanovich from Google’s Project Zero had a really interesting talk on how to reduce attack surface. It had a lot of similar themes to good product management in general: consider security when plotting a product’s roadmap, have a process for allocating time to security fixes, and spend time cleaning up code and keeping dependencies up to date. It’s easy to think that old code and old features aren’t hurting anyone: the support burden is low, the code isn’t getting in the way, and 3% of users love this feature, so why take the time to get rid of it? Lowering your attack surface is a pretty good reason.
Coinbase’s CSO had an interesting note: the max payout from your bug bounty program is a proxy marker for how mature your program is. If your max bounty is $500, you’re probably paying enough bounties that $500 is all you can afford. They had recently raised their max bounty to $50,000 because they did not expect to be paying out a lot of high-risk bounties.
Last Friday I also went to the Fukuoka Ruby Night. I guess the Fukuoka Prefecture is specifically taking an interest in fostering a tech and startup scene, which is pretty cool. They had talks from some interesting developers from Japan and SF, and they also brought in Matz for a talk. Overall a pretty cool evening.
Matz and another developer talked a bunch about mruby - the lightweight, fast, embeddable version of Ruby. It runs on an ultra-lightweight VM compiled for any architecture, and libraries are linked in rather than interpreted at runtime. I hadn’t heard much about it, and figured it was a thing for arduinos or whatever. Turns out it’s seen some more impressive use: arduinos, Raspberry Pi’s, and other IoT platforms; MItamae - a lightweight Chef replacement distributed as a single binary; and Nier Automata, a spiffy game for PS4 and PC.
Matz didn’t have as much to say about Ruby 3. He specifically called out that if languages don’t get shiny new things, developers get bored and move on. I guess Ruby 3 will be a good counter-point to the “Ruby is Dead” meme. Ruby 3 will be largely backwards-compatible to avoid getting into quagmires like Ruby 1.9, Python 3, and PHP 6. They are shooting for 3x the performance of Ruby 2.x - the Ruby 3x3 project.
One way the core Ruby devs see for the language to evolve without breaking changes or fundamental shifts (such as a type system) is to focus on developer happiness. Building a language server into the core of Ruby 3 is one example that could drastically improve the tooling for developers.
He also talked about Duck Inference - an “80% compile time type checking” system. This could potentially catch a lot more type errors at compile time without requiring type hints, strict typing or other code-boilerplate rigamarole. Bonus: it would be fully backwards-compatible.
I’m a little skeptical - I personally find CTAGs and other auto-complete tools get in the way about as often as they help. For duck inferencing Matz mentioned saving type definitions and message trees into a separate file in the project directory, for manual tweaking as needed. Sounds like it could end up being pretty frustrating.
Guess we’ll see! Matz said the team’s goal is “before the end of this decade,” but to take that with a grain of salt. Good to see progress in the language and that Ruby continues to have a solid future.
", "url": "https://atevans.com/2017/10/31/october-event-notes.html", , "date_published": "2017-10-31T00:00:00+00:00", "date_modified": "2017-10-31T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/08/02/ruby-curses-for-terminal-apps.html", "title": "Ruby Curses for Terminal Apps", "summary": null, "content_text": "Curses is a C library for terminal-based apps. If you are writing a screen-based app that runs in the terminal, curses (or the “newer” version, ncurses ) can be a huge help. There used to be an adapter for Ruby in the standard library, but since 2.1.0 it’s been moved into its own gem.I took a crack at writing a small app with curses, and found the documentation and tutorials somewhat lacking. But after a bit of learning, and combining with the Verse and TTY gems, I think it came out kinda nice.Here’s a screenshot of the app, which basically stays open and monitors a logfile:There are three sections - the left side is a messages pane, where the app will post “traffic alert” and “alert cleared” messages. The user can scroll that pane up and down with the arrow keys (or h/j if they’ve a vim addict). On the right are two tables - the top one shows which sections of a web site are being hit most frequently. 
The bottom shows overall stats from the logs.Here’s the code for it, and I’ll step through below and explain what does what:require \"curses\"require \"tty-table\"require \"logger\"module Logwatch class Window attr_reader :main, :messages, :top_sections, :stats def initialize Curses.init_screen Curses.curs_set 0 # invisible cursor Curses.noecho # don't echo keys entered @lines = [] @pos = 0 half_height = Curses.lines / 2 - 2 half_width = Curses.cols / 2 - 3 @messages = Curses::Window.new(Curses.lines, half_width, 0, 0) @messages.keypad true # translate function keys to Curses::Key constants @messages.nodelay = true # don't block waiting for keyboard input with getch @messages.refresh @top_sections = Curses::Window.new(half_height, half_width, 0, half_width) @top_sections.refresh @stats = Curses::Window.new(half_height, half_width, half_height, half_width) @stats << \"Stats:\" @stats.refresh end def handle_keyboard_input case @messages.getch when Curses::Key::UP, 'k' @pos -= 1 unless @pos <= 0 paint_messages! when Curses::Key::DOWN, 'j' @pos += 1 unless @pos >= @lines.count - 1 paint_messages! when 'q' exit(0) end end def print_msg(msg) @lines += Verse::Wrapping.new(msg).wrap(@messages.maxx - 10).split(\"\\n\") paint_messages! end def paint_messages! 
@pos ||= 0 @messages.clear @messages.setpos(0, 0) @lines.slice(@pos, Curses.lines - 1).each { |line| @messages << \"#{line}\\n\" } @messages.refresh end def update_top_sections(sections) table = TTY::Table.new header: ['Top Section', 'Hits'], rows: sections.to_a @top_sections.clear @top_sections.setpos(0, 0) @top_sections.addstr(table.render(:ascii, width: @top_sections.maxx - 2, resize: true)) @top_sections.addstr(\"\\nLast refresh: #{Time.now.strftime('%b %d %H:%M:%S')}\") @top_sections.refresh end def update_stats(stats) table = TTY::Table.new header: ['Stats', ''], rows: stats.to_a @stats.clear @stats.setpos(0, 0) @stats.addstr(table.render(:ascii, width: @stats.maxx - 2, resize: true)) @stats.addstr(\"\\nLast refresh: #{Time.now.strftime('%b %d %H:%M:%S')}\") @stats.refresh end def teardown Curses.close_screen end endendInitializeOn initialize, we do some basic initialization of the curses gem - this will set up curses to handle all rendering to the terminal window.Curses sets up a default Curses::Window object to handle rendering and listening for keyboard input, accessible from the stdscr method. This is where Curses.lines and Curses.cols come from, and represent the whole terminal.I initially tried using the default window’s subwin method to set up the panes used by the app, but that proved to add a whole bunch of complication for no actual benefit. Long ago it may have provided a performance boost, but we’re well past that, I think.Also tried using the Curses::Pad class so I wouldn’t have to handle scrolling myself, but that also had tons of wonky behavior. Rendering yourself isn’t that hard; save the trouble.To handle keyboard input, we set keypad(true) on the messages window. We also set nodelay = true (yes, one is a method call, the other is assignment, no idea why) so we can call .getch but still update the screen while waiting for input.The two stats windows, we initialize mostly empty. 
Then call refresh on all three to get them set up on the active terminal.Main Render LoopThe class that loops and takes actions is not the window manager; but the interface is pretty simple. There’s a loop that checks for updates from the log file, updates the stats data store, then calls the two render methods for the stat windows. It also tells the window manager to handle any keyboard input, and will call print_msg() if it needs to add an alert or anything to the main panel.The main way to get text onto the screen is to call addstr() or << on a Curses::Window , then call refresh() to paint the buffer to the screen.The Window has a cursor, and it will add each character from the string and advance that, just like in a text editor. It tries to do a lot of other stuff; if you add characters beyond what the screen can show, it will scroll right and hide the first n columns. If you draw too many lines it will scroll down and provide no way to scroll back up. I tried dealing with scrl() and scroll() methods and such, but could never get the behavior working well. In the end, I did it manually.I used the verse gem to wrap lines of text so that we never wrote past the window boundaries. The window manager keeps an array of all lines that have been printed during the program, and a position variable representing how far we’ve scrolled down in the buffer. On each update it: clears the Curses::Window buffer moves the cursor back to (0,0) prints the lines within range to the Curses::Window calls refresh() to paint the Curses::Window buffer to the screenThe stats windows are basically the same. I used the TTY::Table gem from the tty-gems collection to handle rendering the calculated stats into pretty ASCII tables.TeardownThe teardown method clears the screen, which resets the terminal to non-visual mode. 
The handle_keyboard_input method calls exit(0) when a user wants to quit, but the larger program handles the interrupt signal and ensures the teardown method gets called.WrapHope that’s helpful! I had the wrong model of how all this stuff worked in my head for most of the development of this simple app. Maybe having what I came to laid out here will be useful.", "content_html": "Curses is a C library for terminal-based apps. If you are writing a screen-based app that runs in the terminal, curses (or the “newer” version, ncurses ) can be a huge help. There used to be an adapter for Ruby in the standard library, but since 2.1.0 it’s been moved into its own gem.
I took a crack at writing a small app with curses, and found the documentation and tutorials somewhat lacking. But after a bit of learning, and combining with the Verse and TTY gems, I think it came out kinda nice.
Here’s a screenshot of the app, which basically stays open and monitors a logfile:

There are three sections - the left side is a messages pane, where the app will post “traffic alert” and “alert cleared” messages. The user can scroll that pane up and down with the arrow keys (or j/k if they’re a vim addict). On the right are two tables - the top one shows which sections of a web site are being hit most frequently. The bottom shows overall stats from the logs.
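That split falls out of simple arithmetic on the terminal dimensions. A sketch of the math, using a hypothetical pane_geometry helper that mirrors the constants from the initialize code in the post (curses itself is not needed for this part; lines and cols stand in for Curses.lines and Curses.cols):

```ruby
# Split a terminal into a full-height left pane and two stacked right
# panes. Returns each pane's size and origin as curses would want them.
def pane_geometry(lines, cols)
  half_height = lines / 2 - 2 # integer division, minus room for borders
  half_width  = cols / 2 - 3
  {
    messages:     { height: lines,       width: half_width, top: 0,           left: 0 },
    top_sections: { height: half_height, width: half_width, top: 0,           left: half_width },
    stats:        { height: half_height, width: half_width, top: half_height, left: half_width }
  }
end

pane_geometry(40, 120) # a 40x120 terminal: left pane 40x57, right panes 18x57 each
```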
Here’s the code for it, and I’ll step through below and explain what does what:
require \"curses\"require \"tty-table\"require \"logger\"module Logwatch class Window attr_reader :main, :messages, :top_sections, :stats def initialize Curses.init_screen Curses.curs_set 0 # invisible cursor Curses.noecho # don't echo keys entered @lines = [] @pos = 0 half_height = Curses.lines / 2 - 2 half_width = Curses.cols / 2 - 3 @messages = Curses::Window.new(Curses.lines, half_width, 0, 0) @messages.keypad true # translate function keys to Curses::Key constants @messages.nodelay = true # don't block waiting for keyboard input with getch @messages.refresh @top_sections = Curses::Window.new(half_height, half_width, 0, half_width) @top_sections.refresh @stats = Curses::Window.new(half_height, half_width, half_height, half_width) @stats << \"Stats:\" @stats.refresh end def handle_keyboard_input case @messages.getch when Curses::Key::UP, 'k' @pos -= 1 unless @pos <= 0 paint_messages! when Curses::Key::DOWN, 'j' @pos += 1 unless @pos >= @lines.count - 1 paint_messages! when 'q' exit(0) end end def print_msg(msg) @lines += Verse::Wrapping.new(msg).wrap(@messages.maxx - 10).split(\"\\n\") paint_messages! end def paint_messages! 
@pos ||= 0 @messages.clear @messages.setpos(0, 0) @lines.slice(@pos, Curses.lines - 1).each { |line| @messages << \"#{line}\\n\" } @messages.refresh end def update_top_sections(sections) table = TTY::Table.new header: ['Top Section', 'Hits'], rows: sections.to_a @top_sections.clear @top_sections.setpos(0, 0) @top_sections.addstr(table.render(:ascii, width: @top_sections.maxx - 2, resize: true)) @top_sections.addstr(\"\\nLast refresh: #{Time.now.strftime('%b %d %H:%M:%S')}\") @top_sections.refresh end def update_stats(stats) table = TTY::Table.new header: ['Stats', ''], rows: stats.to_a @stats.clear @stats.setpos(0, 0) @stats.addstr(table.render(:ascii, width: @stats.maxx - 2, resize: true)) @stats.addstr(\"\\nLast refresh: #{Time.now.strftime('%b %d %H:%M:%S')}\") @stats.refresh end def teardown Curses.close_screen end endendOn initialize, we do some basic initialization of the curses gem - this will set up curses to handle all rendering to the terminal window.
Curses sets up a default Curses::Window object to handle rendering and listening for keyboard input, accessible from the stdscr method. This is where Curses.lines and Curses.cols come from, and represent the whole terminal.
I initially tried using the default window’s subwin method to set up the panes used by the app, but that proved to add a whole bunch of complication for no actual benefit. Long ago it may have provided a performance boost, but we’re well past that, I think.
Also tried using the Curses::Pad class so I wouldn’t have to handle scrolling myself, but that also had tons of wonky behavior. Rendering yourself isn’t that hard; save the trouble.
To handle keyboard input, we set keypad(true) on the messages window. We also set nodelay = true (yes, one is a method call, the other is assignment, no idea why) so we can call .getch but still update the screen while waiting for input.
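Stripped of curses, the bookkeeping behind those keys is just clamping a scroll position. A sketch of the logic from handle_keyboard_input, where the :up and :down symbols stand in for the Curses::Key constants:

```ruby
# Move a scroll position through `total` buffered lines in response to
# a key, never going past either end - the same guards the app uses.
def scroll(pos, key, total)
  case key
  when :up, 'k'
    pos -= 1 unless pos <= 0            # already at the top: stay put
  when :down, 'j'
    pos += 1 unless pos >= total - 1    # already at the bottom: stay put
  end
  pos
end

scroll(0, :up, 10)   # clamped at the top
scroll(3, 'j', 10)   # moves down one line
```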
The two stats windows, we initialize mostly empty. Then call refresh on all three to get them set up on the active terminal.
The class that loops and takes actions is not the window manager, but the interface between them is pretty simple. There’s a loop that checks for updates from the log file, updates the stats data store, then calls the two render methods for the stat windows. It also tells the window manager to handle any keyboard input, and will call print_msg() if it needs to add an alert or anything to the main panel.
The main way to get text onto the screen is to call addstr() or << on a Curses::Window , then call refresh() to paint the buffer to the screen.
The Window has a cursor, and it will add each character from the string and advance that, just like in a text editor. It tries to do a lot of other stuff; if you add characters beyond what the screen can show, it will scroll right and hide the first n columns. If you draw too many lines it will scroll down and provide no way to scroll back up. I tried dealing with scrl() and scroll() methods and such, but could never get the behavior working well. In the end, I did it manually.
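Done manually, the scrollback is just an array slice. A curses-free sketch of what paint_messages! selects for rendering, where height stands in for the pane's visible row count:

```ruby
# Keep every line ever printed; on each repaint, show only the window
# of `height` rows starting at the scroll position `pos`.
def visible_lines(lines, pos, height)
  lines.slice(pos, height) || [] # slice returns nil past the end of the array
end

buffer = (1..10).map { |i| "line #{i}" }
visible_lines(buffer, 0, 3) # the top of the buffer
visible_lines(buffer, 7, 3) # scrolled down seven lines
```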
I used the verse gem to wrap lines of text so that we never wrote past the window boundaries. The window manager keeps an array of all lines that have been printed during the program, and a position variable representing how far we’ve scrolled down in the buffer. On each update it:
clears the Curses::Window buffer, moves the cursor back to (0,0), prints the lines within range to the Curses::Window, and calls refresh() to paint the Curses::Window buffer to the screen. The stats windows are basically the same. I used the TTY::Table gem from the tty-gems collection to handle rendering the calculated stats into pretty ASCII tables.
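The line-wrapping that keeps text inside the pane boundary can be approximated with a naive greedy word-wrap in plain Ruby - an illustration only, not the verse gem's actual algorithm:

```ruby
# Greedily pack words into lines no wider than `width` columns.
# A single word longer than `width` is left on its own line unchanged.
def wrap(text, width)
  text.split.each_with_object([]) do |word, lines|
    if lines.empty? || (lines.last + " " + word).length > width
      lines << word                       # start a fresh line
    else
      lines[-1] = lines.last + " " + word # append to the current line
    end
  end
end

wrap("a bb ccc dddd", 6) # => ["a bb", "ccc", "dddd"]
```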
The teardown method clears the screen, which resets the terminal to non-visual mode. The handle_keyboard_input method calls exit(0) when a user wants to quit, but the larger program handles the interrupt signal and ensures the teardown method gets called.
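That guarantee works because Ruby's exit raises SystemExit rather than killing the process outright, so ensure blocks still run on the way out. A minimal demonstration, where the symbols stand in for real work like Curses.close_screen:

```ruby
log = []
begin
  begin
    log << :running
    exit(0)          # what handle_keyboard_input does on 'q'
  ensure
    log << :teardown # cleanup that must always run, even through exit
  end
rescue SystemExit
  log << :exited     # the outer program observing the shutdown
end
# log is now [:running, :teardown, :exited] - teardown ran despite exit(0)
```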
Hope that’s helpful! I had the wrong model of how all this stuff worked in my head for most of the development of this simple app. Maybe having what I came to laid out here will be useful.
", "url": "https://atevans.com/2017/08/02/ruby-curses-for-terminal-apps.html", , "date_published": "2017-08-02T00:00:00+00:00", "date_modified": "2017-08-02T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/07/07/key-value-api-elixir.html", "title": "A stateful key/value server in Elixir", "summary": null, "content_text": "I wanted to make a simple key-value server in Elixir - json in, json out, GET, POST, with an in-memory map. The point is to reinvent the wheel, and learn me some Elixir. My questions were: a) how do I build this without Phoenix and b) how do I persist state between requests in a functional language?Learning new stuff is always painful, so this was frustrating at points and harder than I expected. But I want to emphasize that I did get it working, and do understand a lot more about how Elixir does things - the community posts and extensive documentation were great, and I didn’t have to bug anyone on StackOverflow or IRC or anything to figure all this out.Here’s what the learning & development process sounded like from inside my head. First, how do I make a simple JSON API without Phoenix? I tried several tutorials using Plug alone. Several of them were out of date / didn’t work. Finally found this one, which was up-to-date and got me going. Ferret was born! How do I reload code when I change things without manually restarting? Poked around and found the remix app. Now I can take JSON in, but how do I persist across requests? I think we need a subprocess or something? That’s what CodeShip says, anyhow. Okay, I’ve got an Agent. So, where do I keep the agent PID so it’s reusable across requests? Well, where the heck does Plug keep session data? [3] That should be in-memory by default, right? Quickly, to the source code! Hrm, well, that doesn’t tell me a lot. Guess it’s abstracted out, and in a language I’m still learning. Maybe I’ll make a separate plug to initialize the agent, then dump it into the request bag-of-data? 
Pretty sure plug MyPlug, agent_thing: MyAgent.start_link will work. Can store that in my Plug’s options, then add it to Conn so it’s accessible inside requests Does a plug’s init/1 get called on every request, or just once? What about my Router’s init/1 ? Are things there memoized? Guess I’ll assume the results are stored and passed in as the 2nd arg to call/2 in my plug. Wait, what does start_link return? 14:15:15.422 [error] Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.335.0> exit with reason: \\{\\{\\%MatchError{term: [iix: {:ok, #PID<0.328.0>}]} WHY DO I KEEP GETTING THIS?! ** (MatchError) no match of right hand side value: {:ok, #PID<0.521.0>}(ferret) lib/plug/load_index.ex:10: Ferret.Plug.LoadIndex.init/1 figures out how to assign arguments turns out [:ok, pid] and {:ok, pid} and %{\"ok\" => pid} are different things futzes about trying various things to make that work How do I log stuff, anyway? Time to learn Logger. THE ROUTE IS RIGHT THERE WHAT THE HELL?! 14:29:45.127 [info] GET /put14:29:45.129 [error] Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.716.0> exit with reason: \\{\\{\\%FunctionClauseError{arity: 4, half an hour later - Oh, I’m doing a GET request when I routed it as POST. I’m good at programmering! I swear! I’m smrt! Turns out Conn.assign/3 and conn.assigns are how you put things in a request - not Conn.put_private/3 like plug/session uses. Okay, I’ve got my module in the request, and the pid going into my KV calls WTF does this mean?!?! Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.298.0> exit with reason: {\\{:noproc, {GenServer, :call, [#PID<0.292.0>, bloody hours pass The pid is right bloody there! Logger.debug shows it’s passing in the same pid for every request! Maybe it’s keeping the pid around, but the process is actually dead? 
How do I figure that out? tries various things Know what’d be cool? Agent.is_alive? . Things that definitely don’t work: Process.get(pid_of_dead_process) Process.get(__MODULE__) Process.alive?(__MODULE__) Which is weird, since an Agent is a GenServer is a Process (as far as I can tell). This article on “Process ping-pong” was helpful. Finally figured out to use GenServer.whereis/1 , passing in __MODULE__ , and that will return nil if the proc is dead, and info if it’s alive. Turns out I don’t need my own plug at all: just init the Agent with the __MODULE__ name, and I can reference it by that, just like a GenServer. IT’S STILL SAYING :noproc ! JEEBUS! Okay, I guess remix doesn’t re-run Ferret.Router.init/1 when it reloads the code for the server. So when my Agent dies due to an ArgumentError or whatever, it never restarts and I get this :noproc crap. I’ll just manually restart the server - I don’t want to figure out supervisors right now. This seems like it should work, why doesn’t it work?Agent.get_and_update __MODULE__, &Map.merge(&1, dict) Is it doing a javascript async thing? Do I need to tell Plug to wait on the response to get_and_update ? Would using Agent.update and then Agent.get work? Frick, I dunno, how async are we getting here? All. the. examples. use a pid instead of a module name to reference the agent. How would I even tell plug to wait on an async call? Oh, frickin’! get_and_update/3 has to return a tuple , and there’s no function that does single-return-value-equals-new-state. I need a function that takes the new map, merges it with the existing state, then duplicates the new map to return, but get_and_update/3 ‘s function argument only receives the current state and doesn’t get the arguments. get_and_update/4 supposedly passes args, but you have to pass a Module & atom instead of a function. I couldn’t make that work, either. Does Elixir have closures? 
I mean, that wouldn’t make a lot of sense from a “pure functions only” perspective, but in Ruby it’d be like new_params = conn.body_paramsAgent.get_and_update do |state| new_state = Map.merge(state, new_params) [new_state, new_state]end …errr, whelp, no, that doesn’t work. The Elixir crash-course guide doesn’t mention closures, and I’m not getting how to do this from the examples. hours of fiddling uuuuugggghhhhhhhh functional currying monad closure pipe recursions are breaking my effing brain. You have to make your own curry, or use a library. This seems unnecessary for such a simple dang thing. Is there a difference between Tuple.duplicate(Map.merge(&1, dict), 2) and Map.merge(&1, dict) |> Tuple.duplicate(2) ?I dunno, neither one of those are working. What’s the difference between????? def myfunc do ... end ; &myfunc f = fn args -> stuff end ; &f &(do_stuff) Okay, this is what I want: &(Map.merge(&1, dict) |> Tuple.duplicate 2) Why is dict available inside this captured function definition? I dunno. BOOM OMG IT’S WORKING! Programming is so cool and I’m awesome at it and this is the best! Let’s git commit! Jeebus, I better write this crap down so I don’t forget it. Maybe someone else will find it useful. Wish I coulda Google’d this while I was futzing around. I’m gonna go murder lots of monsters with my necromancer while my brain cools off. Then hopefully come back and figure out: functions and captures pipe operator’s inner workings closures??? 
supervisors Links I used:Elixir Getting-Started GuideMaps: elixir-langLogging with LoggerProcesses & StateStatefulness in a Stateful Language (CodeShip)Processes to Hold StateWhen to use Processes in ElixirElixir Process Ping-PongUsing Agents in ElixirAgent - elixir-langConcurrency Abstractions in Elixir (CodeShip)GenServer name registration (hexdocs)GenServer.whereis - for named processesAgent.get_and_update (hexdocs) - hope you are good with currying: no way to pass args into the update function unless you can pass a module & atom (and that didn’t work for me)PlugHow to build a lightweight webhook endpoint with ElixirPlug (Elixir School) - intro / overviewPlug body_params - StackOverflowplug/session.ex - how do they get / store session state?Plug.Conn.assign/3 (hexdocs)Plug repo on GithubFunction CompositionCurrying and Partial Application in ElixirComposing Elixir FunctionsBreaking Up is Hard To DoFunction Currying in ElixirElixir Crash Course - partial function applicationsPartial Function Application in ElixirElixir vs Ruby vs JS: closures", "content_html": "I wanted to make a simple key-value server in Elixir - json in, json out, GET, POST, with an in-memory map. The point is to reinvent the wheel, and learn me some Elixir. My questions were: a) how do I build this without Phoenix and b) how do I persist state between requests in a functional language?
Learning new stuff is always painful, so this was frustrating at points and harder than I expected. But I want to emphasize that I did get it working, and do understand a lot more about how Elixir does things - the community posts and extensive documentation were great, and I didn’t have to bug anyone on StackOverflow or IRC or anything to figure all this out.
Here’s what the learning & development process sounded like from inside my head.
First, how do I make a simple JSON API without Phoenix? I tried several tutorials using Plug alone. Several of them were out of date / didn’t work. Finally found this one, which was up-to-date and got me going. Ferret was born!
How do I reload code when I change things without manually restarting? Poked around and found the remix app.
Now I can take JSON in, but how do I persist across requests? I think we need a subprocess or something? That’s what CodeShip says, anyhow.
Okay, I’ve got an Agent. So, where do I keep the agent PID so it’s reusable across requests?
Well, where the heck does Plug keep session data? [3] That should be in-memory by default, right? Quickly, to the source code!
Hrm, well, that doesn’t tell me a lot. Guess it’s abstracted out, and in a language I’m still learning.
Maybe I’ll make a separate plug to initialize the agent, then dump it into the request bag-of-data?
Pretty sure plug MyPlug, agent_thing: MyAgent.start_link will work. Can store that in my Plug’s options, then add it to Conn so it’s accessible inside requests.
Does a plug’s init/1 get called on every request, or just once? What about my Router’s init/1 ? Are things there memoized?
Guess I’ll assume the results are stored and passed in as the 2nd arg to call/2 in my plug.
Wait, what does start_link return?
14:15:15.422 [error] Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.335.0> exit with reason: \\{\\{\\%MatchError{term: [iix: {:ok, #PID<0.328.0>}]}
WHY DO I KEEP GETTING THIS?!
** (MatchError) no match of right hand side value: {:ok, #PID<0.521.0>}
(ferret) lib/plug/load_index.ex:10: Ferret.Plug.LoadIndex.init/1
figures out how to assign arguments
turns out [:ok, pid] and {:ok, pid} and %{\"ok\" => pid} are different things
futzes about trying various things to make that work
How do I log stuff, anyway? Time to learn Logger.
THE ROUTE IS RIGHT THERE WHAT THE HELL?!
14:29:45.127 [info] GET /put
14:29:45.129 [error] Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.716.0> exit with reason: \\{\\{\\%FunctionClauseError{arity: 4,
half an hour later - Oh, I’m doing a GET request when I routed it as POST. I’m good at programmering! I swear! I’m smrt!
Turns out Conn.assign/3 and conn.assigns are how you put things in a request - not Conn.put_private/3 like plug/session uses.
Okay, I’ve got my module in the request, and the pid going into my KV calls
WTF does this mean?!?!
Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.298.0> exit with reason: {\\{:noproc, {GenServer, :call, [#PID<0.292.0>,
bloody hours pass
The pid is right bloody there! Logger.debug shows it’s passing in the same pid for every request!
Maybe it’s keeping the pid around, but the process is actually dead? How do I figure that out? tries various things
Know what’d be cool? Agent.is_alive? . Things that definitely don’t work:
Process.get(pid_of_dead_process)
Process.get(__MODULE__)
Process.alive?(__MODULE__)
Which is weird, since an Agent is a GenServer is a Process (as far as I can tell). This article on “Process ping-pong” was helpful.
Finally figured out to use GenServer.whereis/1 , passing in __MODULE__ , and that will return nil if the proc is dead, and info if it’s alive.
Turns out I don’t need my own plug at all: just init the Agent with the __MODULE__ name, and I can reference it by that, just like a GenServer.
IT’S STILL SAYING :noproc ! JEEBUS!
Okay, I guess remix doesn’t re-run Ferret.Router.init/1 when it reloads the code for the server. So when my Agent dies due to an ArgumentError or whatever, it never restarts and I get this :noproc crap.
I’ll just manually restart the server - I don’t want to figure out supervisors right now.
This seems like it should work, why doesn’t it work?
Agent.get_and_update __MODULE__, &Map.merge(&1, dict)
Is it doing a javascript async thing? Do I need to tell Plug to wait on the response to get_and_update ?
Would using Agent.update and then Agent.get work? Frick, I dunno, how async are we getting here? All. the. examples. use a pid instead of a module name to reference the agent.
How would I even tell plug to wait on an async call?
Oh, frickin’! get_and_update/3 has to return a tuple , and there’s no function that does single-return-value-equals-new-state.
I need a function that takes the new map, merges it with the existing state, then duplicates the new map to return, but get_and_update/3 ‘s function argument only receives the current state and doesn’t get the arguments.
get_and_update/4 supposedly passes args, but you have to pass a Module & atom instead of a function. I couldn’t make that work, either.
Does Elixir have closures? I mean, that wouldn’t make a lot of sense from a “pure functions only” perspective, but in Ruby it’d be like
new_params = conn.body_params
Agent.get_and_update do |state|
  new_state = Map.merge(state, new_params)
  [new_state, new_state]
end
…errr, whelp, no, that doesn’t work.
The Elixir crash-course guide doesn’t mention closures, and I’m not getting how to do this from the examples.
hours of fiddling
uuuuugggghhhhhhhh functional currying monad closure pipe recursions are breaking my effing brain. You have to make your own curry, or use a library. This seems unnecessary for such a simple dang thing.
Is there a difference between Tuple.duplicate(Map.merge(&1, dict), 2) and Map.merge(&1, dict) |> Tuple.duplicate(2) ?
I dunno, neither one is working.
What’s the difference between?????
def myfunc do ... end ; &myfunc
f = fn args -> stuff end ; &f
&(do_stuff)
Okay, this is what I want: &(Map.merge(&1, dict) |> Tuple.duplicate 2)
Why is dict available inside this captured function definition? I dunno.
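(In hindsight: it’s because anonymous functions in Elixir are closures — they capture locals from the scope where they’re defined, same as Ruby blocks and lambdas. A rough Ruby sketch of what that capture is doing; the names dict and merge_and_dup are made up for illustration:)

```ruby
# dict lives in the enclosing scope, just like in the Elixir capture.
dict = { "color" => "red" }

# Like &(Map.merge(&1, dict) |> Tuple.duplicate(2)): take the current
# state, merge in the captured dict, and return the merged map twice
# (Agent.get_and_update wants a {reply, new_state} pair).
merge_and_dup = ->(state) do
  new_state = state.merge(dict)
  [new_state, new_state]
end

p merge_and_dup.call({ "size" => "large" })
# => [{"size"=>"large", "color"=>"red"}, {"size"=>"large", "color"=>"red"}]
```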
BOOM OMG IT’S WORKING! Programming is so cool and I’m awesome at it and this is the best!
Let’s git commit!
Jeebus, I better write this crap down so I don’t forget it. Maybe someone else will find it useful. Wish I coulda Google’d this while I was futzing around.
I’m gonna go murder lots of monsters with my necromancer while my brain cools off. Then hopefully come back and figure out:
Statefulness in a Stateful Language (CodeShip)
When to use Processes in Elixir
Concurrency Abstractions in Elixir (CodeShip)
GenServer name registration (hexdocs)
GenServer.whereis - for named processes
Agent.get_and_update (hexdocs) - hope you are good with currying: no way to pass args into the update function unless you can pass a module & atom (and that didn’t work for me)
How to build a lightweight webhook endpoint with Elixir
Plug (Elixir School) - intro / overview
Plug body_params - StackOverflow
plug/session.ex - how do they get / store session state?
Currying and Partial Application in Elixir
Elixir Crash Course - partial function applications
Partial Function Application in Elixir
Elixir vs Ruby vs JS: closures
", "url": "https://atevans.com/2017/07/07/key-value-api-elixir.html", , "date_published": "2017-07-07T00:00:00+00:00", "date_modified": "2017-07-07T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/06/15/find-wide-open-aws-security-groups-quickie-script.html", "title": "Script: Wide open AWS sec groups", "summary": null, "content_text": "Was looking at our AWS configuration audit in Threat Stack today. One issue it highlighted was that some of our security groups had too many ports open. My first guess was that there were vestigial “default” groups created from hackier days of adding things from the console.But before I could go deleting them all, I wanted to see if any were in use. I’m a lazy, lazy man, so I’m not going to click around and read stuff to figure it out. Scripting to the rescue!#!/usr/bin/env rubyrequire 'bundler/setup'require 'aws-sdk'require 'json'client = Aws::EC2::Client.newgroups = client.describe_security_groupsSecGroup = Struct.new(:open_count, :group_name, :group_id) do def to_json(*a) self.to_h.to_json(*a) endendopen_counts = groups.security_groups.map do |group| counts = group.ip_permissions.map {|ip| ip.to_port.to_i - ip.from_port.to_i + 1 } SecGroup.new counts.inject(&:+), group.group_name, group.group_idendwide_opens = open_counts.select {|oc| oc.open_count > 1000 }if wide_opens.empty? puts \"No wide-open security groups! 
Yay!\" exit(0)endputs \"Found some wide open security groups:\"puts JSON.pretty_generate(wide_opens)Boxen = Struct.new(:instance_id, :group, :tags) do def to_json(*a) self.to_h.to_json(*a) endendinstances_coll = wide_opens.map do |group| resp = client.describe_instances( dry_run: false, filters: [ { name: \"instance.group-id\", values: [group.group_id], } ] ) resp.reservations.map do |r| r.instances.map do |i| Boxen.new(i.instance_id, group, i.tags) end endendinstances = instances_coll.flattenputs \"Being used by the following instances:\"puts JSON.pretty_generate(instances)Something to throw in the ‘ole snippets folder. Maybe it’ll help you, too!", "content_html": "Was looking at our AWS configuration audit in Threat Stack today. One issue it highlighted was that some of our security groups had too many ports open. My first guess was that there were vestigial “default” groups created from hackier days of adding things from the console.
But before I could go deleting them all, I wanted to see if any were in use. I’m a lazy, lazy man, so I’m not going to click around and read stuff to figure it out. Scripting to the rescue!
#!/usr/bin/env ruby
require 'bundler/setup'
require 'aws-sdk'
require 'json'

client = Aws::EC2::Client.new
groups = client.describe_security_groups

SecGroup = Struct.new(:open_count, :group_name, :group_id) do
  def to_json(*a)
    self.to_h.to_json(*a)
  end
end

open_counts = groups.security_groups.map do |group|
  counts = group.ip_permissions.map {|ip| ip.to_port.to_i - ip.from_port.to_i + 1 }
  SecGroup.new counts.inject(&:+), group.group_name, group.group_id
end

wide_opens = open_counts.select {|oc| oc.open_count > 1000 }

if wide_opens.empty?
  puts \"No wide-open security groups! Yay!\"
  exit(0)
end

puts \"Found some wide open security groups:\"
puts JSON.pretty_generate(wide_opens)

Boxen = Struct.new(:instance_id, :group, :tags) do
  def to_json(*a)
    self.to_h.to_json(*a)
  end
end

instances_coll = wide_opens.map do |group|
  resp = client.describe_instances(
    dry_run: false,
    filters: [
      {
        name: \"instance.group-id\",
        values: [group.group_id],
      }
    ]
  )
  resp.reservations.map do |r|
    r.instances.map do |i|
      Boxen.new(i.instance_id, group, i.tags)
    end
  end
end

instances = instances_coll.flatten
puts \"Being used by the following instances:\"
puts JSON.pretty_generate(instances)

Something to throw in the ‘ole snippets folder. Maybe it’ll help you, too!
", "url": "https://atevans.com/2017/06/15/find-wide-open-aws-security-groups-quickie-script.html", , "date_published": "2017-06-15T00:00:00+00:00", "date_modified": "2017-06-15T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/06/09/roundup-open-offices-are-terrible.html", "title": "Roundup: Open Offices are Terrible", "summary": null, "content_text": "Every now and then I have to explain why I like working remote, why I don’t like offices, and why I hate open offices in particular.I’m an introvert at heart. I can be social, I do like hanging out with people, and I get restless and depressed if I’m alone at home for a week or two. But I have to manage my extroversion – make sure I allocate sufficient time to quiet, introverted activities. Reading books, single-player games, hacking on side projects, etc.To do great work, I need laser-like focus. I need multi-hour uninterrupted blocks of time. Many engineers feel the same way - see Paul Graham’s oft-cited “Maker’s Schedule” essay.Open offices are the worst possible fucking environment for me.Loud noises at random intervals make me jump out of my skin - and I don’t even have PTSD or anything. I need loud music delivered via headphones to get around the noise inherent to open offices.Constant movement in my peripheral vision is a major distraction. I often have to double-check to see if someone is trying to talk to me because of the aforementioned headphones. I message people on Slack to see if they have time to chat, but plenty of people think random shoulder-taps are great.Privacy is important to me. People looking over my shoulder at what I’m doing makes me itch. I feel like people are judging me in real-time based on my ugly, unfinished work. Even if they’re talking to someone else, I get paranoid and want to know if they’re looking.If you follow Reddit, Hacker News, or any tech or programming related forums, you’ll see hate-ons for open offices pop up every month or two. 
Here’s a summary.Link Roundup:PeopleWare: Productive Projects and Teams (3rd Edition) (originally published: 1987)Washington Post: Google got it wrong. The open-office trend is destroying the workplace.Fast Company: Offices For All! Why Open-Office Layouts Are Bad For Employees, Bosses, And Productivity BBC: Why open offices are bad for us [Hacker News thread] CNBC: 58% of high-performance employees say they need more quiet work spacesMental Floss: Working Remotely Makes You Happier and More ProductiveNathan Marz: The inexplicable rise of open floor plans in tech companies (creator of Apache Storm) [Hacker News thread]Various. Reddit. Threads. Complaining.Slashdot. Hates. Them. Too.", "content_html": "Every now and then I have to explain why I like working remote, why I don’t like offices, and why I hate open offices in particular.
I’m an introvert at heart. I can be social, I do like hanging out with people, and I get restless and depressed if I’m alone at home for a week or two. But I have to manage my extroversion – make sure I allocate sufficient time to quiet, introverted activities. Reading books, single-player games, hacking on side projects, etc.
To do great work, I need laser-like focus. I need multi-hour uninterrupted blocks of time. Many engineers feel the same way - see Paul Graham’s oft-cited “Maker’s Schedule” essay.
Open offices are the worst possible fucking environment for me.
Loud noises at random intervals make me jump out of my skin - and I don’t even have PTSD or anything. I need loud music delivered via headphones to get around the noise inherent to open offices.
Constant movement in my peripheral vision is a major distraction. I often have to double-check to see if someone is trying to talk to me because of the aforementioned headphones. I message people on Slack to see if they have time to chat, but plenty of people think random shoulder-taps are great.
Privacy is important to me. People looking over my shoulder at what I’m doing makes me itch. I feel like people are judging me in real-time based on my ugly, unfinished work. Even if they’re talking to someone else, I get paranoid and want to know if they’re looking.
If you follow Reddit, Hacker News, or any tech or programming related forums, you’ll see hate-ons for open offices pop up every month or two. Here’s a summary.
PeopleWare: Productive Projects and Teams (3rd Edition) (originally published: 1987)
Washington Post: Google got it wrong. The open-office trend is destroying the workplace.
BBC: Why open offices are bad for us [Hacker News thread]
CNBC: 58% of high-performance employees say they need more quiet work spaces
Mental Floss: Working Remotely Makes You Happier and More Productive
Nathan Marz: The inexplicable rise of open floor plans in tech companies (creator of Apache Storm) [Hacker News thread]
Various. Reddit. Threads. Complaining.
", "url": "https://atevans.com/2017/06/09/roundup-open-offices-are-terrible.html", , "date_published": "2017-06-09T00:00:00+00:00", "date_modified": "2017-06-09T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/05/19/link-roundup.html", "title": "Link Roundup: Friday May 19, 2017", "summary": null, "content_text": "The hacking group that leaked NSA secrets claims it has data on foreign nuclear programs - The Washington Post - We are officially in the cyberpunk era of information warfareURL Validation - A guide and example regex for one of those surprisingly difficult problemsCookies are Not Accepted - New York Times - “Protecting Your Digital Life in 8 Easy Steps”, none of which is “keep your software updated” ~ @pinboardadriancooney/console.image: The one thing Chrome Dev Tools didn’t need. - console.image(\"http://i.imgur.com/hv6pwkb.png\"); (yes, I added a stupid easter egg)PG MatViews for better performance in Rails 🚀 - Postgresql Materialized Views for fast analytics queriesAn Abridged Cartoon Introduction To WebAssembly – Smashing MagazineFixing Unicode for Ruby Developers – DaftCode Blog - Another surprisingly difficult problem: storing, reading, and interpreting text in filesStrength.js - Password strength indicator w/ jQueryObnoxious.css - Animations for the strong of heart, and weak of mind", "content_html": "The hacking group that leaked NSA secrets claims it has data on foreign nuclear programs - The Washington Post - We are officially in the cyberpunk era of information warfare
URL Validation - A guide and example regex for one of those surprisingly difficult problems
Cookies are Not Accepted - New York Times - “Protecting Your Digital Life in 8 Easy Steps”, none of which is “keep your software updated” ~ @pinboard
adriancooney/console.image: The one thing Chrome Dev Tools didn’t need. - console.image(\"http://i.imgur.com/hv6pwkb.png\"); (yes, I added a stupid easter egg)
PG MatViews for better performance in Rails 🚀 - Postgresql Materialized Views for fast analytics queries
An Abridged Cartoon Introduction To WebAssembly – Smashing Magazine
Fixing Unicode for Ruby Developers – DaftCode Blog - Another surprisingly difficult problem: storing, reading, and interpreting text in files
Strength.js - Password strength indicator w/ jQuery
Obnoxious.css - Animations for the strong of heart, and weak of mind
", "url": "https://atevans.com/2017/05/19/link-roundup.html", , "date_published": "2017-05-19T00:00:00+00:00", "date_modified": "2017-05-19T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/05/10/railsconf-2017.html", "title": "RailsConf 2017", "summary": null, "content_text": "That’s a wrap! I miss it already. RailsConf is a wonderful conference, and I’d encourage any Ruby and/or Rails engineer to go.I don’t think I went to any conferences before joining Hired. We’ve been a sponsor for all three RailsConf’s since I joined in 2014, and I’ve gone every year. The company values career development, and going to conferences is part of that. We are a Rails shop, our founders are hardcore Ruby junkies, and we believe in giving back to the community for all the things it’s done for us. Of course, as a tech hiring marketplace, it makes business sense as well.I gave Hired’s sponsor talk this year - my first time speaking at a conference, and a big ‘ole check off the bucket list. I’d love to do it again. I’d love to give the same talk again for meetup groups or something - I learned a lot from having a “real” audience who are neither coworkers nor the mirror at home. It’d probably be a much better talk with a few iterations.I went through two of our open source libraries from a teamwork and technical perspective. This post will get a link to it once ConFreaks puts it up.Developer AnxietyThis seemed to be a major theme of the conf overall. DHH’s keynote talked about the FUD and the shiny-new-thing treadmill that prevents us from putting roots down in the community of a language & ecosystem. Searls’ keynote talked about how many of his coding decisions are driven by fear of familiar problems. There was a panel on spoon theory - which applies more generally than Christine Miserandino’s personal example.Studies of anxiety and stress in development seem to indicate that anxiety is bad for productivity. 
Anxiety and stress impair focus, shrink working memory, and hurt creativity - which are all necessary for doing good work. These studies are marred by small sample sizes, poor methodology, and the fact that we generally don’t know what the hell “productivity” even means for developers. But the outcomes seem obvious intuitively.It would behoove us to figure out how to reduce the overall anxiety in our industry. May is Mental Health Awareness Month in 2017. I’ve seen a lot of folks talking about Open Source Mental Illness, which seems like a great organization. There’s not going to be a silver bullet, it’ll take a lot of effort to educate, de-stigmatize, and work toward solutions. At least talking about it is a good start.Working TogetherLots of talks dealt with empathy, teamwork, witholding judgement, and team dynamics. Searls had a quotable line - I’ll paraphrase it as: “When I see people code in what I think is a bad way, I try to have empathy - I would hate for someone else to tell me I couldn’t code my favorite way, so I can put myself in their shoes.”The Beyond SOLID talk discussing the continued divide between “business” and development. Haseeb Qureshi countered DHH’s pro-tribalism, saying it’s a barrier to communication that prevents developers from converging on “optimal” development. Joe Mastey’s talk on legacy code discussed ways to build team momentum and reduce fear of codebase dragons. Several talks covered diversity, where implicit bias can shut down communication and empathy right quick.Working together to build things is a huge and complex process, and there’s no overall takeaway to be had here. Training ourselves in empathy and improving our communications are key developments that seemed to be a common thread.I didn’t see a lot of talk about the organizations or structures affecting how we work together. Definitely something I’d like to hear more about - particularly with examples of structural change in organizations, what worked, and what didn’t. 
How do you balance PMs vs EMs? Are those even the right roles? How does it affect an org to have a C-level exec who can code?Some Technical StuffThere were way fewer “do X in Y minutes” talk this year, for which I am greatful. That sort of thing can be summed up better in a blog post, and frankly hypes up new tech without actually teaching much. There were more “deep dive” talks, a few “tips for beginners” talks, and some valuable-looking workshops. I didn’t go to many of these, but it seemed like a good mix.Wrap-UpIt was a great conference, and I’d love to go back next year. I’d like to qualify for a non-sponsor talk some time, but I should probably act more locally first - perhaps having a few live iterations beforehand would improve the big-audience presentation.If you’re a Rails developer, or a Ruby-ist of any sort, I’d say it’s worth the trip. There may be scholarships available if you can’t go on a company’s dime - worth a shot.", "content_html": "That’s a wrap! I miss it already. RailsConf is a wonderful conference, and I’d encourage any Ruby and/or Rails engineer to go.
I don’t think I went to any conferences before joining Hired. We’ve been a sponsor for all three RailsConf’s since I joined in 2014, and I’ve gone every year. The company values career development, and going to conferences is part of that. We are a Rails shop, our founders are hardcore Ruby junkies, and we believe in giving back to the community for all the things it’s done for us. Of course, as a tech hiring marketplace, it makes business sense as well.
I gave Hired’s sponsor talk this year - my first time speaking at a conference, and a big ‘ole check off the bucket list. I’d love to do it again. I’d love to give the same talk again for meetup groups or something - I learned a lot from having a “real” audience who are neither coworkers nor the mirror at home. It’d probably be a much better talk with a few iterations.
I went through two of our open source libraries from a teamwork and technical perspective. This post will get a link to it once ConFreaks puts it up.
This seemed to be a major theme of the conf overall. DHH’s keynote talked about the FUD and the shiny-new-thing treadmill that prevents us from putting roots down in the community of a language & ecosystem. Searls’ keynote talked about how many of his coding decisions are driven by fear of familiar problems. There was a panel on spoon theory - which applies more generally than Christine Miserandino’s personal example.
Studies of anxiety and stress in development seem to indicate that anxiety is bad for productivity. Anxiety and stress impair focus, shrink working memory, and hurt creativity - which are all necessary for doing good work. These studies are marred by small sample sizes, poor methodology, and the fact that we generally don’t know what the hell “productivity” even means for developers. But the outcomes seem obvious intuitively.
It would behoove us to figure out how to reduce the overall anxiety in our industry. May is Mental Health Awareness Month in 2017. I’ve seen a lot of folks talking about Open Source Mental Illness, which seems like a great organization. There’s not going to be a silver bullet, it’ll take a lot of effort to educate, de-stigmatize, and work toward solutions. At least talking about it is a good start.
Lots of talks dealt with empathy, teamwork, withholding judgement, and team dynamics. Searls had a quotable line - I’ll paraphrase it as: “When I see people code in what I think is a bad way, I try to have empathy - I would hate for someone else to tell me I couldn’t code my favorite way, so I can put myself in their shoes.”
The Beyond SOLID talk discussed the continued divide between “business” and development. Haseeb Qureshi countered DHH’s pro-tribalism, saying it’s a barrier to communication that prevents developers from converging on “optimal” development. Joe Mastey’s talk on legacy code discussed ways to build team momentum and reduce fear of codebase dragons. Several talks covered diversity, where implicit bias can shut down communication and empathy right quick.
Working together to build things is a huge and complex process, and there’s no single takeaway to be had here. Training ourselves in empathy and improving how we communicate seemed to be the common thread.
I didn’t see a lot of talk about the organizations or structures affecting how we work together. Definitely something I’d like to hear more about - particularly with examples of structural change in organizations, what worked, and what didn’t. How do you balance PMs vs EMs? Are those even the right roles? How does it affect an org to have a C-level exec who can code?
There were way fewer “do X in Y minutes” talks this year, for which I am grateful. That sort of thing can be summed up better in a blog post, and frankly hypes up new tech without actually teaching much. There were more “deep dive” talks, a few “tips for beginners” talks, and some valuable-looking workshops. I didn’t go to many of these, but it seemed like a good mix.
It was a great conference, and I’d love to go back next year. I’d like to qualify for a non-sponsor talk some time, but I should probably act more locally first - perhaps having a few live iterations beforehand would improve the big-audience presentation.
If you’re a Rails developer, or a Ruby-ist of any sort, I’d say it’s worth the trip. There may be scholarships available if you can’t go on a company’s dime - worth a shot.
", "url": "https://atevans.com/2017/05/10/railsconf-2017.html", , "date_published": "2017-05-10T00:00:00+00:00", "date_modified": "2017-05-10T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/04/24/deadlines.html", "title": "Deadlines", "summary": null, "content_text": "I was listening to the Liftoff podcast episode about the Voyager missions. They pointed out that it launched in 1977 - well, NASA put it best: This layout of Jupiter, Saturn, Uranus and Neptune, which occurs about every 175 years, allows a spacecraft on a particular flight path to swing from one planet to the next without the need for large onboard propulsion systems.That’s a hard deadline. “If your code isn’t bug-free by August 20, we’ll have to wait 175 years for the next chance.” No pressure.In startups, I often hear of “hard” deadlines like “we have to ship by Friday so we can keep on schedule.” Or worse, “we promised the customer we’d have this by next Tuesday.” To meet these deadlines, managers and teams will push hard. Engineers will work longer hours in crunch mode. Code reviews will be lenient. Testing may suffer. New code might be crammed into existing architecture because it’s faster that way, in the short term. Coders will burn out.These are not deadlines, they’re bullshit. Companies are generally not staring down a literal once-a-century event which, if missed, will wind down the entire venture.If you’re consistently rushed and not writing your best code, you’re not learning and improving. The company is getting a crappier codebase that will slow down and demoralize the team, and engineers are stagnating. Demand a reason for rush deadlines, and don’t accept “…well, because the sprint’s almost over…”", "content_html": "I was listening to the Liftoff podcast episode about the Voyager missions. They pointed out that it launched in 1977 - well, NASA put it best:
This layout of Jupiter, Saturn, Uranus and Neptune, which occurs about every 175 years, allows a spacecraft on a particular flight path to swing from one planet to the next without the need for large onboard propulsion systems.
That’s a hard deadline. “If your code isn’t bug-free by August 20, we’ll have to wait 175 years for the next chance.” No pressure.
In startups, I often hear of “hard” deadlines like “we have to ship by Friday so we can keep on schedule.” Or worse, “we promised the customer we’d have this by next Tuesday.” To meet these deadlines, managers and teams will push hard. Engineers will work longer hours in crunch mode. Code reviews will be lenient. Testing may suffer. New code might be crammed into existing architecture because it’s faster that way, in the short term. Coders will burn out.
These are not deadlines, they’re bullshit. Companies are generally not staring down a literal once-a-century event which, if missed, will wind down the entire venture.
If you’re consistently rushed and not writing your best code, you’re not learning and improving. The company is getting a crappier codebase that will slow down and demoralize the team, and engineers are stagnating. Demand a reason for rush deadlines, and don’t accept “…well, because the sprint’s almost over…”
", "url": "https://atevans.com/2017/04/24/deadlines.html", , "date_published": "2017-04-24T00:00:00+00:00", "date_modified": "2017-04-24T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/04/18/ruby-inheritence-reflection.html", "title": "Ruby inheritence reflection", "summary": null, "content_text": "Stumbled on something interesting - checking a class to see if it “is a” or is “kind of” a parent class didn’t work. Checking using the inheritence operator did work, as well as looking at the list of ancestors.irb(main)> class MyMailer < ActionMailer::Base ; end=> nilirb(main)> MyMailer.is_a? Class=> trueirb(main)> MyMailer.kind_of? Class=> trueirb(main)> MyMailer.is_a? ActionMailer::Base=> falseirb(main)> MyMailer.kind_of? ActionMailer::Base=> falseirb(main)> a = MyMailer.new=> #<MyMailer:0x007fa6d4ce9938>irb(main)> a.is_a? ActionMailer::Base=> trueirb(main)> a.kind_of? ActionMailer::Base=> trueirb(main)> !!(MyMailer < ActionMailer::Base)=> trueirb(main)> !!(MyMailer < ActiveRecord::Base)=> falseirb(main)> MyMailer.ancestors.include? ActionMailer::Base=> trueI suppose .is_a? and .kind_of? are designed as instance-level methods on Object. Classes inherit from Module, which inherits from Object, so a class is technically an instance. These methods will look at what the class is an instance of - pretty much always Class - and then check the ancestors of that.tl;dr when trying to find out what kind of thing a class is, use the inheritence operator or look at the array of ancestors. Don’t use the methods designed for checking the type of an instance of something.", "content_html": "Stumbled on something interesting - checking a class to see if it “is a” or is “kind of” a parent class didn’t work. Checking using the inheritence operator did work, as well as looking at the list of ancestors.
irb(main)> class MyMailer < ActionMailer::Base ; end
=> nil
irb(main)> MyMailer.is_a? Class
=> true
irb(main)> MyMailer.kind_of? Class
=> true
irb(main)> MyMailer.is_a? ActionMailer::Base
=> false
irb(main)> MyMailer.kind_of? ActionMailer::Base
=> false
irb(main)> a = MyMailer.new
=> #<MyMailer:0x007fa6d4ce9938>
irb(main)> a.is_a? ActionMailer::Base
=> true
irb(main)> a.kind_of? ActionMailer::Base
=> true
irb(main)> !!(MyMailer < ActionMailer::Base)
=> true
irb(main)> !!(MyMailer < ActiveRecord::Base)
=> false
irb(main)> MyMailer.ancestors.include? ActionMailer::Base
=> true
I suppose .is_a? and .kind_of? are designed as instance-level methods on Object. Classes inherit from Module, which inherits from Object, so a class is technically an instance. These methods will look at what the class is an instance of - pretty much always Class - and then check the ancestors of that.
tl;dr when trying to find out what kind of thing a class is, use the inheritance operator or look at the array of ancestors. Don’t use the methods designed for checking the type of an instance of something.
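To make the behavior concrete outside of Rails, here is a minimal sketch with plain classes (Animal and Dog are made-up names for illustration, not from the original session):

```ruby
# A class is itself an *instance* of Class, so instance-type checks
# (.is_a? / .kind_of?) look at Class's ancestors, not the class's own.
class Animal; end
class Dog < Animal; end

Dog.is_a?(Class)                # => true  (Dog is an instance of Class)
Dog.is_a?(Animal)               # => false (asks the wrong question)

# The inheritance operator and the ancestor list answer the real question:
Dog < Animal                    # => true
Dog.ancestors.include?(Animal)  # => true

# Instances of the class behave as you'd expect:
Dog.new.is_a?(Animal)           # => true
```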
", "url": "https://atevans.com/2017/04/18/ruby-inheritence-reflection.html", , "date_published": "2017-04-18T00:00:00+00:00", "date_modified": "2017-04-18T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2017/04/07/link-roundup.html", "title": "Link Roundup: Friday April 7, 2017", "summary": null, "content_text": "I love Mike Gunderloy’s “Double Shot” posts on his blog A Fresh Cup. Inspired by that, here’s a link roundup of some stuff I’ve read lately, mainly from my Pinboard.How to monitor Redis performance metrics - Guess what I’ve been up to for the last week or two?redis-rdb-tools - Python tool to parse Redis dump.rdb files, analyze memory, and export data to JSON. For some advanced analysis, load the exported JSON into Postgres and run some queries.redis-memory-analyzer - Redis memory profiler to find the RAM bottlenecks through scaning key space in real time and aggregate RAM usage statistic by patterns.LastPass password manager suffers ‘major’ security problem - They’ve had quite a few of these, recently.0.30000000000000004.com - Cheat sheet for floating point stuff. Also, a hilarious domain name.Subgraph OS - An entire OS built on TOR, advanced firewalls, and containerizing everything. Meant to be secure.", "content_html": "I love Mike Gunderloy’s “Double Shot” posts on his blog A Fresh Cup. Inspired by that, here’s a link roundup of some stuff I’ve read lately, mainly from my Pinboard.
How to monitor Redis performance metrics - Guess what I’ve been up to for the last week or two?
redis-rdb-tools - Python tool to parse Redis dump.rdb files, analyze memory, and export data to JSON. For some advanced analysis, load the exported JSON into Postgres and run some queries.
redis-memory-analyzer - Redis memory profiler to find the RAM bottlenecks through scanning key space in real time and aggregating RAM usage statistics by patterns.
LastPass password manager suffers ‘major’ security problem - They’ve had quite a few of these, recently.
0.30000000000000004.com - Cheat sheet for floating point stuff. Also, a hilarious domain name.
Subgraph OS - An entire OS built on TOR, advanced firewalls, and containerizing everything. Meant to be secure.
", "url": "https://atevans.com/2017/04/07/link-roundup.html", , "date_published": "2017-04-07T00:00:00+00:00", "date_modified": "2017-04-07T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2016/10/21/keep-an-eye-on-everything-via-slack.html", "title": "Keep an Eye on Everything via Slack", "summary": null, "content_text": "When I get paged, my first step is to calmly(-ish) asses the situation. What is the problem? Our app metrics have, in many cases, disappeared. Identify and confirm it: yep, bunch of dashboards are gone.Usually I start debugging at this point. What are the possible reasons for that? Did someone deploy a change? Maybe an update to the metrics libraries? Nope, too early: today’s deploy logs are empty. Did app servers get scaled up, which might cause rate-limiting? Nah, all looks normal. Did our credentials get changed? Doesn’t look like it, none of our tokens have been revoked.All of that would have been a waste of time. Our stats aggregation & dashboard service, Librato, was affected by a wide-scale DNS outage. Somebody DDoS’d Dyn, one of the largest DNS providers in the US. Librato had all kinds of problems, because their DNS servers were unavailable.We figured that out almost immediately, without having to look for any potential problems with our system. It’s easy for me to forget to check status pages before diving into an incident, but I’ve found a way to make it easier. I made a channel in our Slack called #statuspages . Slack has a nifty slash command for subscribing to RSS feeds within a channel. Just type/feed subscribe http://status.whatever.com/feed-url.rssand boom! Any incident updates will appear as public posts in the channel.Lots of services we rely on use StatusPage.io, and they provide RSS and Atom feeds for incidents and updates. The status pages for Heroku and AWS also offer RSS feeds - one for each service and region in AWS’ case. 
I subscribed to everything that might affect site and app functionality, as well as development & business operations - Github, npm, rubygems, Atlassian (Jira / Confluence / etc), Customer.io etc.Every time one of these services reports an issue, it appears almost immediately in the channel. When something’s up with our app, a quick check in #statuspages can abort the whole debugging process. It can also be an early warning system: when a hosted service says they’re experiencing “delayed connections” or “intermittent issues,” you can be on guard in case that service goes down entirely.Unfortunately not all status pages have an RSS feed. Salesforce doesn’t provide one. Any status page powered by Pingdom doesn’t either: it’s not a feature they provide. I can’t add Optimize.ly because they use Pingdom. C’mon y’all - get on it!I’ve “pinned” links to these dashboards in #statuspages so they’re at least easy to find. Theoretically, I could use a service like IFTTT to get notified whenever the page changes - I haven’t tried, but I’m betting that would be too noisy to be worth it. Some quick glue code in our chat bot to scrape the page would work, but then the code has to be maintained, and who has time?We currently have 45 feeds in #statuspages . It’s kind of a disaster today with all the DNS issues, but it certainly keeps us up-to-date. Thankfully Slack isn’t down for us - that’s a whole different dumpster fire. But I could certainly use an RSS service as an alternative, such as my favorite Feedbin. That’s the great thing about RSS: the old-school style of blogging really represented the open, decentralized web.I’m not the first person to think of this, I’m sure, but hopefully it will help & inspire some of you fine folks out there.", "content_html": "When I get paged, my first step is to calmly(-ish) assess the situation. What is the problem? Our app metrics have, in many cases, disappeared. Identify and confirm it: yep, bunch of dashboards are gone.
Usually I start debugging at this point. What are the possible reasons for that? Did someone deploy a change? Maybe an update to the metrics libraries? Nope, too early: today’s deploy logs are empty. Did app servers get scaled up, which might cause rate-limiting? Nah, all looks normal. Did our credentials get changed? Doesn’t look like it, none of our tokens have been revoked.
All of that would have been a waste of time. Our stats aggregation & dashboard service, Librato, was affected by a wide-scale DNS outage. Somebody DDoS’d Dyn, one of the largest DNS providers in the US. Librato had all kinds of problems, because their DNS servers were unavailable.
We figured that out almost immediately, without having to look for any potential problems with our system. It’s easy for me to forget to check status pages before diving into an incident, but I’ve found a way to make it easier. I made a channel in our Slack called #statuspages . Slack has a nifty slash command for subscribing to RSS feeds within a channel. Just type
/feed subscribe http://status.whatever.com/feed-url.rss
and boom! Any incident updates will appear as public posts in the channel.
Lots of services we rely on use StatusPage.io, and they provide RSS and Atom feeds for incidents and updates. The status pages for Heroku and AWS also offer RSS feeds - one for each service and region in AWS’ case. I subscribed to everything that might affect site and app functionality, as well as development & business operations - Github, npm, rubygems, Atlassian (Jira / Confluence / etc), Customer.io etc.
Every time one of these services reports an issue, it appears almost immediately in the channel. When something’s up with our app, a quick check in #statuspages can abort the whole debugging process. It can also be an early warning system: when a hosted service says they’re experiencing “delayed connections” or “intermittent issues,” you can be on guard in case that service goes down entirely.
Unfortunately not all status pages have an RSS feed. Salesforce doesn’t provide one. Any status page powered by Pingdom doesn’t either: it’s not a feature they provide. I can’t add Optimize.ly because they use Pingdom. C’mon y’all - get on it!
I’ve “pinned” links to these dashboards in #statuspages so they’re at least easy to find. Theoretically, I could use a service like IFTTT to get notified whenever the page changes - I haven’t tried, but I’m betting that would be too noisy to be worth it. Some quick glue code in our chat bot to scrape the page would work, but then the code has to be maintained, and who has time?
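For services that do publish a feed, the glue can be tiny. Here's a hedged sketch using Ruby's bundled rss library; the inline XML is a made-up stand-in for a real status page's feed, which you would normally fetch from its feed URL:

```ruby
require "rss"

# Hypothetical StatusPage-style RSS feed, inlined so the example runs
# offline. In practice this string would come from an HTTP GET of the
# service's feed URL.
feed_xml = <<~XML
  <?xml version="1.0" encoding="UTF-8"?>
  <rss version="2.0">
    <channel>
      <title>Example Service Status</title>
      <link>https://status.example.com</link>
      <description>Incident history</description>
      <item>
        <title>Investigating delayed connections</title>
        <pubDate>Fri, 21 Oct 2016 12:00:00 +0000</pubDate>
      </item>
    </channel>
  </rss>
XML

# Parse the feed and print each incident with its timestamp.
feed = RSS::Parser.parse(feed_xml)
feed.items.each { |item| puts "#{item.pubDate}: #{item.title}" }
```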
We currently have 45 feeds in #statuspages . It’s kind of a disaster today with all the DNS issues, but it certainly keeps us up-to-date. Thankfully Slack isn’t down for us - that’s a whole different dumpster fire. But I could certainly use an RSS service as an alternative, such as my favorite Feedbin. That’s the great thing about RSS: the old-school style of blogging really represented the open, decentralized web.
I’m not the first person to think of this, I’m sure, but hopefully it will help & inspire some of you fine folks out there.
", "url": "https://atevans.com/2016/10/21/keep-an-eye-on-everything-via-slack.html", , "date_published": "2016-10-21T00:00:00+00:00", "date_modified": "2016-10-21T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2016/05/11/overcoming-rails-doom-and-gloom.html", "title": "Overcoming Rails Doom and Gloom", "summary": null, "content_text": "[Updated May 24, 2016: now with more salt]There’s been some chatter about how Ruby on Rails is dying / doomed / awful.Make Ruby Great Again …can a programming language continue to thrive even after its tools and core libraries are mostly finished? What can the community do to foster continued growth in such an environment? Whose job will it be?Rails is Yesterday’s Software We need new thinking, not just repackaging of the same old failures. I should be spending time writing code, not debugging a haystack of mutable, dependency-wired mess.My Time with Rails is Up As a result of 9 freaking years of working with Rails and contributing like hell to many ruby OSS projects, I’ve given up. I don’t believe anything good can happen with Rails. This is my personal point of view, but many people share the same feelings.Ruby in decline… The significance of the drop in May and the leveling of is that “Ruby does not become the ‘next big programming language’”.Whoops! That last one is from 2007. To summarize the various comments I’ve seen on Reddit, Hacker News, blogs, articles, and talks lately, here are some examples stuffed with the finest straw:Rails Doesn’t ScaleThis canard hasn’t aged well. My uninformed college-student argument in 2006 was: “You don’t scale your language or framework, you scale your application.” I had never scaled anything back then, but over ten years later I still agree with young-dumb-and-loud college me.Your language or framework is rarely the bottleneck: your database, network structure, or background services are where things will slow down. 
ActionCable may or may not get two million plus connections per box like a trivial Elixir app, but you can always deploy more app servers and workers. And I’d worry those Elixir connections will end up waiting on Redis or Postgres or something else - your language or framework isn’t a magic bullet.You can also extract services when Ruby and Rails aren’t the right tools for the job. Many Ruby gems do this by binding to native C extensions. Extracting a service for machine learning models can take advantage of Python’s different memory model and better ML libraries. Your CRUD app or mobile API doesn’t have to do it all.Ruby is a Messy LanguageIn Ruby you can re-open any class from anywhere. You can access any object’s private methods. Duck typing is everywhere. People make weird DSLs like RSpec. Rails is chock-full of magic in routing, database table names, callbacks, and mysterious inheritance. Ruby and Rails are easy, but they’re not clear or simple.This blast from the past is a 2007 argument. Since then we’ve learned tons about how to write expressive-but-clear Ruby. Rails isn’t going to keep you safe by default, but you can and should try to write clear Ruby on top of it. I’ve found the following super helpful when refactoring and designing code: Sandi Metz’s Practical Object-Oriented Design in Ruby Her and Katrina Owen’s new book 99 Bottles of OOP Searls’ How to Stop Hating Your Tests Trailblazer Architecture for Rails The late Jim Weirich’s Decoupling from Rails talkSolnic in particular felt like his attempts to reduce complexity, tight coupling, and shenanigans in Rails were actively mocked by DHH and others in the community. If true, that’s awful, and I would hope most of our community isn’t the ActiveMocking type. DHH has certainly said Rails won’t trade convenience for cleanliness by default. I don’t think that precludes cleanliness and safety when the code and team get big enough to need it, though.A programming language isn’t a magic bullet here, either. 
I’ve seen people hold up Java as an example of explicit, clear code with sanity checks and few surprises. Well, here’s some Java code for an Elasticsearch plugin, with comments stripped, spacing cut down, and truncated for brevity:
public class LookupScript extends AbstractSearchScript {
  public static class Factory extends AbstractComponent implements NativeScriptFactory {
    private final Node node;
    private final Cache<Tuple<String, String>, Map<String, Object>> cache;
    @SuppressWarnings(\"unchecked\")
    @Inject
    public Factory(Node node, Settings settings) {
      super(settings);
      this.node = node;
      ByteSizeValue size = settings.getAsBytesSize(\"examples.nativescript.lookup.size\", null);
      TimeValue expire = settings.getAsTime(\"expire\", null);
      CacheBuilder<Tuple<String, String>, Map<String, Object>> cacheBuilder = CacheBuilder.builder();
Now here’s a “magical” plugin for ActiveRecord, Paperclip. Truncated for brevity, but nothing stripped out:
module Paperclip
  require 'rails'
  class Railtie < Rails::Railtie
    initializer 'paperclip.insert_into_active_record' do |app|
      ActiveSupport.on_load :active_record do
        Paperclip::Railtie.insert
      end
      if app.config.respond_to?(:paperclip_defaults)
        Paperclip::Attachment.default_options.merge!(app.config.paperclip_defaults)
      end
    end
    rake_tasks { load \"tasks/paperclip.rake\" }
  end
The latter seems vastly more readable to me. You may need to read the docs on what some of the methods do, but I had to read an awful lot more to grok the Elasticsearch plugin. Since we’re in the era of musty old arguments, I’ll bring up the amazing AbstractSingletonProxyFactoryBean again.Ruby gives you enough rope to hang yourself. Javascript helpfully ties the noose and sets up a gallows for you. Java locks you in a padded room for your own good. Elixir uses a wireless shock colla… wait, sorry. That metaphor got weird when a functional language jumped in. 
Point is, you can write terrible code in any language, and some languages make that a little easier or harder depending on your definition of “terrible.” Ruby strikes a balance that is my favorite so far.Rails Jobs are Drying UpHere’s some data I found by asking colleagues, Googling and poking around: Here’s that scary graph from Searls’ talk, with some other terms added TIOBE for May 2016 is saying: “Ruby is currently at position 8 in the TIOBE index. This is equal to the highest position it reached in December 2008” GitHut visualizations of language data from Github RedMonk’s Language Ranking for Jan 2016Rails isn’t achieving the thunderous growth percentage that Javascript and Elixir are, but percentage growth is much harder to achieve when your base is already huge.On further discussion and analysis, this is actually insanely hard to measure. Javascript is on the rise, for sure - but how many Javascript StackOverflow questions are for front ends that exist inside a Rails, Django, or Phoenix app? How many Node.js apps are tiny background services giving data to a Java dumpster fire? Where do you file a full-stack job?Languages and frameworks don’t just disappear. You can still find a decent job in COBOL. Ruby and Rails will be with us for a long time to come.The Cool Kids™ Have Moved OnThese are people I look up to, and significantly contributed to my understanding & development in Ruby. José Valim, creator of Devise, wrote Elixir Chris McCord has moved on to Elixir / Phoenix Mike Perham, creator of Sidekiq, is now making Sidekiq.js Yehuda Katz is working on Ember and Rust Solnic, author of Virtus, is fed up with the Rails community & DHH Justin Searls is working on a number of JS projectsOf course, many of the “movers-on” are still part of the Ruby + Rails communities. Sidekiq for Ruby isn’t going anywhere - Sidekiq Enterprise is pretty recent, and Mike has a commendable goal of making his business sustainable. 
Yehuda has been at RailsConf the last two years, and works at Skylight - a super cool performance monitoring tool with great Rails defaults. Searls has also been at RailsConf the last two years, and as he said to me on Twitter: …my favorite thing about programming is that we can use multiple languages and share in more than one community 😀 ~ @searlsSandi Metz and Aaron Patterson have been fixtures in the community, and they don’t seem to be abandoning it. Nick Quaranto is still at RailsConf. And of course, Matz is still on board.Plus, a mature community always has new Cool Kids™ and Thought Leaderers™. I’ve found lots of new people to follow over the last few years. I can’t be sure if they’re “new” or “up and coming” or “how the heck am I just finding out now.” Point is: the number of smart Ruby & Rails people I follow is going up, not down.The above-mentioned Katrina Owen, Sarah Allen, Godfrey Chan, Richard Schneeman and others are all people I wish I’d known about earlier in my career.I’m Still Pro-SkubRuby and Rails make a fantastic environment for application development. Rails is great at rapid prototyping. It’s great at web apps. It’s great at APIs - even better now that Rails 5 has an API-only interface. The Ruby community encourages learning, thoughtfulness, and trying to get things right. Resources like books, tutorials, screencasts, and boot camps abound. Extensions to Rails to help keep code clean, DRY, and organized are out there. Integrations with SaaS products are plentiful.If you’re looking to learn a new language or just getting started in web dev, I’d still recommend Ruby and Rails before anything else. They’re easy enough to get started in, and the more you read and work with them, the better you’ll get at OOP and interface design.", "content_html": "[Updated May 24, 2016: now with more salt]
There’s been some chatter about how Ruby on Rails is dying / doomed / awful.
…can a programming language continue to thrive even after its tools and core libraries are mostly finished? What can the community do to foster continued growth in such an environment? Whose job will it be?
We need new thinking, not just repackaging of the same old failures. I should be spending time writing code, not debugging a haystack of mutable, dependency-wired mess.
As a result of 9 freaking years of working with Rails and contributing like hell to many ruby OSS projects, I’ve given up. I don’t believe anything good can happen with Rails. This is my personal point of view, but many people share the same feelings.
The significance of the drop in May and the leveling off is that “Ruby does not become the ‘next big programming language’”.
Whoops! That last one is from 2007. To summarize the various comments I’ve seen on Reddit, Hacker News, blogs, articles, and talks lately, here are some examples stuffed with the finest straw:
This canard hasn’t aged well. My uninformed college-student argument in 2006 was: “You don’t scale your language or framework, you scale your application.” I had never scaled anything back then, but over ten years later I still agree with young-dumb-and-loud college me.
Your language or framework is rarely the bottleneck: your database, network structure, or background services are where things will slow down. ActionCable may or may not get two million plus connections per box like a trivial Elixir app, but you can always deploy more app servers and workers. And I’d worry those Elixir connections will end up waiting on Redis or Postgres or something else - your language or framework isn’t a magic bullet.
You can also extract services when Ruby and Rails aren’t the right tools for the job. Many Ruby gems do this by binding to native C extensions. Extracting a service for machine learning models can take advantage of Python’s different memory model and better ML libraries. Your CRUD app or mobile API doesn’t have to do it all.
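As a tiny illustration of the "binding to native" idea, Ruby's standard Fiddle library can call a C function already loaded into the process at runtime. This is a toy sketch, not code from any of the gems mentioned:

```ruby
require "fiddle"

# Open the symbols already loaded into the current process;
# libc (and its strlen) is linked into the Ruby binary on common platforms.
libc = Fiddle.dlopen(nil)

# Describe strlen(const char *) -> size_t so Fiddle can marshal arguments.
strlen = Fiddle::Function.new(
  libc["strlen"],
  [Fiddle::TYPE_VOIDP],
  Fiddle::TYPE_SIZE_T
)

strlen.call("hello") # => 5
```

Real C-extension gems compile against Ruby's C API instead of binding at runtime, but the division of labor is the same: Ruby orchestrates, native code does the heavy lifting.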
In Ruby you can re-open any class from anywhere. You can access any object’s private methods. Duck typing is everywhere. People make weird DSLs like RSpec. Rails is chock-full of magic in routing, database table names, callbacks, and mysterious inheritance. Ruby and Rails are easy, but they’re not clear or simple.
This blast from the past is a 2007 argument. Since then we’ve learned tons about how to write expressive-but-clear Ruby. Rails isn’t going to keep you safe by default, but you can and should try to write clear Ruby on top of it. I’ve found the following super helpful when refactoring and designing code:
Solnic in particular felt like his attempts to reduce complexity, tight coupling, and shenanigans in Rails were actively mocked by DHH and others in the community. If true, that’s awful, and I would hope most of our community isn’t the ActiveMocking type. DHH has certainly said Rails won’t trade convenience for cleanliness by default. I don’t think that precludes cleanliness and safety when the code and team get big enough to need it, though.
A programming language isn’t a magic bullet here, either. I’ve seen people hold up Java as an example of explicit, clear code with sanity checks and few surprises. Well, here’s some Java code for an Elasticsearch plugin, with comments stripped, spacing cut down, and truncated for brevity:
public class LookupScript extends AbstractSearchScript {
  public static class Factory extends AbstractComponent implements NativeScriptFactory {
    private final Node node;
    private final Cache<Tuple<String, String>, Map<String, Object>> cache;
    @SuppressWarnings(\"unchecked\")
    @Inject
    public Factory(Node node, Settings settings) {
      super(settings);
      this.node = node;
      ByteSizeValue size = settings.getAsBytesSize(\"examples.nativescript.lookup.size\", null);
      TimeValue expire = settings.getAsTime(\"expire\", null);
      CacheBuilder<Tuple<String, String>, Map<String, Object>> cacheBuilder = CacheBuilder.builder();
Now here’s a “magical” plugin for ActiveRecord, Paperclip. Truncated for brevity, but nothing stripped out:
module Paperclip
  require 'rails'
  class Railtie < Rails::Railtie
    initializer 'paperclip.insert_into_active_record' do |app|
      ActiveSupport.on_load :active_record do
        Paperclip::Railtie.insert
      end
      if app.config.respond_to?(:paperclip_defaults)
        Paperclip::Attachment.default_options.merge!(app.config.paperclip_defaults)
      end
    end
    rake_tasks { load \"tasks/paperclip.rake\" }
  end
The latter seems vastly more readable to me. You may need to read the docs on what some of the methods do, but I had to read an awful lot more to grok the Elasticsearch plugin. Since we’re in the era of musty old arguments, I’ll bring up the amazing AbstractSingletonProxyFactoryBean again.
Ruby gives you enough rope to hang yourself. Javascript helpfully ties the noose and sets up a gallows for you. Java locks you in a padded room for your own good. Elixir uses a wireless shock colla… wait, sorry. That metaphor got weird when a functional language jumped in. Point is, you can write terrible code in any language, and some languages make that a little easier or harder depending on your definition of “terrible.” Ruby strikes a balance that is my favorite so far.
Here’s some data I found by asking colleagues, Googling and poking around:
Rails isn’t achieving the thunderous growth percentage that Javascript and Elixir are, but percentage growth is much harder to achieve when your base is already huge.
On further discussion and analysis, this is actually insanely hard to measure. Javascript is on the rise, for sure - but how many Javascript StackOverflow questions are for front ends that exist inside a Rails, Django, or Phoenix app? How many Node.js apps are tiny background services giving data to a Java dumpster fire? Where do you file a full-stack job?
Languages and frameworks don’t just disappear. You can still find a decent job in COBOL. Ruby and Rails will be with us for a long time to come.
These are people I look up to, and significantly contributed to my understanding & development in Ruby.
Of course, many of the “movers-on” are still part of the Ruby + Rails communities. Sidekiq for Ruby isn’t going anywhere - Sidekiq Enterprise is pretty recent, and Mike has a commendable goal of making his business sustainable. Yehuda has been at RailsConf the last two years, and works at Skylight - a super cool performance monitoring tool with great Rails defaults. Searls has also been at RailsConf the last two years, and as he said to me on Twitter:
…my favorite thing about programming is that we can use multiple languages and share in more than one community 😀
~ @searls
Sandi Metz and Aaron Patterson have been fixtures in the community, and they don’t seem to be abandoning it. Nick Quaranto is still at RailsConf. And of course, Matz is still on board.
Plus, a mature community always has new Cool Kids™ and Thought Leaderers™. I’ve found lots of new people to follow over the last few years. I can’t be sure if they’re “new” or “up and coming” or “how the heck am I just finding out now.” Point is: the number of smart Ruby & Rails people I follow is going up, not down.
The above-mentioned Katrina Owen, Sarah Allen, Godfrey Chan, Richard Schneeman and others are all people I wish I’d known about earlier in my career.
Ruby and Rails make a fantastic environment for application development. Rails is great at rapid prototyping. It’s great at web apps. It’s great at APIs - even better now that Rails 5 has an API-only interface. The Ruby community encourages learning, thoughtfulness, and trying to get things right. Resources like books, tutorials, screencasts, and boot camps abound. Extensions to Rails to help keep code clean, DRY, and organized are out there. Integrations with SaaS products are plentiful.
If you’re looking to learn a new language or just getting started in web dev, I’d still recommend Ruby and Rails before anything else. They’re easy enough to get started in, and the more you read and work with them, the better you’ll get at OOP and interface design.
", "url": "https://atevans.com/2016/05/11/overcoming-rails-doom-and-gloom.html", , "date_published": "2016-05-11T00:00:00+00:00", "date_modified": "2016-05-11T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2016/03/08/hubot-and-coffeescript.html", "title": "Hubot and CoffeeScript", "summary": null, "content_text": "I’ve been working a lot with Hubot, which our company is using to manage our chat bot. We subscribe to the ChatOps mantra, which has a lot of value: operational changes are public, searchable, backed up, and repeatable. We also use Hubot for workflows and glue code - shortcuts for code review in Github, delivering stories in Pivotal Tracker when they are deployed to a demo environment, various alerts in PagerDuty, etc.Hubot is written in CoffeeScript, a transpiles-to-javascript language that is still the default in Rails 5. CoffeeScript initially made it easy and obvious how to write classes, inheritence, and bound functions in your Javascript. Now that ES6 has stolen most of the good stuff from CoffeeScript, I think it’s lost most of its value. But migrating legacy code to a new language is low ROI, and a giant pain even with tools like Decaffeinate. Besides, most of the hubot plugins and ecosystem are in CoffeeScript, so there’s probably some advantage to maintaining compatibility there.Hubot has a relatively simple abstraction over responding to messages in Slack, and has an Express server built-in. It’s basically writing a Node application.Writing Clean CodeA chatbot is not external, and is often not super-critical functionality. It’s easy to just throw in some hacks, write very minimal tests (if any), and call it a day. At Hired we have tests for our Hubot commands, but we’ve never emphasized high-quality code the way we have in our main application. I’m changing that. Any app worth making is worth making well.I’ve been trying to figure out how to break hubot scripts into clean modules. 
OO design is a hard enough problem in Ruby, where people actually care about clean code. Patterns and conventions like MVC provide helpful guidelines. None of that in JS land: it’s an even split whether a library will be functional, object-oriented, or function-objects. Everything’s just a private variable - no need for uppercase letters, or even full words.While Github’s docs only talk about throwing things in /scripts, sometimes you want commands in different scripts to be able to use the same functionality. Can you totally separate these back-end libraries from the server / chat response scripts? How do you tease apart the control flow?Promises (and they still feel all so wasted)Promises are a critical piece of the JS puzzle. To quote Domenic Denicola: The point of promises is to give us back functional composition and error bubbling in the async world. ~ You’re Missing the Point of PromisesI started by upgrading our app from our old library to Bluebird. The coolest thing Bluebird does is .catch(ErrorType), which allows you to catch only for specific errors. Combine that with the common-errors library from Shutterstock, and you get a great way to exactly classify error states.I’m still figuring out how to use promises as a clean abstraction. Treating them like delayed try/catch blocks seems to produce clean separations. The Bluebird docs have a section on anti-patterns that was a good start. In our code I found many places people had nested promises inside other promises, resulting in errors not reaching the original caller (or our test framework). I also saw throwing exceptions used as a form of flow control, and using the error message of the exception as a Slack reply value. Needless to say, that’s not what exceptions are for.EventsNodeJS comes with eventing built in. The process object is an EventEmitter, meaning you can use it like a global message bus. Hubot also acts as a global event handler, so you can track things there as well. 
And in CoffeeScript you can class MyClass extends EventEmitter. If you’ve got a bunch of async tasks that other scripts might need to refer to, you can have them fire off an event that other objects can respond to.For example, our deploy process has a few short steps early on that might interfere with each other if multiple deploys happen simultaneously. We can set our queueing object to listen for a “finished all blocking calls” event on deploys, and kick off the next one while the current deploy does the rest of its steps. We don’t have to hook into the promise chain - a Deploy doesn’t even have to know about the DeployQueue, which is great decoupling. It can just do its waterfall of async operations, and fire off events at each step.StorageHubot comes with a Brain built-in for persistent storage. For most users, this will be based on Redis. You can treat it like a big object full of whatever data you want, and it will be there when Hubot gets restarted.The catch is: Hubot’s brain is a giant JS object, and the “persistence” is just dumping the whole thing to a JSON string and throwing it in one key in Redis. Good luck digging in from redis-cli or any interface beyond in-app code.Someone (not me) added SQLite3 for some things that kind of had a relational-ish structure. If you are going to use SQL in your node app, for cryin’ out loud use a bloody ORM. Sequelize seems to be a big player, but like any JS framework it could be dead tomorrow.Frankly, MongoDB is a much bigger force in the NodeJS space, and seems perfect for a low-volume, low-criticality app like a chatbot. It’s relational enough to get the job done and flexible enough with schema-less documents. You probably won’t have to scale it and deal with the storage, clustering, and concurrency issues. With well-supported tools like Mongoose, it might be easier to organize and manage than the one-key-in-Redis brain.We also have InfluxDB for tracking stats. 
I haven’t dived deep into this, so I’m not sure how it compares to statsd or Elasticsearch aggregations. I’m not even sure if they cover the same use cases or not.TestingWhooboy. Testing. The testing world in JS leaves much to be desired. I’m spoiled on rspec and ruby test frameworks, which have things like mocks and stubs built in.In JS, everything is “microframeworks,” i.e. things that don’t work well together. Here’s a quick rundown of libraries we’re using: Mocha, the actual test runner. Chai, an assertion library. Chai-as-promised, for testing against promises. Supertest-as-promised, to test webhooks in your app by sending actual http requests to 127.0.0.1. Who needs integration testing? Black-box, people! Nock, for expectations around calling external APIs. Of course, it doesn’t work with Mocha’s promise interface. Rewire, for messing with private variables and functions inside your scripts. Sinon for stubbing out methods. Hubot-test-helper, for setting up and tearing down a fake Hubot.I mean, I don’t know why you’d want assertions, mocks, stubs, dependency injection and a test runner all bundled together. It’s much better to have femto-frameworks that you have to duct tape together yourself.Suffice to say, there’s a lot of code to glue it all together. I had to dive into the source code for every single one of these libraries to make them play nice – neither the README nor the documentation sufficed in any instance. But in the end we get to test syntax that looks like this:describe 'PING module', -> beforeEach -> mockBot('scripts/ping').then (robot) => @bot = robot describe 'bot ping', -> it 'sends \"PONG\" to the channel', -> @bot.receive('bot ping').then => expect(@bot).to.send('PONG')The bot will shut itself down after each test, stubs and dependency injections will be reverted automatically, Nock expectations cleaned up, etc. Had to write my own Chai plugin for expect(bot).to.send().
It’s more magical than I’d like, but it’s usable without knowledge of the underlying system.When tests are easier to write, hopefully people will write more of them.WrapupYour company’s chatbot is probably more important than you think. When things break, even the unimportant stuff like karma tracking, it can lead to dozens of distractions and minor frustrations across the team. Don’t make it a second-class citizen. It’s an app - write it like one.While I may have preferred something like Lita, the Ruby chatbot, or just writing a raw Node / Elixir / COBOL app without the wrapping layer of Hubot, I’m making the best of it. Refactor, don’t rewrite. You can write terrible code in any language, and JS can certainly be clean and manageable if you’re willing to try.", "content_html": "I’ve been working a lot with Hubot, which our company is using to manage our chat bot. We subscribe to the ChatOps mantra, which has a lot of value: operational changes are public, searchable, backed up, and repeatable. We also use Hubot for workflows and glue code - shortcuts for code review in Github, delivering stories in Pivotal Tracker when they are deployed to a demo environment, various alerts in PagerDuty, etc.
Hubot is written in CoffeeScript, a transpiles-to-javascript language that is still the default in Rails 5. CoffeeScript initially made it easy and obvious how to write classes, inheritance, and bound functions in your Javascript. Now that ES6 has stolen most of the good stuff from CoffeeScript, I think it’s lost most of its value. But migrating legacy code to a new language is low ROI, and a giant pain even with tools like Decaffeinate. Besides, most of the hubot plugins and ecosystem are in CoffeeScript, so there’s probably some advantage to maintaining compatibility there.
Hubot has a relatively simple abstraction over responding to messages in Slack, and has an Express server built-in. It’s basically writing a Node application.
A chatbot is not external, and is often not super-critical functionality. It’s easy to just throw in some hacks, write very minimal tests (if any), and call it a day. At Hired we have tests for our Hubot commands, but we’ve never emphasized high-quality code the way we have in our main application. I’m changing that. Any app worth making is worth making well.
I’ve been trying to figure out how to break hubot scripts into clean modules. OO design is a hard enough problem in Ruby, where people actually care about clean code. Patterns and conventions like MVC provide helpful guidelines. None of that in JS land: it’s an even split whether a library will be functional, object-oriented, or function-objects. Everything’s just a private variable - no need for uppercase letters, or even full words.
While Github’s docs only talk about throwing things in /scripts, sometimes you want commands in different scripts to be able to use the same functionality. Can you totally separate these back-end libraries from the server / chat response scripts? How do you tease apart the control flow?
Promises are a critical piece of the JS puzzle. To quote Domenic Denicola:
The point of promises is to give us back functional composition and error bubbling in the async world.
I started by upgrading our app from our old library to Bluebird. The coolest thing Bluebird does is .catch(ErrorType), which allows you to catch only for specific errors. Combine that with the common-errors library from Shutterstock, and you get a great way to exactly classify error states.
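Bluebird’s typed catch can’t be shown here without the library itself, but the idea is easy to sketch with native promises and an instanceof check. Everything below (NotFoundError, handleNotFound, fetchUser) is made up for illustration:

```javascript
class NotFoundError extends Error {}

// Synchronous handler usable in .catch(): swallow only NotFoundError,
// re-throw everything else so it still bubbles to the caller.
function handleNotFound(err) {
  if (err instanceof NotFoundError) return null;
  throw err;
}

// Hypothetical async lookup that always misses, for illustration.
function fetchUser(id) {
  return Promise.reject(new NotFoundError(`no user ${id}`));
}

fetchUser(42)
  .catch(handleNotFound)
  .then((user) => console.log(user)); // prints: null
```

With Bluebird proper, the instanceof boilerplate collapses to .catch(NotFoundError, handler), and anything else keeps propagating.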
I’m still figuring out how to use promises as a clean abstraction. Treating them like delayed try/catch blocks seems to produce clean separations. The Bluebird docs have a section on anti-patterns that was a good start. In our code I found many places people had nested promises inside other promises, resulting in errors not reaching the original caller (or our test framework). I also saw throwing exceptions used as a form of flow control, and using the error message of the exception as a Slack reply value. Needless to say, that’s not what exceptions are for.
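The nesting problem reproduces in miniature; here is a sketch, where step stands in for any async operation:

```javascript
// A stand-in for any async operation that can fail.
function step() {
  return Promise.reject(new Error('boom'));
}

// Broken shape (what kept turning up in our code): the inner promise is
// not returned, so its rejection never reaches the caller or the test
// framework:
//   return Promise.resolve().then(() => { step().then(doMore) })

// Fixed: return the inner promise so errors bubble all the way up.
function fixed() {
  return Promise.resolve().then(() => step());
}

fixed().catch((err) => console.log(`caught: ${err.message}`)); // prints: caught: boom
```

The one-word diff (return) is exactly why this bug is so easy to write and so hard to spot in review.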
NodeJS comes with eventing built in. The process object is an EventEmitter, meaning you can use it like a global message bus. Hubot also acts as a global event handler, so you can track things there as well. And in CoffeeScript you can class MyClass extends EventEmitter. If you’ve got a bunch of async tasks that other scripts might need to refer to, you can have them fire off an event that other objects can respond to.
For example, our deploy process has a few short steps early on that might interfere with each other if multiple deploys happen simultaneously. We can set our queueing object to listen for a “finished all blocking calls” event on deploys, and kick off the next one while the current deploy does the rest of its steps. We don’t have to hook into the promise chain - a Deploy doesn’t even have to know about the DeployQueue, which is great decoupling. It can just do its waterfall of async operations, and fire off events at each step.
Hubot comes with a Brain built-in for persistent storage. For most users, this will be based on Redis. You can treat it like a big object full of whatever data you want, and it will be there when Hubot gets restarted.
The catch is: Hubot’s brain is a giant JS object, and the “persistence” is just dumping the whole thing to a JSON string and throwing it in one key in Redis. Good luck digging in from redis-cli or any interface beyond in-app code.
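In miniature (the brain contents here are made up), the persistence model is just:

```javascript
// The whole brain is one big JS object...
const brain = { karma: { alice: 3 }, deploys: ['web-42'] };

// ...and 'persistence' is serializing all of it to a single string,
// which Hubot stores under one Redis key.
const dumped = JSON.stringify(brain);

// What you get back after a restart:
const restored = JSON.parse(dumped);
console.log(restored.karma.alice); // prints: 3
```

Which is exactly why redis-cli can’t help you: there’s one opaque value, not a browsable keyspace.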
Someone (not me) added SQLite3 for some things that kind of had a relational-ish structure. If you are going to use SQL in your node app, for cryin’ out loud use a bloody ORM. Sequelize seems to be a big player, but like any JS framework it could be dead tomorrow.
Frankly, MongoDB is a much bigger force in the NodeJS space, and seems perfect for a low-volume, low-criticality app like a chatbot. It’s relational enough to get the job done and flexible enough with schema-less documents. You probably won’t have to scale it and deal with the storage, clustering, and concurrency issues. With well-supported tools like Mongoose, it might be easier to organize and manage than the one-key-in-Redis brain.
We also have InfluxDB for tracking stats. I haven’t dived deep into this, so I’m not sure how it compares to statsd or Elasticsearch aggregations. I’m not even sure if they cover the same use cases or not.
Whooboy. Testing. The testing world in JS leaves much to be desired. I’m spoiled on rspec and ruby test frameworks, which have things like mocks and stubs built in.
In JS, everything is “microframeworks,” i.e. things that don’t work well together. Here’s a quick rundown of libraries we’re using:
Mocha, the actual test runner.
Chai, an assertion library.
Chai-as-promised, for testing against promises.
Supertest-as-promised, to test webhooks in your app by sending actual http requests to 127.0.0.1. Who needs integration testing? Black-box, people!
Nock, for expectations around calling external APIs. Of course, it doesn’t work with Mocha’s promise interface.
Rewire, for messing with private variables and functions inside your scripts.
Sinon for stubbing out methods.
Hubot-test-helper, for setting up and tearing down a fake Hubot.
I mean, I don’t know why you’d want assertions, mocks, stubs, dependency injection and a test runner all bundled together. It’s much better to have femto-frameworks that you have to duct tape together yourself.
Suffice to say, there’s a lot of code to glue it all together. I had to dive into the source code for every single one of these libraries to make them play nice – neither the README nor the documentation sufficed in any instance. But in the end we get to test syntax that looks like this:
describe 'PING module', ->
  beforeEach ->
    mockBot('scripts/ping').then (robot) => @bot = robot

  describe 'bot ping', ->
    it 'sends \"PONG\" to the channel', ->
      @bot.receive('bot ping').then =>
        expect(@bot).to.send('PONG')
The bot will shut itself down after each test, stubs and dependency injections will be reverted automatically, Nock expectations cleaned up, etc. Had to write my own Chai plugin for expect(bot).to.send(). It’s more magical than I’d like, but it’s usable without knowledge of the underlying system.
When tests are easier to write, hopefully people will write more of them.
Your company’s chatbot is probably more important than you think. When things break, even the unimportant stuff like karma tracking, it can lead to dozens of distractions and minor frustrations across the team. Don’t make it a second-class citizen. It’s an app - write it like one.
While I may have preferred something like Lita, the Ruby chatbot, or just writing a raw Node / Elixir / COBOL app without the wrapping layer of Hubot, I’m making the best of it. Refactor, don’t rewrite. You can write terrible code in any language, and JS can certainly be clean and manageable if you’re willing to try.
", "url": "https://atevans.com/2016/03/08/hubot-and-coffeescript.html", , "date_published": "2016-03-08T00:00:00+00:00", "date_modified": "2016-03-08T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/12/27/phoenix-html-safe-tuple-to-iodata-1.html", "title": "Phoenix.HTML.Safe.Tuple.to_iodata/1", "summary": null, "content_text": "I was running through tutorials for Elixir + Phoenix, and got to the part where forms start showing validation failures. Specifically this code, from Pragmatic Programmers’ “Programming Phoenix” book:<%= form_for @changeset, user_path(@conn, :create), fn f -> %> <%= if f.errors != [] do %> <div class=\"alert alert-danger\"> <p>Oops, something went wrong! Please check the errors below:</p> <ul> <%= for {attr, message} <- f.errors do %> <li><%= humanize(attr) %> <%= message %></li> <% end %> </ul> </div> <!-- snip --><% end %>I got this error: no function clause matching in Phoenix.HTML.Safe.Tuple.to_iodata/1Couldn’t find a bloody solution anywhere. Took a long time to find IO.inspect . The message thing turned out to be a tuple that looked made for sprintf - something like {\"name can't be longer than %{count}\", count: 1} , so I spent forever trying to figure out if Elixir has sprintf , looked like there might be something in :io.format() , then I had to learn about Erlang bindings, but that wasn’t…Ended up on #elixir-lang IRC channel, and the author of the book (Chris Mccord) pointed me to the Ecto “error helpers” in this upgrade guide. It’s a breaking change in Phoenix 1.1 + Ecto 2.0. The book is (and I imagine many tutorials are) for Phoenix 1.0.x , and I had installed the latest at 1.1.0 .Major thanks to Chris - I have literally never had a question actually get answered on IRC. 
It was a last-resort measure, and it really says something about the Elixir community that someone helped me figure this out.", "content_html": "I was running through tutorials for Elixir + Phoenix, and got to the part where forms start showing validation failures. Specifically this code, from Pragmatic Programmers’ “Programming Phoenix” book:
<%= form_for @changeset, user_path(@conn, :create), fn f -> %> <%= if f.errors != [] do %> <div class=\"alert alert-danger\"> <p>Oops, something went wrong! Please check the errors below:</p> <ul> <%= for {attr, message} <- f.errors do %> <li><%= humanize(attr) %> <%= message %></li> <% end %> </ul> </div> <!-- snip --><% end %>I got this error: no function clause matching in Phoenix.HTML.Safe.Tuple.to_iodata/1
Couldn’t find a bloody solution anywhere. Took a long time to find IO.inspect . The message thing turned out to be a tuple that looked made for sprintf - something like {\"name can't be longer than %{count}\", count: 1} , so I spent forever trying to figure out if Elixir has sprintf , looked like there might be something in :io.format() , then I had to learn about Erlang bindings, but that wasn’t…
Ended up on #elixir-lang IRC channel, and the author of the book (Chris McCord) pointed me to the Ecto “error helpers” in this upgrade guide. It’s a breaking change in Phoenix 1.1 + Ecto 2.0. The book is (and I imagine many tutorials are) for Phoenix 1.0.x , and I had installed the latest at 1.1.0 .
Major thanks to Chris - I have literally never had a question actually get answered on IRC. It was a last-resort measure, and it really says something about the Elixir community that someone helped me figure this out.
", "url": "https://atevans.com/2015/12/27/phoenix-html-safe-tuple-to-iodata-1.html", , "date_published": "2015-12-27T00:00:00+00:00", "date_modified": "2015-12-27T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/12/10/testing-db-updates-with-docker.html", "title": "Testing db updates with Docker", "summary": null, "content_text": "Stretchy is an ActiveRecord-esque query builder for Elasticsearch. It’s not stable yet (hence the <1.0 version number), and Elasticsearch has been moving so fast it’s hard to keep up. The major change in 2.0 was eliminating the separation between queries and filters, a major source of complexity for the poor gem.For now my machine needs Elasticsearch 1.7 for regular app development. To update the gem for 2.0 2.1, I’d need to have both versions of Elasticsearch installed. While I could potentially do that by making a bunch of changes to the config files set up by Homebrew, I thought it would be better to just run the specs on a virtual machine and solve the “upgrade problem” indefinitely.Docker looked great because I wanted to avoid machine setup as much as possible. I’ve used Vagrant before, but it has its own configuration steps beyond just “here’s the Dockerfile.” I already have boot2docker docker-machine installed and running for using the CodeClimate Platform™ beta, and I didn’t want to have multiple virtual machines running simultaneously, eating RAM and other resources. Here’s the setup: Docker lets you run virtual machines inside “containers,” using a few different technologies similar to LXC boot2docker docker-machine manages booting virtual machine instances which will run your Docker containers. 
I’m using it to keep one virtualbox machine around to run whatever containers I need at the moment fig docker-compose lets you declare and link multiple containers in a docker-compose.yml file, so you don’t need to manually run all the Docker commands The official quickstart guide for rails gives a good run-down of the tools and setup involved.It’s a bit of tooling, but it really didn’t take long to get started; maybe an hour or two. Once I had it up and running for the project, I just modified the docker-compose.yml on my new branch. I had to do a bit of fiddling to get Compose to update the elasticsearch image from 1.7 to 2.1:
# modify the docker-compose.yml to update the image version, then:
docker-compose stop elastic
docker-compose pull elastic
docker-compose rm -f -v elastic
docker-compose run web rspec # boom! builds the machines and runs specs
Once there, the specs started exploding and I was in business. Let the updates begin! After that, just a matter of pestering our CI provider to update their available versions of Elasticsearch so the badge on the repo will look all nice and stuff.", "content_html": "Stretchy is an ActiveRecord-esque query builder for Elasticsearch. It’s not stable yet (hence the <1.0 version number), and Elasticsearch has been moving so fast it’s hard to keep up. The major change in 2.0 was eliminating the separation between queries and filters, a major source of complexity for the poor gem.
For now my machine needs Elasticsearch 1.7 for regular app development. To update the gem for 2.0 2.1, I’d need to have both versions of Elasticsearch installed. While I could potentially do that by making a bunch of changes to the config files set up by Homebrew, I thought it would be better to just run the specs on a virtual machine and solve the “upgrade problem” indefinitely.
Docker looked great because I wanted to avoid machine setup as much as possible. I’ve used Vagrant before, but it has its own configuration steps beyond just “here’s the Dockerfile.” I already have boot2docker docker-machine installed and running for using the CodeClimate Platform™ beta, and I didn’t want to have multiple virtual machines running simultaneously, eating RAM and other resources. Here’s the setup:
docker-compose.yml file, so you don’t need to manually run all the Docker commandsIt’s a bit of tooling, but it really didn’t take long to get started; maybe an hour or two. Once I had it up and running for the project, I just modified the docker-compose.yml on my new branch. I had to do a bit of fiddling to get Compose to update the elasticsearch image from 1.7 to 2.1:
# modify the docker-compose.yml to update the image version, then:
docker-compose stop elastic
docker-compose pull elastic
docker-compose rm -f -v elastic
docker-compose run web rspec # boom! builds the machines and runs specs
Once there, the specs started exploding and I was in business. Let the updates begin! After that, just a matter of pestering our CI provider to update their available versions of Elasticsearch so the badge on the repo will look all nice and stuff.
", "url": "https://atevans.com/2015/12/10/testing-db-updates-with-docker.html", , "date_published": "2015-12-10T00:00:00+00:00", "date_modified": "2015-12-10T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/12/04/new-design.html", "title": "New design", "summary": null, "content_text": "I wanted to try out basscss. Ended up changing the fonts, color scheme, and fixing syntax highlighting all over the place. Now using highlightjs, which looks like the only well-supported syntax highlighter than can guess which language your snippet is in.This blog’s been around for 5 years now. It’s mostly thanks to Jekyllrb – whatever I want to change about the site, I can do it without having to migrate from one database or format to another. If I need to code something myself, I can do that with Ruby or Javascript or plain ‘ole shell scripts.atevans.com has run off at least 3 different servers. It’s been on Github Pages, Heroku, Linode, and more. It will never be deleted because some blogging company with a great platform ran out of VC, or added some social dingus I didn’t want.The git repo has been hosted four different places. When I got pwned earlier this year, the git repo was on the infected server. I just deleted it since my local copy had everything. Benefits of decentralized version control.I had blogs all over the place before this, but this one has stuck around. I think even if Jekyll dies off somehow, a parser & renderer for a bunch of flat files should be easy in any language. I wonder what this will look like in another 5 years?", "content_html": "I wanted to try out basscss. Ended up changing the fonts, color scheme, and fixing syntax highlighting all over the place. Now using highlightjs, which looks like the only well-supported syntax highlighter than can guess which language your snippet is in.
This blog’s been around for 5 years now. It’s mostly thanks to Jekyllrb – whatever I want to change about the site, I can do it without having to migrate from one database or format to another. If I need to code something myself, I can do that with Ruby or Javascript or plain ‘ole shell scripts.
atevans.com has run off at least 3 different servers. It’s been on Github Pages, Heroku, Linode, and more. It will never be deleted because some blogging company with a great platform ran out of VC, or added some social dingus I didn’t want.
The git repo has been hosted four different places. When I got pwned earlier this year, the git repo was on the infected server. I just deleted it since my local copy had everything. Benefits of decentralized version control.
I had blogs all over the place before this, but this one has stuck around. I think even if Jekyll dies off somehow, a parser & renderer for a bunch of flat files should be easy in any language. I wonder what this will look like in another 5 years?

Web widgets are something everyone needs for their site. They’ve been done a hundred ways over the years, but essentially it’s some bundle of HTML, CSS, and JS. The problem is that there are so many ways of doing this badly. The worst is “semantic” classes and JS hooks. Semantic-ish markup is used in quick-start frameworks like Bootstrap and Materialize, and encouraged by some frontend devs as “best practices.”
Semantic markup: semantic to who? It’s not like your end-users are gonna read this stuff. Google doesn’t care, outside of the element types. And it’s certainly not “semantic” for your fellow devs. Have a look at the example CodePen linked from this post.
This CodePen represents the html, css, and js for two sections of our Instagram-clone web site. We have a posts-index page and a post-page ; two separate pages on our site that both display a set of posts with an image and some controls using semantic patterns. Some notes on how it works:
post class name do anything? It’s semantic, and tells us this section is a post. But that was probably obvious because the html is in _post.html.erb or post.jsx or something, so it’s not saying anything we didn’t already know.
float: left or text-align or inline-block or flexbox ? I’d have to find the css and figure it out.
post featured changes. Is featured something that only applies to post, or can featured be applied to anything? If I make user featured will that have conflicts?
post and featured at the same level of CSS specificity? Their rules will override each other based on an arcane system no one but a dedicated CSS engineer will understand.
.post{img{}} pattern is a land mine for anyone else. What if I want to add an icon indicating file type for that post? It’s going to get all the styles of the post image on my icon, and I won’t know about these style conflicts until it looks weird in my browser. I’ll have to “inspect element” in the browser or grep for it in the CSS, and figure out how to extract or override them. What if I want to add a “fav” button to each post? I have to fix / override .post{button{}} . Who left this giant tangled mess I have to clean up?
.post have any javascript attached to it? From reading the html, I have no idea. I have to go hunting for that class name in the JS. Ah, it does - the “hide” behavior on any <button> inside the post markup. Again, the new “fav” button has to work around this.
featured post? For the big post on your individual post page, a “hide” button doesn’t even make sense, so it’s not there. Why is the JS listening for it?
featured and the regular post elements.
posts-index page, but more images from the “related images” feed in the related-images section. Do we use different selector scoping? Data attributes? Copy + paste the JS with minor alterations?
The more places we use this component, the more convoluted the logic here will get.Okay, we can apply semantic class names to everything: hide-button , post-image , featured-post-image , etc. Bootstrap does it, so this must be a good idea, right? Well, no. We haven’t really solved anything, just kicked all these questions down another level. We still have no idea where CSS rules and JS behaviors are attached, and how they’re scoped is going to be even more of a tangled maze.
What we have here is spaghetti code. You have extremely tight coupling between what’s in your template, css, and js, so reusing any one of those is impossible without the others. If you make a new page, you have to take all of that baggage with you.
In the rest of our code, we try to avoid tight coupling. We try to make modules in our systems small, with a single responsibility, and reusable. In Ruby we tend to favor composable systems. Why do we treat CSS and JS differently? CSS by its nature isn’t very conducive to this, and JS object systems are currently all over the place (function objects? ES6 Class ? Factories?). Still, if we’re trying to write moar gooderer code, we’ll have to do something different.
I’m not the first one to get annoyed by all this. Here’s Nicholas Gallagher from Twitter on how to handle these problems. Here’s Ethan Muller from Sparkbox on patterns he’s seen to get around them.
I’ve found a setup that I’m pretty happy with.
image-card is an image-card is an image-card
image-card__caption is clearly a child of image-card just from reading the markup
image-card highlighted could be clobbered or messed up when someone else wants a highlighted class, but image-card image-card--highlighted won’t be.
js-* classes as hooks for javascript, not semantic-ish class names. <ul> , an <ol>, or any arbitrary set of elements, as long as they have .js-dropdown-keyboarderer and .js-dropdown-keyboarderer-item
.js-fav-button can be tiny on one screen and huge on another without CSS conflicts or overrides
data-* attributes have all the same advantages, but they are longer to type and about 85% slower to find (at least, on my desktop Chrome)
This was brought on by using Fortitude on the job, which has most of these solutions baked-in. It had a bit of a learning curve, but within a month or two I noticed how many of the problems and questions listed above simply didn’t come up. After using Bootstrap 3 for the previous year and running into every. single. one. multiple. times. I was ready for something new. I quickly fell in love.
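The block/element/modifier naming is mechanical enough to generate. A tiny illustrative helper (not part of Fortitude or any library, just a sketch of the convention):

```javascript
// Compose a BEM class string: block, block__element, and base--modifier
// variants, space-separated so it drops straight into a class attribute.
function bem(block, element, modifiers = []) {
  const base = element ? `${block}__${element}` : block;
  return [base, ...modifiers.map((m) => `${base}--${m}`)].join(' ');
}

console.log(bem('image-card'));                        // prints: image-card
console.log(bem('image-card', 'caption'));             // prints: image-card__caption
console.log(bem('image-card', null, ['highlighted'])); // prints: image-card image-card--highlighted
```

The point isn’t the helper itself; it’s that the naming scheme is regular enough that a function this small can produce every class you need, with zero guessing about scope.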
The minute anyone decided to go against the conventions, developing on that part of the site got 10x harder. Reusing the partials and components with “semantic” markup was impossible - I had to split things up myself to move forward. Some components were even tied to specific pages! Clear as day: “do not re-use this, just copy+paste everything to your new page.”
I’d much rather be shipping cool stuff than decoupling systems that should never have been tightly bound in the first place.
", "url": "https://atevans.com/2015/10/27/make-your-web-widgets-suck-less.html", "external_url": "http://codepen.io/anon/pen/ZboQab", , "date_published": "2015-10-27T00:00:00+00:00", "date_modified": "2015-10-27T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/10/20/blog-post-for-elastic-co.html", "title": "Blog post for Elastic.co", "summary": null, "content_text": "I made a bloggity post for Elastic, the company behind Elasticsearch, Logstash and Kibana. Our list page got 35% faster. Then we took it further: Angular was making over a dozen web requests to get counts of candidates in various buckets - individuals who are skilled in iOS or Node development, individuals who want to work in Los Angeles, etc. We dropped that to a single request and then combined it with the results request. From 13+ HTTP round-trips per search, we got down to one.", "content_html": "I made a bloggity post for Elastic, the company behind Elasticsearch, Logstash and Kibana.
", "url": "https://atevans.com/2015/10/20/blog-post-for-elastic-co.html", "external_url": "https://www.elastic.co/blog/hired-taps-elasticsearch-as-a-service-for-job-marketplace", , "date_published": "2015-10-20T00:00:00+00:00", "date_modified": "2015-10-20T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/10/12/chef-hackiness.html", "title": "Chef hackiness", "summary": null, "content_text": "Chef seemed like a big hack when I first started with it. Chef “cookbooks” have “recipes” and there’s something called “kitchens” and “data bags” and servers are called “nodes.” Recipes seem like the important things - they define what setup will happen on your servers.Recipes are ruby scripts written at the Kernel level, not reasonably contained in classes or modules like one would expect. You can include other recipes that define “lightweight resource providers” - neatly acronymed to the unpronouncable LWRP. These define helper methods also at the top level, and make them available to your recipe. What’s to keep one recipe’s methods from clobbering another’s? Nothing, as far as I can tell.Recipes run with “attributes,” which can be specified in a number of different ways: on the node itself: scoped for a specific box inside a “role:” for every server of an arbitrary type inside an “environment:” for overriding attributes on dev/stg/prd from the recipe’s defaultsLast time I used Chef, attributes had to be defined in a JSON file - an unusual choice for Ruby, which usually goes with YAML. Now apparently there’s a Ruby DSL, which uses Hashies, which also appear to run at the Kernel level. I couldn’t get it to work in my setup. Chef munges these different levels together with something like inheritence - defaults get overridden in a seemingly sensible order. Unless you told them to override each other somewhere. Then whatever happens, happens.“Data bags” are an arbitrary set of JSON objects. Or is the Ruby DSL supposed to work there, too? I dunno. 
Anyway, they store arbitrary data you can access from anywhere in any recipe, and who doesn’t love global state? They seem necessary for things like usernames and keys, so I can forgive some globalization.This seems like a good enough structure / convention, until you start relying on external recipes. Chef has apparently adopted Berkshelf, a kind of Bundler for chef. You can browse available cookbooks at “the supermarket:” are you tired of the metaphors yet?The problem here is that recipe names are not unique or consistent! I was using an rbenv recipe. But then I cloned my Chef repo on a new machine, ran berks install, and ended up with a totally different cookbook! I mean, what the hell guys? You can’t just pull the rug out like that. It’s rude.Sure, I could vendor said recipes and store them with my repo. Like an animal. But we don’t do that with Bundler, because it seems like the absolute bloody least a package manager can do. Even Bower can handle that much, and basically all it does is clone repos from Github.These cookbooks often operate in totally different ways. Many cookbooks include a recipe you can run with all the setup included; i’s dotted and t’s crossed. They install something like Postgres 9.3 from a package manager or source with a configuration specified in the munged-together attributes for your box. Others rely on stuff in data bags, and you have to specify a node name in the data bag attributes or something awful. Some cookbooks barely have any recipes and you have to write your own recipe using their LWRPs, even if attributes would be totally sensible.Coming back to Chef a few months after doing my last server setup, it seems like they are trying to make progress: using a consistent Ruby DSL rather than JSON, making a package manager official, etc. But in the process it’s become even more of a nightmarish hack. The best practices keep shifting, and the cookbook maintainers aren’t keeping up. 
You can’t use any tutorials or guides more than a few months old - they’ll recommend outdated practices that will leave you more confused about the “right” way to do things. Examples include installing Berkshelf as a gem when it now requires the ChefDK, using Librarian-Chef despite adoption of Berkshelf, storing everything in data bags instead of attributes, etc, etc, etc.Honestly, I’m just not feeling Chef any more. Alternatives like Ansible, Puppet, and even Fucking Shell Scripts are not exactly inspiring. Docker is not for system configuration, even though it kinda looks like it is. It’s for isolating an app environment, and configuring a sub-system for that. Maybe otto is the way to go? But damn, their config syntax is weirder than anything else I’ve seen so far.I’m feeling pretty lost, overall.", "content_html": "Our list page got 35% faster. Then we took it further: Angular was making over a dozen web requests to get counts of candidates in various buckets - individuals who are skilled in iOS or Node development, individuals who want to work in Los Angeles, etc. We dropped that to a single request and then combined it with the results request. From 13+ HTTP round-trips per search, we got down to one.
Chef seemed like a big hack when I first started with it. Chef “cookbooks” have “recipes” and there’s something called “kitchens” and “data bags” and servers are called “nodes.” Recipes seem like the important things - they define what setup will happen on your servers.
Recipes are Ruby scripts written at the Kernel level, not reasonably contained in classes or modules like one would expect. You can include other recipes that define “lightweight resource providers” - neatly acronymed to the unpronounceable LWRP. These define helper methods also at the top level, and make them available to your recipe. What’s to keep one recipe’s methods from clobbering another’s? Nothing, as far as I can tell.
Recipes run with “attributes,” which can be specified in a number of different ways:
Last time I used Chef, attributes had to be defined in a JSON file - an unusual choice for Ruby, which usually goes with YAML. Now apparently there’s a Ruby DSL, which uses Hashies, which also appear to run at the Kernel level. I couldn’t get it to work in my setup. Chef munges these different levels together with something like inheritance - defaults get overridden in a seemingly sensible order. Unless you told them to override each other somewhere. Then whatever happens, happens.
“Data bags” are an arbitrary set of JSON objects. Or is the Ruby DSL supposed to work there, too? I dunno. Anyway, they store arbitrary data you can access from anywhere in any recipe, and who doesn’t love global state? They seem necessary for things like usernames and keys, so I can forgive some globalization.
This seems like a good enough structure / convention, until you start relying on external recipes. Chef has apparently adopted Berkshelf, a kind of Bundler for Chef. You can browse available cookbooks at “the supermarket:” are you tired of the metaphors yet?
The problem here is that recipe names are not unique or consistent! I was using an rbenv recipe. But then I cloned my Chef repo on a new machine, ran berks install, and ended up with a totally different cookbook! I mean, what the hell guys? You can’t just pull the rug out like that. It’s rude.
Sure, I could vendor said recipes and store them with my repo. Like an animal. But we don’t do that with Bundler, because it seems like the absolute bloody least a package manager can do. Even Bower can handle that much, and basically all it does is clone repos from Github.
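One way to dodge the name-collision roulette is to pin the cookbook to an explicit git source in the Berksfile, so berks install can’t silently resolve the bare name to a different cookbook. A hedged sketch - the repo URL and tag below are illustrative, not the actual cookbook that bit me:

```ruby
# Berksfile - pin the cookbook to a source you vetted, instead of
# letting `berks install` resolve the bare name to whatever the
# supermarket serves up this month. (Repo URL and tag are made up.)
source 'https://supermarket.chef.io'

cookbook 'rbenv',
  git: 'https://github.com/example/chef-rbenv.git',
  tag: 'v1.0.0'
```

It’s not vendoring, but it at least makes the resolution deterministic across machines.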
These cookbooks often operate in totally different ways. Many cookbooks include a recipe you can run with all the setup included; i’s dotted and t’s crossed. They install something like Postgres 9.3 from a package manager or source with a configuration specified in the munged-together attributes for your box. Others rely on stuff in data bags, and you have to specify a node name in the data bag attributes or something awful. Some cookbooks barely have any recipes and you have to write your own recipe using their LWRPs, even if attributes would be totally sensible.
Coming back to Chef a few months after doing my last server setup, it seems like they are trying to make progress: using a consistent Ruby DSL rather than JSON, making a package manager official, etc. But in the process it’s become even more of a nightmarish hack. The best practices keep shifting, and the cookbook maintainers aren’t keeping up. You can’t use any tutorials or guides more than a few months old - they’ll recommend outdated practices that will leave you more confused about the “right” way to do things. Examples include installing Berkshelf as a gem when it now requires the ChefDK, using Librarian-Chef despite adoption of Berkshelf, storing everything in data bags instead of attributes, etc, etc, etc.
Honestly, I’m just not feeling Chef any more. Alternatives like Ansible, Puppet, and even Fucking Shell Scripts are not exactly inspiring. Docker is not for system configuration, even though it kinda looks like it is. It’s for isolating an app environment, and configuring a sub-system for that. Maybe otto is the way to go? But damn, their config syntax is weirder than anything else I’ve seen so far.
I’m feeling pretty lost, overall.
", "url": "https://atevans.com/2015/10/12/chef-hackiness.html", , "date_published": "2015-10-12T00:00:00+00:00", "date_modified": "2015-10-12T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/10/01/xss-rails-bffs.html", "title": "XSS and Rails: BFFs!", "summary": null, "content_text": "Everything you know about html_safe is wrong.As pointed out in the World of Rails Security talk at RailsConf this year, even the name is kind of crap. Calling .html_safe on some string sounds kind of like it would make said string safe to put in your HTML. In fact, it does the opposite.Essentially, you need to ensure that every bit of user output is escaped. The defaults make things pretty safe: form inputs, links, etc. are all escaped by default. There are a few small holes, though.Safe link_to user_name, 'http://hired.com' image_tag user_image, alt: user_image_title HAML: .xs-block= user_text ERB: <%= user_text %>Not Safe link_to user.name, user_entered_url .flashbar= flash[:alert].html_safe # with, say, username included", "content_html": "Everything you know about html_safe is wrong.
As pointed out in the World of Rails Security talk at RailsConf this year, even the name is kind of crap. Calling .html_safe on some string sounds kind of like it would make said string safe to put in your HTML. In fact, it does the opposite.
Essentially, you need to ensure that every bit of user output is escaped. The defaults make things pretty safe: form inputs, links, etc. are all escaped by default. There are a few small holes, though.
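What “escaped by default” means in practice is plain HTML-entity encoding. A stdlib-only sketch (no Rails involved) of the transformation the framework applies to user strings:

```ruby
require "cgi"

# What "escaped by default" does to a hostile string: entity-encode
# anything that could break out of the surrounding HTML.
payload = %q(<script>alert("xss")</script>)
escaped = CGI.escapeHTML(payload)
# escaped == "&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;"

# html_safe is the opposite move: it flags a string as "already safe,
# do not escape" - which is why calling it on user input opens the hole.
```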
Safe:
link_to user_name, 'http://hired.com'
image_tag user_image, alt: user_image_title
HAML: .xs-block= user_text
ERB: <%= user_text %>

Not Safe:
link_to user.name, user_entered_url
.flashbar= flash[:alert].html_safe # with, say, username included

Fight for the Future wrote an open letter to Salesforce/Heroku regarding their endorsement of the Cybersecurity Information Sharing Act (pdf link). The bill would, according to FFTF, leak personally identifying information to DHS, NSA, etc.
The first sentence of the letter bothered me, though:
I was disappointed to learn that Salesforce joined Apple, Microsoft, and other tech giants last week in endorsing the Cybersecurity Information Sharing Act of 2015 (CISA).
Apple is proud of their lack of knowledge about you. They encrypt a lot of things by default. They have a tendency to use random device identifiers instead of linking things to an online account, which is better security but causes annoying bugs and edge cases for users. Tim Cook has specifically touted privacy and encryption as advantages of using Apple devices and software. The FBI has given Apple flack for using good encryption, and there were rumors they would take Apple to court.
Has Apple reversed their stance? Are they lying to their customers? I haven’t seen them do that, ever. It would be really weird if they started now.
Oh, wait, they’re not:
Microsoft and Apple, two of the world’s largest software companies, did not directly endorse CISA. They – along with Adobe, Autodesk, IBM, Symantec, and others—signed the letter from BSA The Software Alliance generally encouraging the passage of data-sharing legislation. They also specifically praised four other bills, two of which focused on electronic communications privacy.
But who cares about the details, right? Get outraged! Get mad! Go to the window, open it, stick your head out and yell: “I’m as mad as hell, and I’m not going to take this any more!”
The second sentence of the letter is also problematic:
This legislation would grant blanket immunity for American companies to participate in government mass surveillance programs like PRISM…
This implies a conflation I’ve seen around the internet a lot: that Apple willingly and knowingly participated in an NSA data-harvesting program codenamed PRISM because Apple’s name appeared on one of the Snowden-leaked slides about the program. Also appearing: Google, Microsoft, Facebook, etc.
Apple responded that they did not participate knowingly or willingly. Google said the same thing. Microsoft spouted some weasel words; damage control as opposed to “what the fuck?!”
The NSA may have been using the OpenSSL “Heartbleed” bug for some or all of the data collection from these companies. Apple issued a patch for that bug with timing that subtly suggests it was in response to PRISM - pure speculation, but plausible.
Point is, if the three-letter agencies were using exploits like Heartbleed, they wouldn’t tell Apple or Google. To all appearances, Apple and Google didn’t know anything about PRISM. The FFTF letter is making a weird insinuation that Apple, Google, and other companies would knowingly participate in such a scheme if the bill were passed.
I’m sick and tired of web sites, Twitter, news, etc. telling me to be outraged. Virtually all of them reduce big, complex issues to sound bites so we can get mad about them. I flat-out refuse to have any reaction (positive or negative) to anything “outrageous” I find on the internet, until I’ve done my own homework.
", "url": "https://atevans.com/2015/09/23/privacy-outrage-machine.html", , "date_published": "2015-09-23T00:00:00+00:00", "date_modified": "2015-09-23T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/08/05/trying-out-atom.html", "title": "Trying out Atom", "summary": null, "content_text": "It started with TextMate when I first discovered Ruby on Rails in 2006 or so. TextMate went for ages without an update, Sublime Text was getting popular, and appeared to have mostly-complete compatibility with TextMate, so I switched.Now Sublime has finally annoyed me. The Ruby and Haml packages just try too hard to be helpful, throwing brackets and indents around like there’s no tomorrow, often in places I don’t even want them. Time to try out Atom, especially since Github had a rather amusing video about it.It takes quite a few packages to get up to the level I had Sublime at, but I think I’m basically there. Here’s my setup: Sync Settings - back up your Atom settings to Gist. Here’s mine. Like dotfiles, these are meant to be shared. In Sublime this was a PITA involving symlinking things to Dropbox. Sublime Word Navigation - nothing is more frustrating than having to hit alt+← twice just to get past a stupid dash. Editorconfig - keep your coding style consistent. Local Settings - I’ve wanted this in Sublime for ages. Simple things like max line length, soft wrap settings, and even package settings like “should RubyTest use rspec or zeus” on a per-project basis. RubyTest - speaking of… Does everything I need from Sublime’s RubyTest, just had to re-map the keyboard shortcuts. Pigments - shows css colors in the editor, and alternative to Sublime’s GutterColor. Aligner - works way better than Sublime’s AlignTab package. Git History - step through the history of any file. Git Blame - shows the last committer for each line in the gutter. Unfortunately, the gutter is too small for many names, so it craps out and shows “min”. 
Also, the gutter can’t keep up with the main window’s scrolling, which is janky. Git Plus - I still end up doing Git on the command line. This often didn’t support the stuff I need to do on a daily basis. Language-haml - if you’re unfortunate enough to have to deal with HAML, this kinda helps. Like putting a band-aid on a bullet wound. Rails Transporter - this is a nice idea, but it still doesn’t cover the functionality that Sublime’s RubyTest had. cmd+. would let you jump from a file to the spec file and back, and transporter just gives up if you’re in a namespace, form object, worker, etc.How’s it working out? Well, Atom still feels a bit unpolished overall. Some of the packages above don’t work quite right, or aren’t as helpful as they advertise. And Atom’s auto-completion is annoying as bloody hell. It seems to use CTAGs or some variant, so it pulls in all symbols from everywhere, and the one I want is never even close to the top. And it pops up on every. single. thing. I. type. in a big flashy multi-colored box that randomly switches whether it’s above or below the cursor.Finally, the quick-tab-switch is terrible compared to Sublime’s. It’s fuzzy matching is way worse, it ignores punctuation like underscores, and definitely maintains no concept of how “nearby” a file is, nor how recently I’ve opened it.I might switch back.", "content_html": "It started with TextMate when I first discovered Ruby on Rails in 2006 or so. TextMate went for ages without an update, Sublime Text was getting popular, and appeared to have mostly-complete compatibility with TextMate, so I switched.
Now Sublime has finally annoyed me. The Ruby and Haml packages just try too hard to be helpful, throwing brackets and indents around like there’s no tomorrow, often in places I don’t even want them. Time to try out Atom, especially since Github had a rather amusing video about it.
It takes quite a few packages to get up to the level I had Sublime at, but I think I’m basically there. Here’s my setup:
How’s it working out? Well, Atom still feels a bit unpolished overall. Some of the packages above don’t work quite right, or aren’t as helpful as they advertise. And Atom’s auto-completion is annoying as bloody hell. It seems to use CTAGs or some variant, so it pulls in all symbols from everywhere, and the one I want is never even close to the top. And it pops up on every. single. thing. I. type. in a big flashy multi-colored box that randomly switches whether it’s above or below the cursor.
Finally, the quick-tab-switch is terrible compared to Sublime’s. Its fuzzy matching is way worse, it ignores punctuation like underscores, and definitely maintains no concept of how “nearby” a file is, nor how recently I’ve opened it.
I might switch back.
", "url": "https://atevans.com/2015/08/05/trying-out-atom.html", , "date_published": "2015-08-05T00:00:00+00:00", "date_modified": "2015-08-05T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/05/26/make-your-rails-code-more-boringer.html", "title": "Make your Rails code more boringer", "summary": null, "content_text": "Or: I’m getting too old for magic tricks So much of what we try to do is get to a point where the solution seems inevitable: you know, you think “of course it’s that way, why would it be any other way?” It looks so obvious, but that sense of inevitability in the solution is really hard to achieve. ~ Jony Ive, July 2003I’ve been doing Rails for nearly a decade. I’ve seen bits of magic come and go, I’ve written too-fancy abstractions that leak like sieves, and mostly I’ve worked both solo and on teams. I’ve come to like boring code. Code with little to no magic, that looks “enterprisey,” that has too many classes and objects, and uses boring old things like inheritence instead of composition.Boring code is easy to read, and easy to debug. When you don’t define methods and classes dynamically, you can actually use the stacktrace. When you don’t use mixins, modules and concerns, you never have to wonder where a method is defined. You can grep your codebase. When you separate domain logic from the underlying technology, it’s very clear what is happening where.That’s very helpful for working on teams. Everyone should be able to read and understand your code. The ability for someone else to understand and work with your code has an inverse, exponential correlation with the number of files, objects, and messages between input and output. Layers of indirection and metaprogrammed magic make the curve even steeper.I want to make it really hard for the most annoying, stupid member of my team to screw it up: future me. Me in three months, when I’ve lost context and forgotten why I wrote any of this, or how. I want him to pick it up. 
Maybe he’ll say “man, this code is stodgy,” but he’ll understand it immediately.Let’s get to work.No side effects in model codeCode in your models should not change any other models, send emails, call APIs, or write to anything other than the primary data store. Especially in callbacks.Callbacks are great for setting and verifying internal state. A callback to normalize a url, email, or url slug is great. You’re just ensuring the model’s data is consistent. A callback to send an email is total bullshit. There will be times, probably many of them, when you do not want to send that email. Data migrations, actions from admins, a hundred other cases. Put those actions in another class, or make a method that is never called automatically. Force yourself to be explicit about when that is happening in your controllers, background workers, etc.Of course there are exceptions. touch: true is generally fine, as long as the touched model has no side effects on update.No observersObservers were removed in Rails 4 for a reason. They are invisible logic that no one knows to anticipate. Use explicit calls in controllers or workers.No default scopesWhen you write an ActiveRecord query, you should see exactly what it does. No one should have to wonder why they are getting unexpected ordering, joins or n+1 queries.No state machines for modelsEveryone thinks this state machines for your models are a great idea, and I’ve no idea why. Look at all these state machines. These put your business logic inside your models. That’s great, right? I mean, it gets them out of the controller. But models are not your junk drawer for business logic.Models will get to invalid states, as inevitably as the fucking tides. The business logic will change. You will deploy bugs. Then you have to do some ugly hack like update_columns status: 'fml' to herd them back into line. You have to do a ton of setup in tests. State machines define tons of magic methods. 
Guard methods, state-specific methods, and transitions will fail.State machines are for in-line processing. Regular Expressions are a great example. They are not for asynchronous changes over time that sync to an external service like a database.Just use a bloody string field, or better yet an ActiveRecord Enum. You can use conditional validations, but really you should put your business logic elsewhere.Avoid instance variables in views & helpersI write partials like this:# app/views/blog_posts/_byline.html.erb<% post = local_assigns[:post] || @post%><div class=\"byline\"> <span class=\"author-avatar\"><%= fetch_author_avatar(post.author) %></span> <span class=\"author-name\"><%= post.author.name.titleize %></span> <span class=\"post-date\"><%= localize post.updated_at %></span></div># app/helpers/blog_posts_helper.rbmodule BlogPostsHelper def fetch_author_avatar(author) CDNFetcher.generate_url(author.avatar_url) endendEven that’s not great, since post.author may be an n+1 query, but that’s manageable with the Bullet gem.Explicitly declaring variables and passing dependencies downward makes it crystal clear where everything is coming from. When you want to render this partial in some other view, and you inevitably will, you won’t have to dig through the whole chain and figure out what to set in the controller. Instance variables are effectively global variables for the view scope, and nobody likes globals.Locals are excellent for making sure your partial doesn’t depend on instance variables, but they’re bloody annoying when it isn’t clear where they’re coming from. The local_assigns hash prevents cryptic undefined method errors, makes the partial’s dependencies explicit, and allows you to override them when you’re using the instance variable for something else. I even pull a local out of this hash for the partial-name-variable passed in with render partial: 'my_partial', object: obj - byline in this case. 
This allows for defensive coding, sensible defaults, and makes it an explicit dependency.Helpers that depend on instance variables are less clear and less reusable than helpers with arguments. They compound the problem of instance variables in views or partials, since they’re not immediately visible when looking at the view code.No view helpers in models or controllers“Convention over configuration” is one of the huge benefits of Ruby on Rails. You don’t wonder where to put this or that bit of code, and other devs don’t wonder where to find it. If you have a method on a model that formats a name so it can be used in a view, you’ve made it harder for anyone else to find. Same thing if you define a helper in a controller that is used in the view.Use additional conventionsSome really smart people in the Rails community have invented more specialized objects for parts of a Rails app, and they had some good reasons. Form Objects, Service Objects, Presenters, and other conventions exist to help you keep your code clean and DRY.Don’t always or dogmatically use these things - a form to update a string in a model doesn’t need a form object. A controller that saves one model and makes an API call doesn’t need a service object. But when code gets re-used or specialized, these can be super helpful. Having more conventions for your team helps keep it obvious where any given piece of code is or should be.Don’t go too far, either - I think Trailblazer or Hexagonal Architecture make it harder for Rails devs to understand where things are, and tempt you into using more magic to wire everything up.Remember that abstractions hurtAll abstractions leak, and these are some of the most aggravating bugs to deal with. You end up pouring through someone else’s source code trying to figure out what the hell is going on. Not to pick on Trailblazer (it really does look interesting), but when I saw the contract / validation DSL I immediately shook my head. 
Knowing when something is invoked and how is pretty important. The more of that you have to keep in your head, the less working memory you have for actually writing your code.To justify an abstraction, it has to have 10x easier than operating without it. Not using the abstraction has to be so painful that you’re actively losing hair over it.For example, this is my main issue with HAML. It’s a big abstraction - it takes you very far away from the actual HTML you want to render - and the only value it provides is “it’s pretty.” And it’s not even pretty, for non-trivial apps. If you use BEM notation, any amount of data attributes, conditional classes, or I18n, you end up with perl-like punctuation soup. You can’t even add arbitraty white space to make it more readable.Sass (in its scss form) is a great counter-example. Lacking variables, comprehensions, and clear inheritence is a massive pain when writing css. Sass keeps you pretty close to the generated css, and provides 100x the power.DSLs, Concerns, transpiled languages, and syntax sugar gems are all suspect. Be mindful about when and how you introduce new layers of abstraction.Don’t monkey patchDuh. Use Decorators to make it explicit where your methods are coming from.These are all very general guidelines. Rules are meant to be broken, and you totally should if it makes your code 10x easier. I’ll add more if I can think of anything else.", "content_html": "Or: I’m getting too old for magic tricks
So much of what we try to do is get to a point where the solution seems inevitable: you know, you think “of course it’s that way, why would it be any other way?” It looks so obvious, but that sense of inevitability in the solution is really hard to achieve.
~ Jony Ive, July 2003
I’ve been doing Rails for nearly a decade. I’ve seen bits of magic come and go, I’ve written too-fancy abstractions that leak like sieves, and mostly I’ve worked both solo and on teams. I’ve come to like boring code. Code with little to no magic, that looks “enterprisey,” that has too many classes and objects, and uses boring old things like inheritance instead of composition.
Boring code is easy to read, and easy to debug. When you don’t define methods and classes dynamically, you can actually use the stacktrace. When you don’t use mixins, modules and concerns, you never have to wonder where a method is defined. You can grep your codebase. When you separate domain logic from the underlying technology, it’s very clear what is happening where.
That’s very helpful for working on teams. Everyone should be able to read and understand your code. The ability for someone else to understand and work with your code has an inverse, exponential correlation with the number of files, objects, and messages between input and output. Layers of indirection and metaprogrammed magic make the curve even steeper.
I want to make it really hard for the most annoying, stupid member of my team to screw it up: future me. Me in three months, when I’ve lost context and forgotten why I wrote any of this, or how. I want him to pick it up. Maybe he’ll say “man, this code is stodgy,” but he’ll understand it immediately.
Let’s get to work.
Code in your models should not change any other models, send emails, call APIs, or write to anything other than the primary data store. Especially in callbacks.
Callbacks are great for setting and verifying internal state. A callback to normalize a url, email, or url slug is great. You’re just ensuring the model’s data is consistent. A callback to send an email is total bullshit. There will be times, probably many of them, when you do not want to send that email. Data migrations, actions from admins, a hundred other cases. Put those actions in another class, or make a method that is never called automatically. Force yourself to be explicit about when that is happening in your controllers, background workers, etc.
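A plain-Ruby sketch of the split (no ActiveRecord here; WelcomeMailer and User are illustrative, not from a real app): the model keeps its own state consistent, and the caller explicitly triggers the side effect.

```ruby
# Illustrative sketch, plain Ruby - no ActiveRecord. The model's "save"
# only touches its own data; mail is an explicit, separate step.
class WelcomeMailer
  def self.deliveries
    @deliveries ||= []
  end

  # Never called automatically from a model callback.
  def self.deliver_welcome(user)
    deliveries << "welcome to #{user.email}"
  end
end

class User
  attr_reader :email

  def initialize(email)
    @email = email
  end

  # Good callback material: keeps internal state consistent.
  def normalize_email
    @email = @email.strip.downcase
  end

  def save
    normalize_email
    true # writes to the primary store, nothing else
  end
end

user = User.new("  Someone@Example.COM ")
user.save                           # no email sent here
WelcomeMailer.deliver_welcome(user) # the caller decides when
```

Data migrations and admin actions can now call save all day without spamming anyone.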
Of course there are exceptions. touch: true is generally fine, as long as the touched model has no side effects on update.
Observers were removed in Rails 4 for a reason. They are invisible logic that no one knows to anticipate. Use explicit calls in controllers or workers.
When you write an ActiveRecord query, you should see exactly what it does. No one should have to wonder why they are getting unexpected ordering, joins or n+1 queries.
Everyone thinks state machines for your models are a great idea, and I’ve no idea why. Look at all these state machines. These put your business logic inside your models. That’s great, right? I mean, it gets them out of the controller. But models are not your junk drawer for business logic.
Models will get to invalid states, as inevitably as the fucking tides. The business logic will change. You will deploy bugs. Then you have to do some ugly hack like update_columns status: 'fml' to herd them back into line. You have to do a ton of setup in tests. State machines define tons of magic methods. Guard methods, state-specific methods, and transitions will fail.
State machines are for in-line processing. Regular Expressions are a great example. They are not for asynchronous changes over time that sync to an external service like a database.
Just use a bloody string field, or better yet an ActiveRecord Enum. You can use conditional validations, but really you should put your business logic elsewhere.
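Here’s a plain-Ruby sketch of that guarded string field (no ActiveRecord; Subscription and its statuses are invented for illustration) - roughly the predicates and guarded writes AR’s enum generates, minus the magic:

```ruby
# Hypothetical model: a guarded string field instead of a state machine.
class Subscription
  STATUSES = %w[trial active canceled].freeze

  attr_reader :status

  def initialize(status = "trial")
    self.status = status
  end

  # One guarded writer instead of a web of transition callbacks: bad
  # values fail loudly, and herding a broken row back into line is a
  # plain assignment, not an update_columns hack.
  def status=(value)
    raise ArgumentError, "unknown status: #{value}" unless STATUSES.include?(value)
    @status = value
  end

  # The predicate methods ActiveRecord's enum would define for free.
  STATUSES.each do |s|
    define_method("#{s}?") { status == s }
  end
end

sub = Subscription.new
sub.status = "active"
sub.active? # true
```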
I write partials like this:
# app/views/blog_posts/_byline.html.erb
<% post = local_assigns[:post] || @post %>
<div class=\"byline\">
  <span class=\"author-avatar\"><%= fetch_author_avatar(post.author) %></span>
  <span class=\"author-name\"><%= post.author.name.titleize %></span>
  <span class=\"post-date\"><%= localize post.updated_at %></span>
</div>

# app/helpers/blog_posts_helper.rb
module BlogPostsHelper
  def fetch_author_avatar(author)
    CDNFetcher.generate_url(author.avatar_url)
  end
end

Even that’s not great, since post.author may be an n+1 query, but that’s manageable with the Bullet gem.
Explicitly declaring variables and passing dependencies downward makes it crystal clear where everything is coming from. When you want to render this partial in some other view, and you inevitably will, you won’t have to dig through the whole chain and figure out what to set in the controller. Instance variables are effectively global variables for the view scope, and nobody likes globals.
Locals are excellent for making sure your partial doesn’t depend on instance variables, but they’re bloody annoying when it isn’t clear where they’re coming from. The local_assigns hash prevents cryptic undefined method errors, makes the partial’s dependencies explicit, and allows you to override them when you’re using the instance variable for something else. I even pull a local out of this hash for the partial-name-variable passed in with render partial: 'my_partial', object: obj - byline in this case. This allows for defensive coding, sensible defaults, and makes it an explicit dependency.
Helpers that depend on instance variables are less clear and less reusable than helpers with arguments. They compound the problem of instance variables in views or partials, since they’re not immediately visible when looking at the view code.
“Convention over configuration” is one of the huge benefits of Ruby on Rails. You don’t wonder where to put this or that bit of code, and other devs don’t wonder where to find it. If you have a method on a model that formats a name so it can be used in a view, you’ve made it harder for anyone else to find. Same thing if you define a helper in a controller that is used in the view.
Some really smart people in the Rails community have invented more specialized objects for parts of a Rails app, and they had some good reasons. Form Objects, Service Objects, Presenters, and other conventions exist to help you keep your code clean and DRY.
Don’t always or dogmatically use these things - a form to update a string in a model doesn’t need a form object. A controller that saves one model and makes an API call doesn’t need a service object. But when code gets re-used or specialized, these can be super helpful. Having more conventions for your team helps keep it obvious where any given piece of code is or should be.
Don’t go too far, either - I think Trailblazer or Hexagonal Architecture make it harder for Rails devs to understand where things are, and tempt you into using more magic to wire everything up.
All abstractions leak, and these are some of the most aggravating bugs to deal with. You end up poring over someone else’s source code trying to figure out what the hell is going on. Not to pick on Trailblazer (it really does look interesting), but when I saw the contract / validation DSL I immediately shook my head. Knowing when something is invoked and how is pretty important. The more of that you have to keep in your head, the less working memory you have for actually writing your code.
To justify an abstraction, it has to make things 10x easier than operating without it. Not using the abstraction has to be so painful that you’re actively losing hair over it.
For example, this is my main issue with HAML. It’s a big abstraction - it takes you very far away from the actual HTML you want to render - and the only value it provides is “it’s pretty.” And it’s not even pretty, for non-trivial apps. If you use BEM notation, any amount of data attributes, conditional classes, or I18n, you end up with Perl-like punctuation soup. You can’t even add arbitrary white space to make it more readable.
Sass (in its scss form) is a great counter-example. Lacking variables, comprehensions, and clear inheritance is a massive pain when writing css. Sass keeps you pretty close to the generated css, and provides 100x the power.
DSLs, Concerns, transpiled languages, and syntax sugar gems are all suspect. Be mindful about when and how you introduce new layers of abstraction.
Duh. Use Decorators to make it explicit where your methods are coming from.
These are all very general guidelines. Rules are meant to be broken, and you totally should if it makes your code 10x easier. I’ll add more if I can think of anything else.
", "url": "https://atevans.com/2015/05/26/make-your-rails-code-more-boringer.html", "date_published": "2015-05-26T00:00:00+00:00", "date_modified": "2015-05-26T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/05/08/the-analytics-cycle.html", "title": "The Analytics Cycle", "summary": null, "content_text": "My experience with analytics & measurement:", "content_html": "My experience with analytics & measurement:

I’ve been getting into Ruby & other software engineering talks lately, as they complement my usual diet of quantum physics, neuroscience, and social psychology lectures. I’m not actually that smart; a lot of it goes over my head, but sometimes I get concepts, and other times they prompt me to poke through Wolfram Alpha, Wikipedia, etc.
Anyway, Technical Talks:
Here Be Dragons: Katrina Owen.
Starts off with some fun ranting about some bad code, then gets real. Fantastic.
Sometimes a Controller is Just a Controller: Justin Searls.
How people on dev teams interact, and how to maintain sanity.
API Design for Gem Authors: Emily Stolfo.
How to design your Ruby gem so people will actually want to use it.
Don’t Be a Hero: Sustainable Open Source: Lillie Chilen.
How to make sure your open source project doesn’t die, and actually get other people to contribute.
Estimation Blackjack and Other Games: A Comedic Compendium: Amy Unger.
Why estimation is important and how it goes wrong.
Introduction to Introspection Features of Ruby: Koichi Sasada.
Gets into some of the low-level capabilities the Ruby engine gives you. I thought I knew a lot about Kernel and Object methods, but this taught me otherwise.
I guess most of these are “soft talks” in that they aren’t about some new library or specific functions of programming. But these topics are critical to working on a team. Even if you know some or all the material, it’s worth a refresher course now and again.
", "url": "https://atevans.com/2015/05/06/here-be-dragons-amazing-ruby-talk.html", , "date_published": "2015-05-06T00:00:00+00:00", "date_modified": "2015-05-06T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2015/05/03/rails-conf-2015.html", "title": "Rails Conf 2015", "summary": null, "content_text": "I had the pleasure of attending RailsConf 2015 this year with my company Hired.It was exhausting.That’s the biggest thing I learned - I haven’t been to many tech conferences, and I’ve only once been paid to fly somewhere else on business. The factors added up, and I spent nearly every minute tired, exhausted, and not so functional. 5-hour flight on Monday Jet lag Being out of my familiar places Going to talks all day, and trying to learn something at each of them Socializing during any breaks or downtime Syncing up with the team Trying to bang out some code here and there The Hired semi-official after party on WedsFor an introvert like me, that kind of chaos took everything I had.The upshot was, there were some great talks, and I met a lot of cool fellow Rubyists. We bounced ideas and war stories off each other during lunch & breaks, talked about our respective companies, and made the place more of a community than a business conference.Favorite TalksThe videos are up on ConFreaks, who are by far the best conference-talk recording people I’ve ever seen. Some of my favorites: Don’t Be a Hero: Sustainable Open Source - a great intro to not having a bus problem in your open source project. Intro to Introspection Features in Ruby - not really an intro, this was some high-level talk. I knew maybe half of this stuff. Aaron Patterson’s Keynote - insight into how @tenderlove identified and attacked some problems in Ruby’s require Intro to Rails Security - no video yet, but check back later. 
Again, not exactly an intro - he mentioned some vulnerabilities I didn’t know about, and kinda sold me on BrakemanA few of my notes DHH’s motivation on Rails is making a framework for small teams that does 90% of what you need. Twitter took this as him crapping on microservices and front-end frameworks, which he did a little bit. I respect that motivation, as that’s what let me pick up Rails and do cool stuff with it in the first place. And for small teams (<100ish) it allows everyone to be self-sufficient. He mentioned Writeboard as a terrible experience developing a microservice. Couldn’t agree more - it was a PITA to use. I’ve gone down a similar path with at least one team, and the added overhead becomes awful. If you’re running an open source project, respond to ALL pull requests within 48hrs. If you wait more than a week, they’ll never contribute to you again. Don’t hoard the keys to your open source project - make sure someone else has access to the domain, can publish the gem, etc. Kubernetes is pronounced kübêrnêtēs - thanks to Aja “Thagomizer” for the clarification. And the quick intro to Rails on Docker. Model callbacks in Rails 5 will not halt the callback chain unless you explicitly throw(:abort) . For a ridiculously long discussion why, check out this ginormous PR. From Koichi - keyword params are still 2x slower than normal params (30x slower on Ruby 2.0) I left the “Crossing the Bridge” talk on Rails with client-side js frameworks. The architecture he outlined (ActiveRecord, ActiveModel::Serializers, ng-rails-resource) is terrible. 20x the overhead of server-side rendering, and your client-side app ends up a disastrous mess. I did get to talk to Mike Perham, the creator of Sidekiq. We had an interesting chat about memory usage and ruby GC. I was hoping that the OS would clean up memory used by a separate thread – ie, ending a sidekiq job cleans out memory much faster than letting ruby’s GC run.
Unfortunately, that’s not the case, and there’s still no real way to predict when ruby GC will run, short of calling it manually. ", "content_html": "I had the pleasure of attending RailsConf 2015 this year with my company Hired.
It was exhausting.
That’s the biggest thing I learned - I haven’t been to many tech conferences, and I’ve only once been paid to fly somewhere else on business. The factors added up, and I spent nearly every minute tired, exhausted, and not so functional.
5-hour flight on Monday
Jet lag
Being out of my familiar places
Going to talks all day, and trying to learn something at each of them
Socializing during any breaks or downtime
Syncing up with the team
Trying to bang out some code here and there
The Hired semi-official after party on Weds
For an introvert like me, that kind of chaos took everything I had.
The upshot was, there were some great talks, and I met a lot of cool fellow Rubyists. We bounced ideas and war stories off each other during lunch & breaks, talked about our respective companies, and made the place more of a community than a business conference.
The videos are up on ConFreaks, who are by far the best conference-talk recording people I’ve ever seen. Some of my favorites:
Don’t Be a Hero: Sustainable Open Source - a great intro to not having a bus problem in your open source project.
Intro to Introspection Features in Ruby - not really an intro, this was some high-level talk. I knew maybe half of this stuff.
Aaron Patterson’s Keynote - insight into how @tenderlove identified and attacked some problems in Ruby’s require
Intro to Rails Security - no video yet, but check back later. Again, not exactly an intro - he mentioned some vulnerabilities I didn’t know about, and kinda sold me on Brakeman
A few of my notes
DHH’s motivation on Rails is making a framework for small teams that does 90% of what you need. Twitter took this as him crapping on microservices and front-end frameworks, which he did a little bit. I respect that motivation, as that’s what let me pick up Rails and do cool stuff with it in the first place. And for small teams (<100ish) it allows everyone to be self-sufficient.
He mentioned Writeboard as a terrible experience developing a microservice. Couldn’t agree more - it was a PITA to use. I’ve gone down a similar path with at least one team, and the added overhead becomes awful.
If you’re running an open source project, respond to ALL pull requests within 48hrs. If you wait more than a week, they’ll never contribute to you again.
Don’t hoard the keys to your open source project - make sure someone else has access to the domain, can publish the gem, etc.
Kubernetes is pronounced kübêrnêtēs - thanks to Aja “Thagomizer” for the clarification. And the quick intro to Rails on Docker.
Model callbacks in Rails 5 will not halt the callback chain unless you explicitly throw(:abort) . For a ridiculously long discussion why, check out this ginormous PR.
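The halting mechanic underneath is plain Ruby catch/throw; a simplified standalone sketch (not Rails’ actual implementation) of how throw(:abort) stops a chain:

```ruby
# Simplified callback runner: callbacks run in order, and
# throw(:abort) inside any one of them skips the rest.
def run_callbacks(callbacks)
  completed = []
  catch(:abort) do
    callbacks.each do |name, callback|
      callback.call
      completed << name
    end
  end
  completed
end

CHAIN = [
  [:validate,    -> {}],
  [:check_quota, -> { throw(:abort) }],  # halts the chain here
  [:save,        -> {}],                 # never runs
]
run_callbacks(CHAIN)  # completed callbacks: [:validate]
```

Without the throw, all three callbacks would run; with it, :save never executes, which is the behavior the Rails 5 change makes explicit.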
From Koichi - keyword params are still 2x slower than normal params (30x slower on Ruby 2.0)
I left the “Crossing the Bridge” talk on Rails with client-side js frameworks. The architecture he outlined (ActiveRecord, ActiveModel::Serializers, ng-rails-resource) is terrible. 20x the overhead of server-side rendering, and your client-side app ends up a disastrous mess.
I did get to talk to Mike Perham, the creator of Sidekiq. We had an interesting chat about memory usage and ruby GC. I was hoping that the OS would clean up memory used by a separate thread – ie, ending a sidekiq job cleans out memory much faster than letting ruby’s GC run. Unfortunately, that’s not the case, and there’s still no real way to predict when ruby GC will run, short of calling it manually.
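Forcing and inspecting a run is the easy part (GC.start, GC.count, and GC.stat are standard MRI APIs); predicting when a run happens on its own is what remains hard:

```ruby
before = GC.count   # total GC runs so far in this process
GC.start            # force a full collection right now
after = GC.count    # the forced run bumps the counter

stats = GC.stat     # hash of detailed counters, e.g. :heap_live_slots
puts "GC runs: #{after} (was #{before})"
puts "live slots: #{stats[:heap_live_slots]}"
```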
On the second day of Elastic{ON}, I woke up to an email from my VPS provider saying that my server was participating in a DDoS attack. Network access had been suspended, and I needed to back up any data and kill the server. I had console access via their portal, so I logged in.
Turned out Elasticsearch was the culprit. I found a bash console running under the elasticsearch user, so I killed all their processes (and Elasticsearch). If you are not on the latest version, you need to be. And if you have dynamic scripting on (the default in previous versions), you need to make sure it’s off.
I didn’t have much of import on there anyway, so I just blew away the server. Then it was time to figure out a new, more secure setup. I use this server to try out quick apps I do on the side. They don’t take very much in terms of resources. Usually they just need a basic app run, and a service like Postgres, Redis, or Mongo at very low scale. There’s no reason to have one or more servers per app.
Heroku has the auto-sleep thing, which sucks, and not all addons are free at the intro tier. For example, Found.
My first thought was Docker, because it’s the new hotness.
While I could run just base Docker, I just can’t justify having to do these things manually. For now, I’m sticking with the “just a linux box” architecture.
Enter chef-solo. I’d been itching to write a setup & config script for a while, especially since my apps have so many components in common. Upstart, monit, logrotate, cron jobs - it’s way better to have this stuff in a repo than just sitting on a server somewhere.
Plus, the recipes for the most part come with secure defaults and recommended best practices right in the README. My final repo stack ended up using:
This made it super easy to write some chef scripts, run a test build on a Vagrant box, and then deploy it to my shiny new dev server. My blahg here is running on nginx on it, since it’s built with Jekyll, Grunt, and rsync, modified from the super-nice yeoman generator.
My new setup is hopefully more secure, and won’t be going down again for a while.
", "url": "https://atevans.com/2015/03/20/i-got-hacked-new-server-setup.html", "date_published": "2015-03-20T00:00:00+00:00", "date_modified": "2015-03-20T00:00:00+00:00", "author": { "name": "" } }, { "id": "https://atevans.com/2014/12/02/unicode-css.html", "title": "Unicode CSS", "summary": null, "content_text": "Why? Because job security. See also: Coding in Emoji with Swift", "content_html": "Why? Because job security. See also: Coding in Emoji with Swift
", "url": "https://atevans.com/2014/12/02/unicode-css.html", "external_url": "http://jsbin.com/nuhinuda/1/edit?html,css,output", "date_published": "2014-12-02T00:00:00+00:00", "date_modified": "2014-12-02T00:00:00+00:00", "author": { "name": "" } } ] }