<![CDATA[johncoder]]>https://johncoder.com/https://johncoder.com/favicon.pngjohncoderhttps://johncoder.com/Ghost 6.22Sat, 21 Mar 2026 04:21:08 GMT60<![CDATA[Fixing Date Taken for Photos]]>I was able to figure out a few commands for exiftool in order to fix some photos that didn't have any useful date/time stamps for when they were taken. Here's the test I conducted:

# run this command to view the exif data (everything!)
exiftool.exe
]]>
https://johncoder.com/fixing-date-taken-for-photos/652b131539b5810001482786Sat, 14 Oct 2023 22:43:23 GMTI was able to figure out a few commands for exiftool in order to fix some photos that didn't have any useful date/time stamps for when they were taken. Here's the test I conducted:

# run this command to view the exif data (everything!)
exiftool.exe path/to/file.jpg

# view only the timestamps present in the exif data, note
# DateTimeOriginal is missing
exiftool.exe -a -G -time:all path/to/file.jpg

# [File]     File Modification Date/Time    : 2020:09:03 15:33:04-04:00
# [File]     File Access Date/Time          : 2023:10:14 17:58:19-04:00
# [File]     File Creation Date/Time        : 2023:10:14 17:49:34-04:00
# [ICC_Profile] Profile Date Time           : 2017:07:07 13:22:32

In the output above, the ICC_Profile entry should be ignored. The important fields are the Access and Creation dates, which reflect the time the file was copied to its current location or, in some cases, opened and viewed with certain programs. That's a problem: those dates are irrelevant for sorting. We need DateTimeOriginal set in the file as a definitive "Date Taken", which we'll use to sort photos. The sensible option, when DateTimeOriginal is missing and the "File Modification Date/Time" accurately reflects the date taken, is to set DateTimeOriginal to that modification date.

# this command does the update described above, preserving
# the original file as file.jpg_original in the same directory
exiftool.exe "-DateTimeOriginal<${FileModifyDate}" path/to/file.jpg

# now we can check it again
exiftool.exe -a -G -time:all path/to/file.jpg

# [File]          File Modification Date/Time     : 2023:10:14 17:59:48-04:00
# [File]          File Access Date/Time           : 2023:10:14 17:59:48-04:00
# [File]          File Creation Date/Time         : 2023:10:14 17:49:34-04:00
# [EXIF]          Date/Time Original              : 2020:09:03 15:33:04
# [ICC_Profile]   Profile Date Time               : 2017:07:07 13:22:32

This now has an EXIF entry for "Date/Time Original", which is what we want. It's worth noting that File Modification Date/Time has been updated to reflect this change to the file's EXIF data. However, this is exactly what is desired, because "Date/Time Original" (or DateTimeOriginal) now holds the previous "File Modification Date/Time".

We can now run a script to test the file:

# test the original file
exiftool.exe -q -if "not $DateTimeOriginal" -r -p "$directory\$filename" path/to/file.jpg_original

# output:
# path/to/file.jpg_original
# Warning: [Minor] Tag 'DateTimeOriginal' not defined - path/to/file.jpg_original

# test the new file
exiftool.exe -q -if "not $DateTimeOriginal" -r -p "$directory\$filename" path/to/file.jpg

# outputs nothing because the DateTimeOriginal exists

This experiment demonstrates several details about this process, from identifying problematic files to fixing them, and then verifying the fix worked.

When working with exiftool, remember a few key things:

  • These commands are slightly different on Windows, Mac, and Linux. Beware!
  • By default, updating a file with exiftool preserves a copy of the original with a _original suffix.
  • There is a flag to directly edit the files without preserving the original, -overwrite_original_in_place.
  • The -r flag tells exiftool to recursively walk the given path and perform the respective task on each matching file it finds.
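
One concrete example of the cross-platform difference: the double quotes used in the commands above are right for the Windows cmd shell, but on Mac and Linux the shell expands $DateTimeOriginal itself before exiftool ever sees it. A quick demonstration, using echo as a stand-in since the expansion happens before the program runs:

```shell
# With no DateTimeOriginal variable set in the shell, double quotes let
# the shell expand it to nothing before the program sees the argument:
unset DateTimeOriginal
double=$(echo "not $DateTimeOriginal")
single=$(echo 'not $DateTimeOriginal')
echo "[$double]"   # prints [not ] -- the tag name was eaten by the shell
echo "[$single]"   # prints [not $DateTimeOriginal] -- what exiftool needs to see
```

So on Mac and Linux, write the conditions in single quotes, e.g. -if 'not $DateTimeOriginal'.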

TL;DR One-Liner

The exiftool program is super powerful, and worthy of respect. Here's a one-liner that walks a path recursively looking for files missing a DateTimeOriginal field, and uses the file modified date to set DateTimeOriginal, overwriting the original file.

To help clarify the state before and after running the one-liner, an extra command is included to output all files that are missing DateTimeOriginal.

# for sanity, recursively walk & output files that will be changed
exiftool.exe -q -if "not $DateTimeOriginal" -r -p "$directory\$filename" path/to

# here's a one-liner to do it recursively
exiftool.exe -q -if "not $DateTimeOriginal" -r -overwrite_original_in_place -p "Setting DateTimeOriginal for: $directory/$filename" -d "%Y:%m:%d %H:%M:%S" "-DateTimeOriginal<${FileModifyDate}" path/to

# for sanity, see if any files still match
exiftool.exe -q -if "not $DateTimeOriginal" -r -p "$directory\$filename" path/to

Update responsibly!

Extra Credit

I spent some time looking into avoiding processing files with a prefix of ._ and came up with this:

exiftool.exe -q -if "not $FileName=~/\._.*/i and not $DateTimeOriginal" -r -p "$directory\$filename" path/to
]]>
<![CDATA[Using Emacs With PostgreSQL]]>I like to stay inside of Emacs as much as possible, and it's amazing that I have put off this quality of life improvement for so long. After reading this post about using Emacs as a database client, I decided to try and make this more convenient for

]]>
https://johncoder.com/using-emacs-with-postgresql/60b92a5729a633003efad3d7Thu, 03 Jun 2021 20:30:47 GMTI like to stay inside of Emacs as much as possible, and it's amazing that I have put off this quality of life improvement for so long. After reading this post about using Emacs as a database client, I decided to try and make this more convenient for my setup.

The Macro

I built it as a macro, which makes all of this convenient to wrap up. The macro takes the connection information as a &rest parameter, so that I may specify several connections at once. Each one needs to be added to the list of connections. Then I'd like to automatically define an interactive function for each connection, using its name.

(defvar sql-connection-alist nil)

(defmacro sql-specify-connections (&rest connections)
  "Set the sql-connection-alist from CONNECTIONS.
Generates respective interactive functions to establish each
connection."
  `(progn
     ,@(mapcar (lambda (conn)
		 `(add-to-list 'sql-connection-alist ',conn))
	       connections)
     ,@(mapcar (lambda (conn)
		  (let* ((varname (car conn))
			 (fn-name (intern (format "sql-connect-to-%s" varname)))
			 (buf-name (format "*%s*" varname)))
		    `(defun ,fn-name ,'()
		       (interactive)
		       (sql-connect ',varname ,buf-name))))
		connections)))

This is just part of my general Emacs configuration.

Using the Macro

In my init.el I have a last step to optionally load some work-specific functionality. I added work.el to my .gitignore file so that I don't have to worry about committing any sensitive information.

(when (file-exists-p "~/.emacs.d/work.el")
  (load-file "~/.emacs.d/work.el"))

Then, inside my work.el file I use the macro:

(sql-specify-connections
 (mytest-pgsql-dev (sql-product 'postgres)
	     (sql-port 5432)
	     (sql-server "localhost")
	     (sql-user "postgres")
	     (sql-password "password")
	     (sql-database "myapp_development"))
 (mytest-pgsql-test (sql-product 'postgres)
	      (sql-port 5432)
	      (sql-server "localhost")
	      (sql-user "postgres")
	      (sql-password "password")
	      (sql-database "myapp_test")))

Expand the Macro

You can look at what this expands to by running M-x macrostep-expand, but I have included it here:

(progn
  (add-to-list 'sql-connection-alist
	       '(mytest-pgsql-dev
		 (sql-product 'postgres)
		 (sql-port 5432)
		 (sql-server "localhost")
		 (sql-user "postgres")
		 (sql-password "password")
		 (sql-database "myapp_development")))
  (add-to-list 'sql-connection-alist
	       '(mytest-pgsql-test
		 (sql-product 'postgres)
		 (sql-port 5432)
		 (sql-server "localhost")
		 (sql-user "postgres")
		 (sql-password "password")
		 (sql-database "myapp_test")))
  (defun sql-connect-to-mytest-pgsql-dev nil
    (interactive)
    (sql-connect 'mytest-pgsql-dev "*mytest-pgsql-dev*"))
  (defun sql-connect-to-mytest-pgsql-test nil
    (interactive)
    (sql-connect 'mytest-pgsql-test "*mytest-pgsql-test*")))

Now, as I'm bopping in and out of .sql files in my project, I can use these interactive functions to share a connection to a database.

Extra

While I'm talking about my postgres development setup, it'd be worth including a couple of lines from my .psqlrc file:

\set PROMPT1 '(%n@%M:%>) %`date +%H:%M:%S` [%/] \n%x%\n'
\set PROMPT2 ''

These lines get picked up when Emacs starts the psql process, and ensure that the client writes more useful values for prompts. This lets me make sure that the header row of my query results is properly aligned.

Another extra thing I use is C-c C-s to send things like \x to toggle vertical/horizontal display of query results.

]]>
<![CDATA[Reusability is a Myth]]>Some years ago I had a bit of a hot take that “reusability is a myth.” I said it out loud, in front of people. It got some laughs, and despite there being a lot more to the context in which it was said, I am often reminded

]]>
https://johncoder.com/reusability-is-a-myth/5fc94db76d8fe0003967d9e5Thu, 03 Dec 2020 20:51:01 GMTSome years ago I had a bit of a hot take that “reusability is a myth.” I said it out loud, in front of people. It got some laughs, and despite there being a lot more to the context in which it was said, I am often reminded of it. Occasionally I utter it, if nothing else, for a fresh burst of laughter.

To be clear, that's not necessarily to say that reusability is unattainable or idealistic; it's a punchy way of asserting that forethought is hard, and hindsight still requires keen vision. Neither is expedient, and both inhibit momentum. Regardless, as I falter at the helm of my editor, I find it helpful to remind myself to focus first on making something usable, let the patterns emerge naturally, and worry about them later.

]]>
<![CDATA[Considerations for HTTP Clients]]>Something we keep doing in this industry is writing our own API client packages/modules. We build a lot of backend services, and most would agree that it's worth extracting some kind of standard functionality for it. Then, all you have to do is take on that package

]]>
https://johncoder.com/considerations-for-http-clients/5e67a53f3e624200381d812fTue, 10 Mar 2020 14:46:39 GMTSomething we keep doing in this industry is writing our own API client packages/modules. We build a lot of backend services, and most would agree that it's worth extracting some kind of standard functionality for it. Then, all you have to do is take on that package as a dependency and hammer out some boilerplate code and you're well on your way to sprint satisfaction.

When the next sprint comes around, someone has opened a ticket about a possible bug in the HTTP client code related to a production incident. So, you pile on some story points and try to mentally and emotionally prepare yourself to backpedal a bit to see what's going on.

I know what you probably found. Some setting on the standard library HTTP client. A timeout, or a botched header. Unpooled connections. How about a missing retry? A regrettably ambitious retry? Am I right? Ask me how I know. Welp, I've been there a few times.

If I had a woefully incomplete, unsorted list of things to remember when you take a stab at writing an API client, it might look something like this:

  • Service Discovery
  • Load Balancing
  • Timeouts and Expirations
  • Retries
  • Rate limiting
  • Connection pooling
  • Circuit breaking
  • Failure detection
  • Metrics and tracing
  • Interrupts
  • Context propagation
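
Several of these concerns interact in practice. As a taste, here's a hedged sketch (plain Python, hypothetical names, no particular HTTP library) of retries with exponential backoff bounded by an overall deadline, which is one way to avoid the regrettably ambitious retry:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.1, deadline=2.0,
                      sleep=time.sleep, clock=time.monotonic):
    """Retry fn() with exponential backoff, bounded by a total deadline."""
    start = clock()
    last_error = None
    for attempt in range(attempts):
        if clock() - start > deadline:
            break  # total time budget exhausted; stop retrying
        try:
            return fn()
        except Exception as exc:
            last_error = exc
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
    if last_error is None:
        raise TimeoutError('deadline exhausted before the first attempt')
    raise last_error

# A call that fails twice, then succeeds:
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('transient failure')
    return 'ok'

call_with_retries(flaky, attempts=5, sleep=lambda s: None)  # -> 'ok'
```

Injecting sleep and clock keeps the wrapper testable, which is exactly the kind of property worth demanding when you review one of these clients.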

Feel free to break out this list when someone sends you a pull request to review where they're venturing into this familiar and error-prone territory.

]]>
<![CDATA[Curry in Python]]>I'm... not sure how I got here, but this is what I spent my evening doing. I have been using partial in python on occasion, but it's kind of annoying to have to be so explicit with it any time you want to use partial application.

]]>
https://johncoder.com/curry-in-python/5c92f0d7b0d6ce00c0992f99Thu, 21 Mar 2019 02:21:05 GMTI'm... not sure how I got here, but this is what I spent my evening doing. I have been using partial in python on occasion, but it's kind of annoying to have to be so explicit with it any time you want to use partial application.
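
To show what I mean, the explicit functools.partial dance looks something like this (contrived example):

```python
from functools import partial

def add_three(a, b, c):
    return a + b + c

# Every partial application has to be wrapped explicitly:
add_one = partial(add_three, 1)
add_one_two = partial(add_one, 2)
add_one_two(3)               # 6
partial(add_three, 1, 2)(3)  # 6 -- fine once, but noisy everywhere
```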

from functools import partial

def curry(fn):
    arity = fn.__code__.co_argcount
    def f(*fargs):
        if len(fargs) > arity:
            raise RuntimeError(
                'Received {} arguments, but {} takes {} arguments'.format(
                    len(fargs), fn, arity))
        def g(*gargs):
            # Keep accumulating arguments until we reach the arity
            argcount = len(fargs) + len(gargs)
            return (partial(f, *(fargs + gargs))
                    if argcount < arity
                    else f(*(fargs + gargs)))
        return g if len(fargs) < arity else fn(*fargs)
    return f

So I goofed around a bit until I came up with this. And, it works!

In case you don't know what currying is, it makes this possible:

def add_three(a, b, c):
  return a + b + c

add_three(1, 2, 3)  # 6
curry(add_three)(1, 2, 3)  # 6
curry(add_three)(1, 2)(3)  # 6
curry(add_three)(1)(2, 3)  # 6
curry(add_three)(1)(2)(3)  # 6
]]>
<![CDATA[Using SSH for Separate GitHub Accounts]]>I keep my emacs configuration in my personal GitHub account, but choose to use a separate account for my full time job. It's tricky when you want to use SSH for both, but it's definitely doable. Here are my quick notes on how to manage

]]>
https://johncoder.com/ssh-with-two-github-accounts/5c7fef0a343e8b00c0d3a165Wed, 06 Mar 2019 16:13:17 GMTI keep my emacs configuration in my personal GitHub account, but choose to use a separate account for my full time job. It's tricky when you want to use SSH for both, but it's definitely doable. Here are my quick notes on how to manage it.

Create a file: ~/.ssh/config

With something like this in it:

Host github-personal
     Hostname github.com
     IdentityFile ~/.ssh/github-personal
     User johncoder

Host github.com
     Hostname github.com
     IdentityFile ~/.ssh/github-fulltimejob
     User my-fulltimejob-github-user

As long as I've set up the separate SSH keys via ssh-add ~/.ssh/<key>, things will work just fine. The only trick is that I need to clone using the Host name:

git clone git@github-personal:johncoder/my-repo

I chose to use github.com for my work stuff, since it could get pretty confusing if I used a different host. My personal stuff isn't something I need frequently on my work machine.

]]>
<![CDATA[Musings of Performance Past]]>

I once worked in a bit of a dream scenario. My job was simple: render a web page. Lots of times. Generally this involved background processing and caching, which I found to be fun and full of surprises. The evolution of this solution is an interesting story, to be sure, and

]]>
https://johncoder.com/musings-of-performance-past/5ad694f4e1f27900224c6dffTue, 29 May 2018 01:01:42 GMT

I once worked in a bit of a dream scenario. My job was simple: render a web page. Lots of times. Generally this involved background processing and caching, which I found to be fun and full of surprises. The evolution of this solution is an interesting story, to be sure, and one I'm excited to tell. Many long hours were spent poring over log messages and building tools, agonizing over every detail. I'm proud of what was accomplished, and hope that others can appreciate the hard learned lessons and unabashed creativity involved.

Simple, First

The project was a week or two old by the time I was asked to join the efforts. It was a fresh start, from scratch, to build something that we could use to serve content at our large scale. My first commit probably involved fixing a couple of minor bugs in the script we ran via cron. Actually, it wasn't really even cron! It was a cron-like module contending with the event loop to schedule things. It worked for the modest start we had, but I had been down a road like this before.

As more data was added to the job, it ran longer, and eventually past the cadence set in cron. Before one finished, another began. Just like any good programmer confronted with an inconvenient timeout scenario, we extended it. Every thirty seconds became every forty-five, then sixty seconds. When would the madness end? I have experienced similar issues in my .NET days while using the stock timer that comes with the framework (sadly I've been both the creator and inheritor of these bastardizations). Being far from my first rodeo, I had a clear idea of what wouldn't work.

Aggregators

One of the first innovations happened when we realized what the data ingest was really trying to do. For any given page being built in the app, there were a number of distinct requests made to the news API. The requests were related by the page, meaning we could aggregate all of the requests into one single view model. Emergent patterns like this are the bread and butter of refinement.

Aggregators, as they would come to be known, provided a fixture that abstracted away all of the common code for making requests. From this we got a nice little package with a name, and fully decoupled from the rest of the application. It knew nothing about storing data or scheduling execution, only about how to export a key/value when everything was said and done.

Here's a basic example of how an aggregator was created:

const Aggregator = require('./Aggregator');
const aggregator = new Aggregator('some-clever-name');

const model = {};

aggregator.endpoints.push({
  url: 'http://www.timeapi.org/utc/now.json',
  complete: (data) => {
    model.time = data;
  },
});

aggregator.export('current_time', () => {
  return model.time;
});

module.exports = aggregator;

These aggregators were teeming with introspection. In many ways, you can think of the worker process as one big aggregator introspector. When building out an aggregator, you're providing declarative information, like what URLs you want to request. You're also exporting keys, which is how data is retrieved from the aggregators at any point in time. The worker process code is free to look at that data, allowing it to organize the workloads as it chooses. More on this later.

It is worthwhile to point out that the aggregators weren't exclusively declarative. You'll notice that the module closes over some state in the form of the variable model. The export should do nothing but return some kind of data. There are plenty of hooks for processing data at whatever point you need. Every time an aggregator is run it will execute the callback associated with a URL for a given endpoint. The callback can modify the model, and it only gets invoked if the request was successful. This is important because it provides a level of robustness for the worker process! Note that common sense should prevail while modifying this shared state: willy-nilly calls to Array.push() on the model's arrays are a recipe for disaster. That is essentially a memory leak, and should be avoided! You might wonder how I know that. I did it on a couple of occasions, and helped others avoid it through code reviews and some hard production fire drills.
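
A boiled-down sketch of that contract (hypothetical, not the real aggregator code): callbacks only run on success, and they should replace data rather than push onto it:

```javascript
// Long-lived model, closed over by the aggregator's callbacks.
const model = { headlines: [] };

function applyResponse(model, response) {
  // Failed requests never touch the model, so it keeps its last good state.
  if (!response.ok) return model;
  // Replace rather than push: pushing onto a long-lived array on every
  // run is the slow memory leak warned about above.
  model.headlines = response.data;
  return model;
}

applyResponse(model, { ok: true, data: ['story one', 'story two'] });
applyResponse(model, { ok: false });
console.log(model.headlines); // still ['story one', 'story two']
```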

Consuming APIs is anything but fully reliable. By closing over state, each time the aggregator ran it was able to start from the last known good state, so intermittent failures while consuming an endpoint left the model untouched. The rendering application enjoyed a long period of stability from this, though it was not without its problems. One major problem that manifested from this approach was the perception of reversions in the data model: the service provided the last known good version because some portion of the newest data was unavailable (API Gateway timeouts, error responses, etc.). I believe that this was a reconcilable flaw.

Over time we realized a need to make a second round of requests within aggregators. As it turns out, the aggregators could be leveraged to carry out these subsequent tasks. That is when I added an after function that could be specified for an endpoint, which takes a context (the child aggregator). You could even specify a complete function to execute after the aggregator was done (this always happened at the end of a run, and was guaranteed to finish before export functions were called).

The aggregators also had a built-in capability to measure the time it took to make each request and the status of the last response it got. At various points in the worker's heyday this information was critical for determining upstream issues in the news API and even in the worker itself. Sometimes these were quirky API gateway bugs, and in some cases they were traceable to document database replication errors.

Arguably the largest triumph of the aggregators was how much flexibility they provided while we tried different ways to iterate and refine our performance strategy.

Declarative Payoff

All aggregators begin their life with a name, which was useful for referring to specific groups of data or tasks in the worker. It allowed us to do things like specify a whitelist or blacklist of aggregator names for the worker to run. This was configurable from a command line parameter (aggregators=mainpage), or more commonly, the environment variable ENABLED_WORKER_AGGREGATORS.
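
A sketch of what that name-based filtering might have looked like (hypothetical function and sample names):

```javascript
// Decide which aggregator names to run based on the environment variable.
function enabledAggregators(env, allNames) {
  const requested = (env.ENABLED_WORKER_AGGREGATORS || '')
    .split(',')
    .map((name) => name.trim())
    .filter(Boolean);
  // No filter specified: run everything.
  if (requested.length === 0) return allNames;
  return allNames.filter((name) => requested.includes(name));
}

const names = ['mainpage', 'politics', 'sports'];
enabledAggregators({ ENABLED_WORKER_AGGREGATORS: 'mainpage,sports' }, names);
// -> ['mainpage', 'sports']
enabledAggregators({}, names); // -> all three
```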

The worker process also used this information to generate a dashboard website, which allowed you to explore the vast amounts of collected data. You could see each request that the worker was making, make the request through the app to see the current state of the endpoint, and also request the redis key to see the current state of the model. I can't express how truly useful this was to me over time.

Another angle into this was a development-time feature for the rendering app (available at ~/dashboard/refresh/ui) that allowed you to kick off a background process to run a specific aggregator. This was developed primarily for lightweight hosting providers in the absence of a dedicated, always-running worker.

Since the aggregators provided all this great information, we were able to leverage it to generate some amount of documentation, such as the keys for data that would ultimately be stored in redis, and the URLs that were made in order to generate the data for those keys. Time and again this documentation was helpful to communicate what data the rendering application was consuming from the news API.

Spikes

One time a spike was created to leverage the existing aggregators and map them to endpoints as their own API. It took fifteen minutes to spin up an express app and write some generalized code to create routes for each aggregator by its name. As a proof of concept it was able to demonstrate the high degree of flexibility that the aggregators possessed.

Another time I added a compression step to the persistence layer around aggregators, offering 60%-90% improvements over payloads transferred to and from redis. If I remember right, I read somewhere that StackOverflow used this technique to eke out some extra performance with a similar setup.

Cadence

Originally the worker was set up to run as a cron job. There was an npm package that emulated cron for the purposes of scheduling workloads. As the worker ramped up in the variety and amount of data it had to aggregate, cron became increasingly irrelevant and even problematic. Running all aggregators to completion a single time could take longer than the desired cron schedule. Choosing to skip the current cycle or cancel the previous one could result in extensive delays in updating the view models.

The next change was to just allow the process to run to completion, and then incur a throttle time. It would essentially sleep the worker for the configured duration. This worked out much nicer, especially when we had opportunities to spin up workers dedicated to a subset of aggregators. Specifically this was a great success while covering an election night.
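
The run-to-completion-plus-throttle shape is simple to sketch (hypothetical, stripped of error handling):

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run every aggregator to completion, then back off for a fixed duration,
// until told to stop. No overlapping cron-style runs.
async function workerLoop(runAllAggregators, throttleMs, shouldStop) {
  while (!shouldStop()) {
    await runAllAggregators(); // takes as long as it takes
    await sleep(throttleMs);   // then incur the throttle time
  }
}

// Example: three iterations with a tiny throttle.
let runs = 0;
workerLoop(async () => { runs += 1; }, 1, () => runs >= 3)
  .then(() => console.log(`ran ${runs} times`));
```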

A problem we experienced with the workers was that they would inexplicably hang for unbounded periods of time. Admittedly, this could probably be narrowed down to a workload too large to be efficiently coordinated in a single instance, and likely some blatantly poor coding around the asynchronous tasks. It could have been handled more effectively with strict timeouts and more robust handling of unexpected or unusual behavior in the service.

One attempt at solving this inexplicable hanging problem (spoiler alert: it solved nothing) was to use a heartbeat to detect when the worker ran over on its usual workload. The heartbeat module basically used a timeout duration, and expected to be notified with a "beat" or "tick" to indicate normal behavior. That is to say, the application would say "Hey heartbeat, checking in because I finished my work. About to start again." and the heartbeat would reply "Okay, see you next tick." In the absence of a timely tick, the heartbeat would just log a single exception and wait around to see if the latent tick would eventually happen. When it did, the heartbeat would happily return to normal function. It was simultaneously enlightening and annoying. I always intended to do something more sophisticated when the module tripped, but never got around to it. Many Splunk alerts were sent on behalf of this heartbeat!
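
The heartbeat was roughly this shape, shown here as a deterministic sketch with an injected clock instead of real timers (names are hypothetical):

```javascript
class Heartbeat {
  constructor(timeoutMs, now) {
    this.timeoutMs = timeoutMs;
    this.now = now;          // injected clock, e.g. () => Date.now()
    this.lastTick = now();
    this.tripped = false;
  }
  // "Hey heartbeat, checking in because I finished my work."
  tick() {
    this.lastTick = this.now();
    this.tripped = false;    // a latent tick returns things to normal
  }
  // Called periodically; logs once when the worker runs long, then waits.
  check() {
    if (!this.tripped && this.now() - this.lastTick > this.timeoutMs) {
      this.tripped = true;
      console.error('heartbeat: no tick within timeout');
    }
    return this.tripped;
  }
}

let t = 0;
const hb = new Heartbeat(100, () => t);
t = 50;  hb.check(); // false -- still within the timeout
t = 200; hb.check(); // true -- logs the single exception
hb.tick();           // latent tick arrives...
hb.check();          // ...back to normal
```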

Rather than be a brainless scraper, the worker should have just become event driven. I later learned that there were only a few thousand changes throughout the day that would necessitate updates to our view models. The aggregation could still happen, but rather than attempt to be scheduled they could be initiated via editorial events. It would have been a significant reduction in the total amount of work needed.

Worker as a Dev Tool

Rather than demand that a developer use redis in their local environment, the worker could write to static JSON files. This made the designer's life much easier: the files were readily available for modifying the data used to render. This continued for basically the full lifetime of the worker. Eventually it was leveled up by creating a development proxy service devoted to this abstraction.

Analyzing Performance

I would be remiss not to mention how we analyzed the performance and behavior of the worker through Splunk. The worker wrote some good information out to its logs (mind you, this was pre-JSON-formatted logs). As aggregators ran, the worker reported when each one finished, its name, and how long it took. It also logged "render complete" entries with the total time it took the worker to finish one full iteration. By charting this over time, we were able to identify periods of low performance, and even correlate them to upstream problems caused by document database replication and the news API. It was essential to establishing an accurate picture of what normal was for the worker.

Fate

The worker was eventually retired in favor of a more on-demand API transformation layer. It, too, was an interesting project, but that's probably a story for another time. If I could do it all over again, I'd start with the event driven approach, but I would explore opportunities to adapt the aggregators of this worker to see how well they would work. Rather than making a request in the aggregator, the event data would be passed to it to act on, and continue to use its notion of exported keys to persist the view model in some data store such as redis.

]]>
<![CDATA[Next Big Adventure]]>

This has been a busy year! My last job gave me a crash course in management, which was a huge opportunity to learn about working with people and how to help grow developer culture. With all of the new things going on, I kind of lost track of what truly

]]>
https://johncoder.com/next-big-adventure-2017/5a84592a0cf541001825c9deFri, 29 Sep 2017 13:06:49 GMT

This has been a busy year! My last job gave me a crash course in management, which was a huge opportunity to learn about working with people and how to help grow developer culture. With all of the new things going on, I kind of lost track of what truly motivates me, which is building things.

I have stumbled upon a new opportunity to resume working remotely, but this time for a startup with a handful of familiar faces. It'll be the first time I have ever accepted a job working in Python! Admittedly, I am starting to care less and less about which technologies I'm using. This mindset makes it easy for me to dive in and learn as much as I can to start making contributions.

Hopefully the following months will not only give me a great learning opportunity professionally, but also open some doors for me to continue my own personal studies. If I'm lucky, maybe this could result in an uptick in blogging activity (optimistic even after all these years of minimal dedication to blogging).

]]>
<![CDATA[New Challenges Await]]>

Friday marks my last day working at nbcnews.com. I've gotten to do a lot of great things, from throwing down with RavenDB to hopping on some serious Node.js bandwagons. I've even migrated to a more Linux-based ecosystem, which has been great.

Over the past

]]>
https://johncoder.com/new-challenges-await/5a84592a0cf541001825c9ddWed, 11 Jan 2017 17:36:18 GMT

Friday marks my last day working at nbcnews.com. I've gotten to do a lot of great things, from throwing down with RavenDB to hopping on some serious Node.js bandwagons. I've even migrated to a more Linux-based ecosystem, which has been great.

Over the past four years I've worked from home and this job grew with me. I got married, moved back to my home town, bought a house, and had two wonderful kids. I did all of this while getting to wear sweatpants most days. I'm lucky to have had that kind of opportunity. Time to go back to wearing real people clothing and commute my 8 minutes to work every day.

Monday I start at Libera as a manager of software engineering. There I'll have a fresh start, and an opportunity to tackle new and challenging problems. Hopefully I'll bring with me some of the elements of the working culture I've grown to love.

Wish me luck!

]]>
<![CDATA[Practicing Functional Programming: Logic]]>

Most of what I want to write about today can be found in Chapter 3 of An Introduction to Functional Programming Through Lambda Calculus, by Greg Michaelson. However, I'll be working through it with JavaScript to try and give it some tangible application.

Disclaimer: this isn't

]]>
https://johncoder.com/practicing-functional-programming-logic/5a84592a0cf541001825c9d9Tue, 18 Oct 2016 14:23:55 GMT

Most of what I want to write about today can be found in Chapter 3 of An Introduction to Functional Programming Through Lambda Calculus, by Greg Michaelson. However, I'll be working through it with JavaScript to try and give it some tangible application.

Disclaimer: this isn't really a post for teaching you; it's a post for me to review what I've learned. If you see that I've made some sort of error, ping me on twitter.

Conditions

To build a condition, you first need a basis for negotiating one value over the other. In my last post about vectors, I accomplished a similar task. Here's how I represented it in that post:

λx.λy.λf.((f x) y)

As it turns out, a two dimensional vector has a lot in common with a conditional. If we rewrite it a bit (taken from the book):

def cond = λexpression1.λexpression2.λcond.((cond expression1) expression2)

NOTE: bound variables are at work here. The first occurrence of cond (before the =) is a shortcut name for the whole expression. The second occurrence of cond (next to the λ) binds a value to the name inside the function body.

The first two functions take the expressions that we hope to operate on, leaving us with:

λcond.((cond expression1) expression2)

Let's examine this function for a moment and try to suss out what kind of expression would be suitable input (cond) for this expression. cond is applied to expression1, and the result of that is applied to expression2:

λexpression1.λexpression2.(?????)

This seems like a solid start. It is quite close to the ternary operator (commonly ?: in C-style languages). Given a boolean value, choose the left expression or the right expression. So the body of our cond will have to negotiate one value over the other. The ternary operator is a great source of inspiration, with many similarities to how I represented choosing a vector's X or Y value. Given two options, we can arbitrarily return one of them. From this, we can extract a definition for true and false:

def true = λexpression1.λexpression2.expression1
def false = λexpression1.λexpression2.expression2

So let's put this together in JavaScript:

const TRUE = x => y => x;
const FALSE = x => y => y;
const COND = x => y => cond => cond(x)(y);

And here's an example of how you'd be able to use this:

COND(1)(2)(TRUE); // returns 1
COND(1)(2)(FALSE); // returns 2

Given a selector, we can choose one expression over another.

NOT Operation

It may seem trivial, but we can learn a lot about how to implement the NOT operation through a truth table.

X     | RESULT
------|-------
TRUE  | FALSE
FALSE | TRUE

We can see that this is essentially a conditional with its branches swapped: X ? FALSE : TRUE. It's a tad strange because we use cond to do the opposite of what it would normally do. Let's take a look at something kind of redundant:

((cond true) false)

This behaves just like an identity function for booleans, right? Apply it to true or false and you get the same value out.

(((cond true) false) true) => true
(((cond true) false) false) => false

In JavaScript you might execute these statements:

COND(TRUE)(FALSE)(TRUE)(1)(2)  // returns 1
COND(TRUE)(FALSE)(FALSE)(1)(2) // returns 2

So if we want to define not, it'll just be the opposite:

def not = λx.(((cond false) true) x)

To depart from an accurate deconstruction of this expression for a moment, I'd like to share my simplified understanding. We can examine a portion of this function body, ((cond false) true). This leaves us with a simple function that takes a boolean input and returns the opposite: if the input is true, it returns false, and vice versa. If you're still with me, I'm about to make a leap to simplify this definition a bit:

def not = ((cond false) true)

Based on our definition for cond above, we know that there are three functions involved. In this simplified definition for not we are applying the first two lambda expressions, which returns a function. This kind of refactoring/simplification is known as eta (η) reduction.

Now let's whip up some JavaScript to bring it all together:

const NOT = x => COND(FALSE)(TRUE)(x);

Pretty straightforward, I think. However, the execution might throw you a curveball. Here's an example of using it:

NOT(FALSE)       // returns a function.... wat?
NOT(FALSE)(1)    // still returns a function... wat?
NOT(FALSE)(1)(2) // returns 1!
NOT(TRUE)(1)(2)  // returns 2!

Okay, so the trick is that you're selecting an expression, not a value. Our definitions for TRUE and FALSE are selectors given two values. NOT is going to return a selector for the opposite value. Am I helping you understand this yet? The Node.js REPL is a great place for trying this stuff out.
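If the selector behavior is still hard to see, one helper makes it easier to inspect in the REPL. This toBool helper is my own addition for illustration, not something from the book:

```javascript
const TRUE = x => y => x;
const FALSE = x => y => y;
const COND = x => y => cond => cond(x)(y);
const NOT = x => COND(FALSE)(TRUE)(x);

// A Church boolean is a selector, so applying it to the native values
// true and false picks out the matching JavaScript boolean.
const toBool = b => b(true)(false);

console.log(toBool(TRUE));       // prints true
console.log(toBool(NOT(TRUE)));  // prints false
console.log(toBool(NOT(FALSE))); // prints true
```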

AND Operation

Once again, we'll start by examining a truth table. The thing I found surprising was just how literally the truth table laid out the implementations for these logical operations.

X     | Y     | RESULT
------|-------|------
FALSE | FALSE | FALSE
FALSE | TRUE  | FALSE
TRUE  | FALSE | FALSE
TRUE  | TRUE  | TRUE

When reading this truth table, consider how you'll capture each result in a λ function. By composing our λ functions we'll end up with an expression that perfectly represents this truth table. If you think back to short-circuiting for a moment, you'll remember that the AND operation is only true if both values are true. That means half of the time we know the result just by examining the first expression alone.

def and = λx.λy.(((cond y) false) x)

This looks like a tangled mess, but bear with me. The Y value determines the value of the expression when X is true, and when X is false, we know the whole expression is false. We can simplify the expression a bit by walking through a substitution.

// here's our expression
((and true) false) ==
// replace "and" with its definition above
((λx.λy.(((cond y) false) x) true) false) =>
// substitute true for the first input of "and" (λx)
(λy.(((cond y) false) true) false) =>
// substitute false for the second input of "and" (λy)
(((cond false) false) true) =>
// evaluate cond (true ? false : false)
false

Let's try it again with different input.

// here's our expression
((and true) true) ==
// replace "and" with its definition above
((λx.λy.(((cond y) false) x) true) true) =>
// substitute true for the first input of "and" (λx)
(λy.(((cond y) false) true) true) =>
// substitute true for the second input of "and" (λy)
(((cond true) false) true) =>
// evaluate cond (true ? true : false)
true

Hopefully this helps clarify how this gets evaluated. The AND operation is a tad trickier than NOT. Let's take a look at some JavaScript examples.

const AND = x => y => COND(y)(FALSE)(x);

With this definition we can try and evaluate the above examples. Remember that you'll be getting back a function!

AND(FALSE)(FALSE)       // returns a function!
AND(FALSE)(FALSE)(1)(2) // returns 2
AND(FALSE)(TRUE)(1)(2)  // returns 2
AND(TRUE)(FALSE)(1)(2)  // returns 2
AND(TRUE)(TRUE)(1)(2)   // returns 1

OR Operation

Now we're starting to cruise. Again let's start with the truth table.

X     | Y     | RESULT
------|-------|------
FALSE | FALSE | FALSE
FALSE | TRUE  | TRUE
TRUE  | FALSE | TRUE
TRUE  | TRUE  | TRUE

This table has two real cases. Either the first value is true, and we know the whole expression is true, or the first value is false and the value of the entire expression is equal to the second value. Given that, we can figure out the definition:

def or = λx.λy.(((cond true) y) x)

I'll leave the step-by-step evaluation for you to do. Instead I'll get right to the JavaScript version:

const OR = x => y => COND(TRUE)(y)(x);

And here's some examples of calling it:

OR(FALSE)(FALSE)       // returns a function!
OR(FALSE)(FALSE)(1)(2) // returns 2
OR(FALSE)(TRUE)(1)(2)  // returns 1
OR(TRUE)(FALSE)(1)(2)  // returns 1
OR(TRUE)(TRUE)(1)(2)   // returns 1

Examples

Now that we have some JavaScript and functions for doing logic, let's try something a bit more complicated:

const COND = x => y => cond => cond(x)(y);
const TRUE = x => y => x;
const FALSE = x => y => y;
const NOT = x => COND(FALSE)(TRUE)(x);
const AND = x => y => COND(y)(FALSE)(x);
const OR = x => y => COND(TRUE)(y)(x);

// EXAMPLE 1:
console.log(
  COND(
    NOT(FALSE)
  )(
    NOT(TRUE)
  )(
    FALSE
  )
  (1)(2)
); // prints 2

// EXAMPLE 2:
console.log(
  COND(
    AND(
      NOT(FALSE)
    )(
      OR(
        NOT(TRUE)
      )(
        NOT(FALSE)
      )
    )
  )(
    OR(AND(TRUE)(TRUE))(TRUE)
  )(
    TRUE
  )(1)(2)
); // prints 1

Conclusion

I hope that this has been helpful for you. I find it useful to try and apply this information by writing code. It helps to solidify my understanding. Chapter 3 has more information about representing natural numbers. I understood it pretty well, but I think I will write about it soon so that I can really grok it.

]]>
<![CDATA[Practicing Functional Programming: Vectors]]>

I've been on a bit of a functional programming kick lately, and it's all starting to click. I came across some vector math for character movement in games, and it seemed like a nice exercise to try out with a purely functional approach. Just because it's

]]>
https://johncoder.com/practicing-functional-programming-vectors/5a84592a0cf541001825c9d4Wed, 12 Oct 2016 14:25:55 GMT

I've been on a bit of a functional programming kick lately, and it's all starting to click. I came across some vector math for character movement in games, and it seemed like a nice exercise to try out with a purely functional approach. Just because it's a commonly known language, I'm going to use JavaScript for this example.

const assert = require('assert');

const vector = x => y => f => f(x)(y);
const vectorX = x => y => x;
const vectorY = x => y => y;
const addScalar = x => y => x + y;

const addVector = v1 => v2 => vector
(
  addScalar(v1(vectorX))(v2(vectorX))
)
(
  addScalar(v1(vectorY))(v2(vectorY))
);

describe('vectors', () => {
  it('adds two vectors', () => {
    const result = addVector(vector(1)(0))(vector(2)(3));
    assert.equal(result(vectorX), 3);
    assert.equal(result(vectorY), 3);
  });
});

The hardest part was coming up with the function for representing a vector:

const vector = x => y => f => f(x)(y);

Compare that to the lambda calculus version:

λx.λy.λf.((f x) y)

If we imagine that we are taking a vector with x = 1 and y = 0, we'd call it like this:

vector(1)(0);

This gives us:

f => f(x)(y);

Or, if we make the equivalent simplification in the lambda calculus version, we get:

λf.((f x) y)

So the representation of a vector is accomplished by binding values for x and y, and applying another function to those parameters.

For funzies, let's suppose we want to feed vector to itself:

vector(1)(0)(vector);

This produces the same result! That has more to do with how the function is composed, but it's thought provoking nonetheless.

The next challenge is coming up with a representation for getting either the x or the y value of a vector. Now that we've seen that the vector function can apply a function to its x and y values, respectively, we can start there. Given vector(1)(0), we have a function that can apply another function to the 1 and 0. Our representation should be able to take both values so that we can pass it to the vector. Then, we already know what value we expect to get out of it:

const vectorX = x => y => x;
const vectorY = x => y => y;

Or, here is the lambda calculus version:

λx.λy.x
λx.λy.y

Given two arguments, arbitrarily return one of them. It's as simple as that. If we apply a vector to these functions we get the correct results:

vector(1)(0)(vectorX) // returns 1
vector(1)(0)(vectorY) // returns 0

Next up is defining addition for vectors. A vector has two values, which can be individually treated as scalar values. In order to add two vectors together, we add the individual x's and y's from both vectors together to produce a new vector.

const addScalar = x => y => x + y;

Now, this isn't necessarily "pure" because it's just using the addition operator in JavaScript. To be more pure, we'd have to define a function that could be called like this:

add(x)(y)

Which would return the new value. Essentially, addScalar is called the same way, but is not defined in terms of functions. For the sake of simplicity, we can just save that for another exercise.
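For the curious, here's roughly what that exercise looks like. This is a sketch of the standard Church numeral encodings from lambda calculus, not something used elsewhere in this post:

```javascript
// A Church numeral n is a function that applies f to x n times.
const zero = f => x => x;
const succ = n => f => x => f(n(f)(x));
const add = m => n => f => x => m(f)(n(f)(x));

// Collapse a numeral back into a JavaScript number for inspection.
const toNumber = n => n(v => v + 1)(0);

const one = succ(zero);
const two = succ(one);

console.log(toNumber(add(two)(one))); // prints 3
```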

Now that we can effectively add two numbers together we can make the next leap to defining a function that adds two vectors together. Here's how we add the x values of two vectors (v1 and v2):

addScalar(v1(vectorX))(v2(vectorX))

And here's how we'd add the y values of these two vectors:

addScalar(v1(vectorY))(v2(vectorY))

Now we can create a new vector from these two values, giving us the final function:

const addVector = v1 => v2 => vector
(
  addScalar(v1(vectorX))(v2(vectorX))
)
(
  addScalar(v1(vectorY))(v2(vectorY))
);

I hope that this is helpful in some way to others that are just starting to cut their teeth in functional programming.

]]>
<![CDATA[Finding Missing Handlebars Views]]>

Just a quick post about a solution I thought up to finding missing handlebars views in my project.

grep -oEr "{{> [-a-zA-Z0-9\/_]+" ./app/server/views | grep -oE "[-a-zA-Z0-9\/_]+$" | sort | uniq | while read -r line ; do
    if [ ! -e "./app/server/views/partials/$line.hbs" ]
    then
]]>
https://johncoder.com/finding-missing-handlebars-views/5a84592a0cf541001825c9d1Wed, 10 Aug 2016 15:13:58 GMT

Just a quick post about a solution I thought up to finding missing handlebars views in my project.

grep -oEr "{{> [-a-zA-Z0-9\/_]+" ./app/server/views | grep -oE "[-a-zA-Z0-9\/_]+$" | sort | uniq | while read -r line ; do
    if [ ! -e "./app/server/views/partials/$line.hbs" ]
    then
        echo "./app/server/views/partials/$line.hbs"
    fi
done

This greps for the {{> stem, then re-greps each match for the path part. It sorts the results and drops duplicates. For each path, it checks whether the corresponding partial file exists; if it doesn't, it prints the missing path. I'm sure a more elegant solution exists.

That is all, carry on with your lives.

]]>
<![CDATA[Colorful Comment Markers in Emacs]]>

Earlier this year I switched to a (mostly) monochromatic theme for coding in emacs. It has been a great experiment so far. I even tried switching back to a colorful theme the other day, and it was so distracting that I ended up switching back. Even though I'm

]]>
https://johncoder.com/colorful-comment-markers-in-emacs/5a84592a0cf541001825c9ceWed, 06 Jul 2016 13:23:08 GMT

Earlier this year I switched to a (mostly) monochromatic theme for coding in emacs. It has been a great experiment so far. I even tried switching back to a colorful theme the other day, and it was so distracting that I ended up returning to monochrome. Even though I'm not using much syntax highlighting, I quickly made an exception for commented code. That had to be a darker color so that my eyes could quickly distinguish between code and comments.

I don't do a whole lot of commenting, but I tend to leave a bunch of TODO markers to circle back to later. Occasionally I will leave a NOTE or QUESTION marker, or the exceedingly rare SEE <hyperlink>.

I've decided to make an exception to my monochromatic theme. Comment markers are more helpful if they jump out at me, so I'm choosing to colorize them.

;; Colorful Markers
(setq fixme-modes '(c++-mode c-mode emacs-lisp-mode js2-mode go-mode))
(make-face 'font-lock-fixme-face)
(make-face 'font-lock-study-face)
(make-face 'font-lock-important-face)
(make-face 'font-lock-question-face)
(make-face 'font-lock-note-face)
(make-face 'font-lock-see-face)
(mapc (lambda (mode)
        (font-lock-add-keywords
         mode
         '(("\\<\\(TODO\\)" 1 'font-lock-fixme-face t)
           ("\\<\\(STUDY\\)" 1 'font-lock-study-face t)
           ("\\<\\(IMPORTANT\\)" 1 'font-lock-important-face t)
           ("\\<\\(QUESTION\\)" 1 'font-lock-question-face t)
           ("\\<\\(SEE\\)" 1 'font-lock-see-face t)
           ("\\<\\(NOTE\\)" 1 'font-lock-note-face t))))
      fixme-modes)
(modify-face 'font-lock-fixme-face "#D64C2A" nil nil t nil t nil nil)
(modify-face 'font-lock-study-face "Yellow" nil nil t nil t nil nil)
(modify-face 'font-lock-important-face "Yellow" nil nil t nil t nil nil)
(modify-face 'font-lock-question-face "#ffa500" nil nil t nil t nil nil)
(modify-face 'font-lock-see-face "#88C9F0" nil nil t nil t nil nil)
(modify-face 'font-lock-note-face "#8ABB93" nil nil t nil t nil nil)

I started with Casey Muratori's code from his .emacs file in Handmade Hero, and added a little bit to it.

]]>
<![CDATA[A Toy Leveled Cache in Node.js]]>

I wanted to try an experiment around improving caching performance in Node.js while using redis. One of the ideas I came up with was to build a leveled cache that can work primarily with something in-memory (like eidetic), falling back to a secondary cache like redis. I really wanted

]]>
https://johncoder.com/a-toy-leveled-cache-in-node-js/5a84592a0cf541001825c9cbWed, 20 Apr 2016 13:19:04 GMT

I wanted to try an experiment around improving caching performance in Node.js while using redis. One of the ideas I came up with was to build a leveled cache that can work primarily with something in-memory (like eidetic), falling back to a secondary cache like redis. I really wanted to support some sort of compression for the secondary cache to improve the payloads being sent over the network. The toy had some promising results in my early experiments. The payloads being sent over the network were often megabytes in size, so a ~60-90% improvement in size resulted in less I/O time.

DISCLAIMER: This isn't production ready. You have been warned!

var zlib = require('zlib');
var Buffer = require('buffer').Buffer;

function defaultCalculationForL2CacheSecondTtl(ttl) {
  return ttl * 1.15;
}

function LCache(l1Cache, l2Cache, calculateL2CacheSeconds) {
  if (!l1Cache || !l2Cache) {
    throw new Error('LCache initialized without cache');
  }
  this.L1Cache = l1Cache;
  this.L2Cache = l2Cache;
  this.calculateL2CacheSeconds = calculateL2CacheSeconds || defaultCalculationForL2CacheSecondTtl;
}

LCache.prototype.updateL1Cache = function lCacheUpdateL1Cache(key, value, callback) {
  var self = this;
  self.L1Cache.set(key, value, function updateL1Cache(updateL1CacheError) {
    if (updateL1CacheError) {
      callback(updateL1CacheError, null);
      return;
    }

    self.L2Cache.ttl(key, function l2CacheTtl(l2CacheTtlError, l2Ttl) {
      self.L1Cache.expire(key, l2Ttl || 30, function l1CacheExpire(l1CacheExpireError) {
        callback(l1CacheExpireError, value);
      });
    });
  });
};

LCache.prototype.fallbackGet = function lCacheFallbackGet(key, callback) {
  var self = this;
  self.L2Cache.get(key, function l2CacheGet(l2CacheGetError, l2Value){
    if (!l2Value) {
      callback(l2CacheGetError, null);
      return;
    }

    zlib.gunzip(new Buffer(l2Value, 'binary'), function (gunzipError, payload) {
      if (gunzipError || !payload) {
        callback(gunzipError || new Error('no payload'), null);
        return;
      }
      var l2ValueUnzipped = payload.toString();
      self.updateL1Cache(key, l2ValueUnzipped, callback);
    });
  });
};

LCache.prototype.get = function lCacheGet(key, callback) {
  var self = this;
  this.L1Cache.get(key, function l1CacheGet(cacheGetError, l1Value) {
    if (l1Value) {
      callback(cacheGetError, l1Value);
      return;
    }

    self.fallbackGet(key, callback);
  });
};

LCache.prototype.set = function lCacheSet(key, value, callback) {
  var self = this;
  if (!value) {
    setImmediate(callback, new Error('Cannot cache empty value'));
    return;
  }
  this.L1Cache.set(key, value, function l1CacheSet(l1CacheSetError) {
    zlib.gzip(value, function (gzipError, gzippedOutput) {
      if (gzipError) {
        callback(gzipError || l1CacheSetError);
        return;
      }
      self.L2Cache.set(key, gzippedOutput.toString('binary'), function l2CacheSet(l2CacheSetError) {
        //var improvement = ((value.length - gzippedOutput.length)/value.length)*100;
        //console.log(' pre-zip length: ' + value.length);
        //console.log('post-zip length: ' + gzippedOutput.length);
        //console.log('    improvement: ' + improvement.toFixed(2) + '%');
        callback(l2CacheSetError || l1CacheSetError);
      });
    });
  });
};

LCache.prototype.expire = function lCacheExpire(key, ttl, callback) {
  var self = this;
  this.L1Cache.expire(key, ttl, function l1CacheExpire(l1CacheExpireError) {
    var l2Cachettl = self.calculateL2CacheSeconds(ttl);
    self.L2Cache.expire(key, l2Cachettl, function l2CacheExpire(l2CacheExpireError) {
      callback(l2CacheExpireError || l1CacheExpireError);
    });
  });
};

module.exports = LCache;
]]>
<![CDATA[On Empowerment Through Declarative Programming]]>

Lately I've been pairing with my good friend, Caleb Gossler, on some atypical Node.js at work. To be honest it's not the most interesting work to talk about. Rather than go into the bland details I thought I'd extrapolate the valuable part of

]]>
https://johncoder.com/on-empowerment-through-declarative-programming/5a84592a0cf541001825c9c8Tue, 12 Apr 2016 17:28:54 GMT

Lately I've been pairing with my good friend, Caleb Gossler, on some atypical Node.js at work. To be honest it's not the most interesting work to talk about. Rather than go into the bland details I thought I'd extrapolate the valuable part of the experience.

We have been taking a more declarative approach to some of the node code we are writing. It merely describes, through functions, how we would like to consume an API. We are setting goals for how to get all of the data we need. Being declarative affords us a lot of flexibility with how the mechanical parts of the application work. We can deal with HTTP caching semantics under the hood, and even isolate mucky setup code. It also allows us to program what amounts to a bunch of separate HTTP requests somewhat procedurally (it's just a single function at the end of the day).

One of the side effects of this is that we ended up with a bevy of simple functions that are related in some way. When you open up a file and take a look, it can be a little daunting. You tend to acclimate to the approach fairly quickly, but the disorientation part will come up again when you are trying to figure out what the application is doing at runtime. Abstractions come with a cost, and it is in your best interest to precompute the costs and find margins for maximizing your savings.

All of these benefits, so far, seem well worth the confusion. Suppose we can deal with the confusion. We'd just be left with up side, no? Well I figured out a way to deal with a portion of the confusion! Don't mind the stacked argument...

My solution, for the purposes of this post, will be somewhat abstract. I'm more interested in why the solution feels like a good fit. Bear with me.

I talked about a bevy of simple functions above. To elaborate a bit, the functions are simplistic. They take a single parameter, and return nothing. The parameter is an object with functions that let you describe how the current function is related to other functions. If you're thirsty for more, I'm telling you: it's really not that interesting. Okay, so you want to see an overly contrived, unhelpful example that badly. Here you go:

function wiggle(x) {
  x.qaz('jiggle', {}, jiggle, (jiggler) => jiggler.wiggled === true);
  x.default({});
}

function jiggle(x) {
  x.qaz('beep', {}, f => f.default('boop'));
  x.qaz('wiggle', {}, wiggle, (wiggler) => wiggler.jiggled === true);
  x.default({});
}

function foo(bar) {
  bar.qaz('wiggle', {}, wiggle);
  bar.qaz('jiggle', {}, jiggle);
  bar.qaz('free', {}, free => free.default('can\'t catch me!'));
}

See? I told you it wasn't interesting. However, maybe you can see my point about the levels of indirection at play.

Simplistic functions like these are the bread and butter of this abstraction. Since it is declarative, we're not attempting to hijack a callback to do any hardcore procedural or object oriented programming. We are merely using the functions to convey intent, and intent is a helpful starting place for deciding what work you need to actually do or even how to do it.

Imagine that bar in the example above is some context object that translates obscure function calls to HTTP requests. That's how the approach started. However, as I looked at the code more and more, I started to see a pattern emerge. The functions form a hierarchy, which stands to be more complex than our simplistic functions let on.

The solution seemed simple: I could create a new context to pass to these functions, and collect a structured representation of the intent. That way we could capture the essence of what the code was going to be doing. I dubbed this our execution plan. Programming languages are interpreted in a similar way: you describe your intent by writing the code. The compiler parses it and builds an abstract syntax tree, and proceeds to optimize it and perform code generation.

There are a couple of concrete details that are interesting. First, this is Node.js, so you can do things like call .toString() on a function. And so I did. Then I parse for things like function names, parameter names, and even entire function bodies (predicates). They get woven into the structured data to give you meaningful context. I don't actually care much for this part, but I have some ideas in mind for getting around it (maybe more on that another time).

Second, since there is nothing stopping circular function calls in this abstraction, my solution also needed to take this into account (thanks to Caleb for finding this with a real and complicated example). My approach is to walk up the structure looking at the call stack to see if the function finds another occasion where it was already called. If it finds itself, it will flatten the path and indicate that the next sequence of functions is circular, and also detailing which functions are in the sequence.
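To make the plan-collecting and cycle-flagging ideas a bit more concrete, here's a minimal sketch. collectPlan and its node shape are invented for illustration and are not the real implementation; it reuses the contrived functions from above.

```javascript
// A sketch of a "plan collector" context: instead of issuing HTTP
// requests, qaz() records each declared goal into a tree, and a
// seen-list flags circular sequences instead of recursing forever.
function collectPlan(fn, seen) {
  seen = seen || [];
  var node = { name: fn.name, goals: [], circular: seen.indexOf(fn) !== -1 };
  if (node.circular) {
    return node; // stop here and flag the cycle
  }
  fn({
    qaz: function (goal, options, next) {
      node.goals.push(typeof next === 'function'
        ? { goal: goal, plan: collectPlan(next, seen.concat([fn])) }
        : { goal: goal });
    },
    default: function () {} // defaults declare no further goals
  });
  return node;
}

// The contrived functions from earlier in the post:
function wiggle(x) {
  x.qaz('jiggle', {}, jiggle, (jiggler) => jiggler.wiggled === true);
  x.default({});
}

function jiggle(x) {
  x.qaz('beep', {}, f => f.default('boop'));
  x.qaz('wiggle', {}, wiggle, (wiggler) => wiggler.jiggled === true);
  x.default({});
}

function foo(bar) {
  bar.qaz('wiggle', {}, wiggle);
  bar.qaz('jiggle', {}, jiggle);
  bar.qaz('free', {}, free => free.default('can\'t catch me!'));
}

var plan = collectPlan(foo);
console.log(JSON.stringify(plan, null, 2));
```

Printing the plan shows the wiggle/jiggle branches bottoming out in nodes flagged circular, rather than looping.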

The result is a set of tooling that gives us some deep introspection into what is going on at development time and at runtime. Not only that, but we now have a more sophisticated structure that we can use to optimize the mechanical code that is doing all of the hard work. Imagine that we are using a distributed cache for HTTP caching, and this structure allows us to batch up our caching logic to decide which HTTP requests are stale and need to be reissued. This is also a convenient way to capture key performance metrics per logical grouping of requests. We can follow the execution plan, capturing cache hits and request/response times, and even errors. What if we just generate execution plans as a form of documentation? Or, if we think back to the abstract syntax tree thing for a moment, what if we could derive a simple DSL for these goals? Let the possibilities soak in for a moment.

Oh yeah, and it's declarative. That's why we could do this in the first place. Caleb came up with this declarative approach, and it has been pretty great. Being able to easily add value to it is reassuring for my brain. The only thing really missing at this point is being able to do it as efficiently as you could in lisp.

]]>