Jekyll feed — https://www.fredrikmeyer.net/feed.xml — Fredrik Meyer — “Mathematics is fun.” (generated 2026-04-27T06:07:02+00:00)

AI is like TV
2026-04-09 — https://www.fredrikmeyer.net/2026/04/09/ai-is-like-tv

AI is like TV (or books before that). It started some years ago with asking ChatGPT about code constructs, then suddenly we were here: running agents in several terminals, producing more code than before, the programmer reduced to someone who just presses “accept” / “reject” / “think harder”.

What happened to challenging side projects? (just let an agent do it, but where’s the fun in that!)

Socrates thought writing would weaken our minds (because people would read and retrieve information without understanding it). When television came, people said it would dull our minds. Today it is not hard to predict the same thing about our use of AI.

A programmer functions as an intermediary between business people and hardware, by writing software. A programmer’s job is to “deliver code you have proven to work” (in the words of Simon Willison). Our job is not to write code, but to deliver code. Or to be even clearer: our job is to deliver systems that function well1 (systems that someone is willing to pay for).

To stay useful in the “knowledge sector”, one must produce and learn at the same time. Sometimes these forces are not compatible. You will produce more code if you skip testing, but in the long run your project will stall. Pure vibe coding will get you very far, but as systems tend to grow in complexity, one needs more and more resources to keep developing.

My Norwegian teacher used to say books should be a struggle to read. Books should fight against you. At the time I couldn’t possibly see what the point of willfully fighting a book was, but there is value in opposition. Hard books often end up being the most rewarding books.

How do programmers learn? By experiencing, the hard way, that a given choice was a bad one, that tests are helpful, that breaking schemas is bad, that nullability is a pain, that any type of migration should be done in several steps, that logging should be done before something happens (rather than after), that more code is bad2.

When programming with AI, code flows on the screen, maybe you read it, maybe you skim it, but you did not write it. Programming suddenly feels more like scrolling Instagram than a craft.

Claude wrote plenty of tests, let me skim through them…

As AI writes more of the codebase, cognitive debt takes over from technical debt. As long as we still care about the codebase, we must spend more effort, not less, on keeping it tidy enough to understand (readable tests are especially important), unless we have gone full “gas town.”

My hope is that AI will lead us to a situation where we as programmers spend more time thinking about the problem domain3 and less time typing code. We must not fall into the temptation of writing even more code. The AI (Claude, GitHub Copilot CLI, Gemini CLI, and so on) is our very helpful partner, and it should help us review, create, and simplify. AIs are very good discussion partners, but unless chastised, they are still overly verbose.

This demands some discipline. The same kind of discipline one has when one reads the whole newspaper article and not only the first paragraph. Or closer to home: the same kind of discipline one has when one writes the test before the implementation.

I’ll end my little stream of thought with a quote from Kelly Clarkson4:

What doesn’t kill you makes you stronger.

  1. For some definition of well

  2. In my experience, some things aren’t really taught. Some knowledge is earned over time.

  3. Meaning talking to domain experts, stakeholders, but also turning off the screen and trying out hammock-driven development

  4. I’m trying to be funny; it’s Friedrich Nietzsche.

]]>
Writing a work log
2026-01-11 — https://www.fredrikmeyer.net/2026/01/11/work-log

About a year ago I started writing a “work diary”. The process is simple: at the end of each day, I write a few sentences about what I did at work that day.

It has a few benefits:

  • Easier to know what I should do the next day if I didn’t finish a task.
  • If I’m stuck on something, I can write the problem down, effectively rubber-ducking with myself. I believe that writing a problem down helps clarify thoughts.
  • I can ask an LLM questions about what I have spent time on. Here’s an example:

    > pbpaste | llm 'given this, list programming skills i \
                can use for a CV later. be brief, only keywords. at most 10'
    
    1. Java
    2. Spring Boot
    3. API Development
    4. Test Automation
    5. SQL
    6. BigQuery
    7. Git
    8. Microservices
    9. Debugging
    10. CI/CD (Continuous Integration/Continuous Deployment)

    Or this one:

    ❯ pbpaste | llm 'given this diary, estimate by seniority as a programmer. answer in one sentence'
    

    Based on the extensive work logs, collaboration with team members, involvement in complex debugging, API development, feature implementation, and participation in meetings and project management, it can be estimated that the programmer is at a senior level.

    Here I use the llm CLI tool by Simon Willison.

  • It can help me notice things I should focus more (or less) on. Asking the LLM again, it pointed out that a lot of my time is spent fixing bugs or attending meetings. It suggested setting aside dedicated time for deep work, so that complex coding tasks can be handled without interruption.
  • It gives me some traceability. I can verify that I actually worked on a particular day, or that I worked on a particular thing on a particular day.

The process

I use Emacs for Org Mode and for Magit. To write the log I press C-c æ1 to open the “work Org Mode file”. Then I navigate to the work diary (headlines are “Work log”, month, day). In Org Mode, dd inserts the current date.

Then I write a few sentences. Here’s an example from last Friday (loosely translated):

Sleepy today. Deployed the second part of § 11-4 to the dev environment, fixed a small bug (it didn’t consider manual income). Otherwise spent time on unrelated small fixes.

Used Copilot to get ktor-openapi-generator to support @JsonValue annotations.

Made a Grafana dashboard for error logs per app.

  1. I have this in my Emacs config:

    (global-set-key (kbd "C-c æ")
                (lambda () (interactive) (find-file "~/path-to-work.org")))
    

]]>
What I self host
2025-10-18 — https://www.fredrikmeyer.net/2025/10/18/what-i-self-host

I’ve always liked reading blogs, and have used several feed readers in the past (Feedly, for example). For a long time I thought it would be fun to write my own RSS reader, but instead of diving into that challenge, I did the next best thing: I found a decent one and learned how to self host it.

In this post I will describe the self hosting I do, and end by sketching the setup.

RSS reader - Miniflux

Miniflux is a “minimalist and opinionated feed reader”. I host my own instance at https://rss.fredrikmeyer.net/. It is very easy to set up using Docker; see the documentation.

Miniflux

I do have lots of unread blog posts 🤨.

Grafana, Strava Integration

I host a Grafana instance, also using Docker. What first prompted me to set it up was an old project (that I want to revive one day): I had a Raspberry Pi with some sensors measuring gas and dust at my previous apartment, and a Grafana dashboard showing the data. It was interesting to see that cooking at home had a measurable impact on volatile gas levels.

Later I discovered the Strava datasource plugin for Grafana. It is a plugin that lets Grafana connect to the Strava API, and gives you summaries of your Strava activities. Below is an example of how it looks for me:

Grafana

Several other dashboards come included with the plugin.

Spotify

One day YourSpotify was mentioned on HackerNews. It is an application that connects to the Spotify API and gives you aggregated statistics about the artists and albums you’ve listened to over time (why they chose MongoDB to store the data, I have no idea!).

YourSpotify

It is interesting to note that I have listened to less and less music over the years (I have noticed that the more experience I have at work, the less actual programming I do).

Because I didn’t bother setting up DNS, this one is only exposed locally, so I use Tailscale to access YourSpotify: Tailscale is installed on the host, which joins my tailnet. This lets me access the application by typing http://forgottensuperhero:3001/ in the browser.

Bookmark manager

I have a problem with closing tabs, and a tendency to hoard information (don’t get me started on the number of unread PDF books on my Remarkable!). So I found Linkding, a bookmark manager, which I access at https://links.fredrikmeyer.net/bookmarks/shared.

LinkDing

In practice it is a graveyard for interesting things I never have time to read, but it gives me some kind of peace of mind.

How

I have an ambition of making the hosting “production grade”, but at the moment this setup is a mix of practices of varying levels of quality.

I pay for a cheap droplet at DigitalOcean, about $5 per month, plus an additional dollar for backups. The domain name and DNS are from Domeneshop, and the SSL certificates are from Let’s Encrypt.

All the apps run in separate Docker containers with their ports exposed. Nginx then proxies requests to these ports, and redirects HTTP to HTTPS.

I manage most of the configuration using Ansible. Here I must give thanks to Jeff Geerling’s book Ansible for DevOps, which was really good. So if I change my Nginx configuration, I edit it on my laptop, and run

ansible-playbook -i inventory.ini docker.yml  --ask-become-pass

to let Ansible do its magic. In this case, “most” means the Nginx configuration and Grafana.

Miniflux and YourSpotify are managed by simply doing scp spotify_stats.yml droplet:~ and running sudo docker-compose -f ./spotify_stats.yml up -d on the host.

Ideally, I would like to have a 100% “infrastructure as code” approach, but hey, who has time for that!

Ideas for the future

It would be nice to combine AI and user manuals of house appliances to make an application that lets you ask questions like “what does the red light on my oven mean?”. Or write my own Jira, or… Lots of rabbit holes in this list on Github.

Until next time!

]]>
All the ways I use AI
2025-07-02 — https://www.fredrikmeyer.net/2025/07/02/all-the-ways-i-use-ai

I had some very nice experiences with Claude Code recently, and I realized it would be fun to write down all the ways I use AI today (it is highly likely this will all change within the next year!).

For context: I am a software developer, and these days I mainly use Kotlin and Next.js at work.

Tools I use

Raycast AI

I use Raycast as a Spotlight replacement on Mac. I have bound TAB to its “Quick AI” feature (which at the moment uses GPT-4o).

I use it a lot for quick questions that don’t require a conversation. It has largely replaced Google searches for me. Sometimes I just write “postgres literal json array”, and I get the right answer faster than a Google search.

(for some reason it doesn’t support multi-line chat questions, which is a bit annoying with code questions…)

ChatGPT desktop/phone app

I have had a ChatGPT subscription for a while now. I use it for longer conversations and random questions. I have tried the conversational feature, but I prefer writing. Common uses for me are recipe suggestions when shopping for groceries, explaining words, et cetera.

Claude desktop app

I have a Garmin watch, and I use GarminDB to export activity and heart rate data to a local SQLite database. I found a JDBC MCP Server, and I wanted to see what the AI could do with the data.

Screenshot of Claude Artifact

I was impressed; see the picture. After reading and understanding the database, it produced an HTML file with a nice plot.

IntelliJ’s Junie

Junie was the first agentic AI I tried (I got access to the early preview program). I used it to help me learn OpenGL in Java (which may be a topic for a future blog post). See this repo for some example commits (those with “thx Junie” in the commit message).

I have also used it to write integration tests that require a lot of setup or boilerplate code.

It understands the Java ecosystem quite well, and the generated code is quite good. Unfortunately it couldn’t browse the internet last time I tried, so it often pulls in old versions of libraries. It also doesn’t seem to have access to IntelliJ’s refactoring functionality (it edits one file at a time).

Claude Code

I have tried Claude Code a few times. At the moment I’m using an API key to connect (which, according to Google searches, is probably more expensive than a subscription if you use it a lot).

With Claude Code I could go a lot further with exploring the data from my Garmin watch. I asked it to use the R programming language to analyze and plot the data, and summarize everything in a PDF using LaTeX.

Sleep categories

One question I wanted to answer was whether bad sleep affects heart rate the following day. By analyzing the sleep scores and comparing them with heart rates, it produced the plot on the right side.

We see a fairly clear association between bad sleep and average heart rate (it turns out the converse also holds: a higher heart rate (stress) leads to worse sleep).

LLM CLI

Another tool is Simon Willison’s llm CLI tool.

Over the last six months I’ve gotten into the habit of ending every work day by writing a few short sentences about what I did that day. I can copy all the entries to the clipboard and ask an LLM questions about how I’ve worked. For example:

> pbpaste | llm 'List the three things I have spent the most time on. Respond very succinctly, in English.'
1. Working on statistics and data accuracy for various reports.
2. Addressing issues and bugs related to processing and API integration.
3. Participating in and leading numerous meetings for alignment and planning.

The llm tool is also quite useful in offline environments. It supports models from Ollama, running directly on my Mac. It is slower, but very useful when I don’t remember some standard library function, or some awkward awk syntax.

Notebook LM

Google’s Notebook LM lets you upload PDFs and ask questions about them. Its most famous feature is that it can make a podcast of the contents you have uploaded.

I did this with my PhD thesis, and it was quite fun to hear. The most noticeable thing is that the hosts are extremely positive about simple ideas.

Future ideas

I want to test more AI packages in Emacs, for example gptel. It would also be fun to write an AI agent at some point, for example using JetBrains’ Koog or Pydantic’s framework for Python.

My view on AI

For software engineering, AI is very good when used right. I find that for autocompletion it can sometimes be more distracting than useful. Good uses include:

  • Code review
  • Spotting bugs
  • Writing boring code

Bad (or worse) uses include:

  • Big tasks without clear boundaries
  • Illustrations for presentations
  • Writing prose. (Actually, I really dislike this use: AI is good for correcting prose or suggesting changes, but AI-written text is bad for sooo many reasons.)
]]>
Estimating Pi with Kafka Streams
2024-05-06 — https://www.fredrikmeyer.net/2024/05/06/estimating-pi-kafka

Recently I wanted to learn a bit about Apache Kafka. It is often used as a way to do event sourcing (or similar message-driven architectures). An “add-on” to the simple publish/subscribe pattern in Kafka is Kafka Streams, which provides ways to process unbounded data sets.

I wanted to write something slightly more complicated than the examples in the Kafka documentation.

Estimating π (Pi) by throwing darts

Circle inside square

Back in the day we all learned the formula $A=\pi r^2$ for the area of a circle. We place the circle of radius 1 inside the square $[-1,1]^2$ (see the picture). The square has an area of $2 \times 2 = 4$, while the area of the circle is $\pi \cdot r^2 = \pi \cdot 1^2 = \pi$. The circle therefore covers $\pi / 4 \approx 0.7854$ of the square’s area.

One (silly, very bad) way to estimate the area of the circle is to sample random points inside the square, and count how many of them land inside the circle. The fraction of hits converges to $\pi / 4$, so we end up with this formula:

$$ \lim_{n \to \infty} \frac{4 \cdot \#\left( \text{hits inside the circle} \right)}{n} = \pi $$

We need two ingredients to estimate Pi this way:

  • A way of generating (pseudo-)random numbers (which any normal computer can do).
  • A way to decide if a given point is inside the circle.

To decide whether a given point is inside the circle, we just check that its distance from the origin is at most one:

$$ (x,y) \mapsto x^2+y^2 \leq 1 $$

We realize now that this algorithm can be implemented in 25 lines of Python (with plenty of spacing), but let us use our skills as engineers to over-engineer instead.
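To make that claim concrete, here is one way those “25 lines of Python” could look — a quick sketch of my own, not code from the post’s repository:

```python
import math
import random


def estimate_pi(n: int, seed: int = 0) -> float:
    """Estimate pi by sampling n random points in [-1, 1]^2
    and counting how many land inside the unit circle."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:  # inside the circle?
            hits += 1
    return 4 * hits / n  # hits/n converges to pi/4


print(estimate_pi(100_000), math.pi)
```

The error shrinks only like $1/\sqrt{n}$, which is one reason this is a “silly, very bad” estimator in practice.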

Implementation in Kafka

Glossing over details, Kafka consists of topics and messages. Clients can publish and subscribe to these topics. It also promises “high throughput”, “scalability”, and “high availability”. Since I’m doing this on my laptop, none of this applies.

Kafka Streams is a framework for working with the Kafka messages as a stream of events, and provides utilities for real time aggregation.

Below is a high-level overview of the Kafka streams topology that we will build:

We produce a stream of random pairs of numbers to a Kafka topic named randoms. Then, in the aggregate step, we use a state store to keep track of the number of hits so far (together with the total number of random points processed).

The code to build the topology is quite simple:

public static Topology getTopology() {
    final StreamsBuilder builder = new StreamsBuilder();

    var randoms = builder.stream(TOPIC_RANDOMS,
                                 Consumed.with(Serdes.String(),
                                 new Tuple.TupleSerde()));

    KStream<String, Double> fractionStream = getPiEstimationStream(randoms);

    // Output result to a topic
    fractionStream.to(TOPIC_PI_ESTIMATION,
                     Produced.with(Serdes.String(), Serdes.Double()));

    // Also make a topic with the error
    fractionStream.mapValues(v -> Math.abs(Math.PI - v) / Math.PI)
                 .to(TOPIC_PI_ERROR,
                     Produced.with(Serdes.String(), Serdes.Double()));

    return builder.build();
}

We consume input from the randoms topic (making it into a stream). Then we use the getPiEstimationStream method to calculate a running estimation of π. Finally, we output the estimation to the Kafka topic pi-estimation. We also output the relative error to another topic.

The code for calculating the estimate is also quite short:

private static KStream<String, Double> getPiEstimationStream(KStream<String, Tuple> stream) {
    var fractionStore = Materialized
                          .<String, FractionAggregator>as(Stores.persistentKeyValueStore("average-store"))
                          .withValueSerde(new FractionAggregator.FractionAggregatorSerde());

    return stream
            .mapValues(Tuple::insideCircle) // Map Tuple to boolean
            .groupByKey()
            .aggregate(() -> new FractionAggregator(0, 0),
                       (key, value, agg) -> FractionAggregator.aggregate(
                               agg,
                               value ? 1 : 0),
                       fractionStore)
            .toStream()
            .mapValues(PiEstimationTopology::getEstimate);
}

private static double getEstimate(FractionAggregator fractionAggregator) {
    return fractionAggregator.total() == 0
            ? 0
            : 4. * fractionAggregator.trues() / fractionAggregator.total();
}

The FractionAggregator class is just a simple Java record that keeps track of the total number of messages consumed, and how many landed inside the circle.

I also set up a simple Javalin app that publishes the messages via websockets to localhost. To do this, one writes a Kafka consumer for each topic and uses a standard consumer.poll loop. Then I used µPlot to continuously update the current estimate.

Demo

As always, I put my code on Github. To run the project, first start Kafka (for example with docker run -it --rm -p 9092:9092 apache/kafka:3.7.0), then run mvn exec:java. Open your browser at localhost:8081 and watch the estimations come in.

Below is a screen recording I did.

What I learned

Even though this little project was embarrassingly simple, I learned more than just the Kafka Java API.

Maybe write tests earlier

I decided to also write tests, mostly to increase my own learning. For example, writing the tests for the stream topology (see source here), made me realize that the topology is completely stateless. It is just a specification of inputs, transformations, and outputs.

If I had written the tests earlier, I would have avoided having to restart the Kafka container again and again.

Documentation of test utilities

I spent an awful lot of time writing the single test I have for the EstimationConsumer class. It uses Kafka’s MockConsumer class to mock a consumer. The only documentation I could find for this class was some examples on Baeldung’s pages.

(in general, I find that documentation for “beginners” is often too basic, outdated, or lacking)

Kafka not as fast as I thought (locally)

The first thing I tried when I started exploring Kafka was to publish a message on some topic, and subscribe to the same topic in another terminal. About one second later, I got the message. I was a bit surprised that I didn’t see the message immediately, but after meditating on this for a little while, I realized that there is a difference between latency and throughput.

There is also tons of configuration that I did not explore.

Serialization can be complicated

I chose not to take serialization seriously, so I just used Java’s built-in serialization interface (implements Serializable). This is of course fine for test applications, but it turned out to be cumbersome when I changed the classes involved. The solution was usually to delete and restart the Kafka container.

In a real application one would rather use a serialization format that doesn’t break on schema changes (Protobuf, Avro, JSON, …). One must also be mindful about what to do with invalid messages; the default is to crash if any message is invalid.

There is even a whole chapter about this in the book Designing Data-Intensive Applications (Chapter 4).

Fin

Take a look at the code, and if you found any errors here, don’t hesitate to contact me.

]]>
Implementing a 2d-tree in Clojure
2024-04-02 — https://www.fredrikmeyer.net/2024/04/02/2d-tree

Recently I followed the very good course “Algorithms, Part I” on Coursera. The exercises were in Java, and the most fun one was implementing a two-dimensional version of a k-d tree. Since I sometimes do generative art in Clojure, I thought this would be a fun algorithm to implement myself.

There already exist other implementations, for example this one, but this time I wanted to learn, not use.

What is a 2-d tree?

A 2-d tree is a spatial data structure that is efficient for nearest neighbour and range searches in a two-dimensional coordinate system.

It is a generalization of a binary search tree to two dimensions. Recall that in a binary search tree, one builds a tree structure by inserting elements such that left children are always lower and right children are always higher. That way, one only needs $O(\log(n))$ comparisons to find a given element. See Wikipedia for more details.

In a 2-d tree one manages to do the same with points $(x,y)$ by alternating the comparison on the $x$ or $y$ coordinate. For each insertion, one splits the coordinate system in two.

Look at the following illustration:

2-d tree tree structure

This is the resulting tree structure after having inserted the points $(0.5, 0.5)$, $(0.6, 0.3)$, $(0.7, 0.8)$, $(0.4, 0.8)$, and $(0.4, 0.6)$. For each level of the tree, the coordinate to compare with is alternating.

The following illustration shows how the tree divides the coordinate system into sub-regions:

Subregions of a 2-d tree structure

To illustrate searching, let’s look up $(0.4, 0.6)$. We first compare it with $(0.5, 0.5)$: the $x$ coordinate is lower, so we look at the left subtree. Now the $y$ coordinate is lower, so we look at the left subtree again, and we have found our point. This is 2 compares instead of the maximum 5.
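To make the alternating comparisons concrete, here is a minimal sketch in Python (my own illustration with made-up names, not the post’s Clojure code; it skips the rectangle bookkeeping used later for nearest-neighbour search):

```python
class Node:
    """A 2-d tree node; `vertical` nodes compare x, others compare y."""
    def __init__(self, point, vertical):
        self.point = point
        self.vertical = vertical
        self.lower = None
        self.higher = None


def insert(root, point):
    """Insert a point, alternating the comparison axis at each level."""
    if root is None:
        return Node(point, vertical=True)
    node = root
    while node.point != point:  # ignore duplicates
        axis = 0 if node.vertical else 1
        side = "lower" if point[axis] <= node.point[axis] else "higher"
        child = getattr(node, side)
        if child is None:
            setattr(node, side, Node(point, not node.vertical))
            break
        node = child
    return root


def search(root, point):
    """Return (found point or None, number of coordinate comparisons)."""
    node, comparisons = root, 0
    while node is not None:
        if node.point == point:
            return node.point, comparisons
        axis = 0 if node.vertical else 1
        comparisons += 1
        node = node.lower if point[axis] <= node.point[axis] else node.higher
    return None, comparisons


root = None
for p in [(0.5, 0.5), (0.6, 0.3), (0.7, 0.8), (0.4, 0.8), (0.4, 0.6)]:
    root = insert(root, p)
print(search(root, (0.4, 0.6)))  # ((0.4, 0.6), 2): two compares
```

Searching $(0.4, 0.6)$ touches only the nodes along one path down the tree, so finding it costs two coordinate comparisons rather than comparing against all five points.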

There’s a lot more explanation on Wikipedia.

Implementing it in Clojure

Let’s jump straight to the implementation in Clojure. We first define a node containing three values: the point to insert, a boolean indicating whether we are comparing vertically or horizontally (vertical means comparing the $x$-coordinate), and a rectangle indicating which subregion the node corresponds to.

(defrecord TreeNode [value vertical rect])

(note: we don’t really need to carry around the rectangle information - it can be computed from the vertical boolean and the previous point. I might optimize this later.)

(defn- tree-cons [root pt]
  (loop [path []
         vertical true]
    ;; If the current path exists
    (if-let [{:keys [value]} (get-in root path)]
      (let [comparator-fn (if vertical first second)
            current-compare-res (<= (comparator-fn pt) (comparator-fn value))]
        (cond
          ;; If pt already in tree, just return the tree
          (= value pt)
          root
          ;; If pt is lower than current value, recur with :lower appended to path
          current-compare-res
          (recur (conj path :lower) (not vertical))
          :else
          (recur (conj path :higher) (not vertical))))
      ;; We are in the case of a non-existing node
      ;; If path is empty, it means the tree was nil. Return a root node.
      (if (empty? path)
        (assoc
         (->TreeNode pt vertical (->Rectangle 0 0 1 1)) :size 1)
        ;; Otherwise, insert a new node at the current path
        (let [{prev-pt :value prev-rect :rect} (get-in root (pop path))
              curr-key (peek path)]
          (-> root
              (update :size inc)
              (assoc-in path
                        (->TreeNode pt vertical
                                    (if vertical
                                      (if (= curr-key :lower)
                                        (below-of prev-rect prev-pt)
                                        (top-of prev-rect prev-pt))
                                      (if (= curr-key :lower)
                                        (left-of prev-rect prev-pt)
                                        (right-of prev-rect prev-pt)))))))))))

Insertion is almost identical to insertion into a binary search tree. Where the structure shines is when looking for the nearest neighbour. The strategy is as follows: keep track of the “best so far” point, and only explore subtrees that are worth exploring.

When is a subtree worth exploring? Only when its region is closer to the search point than the current best point:

(defn- worth-exploring?
  "Is the `rect` worth exploring when looking for pt?"
  [rect best-so-far pt]
  (< (distance-squared rect pt) (p/distance-sq pt best-so-far)))

In addition, we do one optimization when there are two subtrees. We explore the closest subtree first. Here’s the full code:

(defn- tree-nearest
  "Find the nearest pt in the tree represented by root."
  [root pt]
  (if (nil? root) nil
      (loop [best-so-far (:value root)
             paths [[]]]
        (let [current-path (peek paths)
              {:keys [value lower higher vertical] :as current-node} (get-in root current-path)
              best-so-far* (min-key #(p/distance-sq pt %) value best-so-far)]
          (cond
            ;; The stack of paths to be explored is empty, return best-so-far
            (nil? current-path)
            best-so-far
            ;; If pt = value, then no need to do anything more
            (= pt value)
            value
            ;; Both children exist
            (and lower higher)
            (let [comparator-fn (if vertical first second)
                  current-compare-res (<= (comparator-fn pt) (comparator-fn value))
                  ;; Explore closest node first
                  child-nodes  (if current-compare-res '(:higher :lower) '(:lower :higher))
                  v (->> child-nodes
                         ;; Filter nodes worth exploring
                         (transduce (comp (filter #(worth-exploring? (:rect (% current-node)) best-so-far* pt))
                                          (map #(conj current-path %))) conj (pop paths)))]
              (recur best-so-far* v))
            (some? lower)
            (if (worth-exploring? (:rect lower) best-so-far* pt)
              (recur best-so-far* (conj (pop paths) (conj current-path :lower)))
              (recur best-so-far* (pop paths)))
            (some? higher)
            (if (worth-exploring? (:rect higher) best-so-far* pt)
              (recur best-so-far* (conj (pop paths) (conj current-path :higher)))
              (recur best-so-far* (pop paths)))
            :else
            (recur best-so-far* (pop paths)))))))

In the recursion, we keep a stack of paths (it looks like [[:higher :lower] [:higher :higher] ...]). When exploring a new node, we add it to the top of the stack, and when recurring, we pop the current stack.

Here’s how the data structure looks after inserting the same points as in the illustration above:

{:value [0.5 0.5],
 :vertical true,
 :rect {:xmin 0.0, :ymin 0.0, :xmax 1.0, :ymax 1.0},
 :size 5,
 :higher
 {:value [0.6 0.3],
  :vertical false,
  :rect {:xmin 0.5, :ymin 0.0, :xmax 1.0, :ymax 1.0},
  :higher {:value [0.7 0.8], :vertical true, :rect {:xmin 0.5, :ymin 0.3, :xmax 1.0, :ymax 1.0}}},
 :lower
 {:value [0.4 0.8],
  :vertical false,
  :rect {:xmin 0.0, :ymin 0.0, :xmax 0.5, :ymax 1.0},
  :lower {:value [0.4 0.6], :vertical true, :rect {:xmin 0.0, :ymin 0.0, :xmax 0.5, :ymax 0.8}}}}

Integrating with core Clojure functions

I wanted the tree structure to behave like a normal Clojure collection. The way to do this is to implement the required interfaces. For example, to be able to use cons, conj, map, filter, etc., we have to implement the clojure.lang.ISeq interface. To find out which methods we need to implement, I found this Gist very helpful.

I create a new type that I call TwoTree using deftype:

(deftype TwoTree [root]
  I2DTree
  (value [_]
    (:value root))
  (intersect-rect [_ other-rect]
    (tree-insersect-rect root other-rect))

  (nearest [_ pt]
    (tree-nearest root pt))

  clojure.lang.ISeq
  (first [_]
    (letfn [(first* [{:keys [lower value]}]
              (if lower (recur lower) value))]
      (first* root)))
  (cons [_ pt]
    (TwoTree. (tree-cons root pt)))

  (next [this]
    (seq (.more this)))

  (more [_]
    (letfn [(more* [{:keys [lower higher] :as node} path]
              (cond
                lower (recur lower (conj path :lower))
                (seq path) (TwoTree. (assoc-in root path higher))
                :else (TwoTree. higher)))]
      (more* root [])))

  clojure.lang.Seqable
  (seq [this]
    (when (contains? root :value) this))

  clojure.lang.IPersistentCollection
  (equiv [this other]
    (seq-equals this other))
  (empty [_]
    (TwoTree. nil))

  clojure.lang.Counted
  (count [_]
    (get root :size 0))

  clojure.lang.IPersistentSet
  (disjoin [_ _]
    (throw (Exception. "Not supported")))

  (contains [this pt]
    (boolean (get this pt)))

  (get [_ pt]
    (if (nil? root) nil
        (loop [path []]
          (if-let [{:keys [value ^boolean vertical]} (get-in root path)]
            (let [comparator-fn (if vertical second first)
                  current-compare-res (<= (comparator-fn pt) (comparator-fn value))]
              (cond (= value pt) value
                    current-compare-res
                    (recur (conj path :lower))
                    :else
                    (recur (conj path :higher))))
            nil)))))

When implementing TwoTree, I took a lot of inspiration (and implementation) from this blog post by Nathan Wallace. Also, thanks to the Reddit user joinr for pointing out a bad equiv implementation. Here is the diff after his comments.

We can create a helper function for creating new trees:

(defn two-tree [& xs]
  (reduce conj (TwoTree. nil) xs))

Now we can create a new tree like this:

(two-tree [0.3 0.4] [0.6 0.3])

Also, the following code works:

(filter #(> (second %) 0.5) (two-tree [0.3 0.5] [0.2 0.3] [0.5 0.5] [0.7 0.8]))

(get all points whose second coordinate is greater than 0.5)

The full code can be seen here on Github.

Lessons learned

I did this partly to learn a simple geometric data structure, but also to add another tool to my generative art toolbox. Implementing an algorithm helps immensely when trying to understand it: I got better and better at visualizing what these trees look like.

There are two main things about Clojure I want to mention in this section: the clojure.test.check library, and the Clojure interfaces.

The clojure.test.check library

The test.check library is a property-based testing library. In a few words: given constraints on inputs, it can generate test data for your function. In one particular case, it helped me verify that my code had a bug and produce a minimal example of it (the bug was that I forgot to recur in the :else clause of the tree-nearest function). By writing some “simple-ish” code, I got an example of an input that made tree-nearest return a wrong answer. Here is the code:

(defn nearest-by-sort [pts pt]
  (-> (sorted-set-by (fn [p q] (compare (p/distance-sq p pt) (p/distance-sq q pt))))
      (into pts)
      (first)))

(defn ^:private points-gen [min-length max-length]
  (-> {:infinite? false :max 1 :NaN? false :min 0}
      gen/double*
      (gen/vector 2)
      (gen/vector min-length max-length)))

(def prop (prop/for-all [[p & pts] (points-gen 3 50)]
                        (let [t (reduce conj (s/two-tree) pts)
                              correct-answer (nearest-by-sort pts p)
                              correct-dist (p/distance-sq correct-answer p)
                              tree-answ (s/nearest t p)
                              tree-dist (p/distance-sq tree-answ p)]

                          (= correct-dist tree-dist))))

(deftest nearest-generative
  (is (= nil (let [res (tc/quick-check 10000 prop)]
               (when (not (:pass? res))
                 (let [failed (first (:smallest (:shrunk res)))
                       t (reduce conj (s/two-tree) failed)
                       answ (nearest-by-sort failed [0.5 0.5])
                       tree-answ (s/nearest t [0.5 0.5])]
                   {:tree t
                    :failed failed
                    :root (.root t)
                    :tree-answ tree-answ
                    :tree-dist (p/distance-sq [0.5 0.5] tree-answ)
                    :correct-dist (p/distance-sq [0.5 0.5] answ)
                    :answ answ}))))))

It is probably more verbose than needed, but the summary is this: the points-gen function returns a generator which, given some constraints, can produce sample inputs (in this case: vectors of points). Then I compare the result from the tree search with the brute-force result obtained by first sorting the points, then picking the first one.

The way I’ve set it up, whenever I run my tests, clojure.test.check generates 10000 test cases and fails the test if my implementation doesn’t return the correct result. This was very handy, and quite easy to set up.
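
The general idea can be sketched without a library, e.g. in Python: generate random inputs and check that the implementation under test agrees with a trusted brute-force oracle (here min stands in for the tree search, purely for illustration; test.check additionally shrinks failing cases, which this loop does not):

```python
import random

def dist_sq(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def nearest_by_sort(pts, p):
    # Trusted oracle: sort every point by distance and take the first
    return sorted(pts, key=lambda q: dist_sq(q, p))[0]

def nearest_by_min(pts, p):
    # Stand-in for the implementation under test (the tree search)
    return min(pts, key=lambda q: dist_sq(q, p))

random.seed(1)
for _ in range(10_000):
    pts = [(random.random(), random.random())
           for _ in range(random.randint(1, 50))]
    p = (random.random(), random.random())
    # Compare distances rather than points, since ties are allowed
    assert dist_sq(nearest_by_sort(pts, p), p) == dist_sq(nearest_by_min(pts, p), p)
```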

The Clojure core interfaces

It was rewarding to implement the Clojure core interfaces for my TwoTree type (clojure.lang.ISeq, clojure.lang.IPersistentSet, etc.). What was a bit frustrating, though, was the lack of documentation. I ended up reading a lot of Clojure source code to understand the control flow. Basically, the only thing I know about IPersistentSet is that it is a Java interface like this:

package clojure.lang;

public interface IPersistentSet extends IPersistentCollection, Counted{
	public IPersistentSet disjoin(Object key) ;
	public boolean contains(Object key);
	public Object get(Object key);
}

Then I had to search the Clojure source code to understand how it was supposed to be used. A docstring or two would have been nice. I found many blog posts that implemented custom types (this one, this one, or this one), but very little in Clojure’s own documentation.

On the flip side, I got to read some of the Clojure source code, which was very educational. I also came to appreciate the usefulness of protocols (using defprotocol and defrecord to provide several implementations). Here it was very useful to read the source code of thi-ng/geom.

Conclusion

I learned a lot, and I got one more tool for making generative art. Perhaps later I could publish the code as a library, but I should really battle-test it a bit more first (anyone can copy the code; it is open source on my Github).

I used the data structure to create the following pictures (maybe soon I’ll link to my own page instead of Instagram). The nearest function was very useful in making the code fast enough.

See this post on Instagram

A post shared by Fredrik Meyer (@generert)

Until then, thanks for reading this far!

]]>
Setup of new Macbook2024-01-05T09:15:00+00:002024-01-05T09:15:00+00:00https://www.fredrikmeyer.net/2024/01/05/new-macI just got a new Macbook, and I thought it would be useful for my future self to write down what I installed on it. Luckily the history file in my shell is long enough to remember everything.

The order of the steps is quite random.

1. Install Homebrew

I hear there are other alternatives out there, but I stick to Brew for now.

I installed Brew the “official” way:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

This seemed to automatically install the Xcode Command Line Tools.

Follow the install instructions (this adds an init script to my .zprofile file):

(echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/fredrikmeyer/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

2. Install Clojure

This installs Clojure and OpenJDK 21. From the official Clojure documentation.

brew install clojure/tools/clojure
brew tap homebrew/cask-versions
brew install --cask temurin21
brew install leiningen

I do my Clojure programming in Emacs with Cider and clojure-lsp.

3. Install Emacs

I use this version of Emacs on Mac.

Install with:

brew install emacs-plus@30 --with-ctags --with-xwidgets --with-imagemagick --with-native-comp --with-poll
osascript -e 'tell application "Finder" to make alias file to posix file "/opt/homebrew/opt/emacs-plus@30/Emacs.app" at POSIX file "/Applications"'

The second line makes it possible to open Emacs with Finder.

My Emacs configuration is stored here. Since I don’t install on a new machine very often, I usually have to restart Emacs a few times before it works.

4. Install tmux

brew install tmux

I use Tmux to manage windows in my Terminal.

My Tmux configuration is Git managed. Here is the current version:

# See this https://www.seanh.cc/2020/12/27/copy-and-paste-in-tmux/
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'
set -g @plugin 'tmux-plugins/tmux-yank'

set -g mouse on

bind c new-window -c "#{pane_current_path}"
bind '"' split-window -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"

run '~/.tmux/plugins/tpm/tpm'

This first requires installing the Tmux plugin manager. The tmux-yank plugin makes copy-on-select work as expected. I wrote about how I use Tmux here.

5. Install oh-my-zsh

Mostly by habit I use oh-my-zsh for terminal configuration. I’m mostly happy with the default configuration.

sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

6. Install node

I do a lot of frontend development, so I will probably need to install more Node related packages, but I needed Node to let Emacs install LSP clients automatically (many of them are stored on NPM).

brew install node

7. Install jotta-cli

I do a lot of my backup using Jottacloud. They have a CLI utility to select directories to backup.

brew tap jotta/cli
brew install jotta-cli
brew services start jotta-cli
jotta-cli login

8. Install Rust

From the official documentation:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"  # Add this line to .zshenv

For editor integration, also add the language server:

rustup component add rust-analyzer

9. Install Rectangle

I was looking for a good window manager for Mac. Spectacle is not maintained anymore (though I think it still works), but after some searching I found Rectangle. Open source and easy to use.

brew install --cask rectangle

I mostly use ctrl+option+return to maximize windows and ctrl+option+←/ctrl+option+→ to move windows to the left/right half.

10. Install GNU Stow

I use GNU Stow to manage (some of) my dotfiles. A good intro is here.

brew install stow

I keep the dotfiles in a private repository (maybe I make it public one day).

11. Command line utilities

I sometimes use bat to read code files with syntax highlighting in the terminal.

And ripgrep for fast search (also makes some Emacs plugins faster).

brew install bat
brew install rg

Setup Git

Remember to update the Git config. At the moment mine looks like this:

[user]
	name = Fredrik Meyer
	email = [email protected]

[core]
	editor = emacs

[github]
	user = FredrikMeyer

[init]
	defaultBranch = main

12. Install Ruby

This blog is built using Jekyll, so it needs Ruby installed.

It also uses (at the moment) an old version of Ruby, so I installed Ruby 2.7.3 with a version manager:

brew install rbenv ruby-build
rbenv init
source ~/.zshrc
rbenv install --list-all
rbenv install 2.7.3

13. Apps installed other ways

That seems to be all (for now) that is installed via the CLI. I have usually also installed iTerm2, but I noticed I don’t use many of its features (tabs, themes, etc.), so for now I’m sticking with the built-in Terminal app.

Google Chrome

It’s easy and it stores all my passwords.

Slack

For the interruptions.

Dropbox

All my files.

NordVPN

Sometimes I need a VPN.

Spotify

What’s life without music? (silent)

Amazon Prime Video

The app allows downloading series.

Remarkable Desktop App

I have the Remarkable tablet, and I often use the app to upload PDF’s.

Steam

Sometimes I play games. Unfortunately, many games don’t work anymore on Mac - and I might be too lazy to try installing Windows on it.

]]>
Fredrik Meyer
How I use Tmux2023-05-31T19:21:00+00:002023-05-31T19:21:00+00:00https://www.fredrikmeyer.net/2023/05/31/how-i-use-tmuxWhen working in the terminal, I like to move efficiently between panes. To do this I use tmux, which is a terminal multiplexer. I had to look that word up, so here is what Wikipedia says:

A terminal multiplexer is a software application that can be used to multiplex several separate pseudoterminal-based login sessions inside a single terminal display, (…)

A multiplexer is a device that combines several signal sources into a single signal. So in short, tmux lets you combine several terminal sessions into a single one. It also remembers the configuration if you manage to close your terminal, since the tmux server keeps running in the background.

Let’s just dive into how I would start a day at work. After typing tmux my terminal looks something like this:

Tmux start

Then I press ctrl+b and % to split the terminal into two horizontal panes (to split vertically, type ctrl+b and "). I can use ctrl+b and the arrow keys to jump between panes.

Now I can run for example yarn typecheck --watch in the left pane, a dev server in the bottom right, and a free terminal in the upper right pane.

Tmux with panes

If I think it starts to get too crowded, I can type ctrl+b and : and write new-window to start fresh with zero panes (the window numbers are shown at the bottom in the pictures). To navigate between windows I use ctrl+b and n/p (next/previous).

I use a very simple configuration file (put it in ~/.tmux.conf):

set -g mouse on

bind c new-window -c "#{pane_current_path}"
bind '"' split-window -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"

The configuration file does two things: first, mouse actions are not supported by default, so I enable them (which lets me copy text by selecting it, for example). The remaining options make sure that new panes and windows open in the same directory as the original pane.

There is of course a lot more you can do, but at this point these are the commands I use the most. For more inspiration (and better guides than this one), see for example the awesome-tmux repository. This was not meant to be a full guide; for that, I would recommend the official documentation.

]]>
Using Emacs to backup a Raspberry Pi2023-02-26T19:33:00+00:002023-02-26T19:33:00+00:00https://www.fredrikmeyer.net/2023/02/26/backup-pi-emacsAt home, I manage my smart lights (Philips Hue, IKEA Trådfri, etc.) from a Raspberry Pi running the Deconz Zigbee gateway.

A week ago, I suddenly couldn’t turn off my kitchen lights - and I soon realized it was because the SD card on my Raspberry had stopped working. Resetting smart lights is a hassle, so I was very happy when I discovered that a simple backup script I had set up ages ago still worked (it backed up every night to Google Drive).

It turns out that the Deconz software only needs the SQLite database to remember the connection information for the lights, so just copying my backup file to the configuration directory of Deconz was enough to reconnect to the lights (once I managed to actually install the Deconz software, but that is another story…).

This time I decided to use Amazon S3 to backup the Deconz database, since I’m more familiar with how S3 works, thanks to work experience.

Step 1

Create an S3 bucket and an IAM user that is allowed to write files to this specific bucket. I use AWS CDK for this, but it should be quite easy to do via the GUI as well.

After the user is created, go to the IAM console, and create access keys. They consist of an ID and a secret part.

Step 2

Install the AWS CLI on your Raspberry Pi. I just did this:

> curl -O 'https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip'
> unzip awscli-exe-linux-aarch64.zip
> sudo ./aws/install

Check that it worked by typing e.g. aws --help.

Add your credentials by typing aws configure and follow the instructions.

Step 3 - write the Elisp script

Of course I could have written a simple Bash script, but writing Bash feels like fixing the electrics at home without a professional - it might work, or you might shoot yourself in the foot.

So I decided to try using Emacs, since I had already installed Emacs on the Raspberry (again, because I’m more used to Emacs keybindings than anything else).

It required some Googling, but it turned out short and understandable:

#!/usr/bin/emacs --script

(let* ((file-name "/deconz/zll.db")
       (target-name (concat
		    (format-time-string "%Y-%m-%d-%H_%M") "_zll.db"))
       (command (concat "aws s3 cp "
			file-name " "
			"s3://<S3-BUCKET-NAME>/"
			target-name)))
  (print "Starting upload.")
  (if (> (shell-command command) 0)
    (print "Error uploading.")
    (print (concat "Uploaded file " target-name " to S3."))))

The script is quite simple. I define the relevant file names and the command to run (the command variable). Then I run the command.

By placing the shebang at the top of the file and running chmod +x on it (making it executable), one can run the script just by typing ./backup_deconz.el.

To not have everything in my home directory, I moved the script to /usr/local/bin and changed its owner to root.

Making it run periodically

To make it run periodically, I googled how crontab works, and added this snippet in the crontab file (crontab -e):

0 3 * * * /usr/local/bin/backup_deconz.el

Now the script will run every night at 03:00.
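
For reference, the five fields of a standard crontab entry are, in order: minute, hour, day of month, month, and day of week:

```
# ┌──────── minute (0-59)
# │ ┌────── hour (0-23)
# │ │ ┌──── day of month (1-31)
# │ │ │ ┌── month (1-12)
# │ │ │ │ ┌ day of week (0-6, Sunday = 0)
  0 3 * * *  /usr/local/bin/backup_deconz.el
```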

Conclusion

This is not exactly advanced Emacs usage, but I found it fun to be able to use Elisp instead of Bash when I had the chance.

]]>
Introduction to AWS CloudFormation2022-02-22T19:14:00+00:002022-02-22T19:14:00+00:00https://www.fredrikmeyer.net/2022/02/22/cloudformation-introIn the old days, well before my time as a programmer, you had servers, which maybe had a Java server running, connecting to an Oracle SQL database managed by some other team.

The infrastructure itself was “simple” (a bunch of servers, a firewall, and a database). Then came Kubernetes and cloud providers with managed services, making the number of components larger. To be able to manage the infrastructure in a reproducible way, one wants to keep infrastructure configuration in source control.

Infrastructure as Code is a way of defining infrastructure with code (obviously). This has the advantage that the infrastructure and its configuration can be put in version control, and often also in a CI pipeline. This in turn ensures that whatever is on the Git main branch represents the true state of things.

In this post I will explain AWS’ CloudFormation.

The most popular Infrastructure as Code tool is probably Terraform, which looks like this:

resource "aws_s3_bucket" "b" {
  bucket = "my-tf-test-bucket"

  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}

This code defines an S3 bucket with the ID b and the bucket name my-tf-test-bucket. Terraform’s language is the HashiCorp Configuration Language (HCL).

In the AWS world, the two most popular infrastructure as code tools are Terraform and CloudFormation. CloudFormation is natively supported within AWS and is written in YAML (for better or worse…).

This is how CloudFormation looks:

AWSTemplateFormatVersion: 2010-09-09
Description: CloudFormation template for a S3 bucket 
    
Resources:
  MyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: i-named-this-bucket

Resources are, roughly speaking, things you can create in AWS: Lambda functions, S3 buckets, IAM roles/permissions, and so on. They can reference other resources, forming a resource graph (this resource refers to that resource, and so on). A single such YAML file defines a stack, which is a collection of resources in the same “namespace”.
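
As a hypothetical illustration of such a reference (the SSM parameter here is my own addition, not part of the walkthrough below), one resource can point at another with intrinsic functions like !Ref and !GetAtt:

```yaml
Resources:
  MyS3Bucket:
    Type: AWS::S3::Bucket

  BucketNameParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Type: String
      Value: !Ref MyS3Bucket  # !Ref on a bucket resolves to its name
```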

To make things clearer, let us go through a simple example.

Let us first create a bucket using the above snippet. Save it in a file called my_template.yaml. Then run the following command with the AWS CLI:

aws cloudformation deploy \
  --template-file my_template.yaml \
  --stack-name my-stack

You will see something like this:

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - my-stack

If this completes successfully, it will create an S3 bucket with the name i-named-this-bucket.

Let us add a Lambda Function, by adding this to the YAML file:

  MyLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "my-function"
      Handler: index.handler
      Code:
        ZipFile: |
          exports.handler = function(event, context) {
            console.log("I'm a lambda!")
          };
      Runtime: nodejs14.x
      Role: !GetAtt LambdaExecutionRole.Arn

  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: root
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - logs:*
            Resource: arn:aws:logs:*:*:*

Phew! That was a mouthful. The first block defines a Lambda function named my-function, which only logs the sentence “I’m a lambda!”. The Role field points to the second block, an IAM role that defines what the Lambda function is allowed to do. This one is only allowed to log to CloudWatch.

Let’s do something slightly more complex. Let the Lambda function be able to list all objects in the bucket. Change the Lambda code to the following:

var AWS = require("aws-sdk");
var s3 = new AWS.S3();
exports.handler = function(event, context) {
    s3.listObjects(
        { Bucket: process.env.S3_BUCKET},
        function (err, data) {
            if (err) throw err;
            console.log(data);
        })
};
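
One detail worth noting: the handler above reads the bucket name from process.env.S3_BUCKET, so the Lambda resource also needs an Environment property along these lines (the full template in the linked gist is the authoritative version):

```yaml
      Environment:
        Variables:
          S3_BUCKET: !Ref MyS3Bucket
```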

And add the following policy in the IAM Role:

      - PolicyName: list-s3
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - s3:List*
              Resource: !GetAtt MyS3Bucket.Arn

Now the lambda will be allowed to list objects in the S3 bucket. For the full template, see this gist.

To deploy this, run the same command as above, and wait for it to finish. To test the Lambda function, go to the Lambda console, and choose “Test”. If the function doesn’t crash, you will see a green box with its output.

These steps should give the reader an idea of what CloudFormation is and how it works. The documentation is very good and has a lot of examples, so I recommend always starting there when wondering about all the possible properties.

As you can see, CloudFormation has a tendency to become quite complex. To keep things under control, a good first step is to use something like cfn-lint, which can validate your CloudFormation and warn you about problems before you deploy.

A good next step would be to use something like AWS CDK (Cloud Development Kit). This is a framework that takes your favorite programming language and compiles it to CloudFormation. If you’re going to write a lot of infrastructure as code in AWS, CDK is definitely the way to go (but that will be another blog post).

(To clean up, the easiest way is to go to the AWS Console, then CloudFormation, then “Delete Stack”. Note that some resources are only “orphaned”, not deleted; this includes DynamoDB tables and S3 buckets, which must be deleted manually afterwards.)
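
Alternatively, assuming the stack name from earlier, the stack can be deleted from the CLI:

```shell
aws cloudformation delete-stack --stack-name my-stack
# Optionally block until deletion has finished:
aws cloudformation wait stack-delete-complete --stack-name my-stack
```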

]]>