<![CDATA[Julien's DevRel corner]]>https://lengrand.fr/https://lengrand.fr/favicon.pngJulien's DevRel cornerhttps://lengrand.fr/Ghost 6.0Tue, 17 Mar 2026 13:10:11 GMT60<![CDATA[Ways in which GenAI has changed my (tech) life so far]]>https://lengrand.fr/ways-in-which-genai-has-changed-my-tech-life-so-far/693479897acd40276ed5b004Sun, 07 Dec 2025 11:35:32 GMT

Just like many (most?) of you, I am using AI every day. I like it for some things, like giving me insights on topics, exploring new ideas or creating learning plans. I've been way less successful with it for other things (well, most things actually).
Heck, I'm even currently leading AI adoption for engineers in my current job, so let's say I'm pretty invested.
In this article though, I want to discuss a few of the ways AI has drastically impacted me and my life so far.

It's generally harder to find quality technical content online

Because AI is able to generate tons of content in minimal time, many content farms out there generate massive amounts of data and flood the market with it. Some sources even report that 3/4 of websites online already contain generated content.
It makes it increasingly difficult to use search engines to find quality technical information.
This is not a new phenomenon in DevRel: it's well known that many of the large actors out there externalise the "lower quality / more general" content to third parties. But with the arrival of LLMs, it has scaled up tremendously.
I simply find myself relying less and less on "insert your search engine" to get proper technical information. Instead, I directly go to specialised places like Discord, Reddit or simply the documentation.

This is also true for smaller scale content publishers and personal blogs. For as little as $50 a month you can have a completely automated blog generating content for itself, including social media promotion.

The signal/noise ratio on social media has become abysmal

Social media has always been a double-edged sword for me. Even 10 years ago, unless you curated your timeline like a madman, you'd have to sift through a lot of irrelevant content to find a few gems. But those gems were worth it. As a long-time Kotlin developer, I've found most of "my people" online first by sharing and reacting to Kotlin content. Today, those gems are still there, but the amount of low quality / irrelevant content has spiked to incredible levels.
I feel like it's most blatant on YouTube: play around with Shorts (🤢) for 30 minutes and your timeline will be full of AI generated videos that voice over successful Reddit posts from completely automated accounts. Worse, the content is very often plain wrong.
Not only that, AI is also being pushed on people using social media. Insert a few words, and LinkedIn will transform them into a full-fledged 500-word post babbling about said words.

Everything looks the same

Because of how easy it is to generate content, as well as how aggressively it is pushed everywhere (Gmail, Outlook, LinkedIn..., basically everywhere!), the share of people using it is increasingly large.
This leads to posts that basically all look the same. Same formatting, same verbosity, same use of emojis, bullet lists everywhere with separated sections AND SAME LIFELESS TONE... The online world is really losing its flavour. Essentially, AI generated content has no soul.
It also means that shallower content tends to be posted more often, because of how cheap it is to generate posts. Which worsens the signal/noise issue I mentioned above.

I lost my (online) community

One can find many semi-automated accounts posting engagement traps, and this simply pushes the more technical audience away.

The social media world has also become a lot more fragmented over the last few years. First because of the Twitter acquisition (leading to many people being on different platforms), but also lately because of the ever more aggressive chase for data to feed the almighty models. Add to this that by being on a platform, you have to coexist with some outrageously, deliberately biased models, and the use of social media becomes very unattractive.

Because of all these reasons, a large share of the people I used to follow are now using social media for "push only", meaning that they use cross-posters and do not interact with their replies. Others have left the game altogether.

And given that discussions (for me) are where connection gets created, I simply find it harder to find inspiring people (whose behaviour or work gives me energy) online.

We've become so much more used to stealing other people's content

One of the things that bothers me the most is that we all know that those large models are only possible because people's data has essentially been stolen at an unprecedented scale.
In a world where some of the most used Open-Source projects struggle to survive for lack of sponsorship, all of us techies are using AI assistants that have been fed the entirety of GitHub. Anthropic has just lost a lawsuit because they used books to train their models without authorisation. The same is true for images, music, video generation... you name it.

We developers are at the same time lamenting the lack of sponsorship from the big corps while using stolen data on a daily basis.

It all feels unsustainable

I am no expert in economics, so please don't take my word for it, but given the already insane amount of revenue generated by AI over the past years and the sheer size of the losses those companies are still making, I find it extremely hard to believe that the current pricing models of AI tooling are sustainable (and I'm not the only one).
A few years back, Netflix cost 12 dollars. The convenience and the size of the catalog made me a very happy user. Eight years down the line, it is now close to 25 dollars (and climbing) and I have to pay for 4 similar services to get the same catalog.

What's more, the insane investment loops being made make me genuinely worried that the whole industry will end up crashing hard.

It's like driving a Ferrari at 250 km/h on the highway: it feels good and exhilarating, but it's genuinely hard not to have a large part of your brain wondering what will happen when a mistake occurs...

It's easy to be lazy

I use AI every day, and it can really be amazing. A pet project that used to take a week now takes half a day. It's also really great at helping me dive into an existing project or domain quickly, or update dependencies for older maintenance projects. It's also a rather good rubber duck (even if it tends to agree with you too much for my taste) and a useful creative companion for project or gift ideas.

What I find difficult though is how easy it can be to fall into the trap of "just do it for me please".

  • This blog post you have to write? "Do it and I'll make some changes."
  • This slide deck that is expected of you? "Oh, I'll just need a couple of hours and AI will do most of the work for me."
  • Darn, I'm stuck on Advent of Code. Claude, do your magic.

It takes effort to stay sharp. And now, we constantly have a backup plan next to us to do our work. It's easier to just let it slip, and it requires more discipline and motivation (from me) to stay relevant.

I am sometimes really glad I am not a student today, because I feel GenAI would have done me a lot of harm back then. The grind is useful to get past a certain level of understanding.
It was always possible: you simply needed to copy someone else's work or use a Mechanical Turk platform. But now it's always available on all your devices, for free. And it turns out, research seems to agree.

Hiring online has become harder

A few months back, I was hiring for a couple of technical positions. I made the mistake of posting the openings on LinkedIn.

LinkedIn has always been a bit hit-and-miss for hiring, but I've had some success in the past finding very relevant people way outside my network (shoutout to my former Adyen team!!). Things seem to be different since the GPT era though...
Within one hour, I had received over 300 responses. Most clearly generated and not a single one relevant to the position. I closed the gates as the requests were still pouring in and decided to use my local network again.
I am unsure exactly what happens here, having not used the platform to search for a job myself, but I believe many people these days use tools like n8n to build automated pipelines generating applications on the fly.

I have to admit I don't quite understand this; I've always been more of the "apply to 2 jobs but go 300% for it" kind of person. What are your chances of getting an interview based on a fully generated application? The current market is quite overcrowded though, so I guess I can still understand why people do this...

And so what now?

This article ended up being much longer than I expected. I don't intend to do any low-key AI bashing here. I am using AI every day myself and am genuinely excited about the possibilities the technology can bring us in the future. I have Claude writing a new pet project for me literally as I am writing this.

I'll write more about it soon, but as a community person, I have never seen such engagement from a community as with AI tooling. There is deep, genuine interest; people experiment a lot and fast, and this creates a lot of positive excitement. Optimists and skeptics alike, close to everyone feels involved.


That being said, I also see that my behaviour has changed a lot over the past couple of years:

  • I mostly lost interest in social media, for the reasons mentioned above. I am almost push only now and it has become infrequent.
  • I am slowly migrating out of GitHub and my pet projects have become private by default.
  • I am much more skeptical of any news I see and verify almost everything I read (which is not a bad thing in itself).
  • I am always "on alert" for anything I see / read / hear to see if it is generated content.
  • Most visibly AI generated content has become a red flag for me and a sign of "low effort". I don't dismiss it but I do alter my expectations about the content / person / company in front of me based on this.

Most interestingly though, I see renewed interest in smaller, gated communities. Physical Meetups, local groups, closed content publications, indie writing, ...
I actually feel like we're coming back to where it all started for me 15 years ago. Locally, with people I know, making genuine connections. AI can't replace that, and in some ways, it might be making me more human again...

To be continued!

]]>
<![CDATA[Using Coolify for subdomain direct redirect without an app resource]]>https://lengrand.fr/using-coolify-for-subdomain-direct-redirect-without-a-app-resource/68c70d1e7acd40276ed5af73Sun, 14 Sep 2025 19:07:33 GMT

You should know by now that I'm a pretty big fan of Coolify. I use it to manage all of my applications.

One of the things I love the most about Coolify is how it automagically handles DNS for you. Write down where you want your app to be hosted and boom, it'll set it up for you. Don't, and it will generate one for you. You can read more about it here.

Deciding where your app will be available is as easy as writing it down

Today, I faced a new use case for the first time: I had a Claude artifact that was hosted somewhere already, but I wanted one of my subdomains to redirect to it.

I know I could do it directly via my registrar's DNS interface, but because all of my apps are handled via Coolify I wanted to keep it as such if possible.

The issue is that to access this domain configuration setting from Coolify, you have to create a resource (git repo, Dockerfile, ...).

Turns out there is another way: the dynamic proxy configuration.

My use case is as follows: I want https://blogprocessor.lengrand.quest/ to redirect to https://claude.ai/public/artifacts/b21d3e9c-b05f-4fe3-923e-63e1144cfff8. To do this:

  • Navigate to Servers > [YourServer] > Proxy > Dynamic Configuration in your Coolify dashboard.
The server proxy dynamic configuration page
  • Create a new YAML file (in my case claude.yaml) with the following structure:
http:
  routers:
    blogprocessor-redirect:
      rule: Host(`blogprocessor.lengrand.quest`)
      service: blogprocessor-redirect-service
      middlewares:
        - blogprocessor-redirect-middleware
      tls:
        certResolver: letsencrypt
  middlewares:
    blogprocessor-redirect-middleware:
      redirectRegex:
        regex: '^https://blogprocessor\.lengrand\.quest/?(.*)'
        replacement: 'https://claude.ai/public/artifacts/b21d3e9c-b05f-4fe3-923e-63e1144cfff8'
        permanent: true
  services:
    blogprocessor-redirect-service:
      loadBalancer:
        servers:
          - url: 'http://127.0.0.1:9999'

(Note: I am using the Traefik proxy here, but you can use Caddy too)

The actual content of the configuration is pretty simple:

  • A middleware that describes what kind of redirection I want
  • A router to be able to handle TLS
  • A (dummy) service, because the router requires both a service and a middleware

I have tried to get rid of the dummy service but couldn't find a way to get the TLS handling without it. The good news however is that I can keep using that file as I create more Claude artifacts without having to create much more configuration.

If you find a way, please let me know!
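To illustrate the "one file, many artifacts" point, a second redirect only needs one extra router and middleware pair under the same sections of the file. This is just a sketch: the `notes` subdomain and the artifact ID below are placeholders, not real values.

```yaml
http:
  routers:
    # ... blogprocessor-redirect as before ...
    notes-redirect:
      rule: Host(`notes.lengrand.quest`)
      service: blogprocessor-redirect-service  # reuse the shared dummy service
      middlewares:
        - notes-redirect-middleware
      tls:
        certResolver: letsencrypt
  middlewares:
    # ... blogprocessor-redirect-middleware as before ...
    notes-redirect-middleware:
      redirectRegex:
        regex: '^https://notes\.lengrand\.quest/?(.*)'
        replacement: 'https://claude.ai/public/artifacts/<another-artifact-id>'
        permanent: true
```

Since Traefik resolves everything within the same `http:` tree, the dummy service only has to be declared once.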

I want to thank Cinzya, one of the moderators of the Coolify server for pointing me in the right direction, she's been super helpful!

]]>
<![CDATA[Extracting moments of interest from Dash Cam footage]]>https://lengrand.fr/extracting-moments-of-interest-from-dash-cam-footage/68c3c7ac7acd40276ed5af03Fri, 12 Sep 2025 08:21:23 GMT

Earlier this year, I bought my very first car. (I managed to spend almost 20 years of my adult life without owning one, isn't that a bit impressive? 😊)

The first thing I did was to buy a dashcam. I went for a rather simple model, but with both a front and back camera. Being a rather new (active) driver, I wanted to make sure that everything that could happen was recorded just in case.


The dashcam itself is great, and it turns out hardly a couple of days go by without an incident that I'd like to keep. People tend to be very dangerous on the road, and I'm honestly surprised that we can give a ton of steel to everyone and send them on their merry way to the same locations every day, on a schedule, without causing more mayhem.

One of the limitations of the dashcam is that there is no easy way to "mark" moments of interest to extract them later. Just like most dash cams, it simply has an SD card inside and continuously records the road in a rolling loop.

My first thought was to shout "RECORD" every time I saw something I wanted to extract later, and post-process the video files searching for those moments. That gave me the opportunity to play with OpenAI's Whisper. But it turns out that the microphone of the dash cam is of mediocre quality, and I also have a bad tendency to play music extremely loud when I drive 😅. It's also extremely power and resource consuming, because it requires transcribing the entirety of the video files before searching for the keyword occurrences.

I gave it another thought and realized that I could quickly build a very efficient poor man's version of the system I wanted using Apple Shortcuts.

Creating a new shortcut

This is what my current process looks like:

  • Create variables with the current date and location
  • Create a string of text with both variables concatenated
  • Append this text to an existing note.
Creating a dedicated shortcut

We constantly talk about AI these days, and I realised that I barely use Siri except to ask it to set an alarm or a timer while cooking. Turns out the Shortcuts application is very powerful: it can open and close applications, run loops and much, much more. It even has a sort of scripting language.

The last thing left to do is to find a good name for the shortcut. I called it "Record incident": it's generic enough, while still very easily understandable by Siri, even over loud music.

Because of the location variable (which I added as a bonus), Apple will ask for additional permissions the first time the shortcut is run, but that's only a good thing in my opinion.

My new shortcut, available

That's it for now! I now have a special note on my Apple devices that lists all of the moments of interest, which I can easily log by voice without being dangerous while driving.

The next step is to write a short script that will parse those values and create small clips from the video stream around those moments.
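As a sketch of that script: assuming the note stores one incident per line in the form `2025-09-12 08:21:23 - <location>` (the exact format depends on how the Shortcut concatenates its variables, and the helper names here are mine), the parsing and clip-cutting could look like this:

```python
from datetime import datetime

# Assumed note format: "2025-09-12 08:21:23 - 52.37,4.89", one incident per line.
NOTE_TIME_FORMAT = "%Y-%m-%d %H:%M:%S"

def parse_incidents(note_text):
    """Extract incident timestamps from the exported note text."""
    incidents = []
    for line in note_text.splitlines():
        line = line.strip()
        if not line:
            continue
        stamp = line.split(" - ")[0]  # drop the location part
        incidents.append(datetime.strptime(stamp, NOTE_TIME_FORMAT))
    return incidents

def build_clip_command(video_path, video_start, incident, pre=30, post=30):
    """Build an ffmpeg command cutting a clip around one incident.

    video_start is the wall-clock time at which the dashcam file begins
    (typically encoded in its filename)."""
    offset = max((incident - video_start).total_seconds() - pre, 0)
    out = incident.strftime("incident_%Y%m%d_%H%M%S.mp4")
    return [
        "ffmpeg", "-ss", str(offset), "-i", video_path,
        "-t", str(pre + post), "-c", "copy", out,
    ]
```

Feeding each returned command to `subprocess.run` would then produce a clip starting 30 seconds before every incident; `-c copy` avoids re-encoding, at the cost of only cutting at keyframes.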

Always fun to discover new capabilities, and find cheap and efficient ways to achieve my goals 😊.

]]>
<![CDATA[Hosting Bugsink on Coolify]]>https://lengrand.fr/hosting-bugsink-on-coolify/686c25eb458af7216de9027aMon, 07 Jul 2025 20:33:23 GMT

TL;DR: There is a bug in Coolify currently preventing the use of the default Bugsink deployment template. This post describes a method to deploy via Docker Compose while waiting for a fix.

Some context

As some of you may know already, I've been moving off the public cloud over the past year and have started deploying my applications on my own server instead, using Coolify. I do this for many reasons, one of them being that I want to support indie projects more, and I really like Andras' open approach to building his company.

Since last summer, I've been following Klaas' journey building Bugsink: a simple but effective error tracker. (Disclaimer: I have been doing some advising work for Klaas and am biased 😊, but my opinion here is 100% honest.)

The main deployment template

Coolify has a module system that you can use to install services from templates. I have learnt a bit about it, since I contributed the first version of the Bugsink template 😊. The template uses a Docker Compose deployment in the background.

Installing Bugsink on Coolify is only one click away

Pretty handy when working on a new project. Pick service, click, use.

The Bugsink blog actually has a Coolify deployment page, but the problem is that the latest version of Coolify has an issue where some environment variables can't get updated properly after the original deployment. This makes it impossible to deploy Bugsink properly in my case.

[Bug]: Bugsink environment variables issue. · Issue #6035 · coollabsio/coolify
Error Message and Logs On bugsink it looks like coolify doesnt update the env variables (just once on the creation of the container). When updating the Domain in the General tab, the FDQN env varia…

While waiting for the bug to be fixed, I dived deeper into the Docker Compose file and ran my own version.

Current deployment

I started from the Bugsink Docker Compose documentation page, but I ran into multiple Coolify-specific issues with the setup:

  • I needed to set up port redirection because I didn't want my DSN address to contain any port (meaning I had to use Coolify's magic variables)
  • I wanted HTTPS to work as expected
  • I also wanted my BASE_URL to be properly recognized by Bugsink, since it refuses any request that is not from an authorized domain.

This is the Docker Compose file that I currently use:

services:
  mysql:
    image: 'mysql:latest'
    restart: unless-stopped
    environment:
      - 'MYSQL_ROOT_PASSWORD=${SERVICE_PASSWORD_ROOT}'
      - 'MYSQL_DATABASE=${MYSQL_DATABASE:-bugsink}'
      - 'MYSQL_USER=${SERVICE_USER_BUGSINK}'
      - 'MYSQL_PASSWORD=${SERVICE_PASSWORD_BUGSINK}'
    volumes:
      - 'my-datavolume:/var/lib/mysql'
    healthcheck:
      test:
        - CMD
        - mysqladmin
        - ping
        - '-h'
        - 127.0.0.1
      interval: 5s
      timeout: 20s
      retries: 10
  web:
    image: bugsink/bugsink
    restart: unless-stopped
    environment:
      - SECRET_KEY=$SERVICE_PASSWORD_64_BUGSINK
      - 'CREATE_SUPERUSER=admin:$SERVICE_PASSWORD_BUGSINK'
      - SERVICE_FQDN_BUGSINK_8000
      - 'DATABASE_URL=mysql://${SERVICE_USER_BUGSINK}:$SERVICE_PASSWORD_BUGSINK@mysql:3306/${MYSQL_DATABASE:-bugsink}'
      - BEHIND_HTTPS_PROXY=true
      - BASE_URL=$BASE_URL
    depends_on:
      mysql:
        condition: service_healthy
    healthcheck:
      test:
        - CMD-SHELL
        - 'python -c ''import requests; requests.get("http://localhost:8000/").raise_for_status()'''
      interval: 5s
      timeout: 20s
      retries: 10

There are only a few noticeable changes:

  • I added the BEHIND_HTTPS_PROXY variable, because you don't want to run a bug tracker on plain HTTP.
  • I separated the generated fully qualified domain name that Coolify uses from the BASE_URL variable.

Now, before deploying, there are two more small manual steps:

  • Set the desired domain name in the Bugsink web settings (you can use the default generated one if you want; I decided to roll my own).
  • Set the BASE_URL environment variable to the same value

Note: for everything to work as expected, you need to align the port Bugsink is running on with the port in Bugsink's domain settings, set the redirection in the Compose file using the FQDN environment variable, and omit the port in the BASE_URL variable.

Testing it all

The last thing to do is to deploy, and send a test error to a project in Bugsink. For this, you can create a project in the application until you hit the SDK setup page.

Using an obfuscated domain name on purpose here :)

Once this is done, you need to trigger an error in a project; Bugsink's documentation explains how very well. There are several ways to do this, but in the past I have faced network and setup issues.

To make sure I am testing only Bugsink and not my application setup, I created a small project called Sentry Error Generator (this application does not log anything, but be aware that pasting your DSN into a third-party site is a sensitive action). Go to the website, enter the setup URL Bugsink gave you and press "generate error". You should see it in Bugsink.
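If you'd rather not enter your DSN into an external website, you can also hand-roll a test event. This is only a sketch, assuming Bugsink's Sentry-compatible "store" endpoint and the standard DSN layout (`https://<key>@<host>/<project-id>`); in a real application you'd simply use sentry-sdk:

```python
import json
import urllib.request
from urllib.parse import urlsplit

def store_url_from_dsn(dsn):
    """Turn a DSN (https://<key>@<host>/<project_id>) into the store
    endpoint URL plus the public key used for authentication."""
    parts = urlsplit(dsn)
    host = parts.netloc.rsplit("@", 1)[-1]  # strip the key, keep host[:port]
    project_id = parts.path.strip("/")
    return f"{parts.scheme}://{host}/api/{project_id}/store/", parts.username

def send_test_event(dsn, message="Hello from my Bugsink test"):
    """POST a minimal error event and return the HTTP status code."""
    url, key = store_url_from_dsn(dsn)
    body = json.dumps({"message": message, "level": "error"}).encode()
    req = urllib.request.Request(url, data=body, headers={
        "Content-Type": "application/json",
        "X-Sentry-Auth": f"Sentry sentry_version=7, sentry_key={key}",
    })
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Calling `send_test_event` with your real DSN should make the event appear in the project's issue list, assuming the endpoint behaves like Sentry's classic store API.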


Congratulations, you are now collecting your application's errors (for free, if you have followed this guide)! Oh, and if your current bug tracker is too expensive and you want a smaller and simpler alternative with no maintenance, Bugsink has some hosted and supported options too (as I've said earlier, I'm biased!).

]]>
<![CDATA[KotlinConf 2025 is a real bowl of fresh air for backend Devs]]>https://lengrand.fr/kotlinconf-2025-is-a-real-bowl-of-fresh-air-for-backend-devs/682f23027cfc0773d1ba372dThu, 22 May 2025 14:34:02 GMT

TL;DR: After years of focus on the compiler and KMP, the JetBrains folks are coming out with tons of announcements for server folks too, and that feels great.

The past few years

You probably know me by now: I've liked Kotlin a lot for a long time. Back in 2019 already, we were the first at ING to move some of our production code from Java to Kotlin. Over the past years though, I've also been vocal online about my worries that all of the love was going towards KMP. And Google deciding to sunset the (non-Android) Kotlin category has just been another pointer in the same direction. I even mentioned this exact thing right after visiting KotlinConf 2023.

My KotlinConf experience
My experience and impressions attending KotlinConf a couple weeks back. And some thoughts on the future of Kotlin.

Even with Java releasing more often and getting more new features every iteration, I'm still much happier using Kotlin. But for pure backend developers, is the developer experience still so differentiating that it makes switching worth it? After all, Java has pretty good records these days. A much better functional programming experience. And virtual threads. The list goes on...

Note: this article is my personal impression based on my usage of the language. Depending on what you're working on, your opinion may vary 😊. I've also included many screenshots from the keynote itself; please watch it for the complete announcements!

Note 2: there have also been lots of great AI-related announcements, but I'll intentionally skip them in this article. I'm generally happy JetBrains picks up the AI wave and innovates while staying open. I'm a mostly happy user of their products myself.

New language features

As a backend engineer, I think the main improvements in Kotlin that made a difference to me over the past years are the time API (back in 2021!) and the K2 compiler. Of course, the K2 compiler is a huge (and needed) improvement that came with Kotlin 2.0. But it also didn't bring a huge amount of new language features.

Now, today's keynote was a VERY welcome change to this. Others will do a complete breakdown better than me, but here are some actual new language features I'm excited about!

  • Name-based destructuring, to ensure I am grabbing the right properties. An actual improvement compared to the current positional version.
Screenshot from Day 1 Keynote: https://www.youtube.com/live/PYAPymKRKVA?si=lKlYZoRzImgKMptb
  • Rich errors: an improvement to error handling, sitting on top of the already great null safety features of the language. Not only do we now have complete return types, but they also get propagated, so we can neatly handle all error cases in a single location.
Screenshot from Day 1 Keynote: https://www.youtube.com/live/PYAPymKRKVA?si=lKlYZoRzImgKMptb
  • Even Kotlin compiler plugins are getting more powerful, with Power Assert offering expressive and clear error messages. Look at this: we're almost topping the already amazing Elm error handling capabilities.
Screenshot from Day 1 Keynote: https://www.youtube.com/live/PYAPymKRKVA?si=lKlYZoRzImgKMptb

This is developer experience like I haven't seen in a long time from the teams at JetBrains, and it genuinely makes me happy.

Tooling ecosystem, and strategic partnerships

I've already mentioned lately that I am a big fan of Kotlin notebooks. I find them generally amazing, because they combine the unique experience of Python notebooks with the gigantic JVM ecosystem (and without Python's horrible dependency management issues).

Julien Lengrand-Lambert 🥑👋 (@jlengrand.bsky.social)
Been doing some data analysis lately and #Kotlin notebooks, together with dataframe, kandy and sqlite are a really freaking powerful combination! Can’t wait to show you the results :)

I was legit 🤯 this morning when I saw that the folks at JetBrains have combined this already amazing experience with the ecosystem that pretty much every JVM dev on the planet uses: Spring!

The screenshots don't do justice to the announcement, so please just go and have a look at the keynote yourself. The latest version of notebooks can attach itself to a running Spring kernel and has access to all of its context. And that even for applications that aren't using Kotlin yet.

Kotlin notebook, straight from a running Spring kernel

Again, this is next level for me in terms of developer experience: combining the flexibility of notebooks for experimentation with the sturdiness of a production-like application. With the kandy and dataframe data manipulation capabilities, I can definitely see this speeding up my day-to-day development.


And of course, JetBrains also announcing a strategic partnership with Spring on stage shows that they clearly intend to give us "corporate" backend developers some serious love.


Growing Kotlin Foundation

For a long time, Kotlin was completely in the hands of JetBrains. This has massively changed over the past years with the creation of the Kotlin Foundation.

As a professional having to pick a technology, it is crucial to look at how promising its future looks. Seeing that Kotlin is jointly supported by several companies, especially companies that aren't making a living off the language, makes it a much safer business case to adopt inside a company.

Seeing two more companies (and none other than Meta) join the foundation gives a great idea of how bright a future Kotlin has 😊.

Kotlin foundation members

The Kotlin-built Java library ecosystem

A few weeks back, I mentioned that for the first time I had found a library that was completely built in Kotlin, but had clear Java compatibility too. If you're interested, I'm talking about KBsky.

Julien Lengrand-Lambert 🥑👋 (@jlengrand.bsky.social)
Interestingly, this is the first time ever that I find a more mature ecosystem of libraries for my need in #kotlin than in #java quite interesting isn’t it. I was looking for ATProto libraries, it seems the main java was got deprecated in favor of a kotlin version? https://github.com/uakihir0/bsky4j

A Java library, built in Kotlin

Today, I learnt that the two most successful JVM AI libraries (the OpenAI Java SDK and the Anthropic Java SDK) are both actually written in Kotlin.

See Keynote for complete info : https://www.youtube.com/watch?v=PYAPymKRKVA

See Keynote for complete info : https://www.youtube.com/watch?v=PYAPymKRKVA

I find this a great sign for the language. Of course, there are far more Java developers out there than people writing Kotlin, and it makes sense for library builders to want to maximize their reach. But them choosing Kotlin to write the libraries speaks volumes about the quality of the language, and is also a great sign for its future.

The official KOTLIN LANGUAGE SERVER

This is, by far, the announcement I am most excited about, and I genuinely didn't see it coming. JetBrains announced their official Kotlin language server today.


For those unaware, the Language Server Protocol "defines the protocol used between an editor or IDE and a language server that provides language features like auto complete, go to definition, find all references etc.". Having an official language server implementation basically means that all editors (from vim to IntelliJ, or SublimeText) can support the language equally well.

The fact that JetBrains is behind Kotlin always made it rather logical for them not to open-source a language server. After all, they're selling their IDEs, and that would pretty much mean competing against themselves. It's also always been one of my biggest issues with the language.

Julien Lengrand-Lambert 🥑👋 (@jlengrand.bsky.social)
Agreed. Absolutely love IntelliJ and the company behind it, but I don’t like have to start the powerhouse all the time for some simple scripting. Let me use IntelliJ for feature full, multiplatform stuff and give me a powerful language server for the simple backend / scripting thing please

Yes, IntelliJ is a great IDE. I use it every single day, have paid for my license for years and I love it. But I simply don't believe in the success of gated products. More and more people use VSCode every day. People now use Cursor. Or SublimeText. Some still use vim. Forcing folks to use a specific IDE just feels wrong and tames the excitement using a new language should bring.

Yes, there have been community-driven language server implementations for Kotlin, but they've always struggled with support. And as a company that literally builds IDEs for a living, JetBrains always felt like the best party to handle this properly.

The main kotlin language server implementation struggling with support

I'm over the moon that JetBrains has the courage to open up and even lead the language server efforts going forward. It shows strong confidence in the future of the language, and that can only help drive adoption across the industry.

ING as use case

As I mentioned already, I was part of the first team to start using Kotlin at ING back in 2019. Five years later, adoption has grown a lot internally, and I was super happy to see JetBrains invite us to be one of the leading industry use cases for their server-side keynote.

Kotlin powers some of our critical services today, and it's a great symbol of how good and powerful the language is. I can't wait to see where we'll be in 5 years.

Simone, one of my colleagues, talking about Kotlin adoption at ING. Well done Simone!

I've always been a fan of Kotlin, and it's been an honour to be able to contribute to the content of the program, no matter how small my contribution was.

Recording the video, behind the scenes

Conclusion

There's much more I'd like to talk about (Amper, klibs, KMP, ...) but I'll keep it to the essentials: it feels to me that with their keynote, JetBrains intentionally started putting backend developers in the spotlight again. They made big, impactful announcements at both the ecosystem and language levels, and I can't wait to try out all of the new goodies.

Well done JetBrains, well done!

]]>
<![CDATA[My experience using Junie for the past few months]]>https://lengrand.fr/my-experience-using-junie-for-the-past-few-months/681f952e7cfc0773d1ba35d2Sun, 11 May 2025 11:38:55 GMT

I've been a big fan of all things JetBrains for a long time: the company itself, for many reasons, but also their high-quality products (and languages, but you know that about me already 😝). As a fan, I jumped at the chance to use Junie (JetBrains' coding agent) as soon as it came out. This post summarises my experience, in no particular order of features.

NOTE: The examples below are a compilation created over a long period of time. They are illustrations of the issues I've met but aren't meant to be exact examples.

Quick intro

Junie is JetBrains' equivalent of Copilot (or Windsurf / Cursor, combined with the IDE). It is only available inside the IntelliJ-based IDEs (Rider, PyCharm, ... included) and requires an additional monthly "AI" fee ($10/month at the time of writing). They have a limited free plan too.

You can install Junie from the Marketplace. Once installed, it appears as a window in your IDE.

The Junie window inside IntelliJ

Junie has 2 modes at the moment :

  • Code : You ask Junie something, it will most likely write code / create / edit files in your project.
  • Ask : In case you don't want changes, but rather want to ask a question or chat with the AI, you can switch to Ask mode. Handy if you want to weigh choices in your project, or ask about potential issues / optimisations.

I'm very glad they added this mode actually. I tend to use the AI a lot to help me with reasoning, and it is very frustrating to have half a dozen files changed when all you're asking is whether a refactoring makes sense.

You can also activate the "brave mode" option, which allows the AI to run commands in your terminal without asking your explicit permission every time. I was a little worried at the beginning, but so far nothing crazy has happened. I'll report the first time Junie decides to $ rm -rf /; that'll surely be fun, and I'll only have myself to blame.

The many AI products, naming and differentiations

I'm a little confused by the different AI products JetBrains is offering at the moment: which are most relevant to me, but also how they differ from each other. I just found out while writing this blog that they actually have a dedicated page for this.

To me, it's still really not clear what the difference between Junie and the AI assistant is. It looks like Junie is the AI assistant, but with more bells and whistles? Except Junie is not customisable (models, prompt library, ...), while the assistant is? Then there's also Mellum, the JetBrains LLM. It feels like I have to learn a whole lot of new product names with little differentiation and it's a bit more than I want to have to remember.

The pricing page with features also basically tells you that you get everything as part of the subscription anyway, which is good news! (Unless you're enterprise, then you're screwed.) Oh, and one thing to mention: pretty cool to see a flat price for the agent, when most of the competition advertises token-based pricing instead!

The features table of the JetBrains AI products package

I understand the market is moving very fast and is honestly quite a mess at the moment, but I'd appreciate a clearer "use me if / don't use me if" section.

Junie is VERY eager to solve my issues

I like to use my coding agent as if I had a junior pair programmer with me. I usually don't ask it for a complete project / solution, but rather first look at a list of things we need to achieve for a particular outcome, before diving into each piece of it separately. That seems to be an optimal way to speed up delivery, while keeping actual control over quality and overall architecture.

In my experience so far, Junie tends to REALLY want to do a lot of work, and I sometimes have a hard time telling it to only do one thing.

I may ask it to create a single test, and I'll have a complete test class created, the implementation will be refactored and it will also go on and upgrade some of my dependencies while it's at it.

It can be frustrating, because it's not what I asked for, and it will take me time to refuse part of the solution... Here is a concrete example where I ask Junie to create a test for a single class method.

Asking to test a method, only to have my implementation changed and refactored

By the way, this was my method at the time 😅. Yup, it was basically empty.


And this is the summary of what Junie did by the end of its quest. Not only did it create a test, it also decided to create the actual implementation of the method. As I'd tell my junior if they were not an AI : very industrious, but not what I asked for buddy....


I refused Junie's solution, and asked it again. This time being more precise about what I DIDN'T want (modifying the implementation):


This time, the results were as expected. But it took me 2 tries, and the request I had to write was actually longer.

EDIT : The awesome Marit Van Dijk mentioned the allowlist on Bluesky, which I didn't know about!

Junie can get some basic stuff wrong

This was a bit surprising to me. I've seen Junie get imports wrong for AssertJ, for example, to the extent that the code wouldn't compile. (Also, I would have liked it to ask me to add AssertJ as a dependency instead of just deciding to start using it 😅.) It's quickly and easily fixed, since it will try to build by itself, fail and iterate. But still, I wonder where that comes from.

For smaller libraries, I've also seen Junie struggle to use basic objects. In the example below, it wants to use reflection and complex shenanigans where a simple instantiation just works out of the box. The actual implementation uses val feature = RichtextFacetMention(). It also tries to access fields using getDeclaredField() where plain accessors are perfectly fine: facet.features = mutableListOf(feature).
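To illustrate the difference, here is a minimal, self-contained sketch in plain Java (simplified stand-in classes, not kbsky's real API) contrasting the reflection detour Junie produced with the direct instantiation and field access that just work:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the kbsky classes mentioned above (hypothetical, for illustration only).
class RichtextFacetMention {
    String did = "did:plc:example";
}

class Facet {
    List<RichtextFacetMention> features = new ArrayList<>();
}

public class ReflectionVsDirect {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws Exception {
        // The detour Junie took: reflection to reach a field that is directly accessible anyway.
        Facet viaReflection = new Facet();
        Field field = Facet.class.getDeclaredField("features");
        field.setAccessible(true);
        ((List<RichtextFacetMention>) field.get(viaReflection)).add(new RichtextFacetMention());

        // What works out of the box: plain instantiation and direct field access.
        Facet direct = new Facet();
        direct.features.add(new RichtextFacetMention());

        // Both end up in the same state; the reflection version is just longer and more fragile.
        System.out.println(viaReflection.features.size() + " " + direct.features.size()); // prints: 1 1
    }
}
```

Beyond being more code, the reflection version trades compile-time checking for runtime failures, which is exactly the kind of output you want to catch in review.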

Junie getting lost with basic POJOs

I'm not exactly sure why this is happening, and I honestly haven't dived very deep into it either. I wonder if it isn't because kbsky is both multiplatform and Java-compatible, and some confusing generated code exists to provide that compatibility.

To fix this, I've had to write some of the code myself so the agent can learn from it. It's become pretty good at it now, but it also means that I don't trust the agent with the code as much as I'd like to 😊.

Trust, but verify

Related, but tangential: it happens very often that Junie does some extra things for me that are completely unrelated to the task at hand. It can be changing some formatting, updating a version, or changing a Gradle option. The changes aren't bad per se, but it means I have to be very careful about checking the output when Junie is done. These changes can also land in places that aren't really covered by tests, making them harder to spot in any other way than a thorough code review.

Here is an example where Junie decided that checking for null fields wasn't enough: the fields also shouldn't be empty. This isn't a bad change per se, but it wasn't really done at the right moment. At least it does inform me at the end of the task. It wouldn't have been welcome if it had slipped through unnoticed, but it's also not terrible. I'm on the fence on this one: I like the changes, but it also makes me feel I can't quite trust the agent. What if those changes had a large impact on my clients?

Junie decides that null isn't enough. Fields also shouldn't be empty.
Junie informs me of the change

Junie can be opinionated about the names / locations of my files

I've had a few times where Junie would take my file's content and decide to place it somewhere else. It's explicit about it too, telling me my file is now deprecated. Even though the agent may be correct overall (maybe my naming isn't great, maybe the file should live somewhere else), I don't like that it makes tracking history more difficult in the future. It also makes reviewing the changes harder. As usual, I'd rather have this done in a separate step, not at the same time as functional changes.

Funnily enough, this can even happen mid task, where Junie moves the file around, finds a solution to the problem and moves everything back. I'd love to know more about the reasoning behind this ^^.

Clearly Junie didn't like the name of my file 😊

I really miss the option to refuse part of the solution

At the moment, what I miss THE MOST, BY FAR about Junie compared to other similar agents is the option to accept / refuse part of a solution. When Junie thinks it's done, it will tell you. You can then decide to accept, refuse or tell it to try again.

Junie waiting for a decision

Now, I may be 90% OK with the solution, but want to remove the Gradle options that it also added at the same time. I once had it add tests, and also a JitPack configuration. I mean, thank you, but no?

Cursor is more fine-grained in that regard, and will let you individually accept / refuse changes :

Accept / Reject feature in Cursor (screenshot courtesy of DataCamp)

When I asked about this on the Discord, the official answer was to use the commit window of IntelliJ. That is a valid but subpar answer imho. It also makes it more difficult for me to check whether the refused changes break the complete task in any way; I have to switch to manual mode for that. There must be a way to do better.

Ask mode responses are great!

The scratch files produced by Ask mode are great. When you chat with Junie in Ask mode, the output is a Markdown file placed in your scratches folder. I really like this for several reasons :

  • The output is structured and readable
  • It is easily shareable
  • I can reuse this as input for later tasks.

I haven't tried this yet, but I also think these files would make a very good start for a decision log if they were placed somewhere together with my source code.

A structured answer to a question I asked Junie

Junie is extremely slow in my experience compared to other tools I've used

This is honestly my main gripe with Junie at the moment. Even for simple-ish requests (think: write a simple method to filter a data class), Junie will take between 3 and 4 minutes to complete. I haven't benchmarked this, but it feels much slower than most other coding agents I've tried.

This is understandable, given how it works. It will first make a plan, verify that plan across many files of the project, create the implementation, make sure the code builds, write tests for the method, and run those tests. It will usually discover a couple of bugs that way, iterate, and keep looping until it finally succeeds. This is pretty much how I, a mere human, would also do it. (This is while using brave mode, by the way.)

My issue with this is that during that whole time, I am not actually actively involved in the process. I will be needed soon to verify the implementation, but because I'm not pair programming and seeing the code being written live, I cannot sit in the "copilot seat". The pilot closes the door of the cockpit and only reopens it when it thinks it's ready.

And that really disrupts my flow. During that time, I cannot quite start doing something else, because I need to keep the context fresh in my head. I also do not want to start checking email / Slack, because I want to stay in the flow. I haven't found a good way yet to use that time in a way that isn't disruptive. Several times, it has actually made me decide not to use Junie at all.

Now that compiling Kotlin is fast again in IntelliJ I wait for Junie instead 😄

When spending 4 to 5 hours in the IDE, this spinning wheel really started grinding my gears after a while 🫤.

Random 401s when leaving the IDE open too long

Not sure whether others experience this too, but Junie regularly loses contact with its servers, and the only way to fix it is to restart IntelliJ completely. I never close my IDE and rarely restart my computer, and after a few days Junie tends to just give me an unknown error that won't go away even when reloading the plugin. Not a huge issue, CMD+Q CMD+Space are only a couple of keystrokes, but still, it's a mild annoyance.

Random errors at times, forcing me to restart the IDE.

The need for structured Junie guidelines?

There are many prompt libraries out there, so I asked on the Junie Discord where people were sharing their Junie guidelines. To my surprise, that doesn't really seem to be a thing today: everyone mentioned they either didn't use any, or that theirs were too project-specific. However, the first thing the Junie guide mentions is to create those guidelines.

This makes me wonder whether there is a need for a Junie guidelines library, or at least a place where people can share how they use this file. Because at the moment I feel like I'm underutilising the tool and I could pick up great ideas from others.

In summary, some personal tips :

  • Use a guidelines file to personalize Junie as you want it.
  • Chat about large pieces of work with Junie, and then ask it to do the work piece by piece for better implementation results.
  • Don't ask Junie to just "upgrade" your project because it will be late on versions. Instead, check first and be specific about what you want.
  • Don't hesitate to switch between ask and code modes consciously, for best results.
  • If you find a way to stay actively in the flow while Junie is doing its thing, please let me know.
  • I really hope Junie becomes faster over time, because during long coding days it's sometimes a make or break situation for me, and I'm likely to start using another agent in the future simply because of this.
]]>
<![CDATA[You probably shouldn't hire a Developer Advocate yet]]>https://lengrand.fr/you-probably-shouldnt-hire-a-developer-advocate-yet/67250be86ff08319f98bb09aSun, 03 Nov 2024 17:46:31 GMT

TL;DR : Many companies searching for Developer Advocates probably shouldn't be hiring yet. More often than not, what they need first is some internal mindset change. And even then, Developer Experience will go a long way to achieve the first stages of a healthy community.


Note : I don't intend to describe what a Developer Advocate does in this post. Others have done it better than I could. This article discusses both internal and external Developer Advocacy challenges.

I've been officially in the world of Developer Relations for about 4 years now, and I've had multiple occasions where people would come up to me and ask for advice on how to find their first Developer Advocate.
And even if there are many ways I could help them with that, my answer after chatting for some time is usually "well, you probably shouldn't". This post lists a few of the reasons that keep coming back, and my thoughts on them.

"Why do you folks want to hire a Developer Advocate?"

The company struggles with talent retention

They struggle to keep their best profiles, who leave for greener pastures. They want to hire a developer advocate to improve this.

This one is always super interesting to me. When digging deeper, it turns out that the reasons are typically pretty clear :

  • It's a shame, we're a great company but we're losing our best people because the competition pays more.
  • Our best elements are leaving because they reach a ceiling and cannot be promoted any more unless they move to management
  • Our developers want to write / speak at conferences / organise events but we don't have budget for them / our policies don't allow it.
  • Our developers build what they're told, it's the product team who decides what to build.

A lot of the issues above cannot be solved directly by a Developer Advocate, and hiring one may even make them worse. Many of the keys needed to solve them lie in the HR realm.
Improve your compensation packages, open up more growth opportunities for your tech profiles, rework your product / tech work relationship, ...
Yes, those may be longer and / or more expensive projects, but believe me, they're better than putting lipstick on a pig. Your engineers are smart, and you'll do more harm than good.

The company struggles with internal upskilling

"Our developers aren't upskilling and staying up to date with market trends and we need someone to bring that knowledge inside and motivate people to learn".

I find that one super interesting. As a developer in a company, I'd be tremendously pissed if someone was paid to go to conferences in my stead and report to me on what I should learn.
Again, I find that sort of discussion fascinating :

  • Do you not have a good internal candidate for this already? Why not?
  • Are your developers not upskilling, or are you simply not aware of it?
  • Humans are typically curious by nature. What is keeping them from learning? Do you have budget / opportunities in place? What does the workload look like?
  • And picking from the amazing "Accelerate" book : do you have a culture of safety and experimentation in place? Do you lead by example? When was the last time you went to a conference or shared knowledge yourself?

The company struggles with hiring "A" players

They want to speed up hiring and capture better profiles.

Typically my first question is "what makes your current profiles not good enough?". And we're usually back to the upskilling / retention discussion. What is an A player anyway?

One thing I consistently discover and find super interesting is that there always seems to be a disconnect between the Marketing team responsible for tech branding and the actual techies on the floor. And the problem only grows with the size of the company.
I've literally been in discussions where people would say "we need to hire influencers in the space", only to answer "really? How about this person in your company who already wrote 2 books on topic X? Or this person who spoke at 4 conferences in the past year? Or this lady who happens to be a Java Champion? Or this other person who literally organises part of the event you sponsor every year, on their own time?"

Sure, you can hire a Developer Advocate to find those internal profiles and nurture them, but it seems cheaper to make sure that whoever is responsible for tech branding is also close to the communities they want to reach, or, even better, part of them 😊.

Another point of discussion I usually raise is: why do you want A players, and what does that even mean? Are your problems hard enough that you will be able to keep them sharp?
I don't know anyone who will openly say that they want to hire less than great developers. Still, not all companies are born equal. Refining what you mean exactly will go a long way in finding the right profiles, and will also help line up expectations for new hires. There's nothing more damaging to your technical brand than advertising something and having people who convert realise they've been fed incorrect information. Especially "A players".

(Consultancy) Company wants to be more visible in the market

This one is actually more frequent than I was expecting. A (consultancy) company, which makes a living selling hours or projects (not products), wants someone to speak at conferences / podcasts / write for them in order to increase their brand visibility (aka thought leadership).

That can make a lot of sense, and there are great examples of this on the market. That being said, it depends on the strategy being used. You want to be more visible, great! But visible to whom? Who is the decision maker at your future customer? What kind of events do those people participate in, where do they get their influences from, and from what kind of people?
A lot of the time, the answer is probably close to C-level or directors. And I would then argue that your own C-level / directors are probably the best placed to create influence outside the company in that setting. They are the ones shaping the vision of the company and spreading it. At least, that's where it should start.

(Small) company has a (technical) product and wants to market it to techies

Now we're getting closer to the realm of what your typical Developer Advocate excels at. The request makes sense, but let's dive into the topic a little deeper.

The question "What kind of activities do you want your Developer Advocate to do?" is usually answered with :

  • Go to conferences / staff conference booths and talk about our products
  • Make videos and blogs about the product / new features
  • Respond to questions on Stack Overflow / Social Media
  • ...

With the typical goal of drastically increasing the number of new accounts created / new developers entering monthly.

Now, that's where I love to dig deeper into the product itself. All the activities described make a lot of sense, but each one also targets a specific part of the product funnel. And before promoting the heck out of a product, you want to know whether people will love it.

Because your real success metric is probably the number of monthly active developers (ideally paying ones). If most of them leak out right after entering, you're going to struggle to be successful.

Many developers simply don't go to conferences, and aren't going to read about you until they have a specific issue to fix. Conferences are also incidentally the most expensive activity one can think of, in straight-up money but also in time!

At this point we're entering the wonderful and exciting world of ✨Developer Experience✨.

  • Do you offer free accounts?
  • Are you using the standards the industry expects in terms of APIs, webhooks, ...?
  • Do you have sample data for folks to get onboard quickly?
  • How accessible is your roadmap?
  • Are you already active in the community of the domain you're in? Do you conform to their expectations? (Example : if you're into APM, do you support OpenTelemetry? If you're doing PostgreSQL, are you an active contributor?)

The outcome of that discussion gives you a pretty clear idea of how far along they are in the process, and how much someone joining them can achieve.

In conclusion

I find all those discussions fascinating. They're all very important problems and they're typically things that executives spend a lot of brain cycles on. What I find most interesting is that very often those people already know what their real issue is. But it seems harder or more expensive to fix the root cause so they want to hire a Developer Advocate instead to tackle the issue.

It's definitely possible to hire a consultant to help you shape your strategy, set up an action plan or more. I find it interesting that "hire a developer advocate" became a kind of catch-all: a superpower that will help solve underlying issues. This could absolutely work, but only if the hire has a direct mandate to solve those issues, which typically cross many departments.

One more thing I notice is that "Developer Advocate" often seems conflated with "Developer Relations" and many of the fields adjacent to it. Developer Education, Developer Experience, Technical Product Management, ... these are all very valid needs, but they're not quite the same as Developer Advocacy.

Not an issue per se, it's all part of the journey of diving into this. Hiring someone without giving them the keys to be successful, in a field that is generally known for being hard to measure, is a waste though (but that's a topic for a whole other post)!


Wanna chat, do you agree, disagree or think you have another challenge? Just hit me up 😊!

Thanks Floor for the insanely fast review

]]>
<![CDATA[Impressions on the remarkable 2, one month in]]>https://lengrand.fr/impressions-on-the-remarkable-2-one-month-in/671ab3d86ff08319f98baf0dFri, 25 Oct 2024 16:44:02 GMT

TL;DR : I love the remarkable for reading articles and annotating books, and I love the computer / website / tablet sync, but I find the ecosystem relatively poor and I still prefer to take my notes on paper. The handwriting-to-text conversion didn't work well for my handwriting.

BTW, this isn't a sponsored post in any way (though do feel free to send your proposals, good folks at Remarkable xD).

I've been eyeing the Remarkable tablet for a very long time. I'm an avid reader, and I still take a lot of notes on paper. I also love to write and highlight things in the books I read, and I've been a Kobo user since pretty much forever. I never bought a Remarkable though, because I just found it way too pricey for the capabilities.

Why now

But when they announced the Remarkable Pro a couple of months ago, I decided it was the right time to give it a shot. Many Remarkable fans jumped on the new product, and a lot of second-hand Remarkable 2s appeared online for relatively cheap.

I bought a 6-month-old, still-under-warranty, close-to-new-condition Remarkable with 2 covers, the pen and pen tips for 375 euros (about a 45% discount off the store price).


First impressions

I like the tablet as much as I was expecting to. The product looks and feels high quality. It's a pleasure to hold the pen. The interface is snappy. The magnets and covers are great, and it transports easily.

Reading

The (active) reading experience is the aspect of the tablet I love the most. I really like my Kobo, especially as it fits in my trouser back pocket, but reading larger-format books or PDFs on it is a pain.

The experience on the remarkable is ... well ... remarkable. The screen is high quality enough that I feel like I'm pretty much reading a paper A4-format book. Highlighting snaps to sentences and feels great when reading. It's also very nice to be able to quickly tag pages, write some notes at the bottom, ... basically the stuff I do with a paper book, but in a way I can reuse more easily later.

I have an iPad as well, but in my experience e-ink displays are just so much more pleasant to read on that the difference is extremely noticeable. I'll talk more about it later, but reading shorter articles / publications is the thing I think the remarkable excels at.


I do dearly miss the presence of a backlight, which any e-reader has had since forever. I can live without it, but being able to read books in the dark is one of the unique selling points of ebooks compared to good old paper, and I find it a shame that it's missing here.

DRMs

One small but relevant thing I learnt the hard way is that the tablet does not support files with DRM. That is logical, given that such content is encrypted for a specific device, but it also means that you cannot buy an ebook just anywhere and expect it to be readable on your tablet. For example, for me in the Netherlands, Bol.com is a no-go. (And in most cases you can't get ebooks reimbursed, given that they're digital products.)

Writing

In my opinion, the writing experience is .... alright. I've tried the iPad / Paperlike combo in the past and didn't quite like it. The remarkable pen at least feels great (it's not smooth), and the tablet is very thin compared to an iPad, so it feels like your hand is resting on a notebook. But it's still a screen, and in my personal experience the input lag is slightly noticeable, which isn't pleasant.

I also don't quite like that the angle at which you hold the pen on the tablet changes the thickness of the stroke. I get it, but the effect is bolder than what I'd get with a real pen and it annoys me.

I'll come back to it later in the UI part, but I'm left-handed and it's SUPER annoying that the "close" button is located basically where your hand rests on the tablet. I was going crazy closing my documents by mistake. Turns out it's a known problem, and the only actual "solution" is to actively hide the UI...

My left hand closing the page (hard to simulate with a phone in my hand :P)

I know my handwriting is pretty bad, so I wasn't having much hope for the handwriting-to-text conversion, though I tried it just for this post. As expected, it's OK, but far from good enough for me to make use of it. I can't blame them completely, my writing is bad. But if I have to care about my handwriting at all times or rewrite half my notes, it's just as good as nothing.

My handwriting
And the automatic conversion

In any case, it's not much of an issue for me, since I don't take notes to remember later, but to remember now.

The UI

I don't dislike the interface of the Remarkable, but I can't say I love it either. It's barebones (which is a good thing in my opinion), but it also feels clunky at times.

(I don't pretend to be a pro, so please correct me if there are obvious things I missed.)

I try to take one note per meeting, and to create a new note I need to click in at least 3 different places in the UI, maybe into a folder too, AND pick a name for my note.

Creating a new note... Not without friction

Any note application I've used in the past 10 years lets you start taking notes first and set things up later.

With Apple Notes, I'm directly productive

I organise my notes in folders, and I constantly need to click in and out of them. As far as I can tell, drag and drop isn't supported.

As I was saying earlier, the UI is useful, but hitting it by accident halfway through a meeting is super frustrating (and distracting).

That being said, I LOVE the two-way sync with the (web and native) applications. It's super nice to just drop anything on the website and have it ready to read in the train (though, obviously, your tablet has to have internet access). In the same way, taking notes all day and seeing them appear in the web environment is pretty cool.

It's 2024, and I would love to have a folder "à la Dropbox" (or even a Dropbox integration like Manning books supports since 2015(!!)) that auto syncs local content from my computer with the tablet and back, but I guess that's a bridge too far 😊.

The ecosystem

I have quite a few things to say about the ecosystem, in the generic sense of the word.

Remarkable connect

First, I find it quite annoying to have to pay for a monthly service on top of a premium piece of hardware that already sets you back 500 euros.

Sure, you don't need to pay for connect; but then you don't get your notes synced up... So you're basically back to having a glorified ebook.

I DO understand that the company needs to make a living, and I'm totally fine with their "50 days" free tier limitation. But come on, at least add the notes to it...

That being said, it's only a few euros a month, so at least we're not talking about the price of yet another Netflix subscription.

I don't use any of the other features of connect, so I can't talk much about them.

Hardware, anyone?

Another gripe of mine is that the only keyboard that Remarkable officially supports is their own 220 euros type folio.

What's more interesting to me is that the tablet is literally equipped with a USB-C connector, and the pins at the back of the tablet are also the pins of a USB connector. Which means that folks have been able to reverse engineer it and 3D print keyboard support themselves.

It looks to me like a conscious decision to sustain lock-in, which is a shame really.

Source: https://codeberg.org/veryapt/remarkable-2-pogo-to-usb-adapter

In the same way, the tablet currently doesn't support Bluetooth (which could have been another way to support input devices). It's not a large issue, but it would have been appreciated.

Integrations

Another thing I find surprising is the relative lack of integrations for the tablet.

I'm an avid Pocket user for example, and synchronising Pocket articles on the Remarkable requires setting up your own sync server (thanks, open source, once more!). This is something Kobo has supported forever, and I was surprised not to find it available. Same for Dropbox, agenda features, Google Drive, or anything really.

(EDIT : Remarkable actually supports Google Drive and Dropbox, but I just had no idea until someone pointed it out to me when reviewing the article. I don't think I received any information about it, and it seems only possible via the website so I never actually saw the option ^^. My other remarks stay valid)

(EDIT2: It's actually crazy, they have a "save articles on your tablet" extension as well, but none of those integrations are visible on the tablet or app themselves, only at the bottom of their website. Completely missed them... Installed and trying it out to see if it can replace pocket in the future).


Turns out everything is possible, but quite literally all of it is informal and made by the community. I hear Remarkable is a small company and we can't expect everything from them; but I'd like them to at least support those efforts in some way, or even better: centralize them.

Templates

Another thing I didn't expect when I started using the tablet is the sheer amount of people using custom page templates. There is quite literally a parallel industry around these. Some of them cost up to 40 euros!

It's always fascinating to me to see how products can create new market opportunities, with value in them. One thing I wished, just like for integrations, is to see Remarkable recognise and support them (whatever that may mean).

A great place of inspiration for me is how Notion offers a gigantic list of community templates for example, but also a directory of Notion partners you can contact (and who make a living of it!).

Tiny sloppy things that accumulate

This is something that has little to do with the product itself, but I find it a good illustration of my general feeling about Remarkable. When logging in to my account with Apple, I got an email telling me they had detected the login, with a link to my account to approve / refuse it.

The remarkable email

The email looks great, and came in under a second. Except the myremarkable.com domain isn't active, and we're basically one step away from being scammed. The actual URL is my.remarkable.com.

When I contacted support to mention it, I received no confirmation that a ticket had been created, only a direct answer that contained no information about my request.


It's well intended, but honestly a bit sloppy and a bad look (and if they don't own myremarkable.com, it could actually be straight-up harmful).

Conclusion

The article is longer than I expected, and I feel like I spent a lot of time describing what I didn't like about the product.

Overall, I love my Remarkable. It's really a GREAT tool to read articles and practise active reading. That being said, it also leaves a sour taste in my mouth, because I feel like there's a huge amount of untapped potential in the ecosystem. And for the price of the product, I find that hard to accept. I paid less than 100 euros for my Kobo, about 7 years ago. I paid about 2/3 of the price of my iPad for this tablet, but my iPad comes with a complete App Store and the whole Mac ecosystem. I hope I can one day say the same about Remarkable.

For now, I'll keep using it to read articles; and write notes on paper like before...

]]>
<![CDATA[Hosting Kotlin applications using Coolify]]>https://lengrand.fr/hosting-kotlin-applications-using-coolify/666817db6ff08319f98bac0cTue, 11 Jun 2024 10:37:11 GMT

TL;DR : With Coolify you can host your Kotlin applications in seconds on your own server and benefit from auto deploys, custom domains, preview branches and more. You can see the code here, and access the sample here.

Lately, I've been increasingly thinking about the fact that all of my applications / experiments are spread across providers (Supabase, AWS, Koyeb, Digital Ocean, ...), and I've been toying with the idea of bringing all of this back onto my own servers. After discovering Hetzner auction servers, I realised that I could have a super beefy server for very cheap and decided to try it out.

Installing Coolify is as simple as running $ curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash on your server.

A sample Kotlin application

For this test, I'll go to the Ktor Starter website and create the simplest application I can think of.

My minimal Ktor application configuration

I'll then unzip the repo and create a GitHub repository from it (using the GitHub CLI; get it if you don't have it yet, it's awesome 😊).

$ unzip ktor-sample-coolify.zip -d ktor-sample-coolify
$ cd ktor-sample-coolify
$ gh repo create . 
## Some setup, and final repository push to GitHub

Once that is done, I can access my repository here.

Creating a Coolify GitHub application

We have to deploy this application to Coolify now. There are several ways to do it, but the most powerful one will be via a GitHub app, we'll see why very soon.

To do this, we'll add a new GitHub app source to Coolify.

The GitHub app dialog

It will then ask us which features we want to activate; we'll be redirected to GitHub to approve the creation of the app, and then asked to select which repositories to apply the application to (I selected them all, but you can also choose to segregate better and only add the one repository we created earlier).

Registering a new GitHub app

Deploying our Ktor application

Now that the connection between Coolify and GitHub is set up, we want to deploy our Ktor application. To do this, we create a new resource and select the Private repository with GitHub option.

Menu for creating a new resource

I'm not going to show you all of the dialogs, but you'll need to select which server to deploy on, which GitHub app to use and then which repository to choose.

Once all of this is done, we'll have access to our deployment configuration. We'll select the main branch for the deployment, and port 8080, which is the default Ktor port.

Basic configuration for our Ktor deployment

Once that is done, we can hit the deploy button. By the way, we can also very much appreciate the fact that Coolify uses sslip.io to generate a domain URL for your app without you having to set up anything (granted, it's not the URL we want, but it's so much better than an IP address and port combination).

sslip.io domain generated for us by Coolify

First roadblock : Invalid Nixpacks start command

One thing that I haven't mentioned yet is that our Ktor sample application does not have any kind of Dockerfile. Nixpacks magically detects which kind of project it is, builds it with Gradle and creates a Docker deployment based on its own inference. I didn't know about this yet, and honestly I was 🤯.

The issue though, is that our deployment fails :

No main manifest attribute found error

Now, that's a very well-known error for any seasoned JVM developer I think 😊. We can spot the issue rather quickly when investigating the logs. Here are the commands that Nixpacks uses to build and start our project :

╔══════════════════════════════ Nixpacks v1.24.1 ═════════════════════════════╗
║ setup │ jdk17, gradle, curl, wget ║
║─────────────────────────────────────────────────────────────────────────────║
║ build │ ./gradlew clean build -x check -x test ║
║─────────────────────────────────────────────────────────────────────────────║
║ start │ java $JAVA_OPTS -jar $(ls -1 build/libs/*jar | grep -v plain) ║
╚═════════════════════════════════════════════════════════════════════════════╝

The issue, however, is that we actually build 2 jars during our build step, and Nixpacks runs the incorrect one in its start phase. This is not a Ktor-only issue by the way, it seems to happen for Spring Boot too.

$ ls -la build/libs
total 30584
drwxr-xr-x   7 julienlengrand-lambert  staff       224 Jun 10 15:23 .
drwxr-xr-x  11 julienlengrand-lambert  staff       352 Jun 10 15:22 ..
-rw-r--r--@  1 julienlengrand-lambert  staff      6148 Jun 10 15:23 .DS_Store
drwx------   7 julienlengrand-lambert  staff       224 Jun 10 15:23 quest.lengrand.ktor-sample-coolify-0.0.1
-rw-r--r--@  1 julienlengrand-lambert  staff      5193 Jun 10 15:21 quest.lengrand.ktor-sample-coolify-0.0.1.jar
drwx------  18 julienlengrand-lambert  staff       576 Jun 10 15:23 quest.lengrand.ktor-sample-coolify-all
-rw-r--r--@  1 julienlengrand-lambert  staff  15638896 Jun 10 15:22 quest.lengrand.ktor-sample-coolify-all.jar

We have 2 ways to fix this :

  • Create a nixpacks.toml to customize the start command
  • Change the Coolify configuration and set the start command to ./gradlew start

I've chosen the latter for simplicity this time.

Custom start command in Coolify

We change, save and press deploy again.
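For the record, the nixpacks.toml option would have been a small file at the repository root that Nixpacks picks up automatically. A minimal sketch (hypothetical — the `*-all.jar` glob is my assumption based on the jar listing above):

```toml
# nixpacks.toml (sketch): override only the start command so it points
# at the fat jar instead of the thin one
[start]
cmd = "java $JAVA_OPTS -jar $(ls -1 build/libs/*-all.jar)"
```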

Second roadblock : Issue with healthchecks

Deployment somehow fails again. This time, it seems to be due to Coolify's automated healthchecks marking the application as unhealthy. And the default behaviour for Coolify is to 404 any traffic to unhealthy applications.

[COMMAND] docker inspect --format='{{json .State.Health.Status}}' xc8g0s0-100357313316
[OUTPUT]
"unhealthy"

[2024-Jun-11 10:06:05.645616]

[COMMAND] docker inspect --format='{{json .State.Health.Log}}' xc8g0s0-100357313316
[OUTPUT]
[{"Start":"2024-06-11T12:05:43.755413473+02:00","End":"2024-06-11T12:05:43.808713519+02:00","ExitCode":1,"Output":""},{"Start":"2024-06-11T12:05:48.809795855+02:00","End":"2024-06-11T12:05:48.8473437+02:00","ExitCode":1,"Output":""},{"Start":"2024-06-11T12:05:53.848150166+02:00","End":"2024-06-11T12:05:53.985646744+02:00","ExitCode":1,"Output":""},{"Start":"2024-06-11T12:05:58.986804475+02:00","End":"2024-06-11T12:05:59.036324085+02:00","ExitCode":1,"Output":""},{"Start":"2024-06-11T12:06:04.037004721+02:00","End":"2024-06-11T12:06:04.07944818+02:00","ExitCode":1,"Output":""}]

[2024-Jun-11 10:06:05.648560] Attempt 10 of 10 | Healthcheck status: "unhealthy"
[2024-Jun-11 10:06:05.651092] Healthcheck logs: (no logs) | Return code: 1
[2024-Jun-11 10:06:05.653988] ----------------------------------------
[2024-Jun-11 10:06:05.656408] Container logs:
[2024-Jun-11 10:06:05.745223] Downloading https://services.gradle.org/distributions/gradle-8.4-bin.zip
............10%............20%.............30%............40%.............50%............60%.............70%............80%.............90%............100%

Welcome to Gradle 8.4!

Here are the highlights of this release:
- Compiling and testing with Java 21
- Faster Java compilation on Windows
- Role focused dependency configurations creation

For more details see https://docs.gradle.org/8.4/release-notes.html

Starting a Gradle Daemon (subsequent builds will be faster)
[2024-Jun-11 10:06:05.748707] ----------------------------------------
[2024-Jun-11 10:06:05.751778] Removing old containers.
[2024-Jun-11 10:06:05.754548] New container is not healthy, rolling back to the old container.
[2024-Jun-11 10:06:06.335355] Rolling update completed.

I haven't found the solution for this just yet (though the container logs above suggest the app was still busy downloading Gradle when the 10 healthcheck attempts ran out, so a longer healthcheck start period might help), so I've decided to disable healthchecks for now. We press deploy again.

Disabling healthchecks

That's it, this time we're in business! If we go to the generated URL, our application answers as expected

Hello world!

Where the magic begins!

Now that our configuration is valid, we can benefit from all the magic that a Fly.io, Koyeb or any other cloud provider can offer us, but on our own terms!

Preview Pull requests

One of the features I love the most is preview deployments for pull requests. We create a new branch with a custom endpoint:

fun Application.configureRouting() {
    routing {
        ...
        get("/mood/{mood}") {
            call.respondText("Are you feeling ${call.parameters["mood"]}?")
        }
    }
}

We commit, push the new branch and create a Pull Request. Coolify will automatically detect this, start a new deployment and generate a new sslip.io URL, all of this while our main deployment is still running! You can see this happen here.

Deployment after a new Pull Request is created
Dedicated URL for the Pull Request, in parallel to the main deployment

And just like expected, you will also have access to information about the deployment directly in your Pull Request:

Coolify keeping me updated about the deployment straight from my PR

Similarly, any merge or commit to main triggers a new deployment, so you can basically have CI/CD with a great Developer Experience, all of that on your own premises.

There you go, I'm feeling happy. 😊

Custom secure domains

This is not the focus of today's article, but adding custom domains to Coolify is also very simple, and the tool takes care of all the SSL certificate setup / renewal for you automagically, so it takes seconds to create shareable domains, including wildcard ones. For example, you can access my app here, here and here too, and all I had to do was change the "Domains" input and restart.


A word of conclusion

You know me by now, I'm a sucker for good DevEx. And I usually love to share my excitement for new tooling, which is why I'm a big fan of tools like Supabase, TinyBird, Koyeb or Digital Ocean.

What impresses me A LOT here though, is that all of this is available for free locally as well, and is mainly developed by a single person. I honestly wish the very best to Andras and will definitely be supporting his work further.

I could deploy a complete application within an hour using lots of tooling I had no experience with, and without having to read any documentation. That allows me to just focus on writing my application, and I just love this.

Now, there's still a lot I want to explore. Obviously, we need to fix those healthchecks. I also want to create a more "production like" application, with a database, observability setup and more, but we'll see this soon, I have total confidence I can figure it out! 😊

Try out Coolify, it's worth it!

]]>
<![CDATA[🎉Celebrating Kotlin 2.0🎉]]>https://lengrand.fr/celebrating-kotlin-2-0/664e67b702fb1c43eb7ff206Wed, 22 May 2024 22:22:53 GMT

Upgrading a simple Kotlin and PicoCLI project to Kotlin 2.0 in under 5 minutes. TL;DR: You can see the diff here.

Introduction

As you may or may not know, KotlinConf will be kicking off tomorrow and Kotlin 2.0 will be announced during the Keynote. I can't be there this year, but I'm celebrating in my own way by upgrading one of my projects tonight 😊. I'll be upgrading SwaCLI, a StarWars CLI demo app I've used in talks to demo the amazing PicoCLI project.

I have been following the whole 2.0 project from very far away, having mostly done Java for the past months, so I really don't know what to expect. Hopefully a smooth experience!

First, this is what SwaCLI does : It lists Star Wars planets or characters, in many different ways. This is the paginated version

When running $ swacli planets for example, it gives us a sorted list of planets.

A paginated list of Star Wars planets

Like any typical Kotlin project, it can be compiled using $ ./gradlew build.

The build.gradle file looks like this:

plugins {
    id 'java'
    id 'org.jetbrains.kotlin.jvm' version '1.4.10'
    id 'org.jetbrains.kotlin.plugin.serialization' version '1.4.10'
}

apply plugin: 'kotlin-kapt'


group 'nl.lengrand'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib"

    implementation 'info.picocli:picocli:4.7.6'
    implementation 'org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.3'

    implementation 'com.github.kittinunf.fuel:fuel:2.3.1'
    implementation 'com.github.kittinunf.fuel:fuel-kotlinx-serialization:2.3.1'

    testImplementation "org.junit.jupiter:junit-jupiter-api:5.10.2"
    testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:5.10.2"

    kapt 'info.picocli:picocli-codegen:4.7.6'
}

compileKotlin {
    kotlinOptions.jvmTarget = "21"
}
compileTestKotlin {
    kotlinOptions.jvmTarget = "21"
}

kapt {
    arguments {
        arg("project", "${project.group}/${project.name}")
    }
}

tasks.register('customFatJar', Jar) {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE

    manifest {
        attributes 'Main-Class': 'nl.lengrand.swacli.SwaCLIPaginate'
    }
    archiveBaseName = 'all-in-one-jar'
    from { configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}

Yup, I haven't touched that project in a looooong time, we're still rocking kotlin 1.4.10 😅.

Rambo style, without reading anything, I'll just update my plugin versions to 2.0.0 and see what happens. I saw it pop up yesterday on Twitter, so I'm curious.

plugins {
    id 'java'
    id 'org.jetbrains.kotlin.jvm' version '2.0.0'
    id 'org.jetbrains.kotlin.plugin.serialization' version '2.0.0'
}

Downloading, and migrations; that's a good sign!

No migration detected

Ok, the first thing I'm seeing is a deprecation warning on kotlinOptions. Let's change that. Without checking the docs, it looks like we want kotlin.compilerOptions now.


Uh, error.

Build file '/Users/julienlengrand-lambert/Developer/swacli/build.gradle' line: 38

A problem occurred evaluating root project 'swacli'.
> Cannot set the value of property 'jvmTarget' of type org.jetbrains.kotlin.gradle.dsl.JvmTarget using an instance of type java.lang.String.

Ohhhhh, looks like our targets have their own enum now! Let's change that!

compileKotlin {
    kotlin.compilerOptions.jvmTarget = JvmTarget.JVM_21
}
compileTestKotlin {
    kotlin.compilerOptions.jvmTarget = JvmTarget.JVM_21
}

Now, that was easy! Haven't touched the docs yet 😬. Let's keep going. We run ./gradlew build again :

 $ ./gradlew build       

> Task :kaptGenerateStubsKotlin
w: Kapt currently doesn't support language version 2.0+. Falling back to 1.9.

> Task :kaptGenerateStubsTestKotlin
w: Kapt currently doesn't support language version 2.0+. Falling back to 1.9.

> Task :kaptTestKotlin
warning: The following options were not recognized by any processor: '[project, kapt.kotlin.generated]'

Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

For more on this, please refer to https://docs.gradle.org/8.7/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.

BUILD SUCCESSFUL in 6s
8 actionable tasks: 8 executed

Ok, build successful, amazing. However, I see that the kapt (Kotlin Annotation Processing Tool) plugin automagically detects Kotlin 2.0, knows it doesn't support it, and falls back to 1.9. Amazing developer experience if you ask me (really, kudos for the gentle fallback!), but it also means we're not running 2.0 just yet 😱.

Fortunately, PicoCLI only uses kapt to generate the docs and CLI options in our tool. Very useful, but I can live without it to try the shiny new version. Let's comment kapt out.

import org.jetbrains.kotlin.gradle.dsl.JvmTarget

plugins {
    id 'java'
    id 'org.jetbrains.kotlin.jvm' version '2.0.0'
    id 'org.jetbrains.kotlin.plugin.serialization' version '2.0.0'
}

//apply plugin: 'kotlin-kapt'

group 'nl.lengrand'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib"

    implementation 'info.picocli:picocli:4.7.6'
    implementation 'org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.3'

    implementation 'com.github.kittinunf.fuel:fuel:2.3.1'
    implementation 'com.github.kittinunf.fuel:fuel-kotlinx-serialization:2.3.1'

    testImplementation "org.junit.jupiter:junit-jupiter-api:5.10.2"
    testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:5.10.2"

//    kapt 'info.picocli:picocli-codegen:4.7.6'
}

test {
    useJUnitPlatform()
}

compileKotlin {
    kotlin.compilerOptions.jvmTarget = JvmTarget.JVM_21
}
compileTestKotlin {
    kotlin.compilerOptions.jvmTarget = JvmTarget.JVM_21
}

//kapt {
//    arguments {
//        arg("project", "${project.group}/${project.name}")
//    }
//}

tasks.register('customFatJar', Jar) {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE

    manifest {
        attributes 'Main-Class': 'nl.lengrand.swacli.SwaCLIPaginate'
    }
    archiveBaseName = 'all-in-one-jar'
    from { configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}

And when we build again:

$ ./gradlew build                                                         

BUILD SUCCESSFUL in 2s
4 actionable tasks: 4 executed

Let's run the tool :

$ java -cp build/libs/all-in-one-jar-1.0-SNAPSHOT.jar nl.lengrand.swacli.SwaCLIPaginate planets tat

$


And just like that, our project is running Kotlin 2.0. Haven't even opened the docs yet, that's a smooth upgrade if I've seen one! You can check the complete PR here.

Of course, I do realise that my project is ultra simple and doesn't contain anything fancy or multiplatform; but it still gives me enough confidence to move forward and keep upgrading my other projects. Remember, I was running 1.4 a couple of minutes ago.

Once more, 🎉congrats to the whole team at JetBrains for the release🎉, and have a lot of fun at KotlinConf tomorrow all of you! I'll be looking into the new stuff over the coming days and probably come back with some nice nuggets!

]]>
<![CDATA[A retrospective on getting solar panels installed]]>https://lengrand.fr/a-retrospective-on-getting-solar-panels-installed/65a6465f02fb1c43eb7ff04dTue, 16 Jan 2024 11:28:03 GMT

As you may know, we're trying many ways to reduce our impact on the environment as a family. For example, we didn't fly for several years, focussing instead on vacations that are easier to reach. We're also growing part of our own food organically, making a lot of our own products and almost exclusively using public transport.

A year ago now, we got solar panels installed on our roof and it's time for a small retrospective.

TL;DR : The installation cost around 5k€ (4k€ after government subsidies), and we were able to generate over 80% of our yearly consumption (using 30% of it directly) and save just short of 1k€ in year 1.

First, our consumption

In this post, we'll only be talking about electricity. Our house heating (and warm water) is done using hot water pipes from the city, so it won't be part of the equation here.

As you may know already, the first thing to do if you are trying to lower your impact on the environment is not to recycle, but to start by reducing your consumption. Yup, it all starts by reducing.


We live in a 100m2 house that is attached on both sides. We do not own an electric car, and we have two kids. In 2022, our yearly consumption was just above 3000kWh. If I believe the data from our energy provider, our monthly consumption is about 45% smaller than that of similar families, which is kind of nice. (If you're interested btw, the average American household consumed 10,632 kilowatt-hours (kWh) of electricity last year 😓😱.)

Our energy usage compared to average families in the Netherlands.

This is all done without many sacrifices btw, we're just using mostly "dumb" appliances and avoiding running the dryer, for example.

The experience, and the setup

Interested in seeing how to further reduce our house's footprint, we invited Energie N at home, a group of volunteers who visit and tell you, based on your house's installation, which optimizations are the most impactful. (If you're searching for help, by the way, I recommend checking for similar groups in your area. They have lots of experience from visiting many houses and it's been extremely helpful.)

For example, I learnt that dry air is cheaper to heat than humid air. Which is very logical if you think about it, but I never had before 🤓.

Based on their recommendations, we decided to look into installing solar panels on our house. Solar panels are extremely common in the Netherlands (more than 25% of households have them now!!), so there's a plethora of companies to choose from. They do everything online, using cadastre and aerial imagery to plan your future installation. Here is how our part of the contract looked, without them even having to come over:


Now, I have to say that not everything is flawless with those methods. As you can see, the space we have to install those panels is limited because of the roof extension in our bedroom. There are also legal requirements for margins on each side of your roof, to avoid taking space off of your neighbour's roof. The first contractor who came over realized that there was a 5cm discrepancy between his plans and our actual roof and had to cancel the installation...

We contracted a second company, which back then had a 16-month waiting time (due to the war in Ukraine and energy prices exploding)! Luckily, a few weeks later another customer in our street cancelled their installation date and we could get our panels installed very fast.

The whole installation took less than 3 hours, and was flawless. Super happy with our contractor.


A year later, the results

We're now a year later. Let's have a look at the results!

We do not have batteries in our installation. The way it works in the Netherlands at the moment is that the energy you generate is either used directly by your house, or sent back to the grid. The energy you send back to the grid is removed from your bill on a 1:1 ratio.

Last year, our 6 panels generated a total of 2393kWh. The cool thing is that you can really see we had a very sunny spring, but a crap summer 😅. I'm curious how it'll compare to next year.


Out of this energy, we sent almost 1600kWh back to the grid.


This means we used just over 30% directly, or about 800kWh. If the conditions of energy delivery were to change, it might actually be worth installing batteries, because we sent back over two thirds of what we generated.

After all this, our leftover grid usage is 2226 - 1591 = 635kWh, which means we generated almost 80% of our total consumption! In terms of financials, we saved just shy of 1k€ this year, meaning that I'd expect our installation to be profitable within the next 4 years if all goes well.
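To make the bookkeeping explicit, here is the same arithmetic as a quick sanity check (all numbers are the ones quoted in this post; the 2226kWh is what we drew from the grid over the year):

```python
# Yearly energy balance, using the figures from this post (kWh)
generated = 2393        # produced by the 6 panels
sent_back = 1591        # returned to the grid
drawn_from_grid = 2226  # taken from the grid

used_directly = generated - sent_back          # consumed as it was produced
net_grid_usage = drawn_from_grid - sent_back   # what is left on the bill
total_consumption = used_directly + drawn_from_grid

print(used_directly)    # 802 (just over 30% of what we generated)
print(net_grid_usage)   # 635
print(round(100 * generated / total_consumption))  # 79 (~80% of consumption)
```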

However, if we


And in terms of CO2 emissions?

This will be a very rough calculation, because I'm not including the footprint of manufacturing the solar panels, and the values I'm giving change all the time, but overall, if we look at the numbers :

On a normal day, 1kWh generates roughly 500g of CO2 (in 2023, the average emissions in the Netherlands were 421 g CO2eq/kWh). Meaning that generating 2393kWh this year cut our CO2 footprint by roughly 1150kg, 400kg of which came from energy we used directly without even touching the grid. Not bad, that's almost 10% of our combined footprint!
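The same back-of-the-envelope calculation, spelled out with the round 500g/kWh figure (it gives ~1200kg, slightly above the ~1150kg quoted above, which presumably uses a factor closer to the 421g average):

```python
# Rough CO2 estimate with the round 500 g/kWh figure from this post
co2_per_kwh = 0.5    # kg CO2eq per kWh (2023 Dutch average: 0.421)
generated = 2393     # kWh produced over the year
used_directly = 802  # kWh consumed without touching the grid

avoided_total = generated * co2_per_kwh       # ~1200 kg avoided in total
avoided_direct = used_directly * co2_per_kwh  # ~400 kg from direct use alone
```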


Next steps and conclusion

Overall, I'm super happy about the whole setup. We're reducing our footprint, while saving money, raising the value of the house and for absolutely no trouble. Hard to do better.

I think we've about done what we could for energy production in the household. The next big thing for us will probably revolve around insulation, looking at getting better windows and more. I'll post about this another time!

Hope you liked this post, maybe it can motivate you too!

]]>
<![CDATA[Visualizing your AAARRP priorities as a way to manage up in your DevRel team]]>https://lengrand.fr/aaarrp-metrics-as-a-way-to-manage-expectations-up/6523b75f02fb1c43eb7fea1bTue, 12 Dec 2023 09:46:10 GMT

TL;DR : We have been using the AAARRRP framework as a visual and easy way to align with stakeholders on which types of activities are the most relevant for our customers. It can look like this:

Possible DevRel priorities focussing on activation, retention and product

Let's dive into it!

Some context

Even though I had been doing Developer Relations for a few years unofficially or semi-officially, it was only two years ago that I got my first official role as a Developer Advocate. And after just a few months, I was offered the chance to lead the team. We're a small team of three (me included) and our team falls under engineering.

It was never really an issue for us to create a strategy on what to achieve, or how to achieve it. We focussed on our 3 pillars (Code, Content and Community) and selected the most relevant activities.

For example, being a B2B company we knew that running the conference circuit wasn't the most efficient way to create value, for the simple reason that people wouldn't be able to sign up for our services. We quickly decided instead to embed ourselves close to documentation and provide many samples for the companies who were using us, bringing a lot of internal feedback at the same time.

But even though we had no doubt about our priorities, it wasn't always as clear for higher management, nor other stakeholders. After all, Developer Relations was a relatively new thing inside the company and only a few people had direct experience with the domain in the past. And we all know that even for companies seasoned with DevRel, measuring impact and making sure activities align are difficult topics.

We received a lot of challenging questions over time regarding our priorities :

  • We see that the insert competitor DevRel team is very present on Youtube. How is it that you are not doing the same?
  • How is it that your Twitter account gets very few likes compared to insert other company?
  • Why do we not see many community contributions on your GitHub account, compared to insert large company?
  • Would you please join us at the booth for insert conference? We want to increase hiring.

All those questions are very fair, and they deserve clear answers. The thing is, the answers were quite obvious to us given the context we had, and it was a struggle for me to communicate them at scale and in a strategic way (understand : find a way that people understand those answers, and hopefully give enough context that they come to the same decisions by themselves).

It's after reading about the AAARRRP framework that a potential solution came to mind 😊.

Quick sideline about priorities / activities

My point with this method is to focus on how to communicate, in a scalable way, the type of activity we plan on focussing on, not the exact activity performed.

For example, we might want people to understand that writing code samples is the most impactful activity we can do.

However, we will not use it to communicate which code sample to write. For this, we might instead use a combination of data sources like documentation statistics, or support tickets for example. I may write more about this later in another article.

A quick intro about the AAARRRP framework

I'm not going to expand much on this here, because I would be ripping off the original article. I recommend you go read it instead if you have never heard of it before.

Here is the very minimal version. As a Developer Relations team, we can operate at several key moments of the customer journey :

Visualizing your AAARRP priorities as a way to manage up in your DevRel team
The AAARRRP framework funnel

Depending on the type of company we're working for and its strategic priorities, some of those parts will be more crucial than others.

Early on, many SaaS products want to focus on awareness and acquisition to drive in as many new customers as possible and grow fast (the famous MAD).

As your product grows in maturity, you may want to increase the "stickiness" of your product and increase retention, or work on activation of additional features for your existing customer base.

Typical DevRel activities that we carry out have an impact on different parts of that funnel (usually several at the same time).

Visualizing your AAARRP priorities as a way to manage up in your DevRel team
Activities and funnel mapping. Original from https://www.leggetter.co.uk/aaarrrp/

Note: The original post also adds weights to the equation in order to select activities. We won't do that here, for simplicity's sake. The whole idea stays valid.

From AAARRRP to strategic communication

Note: All those charts have been altered and do not contain internal information 😊

So far, nothing new under the sun. This is where we go one step further and use the content above for strategic communication.

Let's say that based on the current context, we decide to focus on activation, retention and product.

  • We can draw this as a chart to help communication with stakeholders.
  • We can also draw (rough, given we do not have internal knowledge) equivalent charts for other companies in the space for comparison.
  • We can even draw our own chart year on year, to show how our activities vary as priorities shift, or impact decreases (law of diminishing returns).
  • Now for the key part : We use these charts to validate assumptions with stakeholders, who will be able to derive the corresponding activities themselves.

For example, given the decision above we can draw this:

Visualizing your AAARRP priorities as a way to manage up in your DevRel team
Possible DevRel priorities focussing on activation, retention and product

With those priorities agreed, the common understanding is that we should focus on activities like Code Samples, Guides, Tutorials, and answering Stack Overflow questions.

Again, we can more easily explain why we have a lesser focus on social media and the conference circuit than others by comparing main focuses between entities.

Visualizing your AAARRP priorities as a way to manage up in your DevRel team
possible DevRel team profile from another company focussing more on MAD

Once the shape of our strategy is agreed on, the type of activities expected becomes clearer for everyone and we can focus on the impact of those activities. And if the priorities change, we can always come back to the drawing board 🎉.

Let's imagine now that our company's internal focus heavily shifts towards "closing deals" because we need to extend our runway. It might be time for the DevRel team to start acting on the Revenue part of the funnel, reduce Product efforts and do Pre-Sales activities. We can update the plan, propose a new shape and double check with stakeholders that the change fits the new landscape :

Visualizing your AAARRP priorities as a way to manage up in your DevRel team
DevRel team proposing to shift priorities following a company strategic change

Of course, whether the type of activities to be carried out fits DevRel well, is sustainable long term and so on is another discussion altogether. The main point here is to make your priorities clear and communicate them accordingly.

About the scale of priorities

One question you might ask is : How do you calculate the scale of each priority in your bar chart?

Well, so far our method isn't very scientific 😊. We mostly want to show the relative priorities of activities, so we use a set number of points and share them across the different axes of the chart. For example, pick 70 points (10 per axis) and subdivide. The chart then helps us make decisions quarter to quarter or day to day. Of course, when drawing other teams' charts, all numbers are best effort and based on impressions.
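
As a sketch of that point budget, here's what sharing 70 points across the 7 AAARRRP axes could look like (the individual numbers below are made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical 70-point budget spread across the 7 AAARRRP axes,
// weighted here towards activation, retention and product.
public class PriorityPoints {
    public static void main(String[] args) {
        Map<String, Integer> points = new LinkedHashMap<>();
        points.put("Awareness", 5);
        points.put("Acquisition", 5);
        points.put("Activation", 18);
        points.put("Retention", 16);
        points.put("Revenue", 4);
        points.put("Referral", 4);
        points.put("Product", 18);

        // The fixed total is what keeps charts comparable quarter to quarter.
        int total = points.values().stream().mapToInt(Integer::intValue).sum();
        System.out.println("total=" + total);
    }
}
```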

As we use these graphs more, we're experimenting with other ways to measure that could be more relevant. For example incorporating alignment and relevant weights that the original AAARRRP article mentions.

A word of conclusion

Overall, the only thing we've been doing is taking the AAARRRP framework as an inspiration to clearly visualise what our team focusses on and bring some clarity internally. We've been using it in all of our presentations and internal pages focussing on our team's strategy.

It has helped focus discussions with stakeholders on how much impact we can bring, rather than on what activities to carry out and why, and it's really been useful for us.

Having joined DevRelCon this year I know that internal communication is always a strong topic in DevRel teams. Maybe this can help bring some clarity on the topic!

Cheers, and talk soon.

Julien

]]>
<![CDATA[Creating an OpenAPI generator from scratch : From YAML to JetBrains HTTP Client]]>https://lengrand.fr/creating-an-openapi-generator-from-scratch-from-yaml-to-jetbrains-http-client/6544f72902fb1c43eb7fec8fSat, 04 Nov 2023 17:40:33 GMT

This is the online version of the article with the same name I wrote for the Dutch Java Magazine 😊.

In the previous edition of the magazine, we discussed how the JetBrains HTTP Client can be used to run HTTP queries, automate them and even use them in your CI/CD pipelines. Just like Postman, but text based, and it can be part of your source code. Pretty cool.

For reference, it could look like this.

### Github API - Traffic per day

GET https://api.github.com/repos/{{owner}}/{{repo}}/traffic/views?per=day
Accept: application/vnd.github+json
X-GitHub-Api-Version: 2022-11-28
Authorization: Bearer {{github_key}}

With an environment file that looks like this :

{
  "dev": {
    "github_key": "not_that_easy",
    "owner": "jlengrand",
    "repo": "elm-firebase"
  }
}

Now, that is very nice, but it requires a lot of manual work. Wouldn’t it be nice to be able to automate this? Fortunately, most of us developing APIs also generate OpenAPI Specifications for them. When I looked however, there was no OpenAPI generator yet available for the Jetbrains HTTP Client. This is the story of how I've implemented it from scratch, and how you could too if you find yourself in the same situation! We'll use the JetBrains HTTP Client as a practical example, but the knowledge is transferable 🙂.

The OpenAPI generator project contains a core engine, as well as many packages, each with a specific generator (Java, Ada, …). The JetBrains HTTP Client generator is actually published, and you can find the pull request as well as the documentation on GitHub. If you have installed the latest OpenAPI generator release, you can try it out in a terminal as such :

$ openapi-generator generate -i https://api.opendota.com/api  -g jetbrains-http-client -o dotaClient

At its core, the idea of the OpenAPI generator is quite simple : It takes a specification file (JSON or YAML), transforms it into a set of objects in memory, and uses those objects to generate code / files using mustache template files.
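
To make that pipeline concrete, here is a toy version in plain Java. The real project uses mustache templates; this sketch just substitutes a couple of hypothetical keys by hand:

```java
// Toy illustration of the generator pipeline: a parsed "spec" fed into a template.
public class ToyTemplate {
    public static void main(String[] args) {
        String template = "### {{summary}}\n{{httpMethod}} {{path}}";

        // In the real generator, these values come from the parsed OpenAPI file.
        String rendered = template
                .replace("{{summary}}", "Add a new pet to the store")
                .replace("{{httpMethod}}", "POST")
                .replace("{{path}}", "/pet");

        System.out.println(rendered);
    }
}
```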

You can actually find most of that logic in the DefaultGenerator source file of the library. There, you can see that actions are separated into 3 groups :

  • models (basically data types)
  • operations (actual operations)
  • supporting files (environments, READMEs, …).

Each of those is illustrated by a method, and takes separate objects as inputs :

…
void generateModels(List<File> files, List<ModelMap> allModels, List<String> unusedModels) {…}
…
void generateApis(List<File> files, List<OperationsMap> allOperations, List<ModelMap> allModels) {...}
…
private void generateSupportingFiles(List<File> files, Map<String, Object> bundle) {...}
…

You can find the actual source file on GitHub. The objects for each of those methods are large Map classes that contain the necessary data in a semi-structured format. Here is an example of what allModels looks like:

Creating an OpenAPI generator from scratch : From YAML to JetBrains HTTP Client
a debug view of the allModels object

As you can see, the object is essentially a lot of key/value pairs that are quite recognisable and directly come from the OpenAPI specification file.

To create our own client, we will take advantage of this nice work. Let's dive into it. We first clone the repository :

$ git clone git@github.com:OpenAPITools/openapi-generator.git; cd openapi-generator

We can then use the ./new.sh script to generate a few placeholder files for us. We'll be generating a client, and since we're not creating any bugs, we won't be generating test files.

$  ./new.sh -n java-magazine-client -c

Creating modules/openapi-generator/src/main/java/org/openapitools/codegen/languages/JavaMagazineClientClientCodegen.java
Creating modules/openapi-generator/src/main/resources/java-magazine-client/README.mustache
Creating modules/openapi-generator/src/main/resources/java-magazine-client/model.mustache
Creating modules/openapi-generator/src/main/resources/java-magazine-client/api.mustache
Creating bin/configs/java-magazine-client-petstore-new.yaml
Finished.

The library nicely generates a client generator for us, as well as some template files and even a config so we can test it easily! The config uses the well known petstore by default.

This is what the config file looks like :

generatorName: java-magazine-client
outputDir: samples/client/petstore/java/magazine/client
inputSpec: modules/openapi-generator/src/test/resources/3_0/petstore.yaml
templateDir: modules/openapi-generator/src/main/resources/java-magazine-client
additionalProperties:
  hideGenerationTimestamp: "true"

It nicely tells the OpenAPI generator library which generator to use, which sample OpenAPI file to use as input, where the mustache template files are located and where to store the output.

Let's run it!

$ ./mvnw clean package # package once to have the generator inside the generated jar
$ ./bin/generate-samples.sh bin/configs/java-magazine-client-petstore-new.yaml

Let's see what the generated output looks like :

Creating an OpenAPI generator from scratch : From YAML to JetBrains HTTP Client
a tree view of the generated client

We haven't done any work yet, and our generator is already spitting out things! Unfortunately, as we can see, all those files are empty. That's because our mustache template files are also empty. Let's fix that now!

We'll start by customizing the JavaMagazineClientClientCodegen to fit our needs. We want a very minimal implementation that fits in this article, so we'll decide NOT to implement any supporting files (the README) nor models, and instead focus solely on the API. The way to do this in a custom generator is to override postProcessOperationsWithModels from the CodegenConfig interface. We also change the default .zz extension into .http so the files will be recognised by IntelliJ. And because in this specific (simplistic) case we don't need any alterations to the OperationsMap object, we can simply call the super method. Our final class looks like this :

package org.openapitools.codegen.languages;
import org.openapitools.codegen.*;
import java.io.File;
import java.util.*;
import org.openapitools.codegen.model.ModelMap;
import org.openapitools.codegen.model.OperationsMap;

public class JavaMagazineClientClientCodegen extends DefaultCodegen implements CodegenConfig {

    public CodegenType getTag() { return CodegenType.CLIENT; }

    public String getName() { return "java-magazine-client"; }

    public String getHelp() { return "Generates a java-magazine-client client."; }

    public JavaMagazineClientClientCodegen() {
        super();

        outputFolder = "generated-code" + File.separator + "java-magazine-client";
        apiTemplateFiles.put("api.mustache", ".http");
        embeddedTemplateDir = templateDir = "java-magazine-client";
        apiPackage = "Apis";
    }

    @Override
    public OperationsMap postProcessOperationsWithModels(OperationsMap objs, List<ModelMap> allModels) {
        return super.postProcessOperationsWithModels(objs, allModels);
    }
}

Note: At first glance, the method and variable names may look a bit like magic. That is because most of the logic comes from DefaultGenerator and CodegenConfig. If you feel lost, those two classes are where it's at.

Now that we have our baseline, what we want to do is work on our mustache files. Those files are basically templates that will be fed into the processing pipeline to generate our .http files.

We know we want one file per main API endpoint, with some documentation. We also want the @name unique identifier from the Jetbrains HTTP Client to be able to reference our code. Finally, we want to add the supported content type for the calls.

If we look at the data object available for operations, we end up with this, where each {{item}} notation is replaced by the value of the item key inside the object :

## {{classname}}
{{#operations}}
{{#operation}}

### {{#summary}}{{summary}}{{/summary}}
# @name {{operationId}}
{{httpMethod}} {{basePath}}{{path}}
{{#consumes}}Content-Type: {{{mediaType}}}
{{/consumes}}
{{/operation}}
{{/operations}}

We can see it clearly if we look at the object during processing

Creating an OpenAPI generator from scratch : From YAML to JetBrains HTTP Client
Creating an OpenAPI generator from scratch : From YAML to JetBrains HTTP Client

Note: Unfortunately, to my knowledge the best way to dive into the data model is still to pause in a debugger at runtime; I haven't yet found a complete data model documentation online. If you do, let me know!

Let's rerun the generation and see what we get now :

$ ./mvnw package 
$ ./bin/generate-samples.sh bin/configs/java-magazine-client-petstore-new.yaml
Creating an OpenAPI generator from scratch : From YAML to JetBrains HTTP Client
the result of the generation of our client

Great! Only API files, and one per API, as wanted. Let's see what they contain!

## PetApi

### Add a new pet to the store
# @name addPet
POST http://petstore.swagger.io/v2/pet
Content-Type: application/json
Content-Type: application/xml

### Deletes a pet
# @name deletePet
DELETE http://petstore.swagger.io/v2/pet/{petId}

Looks great to me! Let's try to run one of the calls

Creating an OpenAPI generator from scratch : From YAML to JetBrains HTTP Client
Running one of the calls that's just been generated

It works just fine, and we get a 200 response as well. Success!

Now, there's only one little issue: the variables! In the JetBrains HTTP Client format, variables are written as {{variable}}. OpenAPI paths only use single braces. We need to fix that!

What we'll do here is implement a custom mustache lambda that doubles up the braces when it finds them. The lambda is essentially a string replacement :

public static class DoubleMustacheLambda implements Mustache.Lambda {
    @Override
    public void execute(Template.Fragment fragment, Writer writer) throws IOException {
        String text = fragment.execute();
        writer.write(text
                .replaceAll("\\{", "{{")
                .replaceAll("}", "}}")
        );
    }
}
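
We can verify the replacement in isolation; this standalone snippet applies the same two replaceAll calls to a typical OpenAPI path:

```java
// The brace doubling from DoubleMustacheLambda, applied to a sample path.
public class BraceDoubling {
    public static void main(String[] args) {
        String path = "/pet/{petId}";
        String doubled = path
                .replaceAll("\\{", "{{")
                .replaceAll("}", "}}");
        System.out.println(doubled); // /pet/{{petId}}
    }
}
```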

In order to make it available in our generator, the OpenAPI generator library offers the same mechanism as for the rest : we have to override a ready-made method.

    @Override
    protected ImmutableMap.Builder<String, Mustache.Lambda> addMustacheLambdas() {

        return super.addMustacheLambdas()
                .put("doubleMustache", new JavaMagazineClientClientCodegen.DoubleMustacheLambda());
    }

The code above is added to our JavaMagazineClientClientCodegen class.

Next, we also need to modify our mustache template to apply that lambda at the right location (around the path parameter). If the path contains a variable, its braces will then be doubled :

## {{classname}}
{{#operations}}
{{#operation}}

### {{#summary}}{{summary}}{{/summary}}
# @name {{operationId}}
{{httpMethod}} {{basePath}}{{#lambda.doubleMustache}}{{path}}{{/lambda.doubleMustache}}
{{#consumes}}Content-Type: {{{mediaType}}}
{{/consumes}}
{{/operation}}
{{/operations}}

Et voilà! Running the sample again, we can now use variables as they are meant to be inside IntelliJ!

In the sample below, I'm using the following local environment file :

{
  "dev": {
    "petId": 3
  }
}

There is still a lot more to do with this generator: READMEs, payloads, auth, headers, … But now it's just a matter of updating the mustache files as we want.

I'd love to have a more fleshed-out generator, because together with the HTTP Client CLI it would be an amazing and cheap way to build an automated integration test pipeline and get people running with your API in seconds.

I hope this article made you feel like creating your own OpenAPI generator. The only limit is your imagination! And as you can see, the merging process is actually relatively pleasant, because the volunteers of the project LOVE to see people bring new ideas to life.

Happy to hear your thoughts, as always!

]]>
<![CDATA[[Unit] Testing Supabase in Kotlin using Test Containers - PART 2]]>https://lengrand.fr/unit-testing-supabase-in-kotlin-using-test-containers-part-2/6532e76102fb1c43eb7febc4Fri, 20 Oct 2023 21:16:46 GMT

TL;DR : You can run a full Supabase instance inside Test Containers quite easily. See this repository.

In my last article, I listed a few attempts at running tests against my Kotlin Supabase application. The way the Supabase-Kt library is built makes it hard to mock, and I ended up building a minimal Docker Compose setup that mimicked a Supabase instance.

In this second part, we're gonna push the madness further and actually run a FULL SUPABASE instance locally, still using Test Containers.

Right after I finished pushing my repository last week, I realised that Supabase actually offers a Docker Compose file to self-host their platform. So I decided to push the madness further and see how easy it would be to use that file inside Test Containers. In short : relatively easy.

The setup

The setup isn't actually much different from my homecrafted Docker Compose version. Here it is in its entirety.

A few notable things:

  • I'm relying on a local clone of Supabase, and point a ComposeContainer to the src/test/resources/supabase/docker/docker-compose.yml file.
  • The setup uses an .env file, so I use a dotenv implementation to grab the parameters there and make the code slightly dynamic.
  • I have to run database statements to populate and flush my database in between tests. The Docker Compose setup from Supabase comes with persistent volumes, which need to be accounted for.
  • I don't have tests for those here, but all services (auth, functions, storage, ...) should actually be supported, given that we're running a full local instance.

import io.github.cdimascio.dotenv.dotenv
import io.github.jan.supabase.SupabaseClient
import io.github.jan.supabase.createSupabaseClient
import io.github.jan.supabase.postgrest.Postgrest
import kotlinx.coroutines.runBlocking
import org.junit.jupiter.api.AfterAll
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.BeforeAll
import org.junit.jupiter.api.Test
import org.testcontainers.containers.ComposeContainer
import org.testcontainers.junit.jupiter.Container
import org.testcontainers.junit.jupiter.Testcontainers
import java.io.File
import java.sql.DriverManager


@Testcontainers
class MainKtTest {

    @Test
    fun testEmptyPersonTable(){
        runBlocking {
            val result = getPerson(supabaseClient)
            assertEquals(0, result.size)
        }
    }

    @Test
    fun testSavePersonAndRetrieve(){
        val randomPersons = listOf(Person("Jan", 30), Person("Jane", 42))

        runBlocking {
            val result = savePerson(randomPersons, supabaseClient)
            assertEquals(2, result.size)
            assertEquals(randomPersons, result.map { it.toPerson() })

            val fetchResult = getPerson(supabaseClient)
            assertEquals(2, fetchResult.size)
            assertEquals(randomPersons, fetchResult.map { it.toPerson() })
        }
    }

    companion object {

        private const val DOCKER_COMPOSE_FILE = "src/test/resources/supabase/docker/docker-compose.yml"
        private const val ENV_LOCATION = "src/test/resources/supabase/docker/.env" // We grab the JWT token from here

        val dotenv = dotenv{
            directory = File(ENV_LOCATION).toString()
        }

        private val jwtToken = dotenv["SERVICE_ROLE_KEY"]
        private val dbPassword = dotenv["POSTGRES_PASSWORD"]
        private val db = dotenv["POSTGRES_DB"]

        private lateinit var supabaseClient: SupabaseClient

        @Container
        var container: ComposeContainer = ComposeContainer(File(DOCKER_COMPOSE_FILE))
            .withExposedService("kong", 8000)
            .withExposedService("db", 5432)   // Handy but not required

        @JvmStatic
        @AfterAll
        fun tearDown() {
            val dbUrl = container.getServiceHost("db", 5432) + ":" + container.getServicePort("db", 5432)

            val jdbcUrl = "jdbc:postgresql://$dbUrl/$db"
            val connection = DriverManager.getConnection(jdbcUrl, "postgres", dbPassword)

            try {
                val query = connection.prepareStatement(
                    """
            drop table public.person;
        """
                )

                query.execute() // execute() rather than executeQuery(): DDL returns no result set
            } catch (ex: Exception) {
                println(ex)
            }
        }

        @JvmStatic
        @BeforeAll
        fun setUp() {
            val supabaseUrl = container.getServiceHost("kong", 8000) + ":" + container.getServicePort("kong", 8000)
            val dbUrl = container.getServiceHost("db", 5432) + ":" + container.getServicePort("db", 5432)

            supabaseClient = createSupabaseClient(
                supabaseUrl = "http://$supabaseUrl",
                supabaseKey = jwtToken
            ) {
                install(Postgrest)
            }

            val jdbcUrl = "jdbc:postgresql://$dbUrl/$db"
            val connection = DriverManager.getConnection(jdbcUrl, "postgres", dbPassword)


            try {
                val query = connection.prepareStatement(
                    """
                create table
                    public.person (
                                    id bigint generated by default as identity not null,
                                    timestamp timestamp with time zone null default now(),
                                    name character varying null,
                                    age bigint null
                ) tablespace pg_default;
                """
                )

                query.execute() // execute() rather than executeQuery(): DDL returns no result set
            } catch (ex: Exception) {
                println("Error is fine here. This should actually run only once")
                println(ex) // Might be fine, this should actually run only once
            }
        }
    }
}

To achieve those results, a few manual steps are required. The Docker Compose file provided by Supabase uses container_name parameters, which aren't supported by Test Containers.

I needed to :

  • Clone the Supabase repository locally
  • Copy the env file
  • Run some magic to remove container_name
  • Once in a while, the Supabase repository will have to be pulled

The results

The results are as outrageous as I expected them to be, if not more : all tests run fine, though it takes almost 1 minute to run them. The ComposeContainer starts no less than 12 containers (!!!), so that is to be expected.

Obviously, that setup is not to be used for unit testing. That being said, I find it absolutely freaking cool to be able to recreate your complete environment locally that easily, and I'd definitely consider it an option for bigger integration tests. I have much more confidence in this setup than in my home-brewed Docker Compose file, given that it's provided directly by Supabase. No network needed to run my tests, pretty cool.

[Unit] Testing Supabase in Kotlin using Test Containers - PART 2
Gradle running those massive tests just fine

What more

My original intent was to build a small layer on top of the Docker Compose file, kinda like AtomicJar does it with its modules. It would have been cool to have a simple interface to start a Supabase instance, while providing options for startup scripts, user roles, maybe a new set of credentials, ...

Here is how they describe it for Nginx, for example. I would have loved to have something similar :

@Rule
public NginxContainer<?> nginx = new NginxContainer<>(NGINX_IMAGE)
    .withCopyFileToContainer(MountableFile.forHostPath(tmpDirectory), "/usr/share/nginx/html")
    .waitingFor(new HttpWaitStrategy());

All of the implementations I've seen extend GenericContainer though, not ComposeContainer, so I've decided to hold that off and keep it simple for now.

Could maybe be something for the future, who knows.

In conclusion

That was a fun experiment, in which I've learnt more about Test Containers 😊. I'm as happy as usual with the way Supabase shows love for their users. Providing a seamless Docker Compose setup like this allows for a great experience. And I'm also impressed with Test Containers and how it can run such complex flows without breaking a sweat!

If anything, I'd like them to at least ignore the container_name parameter if possible. I've seen many folks blocked by it, and I can imagine many cases, like this one, where people are not in control of their compose file. I'm not necessarily asking for full support, but an option to ignore it without throwing an exception would be great.

That's it folks, till next time!

]]>
<![CDATA[[Unit] Testing Supabase in Kotlin using Test Containers]]>https://lengrand.fr/unit-testing-supabase-in-kotlin/6526bacc02fb1c43eb7fea20Wed, 11 Oct 2023 22:01:10 GMT

TL;DR : The easiest way I found to test my database service is to mimic Supabase using Docker Compose and Test Containers. Here's the code.

(Also, have a look at Part 2 of that blog here).

In case you don't know it, I'm a big fan of Supabase. I love that they're a viable alternative to Firebase. I love that they're built on top of open-source pieces. I love how innovative they are, and how much they give back to the community. And as you already know, I love Kotlin as well.

Lately, I've been building a side project which consists of a Ktor webapp and uses Supabase-Kt to communicate with the database. I've been looking into ways to test the Kotlin component that interacts with the database, and it's been harder than expected.

In this article, I'll dive into several methods I've been looking into and why I finally decided to go for a Docker Compose / Test Containers solution. You can check the example repository here AND THE FINAL CODE HERE.

What I want to achieve

Let's imagine a minimal code example that contains a Person data class, and wants to save/fetch persons via a SupabaseClient. It can look like this:

@Serializable
data class Person (val name: String, val age: Int)

@Serializable
data class ResultPerson (
    val id: Int,
    val name: String,
    val age: Int,
    val timestamp: String
)

fun main() {
    val supabaseClient = createSupabaseClient(
        supabaseUrl = "",
        supabaseKey = ""
    ) {install(Postgrest)}

    runBlocking {
        savePerson(listOf(Person("Jan", 30), Person("Jane", 42)), supabaseClient)
    }
}

suspend fun getPerson(client: SupabaseClient): List<ResultPerson> {
    return client
        .postgrest["person"]
        .select().decodeList<ResultPerson>()
        .filter { it.age > 18 }
}


suspend fun savePerson(persons: List<Person>, client: SupabaseClient): List<ResultPerson> {
    val adults = persons.filter { it.age > 18 }

    return client
        .postgrest["person"]
        .insert(adults)
        .decodeList<ResultPerson>()
}

The SQL definition of our table looks like this

create table
    public.person (
                    id bigint generated by default as identity not null,
                    timestamp timestamp with time zone null default now(),
                    name character varying null,
                    age bigint null
) tablespace pg_default;

We want to be able to test that our functions behave properly. For the sake of this minimal example, I've decided to filter out all non-adults, but you can imagine any other use case where the functions contain some business logic.
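
For reference, the business rule under test is trivial on its own; here is the same adult filter as a standalone snippet (in Java, purely for illustration):

```java
import java.util.List;

// The same "adults only" rule as in savePerson/getPerson, outside of Supabase.
public class AdultFilter {
    record Person(String name, int age) {}

    static List<Person> adults(List<Person> persons) {
        return persons.stream().filter(p -> p.age() > 18).toList();
    }

    public static void main(String[] args) {
        List<Person> result = adults(List.of(new Person("Jan", 30), new Person("Kid", 12)));
        System.out.println("adults=" + result.size());
    }
}
```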

First attempt: Mock Supabase

When unit testing code that depends on third-party libraries I don't control, my first reflex is to try and mock them.

It stopped being fun really quickly. The Supabase-Kt library makes heavy use of inline functions, and I ended up mocking more and more parts of the library without ever getting a functional test.

The short version is that inline functions cannot be mocked in Kotlin: because they are, as the name indicates, inlined at the call site, there is no function call left at runtime for a mocking library to intercept. So that was the end of that experiment.
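A tiny standalone illustration of the mechanism (the `greet` function is made up for this sketch, not from Supabase-Kt):

```kotlin
// The compiler copies the body of an inline function into each call site,
// so at runtime there is no `greet` call left for MockK to intercept.
inline fun greet(name: String): String = "Hello, $name"

fun main() {
    // After inlining, this line compiles to roughly: println("Hello, " + "world")
    println(greet("world"))  // Hello, world
}
```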

The MainKtTestMock file of my example repository reflects that attempt.

class MainKtTestMock {

    private lateinit var supabaseClient : SupabaseClient

    @BeforeTest
    fun setUp() {

        supabaseClient = mockk<SupabaseClient>()
        val postgrest = mockk<Postgrest>()
        val postgrestBuilder = mockk<PostgrestBuilder>()
        val postgrestResult = PostgrestResult(body = null, headers = Headers.Empty)

        every { supabaseClient.postgrest } returns postgrest
        every { postgrest["path"] } returns postgrestBuilder
        coEvery { postgrestBuilder.insert(values = any<List<Person>>()) } returns postgrestResult
    }

    @Test
    fun testSavePerson(){
        val randomPersons = listOf(Person("Jan", 30), Person("Jane", 42))

        runBlocking {
            val result = savePerson(randomPersons, supabaseClient)
            assertEquals(2, result.size)
            assertEquals(randomPersons, result.map { it.toPerson() })
        }
    }
}

Here's the final error I encountered:

java.lang.IllegalStateException: Plugin rest not installed or not of type Postgrest. Consider installing Postgrest within your supabase client builder
	at io.github.jan.supabase.postgrest.PostgrestKt.getPostgrest(Postgrest.kt:172)
	at MainKtTestMock$setUp$1.invoke(MainKtTestMock.kt:34)
	at MainKtTestMock$setUp$1.invoke(MainKtTestMock.kt:34)

Second attempt: Encapsulate the Supabase Client

My second attempt was to get around the problem by encapsulating the problematic client inside a class of mine that I can then control.

It can be as simple as this:

class DatabaseClient(private val client: SupabaseClient){
    suspend fun savePerson(persons: List<Person>): List<ResultPerson> {
        val adults = persons.filter { it.age > 18 }

        return client
            .postgrest["person"]
            .insert(adults)
            .decodeList<ResultPerson>()
    }
}

And my test can then look like this (see MainKtTestSubclass):

class MainKtTestSubclass {

    private lateinit var client : DatabaseClient

    @BeforeTest
    fun setUp() {
        client = mockk<DatabaseClient>()
        coEvery { client.savePerson(any<List<Person>>()) } returns listOf(ResultPerson(2, "name_2", 2, "timestamp_2"))
    }

    @Test
    fun testSavePerson(){
        val fakePersons = listOf(Person("name_1", 1), Person("name_2", 2))

        runBlocking {
            val result = client.savePerson(fakePersons)
            assertEquals(2, result.size)
        }
    }
}

My main issue now is that I have to specify the expected output every single time. It also just displaces the problem: I don't really have any nice and clean way to check that my business logic works as intended, since I'm mocking it away.

Third attempt: Ktor mock

The main contributor of the project suggested another possible workaround in the GitHub issue I created: mock the internal Ktor engine of the Supabase client.

See MainKtTestMockEngine:

class MainKtTestMockEngine {

    private val supabaseClient : SupabaseClient = createSupabaseClient("", "",) {
        httpEngine = MockEngine { _ ->
            respond(Json.encodeToString(Person.serializer(), Person("name_1", 16)))
        }
    }

    @Test
    fun testSavePerson(){
        val randomPersons = listOf(Person("Jan", 30), Person("Jane", 42))

        runBlocking {
            val result = savePerson(randomPersons, supabaseClient)
            assertEquals(2, result.size)
            assertEquals(randomPersons, result.map { it.toPerson() })
        }
    }
}

This is actually not a bad idea: it's light, and it gets the job done in a clear and readable way. The tests are also fast to run.

My main issue with this method is that to test my business logic, I'd have to dive into the requests received by the mock engine every time, which is a little cumbersome and creates a lot of maintenance.

I do want to investigate it further though.

Proposed solution: Test Supabase database

Now, one semi-obvious solution would be to fire up a test database in Supabase itself and test there!

That'd work. I even do it to test my release deployments!

It has some obvious downsides though:

  • We couldn't be further away from unit tests, since we're testing against the cloud
  • Tests run slower, require internet access, and also require a provisioned database. Cleanup can also be a mess
  • I'd be terrified to run them against the wrong database
  • It uses my bandwidth and projects, which are either limited or something I have to pay for!

Final solution: Docker Compose and Testcontainers

I had one last idea, and that's the one I've decided to stick with for now. It leverages the fact that at its core, Supabase is built on a lot of open-source software. When we're using the Supabase client, we're essentially interacting with a glorified PostgreSQL / postgrest combo!

I've decided to create a Docker Compose setup that mimics the actual Supabase production setup, and to connect to that instead.

A few things had to be taken into account for this to work:

  • I had to redirect all my postgrest calls to /rest/v1, which is the path that Supabase expects. So a GET on /persons should actually be on /rest/v1/persons.
  • postgrest uses JSON Web Tokens for authentication, so we have to set that up as part of the test class.
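Since postgrest validates an HS256-signed JWT, the token used later in the test class could also be minted in code with the JDK's crypto primitives instead of jwt.io. A sketch, where "super-secret-jwt-key" is a placeholder (substitute the PGRST_JWT_SECRET value from your .env file):

```kotlin
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Builds a compact HS256 JWT: base64url(header).base64url(payload).base64url(signature)
fun hs256Jwt(payloadJson: String, secret: String): String {
    val enc = Base64.getUrlEncoder().withoutPadding()
    val header = enc.encodeToString("""{"alg":"HS256","typ":"JWT"}""".toByteArray())
    val payload = enc.encodeToString(payloadJson.toByteArray())
    // Sign header.payload with HMAC-SHA256 using the postgrest JWT secret
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(secret.toByteArray(), "HmacSHA256"))
    val signature = enc.encodeToString(mac.doFinal("$header.$payload".toByteArray()))
    return "$header.$payload.$signature"
}

fun main() {
    // {"role":"postgres"} matches the database user, as in the test class below
    println(hs256Jwt("""{"role":"postgres"}""", "super-secret-jwt-key"))
}
```

The first two segments of the output are fixed by the header and payload; only the signature changes with the secret.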

One last thing to note is that I would have to do more work if I start using any of the other Supabase services (auth, for example).

The Docker Compose setup looks like this:

version: '3'

services:
  ################
  # postgrest-db #
  ################
  postgrest-db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - DB_SCHEMA=${DB_SCHEMA}
    volumes:
      - "./initdb:/docker-entrypoint-initdb.d"
    networks:
      - postgrest-backend
    restart: always

  #############
  # postgrest #
  #############
  postgrest:
    image: postgrest/postgrest:latest
    ports:
      - "3000:3000"
    environment:
      - PGRST_DB_URI=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgrest-db:5432/${POSTGRES_DB}
      - PGRST_DB_SCHEMA=${DB_SCHEMA}
      - PGRST_DB_ANON_ROLE=${DB_ANON_ROLE}
      - PGRST_JWT_SECRET=${PGRST_JWT_SECRET}
    networks:
      - postgrest-backend
    restart: always

  #############
  # Nginx     #
  #############
  nginx:
    image: nginx:alpine
    restart: always
    tty: true
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
      - "443:443"
    networks:
      - postgrest-backend

networks:
  postgrest-backend:
    driver: bridge

Note that a few auxiliary files are needed for this to work. You can find everything in the test/resources folder of the example GitHub repository.

They are:

  • A short nginx.conf file.
  • An SQL file to set up the database (note that in my actual repo, this guy already exists since I need it to set up production :)).
  • A .env file listing all my environment variables.
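For reference, the nginx.conf could be as small as the sketch below. This is an assumed minimal version, not necessarily the repository's exact file: it forwards Supabase-style /rest/v1/* calls to the postgrest service, and the trailing slash on proxy_pass strips the prefix so plain postgrest routes keep working.

```nginx
# Minimal sketch (assumed): map /rest/v1/* onto the postgrest service.
server {
    listen 80;

    location /rest/v1/ {
        # The trailing slash replaces /rest/v1/ with /, so a request for
        # /rest/v1/person reaches postgrest as /person.
        proxy_pass http://postgrest:3000/;
    }
}
```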

Once that is done, you can already run $ docker-compose up -d and run your application against localhost as if you were interacting with the real Supabase. (Don't forget to call $ docker-compose down --remove-orphans -v to kill and delete all containers once you're done.)

To make the magic complete, we're gonna use the power of Testcontainers to run this as unit/integration tests. My final MainKtTestTestContainers test class looks like this:

import io.github.jan.supabase.SupabaseClient
import io.github.jan.supabase.createSupabaseClient
import io.github.jan.supabase.postgrest.Postgrest
import kotlinx.coroutines.runBlocking
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.BeforeEach
import org.junit.jupiter.api.Test
import org.testcontainers.containers.ComposeContainer
import org.testcontainers.junit.jupiter.Container
import org.testcontainers.junit.jupiter.Testcontainers
import java.io.File

@Testcontainers
class MainKtTestTestContainers {

    // The jwt token is calculated manually (https://jwt.io/) based on the private key in the docker-compose.yml file, and a payload of {"role":"postgres"} to match the user in the database
    private val jwtToken = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoicG9zdGdyZXMifQ.88jCdmcEuy2McbdwKPmuazNRD-dyD65WYeKIONDXlxg"

    private lateinit var supabaseClient: SupabaseClient

    @Container
    var environment: ComposeContainer =
        ComposeContainer(File("src/test/resources/docker-compose.yml"))
            .withExposedService("postgrest-db", 5432)
            .withExposedService("postgrest", 3000)
            .withExposedService("nginx", 80)

    @BeforeEach
    fun setUp() {
        val fakeSupabaseUrl = environment.getServiceHost("nginx", 80) +
                ":" + environment.getServicePort("nginx", 80)

        supabaseClient = createSupabaseClient(
            supabaseUrl = "http://$fakeSupabaseUrl",
            supabaseKey = jwtToken
        ) {
            install(Postgrest)
        }
    }

    @Test
    fun testEmptyPersonTable(){
        runBlocking {
            val result = getPerson(supabaseClient)
            assertEquals(0, result.size)
        }
    }

    @Test
    fun testSavePersonAndRetrieve(){
        val randomPersons = listOf(Person("Jan", 30), Person("Jane", 42))

        runBlocking {
            val result = savePerson(randomPersons, supabaseClient)
            assertEquals(2, result.size)
            assertEquals(randomPersons, result.map { it.toPerson() })

            val fetchResult = getPerson(supabaseClient)
            assertEquals(2, fetchResult.size)
            assertEquals(randomPersons, fetchResult.map { it.toPerson() })
        }
    }
}

All of the magic happens at the beginning: we set up a fake Supabase URL and connect to it. Once that is done, we can write our tests as easily as ever, since we're actually interacting with a lightweight Supabase clone! (For reference, the tests take about 2 seconds to run on my machine.)

A word of conclusion

It took me a little while to get all these tests running, but I'm very happy with the final result. It might not be the best solution for production-grade apps, but for my side project the trade-off of running test containers is definitely worth it: I literally have no boilerplate to maintain and can avoid using mocks.

I'll check in the future how far I can extend the Docker Compose setup as I start using more Supabase services. Maybe it would be nice of Supabase to offer such an image themselves, so we could test easily and avoid using the cloud where it's not necessary :).

Check here for part 2 of the article.

]]>