{ "version": "https://jsonfeed.org/version/1", "title": "Curling IO Dev Blog", "home_page_url": "https://curling.io/blog", "description": "Developer insights on building and enhancing Curling IO", "items": [ { "id": "https://curling.io/blog/why-your-curling-club-shouldnt-use-a-cms", "content_html": "

We know that many of our clubs use WordPress or Joomla for their curling websites. These are popular platforms, and for good reason: they're flexible and there's no shortage of tutorials and plugins. But that popularity comes with a serious downside. General-purpose CMS platforms are massive targets for hackers, and the threat has been escalating rapidly. Here's what you need to be aware of.

\n

The security problem is real

\n

WordPress powers roughly 43% of all websites on the internet. That kind of market share makes it the single biggest target for malicious actors. According to Sucuri's annual threat research report, WordPress accounts for over 95% of all hacked CMS platforms they remediate. Joomla and Drupal make up most of the rest.

\n

Patchstack's State of WordPress Security report found 5,948 new WordPress vulnerabilities disclosed in a single year. The vast majority, over 97%, were in plugins and themes rather than WordPress core.

\n

Supply chain attacks

\n

It's not just about unpatched bugs. In June 2024, WordPress.org disclosed that several plugins in the official repository had been compromised. Attackers gained access to developer accounts and pushed malicious updates to trusted plugins with tens of thousands of active installs. The malicious code created unauthorized admin accounts and exfiltrated data.

\n

Even more concerning is a growing pattern where malicious actors purchase legitimate, established plugins from their original developers and then inject malicious code in subsequent updates. Users who have auto-updates enabled, as is generally recommended, receive the compromised version automatically without any indication that the plugin has changed hands.

\n

The Balada Injector campaign, documented by Sucuri, has been exploiting known plugin vulnerabilities since 2017. By their estimates, it has compromised over one million WordPress sites, injecting malicious JavaScript that redirects visitors to scam sites.

\n

In August 2024, a critical vulnerability in the LiteSpeed Cache plugin (used by over 5 million sites) allowed attackers to create admin accounts on any site running the vulnerable version. It was actively exploited in the wild.

\n

Joomla and Drupal aren't immune

\n

Joomla has had its share of critical vulnerabilities. In February 2024, a high severity XSS vulnerability in Joomla's core filter component (CVE-2024-21726) could lead to remote code execution. Proof-of-concept exploit code was published shortly after disclosure. An earlier vulnerability (CVE-2023-23752) that leaked database credentials was still being mass-exploited well into 2024 because so many sites remained unpatched.

\n

Drupal has a better security track record thanks to a more curated extension ecosystem and a dedicated security team, but its complexity means fewer organizations keep it fully patched. The infamous Drupalgeddon vulnerabilities from 2014 and 2018 continued to be used against unpatched sites years later.

\n

AI is making it worse

\n

The rapid advancement of AI agents over the past year has made this problem significantly worse. Automated bot traffic now accounts for roughly half of all internet traffic, with malicious bots making up about a third. Attackers are using AI to discover vulnerabilities, generate exploits, and launch attacks at a scale and speed that wasn't possible before.

\n

A University of Illinois study demonstrated that an AI agent could successfully exploit 87% of known vulnerabilities when given their CVE descriptions. Microsoft and OpenAI confirmed that state-affiliated threat actors are already using LLMs for reconnaissance and scripting attacks. The window between a vulnerability being disclosed and being actively exploited has compressed from days to hours, partly because AI tools help attackers weaponize published CVE information almost instantly.

\n

For a CMS ecosystem like WordPress with thousands of new plugin vulnerabilities disclosed each year, this means every unpatched plugin is now discovered and targeted faster than ever.

\n

Why this matters for curling clubs

\n

Most curling clubs have a small group of volunteers managing their online presence. They don't have a dedicated IT team. They don't have a security budget. When a WordPress plugin introduces a vulnerability, they may not even know about it until their site is defaced, redirecting members to phishing pages, or quietly harvesting payment information.

\n

The maintenance burden is significant even without a security incident:

\n\n

A general-purpose CMS is designed to build any kind of website. That flexibility comes with complexity that most clubs don't need for what is often a fairly simple informational site.

\n

If you insist on using a CMS

\n

Some clubs may have reasons to stick with WordPress or another CMS. If that's the case, treat security as a non-negotiable priority.

\n

If someone else manages your site for you, whether that's a volunteer, a local web developer, or an agency, you should be asking them about each of the following points. If their response is dismissive or vague, that's a red flag. Your club's reputation and your members' data are on the line.

\n

Keep everything updated

\n

Enable automatic core updates and update plugins within 24-48 hours of security releases. Subscribe to security advisories from Wordfence or Patchstack (for WordPress), the Joomla Security Strike Team, or the Drupal Security Team. An unpatched site is a compromised site waiting to happen.

\n

Minimize your attack surface

\n

Use the fewest plugins possible. Every plugin you install is code you're trusting with your site and your members' data. Before installing anything, check when it was last updated, how many installs it has, and whether it has known vulnerabilities. Remove (don't just deactivate) anything you're not using.

\n

Use a web application firewall

\n

Deploy a cloud-based WAF like Cloudflare in front of your site. A WAF can block known exploit patterns before you've had a chance to apply an update, buying you time to patch, and its generic rules sometimes catch novel attacks as well.

\n

Harden authentication

\n

Enforce strong passwords and enable two-factor authentication for all admin accounts. Limit login attempts to prevent brute force attacks. Change the default admin username. These are basic steps that block a surprising number of attacks.

\n

Use managed hosting

\n

If possible, use a managed hosting provider that specializes in your CMS (WP Engine or Kinsta for WordPress, Pantheon for Drupal). These providers handle server-level security, automatic backups, and often include malware scanning. It costs more than bargain shared hosting, but the security and peace of mind are worth it for a volunteer-run organization.

\n

Monitor your site

\n

Set up alerts for unauthorized file changes and unexpected admin account creation. Run regular security scans. Have a plan for what to do if your site is compromised, including who to contact and where your backups are stored.

\n

Consider a simpler alternative

\n

If your curling club website is primarily informational, showing hours, ice schedules, contact info, and news, you may not need a CMS at all. There are two good options:

\n

A static site

\n

Tools like Hugo, Eleventy, or even plain HTML hosted on Netlify or GitHub Pages have virtually no attack surface. There's no database to breach and no server-side code to exploit. The performance is also fantastic since there's no server-side processing or database queries slowing things down.
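To make this concrete, here's a sketch of just how little a static club site needs. Everything below (club name, hours, email) is placeholder content:

```shell
# Create a one-page static site. All content here is placeholder.
mkdir -p clubsite
cat > clubsite/index.html <<'EOF'
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Example Curling Club</title>
  </head>
  <body>
    <h1>Example Curling Club</h1>
    <p>Ice hours: Monday to Friday, 6pm to 10pm</p>
    <p>Contact: info@example.org</p>
  </body>
</html>
EOF
# To publish: commit this folder to a GitHub repository and enable Pages,
# or drag the folder onto Netlify's deploy page. No server, no database.
ls clubsite  # prints: index.html
```

That single file is the entire site: nothing to patch, nothing to exploit.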

\n

AI agents like Claude, ChatGPT, and Gemini have made this remarkably easy. If you're already paying for one of these AI tools, you can ask it to create a static site for your curling club. It will walk you through the entire process, show you mockups, help you pick a hosting provider, and even deploy it for you. Updating the site with new content afterwards is just as simple, and your agent can even handle the integration of our Curling IO widgets and APIs for you.

\n

Make sure you also ask the agent to document what it's built, how it's hosted, and how deployments are done, so the next volunteer can pick up where you left off. Ask it to commit the source code to somewhere like GitHub as well. This might sound like a lot, but the agents will step you through all of it. Claude Code is especially useful for this if you don't mind paying $20 / month (and it's great for other things too).

\n

If you need something bespoke for your club's website that Curling IO doesn't offer, we highly recommend this approach.

\n

Curling IO's built-in website hosting

\n

Every club on Curling IO can enable a hosted website at no extra cost. It's minimalist by design, so it's best suited for clubs that don't need a bunch of bells and whistles.

\n

It includes a customizable landing page, pages, news articles, sponsor sections, and member-only content, all tightly integrated with your registrations, leagues, and bonspiels. There's no plugin ecosystem to exploit, no PHP stack to maintain, and no CMS updates to fall behind on. You can even redirect your existing domain to it.

\n

Either approach eliminates the entire class of vulnerabilities described in this post. Your volunteers' time is better spent running the club than patching software.

", "url": "https://curling.io/blog/why-your-curling-club-shouldnt-use-a-cms", "title": "Why Your Curling Club Shouldn't Use a CMS", "summary": "General-purpose CMS platforms like WordPress and Joomla introduce serious security risks and maintenance burden for curling clubs. Here is what you need to know.", "date_modified": "2026-04-14T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "club-management", "security", "website" ] }, { "id": "https://curling.io/blog/automate-club-management-with-ai", "content_html": "

Imagine you're a club manager setting up next season. You open your AI agent and type (or just say):

\n
\n

Set up early bird pricing for the Tuesday Night League. 15% off if they register before September 1st.

\n
\n

Five seconds later, it's done. No browser tabs, no forms, no clicking through menus. With Curling IO v3, this is something you'll be able to do.

\n

The Agent in Action

\n

The following conversation occurs after you've already logged in to Curling IO and authorized the curling CLI (more on that below):

\n
\n

You: Help me get the new season ready. Set up early bird pricing for the Tuesday Night League, 15% off if they register before September 1st.

\n
\n
Agent
Ran 2 commands
$ curling call admin.items.list --org vancouver
$ curling commands admin.discounts.create

Found the Tuesday Night League (2026-27). Creating the early bird discount now.

Ran 1 command
$ curling call admin.discounts.create --org vancouver '{\"params\": {...}}'

Done. 15% early bird discount on the Tuesday Night League, valid until September 1st. I also see a Monday Mixed and Friday Doubles league for this season. Want early bird pricing on those too?

\n
\n

You: Same deal for Monday Mixed. Skip Friday, that one fills up on its own.

\n
\n
Agent
Ran 1 command
$ curling call admin.discounts.create --org vancouver '{\"params\": {...}}'

Done. Monday Mixed has the same 15% early bird discount. By the way, last season's Tuesday Night League still has 3 people on the waitlist. Want me to send them an email letting them know registration is open for the new season?

\n
\n

You: Yes, do that.

\n
\n
Agent
Ran 2 commands
$ curling call admin.waitlists.list --org vancouver '{\"item_id\": 31}'
$ curling call admin.emails.send --org vancouver '{\"params\": {...}}'

Sent. Three emails went out to the waitlisted curlers from last season's Tuesday Night League, letting them know the 2026-27 season is open with early bird pricing until September 1st.

\n
\n

You: What does our registration look like so far this season?

\n
\n
Agent
Ran 1 command
$ curling call admin.items.list --org vancouver '{\"season\": \"2026-27\"}'

Here's where things stand for 2026-27:

League                Registered  Capacity  Status
Monday Mixed          14          72        Open
Tuesday Night League  23          96        Open
Friday Doubles        52          64        Open

Friday Doubles is already at 81% capacity with no discount. Tuesday and Monday are still early. Want me to keep an eye on these and let you know when any league crosses 80%?

\n
\n

You: Yeah, check in with me weekly on that.

\n
\n

Season setup, targeted outreach, and reporting in a five-minute conversation.

\n

The same pattern works for setting up a full season's events, bulk-adjusting fees across leagues, generating financial reports, or processing waitlists. Anything an admin can do through the web interface, an agent can do through the CLI.

\n

This isn't theoretical. We're building this right now.

\n

How It Works

\n

Behind the scenes, the agent is calling a command-line tool called curling. Any AI agent with shell access can call it: OpenClaw, NemoClaw, Claude Code, Codex.

\n

We considered MCP (Model Context Protocol), a standard for connecting AI models to external tools. We built a working prototype. But a CLI is simpler, more portable, and works with every agent framework, not just MCP-compatible ones. MCP can come later as a thin layer on top.

\n

The CLI connects to the same backend as the Curling IO admin web interface. Every admin operation available in the browser is also available through the CLI. When we add a new feature to the admin, it becomes available to agents automatically with no client update.

\n

Authentication

\n

Before an agent can manage your club, you grant it access. One time, takes about 30 seconds.

\n
curling login
\n

The CLI prints a URL. You open it, log into Curling IO, and approve access. That's it. The CLI stores a token and handles refreshes silently from that point on.

\n

This uses the OAuth 2.0 Device Authorization Grant (RFC 8628), the same standard behind the GitHub CLI, Google Cloud CLI, and AWS SSO. It's well understood by security teams and works in any environment.

\n

After login, the agent discovers which clubs you manage:

\n
curling orgs
\n

And scopes its commands accordingly:

\n
curling call admin.events.list --org vancouver
\n

Who It's For

\n

The real target is an AI agent acting on behalf of a club manager. The CLI's natural users are administrators who are early adopters of AI tools, the ones already using agents to help manage their inbox, draft communications, and automate repetitive work. For them, giving an agent access to curling is the next logical step.

\n

We build it as if a human expert might use it directly. No wizard prompts, no hand-holding. Terse, precise output. Comprehensive --help that rewards reading. Agents are evolving toward human-like behaviour, so building a proper CLI serves both audiences.

\n

The Bigger Picture

\n

Club management software has always been built for humans navigating web interfaces. That's the right design for most users most of the time. But administrators have repetitive, structured work: setting up a new season, bulk-updating pricing, generating reports, approving registration queues. Work that an agent can handle.

\n

Most clubs aren't thinking about this yet, but we believe they will be within the next few years as AI tools and agents become more prominent. In the near term it'll probably be just a handful of technically oriented club managers who give it a try, and once they do, we suspect they'll never go back.

", "url": "https://curling.io/blog/automate-club-management-with-ai", "title": "Automate Club Management With AI", "summary": "Imagine you're a club manager setting up next season. You open your AI agent and type (or just say):", "date_modified": "2026-04-12T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "ai", "v3", "ui", "club-management" ] }, { "id": "https://curling.io/blog/drag-and-drop-registration-forms", "content_html": "

Registration forms in Curling IO have always collected the basics: team name, lineup, skill level, contact info. But every club runs things a little differently. Some need emergency contacts. Others want dietary restrictions for banquet planning. A bonspiel might ask for team contact information while a league doesn't.

\n

In v2, admins can already choose which questions appear and create custom ones. What's new in v3 is control over the layout: where each question sits, how wide it is, and how the form is organized into sections.

\n

How it works

\n

The form builder is a drag-and-drop tool inside the product admin. You start with a panel of available questions on the left and an empty canvas on the right. Drag a question onto the canvas and it becomes part of the registration form. Drag it back to remove it. Reorder by dragging within the canvas.

\n

\"Form

\n

Questions have a defined column width (1, 2, or 3 columns) that controls how they sit in the grid. A one-column question like \"Shoe Size\" takes up a third of the row. A two-column question like \"Street Address\" spans two thirds. A three-column question like \"Notes\" takes the full width. The layout uses a masonry grid, so questions pack together without leaving gaps.
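A sketch of how a grid like this could be expressed in CSS. This is illustrative only: the class names are invented and this is not Curling IO's actual stylesheet:

```css
/* Sketch only: a three-column form grid where each question spans
   its configured width. Class names are invented for the example. */
.form-canvas {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-auto-flow: dense; /* backfill gaps, approximating masonry packing */
  gap: 1rem;
}
.question--one-col   { grid-column: span 1; } /* e.g. "Shoe Size" */
.question--two-col   { grid-column: span 2; } /* e.g. "Street Address" */
.question--three-col { grid-column: span 3; } /* e.g. "Notes" */
```

Plain CSS grid with dense auto-flow gets most of the way there; true masonry placement needs additional layout logic.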

\n

Separators let you break the form into visual sections. Questions above a separator stay above it. Below it, a new section starts fresh.

\n

Each question can be toggled between required and optional. Required questions show an asterisk on the public form and are enforced on submission.

\n

What curlers see

\n

The public registration form renders the layout the admin designed. Questions float their labels above the input when you start typing, keeping the form compact. Hints appear as info icons that expand on hover.

\n

\"Public

\n

The form is responsive. On a phone, everything stacks into a single column. On a tablet, two columns. On a desktop, the full three-column layout shows.

\n

Available questions

\n

The form builder ships with a catalog of predefined questions covering contact info, emergency contacts, medical details, sport-specific questions, apparel sizes, dietary restrictions, and more. Clubs can also create their own custom questions to collect whatever else they need. The list is searchable, and any combination of predefined and custom questions can be used on any product.

\n

Some questions are tied to event settings. If an admin configures a team name label in the event settings, the team name question appears on the form builder canvas automatically and is locked in place. Same for the lineup question when a lineup option is selected. Turn those settings off and the locked questions disappear from the builder.

\n

Gaps and separators are layout tools, not data questions. Add a gap to leave an empty cell in the grid. Add a separator to visually divide the form into sections.

\n

The form builder is part of Curling IO v3. We'll share more v3 features as we get closer to launch.

", "url": "https://curling.io/blog/drag-and-drop-registration-forms", "title": "Custom Registration Forms Are Coming to Curling IO", "summary": "A first look at the drag-and-drop form builder in Curling IO v3. Club admins design registration forms by arranging questions on a visual canvas.", "date_modified": "2026-03-28T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "v3", "registration", "form-builder", "sneak-peek" ] }, { "id": "https://curling.io/blog/live-admin-without-javascript", "content_html": "

Curling IO's admin panel has dozens of pages: product configuration, discount rules, affiliate fees, registrations, event scheduling, reports. In Version 2, every form submission reloads the page. Click save, wait for the round trip. It works, but it's slow when you're editing multiple sections on the same product, and it makes features like live toggle switches or inline editing impossible without sprinkling in JavaScript.

\n

Version 3's admin is a single Lustre server component running on the BEAM. One WebSocket connection, one long-lived Erlang process per session. Every interaction goes over that WebSocket and comes back as a DOM patch. The page never reloads, and there's no client-side JavaScript framework. The entire admin UI is written in Gleam, compiled to Erlang, and runs server-side.

\n
\"Curling
The product registration page: sidebar, breadcrumbs, toggle switches, and form fields, all rendered server-side over a single WebSocket connection. Every toggle, input, and save is a live state update.
\n

What's a Server Component?

\n

Phoenix LiveView popularized this idea: render HTML on the server, send patches to the client over a WebSocket, handle events the same way. The browser becomes a thin rendering layer. Lustre, Gleam's UI framework, has the same concept built in as \"server components.\"

\n

A Lustre server component is an Elm-architecture application (Model → Update → View) that runs as a BEAM process. When the model changes, Lustre diffs the old and new virtual DOM and sends a JSON patch over the WebSocket. The client applies it. Events from the browser (clicks, form submissions, input changes) travel back as JSON. The whole loop takes single-digit milliseconds on a local network.

\n

The client side is a custom element called <lustre-server-component>. You point it at a WebSocket route and it handles everything: creating a shadow DOM, adopting your stylesheets, applying patches, and forwarding events. Lustre ships the client runtime as a single JavaScript file you include in the page.

\n

The Architecture

\n

The admin loads a minimal HTML shell:

\n
<html>
  <head>
    <link rel=\"stylesheet\" href=\"/static/css/app.css\">
  </head>
  <body>
    <lustre-server-component route=\"/ws/admin?url=/en/admin/products\">
    </lustre-server-component>
    <script src=\"/static/lustre-server-component.mjs\"></script>
    <script src=\"/static/js/admin-live.js\" defer></script>
  </body>
</html>
\n

That's the entire HTML the server sends. Everything else renders through the WebSocket.

\n

When the browser opens that page, Lustre's custom element connects to /ws/admin, which upgrades to a WebSocket. On the server side, Mist (the HTTP server) hands the connection to our WebSocket handler, which starts a Lustre runtime:

\n
fn(_connection) {
  let assert Ok(runtime) =
    lustre.start_server_component(admin.app(), flags)

  let self = process.new_subject()
  let selector =
    process.new_selector()
    |> process.select(for: self)

  server_component.register_subject(self)
  |> lustre.send(to: runtime)

  #(SocketState(runtime:, self:), Some(selector))
}
\n

lustre.start_server_component spawns an actor that holds the application state, runs the update function on every message, diffs the view, and emits patches. The register_subject call tells the runtime \"send your patches to this WebSocket connection.\" From here, everything is bidirectional:

\n\n

Each admin session is its own BEAM process. They share nothing: no session store, no pub/sub, no state to coordinate. If one session crashes, the others don't notice. That's the BEAM's process isolation at work.

\n

Navigation Without Page Loads

\n

Server-rendered pages usually reload on every navigation. That kills the experience for an admin panel where you're moving between product sections constantly. We needed client-side navigation semantics (pushState, back button support) without a client-side router.

\n

The solution is about 90 lines of vanilla JavaScript in admin-live.js that bridges the browser's history API with the server component. It handles three flows:

\n

Link clicks. A click listener on the server component intercepts <a> tags, prevents the page load, calls history.pushState, and tells the server about the new URL. Modifier keys (cmd+click), external links, and target=\"_blank\" pass through to the browser normally. The listener uses composedPath() to pierce the shadow DOM boundary and find the actual <a> element.

\n

Back/forward buttons. A popstate listener detects when the user navigates with browser buttons and sends the new URL to the server without pushing to the history stack (since the browser already updated it).

\n

Server-initiated navigation. When a save operation redirects to a different page (like navigating from the edit form to the overview after saving), the server emits a custom event with the new URL. The client intercepts it and calls pushState.

\n

All three flows use the same trick to communicate with the server component: hidden <input> elements inside the shadow DOM. JavaScript sets the input's value to the new URL and dispatches a change event. Lustre's event handler on the server side picks it up. Regular DOM events don't cross the shadow DOM boundary reliably, but events on form elements bubble through the component's internal wiring. We tried custom events first and spent a while debugging before landing on this.
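Condensed, the decision logic for link clicks looks something like this. It's an illustrative sketch, not the actual admin-live.js source; the function name and parameters are invented for the example:

```javascript
// Illustrative sketch of the click-interception decision (not the real
// admin-live.js). Given the click event and the <a> element recovered via
// composedPath(), decide whether to navigate client-side.
function shouldInterceptClick(event, anchor, currentOrigin) {
  if (!anchor || !anchor.href) return false;
  // Modifier keys mean "open in a new tab/window": let the browser handle it.
  if (event.metaKey || event.ctrlKey || event.shiftKey || event.altKey) return false;
  // Explicit new-tab links pass through.
  if (anchor.target === "_blank") return false;
  // External links pass through to a normal page load.
  if (new URL(anchor.href, currentOrigin).origin !== currentOrigin) return false;
  return true;
}

// When the predicate passes, the handler calls history.pushState with the
// new URL and notifies the server component by writing the URL into a
// hidden <input> and dispatching a "change" event on it.
```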

\n

Shadow DOM and Stylesheets

\n

Lustre renders inside a shadow DOM. This gives you proper encapsulation. Styles don't leak in or out. The admin's CSS won't affect the rest of the page, and vice versa. But it means your stylesheets need to get inside the shadow root somehow.

\n

Lustre handles this automatically through adoptedStyleSheets, a browser API that lets shadow roots share stylesheets with the parent document. When the component mounts, it iterates over every stylesheet in the document, and pushes each one into the shadow root's adoptedStyleSheets array. If a stylesheet can't be directly adopted (cross-origin restrictions), it copies the CSS rules into a new stylesheet. If that fails too, it clones the <link> or <style> element into the shadow root. Three strategies, in order of preference.

\n

This worked out of the box for Tailwind and our Basecoat component library. One place it tripped us up: CSS custom properties. We had alert color variants that referenced --success, --warning, and --info CSS variables, but those variables were never defined. The styles adopted fine, but color: var(--success) resolved to nothing because the variable didn't exist. The fix was just adding the variable definitions to :root. CSS custom properties inherit through shadow boundaries (unlike regular CSS rules), so once defined on :root they're available everywhere.
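The fix itself is tiny. A sketch (the variable names come from the incident described above; the color values are placeholders):

```css
/* Define the alert color variables on :root. Custom properties inherit
   across shadow DOM boundaries, so these become visible inside the
   component. The values here are placeholders. */
:root {
  --success: #16a34a;
  --warning: #d97706;
  --info: #2563eb;
}
```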

\n

What Runs on the Server, What Runs on the Client

\n

The split is straightforward. The server handles everything that doesn't need sub-frame visual feedback:

\n\n

The client handles the navigation glue (90 lines of JS) and stylesheet adoption (built into Lustre). That's it. The entire admin UI, including the sidebar, breadcrumbs, form fields, tables, and alert components, is written in Gleam.

\n
\"The
The Curling IO bracket builder. This kind of drag-and-drop interactivity needs client-side rendering.
\n

Things we explicitly don't do in the server component: timers, drag-and-drop, animations, resize observers. Those require client-side feedback loops that would overwhelm the WebSocket. When we build the bracket builder and team drag-and-drop, those will be Lustre \"islands\": small client-side Gleam applications compiled to JavaScript, embedded within specific pages. The server component renders the page; the island handles the interactive widget.

\n

What We Got Wrong Along the Way

\n

Back button infinite loop. Our first navigation implementation used a single NavigateTo message for everything. Server tells client to push URL, client pushes URL. User hits back, browser fires popstate, client sends the old URL to server, server processes it and emits another pushState. Loop. The fix was separating NavigateTo (which pushes to history) from UrlChanged (which doesn't).

\n

Sidebar flicker. When navigating between product pages, we were clearing the product context during the loading state. The sidebar would collapse to its default and then re-expand when the page loaded. We fixed this by persisting the product context on the admin model across navigations within the same product.

\n

Flash messages disappearing. Save operations on discounts and affiliate fees navigate to the list page and show a flash. But we also added \"clear flash on navigation.\" The save set the flash, then called NavigateTo, which cleared it. The fix was to set the flash after the navigation resolved.

\n

What We Got for Free

\n

BEAM's hot code loading applies to server components. When the dev watcher recompiles a module, the running process picks up the new code on the next message. The WebSocket stays connected, the session state is preserved, and the next click or navigation renders the updated view. We didn't build this. We didn't configure it. We noticed it working one day during development and realized the BEAM had been doing it the whole time.

\n

Why Not SSR or a SPA?

\n

Over SSR

\n

The Version 2 admin is server-rendered with full page reloads. It works. But every interaction has visible latency. Toggle a switch, wait for the page to reload to see the result. Navigate between product sections, lose your scroll position.

\n

With the server component, toggling a switch updates the UI immediately (the BEAM process is right there, no network hop to a database and back). Form saves can navigate to a different page with a flash message, and the transition is instant. The sidebar persists across pages because it's part of the same running application.

\n

Over a SPA

\n

From the user's perspective, the server component feels identical to a single-page application. Navigation doesn't reload the page, state persists, and updates are instant. The difference is where the work happens.

\n

A SPA downloads a JavaScript bundle before the user sees anything. A modest React admin with a component library, router, and state management easily runs 200-500KB of JavaScript. Our admin sends ~15KB of initial HTML over the WebSocket and zero application JavaScript (the Lustre client runtime is a generic 10KB script, not application code).

\n

A SPA also needs a REST or GraphQL API to talk to the server. That means designing endpoints, serializing data to JSON, deserializing it on the client, handling loading states, caching, and keeping client and server types in sync. With the server component, the update function has direct access to the full domain and business logic. There's no API ceremony in between. When a user saves a form, the update function writes to the database and returns the new model. The view diffs automatically. The whole admin is one Gleam codebase with state in one place.

\n

Developer experience

\n

Every admin page is a Gleam module with Model, Msg, init, update, and view. The types enforce that pages handle all their messages. Adding a new page means adding a route variant (the compiler tells you everywhere that needs to handle it), a page module, and wiring it into the admin's update function. You don't need a template language or context objects passed through middleware. It's functions all the way down.

\n

The Numbers

\n

For context on what we're working with:

\n\n

There's also a payload advantage over traditional SSR. Every HTTP request carries headers: cookies, content-type, CSRF tokens, accept headers, cache directives. That's typically 1-2KB of overhead on every round trip, in both directions. A WebSocket frame is just the payload, a few bytes of framing around the actual data. An admin session might make hundreds of interactions (toggling switches, navigating between sections, saving forms). Over HTTP, each one pays the header tax. Over the WebSocket, the connection is already established and authenticated. A toggle switch that changes one boolean sends maybe 80 bytes of JSON and gets back a 200-byte patch. The equivalent HTTP POST would cost 2-3KB in headers and redirect overhead alone, before counting the full page re-render on top.

\n

What's Next

\n

The server component covers the 95% of admin pages that are forms, tables, and configuration. The remaining 5% (our bracket builder, drag-and-drop team management, live scoreboards) will be client-side Lustre islands that communicate back to the server through the same WebSocket or separate API calls.

\n
\n

This is Part 8 of the Curling IO Foundation series. Previous: Parallel Tests for Free.

", "url": "https://curling.io/blog/live-admin-without-javascript", "title": "LiveView Admin Panel w/ Gleam and Lustre", "summary": "Curling IO's admin panel has dozens of pages: product configuration, discount rules, affiliate fees, registrations, event scheduling, reports. In Version 2, every form submission reloads the page. Click save, wait for the round trip. It works, but it's slow when you're editing multiple sections on the same product, and it makes features like live toggle switches or inline editing impossible without sprinkling in JavaScript.", "date_modified": "2026-03-26T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "foundation", "gleam", "beam", "lustre", "architecture" ] }, { "id": "https://curling.io/blog/21-reasons-ai-agents-love-gleam", "content_html": "

Fair warning: this post contains some opinions that are going to be controversial and may not age well. Here be dragons.

\n

AI coding agents like Claude Code, OpenAI Codex, and Google Gemini can write code, run it, read the errors, and try again. That loop is the whole game. The faster and more informative that loop is, the more useful the agent becomes. After building Curling IO Version 3 in Gleam alongside AI coding agents, I'm convinced Gleam is the best language for this workflow. Not because agents write better Gleam - there's less training data, so they don't. It's because Gleam's compiler lets agents self-correct without waiting for a human.

\n

The Feedback Loop That Matters

\n

Every AI coding agent works the same way: write code, check if it works, fix what's broken, repeat. The quality of that \"check if it works\" step determines everything.

\n

In a dynamically typed language, \"check if it works\" means running the test suite. Tests take time, they might not cover the thing that's actually broken, and they're code too - every test you add increases the complexity of your project. And many bugs don't surface until runtime, sometimes much later, in production. The agent writes code that looks correct, you review it, it looks correct to you too, and then a user hits a nil error at 2am.

\n

In Gleam, \"check if it works\" means compiling. That takes a few seconds. When compilation fails, the error messages are specific: here's the file, here's the line, here's what you wrote, here's what was expected. The agent reads that, fixes it, and compiles again. A few rounds of this and the code is structurally sound.

\n

What the Compiler Catches

\n

No Nulls

\n

Gleam has no null. Optional values are represented as Option(T), which is either Some(value) or None. You can't accidentally dereference a nil. The compiler forces you to handle both cases. Null-related errors are among the most common production exceptions: one study of over 1 billion error events found them in 70% of production environments.

\n
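A minimal example of what "forced to handle both cases" means (display_email is a hypothetical function, not from our codebase):

```gleam
import gleam/option.{type Option, None, Some}

pub fn display_email(email: Option(String)) -> String {
  // deleting the None branch is a compile error, not a runtime surprise
  case email {
    Some(address) -> address
    None -> \"no email on file\"
  }
}
```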

When an agent writes code in JavaScript, it has to remember to check for null everywhere. It doesn't always remember. Gleam removes the possibility entirely.

\n

Null references are famously the billion dollar mistake, and they're not just a theoretical concern. In June 2025, a single null value in a database field cascaded through Google Cloud's Service Control system and took down multiple GCP and Workspace products worldwide for hours. The null hit a code path with no error handling, replicated globally in seconds, and the resulting outage took nearly three hours to fully resolve. Gleam's Option type would have forced the developer or the agent to handle the missing case before the code compiled. It doesn't prevent every failure in a chain like that, but it removes the specific class of bug that triggered it.

\n

There's a security angle too. Unhandled nulls can leave an application in unexpected states - authentication checks skipped because a nil slipped through, or data exposed through an error page that should never have been reached. Every error class the compiler eliminates is attack surface the agent can't accidentally introduce.

\n

Structural Changes

\n

Here's a real example from our codebase. Say the agent adds a new field to a type:

\n
pub type Listing {
  Listing(
    id: Int,
    name: String,
    capacity: Int,
    waitlist_enabled: Bool, // new field
  )
}
\n

In JavaScript, adding a property to an object changes nothing until runtime. If some template or API endpoint doesn't include the new field, you won't know until that code path executes. Your test suite might catch it. Or it might not, if coverage is incomplete.

\n

In Gleam, every function that constructs or destructures a Listing now fails to compile. The compiler lists every location that needs updating. The agent works through the list, updates each one, and compiles clean.

\n

This extends to every structural change: renaming a field, changing a type from String to Int, adding a variant to a union type. Agents handle this well. Humans forget things in lists.

\n

Exhaustive Pattern Matching

\n

Gleam's compiler requires that pattern matches cover every possible case. If you match on a Result type, you handle both Ok and Error. If you match on a custom union type with four variants, you handle all four. Miss one and the compiler tells you.

\n

Say you have a payment status type:

\n
pub type PaymentStatus {
  Pending
  Completed
  Refunded
  Failed
}
\n

If someone adds a Disputed variant, every case expression matching on PaymentStatus across the entire codebase will fail to compile until it handles Disputed. In a dynamically typed language, the agent writes a switch or if chain that handles the common cases and forgets the edge case. That's a runtime error waiting to happen. In Gleam, the compiler catches it before the code ever runs.

\n
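To make that failure mode concrete, here's a hypothetical case expression over that type; add Disputed to PaymentStatus and this function stops compiling until the new branch exists:

```gleam
pub fn describe(status: PaymentStatus) -> String {
  case status {
    Pending -> \"Awaiting payment\"
    Completed -> \"Paid\"
    Refunded -> \"Refunded\"
    Failed -> \"Payment failed\"
  }
}
```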

Simple Syntax, Fewer Ways to Go Wrong

\n

Gleam is a small language. There's one way to define a function, one way to handle errors and optional values (Result and Option). No exceptions, no implicit conversions, no macros.

\n

This matters for agents because smaller decision space means fewer wrong decisions. When there are six ways to do something, the agent has to pick one, and it might not pick the idiomatic one. In Gleam, there's usually one way. The agent doesn't need to know the community's style preferences or the codebase's conventions for error handling. The language already decided.

\n
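For example, the one idiomatic shape for a fallible function (parse_capacity is an invented name for illustration):

```gleam
import gleam/int
import gleam/result

pub fn parse_capacity(raw: String) -> Result(Int, String) {
  // no exceptions to catch: failure is a value the caller must handle
  int.parse(raw)
  |> result.replace_error(\"capacity must be a whole number\")
}
```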

Formatting is the same story. gleam format is canonical. No configuration, no style debates. The agent's output looks identical to hand-written code after formatting. You can't tell the difference, and you don't need to.

\n

Whitespace Doesn't Matter

\n

I actually prefer whitespace-significant languages. Elm and Haskell are two of my favorites. Clean indentation instead of curly braces everywhere looks better to me. But agents aren't good at it yet. In my experience, agents constantly trip over Slim templates: indentation errors are common with LLM-generated code. The model might mix tabs and spaces, or get the nesting level wrong by one indent. These errors are silent and semantic (they change what the code does).

\n

Gleam uses curly braces. Whitespace is irrelevant to the compiler. gleam format normalizes it. One fewer class of errors for agents to make.

\n

Tight Feedback Loops

\n

Fast Compilation

\n

Gleam's compiler runs in a few seconds for our full project. A typical test suite in a dynamically typed language takes 30 seconds to a few minutes. When the agent is iterating, the difference between a few-second compile and a multi-minute test run adds up fast.

\n

Fast Tests

\n

Tests are fast too. Because each test gets its own in-memory SQLite database with no shared state, we run them all in parallel. Around 800 tests finish in under a second. That speed accumulates over a session where the agent is compiling and testing dozens of times.

\n

Less Training Data Is a Real Cost

\n

Gleam has less training data than JavaScript, TypeScript, or Python. Agents write worse Gleam on their first attempt. They reach for patterns that don't exist and invent functions that aren't in the standard library.

\n

This is a real cost. The agent takes longer to write initial code in Gleam than it would in JavaScript.

\n

But here's what I've found: the total time from \"start writing\" to \"code is correct and deployed\" is shorter in Gleam. The agent writes slower but the compiler catches errors instantly. In JavaScript, the agent writes faster but errors surface later, in tests or in production.

\n

The Workflow in Practice

\n

Here's how a typical feature goes:

\n
    \n
  1. We write a spec describing what we want. This is iterative - we go back and forth with the agent refining the spec before any code is written. A good spec is the difference between useful output and garbage.
  2. The agent writes the implementation: types, database queries, request handlers, views.
  3. It runs gleam build. Compilation fails with 5-10 errors.
  4. The agent reads each error, fixes the code, rebuilds. A few rounds.
  5. Compilation succeeds. The agent runs the tests. They pass, or they fail on business logic (not on null errors or type mismatches).
  6. We review the diff. We're looking at logic, intent, and redundancy.
\n

Agents tend to duplicate code rather than reuse existing functions, probably because of context window limits. Catching those opportunities to extract shared logic is the most common feedback we give.

\n

We still write a fair amount of code by hand. Agents aren't always right, and they sometimes produce ugly or redundant code that needs to be caught and rewritten. We step in when the agent is spinning out on something, going in circles trying to fix the same error. In Gleam that happens a lot less often than in the dynamic languages we've worked with.

\n

Other Languages

\n

Other statically typed languages with good compilers share some of these properties. Rust, OCaml, Haskell, and Elm all have strong type systems that catch errors at compile time.

\n

But Gleam has a specific combination that makes it well-suited for agents:

\n\n

It also runs on the BEAM, which gives you fault tolerance and concurrency, but that's a separate conversation.

\n

The trade-off is ecosystem maturity and training data. Gleam is young. Libraries are fewer. Agent-generated code needs more correction on the first pass. That gap is closing as training data grows.

\n

The Bet

\n

I believe AI agents are already writing most of the code in a growing number of projects. Language choice is being evaluated differently. \"How fast can a human write this?\" matters less. \"How much of the developer's review time does this require?\" matters more.

\n

The developer is the bottleneck. We're slow compared to computers. A language where the compiler has already verified structural correctness before the diff reaches your screen means the reviewer can focus on logic and intent instead of chasing down missing nil checks.

\n

We picked Gleam for other reasons, but how well it works with AI coding agents has been huge.

\n
\n

Addendum: The State of AI-Assisted Coding

\n

Stepping back from the technical argument. AI-assisted coding is here to stay. Even if the technology doesn't meaningfully improve from where it is today, it's already useful enough that developers are adopting it en masse.

\n

The Bar Is Higher, Not Lower

\n

Agents are good enough for a lot of business application work right now, but \"good enough\" comes with a big asterisk. Every diff needs scrutiny. The developer needs to be a domain expert, a security expert, and a programming expert. The speed at which code is produced has raised the expertise required from the developer, not lowered it.

\n

Expect More Security Vulnerabilities

\n

Agents generate code that works, but \"works\" and \"secure\" are very different bars. They'll probably catch the obvious stuff like SQL injection. The less obvious stuff is where it gets dangerous - like not realizing that a decision you made three prompts ago means all your customers' PII is now accessible on a public URL as a side effect. That requires understanding the full picture, and agents don't have that yet. I think this will become a visible reality over the next few years as more agent-written code hits production without adequate review.

\n

The Industry Is Evolving

\n

The industry is actively trying to figure out what the right abstractions look like for AI-assisted development. Two notable attempts:

\n

Mog is a language designed to be written and read by AIs rather than humans. I'm skeptical. If the developer reviewing the diff is the bottleneck, optimizing for machine readability at the expense of human readability makes things worse.

\n

Codespeak takes a different approach: specs and code as interchangeable representations, where you can move between them seamlessly. I think both specs and code are important, and right now neither is enough on its own. A spec is great for big-picture thinking, but it lacks the detail that matters when things go wrong. Code has all the detail, but it's hard to step back and reason about the whole system by reading it. They operate at different levels of granularity, and collapsing them into one thing loses what makes each useful.

\n

Statically typed languages have an advantage here: types are self-documenting. A well-defined type in Gleam already communicates a lot of what a spec would say about the shape of data and the boundaries of a function, without needing a separate document to describe it.

\n
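As a hypothetical illustration, a type like this already carries much of what a spec would say about the data:

```gleam
import gleam/option.{type Option}

pub type Status {
  Pending
  Paid
  Refunded
}

pub type Registration {
  Registration(
    curler_id: Int,
    team_name: Option(String),  // None until the curler joins a team
    status: Status,             // closed set of states, matched exhaustively
  )
}
```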

What Comes Next

\n

The developer's role is shifting toward a supervisory one. That might mean a lot more software gets written, or a lot fewer developers get employed, or both. And the supervisory role itself might not last if agents get good enough to close the loop on their own. That's a real possibility and it's worth being honest about.

\n

After a year of building production software with AI coding agents, it works if you put in the effort to verify everything that comes out the other end. Gleam wasn't designed for AI agents, but good language design turns out to matter more than any tool built specifically for them.

", "url": "https://curling.io/blog/21-reasons-ai-agents-love-gleam", "title": "AI Agents Love Gleam", "summary": "Fair warning: this post contains some opinions that are going to be controversial and may not age well. Here be dragons.", "date_modified": "2026-03-12T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "gleam", "ai", "architecture" ] }, { "id": "https://curling.io/blog/parallel-tests-for-free", "content_html": "

While writing the previous post about our per-test SQLite databases, I was describing how each test gets its own in-memory database, no shared connections, no shared state. And I thought: wait, if nothing is shared, can we just run them all at the same time?

\n

Turns out we could, and our server test suite went from ~4 seconds to ~0.85 seconds for around 800 tests. Zero code changes to the tests themselves. One 25-line Erlang module.

\n

Why Sequential Was the Default

\n

Gleam's test runner, gleeunit, delegates to Erlang's EUnit framework. By default, EUnit runs test modules one at a time. This is the safe choice because most test suites have shared mutable state somewhere: a database connection, a named process, a file on disk. Running those tests concurrently produces the kind of failures that pass locally and fail in CI, or pass on Tuesday and fail on Wednesday.

\n

Our tests don't have shared mutable state. Every test clones its own database. No two tests touch the same connection. The previous post explains the full setup, but the short version is: test_db.setup() clones a cached template database via SQLite's backup API and returns a fresh, independent connection. When the test ends, the connection is garbage collected and the database disappears.

\n

What's Actually Isolated

\n

Before flipping the switch, we checked every category of shared state:

\n
Resource | Isolation
SQLite databases | Each test clones its own in-memory DB
persistent_term cache | Idempotent first-writer-wins (template DB, timezone data, logger config)
ETS tables (rate limiter) | Unnamed, each test creates its own via ets:new
Named processes | None started in tests
File system | No writes in tests
\n

The persistent_term entries are write-once caches. Multiple tests might try to initialize the template database at the same time, but the first one wins and subsequent calls just read the cached value. That's safe.

\n

ETS tables used in tests (for the rate limiter) are created without the named_table option, so each call to ets:new returns a unique table reference. No conflicts.

\n

The Implementation

\n

EUnit supports a {inparallel, Tests} wrapper that distributes test functions across BEAM schedulers. It's been there for years, but gleeunit doesn't expose it. We submitted a PR to add a main_parallel() function, but it was pointed out that this fits better as a project-local solution than a change to gleeunit's core. That makes sense. We closed the PR and wrote a project-local Erlang module that replicates the test discovery logic and calls EUnit directly.

\n
-module(parallel_test_runner).
-export([main/0]).

main() ->
    Options = [verbose, no_tty,
               {report, {gleeunit_progress, [{colored, true}]}},
               {scale_timeouts, 10}],
    Files = filelib:wildcard(\"**/*.{erl,gleam}\", \"test\"),
    Modules = [to_module(list_to_binary(F)) || F <- Files],
    case eunit:test({inparallel, Modules}, Options) of
        ok -> erlang:halt(0);
        _ -> erlang:halt(1)
    end.

to_module(Path) ->
    case filename:extension(Path) of
        <<\".gleam\">> ->
            binary_to_atom(
                binary:replace(
                    binary:replace(Path, <<\".gleam\">>, <<\"\">>),
                    <<\"/\">>, <<\"@\">>, [global]),
                utf8);
        <<\".erl\">> ->
            binary_to_atom(
                binary:replace(lists:last(binary:split(Path, <<\"/\">>, [global])),
                    <<\".erl\">>, <<\"\">>),
                utf8)
    end.
\n

This does the same thing gleeunit does (glob for test files, convert filenames to module atoms) but wraps the module list in {inparallel, ...} instead of passing it flat. It reuses gleeunit_progress from the hex dependency for the dot-per-test output with color, so the test output looks the same as before.

\n

The test entry point is one line:

\n
@external(erlang, \"parallel_test_runner\", \"main\")
pub fn main() -> Nil
\n

No fork. No submodule. Stock gleeunit stays in gleam.toml as a dependency (we still use it for the progress reporter). Only the server package uses the parallel runner. The shared and client packages use gleeunit's standard sequential main() because they have far fewer tests and don't need it.

\n

The Results

\n
Package | Tests | Before | After
server | ~800 | ~4.0s | ~0.85s
shared | ~100 | ~0.5s | ~0.5s
client | 1 | ~0.2s | ~0.2s
Total | ~900 | ~4.7s | ~1.6s
\n

The shared and client packages didn't change because they already ran in under a second.

\n

EUnit serializes its listener callbacks through a single process, so the dot-per-test progress output still arrives in order. Per-test stdout is delivered after each test completes, so you don't get interleaved output from concurrent tests. The test output looks identical to before, just faster.

\n

Keeping It Working

\n

This works because the test architecture is clean. If someone adds a test that registers a named process with a fixed atom, or writes to a shared file path, two copies of that test will collide and produce flaky failures. The rules are simple:

\n\n

If a test can't follow these rules, EUnit lets you mix {inparallel, ...} and {inorder, ...} in the same test run. We could maintain a list of sequential modules in the runner and group them separately:

\n
Sequential = [some_integration_test],
Parallel = Modules -- Sequential,
eunit:test([{inparallel, Parallel}, {inorder, Sequential}], Options)
\n

The sequential modules would run one at a time while everything else stays parallel. We haven't needed this yet, but it's a few lines if we do.

\n

The Accidental Win

\n

None of this was planned. We chose per-test SQLite databases because they eliminated cleanup code, ordering dependencies, and flaky tests from leaked state. We're still a bit surprised that a permanent 4.7x speedup, one we'll benefit from for the lifetime of this project, took an afternoon and 25 lines of Erlang.

\n

37 lines total (counting the 12-line FFI module from the previous post) for per-test database isolation and parallel execution across all available CPU cores. No test framework plugins. No configuration. The whole thing compiles with gleam build and runs with gleam test.

\n
\n

This is Part 7 of the Curling IO Foundation series. Next up: A Live Admin Panel Without Writing JavaScript.

", "url": "https://curling.io/blog/parallel-tests-for-free", "title": "Parallel Tests for Free", "summary": "While writing the previous post about our per-test SQLite databases, I was describing how each test gets its own in-memory database, no shared connections, no shared state. And I thought: wait, if nothing is shared, can we just run them all at the same time?", "date_modified": "2026-03-08T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "foundation", "gleam", "beam", "testing", "architecture" ] }, { "id": "https://curling.io/blog/curling-io-powers-provincial-associations", "content_html": "

Every provincial and territorial curling association in Canada uses Curling IO, and so does the national body, Curling Canada. That's 15 membership associations, all on the same platform, along with their affiliated clubs.

\n

The Full List

\n\n

What Associations Use It For

\n

Provincial associations use Curling IO to run their competitions: provincial championships, playdowns, regional qualifiers. They manage team registrations, schedule draws, score games live, and publish results. Fans and participants see brackets, standings, and scoreboards update in real time. For a closer look at how this works at the national level, see how Curling Canada uses Curling IO.

\n

Associations also collect affiliate fees through the platform. When a curler registers at their local club, the club's registration fees and the association's affiliate fees are collected together in one transaction. Curling Canada's national fee, the provincial fee, and the club's own fees all appear on a single checkout. No separate invoicing, no manual reconciliation.

\n

The affiliate fee tie-in is optional. Clubs can use Curling IO independently without inheriting association fees or reporting member data. But clubs that do opt in get the fee collection and reporting handled automatically.

\n

The platform is fully bilingual (English and French), which matters for associations like Curling Québec and for clubs across the country with francophone members.

\n

Why It Matters for Clubs

\n

When your provincial association is on Curling IO, a few things work in your favour.

\n

Your curlers already have profiles. A curler who registered for a provincial bonspiel or played in a championship already exists in the system. When they register at your club, they find their existing profile. No re-entering names, addresses, or emergency contacts.

\n

Competitions connect. A curler's results from club leagues, provincial playdowns, and national championships all live in the same system. Their profile carries from club leagues to provincial to national competitions.

\n

Why It Matters for Curlers

\n

From a curler's perspective, Curling IO is one account. They register for their Tuesday night league at their local club, sign up for a weekend bonspiel at another club, and qualify for a provincial championship. Same login, same profile, same payment flow.

\n

There are currently around 150,000 curler profiles in the system across hundreds of active clubs and all 15 membership associations. That number grows every season as more clubs come online.

\n

Getting Started

\n

If your club is under a Canadian provincial association, your association is already on Curling IO. Getting started means your affiliate fees and curler database are connected from day one. There's no setup fee, no monthly fee, and no contract.

\n

Clubs outside of Canada can use Curling IO too, just without the association tie-in.

\n

Check out our getting started guide or visit your provincial association's Curling IO page to see it in action.

", "url": "https://curling.io/blog/curling-io-powers-provincial-associations", "title": "Every Provincial and Territorial Curling Association in Canada Uses Curling IO", "summary": "All 14 provincial and territorial curling associations, plus Curling Canada, run on Curling IO. Here is what that means for clubs and curlers.", "date_modified": "2026-03-06T00:00:00.000Z", "author": { "name": "Chris" }, "tags": [ "provincial-associations", "partnerships", "curler-database" ] }, { "id": "https://curling.io/blog/sqlite-test-isolation", "content_html": "

Most web frameworks treat test database isolation as a hard problem. Rails has database_cleaner with three strategies. Django wraps every test in a transaction it rolls back. Phoenix does the same with its SQL sandbox. They all exist because tests share a single database server, and that shared state is the root of flaky tests and ordering dependencies, the kind where a test passes alone but fails in the suite.

\n

Curling IO Version 3 doesn't have this problem. Each test gets its own database. Not a transaction. Not a truncated copy. A completely independent in-memory SQLite database, cloned from a template in microseconds using SQLite's backup API.

\n

The Shared Database Problem

\n

In a traditional Postgres or MySQL test setup, every test talks to the same database server. You need a strategy to keep tests from contaminating each other:

\n

Transaction rollback. Wrap each test in a transaction, roll it back at the end. Works until your code uses transactions internally, or spawns processes that need to see the test data (the classic Ecto sandbox problem).

\n

Truncation. Delete all rows from every table between tests. Slow, and you need to get the table ordering right to avoid foreign key violations.

\n

Database cleaner. A gem/library that combines both strategies with configuration for which tables to clean, which strategy to use, and when. It works, but it's ceremony that exists purely because of the shared database.

\n

All of these are workarounds for the same architectural constraint: one database server, many tests.

\n

In-Memory SQLite: One Database Per Test

\n

SQLite opens a database by passing a file path. Pass :memory: instead and you get an in-memory database that exists only for the lifetime of that connection. It's fast (no disk I/O) and completely isolated. Nothing else can see it, and closing the connection frees everything.

\n

Our test setup function:

\n
pub fn setup() -> sqlight.Connection {
  let assert Ok(conn) = sqlight.open(\":memory:\")
  let assert Ok(_) = sqlight.exec(\"PRAGMA foreign_keys=ON;\", conn)
  let assert Ok(_) = sqlight.exec(schema_sql, conn)
  conn
}
\n

The schema_sql variable contains the contents of db/schema.sql, which is regenerated from migrations by bin/reset. This means tests always use the current schema without anyone remembering to update them.

\n

Every test calls setup(), gets a fresh database with the full schema, inserts whatever test data it needs, and runs its assertions. When the test ends, the connection is garbage collected and the database disappears. There's nothing to clean up.

\n
pub fn should_reject_duplicate_registration_test() {
  let conn = test_db.setup()
  // Insert test-specific data
  insert_org(conn, test_org())
  insert_listing(conn, test_listing())
  // ... test logic, assertions
  // conn goes out of scope, database vanishes
}
\n

There's no possible cross-contamination because there's nothing shared. Test A and test B literally operate on different databases. You can run them in parallel on BEAM processes without any coordination.

\n

The Schema Execution Cost

\n

This approach has an obvious cost: every test executes the full schema. For us that's 77 CREATE TABLE, CREATE INDEX, and CREATE VIEW statements. We profiled it:

\n\n

At 1.1 ms per test across ~500 tests that need a database, that's about 550 ms of schema execution. Not terrible, but not free either. And it scales linearly with both the number of tests and the size of your schema.

\n

SQLite's Backup API: Page-Level Cloning

\n

SQLite has a backup API designed for copying databases between connections. It operates at the page level: it doesn't re-parse or re-execute SQL, it copies raw database pages from one connection to another.

\n

The idea: build the schema once in a template database, then clone it per test.

\n
Once at startup:  open(\":memory:\") → execute full schema → template
Per test:         open(\":memory:\") → backup_copy(template) → ready
\n

The backup API is three calls: backup_init (set up the copy), backup_step with -1 (copy all pages in one shot), and backup_finish (release resources).

\n

Our Gleam SQLite driver, sqlight, doesn't expose the backup API. But it's built on esqlite, an Erlang NIF wrapper around SQLite's C library, and esqlite exposes the full backup API. Since Gleam compiles to Erlang and runs on the BEAM, we can call Erlang libraries directly through Gleam's foreign function interface (FFI). We wrote a 12-line Erlang module that takes two sqlight connections (which are just esqlite records under the hood) and performs the clone:

\n
-module(test_db_ffi).
-export([clone_db/1]).

clone_db(Template) ->
    {ok, Dest} = esqlite3:open(\":memory:\"),
    {ok, Backup} = esqlite3:backup_init(Dest, \"main\", Template, \"main\"),
    '$done' = esqlite3:backup_step(Backup, -1),
    ok = esqlite3:backup_finish(Backup),
    {ok, Dest}.
\n

On the Gleam side, we declare the FFI binding and call it like any other function:

\n
@external(erlang, \"test_db_ffi\", \"clone_db\")
fn clone_db(
  template: sqlight.Connection,
) -> Result(sqlight.Connection, Nil)
\n

This is one of the nice things about the BEAM ecosystem. When your high-level driver doesn't expose what you need, the lower-level library almost always does, and the FFI boundary is trivial to cross.

\n

The template is created once per test run and cached in BEAM's persistent_term (a global immutable store optimized for read-heavy access):

\n
fn template_db() -> sqlight.Connection {
  case get_cached(\"test_template_db\") {
    Ok(conn) -> conn
    Error(Nil) -> {
      let assert Ok(conn) = sqlight.open(\":memory:\")
      let assert Ok(_) = sqlight.exec(\"PRAGMA foreign_keys=ON;\", conn)
      let assert Ok(_) = sqlight.exec(schema_sql, conn)
      cache(\"test_template_db\", conn)
      conn
    }
  }
}

pub fn setup() -> sqlight.Connection {
  let assert Ok(conn) = clone_db(template_db())
  let assert Ok(_) = sqlight.exec(\"PRAGMA foreign_keys=ON;\", conn)
  conn
}
\n

The PRAGMA foreign_keys=ON runs after the clone because SQLite pragma settings are per-connection, not stored in the database file. It's a single statement, negligible cost.

\n

The Numbers

\n

At the time we switched to the clone method, we had 594 tests and 77 schema objects. We're still early in the rewrite and already approaching 1,000 tests. By launch we expect closer to twice the schema objects and somewhere around 5,000 tests.

\n

Here's what we measured at 594 tests:

\n
Approach | Server test time
Execute full schema per test | 3.2 s
Clone template via backup API | 2.6 s
\n

The schema execution cost dropped from ~550 ms to near zero. The remaining 2.6 seconds is actual test logic (inserting data, running business logic, assertions) plus BEAM VM startup overhead.

\n

The savings become more meaningful at scale. In a traditional Postgres setup, you'd use truncation between tests, issuing TRUNCATE TABLE on every table, in foreign-key-safe order, over a socket to the database server. With around 100 tables and 5,000 tests (a conservative estimate for our full rewrite, realistically 2-3x the tests), that's 500,000 truncation statements hitting a database server. Even at sub-millisecond per truncate, it adds up.

\n

With the backup approach, the clone cost is ~25 microseconds per test regardless of schema size. At 5,000 tests that's 0.125 seconds total. No truncation ordering, no socket round-trips.

\n

For context, this is a real application test suite covering cart operations, payment processing, registration validation, affiliate fees, waivers, round robin generation, scheduling, scoring, and more. Not trivial tests.

\n

Why This Works So Well

\n

The backup API is a nice optimization, but the real win is that in-process SQLite eliminates the shared database problem entirely.

\n

With Postgres, your test process talks to a database server over a socket. Every test that writes data is writing to the same place. You need isolation strategies because the architecture demands them.

\n

With in-process SQLite, the database lives in your process's memory. Creating a new one is a memory allocation, not a network connection. There's no server to coordinate with and no connection pool to manage. The isolation falls out of the architecture for free.

\n

This also means:

\n\n

Trade-offs

\n

This isn't free of trade-offs. You're not testing against the same database engine you run in production... unless you also run SQLite in production, which we do. If you're using SQLite for tests but Postgres in production, you'd miss Postgres-specific behavior (custom types, advisory locks, jsonb operators, etc.).

\n

The other trade-off is that in our setup, each test builds up its own data from scratch. In practice this is a feature (every test explicitly declares its dependencies) but it does mean more setup code per test compared to a shared fixtures approach. That said, this is a choice, not a limitation of the technique. You could just as easily insert seed data into the template database before caching it, and every clone would start with that data pre-loaded. If your test suite benefits from a well-defined set of standard users, organizations, or other reference data, seeding the template is a straightforward way to reduce per-test setup while keeping full isolation.

\n
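To make the seeded-template variation concrete, here's a minimal sketch using Python's built-in sqlite3 module. The schema, table, and seed row are invented for illustration; the point is that seeding happens once in the template, and every clone inherits it:

```python
import sqlite3

def make_template(schema_sql, seed_rows):
    # Build the template once: schema plus standard reference data.
    template = sqlite3.connect(':memory:')
    template.executescript(schema_sql)
    template.executemany('INSERT INTO users (name) VALUES (?)', seed_rows)
    template.commit()
    return template

def clone(template):
    # Every clone starts with the seed data pre-loaded.
    conn = sqlite3.connect(':memory:')
    template.backup(conn)
    return conn

template = make_template(
    'CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);',
    [('standard_user',)],
)
a = clone(template)
b = clone(template)
# Writes in one clone never leak into another.
a.execute('INSERT INTO users (name) VALUES (?)', ('only_in_a',))
```

Each test still gets full isolation; it just starts from a richer baseline than an empty schema.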

This Isn't BEAM-Specific

\n

Our examples are in Gleam and Erlang, but the backup API is a C-level SQLite feature. Most languages expose it:

\n
# Python 3.7+ has it built in
import sqlite3

source = sqlite3.connect(\":memory:\")
source.executescript(schema_sql)

dest = sqlite3.connect(\":memory:\")
source.backup(dest)
\n
// Node.js better-sqlite3
const Database = require(\"better-sqlite3\");

const template = new Database(\":memory:\");
template.exec(schemaSQL);

// backup() writes to a file, so clone in-memory via serialize()
const dest = new Database(template.serialize());
\n
// Rust rusqlite (with the \"backup\" feature enabled)
use rusqlite::{backup, Connection};
use std::time::Duration;

let template = Connection::open_in_memory()?;
template.execute_batch(schema_sql)?;

let mut dest = Connection::open_in_memory()?;
let backup = backup::Backup::new(&template, &mut dest)?;
backup.run_to_completion(5, Duration::ZERO, None)?;
\n

The pattern is the same in every language: build a template once, clone it per test via the backup API. The FFI detour we took through esqlite is only necessary because our particular Gleam driver doesn't expose it yet.

\n

The Full Stack

\n

Here's what our test infrastructure looks like in its entirety:

\n\n

That's the entire test infrastructure. No test framework plugins, no database cleaner gems, no truncation strategies. SQLite and 12 lines of Erlang.

\n
\n

This is Part 6 of the Curling IO Foundation series. Next up: Parallel Tests for Free.

", "url": "https://curling.io/blog/sqlite-test-isolation", "title": "Test Isolation for Free with SQLite", "summary": "Most web frameworks treat test database isolation as a hard problem. Rails has database_cleaner with three strategies. Django wraps every test in a transaction it rolls back. Phoenix does the same with its SQL sandbox. They all exist because tests share a single database server, and that shared state is the root of flaky tests and ordering dependencies, the kind where a test passes alone but fails in the suite.", "date_modified": "2026-03-03T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "foundation", "sqlite", "testing", "architecture" ] }, { "id": "https://curling.io/blog/what-to-look-for-in-curling-club-management-software", "content_html": "

If your curling club is shopping for management software, or wondering whether it's time to replace what you've got, the wrong choice can mean years of workarounds. Here's what to look for.

\n

Purpose-Built for Curling

\n

Generic sports platforms and website builders can handle basic registration, but curling has concepts that most software doesn't account for: ends, hammer, draw schedules, round robins with pools, page playoffs, spare management, rental ice, bonspiels, waitlists, and online waivers.

\n

If the software you're evaluating doesn't understand these things natively, you'll spend your time working around it instead of working with it. Ask whether you can schedule a draw across four sheets, score a game end-by-end, or run a triple knockout bracket without manual intervention.

\n

Management First, Not a CMS

\n

Some club management software is really just an extension of a content management system (CMS), focused on building websites like it's still 1999. Building a nice website is already a solved problem. Squarespace, Wix, WordPress, and hundreds of others already have that covered. Look for software that has the curling-specific depth you need, not a website builder with registration bolted on. Most clubs are better off with a simple website any volunteer can update than a giant CMS no one remembers how to administer.

\n

Modern and Fast

\n

Performance matters more than people think. If the platform feels slow when a curler is trying to register and pay, they notice. Pages need to feel snappy, not just functional. If the platform is also used for provincial or national competitions, it's already proven it can handle real traffic, not a few dozen people checking scores at the same time.

\n

More than half of web traffic comes from phones now. Volunteers entering scores, curlers checking draw times, spectators following results: most of that happens on a phone. The platform needs to work well on small screens first; desktop support still matters, but it shouldn't be the design priority.

\n

Pricing Transparency

\n

Pricing models vary widely. Some platforms charge setup fees, monthly fees, per-member fees, or some combination. Others take a percentage of transactions. Before you sign up, make sure you understand:

\n\n

Clubs are often volunteer-run with tight budgets. A platform that costs hundreds of dollars a month before a single curler registers doesn't make sense for a 4-sheet club with 120 members.

\n

Online Registration and Payments

\n

If your members can't register and pay online, you're creating work for yourself. Curlers should be able to browse events, register, and pay in one flow. No filling out PDFs, emailing forms, or bringing cheques to the club.

\n

The payment processing should be built in, not bolted on through a third-party plugin. And it should handle what curling clubs deal with: membership fees, family registrations, multiple events in one cart, early bird pricing, discounts, and partial refunds.

\n

League and Competition Management

\n

You should be able to:

\n\n

If the platform can't handle a standard 8-team round robin on 4 sheets without manual scheduling, it's not built for curling.

\n

Data Portability and Accounting Integration

\n

Your data should be yours. Look for the ability to export registration data, financial records, and member information. If you ever want to switch platforms, you shouldn't be locked in.

\n

For clubs that track finances seriously, look for accrual, double-entry accounting and integration with software like QuickBooks, Xero, or Sage. Ask whether the platform can export transactions in a format your bookkeeper can actually use.

\n

Track Record, Development, and Support

\n

Who else is using the platform? A tool used by a handful of clubs is a different proposition than one trusted by national organizations and hundreds of clubs. Is the software still being actively improved? How often do updates ship? Some platforms haven't changed in years, and it shows.

\n

Support turnaround matters too. When something breaks during league night or a bonspiel weekend, you need a response in hours, not weeks. Ask other clubs what their experience has been. Community adoption also means your curlers are more likely to already have an account, which makes registration smoother for everyone.

\n

Who Owns It

\n

It's worth looking past the product and asking who actually owns the company behind it. A platform built and run by people in the curling community is a different thing than one owned by an investment firm in another country. When a private equity group acquires a software company, the priorities tend to shift: prices go up, support gets outsourced, and development slows down or focuses on whatever makes the numbers look good for the next quarter. The curling world is small and mostly volunteer-driven. You want the people making decisions about your software to understand that.

\n

AI Ready

\n

Are you already using chatbots like Claude.ai to help with day-to-day tasks? A lot of club managers are. As these tools get more capable, they'll be able to do more than just answer questions. Look for a platform that's ready for this. Setting up a draw, checking registration numbers, pulling a financial report: these are things your AI assistant should eventually be able to do for you, if the software supports it. Your members could benefit too, using their own chatbots to register for events or set up reminders for upcoming matches.

\n

Ask the Right Questions

\n

Full disclosure: we built Curling IO to check all of these boxes. But regardless of what you choose, these are the questions worth asking.

", "url": "https://curling.io/blog/what-to-look-for-in-curling-club-management-software", "title": "What to Look for in Curling Club Management Software", "summary": "A practical guide to evaluating club management software for curling clubs: what matters, what to watch for, and the questions worth asking before you commit.", "date_modified": "2026-03-02T00:00:00.000Z", "author": { "name": "Chris" }, "tags": [ "club-management", "buyers-guide", "choosing-software" ] }, { "id": "https://curling.io/blog/why-we-chose-sqlite", "content_html": "

We assumed PostgreSQL for Version 3. After a decade running Postgres in production, why would we even consider something else? We knew the tooling, the failure modes, the operational playbook. Postgres is the safe choice for good reason.

\n

Then we looked at what \"self-hosting Postgres\" actually involves, compared it to what Litestream does for SQLite, and changed our minds. This post covers the decision, the architecture, the trade-offs, and why we'd make the same call again.

\n

What Version 2 Runs On

\n

Curling IO Version 2 runs on Rails backed by PostgreSQL on Crunchy Bridge, a fantastic managed Postgres service. It works. Crunchy Data handles backups, failover, and tuning. We don't think about shared_buffers or autovacuum settings. We don't run pgBackRest or schedule base backups. (Crunchy Data was acquired by Snowflake in 2025, which adds another reason to reduce our dependency on third-party managed services.)

\n

The trade-off is cost, lock-in, and jurisdiction. Crunchy Bridge is AWS-only, so the app servers have to live on AWS too. That's the full stack dependent on a single US cloud provider. We're a 100% Canadian company and we'd rather keep our infrastructure closer to home, especially given the current political uncertainty south of the border.

\n

For Version 3, we're moving to OVH, a French-owned provider with data centers in Quebec and Ontario. All data stays in Canada. Crunchy Bridge doesn't run on OVH, so we can't bring it along. Self-hosting Postgres on OVH would mean taking on everything Crunchy handles today: connection pooling, server tuning (shared_buffers, effective_cache_size, work_mem, max_connections), autovacuum monitoring, and the full backup story (pgBackRest, WAL archiving, scheduled base backups, tested restore procedures, monitoring backup freshness). That's a lot to take on for a small organization.

\n

The Architecture: Database Per Sport

\n

Version 3 uses a database-per-sport architecture. Each sport gets its own SQLite file with an identical schema:

\n
db/
├── curling.db # All curling club data
├── pickleball.db # All pickleball club data
└── shared.db # Cross-sport reference data, jobs, email suppressions
\n

Each sport database is multi-tenant. Every table with tenant data includes an org_id column, and every query filters by it. Adding a new sport means creating a new database file with the same schema. No cross-sport queries, no shared-database contention.

\n
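The tenancy rule is easiest to see in miniature. A sketch using Python's sqlite3 module (the table, index, and rows are invented for illustration; the real schema differs):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE leagues ('
    ' id INTEGER PRIMARY KEY,'
    ' org_id INTEGER NOT NULL,'
    ' name TEXT NOT NULL)')
conn.execute('CREATE INDEX leagues_org_id ON leagues (org_id)')
conn.executemany(
    'INSERT INTO leagues (org_id, name) VALUES (?, ?)',
    [(1, 'Tuesday Night'), (1, 'Sunday Mixed'), (2, 'Juniors')])

# Every tenant-scoped query filters by org_id; org 1 never sees org 2.
names = [row[0] for row in conn.execute(
    'SELECT name FROM leagues WHERE org_id = ? ORDER BY name', (1,))]
```

The org_id filter is on every tenant query, backed by an index, which keeps the simple-lookup workload fast.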

shared.db holds things that span sports: Canadian tax jurisdictions (17 rows covering GST, HST, PST, and QST rates by province), the background job queue, and email suppression lists from Postmark webhooks.

\n

At startup, the server opens one connection per database and holds it for the lifetime of the app:

\n
pub fn open(path: String) -> Result(sqlight.Connection, sqlight.Error) {
  use conn <- result.try(sqlight.open(path))
  use _ <- result.try(sqlight.exec(\"PRAGMA journal_mode=WAL;\", conn))
  use _ <- result.try(sqlight.exec(\"PRAGMA busy_timeout=5000;\", conn))
  use _ <- result.try(sqlight.exec(\"PRAGMA foreign_keys=ON;\", conn))
  Ok(conn)
}
\n

Three PRAGMAs on every connection: WAL mode for concurrent reads, a 5-second busy timeout so writes queue instead of failing immediately, and foreign key enforcement (which SQLite disables by default).

\n
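These PRAGMAs aren't Gleam-specific; they translate directly to any SQLite driver. A sketch with Python's sqlite3 module, using a throwaway file since WAL mode requires a file-backed database:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'app.db')
conn = sqlite3.connect(path)

# WAL mode: readers proceed concurrently with the single writer.
mode = conn.execute('PRAGMA journal_mode=WAL').fetchone()[0]

# Writes wait up to 5s for the lock instead of failing immediately.
conn.execute('PRAGMA busy_timeout=5000')

# Foreign key enforcement is off by default and is per-connection.
conn.execute('PRAGMA foreign_keys=ON')
fk = conn.execute('PRAGMA foreign_keys').fetchone()[0]
```

Note that journal_mode persists in the database file, but busy_timeout and foreign_keys must be re-applied on every new connection.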

One connection per database. No connection pool. No pool configuration. Each incoming HTTP request gets the sport-specific connection (resolved from the hostname) plus the shared connection.

\n

Why In-Process SQLite

\n

SQLite runs inside the application process. There's no socket, no TCP round-trip, no serialization between the app and the database. A query is a function call.

\n

Our workload is primarily simple indexed lookups: single-row fetches by primary key, filtered lists by org_id, a handful of line items joined to an order. SQLite handles tens of thousands of write transactions per second, far beyond what 1,000+ curling clubs will generate. Even during peak registration or when a provincial championship is posting live scores, the load profile is overwhelmingly reads with occasional write bursts.

\n

The cost follows from the architecture. No separate database server means no database hosting bill. The entire v3 stack runs on a single server for less than what managed Postgres alone costs on AWS.

\n

WAL Mode and Concurrency

\n

WAL (Write-Ahead Logging) is what makes SQLite viable for a concurrent web application. Without it, any write locks the entire database for both readers and writers. With WAL, readers proceed concurrently with writes. On the BEAM, where hundreds of lightweight processes might query simultaneously during a traffic spike, this is essential.

\n

Writes are still serialized. One writer at a time, coordinated by a mutex in the SQLite library. If the write lock can't be acquired within the 5-second busy timeout, SQLite returns SQLITE_BUSY. With our read-heavy workload, write contention hasn't been a practical concern. This is why we use a separate database for background jobs. Job processing writes to shared.db, not the sport databases, so it never contends with registration queries. With SQLite's single-writer model, separating write-heavy workloads into different files is a real architectural consideration, not just organization.

\n

Litestream: The Tipping Point

\n

This is what tipped the scales.

\n

SQLite is a file. Litestream watches that file and continuously replicates its WAL frames to S3-compatible object storage. For us, that's OVH Object Storage in two different geographic locations in Canada. Changes sync every 10 seconds to both.

\n

The configuration is one YAML file:

\n
dbs:
  - path: /var/lib/curling-io/databases/curling.db
    replicas:
      - type: s3
        bucket: xxxxx
        endpoint: xxxxx
        sync-interval: 10s
      - type: s3
        bucket: xxxxx
        endpoint: xxxxx
        sync-interval: 10s
\n

Recovery is one command:

\n
litestream restore -o /path/to/curling.db s3://xxxxx/curling.db
\n

That produces a ready-to-use database file. Copy it to the right path, start the app.

\n

Now compare that to self-hosting PostgreSQL backup and recovery:

\n
\n
1. Install pgBackRest
2. Configure a stanza; set archive_mode and archive_command in postgresql.conf
3. Schedule base backups via cron
4. Monitor backup success, WAL retention, and disk usage
5. Periodically test restores (because an untested backup isn't a backup)
6. When disaster strikes: stop PostgreSQL, run pgbackrest restore --type=time --target=\"...\", wait for WAL replay, verify the recovery point, promote to primary
\n

PostgreSQL gives you something Litestream doesn't: exact transaction-level point-in-time recovery. You can roll back to any specific transaction. Litestream gives you \"latest snapshot plus WAL frames,\" which means a worst-case recovery gap of about 10 seconds. For our use case that's more than sufficient.

\n

The operational gap is where it gets decisive. Litestream is one binary, one config file, near-zero ongoing maintenance. PostgreSQL backup is a practice, something you rehearse, monitor, and maintain over the life of the project. Even with perfect tooling, you can't reduce \"restore a running server's state\" to \"download a file and start the app.\" That gap is architectural. It won't close with better Postgres tooling because the complexity is inherent to backing up a running server process.

\n

The simplicity ceiling for SQLite backup is fundamentally higher than what's possible for any client-server database.

\n

The SQLite Tax

\n

Here's what we pay for that simplicity. Every item below is something PostgreSQL handles natively.

\n

No Real Types

\n

SQLite doesn't have booleans, dates, or enums. Booleans are stored as INTEGER 0/1. Dates are Unix epoch integers. Enums are TEXT strings.

\n

Every boolean parameter needs a db.bool_to_int(value) call. Every boolean decoder needs a != 0 check. All date formatting, timezone conversion, and comparison lives in application code. There's no DATE_TRUNC, no INTERVAL, no AT TIME ZONE.

\n

We use STRICT tables on everything, which enforces that values match declared column types (INTEGER, TEXT, REAL, BLOB). It's a free safety net for anything that bypasses the application layer, but it doesn't help with booleans, dates, or enums. Those stay as application-level concerns, but since Gleam and Parrot are the only source of writes and types are enforced there, it's manageable.

\n
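To show the shape of that tax, here's a sketch in Python's sqlite3 module. The table, columns, and helper are invented for illustration; the real code does the equivalent in Gleam:

```python
import sqlite3
import time

def bool_to_int(value):
    # Booleans are stored as INTEGER 0/1.
    return 1 if value else 0

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE registrations ('
    ' id INTEGER PRIMARY KEY,'
    ' paid INTEGER NOT NULL,'         # boolean as 0/1
    ' status TEXT NOT NULL,'          # enum as a string
    ' created_at INTEGER NOT NULL)')  # date as Unix epoch
conn.execute(
    'INSERT INTO registrations (paid, status, created_at) VALUES (?, ?, ?)',
    (bool_to_int(True), 'pending', int(time.time())))

paid, status, created_at = conn.execute(
    'SELECT paid, status, created_at FROM registrations').fetchone()
is_paid = paid != 0  # every boolean decoder needs this check
```

Every value crosses the boundary as INTEGER or TEXT, and the conversion lives on both sides of every query.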

Limited ALTER TABLE

\n

SQLite stores its schema as the original CREATE TABLE text, not as structured system catalogs like PostgreSQL. This means ALTER TABLE is minimal:

\n\n

Changing a column post-launch requires a four-step dance: add the new column, copy the data, drop the old column, rename. PostgreSQL handles this with a single ALTER COLUMN.

\n
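Here's the four-step dance sketched with Python's sqlite3 module, changing an invented TEXT price column to INTEGER cents (note: DROP COLUMN and RENAME COLUMN need a reasonably recent SQLite):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE items (id INTEGER PRIMARY KEY, price TEXT NOT NULL)')
conn.execute('INSERT INTO items (price) VALUES (?)', ('1500',))

# 1. Add the new column
conn.execute('ALTER TABLE items ADD COLUMN price_cents INTEGER')
# 2. Copy the data across
conn.execute('UPDATE items SET price_cents = CAST(price AS INTEGER)')
# 3. Drop the old column
conn.execute('ALTER TABLE items DROP COLUMN price')
# 4. Rename the new column into place
conn.execute('ALTER TABLE items RENAME COLUMN price_cents TO price')

price = conn.execute('SELECT price FROM items').fetchone()[0]
```

Four statements and a window where both columns exist, versus a single ALTER COLUMN in PostgreSQL.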

No CHECK Constraints

\n

We don't use CHECK constraints anywhere. This is a policy decision, and not one we're thrilled about.

\n

CHECK constraints are embedded in the CREATE TABLE DDL string. Modifying one means a full table rebuild. Adding a new enum value shouldn't require that. So all validation lives in Gleam: union types for enums (with exhaustive pattern matching at compile time), validation functions for business logic, and let assert in decoders to crash fast if bad data gets in.

\n

Write Serialization

\n

As covered above, writes are serialized. In practice this barely matters because there's no network round-trip and no protocol overhead. A write is a function call that completes in microseconds. The lock is held for such a short window that writers rarely wait for each other at all. But it does rule out horizontal scaling to multiple app servers writing to the same database.

\n

Single Node Only

\n

SQLite is a file on local disk. Multiple app servers can't share it (NFS breaks SQLite's file locking and leads to corruption). We're locked into a single-server architecture. On a BEAM runtime that handles massive concurrency on one node, this is fine for well over 100,000 clubs. It's a hard ceiling if we ever need more, and if we do, that's a really good problem to have.

\n

There are projects that add distributed replication to SQLite (LiteFS, rqlite, dqlite), but they add significant complexity: FUSE filesystems, consensus protocols, additional infrastructure. At that point you're fighting to make SQLite behave like Postgres, and you should probably just use Postgres.

\n

NIF Risk

\n

Our SQLite driver (esqlite) is an Erlang NIF: compiled C code that runs outside the BEAM scheduler's control. A long-running query blocks a scheduler thread. A segfault takes down the entire VM.

\n

In practice, SQLite's C library is near-bulletproof, and our queries return in microseconds. We mitigate the risk by keeping all queries simple (indexed lookups, no full table scans) and pushing anything heavy to background jobs. But the risk surface exists.

\n

SQL Dialect Gaps

\n

Small things that add friction:

\n\n

Why We Accept It

\n

Every tax above is an application-level workaround. Type conversions, date formatting, schema evolution dances, validation in Gleam instead of CHECK constraints. None of them change the architecture. They're annoyances, not obstacles.

\n

The things that would actually force a migration (write contention, multi-node scaling, database size) aren't happening at 1,000 clubs with a read-heavy workload on the BEAM.

\n

When We'd Switch to PostgreSQL

\n

Honestly, we don't see it happening anytime soon. Most of the limits people worry about with SQLite (database size, write throughput, busy errors) can be pushed further by upgrading the server. Vertical scaling is cheap and simple, and the BEAM makes excellent use of whatever hardware you give it.

\n

The only real trigger would be needing multiple application servers, and that would mean we've seriously outgrown our predictions or miscalculated. If that happens, the migration path is mechanical, not architectural. Swap sqlight for gleam_pgo. Adjust SQL dialect across all query files: ? becomes $1, $2, ..., INTEGER PRIMARY KEY becomes GENERATED ALWAYS AS IDENTITY, booleans become real booleans, remove the DELETE subquery workarounds. Separate .db files map to PostgreSQL schemas within a single database. Business logic and request handlers stay the same.

\n

What's Next

\n

If SQLite hits a wall, we switch to Postgres. The exit strategy is defined and the migration is a dialect swap, not a rewrite.

\n

But so far, Litestream plus in-process SQLite has been one of the best decisions in the v3 rewrite. And there's one more benefit worth its own post: because SQLite runs in-process, every test gets its own in-memory database. No shared state, no cleanup, no flaky tests. That's next.

\n
\n

This is Part 5 of the Curling IO Foundation series. Next up: Test Isolation for Free with SQLite.

", "url": "https://curling.io/blog/why-we-chose-sqlite", "title": "Why We Chose SQLite", "summary": "We assumed PostgreSQL for Version 3. After a decade running Postgres in production, why would we even consider something else? We knew the tooling, the failure modes, the operational playbook. Postgres is the safe choice for good reason.", "date_modified": "2026-02-27T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "foundation", "sqlite", "architecture", "gleam" ] }, { "id": "https://curling.io/blog/how-curling-canada-uses-curling-io", "content_html": "

If you've ever checked a scoreboard during the Brier or the Scotties Tournament of Hearts, you've seen Curling IO in action. Since 2014, Curling Canada has used Curling IO to manage their national competitions, and the same platform is available to any curling club.

\n

Powering Canada's Biggest Curling Events

\n

That includes the events that curling fans across the country follow every year:

\n\n

In total, Curling Canada has managed over 360 competitions through Curling IO, with more than 20,000 games scored on the platform.

\n

What Curling IO Handles

\n

For each competition, Curling IO handles registration and team assembly, draw scheduling with round robins and playoff brackets across sheets and time slots. Games are scored live with end scores and shot-by-shot tracking, and standings and bracket advancement update automatically.

\n

When fans check scores on curling.ca/scoreboard during a national event, the scoreboard widgets they're looking at were built by us, and the data behind them lives in Curling IO.

\n

During major events like the Brier, with competitions running simultaneously across the country, traffic spikes well beyond what any individual club would ever generate, and Curling IO handles it smoothly.

\n

The Same Platform Your Club Can Use

\n

The draw scheduling, scoring, and bracket tools that Curling Canada relies on for national championships are the same tools you'd use for your Tuesday night league or weekend bonspiel. We handle a 4-sheet club's leagues the same way we handle a nationally televised championship.

\n

Built for Curling, Not Adapted for It

\n

Curling has concepts that generic sports software doesn't account for: ends, hammer, last stone advantage, round robins with multiple pools, page playoffs, mixed doubles scoring. Even draw scheduling has curling-specific nuances, like minimizing how often a team plays on the same sheet. We even published a free draw schedule tool at curlingschedules.com that anyone can use.

\n

Curling IO was built around these concepts. It's why Curling Canada chose it, and it's why all 14 provincial and territorial curling associations in Canada use it too.

\n

See It in Action

\n

You can browse Curling Canada's competitions and results at canada.curling.io. Every draw schedule, scoreboard, and bracket you see there runs on Curling IO.

\n

If your club wants the same tools that power Canadian curling at the national level, get started here. There are no setup fees or monthly fees. Check out our pricing page for a full feature breakdown and cost calculator.

", "url": "https://curling.io/blog/how-curling-canada-uses-curling-io", "title": "How Curling Canada Uses Curling IO for National Competitions", "summary": "Curling Canada runs all of their national championships through Curling IO, from the Brier to the Scotties to the Trials. The same platform is available to any curling club.", "date_modified": "2026-02-26T00:00:00.000Z", "author": { "name": "Chris" }, "tags": [ "curling-canada", "national-competitions", "live-scoring" ] }, { "id": "https://curling.io/blog/background-jobs-without-the-baggage", "content_html": "

In most web stacks, adding background jobs means adding infrastructure: Redis, Sidekiq, a separate worker process, a monitoring dashboard, another thing to deploy and keep running. Curling IO Version 2 uses Delayed Job backed by PostgreSQL, which works well but requires a separate worker daemon alongside the web process.

\n

Curling IO Version 3 runs on the BEAM (Erlang's virtual machine), and background jobs are just another process in the same runtime. No Redis. No separate worker. No additional infrastructure. This post covers how we built it, why we chose SQLite persistence over in-memory queues, and how the whole thing fits into a few hundred lines of Gleam.

\n

The BEAM Advantage

\n

The BEAM VM was designed for telecom systems that needed to handle millions of concurrent operations without downtime. Every BEAM application already has lightweight processes, supervisors, and message passing built in. These aren't OS threads. They're managed by the VM's own scheduler, and you can run hundreds of thousands of them in a single OS process.

\n

This means a \"background worker\" isn't a separate service. It's just another process running alongside your HTTP handlers, sharing the same runtime and the same log output. Starting one is a function call, not a deployment.

\n

Why Not Just Fire-and-Forget?

\n

The simplest approach on BEAM would be to spawn a process for each job and let it run. Or use an OTP actor with in-memory messages. We considered this, but it has a gap: if the server restarts, every pending job disappears. A lost login email is minor (the user just requests another), but we'll also be running draw schedule generation, payment processing through Stripe, and accounting syncs. Losing those mid-flight is a real problem.

\n

We wanted durability without complexity. SQLite was already there.

\n

A Separate Database

\n

Curling IO already uses SQLite for its sport-specific databases (one per sport). We could have added a jobs table there, but we deliberately put it in a separate shared.db. Background job processing means frequent writes (insert job, mark running, mark completed), and there's no reason to contend for write locks on the sport databases. SQLite uses a write-ahead log with a single writer, so separating the workloads means job processing never blocks a registration query and vice versa.

\n

The jobs table:

\n
CREATE TABLE IF NOT EXISTS jobs (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  kind TEXT NOT NULL,
  payload TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  attempts INTEGER NOT NULL DEFAULT 0,
  max_attempts INTEGER NOT NULL DEFAULT 3,
  max_running_seconds INTEGER NOT NULL DEFAULT 120,
  error TEXT,
  created_at INTEGER NOT NULL,
  run_at INTEGER NOT NULL,
  started_at INTEGER,
  completed_at INTEGER
) STRICT;
\n

Enqueue Is Just an INSERT

\n

From a request handler, sending a background email looks like this:

\n
jobs.enqueue_email(request_context.shared_db, request_context.postmark_api_token, email)
\n

Under the hood, that's a single INSERT INTO jobs with the email details serialized as JSON in the payload column. It returns in microseconds. The user gets their response immediately instead of waiting 100-500ms for the Postmark API call to complete.

\n
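As a rough sketch in Python's sqlite3 module (schema trimmed, function and payload names invented), the entire enqueue path is one INSERT:

```python
import json
import sqlite3
import time

shared = sqlite3.connect(':memory:')
shared.execute(
    'CREATE TABLE jobs ('
    ' id INTEGER PRIMARY KEY AUTOINCREMENT,'
    ' kind TEXT NOT NULL,'
    ' payload TEXT NOT NULL,'
    ' status TEXT NOT NULL,'
    ' created_at INTEGER NOT NULL,'
    ' run_at INTEGER NOT NULL)')

def enqueue_email(conn, email):
    # Enqueue is a single INSERT; the caller returns immediately.
    now = int(time.time())
    conn.execute(
        'INSERT INTO jobs (kind, payload, status, created_at, run_at)'
        ' VALUES (?, ?, ?, ?, ?)',
        ('email', json.dumps(email), 'pending', now, now))

enqueue_email(shared, {'to': 'skip@example.com', 'template': 'login'})
```

No broker, no serialization format beyond JSON in a TEXT column, and the job is durable the moment the INSERT commits.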

The Worker: An OTP Actor

\n

The worker is a single OTP actor, which is BEAM's abstraction for a stateful process that handles messages sequentially. Ours has exactly one message type:

\n
pub type Message {
  CheckForWork
}
\n

On startup, the actor sends itself a CheckForWork message after a one-second delay. When it handles that message, it queries for pending jobs, processes them, and schedules the next check. It's a self-sustaining polling loop:

\n
fn handle_message(state, msg) {
  case msg {
    CheckForWork -> {
      process_pending_jobs(state)
      process.send_after(state.self, 1000, CheckForWork)
      actor.continue(state)
    }
  }
}
\n

Each poll picks up a batch of pending jobs where run_at is in the past, processes them one at a time, and marks each as completed or failed. The actor never crashes on a bad job. Errors are caught and logged, and the actor keeps processing the rest of the batch.

\n
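One poll cycle reduces to a SELECT and a status UPDATE per claimed job. A sketch in Python's sqlite3 module with a simplified, invented schema:

```python
import sqlite3
import time

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE jobs ('
    ' id INTEGER PRIMARY KEY,'
    ' status TEXT NOT NULL,'
    ' run_at INTEGER NOT NULL,'
    ' started_at INTEGER)')
now = int(time.time())
# Two jobs due now, one scheduled an hour out.
conn.executemany(
    'INSERT INTO jobs (status, run_at) VALUES (?, ?)',
    [('pending', now - 10), ('pending', now - 5), ('pending', now + 3600)])

# Claim due pending jobs, oldest first, in a small batch.
due = conn.execute(
    'SELECT id FROM jobs WHERE status = ? AND run_at <= ?'
    ' ORDER BY run_at LIMIT 10',
    ('pending', now)).fetchall()
for (job_id,) in due:
    conn.execute(
        'UPDATE jobs SET status = ?, started_at = ? WHERE id = ?',
        ('running', now, job_id))
    # ... run the handler, then mark completed or failed ...
```

Because there's only one worker and one writer, there's no need for SELECT ... FOR UPDATE or advisory locks to prevent double-claiming.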

Retry With Exponential Backoff

\n

Not every failure is permanent. Postmark might be briefly unreachable, or return a 500. The retry logic is straightforward: if a job fails and has attempts remaining, set it back to pending with a future run_at. The backoff is exponential (5 seconds, 25 seconds, 125 seconds):

\n
Attempt 1 fails → retry in 5s
Attempt 2 fails → retry in 25s
Attempt 3 fails → permanently failed
\n
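That schedule is just powers of five. A one-line sketch (the function name is ours, not from the codebase):

```python
def backoff_seconds(attempts):
    # attempts = number of failures so far: 5s, 25s, 125s, ...
    return 5 ** attempts

schedule = [backoff_seconds(n) for n in (1, 2, 3)]
```

The retry's run_at is simply now plus this delay, so the normal poll loop picks it up with no extra machinery.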

Three attempts is the default, configurable per job via the max_attempts column. A permanently failed job stays in the table with its error message, so you can see exactly what went wrong:

\n
SELECT id, kind, error, attempts FROM jobs WHERE status = 'failed';
\n

No separate monitoring dashboard needed. It's just SQL.

\n

Supervised for Resilience

\n

The actor runs under an OTP supervisor. If the actor process dies, the supervisor restarts it automatically under the same registered name. The polling loop resumes, and any jobs that were marked running when the crash happened get recovered by a cleanup sweep.

\n

Each job has a max_running_seconds column (default: 120 seconds). At the start of every poll cycle, the actor checks for jobs that have been in running status longer than their timeout and resets them back to pending. Since the attempts counter was already incremented when the job entered running, the existing retry and backoff logic handles the rest, including capping retries at max_attempts. Completed and failed jobs are purged after 7 days, which is plenty of time to debug anything that goes wrong.
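\n

The sweep itself is two statements. Here it is in Python/SQL terms; started_at and finished_at are assumed column names for illustration (the post only names status and max_running_seconds):

```python
import sqlite3

SEVEN_DAYS = 7 * 86_400

conn = sqlite3.connect(":memory:")
# started_at / finished_at are assumed column names for illustration.
conn.execute("""
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY,
        status TEXT,
        started_at INTEGER,
        finished_at INTEGER,
        max_running_seconds INTEGER NOT NULL DEFAULT 120
    )
""")

def cleanup_sweep(now: int) -> None:
    # Reset jobs stuck in 'running' past their timeout; the normal retry and
    # backoff logic takes over, since attempts was already incremented when
    # each job entered 'running'.
    conn.execute(
        "UPDATE jobs SET status = 'pending' "
        "WHERE status = 'running' AND ? - started_at > max_running_seconds",
        (now,),
    )
    # Purge terminal jobs after 7 days.
    conn.execute(
        "DELETE FROM jobs WHERE status IN ('completed', 'failed') "
        "AND ? - finished_at > ?",
        (now, SEVEN_DAYS),
    )

conn.executemany(
    "INSERT INTO jobs (status, started_at, finished_at) VALUES (?, ?, ?)",
    [
        ("running", 0, None),        # stuck: running far past its timeout
        ("completed", 0, 0),         # finished long ago: purged
        ("completed", 0, 999_999),   # finished recently: kept
    ],
)
cleanup_sweep(1_000_000)
statuses = sorted(row[0] for row in conn.execute("SELECT status FROM jobs"))
```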

\n

This is the BEAM's bread and butter. Erlang's \"let it crash\" philosophy means you don't write defensive code to prevent every possible failure. You write a supervisor that recovers from it, and a cleanup sweep that catches anything that slips through. The result is less code that's more resilient.

\n

The Full Architecture

\n

Here's what happens when a user requests a login email:

\n
    \n
  1. Request handler validates the email, generates a token, and calls jobs.enqueue_email(), which is just an INSERT into shared.db (microseconds)
  2. Response returns immediately with the \"check your email\" page
  3. Jobs actor picks up the pending job on its next poll (within 1 second)
  4. Actor marks the job as running, calls the Postmark API, marks it completed
  5. If Postmark is down, the job goes back to pending with a backoff delay
\n

What We Didn't Need

\n

Here's what's absent from this setup:

\n

No Redis. No Sidekiq or separate worker process. No monitoring dashboard. No extra service to deploy and keep running.

\n

This setup can comfortably scale to tens of thousands of clubs. SQLite handles the throughput, the BEAM handles the concurrency, and you can read the whole thing in one sitting.

\n

What's Next

\n

The jobs system is designed to grow. Adding a new job type means adding a new kind string and a handler function. Long-running report generation, data imports, email campaigns: they all follow the same pattern. INSERT a row, let the actor pick it up.

\n
\n

This is Part 4 of the Curling IO Foundation series. Next up: Why We Chose SQLite.

", "url": "https://curling.io/blog/background-jobs-without-the-baggage", "title": "Background Jobs Without the Baggage", "summary": "In most web stacks, adding background jobs means adding infrastructure: Redis, Sidekiq, a separate worker process, a monitoring dashboard, another thing to deploy and keep running. Curling IO Version 2 uses Delayed Job backed by PostgreSQL, which works well but requires a separate worker daemon alongside the web process.", "date_modified": "2026-02-25T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "foundation", "gleam", "beam", "otp", "architecture" ] }, { "id": "https://curling.io/blog/passwordless-auth-done-right", "content_html": "

Curling IO has been passwordless since Version 2. No passwords to remember, no passwords to steal, no password reset flows. You enter your email, we send you a short-lived login code, and you're in. It's been working well for over a decade, and for Version 3 we're keeping the same approach while fixing some rough edges and adding multi-email support.

\n

But first, let's talk about why we made this controversial decision in the first place.

\n

Why Not Passwords?

\n

The conventional wisdom is that passwords are the \"real\" way to authenticate and login links are a shortcut. We think it's the opposite. Passwords are the shortcut, and they create more problems than they solve. We're not alone. Slack, Notion, Medium, and Substack all use passwordless login links as their primary authentication method.

\n

Passwords get reused. Study after study shows that most people reuse the same password across multiple sites. When any one of those sites gets breached, attackers try those credentials everywhere else. This isn't a theoretical risk. It happens constantly, and it's the number one way accounts get compromised. As an application developer, you can't control what your users do on other sites, but you inherit the risk.

\n

Passwords get phished. A convincing fake login page can harvest credentials at scale. Login links are inherently resistant to phishing because there's no credential to hand over. The code is short-lived and tied to a specific email address.

\n

Passwords require just as much email verification. Here's the thing people overlook: even with passwords, you still need to verify the user's email. Otherwise anyone can create an account with someone else's email address. So you end up building the same email verification flow that login links use, plus the password management on top of it. You're not avoiding email. You're adding a password layer on top of it.

\n

The industry agrees. Look at what banks and other high-security sites do now: even after you enter your email and password, they send you a one-time code via text or email. Password alone isn't enough anymore, so they've added the exact same verification step we use. Some have a \"remember this device for 2 weeks\" checkbox, but in practice it rarely works. You log in an hour later after your session times out and they're sending you another code. It's not just frustrating. It reveals that the password itself has become a meaningless step. The one-time code is doing the actual authentication. The password is just theater you have to sit through before you get to the part that matters. We skip it entirely.

\n

Passwords add surface area. With passwords, you need: secure hashing (bcrypt/argon2), a \"forgot password\" flow (which is itself a login link), password strength validation, brute force protection on the login form, and secure storage. Login links need rate limiting and token management too, but the overall surface area is smaller. There's no credential to store long-term, no password policy to enforce, and no \"forgot password\" flow to build since the login flow is the recovery flow.

\n

\"But what about password managers?\" Password managers are great. They solve the reuse and phishing problems for users who use them. But most users don't. And for the curling club use case, where members log in a handful of times per season, asking them to set up and maintain a password manager is unrealistic. Login links meet users where they are.

\n\n

A login link is a short-lived code sent to your email. The security model rests on a simple assumption: if you can read email sent to that address, you control that address. This is the same assumption that password reset flows rely on, but we cut out the middleman.

\n

Here's what makes the implementation secure:

\n

High-entropy tokens. Each login code is drawn from a high-entropy space, but it doesn't need to be astronomical because of the next two points. The token generator uses Gleam's binary pattern matching to map cryptographically random bytes to an unambiguous alphabet:

\n
pub fn generate_token() -> String {
  let alphabet_size = string.length(token_alphabet)
  crypto.strong_random_bytes(token_length)
  |> pick_chars(alphabet_size, \"\")
}

fn pick_chars(bytes: BitArray, alphabet_size: Int, acc: String) -> String {
  case bytes {
    <<b, rest:bits>> -> {
      let idx = b % alphabet_size
      let ch = string.slice(token_alphabet, idx, 1)
      pick_chars(rest, alphabet_size, acc <> ch)
    }
    _ -> acc
  }
}
\n
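
For comparison, here's the same idea in Python using the secrets module. The alphabet and length below are placeholders (the real values aren't shown above), and secrets.choice also sidesteps the slight modulo bias of mapping raw bytes with b % alphabet_size:

```python
import secrets

# Placeholder alphabet of unambiguous characters (no 0/O, 1/I/l) and a
# placeholder length; the real values are not published.
TOKEN_ALPHABET = "23456789ABCDEFGHJKMNPQRSTUVWXYZ"
TOKEN_LENGTH = 8

def generate_token() -> str:
    # Each character is drawn uniformly from a CSPRNG.
    return "".join(secrets.choice(TOKEN_ALPHABET) for _ in range(TOKEN_LENGTH))

token = generate_token()
```

\n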

Aggressive rate limiting. Authentication attempts are tightly rate limited across multiple layered time windows. Brute-forcing the token space is completely impractical. Each window is checked in sequence using Gleam's use syntax. If any window is exceeded, it short-circuits and returns the retry-after time without checking the rest:

\n
pub fn check_verification_attempt(
  limiter: RateLimiter,
  email: String,
  now: Int,
) -> Result(Nil, Int) {
  let key = key_prefix <> string.lowercase(email)
  use _ <- result.try(check(limiter, key, attempts_per_minute, 60, now))
  use _ <- result.try(check(limiter, key, attempts_per_15_minutes, 900, now))
  use _ <- result.try(check(limiter, key, attempts_per_hour, 3600, now))
  check(limiter, key, attempts_per_day, 86_400, now)
}
\n
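
A minimal sliding-window version of the same layered check, in Python. The per-window limits here are placeholders, not our real numbers:

```python
from collections import defaultdict

# Placeholder limits: (max attempts, window in seconds).
LIMITS = [(5, 60), (10, 900), (20, 3600), (50, 86_400)]

class RateLimiter:
    def __init__(self):
        self.hits = defaultdict(list)  # key -> timestamps of prior attempts

    def check(self, key: str, now: float):
        """None if allowed; otherwise seconds until the first exceeded
        window frees up (short-circuits like the `use` chain above)."""
        for max_attempts, window in LIMITS:
            recent = [t for t in self.hits[key] if now - t < window]
            if len(recent) >= max_attempts:
                return window - (now - recent[0])
        self.hits[key].append(now)
        return None

limiter = RateLimiter()
key = "verify:curler@example.com"
results = [limiter.check(key, 0.0) for _ in range(6)]
# The first five attempts pass; the sixth trips the per-minute window.
```

\n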

Short-lived. Each code expires quickly. After verification, the token hash is cleared from the database. There's no window for replay attacks.

\n

Constant-time comparison. We look up the email row first, then compare the submitted token hash against the stored hash using Gleam's crypto.secure_compare. This takes the same amount of time regardless of where the first mismatch occurs, preventing timing attacks that could leak information about partial matches. If the current token doesn't match, we check the previous token, not to log them in, but to return a specific error message guiding the user to check for a more recent email:

\n
let hashes_match =
  crypto.secure_compare(
    bit_array.from_string(submitted_hash),
    bit_array.from_string(stored_hash),
  )

case hashes_match {
  True ->
    case row.token_expires_at {
      Some(expires_at) if expires_at > now ->
        complete_verification(conn, row, now)
      _ -> Error(ExpiredToken)
    }
  False ->
    case row.previous_token_hash {
      Some(prev_hash) -> {
        let prev_match =
          crypto.secure_compare(
            bit_array.from_string(submitted_hash),
            bit_array.from_string(prev_hash),
          )
        case prev_match {
          True -> Error(SupersededToken)
          False -> Error(InvalidToken)
        }
      }
      None -> Error(InvalidToken)
    }
}
\n

Every branch returns a specific error variant, and the compiler ensures we handle all of them. No forgotten edge cases. The previous token never grants access. It only exists to give a better error message than a generic \"invalid token.\"

\n

No credential storage. There are no passwords in our database. In a breach scenario, attackers get token hashes that are short-lived. Compare that to a password database where every hash is a target for offline cracking.

\n

It's also worth noting that the main vulnerability of login links is a compromised email account. But if someone's email is compromised, they have much bigger problems than their Curling IO profile. And a password-based system is equally vulnerable in that scenario. The attacker just clicks \"forgot password\" and they're in.

\n

The bottom line: login links are not a weaker form of authentication. For our use case, they're stronger. They eliminate entire categories of attacks (credential stuffing, phishing, password spraying) while being simpler for both users and developers.

\n

Takeaways from Version 2

\n

Version 2's login link flow is straightforward: enter your email, get a code, enter the code, you're in. It works. But after a decade and hundreds of thousands of users, a few pain points emerged.

\n

One email, one identity. In Version 2, your email is your identity. Log in with a different email, you get a different account. This has been a real source of confusion. A club manager registers with their work email, then tries to log in from their phone with their personal email, and they're looking at an empty account. We'd get support requests about \"missing registrations\" that were really just the same person with two accounts.

\n

Unhelpful error messages. This one came up a lot. A curler would request a login link, wait a minute or two, not see the email, and request another one. The first email was usually just delayed. Many email clients only check for new mail every 5 to 15 minutes. When the first email finally arrived and they clicked it, they'd get a generic \"invalid token\" error because the second request had replaced the first. They'd wonder if they mistyped something, try again, get confused, and contact support.

\n

Version 3 Adds Multiple Email Addresses

\n

This is the big addition. Users can now associate multiple verified email addresses with their account:

\n

Add an email. From the account page, enter a new email address. We send a verification code using the same mechanism as login.

\n

Verify it. Enter the code. The email is now linked to your account and marked as verified. Unverified emails can't be used for anything.

\n

Set it as primary. Any verified email can be promoted to primary. Your primary email is what shows up across the platform, in admin dashboards, on team rosters, in email communications. Changing your primary is a single click.

\n

Remove it. Non-primary emails can be removed. You can't remove your primary (switch to a different one first) and you can't remove your last verified email (you need at least one to log in).

\n

Log in with any of them. This is the key benefit. Once you've added and verified multiple emails, logging in with any of them resolves to the same account, the same registrations, the same history. No more duplicate accounts because you used a different email.

\n

How It Works Under the Hood

\n

Each email row tracks a few key fields:

\n
email               -- globally unique, one owner max
user_id             -- nullable; not set until verified for first-time logins
token_hash          -- pending verification token
previous_token_hash -- the token that was replaced (for superseded detection)
verified_at         -- null until verified
\n

This single structure handles both login verification and email ownership.
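\n

The payoff is that user resolution becomes a single indexed lookup, whichever verified email the person logs in with. A Python/sqlite3 sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE emails (
        email TEXT PRIMARY KEY,  -- globally unique, one owner max
        user_id INTEGER,         -- null until verified for first-time logins
        verified_at INTEGER      -- null until verified
    )
""")
conn.executemany(
    "INSERT INTO emails VALUES (?, ?, ?)",
    [("work@example.com", 42, 1000), ("personal@example.com", 42, 2000)],
)

def resolve_user(email: str):
    # Only verified emails participate in login.
    row = conn.execute(
        "SELECT user_id FROM emails WHERE email = ? AND verified_at IS NOT NULL",
        (email.strip().lower(),),
    ).fetchone()
    return row[0] if row else None

# Both verified emails resolve to the same account.
```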

\n

The design means multi-email support was almost free. The same verification flow used for login handles email verification for logged-in users. The only difference is that the user is already set when you're adding an email to your existing account.

\n

Superseded Token Detection

\n

Remember the delayed email problem from Version 2? A curler requests a link, doesn't see it right away, requests another, and then clicks the first one when it finally arrives. Version 3 handles this gracefully.

\n

When a new login link is requested, we don't just overwrite the old token. We move the current token_hash to previous_token_hash first. If someone then tries the old token, we can tell the difference:

\n
pub type AuthError {
  InvalidToken                  // not found or already used
  SupersededToken               // replaced by a newer token
  ExpiredToken                  // past expiry
  UserDisabled                  // account disabled
  DatabaseError(sqlight.Error)
}
\n

A SupersededToken gets a specific message: \"This code has been replaced. Check for a more recent email or request a new link.\" Instead of leaving users confused, we point them in the right direction.
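\n

The rotation step that makes this possible is tiny. Sketched in Python, with field names mirroring the schema above:

```python
def rotate_token(row: dict, new_hash: str) -> dict:
    # Keep the replaced token so a late click on an older email can be
    # recognized as superseded instead of generically invalid.
    return {**row, "previous_token_hash": row["token_hash"], "token_hash": new_hash}

row = {"token_hash": "hash_a", "previous_token_hash": None}
row = rotate_token(row, "hash_b")
```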

\n

This is a pattern we use throughout Version 3. Gleam's union types let us model every possible error state explicitly, and the compiler ensures we handle all of them. No forgotten edge cases, no generic catch-all error messages.

\n

OAuth (Google and Facebook)

\n

Login links aren't the only way in. Google and Facebook login work alongside them for users who prefer one-click SSO.

\n

The OAuth flow uses a central auth subdomain (auth.curling.io) to handle callbacks, since both providers require a fixed redirect URI. After the provider verifies the user's identity, we look up the email in the same table used for login links. If the email exists with a user, log them in. If not, create the user and a verified email row (OAuth emails are pre-verified by the provider).

\n

The same user resolution logic, regardless of how you authenticate.

\n

A Note on Rolling Your Own Auth

\n

Writing your own authentication is generally a bad idea. Battle-tested libraries like Devise (Ruby), NextAuth (JavaScript), and django-allauth (Python) exist for good reason. They've been hardened over years of real-world use and security audits. If you're building on a stack that has a mature auth library, use it.

\n

We couldn't find an existing Gleam auth library that was the right fit for our specific needs, but we didn't design in a vacuum. We studied Devise's modules extensively (Lockable, Timeoutable, Trackable, Confirmable) and used them as a checklist for what a production auth system needs to handle. Every security decision we made, from constant-time comparison to layered rate limiting to email enumeration prevention, was informed by what these libraries have learned the hard way over the past decade.

\n

What's Next

\n

Authentication is foundational, but the real value of multi-email support shows up when it connects to everything else: registrations, team management, admin permissions, payment history. As we build those features, having clean user resolution across multiple emails will simplify a lot of workflows that were painful in Version 2.

\n

We're also considering optional two-factor authentication for club administrators, via authenticator apps or SMS, for organizations that want to enforce an extra layer of security on accounts with administrative access.

\n
\n

This is Part 3 of the Curling IO Foundation series. Next up: Background Jobs Without the Baggage.

", "url": "https://curling.io/blog/passwordless-auth-done-right", "title": "Passwordless Auth, Done Right", "summary": "Curling IO has been passwordless since Version 2. No passwords to remember, no passwords to steal, no password reset flows. You enter your email, we send you a short-lived login code, and you're in. It's been working well for over a decade, and for Version 3 we're keeping the same approach while fixing some rough edges and adding multi-email support.", "date_modified": "2026-02-20T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "foundation", "authentication", "gleam", "security" ] }, { "id": "https://curling.io/blog/bilingual-by-design", "content_html": "

Curling IO serves hundreds of clubs across Canada, where English and French aren't optional, they're official languages. A club in Quebec needs a fully French experience. A national organization like Curling Canada needs both. Rails has mature i18n support and Version 2 has been fully bilingual from the start, but after a decade of maintaining around 10,000 YAML translation keys, we've hit the limits of what that approach can catch: missing keys, missing translations, and unused keys that accumulate silently over time.

\n

In Version 3, we wanted compile-time guarantees that make those problems impossible. This post covers how we designed the i18n system, why we split it into two layers, and what we changed from Version 2.

\n

Two Kinds of Text, Two Different Systems

\n

Every bilingual web application has two fundamentally different kinds of text:

\n
    \n
  1. App labels: static UI strings like \"Leagues\", \"Contact\", \"Email\". These are written by developers and change at deploy time.
  2. Content: user-entered data like league names, summaries, and descriptions. These are entered by club managers and may change at any time.
\n
                  App Labels          Content
Who writes it     Developers          Club managers
When it changes   At deploy time      At any time
Where it lives    Compiled code       Database
Scope             Global (all clubs)  Per-record
\n

In Version 2, app labels came from Rails i18n YAML files loaded at boot, and content was stored in per-language database columns (name_en, name_fr, summary_en, summary_fr). This worked, but had pain points we wanted to address.

\n

What Version 2 Taught Us

\n

The YAML files grew unwieldy. Over the years, our en.yml and fr.yml files accumulated thousands of keys each. Finding unused keys was a manual process. Ensuring every English key had a French counterpart required discipline that occasionally slipped, resulting in untranslated UI elements showing up in production as blank strings or key paths like en.registrations.confirm_button.

\n

Per-language columns didn't scale. The leagues table in Version 2 has name_en, name_fr, summary_en, summary_fr, description_en, description_fr, post_registration_message_en, post_registration_message_fr, required_reading_en, required_reading_fr. That's 10 columns just for translated text on one table. Every new translatable field requires a migration adding two columns. If we ever added a third language, it would mean another column for every translatable field on every table.

\n

Blank vs. missing was ambiguous. When name_fr was empty, did the manager intentionally leave it blank, or did they just not get around to translating it? The system couldn't tell the difference.

\n

Version 3: App Label Translations

\n

App labels are compiled directly into the application. No database, no YAML files, no runtime file loading. Just Gleam code.

\n

The structure is simple:

\n
translations/keys.gleam   constants for every key
translations/en.gleam     English values
translations/fr.gleam     French values
translations.gleam        builds the lookup, provides t()
\n

Key constants are pub const values in keys.gleam. This is extra code we've added specifically to get compile-time correctness. Every key used in a page or language file references these constants, so a misspelled key is a compile error rather than a silent runtime bug:

\n
pub const leagues = \"leagues\"
pub const contact = \"contact\"
pub const email_label = \"email_label\"
\n

Notice these are flat, no nesting. In Version 2, Rails i18n keys were nested by namespace (en.registrations.confirm_button, en.leagues.index.title, etc.). In practice, the nesting added complexity without much benefit: keys were harder to grep for, reorganizing namespaces meant touching multiple files, and deeply nested YAML was error-prone to edit. Flat keys are simpler to search, simpler to maintain, and work just as well with thousands of entries.

\n

Language files map those constants to translated values:

\n
// en.gleam
#(k.leagues, \"Leagues\"),
#(k.contact, \"Contact\"),
#(k.email_label, \"Email\"),

// fr.gleam
#(k.leagues, \"Ligues\"),
#(k.contact, \"Contact\"),
#(k.email_label, \"Courriel\"),
\n

At startup, translations.build() assembles these into a nested dictionary (outer key is language code, inner key is translation key). This dictionary gets stored in the application context and threaded into every request. Lookup is two hash table reads, effectively instant.

\n

In pages, we partial-apply the lookup function for the current language:

\n
let t = t.t(rc.translations, rc.lang, _)

// Then just call t() with key constants
element.text(t(k.leagues)) // \"Leagues\" or \"Ligues\"
element.text(t(k.contact)) // \"Contact\" (same in both languages!)
\n

This is clean, type-safe, and impossible to mess up at runtime. If a key is missing from a language file, the UI shows \"[missing: postal_code_label]\" so it's immediately clear something needs to be fixed. Much better than a silent blank.
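\n

The lookup itself is trivial. Here it is mirrored in Python, with dictionary literals standing in for the generated Gleam modules:

```python
EN = {"leagues": "Leagues", "contact": "Contact", "email_label": "Email"}
FR = {"leagues": "Ligues", "contact": "Contact", "email_label": "Courriel"}
TRANSLATIONS = {"en": EN, "fr": FR}  # outer key: language, inner key: label

def t(translations: dict, lang: str, key: str) -> str:
    value = translations.get(lang, {}).get(key)
    # A loud marker instead of a silent blank when a key is missing.
    return value if value is not None else f"[missing: {key}]"
```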

\n

Catching Problems at Compile and Test Time

\n

Using constants instead of raw strings means a typo like t(k.legues) is a compile error because the constant doesn't exist. That alone eliminates an entire class of bugs we dealt with in Version 2.

\n

We also have a test that verifies every English key has a French counterpart and vice versa:

\n
pub fn should_have_same_keys_in_en_and_fr_test() {
  let en_keys = en.all() |> dict.keys() |> set.from_list()
  let fr_keys = fr.all() |> dict.keys() |> set.from_list()
  set.difference(en_keys, fr_keys) |> should.equal(set.new())
  set.difference(fr_keys, en_keys) |> should.equal(set.new())
}
\n

Adding a key to English without adding it to French fails the test suite. No more discovering untranslated strings in production.

\n

We also have a script that checks for keys defined but never actually used in any page or template (dead key detection). Between the compiler, the parity test, and the unused key checker, translation problems are caught long before they reach users.

\n

Notice how small the test surface is. Because Gleam's type system handles so much at compile time, our tests focus on validating design decisions and business logic rather than guarding against runtime surprises. In Version 2 with Rails, a lot of test effort goes toward catching things that a type system would prevent: nil values, wrong argument types, missing method calls. In Gleam, the compiler handles all of that, so our translation tests are purely about the logic we actually care about: do all languages have the same keys? Does the fallback behavior work correctly?

\n

Version 3: Content Translations

\n

Content translations (the user-entered data) take a different approach. Instead of per-language columns, each translatable table has a single translations JSON column:

\n
{\"fr\": {\"name\": \"Ligue du lundi\", \"summary\": \"Résumé en français\"}}
\n

The primary language value lives in the regular column (name, summary). Only secondary language overrides go in the JSON. This means:

\n\n

The resolution logic is straightforward: if the user is viewing in the primary language, return the regular column. If they're viewing in a secondary language, look it up in the JSON. If it's not there, fall back to the primary language value. Every field degrades gracefully. A missing French translation shows the English text rather than nothing.

\n
// Parse once per record
let parsed = i18n.parse(league.data.translations)

// Resolve each field for the current language
let name = i18n.resolve(league.data.name, parsed, lang, primary_lang, \"name\")
\n

This solves the blank-vs-missing ambiguity from Version 2: if a field isn't in the translations JSON, it simply hasn't been translated yet, and the primary language value is shown. No guessing.
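\n

The resolution rule fits in a few lines. A Python sketch of the fallback behavior described above:

```python
import json

def resolve(primary_value: str, translations_json: str,
            lang: str, primary_lang: str, field: str) -> str:
    # Primary language reads the regular column; secondary languages read
    # the JSON override and fall back to the primary value when absent.
    if lang == primary_lang:
        return primary_value
    overrides = json.loads(translations_json or "{}")
    return overrides.get(lang, {}).get(field) or primary_value

tj = '{"fr": {"name": "Ligue du lundi"}}'
```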

\n

Language Routing

\n

Language selection comes from the URL path, not cookies or browser headers. This keeps things deterministic and shareable. A French URL always shows French content:

\n\n

The router extracts the language from the first path segment, stores it in the request context, and every page and API handler downstream uses it. Clean, predictable, and great for SEO.
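\n

A sketch of that extraction step in Python; the default language is an assumption for illustration (the post doesn't state one):

```python
SUPPORTED = {"en", "fr"}
DEFAULT_LANG = "en"  # assumed default for illustration

def extract_lang(path: str):
    # /fr/leagues -> ("fr", "/leagues"); unprefixed paths use the default.
    segments = [s for s in path.split("/") if s]
    if segments and segments[0] in SUPPORTED:
        return segments[0], "/" + "/".join(segments[1:])
    return DEFAULT_LANG, path
```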

\n

What's Next

\n

The i18n foundation is in place. As we build out more features (admin interfaces, registration flows, email notifications) we'll add keys to the app label system and the translations JSON column to more tables. The system is designed to scale to thousands of keys without any architectural changes.

\n
\n

This is Part 2 of the Curling IO Foundation series. Next up: passwordless authentication and multi-email support.

", "url": "https://curling.io/blog/bilingual-by-design", "title": "Bilingual by Design", "summary": "Curling IO serves hundreds of clubs across Canada, where English and French aren't optional, they're official languages. A club in Quebec needs a fully French experience. A national organization like Curling Canada needs both. Rails has mature i18n support and Version 2 has been fully bilingual from the start, but after a decade of maintaining around 10,000 YAML translation keys, we've hit the limits of what that approach can catch: missing keys, missing translations, and unused keys that accumulate silently over time.", "date_modified": "2026-02-15T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "foundation", "i18n", "gleam", "architecture" ] }, { "id": "https://curling.io/blog/the-next-version-of-curling-io", "content_html": "

We're building the next version of Curling IO - same features, same experience, stronger foundation. After over a decade of serving hundreds of curling organizations across Canada, the US, and internationally, we're investing in technical upgrades that set Curling IO up for the next 20+ years.

\n

This is Part 1 of a series of technical posts documenting our journey and decisions as we build.

\n

How This Affects You

\n

For club managers: Curling IO isn't changing from your perspective. We're upgrading the technical infrastructure behind the scenes - no downtime, no disruption, no action required from you. Your data stays exactly where it is. When the upgrade is complete (thoroughly tested and proven), the transition will be seamless. You'll have the same Curling IO you know, even faster and more reliable.

\n

For curlers: You won't notice the technology change. Registration, scoring, schedules, results - everything works the same way. You might notice things feel snappier, especially during peak registration times or when checking live scores at major competitions.

\n

Why a New Foundation?

\n

Curling IO has been serving curling clubs since late 2014. We upgraded the platform once already in 2019, and our Version 2 system has been running successfully on Rails ever since. Everything works, clubs are happy, and growth is strong. So why invest in a new foundation?

\n

Because we're planning to be here for the long haul. This new foundation ensures Curling IO can serve curling clubs not just for the next few years, but for the next 20+ years. While competitors come and go, we're investing in infrastructure built to last and evolve with the sport.

\n

Three key reasons:

\n
    \n
  1. AI Agent APIs - We're adding new APIs specifically designed for AI agents to work with easily. Your existing web interface isn't changing - these are additional capabilities on top of what you already use. As AI assistants become ubiquitous (whether you love them or not), we're designing for that reality. Users are already asking Claude.ai and Google's Gemini to help them register for leagues or check schedules - we want those interactions to work reliably.
  2. Concurrency and correctness - As the platform grows, we need stronger guarantees around correctness and performance. We want a tech stack that can handle massive concurrency - both during peak registration periods and during large traffic spikes for scoring and results at provincial and national competitions like the Scotties and Brier. It needs to prevent runtime errors and scale efficiently as the sport grows.
  3. Easier developer onboarding - We want a codebase that's easy to hand off between developers. Type safety and functional patterns make code self-documenting - new developers (with help from AI coding assistants) can jump in and contribute safely without deep institutional knowledge. Gleam's deterministic patterns are much easier for AI assistants to reason about than dynamic languages, which means faster onboarding and more productive collaboration between developers and their AI tools. This significantly reduces key-person dependency.
\n

Choosing the Tech Stack

\n

Given our goals - agent-native APIs, clean architecture, and battle-tested reliability at scale - we need a stack that can deliver:

\n

What we're looking for

\n\n

Why Gleam?

\n

Gleam checks every box. It's a type-safe functional language that compiles to both Erlang (for the server) and JavaScript (for the client). You get:

\n\n

What about other options?

\n

We didn't jump straight to Gleam. We seriously considered several other stacks:

\n

PostgREST + Elm/Lustre - This was a very serious contender using tools we're already familiar with. PostgREST provides instant APIs from your PostgreSQL schema, and Elm/Lustre provides type-safe UI. But we'd need something else for background jobs (like Go), and the tooling for building, debugging, and testing PostgreSQL functions isn't great. The multi-language setup felt fragmented.

\n

F# SAFE Stack - F# is a fantastic functional language with a solid .NET ecosystem. The SAFE stack (Suave/Saturn, Azure/AWS, Fable, Elmish) offers full-stack type safety with The Elm Architecture (TEA) on the frontend. We really liked this option, but ultimately chose Gleam for its BEAM VM benefits and simpler deployment story.

\n

TypeScript + Node + React - The obvious choice given its massive ecosystem and the sheer volume of material AI agents can draw from. We've been using React off and on since 2014 and like Redux (which is Elm-inspired) for the frontend. But JavaScript/TypeScript has issues we couldn't get past: nulls everywhere, a type system that feels bolted on (TypeScript's safety is opt-out with any, as, and !), relentless library churn, and recurring npm supply chain vulnerabilities. React itself has become overly complex. Popularity is a real advantage, but it wasn't enough to outweigh the correctness and simplicity we wanted.

\n

BEAM was the killer feature. None of these other options came close to what the BEAM VM offers: rock-solid concurrency, fault tolerance, and proven scalability (WhatsApp, Discord, etc.). Gleam gives us BEAM on the backend with familiar functional patterns (like Elm) on the frontend, all in a single language.

\n

Why not stick with Rails?

\n

To be clear: we love Ruby on Rails. It's been amazing for rapid development and has scaled well for our needs. For heavier interactive features, we've already been using Elm for frontend apps like Bracket Builder, Scheduling, Scoring, and Results widgets, with PostgREST providing clean APIs on the backend. This functional programming approach has worked really well.

\n

But for where we're headed, we need:

\n\n

This isn't about Rails being bad - we've been building with it since version 1.2 and nothing else lets you iterate that fast. But over time we've moved towards functional, compiled, typed languages because runtime errors are the absolute worst to debug and fix. Gleam gives us that correctness without giving up the developer experience we love about Rails.

\n

SQLite Over PostgreSQL

\n

This one surprised us.

\n

Curling IO Version 2 runs on PostgreSQL. We assumed we'd use Postgres for Version 3. But we're betting on SQLite instead.
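A key property of SQLite is that it's an embedded, in-process database: the engine runs inside your application rather than behind a network socket. To make that concrete, here's a minimal sketch using Python's standard-library sqlite3 module (Python is used purely for illustration here - Version 3 itself is Gleam - and the clubs table is a made-up example):

```python
import sqlite3

# SQLite runs in-process: "connecting" just opens a local file
# (or, here, an in-memory database). No database server, no network hop.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clubs (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO clubs (name) VALUES (?)", ("Granite Curling Club",))
row = conn.execute("SELECT name FROM clubs WHERE id = 1").fetchone()
print(row[0])  # Granite Curling Club
conn.close()
```

Every query above is a function call into a library linked with the application, which is why per-query latency is dominated by disk and CPU rather than network round-trips.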

\n

Why SQLite?

\n\n

What about scale?

\n

For the same infrastructure budget as Rails + PostgreSQL, we expect Gleam + SQLite to handle roughly:

\n\n

These numbers sound outrageous, but they aren't pulled out of thin air. BEAM processes are ~2KB each vs Ruby threads at ~1MB - a memory difference of roughly 500x per connection, which is where the connection multiplier comes from. And eliminating network round-trips to a separate database server (SQLite runs in-process) while leveraging BEAM's lightweight concurrency model accounts for the throughput gains. These are well-documented characteristics of the technology, not optimistic guesses.
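Working through the arithmetic behind that connection multiplier (the per-process and per-thread figures are the approximate ones quoted above, not measurements from this post):

```python
# Back-of-the-envelope memory comparison per concurrent connection.
beam_process_bytes = 2 * 1024        # ~2 KB for a lightweight BEAM process
ruby_thread_bytes = 1 * 1024 * 1024  # ~1 MB default Ruby thread stack
ratio = ruby_thread_bytes / beam_process_bytes
print(ratio)  # 512.0 - on the order of 500x more connections per GB of RAM
```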

\n

That said, we'll be validating everything with extensive benchmarking and real-world load tests as we build.

\n

If SQLite doesn't scale as expected, we'll just stick with PostgreSQL - an equally great database.

\n

Vertical-First Scaling

\n

Our scaling strategy is simple: start on a single powerful server and scale up from there.

\n\n

If we ever outgrow a single server, we have a clean path to splitting things up. But with BEAM's concurrency and SQLite's in-process speed, we expect that to be a long way off.

\n

What's Next

\n

We'll be building this out over the coming months alongside the existing platform. Curling IO Version 2 isn't going anywhere - it continues to run and receive updates as usual. When Version 3 is ready and thoroughly tested, we'll transition everyone over at once - a clean, coordinated switch rather than a drawn-out process. We won't make that switch until we're confident everything is solid.

\n
\n

This is Part 1 of the Curling IO Foundation series. Next up: bilingual support with compile-time guarantees.

", "url": "https://curling.io/blog/the-next-version-of-curling-io", "title": "The Next Version of Curling IO", "summary": "We're building the next version of Curling IO - same features, same experience, stronger foundation. After over a decade of serving hundreds of curling organizations across Canada, the US, and internationally, we're investing in technical upgrades that set Curling IO up for the next 20+ years.", "date_modified": "2026-02-12T00:00:00.000Z", "author": { "name": "Dave Rapin" }, "tags": [ "foundation", "architecture", "gleam", "sqlite", "scaling" ] } ] }