
Whilst there is some documentation, if you're building more involved command line interfaces with Yargs in TypeScript, you may find that you need to do a bit of extra work to get strong typing working well with commands that have builders. In this post, I'll demonstrate how to use Yargs to create statically typed commands with builders in TypeScript.
Before we start, I should say that I'm working with Yargs version 18.0.0 in this post. The type definitions come from Definitely Typed, and their version is 17.0.35. However, there's no significant difference in the types between Yargs 17 and 18, so the version mismatch isn't an issue.
As an aside, it's possibly worth mentioning that these days it's possible to go without third party libraries entirely to parse command line arguments thanks to features like parseArgs which have been part of Node.js since version 18. However, Yargs remains a popular choice and is still widely used. I have no plans to replace Yargs in my existing projects just yet.
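If you're curious, here's a minimal sketch of what that built-in approach can look like; the option names (verbose, name) are purely illustrative:
import { parseArgs } from 'node:util';

// Parse --verbose/-v and --name without any third party library;
// the option names here are illustrative only
const { values, positionals } = parseArgs({
  options: {
    verbose: { type: 'boolean', short: 'v' },
    name: { type: 'string' },
  },
  allowPositionals: true,
});

console.log(values.verbose, values.name, positionals);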
Let's start with a simple example. Imagine we want to create a command line tool that has a number of commands. Each command has its own options, and we want to use builders to define those options. Here's how we might set that up with Yargs. First, we have a main entry point:
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import { mySimpleCommand } from './commands/mySimpleCommand.js';
const _args = await yargs(hideBin(process.argv))
.scriptName('my-cli')
.option('verbose', { alias: 'v', type: 'boolean' })
.demandCommand(1, 'Please specify a command')
.command(mySimpleCommand)
// Add more commands here as needed
.help().argv;
As we can see, this imports a single command called mySimpleCommand. Let's look at how that command is defined:
import type * as yargs from 'yargs';
export const mySimpleCommand: yargs.CommandModule = {
command: 'simple-command',
describe: 'This is a simple command example',
builder: (b) =>
b
.option('myOption', {
type: 'string',
demandOption: true,
description: 'An option for the simple command.',
})
.help(),
handler: async (argv) => {
const { myOption } = argv; // myOption is `unknown` here
// Implement your command logic here
},
};
You can see we have a command called simple-command that has a single option called myOption in the builder. However, if we look at the handler function, we can see that myOption is of type unknown. This is because Yargs does not know the shape of the arguments that will be passed to the handler.
This is the problem we need to solve. Inside the handler, we want to have statically typed access to the options defined in the builder.
To achieve strong typing, we can define an interface that describes the shape of the arguments for our command. We can then use this interface to type the CommandModule. Here's how we can modify the mySimpleCommand to achieve this:
import type * as yargs from 'yargs';
interface Args {
myOption: string;
}
export const mySimpleCommand: yargs.CommandModule<unknown, Args> = {
command: 'simple-command',
describe: 'This is a simple command example',
builder: (b) =>
b
.option('myOption', {
type: 'string', // this is important for strong typing
demandOption: true,
description: 'An option for the simple command.',
})
.help(),
handler: async (argv) => {
const { myOption } = argv; // myOption is now `string`
// Implement your command logic here
},
};
There are three things to note here:
1. We've defined an Args interface that describes the shape of the arguments for the command.
2. We've set the CommandModule type to use Args as the second type parameter, which defines the return type of the builder.
3. In the builder, we've specified the type of myOption as string. This is crucial for strong typing to work correctly. Without this we will have compilation errors from TypeScript.
Now we have statically typed access to myOption inside the handler. Yay!
It's not unusual to have options that are shared among multiple commands. Imagine a common option that all commands need to use. How can we share that option definition among multiple commands while maintaining strong typing? We can achieve this by defining a shared interface for the common options and a function that adds those options to a builder. Here's how we can do that:
import type * as yargs from 'yargs';
export interface SharedOptions {
someSharedOption: string;
// other shared options could go here
}
export function getSharedOptions<T>(y: yargs.Argv<T>) {
return y.option('someSharedOption', {
type: 'string',
demandOption: true,
});
// other shared options could go here
}
Then, in our command files, we can import the interface and the function and use them like this:
import type * as yargs from 'yargs';
import { type SharedOptions, getSharedOptions } from './sharedOptions.js';
interface Args extends SharedOptions {
myOption: string;
}
export const mySimpleCommand: yargs.CommandModule<unknown, Args> = {
command: 'simple-command',
describe: 'This is a simple command example',
builder: (b) =>
getSharedOptions(b)
.option('myOption', {
type: 'string',
demandOption: true,
description: 'An option for the simple command.',
})
.help(),
handler: async (argv) => {
const { myOption, someSharedOption } = argv;
// Now you have statically typed access to myOption and someSharedOption
// Implement your command logic here
},
};
By following this pattern, we can create statically typed commands with builders in Yargs while also sharing common options among multiple commands.
It's a beautiful pattern that sparks joy in my soul. Happy coding!
There are also more technical reasons to care about commit messages. For example, if you're using a tool like semantic-release to automate your release process, it relies on conventional commit messages to determine the next version number and generate release notes. It turns out that Azure DevOps has some challenges when it comes to maintaining a git commit history of conventional commits, especially when merging pull requests. By default, Azure DevOps uses a commit strategy that creates a merge commit with a message like "Merged PR 123: Title of pull request". This works against conventional commits.

You can use the UI to change the commit message when completing a pull request, but it's very easy to forget to do this. And if you're using squash merges, you lose the individual commit messages from the feature branch, which can be a problem if you're trying to maintain a history of conventional commits.
There is a way to bend Azure DevOps to our will and allow us to control our commit messages. In this post, I'll show you how to do just that using the Azure DevOps API, some TypeScript and build validations. The fact that this mechanism lives in a build validation means you cannot forget to set the commit message. That's the feature.
This post is not, in fact, specifically about using conventional commits. That's just a common use case. Rather this post is about being able to control the commit message when merging pull requests in Azure DevOps.
The internet has been angry about Azure DevOps pull request commit messages for a while. There are feature requests which have been open since 2018 and Stack Overflow questions on the topic.
Azure DevOps very rarely gets new features these days, and so it's unlikely that we'll see any changes here. However, there is one avenue that is open to us. Azure DevOps has the ability for a pull request to be set to autocomplete, which means that it will automatically merge when all policies are satisfied. This is useful for ensuring that the pull request is merged without manual intervention once it meets the requirements. For example when build validations have passed, and the required reviewers have approved.
I've written about merging pull requests and setting autocomplete with the Azure DevOps API previously. We're going to build on that knowledge here, but add in the magic of setting the merge commit message when we set the pull request to autocomplete. This is achieved by the pull requests API, which allows us to update a pull request and set it to autocomplete with a specific merge commit message.
This should allow us to go from commits like this:

To commits like this:

I should say that I'm using conventional commits as my commit message style, but you can use whatever style you like. The important thing is that you have control over the commit message.
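If you're not familiar with the style, a conventional commit message is just a type, an optional scope and a description. Something like this (the scope and wording are made up):
feat(api): add support for filtering builds by date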
We're going to write a script that can be run in a build validation pipeline. This script will set the pull request to autocomplete with a specific merge commit message. This means that if you use conventional commits, you'll get to maintain a history of conventional commits in your git history.
Before we dive into the full code, here's the bit that does the magic of setting the merge commit message when setting the pull request to autocomplete:
const updateData = {
autoCompleteSetBy: {
id: authenticatedUser.id,
},
completionOptions: {
mergeStrategy: 'squash',
mergeCommitMessage: 'feat: conventional commit message', // <- set your commit message here
},
};
const response = await fetch(
`https://dev.azure.com/${org}/${project}/_apis/git/repositories/${repo}/pullrequests/${pullRequestId}?api-version=7.1`,
{
method: 'PATCH',
headers: defaultHeaders,
body: JSON.stringify(updateData),
},
);
Here we are:
1. Setting the autoCompleteSetBy property to the authenticated user. This is required when setting a pull request to autocomplete.
2. Setting completionOptions.mergeStrategy to squash. You can change this to rebase or noFastForward if you prefer those strategies.
3. Setting completionOptions.mergeCommitMessage. This is where we set our conventional commit message. Or if you wanted to use a different style, you could set it to whatever you like.
Now that we understand the principle, here's the full set-autocomplete-and-commit-message.ts script that you can use in your build validation pipeline:
import { Buffer } from 'buffer';
import { parseArgs } from 'util';
interface LocationData {
authenticatedUser?: AuthenticatedUser;
}
interface AuthenticatedUser {
customDisplayName?: string;
id: string;
providerDisplayName: string;
}
function getArgs() {
const { values } = parseArgs({
options: {
token: {
type: 'string',
short: 't',
description: 'Personal Access Token for Azure DevOps API',
},
'pr-id': {
type: 'string',
short: 'i',
description: 'Pull Request ID',
},
org: {
type: 'string',
short: 'o',
description: 'Azure DevOps organization name',
},
project: {
type: 'string',
short: 'j',
description: 'Azure DevOps project name',
},
repo: {
type: 'string',
short: 'r',
description: 'Repository name',
},
},
});
const token = values.token;
const currentPullRequestId = values['pr-id'];
const org = values.org
?.replace('https://dev.azure.com/', '')
.replace('/', '');
const project = values.project;
const repo = values.repo;
if (!token) {
throw new Error('PAT token must be provided using --token');
}
if (!currentPullRequestId) {
throw new Error('Pull Request ID must be provided using --pr-id');
}
if (!org) {
throw new Error('Organization must be provided using --org');
}
if (!project) {
throw new Error('Project must be provided using --project');
}
if (!repo) {
throw new Error('Repository must be provided using --repo');
}
return { token, currentPullRequestId, org, project, repo };
}
async function getAuthenticatedUser(
org: string,
defaultHeaders: Record<string, string>,
) {
console.log('Fetching authenticated user info...');
const connectionDataResponse = await fetch(
`https://dev.azure.com/${org}/_apis/ConnectionData?api-version=7.2-preview.1`,
{
method: 'GET',
headers: defaultHeaders,
},
);
if (!connectionDataResponse.ok) {
const errorText = await connectionDataResponse.text();
throw new Error(
`Failed to fetch connection data: HTTP ${String(connectionDataResponse.status)}: ${errorText}`,
);
}
const connectionData = (await connectionDataResponse.json()) as LocationData;
const authenticatedUser = connectionData.authenticatedUser;
if (!authenticatedUser?.id) {
throw new Error('Could not determine authenticated user');
}
console.log(
`Authenticated as: ${authenticatedUser.customDisplayName ?? authenticatedUser.providerDisplayName} (ID: ${authenticatedUser.id})`,
);
return authenticatedUser;
}
/**
* Set the merge commit message for a pull request using squash merge strategy
*/
async function setMergeCommitMessageAndAutocomplete({
pullRequestId,
mergeCommitMessage,
token,
org,
project,
repo,
}: {
pullRequestId: string;
mergeCommitMessage: string;
org: string;
project: string;
repo: string;
token: string;
}) {
const defaultHeaders = {
Authorization: `Basic ${Buffer.from(`:${token}`).toString('base64')}`,
'Content-Type': 'application/json',
};
const authenticatedUser = await getAuthenticatedUser(org, defaultHeaders);
console.log(
`Setting autocomplete and merge commit message for PR #${pullRequestId} as ${authenticatedUser.customDisplayName ?? authenticatedUser.providerDisplayName} (${authenticatedUser.id})...`,
);
const updateData = {
autoCompleteSetBy: {
id: authenticatedUser.id,
},
completionOptions: {
mergeStrategy: 'squash',
mergeCommitMessage,
},
};
const response = await fetch(
`https://dev.azure.com/${org}/${project}/_apis/git/repositories/${repo}/pullrequests/${pullRequestId}?api-version=7.1`,
{
method: 'PATCH',
headers: defaultHeaders,
body: JSON.stringify(updateData),
},
);
if (!response.ok) {
const errorText = await response.text();
throw new Error(
`Failed to set autocomplete and merge commit message: HTTP ${String(response.status)}: ${errorText}`,
);
}
console.log(
`Successfully set autocomplete and merge commit message for PR #${pullRequestId}`,
);
}
async function main() {
const { token, currentPullRequestId, org, project, repo } = getArgs();
await setMergeCommitMessageAndAutocomplete({
pullRequestId: currentPullRequestId,
mergeCommitMessage: 'feat: conventional commit message', // <- set your commit message here
token,
org,
project,
repo,
});
}
main().catch((err: unknown) => {
const errorMessage =
err instanceof Error ? err.message : 'Unknown error occurred';
console.error(`[ERROR] ${errorMessage}`);
throw err;
});
This can also be run locally with node ./set-autocomplete-and-commit-message.ts --token [PAT TOKEN WITH SCOPES: vso.code_write and vso.identity] --pr-id [PULL_REQUEST_ID] --org [NAME_OF_ORGANISATION] --project [NAME_OF_PROJECT] --repo [NAME_OF_REPOSITORY]. You'll need Node.js 24 or later to run this. (And yes, you can run TypeScript files directly with Node.js these days).
The thing I haven't included here is how you determine the mergeCommitMessage. In my case, I use the title of the pull request as the commit message. You can fetch the pull request details using the Azure DevOps API and extract the title. I left this out for brevity, but you can easily add it in. Or use whatever logic you like to determine the commit message. The point is that you have control over it.
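For illustration, here's a rough sketch of how fetching the title might look. It reuses the org, project, repo, pullRequestId and defaultHeaders values from the script above; the getPullRequestTitle name and the fallback message are my own inventions:
async function getPullRequestTitle({
  org,
  project,
  repo,
  pullRequestId,
  defaultHeaders,
}: {
  org: string;
  project: string;
  repo: string;
  pullRequestId: string;
  defaultHeaders: Record<string, string>;
}): Promise<string> {
  // Fetch the pull request details so we can reuse its title as the merge commit message
  const response = await fetch(
    `https://dev.azure.com/${org}/${project}/_apis/git/repositories/${repo}/pullrequests/${pullRequestId}?api-version=7.1`,
    { method: 'GET', headers: defaultHeaders },
  );
  if (!response.ok) {
    const errorText = await response.text();
    throw new Error(`Failed to fetch pull request: HTTP ${String(response.status)}: ${errorText}`);
  }
  const pullRequest = (await response.json()) as { title?: string };
  // Fall back to a placeholder message if the title is missing for any reason
  return pullRequest.title ?? 'chore: merge pull request';
}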
Now we have our script, we need to run it in a build validation pipeline. Here's an example of an Azure DevOps pipeline that runs the script:
trigger: none
pool:
vmImage: ubuntu-latest
variables:
isPullRequest: ${{ eq(variables['Build.Reason'], 'PullRequest') }}
stages:
- stage: AutoCompleteAndCommitMessage
displayName: Set autocomplete and commit message
condition: ${{ variables['isPullRequest'] }}
jobs:
- job:
steps:
- task: NodeTool@0
inputs:
versionSpec: 24
displayName: Install Node.js
- bash: node ./scripts/set-autocomplete-and-commit-message.ts --token $(System.AccessToken) --pr-id $(System.PullRequest.PullRequestId) --org "$(System.CollectionUri)" --project "$(System.TeamProject)" --repo "$(Build.Repository.Name)"
displayName: Set autocomplete and commit message
Crucially, this pipeline is triggered only for pull requests; the System.PullRequest.PullRequestId is only available in build validations run as part of a pull request. The pipeline installs Node.js 24 and then runs our script, passing in the necessary parameters. The System.AccessToken is used to authenticate with the Azure DevOps API, so make sure that the pipeline has the necessary permissions to use it.
And that's it! With this setup, you can maintain a git commit history of conventional commits in Azure DevOps, even when merging pull requests. By using the Azure DevOps API to set the merge commit message when setting a pull request to autocomplete, you can ensure that your commit messages are meaningful and consistent.
I'm using the Azure DevOps Client for Node.js, but if you want to use the REST API directly, you can do that too. The principles are the same, but you'll need to make HTTP requests instead of using the client library.
To get up and running with the Azure DevOps Client for Node.js, you can see how we work with it in my post on dynamic required reviewers in Azure DevOps. This will help you set up your environment and authenticate with Azure DevOps.
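For convenience, here's a rough sketch of the kind of setup that post covers: connecting with a personal access token and grabbing the API clients used below. The organisation URL and the AZDO_PAT environment variable are placeholders of my own choosing, not anything prescribed by the library:
import * as nodeApi from 'azure-devops-node-api';
import type { IGitApi } from 'azure-devops-node-api/GitApi.js';
import type { ILocationsApi } from 'azure-devops-node-api/LocationsApi.js';

// Connect to Azure DevOps using a personal access token
const webApi = new nodeApi.WebApi(
  'https://dev.azure.com/your-organisation',
  nodeApi.getPersonalAccessTokenHandler(process.env.AZDO_PAT ?? ''),
);

// These are the API clients used by the functions below
const gitApi: IGitApi = await webApi.getGitApi();
const locationsApi: ILocationsApi = await webApi.getLocationsApi();

// The GitPullRequest type and the PullRequestStatus / GitPullRequestMergeStrategy enums
// used below come from 'azure-devops-node-api/interfaces/GitInterfaces.js'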
If you'd like to read about setting commit messages when merging pull requests in Azure DevOps, you can check out my post on merging pull requests with conventional commits in Azure DevOps.
To merge a pull request with the API, you need to use the GitApi class from the Azure DevOps Client for Node.js. Here's what merging a pull request looks like:
async function mergePullRequest({
gitApi,
repositoryName,
pullRequest,
projectName,
}: {
gitApi: IGitApi;
repositoryName: string;
pullRequest: GitPullRequest;
projectName: string;
}): Promise<void> {
try {
await gitApi.updatePullRequest(
{
status: PullRequestStatus.Completed,
lastMergeSourceCommit: pullRequest.lastMergeSourceCommit,
completionOptions: {
mergeStrategy: GitPullRequestMergeStrategy.Squash,
},
},
/** repositoryId */ repositoryName,
pullRequest.pullRequestId!,
/** project */ projectName,
);
console.log(
`✅ Successfully merged pull request ${pullRequest.pullRequestId}`,
);
} catch (error) {
const errorMessage = `❌ Failed to merge pull request ${pullRequest.pullRequestId}`;
console.error(errorMessage, error);
}
}
This expects that you provide the gitApi instance, the name of the repository, the pull request object, and the project name. The mergeStrategy is set to Squash, but you can change it to Rebase or NoFastForward if you prefer those strategies.
If you have policies that restrict your merge type, you must pick the merge strategy that complies with those policies. For example, if your project requires squash merges, you should use GitPullRequestMergeStrategy.Squash. AKA the one true merge strategy.
Setting a pull request to autocomplete means that it will automatically merge when all policies are satisfied. This is useful for ensuring that the pull request is merged without manual intervention once it meets the requirements. For example when build validations have passed, and the required reviewers have approved.
Here's how you can set a pull request to autocomplete:
async function setPullRequestToAutocomplete({
gitApi,
locationsApi,
repositoryName,
pullRequest,
projectName,
}: {
gitApi: IGitApi;
locationsApi: ILocationsApi;
repositoryName: string;
pullRequest: GitPullRequest;
projectName: string;
}): Promise<void> {
if (pullRequest.autoCompleteSetBy) {
return;
}
try {
const { authenticatedUser } = await locationsApi.getConnectionData();
console.log(
`Setting pull request ${pullRequest.pullRequestId} to auto-complete with squash merge as ${authenticatedUser?.providerDisplayName} (${authenticatedUser?.id})`,
);
await gitApi.updatePullRequest(
{
autoCompleteSetBy: {
id: authenticatedUser?.id,
},
completionOptions: {
mergeStrategy: GitPullRequestMergeStrategy.Squash,
},
},
/** repositoryId */ repositoryName,
pullRequest.pullRequestId!,
/** project */ projectName,
);
console.log(
`✅ Successfully set pull request ${pullRequest.pullRequestId} to auto-complete`,
);
} catch (error) {
const errorMessage = `❌ Failed to set pull request ${pullRequest.pullRequestId} to auto-complete`;
console.error(errorMessage, error);
}
}
What might be surprising about this code is that you have to explicitly provide your user id when setting the pull request to autocomplete. The unlovely aspect of this is that you need to discover that id somehow. We achieve that here by fetching the authenticated user from the locationsApi.
Once you have the user id, you can set the autoCompleteSetBy property of the pull request to that user id. This will allow the pull request to be set to autocomplete. Again we must specify the mergeStrategy so it knows how to merge when the time comes.
In this post, we've seen how to merge a pull request and set it to autocomplete using the Azure DevOps Client for Node.js. This can be a powerful way to automate your workflow and ensure that pull requests are merged when they meet the necessary criteria.
I'm personally using this in build validation pipelines to ensure that pull requests are merged automatically when all policies are satisfied. This helps to streamline the development process and reduce manual intervention.
I build a lot of SPA style applications that run JavaScript / TypeScript on the front end and C# / ASP.NET on the back end. The majority of those apps require some kind of authentication. In fact I'd struggle to think of many apps that don't. This post will walk through how to integrate ASP.NET authentication with the Static Web Apps CLI local authentication emulator to achieve a great local development setup. Don't worry if that doesn't make sense right now, once we have walked through the setup, it will.
This post builds somewhat on posts I've written about using the Static Web Apps CLI with the Vite proxy server for enhanced performance and how to use the --api-location argument to connect to a separately running backend API. However, you need not have read either post to understand what we're doing.
We're going to first walk through what we're trying to achieve, and then we'll walk through the steps to get there. When it comes to implementation, we're going to use Vite as our front end server, and ASP.NET as our back end server. The Static Web Apps CLI will be used for local authentication emulation.
When we're building an application, let's think about the options that we have, with regards to our local development setup. It's pretty typical for applications to use some kind of third party authentication provider, rather than providing their own. This could be Okta, Microsoft Entra / Azure AD, Auth0 or something else.
It's possible to configure a local development setup which integrates with a third party authentication provider. However, is that wise? Do you want to couple your ability to test scenarios on your local machine to a server somewhere out there on the internet? You certainly can. It typically involves setting a redirect URI on the authentication provider to http://localhost:5173 (or wherever your local setup runs).
But it is inconvenient to get that set up in the first place. And even once it is set up, you're then coupled to being online whenever you're testing locally. We're offline more than we appreciate. I'm writing these words on an aeroplane which is currently flying over Botswana. I have no internet access right now. But as you've gathered, I'm on my computer and I'm able to do things. How? Because I'm using the Azure Static Web Apps CLI local authentication emulator for local development.
That's what this post is about. How to use the Static Web Apps CLI local authentication emulator with ASP.NET authentication to enable a great (and offline-first) local development setup.
We should probably talk about what the Static Web Apps CLI is. It describes itself as:
The Static Web Apps (SWA) CLI is an open-source commandline tool that streamlines local development and deployment for Azure Static Web Apps.
Its original purpose was to provide a local development server for an Azure service known as Azure Static Web Apps. However, it has a number of features that make it useful for general web application development. For example, it can be used to proxy requests to a backend API server, and it can also be used to emulate authentication.
We're going to use the authentication emulator to provide a local authentication server whilst we're developing. Just that piece of functionality; we're intentionally only using a subset of the Static Web Apps CLI functionality.
Incidentally, there are alternatives. I'm aware of one other local authentication emulator, which is the Firebase Authentication Emulator. This could likely be used in a similar way. However, we'll be using the Static Web Apps CLI local authentication emulator.
When you run the Static Web Apps start command, the CLI surfaces login endpoints at this location: http://localhost:4280/.auth/login/<PROVIDER_NAME>. <PROVIDER_NAME> is the name of the authentication provider you want to use. This might be aad, github etc. If you look at the code (and you can here) you'll realise that the <PROVIDER_NAME> can actually be any string; it's not limited to the names of the authentication providers that are supported by Azure Static Web Apps. So if you want to use an arbitrary name like potato as the provider name, you can do that. In terms of emulation, it doesn't matter what the provider name is.
When started, the CLI will serve a local authentication UI at this endpoint which looks like this:

When you hit the Login button, it will use the form data to create a fake user and set a cookie in your browser named StaticWebAppsAuthCookie. That cookie will look something like this:

And whilst it looks like a JWT, it isn't. It's actually a base64 encoded string which contains the user information that you provided in the form. In fact you can see what it is by flipping open the browser devtools and running this code in the console after you have hit the Login button:
JSON.parse(
atob(
document.cookie
.split('; ')
.find((row) => row.startsWith('StaticWebAppsAuthCookie='))
?.split('=')[1],
),
);
This will acquire the cookie that has just been created by the Static Web Apps CLI local authentication emulator from your browser. It then decodes it and parses the JSON string to get the user information that you provided in the form. It produces something like this:
{
"userId": "4a27b7326639199f5de91c4b9a62531b",
"userRoles": ["anonymous", "authenticated", "other-role"],
"claims": [{ "typ": "MyId", "val": "123456789" }],
"identityProvider": "couldbeanything",
}
This cookie will be sent to your backend API server with every request.
To make a working development setup, we need three things:
1. A Vite dev server for the front end
2. An ASP.NET server for the back end
3. The Static Web Apps CLI local authentication emulator
We'll bring these three things together to create a local development setup that allows us to develop our application locally, with authentication. This diagram shows how the three components will work together:
From the developer's browser, HTTP requests will be sent to the Vite server running on http://localhost:5173. The Vite server will proxy authentication emulation requests to the Static Web Apps CLI local authentication emulator running on http://localhost:4280. All other requests will be proxied to the ASP.NET backend server running on http://localhost:5000.
This setup means that the cookie that is set by the Static Web Apps CLI local authentication emulator will be shared with the ASP.NET backend server through the Vite proxy mechanism. So to make the ASP.NET authentication work, we need to make sure that the ASP.NET server is configured to accept this cookie and use it to authenticate the user.
Now that we've talked about what we're trying to achieve, let's walk through the steps to get there. We'll need both the .NET SDK and Node.js installed on our machine. From here on out it's code. The full example can be found in this GitHub repository.
We'll start by creating a new Vite project in a folder we'll call AppFrontEnd. We'll use the React + TypeScript template. You can use whatever template you like:
npm create vite@latest AppFrontEnd -- --template react-ts
Next we'll create a new ASP.NET project in a folder we'll call AppBackEnd:
dotnet new web -n AppBackEnd
And we'll initialise a package.json in the root of our project:
npm init -y
This package.json will be used as a general purpose task runner later on.
We'd first like to adjust the port that our ASP.NET server runs on in development. We'll update the launchSettings.json file to look like this:
{
"$schema": "https://json.schemastore.org/launchsettings.json",
"profiles": {
"http": {
"commandName": "Project",
"dotnetRunMessages": true,
"launchBrowser": false,
"applicationUrl": "http://localhost:5000",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
}
}
}
This will set the ASP.NET server to run on http://localhost:5000 in development. You can run the server with dotnet run.
We need to build an AuthenticationHandler that accepts the cookie set by the Static Web Apps CLI local authentication emulator; a custom authentication handler that authenticates users based on that cookie. So here is StaticWebAppsCLIAuthentication.cs in all its glory:
using System.Security.Claims;
using System.Text.Encodings.Web;
using System.Text.Json;
using System.Text.Json.Serialization;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Options;
namespace AppBackEnd;
public static class StaticWebAppsCLIAuthentication
{
public const string STATICWEBAPPSCLIAUTH_SCHEMENAME = "StaticWebAppsCLIAuthScheme";
public static AuthenticationBuilder AddStaticWebAppsCLIAuth(
this AuthenticationBuilder builder,
Action<AuthenticationSchemeOptions>? configure = null)
{
if (configure == null) configure = o => { };
return builder.AddScheme<AuthenticationSchemeOptions, StaticWebAppsCLIAuthenticationHandler>(
STATICWEBAPPSCLIAUTH_SCHEMENAME,
STATICWEBAPPSCLIAUTH_SCHEMENAME,
configure
);
}
}
public class StaticWebAppsCLIAuthenticationHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
public StaticWebAppsCLIAuthenticationHandler(
IOptionsMonitor<AuthenticationSchemeOptions> options,
ILoggerFactory logger,
UrlEncoder encoder)
: base(options, logger, encoder)
{
}
protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
{
try
{
var principal = await MakeClaimsPrincipalFromHeaderOrCookie(
// eg eyJ1c2VySWQiOiIwMTA2ZWMwYmU2MjAwNDM5YjY5ODc3NTFiOGJmNmU3YSIsInVzZXJSb2xlcyI6WyJhbm9ueW1vdXMiLCJhdXRoZW50aWNhdGVkIl0sImNsYWltcyI6W3sidHlwIjoibmFtZSIsInZhbCI6IkF6dXJlIFN0YXRpYyBXZWIgQXBwcyJ9XSwiaWRlbnRpdHlQcm92aWRlciI6ImFhZCIsInVzZXJEZXRhaWxzIjoiam9obm55cmVpbGx5In0
Context.Request.Cookies["StaticWebAppsAuthCookie"] ??
// eg eyJ1c2VySWQiOiIwMTA2ZWMwYmU2MjAwNDM5YjY5ODc3NTFiOGJmNmU3YSIsInVzZXJSb2xlcyI6WyJhbm9ueW1vdXMiLCJhdXRoZW50aWNhdGVkIl0sImlkZW50aXR5UHJvdmlkZXIiOiJhYWQiLCJ1c2VyRGV0YWlscyI6ImpvaG5ueXJlaWxseSJ9
// for reasons that are unclear, the X-MS-CLIENT-PRINCIPAL header presently excludes claims; see https://github.com/Azure/static-web-apps-cli/blob/062fb288d34126a095be6f3e1dc57fe5adb3f4bf/src/msha/handlers/function.handler.ts#L38-L42
Context.Request.Headers["X-MS-CLIENT-PRINCIPAL"].FirstOrDefault()
);
if (principal == null)
return AuthenticateResult.NoResult();
Context.User = principal;
return AuthenticateResult.Success(new AuthenticationTicket(principal, principal.Identity?.AuthenticationType ?? "unknown"));
}
catch (Exception ex)
{
return AuthenticateResult.Fail(ex);
}
}
private async Task<ClaimsPrincipal?> MakeClaimsPrincipalFromHeaderOrCookie(string? headerOrCookie)
{
if (string.IsNullOrEmpty(headerOrCookie))
return null;
var decodedBytes = Convert.FromBase64String(headerOrCookie);
using var memoryStream = new MemoryStream(decodedBytes);
var staticWebAppClientPrinciple = await JsonSerializer.DeserializeAsync<StaticWebAppsCLIClientPrinciple>(memoryStream);
if (staticWebAppClientPrinciple == null ||
string.IsNullOrWhiteSpace(staticWebAppClientPrinciple.UserDetails) ||
string.IsNullOrWhiteSpace(staticWebAppClientPrinciple.UserId))
return null;
var claims = DefaultMapClaims(staticWebAppClientPrinciple);
var principal = new ClaimsPrincipal();
principal.AddIdentity(new ClaimsIdentity(
claims,
authenticationType: staticWebAppClientPrinciple.IdentityProvider ?? "unknown",
nameType: ClaimTypes.Email,
roleType: ClaimTypes.Role
));
return principal;
}
/// <summary>
/// Method takes the StaticWebAppClientPrinciple and produces a list of claims constructed
/// from the claims, user details and user roles
/// </summary>
static Claim[] DefaultMapClaims(StaticWebAppsCLIClientPrinciple staticWebAppClientPrinciple)
{
var claims = new List<StaticWebAppsCLIClaim>();
if (!string.IsNullOrWhiteSpace(staticWebAppClientPrinciple.UserDetails))
{
claims.Add(new StaticWebAppsCLIClaim
{
Type = "preferred_username",
Value = staticWebAppClientPrinciple.UserDetails
});
claims.Add(new StaticWebAppsCLIClaim
{
Type = ClaimTypes.Email,
Value = staticWebAppClientPrinciple.UserDetails
});
claims.Add(new StaticWebAppsCLIClaim
{
Type = ClaimTypes.Name,
Value = staticWebAppClientPrinciple.UserDetails
});
}
if (staticWebAppClientPrinciple.Claims != null)
{
claims.AddRange(staticWebAppClientPrinciple.Claims);
}
var mappedClaims = claims.Select(claim => new Claim(claim.Type, claim.Value));
// translate the user roles into claims with the type ClaimTypes.Role
// eg "userRoles": ["anonymous", "authenticated"]
var userRoleClaims = staticWebAppClientPrinciple.UserRoles?.Select(role =>
new Claim(ClaimTypes.Role, role)) ?? [];
Claim[] combinedClaims = [..mappedClaims, ..userRoleClaims];
return combinedClaims;
}
}
/// <summary>
/// This is the JSON object that is decoded from either the
/// StaticWebAppsAuthCookie cookie or the X-MS-CLIENT-PRINCIPAL header
/// when working with the Static Web Apps CLI local authentication emulator.
///
/// The claims element is not present on the X-MS-CLIENT-PRINCIPAL header
///
/// {
/// "userId": "9b516349fcf5caf60f715703d8804aa7",
/// "userRoles": [
/// "anonymous",
/// "authenticated"
/// ],
/// "claims": [{"typ":"blarg","val":"Azure Static Web Apps"}],
/// "identityProvider": "aad",
/// "userDetails": "[email protected]"
/// }
/// </summary>
public class StaticWebAppsCLIClientPrinciple
{
/// <summary>
/// User Id
/// </summary>
[JsonPropertyName("userId")]
public string? UserId { get; set; }
/// <summary>
/// User roles
/// </summary>
[JsonPropertyName("userRoles")]
public IEnumerable<string>? UserRoles { get; set; }
/// <summary>
/// Identity provider typically "aad"
/// </summary>
[JsonPropertyName("identityProvider")]
public string? IdentityProvider { get; set; }
/// <summary>
/// User details typically email address
/// </summary>
[JsonPropertyName("userDetails")]
public string? UserDetails { get; set; }
/// <summary>
/// Claims for the user
/// </summary>
[JsonPropertyName("claims")]
public IEnumerable<StaticWebAppsCLIClaim>? Claims { get; set; } = [];
}
public class StaticWebAppsCLIClaim
{
/// <summary>
/// Type of claim eg "name"
/// </summary>
[JsonPropertyName("typ")]
public string Type { get; set; } = string.Empty;
/// <summary>
/// Value of claim eg "Someone Name"
/// </summary>
[JsonPropertyName("val")]
public string Value { get; set; } = string.Empty;
}
Whilst there's a good amount of code here, what we're actually doing is relatively simple:
1. We register an authentication scheme named StaticWebAppsCLIAuthScheme.
2. We implement a custom authentication handler, StaticWebAppsCLIAuthenticationHandler. The handler will decode the cookie and create a ClaimsPrincipal object that contains the user information in it. This will be used to authenticate users based on the cookie that has been set by the Static Web Apps CLI local authentication emulator. If the StaticWebAppsAuthCookie cookie is not detected, the handler will fall back to the X-MS-CLIENT-PRINCIPAL header. (This should never happen in practice, but it's there for completeness.)
With our handler in place, we need to update the Program.cs file in the AppBackEnd project to use it. We'll also add a /api/me endpoint to test the authentication. So the modified Program.cs file looks like this:
using AppBackEnd;
var builder = WebApplication.CreateBuilder(args);
if (builder.Environment.IsDevelopment())
{
// in development, we want to use the Static Web Apps CLI authentication scheme
builder.Services
.AddAuthentication(StaticWebAppsCLIAuthentication.STATICWEBAPPSCLIAUTH_SCHEMENAME)
.AddStaticWebAppsCLIAuth();
}
else
{
// in production we will use an alternative authentication scheme
// ...
}
builder.Services.AddAuthorization();
var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapGet("/", () => "Hello World!");
app.MapGet("/api/me", (context) => {
var user = context.User;
var roleClaimType = user.Identities.First().RoleClaimType;
var userDetails = new {
Name = user.Identity?.Name,
IsAuthenticated = user.Identity?.IsAuthenticated ?? false,
Claims = user.Claims.Select(claim => new { claim.Type, claim.Value }).ToArray(),
};
return context.Response.WriteAsJsonAsync(userDetails);
});
app.Run();
I haven't included the production authentication scheme here; that will be specific to your application. The important part is that we are using the StaticWebAppsCLIAuthentication scheme only in development.
Let's test this works. We'll start the ASP.NET server with dotnet run. Then we'll use the curl command to call the /api/me endpoint:
curl http://localhost:5000/api/me --cookie "StaticWebAppsAuthCookie=eyJ1c2VySWQiOiIxNDdmMTVjNmFiODBhNmY3NjJhOWQyZDRmNzczNzUwOSIsInVzZXJSb2xlcyI6WyJhbm9ueW1vdXMiLCJhdXRoZW50aWNhdGVkIiwib3RoZXItcm9sZSJdLCJjbGFpbXMiOlt7InR5cCI6Ik15SWQiLCJ2YWwiOiIxMjM0NTY3ODkifV0sImlkZW50aXR5UHJvdmlkZXIiOiJjb3VsZGJlYW55dGhpbmciLCJ1c2VyRGV0YWlscyI6Im1lZ2FuLnNAdGhpbmcuY29tIn0="
This will return the user information that was set in the cookie by the Static Web Apps CLI local authentication emulator:
{
"isAuthenticated": true,
"claims": [
{
"type": "preferred_username",
"value": "megan.s@thing.com"
},
{
"type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
"value": "megan.s@thing.com"
},
{
"type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
"value": "megan.s@thing.com"
},
{
"type": "MyId",
"value": "123456789"
},
{
"type": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
"value": "anonymous"
},
{
"type": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
"value": "authenticated"
},
{
"type": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
"value": "other-role"
}
]
}
So our authentication mechanism works! Now we just need to set up the Vite server and the Static Web Apps CLI local authentication emulator to provide the full local development experience.
Now we'll move over to the AppFrontEnd folder.
We'll install the Static Web Apps CLI as a development dependency:
npm install -D @azure/static-web-apps-cli
And we'll add a start script to the package.json file to start the Static Web Apps CLI and the Vite server. The start script will look like this:
{
"scripts": {
"start": "swa start http://localhost:5173 --run \"npm run dev\" --api-devserver-url http://127.0.0.1:5000"
}
}
When run, this will start the Static Web Apps CLI and the Vite server. The --run argument will start the Vite server, and the --api-devserver-url argument will set the URL of the ASP.NET backend server. We'll create a mechanism for starting the ASP.NET server alongside the front end shortly.
We already have the Vite server in place, but it needs a little configuration.
I'm now going to borrow from this post on using the Vite proxy server for enhanced performance. We'll update the vite.config.ts file to add a proxy configuration. The vite.config.ts file will look like this:
import { defineConfig } from 'vite';
// https://vitejs.dev/config/
export default defineConfig({
// ...
server: {
proxy: {
'/api': {
// our .NET application is running on port 5000
target: 'http://127.0.0.1:5000',
changeOrigin: true,
autoRewrite: true,
},
'/.auth': {
// the Static Web Apps local auth emulator is running on port 4280
target: 'http://127.0.0.1:4280',
changeOrigin: true,
autoRewrite: true,
},
},
},
});
The code above is responsible for ensuring that requests to the Vite server with the prefix /.auth are proxied to the Static Web Apps CLI local authentication emulator. Requests with the prefix /api (most requests) are proxied to the ASP.NET backend server.
With this in place, the Static Web Apps CLI local authentication emulator will be able to set the StaticWebAppsAuthCookie cookie in the browser, and the Vite server will proxy requests with that cookie to the ASP.NET backend server for use.
We now have a back end and a front end in place. We need both to be running for local development. We mentioned earlier we'd be using the package.json in the root of the project as a task runner.
We'll use the concurrently package to allow us to run both the ASP.NET server and the Static Web Apps CLI local authentication emulator at the same time.
npm install -D concurrently
And we'll update the scripts section of the package.json as follows:
{
"scripts": {
"postinstall": "npm run install:frontend && npm run install:backend",
"install:frontend": "cd AppFrontEnd && npm install",
"install:backend": "cd AppBackEnd && dotnet restore",
"start": "concurrently -n \"FE,BE\" -c \"bgBlue.bold,bgMagenta.bold\" \"npm run start:frontend\" \"npm run start:backend\"",
"start:frontend": "cd AppFrontEnd && npm run start",
"start:backend": "cd AppBackEnd && dotnet watch run"
}
}
We can now install all our dependencies (front end and backend) with a single command:
npm install
Then we can start the local development server with:
npm start
If we then go to http://localhost:5173, we should see the Vite server running:

So far, this is just the Vite server. Time to get our login mechanism in place.
We're going to replace the contents of the src/App.tsx file in the AppFrontEnd project with code that will call the /api/me endpoint on the ASP.NET backend server to get the user information from the cookie set by the Static Web Apps CLI local authentication emulator and display it in the browser. If the user is not authenticated, it will show a login link.
A reminder, we're using React and TypeScript for our front end. But that's just because it's my preference; this technique is front end agnostic. You can use whatever framework (or none) that you like.
Our implementation will look like this:
import { useState, useEffect } from 'react';
import './App.css';
interface UserData {
name: string | null;
isAuthenticated: boolean;
claims: { type: string; value: string }[];
}
function App() {
const [userData, setUserData] = useState<UserData>();
useEffect(() => {
async function fetchData() {
try {
const response = await fetch('/api/me');
if (!response.ok) {
throw new Error('Failed to fetch user data');
}
const data = await response.json();
setUserData(data);
} catch (error) {
console.error('Error fetching user data:', error);
}
}
fetchData();
}, []);
return (
<>
<h1>Static Web Apps CLI:</h1>
<h2>local authentication emulation with ASP.NET</h2>
{userData ? (
userData.isAuthenticated ? (
<p style={{ textAlign: 'left' }}>
You are logged in {userData.name} |{' '}
<a href="/.auth/logout">Log out</a>
<pre>{JSON.stringify(userData, null, 2)}</pre>
</p>
) : (
<p>
<a href="/.auth/login/aad">Log in</a>
</p>
)
) : (
<p>Loading user data...</p>
)}
</>
);
}
export default App;
Let's test it! First we'll start our local dev setup with npm start. When we browse to http://localhost:5173, we should see the Vite server running:

If we click the Login link, we'll be taken to the Static Web Apps CLI local authentication emulator. We'll fill in the form and hit Login. This will set the StaticWebAppsAuthCookie cookie in our browser.

We'll be redirected back to the Vite server, and the cookie will be sent to the ASP.NET backend server. The ASP.NET backend server will use the cookie to authenticate the user and return the user information when the browser calls the /api/me endpoint. This will be displayed in the browser:

It works!
This has been a long post, but hopefully it has been useful. We've walked through how to set up a local development environment that uses the Static Web Apps CLI local authentication emulator with ASP.NET authentication. This allows us to develop our application locally, with authentication, without being coupled to a third party authentication provider.
The code built in this post can be found here.
I recently needed to include .gitignore and .npmrc files in an npm package. I was surprised to find that the npm publish command strips these out of the published package by default. As a consequence, this broke my package, and so I needed to find a way to get round this shortcoming.
I ended up using zipping and unzipping with postinstall and prepare scripts to include these files into my npm package.

This post shows how to use zipping and unzipping with postinstall and prepare scripts to include these files into your npm package.
I'm currently beavering away on a "create-*-app" tool that generates new projects from a number of available templates. The tool is a CLI built with TypeScript, published as a package to an npm registry and consumed with npx. Significantly, the templates that ship with the CLI take the form of a templates folder in the package, and the folders in those templates include .npmrc and .gitignore files, which are key to the functionality of the templates.
When publishing my npm package, I discovered that the .npmrc and .gitignore files in subfolders were being stripped from the package. After a little research, I happened upon this GitHub issue about npm which describes some of the behaviour I was seeing. After a touch more digging, I came to understand that this behaviour is a result of npm treating the .gitignore and .npmrc files as configuration files rather than part of the package's intended content.
However, given these files are essential to the templates' functionality, I needed to find a way to include them in the package.
I toyed with explicitly including the specific files in the files section of the package.json file, but this would have been a maintenance headache. I wanted a more automated solution. Given that I have a single "special" folder called templates that contains all the templates, I pondered whether I could zip the folder on publish and unzip it on install. This would allow me to include the .gitignore and .npmrc files in the templates and have them copied into the new project when the template was used. And if there were any other curious behaviours around publishing, this solution should cover those too.
prepare and postinstall scripts
I achieved this by using prepare and postinstall scripts in the package.json file.
The prepare and postinstall scripts are two of the lifecycle scripts that npm runs when installing a package. The prepare script runs before the package is packaged and published, and the postinstall script runs after the package is installed. I opted to use these scripts to zip and unzip the templates folder in my package.
I performed the actual zipping and unzipping with some Node.js scripts. We'll look into the implementation of these scripts in a moment, but first please note the scripts we added to the package.json file:
"scripts": {
"postinstall": "node ./scripts/postinstall.js",
"prepare": "node ./scripts/prepare.js"
},
These scripts contain the paths to the Node.js scripts that perform the zipping and unzipping. The postinstall script runs after the package is installed, and the prepare script runs before the package is packaged and published.
When it comes to zipping and unzipping, I used the adm-zip package. This package provides a simple API for zipping and unzipping files and folders.
prepare.js
We'll first look at the prepare.js script. This script zips the templates folder in the package into a templates.zip file. The script then writes the zip file to the package's root directory.
import AdmZip from 'adm-zip';
import fs from 'node:fs';
import { fileURLToPath } from 'node:url';
function packTemplates() {
console.log('prepare running - packing templates');
const templatesZipPath = fileURLToPath(
new URL('../templates.zip', import.meta.url),
);
const templatesDir = fileURLToPath(new URL('../templates', import.meta.url));
const zip = new AdmZip();
console.log(`removing existing ${templatesZipPath}`);
fs.rmSync(templatesZipPath, {
force: true,
});
console.log(`adding ${templatesDir} to zip file`);
zip.addLocalFolder(templatesDir);
console.log(`writing zip to ${templatesZipPath}`);
zip.writeZip(templatesZipPath);
}
packTemplates();
It also removes any existing templates.zip file in the package's root directory before creating a new one. This is to ensure that the zip file is always up to date.
postinstall.js
Now we'll look at the postinstall.js script. This script unzips the templates.zip file in the package into a templates folder. The script then writes the unzipped folder to the package's root directory.
import AdmZip from 'adm-zip';
import fs from 'node:fs';
import { fileURLToPath } from 'node:url';
function extractTemplates() {
console.log('postinstall running - extracting templates');
const templatesZipPath = fileURLToPath(
new URL('../templates.zip', import.meta.url),
);
const templatesDir = fileURLToPath(new URL('../templates', import.meta.url));
let templatesExistsAlready = true;
try {
fs.accessSync(templatesDir);
} catch {
templatesExistsAlready = false;
}
if (templatesExistsAlready) {
console.log('templates already extracted');
return;
}
console.log(`extracting from ${templatesZipPath} to ${templatesDir}`);
const extractZip = new AdmZip(templatesZipPath);
extractZip.extractAllTo(templatesDir, /* overwrite */ false);
console.log('templates extracted');
}
extractTemplates();
You'll notice that the script checks whether the templates folder already exists before unzipping the templates.zip file. This is to ensure that the folder is only unzipped once.
So here we have a method for including .gitignore and .npmrc files in an npm package. By using zipping and unzipping with postinstall and prepare scripts, we can include these files in the package and have them copied into the new project when the package is installed.
My example is a templates folder - yours could be anything. And likewise if you have other files that are being stripped from your package, you could use this method to include them too.

However, the required reviewers are static. You can set them up in the branch policies, but they don't change dynamically based on the code being altered or the people involved in the pull request. I spent many moons trawling the internet for an answer to this question, and I found that many people were asking the same question. The answer was always the same: "You can't do that."
However, there is a way. It is, hand on heart, marginally clunky. But the clunk is marginal, and more than acceptable. It involves co-opting build validations to achieve the desired effect. In this post, I'll show you how to do that.
Build validations in Azure DevOps are a way to ensure that code meets certain criteria before it is merged into the main branch. They are, crucially, Azure DevOps pipelines that run when a pull request is created or updated. They are typically used to ensure that the code builds successfully, tests pass, linting succeeds etc. Build validations are set up in the branch policies for a repository. It's pretty typical for a repository to have build validations.
The crucial thing to note is that, typically, build validations must pass before a pull request can be completed. That's how they provide their value; as a control to prevent changes breaking the codebase. What we're going to do is use this blocking aspect to our advantage. We'll include a new stage in our build validation pipeline that, each time it runs, does one of the following:
- Assigns the dynamically determined required reviewers to the pull request, if they haven't been assigned already.
- Checks whether those required reviewers have approved the pull request, and fails the run if they haven't.
The thing to pay attention to is that the pipeline will fail if dynamically assigned required reviewers have not given their approval by the end of the pipeline run. This applies equally if the pipeline is running for the first time against a pull request and assigning the reviewers. This means that the pull request cannot be completed until any dynamically assigned required reviewers have approved it.
This is the part that makes your risk and audit teams happy. You cannot circumvent the required reviewers; the pipeline failing will prevent the pull request from being merged / completed until the required reviewers have approved it. This is a way to ensure that the code is reviewed by the appropriate people before it is merged into the main branch.
I mentioned "clunky" earlier. The clunkiness comes from the need to rerun the build validation pipeline in the Azure DevOps UI when the approval has been given. This is because there is no way (that I'm aware of) to trigger the build validation pipeline when a reviewer approval has been provided. So, if the required reviewers approve the pull request, you will need to rerun the build validation pipeline to ensure that it passes and the pull request can be completed.
As long as the failing pipeline provides a clear message about what is required, this is a small price to pay for the ability to have dynamic required reviewers.
Now I've convinced you that this is a good idea, let's look at how to implement it.
I'm making an assumption that you already have a build validation pipeline set up for your repository. If you don't, you'll need to set one up first.
To your existing build validation pipeline, you'll need to add a new stage that will run the code to dynamically assign required reviewers:
- stage: DynamicRequiredReviewers
displayName: Dynamic required reviewers
dependsOn: [] # This stage does not depend on any other stages and so will run in parallel with the others
jobs:
- job:
steps:
- task: NodeTool@0
inputs:
versionSpec: 22
displayName: Install Node.js
- bash: npm ci
displayName: 'Install dependencies'
workingDirectory: 'scripts/dynamic-required-reviewers'
- bash: npm test
displayName: 'Run tests'
workingDirectory: 'scripts/dynamic-required-reviewers'
- bash: npm start -- --sat $(System.AccessToken) --pullRequestId $(System.PullRequest.PullRequestId) --organization $(System.CollectionUri) --repositoryName $(Build.Repository.Name) --projectName $(System.TeamProject)
displayName: 'Validate claims'
workingDirectory: 'scripts/dynamic-required-reviewers'
You can see reference to the scripts/dynamic-required-reviewers directory. This is where we'll put the code that will dynamically assign required reviewers. The code will run in a Node.js environment, so we'll need to install Node.js and the dependencies for the code to run.
You can also see that we're using the System.AccessToken and System.PullRequest.PullRequestId variables. The System.AccessToken is a token that allows the code to interact with the Azure DevOps API, and the System.PullRequest.PullRequestId is the ID of the pull request that the build validation pipeline is running against. We'll use these in our code to dynamically assign required reviewers to the pull request.
We also use the System.CollectionUri, Build.Repository.Name, and System.TeamProject variables to get the organization, repository name, and project name respectively. These will be used to make API calls to Azure DevOps with our token.
You'll need to create the scripts/dynamic-required-reviewers directory. In there we're going to add a package.json file to manage our dependencies:
{
"name": "dynamic-required-reviewers",
"version": "1.0.0",
"scripts": {
"build": "tsc",
"start": "npm run build && node dist/index.js"
},
"license": "ISC",
"type": "module",
"dependencies": {
"azure-devops-node-api": "^15.1.0"
},
"devDependencies": {
"@types/node": "^22.0.0",
"typescript": "^5.8.3"
}
}
We also need a tsconfig.json file to configure TypeScript:
{
"compilerOptions": {
"allowJs": true,
"declaration": true,
"declarationMap": true,
"esModuleInterop": true,
"module": "NodeNext",
"moduleResolution": "NodeNext",
"noEmit": false,
"resolveJsonModule": true,
"skipLibCheck": true,
"sourceMap": true,
"strict": true,
"target": "ES2022",
"outDir": "dist"
},
"include": ["src"]
}
Now we'll add our src/index.ts file where we'll put our code to dynamically assign required reviewers.
import { parseArgs } from 'node:util';
import * as nodeApi from 'azure-devops-node-api';
import type { GitPullRequest } from 'azure-devops-node-api/interfaces/GitInterfaces.js';
import type { IGitApi } from 'azure-devops-node-api/GitApi.js';
async function main() {
  const args = parseArgs({
    options: {
      pat: { type: 'string', short: 'p', default: process.env.ADO_PAT },
      sat: { type: 'string', short: 's', default: process.env.ADO_SAT },
      pullRequestId: {
        type: 'string',
        short: 'i',
        default: process.env.ADO_PULL_REQUEST_ID,
      },
      organization: {
        type: 'string',
        short: 'o',
        default: '',
      },
      repositoryName: { type: 'string', short: 'r', default: '' },
      projectName: { type: 'string', short: 'j', default: '' },
    },
  });

  const pullRequestId = parseInt(args.values.pullRequestId ?? '0', 10);
  const pat = args.values.pat ?? '';
  const sat = args.values.sat ?? '';
  // https://dev.azure.com/johnnyreilly/ -> johnnyreilly
  const organization = (args.values.organization ?? '')
    .replace('https://dev.azure.com/', '')
    .replace('/', '');
  const repositoryName = args.values.repositoryName ?? '';
  const projectName = args.values.projectName ?? '';

  const webApi = await makeWebApi({
    pat,
    sat,
    organization,
  });
  const gitApi = await webApi.getGitApi();

  const pullRequest = await gitApi.getPullRequest(
    /* repositoryId */ repositoryName,
    /* pullRequestId */ pullRequestId,
    /* project */ projectName,
    /* maxCommentLength */ undefined,
    /* skip */ undefined,
    /* top */ undefined,
    /* includeCommits */ true,
    /* includeWorkItemRefs */ false,
  );

  const requiredReviewerName = await determineRequiredReviewerName(pullRequest);
  if (!requiredReviewerName) {
    console.log(
      '✅ No required reviewer was deemed necessary. No action needed.',
    );
    return;
  }

  const requiredReviewer = await searchIdentityForReviewer({
    pat,
    sat,
    organization,
    searchTerm: requiredReviewerName,
  });

  if (!requiredReviewer) {
    const errorMessage = `❌ Failed to look up reviewer: ${requiredReviewerName}`;
    throw new Error(errorMessage);
  }

  await determineAction({
    requiredReviewer,
    pullRequest,
    gitApi,
    projectName,
    repositoryName,
  });
}
async function determineRequiredReviewerName(
  pullRequest: GitPullRequest,
): Promise<string | undefined> {
  // This is a placeholder function. You should implement your logic to determine the required reviewer name.
  return 'Required Reviewer Name'; // Replace with actual logic
}
async function makeWebApi({
  organization,
  pat,
  sat,
}: {
  organization: string;
  pat?: string;
  sat?: string;
}) {
  if (!pat && !sat) {
    throw new Error(
      'Either a Personal Access Token (PAT) or a Service Account Token (SAT) must be provided.',
    );
  }

  const webApi = new nodeApi.WebApi(
    `https://dev.azure.com/${organization}`,
    pat
      ? nodeApi.getPersonalAccessTokenHandler(
          pat,
          /** allowCrossOriginAuthentication */ true,
        )
      : nodeApi.getHandlerFromToken(
          sat ?? '',
          /** allowCrossOriginAuthentication */ true,
        ),
  );

  return webApi;
}
interface Identity {
  id: string;
  providerDisplayName: string;
}
/**
 * This searches the organization's identity system directly
 * based on https://learn.microsoft.com/en-us/rest/api/azure/devops/ims/identities/read-identities?view=azure-devops-rest-7.1&tabs=HTTP
 */
async function searchIdentityForReviewer({
  pat,
  sat,
  searchTerm,
  organization,
}: {
  pat: string;
  sat: string;
  searchTerm: string;
  organization: string;
}): Promise<Identity | undefined> {
  try {
    // Use the identities API endpoint for broader search
    const searchUrl = `https://vssps.dev.azure.com/${organization}/_apis/identities?searchFilter=General&filterValue=${encodeURIComponent(
      searchTerm,
    )}&api-version=7.1-preview.1`;

    const response = await fetch(searchUrl, {
      method: 'GET',
      headers: {
        Authorization: `Basic ${Buffer.from(`PAT:${pat || sat}`).toString(
          'base64',
        )}`,
        Accept: 'application/json',
        'Content-Type': 'application/json',
      },
    });

    if (!response.ok) {
      console.warn(
        `Identity search failed: ${response.status} ${response.statusText}`,
      );
      return undefined;
    }

    const data = (await response.json()) as { value?: Identity[] };

    if (data.value && data.value.length > 0) {
      const identity: Identity = data.value[0]; // Take the first match
      console.log(`✅ Found identity via search:`);
      console.log(`   ID: ${identity.id}`);
      console.log(`   Display Name: ${identity.providerDisplayName}`);
      return identity;
    }

    console.warn(`⚠️ No identities found for: ${searchTerm}`);
    return undefined;
  } catch (error) {
    console.error(`❌ Error searching identities for ${searchTerm}:`, error);
    return undefined;
  }
}
const voteValues = {
  approved: 10,
  approvedWithSuggestions: 5,
  noVote: 0,
  waitingForAuthor: -5,
  rejected: -10,
} as const;
async function determineAction({
  requiredReviewer,
  pullRequest,
  gitApi,
  projectName,
  repositoryName,
}: {
  requiredReviewer: Identity;
  pullRequest: GitPullRequest;
  gitApi: IGitApi;
  projectName: string;
  repositoryName: string;
}): Promise<void> {
  const assignedReviewer = pullRequest.reviewers?.find(
    (reviewer) => reviewer.id === requiredReviewer.id,
  );
  const requiredReviewIsAssigned =
    assignedReviewer && assignedReviewer.isRequired;
  const hasBeenApprovedByRequiredReviewer =
    assignedReviewer && assignedReviewer.vote === voteValues.approved;

  if (requiredReviewIsAssigned && hasBeenApprovedByRequiredReviewer) {
    console.log(
      `✅ Reviewer with ID ${
        assignedReviewer.displayName ?? assignedReviewer.id
      } is already assigned and has approved the pull request. No action needed.`,
    );
  } else if (requiredReviewIsAssigned) {
    const errorMessage = `⚠️ Reviewer with ID ${
      assignedReviewer.displayName ?? assignedReviewer.id
    } is already assigned but has not approved the pull request.`;
    throw new Error(errorMessage);
  } else {
    console.log(
      `⚠️ Reviewer with ID ${requiredReviewer.providerDisplayName} is not yet assigned. Will assign them.`,
    );

    const reviewerToBeAssigned = {
      id: requiredReviewer.id,
      vote: voteValues.noVote,
      isRequired: true,
    };

    try {
      await gitApi.createPullRequestReviewer(
        reviewerToBeAssigned,
        /** repositoryId */ repositoryName,
        pullRequest.pullRequestId!,
        /** reviewerId */ reviewerToBeAssigned.id,
        /** project */ projectName,
      );
      console.log('✅ Successfully added reviewer to pull request');
    } catch (error) {
      const errorMessage = `❌ Failed to add reviewer to pull request`;
      throw new Error(errorMessage, { cause: error });
    }

    const errorMessage = `Added reviewer ${requiredReviewer.providerDisplayName} to pull request for review and approval. Once approved, please re-run the build validation and it should pass.`;

    try {
      await gitApi.createThread(
        /** commentThread */ {
          comments: [
            {
              parentCommentId: 0,
              content: errorMessage,
              commentType: CommentType.Text,
            },
          ],
          status: CommentThreadStatus.Active,
        },
        /** repositoryId */ repositoryName,
        /** pullRequestId */ pullRequest.pullRequestId!,
        /** project */ projectName,
      );
      console.log('✅ Successfully added comment to pull request');
    } catch (error) {
      const errorMessage = `❌ Failed to add comment to pull request`;
      throw new Error(errorMessage, { cause: error });
    }

    throw new Error(errorMessage);
  }
}
main();
There's a good bit of code here, so let's break it down:
- The main function is the entry point of the script. It parses the command line arguments and sets up the Azure DevOps API client.
- The makeWebApi function creates an instance of the Azure DevOps Web API client using either a Personal Access Token (PAT) or a Service Account Token (SAT). You'll use a PAT for local development and a SAT in the build validation pipeline. If using a PAT it requires the scopes: vso.code and vso.identity.
- The determineRequiredReviewerName function is a placeholder for your logic to determine the name of your required reviewer, if any. You should implement your logic here to determine when dynamically assigned reviewers are appropriate - there's a sketch of what that might look like below.
- The searchIdentityForReviewer function searches for the required reviewer in the Azure DevOps identity system. It uses the Azure DevOps REST API to search for identities based on a search term. Rather frustratingly, you can't directly use the Azure AD / Entra ID Graph API to search for users in Azure DevOps.
- The determineAction function checks if the required reviewer is already assigned to the pull request and whether they have approved it.
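To make the placeholder more concrete, here's a minimal sketch of the kind of logic determineRequiredReviewerName could contain. The branch convention and reviewer name below are purely illustrative assumptions - substitute whatever rules make sense for your organisation:
import type { GitPullRequest } from 'azure-devops-node-api/interfaces/GitInterfaces.js';

// Purely illustrative: require a named reviewer for pull requests that target main
// from a release/* branch. The branch names and the reviewer name are assumptions.
async function determineRequiredReviewerName(
  pullRequest: GitPullRequest,
): Promise<string | undefined> {
  const targetsMain = pullRequest.targetRefName === 'refs/heads/main';
  const isReleaseBranch =
    pullRequest.sourceRefName?.startsWith('refs/heads/release/') ?? false;
  return targetsMain && isReleaseBranch ? 'Jane Reviewer' : undefined;
}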
You can run the code locally to test it. You'll need to set up a Personal Access Token (PAT) with the required scopes and set the environment variables accordingly. You can then run the code using:
npm start -- --pat [YOUR_PAT] --pullRequestId [PULL_REQUEST_ID] --organization [ORGANISATION] --repositoryName [ADO_REPOSITORY_NAME] --projectName [ADO_PROJECT_NAME]
In this post, we've seen how to dynamically assign required reviewers for a pull request in Azure DevOps using build validations and the Azure DevOps API brought together with a little TypeScript. By co-opting your existing build validation pipeline, you can ensure that the code is reviewed by the appropriate people before it is merged into the main branch.
Use this. Make your risk and audit teams happy!
I'm writing this post because when I attempted to use the Azure DevOps Client for Node.js package to acquire service connections, I found it lacking, and not for the first time. I am going to allow myself a little moan here; ever since Microsoft acquired GitHub, the Azure DevOps ecosystem feels like it has had insufficient investment.
However, as is often the case, there is a way. The Azure DevOps REST API is there for us, and with a little fetch we can get the job done.
Before we get into the code, let's clarify the terminology. In Azure DevOps, service connections are the connections to external services that your pipelines need to run. However, the Azure DevOps REST API refers to these as "service endpoints".

So when you're looking at the screenshot above and you see "service connections", remember that in the API they're referred to as "service endpoints". If there is an actual distinction between "service connections" and "service endpoints" I'm not aware of it. If you know, please do let me know!
Once you know that service connections are referred to as "service endpoints" in the Azure DevOps REST API, you can use the documentation to get them. Here's a curl to get you started:
curl --user '':'PERSONAL_ACCESS_TOKEN' --header "Content-Type: application/json" --header "Accept:application/json" https://dev.azure.com/{organization}/{project}/_apis/serviceendpoint/endpoints?api-version=7.1
Now that we can see there's a way to get service connections with curl, let's look at how we can do this with TypeScript. Effectively we'll want to fetch (instead of using curl) and statically type the response with some interfaces. Here's a function that will retrieve service connections:
export async function getAzureDevOpsServiceConnections({
  personalAccessToken,
  organization,
  projectName,
}: {
  /** requires the vso.serviceendpoint scope which grants the ability to read service endpoints / service connections */
  personalAccessToken: string;
  /** eg "johnnyreilly" */
  organization: string;
  /** eg "blog-demos" */
  projectName: string;
}): Promise<ServiceConnection[]> {
  // https://learn.microsoft.com/en-us/rest/api/azure/devops/serviceendpoint/endpoints/get-service-endpoints?view=azure-devops-rest-7.1&tabs=HTTP
  const url = `https://dev.azure.com/${organization}/${projectName}/_apis/serviceendpoint/endpoints?api-version=7.1`;

  const response = await fetch(url, {
    method: 'GET',
    headers: {
      Accept: 'application/json',
      'Content-Type': 'application/json',
      Authorization: `Basic ${Buffer.from(`PAT:${personalAccessToken}`).toString('base64')}`,
      'X-TFS-FedAuthRedirect': 'Suppress',
    },
  });

  if (!response.ok) {
    throw new Error(`HTTP error! status: ${response.status.toString()}`);
  }

  const data = (await response.json()) as Wrapper;
  return data.value;
}
interface Wrapper {
  count: number;
  value: ServiceConnection[];
}

export interface ServiceConnection {
  authorization: Authorization;
  createdBy: CreatedBy;
  data: Record<string, null | string>;
  description: string;
  id: string;
  isOutdated: boolean;
  isReady: boolean;
  isShared: boolean;
  modificationDate?: string;
  modifiedBy?: ModifiedBy;
  name: string;
  owner: string;
  serviceEndpointProjectReferences: ServiceEndpointProjectReference[];
  serviceManagementReference: null;
  type: string;
  url: string;
}

export interface CreatedBy {
  _links: Links;
  descriptor: string;
  displayName: string;
  id: string;
  imageUrl: string;
  uniqueName: string;
  url: string;
}

export interface Links {
  avatar: Avatar;
}

export interface Avatar {
  href: string;
}

export interface Authorization {
  parameters: Record<string, null | string>;
  scheme: string;
}

export interface ServiceEndpointProjectReference {
  description: string;
  name: string;
  projectReference: ProjectReference;
}

export interface ProjectReference {
  id: string;
  name: string;
}

export interface ModifiedBy {
  displayName: null | string;
  id: string;
}
The function getAzureDevOpsServiceConnections is the one you'll want to call. It takes the following inputs:
- personalAccessToken: the personal access token you've created in Azure DevOps with the vso.serviceendpoint scope. You could equally use the System.AccessToken that's available in your pipeline if that was appropriate.
- organization: the name of your Azure DevOps organization
- projectName: the name of the project you're interested in

The function returns an array of ServiceConnection objects. You'll note that the API actually returns a Wrapper object that contains a count and an array of ServiceConnection objects. This isn't actually useful for my purposes, so I've just returned the array of ServiceConnection objects.
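For completeness, here's a minimal sketch of calling it - the PAT environment variable, organization and project names below are placeholders for your own values:
// A minimal usage sketch - the PAT environment variable, organization and
// project names are placeholders.
const serviceConnections = await getAzureDevOpsServiceConnections({
  personalAccessToken: process.env.ADO_PAT ?? '',
  organization: 'johnnyreilly',
  projectName: 'blog-demos',
});

for (const serviceConnection of serviceConnections) {
  console.log(`${serviceConnection.name} (${serviceConnection.type})`);
}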
Honestly the hardest part about writing this post was being sure that "service connections" and "service endpoints" were the same thing. Truly naming things is hard.
Hopefully this post has helped you get the service connections you need. And as I mentioned earlier, if you know an actual distinction between "service connections" and "service endpoints" please do let me know!
If you write JavaScript or TypeScript that talks to Azure, you may well use the DefaultAzureCredential to authenticate against Azure resources - this is also available in other platforms such as .NET.

The DefaultAzureCredential is a great way to authenticate locally; I can az login and then run my script, safe in the knowledge that the DefaultAzureCredential will authenticate successfully. However, how can I use the DefaultAzureCredential in an Azure DevOps pipeline?
This post will show you how to use the DefaultAzureCredential in an Azure DevOps pipeline, specifically by using the AzureCLI@2 task.
## What is DefaultAzureCredential?

To quote the documentation:
DefaultAzureCredential is an opinionated, preconfigured chain of credentials. It's designed to support many environments, along with the most common authentication flows and developer tools. In graphical form, the underlying chain looks like this:
The first credential in the chain is EnvironmentCredential, which looks for environment variables to authenticate. This means that if you set the right environment variables, you can use DefaultAzureCredential and it will authenticate with them.
The specific environment variables that DefaultAzureCredential looks for are:
- AZURE_TENANT_ID - The Microsoft Entra tenant (directory) ID.
- AZURE_CLIENT_ID - The client (application) ID of an App Registration in the tenant.
- AZURE_CLIENT_SECRET - A client secret that was generated for the App Registration.

The fifth credential in the chain is AzureCliCredential, which uses the Azure CLI to authenticate. This means that if you have already authenticated using az login, you can use DefaultAzureCredential without setting any environment variables.
## Why do we care about both EnvironmentCredential and AzureCliCredential?

Great question! When I'm developing locally, I can use DefaultAzureCredential without thinking further about it. I run az login and then run my script. DefaultAzureCredential will do what I need.
You can make use of AzureCliCredential both locally and in an Azure DevOps pipeline: locally it picks up your az login session, and in a pipeline it picks up the session established by the AzureCLI@2 task and its service connection. Should you need it, EnvironmentCredential is another option, and we'll demonstrate it too.
The nice thing about DefaultAzureCredential is that it supports both approaches, so you can use it in your code without having to specifically cater for the credential type being used.
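To make that concrete, here's a minimal sketch of the kind of code that benefits - nothing in it refers to a specific credential type, and the management scope used is just an example:
import { DefaultAzureCredential } from '@azure/identity';

// The same code runs unchanged whether the credential is ultimately sourced from
// an az login session, environment variables, or something else in the chain.
// The scope below is just an example.
const credential = new DefaultAzureCredential();
const accessToken = await credential.getToken(
  'https://management.azure.com/.default',
);
console.log(
  `Acquired a token which expires at ${new Date(accessToken.expiresOnTimestamp).toISOString()}`,
);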
## Using the AzureCLI@2 task with AzureCliCredential

To use the Azure CLI in an Azure DevOps pipeline, you can use the AzureCLI@2 task. This task allows you to run Azure CLI commands in your pipeline, given it is configured to use a service connection that has the necessary permissions to access your Azure resources.
Consider the following example pipeline YAML:
- task: AzureCLI@2
  displayName: Run with AzureCliCredential
  inputs:
    azureSubscription: myServiceConnection # this is the name of your Azure service connection in Azure DevOps
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: npm start # where `npm start` is your command that uses DefaultAzureCredential
The above will run the npm start command in the context of the Azure CLI. We won't document it here (though the sketch above gives a flavour), but imagine the npm start command runs a Node.js script which makes use of DefaultAzureCredential. The supplied service connection will authenticate using the credentials of the associated service principal. When the code runs and DefaultAzureCredential is used, the AzureCliCredential will be used to authenticate, as at this point the pipeline is effectively a logged-in user of the Azure CLI.
## Using the AzureCLI@2 task with EnvironmentCredential

If, for whatever reason, you want to use EnvironmentCredential in your Azure DevOps pipeline, you can do so by setting the necessary environment variables in the pipeline. I don't have a specific reason to do this, but you may. To achieve this, you can modify the approach as follows:
- task: AzureCLI@2
  displayName: Set service principal variables
  inputs:
    azureSubscription: myServiceConnection # this is the name of your Azure service connection in Azure DevOps
    scriptType: bash
    scriptLocation: inlineScript
    addSpnToEnvironment: true
    inlineScript: |
      echo "##vso[task.setvariable variable=AZURE_CLIENT_ID;issecret=true]${servicePrincipalId}"
      echo "##vso[task.setvariable variable=AZURE_CLIENT_SECRET;issecret=true]${servicePrincipalKey}"
      echo "##vso[task.setvariable variable=AZURE_SUBSCRIPTION_ID;issecret=true]$(az account show --query 'id' -o tsv)"
      echo "##vso[task.setvariable variable=AZURE_TENANT_ID;issecret=true]${tenantId}"

- bash: npm start
  displayName: 'Run with EnvironmentCredential'
  env:
    # see https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md#environment-variables
    AZURE_CLIENT_ID: $(AZURE_CLIENT_ID)
    AZURE_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
    AZURE_SUBSCRIPTION_ID: $(AZURE_SUBSCRIPTION_ID)
    AZURE_TENANT_ID: $(AZURE_TENANT_ID) # this is optional and not necessary for EnvironmentCredential to work
You can see this is a little different from the previous example. We're now using the AzureCLI@2 task to set the necessary environment variables for DefaultAzureCredential to work. Specifically, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET and AZURE_SUBSCRIPTION_ID.
The addSpnToEnvironment input of the task is set to true, which ensures that the service principal's credentials are added to the environment. We then map those credentials to the names that the DefaultAzureCredential expects and write them to the pipeline to be used in the next task.
The second task is a simple bash task that runs npm start, but it now has an env section that sets the necessary environment variables for DefaultAzureCredential to work. The environment variables are set using the variables exposed by the previous task.
In this post, we explored how to use DefaultAzureCredential in an Azure DevOps pipeline, specifically when using the AzureCLI@2 task. We discussed the two main approaches: using AzureCliCredential and EnvironmentCredential, and how to configure the pipeline to support both.
It's worth noting that whilst my own use case is JavaScript / TypeScript, the same principles apply to other languages that support DefaultAzureCredential, such as .NET. The key takeaway is that you can use DefaultAzureCredential in an Azure DevOps pipeline just as easily as you can locally, allowing for a consistent development and deployment experience.

There's no shortage of content out there detailing what is known about TypeScript's port to Go. This piece is not that. Rather, it's the reflections of two people in the TypeScript community: our thoughts, feelings and reflections on the port.
It's going to be a somewhat unstructured wander through our reactions and hopes. Buckle up for opinions and feelings.
John Reilly is a software engineer and an early adopter of TypeScript. He worked on Definitely Typed, the home of high quality type definitions which allow the integration of TypeScript and JavaScript. John wrote the history of Definitely Typed and featured in the TypeScript documentary. He also worked (and works) on ts-loader, the webpack loader for TypeScript. In his day job, he works at Investec, a South African bank and is based in London. The greatest city on earth (in his opinion).
Ashley is a software engineer who has the pleasure of living not too far from John, occasionally joining him on his morning walks, where they can kick-start their day talking about TypeScript together. Ashley first started writing TypeScript when it was on version 1.8 and has thoroughly enjoyed following its evolution. He has contributed to TypeScript and works at Bloomberg as part of the 'JavaScript Infrastructure & Tooling' team. Opinions are his own.
I mean, weren't we happy with each other anyway? Just as we were? Yes, but also no.
If you're in the JavaScript / TypeScript ecosystem, recent years have been notable for the number of projects that have appeared to support JavaScript related development, but built in non-JavaScript languages. We've had esbuild, written in Go. We've had SWC, written in Rust. We've had Bun, written in Zig. We've had Deno, written in Rust.
The list goes on, and was getting longer and longer. All of these increased performance, which was and is a wonderful thing. We'll talk more about performance later. One hold-out was TypeScript. It remained written in TypeScript. Whilst performance improvements did happen, and were an area of focus for the team, they were incremental; not transformative.
You could see the impatience in the community, as people started making their own efforts to speed up TypeScript by building their own implementations. Most notable here was DongYoon Kang, the creator of SWC. SWC, amongst other things, implemented the transpilation aspect of TypeScript. Donny decided to see if he could implement the type checker as well, again using Rust. He then switched to attempting a port using Go. After some time, he switched back to trying to implement it in Rust.
It didn't end up succeeding, but the fact there were people out there willing to try this demonstrated the desire for performance in the community. At some point a port was likely to succeed, and if it wasn't driven by the actual TypeScript team it would probably have landed the ecosystem in a tricky situation. A port of TypeScript, to a language other than TypeScript, seemed to be inevitable. And here we are.
It's useful to think about what the Go port meaningfully changes about TypeScript. Josh Goldberg has provided a useful framing on different aspects of TypeScript. It's four things:
- the language
- the type checker
- the compiler
- the language services
The language is unaffected by the port. Syntax is unchanged. You'll still be writing types and interfaces as you were before. No difference.
The same applies to the checks that the type checker is performing. The code that was detected as an error before will still fail to type check with TypeScript 7:
const i: number = 'not actually a number';
// ts: Type 'string' is not assignable to type 'number'
This is where the differences begin. The type checker, compiler, and the language services do change. They become an order of magnitude faster.
Put your hands up if you don't care about performance. That's right, no hands went up. We all care about performance. If you ever have the misfortune to work with technology that lags, which breaks you out of flow as you are working, you notice it. It's almost the only thing you can notice.
The TypeScript team has always cared about performance, particularly in the area of development tooling. Anders Hejlsberg in particular has mentioned in interviews the need for language servers to provide fast feedback as people work. Something measured in milliseconds and not seconds.
What are the implications of these changes to the TypeScript ecosystem? Put simply: a faster VS Code and faster builds.
Where John works, at Investec, there are many engineers who use VS Code, and spend part of their engineering life writing TypeScript and JavaScript. All those engineers will benefit from a snappier development experience. When they open up a project in VS Code, the time taken for the language service to wake up will drop dramatically. As they refactor their code, the experience will be faster. The "time to red squiggly line" metric will decrease. And that's a good thing.
As a consequence, engineers should be incrementally more effective, given that there are fewer pauses in their workflow.
The same incremental gain applies to builds. As our engineers build applications, they run TypeScript builds on their machines and in a Continuous Integration context. These will all be faster than they were before. We'll continually experience a performance improvement which is a benefit.
This, of course, is not Investec specific. Rather this is a general improvement that everyone will benefit from. Across the world, wherever anyone writes and builds TypeScript, they will do so faster.
Many languages have bootstrapping compilers. This means the compiler is written in the programming language that it is the compiler for. TypeScript has been an example of this since it was first open sourced. That is about to change; the compiler will stop being written in TypeScript and will start being written in Go. This is possibly the first example of a language moving away from having a bootstrapping compiler. This is done in the name of performance.
Of all the aspects about the Go port, this one was the one that gave John most anxiety. (It's John writing this by the way, writing in the third person feels very strange.) The TypeScript team will be moving away from writing TypeScript in their day to day lives. They won't abandon it of course, but they will certainly write less TypeScript and more Go. An implication of this is that there will be reduced dogfooding - which means less direct feedback to the makers of TypeScript about what it's like to write TypeScript.
Given how broad the TypeScript community is, this is perhaps not the concern that it might be. The team are very connected with the community and even if they are writing TypeScript less, people who are writing more will be sure to be vocal. It's maybe worth remembering that for most of the time TypeScript has been around, the team has often written TypeScript in a style that is not necessarily representative of the broader community. We're thinking here of classes (talked about below), and until recently modules. Before Jake Bailey's mammoth work to migrate the TypeScript codebase to use modules, the codebase used namespaces. This didn't stop TypeScript from improving support for these JavaScript features at all. So it seems reasonable we need not fear.
Another angle on this is wondering whether the TypeScript team might become less involved with TC39 (the committee that develops the JavaScript language specification). The TypeScript team has been instrumental in language development over the years, from optional chaining to decorators and beyond. As they will be writing less TypeScript, there's a view that they might become less directly involved in influencing the development of JavaScript.
Ashley, who is one of Bloomberg's TC39 delegates, is not at all worried about this. The Principal Product Manager of TypeScript, Daniel Rosenwasser, recently became one of the two incoming TC39 Facilitators. There is also Ron Buckton, another TC39 delegate from the TypeScript team, who continues to champion multiple exciting proposals such as Explicit Resource Management. The importance of having the TypeScript team's input into the evolution of JavaScript remains the same regardless of which language is used to implement TypeScript's analysis of JavaScript.
There are four primary ways to interact with the TypeScript package. Let's have a think about how these might change.
- tsc
- import ts from "typescript"
- tsserver
Let's drill further into tools that use TypeScript internally. There will be an impact. John is the maintainer of ts-loader, a widely used webpack loader for TypeScript. This loader depends upon TypeScript APIs which have been unchanged in years.
In fact, John went so far as to say as much on Bluesky in early March:

Only to have the TypeScript team effectively come out and say "hold my beer".
It's very early days, but we know for sure that the internal APIs of TypeScript (that ts-loader depends upon) will change. To quote Daniel Rosenwasser of the TypeScript team:
While we are porting most of the existing TypeScript compiler and language service, that does not necessarily mean that all APIs will be ported over.
ts-loader has two modes of operation:
- the default mode, which type checks your code as part of the build
- transpilation only mode (transpileOnly), which skips type checking and simply converts TypeScript to JavaScript

It's very unlikely that TypeScript 7 will work with ts-loader's type checking mode, without significant refactoring. However, it's quite likely that ts-loader might be able to support transpilation only mode with minimal changes. This mode only really depends on the transpileModule API of TypeScript. If the transpileModule API lands, then the transpilation only mode of ts-loader should just work. On the other hand, this might be the natural end of the road for the type checking mode of ts-loader.
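To give a sense of how small that surface area is, here's a rough sketch of the transpileModule API that transpilation only mode leans on - the source text passed in is just an example:
import ts from 'typescript';

// transpileModule converts a single module from TypeScript to JavaScript without
// type checking it - the source text here is just an example input.
const { outputText } = ts.transpileModule(
  `const greeting: string = 'hello'; export { greeting };`,
  {
    compilerOptions: {
      module: ts.ModuleKind.ESNext,
      target: ts.ScriptTarget.ES2022,
    },
  },
);

console.log(outputText);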
Ashley is the author of ts-blank-space (an open source TypeScript to JavaScript transform published by Bloomberg that avoids the need for source maps). It also depends on TypeScript's API so may be affected by the port. It's too early to say but the change here may turn into an opportunity. A not uncommon request of ts-blank-space is to investigate using a different parser. This is because while ts-blank-space itself is very small and only uses TypeScript's parsing API, this is not an isolated part of TypeScript so still ends up importing the whole type checker. For projects that already depend on TypeScript there is no added cost, but it makes ts-blank-space less appealing for use-cases that are not already importing TypeScript as a library.
Some tooling will have a natural path forwards. For instance, typescript-eslint will continue onwards with TypeScript 7. The TypeScript team are planning to help with typed linting with the new, faster APIs. So this means that ESLint (which many people are used to using), will become faster, as TypeScript becomes faster.
However, it's likely that tooling that depends upon internal TypeScript APIs which are going to radically change, may cease to be in their current forms. This will vary project by project, but expect change. And this is fine. Change is a constant.
Given that TypeScript decided to move away from being written with TypeScript, many people had and have opinions on the language being picked: Go. The folk who like C# wish that the team had picked C#. Particularly given Anders' involvement with C#. Those people that like Rust would very much have liked for the team to have picked Rust. The good news for Rust fans is that there is a chance that the Node.js bindings for TypeScript 7 will use a package that was written in Rust.
If John were to guess what the team might have picked, he would have said either Rust or Zig (what Bun is built with). Go felt like a slightly left-field choice, but upon reflection it completely makes sense. esbuild is written in Go, so there's prior art. Go has a garbage collector (Rust does not), which means the work of porting the code is significantly reduced. Likewise, C# is all about classes, and so a port from TypeScript (which makes only light use of classes in the compiler codebase) to C# would be uphill work.
The Go choice represents pragmatism; which is very much a TypeScript ethos. In fact if you look at the TypeScript Design goals, you can see how TypeScript has always espoused a pragmatic approach. Perhaps most famously by having soundness as a "non-goal". Instead, striking a balance between correctness and productivity.
Pragmatism is the TypeScript way. Go is a pragmatic choice.
This is evidence that JavaScript can be a slow language in which to implement a type-checker. To re-purpose a quote from Anders in this 'why Go' post:
No single language is perfect for every task
The task of type-checking is an intensive one. One way to think of type-checking is that it is taking the program it is checking, emulating executing every line of code, and detecting when the emulation breaks a rule. So the larger a program is, the more work there is to do. When the type-checker is written in a dynamic language this means that it requires another program to run it. For TypeScript this means we effectively have a JavaScript engine, running the TypeScript checker, which is running an emulation of another program. It's not a surprise that if the type-checker itself can be run natively, it will run noticeably faster.
Of the roughly ten-times speed-up from the Go port, about 3.5 times has been attributed to running as native code, with the rest coming from being able to do more work in parallel (source).
Considering how much more work it is to execute a dynamic language like JavaScript than to execute a pre-compiled native binary, if anything it's amazing that the switch to native isn't a larger difference. This shows how much work has gone into V8, the JavaScript engine used by Node.js, to execute JavaScript very effectively.
It is possible to write JavaScript programs that do work in parallel today, but the APIs to do this efficiently are things like the low-level SharedArrayBuffer, where you are now dealing with the raw bytes. There is a Stage 2 proposal to add "Shared Structs" to JavaScript - if this progresses it will be interesting to see JavaScript programs more easily benefit from using multiple cores.
There are still many benefits to using JavaScript for other tasks.
The ecosystem demanded a faster TypeScript. Performance cannot be ignored these days. As a consequence, some kind of port of TypeScript was bound to happen. If we accept that view, then what next? Well, the way that the TypeScript team has started executing on the migration fills us with confidence. The TypeScript team are talented, they are pragmatists and their choices are wise.
This is going to Go well.
Thanks to Jake Bailey, of the TypeScript team, for reviewing this piece - greatly appreciated! Also to Josh Goldberg for writing up his classification of what makes up TypeScript; many thanks!
The npx command is a powerful tool for running CLI tools shipped as npm packages, without having to install them globally. npx is typically used to run packages on the public npm registry. However, if you have a private npm feed, you can also use npx to run packages available on that feed.
Azure Artifacts is a feature of Azure DevOps that supports publishing npm packages to a feed for consumption. (You might want to read this guide on publishing npm packages to Azure Artifacts.) By combining npx and Azure Artifacts, you can deliver your CLI tool to consumers in a way that's easy to use and secure.

This post shows how to use npx and Azure Artifacts to deliver your private CLI tool to consumers.
## Why is npx with private npm feeds useful?

If you've ever found a need to deliver a private CLI tool to consumers, you'll know that it can be a challenge.
I work for a large organization and we need to share internal tools with our colleagues. The problem is that it's hard to get people to install tools. Either you need to provide detailed instructions on how to acquire and install the tool, or you need to work out some kind of internal distribution mechanism. You also have to think about how to update the tool. It's not simple.
By combining npx and Azure Artifacts it becomes much simpler. You can publish your CLI tool to a private npm feed and then consumers can run it with a single command. They don't need to install anything up front (apart from Node.js which they likely already have), and they don't need to worry about updates.
A typical use case is the one I've mentioned: sharing tools internally in an organisation. But, broader than that, if you want to deliver a private CLI tool to consumers, this is a great way to do it.
We're going to look at how we'd achieve this with Azure Artifacts as the host of the npm package. But, you could use any private npm feed that you have access to.
Before you can use npx to run your CLI tool, you need to publish it to a private npm feed. Here is a guide on how to publish a private npm package with Azure Artifacts. In that example we published a package to a feed called npmrc-script-organization in the johnnyreilly organization of Azure DevOps / Azure Artifacts.
For the sake of this post, we'll say that our package is a CLI tool with the name @johnnyreilly/my-cli-tool.
Remember, an npm package which houses a CLI tool is merely an npm package with a bin entry in the package.json. This post is not about how to create a CLI tool, but rather how to deliver one to private consumers. If you would like to see an example of what a CLI tool package looks like, you can check out the azdo-npm-auth package on GitHub. (In fact, we'll use azdo-npm-auth later in this post - it's an example of a CLI tool published to the public npm registry.)
The question now is, how we can run the (private) @johnnyreilly/my-cli-tool package with npx?
## The registry config setting of npm / npx

The secret sauce of running a CLI tool from a private npm feed with npx is the registry config setting of npm / npx. The registry option allows you to specify the URL of the npm feed that you want to use.
For our case, we grabbed the registry URL from the Azure DevOps UI by clicking on the "Connect to Feed" button in the Azure Artifacts section:

When we selected npm, ADO displayed instructions for setting up an .npmrc file for private consumption:

We don't need to set up an .npmrc file to run the CLI tool with npx, but we do need to grab the registry URL, which we can see in the example .npmrc file above. In our case, the URL is https://pkgs.dev.azure.com/johnnyreilly/_packaging/npmrc-script-organization/npm/registry/. This is the URL of the registry (private npm feed) that we want to use.
## Running the CLI tool with npx

Equipped with the registry URL, we can now run our CLI tool with npx:
npx -y --registry https://pkgs.dev.azure.com/johnnyreilly/_packaging/npmrc-script-organization/npm/registry/ @johnnyreilly/my-cli-tool
This command will download the @johnnyreilly/my-cli-tool package from the private npm feed and run it. The --registry option tells npx to use the specified registry URL to download the package and the -y option tells npx to answer "yes" to the installation prompt.
If you need to pass arguments to the CLI tool, you can simply add them to the end of the command as you would with any CLI tool: (I'll put this over multiple lines for readability, but you can run it as a single line)
npx -y \
--registry https://pkgs.dev.azure.com/johnnyreilly/_packaging/npmrc-script-organization/npm/registry/ \
@johnnyreilly/my-cli-tool --arg1 hello
There is another way to specify the registry URL, which is to use the npm_config_registry environment variable. This approach is more verbose and is not cross platform (it won't work on Windows). But, if you prefer this approach, you can use this style of command:
npm_config_registry=https://pkgs.dev.azure.com/johnnyreilly/_packaging/npmrc-script-organization/npm/registry/ npx -y @johnnyreilly/my-cli-tool
If you encounter an error like this:
npm error code E401
npm error Unable to authenticate, your authentication token seems to be invalid.
npm error To correct this please try logging in again with:
npm error npm login
Then npm is telling you to authenticate with the private npm feed / registry. This is because the feed is private and requires authentication. This is a good thing; it means that your package is secure, just as you'd hoped.
You may have your own way of authenticating with the feed. If so, great! Do that now and skip the next section.
## Using azdo-npm-auth to authenticate with Azure Artifacts

On the other hand, if you're using Azure Artifacts (and your Azure DevOps organisation is connected with your Azure account / Microsoft Entra ID), you can use azdo-npm-auth to solve your authentication needs. You can run azdo-npm-auth like this:
npx -y azdo-npm-auth --registry https://pkgs.dev.azure.com/johnnyreilly/_packaging/npmrc-script-organization/npm/registry/
The above command will acquire a PAT (Personal Access Token) from Azure DevOps and use it to create a user .npmrc file, which will be used by npx to authenticate with the feed subsequently.
If you encounter a npm error code E401 as you run the azdo-npm-auth command, it's possible that you have a local .npmrc file that is tripping npx up. You can get around that by explicitly passing the --registry of the public npm feed / registry to npx:
npx -y --registry https://registry.npmjs.org azdo-npm-auth --registry https://pkgs.dev.azure.com/johnnyreilly/_packaging/npmrc-script-organization/npm/registry/
That's right; we're passing the public npm registry to npx's --registry and we're passing our private npm feed / registry to azdo-npm-auth's --registry. This gets around the npm error code E401 issue.
Whichever way you authenticated, you should now be ready. You can now run the original command again; it should work this time. For example:
npx -y --registry https://pkgs.dev.azure.com/johnnyreilly/_packaging/npmrc-script-organization/npm/registry/ @johnnyreilly/my-cli-tool
And that's it! You've successfully run your CLI tool from a private npm feed with npx.
In this post we've used Azure Artifacts as the host of the npm package, but you could use any npm feed that you have access to. The key is to use the registry option of npm / npx to specify the URL of the npm feed.
By combining npx and private npm feeds, you can deliver your CLI tool to consumers in a way that's easy to use and secure. Consumers can run your tool with a single command, without having to install anything up front. This is a powerful way to share private CLI tools.
I've never been a fan of import { thing } from '../../file.js' style imports in code. By using subpath imports instead I might have import { thing } from '#src/folder/file.js' and that feels much cleaner to me.
But I also like consistency in my codebase, and I don't want to have a mix of import styles. So while using subpath imports can help me avoid relative imports, I also want to make sure that everyone on the team is using the same style. How? Here's how!

Before I dive into the details, I want to be clear that this is a very opinionated setup. It's not necessarily the best approach for every team or project. I'm sharing it because it's what I'm trying out right now, but I fully expect that there are trade-offs and that it might not be the right choice for everyone. I'm not even sure if it's the right choice for me in the long run. So take this with a grain of salt and consider whether it makes sense for your own context.
The first thing to do is to set up Node.js subpath imports in package.json:
{
  "type": "module",
  "imports": {
    "#src/*": {
      "types": "./src/*",
      "default": "./lib/*"
    },
    "#test/*": {
      "types": "./test/*",
      "default": "./test/*"
    }
  }
}
The idea is straightforward:
- For types, #src/* maps to ./src/*.
- At runtime, #src/* maps to ./lib/* (where our compiled output goes - yours could go somewhere else like dist).
- #test/* maps to ./test/*.

The practical result is that I can write import { thing } from '#src/features/thing.js'; everywhere and avoid relative import gymnastics.
## But what about Vitest?
To configure Vitest to understand the subpath imports, I need to add some aliasing to vitest.config.js:
import path from 'node:path';
import { fileURLToPath } from 'node:url';
import { defineConfig } from 'vitest/config';

const __dirname = path.dirname(fileURLToPath(import.meta.url));

export default defineConfig({
  resolve: {
    alias: [
      {
        find: /^#src\//,
        replacement: path.resolve(__dirname, 'src') + '/',
      },
      {
        find: /^#test\//,
        replacement: path.resolve(__dirname, 'test') + '/',
      },
    ],
  },
  // ...other config
});
Imports in test files will now work as expected, and I can use the same #src/* style imports in tests as well. There's a change required for mocks too, but it's not too bad. A migration in my project took me from this:
import { makeProxy } from '../../../../test/makeProxy.js';

vi.mock('chalk', () => ({
  default: makeProxy({}),
}));
To this:
vi.mock('chalk', async () => {
  const { makeProxy } = await import('#test/makeProxy.js');
  return { default: makeProxy({}) };
});
Not too bad. I actually prefer the import sitting inside the mock factory function, as it makes it clear that the import is only used for the mock setup.
## Enforcing subpath imports with no-restricted-imports

At this point, I have subpaths working, but other import styles are still allowed. Time to end that and enforce that everyone uses the #src/* style for imports.
We achieve this in two ways using ESLint's no-restricted-imports rule. First, I make sure that no one can use relative imports or other path styles:
rules: {
  "no-restricted-imports": [
    "error",
    {
      patterns: [
        {
          group: ["~/*", "@/*", "./*"],
          message: "Use #src/... subpath imports instead.",
        },
      ],
    },
  ],
}
This is intentionally strict. If someone reaches for ~/, @/, or ./, linting pushes them back to #src/....
Is this heavy-handed? Yes. That's the point.
## Sorting imports with perfectionist/sort-imports

I'm a big fan of consistent import grouping and sorting, and I achieve that with the perfectionist/sort-imports rule. This rule allows me to define custom groups and enforce a specific order for imports.
Once everything uses subpaths, I also want imports grouped consistently:
rules: {
  "perfectionist/sort-imports": [
    "error",
    {
      groups: [
        "type-import",
        ["value-builtin", "value-external"],
        ["type-subpath", "value-subpath"],
        ["type-parent", "type-sibling", "type-index"],
        ["value-parent", "value-sibling", "value-index"],
        "ts-equals-import",
        "unknown",
      ],
    },
  ],
}
The key bit is placing subpath imports after value-external imports. It makes #src/* style imports appear after imports from packages. So instead of this:
import type { Logger } from "#src/shared/cli/logger.js";
import { DefaultAzureCredential } from "@azure/identity";
We have this:
import { DefaultAzureCredential } from "@azure/identity";
import type { Logger } from "#src/shared/cli/logger.js";
As a side note, I have asked on the perfectionist repo about whether an approach like this could / should be supported by default, as opposed to being configurable. I don't know if it will be - but you can follow along here.
## Does TypeScript auto-import support subpath imports?

Yes! Support has been in place since TypeScript 5.4, thanks to microsoft/TypeScript#55015.
## Should you do this?

Possibly not.
This approach is highly opinionated, and maybe too rigid for many teams. I'm not fully convinced it's a universally good pattern. I'm trying it because the consistency is appealing, but I haven't come to a settled view yet. I'm writing this post in part to share the approach, but also to help me mull over the idea.
If you want one import path to rule them all, Node.js subpaths plus ESLint enforcement can absolutely do it.
But this is a strong convention with sharp edges. I don't yet know whether it's a great long-term idea. It is definitely interesting to try out, and I like the consistency it brings. But whether it's the right choice for your team or project is something only you can decide.
This is one of those posts that gathers together information I found doing research and puts it in one place.
Setting up Azure Application Insights with Node.js is straightforward. You need to install the applicationinsights package which integrates Node.js and Azure Application Insights:
npm install applicationinsights --save
Then configure it with your connection string. You want to do this as early as possible when your application starts, so if there are issues, you know as soon as possible. Here's how you can do this:
import appInsights from 'applicationinsights';

let client: appInsights.TelemetryClient | undefined;

if (process.env.APPLICATIONINSIGHTS_CONNECTION_STRING) {
  // https://github.com/microsoft/applicationinsights-node.js?tab=readme-ov-file#configuration
  appInsights
    .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true, true)
    .setAutoCollectExceptions(true)
    .setAutoCollectDependencies(true)
    .setAutoCollectConsole(true, true) // this will enable console logging
    .setAutoCollectPreAggregatedMetrics(true)
    .setSendLiveMetrics(false)
    .setInternalLogging(false, true)
    .enableWebInstrumentation(false)
    .start();

  client = appInsights.defaultClient;
}
The above code does two things of interest:
- It reads the connection string from an environment variable named APPLICATIONINSIGHTS_CONNECTION_STRING and I see no reason to deviate from that.
- It calls setAutoCollectConsole(true, true) which enables console logging. This is in response to this part of the docs:

Note that by default setAutoCollectConsole is configured to exclude calls to console.log (and other console methods). By default, only calls to supported third-party loggers (e.g. winston, bunyan) will be collected. You can change this behavior to include calls to console methods by using setAutoCollectConsole(true, true).
I'm not using a third-party logger in this example, so I want to include calls to console methods.
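To illustrate, with auto-collection of console methods switched on, an ordinary console call is picked up as trace telemetry; you can also send a trace explicitly through the client if you prefer (the messages below are just examples):
// Collected automatically as a trace because of setAutoCollectConsole(true, true) -
// the message is just an example.
console.log('order processed');

// Or send a trace explicitly via the telemetry client:
client?.trackTrace({ message: 'order processed explicitly' });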
If you're not a Fastify user you can stop here. But if you are a Fastify user, you may have discovered that setAutoCollectRequests appears not to be working. There's a GitHub issue to track this, but no discernible sign that it's going to be fixed soon. So, I've created a Fastify plugin to track requests manually based upon @stefanpeer's comment:
import { FastifyInstance, FastifyPluginOptions } from 'fastify';
import fp from 'fastify-plugin';
import appInsights from 'applicationinsights';

declare module 'fastify' {
  export interface FastifyRequest {
    // here we augment FastifyRequest interface as advised here: https://fastify.dev/docs/latest/Reference/Hooks/#using-hooks-to-inject-custom-properties
    app: { start: number };
  }
}

// based on https://github.com/microsoft/ApplicationInsights-node.js/issues/627#issuecomment-2194527018
export const appInsightsPlugin = fp(
  async (fastify: FastifyInstance, options: FastifyPluginOptions) => {
    if (!options.client) {
      console.log('App Insights not configured');
      return;
    }
    const client: appInsights.TelemetryClient = options.client;
    const urlsToIgnore = options.urlsToIgnore || [];

    fastify.addHook(
      'onRequest',
      async function (this: FastifyInstance, request, _reply) {
        // store the start time of the request
        const start = Date.now();
        request.app = { start };
      },
    );

    fastify.addHook(
      'onResponse',
      async function (this: FastifyInstance, request, reply) {
        if (urlsToIgnore.includes(request.raw.url)) return;

        const duration = Date.now() - request.app.start;

        client.trackRequest({
          name: request.raw.method + ' ' + request.raw.url,
          url: request.raw.url,
          duration: duration,
          resultCode: reply.statusCode.toString(),
          success: reply.statusCode < 400,
          properties: {
            method: request.raw.method,
            url: request.raw.url,
          },
          measurements: {
            duration: duration,
          },
        });
      },
    );
  },
);
The above code creates a Fastify plugin that tracks requests manually. It does this by adding two hooks to Fastify:
- onRequest - This hook stores the start time of the request.
- onResponse - This hook calculates the duration of the request and sends it to Azure Application Insights.

It also includes an urlsToIgnore option. This is an array of URLs that you don't want to track. For example, you might not want to track requests to the root of your application. You can pass this option when you register the plugin.
To make the code play nicely with TypeScript, we augment the FastifyRequest interface to include a start property. We write the start time in the onRequest hook and read it in the onResponse hook to calculate the duration of the request.
To consume the plugin in a Fastify application, you can do something like this:
import appInsights from 'applicationinsights';
import Fastify, { FastifyInstance } from 'fastify';
import { appInsightsPlugin } from './appInsightsPlugin.js';

let client: appInsights.TelemetryClient | undefined;

if (process.env.APPLICATIONINSIGHTS_CONNECTION_STRING) {
  // https://github.com/microsoft/applicationinsights-node.js?tab=readme-ov-file#configuration
  appInsights
    .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true, true)
    .setAutoCollectExceptions(true)
    .setAutoCollectDependencies(true)
    .setAutoCollectConsole(true, true) // this will enable console logging
    .setAutoCollectPreAggregatedMetrics(true)
    .setSendLiveMetrics(false)
    .setInternalLogging(false, true)
    .enableWebInstrumentation(false)
    .start();

  client = appInsights.defaultClient;
}

export const fastify: FastifyInstance = Fastify({
  logger: true,
});

fastify.register(appInsightsPlugin, { client, urlsToIgnore: ['/'] });
With this in place you'll see traffic in Azure Application Insights:

Using Azure Application Insights with Node.js is straightforward. However, if you're using Fastify, you'll need to track requests manually. This post shows you how to do that with a Fastify plugin. I hope you find it useful!
This post exists to dig into a little of the nuance around how software development has been changed by the innovations of AI: both the positives and the negatives. We’ll also discuss the tooling and approaches that have carried the industry forward, and the brand new pitfalls that are now revealing themselves.
This post won’t be exhaustive. Given how fast AI has changed and keeps changing, many of the reference points in this post may seem out of date from the moment of publication. But hopefully, the underlying principles should be evergreen.
This will be a personal story, informed by the experiences both from the world of open source and from colleagues and friends in the industry.
A little information about myself: I work as a software engineer for Investec, a multinational financial services group. The story of engineering at Investec is relevant to this piece. Investec has been an early and enthusiastic adopter of AI generally, and AI software tooling particularly. I’ll seek to draw on this experience.
I also work on open source software outside of work and have done for many years, primarily in the world of TypeScript.
One of the most well-known AI coding tools is GitHub Copilot. In many ways, it was the original AI coding tool, and competitors like Claude Code and Cursor still can’t match Copilot’s dominance. Let’s consider for a moment what makes these IDE AI tools useful.
GitHub Copilot has ask and edit features which are not radically different from what you’d get with say ChatGPT, but something like agent mode was truly the game-changer. This feature transformed the way I approach complex coding tasks, shifting from line-by-line assistance to higher-level problem solving.
Rather than implementing a feature myself from scratch, I can describe a feature or a bug fix in natural language, and the AI will generate the necessary code changes across multiple files. This holistic approach to coding feels more like collaborating with a junior developer who can take on significant chunks of work. This allows me to focus on reviewing and refining the output rather than getting bogged down in the minutiae of implementation. AI is just fantastic at boilerplate, it turns out.
Another aspect that agent mode shines at is its ability to generate bash scripts and automation tasks rather than just making direct code changes. There’s something to be said for deterministic results. When AI generates a script that you can review, understand, and then execute, you maintain control over the process. This approach is far more reliable than having AI make direct modifications to your codebase that might introduce subtle bugs or architectural inconsistencies.
The predictable nature of scripted solutions means you can verify the approach before execution, understand exactly what will happen, and easily roll back if needed. It’s a more methodical approach that aligns well with engineering best practices around reproducible builds and transparent processes.
With that context in mind, let’s look at how these AI tools have reshaped application development — sometimes brilliantly, sometimes problematically.
One of the most interesting aspects of AI-assisted coding is the notion of prompting your way to an application using a tool like Replit.
Historically, you may have run multiple workshops, the output of which may have been a vague design and some wireframes. But with something like Replit, you can leave a workshop with a functioning application. Crucially, you also have access to the source code of the AI-generated application, so it‘s possible to take that, leave Replit, and deploy it to your environment of choice.
Let’s walk through some of the aspects of this.
The time saved in the early stages of application development is considerable. The number of applications that would struggle to cross the chasm from idea to code is now vastly reduced. If you have an idea, it’s now possible to realise some form of it in hours. That’s incredibly useful.
A number of times in my career, I’ve been confronted by the idea of an application that just doesn’t work. A number of the underlying ideas are in conflict, in non-obvious ways. Discovering that information when you’ve already engaged your team and started developing is a waste of time, money, and good humor.
Using a tool like Replit to prototype drastically reduces the possibility that this might happen. By prompting AI to actually build a prototype of the app for you, it’s possible to surface poorly thought-out aspects of design before the actual build. This is really helpful and increases the chances of success.
Tools like Replit offer the ability to build bespoke applications. Recently, at Investec, the organisation was looking at a tool to manage vendors for the organisation. There are a number of products in the marketplace that perform this function.
But as the company examined these tools, it was clear that each tool was opinionated. If we wanted to use one of the tools, it wouldn’t quite fit our existing organisational processes and structures. We could use one of them, but we’d either be working around the differences or adjusting the way we worked to use the tool effectively.
The idea occurred: why not build our own? Historically, that would have taken a long time to achieve, and may not have been cost-effective given the number of users. But maybe we could prompt our way to an application that aligned with our processes?
Three people got in a room for an afternoon and “maybe” became “actually.” They left the room with an AI-generated application that was more aligned with Investec’s preferences and processes.
The question then was, can we take our prototype application and “productionise” it? To achieve that, we wanted:
There’s maybe more detail here than you need, but what hopefully shines through is how it’s possible to build applications for dedicated purposes, in ways that wouldn’t have been practical previously. Brilliant stuff.
While AI-generated applications offer tremendous advantages, they’re not without their challenges. The very speed and ease that make these tools so appealing can also mask significant issues that only become apparent later in the development process. Let’s explore some of the key limitations and pitfalls.
One of the most frustrating experiences with AI app generation occurs when you’re working with a tool like Replit and it starts making changes to one part of your stack while rendering another part meaningless. I’ve encountered situations where Replit created an app with a Python backend and a TypeScript front end.
At some point in the prompting journey, though, it stopped updating the TypeScript frontend and converted the Python into a full-stack web application, effectively leaving half the application non-functional.
The people prompting the app were not aware this had happened, and it wasn’t until we tried to migrate the application from Replit that we realised what had occurred. We’ve talked about AI saving time, but in this case, it cost us a good amount of effort to unravel what had happened.
This highlights a broader issue with AI-generated applications: they often struggle with the complexity of modern full-stack development, where changes in one layer can have cascading effects throughout the system.
Tools like Dependabot will often flag AI-generated applications for using outdated libraries with known vulnerabilities. This isn’t necessarily the AI’s fault; it’s working with training data that includes examples using older versions of libraries. But it does mean that the “finished” application isn’t actually ready for production without significant security updates.
Code quality is another concern. While AI can generate working code quickly, it doesn’t always generate good code. The applications work, but they may not follow best practices, lack proper error handling, or may have performance issues that only become apparent under load. AI application generation works exceptionally well when you “own the domain”, i.e. when the application functionality is self-contained and doesn’t rely heavily on external integrations.
AI tools can, however, create a false sense of completion. This is especially true if parts of your app functionality depend on external systems, APIs that don’t exist in standardised forms, or data that isn’t readily available. The app appears finished, but the hard work of integration is still ahead of you.
Perhaps most concerning is when AI tools use libraries that simply don’t have vulnerability-free versions available. This puts you in the immediate position of choosing between security and functionality: a choice that shouldn’t exist.
At Investec, we are very biased in the direction of security. So when this presents, we will take some time to identify and securely resolve this. This often involves more work, which is fine. The point to note here is that AI will not necessarily land you with production-grade code.
For organisations with established design systems (and Investec certainly has one) AI tools present an interesting challenge. AI can generate visually appealing interfaces that work well functionally, but they often don’t align with your company’s design language and brand guidelines.
You might end up with a beautiful application that looks nothing like the rest of your company’s digital properties. This disconnect between AI capabilities and organisational standards creates additional work to bring AI-generated UIs into compliance with house styles.
One of the unexpected consequences of AI-generated applications is how they expose the assumptions and biases embedded in AI training data and the system prompts. When you prompt for an application, the AI doesn’t just write code: it makes opinionated choices about which libraries, frameworks, and architectural patterns to use.
These choices often reflect what was popular in the AI’s training data, rather than what’s current or best suited to your specific needs. You might find your AI-generated application using React class components when functional components with Hooks would be more appropriate, or choosing older state management libraries when simpler solutions exist.
The challenge becomes more pronounced when you consider that AI tools tend to default to “safe” choices: libraries and patterns that were widely used and well-documented in their training period. While this reduces the likelihood of completely broken code, it can result in applications that feel outdated from the moment they’re generated.
This presents an interesting dilemma: do you accept the AI’s technology choices for the sake of speed, or do you invest time in modernising the stack to align with your preferences and current best practices? The answer often depends on whether you’re building a quick prototype or something intended for longer-term use.
One of the most significant benefits of AI coding tools is how they can help engineers work with languages and frameworks they’re not experts in. The AI becomes a bridge, allowing a Python developer to confidently work with TypeScript or a frontend engineer to write backend services. This democratisation of technical knowledge is genuinely powerful.
However, this strength also reveals a critical weakness. When AI generates code in domains where the engineer lacks expertise, it becomes much harder to review the output critically. The AI might know APIs that you don’t, and while this can accelerate development, it can also lead to situations where engineers ship code they don’t fully understand.
I’ve seen examples where smart engineers have relied on AI to write infrastructure code — Bicep templates, for instance — but then struggled when debugging issues arose. The problem isn’t that the AI wrote bad code; often, the code works perfectly. The issue is that when you don’t understand what’s been written, you can’t effectively maintain, debug, or extend it.
This creates a fundamental principle: if you don’t understand the code, you shouldn’t ship it. It’s important to hold fast to this principle for the long-term benefits of maintainable code. It’s easy to ignore the principle when AI makes it so easy to generate working solutions. But it pays off to do this in the long term.
Interestingly, one area where AI consistently excels is writing and fixing tests. AI-generated tests often surface actual issues in the codebase that human engineers might have missed. This creates an almost heretical situation for TDD advocates; the AI is finding bugs through tests that were written after the fact, not before.
While this approach might make purists uncomfortable, the practical benefit is undeniable. AI-generated tests serve as a safety net, catching edge cases and potential issues that improve overall code quality.
The flip side of this is that it can make some very silly decisions when writing tests. It may cover edge cases that are irrelevant, or miss important scenarios that a human would consider obvious. It often stubs out the implementation that you actually want to test.
Again, we’re highlighting the importance of human review and understanding when working with AI-generated code. We augment with AI; we don’t replace with AI.
As with many things in my life, it was open source software that got me thinking about this topic. Some OSS pals Josh and Brad posted this on Bluesky recently:


For a while now, the land of OSS has been awash with AI-generated PRs. This problem hasn't impacted my projects particularly, but I've certainly been aware of the toll it takes on other projects in terms of maintainers' time spent on reviews.
The most benign reading of the sort of PR the post is talking about is that the contributor thinks they know what they want, but believes AI will do a better job of writing the code than they will.
This isn't only an OSS problem; it's a general software development concern. AI isn't the problem here; humans are. We talk often about working effectively with AI by having a "human in the loop". The issue we're seeing here is the use of AI without sufficient humans.
I've long had a personal rule when coding: If you submit a PR you must be the first reviewer.
This predates Copilot by some years. The idea came to me when someone reviewed one of my PRs and raised some perfectly reasonable questions. As I'd been working on something, I'd changed approaches a few times, and what ended up in the PR wasn't entirely coherent.
So now, before I share a PR, I try to review it and see if it all makes sense.
This rule of thumb has served me well, and the use of AI coding tools only heightens the need for something similar.
While agent mode is undoubtedly a game-changer, the code it creates can sometimes be questionable from an architectural standpoint. I’ve seen AI choose unusual solutions that work but are suboptimal.
Here’s an example: I’d prompted Copilot to build a particular feature. The nature of the feature is not interesting, but the approach it used was. Rather than having an array of objects that represented the data it should use, it instead created a string array.
Each string in the array contained text that represented the data needed in a human-readable format. It then also created various regex-powered parsing mechanisms to extract the data it needed from the strings later on. It was a very roundabout way of achieving what I wanted. It was also very inefficient and buggy. The approach worked, but it was neither performant nor maintainable.
This highlights the importance of understanding not just what AI-generated code does, but how it does it. The “how” often reveals whether the solution will scale, perform well, and be maintainable over time. Other, less serious concerns I’ve seen include generating poorly factored code, using inefficient algorithms, or creating convoluted logic that is hard to follow.
Beyond anecdotal successes and challenges, the AI-assisted coding trend brings big-picture implications to the world of software development.
One consistent characteristic of AI-generated code is its verbosity. AI tends to be wordy, both in code comments and in implementation approaches. While thorough documentation isn’t inherently bad, brevity often leads to more maintainable code. In my experience, developers tend to accept AI’s verbose defaults without question, but this can lead to codebases that are unnecessarily complex and harder to maintain.
The same principle applies to pull request descriptions and documentation; AI tends toward comprehensive but overly detailed explanations when concise clarity would be more valuable. Oh, and emojis, always with the emojis! It can be steered to be more concise, but that requires deliberate prompting and review.
In many ways, AI functions like a travelator (aka a moving walkway) in an airport; it accelerates your movement in the direction you’re already going. If you’re heading in the right direction with solid engineering fundamentals, AI can dramatically speed up your progress. But if your approach or architecture is flawed, AI will help you get to the wrong destination much faster.
This acceleration effect means that the foundational skills of software engineering — understanding requirements, designing systems, and making architectural decisions — become even more critical in an AI-assisted world.
I started out long before AI was a thing. I learned to code by reading books, writing code, making mistakes, and learning from those mistakes. I learned to debug by debugging. I learned to architect systems by studying good architecture and bad architecture and learning from both. With AI tooling, it’s possible to go a long way without really learning these foundational skills.
It’s too early to know what the implications of this will be. Will we see a generation of engineers who can deliver features but don’t understand the underlying principles? Will debugging skills atrophy because AI can often generate working code?
These are open questions, but they highlight the importance of maintaining a strong foundation in software engineering principles even as we embrace AI tools.
The Zero Trust security principle operates on the concept of “never trust, always verify.” This means that no user, device, or application is inherently trusted, regardless of whether they are inside or outside the network perimeter. Although AI is a very different kettle of fish, this principle remains useful when considering AI inputs into your ecosystem.
GitHub Copilot wrote you some code that seems to do the job? Great! Look hard at what you received and be sure that you’re happy with what it’s doing and how it’s doing it. Source control becomes your safety belt when coding with AI; it provides the rollback mechanism when AI takes you down an unexpected path.
One unexpected consequence of AI-assisted coding is its impact on flow state. The traditional deep, uninterrupted focus that characterises productive programming sessions becomes harder to achieve when you’re constantly context-switching between writing code and reviewing AI suggestions.
Personally speaking, I derive great joy from getting deep into flow state as I build an application or a feature. It’s a wonderfully meditative state and genuinely improves my mental health.
The collaborative nature of AI-assisted development, while powerful, can disrupt this meditative quality of coding that many engineers cherish. It’s a trade-off between speed and the satisfying rhythm of sustained, focused work.
Every now and then, I’ll have a “feel the force, Luke” moment, turn off my Copilot, and intentionally enter into flow, unaccompanied by my AI buddy. I bet I’m not alone.
AI has undoubtedly transformed software development, offering unprecedented speed in prototyping, unprecedented access to knowledge across domains, and unprecedented assistance in solving complex problems. It’s a great unblocker; you don’t always need to find the expert (or “Marcel,” as we call him at Investec) when AI can provide guidance.
However, the key to successful AI-assisted development lies in maintaining the human element. AI excels at generating code, but humans remain essential for understanding requirements, making architectural decisions, reviewing outputs critically, and ensuring that solutions align with business needs and engineering standards.
The future of software development isn’t about replacing engineers with AI; it’s about engineers learning to work effectively with AI as a powerful tool in their toolkit. The most successful teams will be those that embrace AI’s capabilities while maintaining rigorous standards for code quality, architectural soundness, and security practices.
As we continue to navigate this rapidly evolving landscape, the principles of good engineering remain constant: understand what you’re building, review what you’re shipping, and never lose sight of the bigger picture. AI can accelerate the journey, but human judgment must still set the destination.
]]>
You're likely aware of the popularity of excellent projects like tRPC which provide a way to use TypeScript end-to-end. However, if you're working in a polyglot environment where your back end is written in C# or [insert other language here], and your front end is written in TypeScript, then you cannot take advantage of that. But by generating front end clients from a server's OpenAPI specs, it's possible to have integration tests that check your front end and your back end are aligned.
This post will show you how to do that using NSwag.
Let's talk about the kind of situation I'm imagining. From reading the web, you'd think that every organisation is running TypeScript on both its front end and its back end. In my experience, this is not the case. Many organisations have a particular technology stack on the back end - maybe C#/.NET - and a different one on the front end - generally TypeScript/JavaScript.
The technique of generating clients from OpenAPI specs is useful in this scenario. It means that you can keep your front end and back end in sync, even if they are written in different languages. The source of truth is the OpenAPI spec, which is typically generated from the back end code. The front end client code is generated from the OpenAPI spec. And then, because TypeScript is compiled, you can do a compilation test to check that the front end code is in sync with the back end code.
It's a simple idea but it is powerful. I typically achieve it in two steps:
- compile the strongly typed front end (which uses the generated client) as part of CI
- add an integration test that regenerates the client from the back end and checks it matches the committed client code
Most people will have done the first step. If you have a strongly typed front end project that is compiled as part of your CI process, then you will get compilation errors if you have code that does not type check successfully.
The second step is ensuring alignment between front and back ends. This is where you add an integration test that checks that the generated client is up to date with the back end. This is remarkably easy to achieve. You already have a mechanism for generating a TypeScript client from the back end code. You just need to generate the client code as part of your test, and then check that the generated code is the same as the already committed code.
Let's see what that looks like.
I'm going to use Vitest for this example. I love it, but you could use pretty much any test framework you like. Really, we're just checking that one string is the same as another, so you could get by without any test framework at all if you fancied.
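To illustrate that, a minimal no-framework version could be a small script like the one below. This is just a sketch, and the file paths assume the tmp/ and client-app layout described in the rest of this post:
// compare-clients.ts - a sketch of the same check without a test framework.
// Exits non-zero if the committed client and the freshly generated client differ.
import fs from 'node:fs';

const committedClient = fs.readFileSync('tmp/clients.ts', 'utf8');
const newlyGeneratedClient = fs.readFileSync('../client-app/src/clients.ts', 'utf8');

if (committedClient !== newlyGeneratedClient) {
  console.error("Generated client doesn't match the server; try `pnpm run generate-client`");
  process.exit(1);
}
console.log('Generated client matches the server');
A test framework gives you nicer reporting though, so that's what we'll use here.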
In my example, I'll have a separate client-server-tests project for the tests. Here's the package.json for that project:
{
"name": "client-server-tests",
"version": "0.0.0",
"scripts": {
"copy-original-generated-client": "copyfiles --flat ../client-app/src/clients.ts tmp/",
"regenerate-generated-client": "cd ../../ && pnpm run generate-client",
"testprepare": "pnpm run copy-original-generated-client && pnpm run regenerate-generated-client",
"test:ci": "pnpm run testprepare && vitest run --reporter=default --reporter=junit --outputFile=reports/junit.xml",
"test": "pnpm run testprepare && vitest"
},
"devDependencies": {
"copyfiles": "^2.4.1",
"typescript": "^5.8.3",
"vite": "^6.3.6",
"vitest": "^3.2.4"
}
}
But you could fold this into your front end project directly if you prefer.
I'm not going to repeat the code of the previous post that demonstrated how to generate TypeScript clients with NSwag - but please imagine that the code for the TypeScript client has been generated and is available in a front end project. The code above refers to the front end of that project as client-app and the generated client code is in client-app/src/clients.ts.
For that project, the back end is in C#, but it could be in any language that can generate an OpenAPI spec. The important thing is that the OpenAPI spec is the source of truth for the API.
You'll notice that both the test:ci and test scripts run a testprepare script first. This script does two things:
- copies the currently committed generated client (client-app/src/clients.ts) to a tmp folder
- regenerates the client from the back end, overwriting client-app/src/clients.ts
This leaves us ready to do our comparison. We can write a test that checks that the generated code is up to date. Here's what that looks like:
import { expect, test } from 'vitest';
import fs from 'fs';
test('generated-client', () => {
const committedClient = fs.readFileSync('tmp/clients.ts', 'utf8');
const newlyGeneratedClient = fs.readFileSync(
'../client-app/src/clients.ts',
'utf8',
);
expect(
committedClient,
"try `pnpm run generate-client`; generated client doesn't match the server",
).toBe(newlyGeneratedClient);
});
This test simply compares the previously generated client code with the newly generated client code. If they don't match, the test fails and the message suggests running the client generation command. So as well as being a useful test, it also provides a helpful hint to the developer. Like so:

Of course, you want to end up seeing a passing test:

The last thing to do is to make sure that this test project is run as part of your CI process. I'm not going to show you how to do this, because it will depend on your CI system. But the idea is that you add a step to your CI process that installs your dependencies and runs the tests in the client-server-tests project.
With that in place, you now have a way to ensure that your front end and back end are in sync. If someone makes a change to the back end that affects the OpenAPI spec, and they forget to regenerate the client code, the test will fail and they'll be prompted to run the client generation command.
If the change that they made in some way breaks the front end code, then the front end build will fail because of the compilation errors. So you get two layers of protection.
This is a simple but effective way to keep your front end and back end in sync, even if they are written in different languages.
]]>
This post looks at how to use the endsWith filter with the Microsoft Graph client. This falls into the category of "Advanced query capabilities on Microsoft Entra ID objects" and I found it tricky to get working.

Performing an endsWith or similar filter shouldn't be difficult. But the method of how to do so isn't obvious. If you've ever encountered a message like this:
Operator 'endsWith' is not supported because the 'ConsistencyLevel:eventual' header is missing. Refer to https://aka.ms/graph-docs/advanced-queries for more information
Then this blog post is for you.
I've written before about the Microsoft Graph client, and how to use it with Azure AD groups. You can find those posts here:
First let's quote from the documentation around advanced query capabilities on Microsoft Entra ID objects:
Microsoft Graph supports advanced query capabilities on various Microsoft Entra ID objects, also called directory objects, to help you efficiently access data. For example, the addition of not (not), not equals (ne), and ends with (endsWith) operators on the $filter query parameter.
Let's say we have a need for the endsWith operator or similar, for example: when querying for Entra ID / Azure AD groups. We may want to filter for groups that end with a certain string. And you can do that, but it does not work by default.
This is quite disappointing, and the documentation explains why this is the case:
The Microsoft Graph query engine uses an index store to fulfill query requests. To add support for additional query capabilities on some properties, those properties might be indexed in a separate store. This separate indexing improves query performance. However, these advanced query capabilities aren't available by default but, the requestor must set the ConsistencyLevel header to eventual and, except for $search, use the $count query parameter. The ConsistencyLevel header and $count are referred to as advanced query parameters.
I find this quite surprising. Essentially, the implementation of the endsWith and similar operators is bleeding through into the API design. This is quite clunky, and not something that I would expect from a modern API. It feels like a design decision that is more about performance than usability. Even if you do want to discourage the use of the endsWith operator, it would be nice to have a more user-friendly way of doing so. For example, you could have a separate endpoint for advanced queries, or a simple query parameter that enables advanced queries without having to set headers and a seemingly arbitrary query parameter.
However, the good news is that it is possible to use the endsWith operator (and others like it) with the Microsoft Graph client. The bad news is the way you have to do it.
Using the endsWith operator with the Microsoft Graph client: a broken example
We'll make what we're looking into concrete in this post, by having a meaningful example of querying the Graph client. We'll query for Entra ID / Azure AD groups. First let's see what a broken example looks like:
import { Client, type PageCollection } from '@microsoft/microsoft-graph-client';
export async function getAzureADGroups(
graphClient: Client,
): Promise<PageCollection> {
return (await graphClient
.api('/groups')
.filter(`startsWith(displayName, 'startfilter-')`)
.filter(`endsWith(displayName, '-endfilter')`)
.select(['displayName', 'id'])
.get()) as PageCollection;
}
The above code is intended to query for Azure AD groups that start with startfilter- and end with -endfilter. However, it will fail with this error:
Operator 'endsWith' is not supported because the 'ConsistencyLevel:eventual' header is missing. Refer to https://aka.ms/graph-docs/advanced-queries for more information
This error is at least helpful in that it points you to the documentation. But it doesn't mention the $count query parameter. And it doesn't suggest how you might use the query parameter and header with the Microsoft Graph client.
Using the endsWith operator with the Microsoft Graph client
To use the endsWith operator with the Microsoft Graph client, you need to set the ConsistencyLevel header to eventual and use the $count query parameter as you make your call.
I pieced together how to do this with the Microsoft Graph client based upon these two pieces of documentation:
Based upon this, I was able to produce the following code:
import { Client, type PageCollection } from '@microsoft/microsoft-graph-client';
export async function getAzureADGroups(
graphClient: Client,
): Promise<PageCollection> {
return (await graphClient
.api('/groups')
.query({
$count: 'true',
})
.header('ConsistencyLevel', 'eventual')
.filter(`startsWith(displayName, 'startfilter-')`)
.filter(`endsWith(displayName, '-endfilter')`)
.select(['displayName', 'id'])
.get()) as PageCollection;
}
This code sets the ConsistencyLevel header to eventual and uses the $count query parameter. This allows you to use the endsWith operator in your query. It works as expected and returns the groups that start with startfilter- and end with -endfilter.
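If you're curious what the Graph client is doing on our behalf, here's a rough sketch of the equivalent raw REST call made with fetch. It's an illustration rather than part of the solution: it assumes an access token has already been acquired (for example via @azure/identity) and it combines the two filters into a single $filter expression:
// Sketch: the raw REST request that corresponds to the Graph client call above.
// Assumes `accessToken` has already been acquired elsewhere.
async function getGroupsRaw(accessToken: string) {
  const filter = encodeURIComponent(
    "startsWith(displayName, 'startfilter-') and endsWith(displayName, '-endfilter')",
  );
  const url = `https://graph.microsoft.com/v1.0/groups?$count=true&$filter=${filter}&$select=displayName,id`;
  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      // without this header, endsWith fails with the error we saw earlier
      ConsistencyLevel: 'eventual',
    },
  });
  if (!response.ok) {
    throw new Error(`Graph request failed: ${response.status.toString()}`);
  }
  return (await response.json()) as { value: { displayName: string; id: string }[] };
}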
To make this a complete example, let's look at how you might use this in a real application. We'll create a function that retrieves Azure AD groups using the Microsoft Graph client. This function will use the endsWith operator and the ConsistencyLevel header.
import { DefaultAzureCredential } from '@azure/identity';
import { Client, type PageCollection } from '@microsoft/microsoft-graph-client';
export interface AzureADGroup {
/** eg name-of-group */
displayName: string;
/** eg GUID-GUID-GUID-GUID-GUID */
id: string;
}
export async function getMyAzureADGroups(): Promise<AzureADGroup[]> {
return getAzureADGroupsImpl({
queryProvider: async (graphClient: Client) => {
return (await graphClient
.api('/me/memberOf')
.select(['displayName', 'id'])
.get()) as PageCollection;
},
});
}
export async function getAzureADGroups(): Promise<AzureADGroup[]> {
return getAzureADGroupsImpl({
queryProvider: async (graphClient: Client) => {
return (await graphClient
.api('/groups')
.query({
$count: 'true',
})
.header('ConsistencyLevel', 'eventual')
.filter(`startsWith(displayName, 'startfilter-')`)
.filter(`endsWith(displayName, '-endfilter')`)
.select(['displayName', 'id'])
.get()) as PageCollection;
},
});
}
async function getAzureADGroupsImpl({
queryProvider,
}: {
queryProvider: (graphClient: Client) => Promise<PageCollection>;
}): Promise<AzureADGroup[]> {
// Use DefaultAzureCredential to authenticate
const credential = new DefaultAzureCredential();
// Initialize the Graph client
const graphClient = Client.initWithMiddleware({
authProvider: {
getAccessToken: async () => {
const tokenResponse = await credential.getToken([
'https://graph.microsoft.com/.default',
]);
return tokenResponse.token;
},
},
});
const groups: AzureADGroup[] = [];
try {
let response = await queryProvider(graphClient);
while (response.value.length > 0) {
for (const group of response.value as AzureADGroup[]) {
// {
// '@odata.type': '#microsoft.graph.group',
// displayName: 'azure-our-engteam',
// id: 'GUID-GUID-GUID-GUID-GUID'
// }
groups.push(group);
}
if (response['@odata.nextLink']) {
response = (await graphClient
.api(response['@odata.nextLink'])
.get()) as PageCollection;
} else {
break;
}
}
return groups;
} catch (err) {
const errorMessage = `Error listing Entra ID / Azure AD groups: ${err instanceof Error ? err.message : 'UNKNOWN'}`;
console.error(errorMessage);
throw new Error(errorMessage, { cause: err });
}
}
This code has an implementation method getAzureADGroupsImpl that takes a queryProvider function. This function is responsible for providing the query to the Microsoft Graph client. The getAzureADGroupsImpl function handles the pagination of the results. It uses the @odata.nextLink property to retrieve the next page of results until there are no more pages left.
The getAzureADGroups and getMyAzureADGroups functions call this getAzureADGroupsImpl function with their respective queries.
The getMyAzureADGroups function queries for the groups that the signed-in user is a member of. It does not use the endsWith operator. It simply queries for all groups that the user is a member of. This will work just fine without the ConsistencyLevel header or the $count query parameter.
The getAzureADGroups function queries for all groups that start with startfilter- and end with -endfilter. It uses the endsWith operator and consequently needs the ConsistencyLevel header and the $count query parameter. If it doesn't have these, it will fail with the error we saw earlier.
This code is a complete example of how to use the Microsoft Graph client to query for Azure AD groups using the endsWith operator. It handles authentication, pagination, and error handling. You can use this code as a starting point for your own applications that need to query Azure AD groups using the Microsoft Graph client.
In this post, we looked at how to use the endsWith operator (and similar "advanced query" operators) with the Microsoft Graph client. We saw that it is possible to use advanced query operators with the Microsoft Graph client, but it requires setting the ConsistencyLevel header to eventual and using the $count query parameter.

This post implements an alternative mechanism, directly using the Azure DevOps API and thus handling pagination. If you're curious as to how to create a pipeline, then check out my post on creating a pipeline with the Azure DevOps API.
Here's the TypeScript code:
export interface AzureDevOpsPipeline {
_links: {
self: {
/** eg "https://dev.azure.com/my-ado-organisation/87cf415b-a062-4f62-93e2-b37c26aa268b/_apis/pipelines/6805?revision=1" */
href: string;
};
web: {
/** eg "https://dev.azure.com/my-ado-organisation/87cf415b-a062-4f62-93e2-b37c26aa268b/_build/definition?definitionId=6805" */
href: string;
};
};
/** eg "\\my-app" or "\\" */
folder: string;
/** eg 25978 */
id: number;
/** eg "pipeline-name" */
name: string;
/** eg 1 */
revision: number;
/** eg "https://dev.azure.com/my-ado-organisation/87cf415b-a062-4f62-93e2-b37c26aa268b/_apis/pipelines/25978?revision=1" */
url: string;
}
export async function getAzureDevOpsPipelines({
personalAccessToken,
organization,
projectName,
}: {
personalAccessToken: string;
/** eg "my-ado-organisation" */
organization: string;
/** eg "my-ado-project" */
projectName: string;
}) {
const batchSize = 100;
let continuationToken = '';
const pipelines: AzureDevOpsPipeline[] = [];
// eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
while (true) {
// https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/pipelines/list?view=azure-devops-rest-7.1
const url = `https://dev.azure.com/${organization}/${projectName}/_apis/pipelines?api-version=7.1&$top=${batchSize.toString()}&continuationToken=${continuationToken}`;
try {
const response = await fetch(url, {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
Authorization: `Basic ${Buffer.from(`PAT:${personalAccessToken}`).toString('base64')}`,
'X-TFS-FedAuthRedirect': 'Suppress',
},
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status.toString()}`);
}
// this will be the name of the next pipeline or not present if there are no more pipelines
continuationToken = response.headers.get('x-ms-continuationtoken') ?? '';
// eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
const json = await response.json();
const nextPipelines = json as { value: AzureDevOpsPipeline[] }; // TODO: validate with Zod
if (nextPipelines.value.length > 0) {
pipelines.push(...nextPipelines.value);
}
const noMorePipelines =
nextPipelines.value.length === 0 || !continuationToken;
if (noMorePipelines) {
break;
}
} catch (error) {
console.error('Error:', error);
throw error;
}
}
return pipelines;
}
The above code uses the fetch API to list the Azure DevOps pipelines, but significantly, it takes the x-ms-continuationtoken from the response headers, so you can keep paginating if the number of pipelines exceeds the page size.
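To round this off, here's a small usage sketch. The organisation and project names are placeholders, and the personal access token is assumed to live in an environment variable of your choosing:
// Example usage of getAzureDevOpsPipelines; the values below are placeholders.
const pipelines = await getAzureDevOpsPipelines({
  personalAccessToken: process.env.AZURE_DEVOPS_PAT ?? '',
  organization: 'my-ado-organisation',
  projectName: 'my-ado-project',
});
console.log(`Found ${pipelines.length.toString()} pipelines`);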
Slash commands are commands that you type into a comment or chat box with a / preceding the command; hence the name "slash commands". GitHub has its own slash commands that you can use in issues and pull requests to add code blocks and tables etc. The slash commands are, in truth, quite limited.
However, through clever use of the GitHub Actions platform, it's possible to build something quite powerful which is "slash-command-shaped". In this post, we'll look at how to implement a /deploy slash command which, when invoked in a pull request, will deploy an Azure Container App with GitHub Actions.

The technique we'll use covers a deployment use case but, as we'll see, it could be adapted to many other scenarios.
I have an aunt that is a Poor Clare nun, and I've been over-engineering her convent's website for years. Most of the time the site moulders away, but every now and then I get a flurry of requests for minor changes. Once I've made the changes, they go live thanks to the magic of continuous deployment. But there's only ever been a single environment; production or "main".
Sometimes I'd like to eyeball a change before I've shipped it. Not always, sometimes. A particular case where this is useful, is when Renovate has submitted a dependency upgrade PR, and I'd like to see the impact without having to install and run it locally somewhere. Because, unless I instead hit "merge" with crossed fingers, that's what I'll need to do. (I have done this and it doesn't always end well.)
So I decided it was time that the "Convent with Continuous Delivery™️" had a staging environment. And I decided that I'd like to be able to deploy to it by entering the slash command /deploy in a pull request comment. Like this:

As we can see, I entered /deploy in a comment. In response, a GitHub Actions workflow then kicked off and deployed the staging environment. How did I do this? Let's find out.
The secret sauce that makes implementing slash commands in GitHub Actions possible is the issue_comment event. This event is triggered when an issue or pull request comment is created, edited, or deleted. We're interested in the situation where a pull request comment is created, and it contains the /deploy command.
Based upon the example here it's possible to create a workflow that is triggered by the issue_comment event, but only when the comment is on a pull request, and that comment contains the text /deploy.
Here's the workflow:
on:
issue_comment:
types: [created]
jobs:
run-for-pr-comment-with-deploy-command:
# check if the comment comes from a pull request and contains the command `/deploy`
if: github.event.issue.pull_request && contains(github.event.comment.body, '/deploy')
# ...
The if statement is the key to this workflow. It checks if the comment comes from a pull request and contains the command /deploy. If both conditions are met, the workflow continues. We're in business!
I already have a GitHub Actions workflow that deploys the main environment. I don't want to duplicate this logic in the new workflow. Instead, I want to reuse the existing workflow and just pass in a different environment name. This is where reusable workflows come in.
I think of these as functions that can be called from other workflows. They have inputs and outputs, and can be parameterised.
I migrated the deployment logic to a reusable workflow called util-build-and-deploy.yaml. I pondered the best way to share this information with you, and I've finally opted to include the entire workflow here. It's a bit long, but I think it's the best way to show you how it all fits together:
name: Build and deploy
on:
workflow_call:
inputs:
deploy:
required: true
type: boolean
branchName:
required: true
type: string
outputs:
containerAppUrl:
description: 'The URL of the deployed container app'
value: ${{ jobs.deploy.outputs.containerAppUrl }}
env:
RESOURCE_GROUP: rg-my-convent
REGISTRY: ghcr.io
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
outputs:
image-name: ${{ steps.vars.outputs.image_name }}
sha-short: ${{ steps.vars.outputs.sha_short }}
built-at: ${{ steps.vars.outputs.built_at }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
ref: ${{ inputs.branchName }}
- name: Set sha-short and image-name environment variables
id: vars
run: |
image_name=$(echo "${{ env.REGISTRY }}/${{ github.repository }}/node-service" | tr '[:upper:]' '[:lower:]')
echo "image_name=$image_name" >> $GITHUB_OUTPUT
sha_short=$(echo "$(git rev-parse --short HEAD)" | tr '[:upper:]' '[:lower:]')
echo "sha_short=$sha_short" >> $GITHUB_OUTPUT
echo "built_at=$(date +'%Y-%m-%dT%H:%M:%S')" >> $GITHUB_OUTPUT
# Login against a Docker registry
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ steps.vars.outputs.image_name }}
context: git # so it uses the git branch that is checked out
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=ref,event=branch
type=ref,event=pr
type=sha
# Build and push Docker image with Buildx (don't push if deploy is false)
# https://github.com/docker/build-push-action
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: ./
push: ${{ inputs.deploy }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
VITE_BRANCH_NAME=${{ inputs.branchName }}
VITE_GIT_SHA=${{ steps.vars.outputs.sha_short }}
VITE_BUILT_AT=${{ steps.vars.outputs.built_at }}
deploy:
runs-on: ubuntu-latest
if: inputs.deploy == true
needs: build
outputs:
containerAppUrl: ${{ steps.deploy.outputs.CONTAINER_APP_URL }}
permissions:
id-token: write
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
ref: ${{ inputs.branchName }}
- name: Azure login
uses: azure/login@v2
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
- name: Deploy to Azure
id: deploy
uses: azure/CLI@v2
with:
inlineScript: |
RESOURCE_GROUP="${{ env.RESOURCE_GROUP }}"
BUILT_AT="${{ needs.build.outputs.built-at }}"
BRANCH_NAME="${{ inputs.branchName }}"
SHA_SHORT="${{ needs.build.outputs.sha-short }}"
REF_SHA="${{ inputs.branchName }}.${{ needs.build.outputs.sha-short }}"
DEPLOYMENT_NAME="${REF_SHA////-}"
echo "DEPLOYMENT_NAME=$DEPLOYMENT_NAME"
webServiceImage="${{ needs.build.outputs.image-name }}:sha-$SHA_SHORT"
echo "webServiceImage=$webServiceImage"
if [ "$BRANCH_NAME" == "main" ]; then
webServiceContainerAppName="main-web"
else
webServiceContainerAppName="preview-web"
fi
echo "webServiceContainerAppName=$webServiceContainerAppName"
az deployment group create \
--resource-group $RESOURCE_GROUP \
--name "$DEPLOYMENT_NAME" \
--template-file ./infra/main.bicep \
--parameters \
webServiceImage="$webServiceImage" \
containerRegistry=${{ env.REGISTRY }} \
containerRegistryUsername=${{ github.actor }} \
containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
branchName="$BRANCH_NAME" \
gitSha="$SHA_SHORT" \
builtAt="$BUILT_AT" \
workspaceName='shared-log-analytics' \
appInsightsName='shared-app-insights' \
managedEnvironmentName='shared-env' \
webServiceContainerAppName="$webServiceContainerAppName" \
tags="$TAGS" \
APPSETTINGS_API_KEY="${{ secrets.APPSETTINGS_API_KEY }}" \
APPSETTINGS_DOMAIN="${{ vars.APPSETTINGS_DOMAIN }}" \
APPSETTINGS_PRAYER_REQUEST_FROM_EMAIL="${{ vars.APPSETTINGS_PRAYER_REQUEST_FROM_EMAIL }}" \
APPSETTINGS_PRAYER_REQUEST_RECIPIENT_EMAIL="${{ vars.APPSETTINGS_PRAYER_REQUEST_RECIPIENT_EMAIL }}"
CONTAINER_APP_URL=$(az containerapp show \
--resource-group "$RESOURCE_GROUP" \
--name "$webServiceContainerAppName" \
--query properties.configuration.ingress.fqdn \
--output tsv)
echo "CONTAINER_APP_URL=$CONTAINER_APP_URL"
echo "CONTAINER_APP_URL=$CONTAINER_APP_URL" >> $GITHUB_OUTPUT
Let's talk through what this workflow does:
- It's triggered by the workflow_call event, which is how reusable workflows are triggered.
- It has two jobs, build and deploy, and the deploy job is only run if the deploy input is true. (This allows us to call the workflow with deploy: false to only build the image.)
- The build job checks out the code, sets some environment variables, logs into the Docker registry, extracts metadata for Docker, and builds and pushes the Docker image.
- The deploy job checks out the code, logs into Azure, and deploys the container app to Azure Container Apps. It then outputs the URL of the deployed container app in order that we can display it to the user.

Now most of this workflow is the same as the one I was originally using to deploy to the main environment. The key difference is that it is now parameterised with the branchName input. This is important for two reasons:
- We deploy to the preview-web container app if the branch name is not main. Otherwise we'll deploy to main-web.
- We need to build and deploy the code for the correct branch. You'll see us use the branchName input in the actions/checkout steps and you'll see us use context: git in the docker/metadata-action step.

You might be thinking at this point, "fine - but I don't have a containerised application and I don't have an Azure Container Apps service to deploy to". That's great! You can adapt this workflow to build any type of app you would like and deploy to any type of service. The crucial part is that you must build and deploy the code for the correct branch. This is why we pass the branchName input to the workflow.
And now some bad news: the issue_comment event doesn't know the branch that the pull request is for. We're going to need to build another reusable workflow that we will use to determine the branch name of the pull request.
Now we're old hands at creating reusable workflows, we're going to create another one that will determine the branch name of the pull request. We'll call this workflow util-get-pr-branch-name.yaml:
name: Get PR branch name
on:
workflow_call:
inputs:
pullRequestNumber:
required: true
type: number
outputs:
branchName:
description: 'The source branch name for the pull request'
value: ${{ jobs.get-pr-branch-name.outputs.branchName }}
jobs:
get-pr-branch-name:
runs-on: ubuntu-latest
outputs:
branchName: ${{ steps.get-pr-branch-name.outputs.branchName }}
steps:
- id: get-pr-branch-name
run: |
branchName=$(gh pr view ${{ inputs.pullRequestNumber }} --json "headRefName" --jq ".headRefName" --repo ${{ github.repository }})
echo "branchName=$branchName" >> $GITHUB_OUTPUT
env:
GH_TOKEN: ${{ github.token }}
This is fairly self explanatory. The workflow takes a pullRequestNumber input and outputs the branchName of the pull request. It uses the gh CLI to get the branch name of the pull request using the headRefName property of a pull request. (Incidentally, the env: GH_TOKEN: ${{ github.token }} line is important as it allows the workflow to authenticate with GitHub.)
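As an aside, if you'd rather not shell out to the gh CLI, the same lookup can be done from TypeScript with Octokit. This is just a sketch, assuming a GITHUB_TOKEN is available in the environment:
// Sketch: fetching a pull request's head branch with Octokit rather than the gh CLI.
import { Octokit } from '@octokit/rest';

async function getPrBranchName(owner: string, repo: string, pullNumber: number) {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  const { data: pullRequest } = await octokit.rest.pulls.get({
    owner,
    repo,
    pull_number: pullNumber,
  });
  return pullRequest.head.ref; // the source branch name of the pull request
}
Either way, the output is the same: the branch name we need to pass along to the build and deploy workflow.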
Now we have our two reusable workflows, we can put them together in a workflow that is triggered by the issue_comment event. This workflow will call the util-get-pr-branch-name.yaml workflow to get the branch name of the pull request, and then call the util-build-and-deploy.yaml workflow to build and deploy the code for that branch. Here's the pull-request-commands.yaml workflow:
name: Pull request commands
on:
issue_comment:
types: [created]
jobs:
get-pr-branch-name:
uses: ./.github/workflows/util-get-pr-branch-name.yaml
with:
pullRequestNumber: ${{ github.event.issue.number }}
pre-deploy:
# check if the comment comes from a pull request and contains the command `/deploy`
if: github.event.issue.pull_request && contains(github.event.comment.body, '/deploy')
runs-on: ubuntu-latest
steps:
- run: |
gh issue comment ${{ github.event.issue.number }} --body "Preview environment [deploying](https://github.com/johnnyreilly/poorclaresarundel-aca/actions/runs/${{ github.run_id }})..." --repo ${{ github.repository }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
deploy:
# check if the comment comes from a pull request and contains the command `/deploy`
if: github.event.issue.pull_request && contains(github.event.comment.body, '/deploy')
needs: [get-pr-branch-name, pre-deploy]
uses: ./.github/workflows/util-build-and-deploy.yaml
with:
deploy: true
branchName: ${{ needs.get-pr-branch-name.outputs.branchName }}
secrets: inherit
post-deploy:
runs-on: ubuntu-latest
needs: deploy
steps:
- run: |
gh issue comment ${{ github.event.issue.number }} --body "Preview environment deployed: https://${{ needs.deploy.outputs.containerAppUrl }}" --repo ${{ github.repository }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
What you're hopefully gleaning from the above is that we have 4 jobs in this workflow:
- get-pr-branch-name - this job calls the util-get-pr-branch-name.yaml workflow to get the branch name of the pull request. Note that we pass the github.event.issue.number as the pullRequestNumber input.
- pre-deploy - this job runs immediately to post a comment in the pull request to let the user know that the preview environment is being deployed. (Again using the GitHub CLI.) The comment gives the user feedback that the command has been received and is being actioned. Based upon my experience, this response will show up in the pull request 5-10 seconds after the /deploy command is entered. Not as fast as I'd like, but reasonable. For bonus points, I've chosen to include a link to the GitHub Actions run that is deploying the preview environment. This is useful as it allows the user to see the progress of the deployment.
- deploy - this job calls the util-build-and-deploy.yaml workflow to build and deploy the code for the branch. Note that we pass the branchName input to the workflow using the get-pr-branch-name.outputs.branchName output. Note also that we're passing deploy: true to the workflow to ensure that the code is deployed, and that we're inheriting the secrets from the parent workflow. This is important as it allows the child workflow to access the secrets it needs to deploy the code.
- post-deploy - this job posts a comment in the pull request to let the user know that the preview environment has been deployed and where they can find it.

Or maybe I should have said it better as a screenshot:

Yup! That's the same screenshot as before. I'm just showing it again to remind you that this is what we've built.
We've written a slash command for deployment in this post, but you could write a slash command for anything you like. The key is to use the issue_comment event to trigger the workflow, and to check the comment body for the command you're interested in. You could pass more information in the comment body than just the slash command. For example, you could pass the name of the environment you want to deploy to, or the version of the app you want to deploy. You could even pass multiple commands in a single comment. The world is your oyster!
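To make that concrete, here's a sketch of how you might parse extra arguments out of the comment body; the /deploy <environment> <version> format is entirely hypothetical:
// Sketch: parsing a hypothetical `/deploy <environment> <version>` comment.
function parseDeployCommand(commentBody: string) {
  const match = /^\/deploy(?:\s+(\S+))?(?:\s+(\S+))?/m.exec(commentBody.trim());
  if (!match) {
    return undefined;
  }
  const [, environment = 'staging', version = 'latest'] = match;
  return { environment, version };
}

// parseDeployCommand('/deploy production v1.2.3')
// => { environment: 'production', version: 'v1.2.3' }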
You can then call other workflows to do the heavy lifting for you, remembering to pass in any inputs that are needed.
If you would like to see the repo where this was implemented, look here.
]]>
NSwag is still great, but it produces OpenAPI 3.0 specifications. However, Microsoft have been working on their own OpenAPI tooling for .NET. The Microsoft.AspNetCore.OpenApi package provides functionality to generate OpenAPI specifications from ASP.NET Core Web APIs and it supports OpenAPI 3.1. This difference turns out to be significant when it comes to handling nullability.
There was a change to how nullability is represented in OpenAPI 3.1 compared to 3.0. Whether that change is the cause or not I'm not sure, but the OpenAPI specifications produced by Microsoft.AspNetCore.OpenApi seem to surface nullability better than I've found with NSwag or Swashbuckle. If something is not defined as nullable in the C# model, it is not marked as nullable in the OpenAPI spec. This means that when we generate TypeScript clients from the OpenAPI spec, we get better nullability support in TypeScript too. Previously I'd find myself doing a lot of null checks or assertions in TypeScript even when the C# model didn't allow nulls. Now, with OpenAPI 3.1 and Microsoft.AspNetCore.OpenApi, I find I need to do that much less often.
The client that NSwag generates is also still very useful. But it is somewhat "heavy" in that it creates a lot of code, and it is runtime code, so it adds to my bundle size and my execution time. The alternative I'm going to show you here is to use OpenAPI TypeScript / openapi-ts. This is a lightweight TypeScript client generator for OpenAPI 3.x specifications. Most of the work it does is in the form of TypeScript type definitions. Given that type definitions are erased at runtime, the resulting client code is very lightweight. It also has good support for OpenAPI 3.1.
So in this post we're going to do exactly what I did in my 2021 post, but this time using Microsoft.AspNetCore.OpenApi to generate the OpenAPI spec and openapi-ts to generate the TypeScript client.
We will:
- create a .NET Web API that exposes an OpenAPI specification using Microsoft.AspNetCore.OpenApi.
- create a React / TypeScript front end and generate a TypeScript client from that specification using openapi-ts.

If you're going to do this, you will need both Node.js and the .NET SDK installed.
We'll now create an API which exposes an Open API endpoint:
dotnet new webapi -o server
The above command creates a new .NET Web API project in a folder called server. Pretty much all the code we care about is in Program.cs:
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
// Learn more about configuring OpenAPI at https://aka.ms/aspnet/openapi
builder.Services.AddOpenApi();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.MapOpenApi();
}
app.UseHttpsRedirection();
var summaries = new[]
{
"Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
};
app.MapGet("/weatherforecast", () =>
{
var forecast = Enumerable.Range(1, 5).Select(index =>
new WeatherForecast
(
DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
Random.Shared.Next(-20, 55),
summaries[Random.Shared.Next(summaries.Length)]
))
.ToArray();
return forecast;
})
.WithName("GetWeatherForecast");
app.Run();
record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary)
{
public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
}
This is simply exposing a single endpoint, /weatherforecast which returns some (fake) weather data. If we run our API with:
dotnet run --urls="http://localhost:5000"
We can then navigate to http://localhost:5000/weatherforecast and see the JSON output:
[
{
"date": "2025-12-30",
"temperatureC": 11,
"summary": "Sweltering",
"temperatureF": 51
},
{
"date": "2025-12-31",
"temperatureC": 4,
"summary": "Cool",
"temperatureF": 39
},
{
"date": "2026-01-01",
"temperatureC": -19,
"summary": "Cool",
"temperatureF": -2
},
{
"date": "2026-01-02",
"temperatureC": -8,
"summary": "Warm",
"temperatureF": 18
},
{
"date": "2026-01-03",
"temperatureC": -16,
"summary": "Sweltering",
"temperatureF": 4
}
]
And we can see the OpenAPI endpoint at http://localhost:5000/openapi/v1.json:
{
"openapi": "3.1.1",
"info": {
"title": "server | v1",
"version": "1.0.0"
},
"servers": [
{
"url": "http://localhost:5000/"
}
],
"paths": {
"/weatherforecast": {
"get": {
"tags": ["server"],
"operationId": "GetWeatherForecast",
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/WeatherForecast"
}
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"WeatherForecast": {
"required": ["date", "temperatureC", "summary"],
"type": "object",
"properties": {
"date": {
"type": "string",
"format": "date"
},
"temperatureC": {
"pattern": "^-?(?:0|[1-9]\\d*)$",
"type": ["integer", "string"],
"format": "int32"
},
"summary": {
"type": ["null", "string"]
},
"temperatureF": {
"pattern": "^-?(?:0|[1-9]\\d*)$",
"type": ["integer", "string"],
"format": "int32"
}
}
}
}
},
"tags": [
{
"name": "server"
}
]
}
This is great! (Actually, there are some problems with the temperatureC and temperatureF properties being marked as both integer and string, but we'll ignore that for now.)
We'll now create a web app with which to consume our API:
npm create vite@latest client -- --template react-ts
This creates a React + TypeScript app in a folder called client. We'll now follow the openapi-ts setup instructions to add openapi-ts to our project:
cd client
npm i -D openapi-typescript typescript
And we'll update the tsconfig.app.json to include the recommended settings:
{
"compilerOptions": {
"noUncheckedIndexedAccess": true
}
}
To make local development easier, we'll also add a proxy to our vite.config.ts so that API request is proxied to our .NET API:
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
// https://vite.dev/config/
export default defineConfig({
plugins: [react()],
server: {
proxy: {
'/weatherforecast': {
target: 'http://127.0.0.1:5000',
changeOrigin: true,
autoRewrite: true,
},
},
},
});
Now we no longer need to deal with CORS during development, and our local development setup more closely resembles production. Incidentally, we could put all our API requests behind the proxy if we wanted to by using a standard prefix like /api, but for this demo we'll just proxy the one endpoint.
We have a front end app ready to consume our API. But we need to generate an OpenAPI client first.
We'll add an npm script to our package.json in the client folder to generate our OpenAPI client using openapi-ts:
"scripts": {
// ... other scripts ...
"generate-client": "openapi-typescript http://localhost:5000/openapi/v1.json --output src/GeneratedClient.ts --root-types --root-types-no-schema-prefix"
}
This, when run, will generate a TypeScript client in src/GeneratedClient.ts based on the OpenAPI spec exposed by our .NET API. It will also include the "root types" so we can import them in our code easily. To generate the client, we need to ensure our API is running. So we'll jump back up to the root of our .NET / React project and we'll add a package.json. We'll add the following two dependencies:
npm install --save-dev start-server-and-test concurrently
Then we'll add scripts to handle running client and server together, and to generate the client:
{
"name": "openapi-ts-test",
"version": "1.0.0",
"license": "ISC",
"scripts": {
"start": "concurrently -n \"FE,BE\" -c \"bgBlue.bold,bgMagenta.bold\" \"npm run dev:client\" \"npm run dev:server\"",
"dev:client": "cd client && npm run dev",
"dev:server": "cd server && dotnet run --urls=\"http://localhost:5000\"",
"generate-client": "start-server-and-test dev:server http-get://localhost:5000/openapi/v1.json generate-client:make",
"generate-client:make": "cd client && npm run generate-client"
},
"devDependencies": {
"concurrently": "^9.2.1",
"start-server-and-test": "^2.1.3"
}
}
Running npm run generate-client in the root of our project will now:
1. Start the .NET server at http://localhost:5000
2. Wait for the OpenAPI endpoint to become available, courtesy of start-server-and-test
3. Run the generate-client script in the client folder to generate the TypeScript client.
Here's what our generated client looks like:
/**
* This file was auto-generated by openapi-typescript.
* Do not make direct changes to the file.
*/
export interface paths {
'/weatherforecast': {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get: operations['GetWeatherForecast'];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
}
export type webhooks = Record<string, never>;
export interface components {
schemas: {
WeatherForecast: {
/** Format: date */
date: string;
/** Format: int32 */
temperatureC: number | string;
summary: null | string;
/** Format: int32 */
temperatureF?: number | string;
};
};
responses: never;
parameters: never;
requestBodies: never;
headers: never;
pathItems: never;
}
export type WeatherForecast = components['schemas']['WeatherForecast'];
export type $defs = Record<string, never>;
export interface operations {
GetWeatherForecast: {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
requestBody?: never;
responses: {
/** @description OK */
200: {
headers: {
[name: string]: unknown;
};
content: {
'application/json': components['schemas']['WeatherForecast'][];
};
};
};
};
}
You can see our /weatherforecast endpoint is represented in the paths section and the WeatherForecast model is represented in the components.schemas section.
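If we need to refer to these types elsewhere in our code, we can index into the generated interfaces directly. As a small sketch (the WeatherForecastResponse alias is my own illustrative name, not something the generator emits):
import type { paths, WeatherForecast } from './GeneratedClient';
// The 200 response body for GET /weatherforecast, derived from the paths interface.
type WeatherForecastResponse =
  paths['/weatherforecast']['get']['responses'][200]['content']['application/json'];
// Thanks to --root-types we can also use the WeatherForecast alias directly;
// the two describe the same shape:
const forecasts: WeatherForecastResponse = [];
const sameShape: WeatherForecast[] = forecasts;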
Microsoft.AspNetCore.OpenApi surfaced types
I mentioned earlier that the temperatureC and temperatureF properties were marked as both integer and string in the OpenAPI spec. This is because Microsoft.AspNetCore.OpenApi is being ... interesting ... about number types. If we look at the types created in our client we see:
WeatherForecast: {
/** Format: date */
date: string;
/** Format: int32 */
temperatureC: number | string;
summary: null | string;
/** Format: int32 */
temperatureF?: number | string;
};
Note how temperatureC and temperatureF are both number | string. This isn't what we're after; we want them to be just number to reflect the C# int model. To fix this, we can create two IOpenApiSchemaTransformer implementations that fix up the number | string types to plain number types: one to handle integer-style numbers (IntegerSchemaTransformer) and one to handle numbers with decimal places (NumberSchemaTransformer).
using Microsoft.AspNetCore.OpenApi;
using Microsoft.OpenApi;
namespace Server.OpenApi;
/// <summary>
/// Transforms OpenAPI schema for integer types to ensure they are represented
/// with proper type and format, removing unwanted pattern and string type alternatives.
/// This affects integer types like int, long, short, etc.
/// </summary>
public sealed class IntegerSchemaTransformer : IOpenApiSchemaTransformer
{
public Task TransformAsync(
OpenApiSchema schema,
OpenApiSchemaTransformerContext context,
CancellationToken cancellationToken)
{
var type = context.JsonTypeInfo.Type;
// Handle nullable integers
var actualType = Nullable.GetUnderlyingType(type) ?? type;
// Check if this is an integer type
if (actualType == typeof(int) ||
actualType == typeof(long) ||
actualType == typeof(short) ||
actualType == typeof(byte) ||
actualType == typeof(sbyte) ||
actualType == typeof(uint) ||
actualType == typeof(ulong) ||
actualType == typeof(ushort))
{
// Set type to integer only (not ["integer", "string"])
schema.Type = JsonSchemaType.Integer;
// Clear any pattern that might have been added
schema.Pattern = null;
// Set appropriate format based on the actual type
schema.Format = actualType switch
{
// based on https://spec.openapis.org/oas/v3.1.1.html#data-types
Type t when t == typeof(int) => "int32",
Type t when t == typeof(uint) => "int32",
Type t when t == typeof(long) => "int64",
Type t when t == typeof(ulong) => "int64",
Type t when t == typeof(short) => "int32",
Type t when t == typeof(ushort) => "int32",
Type t when t == typeof(byte) => "int32",
Type t when t == typeof(sbyte) => "int32",
_ => "int32"
};
// Clear any enum values that might have been set
schema.Enum?.Clear();
}
return Task.CompletedTask;
}
}
/// <summary>
/// Transforms OpenAPI schema for number types to ensure they are represented
/// with proper type and format, removing unwanted pattern and string type alternatives.
/// This affects floating-point types like double, float, and decimal.
/// </summary>
public sealed class NumberSchemaTransformer : IOpenApiSchemaTransformer
{
public Task TransformAsync(
OpenApiSchema schema,
OpenApiSchemaTransformerContext context,
CancellationToken cancellationToken)
{
var type = context.JsonTypeInfo.Type;
// Handle nullable numbers
var actualType = Nullable.GetUnderlyingType(type) ?? type;
// Check if this is a floating-point number type
if (actualType == typeof(double) ||
actualType == typeof(decimal) ||
actualType == typeof(float))
{
// Set type to number only (not ["number", "string"])
schema.Type = JsonSchemaType.Number;
// Clear any pattern that might have been added
schema.Pattern = null;
// Set appropriate format based on the actual type
schema.Format = actualType switch
{
// based on https://spec.openapis.org/oas/v3.1.1.html#data-types
Type t when t == typeof(double) => "double",
Type t when t == typeof(decimal) => "double",
Type t when t == typeof(float) => "float",
_ => "double"
};
// Clear any enum values that might have been set
schema.Enum?.Clear();
}
return Task.CompletedTask;
}
}
And the Program.cs is updated to register these transformers:
builder.Services.AddOpenApi(options =>
{
options.AddSchemaTransformer<Server.OpenApi.IntegerSchemaTransformer>();
options.AddSchemaTransformer<Server.OpenApi.NumberSchemaTransformer>();
});
With this in place, when we next run npm run generate-client from the root of our project, we find that our generated client now has the correct types for temperatureC and temperatureF:
WeatherForecast: {
/** Format: date */
date: string;
/** Format: int32 */
temperatureC: number;
summary: null | string;
/** Format: int32 */
temperatureF?: number;
};
I've inquired whether the default behaviour makes the most sense here.
Now we want to make use of our generated client in our React app. First we're going to install openapi-fetch to help with making requests:
npm i openapi-fetch
(A quick note, openapi-fetch is not strictly necessary here, but it makes things easier. It provides a fetch-based HTTP client which works well with openapi-ts generated clients. It's worth saying that there are plans to deprecate openapi-fetch which you can read about here. As of right now though, it's still a useful library to use alongside openapi-ts.)
Now let's start our client and server with npm run start. We'll then replace the contents of App.tsx with:
import { useEffect, useState } from 'react';
import './App.css';
import createClient from 'openapi-fetch';
import type { paths, WeatherForecast } from './GeneratedClient'; // generated by openapi-typescript
const client = createClient<paths>();
function App() {
const [weather, setWeather] = useState<WeatherForecast[] | null>();
useEffect(() => {
async function loadWeather() {
const { data, error } = await client.GET('/weatherforecast');
if (data) {
setWeather(data);
} else if (error) {
console.error('Failed to load weather:', error);
}
}
loadWeather();
}, [setWeather]);
return (
<div className="App">
<header className="App-header">
{weather ? (
<table>
<thead>
<tr>
<th>Date</th>
<th>Summary</th>
<th>Centigrade</th>
<th>Fahrenheit</th>
</tr>
</thead>
<tbody>
{weather.map(({ date, summary, temperatureC, temperatureF }) => (
<tr key={date}>
<td>{new Date(date).toLocaleDateString()}</td>
<td>{summary}</td>
<td>{temperatureC}</td>
<td>{temperatureF}</td>
</tr>
))}
</tbody>
</table>
) : (
<p>Loading weather...</p>
)}
</header>
</div>
);
}
export default App;
Let's break down what's happening here:
- We import the paths and WeatherForecast types from GeneratedClient.ts.
- We create an openapi-fetch client using those types.
- In a useEffect hook, we call the /weatherforecast endpoint using the generated client.
From a user's perspective, when we run the app we see: (I've reused the GIF from my previous post here as the experience is the same.)

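If we wanted to reuse that call outside the component, we could wrap it in a small typed helper. This is just a sketch (the getWeatherForecast function is mine, not part of the demo app):
import createClient from 'openapi-fetch';
import type { paths, WeatherForecast } from './GeneratedClient';

const client = createClient<paths>();

// A hypothetical helper: the data returned by client.GET is already typed
// from the OpenAPI spec, so callers get WeatherForecast[] for free.
export async function getWeatherForecast(): Promise<WeatherForecast[]> {
  const { data, error } = await client.GET('/weatherforecast');
  if (error || !data) {
    throw new Error('Failed to load weather forecast');
  }
  return data;
}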
In this post we've seen how to create a .NET Web API which exposes an OpenAPI endpoint using Microsoft.AspNetCore.OpenApi. We've then seen how to generate a TypeScript client from that OpenAPI spec using openapi-ts. Finally, we've seen how to consume that generated client in a React + TypeScript application.
What's significant here is that we have static typing all the way from back end to front end. The C# models we defined in our .NET API are represented in the OpenAPI spec, and those same models are represented in TypeScript types in our front end application. This means that if we change a model on the back end, we can regenerate the TypeScript client and get type safety on the front end too. I'm using C#, but you could be using something else entirely on the back end, as long as it can produce an OpenAPI spec.
There was a little adjustment needed to get the number types working correctly, but overall this was a pretty straightforward process. If you're building full stack applications with TypeScript on the front end and .NET on the back end, I recommend giving this approach a try!
I write this post not as one of the most significant contributors to npmx.dev. I barely rank. At the point of writing I've submitted two PRs, one of which was merged, and one was not (for reasons I entirely agree with).
I'm writing as I'm excited by npmx; I really want it to succeed. So I thought I'd share my own mini story of npmx. If you walk away from this post with one thought I hope it's this: "npmx is a welcoming community, doing good work and very open to contributions, however minor".

I saw Daniel Roe's post on Bluesky in January and I thought "huh, that's interesting!":

I'd long felt that the npm website might generously be described as "adequate". It works, sure. But it does not spark joy. The last time I could remember a feature being added to the npm website was the addition of a "DT" badge to packages which had TypeScript type definitions available via Definitely Typed.
That was added a long time ago, and possibly by Orta Therox. (During his time with the TypeScript compiler team if my memory serves me right.) The point is, npmjs.com is in no way under active development.
Daniel's post seemed really punk rock. "I'll do it myself!"
Little pun there 😅. Given Daniel is the lead maintainer of Nuxt, it entirely made sense that he would use it to build the npm registry browser.
I was interested, but I wasn't sure whether I'd make any contributions myself, given my web background is mostly React. Also, life was and is quite full in other ways.
But before I knew what had happened, there was already this npmx.dev website in existence. It was new, shiny and impressive. Many people were actively contributing to the codebase day by day:

I starred the repo on GitHub and allowed the flood of notifications to flow into my inbox. I rarely do this; generally there's too much noise to have notifications on. But I was interested in seeing if I might pick something up through osmosis. Similarly, I joined the (very active) Discord.
I'd expected my contributions to npmx to be limited to a bit of testing. So I thought I'd do some testing. I looked up a project I work on called ts-loader and was surprised to discover it was missing from npmx.

I found myself raising an issue, puzzled at the absence:

It turned out that I was running into npm rate limiting API requests. So when I was typing in "ts-loader", behind the scenes API requests were firing at npmjs.com, and at some point the server decided to say "you've had enough!" and started returning 429 Too Many Requests responses.
But the npmx.dev website didn't reflect that. It rather suggested that the package didn't exist. This niggled at me. And one fine Saturday morning I decided to see if I could have a go at working on this.
The repo for npmx.dev has a very fine CONTRIBUTING.md. By following that I was quickly able to get a working version of the website on my machine.
I had two ideas of potential fixes. First idea: debounce the input box people type into.
As I've mentioned, I don't know Vue / Nuxt. I know TypeScript, but not the frameworks. So I fired up Claude Code, told it what I wanted to do, and it wrote the code for idea one. It didn't work. Well, the implementation worked; we had debounced successfully. But that was not sufficient to stop 429s from showing up.
So I reverted that. I blooming love Git.
Idea number two was more straightforward: when a 429 happens make the UI simply say "you've been rate limited - give it a moment".
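To give a flavour of the idea (this is a plain TypeScript sketch, not the actual Vue / Nuxt code from the PR), the change amounts to treating a 429 as its own state rather than as "no results":
// Sketch only - not the npmx.dev implementation. The endpoint shown is the
// public npm registry search API; the notice text mirrors what the UI showed.
async function searchPackages(query: string): Promise<{ results: unknown[]; notice?: string }> {
  const response = await fetch(
    `https://registry.npmjs.org/-/v1/search?text=${encodeURIComponent(query)}`
  );
  if (response.status === 429) {
    return { results: [], notice: "You've been rate limited - give it a moment" };
  }
  const body = await response.json();
  return { results: body.objects ?? [] };
}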
For the second time I gave Claude Code his marching orders. (Sidebar: is Claude a "he"? Probably. Claude sounds like "a gent" 😅. Sorry.)
This time the approach worked. When I typed into the input box and 429s happened, I was presented with something like this:

Beautiful right?
When I looked at the code produced, it seemed plausible. It seemed to reflect the idioms of the codebase as best I could tell. And crucially, it worked.
The CONTRIBUTING.md specifically calls out using AI. Let me quote the guidance in full as I think it is excellent:
Using AI
You're welcome to use AI tools to help you contribute. But there are two important ground rules:
1. Never let an LLM speak for you
When you write a comment, issue, or PR description, use your own words. Grammar and spelling don't matter – real connection does. AI-generated summaries tend to be long-winded, dense, and often inaccurate. Simplicity is an art. The goal is not to sound impressive, but to communicate clearly.
2. Never let an LLM think for you
Feel free to use AI to write code, tests, or point you in the right direction. But always understand what it's written before contributing it. Take personal responsibility for your contributions. Don't say "ChatGPT says..." – tell us what you think.
For more context, see Using AI in open source.
I made sure that my usage of AI for this change was above board. I looked at what code I'd ended up with and learned a bit about how Vue and Nuxt work. It was PR time: https://github.com/npmx-dev/npmx.dev/pull/1200
Some PRs have a lot of back and forth. This one just landed. Which was nice!
Ironically, by the time you read this, the original issue I raised and the fix I provided are, I think, largely no longer relevant. The npmx.dev website no longer relies solely on npmjs.com for package information, so the likelihood of running into npm rate limits is much lower. But that's the nature of development. You fix one thing, and then the world changes and that thing is no longer an issue. But that's fine.
I'm really happy npmx.dev exists and I've a good feeling about it. If you're thinking about something in npmx.dev that you might be able to improve, you should have a crack. It's a wonderful community; get involved!
Publishing a private npm package with Azure DevOps is fairly straightforward, but surprisingly the documentation is a little sparse.
If you don't already have a feed to publish your npm package to, you can create one in Azure DevOps by following these instructions.
If you're trying to find out what feeds are available in Azure Artifacts, go to the Artifacts section of the Azure DevOps UI and you'll see a list of feeds. The URL for the feed will be in the format https://dev.azure.com/[ORGANIZATION]/_artifacts/feed.
There you'll see a dropdown with the feeds you have access to:

You'll see from the screenshot that I have access to a feed called npmrc-script-organization. Let's use that feed to publish a private npm package.
.npmrc file
So that you can publish to a private feed, you need to set up an .npmrc file in your project. This file will contain the URL of the feed you want to publish to, and your credentials. To set up the .npmrc file, you can click on the "Connect to Feed" button in the Azure DevOps UI:

Then select npm and you'll see the instructions for setting up the .npmrc file:

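For reference, the resulting .npmrc typically looks something like the following, where the organization and feed names are placeholders rather than real values:
registry=https://pkgs.dev.azure.com/[ORGANIZATION]/_packaging/[FEED]/npm/registry/
always-auth=true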
Now we're ready to publish our npm package with Azure DevOps. Here's an example of an Azure Pipelines YAML file that publishes a private npm package:
trigger:
  batch: true
pool:
  vmImage: ubuntu-latest
variables:
  isMainBranch: ${{ eq(variables['Build.SourceBranch'], 'refs/heads/main') }}
stages:
  - stage: Build_Package_Publish
    displayName: Build package and publish
    jobs:
      - job:
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: 20
            displayName: Install Node.js
          - task: npmAuthenticate@0
            inputs:
              workingFile: $(System.DefaultWorkingDirectory)/.npmrc
          - bash: npm install
            displayName: 'npm install'
          - bash: npm run build
            displayName: 'npm build'
          - task: Npm@1
            displayName: Publish Package
            inputs:
              command: 'publish'
              publishRegistry: 'useFeed'
              publishFeed: 'npmrc-script-organization'
Let's break down the steps in this YAML file:
- We install Node.js and authenticate against the feed using the npmAuthenticate task and our .npmrc file.
- We run npm install and npm run build. These are standard steps for building a Node.js project; yours might vary; what's important is that you end up with your built package ready to publish.
- We use the Npm@1 task to publish the package. We specify the publishRegistry as useFeed and the publishFeed as npmrc-script-organization. This is the feed we're publishing to.
In this post, we've seen how to publish a private npm package with Azure DevOps. We've set up the .npmrc file, and we've used an Azure Pipelines YAML file to publish the package. This is a common scenario for teams that want to share code across projects or organizations. I hope this post has been helpful to you!