<![CDATA[The blog of Radoslav Gatev]]>https://www.gatevnotes.com/https://www.gatevnotes.com/favicon.pngThe blog of Radoslav Gatevhttps://www.gatevnotes.com/Ghost 4.32Sat, 21 Mar 2026 15:44:00 GMT60<![CDATA[Ghost on Azure App Service: No Longer Maintained]]>My open-source project, which let anyone quickly provision a free Ghost blog on Azure App Service, will no longer be maintained. As of this post, I've archived the repository.

I hoped to revive it someday for the latest Ghost versions, but that's off the table now.

]]>
https://www.gatevnotes.com/ghost-on-azure-app-service-sunsets/69a42908c307230aa016e840Mon, 02 Mar 2026 16:25:25 GMT

My open-source project, which let anyone quickly provision a free Ghost blog on Azure App Service, will no longer be maintained. As of this post, I've archived the repository.

I hoped to revive it someday for the latest Ghost versions, but that's off the table now. This post is my project post-mortem.

Project State At Archival

The last supported version of Ghost was 4.36.0, released back in 2022. It lags two major releases behind Ghost 6.0 and flashes warnings such as "Update Ghost now: your Ghost site is vulnerable to an attack that lets unauthenticated attackers read arbitrary data from the database."

GitHub Dependabot flags numerous critical dependency vulnerabilities. This version should not be used for new deployments.

How It Started and Why People Loved It

I kicked off Ghost-Azure in 2017 to make blogging dead simple and free on Azure. I hosted it on Azure App Service's Free tier (Windows plan), with an optional Azure CDN for caching and custom domains. People loved it. By the time of archival, it had 509 forks and 134 GitHub stars. Most forks were created to apply custom tweaks to individual blogs, so I estimate thousands deployed it successfully.

Azure Changes More Than We Think

Azure in 2026 looks nothing like 2017. Services launch, retire, merge; some previews vanished without graduating to general availability. App Service Free tier still works on Windows, but Linux plans now make more sense for Node.js apps.

I hosted my own blog on the Free tier (Windows) behind a cheap Azure CDN provided by Verizon. There have been some interesting changes among the companies behind that CDN technology. It originated at EdgeCast Networks, Inc., which Verizon acquired in 2013 and rebranded as Verizon Digital Media Services. Apollo Global Management later bought Verizon Media (including the CDN) in 2021 and revived the Edgecast name, before Limelight Networks merged with it in 2022 to form Edgio. Following Edgio's Chapter 11 bankruptcy filing in September 2024, the company officially ceased CDN operations on January 15, 2025. Akamai acquired select assets, such as customer contracts and the security business, for $125M, while the remaining assets went to buyers including Parler Cloud Technologies.

Azure CDN from Edgio (formerly Verizon) and Azure CDN from Akamai are retired; Azure CDN from Microsoft is being phased out (no new profiles can be created). Everything funnels to Azure Front Door Standard now, at about $35/month base, which is overkill for personal blogs.

Updates Became a Nightmare

I poured many hours into automating the integration of new Ghost releases into the repo. Hooks fired on Ghost's official releases; an automation picked up the most recent version, deployed it, and ran some integration tests. A custom Kudu deployment script took care of rebuilding Node packages and handled database migrations. But Node package bloat hit Kudu's deployment timeout limits, making deploys unreliable. I stopped all the automation and decided that App Service on Windows was no longer right for modern Ghost.

The Burden Of Open-source

I love open-source. This project was built in the open, and at some point it got popular, with issues and questions rolling in. Helping others felt great at first; you feel your work matters to people. But it turned into a full-time job that I was doing for free. No real community formed, and there were no meaningful outside contributions. Most users fixed their own stuff without giving back. And I'm not claiming I did a great job growing the community.

The lesson I took: without a healthy community, even the best open-source project dies when the priorities of the original author shift. I remind myself of that whenever I have to select an open-source component for a client project.

What's Next

As I write this post, notifications scream that this version is insecure. I know that. I tinkered with Ghost 5.0 some months back, paused when 6.0 dropped, and picked it up again recently. I'll try to share a fresh solution soon.

Thanks to everyone who used or forked my project over the years.

]]>
<![CDATA[Tips & Tricks for Azure Repos: Configure Git Credential Manager for Entra ID Auth]]>https://www.gatevnotes.com/tips-and-tricks-azure-repos-configure-git-credential-manager-entra-id-auth/69515b2bdb550b0930879030Sun, 28 Dec 2025 18:02:55 GMT

Git Credential Manager (GCM) is a cross-platform Git credential helper that handles authentication to common Git hosting services and stores credentials or tokens so you do not have to enter them for every Git operation.

You probably use it to authenticate against Azure DevOps without thinking about it. It is the tool that shows the login prompt whenever you run Git commands against a repository.

The problem

Most of the time, GCM “just works” and makes it easy to move between Azure DevOps organizations and repositories.

But in some organizations, cloning (or any Git operation) fails and Git falls back to prompting for a username and password, which often won’t work in modern Azure DevOps setups.

git clone https://<organization>.visualstudio.com/<project-name>/_git/<repository-name>
Cloning into '<repository-name>'...
fatal: Failed to create PAT: DisablePatCreationPolicyViolation
Username for 'https://<organization>.visualstudio.com':

The root cause

The key detail here is the error message:
fatal: Failed to create PAT: DisablePatCreationPolicyViolation.

By default, GCM authenticates you to Azure DevOps and then obtains a personal access token (PAT), which it stores securely. Seeing an Entra sign-in prompt does not mean GCM is using Entra tokens as the final credential; it can still be exchanging that sign-in for a PAT behind the scenes.

It took some time to understand this behavior when I first hit the problem.

PATs are simple and convenient, but they are being phased out as the primary way to access Azure DevOps APIs. They are long-lived secrets, and many organizations now restrict how they are created and used. In Azure DevOps, administrators can control PAT usage at the organization level with dedicated policies.

The fix

For Azure DevOps, the Git Credential Manager credential type defaults to PAT. To use Entra ID-based OAuth tokens instead, change the Azure Repos credential type to OAuth:

git config --global credential.azreposCredentialType oauth

After this change, GCM will request Entra ID tokens instead of PATs by default when you connect to Azure Repos.
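A quick way to verify the setting took effect is to read it back with git config. And, if I read the GCM documentation correctly, the setting can also be scoped to a single host through git config's URL matching instead of being applied globally (the dev.azure.com host below is a stand-in for your own organization's URL):

```shell
# Switch GCM from PAT to OAuth (Entra ID) tokens for Azure Repos
git config --global credential.azreposCredentialType oauth

# Read the value back to confirm GCM will pick it up
git config --global --get credential.azreposCredentialType
# prints: oauth

# Optionally scope the setting to one host instead of all remotes
git config --global credential.https://dev.azure.com.azreposCredentialType oauth
```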

]]>
<![CDATA[One scheduled Power Automate flow, many dynamically configurable recurrences]]>Managing Power Automate flows that run on a schedule can become complex, especially when users have different scheduling preferences. But what if there was a way to simplify this process?

There is at least one scheduled Power Automate flow in almost every Power Platform solution that's responsible for

]]>
https://www.gatevnotes.com/one-scheduled-power-automate-flow-many-dynamically-configurable-recurrences/67ea937095eedf1abccf4c17Mon, 31 Mar 2025 14:13:00 GMT

Managing Power Automate flows that run on a schedule can become complex, especially when users have different scheduling preferences. But what if there was a way to simplify this process?

There is at least one scheduled Power Automate flow in almost every Power Platform solution that's responsible for sending emails or notifications on a scheduled basis. However, things can get complicated when different user groups have conflicting requirements. Some users might want to receive a notification every morning, while others prefer getting it on a weekly basis. To add to the complexity, your users may be spread across various time zones—making the concept of “morning” a relative one.

You might be tempted to create multiple flows with different schedules to meet the various time-based requirements of your users. However, you'll soon need to figure out the best strategy for reusing the same logic to execute certain actions, which likely means using child flows. As unique requirements continue to pile up, it can quickly become unmanageable. Eventually, you'll end up with dozens of scheduled flows that essentially perform the same task—executing a set of actions at specific times.

An alternative solution I propose is to use just one scheduled flow, triggered anywhere from multiple times an hour to once a day, depending on how frequently the task needs to be performed. That is a single trigger in one flow. The examples in this blog post will focus on executions up to four times per hour. To achieve this, you can split the hour into four 15-minute periods.

Configure the dynamic scheduling

Now that we have the basic understanding, let's dive into how you can configure dynamic scheduling to accommodate diverse user needs.

You’ll need to create a Dataverse table to store information about individual user groups and their scheduling preferences. It might look like this:

[Image: an example of the Schedules table in Dataverse]

The most important column in the table is Schedule, which contains a cron expression. A cron expression is a string pattern used to define specific times for actions to occur, allowing you to schedule tasks flexibly and precisely based on your desired frequency. It consists of five key components: minute, hour, day of the month, month, and day of the week. Each component works together to define the precise schedule for executing a task.

The other columns shown are just examples and should be customized based on your use case. Each record in the table is linked to a specific Business Unit, allowing you to configure one or more schedules per unit. The Receiver column points to a Team that will receive the notifications, but this is just one possible implementation.

To define and understand cron expressions, you can use https://crontab.guru. If you want to allow users to make changes to the cron expression table themselves, you'll need to implement proper validation in the corresponding form or application.

The scheduled flow

The flow is scheduled to trigger once every 15 minutes. The key here is that the flow will execute 96 times a day; however, whether a particular run performs an action (such as sending a notification, an email, or executing another task) depends on the cron expression stored in the Schedules table and the current time provided by the trigger.

While this setup works seamlessly most of the time, there are occasional timing quirks to be aware of.

The exact time of the trigger isn't always deterministic, based on my research. You can control its behavior by setting the 'Start time' value so that both the hour and minute parts are zero. These components are used to calculate future run times. By ensuring both are zero, you guarantee that each trigger is scheduled precisely at, say, 19:30:00, rather than at an arbitrary time like 19:30:37, which could result from the first recurrence if a start time is not set.

I've also seen complaints online about triggers being delayed, which may happen when the platform is under heavy load. This is a crucial detail when evaluating a cron expression against the current time. That's why I recommend allowing some flexibility in your cron expression by specifying a range for the minutes component, such as 0-14 * * * *, instead of just 0 * * * *. The trigger might be slightly late, but it shouldn't fire early. For example, if a run is scheduled for 19:30 but triggers at 19:31:02, the cron expression will still match the current time.

[Image: the scheduled flow]

If you have scheduled the flow four times an hour—at 0, 15, 30, and 45 minutes—the ranges for the minutes component can be 0-14, 15-29, 30-44, and 45-59, respectively. Here are a few examples:

  • Once an hour: 0-14 * * * *
  • Twice an hour: 0-14,30-44 * * * *
  • Every day at 8:00 AM UTC: 0-14 8 * * *
  • Every 1st day of the month at 8:30 AM UTC: 30-44 8 1 * *
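To make the matching logic concrete, here is a minimal Python sketch; it is an illustration only (the actual flow in this post delegates evaluation to an Azure Function), and it supports just the subset of cron syntax used above: `*`, single values, ranges, and comma-separated lists:

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Check one cron field ('*', '5', '0-14', or '0-14,30-44') against a value."""
    if field == "*":
        return True
    for part in field.split(","):
        if "-" in part:
            low, high = (int(x) for x in part.split("-"))
            if low <= value <= high:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expression: str, moment: datetime) -> bool:
    """Evaluate a five-field cron expression (minute hour dom month dow)
    against a specific point in time."""
    minute, hour, dom, month, dow = expression.split()
    return (_field_matches(minute, moment.minute)
            and _field_matches(hour, moment.hour)
            and _field_matches(dom, moment.day)
            and _field_matches(month, moment.month)
            # cron weekdays count 0 = Sunday; Python counts Monday = 0
            and _field_matches(dow, (moment.weekday() + 1) % 7))

# A slightly late trigger still matches thanks to the minute range:
print(cron_matches("30-44 8 1 * *", datetime(2025, 4, 1, 8, 31, 2)))   # True
# ...but a :30 slot trigger does not match the top-of-the-hour range:
print(cron_matches("0-14 * * * *", datetime(2025, 3, 31, 19, 31, 2)))  # False
```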

Evaluating the cron expression

Now, you might wonder—how do we actually evaluate a cron expression in Power Automate? Let’s break that down. Unfortunately, there isn’t a built-in action for working with cron expressions. I recall there were some third-party actions available in the past, but I can't seem to find them anymore. In general, you should be cautious when using third-party extensions, primarily due to security and support concerns.

The natural solution to this is Azure Functions. This is why the HTTP connector above calls some-function.azurewebsites.net. Here’s the implementation:

// Evaluates whether the supplied cron expression matches the supplied trigger time.
// CrontabSchedule below comes from the NCrontab NuGet package.
[Function("EvaluateCron")]
public IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
{
    var cronExpression = req.Query["cron"];
    var currentTimeString = req.Query["currentTime"];

    if (string.IsNullOrWhiteSpace(cronExpression) || string.IsNullOrWhiteSpace(currentTimeString))
    {
        return new BadRequestObjectResult("Please provide both cron expression and current time.");
    }

    if (!DateTime.TryParse(currentTimeString, null, DateTimeStyles.AdjustToUniversal, out DateTime currentTime))
    {
        return new BadRequestObjectResult("Invalid current time format.");
    }

    // Adjust the current time to the start of the minute
    var currentTimeAtZeroSeconds = currentTime.AddSeconds(-currentTime.Second);
    var previousMinute = currentTimeAtZeroSeconds.AddSeconds(-1);

    var schedule = CrontabSchedule.Parse(cronExpression);
    var nextOccurrence = schedule.GetNextOccurrence(previousMinute);

    bool isMatch = nextOccurrence == currentTimeAtZeroSeconds;

    return new OkObjectResult(isMatch);
}

The function takes two query parameters: cron and currentTime, where currentTime represents the scheduled trigger time. It adjusts currentTime by removing any seconds and stepping back by one second. This ensures that the next occurrence of the cron expression, relative to the provided time, aligns exactly with the same minute as currentTime.

Visit my Code-Samples repo to find the full source code for the function. If you need assistance with deploying it to Azure, refer to the official documentation.

By using a single scheduled Power Automate flow with configurable dynamic recurrences, you can simplify scheduling to meet diverse user needs while keeping your environment clean. However, there are a few things to keep in mind, such as potential delays and evaluating cron expressions.

I hope this guide helps you streamline your Power Automate flows. Feel free to share your thoughts or questions in the comments!

]]>
<![CDATA[A deep-dive into User Delegation Shared Access Signature (SAS) for Azure Storage]]>In a previous blog post, I covered what Shared Access Signatures are and how to choose between the various types. Among the three types, User Delegation SAS is the preferred option when it comes to accessing either Azure Blob Storage or Azure Data Lake Storage Gen2. It's just

]]>
https://www.gatevnotes.com/a-deep-dive-into-user-delegation-shared-access-signature/677c24f9a6040929d49057e4Tue, 25 Mar 2025 06:00:00 GMT

In a previous blog post, I covered what Shared Access Signatures are and how to choose between the various types. Among the three types, User Delegation SAS is the preferred option when it comes to accessing either Azure Blob Storage or Azure Data Lake Storage Gen2. It's just not available for the other storage services, namely Queue, Table, and File services at the time of writing this blog post.

What is so special about User Delegation SAS?

User Delegation SAS is a type of Shared Access Signature that differs in how it is signed. Unlike its companions, the Account SAS and the Service SAS, which are signed with the account key, the User Delegation SAS is signed with a delegation key. This delegation key is issued to a security principal in Entra ID - user, service principal, or managed identity. So, it's not meant to be used by users only, although the name can be a bit misleading.

You can issue a user delegation SAS at a container level, at a blob level, or in the case of Azure Data Lake Storage Gen2, at the directory level. However, the permissions granted by the other two types of SAS tokens, service and account, differ significantly, as they provide uniform access across all resources. So let's cover how permissions work here.

Just like the other token types, the user delegation SAS token contains a list of fields that describe what the token is capable of doing. As you might already know, the permissions are kept in the signedPermissions (sp) field. A few example values for this field are r (read), w (write), rw (read and write), and rl (read and list). But those are not the effective permissions of a user delegation SAS. Instead, the effective permissions are the intersection of the following, checked at runtime:

  • The permissions that are explicitly stated in the signedPermissions field of the SAS token.
  • The Azure RBAC permissions that are granted to the security principal. For example, Storage Blob Data Reader, Storage Blob Data Contributor, etc.
  • The POSIX ACLs, if it is an Azure Data Lake Storage Gen2 and there isn't a matching RBAC permission granted to the security principal.

Let's work through one example of a user delegation SAS token: ?sv=2023-11-03&st=2025-01-12T15%3A03%3A31Z&se=2025-01-13T15%3A03%3A31Z&skoid=1b44b83a-7a61-4f10-85b3-6690926afb63&sktid=1712d5af-25d4-4ca9-9582-0b6efdb0440d&skt=2025-01-12T15%3A03%3A31Z&ske=2025-01-13T15%3A03%3A31Z&sks=b&skv=2023-11-03&sr=b&sp=r&sig=w5%2ckz0iViW3vpo67bVtMHOtWL2Gr3MvqA1j29gX62tw%3D

The signed permission specified in the SAS token above is sp=r (read). The Entra ID security principal is stated in the skoid field. If we pretend this token is still valid, we could append it to the URL of the Get Blob operation, for example. While the SAS itself looks fine, the Azure Storage service still has to check whether the security principal is allowed to read a given blob. It must have been granted an RBAC role such as Storage Blob Data Reader, or read permissions configured via the POSIX access control lists in the case of ADLS Gen2. If not, the operation will be denied.
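As a side note, nothing in a SAS token is secret apart from the signature, so you can always decode one to inspect its fields. Here is a small Python sketch that parses the example token above and pulls out the permission and the delegated principal:

```python
from urllib.parse import parse_qs

# The example user delegation SAS token from above (leading '?' omitted)
sas_token = (
    "sv=2023-11-03&st=2025-01-12T15%3A03%3A31Z&se=2025-01-13T15%3A03%3A31Z"
    "&skoid=1b44b83a-7a61-4f10-85b3-6690926afb63"
    "&sktid=1712d5af-25d4-4ca9-9582-0b6efdb0440d"
    "&skt=2025-01-12T15%3A03%3A31Z&ske=2025-01-13T15%3A03%3A31Z"
    "&sks=b&skv=2023-11-03&sr=b&sp=r"
    "&sig=w5%2ckz0iViW3vpo67bVtMHOtWL2Gr3MvqA1j29gX62tw%3D"
)

# parse_qs URL-decodes the values and groups them by field name
fields = {k: v[0] for k, v in parse_qs(sas_token).items()}

print(fields["sp"])     # r -> read-only permission
print(fields["skoid"])  # 1b44b83a-7a61-4f10-85b3-6690926afb63 -> delegated principal
print(fields["se"])     # 2025-01-13T15:03:31Z -> SAS expiry
```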

Besides the way permissions work, a user delegation SAS token requires some special fields that are not present in the other types of SAS tokens. Those are signedTenantId (sktid) and a bunch of fields that describe various properties of the user delegation key, such as signedKeyStartTime (skt), signedKeyExpiryTime (ske), signedKeyService (sks), and others. By possessing a user delegation key of a security principal, one can act as that security principal without knowing their credentials. It is effectively a possession factor in authentication (something the user has), as a user delegation key is uniquely issued to a security principal for a specified period. Consequently, all fields describing a user delegation key are essential. You can check all fields in the documentation.

There are two more interesting fields: signedAuthorizedObjectId (saoid) and signedUnauthorizedObjectId (suoid). They allow you to specify the object ID of another security principal that is authorized by the owner of the user delegation key to perform some actions using the SAS token. In a sense, that becomes a delegated delegation. To keep this post at a reasonable length, I won't cover them here, but you can learn more about them in the documentation.

Why is user delegation SAS a good idea?

You don't need the storage account key to sign the SAS. This is a major benefit as using a storage account key is generally not recommended. Handling the storage account key requires a lot of care and therefore its usage is typically disallowed in most organizations.

The delegated access is fine-grained and revocable. As discussed above, the effective permissions are the intersection of the permissions that are set on the SAS token itself and those that are assigned via either an RBAC role or POSIX ACLs. One option for revoking that SAS would be just unassigning the RBAC role or POSIX ACL permission. But do keep in mind that permissions might be cached at the storage service level, so it might take some time before the change comes into effect. Based on tests I have done, the permission changes are processed immediately but that likely depends on the depth and breadth of your storage account and also at what level you assign these permissions.

There are more Storage audit logs available. For example, each user delegation key that gets generated results in a distinct GetUserDelegationKey operation in the diagnostic logs of your storage account. You can see when and where this happened, along with what security principal requested it. Later on, when you supply a user delegation SAS to some operation in Azure Storage, the security principal is tracked in RequesterObjectId almost like it was a normal call that supplied an Entra ID access token of the security principal. But upon a further look, you see that AuthenticationType is DelegationSas and the value of AuthenticationHash is something like user-delegation(6908E4F39F0BD8FD47F8027024461A266D2800B552238122F432C4E75DE2CBE3),SasSignature(0582D8D7419F5B1C62C180AFF6885C53AA4BAB41602E427D3F1611B238F6C712).

You can optionally specify a correlation ID to be included in the user delegation SAS by specifying the signedCorrelationId field and supplying a Guid as a value. This could be handy if you want to get a correlation ID from the process that issues the SAS token and make sure that this correlation ID becomes an indispensable part of the SAS. As this correlation ID is included in the storage audit logs, later on you will be able to find out in what circumstances the SAS was issued by checking the logs of the process that issued the SAS token.

Enough theory, let's see how obtaining user delegation works in action!

Generate a user delegation SAS

As a prerequisite, make sure that the security principal you are going to use to obtain a user delegation key (see Step 2 below) has an RBAC role that contains the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action at the level of the storage account or above. The built-in Storage Blob Delegator has just this action, so choose this role should you want to follow the principle of least privilege. Alternatively, you could use Contributor or Storage Blob Data Reader/Contributor/Owner roles because they have the abovementioned action too.

In general, the steps to issue a user delegation SAS are as follows:

  1. Obtain an Entra ID access token via an applicable OAuth 2.0 flow.
  2. Get a user delegation key by invoking the Get User Delegation Key operation.
  3. Assemble all the fields of the SAS token and sign using the user delegation key.

Depending on the method you use, issuing the SAS token can range from very easy to very complex. There are several ways to issue a user delegation SAS token, most of which are also applicable to the other types of SAS tokens:

  • Azure Storage Explorer - this might be the easiest of all. Just right-click on a blob/directory/container, choose Get Shared Access Signature from the context menu, then choose "User delegation key" in the "Signing Key" dropdown. Done!
  • Storage Browser available in the Azure Portal - it provides a similar but more restricted experience than Azure Storage Explorer right from the Azure Portal. Navigate to your storage account resource, then open the "Storage browser" blade from the menu. Click on the three dots next to a file or directory to see "Generate SAS". Easy!
  • Use a client library - if you want to issue SAS tokens programmatically. I will show you an example with Azure.Storage.Blobs below.
  • Implement it yourself - you must have strong reasons and know what you are doing. For example, I had to implement such a thing once for a client inside a Power Platform Custom Connector. It was the only place where we could shamelessly stick a piece of .NET code to be executed. However, the major restriction was that we could not use any .NET libraries. Therefore I had to implement it from scratch.

Azure Storage Explorer and the Storage Browser from the Azure Portal are really handy if you want to generate a SAS occasionally. But if you have to implement it as part of some solution, you'll have to write some code. It's always good if you know how things really work. By comparing both of the examples that follow, you come to appreciate how SDKs are practically shielding us, the users, from unneeded complexity.

Generate a user delegation SAS using Azure.Storage.Blobs

I am going to use a service principal for both examples, so the applicable OAuth 2.0 flow is client credentials; therefore I chose ClientSecretCredential. But depending on your use case, you can use any of the implementations of the TokenCredential abstract class.

  1. Create an instance of BlobServiceClient:
    var azCredential = new ClientSecretCredential(tenantId, clientId, clientSecret);
    var blobServiceClient = new BlobServiceClient(
        new Uri(blobEndpoint),
        azCredential);
    
  2. Get a user delegation key for the service principal:
    var utcNow = DateTimeOffset.UtcNow;
    var userDelegationKey = await blobServiceClient.GetUserDelegationKeyAsync(
        utcNow,
        utcNow.AddHours(hours), 
        cancellationToken);
    
  3. Create an instance of BlobClient that points to a specific blob:
     var blobClient = blobServiceClient
         .GetBlobContainerClient(blobContainerName)
         .GetBlobClient(blobName);
    
  4. Construct a user delegation SAS URL.
    var utcNow = DateTimeOffset.UtcNow;
    var sasPermissions = BlobSasPermissions.Read | BlobSasPermissions.Write;
    var sasBuilder = new BlobSasBuilder(sasPermissions, utcNow.AddHours(validityInHours))
    {
        BlobContainerName = blobClient.BlobContainerName,
        BlobName = blobClient.Name,
        Resource = "b",
        StartsOn = utcNow,
    };
    
    // Add the SAS token to the blob URI
    var uriBuilder = new BlobUriBuilder(blobClient.Uri)
    {
        // Specify the user delegation key
        Sas = sasBuilder.ToSasQueryParameters(
            userDelegationKey.OriginalUserDelegationKey,
            blobServiceClient.AccountName)
    };
    
    var sasUri = uriBuilder.ToUri();
    

You can see the full example in GitHub. I had to abstract things away via an interface so that both examples become easily swappable strategies.

Generate a user delegation SAS: the DIY way

This time, let's approach it differently by implementing it from scratch instead of relying on a client library. If that's too much code in a blog post for your taste, you can follow along by opening the full example on GitHub.

  1. Obtain an access token for the service principal:

    var tokenEndpoint = $"https://login.microsoftonline.com/{_tenantId}/oauth2/v2.0/token";
    using var aadTokenRequest = new HttpRequestMessage(HttpMethod.Post, tokenEndpoint);
    var form = new Dictionary<string, string>
    {
        {"grant_type", "client_credentials"},
        {"client_id", _clientId},
        {"client_secret", _clientSecret},
        {"scope", "https://storage.azure.com/.default"}
    };
    aadTokenRequest.Content = new FormUrlEncodedContent(form);
    
    var response = await httpClient.SendAsync(aadTokenRequest, cancellationToken);
    using var jsonContent = await response.Content.ReadAsStreamAsync(cancellationToken);
    
    var parsedJson = await JsonDocument.ParseAsync(jsonContent, cancellationToken: cancellationToken);
    var accessToken = parsedJson.RootElement.GetProperty("access_token").GetString();
    
  2. Get a user delegation key for the service principal:
    Here the code starts to look a bit messy, so let me provide some context. I use the access token that was just obtained to invoke Get User Delegation Key. The XML payload specifies the period of validity for the delegation key.

    var delegationUriBuilder = new UriBuilder(_storageServiceUri)
    {
        Scheme = Uri.UriSchemeHttps,
        Port = -1, // default port for scheme
        Query = "restype=service&comp=userdelegationkey&timeout=30"
    };
    using var delegationKeyRequest = new HttpRequestMessage(HttpMethod.Post, delegationUriBuilder.Uri.ToString());
    delegationKeyRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", authToken);
    delegationKeyRequest.Headers.Add("x-ms-version", "2023-01-03");
    
    var startTime = DateTimeOffset.UtcNow;
    var endTime = startTime.AddHours(hours);
    var payload = $"<KeyInfo><Start>{startTime.ToString("s")}Z</Start><Expiry>{endTime.ToString("s")}Z</Expiry></KeyInfo>";
    var payloadContent = new StringContent(payload, Encoding.UTF8, "application/xml");
    delegationKeyRequest.Content = payloadContent;
    
    using var delegationKeyResponse = await httpClient.SendAsync(delegationKeyRequest, cancellationToken);
    var xmlResponse = await delegationKeyResponse.Content.ReadAsStringAsync(cancellationToken);
    
    var userDelegationKey = xmlResponse.FromXml<UserDelegationKey>();
    
  3. Construct a string to sign.
    The string to sign contains the values of the fields from the SAS token, separated by the newline character \n. The order of the fields is predefined and you should strictly follow it. If you wonder what the sequences of repeating \n characters are for, they represent empty placeholders for several fields for which I did not specify any values.

    var storageAccountEndpointUri = new Uri(_storageServiceUri);
    var storageAccountName = storageAccountEndpointUri.Host.Split('.').FirstOrDefault();
    
    var startTime = DateTimeOffset.UtcNow;
    var endTime = startTime.AddHours(validityInHours);
    var start = startTime.ToString("s") + 'Z';
    var end = endTime.ToString("s") + 'Z';
    
    var pathCleansed = $"{blobContainerName}/{blobName}".Replace("//", "/");
    var stringToSign = "r" + "\n" +
        start + "\n" +
        end + "\n" +
        $"/blob/{storageAccountName}/{pathCleansed}".Replace("//", "/") + "\n" +
        userDelegationKey.SignedObjectId + "\n" +
        userDelegationKey.SignedTenantId + "\n" +
        userDelegationKey.SignedStartsOn.ToString("s") + 'Z' + "\n" +
        userDelegationKey.SignedExpiresOn.ToString("s") + 'Z' + "\n" +
        userDelegationKey.SignedService + "\n" +
        userDelegationKey.SignedVersion + "\n" +
        "\n\n\n\n" +
        "https" + "\n" +
        "2023-01-03" + "\n" +
        "b" + "\n" +
        "\n\n\n\n\n\n";
    
  4. Compute an HMAC-SHA256 over the string-to-sign using the user delegation key.
    This might seem a bit complex, so let's break it down. We start with the stringToSign, which includes all the fields of our future SAS token. The algorithm we use is HMAC-SHA256: a Hash-based Message Authentication Code that employs a cryptographic hash function, SHA-256, together with a key to generate a message authentication code (MAC). In our case, the key is the user delegation key that we obtained from Azure Storage.

    The output, or MAC, is essentially a hash that can't be reversed to reveal the original value. When Azure Storage receives a signature, it replicates the same steps you took to compute the signature. If the incoming signature matches the one Azure Storage computed, the SAS token is considered valid and trustworthy.

    For convenience, I implemented a simple extension method that does just this.

    string signature = stringToSign.ComputeHMACSHA256(userDelegationKey.Value);
    string signatureUrlEncoded = HttpUtility.UrlEncode(signature);
    
  5. Assemble the final SAS URL:

    var query = string.Join("&",
        "sp=r",
        $"st={start}",
        $"se={end}",
        $"skoid={userDelegationKey.SignedObjectId}",
        $"sktid={userDelegationKey.SignedTenantId}",
        $"skt={userDelegationKey.SignedStartsOn.ToString("s") + 'Z'}",
        $"ske={userDelegationKey.SignedExpiresOn.ToString("s") + 'Z'}",
        "sks=b",
        $"skv={userDelegationKey.SignedVersion}",
        "spr=https",
        "sv=2023-01-03", // must match the version used in the string-to-sign
        "sr=b",
        $"sig={signatureUrlEncoded}"
    );
    
    var sasBuilder = new UriBuilder(_storageServiceUri)
    {
        Scheme = Uri.UriSchemeHttps,
        Port = -1, // default port for scheme
        Path = pathCleansed,
        Query = query
    };
    
    var sasUrl = sasBuilder.Uri;
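The ComputeHMACSHA256 extension method used in step 4 is not shown above. Here is a minimal sketch of what it could look like; the method name and shape come from the snippet, while the body is my assumption about a standard HMAC-SHA256 implementation:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class SigningExtensions
{
    // Signs the string-to-sign with the user delegation key.
    // The key value returned by Azure Storage is Base64-encoded, so it is
    // decoded first; the resulting MAC is Base64-encoded, as Azure expects.
    public static string ComputeHMACSHA256(this string stringToSign, string base64Key)
    {
        using var hmac = new HMACSHA256(Convert.FromBase64String(base64Key));
        byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
        return Convert.ToBase64String(mac);
    }
}
```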
    

If User Delegation SAS seemed complex before, I hope this blog post has made it a bit less mysterious.

]]>
<![CDATA[On keeping up with Microsoft technology]]>As someone who loves and professionally uses Microsoft products, I have been following Microsoft news and announcements for over 10 years now. And I must tell you, it gets difficult because Microsoft is even bigger company now than it was back then. Microsoft Azure, as we know it, was and

]]>
https://www.gatevnotes.com/on-keeping-up-with-microsoft-technology/67d3c87e1e643a199c554402Tue, 18 Mar 2025 20:00:10 GMT

As someone who loves and professionally uses Microsoft products, I have been following Microsoft news and announcements for over 10 years now. And I must tell you, it gets difficult because Microsoft is an even bigger company now than it was back then. Microsoft Azure, as we know it, was and still remains a moving target.

Now that everything moves fast and so much content is AI-powered, an immense amount of material is thrown at us techies, to the point that it becomes overwhelming to process. Frankly, it's impossible to keep track of everything that Microsoft develops and releases. And let's be clear: this is difficult even for Microsoft employees and for Microsoft MVPs like me. But we are all trying our best.

In this blog post, I will explain what my process is and what sources I like and use the most.

Know your sources

There are a lot of websites where Microsoft publishes news, announcements, or technical articles. Visiting them multiple times a day to check what's new doesn't work for me, so I rely on RSS for that part. I use Feedly, where I add the RSS feeds and it accumulates all the new articles. Once I have some time to read, I open it and can clearly see what is waiting for me. If you are not a fan of dedicated RSS readers, you can also subscribe to RSS feeds in Outlook.

The Microsoft Azure Blog

The major announcements across Microsoft Azure are usually published there.

RSS feed: https://azure.microsoft.com/en-us/blog/feed/

Azure Updates

There are dozens of services across Azure, and every month quite a few updates are published there - new services or features are announced, some become available in private preview while others move to public preview. Some services or features previously in preview reach general availability, while others are retired.

RSS feed: https://www.microsoft.com/releasecommunications/api/v2/azure/rss

Microsoft Tech Community

It's an all-in-one portal for the technical communities around Microsoft products, where you can find blogs, events, and discussions. I have to mention that the Tech Community was revamped a couple of months ago. The UI looks modern now; however, along with the UI change, the site-wide RSS feed that carried all blog posts is gone. This means that if you want to follow the blog posts, you'll have to go through every blog available on the website. Keep in mind that some of the blogs might be old and might not get any new posts.

  1. Open https://techcommunity.microsoft.com/Blogs
  2. Find the Blogs title on the right. Make sure that "All Blogs" is the selected option.
  3. Open a blog in a new tab.
  4. Find the RSS icon on the right. For example, the RSS feed URL of the AI Platform Blog is https://techcommunity.microsoft.com/t5/s/gxcuf89792/rss/board?board.id=AIPlatformBlog.
  5. Add it to your favorite RSS reader. As I mentioned, I use Feedly.
  6. Repeat for every blog of interest.

Sure, it takes some time to set up. But the good news is that you can pick and choose which blogs you want to follow. With time, however, new blogs will emerge and start getting content, while others will be abandoned. You'll have to repeat the process described above every so often to check whether there is a new blog you want to add to your collection of RSS feeds.

Microsoft Developer Blogs

As the name implies, this website caters to developers using Microsoft technologies. It has blogs for a wide range of topics - .NET languages, Python, TypeScript, IDEs, various SDKs, and many more.

RSS feed (for the whole website): https://devblogs.microsoft.com/landing

But you can also use a specific RSS feed for a given topic. Here is the feed for All .NET posts: https://devblogs.microsoft.com/dotnet/feed/

Microsoft Security Blog

Here you'll find research that covers a broad spectrum of threats. Most reports provide detailed descriptions of attack chains, including tactics and techniques mapped to the MITRE ATT&CK framework, exhaustive lists of recommendations, and powerful threat hunting guidance. There are also posts that cover some best practices on safeguarding infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) resources.

RSS feed: https://www.microsoft.com/en-us/security/blog/feed/  

YouTube channels

Sometimes seeing a new thing in a video is way better than reading about it. Here are some YouTube channels, in no particular order, that are good in my opinion:

Follow widely, read selectively

There is a lot of content competing for your attention, my reader. It's not very practical to track absolutely everything closely. That's why I subscribe to a lot of sources that cover a wide range of topics, but I skim through most of it. Some content is simply not relevant to me at the time, so I skip it. Other content is promotional or full of marketing fluff, so I pass. I only scrutinize what seems interesting and is well-written.

How do you keep up with Microsoft?

]]>
<![CDATA[Delegate access to Azure Storage using Shared Access Signature (SAS)]]>https://www.gatevnotes.com/delegate-access-to-azure-storage-using-shared-access-signatures-sas/67547374a6040929d49052deSat, 01 Feb 2025 20:55:16 GMT

Azure Storage supports many methods for authentication and authorization, and choosing the right one can be confusing. For example, you can leverage Entra ID (formerly known as Azure Active Directory) and obtain a token for your user, service principal, or managed identity. Of course, methods that rely on Entra ID are generally preferred.

But sometimes using Entra ID is simply impossible or not practical. In such cases, you should probably consider using a Shared Access Signature. Although tools and SDKs sometimes make it easy to enter an account key and get going quickly, that's generally not a good idea, and you should avoid using your storage account key. Shared Key authorization is another method, not to be confused with Shared Access Signature. It bears some resemblance to SAS in that requests are signed with the account key, but it's less secure than SAS and isn't used much nowadays.

Why is SAS needed?

Simply put, it's for delegation. But what is delegation, one might ask, and how is it different from directly accessing a storage account to which we have been assigned an Azure RBAC role, for example?

Shared Access Signatures allow us to grant a client temporary access to execute certain actions on a storage account. Those actions are restricted by the permissions you select, such as Read, Write, List, etc. The SAS token is typically generated and passed to the client just before the client needs to start using it. The client then supplies the SAS token when invoking various Azure Storage APIs.

Here are a few examples that involve the Blob service; however, keep in mind that SAS tokens can also be used with the Table, File, and Queue services of Azure Storage:

  • You want to enable users of your web application to download files stored in Azure Storage. When a user clicks a link, the file is downloaded. Each link contains a SAS that is generated just before the file is shown in the UI, and the link expires in one hour.
  • A third-party system (one that is not represented in Entra ID in any way) needs to download blobs from your storage account for further processing.
  • Sometimes you want to work around known limitations of particular services. For example, I once had a case where we used an ADF pipeline to recursively delete a directory in a very big data lake. The linked service in ADF pointed to a managed identity that was granted permissions via POSIX ACLs. During development everything worked as expected, but as the data lake grew, some recursive deletes started timing out. It turned out that the ACLs were checked for every directory and file before anything could be deleted, and because the hierarchy under the path we wanted to delete was very deep, the recursive delete failed every time. This is indeed a limitation when you rely on ACLs to grant access. The only way around it is to invoke the Path - Delete requests as a super user so that ACLs are not checked at all, which we achieved by generating a SAS with delete permissions.

What is a SAS token?

A SAS token is a string with a specific format like sv=2023-01-03&ss=b&srt=co&st=2024-12-07T18%3A14%3A55Z&se=2024-12-07T20%3A14%3A00Z&sp=rl&sig=1ndGXxufe%2BTOrrnDbDAR%2FDOmoTKnupT8glQfET0QPdY%3D.

Sometimes, Microsoft names things right, and at other times, they need to change the names of products and services. I believe they got the name of Shared Access Signature right. So let's see what it means.

Shared because it is meant to be shared with other parties for delegation purposes.

Access because the SAS token specifies what its holder is allowed to do. The sp field stands for signed permissions. As the name implies, it contains the permissions this SAS is signed for, e.g. r means Read, w means Write, d means Delete, and so on. The st (Signed Start) and se (Signed Expiry) fields specify the time window in which your SAS is valid. Some fields are optional, while others are required. Together, the fields define the capabilities of your token.

Signature is one special field that typically comes at the end of each SAS token - the sig field. The signature attests to all the fields specified in the SAS token. It is produced by combining all the field values in a specific order into what's called a string-to-sign, which is then signed with a key. This key can be either the storage account key or a user delegation key. If you want a deep dive into user delegation SAS, check out my blog post, where I go into more detail about how the signature is produced.

There are three main forms in which you can see a SAS token:

  • Use the SAS token itself: Pass the SAS token and specify a URL separately. For example, BlobServiceClient has such a constructor - BlobServiceClient(Uri, AzureSasCredential, BlobClientOptions). Assuming access is properly granted, this approach allows you to target various objects in the account (or blobs in the case of blob storage) while using the same SAS token.
  • In a URL: Append the SAS token to the query string of the URL: https://<your-storage-account>.blob.core.windows.net/<your-container>/<your-file>?sv=2023-01-03&ss=b&srt=co&st=2024-12-07T18%3A14%3A55Z&se=2024-12-07T20%3A14%3A00Z&sp=rl&sig=<signature>. You can open such a URL in your browser and you will access the file. Or by providing a SAS URL that points to a specific container, you can attach that container in Azure Storage Explorer.
  • In a connection string: It's typically used when you want to establish a broader level of access to one or multiple storage services in an Azure Storage account. The SAS token is supplied in a dedicated parameter of the connection string. Here is an example: SharedAccessSignature=sv=2023-01-03&ss=btqf&srt=sco&st=2024-12-07T19%3A42%3A30Z&se=2024-12-08T19%3A42%3A30Z&sp=rl&sig=<signature>;BlobEndpoint=https://<your-storage-account>.blob.core.windows.net/;FileEndpoint=https://<your-storage-account>.file.core.windows.net/;QueueEndpoint=https://<your-storage-account>.queue.core.windows.net/;TableEndpoint=https://<your-storage-account>.table.core.windows.net/;
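The first of these forms can be sketched as follows. This is a minimal illustration assuming the Azure.Storage.Blobs package; the account, container, and blob names are placeholders:

```csharp
using System;
using Azure;
using Azure.Storage.Blobs;

// A SAS token obtained elsewhere, e.g. handed over by the system that issued it.
var sasToken = "sv=2023-01-03&ss=b&srt=co&st=2024-12-07T18%3A14%3A55Z" +
               "&se=2024-12-07T20%3A14%3A00Z&sp=rl&sig=<signature>";

// The endpoint and the credential are supplied separately, so the same
// SAS token can be reused for every object it grants access to.
var serviceClient = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"),
    new AzureSasCredential(sasToken));

var blobClient = serviceClient
    .GetBlobContainerClient("mycontainer")
    .GetBlobClient("myfile.txt");
// Calls such as blobClient.DownloadContent() now authenticate with the SAS.
```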

When you use the SAS token in a request to a particular Azure Storage service, the service checks whether the SAS token is valid. More specifically, it ensures that the fields in the SAS and the signature belong together. If you try to tamper with the fields in the SAS in the hope of gaining more permissions, the signature becomes invalid. At some point, the SAS token will stop working because the signed expiry is in the past. You will then need a new SAS token with a value for the se parameter that is in the future. This, of course, results in a new SAS token that contains a new signature.
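If you want to inspect the fields of a SAS token yourself, you can parse it like any other query string. A small sketch using HttpUtility from System.Web:

```csharp
using System;
using System.Web;

var sasToken = "sv=2023-01-03&ss=b&srt=co&st=2024-12-07T18%3A14%3A55Z" +
               "&se=2024-12-07T20%3A14%3A00Z&sp=rl&sig=1ndGXxufe%2BTOrrnDbDAR%2FDOmoTKnupT8glQfET0QPdY%3D";

// ParseQueryString splits the token into fields and URL-decodes the values.
var fields = HttpUtility.ParseQueryString(sasToken);

Console.WriteLine(fields["sp"]); // rl
Console.WriteLine(fields["se"]); // 2024-12-07T20:14:00Z
```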

SAS types

Choosing the type of SAS you want to use can be tricky. There are three types, each offering different options - account SAS, service SAS, and user delegation SAS. Service and user delegation SAS give you the ability to delegate fine-grained access, for example to a specific file or a specific container, while account SAS only allows you to delegate coarsely across the whole account.

I've seen people who don't know (or don't care about) the differences opt almost automatically for account SAS, because it lets them achieve a lot. If you tick most of the checkboxes of an account SAS, it becomes almost as powerful as the storage account key. I hope it is obvious that this is not how you follow the principle of least privilege.

I created a little decision tree that can help you choose the type. The more you stay on the green line towards User delegation SAS, the more secure your SAS is.


The following comparison summarizes what you can achieve with each type of SAS.

  • Scope: account SAS is multi-service (any combination of Blob, Queue, Table, File); service SAS and user delegation SAS target a single service.
  • Supported services: account SAS and service SAS support Blob (blob and dfs endpoints), Queue, Table, and File; user delegation SAS supports only Blob (blob and dfs endpoints).
  • Signed with: account SAS and service SAS are signed with the account key; user delegation SAS is signed with a user delegation key.
  • Resource specificity: low for account SAS (the SAS permissions apply to all services/objects in the account); high for service SAS and user delegation SAS (the SAS permissions apply to a specific container or object in it).
  • Effective permissions: for account SAS and service SAS, as defined in the SAS token; for user delegation SAS, the intersection between the permissions defined in the SAS token and the Azure RBAC role or POSIX ACLs assigned to the Entra ID security principal.
  • Stored Access Policy support: service SAS only.
  • Revocation methods: for account SAS, account key rotation. For service SAS, modify or delete the Stored Access Policy if one was used to issue the SAS token, or rotate the account key for an ad-hoc SAS. For user delegation SAS, remove the RBAC role of the security principal, remove the POSIX ACLs granted to the security principal (for accounts with hierarchical namespace enabled), or revoke all user delegation keys from the account.
  • Behavior when AllowSharedKeyAccess=false: requests with an account SAS or service SAS are denied; requests with a user delegation SAS are allowed.
  • Level of implied security: low for account SAS, medium for service SAS, best for user delegation SAS.

4 tips when using a SAS

There are many intricacies and nuances around using a SAS. But following several simple tips can go a long way and keep you out of trouble.

  1. Follow the principle of least privilege. Keep the permissions and the scope of the SAS at the minimum required for your use case. For example, if you only need to delegate read-only access to a specific blob in a Blob Storage container or a file in a File Share, it doesn't make sense to generate an account SAS with all available permissions.
  2. Use a short-lived SAS token, if possible. It can be convenient to generate a long-lived SAS, set it, and forget about it. That is, however, a terrible idea. Make sure the lifetime of a SAS gives the client just enough time to perform the designated actions, but no more. By generating the SAS just before it is needed, you can keep its lifetime short. This is best done programmatically, using the respective client libraries.
  3. Treat the SAS token as a secret. SAS tokens must not appear in logs, be present in source code, etc. As a rule of thumb, keep them as secure as you keep the storage account key. A valid SAS may be very limited in the permissions it allows, but it might just as well grant access almost identical to what your storage account key does.
  4. Understand the risks. As mentioned above, a SAS can give away a lot of power, which is risky if you lack the means to track how many tokens were issued, whether they are kept safe, how they are used, or when they expire and need to be renewed. Not to mention that you'll need a plan for revoking SAS tokens in case they leak. In some cases, you might consider changing the architecture of your solution, such as moving an operation that uses SAS but requires too much power to a tier that can be easily integrated with Entra ID.
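To put tips 1 and 2 into practice, here is a minimal sketch that issues a read-only, 15-minute user delegation SAS for a single blob. It assumes the Azure.Storage.Blobs and Azure.Identity packages; the account, container, and blob names are placeholders:

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

var accountName = "mystorageaccount"; // placeholder account name
var serviceClient = new BlobServiceClient(
    new Uri($"https://{accountName}.blob.core.windows.net"),
    new DefaultAzureCredential());

// Request a user delegation key that lives only as long as the SAS does.
var startsOn = DateTimeOffset.UtcNow;
var expiresOn = startsOn.AddMinutes(15);
UserDelegationKey key = await serviceClient.GetUserDelegationKeyAsync(startsOn, expiresOn);

// Least privilege: a single blob, read-only, short-lived.
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "invoices",      // placeholder container
    BlobName = "2024/invoice-42.pdf",    // placeholder blob
    Resource = "b",                      // "b" = an individual blob
    StartsOn = startsOn,
    ExpiresOn = expiresOn
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);

// Sign with the user delegation key and assemble the final SAS URL.
var sasUri = new BlobUriBuilder(serviceClient.Uri)
{
    BlobContainerName = sasBuilder.BlobContainerName,
    BlobName = sasBuilder.BlobName,
    Sas = sasBuilder.ToSasQueryParameters(key, accountName)
}.ToUri();
```

Because the token is generated right before it is handed out and expires shortly after, a leaked URL is only useful to an attacker for a few minutes.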

]]>
<![CDATA[Key Takeaways from Microsoft Ignite 2024]]>I couldn't make it to Ignite this year, but you know what? Watching it online has its perks. No distractions from the bustling conference floor, just pure, undivided attention to all the juicy details.

Here's the TLDR version: Start with AI, AI, AI, sprinkle in some

]]>
https://www.gatevnotes.com/key-highlights-from-microsoft-ignite-2024/6741c14aa6040929d49052aeSat, 23 Nov 2024 13:43:48 GMT

I couldn't make it to Ignite this year, but you know what? Watching it online has its perks. No distractions from the bustling conference floor, just pure, undivided attention to all the juicy details.

Here's the TLDR version: Start with AI, AI, AI, sprinkle in some Copilots soaring through the fluffy clouds, mix in a wealth of data, and top it all off with a robust layer of security. Voilà! Just kidding, here are my top highlights from Microsoft Ignite 2024.

Disclaimer: This blog post doesn't cover every announcement from Ignite 2024. Instead, it highlights the announcements I found most significant and impactful. I tried to distill them enough to make for easy reading, but if you want to check everything, here's the Book of News from Ignite.

Azure and AI

Azure AI Foundry: Unified AI Development Platform

Microsoft has launched Azure AI Foundry, a comprehensive platform designed to simplify the development and management of AI applications. This platform integrates Azure AI models, tools, and safety solutions, offering a unified SDK called Azure AI Foundry SDK and portal experience (formerly AI Studio) to help organizations efficiently design, customize, and scale their AI solutions.

Learn more: https://azure.microsoft.com/en-us/blog/the-next-wave-of-azure-innovation-azure-ai-foundry-intelligent-data-and-more/

Azure AI Agent Service: Revolutionizing AI Agent Development and Deployment

Azure AI Agent Service introduces managed capabilities to empower developers to build secure, stateful autonomous AI agents that automate complex business processes. This service integrates models, tools, and technology from Microsoft, OpenAI, and partners like Meta, Mistral, and Cohere, extending agents with knowledge from Bing, SharePoint, Fabric, Azure AI Search, Azure Blob, and licensed data.

Learn more: https://techcommunity.microsoft.com/blog/azure-ai-services-blog/introducing-azure-ai-agent-service/4298357

AI Search Enhancements: Raising the Bar for RAG Excellence

Microsoft has introduced two major updates to Azure AI Search: generative query rewriting and a new semantic ranker. These enhancements significantly improve search relevance and performance, setting new standards for relevance and latency across various benchmarks.

Learn more: https://techcommunity.microsoft.com/blog/azure-ai-services-blog/raising-the-bar-for-rag-excellence-query-rewriting-and-new-semantic-ranker/4302729

Model Fine-Tuning Collaborations: Enhancing AI Customization

Microsoft has announced new collaborations with Weights & Biases, Scale AI, Gretel, and Statsig to streamline model fine-tuning and experimentation. These partnerships aim to address challenges in AI development by providing advanced tools, synthetic data, and specialized expertise.

Learn more: https://techcommunity.microsoft.com/blog/AIPlatformBlog/announcing-model-fine-tuning-collaborations-weights--biases-scale-ai-gretel-and-/4289514

Azure AI Content Understanding: Transforming Multimodal Data

Azure AI Content Understanding is a new service designed to extract insights from diverse content types like documents, images, videos, and audio. This tool simplifies the process of converting unstructured data into structured, actionable insights, enhancing efficiency and accuracy for businesses.

Learn more: https://techcommunity.microsoft.com/blog/azure-ai-services-blog/announcing-azure-ai-content-understanding-transforming-multimodal-data-into-insi/4297196

Azure Container Apps updates

Azure Container Apps has introduced several new features at Ignite 2024, enhancing its capabilities for modern, cloud-native applications and microservices.

  • Serverless GPUs: Azure Container Apps now supports serverless GPUs, offering NVIDIA A100 and T4 GPUs in a serverless environment for AI workloads. This feature provides scale-to-zero capabilities, built-in data governance, and flexible compute options.
  • Dynamic Sessions: Dynamic sessions are now generally available, providing instant access to compute sandboxes for running untrusted code at scale.
  • Private Endpoints: Public preview support for private endpoints in workload profile environments allows secure connections using a private IP address in an Azure Virtual Network, eliminating exposure to the public internet.
  • Planned Maintenance: A new planned maintenance feature in public preview lets users control when non-critical updates are applied to their Container Apps environment, minimizing downtime.
  • Path-Based Routing: Early access to path-based routing enables customers to configure routing rules for application traffic without needing an additional reverse proxy.
  • Java Support: Azure Container Apps now offers multiple features for deploying Java Spring applications, including integration with popular development tools and automation tools.

Learn more: https://techcommunity.microsoft.com/blog/appsonazureblog/what’s-new-in-azure-container-apps-at-ignite’24/4303445

Azure Local: A New Enhanced Edge Computing

Azure Local, a new cloud-controlled hybrid infrastructure platform enabled by Azure Arc, is a comprehensive solution designed to provide cloud-connected infrastructure at physical locations under operational control. This solution will replace the Azure Stack product family, offering features like secure data storage, seamless integration with Azure services, and flexible hardware options.

Learn more: https://techcommunity.microsoft.com/blog/azurearcblog/introducing-azure-local-cloud-infrastructure-for-distributed-locations-enabled-b/4296017

AI at work

Agents everywhere: Purpose-built agents in Microsoft 365 Copilot

Microsoft 365 Copilot introduces purpose-built agents across various platforms, including SharePoint, Teams, and Planner, to streamline tasks and enhance productivity.
These agents, such as the Employee Self-Service agent in Business Chat and the Project Manager agent in Planner, are designed to handle specific roles, from managing HR and IT tasks to automating project management. SharePoint agents empower employees to gain insights faster and make informed decisions grounded in specific SharePoint content. In Teams, the Facilitator agent takes real-time notes during meetings, allowing everyone to co-author and collaborate seamlessly.

Learn more: https://www.microsoft.com/en-us/microsoft-365/blog/2024/11/19/introducing-copilot-actions-new-agents-and-tools-to-empower-it-teams/

The evolution of Bot Framework: Meet the Microsoft 365 Agents SDK

The Microsoft 365 Agents SDK, now in preview, enables developers to build and deploy scalable, multi-channel agents using C# code, supporting various AI services like Azure AI Foundry. The agents can operate in a variety of channels, including Microsoft 365 Copilot, Microsoft Teams, web, and more.

Learn more: https://devblogs.microsoft.com/microsoft365dev/introducing-the-microsoft-365-agents-sdk/

Measure AI Impact with the new Copilot Analytics

Microsoft has introduced Copilot Analytics to help organizations measure the business impact of AI-powered assistants like Copilot and agents. This tool provides out-of-the-box experiences and customizable reporting for deeper analysis, available via the Microsoft 365 admin center, the Copilot Dashboard, and Viva Insights.

Copilot Analytics provides business impact measurement capabilities, including an out-of-the-box dashboard for Copilot readiness, adoption, impact, and learning categories. It also features customizable reporting tools in the Microsoft 365 admin center and Viva Insights for analyzing Copilot usage against business KPIs.

Learn more: https://techcommunity.microsoft.com/blog/microsoftvivablog/introducing-copilot-analytics-to-measure-ai-impact-on-your-business/4301717

Teams

  • Analyze and Summarize Screen-Shared Content: Copilot in Teams can analyze and summarize content shared on-screen during meetings, providing valuable insights and ensuring no details are overlooked
  • Interpreter Agent: Provides real-time speech-to-speech translation in up to nine languages, allowing participants to speak and listen in their preferred language. This can actually simulate your voice!
  • File Summaries: Copilot in Teams can quickly summarize the content of files shared in 1:1 chats and group chats, helping users grasp key ideas without opening the file.
  • Meeting Recaps: Enhanced transcription and translation features support more languages, providing comprehensive meeting recaps in the chosen language.
  • Microsoft Places Integration: Allows employees to book meeting rooms and desks, and manage workplace presence directly within Teams and Outlook.

Learn more: https://techcommunity.microsoft.com/blog/microsoftteamsblog/what’s-new-in-microsoft-teams--microsoft-ignite-2024/4287538

PowerPoint

  • Narrative Builder: Integrates insights from documents into compelling narratives with branded slide designs, speaker notes, and transitions.
  • Presentation Translation: Translates entire presentations into one of 40 languages while maintaining design integrity.
  • Organization Image Support: Allows seamless integration of organizational images from asset libraries like SharePoint and Templafy.

Learn more: https://techcommunity.microsoft.com/blog/microsoft365copilotblog/what’s-new-in-microsoft-365-copilot-in-powerpoint-at-ignite-2024/4298971

Outlook

  • Meeting Management: Copilot in Outlook now helps users schedule 1:1s and focus time, find optimal meeting times, and draft agendas based on meeting goals.
  • Email Summarization: Copilot can summarize long email threads, create agendas, and find suitable meeting times to bring email discussions to a conclusion.

Learn more: https://techcommunity.microsoft.com/blog/outlook/what’s-new-and-coming-to-microsoft-outlook-–-ignite-2024/4297199

Data & Analytics

Fabric Databases: Unified Workloads for AI Development

Fabric Databases, now in preview, bring a world-class transactional database natively to Microsoft Fabric. This feature unifies transactional and analytical workloads, streamlining AI app development and enhancing efficiency for developers. The first database available in Fabric Databases is SQL Database, which is based on Azure SQL Database.

Learn more: https://www.microsoft.com/en-us/microsoft-fabric/blog/2024/11/19/accelerate-app-innovation-with-an-ai-powered-data-platform/#introducing-a-unified-data-platform-with-fabric-databases

Open mirroring in Microsoft Fabric

Preview of open mirroring, allowing applications or data providers to write change data directly into a mirrored database within Fabric.

Learn more: https://learn.microsoft.com/en-us/fabric/database/mirrored-database/open-mirroring

OneLake Catalog

A new solution for exploring, managing, and governing your entire Fabric data estate, with tabs for discovery and governance, integrated with Microsoft 365 and Microsoft Purview.

Learn more: https://www.microsoft.com/en-us/microsoft-fabric/blog/2024/11/19/accelerate-app-innovation-with-an-ai-powered-data-platform/#onelake-catalog-a-complete-catalog-for-discovery-management-and-governance

SQL Server 2025 Announced with a Wealth of AI Capabilities

SQL Server 2025 introduces native JSON datatype and indexing, native vector data type, and vector indexes. Not only does it become a vector database, but it also offers AI integration for both cloud-based and locally hosted models, and native vector search capabilities. Additionally, it brings features from Azure SQL DB, including Fabric mirroring, regular expressions, and optimized locking.

Learn more: https://www.microsoft.com/en-us/sql-server/blog/2024/11/19/announcing-microsoft-sql-server-2025-apply-for-the-preview-for-the-enterprise-ai-ready-database/

Azure Managed Redis: a new cost-effective caching for your AI apps

Azure Managed Redis, now in public preview, offers the latest Redis innovations, including multi-core utilization, vector search, and active geo-replication, ensuring high performance and availability for AI applications. It also comes with a compelling total cost of ownership (TCO).

Learn more: https://techcommunity.microsoft.com/blog/AppsonAzureBlog/introducing-azure-managed-redis-cost-effective-caching-for-your-ai-apps/4299104

Security

Microsoft Purview Insider Risk Management: Elevating AI Security

Microsoft announced significant updates to Purview Insider Risk Management, focusing on identifying and mitigating risky AI usage. New features include detections for sensitive information in GenAI prompts and responses, integration with Microsoft Defender XDR for enhanced security operations, and improved analytics for better risk assessment and policy recommendations.

Learn more: https://techcommunity.microsoft.com/blog/microsoftsecurityandcompliance/insider-risk-management-empowering-risky-ai-usage-visibility-and-security-invest/4298246

Microsoft Purview Data Governance Solution becomes Unified Catalog

Microsoft has rebranded its Purview Data Governance solution as Unified Catalog to better reflect its growing catalog capabilities. This update includes integration with the new OneLake catalog, a new data quality scan engine, Purview Analytics in OneLake, and expanded Data Loss Prevention (DLP) capabilities for Fabric lakehouse and semantic models.

Learn more: https://techcommunity.microsoft.com/blog/microsoftsecurityandcompliance/safely-activate-your-data-estate-with-microsoft-purview/4298286

Introducing Microsoft Purview Data Security Posture Management: A New Level of Data Visibility

Microsoft Purview Data Security Posture Management (DSPM) provides comprehensive visibility into data security risks, offering contextual insights and continuous risk assessment. It integrates with Microsoft 365 and Windows devices, leveraging AI-powered capabilities like Security Copilot to identify and mitigate threats. DSPM also delivers centralized visibility and policy recommendations to help organizations strengthen their data security posture.

Learn more: https://techcommunity.microsoft.com/blog/MicrosoftSecurityandCompliance/strengthen-your-data-security-posture-in-the-era-of-ai-with-microsoft-purview/4298277

New enhancements in Microsoft Security Service Edge

Microsoft Entra has introduced several new features to enhance security and streamline network access. Key updates include the general availability of Microsoft Entra Private Access and Microsoft Entra Internet Access, which simplify migration from traditional VPNs and improve user connectivity. New capabilities like Universal Continuous Access Evaluation (CAE) and TLS inspection strengthen threat protection and provide comprehensive visibility of encrypted traffic. Additionally, integrations with third-party network security vendors and the public preview of Global Secure Access clients for macOS and iOS offer more flexible and secure access options.

Learn more: https://techcommunity.microsoft.com/blog/identity/what’s-new-in-microsoft’s-security-service-edge-solution/3847827

Microsoft Defender for Cloud Enhances Container Security

Microsoft Defender for Cloud now scans container images from CI/CD pipeline creation through cloud platforms, registries, and Kubernetes clusters. Enhanced monitoring, binary drift detection, and AI-driven threat remediation streamline container security and response.

Learn more: https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-introduction

Windows

Among other Windows updates such as enhanced security features, AI tools for developers, and productivity enhancements with AI-powered Copilot+ PCs, two particular announcements caught my attention.

Windows 365 Link

Microsoft has unveiled Windows 365 Link, a purpose-built device designed to connect securely to a Windows desktop in the Microsoft Cloud. This compact and lightweight device offers seamless connectivity, high-performance video playback, and enhanced security by eliminating local data and apps, making it ideal for shared workspaces and desk-based workers.

Learn more: https://www.microsoft.com/en-us/windows-365/link

Enhanced Security for WSL and WinGet

The latest updates to Windows Subsystem for Linux (WSL) and Windows Package Manager (WinGet) include integration with Microsoft Entra ID for identity-based access control, enhancing security and management capabilities.

Learn more: https://blogs.windows.com/windowsexperience/2024/11/19/microsoft-ignite-2024-embracing-the-future-of-windows-at-work/

]]>
<![CDATA[How to uncover the names and parameters of blades in Azure Portal]]>Azure Portal has its functionality organized in blades and extensions. Blades are the visual parts you play with, while extensions serve as the logical module that contains blades and are typically paired with Azure Resource Providers so that an extension provides the necessary UI bits and pieces for an Azure

]]>
https://www.gatevnotes.com/uncovering-blade-names-and-parameters-azure-portal/6607ed3fc2ddab34ec085f8cSun, 31 Mar 2024 12:18:44 GMT

Azure Portal has its functionality organized in blades and extensions. Blades are the visual parts you play with, while extensions serve as the logical module that contains blades and are typically paired with Azure Resource Providers so that an extension provides the necessary UI bits and pieces for an Azure Resource Provider.

Some parts of the Azure Portal functionality make sense to be reused by the UI of your template specs. To learn more about how to invoke blades, check out Using Azure Portal blades in the Template Specs UI.

Azure Portal is among the biggest (if not the biggest) Knockout.js applications in the world. Knockout.js used to be one of the best frameworks for leveraging data binding in your applications by applying the Model-View-ViewModel (MVVM) pattern. It was a game changer back in 2012 and laid the foundation of the modern front-end frameworks. Now, don't get me wrong, I haven't moved to front-end development. As an architect and developer, my work requires versatility and problem-solving across different domains. It's challenging to describe all my responsibilities, but let's focus on the main subject for now.

Finding out the blade and extension names

Your best friend for the next steps will be the DevTools in your browser. Hit F12 and let's get started:

  1. Go to the Azure Portal.
  2. Make sure you open the blade in question. The portal loads its modules on demand, so the blade must be loaded before you can find it.
  3. Keep an eye on the Console tab where the extension name is typically logged, e.g. Extension: WebsitesExtension.
  4. Execute the following code snippet in the Console:
    const allElements = document.getElementsByTagName("*");
    for (const element of allElements) {
        const context = ko.contextFor(element);
        if (context && context["$partName"] !== undefined
            && context["$extensionName"] !== undefined) {
            console.log(`Extension: ${context["$extensionName"]}, ` +
                `Blade/Part: ${context["$partName"]}`);
        }
    }
    
  5. The code snippet above loops through all elements on the page that have a Knockout.js binding context. It tries to obtain the name of the blade and the name of the extension it belongs to. A blade consists of many elements, so the same logs are repeated a lot, but that's okay:
    How to uncover the names and parameters of blades in Azure Portal
  6. You can probably identify the blade you need by its name, as the names tend to be self-explanatory.

Congrats if you've found them! If not, then most likely the blade is loaded in an iframe, so its code is technically loaded as a document in another browsing context. That is the case with the Entra ID ObjectPickerBlade.
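In the iframe case, you can switch the DevTools Console to that frame via the context dropdown and re-run the earlier snippet there. To see which frames exist in the first place, a small helper along these lines can be pasted into the Console (a sketch of my own, not part of the Portal; it takes the document as a parameter so the logic is easy to exercise outside the browser too):

```javascript
// List the iframes on the page so you can pick the right one in the
// DevTools console context dropdown and re-run the earlier snippet there.
function listFrames(doc) {
    const frames = [];
    for (const frame of doc.querySelectorAll("iframe")) {
        let accessible = false;
        try {
            // Cross-origin frames throw or return null here.
            accessible = Boolean(frame.contentDocument);
        } catch (e) {
            accessible = false;
        }
        frames.push({ src: frame.src, accessible });
    }
    return frames;
}

// In the DevTools Console: console.table(listFrames(document));
```

Frames reported as cross-origin cannot be inspected from the top-level context at all; for those, select the frame itself in the console context dropdown.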

Figuring out the blade parameters

This is not so straightforward, as you will have to read through the Azure Portal's implementation right from the Sources tab in DevTools. The Portal's code is somewhat minified, but it still makes sense because some names are retained.

The first step is to use the Search tool in DevTools to scan for all references to, and the implementation of, the blade in question. The tool can find text across all loaded resources. Here are the results for CertPickerV2ViewModel, for example:

How to uncover the names and parameters of blades in Azure Portal

Next, you have to find the implementation among all the search results. If a result looks like BladeReferences.forExtension("Microsoft_Azure_KeyVault").forBlade("CertPickerV2ViewModel").createReference, that is just a reference where the blade is used, so you will not see all available parameters there, as there might be optional ones.

But if the line starts like define("_generated/Blades/CertPickerV2ViewModel", ["require", "exports", "Cert/Picker/ViewModels/CertPickerV2ViewModel"], (function(n, t) { ..., you have most likely found the implementation. That is the blade defined as a RequireJS module.

Take a look at the inputs and optionalInputs that define required and optional parameters, respectively.

How to uncover the names and parameters of blades in Azure Portal

To find out what each parameter does, you will have to dive deeper into the implementation of the ViewModel that is most likely in the same file.

]]>
<![CDATA[How to use Azure Portal blades in the Template Specs UI]]>Template specs are a great way to share templates across the organization so that users and developers can deploy approved workloads within the organization. Access to them can be controlled with Azure RBAC.

By default, the Azure Portal will create some default look and feel by inferring the parameters of

]]>
https://www.gatevnotes.com/using-blades-from-the-azure-portal-for-ui-of-template-specs/65e8144c0451ae22a44122c3Wed, 20 Mar 2024 07:18:01 GMT

Template specs are a great way to share templates across the organization so that users and developers can deploy approved workloads within the organization. Access to them can be controlled with Azure RBAC.

By default, the Azure Portal creates a default look and feel by inferring the parameters of your template spec. But if you want greater control, you can attach a UI form definition in JSON to your template spec, via either Azure CLI or Azure PowerShell.
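With the Azure CLI, attaching the form definition while creating the template spec might look like the following sketch; the template spec name, resource group, location, and file paths are placeholders of my own:

```shell
az ts create \
  --name myTemplateSpec \
  --version "1.0" \
  --resource-group my-rg \
  --location westeurope \
  --template-file ./azuredeploy.json \
  --ui-form-definition ./uiFormDefinition.json
```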

About UI Definitions

The template specs UI definition has a lot in common with Azure Managed Applications, but there are differences. The most important is that there are two JSON schemas for defining Portal UI: CreateUIDefinition.MultiVm.json and uiFormDefinition.schema.json. Although they have a lot of similarities, the former is considered the old format and is used for Azure Managed Applications, while the latter is the newer format used by Template Specs.

You can find the documentation of most of the supported UI elements here. If you ask me, we need a lot more to be exposed.

Extensions and blades

But if you closely examine the uiFormDefinition.schema.json you'll find an intriguing type that is, well, mostly undocumented in the official docs: Microsoft.Solutions.BladeInvokeControl.

Most of the visual elements in Azure Portal are combined into panes, the so-called blades. Blades belong to an extension that usually provides multiple blades and typically implements the Portal functionality needed by a particular provider, e.g. the Microsoft_Azure_KeyVault extension.

The BladeInvokeControl opens a blade and provides the output values of that blade as a result. It is also capable of doing transformations using JMESPath which allows you to massage the resultant objects.

The BladeInvokeControl is typically used in conjunction with another element like Microsoft.Common.Selector which is a common pattern you see across the Portal. The selector can display certain properties of the blade output and has a color-coded state that shows success or error.

Here is a basic example of selecting a secret from a Key Vault where you can see the interplay between the blade and the selector:

{
    "name": "keyVault",
    "label": "Key Vault",
    "elements": [
        {
            "name": "secretSection",
            "type": "Microsoft.Common.Section",
            "label": "Secret selection",
            "elements": [
                {
                    "name": "secretPickerBlade",
                    "type": "Microsoft.Solutions.BladeInvokeControl",
                    "openBladeStatus": "[steps('keyVault').secretSection.secretSelector.changing]",
                    "bladeReference": {
                        "name": "SecretPickerV2ViewModel",
                        "extension": "Microsoft_Azure_KeyVault",
                        "parameters": {
                            "subscriptionId": "[steps('basics').resourceScope.subscription.subscriptionId]",
                            "showSubscriptionDropdown": true
                        },
                        "inFullScreen": false
                    }
                },
                {
                    "name": "secretSelector",
                    "type": "Microsoft.Common.Selector",
                    "label": "Secret",
                    "keyPath": "secretId",
                    "value": "[steps('keyVault').secretSection.secretPickerBlade]",
                    "visible": true,
                    "barColor": "[if(empty(steps('keyVault').secretSection.secretPickerBlade), '#FF0000', '#7fba00')]",
                    "constraints": {
                        "required": true
                    },
                    "link": "Select a secret"
                }
            ]
        }
    ]
}

It looks like this:

How to use Azure Portal blades in the Template Specs UI

You can check all the parameters of BladeInvokeControl in Azure/portaldocs.

How to try a UI definition?

You can find a full working example of all blades from this blog post here. The easiest way to play with the UI definition is to copy the JSON contents and paste it into the Form view Sandbox, then hit Preview.

Blades reference

Some blades are very useful, and I want to keep track of their parameters in this blog post. I have already had to explore their implementation a couple of times because I tend to forget the specifics. That was one of the reasons for writing this post.

Below, you will find some of the blades that I see as universal and will be needed in most Template Specs sooner rather than later. I've documented their inputs but if you'd like to see the outputs, make sure to play with them because I did put a text field to show the entire output of every blade.
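One simple way to inspect a blade's output yourself is to bind it, serialized, to a read-only text box. A hypothetical fragment reusing the element names from the Key Vault example earlier in this post (string() serializes the blade's output object):

```json
{
    "name": "bladeOutput",
    "type": "Microsoft.Common.TextBox",
    "label": "Raw blade output",
    "value": "[string(steps('keyVault').secretSection.secretPickerBlade)]",
    "visible": true
}
```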

Entra ID object picker

Extension: Microsoft_AAD_IAM
Blade Name: ObjectPickerBlade

How to use Azure Portal blades in the Template Specs UI

I'm sure you used this one many times to select users, groups, and applications. Although the blade looks simple, it offers a lot of parameters that you can set:

Parameter Value Description
queries int (required) The query or queries to be performed by the object picker blade. Multiple queries can be combined with bitwise OR. See the Queries table below.
additionalQueriesOnSearch int Queries to be tried upon typing some text in the search field. See the Queries table below.
advancedQueryOptions object Some of the queries support passing additional options. It is best to check the implementation of a given query.
bladeSubtitle string Blade subtitle
bladeTitle string Blade title
disabledObjectIds [string] Array of object IDs not to be offered for selection
disablers int What information to be excluded from results of the queries. Multiple disablers can be combined with bitwise OR. See the Disablers table below.
informationHeader {"informationText": "Longer informational header", "informationLink": "https://example.com"} Text and link for the information header.
inviteEnabled bool It can be used to invite users. It's not clear what query one should use but presumably some type that allows entering the email addresses of guest users.
searchBoxLabel string Label of the search input.
searchBoxPlaceHolderText string Placeholder to display in the search input.
searchBoxTooltip string The text of the tooltip that appears next to the label.
searchGridNoRowsMessage string Text of the message to be displayed when no results were found.
selectButtonText string Text of the select button.
selectedGridLabel string Label for the selected objects grid
selectedGridNoRowsMessage string Message to be displayed in the selected object grid when no records are selected.
selectedObjectIds [string] Array of object IDs to be pre-selected when the object picker is opened.
selectionMaximum int Maximum number of objects to be selected
selectionMinimum int Minimum number of objects to be selected
suggestedObjectsOptions object It is best to check the implementation of the queries you want to use. To use the SuggestedObjectsQuerier, the suggestedObjectsOptions parameter must be defined.

Queries

If you would like to enable selection for users, groups, and service principals, for example, that would be 1 | 2 | 4 = 7.

Flag Query
1 AllUsersBatchFormer
2 AllGroupsBatchFormer
4 AllServicePrincipalsBatchFormer
8 AllDevicesBatchFormer
16 AllContactsQuerier / AllContactsBatchFormer
32 SecurityEnabledGroupsBatchFormer
64 MailEnabledGroupsBatchFormer
128 TeamsGroupsBatchFormer
256 UnifiedGroupsBatchFormer
512 GroupsAssignableToRoleBatchFormer
1024 DynamicGroupsBatchFormer
2048 GuestUsersBatchFormer
4096 TransitiveMembersBatchFormer
8192 AllAdministrativeUnitsBatchFormer
16384 RoleMembersBatchFormer
32768 AADIntegratedServicePrincipalsBatchFormer
65536 ThirdPartyServicePrincipalsQuerier
131072 AllUnifiedRoleDefinitionsQuerier
262144 AllApplicationsBatchFormer
524288 OwnedObjectsBatchFormer
1048576 SuggestedObjectsQuerier
2097152 AccessPackagesBatchFormer
4194304 OnPremSecurityEnabledGroupsBatchFormer
8388608 CrossTenantApplicationsQuerier
16777216 FirstPartyServicePrincipalsQuerier
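
The flag arithmetic above is plain bitwise OR, so combined values can be computed (and decomposed) anywhere; the constant names below are my own shorthand for the table entries:

```javascript
// Flag values taken from the Queries table above (illustrative subset).
const AllUsers = 1;
const AllGroups = 2;
const AllServicePrincipals = 4;

// Combine flags with bitwise OR to allow users, groups, and service principals:
const queries = AllUsers | AllGroups | AllServicePrincipals;
console.log(queries); // 7

// To check whether a combined value includes a given query, use bitwise AND:
console.log((queries & AllGroups) !== 0); // true
```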

Disablers

Disablers offer a way to exclude certain objects from the results of the active Queries.

Flag Disabler
1 DirectorySyncedDisabler
2 DynamicGroupDisabler
4 DistributionGroupDisabler / UnifiedGroupDisabler
16 TeamsGroupDisabler
32 ExoGroupDisabler
64 GuestUserDisabler
128 SelfDisabler
256 RoleAssignableGroupDisabler
512 MailEnabledSecurityGroupDisabler
1024 MSAUserDisabler
2048 NonGuestMSAUserDisabler
4096 NoMailboxUserDisabler
8192 FederatedUserDisabler
16384 CustomRoleDisabler
32768 RestrictedAdminUnits

Key Vault - Key Picker blade

Extension: Microsoft_Azure_KeyVault
Blade Name: KeyPickerV2ViewModel

How to use Azure Portal blades in the Template Specs UI
Parameter Value Description
subscriptionId string (required) The default subscription ID to list Key Vaults from.
showSubscriptionDropdown bool Whether to display the subscriptions dropdown.
showCreateNew bool Whether to show a hyperlink that allows the creation of a new key vault, key, or version.
keyAndVersionDropdownOptional bool Set to true if the key and version are optional fields, i.e. the blade can be used for selecting a key vault only.
requiredKeyTypes [int] 0-3 where 0 - "RSA", 1 - "RSA-HSM", 2 - "EC", 3 - "EC-HSM" What key types are to be allowed for selection. Note: all keys will appear in the dropdown; however, selection will be possible only for the keys of the required types.
requiredKeySizes [int] 0-2 where 0 - 2048, 1 - 3072, 2 - 4096 What key sizes to allow for selection
requiredCurveNames [int] 0-3 where 0 - "P-256", 1 - "P-384", 2 - "P-521", 3 - "P-256K" The elliptic curve names to be available for selection.
requiredKeyOperations [int] 0-5 respectively ["sign", "verify", "wrapKey", "unwrapKey", "encrypt", "decrypt"] The permitted operations that are required for a version to be available for selection.
enableMHSM bool Displays a filter for the key vault type - Key Vault or Managed HSM
location string The name of the region to list key vaults from.
showVersionPicker bool Allow versions to be selected.
supportAlwaysUseCurrentKeyVersion bool Displays a checkbox to opt to use the latest version. showVersionPicker needs to be true.
defaultVaultId string Resource ID of the Key Vault to be selected upon opening the blade.

Key Vault - Secret Picker blade

Extension: Microsoft_Azure_KeyVault
Blade Name: SecretPickerV2ViewModel

Sure, there is Microsoft.KeyVault.KeyVaultCertificateSelector available for use, but it is not that configurable.

How to use Azure Portal blades in the Template Specs UI
Parameter Value Description
subscriptionId string (required) The default subscription ID to list Key Vaults from.
showSubscriptionDropdown bool Whether to display the subscriptions dropdown.
location string The name of the region to list key vaults from.
showCreateNew bool Whether to show a hyperlink that allows the creation of a new key vault, secret, or version.
showVersionPicker bool Allow versions to be selected.
defaultVaultId string Resource ID of the Key Vault to be selected upon opening the blade.

Key Vault - Certificate Picker blade

Extension: Microsoft_Azure_KeyVault
Blade Name: CertPickerV2ViewModel

How to use Azure Portal blades in the Template Specs UI
Parameter Value Description
subscriptionId string (required) The default subscription ID to list Key Vaults from.
requiredSubjectName string X.500 distinguished name
defaultCertId string A default certificate to be pre-selected in the format "https://{key-vault-name}.vault.azure.net/certificates/{certificate-name}"
showSubscriptionDropdown bool Whether to display the subscriptions dropdown.
location string The name of the region to list key vaults from.
showCreateNew bool Whether to show a hyperlink that allows the creation of a new key vault, certificate, or version.
showVersionPicker bool Allow versions to be selected.
defaultVaultId string Resource ID of the Key Vault to be selected upon opening the blade.

An important disclaimer

Although it's quite handy to reuse functionality you are already familiar with because it comes from the Azure Portal, keep in mind that Microsoft can change those blades at any given time and at their own discretion. As you might notice in the BladeInvokeControl, we don't specify a version of the blade. Blades are simply not versioned for external invocation. The Azure Portal application goes hand in hand with its blades, so it intrinsically knows how to work with the "current" version.

Based on my experience over the last year, you won't see many changes to existing blades. But be prepared for parameters, functionality, and outputs to change. If it is a breaking change, Microsoft will likely change the name of the blade, as the V2 suffix in CertPickerV2ViewModel suggests.

What blades are useful to you?

Let me know in the comments what blades can come in handy for you. If they can be massively reused I can consider adding them to this post.

]]>
<![CDATA[Lively Azure Dashboards with Azure Resource Graph]]>https://www.gatevnotes.com/live-azure-dashboards-with-resource-graph/641757cfa9ad6516cc3f1b46Sat, 25 Mar 2023 17:59:52 GMT

If you have ever used Microsoft Azure you've undoubtedly seen an Azure Dashboard. Whenever you open the Azure Portal, you land on a dashboard. It typically visualizes all your resources and resource groups. It also displays various shortcuts to other places in the Azure Portal. But besides the default look, have you ever wondered how far you can go to customize those dashboards?

In this post, we'll explore a few ways that might help you make your dashboards more dynamic by using Azure Resource Graph which offers a powerful way to query resources across subscriptions by using a subset of the Kusto Query Language (KQL).

Azure Dashboards gotchas

Before I show you the Kusto queries, let me share a few quirks the portal has at the time of writing this.

#1 Not all tiles are in the Tiles Gallery

When you edit an Azure Dashboard, you would expect to see all available tiles when clicking the "Add tiles" button. But that's not the case: some tiles can only be embedded in a dashboard by hitting the pin icon that appears somewhere on the page (blade) of certain resources. Here's an example with App Insights failures, where the pin icon shows up twice:

Lively Azure Dashboards with Azure Resource Graph

Here is how it looks when pinned to a dashboard:

Lively Azure Dashboards with Azure Resource Graph

But it is not available in the Tiles Gallery:

Lively Azure Dashboards with Azure Resource Graph

So keep an eye out for pinnable resources while clicking around in the Portal.

#2 Resource Graph tiles

This one is specific to the tiles that visualize the result of a Resource Graph query. When you click on the tile, regardless of whether it is a grid, a chart, or a single value, it opens an Azure Resource Graph Explorer query page that lets you edit the Kusto query. At the bottom, you configure how you want this part to look in the dashboard. Once you are satisfied with what is shown in the Results tab, make sure to update the pinned part on the dashboard and set the toggle for formatted results accordingly:

Lively Azure Dashboards with Azure Resource Graph

If switched on, Formatted results will try to interpret the columns returned by the query and if possible, will replace raw text and identifiers with meaningful names or clickable links. You will have to help it by using column names that it can make sense of. For the queries below, make sure to have this enabled.

Now off to the queries!

Show the latest resource changes

If you are working on automation that touches some properties of an Azure resource, you might use the Azure Portal only to verify that the changes were applied as you expect. But whenever you open the portal, you struggle to find where that resource is. Or maybe you just want to audit the recent changes performed on your resources and resource containers (resource groups, subscriptions, management groups).

If you want to have quick access to all recent changes, this query is for you:

resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp)
| project
    ['Change time'] = changeTime,
    ['Change type'] = properties.changeType,
    id = properties.targetResourceId,
    type = properties.targetResourceType,
    Changes = properties.changes,
    resourceGroup,
    subscriptionId
| union kind = inner 
    (
    resourcecontainerchanges
    | extend changeTime = todatetime(properties.changeAttributes.timestamp)
    | project
        ['Change time'] = changeTime,
        ['Change type'] = properties.changeType,
        id = properties.targetResourceId,
        type = properties.targetResourceType,
        Changes = properties.changeAttributes,
        resourceGroup,
        subscriptionId
    )
| order by ['Change time'] desc

The changes are sorted starting with the most recent:

Lively Azure Dashboards with Azure Resource Graph

List Orphan Resources

Redundant resources can pile up on your Azure bill over time without you realizing it. They are sometimes left behind as residue from a bigger deployment: a public IP that is not used anywhere, or a disk that is not attached to a virtual machine, to name a few. Those resources are called orphan resources. Some of them, like disks or Azure App Service plans, actually cost money, while the rest simply clutter your subscriptions.

I found Dolev Shor's repo which serves as a collection of some Kusto queries for finding orphaned resources. However, I had to merge all queries into one so that you can see them all in a single Resource Graph grid tile:

resources
| where (type has "microsoft.compute/disks" and managedBy == "" and not(name endswith "-ASRReplica" or name startswith "ms-asr-" or name startswith "asrseeddisk-"))
    or (type has "microsoft.network/networkinterfaces" and isnull(properties.privateEndpoint) and isnull(properties.privateLinkService) and properties.hostedWorkloads == "[]" and properties !has 'virtualmachine')
    or (type =~ "microsoft.network/publicipaddresses" and properties.ipConfiguration == "" and properties.natGateway == "" and properties.publicIPPrefix == "")
    or (type =~ "microsoft.network/networksecuritygroups" and isnull(properties.networkInterfaces) and isnull(properties.subnets))
    or (type =~ 'Microsoft.Compute/availabilitySets' and properties.virtualMachines == "[]")
    or (type =~ "microsoft.network/routetables" and isnull(properties.subnets))
    or (type =~ "microsoft.network/loadbalancers" and properties.backendAddressPools == "[]")
    or (type =~ "microsoft.web/serverfarms" and properties.numberOfSites == 0)
    or (type =~ "microsoft.network/frontdoorwebapplicationfirewallpolicies" and properties.frontendEndpointLinks == "[]" and properties.securityPolicyLinks == "[]")
    or (type =~ "microsoft.network/trafficmanagerprofiles" and properties.endpoints == "[]")
| project id, type, resourceGroup, subscriptionId
| union kind = outer
    (
    resourcecontainers
    | where type =~ "microsoft.resources/subscriptions/resourcegroups"
    | extend rgAndSub = strcat(resourceGroup, "--", subscriptionId)
    | join kind=leftouter (
        resources
        | extend rgAndSub = strcat(resourceGroup, "--", subscriptionId)
        | summarize count() by rgAndSub
        )
        on rgAndSub
    | where isnull(count_)
    | project id, type, resourceGroup, subscriptionId
    )

It is capable of finding the following types of orphaned resources:

  • Disks
  • Network Interfaces
  • Public IPs
  • Resource Groups
  • Network Security Groups (NSGs)
  • Availability Sets
  • Route Tables
  • Load Balancers
  • App Service Plans
  • Front Door WAF Policy
  • Traffic Manager Profiles

When pasted into a grid tile, it is displayed like this:

Lively Azure Dashboards with Azure Resource Graph

Download this dashboard

If you want to download a dashboard with both queries and import it right away, you can find it here.

Conclusion

I hope those queries are useful to you. If you have some other favorite Resource Graph queries that you embed into your Portal Dashboards, share them as a comment and I will consider including them in the post.

]]>
<![CDATA[Passwordless deployments from GitHub Actions to Microsoft Azure]]>You've likely had to deploy something to Microsoft Azure. Whether that is a deployment of an application or a deployment of infrastructure, you always have to provide some unique information that you possess to authenticate. In this blog post, I will show you how you can make your

]]>
https://www.gatevnotes.com/passwordless-authentication-github-actions-to-microsoft-azure/6407a037a9ad6516cc3f14feWed, 15 Mar 2023 17:13:38 GMT

You've likely had to deploy something to Microsoft Azure. Whether that is a deployment of an application or a deployment of infrastructure, you always have to provide some unique information that you possess to authenticate. In this blog post, I will show you how you can make your life easier in case you use GitHub Actions workflows.

If you just deploy something very occasionally, you may use your personal account, but in most cases deployments are done by some type of automated process: an Azure DevOps pipeline or a GitHub Actions workflow, to name a few. In that case, you have to use credentials that are tied not to your personal account but to an application registration in your Azure Active Directory. That credential is typically a secret or a certificate.

What's wrong with secrets and certificates?

If you are into software development, you've most likely heard that keeping secrets in raw form in your source code is a bad practice. Having that code under source control makes it even worse! Yes, scanners exist that go through your source code and flag any strings that look like secrets, but that's not the point.

Okay, you may think that if you follow all the good practices for handling secrets and certificates, you are on the safe side. But you are not. Creating such a credential is a two-step process: you generate it, then you copy-paste it to the target environment that will execute the deployment. Those credentials can be stored securely in Azure Key Vault, HashiCorp Vault, a GitHub secret, etc. But obtaining and securely handling them is still largely a manual process, so there is room for mistakes. And credentials tend to leak over time. To somewhat mitigate the problem, you establish a process to rotate credentials periodically, which creates even more problems, because not only can they leak but they also expire. This hurts both the security and the stability of your applications.

So what's the solution? Don't use password-like credentials if you can. That's just as applicable for automated deployments as I explained in my post A guide to online account security.

What is passwordless auth?

To follow the passwordless way, you'll need to establish trust between your cloud provider (in this case Microsoft Azure) and your deployment pipelines. Microsoft announced support for OpenID Connect in Azure Active Directory in late 2021, but in my opinion it's still a highly underused feature. People remain attached to generating secrets and storing them somewhere.

Passwordless deployments from GitHub Actions to Microsoft Azure
The authentication flow between the cloud provider and GitHub workflow. Source: GitHub Docs

OpenID Connect is an authentication protocol built on top of OAuth 2.0. The two sides in the diagram above communicate without the need for exchanging a credential because trust is established between the two. Essentially, you instruct the cloud identity provider (in our case Azure Active Directory) to trust JSON Web Tokens (JWT) issued by GitHub. When AAD successfully recognizes and validates the incoming JWT, it returns an access token, just like it would do if you used an app registration secret or a certificate for authentication purposes. Then the access token is used by the GitHub Actions workflow to access resources in your Azure subscription. In other words, you allow a GitHub workflow to impersonate a security principal from your AAD tenant. This is also called workload identity federation and is specified in RFC 8693 OAuth 2.0 Token Exchange.
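Under the hood, this exchange is a plain OAuth 2.0 client credentials request in which the GitHub-issued JWT travels as a client assertion (RFC 7523) instead of a secret. Here is a minimal Python sketch of the request that the token exchange boils down to - the tenant and client IDs are placeholders, and build_token_request is an illustrative helper of mine, not an official API:

```python
# Sketch of the token-exchange request sent to the Microsoft identity platform.
# The GitHub-issued JWT is passed as a client assertion instead of a secret.
TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder

def build_token_request(github_jwt):
    """Return the token endpoint URL and form fields for the exchange."""
    url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
    form = {
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "scope": "https://management.azure.com/.default",
        # RFC 7523 assertion type: the JWT itself acts as the credential
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": github_jwt,
    }
    return url, form

url, form = build_token_request("<jwt-issued-by-github>")
print(url)
```

POSTing that form returns a regular access token, exactly as if a secret or certificate had been used.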

Let's look at how things can be configured on both ends.

Configuring federated identity credential

Firstly, in Azure Active Directory, you have two options for a security principal to choose from:

  • Use an Application Registration. You can configure the application to trust an external identity provider by following the steps in the documentation.
  • Use a User-Assigned Managed Identity. Configure the managed identity to trust GitHub by following the respective steps.

Whether to use one over the other is up to you. Both are registered at the Azure Active Directory level and can have access to multiple Azure subscriptions. The managed identity lives as a native Azure resource, while the app registration has many dials and switches that let you configure a lot of things.

Keep in mind that federated credentials for managed identities are not supported in all Azure regions.

Here's how it looks for an Application Registration although you will be presented with an almost identical UI when you configure it for a Managed Identity:

Passwordless deployments from GitHub Actions to Microsoft Azure
Adding a federated credential
Passwordless deployments from GitHub Actions to Microsoft Azure
Establishing trust with GitHub

When selecting Entity type, you are offered four choices - environment, branch, pull request, and tag. Here the Azure Portal guides you in configuring the format of the expected subject claim of the incoming JWT. For the token exchange to succeed, the incoming subject must match that expected format. More on that later in the post.

Lastly, make sure to collect the Client ID, Tenant ID, and Subscription ID as you will need them later in GitHub. And of course, you will have to grant your service principal/managed identity some Azure RBAC role so that it can access resources within Azure.

The GitHub Actions workflow

Once the application registration or the user-assigned managed identity has been configured to trust incoming JWT tokens issued by GitHub, we need to create the GitHub workflow. But before we do that, let me take a step back to describe how things work on the GitHub side.

I guess you have heard about the automatic GITHUB_TOKEN that is available in every workflow run. It is typically granted some permissions so that your workflow can access parts of GitHub as part of its operation. Those permissions can be assigned either for the overall workflow or at a specific job level. There is a permission that controls whether you can request a JWT from your workflow run:

permissions:
   id-token: write # write is required for requesting the JWT

Let's proceed with the GitHub workflow. Luckily each major cloud provider offers an authentication action that helps you generate the JWT and exchange it for an access token. For Microsoft Azure, this is azure/login:

name: Run Azure CLI commands with OIDC
on: [push, workflow_dispatch]

permissions:
      id-token: write # write is required for requesting the JWT
      contents: read 
      
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: 'Az CLI login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}            
      - name: 'Run az commands'
        run: |
          az group list

Save this file at .github/workflows/workflow-name.yml. The Client ID, Tenant ID, and Subscription ID are considered somewhat sensitive information, so make sure to persist the values you collected at the end of the previous section as secrets in your repository.

If everything is good, you will be able to see all the resource groups that the security principal has access to:

[
    {
        "id": "/subscriptions/***/resourceGroups/rg-core",
        "location": "westeurope",
        "managedBy": null,
        "name": "rg-core",
        "properties": {
          "provisioningState": "Succeeded"
        },
        "tags": {},
        "type": "Microsoft.Resources/resourceGroups"
      },
      ...
  ]

Advanced: Exploring the JWT token

It felt almost too easy! azure/login@v1 abstracts away a lot if you want to understand what happens behind the scenes. If you don't pass any credentials to the Azure Login action, it falls back to using OpenID Connect. The first thing it does is request a JWT from GitHub by calling core.getIDToken(audience) from the Actions core toolkit. Here is more information on the various ways to obtain a JWT: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers.

The next thing it must do is exchange the JWT for an access token issued by the Microsoft identity platform. Luckily, Connect-AzAccount already offers a parameter called FederatedToken, so the action simply utilizes it:

Connect-AzAccount -ServicePrincipal -ApplicationId '${args.servicePrincipalId}' -Tenant '${tenantId}' -FederatedToken '${args.federatedToken}' -Environment '${args.environment}'

Here is the payload of a sample JWT:

{
  "jti": "cfa67dfb-bed4-447c-ac69-a3a9557531f0",
  "sub": "repo:RadoslavGatev/FederatedIdentityRepo:ref:refs/heads/main",
  "aud": "api://AzureADTokenExchange",
  "ref": "refs/heads/main",
  "sha": "30b8e95348f0dd7e4befede2e315281a5523c6da",
  "repository": "RadoslavGatev/FederatedIdentityRepo",
  "repository_owner": "RadoslavGatev",
  "repository_owner_id": "3099248",
  "run_id": "4420308259",
  "run_number": "14",
  "run_attempt": "1",
  "repository_visibility": "private",
  "repository_id": "613931023",
  "actor_id": "3099248",
  "actor": "RadoslavGatev",
  "workflow": "Run Azure CLI commands with OIDC",
  "head_ref": "",
  "base_ref": "",
  "event_name": "push",
  "ref_type": "branch",
  "workflow_ref": "RadoslavGatev/FederatedIdentityRepo/.github/workflows/azure-oidc.yml@refs/heads/main",
  "workflow_sha": "30b8e95348f0dd7e4befede2e315281a5523c6da",
  "job_workflow_ref": "RadoslavGatev/FederatedIdentityRepo/.github/workflows/azure-oidc.yml@refs/heads/main",
  "job_workflow_sha": "30b8e95348f0dd7e4befede2e315281a5523c6da",
  "runner_environment": "github-hosted",
  "iss": "https://token.actions.githubusercontent.com",
  "nbf": 1678828194,
  "exp": 1678829094,
  "iat": 1678828794
}

As you can see above, there are various claims sent within the JWT. To find out more about all supported claims check out either the OpenID Provider Configuration endpoint of GitHub or the documentation.
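If you want to inspect such a token yourself, the payload is just base64url-encoded JSON - the middle of the three dot-separated segments. Here is a small Python helper; note that it only decodes the claims and performs no signature validation, and the token below is constructed locally purely for illustration:

```python
import base64
import json

def decode_jwt_payload(jwt):
    """Decode the claims of a JWT without verifying its signature."""
    payload_b64 = jwt.split(".")[1]
    # Restore the padding that base64url encoding strips away
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a minimal token locally (header.payload.signature) just to demo the helper
claims = {"sub": "repo:RadoslavGatev/FederatedIdentityRepo:ref:refs/heads/main",
          "aud": "api://AzureADTokenExchange"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJSUzI1NiJ9.{payload}.signature"
print(decode_jwt_payload(token)["sub"])
# repo:RadoslavGatev/FederatedIdentityRepo:ref:refs/heads/main
```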

Among all claims, there are two that are of utmost importance for the Microsoft identity platform:

  • Audience or aud is a string or array of strings that identifies the recipients that the JWT is intended for. Its value must be api://AzureADTokenExchange. This is already done for you by the Azure Login GitHub action but make sure to specify the audience if you generate the token in any other way.
  • Subject or sub identifies the workload that requested the JWT. GitHub follows a predefined format that is assembled by concatenating information about the workflow and the job context, such as GitHub organization, repository, branch, or environment. The resulting subject claim must exactly match what you have defined as an expected value as part of the federated identity configuration. Otherwise, the token exchange will be rejected. A few examples: repo:<orgName/repoName>:ref:refs/heads/branchName, repo:<orgName/repoName>:pull_request, repo:<orgName/repoName>:environment:<environmentName>.
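The subject formats above can be captured in a tiny helper, which comes in handy when scripting the creation of federated credentials. github_subject is a hypothetical function of mine; the format strings follow the defaults described above:

```python
def github_subject(repo, environment=None, pull_request=False, ref=None):
    """Build the default GitHub Actions subject claim for a federated credential."""
    if environment:          # e.g. a GitHub environment named "dev"
        return f"repo:{repo}:environment:{environment}"
    if pull_request:         # workflow triggered by a pull request
        return f"repo:{repo}:pull_request"
    if ref:                  # branch or tag ref, e.g. "refs/heads/main"
        return f"repo:{repo}:ref:{ref}"
    raise ValueError("one of environment, pull_request or ref is required")

print(github_subject("RadoslavGatev/FederatedIdentityRepo", ref="refs/heads/main"))
# repo:RadoslavGatev/FederatedIdentityRepo:ref:refs/heads/main
```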

Some identity providers support defining conditions using an expression language, e.g. Google's Common Expression Language (CEL). So in theory, you could base your logic on more than just the subject claim. But that's not the case for the Microsoft identity platform - it doesn't even support wildcards and requires exact matches instead. And maybe for a good reason - if you mess up your logic, in the worst case you could end up trusting pretty much any valid token coming from GitHub.

Some shortcomings

There should be an exact match between the value of the subject claim and the expected value in the identity federation configuration of the application registration or managed identity.

The default structure of the subject claim dictates that the first segment is the full repository name and the rest is based on the context of the event that triggered the workflow. If you specify an environment in your workflow, the structure will be repo:<orgName/repoName>:environment:<environmentName>. If you don't specify an environment and the workflow is triggered by a pull request, it will be repo:<orgName/repoName>:pull_request. If the workflow neither specifies an environment nor is triggered by a pull request, the structure will be repo:<orgName/repoName>:ref:refs/heads/<branchName>. If you run the workflow from a tag, you get the idea - repo:<orgName/repoName>:ref:refs/tags/<tagName>.

This brings some variety to the possible values of the subject claim. If you don't specify an environment in the workflow and you run it from either a branch or a tag, the number of possible values can quickly become unmanageable, as you won't know the names of the branches or tags in advance. However, if the way you use your repository and workflows is rather predictable, you can cover all the expected values of the incoming subject claim by creating multiple federated identity credentials. In the example below, I've created three - one for the dev branch, one for the main branch, and one for pull requests:

Passwordless deployments from GitHub Actions to Microsoft Azure

Unfortunately, you can create only up to 20 federated identity credentials per application registration or managed identity. Now imagine you had 50 repositories in your organization that you wanted to deploy to Microsoft Azure from, with each repository containing just one part of the whole solution you run in the cloud. In that case, you would need an app registration/managed identity per repository, as a single one would quickly hit the limit of 20 federated credentials. But that's probably impractical...

If you have multiple repositories that deploy to the same Microsoft Azure subscription, I suggest creating one app registration/managed identity per subscription. If you utilize a GitHub environment in your workflow, you won't have to maintain a federated identity credential per branch or tag. So that would simplify things a little.

Additionally, the GitHub API offers a way to customize the template for the subject claim, either at the organization level or for a single repository. The default template looks like the following, where repo is rendered as repo:<orgName/repoName> and the context claim is dynamic (based on the job context), e.g. ref:refs/heads/<branchName> or environment:<environmentName>:

{
    "include_claim_keys": [
        "repo",
        "context"
    ]
}

Let's remove the repository name from the subject claim template. I'm setting it for a single repo, but you might want to do it organization-wide:

curl --location --request PUT 'https://api.github.com/repos/RadoslavGatev/FederatedIdentityRepo/actions/oidc/customization/sub' --header 'X-GitHub-Api-Version: 2022-11-28' --header 'Authorization: Bearer github_pat_goes_here' --header 'Content-Type: application/vnd.github+json' --data-raw '{
    "use_default": false,
    "include_claim_keys": [
        "repository_owner",
        "context"
    ]
}'

After executing that PUT request, the subject claim will come with a value of repository_owner:<orgName>:environment:<environmentName> or repository_owner:<orgName>:pull_request. Make sure to add environment: <environmentName> to your workflow definitions. So in my case, the subject claim comes with the following value: repository_owner:RadoslavGatev:environment:dev.

Then go to your app registration or managed identity and add a new federated identity credential. You can choose either the GitHub Actions scenario or Other issuer. If you selected the GitHub Actions scenario, click on the hyperlinked text to customize it. Keep in mind that at the time of writing, the UI for creating a federated credential of a managed identity doesn't let you customize the default GitHub template. In that case, just select Other issuer, as the other options do nothing special but help you construct the final format of the subject claim.

Passwordless deployments from GitHub Actions to Microsoft Azure

You are done. A simple federated credential that just specifies an organization and environment can authenticate all workflows in all repositories to which you applied the custom subject claim template.

Beware that there are certain risks with that approach - handing the Client ID of your security principal for a specific environment to any repository in the organization that has the subject claim template customization applied gives immediate access to that environment. In other words, the Client ID serves as the key to the castle. Think about whether this is a security concern for you or not.

Bonus question: Does Azure DevOps support this?

Yes, it does. See the blog post Workload identity federation for Azure deployments is now generally available.

Summary

In this blog post, I showed you how to authenticate to Microsoft Azure without using any credentials in raw form when deploying workloads from GitHub Actions workflows. At this point, you should be able to utilize workload identity federation in your organization and understand what limitations it has. Forget about passing or rotating credentials, go passwordless!

]]>
<![CDATA[Publish ASP.NET Core 7.0 web apps to an Azure Virtual Machine]]>https://www.gatevnotes.com/publish-asp-net-core-web-apps-to-azure-virtual-machine/63f661c1a9ad6516cc3f128fFri, 24 Feb 2023 11:30:00 GMT

I've recently had to find a way to deploy an ASP.NET Core web app based on .NET 7 to a set of Azure Virtual Machines. It turned out that although the option is readily available in Visual Studio's publishing settings, quite a lot of configuration on the Virtual Machine itself is needed to enable that otherwise easy-looking deployment option. In this blog post, I will guide you through configuring your Azure VM to unleash Visual Studio publishing for applications built with recent versions of .NET.

Prerequisites

Before you can get started, make sure you have the following prerequisites:

  • Visual Studio 2022;
  • An ASP.NET Core 7.0 Web Application;
  • One or more Azure Virtual Machines running Windows Server. The VMs I used ran Windows Server 2022.

Installing the required components on the Azure VM

There are quite a few things that need to be running on the virtual machine so that it can accept deployments from Visual Studio and host ASP.NET Core applications accordingly. These are namely:

  • Internet Information Services (IIS) with Management Console;
  • Web Management Service;
  • Web Deploy 4.0;
  • ASP.NET Core Hosting Bundle (includes everything you need to run applications - the .NET runtime, the ASP.NET Core runtime, etc.) - find the most recent version here https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer. The script below installs v7.0.3, so make sure to change the link accordingly should you prefer to install a more recent version.

A note on the version of Web Deploy - as far as I know, there should be parity between the version of Web Deploy used by the client (in our case Visual Studio) and by the server (the VM). The version of Web Deploy available from the Microsoft Download Center is 3.6, but Visual Studio 2017 and higher come with Web Deploy 4.0. Don't worry though, I've managed to find the proper download link.

Here is the Windows PowerShell script that does all that for you:

# Install IIS (with Management Console)
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

# Install Web Management Service (enable and start service)
Install-WindowsFeature -Name Web-Mgmt-Service
Set-ItemProperty -Path  HKLM:\SOFTWARE\Microsoft\WebManagement\Server -Name EnableRemoteManagement -Value 1
Set-Service -name WMSVC -StartupType Automatic
if ((Get-Service WMSVC).Status -ne "Running") {
    net start wmsvc
}

# Install Web Deploy 4.0
# Download file from Microsoft Downloads and save to local temp file (%LocalAppData%/Temp/2)
$msiFile = [System.IO.Path]::GetTempFileName() | Rename-Item -NewName { $_ -replace 'tmp$', 'msi' } -PassThru
Invoke-WebRequest -Uri 'https://download.visualstudio.microsoft.com/download/pr/e1828da1-907a-46fe-a3cf-f3b9ea1c485c/035860f3c0d2bab0458e634685648385/webdeploy_amd64_en-us.msi' -OutFile $msiFile
# Prepare a log file name
$logFile = [System.IO.Path]::GetTempFileName()
# Prepare the arguments to execute the MSI
$arguments= '/i ' + $msiFile + ' ADDLOCAL=ALL /qn /norestart LicenseAccepted="0" /lv ' + $logFile
# Execute the MSI and wait for it to complete
$proc = Start-Process -file msiexec -arg $arguments -Passthru
$proc | Wait-Process
Get-Content $logFile

# Install Microsoft .NET Windows Server Hosting
$dotnetHostingFile = [System.IO.Path]::GetTempFileName() | Rename-Item -NewName { $_ -replace 'tmp$', 'exe' } -PassThru
# Find the most recent version here: https://dotnet.microsoft.com/permalink/dotnetcore-current-windows-runtime-bundle-installer
Invoke-WebRequest -Uri 'https://download.visualstudio.microsoft.com/download/pr/ff197e9e-44ac-40af-8ba7-267d92e9e4fa/d24439192bc549b42f9fcb71ecb005c0/dotnet-hosting-7.0.3-win.exe' -OutFile $dotnetHostingFile
$dotnetHostingLogFile = [System.IO.Path]::GetTempFileName()
$arguments = '/install /quiet /log ' + $dotnetHostingLogFile
$dotnetproc = Start-Process -file $dotnetHostingFile -arg $arguments -Passthru
$dotnetproc | Wait-Process
Get-Content $dotnetHostingLogFile

# Restart IIS
net stop was /y
net start w3svc

After the installation takes place, ports 8172 and 80 will be opened in the Windows Defender Firewall.

Executing the installation script

Now let me offer you two options for running the installation script.

Option A: Manual execution via RDP

That's the most obvious thing you will do if you just want to test out something. Simply RDP into the virtual machine, copy the script, open an elevated Windows PowerShell terminal, and execute it. Instead of RDP-ing into the machine, you can as well use Azure Bastion.

Option B: Execution on a scale using Custom Script Extension

By using Custom Script Extension you can automate the execution of the installation script over multiple Azure Virtual Machines. However, this setup requires you to do a little more preparation:

  1. Enable the managed identity of the Virtual Machine if you haven't done so. In this example, I've enabled a system-assigned identity. Should you decide to use a user-assigned one, you need to provide a value for either the clientId or objectId property, both of which are under managedIdentity. You can pass those as part of the parameters of Set-AzVMExtension in the script below. You can find more info about the structure of that object in the documentation.
  2. Upload the installation script to an Azure Storage Account.
  3. Grant the managed identity access to the Storage Account so that it can access the installation script. I've granted it a Storage Blob Data Reader role on the Storage Account.
  4. Enable the Custom Script Extension on the VM using the following PowerShell script. You'll need the relevant Az modules imported into your PowerShell session. Please make sure to change the strings in the first four lines to reflect the names from your environment:
$vmName = 'vm-aspnet-publish'
$resourceGroupName = 'rg-vms'
$storageAcctName = "savmscripts001"
$fileUri = @("https://$storageAcctName.blob.core.windows.net/scripts/install-aspnetcorehosting.ps1")

$setAzVmExtensionParams = @{
    ResourceGroupName  = $resourceGroupName
    VMName             = $vmName
    Name               = 'aspnetCoreHosting' 
    Publisher          = 'Microsoft.Compute'
    ExtensionType      = 'CustomScriptExtension'
    TypeHandlerVersion = '1.10' 
    Settings           = @{ 
        "fileUris" = $fileUri
    }
    ProtectedSettings  = @{
        commandToExecute   = 'powershell -ExecutionPolicy Unrestricted -File install-aspnetcorehosting.ps1'
        managedIdentity    = @{}
    }
}
Set-AzVMExtension @setAzVmExtensionParams

Configuring Inbound NSG rules and DNS name of the Public IP address

We are not done yet. If you try publishing from Visual Studio 2022 at this point, you will probably get one of the following error messages:

The specified virtual machine does not have a domain name associated with any public IP address.
Could not reach the Web Deploy endpoint on the specified machine.

This is because Web Deploy on the client side expects to talk to Web Deploy on the server on port 8172 at custom-dns-name.region.cloudapp.azure.com. With the following script, you can configure both:

$vmName = 'vm-aspnet-publish'
$vm = Get-AzVM -Name $vmName 
$nic = Get-AzNetworkInterface -ResourceId $vm.NetworkProfile.NetworkInterfaces.Id

# Set a DNS name to the Public IP address
$pipName = $nic.IpConfigurations[0].PublicIpAddress.Name
$pip = Get-AzPublicIpAddress -Name $pipName
$pip.DnsSettings = @{DomainNameLabel = $vmName }
Set-AzPublicIpAddress -PublicIpAddress $pip

# Get NSG
$nsgName = $nic.NetworkSecurityGroup.Id -split '/' | Select-Object -Last 1
$nsg = Get-AzNetworkSecurityGroup -ResourceName $nsgName

# Add inbound rules to NSG
$httpInboundRule = @{
    Name                     = 'AllowAnyHTTPInbound'
    Access                   = 'Allow'
    Protocol                 = 'Tcp'
    Direction                = 'Inbound' 
    Priority                 = 350 
    SourceAddressPrefix      = "*" 
    SourcePortRange          = '*'
    DestinationAddressPrefix = '*'
    DestinationPortRange     = 80
}
$nsg | Add-AzNetworkSecurityRuleConfig @httpInboundRule

$webDeployInboundRule = @{
    Name                     = 'AllowAnyWebDeployInbound'
    Access                   = 'Allow'
    Protocol                 = '*'
    Direction                = 'Inbound' 
    Priority                 = 400 
    SourceAddressPrefix      = "*" 
    SourcePortRange          = '*'
    DestinationAddressPrefix = '*'
    DestinationPortRange     = 8172
}
$nsg | Add-AzNetworkSecurityRuleConfig @webDeployInboundRule

$nsg | Set-AzNetworkSecurityGroup

The script will also configure an inbound rule to allow the HTTP traffic on port 80 so that you can access the ASP.NET Core Web App that is hosted in the VM.
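Once the DNS label and the inbound rules are in place, the endpoint Visual Studio talks to can be assembled from the VM name and region. The helper below is purely illustrative - msdeploy.axd is the standard Web Deploy handler path, and "Default Web Site" is assumed as the IIS site name:

```python
from urllib.parse import quote

def web_deploy_endpoint(vm_name, region, site="Default Web Site"):
    """Assemble the Web Deploy URL that the Visual Studio publish profile targets."""
    host = f"{vm_name}.{region}.cloudapp.azure.com"  # DNS label set on the Public IP
    return f"https://{host}:8172/msdeploy.axd?site={quote(site)}"

print(web_deploy_endpoint("vm-aspnet-publish", "westeurope"))
# https://vm-aspnet-publish.westeurope.cloudapp.azure.com:8172/msdeploy.axd?site=Default%20Web%20Site
```

If that URL is unreachable from your machine, revisit the NSG rules and the WMSVC service state before blaming Visual Studio.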

In Azure Portal things should look like this:

Publish ASP.NET Core 7.0 web apps to an Azure Virtual Machine
The custom DNS name of the Public IP address
Publish ASP.NET Core 7.0 web apps to an Azure Virtual Machine
The two inbound NSG rules

Creating the Visual Studio Publish Profile

Now that you have fulfilled all prerequisites, you're ready to deploy your ASP.NET Core Web App to the Azure Virtual Machine using Visual Studio. Here's how:

  1. Open your ASP.NET Core project in Visual Studio.
  2. Right-click on the project in the Solution Explorer and select "Publish".
  3. In the Publish dialog, select "Azure" as a target.
  4. Select "Azure Virtual Machine" as a specific target environment.
  5. Find and select your virtual machine.
  6. If everything is as expected, the "Finish" button will become available, as seen in the screenshot below. Otherwise, an error message will be displayed.
  7. You know the drill, just hit "Publish".
Publish ASP.NET Core 7.0 web apps to an Azure Virtual Machine

Summary

In this blog post, I've covered everything you need to know to enable Visual Studio publishing of your ASP.NET Core project to your existing Virtual Machine in Microsoft Azure. From installing IIS, Web Deploy, and .NET on your VM to enabling inbound communication and setting up the DNS name of the Public IP address. And the best part? I've included some handy PowerShell scripts to make the process easy and stress-free. So go ahead and give it a try!

]]>
<![CDATA[Invoke any Azure REST API from Azure Data Factory or Synapse pipelines]]>https://www.gatevnotes.com/call-any-azure-api-from-azure-data-factory-synapse-pipelines/62019460d767df27904166bfFri, 18 Feb 2022 07:00:00 GMT

Azure Data Factory and Azure Synapse have brilliant integration capabilities when it comes to working with data. You can have various relational or non-relational databases, file storage services, or even 3rd party apps registered as linked services. You can then create datasets on top of a linked service and gain access to its data.

There are also a number of compute services you can use from within your pipelines. For example, you can invoke an Azure Function, execute a notebook in Azure Databricks, or submit an Azure Batch job, among others. But sooner or later you will feel limited by the catalog of supported compute services. Have you ever wondered how to execute some action on an otherwise unsupported Azure resource from Azure Data Factory? You might be willing to import an Azure SDK of your choice and simply use it. But for better or worse, you can't have any programming code in a pipeline. Please stick to using activities! At first glance, there isn't such an activity...

Meet the Web activity

The Web activity is very simple yet powerful. There is no need to create any linked services or datasets in advance. This meets the need of executing an ad hoc request to some REST API. As you may know, each Azure service exposes a REST API.

Authentication and authorization

Before I show you a couple of examples, let's cover two important topics: authentication and authorization. To issue a request to an API endpoint, your pipeline will have to authenticate by using some type of security principal. The supported ones at the time of writing are:

  • None (means no authentication, i.e. the API is public)
  • Basic
  • Managed Identity
  • Client Certificate
  • Service Principal
  • User-assigned Managed Identity

In this post, I am going to focus on accessing Azure resources, and hence you have three relevant options: Service Principal, Managed Identity, or User-assigned Managed Identity. I recommend using managed identities whenever possible. The downside of authenticating with a Service Principal is the very reason Managed Identity exists: you have to deal with its credentials, i.e. generating, rotating, and securing them. It's a hassle.

When it comes to Managed Identity, no matter if you use Data Factory or Azure Synapse, they both have what's called a System-assigned Managed Identity. In other words, this is the identity of the resource itself. This means that the service knows how to handle the authentication behind the scenes.

The User-assigned Managed Identity is a true Azure resource. You create it on your own and then you can assign it to one or more services. That's the only difference with the system-assigned type that is solely managed by the service itself.

Now that you have identified the security principal, you will have to create a role assignment so that the operation you are trying to execute is properly authorized by the API. Which role to assign largely depends on the type of operation. I encourage you not to use the built-in Contributor role; instead, follow the Principle of Least Privilege: assign the minimal role that allows the action on the lowest possible scope, i.e. the Azure resource itself.

A few examples

Let's cover the groundwork with some examples. If you need to perform some other operation, have a look at the links near the end of the post.

There are two types of operations in Azure:

  • Control-plane operations: these are implemented at the Resource Provider level and are accessed via the Azure Resource Manager URL, which for Azure Global is https://management.azure.com
  • Data-plane operations: these are provided by your specific resource, i.e. the requests target the specific FQDN of the service in question, e.g. https://{yourStorageAccountName}.blob.core.windows.net or https://{yourKeyVault}.vault.azure.net
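The distinction is visible in the URLs alone. Here is a short sketch contrasting the two planes for a storage account - the subscription, resource group, and account names are placeholders, and both helper functions are just illustrations of the URL shapes used in the examples below:

```python
def control_plane_url(sub, rg, account, api="2021-04-01"):
    """Control plane: routed through Azure Resource Manager."""
    return (f"https://management.azure.com/subscriptions/{sub}"
            f"/resourceGroups/{rg}/providers/Microsoft.Storage"
            f"/storageAccounts/{account}/blobServices/default/containers"
            f"?api-version={api}")

def data_plane_url(account):
    """Data plane: targets the service's own FQDN."""
    return f"https://{account}.blob.core.windows.net/?comp=list"

print(data_plane_url("mystorageacct"))
# https://mystorageacct.blob.core.windows.net/?comp=list
```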

Example 1: Getting all Storage Account containers via the Management API

One of the options to retrieve the containers of a Storage Account is to use the List operation of the control-plane API which constitutes the following HTTP request:

GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/blobServices/default/containers?api-version=2021-04-01

The settings of your Web activity should look like the following:

Invoke any Azure REST API from Azure Data Factory or Synapse pipelines
Listing all containers via the control-plane API

Instead of hardcoding the URL, I decided to parametrize it using pipeline variables: @concat('https://management.azure.com/subscriptions/', variables('subscriptionId'), '/resourceGroups/', variables('resourceGroupname'), '/providers/Microsoft.Storage/storageAccounts/', variables('storageAccountName'), '/blobServices/default/containers?api-version=2021-04-01').

To make this query execute successfully, you will need to create a role assignment for the System-assigned Managed Identity of the Data Factory or Synapse workspace. The API operation requires the Microsoft.Storage/storageAccounts/blobServices/containers/read action, for which you can use the Storage Blob Data Reader built-in role. Besides the required action, it contains the smallest set of allowed actions. The role assignment looks like this:

Invoke any Azure REST API from Azure Data Factory or Synapse pipelines
The role assignment granted to the Managed Identity of ADF

Now back to the result of the Web activity. The API returns JSON payload which means you can easily process it in the pipeline:

Invoke any Azure REST API from Azure Data Factory or Synapse pipelines

Example 2: Getting all Storage Account containers via the data plane API

An alternative way of obtaining a list of all containers is to call the Blob Service REST API using the FQDN of your storage account. It's an HTTP GET to https://{accountname}.blob.core.windows.net/?comp=list. The role assignment from Example 1 will suffice. The Web activity is configured as follows:

Invoke any Azure REST API from Azure Data Factory or Synapse pipelines
Listing all containers via the blob data-plane API

One of the gotchas of the Blob Service REST API is the fact that it works with an XML payload:

Invoke any Azure REST API from Azure Data Factory or Synapse pipelines

At the time of writing, it is not possible to negotiate the content type and work with JSON. Working with XML in Azure Data Factory is somewhat unpleasant, but it is nowhere near impossible. You have most likely noticed the Set variable activity that comes after the HTTP request. It initializes a variable of type array with the output of the following XPath query: @xpath(xml(activity('GetBlobListViaDataPlane').output.Response), '//Container/Name/text()'). The result is very compact, namely an array of container names: ["container1", "container2", ...].
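The same extraction can be sketched outside the pipeline. Assuming a trimmed sample of the List Containers response, the Python equivalent of that XPath looks like this:

```python
import xml.etree.ElementTree as ET

# Trimmed sample of the Blob service List Containers response
xml_payload = """<EnumerationResults>
  <Containers>
    <Container><Name>container1</Name></Container>
    <Container><Name>container2</Name></Container>
  </Containers>
</EnumerationResults>"""

root = ET.fromstring(xml_payload)
# Equivalent of the //Container/Name/text() XPath used in the pipeline
names = [el.text for el in root.findall(".//Container/Name")]
print(names)  # ['container1', 'container2']
```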

Example 3: Retrieving a secret from Azure Key Vault

Although Azure Data Factory provides very good integration with Key Vault for configuring Linked Services, for example, it doesn't offer an activity that lets you retrieve a secret for later use in the pipeline. This is where the Get Secret operation comes in, configured as a Web activity.


The URL of the Web activity is set to @concat('https://', variables('keyVaultName'), '.vault.azure.net/secrets/', variables('secretName'), '?api-version=7.0').

Then, as usual, some access-related configuration is needed. In case your Key Vault uses access policies as its permission model, there must be an access policy granting the Managed Identity of the Data Factory permission to read secrets:

The access policies of the key vault grant Get secret permissions to the ADF's Managed Identity

The output of the request is a JSON payload that contains the secret value along with its metadata.


REST API Reference

Here are some links that can help you find the API of interest:

Summary

In this post, I've shown how to execute Azure REST API queries right from the pipelines of either Azure Data Factory or Azure Synapse. Although the pipelines are capable of doing this, they shouldn't be used for large-scale automation efforts that affect many Azure resources. Instead, this capability should be used to complement your data integration needs.

Photo by Sigmund on Unsplash

]]>
<![CDATA[A guide to online account security]]>I've been witnessing how people I know are falling victim to identity theft. Their online accounts get stolen because they make a lot of mistakes when it comes to securing them. I can see every week new accounts of friends popping out in the digital world and asking

]]>
https://www.gatevnotes.com/guide-to-online-account-security/61e44f27d767df2790416127Fri, 21 Jan 2022 18:30:00 GMT

I've been witnessing people I know fall victim to identity theft. Their online accounts get stolen because they make a lot of mistakes when it comes to securing them. Every week I see new accounts of my friends popping up in the digital world and asking me for friendship.

The weakest link in security is usually the end user. I've found myself explaining the various best practices for securing an account over and over again, so I decided to write a blog post about it. Of course, it's impossible to cover everything, and there are a lot of opportunities for making mistakes down the road.


The first factor: The password

Providing a string of characters to prove your identity before a system is still one of the most widely used means of authentication.

Here are several tips for having a strong password:

  • Use completely different passwords for different accounts.
  • The longer and more complex, the better.
  • Change the passwords of your most important accounts occasionally.
  • Don't count on your password solely. I'll tell you why in a second.

Remembering your password

I imagine you have more than a handful of accounts. That means that if you closely follow all the best practices when choosing a password, you won't be able to remember them all unless you hold the Guinness World Record for the most decimal places of pi memorized.

Use a password manager! It offers secure storage and retrieval of your passwords. I've been using Bitwarden for quite some time and I'm happy with it. It's open-source, so you can examine its source code. You can even host it yourself, although that's not a very good idea. There are some other popular password managers as well: LastPass, 1Password, and KeePass. Do your own research and pick what fits you best.

The password manager will make it easy for you to have a unique password per account and will also help you generate a long and complex one.

But wait, my browser already offers to remember my passwords for me?! That's true; however, browsers are not meant to be used for everything. Storing passwords in the browser is prone to a number of attacks. For example, anyone with temporary access to your device can reveal all your autofill passwords. Not to mention that malware often targets autofill data as well. It's just better to use purpose-built software like the password managers I mentioned above. It's a healthy tradeoff between convenience and security.

Why are passwords irrelevant?

You have to assume that sooner or later the password you chose will be in the hands of some attacker. There are various ways for that to happen: you either use simple or common passwords (easy to guess), you were simply asked (phished), or there was a data breach at a website that doesn't properly secure its passwords at rest.

This means that your Pa$$w0rD doesn't matter, but still, do follow the tips I gave you while you still use a password. Yes, you read it correctly.


The second factor: Choosing among options

The only way to protect your accounts is to add Multi-Factor Authentication (MFA or 2FA). This means that not only will you provide your password (something you know), but you will also prove possession of something (a phone number, a security key, etc.) or some physical characteristic, e.g. biometrics.

According to Microsoft, your account is 99.9% less likely to be compromised if you use MFA. Although specific details of how this statistic was derived were not provided, Microsoft manages billions of consumer and work accounts, i.e. Microsoft Accounts and work accounts from Azure Active Directory.

I am going to go through the popular second-factor authentication methods. Some of them are vulnerable. There are two main attack vectors: compromising the communication channel used for delivery/registration, or real-time phishing. Real-time phishing is a clever application of the man-in-the-middle attack that replays the bidirectional flow of communication with a target system while collecting all your passwords and authentication codes.

But when the second factor is combined with a strong password, it is deemed relatively difficult for an account to be compromised. If that happens, you've most likely been the victim of a targeted attack. On the positive side, you are a very special person to someone out there.

OTP codes delivered via SMS

Using One-time password (OTP) codes that are usually delivered via SMS is maybe the most widely used second factor. You type your password and then your phone receives an OTP code that you type back to the website.

As you may be thinking, this method is susceptible to both channel-jacking and real-time phishing. The former is possible because mobile networks aren't as secure as you might think, and in addition, the customer support of the mobile operator can be tricked into issuing a replacement SIM card while you are sleeping. And since that OTP code is not strictly bound to a particular authentication session, it can be used by anyone possessing it while it is valid. In the case of the machine in the middle (real-time phishing), the OTP code will likely be used by the intercepting machine first.

Provided that you have enabled your account with several of the methods I discuss below, I encourage you to disable OTP codes delivered via SMS if the account settings permit it, of course.

OATH codes

OATH (Initiative for Open Authentication) is an industry-wide collaboration to develop two open authentication standards: TOTP (Time-based One-time Password) and HOTP (HMAC-based One-time Password). Both standards output an OTP code; however, the difference from an ordinary OTP is that the user can generate the code themselves, and it doesn't have to be transmitted from the server. The code can be generated by some type of OATH software (more on that below) or by OATH hardware tokens, which are, I would say, uncommon for most of your accounts. Additionally, the code rotates automatically and has a limited period of validity. The QR code you scan during the registration process actually contains a shared secret that is securely stored both on the server and on your device.

It's called time-based (TOTP) because it relies on both the clock of the system you use for generating the code and the clock on the server, and there is a window during which the generated OTP code is valid. TOTP is more widely used than HOTP.

HMAC-based (HOTP) uses an internal counter that is incremented each time you generate an OTP code. When the server receives a valid code of yours, it remembers the corresponding value of that internal counter. There is an allowance window that specifies how far out of sync those counters can be because, as you may imagine, sometimes you will generate codes accidentally or mistype them.
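To make the two algorithms concrete, here is a minimal Python sketch of HOTP and TOTP built on the standard library. It uses the raw shared secret and the SHA-1/6-digit defaults from the RFCs; real authenticator apps additionally base32-decode the secret embedded in the QR code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter,
    # then "dynamic truncation" down to a short numeric code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, period=30, digits=6, now=None):
    # RFC 6238: TOTP is just HOTP with the counter derived from the clock.
    counter = int((time.time() if now is None else now) // period)
    return hotp(secret, counter, digits)

secret = b"12345678901234567890"  # the test secret from RFC 4226
print(hotp(secret, 0))            # '755224', the first RFC 4226 test vector
```

Note the symmetry: the server runs exactly the same computation, which is why the shared secret must stay protected on both ends.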

Many applications implement the TOTP algorithm, the most famous being Microsoft Authenticator and Google Authenticator. Although both apps are great, they share a significant limitation: they are mobile apps. What happens if you have lost your phone or it's simply not around at the time you want to log in somewhere? Moreover, the backup and restore functionality is not the best.

Some password managers also implement TOTP. I encourage you not to use them for this. Why would you store the pre-shared secret in the same place as your password? Is that a true second factor?

Because of all that, I use Authy for generating the TOTP codes for all my personal accounts. It is available for mobile and desktop and can be used on multiple devices. It also offers a secure backup to the cloud. Don't get me wrong, I use Microsoft Authenticator for work-related accounts as it sends push notifications for approval. If I had to type that OTP tens of times a day, I would go crazy.

How secure are the OATH tokens? Since the codes are generated in an app or by a device you own, channel-jacking is almost irrelevant. But the code you type in can be used by anyone who possesses it, which means this factor is still prone to real-time phishing. Overall, it makes a good second factor.

FIDO2 security keys

We are getting to one of, if not the most, secure options available. FIDO2 security keys offer a high level of security by utilizing public-key (asymmetric) cryptography. For every account you register with the security key, it generates a pair of a public and a private key. The private key never leaves the boundary of the security key device, whereas the public key is stored on the server. On subsequent logins, the server uses the public key to verify that the challenges it sent to the authenticator were signed with the corresponding private key. For each operation with the security key, it prompts you to acknowledge the action by either touching a button, scanning your fingerprint, or entering a PIN code.

The security key is typically an external device that communicates with your computer or mobile phone via USB, BLE, or NFC. But keep in mind that you can also have a built-in FIDO2 authenticator on your laptop. I use a Lenovo ThinkPad which has one, and whenever I register a new account I have to be careful where the credential is stored. The thing is, you can have multiple external FIDO2 keys plugged in at the same time along with the platform one. If I accidentally touch the fingerprint scanner of my laptop during the registration process, the account will be registered with the built-in authenticator. That's not what I intend most of the time.

I've been using two FIDO2 Goldengate security keys by TrustKey that come with a fingerprint reader for the past two years. The only difference between them is that one has USB Type-C whereas the other has Type-A. It's a good idea to have a spare security key in case something happens to the one you actively use. You don't want to be locked out of your accounts.

My G310 & G320 FIDO2 security keys by TrustKey solutions

I can see that those models have been revised to also support TOTP, HOTP, and Windows Hello. With the new models and at this price range, I think they are well-positioned against the usual market leader, Yubico.

To understand how FIDO2 keys work at a lower level, you can read more about the two complementary standards that make it all possible: WebAuthn (developed by W3C) and CTAP2 (developed by FIDO Alliance).

How secure are FIDO2 security keys? They are effectively immune to channel-jacking because the private key stays in the key itself. The main risk is your security key being stolen. A biometric security key with a fingerprint reader would make the thief's job very difficult. Losing a security key is generally not considered a big risk, as the person who finds it has to know all the accounts you have registered in the key. As far as I know, there is no way to dump them.


Going passwordless

Passwords are soon to become a thing of the past. If you think about it, with second factors as secure as FIDO2 security keys, your password doesn't add much value to your security posture. It rather adds inconvenience every time you are prompted for it. So a secure second factor can become the only factor.

If you happen to use Azure Active Directory at your company, you can go fully passwordless. Three passwordless methods are supported: FIDO2 keys, Microsoft Authenticator, and Windows Hello for Business. You can learn more here.

Since September 2021, your personal Microsoft Accounts can also go passwordless.


Securing your accounts properly is ultimately your own responsibility, and understanding the different options is very important. Most people postpone such decisions until it is too late and their accounts have been compromised. Don't be like them!

Cover photo by Towfiqu barbhuiya on Unsplash.

]]>
<![CDATA[Announcing my book Introducing Distributed Application Runtime (Dapr)]]>It's been a tough year! My blog hasn't received much of my attention. I am sorry, my dear readers. But I am delighted to announce a new book named Introducing Distributed Application Runtime (Dapr) published by Apress with a foreword written by Yaron Schneider, Principal Software

]]>
https://www.gatevnotes.com/introducing-dapr-distributed-application-run-time/60ec6bb87eed6f0d083e8e8eFri, 30 Jul 2021 12:54:02 GMT

It's been a tough year! My blog hasn't received much of my attention. I am sorry, my dear readers. But I am delighted to announce a new book named Introducing Distributed Application Runtime (Dapr) published by Apress with a foreword written by Yaron Schneider, Principal Software Engineer at Microsoft and Dapr Co-creator.

The book is now available as an eBook and in Paperback on apress.com.

How did I end up writing a technical book?

Let's rewind to early 2020, when COVID-19 started getting attention. At that time I was in between projects, and I had a few client opportunities that eventually fell through because of all the chaos caused by the pandemic. Within a few days, the MVP Summit was canceled. It couldn't get any worse.

Fortunately, I had a lot of spare time to learn and try new and exciting things. One of them was Dapr (Distributed Application Runtime), which I covered in a blog post that became very popular. Not long after that, Apress contacted me with a proposal to write a book. I've always wanted to write a technical book, but somehow I could never squeeze in enough time. Well, this was the perfect chance for me!

However, shortly after we finalized all the details with the publisher, I started working full time on a new project and had to combine the book-writing with it. So I put in long hours, each and every day. Fast forward a year, and the book is published and available. From my perspective, that was a year of lockdowns well spent!

What is this book about?

When I first heard about Dapr I was amazed by the simple concept of reusing its building blocks that make the development of distributed applications simpler. Essentially, you can do more with less.

I always try to think about and understand things conceptually, and I've been doing so since I learned about design patterns. Dapr evoked the same feeling, as it bundles ready-to-use patterns for your microservices applications out of the box.

Introducing Dapr aims to guide you on your way of learning Dapr. You don't need to have a lot of experience working with microservices, container orchestration platforms like Kubernetes, or any common patterns; I tried to provide introductory chapters for all of that. After that, the book walks you through every building block of Dapr in detail. The last part of the book is dedicated to integrating Dapr with various frameworks and programming models like ASP.NET Core, Azure Functions, and Azure Logic Apps.

I tried to provide a lot of examples. Most of them are in .NET Core while a handful of them are based on Node.js (to emphasize the language-agnostic properties of Dapr). Dapr is cloud-agnostic as well but I stuck to a single public cloud platform, that is Microsoft Azure. I couldn't help it... 😊

I want to thank all Dapr members and maintainers for being so helpful and welcoming in the Dapr community. The book couldn't have been possible without them. I want to also thank Mark Russinovich and Yaron Schneider who were among the first to receive an early copy of the book. Then the first early review came in:

Introducing Dapr is a fantastic guide to getting you started on leveraging Dapr to supercharge your cloud native applications.

―Mark Russinovich
Azure CTO and Technical Fellow
Microsoft
Introducing Distributed Application Runtime (Dapr) - Simplifying Microservices Applications Development Through Proven and Reusable Patterns and Practices | Radoslav Gatev | Apress
This book teaches developers Dapr, an event-driven runtime that helps build microservice applications, using a variety of languages and frameworks....
]]>