Mohamad Dbouk (https://mdbouk.com/) · Recent content on Mohamad Dbouk

Run Local AI Models in .NET Like a Boss
https://mdbouk.com/run-local-ai-models-in-.net-like-a-boss/ (Wed, 06 Aug 2025)

Master the art of running local AI models in .NET. This guide covers using Ollama and Docker to build efficient, cost-effective AI-powered applications.

You know that you don’t need to pay for AI services to get the job done, right? If you’re looking to run local AI models in your .NET applications, I’ve got just the thing for you.

In this post, I walk you through how to set up and run local AI models using .NET, Docker, and Ollama. We’ll cover everything from installing the necessary tools to running your first model and building a simple application that uses these models.

If you want, you can also check out the video version of this post on my YouTube channel:

Install And Run Ollama

You will need Docker installed on your machine. Once you have that, you can run Ollama in a container with the following command:

docker run -d -v ollama_data:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Notice the -v ollama_data:/root/.ollama part? This maps a named volume to the directory where Ollama stores its data, so you can keep your models even after stopping the container.

Once the Ollama image is downloaded and the container has started, you can open a shell inside it with the following command:

docker exec -it ollama bash

Download a Model

Now that Ollama is running, you can download a model. For example, to download the llama3.2:1b model, run:

ollama pull llama3.2:1b

or you can use the ollama run command to run the model directly:

ollama run llama3.2:1b

You can also list all available models by running:

ollama list

And since we are using a named volume, you can stop the container and start it again without losing your models.

I’m going to use the llama3.2:1b model for the rest of this post, but you can choose any model that Ollama supports.

Check out the Ollama documentation for more details on available models and how to use them.

Let’s try to run the model and interact with it using the Ollama CLI:

ollama run llama3.2:1b
>>> What is the capital of France?
The capital of France is Paris.
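The CLI is just a front end for Ollama’s HTTP API listening on port 11434, which is what our .NET code will talk to shortly. As a quick sanity check, you could call that API directly with nothing but HttpClient. This is only a sketch, assuming Ollama’s documented /api/generate endpoint and its JSON response shape:

```csharp
// Sketch: calling Ollama's HTTP API directly from C#, no extra packages needed.
// Assumes Ollama is listening on localhost:11434 and llama3.2:1b has been pulled.
using System.Net.Http.Json;
using System.Text.Json;

using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

var reply = await http.PostAsJsonAsync("/api/generate", new
{
    model = "llama3.2:1b",
    prompt = "What is the capital of France?",
    stream = false // ask for one JSON object instead of a token stream
});
reply.EnsureSuccessStatusCode();

using var doc = JsonDocument.Parse(await reply.Content.ReadAsStringAsync());
Console.WriteLine(doc.RootElement.GetProperty("response").GetString());
```

If this prints an answer, your container and model are ready for the .NET steps below.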

Create a .NET Application

Now that we have Ollama running and a model downloaded, let’s create a simple .NET application to interact with the model.

I’m going to start with a console application, but you can adapt this to any .NET application type you prefer.

Create a new console application:

dotnet new console -n LocalAIApp
cd LocalAIApp

Next, add the following packages to your project:

dotnet add package Microsoft.Extensions.AI
dotnet add package Microsoft.Extensions.AI.Ollama

And then, add the following code to your Program.cs file:

using Microsoft.Extensions.AI;

var client = new OllamaChatClient("http://localhost:11434", "llama3.2:1b");

var response = await client.GetResponseAsync("What is the capital of France?");

Console.WriteLine(response.Text);

This code creates an OllamaChatClient that connects to the Ollama server running on http://localhost:11434 (your docker instance) and uses the llama3.2:1b model. It then sends a question to the model and prints the response.

Now, run your application:

dotnet run

You should see the response from the model printed in the console:

The capital of France is Paris.

Explore Further

You may have noticed that the basic example used the GetResponseAsync method, which waits for the complete response before returning. With a local model that wait can be noticeable, and in the meantime you get no sign that the model is even working. The Ollama client also lets you stream the response back as it is generated, which gives immediate feedback and handles larger responses much more gracefully.

Let’s modify our code to use streaming, and we are also going to implement a simple chat loop that allows us to interact with the model continuously.

using Microsoft.Extensions.AI;

var client = new OllamaChatClient("http://localhost:11434", "llama3.2:1b");

Console.WriteLine("Welcome to the Local AI Chat!");
Console.Write("> ");

while (true)
{
    var prompt = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(prompt)) continue;

    await foreach (var response in client.GetStreamingResponseAsync(prompt))
    {
        Console.Write(response.Text);
    }
    
    Console.WriteLine();
    Console.Write("> ");
}

Now when you run your application, you can type in questions and get responses in real-time, similar to what you would expect from an online AI chat service.

dotnet run
Welcome to the Local AI Chat!
> What is the capital of France?
The capital of France is Paris.
> What is the largest planet in our solar system?
The largest planet in our solar system is Jupiter.
>

You can also use the ChatMessage class to send more complex messages, such as system prompts or user messages, which can help the model understand the context better.

You can create a ChatMessage like this:

// ...
List<ChatMessage> chatMessages =
[
    new ChatMessage(ChatRole.System, "You are a sarcastic but helpful AI agent"),
    new ChatMessage(ChatRole.User, prompt)
];

await foreach (var response in client.GetStreamingResponseAsync(chatMessages))
{
    Console.Write(response.Text);
}
// rest of the code here

This allows you to set the context for the model, making it more effective in generating responses that fit your needs.
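One thing worth noting: the snippets above send each prompt in isolation, so the model forgets previous turns. To get a real conversation, you can keep appending to the same message list. A rough sketch building on the chat loop above (treat this as a starting point, not a drop-in implementation):

```csharp
using System.Text;
using Microsoft.Extensions.AI;

var client = new OllamaChatClient("http://localhost:11434", "llama3.2:1b");

// The whole conversation lives in this list; the system prompt goes first.
List<ChatMessage> history =
[
    new ChatMessage(ChatRole.System, "You are a sarcastic but helpful AI agent")
];

Console.WriteLine("Welcome to the Local AI Chat!");

while (true)
{
    Console.Write("> ");
    var prompt = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(prompt)) continue;

    history.Add(new ChatMessage(ChatRole.User, prompt));

    var assistantText = new StringBuilder();
    await foreach (var update in client.GetStreamingResponseAsync(history))
    {
        Console.Write(update.Text);
        assistantText.Append(update.Text);
    }
    Console.WriteLine();

    // Remember the reply so follow-up questions can reference it.
    history.Add(new ChatMessage(ChatRole.Assistant, assistantText.ToString()));
}
```

Because the full history is sent on every turn, you can now ask follow-ups like “and what about Germany?” and the model will know what you are referring to.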

DI Registration in Web API

You may have noticed that we created the OllamaChatClient directly in our code. In a real application, however, you would want to use Dependency Injection (DI) to manage your clients and services.

To showcase how to do this, let’s first create a new web API project:

dotnet new webapi -n LocalAIWebApi
cd LocalAIWebApi

Next, add the same packages as before:

dotnet add package Microsoft.Extensions.AI
dotnet add package Microsoft.Extensions.AI.Ollama 

Now, open the Program.cs file and register the OllamaChatClient in the DI container:

using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddChatClient(new OllamaChatClient("http://localhost:11434", "llama3.2:1b"));

var app = builder.Build();

Next, let’s create a new endpoint using minimal APIs that allows us to interact with the model. We can even make it a simple Roast Me endpoint that uses the AI model to roast the user:

app.MapPost("/roast", async (string input, IChatClient client) =>
{
    var response = await client.GetResponseAsync($"Roast me: {input}");
    return Results.Ok(response.Text);
});

app.Run();

Now, you can run your application:

dotnet run

And test the /roast endpoint using a tool like Postman or curl:

curl -X POST "http://localhost:5000/roast?input=I%20love%20pineapple%20on%20pizza"

(I don’t actually love pineapple on pizza šŸ˜…)

You should get a response from the model roasting you for your pizza preferences.

Conclusion

Running local AI models in .NET is not only possible but also straightforward with tools like Ollama. You can create powerful applications that leverage AI capabilities without relying on external services, giving you more control over your data and costs.

I Built My Own MediatR! Here How You Can Too
https://mdbouk.com/i-built-my-own-mediatr-here-how-you-can-too/ (Mon, 02 Jun 2025)

Discover how to create your own MediatR alternative in .NET. This comprehensive guide explores the Mediator pattern, CQRS, and the use of source generators to streamline your development process.

So, MediatR’s changing things up, right? If you’re looking for a free way to handle messaging in your .NET apps, I’ve got something cool for you.

In my latest video, I walk you through building your own alternative to MediatR. We dive into the Mediator pattern, CQRS (Command Query Responsibility Segregation), and how to implement CQRS using the Mediator pattern.

You’ll see how to set up the basics for sending commands and queries, and how to get the right code to handle them.
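If you prefer reading to watching, the core idea can be sketched in a few lines. This is my rough approximation, not the exact code from the video: a request marker interface, a handler interface, and a tiny mediator that asks the DI container for the matching handler.

```csharp
// Minimal mediator sketch (illustrative, not the video's exact implementation).
public interface IRequest<TResponse> { }

public interface IRequestHandler<TRequest, TResponse> where TRequest : IRequest<TResponse>
{
    Task<TResponse> HandleAsync(TRequest request, CancellationToken cancellationToken = default);
}

public class Mediator(IServiceProvider services)
{
    public Task<TResponse> SendAsync<TResponse>(IRequest<TResponse> request, CancellationToken ct = default)
    {
        // Build the closed generic handler type for this request at runtime.
        var handlerType = typeof(IRequestHandler<,>).MakeGenericType(request.GetType(), typeof(TResponse));

        dynamic handler = services.GetService(handlerType)
            ?? throw new InvalidOperationException($"No handler registered for {request.GetType().Name}");

        // Dynamic dispatch calls HandleAsync on the concrete handler.
        return handler.HandleAsync((dynamic)request, ct);
    }
}
```

The runtime reflection in SendAsync is exactly the kind of cost the video replaces with Source Generators, and Scrutor is what handles the handler registrations.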

We also touch on making things easier with Scrutor for registering these handlers.

But the really interesting part? We explore using Source Generators. This lets you generate some of the wiring code at compile time, which can improve your application’s performance.

Want to see how to build your own message handler in .NET and maybe move away from MediatR? Check out the video!

Let me know what you think!

HybridCache in .NET 9 is Awesome!
https://mdbouk.com/hybridcache-in-.net-9-is-awesome/ (Sat, 23 Nov 2024)

Dive into the innovative HybridCache in .NET 9, a game-changer for caching strategies. Learn how it bridges in-memory and distributed caching for optimal performance and scalability.

.NET 9 is now live, and it comes with a new set of features. Some are great, and some are just icing on the cake. But what really stands out is the new HybridCache!

HybridCache is not just another caching API in .NET; it’s designed to solve problems we didn’t even know we had. It combines the best of two worlds by bridging the gap between IMemoryCache and IDistributedCache. It also introduces advanced features like cache stampede protection and efficient serialization.

In this blog, I’m going to review the existing caching mechanisms, highlight their strengths and weaknesses, and demonstrate how to implement the new HybridCache. We’ll see how it simplifies caching and makes it incredibly easy to use.

Download the source code from here

You can also watch my YouTube video on the same topic here: https://youtu.be/PDIKTbbkmCk

Understanding Existing Caching Mechanisms

IMemoryCache

The **IMemoryCache** provides fast, in-memory caching within the application’s process. However, it has a few limitations: the cache is limited to a single instance, and data is lost if the application restarts or crashes.

IDistributedCache

On the other hand, **IDistributedCache** offers a cache that can be shared across multiple instances, giving you the ability to preserve and distribute cache data across your distributed system. As for the drawbacks, accessing a distributed cache can be slower than an in-memory one due to network overhead and serialization, and it requires additional setup and configuration.
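To make those drawbacks concrete, here is roughly what a cache-aside read looks like with IDistributedCache alone. This is a sketch: the Book record and the loadFromDb delegate are placeholders, not code from a real project.

```csharp
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;

// Placeholder entity for illustration.
public record Book(int Id, string Title, int Year);

public class BookCache(IDistributedCache cache)
{
    public async Task<Book?> GetBookAsync(int id, Func<int, Task<Book?>> loadFromDb)
    {
        var key = $"book-{id}";

        // Every read means a network hop plus manual deserialization.
        var cached = await cache.GetStringAsync(key);
        if (cached is not null)
            return JsonSerializer.Deserialize<Book>(cached);

        // Nothing coordinates concurrent misses: ten simultaneous requests
        // for the same key mean ten database calls (a cache stampede).
        var book = await loadFromDb(id);
        if (book is not null)
            await cache.SetStringAsync(key, JsonSerializer.Serialize(book));

        return book;
    }
}
```

All of that serialization and stampede-handling boilerplate is exactly what the new API takes off your plate.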

Introducing HybridCache

HybridCache in .NET 9 combines both **IMemoryCache** and **IDistributedCache**, providing a unified API that leverages a two-level caching strategy:

  1. Primary Cache (L1): In-memory cache for fast access.

  2. Secondary Cache (L2): Distributed cache for scalability and persistence.

HybridCache comes with some interesting features. One of them is cache stampede protection, which prevents multiple concurrent requests from overwhelming the data source by ensuring that only one request populates the cache for a given key.

It also supports various serialization options, making it a serialization-optimized solution where you can choose the right option for your application.

And my favorite feature is tag-based invalidation, which lets you invalidate a whole group of cache entries that share a specific tag in one efficient operation.
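As a sketch of what that looks like in practice (the key names and the GetBookFromDatabaseAsync helper are made up for illustration), you pass tags when an entry is created and can later invalidate everything carrying that tag:

```csharp
// Tag the entry when it is created...
var book = await hybridCache.GetOrCreateAsync(
    $"book-{id}",
    async ct => await GetBookFromDatabaseAsync(id, ct),
    tags: ["books"]);

// ...and later invalidate the whole "books" group in one call,
// e.g. after a bulk import touched many books at once.
await hybridCache.RemoveByTagAsync("books");
```

This is much cleaner than tracking every individual key you might need to evict.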

Implementing **HybridCache**

First, you need to install the NuGet package:

dotnet add package Microsoft.Extensions.Caching.Hybrid --version "X.X.X" # add the latest version here

Then you need to register HybridCache in your DI container:

// Program.cs
var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddHybridCache();

You can also configure options such as serialization and cache entry settings:

builder.Services.AddHybridCache(options =>
{
    options.MaximumPayloadBytes = 1024 * 1024; // 1 MB
    options.MaximumKeyLength = 1024;
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        Expiration = TimeSpan.FromMinutes(5),
        LocalCacheExpiration = TimeSpan.FromMinutes(5)
    };
});

And to use it in your application, simply inject the **HybridCache** into your service or endpoint and use it to cache data:

public class BookService
{
    private readonly HybridCache _hybridCache;
    public BookService(HybridCache hybridCache)
    {
        _hybridCache = hybridCache;
    }
    
    public async Task<Book?> GetBookAsync(int id)
    {
        var key = $"book-{id}";
        
        return await _hybridCache.GetOrCreateAsync(key, async cancellation =>
        {
            // Get the book from the database, e.g. via your EF Core DbContext
            return await _context.Books.FindAsync(new object[] { id }, cancellation);
        });
    }
}

You can also use **_hybridCache.SetAsync** to set the cache directly.

public async Task<Book?> UpdateYearAsync(int id, int year)
{
    var book = await _context.Books.FindAsync(id);
    if (book is null) return null;
    
    book.Year = year;
    await _context.SaveChangesAsync();
    await _hybridCache.SetAsync($"book-{id}", book);
    
    return book;
}

Or you can use **_hybridCache.RemoveAsync** to remove and invalidate a cache entry:

await _hybridCache.RemoveAsync($"book-{id}");

HybridCache in .NET 9 offers a robust solution for caching by combining the speed of in-memory access with the scalability of distributed caching. Its advanced features like stampede protection and configurable serialization make it a valuable addition to any .NET application.

Happy Coding!

Adding Custom Formatting to Your Classes with IFormattable
https://mdbouk.com/adding-custom-formatting-to-your-classes-with-iformattable/ (Mon, 18 Nov 2024)

Master the art of custom formatting in C# by leveraging the IFormattable interface. This guide provides practical examples to make your class outputs dynamic and user-friendly.

DateTime has a great feature that I often replicate in my classes: the ability for users to format the ToString output however they want. Using it is cool, but building it yourself? Even better.

In this post, I’ll walk you through implementing custom formatting for any class using the IFormattable interface, covering multiple approaches.

You can also check out my YouTube video on the same topic here: https://youtu.be/hocPk1eBUuE

What we’re aiming to achieve is similar to the ToString method with format options found in the DateTime class.

Console.WriteLine(DateTime.Now.ToString("(MM) d - yyyy"));

// Output: (11) 18 - 2024

You’ll see that we can pass a format string to the ToString method, and based on what we specify, we get the corresponding data. Plus, you can customize the output by adding multiple ā€œtokensā€ to the format string.

So, let’s get started by creating a new class called Employee.

public class Employee
{
    public required string FirstName { get; set; }
    public required string LastName { get; set; }
    public required string JobTitle { get; set; }
    public required string CompanyName { get; set; }
}

Next, we’ll implement the IFormattable interface and add a new ToString method.

public class Employee : IFormattable
{
    // (..)
    public string ToString(string? format, IFormatProvider? formatProvider)
    {
        throw new NotImplementedException();
    }
}

I’ll override the ToString method and create another overload with a single parameter for the format.

public class Employee : IFormattable
{
    // (..)
    public override string ToString()
    {
        return ToString("G"); // G as General format
    }

    public string ToString(string? format)
    {
        return ToString(format, CultureInfo.CurrentCulture); // use CurrentCulture for now
    }

    public string ToString(string? format, IFormatProvider? formatProvider)
    {
        // Let's start
        throw new NotImplementedException();
    }
}

Simple Approach

The first approach is straightforward: mapping a ā€œformat tokenā€ to an actual output. For instance, using the format Login should display FirstName.LastName.

public string ToString(string? format, IFormatProvider? formatProvider)
{
    // G, Full => F L is the JobTitle at CompanyName
    // login, L => firstname.lastname
    // userdomain, ud => firstname.lastname@companyname.local

    if (string.IsNullOrEmpty(format))
    {
        format = "G";
    }

    format = format.ToUpper();

    return format switch
    {
        "G" or "FULL" => $"{FirstName} {LastName} is the {JobTitle} at {CompanyName}",
        "L" or "LOGIN" => $"{FirstName}.{LastName}",
        "UD" or "USERDOMAIN" => $"{FirstName}.{LastName}@{CompanyName}.local",
        _ => throw new FormatException($"Invalid format {format}")
    };
}

Note: I decided to make the keywords work for both uppercase and lowercase inputs. To do this, I converted format to an uppercase string using ToUpper.

var employee = new Employee()
{
    FirstName = "John",
    LastName = "Smith",
    JobTitle = "CEO",
    CompanyName = "BMW"
};

Console.WriteLine(employee); // John Smith is the CEO at BMW
Console.WriteLine(employee.ToString()); // John Smith is the CEO at BMW
Console.WriteLine(employee.ToString("G")); // John Smith is the CEO at BMW
Console.WriteLine(employee.ToString("Full")); // John Smith is the CEO at BMW

Console.WriteLine(employee.ToString("L")); // John.Smith
Console.WriteLine(employee.ToString("LOgin")); // John.Smith

Console.WriteLine(employee.ToString("Ud")); // John.Smith@BMW.local
Console.WriteLine(employee.ToString("UserDomain")); // John.Smith@BMW.local

Dynamic Approach

This approach gives users the flexibility to include multiple tokens in a single format. For example, a user can specify the first name, initial of the middle name, job title, etc., all within the same string.

public class Person : IFormattable
{
    public required string FirstName { get; set; }
    public string? MiddleName { get; set; }
    public required string LastName { get; set; }
    
    public override string ToString()
    {
        return ToString("G");
    }

    public string ToString(string? format)
    {
        return ToString(format, CultureInfo.CurrentCulture);
    }

    public string ToString(string? format, IFormatProvider? formatProvider)
    {
        // F. LL
        // FF M. LL
        // F,M LL
        // G => full

        if (string.IsNullOrEmpty(format))
        {
            format = "G";
        }

        Dictionary<string, string> mapper = new Dictionary<string, string>()
        {
            { "F", FirstName[0].ToString() },
            { "FF", FirstName },
            { "M", string.IsNullOrEmpty(MiddleName) ? "" : MiddleName[0].ToString() },
            { "MM", MiddleName ?? "" },
            { "L", LastName[0].ToString() },
            { "LL", LastName },
            { "G", $"{FirstName} {MiddleName} {LastName}" }
        };

        // FF M. L
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < format.Length; i++)
        {
            char c = format[i]; // F
            string token = c.ToString(); // F
        
            while (i + 1 < format.Length && c == format[i + 1])
            {
                token += c;
                i++;
            }
        
            if (mapper.TryGetValue(token, out var value))
            {
                sb.Append(value);
            }
            else
            {
                sb.Append(token);
            }
        }
        
        return sb.ToString();
    }
}

Here’s a breakdown of the code:

First, we create a dictionary of tokens mapped to their respective values. For example, F represents the initial of the first name, and FF represents the full first name.

Then, we loop through the format string, checking each character for matching tokens. We also handle cases where a token could be part of a longer token (e.g., F vs FF).

Once we identify a token, we retrieve its value from the dictionary and append it to a StringBuilder.

Testing this approach gives us the following output:

var person = new Person()
{
    FirstName = "Jack",
    MiddleName = "John",
    LastName = "Smith"
};

Console.WriteLine(person.ToString("G")); // Jack John Smith
Console.WriteLine(person.ToString("FF M. LL")); // Jack J. Smith
Console.WriteLine(person.ToString("FM LL")); // JJ Smith

Flexible Approach

This approach provides even more flexibility by allowing users to append external text to the format string without disrupting the output. For instance, if we try formatting a string like this, it will break:

var person = new Person()
{
    FirstName = "Jack",
    MiddleName = "John",
    LastName = "Smith"
};

Console.WriteLine(person.ToString("Man, L is the LOSER"));
// Jan, S is the SOSER

Notice how the M in ā€œManā€ was replaced by the middle name initial, and the L in ā€œLoserā€ was replaced by the last name initial. To solve this, we can use a different method by wrapping tokens in curly braces {X}.

public string ToString(string? format, IFormatProvider? formatProvider)
{
  // F. LL
  // FF M. LL
  // F,M LL
  // G => full
  
  if (string.IsNullOrEmpty(format))
  {
      format = "{G}";
  }
  
  Dictionary<string, string> mapper = new Dictionary<string, string>()
  {
      { "{F}", FirstName[0].ToString() },
      { "{FF}", FirstName },
      { "{M}", string.IsNullOrEmpty(MiddleName) ? string.Empty : MiddleName[0].ToString() },
      { "{MM}", MiddleName ?? string.Empty },
      { "{L}", LastName[0].ToString() },
      { "{LL}", LastName },
      { "{G}", $"{FirstName} {MiddleName} {LastName}" }
  };
  
  foreach (var key in mapper.Keys)
  {
      format = format.Replace(key, mapper[key]);
  }
  
  return format;
}

Now, instead of iterating through each character in the format string as an array, we loop through the dictionary keys and replace each key in the format string with its corresponding value.

Console.WriteLine(person.ToString("Man, {F}.{L}. is the LOSER"));

// Man, J.S. is the LOSER

With these techniques, you can add custom formatting to your classes and make your code more flexible and easy to use.

Happy Coding!

Azure-Sync: Sync your Azure App Settings to local
https://mdbouk.com/azure-sync-sync-your-azure-app-settings-to-local/ (Sat, 25 May 2024)

Simplify your .NET development workflow with Azure-Sync. This tool syncs Azure App Service environment variables, including KeyVault secrets, to local .NET secrets for seamless debugging.

Azure-Sync is a handy shell script tool designed to help .NET developers working with Azure App Services. Inspired by the functionality provided by the Azure Functions Core Tools (func CLI), Azure-Sync allows you to retrieve all environment variables from a specified Azure App Service, including any Azure KeyVault secrets, and add them to your local .NET secrets.

Check the source code on GitHub: mhdbouk/azure-sync

How to use Azure-Sync

Using azure-sync is straightforward. Here are the steps:

First, clone the Azure-Sync repository to your local machine. You can do this by running the following command in your terminal:

git clone https://github.com/mhdbouk/azure-sync.git

Navigate to the cloned repository and install the tool by running the following command in your terminal:

chmod +x ./install.sh && ./install.sh

This command makes the install.sh script executable and runs it. The script copies azure-sync.sh to /usr/local/bin and makes it executable.

Once installed, you can run azure-sync within your .NET application by passing your Azure App Service name and resource group as arguments:

azure-sync <appname> <app_resource_group>

Replace <appname> with the name of your Azure App Service and <app_resource_group> with the name of the resource group your App Service is in.

Benefits of Using Azure-Sync

Azure-Sync provides several benefits:

  1. Ease of Use: azure-sync is easy to install and use. It requires minimal configuration and can be run with a single command.

  2. Time-Saving: azure-sync automates the process of retrieving environment variables from Azure App Service, saving you the time and effort of doing it manually.

  3. Local Development and Debugging: By syncing your Azure environment variables locally, you can ensure that your local development environment closely matches your production environment. This can help catch potential issues early in the development process. Moreover, it allows you to debug actual production issues by replicating the production environment locally.

  4. Security: azure-sync prioritizes security by retrieving secrets from Azure KeyVault and storing them in .NET user secrets. This approach ensures that your application has all the necessary configuration for local development, while also preventing sensitive information from being accidentally committed to your Git repository. This way, Azure-Sync helps maintain the integrity and confidentiality of your application’s secrets.

Remember, you must be logged in to the Azure CLI with an account that has access to the specified Azure App Service and any referenced Azure KeyVaults.

Happy coding!

Implement Builders easily with Source Generator in .NET
https://mdbouk.com/implement-builders-easily-with-source-generator-in-.net/ (Wed, 28 Feb 2024)

Streamline your .NET development by implementing the Builder pattern with Source Generators. Learn how to automate builder class creation for complex objects with fluent APIs.

I created a YouTube video on Source Generators in which I showcased one possible implementation. However, I feel that I didn’t fully highlight its capabilities. In this blog post, I aim to demonstrate how to use a Source Generator to automatically create builders for a given class.

Check the YouTube video and the source code here

For our Builder Generator implementation, let’s start from scratch. Create a new class library project named SourceGenerator, add the necessary NuGet package references, and add a new class called AutoBuilderGenerator, as shown in the code snippets below.

SourceGenerator.csproj

<Project Sdk="Microsoft.NET.Sdk">

    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <ImplicitUsings>enable</ImplicitUsings>
        <Nullable>enable</Nullable>
        <LangVersion>latest</LangVersion>
        <EnforceExtendedAnalyzerRules>true</EnforceExtendedAnalyzerRules>
    </PropertyGroup>

    <ItemGroup>
      <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.4">
        <PrivateAssets>all</PrivateAssets>
        <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      </PackageReference>
      <PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="4.8.0" />
    </ItemGroup>

</Project>

The EnforceExtendedAnalyzerRules is required to remove the warning that appears when adding the Generator attribute.

AutoBuilderGenerator class

namespace SourceGenerator;

[Generator]
public class AutoBuilderGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // implementation goes here
    }
}

You need to create a provider instance. This provider uses the context to filter and return only the syntax nodes for class declarations that carry an AutoBuilder attribute:

public void Initialize(IncrementalGeneratorInitializationContext context)
{
    var provider = context.SyntaxProvider.CreateSyntaxProvider(
        (node, _) => node is ClassDeclarationSyntax t && t.AttributeLists.Any(x => x.Attributes.Any(a => a.Name.ToString() == "AutoBuilder")),
        (syntaxContext, _) => (ClassDeclarationSyntax)syntaxContext.Node
    ).Where(x => x is not null);

    var compilation = context.CompilationProvider
                             .Combine(provider.Collect());
    
    context.RegisterSourceOutput(compilation, Execute);
}

And now go ahead and create the Execute method:

private void Execute(SourceProductionContext context, (Compilation compilation, ImmutableArray<ClassDeclarationSyntax> classes) tuple)
{
  var (compilation, classes) = tuple;
  
  foreach (var syntax in classes)
  {
    // per class implementation
    var symbol = compilation.GetSemanticModel(syntax.SyntaxTree)
                                    .GetDeclaredSymbol(syntax) as INamedTypeSymbol;
  }
}

Since we already know that the class has the AutoBuilder attribute, we can start gathering all the necessary information to construct our generated code.

First, to get the class namespace we can use the following code:

// get the namespace of the current syntax
var syntaxParent = syntax.Parent;
string @namespace = string.Empty;
if (syntaxParent is BaseNamespaceDeclarationSyntax namespaceDeclaration)
{
    @namespace = namespaceDeclaration.Name.ToString();
}

Now, let’s generate our boilerplate code. I’m going to use a StringBuilder to build the class code string.

Note that I’m adding a static implicit operator here. This will be useful when I want to implicitly convert the builder into the desired object.

string prefixCode = $$"""
              // <auto-generated />
              namespace {{@namespace}};
              
              public class {{symbol!.Name}}Builder
              {
                protected {{symbol!.Name}} {{symbol!.Name}} = new {{symbol!.Name}}();
                public static implicit operator {{symbol!.Name}}({{symbol!.Name}}Builder builder)
                {
                    return builder.{{symbol!.Name}};
                }
              """;

string suffixCode = """
              }
              """;

StringBuilder codeBuilder = new StringBuilder();

codeBuilder.AppendLine(prefixCode);

// append code for every properties

codeBuilder.AppendLine(suffixCode);

Now, let’s identify all the properties that have a setter so we can create a With method for each of them.

Replace the // append code for every properties comment with the following:

var properties = symbol!.GetMembers()
                                    .OfType<IPropertySymbol>()
                                    .Where(x => x.SetMethod is not null);
                                    
foreach (var property in properties)
{
  codeBuilder.AppendLine($@"    public {symbol!.Name}Builder With{property.Name}({property.Type} {property.Name.ToLower()})");
  codeBuilder.AppendLine("    {");
  codeBuilder.AppendLine($@"        {symbol!.Name}.{property.Name} = {property.Name.ToLower()};");
  codeBuilder.AppendLine("        return this;");
  codeBuilder.AppendLine("    }");
  codeBuilder.AppendLine();
}

We can go a step further by ignoring any property with the AutoBuilderIgnore attribute using the following code:

foreach (var property in properties)
{
  // check if property has AutoBuilderIgnore as attribute
  var ignoreAttribute = property.GetAttributes()
                             .Any(x => x.AttributeClass?.Name == "AutoBuilderIgnoreAttribute");
  
  if (ignoreAttribute)
  {
      continue;
  }
  
  // rest of the properties code here
}

If a property is a List, we might also want to generate a method that adds individual items instead of a whole list. I’m using params here so callers can pass multiple items at once if needed.

if (property.Type.ToString().StartsWith("System.Collections.Generic.List<"))
{
    var listType = property.Type.ToString().Replace("System.Collections.Generic.List<", "").Replace(">", "");
    codeBuilder.AppendLine($@"    public {symbol!.Name}Builder Add{property.Name}(params {listType}[] {property.Name.ToLower()})");
    codeBuilder.AppendLine("    {");
    codeBuilder.AppendLine($@"        {symbol!.Name}.{property.Name}.AddRange({property.Name.ToLower()}.ToList());");
    codeBuilder.AppendLine("        return this;");
    codeBuilder.AppendLine("    }");
    codeBuilder.AppendLine();
}

And now register the generated source with the context so we can try everything out:

// end of foreach loop

codeBuilder.AppendLine(suffixCode);
            
context.AddSource($"{symbol!.Name}Builder.g.cs", codeBuilder.ToString());

To test this out, create a new project, add a reference to the SourceGenerator project, and make sure to specify the OutputItemType as Analyzer.

Client.csproj:

<Project Sdk="Microsoft.NET.Sdk">

    <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>net8.0</TargetFramework>
        <ImplicitUsings>enable</ImplicitUsings>
        <Nullable>enable</Nullable>
    </PropertyGroup>

    <ItemGroup>
      <ProjectReference Include="..\SourceGenerator\SourceGenerator.csproj" OutputItemType="Analyzer"  />
    </ItemGroup>

</Project>

Now create the following attributes and a class to test the generated builder:

// Attributes

public class AutoBuilderAttribute : Attribute
{
}

public class AutoBuilderIgnoreAttribute : Attribute
{
}
[AutoBuilder]
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public string Address { get; set; } = string.Empty;
    
    public List<string> Orders { get; set; } = [];

    [AutoBuilderIgnore]
    public double Ignored { get; set; }
}

Now, in your Program.cs, you can start building a Person using the automatically generated PersonBuilder class.

var builder = new PersonBuilder()
              .WithId(1)
              .WithName("John")
              .WithAddress("123 Main St")
              .AddOrders("My Order", "My Second Order");

Person john = builder; // implicit operator will convert the builder into Person.
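That implicit conversion works because the suffix code emits an operator on the generated builder, along these lines (a sketch; the exact member names are assumptions, since the suffix code isn’t shown in this post’s snippets):

```csharp
// Allows: Person john = builder;
public static implicit operator Person(PersonBuilder builder)
    => builder.Person;
```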

Happy Coding!

]]>
Secure On-Premise .NET Application with Azure Key Vault https://mdbouk.com/secure-on-premise-.net-application-with-azure-key-vault/ Fri, 09 Feb 2024 00:00:00 +0000 https://mdbouk.com/secure-on-premise-.net-application-with-azure-key-vault/ Learn how to enhance the security of your on-premise .NET applications by integrating Azure Key Vault. This guide provides a step-by-step approach to secure sensitive data. Suppose you have your Web App and Database server hosted locally on your on-premises servers. You want to use Azure Key Vault with your .NET application to retrieve your app settings, so you can avoid storing them as plain text in the appsettings.json file. In this blog post, I will show you how to integrate Azure Key Vault with your .NET application by using a Service Principal client secret. This approach can be applied to any environment, even an on-premises one.

YouTube Video

I have created a YouTube video that provides a detailed, step-by-step guide on how to secure your .NET application with Azure Key Vault. However, it does not include the part where we use the Client Secret. Check out the full video here:

https://www.youtube.com/watch?v=Mi6ups54bSU

The full source code can be found here: https://github.com/mhdbouk/keyvault-configuration-demo

Create your Azure Key Vault

I’m using the Azure CLI here to create the Key Vault, but you can create the same using the Azure Portal.

Create Resource Group

First, create the resource group that will contain the Azure Key Vault.

az group create --name rg-myapplication --location NorthEurope

Create Key Vault

az keyvault create --name kvmyapplication --resource-group rg-myapplication --location NorthEurope

Make sure to specify a unique name across all Azure

Add your Secret

We are going to store our on-premises database connection string as a secret in the Key Vault. You can do that using the following command:

az keyvault secret set --vault-name kvmyapplication --name "MyApp-ConnectionStrings--DefaultConnection" --value "Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password=myPassword;"

Notice that I named the secret MyApp-ConnectionStrings--DefaultConnection. This will be mapped to our appsettings.json structure later in this blog post.

MyApp- is the prefix identifying our application. This way, we can use the same Key Vault for multiple applications.

-- will later be replaced with the default configuration key delimiter.

Create New App Registration

The app registration will be used to provide access between the web app and the Key Vault: we will generate the needed Secret / Certificate and configure the Key Vault with the required access policy as well.

  1. Navigate to Microsoft Entra ID

  2. Under Manage, click on App Registrations

  3. Click on New Registration. Specify a name for it, and you can keep the Supported Account Type as Single Tenant.

  4. Save the Application (client) ID, Object ID, and Tenant ID.

  5. Once created, navigate to Certificates & secrets. Then create a New Client Secret and save that secret for later.

Add Key Vault Access Policy

Now we need to allow access to the secrets via the App Registration: any application using the App Registration credentials should be allowed to read the secret values.

Using the Azure CLI, I’m going to create a new access policy, but you can use the Azure Portal to perform the same thing.

az keyvault set-policy --name kvmyapplication --object-id app_registration_object_id --secret-permissions get list

--object-id app_registration_object_id is the app registration’s Object ID, which we saved when creating the app registration.

We need to allow Get and List secret permissions. Make sure to specify only the needed permissions for your use case.

More Info about set-policy here

Coding TIME

Now, in your .NET Web Application, add package references for the following packages (the ones used in the code below): Azure.Extensions.AspNetCore.Configuration.Secrets and Azure.Identity.

Now add the following to your appsettings.json

{
  // existing appsettings keys
  "KeyVault": {
    "url": "https://kvmyapplication.vault.azure.net/",
    "clientId": "CLIENT ID HERE",
    "clientSecret": "CLIENT SECRET HERE",
    "tenantId": "TENANT ID HERE" 
  }
}

Next, navigate to Program.cs and configure your web application with the KeyVault configuration:

using Azure.Extensions.AspNetCore.Configuration.Secrets;
using Azure.Identity;
using KeyVaultYouTubeDemo;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

if (builder.Environment.IsProduction())
{
    var keyVaultUri = new Uri(builder.Configuration["KeyVault:url"]!);
    var tenantId = builder.Configuration["KeyVault:TenantId"];
    var clientId = builder.Configuration["KeyVault:ClientId"];
    var clientSecret = builder.Configuration["KeyVault:ClientSecret"];
    
    builder.Configuration.AddAzureKeyVault(
        keyVaultUri,
        new ClientSecretCredential(tenantId, clientId, clientSecret),
        new AzureKeyVaultConfigurationOptions()
        {
            Manager = new CustomSecretManager("MyApp"),
            ReloadInterval = TimeSpan.FromSeconds(30)
        }
    );
}

We are configuring the Key Vault for use only in a production environment. For the development environment, it is best to use User Secrets.

Note that we are retrieving the tenantId, clientId, and clientSecret from the configuration. We are using these to create a ClientSecretCredential.

AzureKeyVaultConfigurationOptions is used to specify the Manager, which prepares the secrets from the Key Vault. The ReloadInterval ensures that our app keeps refreshing the secrets from the Key Vault, eliminating the need to restart.

Now, create a new class called CustomSecretManager and add the following to it:

using Azure.Extensions.AspNetCore.Configuration.Secrets;
using Azure.Security.KeyVault.Secrets;

namespace KeyVaultDemo;

public class CustomSecretManager : KeyVaultSecretManager
{
    private readonly string _prefix;
    public CustomSecretManager(string prefix)
    {
        _prefix = $"{prefix}-";
    }

    public override bool Load(SecretProperties secret)
        => secret.Name.StartsWith(_prefix);

    public override string GetKey(KeyVaultSecret secret)
        => secret.Name[_prefix.Length..].Replace("--", ConfigurationPath.KeyDelimiter);
}

This class is used to load and get the keys. It retrieves all secrets starting with the prefix, allowing our Key Vault to be used with multiple applications. It also replaces the -- in the secret name with the Key Delimiter used for the configuration, which is “:” in our case.

That’s it! Now, when you run the application, it will connect to your Key Vault to retrieve the app secrets.
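For example, with the secret created earlier (MyApp-ConnectionStrings--DefaultConnection), the prefix is stripped and -- becomes :, so the value surfaces under the standard configuration path and can be read like any other setting. A minimal sketch inside Program.cs:

```csharp
// Both lines resolve the value of the Key Vault secret
// MyApp-ConnectionStrings--DefaultConnection.
string? fromPath = builder.Configuration["ConnectionStrings:DefaultConnection"];
string? fromHelper = builder.Configuration.GetConnectionString("DefaultConnection");
```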

Happy Coding!

]]>
Running Integration Tests with Docker in .NET using TestContainers https://mdbouk.com/running-integration-tests-with-docker-in-.net-using-testcontainers/ Thu, 30 Nov 2023 00:00:00 +0000 https://mdbouk.com/running-integration-tests-with-docker-in-.net-using-testcontainers/ Learn how to run integration tests in .NET using Docker and TestContainers. This guide covers setting up a TestContainer for SQL Edge, creating a DbContext, and performing tests against a real database instance. Hello everyone, in today’s post I will show you the easiest and cleanest way to perform integration testing of your code with database dependencies with the help of docker and TestContainers.

Unlike unit tests, where you can use an in-memory database or mock the database calls, integration tests need to exercise the actual calls to the database. An empty in-memory database will not represent what the system really does, especially when your code is tightly coupled to specific database behaviors or relies on configurations that an in-memory setup won’t capture accurately. For that reason, we will use Docker to run a containerized database image and perform our tests against it, and we will do so with the help of TestContainers.

What is TestContainers?

TestContainers is a .NET library that simplifies the management of Docker containers for testing. It provides an API to interact with containers directly from your test suite, supporting a wide range of services, including popular databases such as SQL Server, MySQL, PostgreSQL, and MongoDB, along with other services like Selenium for browser automation or RabbitMQ for message broker tests. We will focus on one notable TestContainers module, the Azure SQL Edge module, which allows an easy setup of Azure SQL Edge instances in Docker for testing. This lets us configure our test environment very similarly to our production apps hosted in Azure using Azure SQL Database.

Add TestContainers to your test project

To view the source code of this project, check it out on my GitHub repository github/mhdbouk/testcontainers-demo

To begin with TestContainers, we can start by adding the following NuGet packages to your test project:

dotnet add package Testcontainers.SqlEdge

Then, to create and use the Testcontainer, add the following to the test

var container = new SqlEdgeBuilder()
          .WithImage("mcr.microsoft.com/azure-sql-edge")
          .Build();
          
await container.StartAsync();
          
var connectionString = container.GetConnectionString();

This code will create and run a Docker container with the mcr.microsoft.com/azure-sql-edge image - the official Azure image - and container.GetConnectionString() will return the connection string of that database server, so we can use it to create our DB context:

var context = new TodoDbContext(new DbContextOptionsBuilder<TodoDbContext>().UseSqlServer(container.GetConnectionString()).Options);

await context.Database.EnsureCreatedAsync();

var repository = new TodoRepository(context);

// Perform the tests on the repository

Make sure that Docker is running when you run the tests.
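In a real test project you’d usually tie the container’s lifetime to the test class. Here’s a minimal xUnit sketch (TodoDbContext and TodoRepository are the demo’s types; the class and test names here are my assumptions):

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Testcontainers.SqlEdge;
using Xunit;

public class TodoRepositoryTests : IAsyncLifetime
{
    private readonly SqlEdgeContainer _container = new SqlEdgeBuilder()
        .WithImage("mcr.microsoft.com/azure-sql-edge")
        .Build();

    // Start the container before any test in this class runs
    public Task InitializeAsync() => _container.StartAsync();

    // Tear the container down once the tests are done
    public Task DisposeAsync() => _container.DisposeAsync().AsTask();

    [Fact]
    public async Task Can_add_and_read_a_todo()
    {
        var options = new DbContextOptionsBuilder<TodoDbContext>()
            .UseSqlServer(_container.GetConnectionString())
            .Options;

        await using var context = new TodoDbContext(options);
        await context.Database.EnsureCreatedAsync();

        var repository = new TodoRepository(context);
        // ... exercise the repository against the real database here
    }
}
```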

Happy testing and coding!

]]>
CSharpRepl – Your Ultimate C# Playground in the Terminal https://mdbouk.com/csharprepl-your-ultimate-c%23-playground-in-the-terminal/ Sat, 07 Oct 2023 00:00:00 +0000 https://mdbouk.com/csharprepl-your-ultimate-c%23-playground-in-the-terminal/ Discover CSharpRepl, a command-line tool that allows you to write and execute C# code directly in your terminal. It's a powerful playground for C# developers, enabling quick experimentation and testing without the need for a full IDE. So, listen up! CSharpRepl is this awesome command-line tool that lets you get hands-on with C# programming without any hassle. With it, you can write, execute, and experiment with your code, all within your terminal. This means you no longer have to create a new console app each time you want to test something in C#.

It’s super easy to use, with fancy features like code highlighting and instant error checking to keep you on track, and it even supports adding NuGet packages.

Go ahead and install the latest dotnet SDK and run the following in your terminal

dotnet tool install -g csharprepl

After installation, simply type csharprepl and press enter to use the tool.

If you happen to make any mistakes, there is an error checking mechanism in place to help you identify and correct them.

You can install external NuGet packages or add references to existing projects using the #r command. Additionally, installed packages provide IntelliSense.
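As a quick sketch of what that can look like in a session (as far as I know, NuGet references use the #r "nuget: PackageName" form; output formatting may vary by version):

```csharp
// Inside a csharprepl session:
#r "nuget: Newtonsoft.Json"

using Newtonsoft.Json;
JsonConvert.SerializeObject(new { Name = "Mohamad" })
```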

Go give Will a star on GitHub (https://github.com/waf/CSharpRepl) and explore more features.

Happy Coding!

]]>
Bringing Blazor to Desktop and Mobile with MAUI https://mdbouk.com/bringing-blazor-to-desktop-and-mobile-with-maui/ Sun, 24 Sep 2023 00:00:00 +0000 https://mdbouk.com/bringing-blazor-to-desktop-and-mobile-with-maui/ Learn how to host a Blazor WebAssembly application as a native desktop and mobile app using .NET MAUI. This guide covers the setup, shared components, and how to run your app on multiple platforms. I wanted to host a Blazor WebAssembly application natively as a desktop application, and to achieve that, I planned to use a MAUI Blazor app. However, I wanted to avoid duplicating the Razor pages between both of my projects since I intended to continue using the web version as well. In this blog post, I will show you how to accomplish this. Consider using this approach to host your SaaS application in the cloud and simultaneously offer support for a native desktop app, much like Slack.

To keep things simple, I’ll be using the Tic Tac Toe project we made in our previous blog post. We’ve already created a Blazor WebAssembly app and will now adapt it to run on both desktop and mobile. You can find the complete source code here: mhdbouk/tictactoe-blazor.

What is .NET MAUI?

.NET MAUI is a multi-platform native UI framework developed by Microsoft that allows deployment to multiple devices across mobile and desktop using a single project codebase.

On the other hand, Blazor is also used to build native client apps, but it takes a hybrid approach. Hybrid apps are native applications that make use of web technologies for their functionality. In a Blazor Hybrid app or MAUI Blazor App, Razor components run directly within the native app (rather than on WebAssembly), alongside other .NET code, and they render the web UI (HTML, CSS) to a web view control called BlazorWebView. Utilizing Blazor with .NET MAUI provides a convenient way to create cross-platform Blazor Hybrid apps for both mobile and desktop.

source: Microsoft

Setting Up the Development Environment

If you don’t have a Blazor web app and a MAUI Blazor app in your solution, you can easily create them using the following commands or by using the Visual Studio interface.

dotnet workload install wasm-tools # install the blazor wasm tools
dotnet workload install maui # install maui

# Create the solution & the projects
dotnet new sln -n TicTacToe

dotnet new blazorwasm -o TicTacToe.Web
dotnet new maui-blazor -o TicTacToe.Maui

dotnet sln add TicTacToe.Web/TicTacToe.Web.csproj
dotnet sln add TicTacToe.Maui/TicTacToe.Maui.csproj

New Shared Class Library Project

In your solution, start by creating a new Class Library project named Shared. This project will serve as a central repository for all the common Blazor functionality, including Razor files, components, pages, and any static CSS/JS files.

After creating the project, proceed to update the .csproj file by replacing its content with the following:

<Project Sdk="Microsoft.NET.Sdk.Razor">

    <PropertyGroup>
        <TargetFramework>net7.0</TargetFramework>
        <Nullable>enable</Nullable>
        <ImplicitUsings>enable</ImplicitUsings>
    </PropertyGroup>

    <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.Components.Web" Version="7.0.0" />
    </ItemGroup>

</Project>

First, we changed the SDK type from Microsoft.NET.Sdk to Microsoft.NET.Sdk.Razor. This is a very important step, as, without it, the Blazor and MAUI projects will not be able to communicate with the Shared project. Also, make sure to add the Microsoft.AspNetCore.Components.Web NuGet package so we can resolve our Blazor dependencies.

As of the date of writing this blog post, I had to downgrade all my projects to the initial release of .NET 7 due to an unsupported package (Microsoft.Extensions.Logging.Abstractions).
reference: https://github.com/dotnet/maui/issues/16244
and this is the error message
TicTacToe.Maui.csproj: Error NU1605 : Warning As Error: Detected package downgrade: Microsoft.Extensions.Logging.Abstractions from 7.0.1 to 7.0.0. Reference the package directly from the project to select a different version. TicTacToe.Maui -> TicTacToe.Shared -> Microsoft.AspNetCore.Components.Web 7.0.11 -> Microsoft.AspNetCore.Components 7.0.11 -> Microsoft.AspNetCore.Authorization 7.0.11 -> Microsoft.Extensions.Logging.Abstractions (>= 7.0.1) TicTacToe.Maui -> Microsoft.Extensions.Logging.Abstractions (>= 7.0.0)

Move common files to Shared

Start by moving everything that is shared between the two projects. This includes pages, components, and layout files like the MainLayout.razor file. This step will ensure that both projects have access to the same components and resources.

Create a new file called _Imports.razor in the Shared project and add the following to it:

@using Microsoft.AspNetCore.Components.Routing
@using Microsoft.AspNetCore.Components.Web

Once you’ve copied all the shared logic, make sure you update all the namespaces to reflect the Shared project. For example, if you are moving files from your web Blazor project and the namespace is called TicTacToe.Web, rename it to TicTacToe.Shared:

namespace TicTacToe.Web; // <- change this to

namespace TicTacToe.Shared; // <- this one

Another thing you should do is create a new wwwroot folder and add all the CSS/JS/assets files from your web project (we will update the references in index.html later in the post).

Update Blazor Web App and MAUI Blazor App projects

We are going to make a few changes in both of our projects (Blazor Web and MAUI Blazor App). These changes are necessary to establish a connection between the projects and make the shared components accessible.

First, we are going to add a reference to the Shared project, update the _Imports.razor file, update the App.razor & Main.razor files with the additional assembly attribute, and fix our index.html static file mapping.

_Imports.razor

A few things are needed here. First, add a reference to the new Shared project in the web project. Then, add the using statement to the _Imports.razor file in both the Blazor Web app and the MAUI Blazor App. This step lets other files like App.razor or Main.razor resolve the shared components and files.

... // other using

@using TicTacToe.Shared;

App.razor / Main.razor

We need to update the App.razor in the Blazor web project and the Main.razor in the MAUI app project, by including a reference to the MainLayout class using the AdditionalAssemblies attribute. Open both files and update the content with the following:

<Router AppAssembly="@typeof(App).Assembly" AdditionalAssemblies="new[] { typeof(MainLayout).Assembly }" PreferExactMatches="true">
    <Found Context="routeData">
        <RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
        <FocusOnNavigate RouteData="@routeData" Selector="h1" />
    </Found>
    <NotFound>
        <PageTitle>Not found</PageTitle>
        <LayoutView Layout="@typeof(MainLayout)">
            <p role="alert">Sorry, there's nothing at this address.</p>
        </LayoutView>
    </NotFound>
</Router>

Notice how we added AdditionalAssemblies="new[] { typeof(MainLayout).Assembly }". This is needed to let the project resolve the MainLayout class from outside of the current assembly.

index.html

Open the index.html file and make sure to reference the static files in the following format:

...
<head>
  ...
  
  <link href="_content/TicTacToe.Shared/css/app.css" rel="stylesheet" />
  
  ...
</head>
...

Start the hypertext reference (href) with _content/{{Shared project Name}}/{{the old file path}}.

Repeat the same for other files (JS, fonts, …).

While working on this, XCode automatically updated to version 15, and unfortunately, MAUI in .NET 7 does not yet offer support for this version. The good news is that it will be supported in .NET 8. To address this situation, I had to use Xcodes to install the older version of Xcode, allowing me to run the app on an iPhone with the older iOS 16 simulation.

Conclusion

Tic Tac Toe app running in the browser, iPhone simulator, and native desktop app

In completing these steps, you’ve successfully achieved seamless integration between your Blazor WebAssembly application and the MAUI platform, allowing you to run your app both in a web browser and as a desktop or mobile application. Now, it’s time to enjoy the flexibility of this setup. Happy coding!

]]>
Let's Build a Tic Tac Toe Game with Blazor WebAssembly! https://mdbouk.com/lets-build-a-tic-tac-toe-game-with-blazor-webassembly/ Tue, 29 Aug 2023 00:00:00 +0000 https://mdbouk.com/lets-build-a-tic-tac-toe-game-with-blazor-webassembly/ In this tutorial, we will build a simple Tic Tac Toe game using Blazor WebAssembly. We&#39;ll create a game board, implement the game logic, and enhance the user experience with CSS styles. Hey there, fellow developer! In this tutorial, I’ll walk you through building an awesome Tic Tac Toe game using Blazor WebAssembly. So grab your coding gear, and let’s get started!

To check the full source code, you can visit github/mhdbouk/tictactoe-blazor.

Step 1: Set Up the Blazor WebAssembly Project

First things first, we need to set up a new Blazor WebAssembly project. Do your thing with the command line or your favorite IDE like Visual Studio or Visual Studio Code. Get that project ready for some Tic Tac Toe action!

dotnet new blazorwasm-empty -o TicTacToe

Step 2: Create the Game Board Component

Now, we have to create a component to show off the game board. We want a 3x3 grid where players can make their moves. So go ahead and create a new component called `GameBoardComponent`.

Inside the razor file component, we will create three divs for rows. Inside each of these divs, we will have three different divs for the columns.

<div>
    <div>1</div>
    <div>2</div>
    <div>3</div>
</div>

<div>
    <div>4</div>
    <div>5</div>
    <div>6</div>
</div>

<div>
    <div>7</div>
    <div>8</div>
    <div>9</div>
</div>

@code {
    
}

Add the following imports inside _Imports.razor:

@using TicTacToe
@using TicTacToe.Components

Now, we need to add the GameBoardComponent inside our main Pages/Index.razor. Replace the content of the file with the following:

@page "/"

<GameBoardComponent></GameBoardComponent>

Run the application, and you should see something like this:

Let us add a few CSS styles to make it look better. Open wwwroot/css/app.css and add the following

ChatGPT wrote the following CSS, so if you have any feedback, please let me know how we can make it better in the comments below!


.game-container {
    width: 300px;
    margin: 50px auto 0;
}

/* Style each row */
.row {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    gap: 10px;
    margin: 10px;
}
/* Style the individual cells */
.row div {
    width: 80px; 
    height: 80px;
    display: flex;
    align-items: center;
    justify-content: center;
    background-color: #2c3e50; 
    font-size: 24px;
    color: #ecf0f1;
    border: 2px solid #34495e;
    border-radius: 8px;
    cursor: pointer;
    transition: background-color 0.3s, color 0.3s;
}

/* Add hover effect */
.row div:hover {
    background-color: #3498db; /* Brighter blue on hover */
    color: #ffffff; /* White text on hover */
}

And now we have something like this

Step 3: Implement the User Logic

Alright, now it’s time to add some brains to our game. We need to implement the game logic, so it can keep track of what’s happening. You know, figuring out the game state, handling player moves, and checking for a winner.

In your project, create a new class file called GameBoardComponent.razor.cs. This class will contain the C# code for the GameBoardComponent component. Here’s an example of how it can be structured:

using Microsoft.AspNetCore.Components;

namespace TicTacToe.Components;

public partial class GameBoardComponent : ComponentBase
{
    protected override void OnInitialized()
    {
        base.OnInitialized();
        Console.WriteLine("Init");
    }
}

This separation allows you to keep the C# code separate from the Razor code, making it easier to maintain and understand.

First, let us remove the duplicated code in the razor file. By using nested for loops, we can create the boxes without having to duplicate the HTML. Replace the razor code in GameBoardComponent.razor with the following code:

<div class="game-container">
    @for (int i = 0; i < 3; i++)
    {
        <div class="row" id="@i">
            @for (int j = 0; j < 3; j++)
            {
                int row = i;
                int col = j;
                <div id="column-@row-@col"></div>
            }
        </div>
    }
</div>

And now, let us add the click event. When the user clicks on a box, we need to capture the box’s coordinates and display them in a console log:

GameBoardComponent.razor:

<div class="game-container">
    @for (int i = 0; i < 3; i++)
    {
        <div class="row" id="@i">
            @for (int j = 0; j < 3; j++)
            {
                int row = i;
                int col = j;
                <div @onclick="() => BoxClicked(row, col)" id="column-@row-@col"></div>
            }
        </div>
    }
</div>

And add the following method into the GameBoardComponent.razor.cs:

using Microsoft.AspNetCore.Components;

namespace TicTacToe.Components;

public partial class GameBoardComponent : ComponentBase
{
    protected override void OnInitialized()
    {
        base.OnInitialized();
        Console.WriteLine("Init");
    }

    private void BoxClicked(int row, int col)
    {
        Console.WriteLine($"Box clicked at {row},{col}");
    }
}

In this blog, we are going to assume that the user always moves first and plays X, while the AI plays O.

For that, let us update our code by adding a two-dimensional array of strings.

GameBoardComponent.razor.cs:

using Microsoft.AspNetCore.Components;

namespace TicTacToe.Components;

public partial class GameBoardComponent : ComponentBase
{
    // Add 2 dimensional array
    private string[,] _board = new string[3, 3];
    protected override void OnInitialized()
    {
        base.OnInitialized();
        Console.WriteLine("Init");
    }

    private void BoxClicked(int row, int col)
    {
        // Return if there is an existing value
        if (!string.IsNullOrEmpty(_board[row, col]))
        {
            return;
        }
        
        // Set the value as X (user option)
        _board[row, col] = "X";
        
        // Call this method to let the AI play its turn
        NextTurn();
    }
}

And in the GameBoardComponent.razor file:

<div class="game-container">
    @for (int i = 0; i < 3; i++)
    {
        <div class="row" id="@i">
            @for (int j = 0; j < 3; j++)
            {
                int row = i;
                int col = j;
                // Display the value in the array
                <div @onclick="() => BoxClicked(row, col)" id="column-@row-@col">@_board[row, col]</div>
            }
        </div>
    }
</div>

NextTurn() is basically the AI’s turn; it will be triggered after the user checks a box.

Step 4: Implement the AI Logic

To create an enjoyable AI experience, it is crucial to develop a logic that provides a balanced challenge to players. While the AI should offer a fair gameplay experience, it should also present a reasonable level of difficulty that allows players to win with skill and strategy.

Let us start by implementing the NextTurn method to handle the AI’s turn and check for a winner after each move. The AI should first check if it has any winning move, or if it has any user winning move to block, and if not, it should randomly pick a box.

Here’s an updated version of the GameBoardComponent class that includes the NextTurn method:

...
private void NextTurn()
{
    var (row, col) = GetWinningMove("O");
    if ((row, col) == (null, null))
    {
        (row, col) = GetWinningMove("X");
        if ((row, col) == (null, null))
        {
            (row, col) = RandomTurn();
        }
    }
    
    _events.Add($"AI placed O at {row},{col}"); // _events is a List<string> field (declared elsewhere in the component) used to log moves
    _board[row!.Value, col!.Value] = "O";
}
...

Within the BoxClicked function, once the user has made their move, we proceed to invoke the NextTurn function to transition to the AI’s turn. Inside this NextTurn function, we first check for the AI’s winning move by using the GetWinningMove("O") method. If no winning move is found, we then search for the user’s winning move using GetWinningMove("X") to block the user from winning. In the event that neither a winning move for the AI nor the user is identified, we resort to generating a random move for the AI.

This is the implementation of GetWinningMove method:

private (int? row, int? col) GetWinningMove(string player)
{
    for (int row = 0; row < 3; row++)
    {
        for (int col = 0; col < 3; col++)
        {
            if (_board[row, col] == null)
            {
                _board[row, col] = player;
                if (CheckWinner(player))
                {
                    return (row, col);
                }
                _board[row, col] = null; // Reset the move if it didn't result in a win
            }
        }
    }

    return (null, null);
}

And the following is the implementation of RandomTurn method

private (int row, int col) RandomTurn()
{
    var row = RandomNumberGenerator.GetInt32(0, 3);
    var col = RandomNumberGenerator.GetInt32(0, 3);
    return _board[row, col] == null ? (row, col) : RandomTurn();
}

When it comes to generating random numbers, it is recommended to use the RandomNumberGenerator class instead of the Random class. This is because RandomNumberGenerator provides a more secure and cryptographically strong source of randomness, making it suitable for applications that require high levels of randomness and security.
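One caveat with the recursive version above: when the board is nearly full, it keeps guessing occupied cells before landing on a free one. An alternative worth considering (my variation, not the demo’s code) is to collect the empty cells first and pick one of them; note that RandomNumberGenerator lives in System.Security.Cryptography:

```csharp
private (int row, int col) RandomTurn()
{
    // Collect the coordinates of all empty cells first...
    var empty = new List<(int row, int col)>();
    for (int row = 0; row < 3; row++)
        for (int col = 0; col < 3; col++)
            if (_board[row, col] == null)
                empty.Add((row, col));

    // ...then pick exactly one of them at random.
    return empty[RandomNumberGenerator.GetInt32(0, empty.Count)];
}
```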

Step 5: Check for a Winner

Now that we have implemented the game logic for the user and the AI, let’s add the necessary code to check for a winner after each move. We want to determine whether there is a winning combination on the game board.

Update the GameBoardComponent class with the following code:

private bool CheckWinner(string player)
{
    // Check rows
    for (int row = 0; row < 3; row++)
    {
        if (_board[row, 0] == player && _board[row, 1] == player && _board[row, 2] == player)
            return true;
    }

    // Check columns
    for (int col = 0; col < 3; col++)
    {
        if (_board[0, col] == player && _board[1, col] == player && _board[2, col] == player)
            return true;
    }

    // Check diagonals
    if ((_board[0, 0] == player && _board[1, 1] == player && _board[2, 2] == player) ||
        (_board[0, 2] == player && _board[1, 1] == player && _board[2, 0] == player))
    {
        return true;
    }

    return false;
}

private bool CheckTie()
{
    for (int row = 0; row < 3; row++)
    {
        for (int col = 0; col < 3; col++)
        {
            if (_board[row, col] == null)
            {
                return false;
            }
        }
    }

    return true;
}

In this code, we check for a winning combination by iterating through each row, column, and diagonal on the game board. If we find a sequence of three matching symbols for the given player, we return true, indicating that the player has won.

To check for a tie, we iterate through all the cells and verify that each one is filled. If every cell contains a value and no winner was found, the game is a tie. Make sure to call this method after checking for a winner.
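To tie these checks together, a move handler might look like the following sketch. The MakeMove name and the _currentPlayer and _statusMessage fields are assumptions for illustration; adapt them to your component:

```csharp
private void MakeMove(int row, int col)
{
    if (_board[row, col] != null) return; // Ignore clicks on occupied cells

    _board[row, col] = _currentPlayer;

    if (CheckWinner(_currentPlayer))
    {
        _statusMessage = $"{_currentPlayer} wins!";
        return;
    }

    if (CheckTie())
    {
        _statusMessage = "It's a tie!";
        return;
    }

    // Switch turns between the user (X) and the AI (O)
    _currentPlayer = _currentPlayer == "X" ? "O" : "X";
}
```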

Additional Enhancements

Congratulations! You have successfully built a Tic Tac Toe game using Blazor WebAssembly. But don’t stop here. Here are a few more enhancements you can make to take your game to the next level:

  • Add a restart game button to allow the players to start a new game without refreshing the page.

  • Add animations and transitions to make the game more visually appealing.

  • Implement a multiplayer mode using SignalR to allow users to play against each other over the internet.

Feel free to experiment and add your own creative touches to make the game even more exciting!
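As a starting point for the restart button, one possible sketch (the method and field names are assumptions) simply clears the board and resets the state:

```csharp
private void RestartGame()
{
    _board = new string[3, 3];  // All cells back to null
    _currentPlayer = "X";       // Let the user start again
    _statusMessage = string.Empty;
}
```

You could then wire it up in the markup with something like `<button @onclick="RestartGame">Restart</button>`.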

Conclusion

In this tutorial, you have learned how to build a Tic Tac Toe game using Blazor WebAssembly. You have set up the project, created the game board component, implemented the user and AI logic, and added the game outcome checking. By following the steps and enhancements mentioned, you can create a fully functional Tic Tac Toe game with a visually appealing user interface.

I hope you have enjoyed this tutorial and found it helpful. Happy coding and have fun playing Tic Tac Toe!

]]>
Mapperly: The Coolest Object Mapping Tool in Town https://mdbouk.com/mapperly-the-coolest-object-mapping-tool-in-town/ Thu, 13 Jul 2023 00:00:00 +0000 https://mdbouk.com/mapperly-the-coolest-object-mapping-tool-in-town/ Hey there, developers! Today I want to talk about an astonishing .NET library called Mapperly, which has been gaining much attention in the developer community. Mapperly is a powerful source generator that simplifies the implementation of object-to-object mappings in .NET applications. Mapperly takes mapping to a whole new level by generating mapping code for you based on the mapping method signatures you define.

If you’re tired of writing repetitive mapping code and seeking a seamless solution to simplify object mappings in your .NET projects, Mapperly is the answer you’ve been waiting for. Join me in this blog post to learn more!

You got my attention, tell me more

One of the remarkable advantages of using Mapperly is that it generates the mapping code at build time, resulting in minimal runtime overhead. This means your application runs smoothly without any performance compromises. Plus, the generated code is incredibly readable, allowing you to easily understand and verify the mapping logic behind the scenes.

Mapperly leverages the capabilities of .NET Source Generators, avoiding the need for runtime reflection. This not only makes the generated code highly optimized but also ensures compatibility with Ahead-of-Time (AOT) compilation and trimming. Your application remains efficient and maintains excellent performance.

Notably, Mapperly is known for being one of the fastest .NET object mappers available. In fact, it even surpasses the traditional manual mapping approach in terms of speed and efficiency. These exceptional performance benchmarks have been verified using Benchmark.netCoreMappers.

Source: https://github.com/mjebrahimi/Benchmark.netCoreMappers

Getting Started

To install Mapperly, you just need to add a NuGet reference to the Riok.Mapperly package:

dotnet add package Riok.Mapperly

Creating a Mapper: Mapping a Class to a DTO

Now, let’s create a mapper using Mapperly to map a Person class to a PersonDto data transfer object (DTO).

Here’s the Person class definition:

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<string> Tags { get; set; }
}

And the corresponding PersonDto class definition:

public class PersonDto
{
    public int PersonId { get; set; }
    public string Name { get; set; }
    public IReadOnlyCollection<TagDto> Tags { get; set; }
}

public record TagDto(string tag);

Now, let’s create our mapper class, PersonMapper, and define the mapping method PersonToPersonDto. We’ll mark it with the [Mapper] attribute and we will create it as partial:

using Riok.Mapperly.Abstractions;

[Mapper]
public partial class PersonMapper
{
    [MapProperty(nameof(Person.Id), nameof(PersonDto.PersonId))] // Map property with a different name in the target type 
    public partial PersonDto PersonToPersonDto(Person person);
}

Since the Id and PersonId properties are named differently, we added a [MapProperty] attribute to configure the mapping.
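With the mapper defined, using it is just a method call. A quick sketch (the sample data here is made up for illustration):

```csharp
var mapper = new PersonMapper();

var person = new Person
{
    Id = 42,
    Name = "Ada",
    Tags = new List<string> { "dev", "math" }
};

// The partial method body is generated by Mapperly at build time
PersonDto dto = mapper.PersonToPersonDto(person);

Console.WriteLine(dto.PersonId);   // 42
Console.WriteLine(dto.Tags.Count); // 2
```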

Generating and Viewing the Mapperly Source Code

Most IDEs provide easy access to the generated code, allowing you to navigate from the partial mapper method to its implementation. However, if your IDE doesn’t support this feature or if you prefer to include the generated source code in your source control, you can emit the generated files to disk.

To emit the generated files to disk, you need to set the EmitCompilerGeneratedFiles property in your project file (.csproj). Here’s how you can do it:

<PropertyGroup>
    <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
</PropertyGroup>

By default, the emitted files are written to {BaseIntermediateOutputPath}/generated/{Assembly}/Riok.Mapperly/{GeneratedFile}. The BaseIntermediateOutputPath is typically obj/Debug/net7.0.
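If you'd rather emit the files to a fixed location, MSBuild also supports the CompilerGeneratedFilesOutputPath property (the Generated folder name below is just an example):

```xml
<PropertyGroup>
    <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>
    <CompilerGeneratedFilesOutputPath>Generated</CompilerGeneratedFilesOutputPath>
</PropertyGroup>
```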

Once you’ve set up the emitted files, they will be generated during the build process. This allows you to easily access the updated mapper code and mapper diagnostics. It’s worth noting that emitting the files during each build provides better performance in the IDE, preventing potential lagginess.

With the ability to generate and view the source code, you can better understand and analyze the generated mappings. This feature enhances the transparency and maintainability of your codebase. Now let us look into the generated mapping code:

public partial class PersonMapper
{
    public partial global::Client.PersonDto PersonToPersonDto(global::Client.Person person)
    {
        var target = new global::Client.PersonDto();
        target.PersonId = person.Id;
        target.Name = person.Name;
        target.Tags = global::System.Linq.Enumerable.ToArray(global::System.Linq.Enumerable.Select(person.Tags, x => new global::Client.TagDto(x)));
        return target;
    }
}

Let’s examine what happened here.

  1. PersonId and Id are mapped since we added the MapProperty attribute

  2. Name is automatically mapped since both properties have the same name

  3. The Tags property is automatically mapped from the list of strings into a collection of TagDto

Now, let’s add a few more things to our classes to check the generated mapping code. I’m going to add an enum in both classes, and the target DTO class will have some missing enums and different values. Here, we can see how Mapperly will map it.

The updated Person class

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public MartialStatus MartialStatus { get; set; }
    public List<string> Tags { get; set; }
}

public enum MartialStatus
{
    Undefined = 0,
    Married = 1,
    Single = 2,
    Widowed = 3,
    Divorced = 4,
    Separated = 5
}

and here is the updated PersonDto class

public class PersonDto
{
    public int PersonId { get; set; }
    public string Name { get; set; }
    public MartialStatusDto MartialStatus { get; set; }
    public IReadOnlyCollection<TagDto> Tags { get; set; }
}

public enum MartialStatusDto
{
    Widowed = 10, 
    Married = 11, 
    Single = 12
}

public record TagDto(string tag);

Because the MartialStatus and MartialStatusDto entries have different numeric values, we need to configure Mapperly to map them by name. To do that, we need to update the [Mapper] attribute as follows:

[Mapper(EnumMappingStrategy = EnumMappingStrategy.ByName)]
public partial class PersonMapper
{
    [MapProperty(nameof(Person.Id), nameof(PersonDto.PersonId))]
    public partial PersonDto PersonToPersonDto(Person person);
}

Let’s take a look at the generated PersonMapper code:

public partial class PersonMapper
{
    public partial global::Client.PersonDto PersonToPersonDto(global::Client.Person person)
    {
        var target = new global::Client.PersonDto();
        target.PersonId = person.Id;
        target.Name = person.Name;
        target.MartialStatus = MapToMartialStatusDto(person.MartialStatus);
        target.Tags = global::System.Linq.Enumerable.ToArray(global::System.Linq.Enumerable.Select(person.Tags, x => new global::Client.TagDto(x)));
        return target;
    }

    private global::Client.MartialStatusDto MapToMartialStatusDto(global::Client.MartialStatus source)
    {
        return source switch
        {
            global::Client.MartialStatus.Married => global::Client.MartialStatusDto.Married,
            global::Client.MartialStatus.Single => global::Client.MartialStatusDto.Single,
            global::Client.MartialStatus.Widowed => global::Client.MartialStatusDto.Widowed,
            _ => throw new System.ArgumentOutOfRangeException(nameof(source), source, "The value of enum MartialStatus is not supported"),
        };
    }
}

It’s interesting to see that Mapperly generates a new method, MapToMartialStatusDto, which uses a switch expression to map MartialStatus into MartialStatusDto.

Conclusion

In conclusion, Mapperly is a powerful .NET tool that automates object mapping code generation. It streamlines the mapping process, improves code maintainability, and saves developers time and effort. Embrace Mapperly’s simplicity and efficiency to optimize object mappings in your .NET projects.

Happy mapping with Mapperly!

]]>
How to Resolve Unauthorized Access Issue with Private NuGet Repository https://mdbouk.com/how-to-resolve-unauthorized-access-issue-with-private-nuget-repository/ Mon, 12 Jun 2023 00:00:00 +0000 https://mdbouk.com/how-to-resolve-unauthorized-access-issue-with-private-nuget-repository/ Struggling with 401 Unauthorized errors while accessing private NuGet repositories? This guide provides step-by-step solutions to resolve authentication issues in Azure DevOps and JetBrains Rider. Have you ever encountered an issue while trying to restore packages from a private NuGet repository in JetBrains Rider using Azure DevOps? I recently faced a similar problem and struggled with the “401 Unauthorized” error. In this blog post, I will guide you through the steps I took to fix this issue. So, let’s dive in!

Azure DevOps and Private NuGet Repositories

Azure DevOps is a comprehensive platform that provides a range of tools for software development, including package management with NuGet. Private NuGet repositories in Azure DevOps allow you to host and manage your own packages. You may encounter authentication challenges when you try to restore packages from a private NuGet repository.

Modify NuGet.config Global File

To resolve the unauthorized access issue, we need to modify the NuGet.config file, which is a global configuration file JetBrains Rider uses to manage NuGet package sources. Follow these steps:

1. Locate the NuGet.config file

  • Windows: The NuGet.config file is typically located at the following path: %APPDATA%\NuGet\NuGet.Config. You can easily access it by pasting this path into the File Explorer’s address bar.

  • Mac: The NuGet.config file is usually located at the following path: ~/.nuget/NuGet/NuGet.Config. You can navigate to this path using Finder by pressing Command + Shift + G and entering the path.

If the file doesn’t exist, create a new one. Open the NuGet.config file in any text editor.

2. Add package source and credentials

Inside the <packageSources> section, add a new <add> element to specify the URL of your private NuGet repository. Replace YOUR_REPOSITORY_URL with the actual URL of your repository.

<packageSources>
  <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  <add key="YourPrivateRepo" value="YOUR_REPOSITORY_URL" />
</packageSources>

3. Provide authentication credentials

Under the <packageSourceCredentials> section, add a new <YourPrivateRepo> element (the name must match the package source key) to store the authentication credentials for your private repository. The username is required by the schema but not used by Azure DevOps, and the password references a PAT environment variable that we will set up in the next steps.

<packageSourceCredentials>
  <YourPrivateRepo>
    <add key="Username" value="-- needed but not used --" />
    <add key="ClearTextPassword" value="%PAT%" />
  </YourPrivateRepo>
</packageSourceCredentials>
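Putting both sections together, a complete NuGet.config might look like the following sketch (YourPrivateRepo and YOUR_REPOSITORY_URL are the placeholders from the steps above):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="YourPrivateRepo" value="YOUR_REPOSITORY_URL" />
  </packageSources>
  <packageSourceCredentials>
    <YourPrivateRepo>
      <add key="Username" value="-- needed but not used --" />
      <add key="ClearTextPassword" value="%PAT%" />
    </YourPrivateRepo>
  </packageSourceCredentials>
</configuration>
```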

4. Creating a Personal Access Token (PAT) in Azure DevOps

You can use a personal access token (PAT) as an alternate password to authenticate to Azure DevOps. Check the official Microsoft documentation to learn how to create a PAT to use in the NuGet.config.

5. Store the PAT in Environment Variables

Since the password key is “ClearTextPassword”, saving the PAT in plain text inside NuGet.config is a serious security concern. It’s best to store the PAT in an environment variable instead.

Also, when your PAT expires, you can simply get a new one and update the PAT variable value there.

Make sure to save the environment variable with the same name used in the NuGet.config file; in our example, we used PAT.
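How you set the variable depends on your OS. Here is a sketch for a POSIX shell, with the Windows equivalent noted in a comment (the token value is obviously a placeholder):

```shell
# macOS/Linux: add this line to ~/.zshrc or ~/.bashrc so it persists across sessions
export PAT="your-azure-devops-pat-here"

# Windows (run in cmd.exe instead): setx PAT "your-azure-devops-pat-here"

# Verify the variable is visible without printing the secret itself
echo "PAT length: ${#PAT}"
```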

6. Save and close

After making the necessary modifications, save the NuGet.config file and close the text editor. Make sure also to close and reopen JetBrains Rider to refresh the environment variables.

7. Success!

Launch JetBrains Rider, open your .NET solution and right-click on the solution node in the Solution Explorer. Choose “Manage NuGet Packages” from the context menu and then click on Restore. JetBrains Rider will handle the package restoration process using the updated NuGet.config file. Sit back and let the packages from your private NuGet repository be successfully restored.

8. In case of failure, try this

Another solution is to install the Azure Artifacts Credential Provider provided by Microsoft. You can find it here: https://github.com/Microsoft/artifacts-credprovider

On Windows

iex "& { $(irm https://aka.ms/install-artifacts-credprovider.ps1) }"

On Mac or Linux

wget -qO- https://aka.ms/install-artifacts-credprovider.sh | bash

After installation, you’ll be able to perform an interactive ‘dotnet restore’ that prompts you to log in using your Microsoft credentials:

dotnet restore --interactive

Conclusion

By following these simple steps, you can overcome the “401 Unauthorized” issue when restoring packages from a private NuGet repository in JetBrains Rider using Azure DevOps. Modifying the NuGet.config file with the appropriate package source and authentication credentials, creating a Personal Access Token (PAT) in Azure DevOps, and securely storing the PAT in environment variables ensure a smooth and secure package restoration process. Another option is to use the Artifacts Credential Provider provided by Microsoft to perform an interactive dotnet restore using your Microsoft login.

Now you can enjoy hassle-free package management and continue developing your .NET solution with ease. Happy coding!

]]>
CodeWhisperer: Amazon Answer to GitHub Copilot https://mdbouk.com/codewhisperer-amazon-answer-to-github-copilot/ Sat, 15 Apr 2023 00:00:00 +0000 https://mdbouk.com/codewhisperer-amazon-answer-to-github-copilot/ Hey friends, have you heard the latest news about AWS? Amazon has just released CodeWhisperer, an AI tool that you can use inside your IDE. This new tool is similar to GitHub Copilot, but it’s free for individuals, and it has some additional features that make it stand out. In this blog post, we’ll take a closer look at CodeWhisperer and see how it compares to its competitors.

![Me trying to code without CodeWhisperer](/images/codewhisperer-amazon-answer-to-github-copilot/I-have-no-idea-what-Im-doing-1.jpeg)

Me trying to code without CodeWhisperer

CodeWhisperer vs GitHub Copilot

First things first, let’s talk about the main difference between CodeWhisperer and GitHub Copilot. The most significant advantage of CodeWhisperer is that it’s free for individuals. While GitHub Copilot has a monthly subscription of $10, AWS’s new tool offers a free tier, making it accessible to everyone. Amazon is definitely making strides to catch up with the competition, and offering a free tier is a smart move. Of course, there’s also a professional-level tier that costs $20 per month per user, but we’ll get to that later.

Code security

The second significant advantage of CodeWhisperer is its built-in security scanning feature. Unlike Copilot, CodeWhisperer can detect vulnerabilities in your code and suggest ways to fix them. Copilot X is trying to catch up, but it’s not there yet.

![When your code passes a security scan with CodeWhisperer](/images/codewhisperer-amazon-answer-to-github-copilot/giphy%20%282%29.gif)

When your code passes a security scan with CodeWhisperer

CodeWhisperer aligns with best practices for tackling security vulnerabilities, such as those outlined by the Open Web Application Security Project (OWASP). OWASP is a nonprofit foundation that works to improve software security, providing a wealth of information on web application security, including a top ten list of the most critical web application security risks. By aligning with OWASP’s best practices, CodeWhisperer provides developers with insights into potential vulnerabilities that attackers could exploit. Additionally, it can suggest remediation solutions that adhere to industry standards and best practices.

It’s worth noting that the security scans are limited to Python, Java, and JavaScript code. Therefore, if you’re using other programming languages, you’re out of luck, for now.

Optimized for use with AWS services

Another great feature of CodeWhisperer is its integration with AWS services and tools. It’s optimized for the most commonly used AWS services and APIs, making it an excellent choice for developers who work with Amazon’s cloud-based infrastructure.

Free? Yes, free

So, what about pricing? As mentioned earlier, CodeWhisperer has two tiers: individual and professional. The individual tier is free and comes with a limit of 50 security scans per user per month. The professional tier costs $20 per month per user and offers 500 security scans per month.

Unleashing the Whisperer

Now that we’ve covered the basics, let’s take CodeWhisperer for a spin. To get started, you need to install the VS Code or JetBrains extension and enable it by registering. Once you’ve opened a code file, you can press Option+C (Alt+C on Windows) to generate some code. You can also open the extension options and enable auto-suggestions, which will generate code as you go.

One of the most impressive features of CodeWhisperer is that it seems to do an excellent job of understanding the context of the project. You can create unit tests for a function that you have, or even generate a new method using one of the local parameters in the file.

CodeWhisperer in action

Compared to GitHub Copilot, CodeWhisperer tends not to suggest as much code and goes line by line. While Copilot might suggest 50 lines of code at once, CodeWhisperer does not. This can be both good and bad, depending on the project. Sometimes GitHub Copilot recommends a bunch of annoying nonsense, but it can also be useful when you have a lot of boilerplate, like an HTML form.

New Calculator Class

And Unit Tests
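To give a feel for the kind of scaffolding such a tool produces, here is an illustrative sketch of a calculator class with an xUnit test, the sort of code a prompt might yield. This is hand-written for illustration, not actual CodeWhisperer output:

```csharp
using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;
    public int Subtract(int a, int b) => a - b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        var calculator = new Calculator();
        Assert.Equal(5, calculator.Add(2, 3));
    }
}
```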

Code Reference

One of the coolest things about CodeWhisperer is its transparency. It provides a reference to the code in the training data when producing its code, making it less likely that you accidentally steal some code and then use it in ways you’re not allowed to.

Cool but Limited

CodeWhisperer trying to handle complex coding tasks

CodeWhisperer is a top-performing AI-assisted coding tool, but it has some limitations. It’s not suitable for complex or creative coding tasks that require human logic and creativity. This means that you can’t entirely rely on it, and you’ll still need to use your brainpower to handle these kinds of tasks.

One significant issue is that CodeWhisperer’s code suggestions are based on its training data, so it may not generate diverse results. This can cause ethical and legal concerns if it generates code that’s too similar to copyrighted material. As a result, it’s important to be cautious and review its suggestions carefully.

CodeWhisperer may not be compatible with every IDE or programming language, so you might need to explore other options to find the right match for your project. Also, although it’s excellent for generating unit tests, it may not cover all possible scenarios and edge cases. Therefore, you need to double-check your code to ensure everything is in order.

Despite these limitations, CodeWhisperer and other AI-assisted coding tools are still developing and evolving to meet users’ needs. While it’s crucial to be aware of its shortcomings, it’s equally essential to recognize the significant benefits that these tools bring to the coding process.

Conclusion

In conclusion, CodeWhisperer is an impressive new AI tool from AWS that offers some unique features and advantages over its competitors, such as GitHub Copilot. Its built-in security scanning feature, integration with AWS services, and transparency make it a powerful addition to any developer’s toolkit. The fact that it’s free for individuals and offers a professional-level tier at an affordable price point makes it accessible to a broad range of developers.

However, it’s important to note that CodeWhisperer is not without limitations. While it can understand the context of a project and generate code that aligns with industry standards for security, it may not generate code as quickly as some of its competitors. Additionally, its suggestions may closely resemble the code from which it was trained, potentially raising ethical and legal concerns. It’s also worth noting that CodeWhisperer may not be compatible with every IDE and programming language, so developers may need to explore other options if it doesn’t meet their needs.

Despite these limitations, CodeWhisperer remains a valuable tool for developers who prioritize code quality and security. Its ability to understand the context of a project and integrate with AWS services make it a powerful addition to any developer’s toolkit. Overall, CodeWhisperer is a promising new addition to the world of AI-assisted coding, and it will be interesting to see how it evolves in the coming years.

]]>
How to create beautiful console applications with Spectre.Console https://mdbouk.com/how-to-create-beautiful-console-applications-with-spectre.console/ Sun, 19 Mar 2023 00:00:00 +0000 https://mdbouk.com/how-to-create-beautiful-console-applications-with-spectre.console/ Hey there friends! If you are a .NET developer who loves to create console applications, you might have wondered how to make them more appealing and user-friendly. Sure, you can use Console.WriteLine and Console.ReadLine to output some text and get some input, but that’s pretty boring and limited. What if you want to display some colors, styles, tables, trees, progress bars, or even ASCII images? What if you want to parse command-line arguments and create complex commands like git, npm, or dotnet?

That’s where Spectre.Console comes in handy. Spectre.Console is a .NET library that makes it easier to create beautiful console applications. It is heavily inspired by the excellent Rich library for Python written by Will McGugan. It supports 3/4/8/24-bit colors in the terminal with auto-detection of the current terminal’s capabilities. It also provides a rich markup language that lets you easily output text with different colors and styles such as bold, italic, and blinking. So grab a cup of coffee and let’s dive in!

In this blog post, I will show you how to use Spectre.Console to create some awesome console applications with minimal code.

Installing Spectre.Console

To install Spectre.Console, you need to use NuGet Package Manager. You can either use Visual Studio or the dotnet CLI tool.

If you are using Visual Studio, right-click on your project and select Manage NuGet Packages. Then search for Spectre.Console and install it.

If you are using dotnet CLI tool, run this command in your project folder:

dotnet add package Spectre.Console

You can also specify the version number if you want:

dotnet add package Spectre.Console --version 0.49.1

Using Spectre.Console

To use Spectre.Console, you need to import its namespace:

using Spectre.Console;

Then you can access its main class AnsiConsole, which provides various methods for outputting text and rendering widgets.

For example, here is how you can output some text with different colors and styles:

AnsiConsole.MarkupLine("[bold green]Hello[/] [italic blue]World[/]!");

This will produce something like this:

You can also use emojis or Unicode characters in your markup:

AnsiConsole.MarkupLine(":fire: :alien_monster: :sparkles:");

This will produce something like this:

![](/images/how-to-create-beautiful-console-applications-with-spectre-console/image-1.png)

Also, you can use RGB or HEX values for specifying colors:

AnsiConsole.MarkupLine("This is [rgb(255,0,0)]red[/], this is [rgb(0,255,0)]green[/], this is [rgb(0,0,255)]blue[/].");
AnsiConsole.MarkupLine("This is [#ff0000]red[/], this is [#00ff00]green[/], this is [#0000ff]blue[/].");

This will produce something like this:

You can also nest tags for combining colors and styles:

AnsiConsole.MarkupLine("[bold red on yellow blink underline]Warning![/] This is very [italic green on black strikethrough]important[/].");

This will produce something like this (depending on your terminal support):

You can find more details about the markup language syntax here: https://spectreconsole.net/markup
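One detail worth knowing: since square brackets delimit markup tags, literal brackets must be escaped by doubling them, and Spectre.Console provides an EscapeMarkup() string extension for dynamic values. A quick sketch:

```csharp
// Doubled brackets render as literal [ and ]
AnsiConsole.MarkupLine("Escaped: [[not a tag]]");

// EscapeMarkup() is handy for untrusted or dynamic strings
var userInput = "[red]this is data, not markup[/]";
AnsiConsole.MarkupLine($"You typed: {userInput.EscapeMarkup()}");
```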

Rendering widgets

Spectre.Console also provides various widgets that you can render in your console applications. Some of them are:

  • Tables: Display tabular data with customizable headers, footers, borders, and alignment.

  • Trees: Display hierarchical data with expandable nodes and icons.

  • Bar Chart: Display a horizontal bar chart on the console.

  • Progress: Display progress for long-running tasks with live updates of progress bars and status controls.

  • and many others

Here are some examples of how to render these widgets:

Tables

To render a table, you need to create an instance of the Table class and add some columns and rows. You can also customize its appearance by setting properties such as BorderStyle, Border, Title, Caption, etc.

For example,

var table = new Table();
table.AddColumn("Name");
table.AddColumn("Age");
table.AddColumn("Occupation");

table.AddRow("Alice", "23", "Software Engineer");
table.AddRow("Bob", "32", "Accountant");
table.AddRow("Charlie", "28", "Teacher");

table.Title = new TableTitle("[underline yellow]People[/]");
table.Caption = new TableTitle("[grey]Some random people[/]");

AnsiConsole.Write(table);

This will produce something like this:

You can find more details about the table widget here: https://spectreconsole.net/widgets/table

Trees

To render a tree, you need to create an instance of the Tree class and add some nodes. You can also customize its appearance by setting properties such as Style, Guide, Expand, etc.

For example,

var tree = new Tree("[yellow]Root[/]");

var child1 = tree.AddNode(new Markup("[green]Child 1[/]"));
var child2 = tree.AddNode(new Markup("[green]Child 2[/]"));
var child3 = tree.AddNode(new Markup("[green]Child 3[/]"));

child1.AddNode("[blue]Grandchild 1-1[/]");
child1.AddNode("[blue]Grandchild 1-2[/]");

child2.AddNode("[green]Grandchild 2-1[/]");

var grandchild3 = child3.AddNode("[green]Grandchild 3-1[/]");
child3.AddNode("[green]Grandchild 3-2[/]");
grandchild3.AddNode("[yellow]Great Grandchild 3-1-1[/]");
grandchild3.AddNode("[yellow]Great Grandchild 3-1-2[/]");

AnsiConsole.Write(tree);

This will produce something like this:

You can find more details about the tree widget here: https://spectreconsole.net/widgets/tree

Progress

To render progress, you need to create an instance of the Progress class and add some tasks. You can also customize its appearance by setting properties such as AutoClear, AutoRefresh, Columns, etc.

For example,

await AnsiConsole.Progress()
    .StartAsync(async ctx =>
    {
        // Define tasks
        var task1 = ctx.AddTask("[green]Chrome RAM usage[/]");
        var task2 = ctx.AddTask("[yellow]VS Code RAM usage[/]");

        while (!ctx.IsFinished)
        {
            // Simulate some work
            await Task.Delay(100);

            // Increment
            task1.Increment(4.5);
            task2.Increment(2);
        }
    });

This will produce something like this:

You can find more details about the progress widget here: https://spectreconsole.net/live/progress

Bar Chart

To render a bar chart, you need to create an instance of BarChart class and add some items with labels, values, and colors. You can also customize its appearance by setting properties such as Width, Label, CenterLabel, etc.

AnsiConsole.Write(new BarChart() // Create a bar chart
    .Width(60)
    .Label("[green bold underline]Global Smartphone Shipments Market Share (%)[/]") // Set the label of the chart
    .CenterLabel() //And center it
    .AddItem("Apple", 23, Color.Yellow) // Add the items with labels, values, and colors
    .AddItem("Samsung", 19, Color.Green)
    .AddItem("Xiaomi", 11, Color.Red)
    .AddItem("OPPO", 10, Color.Blue)
    .AddItem("Vivo", 8, Color.DarkMagenta)
    .AddItem("Others", 29, Color.Orange1));

This will produce something like this:

Using Live Display

Spectre.Console can update arbitrary widgets in place using the Live Display widget.

This can be useful for creating dynamic tables that show changing data over time. The live display is not thread-safe, and using it together with other interactive components such as prompts, status displays, or other progress displays is not supported.

To render a live table, you need to create a Table instance and add some columns and rows. Then you need to pass the table to the AnsiConsole.Live() method and call Start() or StartAsync() with an action or a function that updates the table content. You can use ctx.Refresh() to refresh the display after each update.

// Create a table
var table = new Table()
    .Border(TableBorder.Rounded)
    .AddColumn("Id")
    .AddColumn("Name")
    .AddColumn("Age");

// Add some initial rows
table.AddRow(Faker.Identification.SocialSecurityNumber(), Faker.Name.First(),
    Faker.RandomNumber.Next(18, 99).ToString());
table.AddRow(Faker.Identification.SocialSecurityNumber(), Faker.Name.First(),
    Faker.RandomNumber.Next(18, 99).ToString());

// Use LiveDisplay to update the table
await AnsiConsole.Live(table)
    .StartAsync(async ctx =>
    {
        // Loop until we are done
        for (int i = 0; i < 5; i++)
        {
            var id = Faker.Identification.SocialSecurityNumber();
            var name = Faker.Name.First();
            var age = Faker.RandomNumber.Next(18, 99);
            table.AddRow(id, name, age.ToString());

            ctx.Refresh();

            // Simulate doing the work
            await Task.Delay(1000);
        }
    });

I’m using Faker.Net to generate some data in the example above. More about Faker.Net here

This will produce something like this:

Conclusion

In this blog post, I showed you how to use Spectre.Console to create beautiful console applications with minimal code. You learned how to output text with colors and styles, render widgets such as tables, trees, progress bars, and charts, and display live tables.

Spectre.Console is a powerful and versatile library that can help you create amazing console applications that are easy to use and maintain. I hope you enjoyed this blog post and found it useful. If you have any questions or feedback, feel free to leave a comment below or reach out to me on Twitter @mhdbouk.

Thank you for reading and happy coding! 😊👨‍💻

]]>
Say Hello to Reliable Minimal APIs with Integration Tests https://mdbouk.com/say-hello-to-reliable-minimal-apis-with-integration-tests/ Sun, 08 Jan 2023 00:00:00 +0000 https://mdbouk.com/say-hello-to-reliable-minimal-apis-with-integration-tests/ Discover how to create reliable Minimal APIs in .NET 7 and write robust integration tests using XUnit and WebApplicationFactory. This guide covers essential testing strategies for API development. Hi there! Integration testing is an important part of the software development process because it helps ensure that your APIs are working correctly and returning the expected results. Unfortunately, many people mix up integration tests with mock testing and use mocks for integration tests. This can lead to issues when testing APIs, as mock tests don’t provide a complete picture of how your code will behave in a real-world scenario.

In this blog post, I’ll show you how to create a reliable todo API service using Minimal APIs in .NET 7 and write the needed integration tests using WebApplicationFactory and XUnit. We’ll cover getting a single todo item, getting all todo items, posting a new todo item, updating an existing todo item, and marking an item as done.

You can find the full source code for this tutorial on GitHub at mhdbouk/minimal-api-integration-tests

Creating the Minimal API Project and XUnit Project

First things first, let’s create a new minimal API project in .NET 7 and an XUnit project for our integration tests.

// Create new directory for the solution
mkdir MinimalApiDemo
cd MinimalApiDemo

// Add new webapi using minimal apis instead of controllers
dotnet new webapi -minimal -o MinimalApiDemo.Api

// Add new xunit test project
dotnet new xunit -o MinimalApiDemo.Tests

// Add API project reference to the test project
dotnet add MinimalApiDemo.Tests reference MinimalApiDemo.Api

// Create new Solution and add the 2 projects
dotnet new sln
dotnet sln add MinimalApiDemo.Api
dotnet sln add MinimalApiDemo.Tests

Open the solution we just created in Visual Studio and start by deleting the contents of Program.cs in MinimalApiDemo.Api and replace it with the following

// Program.cs

var builder = WebApplication.CreateBuilder(args);

var app = builder.Build();

// APIs go here

app.Run();

Defining the Minimal API Endpoints

To create a minimal API endpoint, use the MapGet, MapPost, and MapPut methods of the WebApplication object (called app in our example) to define your endpoints. We’re also going to have a TodoService that handles all the todo logic.

First we need to register the TodoService in our Program.cs

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddTransient<ITodoService, TodoService>();

Then we configure our TodoDbContext to use SQLite

// Program.cs

builder.Services.AddDbContext<TodoDbContext>(options =>
{
    var path = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
    options.UseSqlite($"Data Source={Path.Join(path, "MinimalApiDemo.db")}");
});

And now we can create our minimal APIs

// Program.cs

app.MapGet("/todo/{id}", async (int id, ITodoService todoService) =>
{
    var item = await todoService.GetItemAsync(id);
    if (item == null)
    {
        return Results.NotFound();
    }
    return Results.Ok(item);
});

app.MapGet("/todo/", (ITodoService todoService) => todoService.GetItemsAsync());

app.MapPost("/todo/", async (TodoItem item, ITodoService todoService) =>
{
    if (item is null)
    {
        return Results.BadRequest("Body is null");
    }
    if (string.IsNullOrWhiteSpace(item.Title))
    {
        return Results.BadRequest("Title is null");
    }
    var result = await todoService.AddItemAsync(item.Title!);
    return Results.Created($"/todo/{result.Id}", result);
});

app.MapPut("/todo/{id}", async (int id, TodoItem item, ITodoService todoService) =>
{
    var existingItem = await todoService.GetItemAsync(id);
    if (existingItem == null)
    {
        return Results.NotFound();
    }
    return Results.Ok(await todoService.UpdateItemAsync(id, item.Title!));
});

app.MapPut("/todo/{id}/done", async (int id, ITodoService todoService) =>
{
    var existingItem = await todoService.GetItemAsync(id);
    if (existingItem == null)
    {
        return Results.NotFound();
    }
    return Results.Ok(await todoService.MarkAsDoneAsync(id));
});

Creating an integration test class with WebApplicationFactory

Now it’s time to write our integration tests. The goal here is to test the API without mocking any of the dependencies. This means that we will be making actual HTTP calls to the API and testing the response. Let’s get started!

To do that, we use the WebApplicationFactory class (from the Microsoft.AspNetCore.Mvc.Testing NuGet package) to create the web host of our application. Note that with top-level statements, you also need to add public partial class Program { } at the end of Program.cs so the test project can reference the Program type. In the MinimalApiDemo.Tests project, create a new class called TodoTests and add the following

public class TodoTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;
    private readonly HttpClient _httpClient;

    public TodoTests(WebApplicationFactory<Program> factory)
    {
        _factory = factory;
        _httpClient = _factory.CreateClient();
    }
    
    // Add Tests here
}

Now we can add our test method

[Fact]
public async Task AddTodoItem_ReturnsCreatedSuccess()
{
    // Arrange
    var todoItem = new TodoItem { Title = "Cool Integration Test Item" };
    var content = new StringContent(System.Text.Json.JsonSerializer.Serialize(todoItem), Encoding.UTF8, "application/json");

    // Act
    var response = await _httpClient.PostAsync("/todo/", content);
    var responseContent = await response.Content.ReadAsStringAsync();
    var item = System.Text.Json.JsonSerializer.Deserialize<TodoItem>(responseContent, new System.Text.Json.JsonSerializerOptions(System.Text.Json.JsonSerializerDefaults.Web));

    // Assert
    Assert.Equal(HttpStatusCode.Created, response.StatusCode);
    Assert.NotNull(item);
    Assert.NotNull(response.Headers.Location);
    Assert.Equal($"/todo/{item.Id}", response.Headers.Location.ToString());
}

In this test, we first arrange our data by preparing the body content of our POST request. Next, we make the actual HTTP call by calling the PostAsync method on our _httpClient instance, which was created by the WebApplicationFactory. Finally, we assert that the response was successful by checking the status code and verifying that certain properties of the response are not null. Specifically, we check that the status code is HttpStatusCode.Created, that the response content (deserialized as a TodoItem object) is not null, that the Location header is present, and that its value matches the expected URI for the newly-created todo item.

Configuring the Test Database Connection String

Great! Now that we’ve got our integration test set up and running, let’s talk about how we can modify it to use a test database instead of the production one. This is important because we don’t want to accidentally modify or corrupt our production data while running our tests.

To accomplish this, we can create a new class called TodoWebApplicationFactory that inherits from WebApplicationFactory<TProgram>.

We can then override the ConfigureWebHost method to specify our test database connection string and set the environment to “Development” or “Testing”. This ensures that any changes made during our integration tests will not affect the production database.

With this setup, we can now write our integration tests with confidence, knowing that we are not impacting the production environment.

public class TodoWebApplicationFactory<TProgram> : WebApplicationFactory<TProgram> where TProgram : class
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureServices(services =>
        {
            var dbContextDescriptor = services.SingleOrDefault(d => d.ServiceType == typeof(DbContextOptions<TodoDbContext>));

            services.Remove(dbContextDescriptor!);

            services.AddDbContext<TodoDbContext>(options =>
            {
                var path = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
                options.UseSqlite($"Data Source={Path.Join(path, "MinimalApiDemoTests.db")}");
            });
        });
        
        builder.UseEnvironment("development");
    }
}

And now we need to change our TodoTests class to implement IClassFixture<TodoWebApplicationFactory<Program>>, passing Program as the type parameter

public class TodoTests : IClassFixture<TodoWebApplicationFactory<Program>>
{
    private readonly TodoWebApplicationFactory<Program> _factory;
    private readonly HttpClient _httpClient;

    public TodoTests(TodoWebApplicationFactory<Program> factory)
    {
        _factory = factory;
        _httpClient = _factory.CreateClient();
    }

    [Fact]
    public async Task AddTodoItem_ReturnsCreatedSuccess()
    {
      ...
    }
}

Now, when we execute the tests, it will use the new connection string and the new environment (if our Program is using any environment-specific configurations). This allows us to test our API using a separate database and environment, ensuring that our tests are not affected by any changes made to the production configuration. This is especially important when running automated tests as part of a continuous integration and deployment pipeline, as it ensures that our tests are consistent and reliable.

Configuring Other Services in Tests

While testing our minimal API, there may be cases where we need to configure other services that our API relies on, such as an email provider. In this section, we will discuss how to set up these services in such a way that they do not send emails when running tests.

One way to do this is to use dependency injection to pass in a mock or fake implementation of the service. This way, we can control the behavior of the service and ensure that it does not send any emails during testing.

// Program.cs

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddTransient<ITodoService, TodoService>();
builder.Services.AddSingleton<IEmailProvider, SendGridEmailProvider>();

Create a new FakeEmailProvider in MinimalApiDemo.Tests with the following implementation

public class FakeEmailProvider : IEmailProvider
{
    public Task SendAsync(string emailAddress, string body)
    {
        // We don't want to actually send real emails when running integration tests.
        return Task.CompletedTask;
    }
}

And now we can configure our TodoWebApplicationFactory to register the fake email provider as below

public class TodoWebApplicationFactory<TProgram> : WebApplicationFactory<TProgram> where TProgram : class
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureServices(services =>
        {
            // Fake Email Provider
            var emailDescriptor = services.SingleOrDefault(d => d.ServiceType == typeof(IEmailProvider));

            services.Remove(emailDescriptor!);

            services.AddSingleton<IEmailProvider, FakeEmailProvider>();
            
            ...
        });
    }
}

Summary

This tutorial taught us how to write integration tests for minimal APIs using .NET 7, WebApplicationFactory, and XUnit. We started by creating a minimal API project and an XUnit project for our integration tests. Then, we used the WebApplicationFactory class to create the web host of our application and had the TodoTests class implement IClassFixture<WebApplicationFactory<Program>>. This allowed us to test the API by making actual HTTP calls and testing the response.

We also learned how to override the default connection string and environment variables in the TodoWebApplicationFactory inherited from WebApplicationFactory. This allowed us to use a test database connection string and set the environment to “development” or “testing” during our tests.

Additionally, we learned how to configure other services, such as email providers, to not send real emails when running the tests.

Overall, this tutorial demonstrated the importance of integration testing and how to effectively set it up to test our APIs.

If you have any questions or feedback, please don’t hesitate to leave a comment below. Happy coding 🚀

]]>
From Frustrated to Successful: My Experience Deploying SonarQube in Azure https://mdbouk.com/from-frustrated-to-successful-my-experience-deploying-sonarqube-in-azure/ Thu, 22 Dec 2022 00:00:00 +0000 https://mdbouk.com/from-frustrated-to-successful-my-experience-deploying-sonarqube-in-azure/ Deploying SonarQube on Azure App Service can be challenging due to specific configuration requirements. In this post, I share my journey from frustration to success, including the solutions I found and a Bicep template to help you deploy SonarQube with ease. SonarQube is a popular open-source platform for continuous code inspection that helps developers identify and fix coding issues, security vulnerabilities, and other bugs in their codebase. As someone who has worked on multiple software projects, I can attest to the importance of having a tool like SonarQube in your toolkit. It saves time and prevents headaches by catching issues early on rather than at the end of a project. Recently, I struggled with deploying the latest version of SonarQube on Azure App Service, so I created the repository mhdbouk/azure-sonarqube (github.com) to document my process and help others avoid the same pitfalls. In this blog post, I’ll share my experience with deploying SonarQube on Azure App Service and provide the repository for anyone who may find it useful.

mhdbouk/azure-sonarqube (github.com)

The issue with latest version in App Service

I encountered a frustrating issue when attempting to deploy SonarQube using a Bicep file that included an Azure App Service and an App Service Plan. When trying to run the Docker image inside the Azure web app, I received an exception stating that the number of virtual memory areas available to the container was too low and needed to be increased to 262144. However, this setting cannot be changed in Azure App Service, so I had to find an alternative solution.

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

I spent hours hitting my head against the keyboard, frantically searching the web for a solution. One option I considered was using an older version of SonarQube, as versions 7.8 and above require the vm.max_map_count to be set to 262144. However, I was not willing to downgrade to a version of SonarQube that was 4 years old in order to fix the issue. After crying out loud in frustration, I finally found a solution: adding the following app setting SONAR_SEARCH_JAVAADDITIONALOPTS with a value of -Dnode.store.allow_mmap=false. This allowed me to successfully run the latest version of SonarQube without any issues, and I was extremely relieved and happy to have found a solution.
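To show where that fix lives in a Bicep template, here is a minimal sketch. The resource, variable, and plan names are placeholders (not the exact ones from the repository), and other required settings are omitted for brevity:

```bicep
resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: webAppName
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    siteConfig: {
      linuxFxVersion: 'DOCKER|sonarqube:latest'
      appSettings: [
        {
          // Tells SonarQube's embedded Elasticsearch node not to use mmap,
          // which avoids the vm.max_map_count requirement App Service can't satisfy
          name: 'SONAR_SEARCH_JAVAADDITIONALOPTS'
          value: '-Dnode.store.allow_mmap=false'
        }
      ]
    }
  }
}
```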

SQL Server Support

By default, SonarQube creates an in-memory database that is useful for testing purposes only. In order to run SonarQube in a production environment, it is necessary to configure it to use a persistent database such as SQL Server. In this case, I added the needed Bicep resources and the necessary configuration to run SonarQube using SQL Server, which was also created and deployed using the Bicep file.

In order to configure SonarQube to use a persistent database, add the following app settings to the siteConfig block in the Bicep file:

...
siteConfig: {
  appSettings: [
    {
      name: 'SONARQUBE_JDBC_URL'
      value: 'jdbc:sqlserver://${sqlserver.outputs.fullyQualifiedDomainName}:1433;database=${sqlDatabaseName};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;'
    }
    {
      name: 'SONARQUBE_JDBC_USERNAME'
      value: adminSqlUsername
    }
    {
      name: 'SONARQUBE_JDBC_PASSWORD'
      value: adminSqlPassword
    }
  ]
}
...

Bicep Modules

Bicep modules are self-contained blocks of Bicep code that can be reused and imported into other Bicep files, making it easier to manage and maintain complex deployments. I used the following Bicep modules in this repository:

  • linux-plan.bicep: This module creates an Azure App Service Plan with a Linux operating system. It includes the necessary output for the web app resource.

  • sqlserver.bicep: This module creates an Azure SQL Server and an Azure SQL database. It includes the necessary output for the web app resource, including the fully qualified domain name of the SQL server.

To use a Bicep module, first create a new .bicep file with the resources and parameters it needs, then import it using the module keyword and specify the path to the module file. For example:

// sqlserver.bicep
param location string = resourceGroup().location
...

resource sqlServer 'Microsoft.Sql/servers@2022-05-01-preview' = {
  location: location
  name: sqlServerName
  tags: {
    displayName: 'Sql Server'
  }
  properties: {
    administratorLogin: adminSqlUsername
    administratorLoginPassword: adminSqlPassword
    version: '12.0'
    publicNetworkAccess: 'Enabled'
  }
}

output fullyQualifiedDomainName string = sqlServer.properties.fullyQualifiedDomainName

// main.bicep
module sqlserver 'sqlserver.bicep' = {
  ...
}

// Use the module output
fqdn: sqlserver.outputs.fullyQualifiedDomainName

Conclusion

In this blog post, we shared our experience with deploying the latest version of SonarQube on Azure App Service using Bicep and Docker. We encountered an issue during the deployment process, but were able to find a solution and successfully run the latest version of SonarQube.

We also explained how we used Bicep modules to organize and deploy the necessary resources. We then configured SonarQube to use the SQL Server by adding the necessary configuration to the Bicep file.

Overall, using Bicep and Docker made it easy to deploy and configure SonarQube on Azure. And now, with the provided bicep file, you can easily create a new instance of SonarQube with just a few clicks. Happy coding!

]]>
The Power of Bicep: Deploying on Azure in a Snap https://mdbouk.com/the-power-of-bicep-deploying-on-azure-in-a-snap/ Fri, 16 Dec 2022 00:00:00 +0000 https://mdbouk.com/the-power-of-bicep-deploying-on-azure-in-a-snap/ Discover how to deploy applications on Azure effortlessly using Bicep. This guide walks you through deploying a Docker image with just a few lines of code, showcasing the simplicity and efficiency of Bicep for infrastructure as code. If you’re looking for a simple and efficient way to deploy your application on Azure, you’re in the right place. In this blog post, I will guide you through the process of deploying an existing docker image on Azure using Bicep. It’s easier than you might think, and a lot more fun too!

You can find all of the information discussed in this blog post in my GitHub repository, located at mhdbouk/libretranslate-bicep (github.com)

What is Bicep?

Bicep is a new language for deploying infrastructure on Azure. It is designed to be simple, clean, and easy to use, making it a powerful tool for developers/DevOps who want to deploy infrastructure on Azure quickly and efficiently.

The most powerful thing about Bicep is its ability to automate the process of deploying infrastructure on Azure. With just a few lines of code, you can deploy complex infrastructure, saving you time and effort. Additionally, Bicep allows you to manage and update your infrastructure in a simple and intuitive way, making it a valuable tool for any developer who works with Azure.

For this blog post, I’m going to deploy LibreTranslate, a free and open-source translation platform. LibreTranslate allows users to translate texts and documents into various languages. It uses machine learning algorithms to provide high-quality translations, and it is designed to be easy to use and customizable.

Ready, set, go

Before we dive into how to use Bicep, there are a few prerequisites you’ll need to have in place:

  1. An active Azure Subscription - you can sign up for free here

  2. Visual Studio Code

  3. Azure CLI & Bicep CLI

  4. OR Bicep VS Code extension

With these prerequisites in place, let’s get started

First, open VS Code and create a new file. We’ll call ours main.bicep. Next, we’ll define the resources we need. In this example, we are going to create a new Azure Web App and a new Azure App Service Plan

resource appServicePlan 'Microsoft.Web/serverfarms@2020-12-01' = {
  name: 'plan-demo-northeurope'
  location: location
  sku: {
    name: 'F1'
    capacity: 1
  }
  kind: 'linux'
  properties: {
    reserved: true
  }
}

This resource creates a new App Service Plan running Linux, using the free (F1) tier.

We can make the values a parameter for the file by doing the following modification

@description('Describes plan\'s pricing tier and instance size. Check details at https://azure.microsoft.com/en-us/pricing/details/app-service/')
@allowed([
  'F1'
  'D1'
  'B1'
  'B2'
  'B3'
  'S1'
  'S2'
  'S3'
  'P1'
  'P2'
  'P3'
  'P4'
])
param skuName string = 'F1'

@description('Describes plan\'s instance count')
@minValue(1)
@maxValue(3)
param skuCapacity int = 1

@description('Location for all resources.')
param location string = resourceGroup().location

@description('App Service Plan Name.')
param appPlanName string = 'plan-demo-northeurope'

resource appServicePlan 'Microsoft.Web/serverfarms@2020-12-01' = {
  name: appPlanName
  location: location
  sku: {
    name: skuName
    capacity: skuCapacity
  }
  kind: 'linux'
  properties: {
    reserved: true
  }
}

Let us start deploying main.bicep using Azure CLI and Bicep CLI

First, open your terminal and login into your Azure account using the Azure CLI

az login

Set the subscription you want to use for the deployment (in case you have multiple associated with your account)

az account set --subscription YOUR_SUBSCRIPTION_ID

Use the Bicep CLI to build main.bicep into an ARM template, then deploy it with the Azure CLI (recent versions of the Azure CLI can also deploy the .bicep file directly)

bicep build main.bicep
az deployment group create --resource-group YOUR_RESOURCE_GROUP_NAME --template-file main.json

You should now have a successful deployment of main.bicep on Azure. You can use the Azure portal or the Azure CLI to verify that the resources have been created as expected. You should see the new app service plan created.
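If you prefer to stay in the terminal, you can verify the deployment from the CLI as well. A sketch (the resource group name is a placeholder; the plan name matches this example):

```shell
# List everything created in the resource group
az resource list --resource-group YOUR_RESOURCE_GROUP_NAME --output table

# Or inspect the app service plan directly
az appservice plan show \
  --name plan-demo-northeurope \
  --resource-group YOUR_RESOURCE_GROUP_NAME \
  --query "{name:name, sku:sku.name, kind:kind}"
```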

To add an Azure Web App, add the following resource into main.bicep

resource webApplication 'Microsoft.Web/sites@2021-01-15' = {
  name: appName
  location: location
  tags: {
    'hidden-related:${resourceGroup().id}/providers/Microsoft.Web/serverfarms/appServicePlan': 'Resource'
  }
  properties: {
    serverFarmId: appServicePlan.id
  }
}

This modification will create the web app and link it to the app service plan. To configure app settings, the following can be used

resource webApplication 'Microsoft.Web/sites@2021-01-15' = {
  name: appName
  location: location
  tags: {
    'hidden-related:${resourceGroup().id}/providers/Microsoft.Web/serverfarms/appServicePlan': 'Resource'
  }
  properties: {
    serverFarmId: appServicePlan.id
    siteConfig: {
      appSettings: [
        {
          name: 'My__Key'
          value: 'My Value'
        }
        {
          name: 'My__Other__Key'
          value: otherKeyValue
        }
      ]
    }
  }
}

And to add the Docker image, use the following

resource webApplication 'Microsoft.Web/sites@2021-01-15' = {
  name: appName
  location: location
  tags: {
    'hidden-related:${resourceGroup().id}/providers/Microsoft.Web/serverfarms/appServicePlan': 'Resource'
  }
  properties: {
    serverFarmId: appServicePlan.id
    siteConfig: {
      linuxFxVersion: 'DOCKER|getting-started:latest'
      appSettings: [
        {
          name: 'My__Key'
          value: 'My Value'
        }
        {
          name: 'My__Other__Key'
          value: otherKeyValue
        }
      ]
    }
  }
}

The following Gist contains all the resources needed in Bicep to deploy LibreTranslate to Azure

https://gist.github.com/mhdbouk/972d0785c1273d86519f0ea39f6a1723

When deploying this Bicep file to Azure, it will create a new Linux-based Azure App Service Plan and configure a new Azure Web App with the necessary settings and configuration to run LibreTranslate from Docker

There is also an option to use your own Azure Container Registry (ACR), by changing the Container Registry parameter to the URL of your custom registry.

Please note that in this example, LibreTranslate will only be configured to start with support for English, Arabic, and Chinese languages.
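For reference, LibreTranslate can read its command-line options from LT_-prefixed environment variables, so a language restriction like this can be expressed as an app setting on the web app. A sketch only (the exact parameter name used in the Gist may differ):

```bicep
siteConfig: {
  linuxFxVersion: 'DOCKER|libretranslate/libretranslate:latest'
  appSettings: [
    {
      // LT_LOAD_ONLY limits which language models LibreTranslate loads at startup
      name: 'LT_LOAD_ONLY'
      value: 'en,ar,zh'
    }
  ]
}
```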

In addition, I have included a Deploy To Azure button in the GitHub repository: mhdbouk/libretranslate-bicep (github.com) to make it easy to deploy with just a single click

]]>
You no longer need a Dockerfile https://mdbouk.com/you-no-longer-need-a-dockerfile/ Sun, 04 Dec 2022 00:00:00 +0000 https://mdbouk.com/you-no-longer-need-a-dockerfile/ Starting with .NET 7, you can now publish your applications directly as container images without needing a Dockerfile. Learn how to leverage this new feature to simplify your containerization process. The .NET 7 release brings a lot of changes, and one of the most significant is the removal of the need for a Dockerfile: container images are now a supported output type of the .NET SDK.

You no longer need a separate Dockerfile to containerize your .NET applications. You can publish your application and it will be built into a container image.

This is a significant improvement for .NET developers who want to containerize their applications. It will make it much easier to distribute and run your applications in the cloud. In this article, we’ll take a look at what this change means for you and your applications.

Previously in Dockerfile

Create a new Web API project and open it in Visual Studio. In Solution Explorer, right-click the project and select Add Docker Support. This adds Docker support to the existing project.

I’m using Visual Studio for Mac; Windows has the same functionality

The following Dockerfile was generated by Visual Studio when docker support was added. It provides a helpful starting point for configuring your application’s Docker environment. You can customize the contents of this file as needed to fit your application’s requirements. Take a look at what’s included below:

FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["ApiWithDockerfile/ApiWithDockerfile.csproj", "ApiWithDockerfile/"]
RUN dotnet restore "ApiWithDockerfile/ApiWithDockerfile.csproj"
COPY . .
WORKDIR "/src/ApiWithDockerfile"
RUN dotnet build "ApiWithDockerfile.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "ApiWithDockerfile.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ApiWithDockerfile.dll"]

Visual Studio detects the project type (ASP.NET) and version (7.0), pulls the runtime image to run the application and the SDK image to build it, and generates the necessary COPY and dotnet restore steps based on the project’s csproj.

If we want to build and run the project in Docker, Visual Studio is an option. For those who prefer the command line, we can run this command from the solution root level

docker build -f ApiWithDockerfile/Dockerfile -t apiwithdockerfile:0.1 .

The following command will enable you to run the project in a local docker environment:

docker run -p 5000:80 apiwithdockerfile:0.1

You can now call the weather API http://localhost:5000/WeatherForecast and should receive a normal response without any issues. That’s all there is to it.

The Annoying part

Maintaining Dockerfiles across multiple projects can be quite difficult. You need to make sure that all of the paths, variables, and ports are accurate for your Dockerfile to build and run as expected. Even seemingly small errors can lead to big headaches down the line, and the task quickly becomes overwhelming when a single Dockerfile builds several projects.

The new part in .NET 7

In .NET 7, Microsoft made it easier than ever to deploy your apps to containers with the new NuGet package, Microsoft.NET.Build.Containers! It makes your project aware of its ability to be deployed in a container, and after adding the NuGet package you can simply publish the application using the dotnet publish command in your project’s root folder - no need for a Dockerfile!

dotnet publish --os linux --arch x64 -p:PublishProfile=DefaultContainer

This will publish a new Docker image with a tag/version of 1.0.0, and the image name is the project name in lowercase.

Now you can use the same docker run command to launch your newly built image with a default tag of 1.0.0

docker run -p 5000:80 apiwithdockerfile:1.0.0

Using csproj properties

I think I know what your next question is - how do you run DevOps pipelines without manually adding all the variables into each publish command? The solution lies in the csproj file. Check out this example

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net7.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>

    <PublishProfile>DefaultContainer</PublishProfile>
    <ContainerBaseImage>mcr.microsoft.com/dotnet/aspnet:7.0</ContainerBaseImage>
    <ContainerRegistry>myregistry.com:1234</ContainerRegistry>
    <ContainerImageTag>1.0.1-patch2</ContainerImageTag>
    

  </PropertyGroup>

  <ItemGroup>
      <ContainerPort Include="80" Type="tcp" />
      <ContainerEnvironmentVariable Include="MyVariable" Value="MyValue" />
  </ItemGroup>

  <PropertyGroup Condition=" '$(RunConfiguration)' == 'https' " />
  <PropertyGroup Condition=" '$(RunConfiguration)' == 'http' " />
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.OpenApi" Version="7.0.0" />
    <PackageReference Include="Swashbuckle.AspNetCore" Version="6.4.0" />
    <PackageReference Include="Microsoft.NET.Build.Containers" Version="0.2.7" />
  </ItemGroup>

</Project>

ContainerBaseImage: overrides the base image detected from the project, letting you use a different runtime image if it better fits your needs.

ContainerRegistry: set this if you intend to push your image to a cloud registry.

At the moment of writing, authentication for a cloud registry isn’t supported yet.

ContainerImageTag: sets the image tag. If you need multiple tags, use ContainerImageTags instead and provide a comma-separated list. For environment variables, use the ContainerEnvironmentVariable item, and ports can be specified with ContainerPort.
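For example, a hypothetical project that publishes under two tags and also sets a port and an environment variable could combine these like so (illustrative values only; the tag-list separator has varied between SDK versions, so double-check the docs for the version you use):

```xml
<PropertyGroup>
  <!-- Illustrative: publish the image as both 1.0.1-patch2 and latest -->
  <ContainerImageTags>1.0.1-patch2,latest</ContainerImageTags>
</PropertyGroup>
<ItemGroup>
  <!-- Expose TCP port 80 and bake an environment variable into the image -->
  <ContainerPort Include="80" Type="tcp" />
  <ContainerEnvironmentVariable Include="ASPNETCORE_ENVIRONMENT" Value="Production" />
</ItemGroup>
```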

Conclusion

In conclusion, you no longer need a Dockerfile. This is good news: you can spend less time managing Dockerfiles and more time focusing on your application. This article showed you how to use the new Microsoft.NET.Build.Containers package to publish your .NET application as a Docker container.

You can check the source code here in my github repository https://github.com/mhdbouk/NoDockerFileSupport/

Happy coding and catch you in the next one šŸ‘‹

]]>
Update Azure SQL Server Firewall rules using Azure CLI https://mdbouk.com/update-azure-sql-server-firewall-rules-using-azure-cli/ Mon, 28 Nov 2022 00:00:00 +0000 https://mdbouk.com/update-azure-sql-server-firewall-rules-using-azure-cli/ Easily manage your Azure SQL Server firewall rules with Azure CLI. This guide shows you how to create, update, and automate firewall rule management using PowerShell and Azure CLI commands. If you’re used to working with Azure SQL Server without a private endpoint or virtual network, you might be used to adding your public IP to the Azure SQL Server Firewall rules each time your public IP changes. Luckily, there’s an easy option: use the Azure CLI to update the network firewall rules with your new public IP.

Get the Public IP using PowerShell

Getting your public IP address is quick and easy with PowerShell. Just use the following command:

$ip = (Invoke-WebRequest -uri "http://ifconfig.me/ip").Content

This invokes an HTTP request to http://ifconfig.me/ip and stores the response body in a new variable called $ip (we will use it later in the automation script).
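If you prefer a POSIX shell over PowerShell, the same lookup is a single curl call. This is a sketch that reuses the ifconfig.me endpoint from above; any plain-text IP echo service would work just as well:

```shell
# Return the machine's public IP by asking ifconfig.me, the same service
# the PowerShell snippet above uses. Requires curl.
get_public_ip() {
    curl -s https://ifconfig.me/ip
}

# Usage (when curl and a network connection are available):
# ip=$(get_public_ip)
```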

Azure CLI

Note #1
Make sure that Azure CLI is installed (Official Documentation) and that you are authenticated using the az login command

Create Firewall Rule

To create a new firewall rule with the Azure CLI, run the following command:

az sql server firewall-rule create --name "Local Home" --resource-group "rg-sql-customers-prod" --server "sql-customers-prod-001" --start-ip-address 1.2.3.4 --end-ip-address 5.6.7.8

--name specifies the name of the firewall rule; this is what appears in the Networking tab in the Azure Portal

--resource-group specifies the Azure SQL server's resource group name

--server specifies the Azure SQL server name

--start-ip-address is the start IPv4 address

--end-ip-address is the end IPv4 address

Note #2
If you need a range rule, you can specify different beginning and end public IP addresses; if you have only one public IP to set, both --start-ip-address and --end-ip-address should be the same

Once you run the create command, you will get the JSON response of the created resource (the rule).

{
  "endIpAddress": "5.6.7.8",
  "id": "/subscriptions/xxx/resourceGroups/rg-sql-customers-prod/providers/Microsoft.Sql/servers/sql-customers-prod-001/firewallRules/Local Home",
  "name": "Local Home",
  "resourceGroup": "rg-sql-customers-prod",
  "startIpAddress": "1.2.3.4",
  "type": "Microsoft.Sql/servers/firewallRules"
}

The same rule can be seen in the Azure Portal, under the Networking tab, as shown in the screenshot below

The Rule name, the start IP, and the end IP addresses in Azure Portal

Update an existing Firewall Rule

To update the firewall rule, use the following:

az sql server firewall-rule update --name "Local Home" --resource-group "rg-sql-customers-prod" --server "sql-customers-prod-001" --start-ip-address 1.2.3.4 --end-ip-address 5.6.7.8

Here the only values being updated are --start-ip-address and --end-ip-address, since the Azure CLI uses the rule name as the identifier.

List and Show rules

To list all firewall rules configured for a SQL server, run the following command:

az sql server firewall-rule list --resource-group "rg-sql-customers-prod" --server "sql-customers-prod-001"

and to show a single rule:

az sql server firewall-rule show --name "Local Home" --resource-group "rg-sql-customers-prod" --server "sql-customers-prod-001"

Let’s Automate things

If your ISP (internet service provider) gives you a static public IP, there is no need to automate anything: your rule is stored once and never needs to change. A dynamic public IP that changes often, however, makes automation a must, especially when multiple SQL servers need a manual update. For that scenario, I created the following script, which goes through all your SQL servers in a subscription and creates or updates the firewall rules with the new public IP

https://gist.github.com/mhdbouk/21a9d18a67336a4e12e6f99ad73fbdcc

In that script, I assume we have three different subscriptions (dev, staging, and production), each with multiple SQL servers. Note that to get all SQL servers I use az sql server list with the --query parameter to trim the JSON response down to just the name and resource group. I also filter the list of firewall rules with --query to check for an existing rule.
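The core of the idea can be sketched in a few lines of shell. This is a hedged sketch, not the gist itself: the rule name is illustrative, and it assumes az sql server firewall-rule create overwrites an existing rule of the same name, which is exactly why the gist checks for the rule explicitly first:

```shell
# Sketch: upsert a "Local Home" firewall rule with the current public IP
# on every SQL server in the active subscription. Requires az + curl.
update_all_rules() {
    ip=$(curl -s https://ifconfig.me/ip)
    # --query trims each server down to "name<TAB>resourceGroup"
    az sql server list --query "[].[name, resourceGroup]" -o tsv |
    while read -r server rg; do
        az sql server firewall-rule create \
            --name "Local Home" \
            --resource-group "$rg" \
            --server "$server" \
            --start-ip-address "$ip" \
            --end-ip-address "$ip" > /dev/null
    done
}

# Only run when the Azure CLI is actually installed (and you ran az login).
command -v az > /dev/null 2>&1 && update_all_rules || true
```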
Happy coding and automation

]]>
How to Mock using NSubstitute https://mdbouk.com/how-to-mock-using-nsubstitute/ Fri, 04 Nov 2022 00:00:00 +0000 https://mdbouk.com/how-to-mock-using-nsubstitute/ Learn how to effectively use NSubstitute for mocking in unit tests. This guide covers the basics of setting up NSubstitute, creating mock objects, and configuring return values for methods, helping you write clean and efficient unit tests in .NET. Unit tests are code written to test small pieces of functionality in large programs. Unit tests are designed to be simple: a “unit” in this sense is the smallest component of the larger code base that makes sense to test in isolation, typically a single method of a class.

Unit tests take milliseconds, can be run at the press of a button, and don’t necessarily require any knowledge of the system at large. When writing unit tests, you focus on the unit at hand and ignore other dependencies like services or external state. To accomplish that, we use mocking. There are many mocking frameworks out there; today we will check out NSubstitute.

To get started, add the NSubstitute package to your existing project by running the following:

dotnet add package NSubstitute

Now let’s say that we have the following basic interface.

public interface ICalculator
{
    int Add(int x, int y);
}

Here is a small test using xUnit and NSubstitute:

using NSubstitute;
using Xunit;

public class CalculatorTests
{
    [Fact]
    public void Calculator_Add_Test()
    {
        ICalculator calculator = Substitute.For<ICalculator>();

        calculator.Add(1, 2).Returns(3);

        var result = calculator.Add(1, 2);
        Assert.Equal(3, result);
    }
}

Let’s break it down. First, we create the mock object using Substitute.For<T>, and to tell our mock/substitute what it should return we use the following syntax:

calculator.Add(1, 2).Returns(3);

Here we configure the Add method: when it receives 1 and 2 as parameters, it returns 3.

We can add more complex arguments and parameter substitutions like the following

// Substitute for any argument
calculator.Add(Arg.Any<int>(), Arg.Any<int>()).Returns(10);

// Substitute for any argument matching a condition
calculator.Add(Arg.Is<int>(x => x < 100), 1).Returns(10);

// Return regardless of the arguments
calculator.Add(1, 2).ReturnsForAnyArgs(100); // this will always return 100

Summary

NSubstitute makes it straightforward to create mocks/substitutes for your objects. There is more you can do with NSubstitute; you can learn more in the documentation or check the code at github/nsubstitute

]]>
Build your own cli because you can https://mdbouk.com/build-your-own-cli-because-you-can/ Mon, 26 Sep 2022 00:00:00 +0000 https://mdbouk.com/build-your-own-cli-because-you-can/ I have always been impressed by the Azure CLI and how it has played a significant role in my DevOps journey over the years. That got me wondering: how can I build my own CLI in a similar vein? Follow along in this blog post, where I’ll be sharing my step-by-step process for creating a CLI using C# in .NET. Don’t miss out on this exciting adventure!

What is CLI

A command line interface (CLI) is a text-based user interface used to run programs, manage computer files, and interact with the machine. Back in the day, everything on a computer was done through a CLI; there was no GUI (Graphical User Interface), and users had to learn the CLI to perform any action. Nowadays, CLIs are widely used, especially by developers and other technical users. Some of the great CLIs out there are dotnet, git, the GitHub CLI, the Azure CLI, and others.

Project Overview

The main goal of this blog post is to create a CLI in C# with .NET 7. Since .NET 7 is cross-platform, by the end of this project you will have a fully functional, cross-platform command line interface that can be deployed on any machine. We will go through the process step by step, and all the code is available in the following GitHub repository.

We’ll be building a small todo CLI that will allow us to create and manage todo items. Let’s get started and see how we can put this powerful tool to use in our daily tasks and workflow.

Project Setup

Let’s start by creating our console app project using the dotnet cli (using cli to create our cli šŸ”„)

dotnet new console -o Hal9000Cli

ClichƩ Alert
I know it might be a bit of a clichĆ©, but I couldn’t resist using Hal9000 as the name for my CLI. So, that’s just what it is

Next, we need to add the CommandLineUtils library from the NuGet package manager. The library is an open-source project created by Nate McMaster that simplifies parsing arguments provided on the command line, validating user input, and generating help text:

dotnet add package McMaster.Extensions.CommandLineUtils

Let’s Start Coding

Open the new project in your preferred IDE and let’s start coding

There are two ways to use the CommandLineUtils library: attributes or the builder pattern. I will use the attributes method in this post; you can read more about both in the library’s documentation.

Create a new class called Hal9000Cmd.cs with the following content

[Command(Name = "hal9000", OptionsComparison = StringComparison.InvariantCultureIgnoreCase)]
[VersionOptionFromMember("--version", MemberName = nameof(GetVersion))]
public class Hal9000Cmd
{
    protected Task<int> OnExecute(CommandLineApplication app)
    {
        app.ShowHelp();
        return Task.FromResult(0);
    }
    
    private static string? GetVersion()
        => typeof(Hal9000Cmd).Assembly?.GetCustomAttribute<AssemblyInformationalVersionAttribute>()?.InformationalVersion;
}

Adding the Command attribute at the class level registers the command name, so typing hal9000 routes to this class. This will be our main class; we will add sub-commands to it as we move forward.

OnExecute is important here: this is the method that runs whenever you invoke the command from the terminal. It is called by CommandLineUtils via the Program.cs file, which we will add later.

The --version option now works and returns the assembly version, which defaults to 1.0.0. Let us change it by adding a Version tag to the project’s csproj file; mine is set to 0.0.1. Here is the csproj content so far:

<Project Sdk="Microsoft.NET.Sdk">

	<PropertyGroup>
		<OutputType>Exe</OutputType>
		<TargetFramework>net7.0</TargetFramework>
		<ImplicitUsings>enable</ImplicitUsings>
		<Nullable>enable</Nullable>
		<Authors>Mohamad Dbouk</Authors>
		<Product>Hal9000</Product>
		<AssemblyName>hal9000</AssemblyName>
		<Version>0.0.1</Version>
	</PropertyGroup>

	<ItemGroup>
		<PackageReference Include="McMaster.Extensions.CommandLineUtils" Version="4.0.1" />
	</ItemGroup>

</Project>

Add the following into Program.cs file

CommandLineApplication.Execute<Hal9000Cmd>(args);

CommandLineApplication.Execute<Hal9000Cmd> will instantiate the Hal9000Cmd class and invoke its OnExecute method

Now let us test the application, run the following dotnet command to build and run the application

dotnet run

This shows the app’s help page, which contains the version and the available commands (currently just the version option and the default help command):

? dotnet run
0.0.1

Usage: hal9000 [options]

Options:
  --version     Show version information.
  -?|-h|--help  Show help information.

Now run dotnet run --version; this returns the version specified in the csproj, 0.0.1

A nerdy CLI intro

So let’s add a cool CLI intro for our HAL9000, something nerdy using ASCII art

First, install the FIGlet.Net NuGet package using the following command:

dotnet add package FIGlet.Net --version 1.1.2

Then update the OnExecute method in the Hal9000Cmd class as follows:

protected Task<int> OnExecute(CommandLineApplication app)
{
    var displayTitle = new WenceyWang.FIGlet.AsciiArt("HAL9000");

    Console.ForegroundColor = ConsoleColor.Yellow;
    Console.WriteLine(displayTitle.ToString());
    Console.ResetColor();
    Console.WriteLine();

    app.ShowHelp();
    return Task.FromResult(0);
}

Now when you run the application you will get something like this:

? dotnet run

 _   _     _     _       ___    ___    ___    ___  
| | | |   / \   | |     / _ \  / _ \  / _ \  / _ \ 
| |_| |  / _ \  | |    | (_) || | | || | | || | | |
|  _  | / ___ \ | |___  \__, || |_| || |_| || |_| |
|_| |_|/_/   \_\|_____|   /_/  \___/  \___/  \___/

0.0.1

Usage: hal9000 [options]

Options:
  --version     Show version information.
  -?|-h|--help  Show help information.

The Todo Service

I’m not going to get fancy here: we will keep a JSON file on the local machine, and through the CLI we will be able to create, update, and query the list of task items in the todo list.

Now let us create two classes: TodoItem, the todo item model, and TodoService, which performs the different actions on our todo list.

TodoItem.cs

public class TodoItem
{
    public int Id { get; set; }
    public string? Title { get; set; }
    public bool Completed { get; set; }
    public DateTime? CompletedTime { get; set; }
    public DateTime? UpdatedTime { get; set; }
    public DateTime CreatedTime { get; set; }

    public override string ToString()
    {
        var options = new JsonSerializerOptions { WriteIndented = true };
        return JsonSerializer.Serialize(this, options);
    }
}

TodoService.cs

public class TodoService
{
    public TodoItem AddItem(string title)
    {
        ...
    }

    public TodoItem? UpdateItem(int id, string title)
    {
        ...
    }

    public TodoItem? MarkAsDone(int id)
    {
        ...
    }

    public List<TodoItem> GetItems()
    {
        ...
    }

    public TodoItem? GetItem(int id)
    {
        ...
    }
}

Check the full code in the GitHub repo here

Adding Sub Commands

To add a sub-command, we simply create a new class the same way we did Hal9000Cmd, adding the attributes to specify the command name. We need a TodoCmd class, which will in turn contain three sub-commands: CreateTodoItemCmd, UpdateTodoItemCmd, and QueryTodoItemCmd.

In the terminal, the end goal is to achieve the following

hal9000 todo create --title "Create a todo item"
// this will create a new todo item

hal9000 todo update --id 1 --done
// this will mark the item as done

hal9000 todo list --query "[*].{Id:Id, Title: Title}"
// this will return the list of items, shaped by the JMESPath query

To do so, we will create the main TodoCmd class and three sub-command classes to create, update, and query todo items.

Todo Command

Create a new file TodoCmd.cs and add the following boilerplate code to get it up and running:

[Command(Name = "todo", Description = "Manage Todo Items (Create, Update, and List)")]
public class TodoCmd
{
    protected Task<int> OnExecute(CommandLineApplication app)
    {
        app.ShowHelp();
        return Task.FromResult(0);
    }
}

Then add the Subcommand attribute on top of the Hal9000Cmd class:

[Command(Name = "hal9000", OptionsComparison = StringComparison.InvariantCultureIgnoreCase)]
[VersionOptionFromMember("--version", MemberName = nameof(GetVersion))]
[Subcommand(typeof(TodoCmd))]
public class Hal9000Cmd
{
  ...

Subcommand accepts a params array of Types, so any time we add a new sub-command, we need to register it as an additional parameter (we will do the same for every new command we add)

Now when you run the application you will see that there is a new section in the help output showing the Commands as shown in the following output

 _   _     _     _       ___    ___    ___    ___  
| | | |   / \   | |     / _ \  / _ \  / _ \  / _ \ 
| |_| |  / _ \  | |    | (_) || | | || | | || | | |
|  _  | / ___ \ | |___  \__, || |_| || |_| || |_| |
|_| |_|/_/   \_\|_____|   /_/  \___/  \___/  \___/

0.0.1

Usage: hal9000 [command] [options]

Options:
  --version     Show version information.
  -?|-h|--help  Show help information.

Commands:
  todo          Manage Todo Items (Create, Update, and List)

Run 'hal9000 [command] -?|-h|--help' for more information about a command.

Create, Update, and List Commands

Now, create 3 classes, CreateTodoItemCmd, UpdateTodoItemCmd, and QueryTodoItemCmd

For Create, we need the title as an argument, so we create a property and set the Option attribute as follows:

[Option(CommandOptionType.SingleValue, ShortName = "t", LongName = "title", Description = "The title of the todo item", ShowInHelpText = true, ValueName = "The title of the todo item")]
[Required]
public string? Title { get; set; }

Now, from the OnExecute method, we can call the TodoService to create a new item, as shown in the final CreateTodoItemCmd class:

[Command(Name = "create", Description = "Create a Todo Item")]
[HelpOption]
public class CreateTodoItemCmd
{
    private readonly IConsole _console;
    private readonly TodoService _todoService;

    public CreateTodoItemCmd(IConsole console, TodoService todoService)
    {
        _console = console;
        _todoService = todoService;
    }

    [Option(CommandOptionType.SingleValue, ShortName = "t", LongName = "title", Description = "The title of the todo item", ShowInHelpText = true, ValueName = "The title of the todo item")]
    [Required]
    public string? Title { get; set; }

    protected Task<int> OnExecute(CommandLineApplication app)
    {
        var item = _todoService.AddItem(Title!);

        _console.WriteLine($"Todo item \"{item.Title}\" with Id {item.Id} created successfully!");

        return Task.FromResult(0);
    }
}

Now after doing so, we can create an item using the following command:

? hal9000 todo create --title "Eat a large size pizza"                    
Todo item "Eat a large size pizza" with Id 10 created successfully!

JMESPath Query String

This is one of my favorite features. The Azure CLI has it, so I replicated what they did here. Basically, the list command returns all items with every property of the TodoItem model. But sometimes we don’t need all of the properties: maybe only the Id and Title, and maybe with some filtering applied (where Completed is false, or where CreatedTime > yesterday). For that we can use JMESPath via the JmesPath.Net library by jdevillard on GitHub, a fully compliant implementation of JMESPath for .NET.

Calling the JMESPath is very simple, add the following implementation

private string JMESPathTransform(string input)
{
    if (!string.IsNullOrEmpty(Query))
    {
        var jmes = new JmesPath();
        return jmes.Transform(input, Query);
    }

    // No query provided: return the input unchanged
    return input;
}

And then call the function from the QueryTodoItemCmd:

[Option(CommandOptionType.SingleValue, ShortName = "", LongName = "query", Description = " JMESPath query string. See http://jmespath.org/ for more information and examples.", ShowInHelpText = true, ValueName = "")]
public string? Query { get; set; }

protected Task<int> OnExecute(CommandLineApplication app)
{
    var items = _todoService.GetItems();

    var options = new JsonSerializerOptions { WriteIndented = true };
    string jsonString = JsonSerializer.Serialize(items, options);

    if (!string.IsNullOrEmpty(Query))
    {
        jsonString = JMESPathTransform(jsonString);
        jsonString = JsonSerializer.Serialize(JsonSerializer.Deserialize<object>(jsonString), options);
    }

    _console.WriteLine(jsonString);

    return Task.FromResult(0);
}

And now, from your terminal you can perform something like this:

? Hal9000 todo list --query "[*].{Id: Id, Title: Title, Date: CreatedTime}"
[
  {
    "Id": 1,
    "Title": "Create a Blog",
    "Date": "2022-09-26T12:31:05.86027\u002B03:00"
  },
  {
    "Id": 2,
    "Title": "Finish the blog",
    "Date": "2022-09-26T12:31:07.932452\u002B03:00"
  },
  {
    "Id": 3,
    "Title": "Eat Pizza",
    "Date": "2022-09-26T12:31:08.722755\u002B03:00"
  }
]

Or even something like this šŸ•

? hal9000 todo list --query "[?contains(Title, 'Pizza')].{Id: Id, Title: Title}"
[
  {
    "Id": 5,
    "Title": "Order a large Pizza"
  },
  {
    "Id": 6,
    "Title": "Order a small Pizza"
  },
  {
    "Id": 8,
    "Title": "Pizza Party!"
  },
  {
    "Id": 10,
    "Title": "Eat a large size Pizza"
  }
]

This is very helpful if you decide to do some scripting on top of your CLI (getting data with different filters and then calling an API to store it in a database, and so on).
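As a hedged sketch of that idea: hal9000 must be on your PATH, jq is assumed for splitting the JSON array, and the API URL is imaginary:

```shell
# Pull only the pizza-related todos and POST each one to a (made-up) API.
sync_pizza_todos() {
    hal9000 todo list --query "[?contains(Title, 'Pizza')].{Id: Id, Title: Title}" |
    jq -c '.[]' |  # one JSON object per line
    while read -r item; do
        curl -s -X POST \
            -H "Content-Type: application/json" \
            -d "$item" \
            https://example.com/api/todos > /dev/null
    done
}
```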

Adding PostBuild commands

To test the application directly from the terminal, add the following to the project’s csproj file:

<Target Name="PostBuildWindows" AfterTargets="PostBuildEvent" Condition=" '$(OS)' == 'Windows_NT' ">
	<Exec Command="xcopy &quot;$(SolutionDir)\bin\Debug\net7.0&quot; &quot;C:\Hal9000\&quot;  /Y /I /E" />
</Target>
<Target Name="PostBuildUnix" AfterTargets="PostBuildEvent" Condition=" '$(OS)' != 'Windows_NT' ">
	<Exec Command="cp $(SolutionDir)/bin/Debug/net7.0/* /Users/mohamaddbouk/Hal9000" />
</Target>

Here I added two post-build targets (with distinct names, since MSBuild only keeps the last definition of a given target name): one for when the OS is Windows, and one for everything else (macOS, Linux, and so on). They run after a full build of the project, automatically copying the bin directory to another location. Once you build the project, you can open the terminal and run the Hal9000 application.

Environment variable of the output directory
Make sure to create the directory and add it to the PATH environment variable. Once added, restart your terminal and you will be able to call Hal9000
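On macOS or Linux that boils down to one export line. The directory here matches the post-build copy destination above; adjust it to wherever you copied the build output:

```shell
# Make the copied build output callable as `hal9000` in the current session;
# add the same line to ~/.zshrc or ~/.bashrc to make it permanent.
HAL_DIR="/Users/mohamaddbouk/Hal9000"
export PATH="$PATH:$HAL_DIR"
```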

Summary

Wow, that was a long blog post! I hope you learned something new today. We talked about CLIs, how to create them, and how to add multiple sub-commands with different attributes; we added the JMESPath functionality and post-build events in Visual Studio; we styled our CLI with FIGlet ASCII art; and, most importantly, we had fun doing all of that.

Thank you for reading and catch you in the next one, Ciao šŸ‘‹

]]>
Let’s benchmark .NET using BenchmarkDotNet! https://mdbouk.com/lets-benchmark-.net-using-benchmarkdotnet/ Thu, 15 Sep 2022 00:00:00 +0000 https://mdbouk.com/lets-benchmark-.net-using-benchmarkdotnet/ In this blog, we will take a deep dive into the BenchmarkDotNet package and see how it can help us improve our .NET applications.

What is Benchmarking?

Benchmarking is the act of comparing similar products, services, or processes in the same industry. When benchmarking, it is essential to measure quality, time, and cost. Benchmarking tells you whether you are following the latest and best practices in your industry, and it helps you identify your strengths and weaknesses.

And what is BenchmarkDotNet

BenchmarkDotNet is a lightweight, powerful, open-source .NET library for benchmarking. It helps you transform methods into benchmarks, track their performance, and examine measurement experiments. Writing benchmarks feels very similar to writing unit tests: you create methods, run them, and get a user-friendly report with all the important facts about the experiment.

It’s Demo Time

It is easy to start benchmarking in C#. First, create a new console application:

dotnet new console -o BenchmarkDotNetDemo

Navigate to the newly created folder BenchmarkDotNetDemo and then add the BenchmarkDotNet library using the following command

dotnet add package BenchmarkDotNet

Let’s say we have a class that filters and retrieves data using different methods; we are going to benchmark several of them, using LINQ as well as plain loops. Create the following class:

public class DataService
{
    List<string> _data = new()
    {
        "Carlos", "Adelaide", "Dexter", "Connie", "Annabella", "Sophia", "Alissa", "Kimberly", "Isabella", "Adam", "Valeria",
        "Tiana", "Michelle", "Justin", "Cadie", "Owen", "Mary", "Edwin", "Audrey", "Eddy", "Sarah", "Patrick", "Daniel",
        "Emily", "Sam", "Clark", "James", "Alen", "Michael", "Lenny", "Penelope", "Victor", "Ryan", "Lilianna", "Aida",
        "Lana", "Andrew", "Maximilian", "Savana", "Edward", "Sofia", "Harold", "Amelia", "Rosie", "William"
    };

    public string? GetDataByFirstOrDefault(string key)
    {
        return _data.FirstOrDefault(x => key == x);
    }
    
    public string GetDataByFirst(string key)
    {
        return _data.First(x => key == x);
    }
    
    public string GetDataBySingle(string key)
    {
        return _data.Single(x => key == x);
    }

    public string? GetDataByForEach(string key)
    {
        foreach (var item in _data)
        {
            if (item == key)
            {
                return item;
            }
        }
        return default;
    }

    public string? GetDataByForLoop(string key)
    {
        for (int i = 0; i < _data.Count; i++)
        {
            if (_data[i] == key)
            {
                return _data[i];
            }
        }
        return default;
    }
}

As you can see, the DataService class contains a List of strings with random names, plus several methods that retrieve data in different ways. We will benchmark these methods to see which is the most efficient and which we should avoid (depending on the use case, of course).

Now let’s create a new class, DataBenchmark, holding the benchmark methods. You will see it looks very similar to a unit test class; just keep in mind that each method needs the Benchmark attribute.

using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
[Orderer(BenchmarkDotNet.Order.SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class DataBenchmark
{
    DataService _service = new DataService();

    [Benchmark(Baseline = true)]
    public void GetDataByFirstOrDefault()
    {
        _service.GetDataByFirstOrDefault("Isabella");
    }

    [Benchmark]
    public void GetDataByForEach()
    {
        _service.GetDataByForEach("Isabella");
    } 
    [Benchmark]
    public void GetDataByFirst()
    {
        _service.GetDataByFirst("Isabella");
    } 
    [Benchmark]
    public void GetDataBySingle()
    {
        _service.GetDataBySingle("Isabella");
    }

    [Benchmark]
    public void GetDataByForLoop()
    {
        _service.GetDataByForLoop("Isabella");
    }
}

Add the following in Program.cs file

using BenchmarkDotNet.Running;

BenchmarkRunner.Run<DataBenchmark>();

Now run the console app in Release mode using the following command:

dotnet run -c Release

After benchmarking is completed, you will see the following result in the terminal

|                  Method |      Mean |     Error |    StdDev |    Median | Ratio | RatioSD | Rank |   Gen0 | Allocated | Alloc Ratio |
|------------------------ |----------:|----------:|----------:|----------:|------:|--------:|-----:|-------:|----------:|------------:|
|        GetDataByForLoop |  36.56 ns |  0.662 ns |  1.211 ns |  36.24 ns |  0.19 |    0.01 |    1 |      - |         - |        0.00 |
|        GetDataByForEach |  37.09 ns |  0.772 ns |  1.450 ns |  36.80 ns |  0.19 |    0.01 |    1 |      - |         - |        0.00 |
| GetDataByFirstOrDefault | 195.22 ns |  4.120 ns | 11.689 ns | 193.47 ns |  1.00 |    0.00 |    2 | 0.0305 |     128 B |        1.00 |
|          GetDataByFirst | 215.05 ns | 11.025 ns | 31.632 ns | 199.19 ns |  1.10 |    0.19 |    3 | 0.0305 |     128 B |        1.00 |
|         GetDataBySingle | 857.50 ns | 27.473 ns | 79.704 ns | 827.47 ns |  4.42 |    0.50 |    4 | 0.0305 |     128 B |        1.00 |

The same result can be found in the folder BenchmarkDotNet.Artifacts in the release bin folder, with CSV, HTML, and MD files.

Now let us explain the result we got

Results and Summary

As we can see, GetDataBySingle ranked worst at 857.50 ns, while GetDataByForLoop ranked first at 36.56 ns. It is logical for Single to be last: it must scan the entire list to verify that the matching item is unique. BenchmarkDotNet helped us cross-check the different Get methods and optimize our code base to meet our performance standards.

SingleOrDefault in EF SqlServer
In Entity Framework with SQL Server, a SingleOrDefault call is translated into SELECT TOP(2): if 0 rows come back, you get the default value; if 1 row, you get that value; and if 2 rows, an exception is thrown.

If you are still here (thanks), please let me know what kind of benchmarking you are going to use by leaving a comment.

Thank you for reading, till next time šŸ‘‹

]]>
Hey there šŸ‘‹ I’m Mohamad! https://mdbouk.com/about/ Mon, 01 Jan 0001 00:00:00 +0000 https://mdbouk.com/about/

I’m a Solution Architect and Principal Software Engineer with over 15 years of experience designing and delivering cloud-native, event-driven, and multi-tenant SaaS platforms on .NET and Microsoft Azure.

My focus is on architecting scalable, secure, and maintainable systems - built with Clean Architecture, DDD, and CQRS - that evolve gracefully as products grow. I’m passionate about building software that’s not just functional, but resilient, observable, and fun to maintain.

Over the years, I’ve led engineering teams, designed modular monolith and microservice architectures, and helped organizations modernize their infrastructure using Azure, Terraform, and GitHub Actions.

Outside of work, I share what I learn through my YouTube channel and this blog - breaking down real-world architecture problems and modern .NET practices into simple, practical lessons.

I also teach software architecture and programming fundamentals, helping developers bridge the gap between writing code and designing systems that scale.

If you’d like to connect, reach out on LinkedIn or just explore the articles and projects I share here.


šŸ’¬ What I Do

  • Architect and build modern SaaS platforms using .NET, C#, and Azure
  • Design modular monolith, microservice, and event-driven systems
  • Automate environments and delivery pipelines with Terraform and GitHub Actions
  • Mentor teams on Clean Architecture, DDD, CQRS, and TDD
  • Share insights and technical content on YouTube and mdbouk.com

Thanks for visiting šŸ‘‹

]]>