Conduct of Code, by Henrik Lau Eriksson

Creating custom PowerToys Run plugins (2024-02-13)

PowerToys Run is a quick launcher for Windows. It is open source and modular, with support for additional plugins.

Official plugins include:

  • Calculator
  • Unit Converter
  • Value Generator
  • Windows Search

At the time of writing, there are 20 plugins out of the box.

If you think the official plugins are not enough, you can write your own. The easiest way to get started is to look at what others did.

Browsing through some of the GitHub repos found above gives you an idea of what the source code of a plugin looks like.

Demo Plugin

As a demo, I created a simple plugin that counts the words and characters of the query.

Demo Plugin

  • ActionKeyword: demo

Settings:

Demo Settings

  • Count spaces: true | false

The source code:

Throughout this blog post, the demo plugin will be used as an example.
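The core of such a plugin is small. Here is a minimal sketch of what the counting logic could look like; the names and exact behavior are illustrative assumptions, not taken from the actual demo source:

```csharp
using System;
using System.Linq;

// Hypothetical helper; the real demo plugin may structure this differently.
public static class Counter
{
    // Counts the words and characters of a query string.
    // countSpaces mirrors the "Count spaces" setting shown above.
    public static (int Words, int Characters) Count(string input, bool countSpaces)
    {
        var words = input.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;
        var characters = countSpaces ? input.Length : input.Count(c => c != ' ');
        return (words, characters);
    }
}
```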

Project

Before you create your own project, first take a look at the official checklist:

Key takeaways from the checklist:

  • Project name: Community.PowerToys.Run.Plugin.<PluginName>
  • Target framework: net8.0-windows
  • Create a Main.cs class
  • Create a plugin.json file

In Visual Studio, create a new Class Library project.

Then edit the .csproj file to look something like this:

  • Platforms: x64 and ARM64
  • UseWPF to include references to WPF assemblies
  • Dependencies: PowerToys and Wox .dll assemblies

The .dll files referenced in the .csproj file are examples of dependencies needed, depending on what features your plugin should support.
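For reference, such a .csproj could look like this sketch; the exact set of referenced assemblies depends on which features the plugin uses, and the HintPath layout assumes the conventional libs folder:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net8.0-windows</TargetFramework>
    <Platforms>x64;ARM64</Platforms>
    <UseWPF>true</UseWPF>
  </PropertyGroup>

  <ItemGroup>
    <!-- DLLs copied from the PowerToys installation, per platform -->
    <Reference Include="PowerToys.Settings.UI.Lib">
      <HintPath>libs\$(Platform)\PowerToys.Settings.UI.Lib.dll</HintPath>
      <Private>false</Private>
    </Reference>
    <Reference Include="Wox.Infrastructure">
      <HintPath>libs\$(Platform)\Wox.Infrastructure.dll</HintPath>
      <Private>false</Private>
    </Reference>
    <Reference Include="Wox.Plugin">
      <HintPath>libs\$(Platform)\Wox.Plugin.dll</HintPath>
      <Private>false</Private>
    </Reference>
  </ItemGroup>

</Project>
```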

Unfortunately, there are no official NuGet packages for these assemblies.

Traditionally, plugin authors commit these .dll files to the repo in a libs folder, both the x64 and ARM64 versions.

You can copy the DLLs for your platform architecture from the installation location:

  • C:\Program Files\PowerToys\
    • Machine wide installation of PowerToys
  • %LocalAppData%\PowerToys\
    • Per user installation of PowerToys

You can build the DLLs for the other platform architecture from source:

Other plugin authors like to resolve the dependencies by referencing the PowerToys projects directly, like the approach by Lin Yu-Chieh (Victor):

I have created a NuGet package that simplifies referencing all PowerToys Run plugin dependencies:

When using Community.PowerToys.Run.Plugin.Dependencies the .csproj file can look like this:

I have also created dotnet new templates that simplify creating PowerToys Run plugin projects and solutions:

dotnet new list PowerToys

Anyway, it doesn’t matter if you create a project via templates or manually. The project should start out with these files:

  • Images\*.png
    • Typically dark and light versions of icons
  • Main.cs
    • The starting point of the plugin logic
  • plugin.json
    • The plugin metadata

Metadata

Create a plugin.json file that looks something like this:
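As a sketch, a plugin.json for the demo plugin could look like this; the ID and action keyword come from this post, while the remaining field values are assumptions:

```json
{
    "ID": "AE953C974C2241878F282EA18A7769E4",
    "ActionKeyword": "demo",
    "IsGlobal": false,
    "Name": "Demo",
    "Author": "hlaueriksson",
    "Version": "1.0.0",
    "Language": "csharp",
    "Website": "https://conductofcode.io/",
    "ExecuteFileName": "Community.PowerToys.Run.Plugin.Demo.dll",
    "IcoPathDark": "Images\\demo.dark.png",
    "IcoPathLight": "Images\\demo.light.png",
    "DynamicLoading": false
}
```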

The format is described in the Dev Documentation:

Main

Create a Main.cs file that looks something like this:

The Main class must have a public, static string property named PluginID:

public static string PluginID => "AE953C974C2241878F282EA18A7769E4";
  • A 32-digit GUID without hyphens
  • Must match the value in the plugin.json file

In addition, the Main class should implement a few interfaces.

Let’s break down the implemented interfaces and the classes used in the example above.

Interfaces

Some interfaces of interest from the Wox.Plugin assembly:

  • IPlugin
  • IPluginI18n
  • IDelayedExecutionPlugin
  • IContextMenu
  • ISettingProvider

IPlugin

The most important interface is IPlugin:

public interface IPlugin
{
    List<Result> Query(Query query);

    void Init(PluginInitContext context);

    string Name { get; }

    string Description { get; }
}
  • Query is the method that does the actual logic in the plugin
  • Init is used to initialize the plugin
    • Save a reference to the PluginInitContext for later use
  • Name ought to match the value in the plugin.json file, but can be localized
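Putting the interface members together, a bare-bones implementation could look like this sketch; the word counting is illustrative and the error handling is left out:

```csharp
using System;
using System.Collections.Generic;
using Wox.Plugin;

public partial class Main : IPlugin
{
    public static string PluginID => "AE953C974C2241878F282EA18A7769E4";

    public string Name => "Demo";

    public string Description => "Counts words and characters";

    private PluginInitContext? Context { get; set; }

    public void Init(PluginInitContext context)
    {
        // Save the context for later use, e.g. Context.API
        Context = context;
    }

    public List<Result> Query(Query query)
    {
        var words = query.Search.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;

        return new List<Result>
        {
            new Result
            {
                Title = $"Words: {words}",
                SubTitle = $"Characters: {query.Search.Length}",
                Action = _ => true, // nothing to do when pressing Enter
            },
        };
    }
}
```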

IPluginI18n

If you want to support internationalization you can implement the IPluginI18n interface:

public interface IPluginI18n
{
    string GetTranslatedPluginTitle();

    string GetTranslatedPluginDescription();
}

IDelayedExecutionPlugin

The IDelayedExecutionPlugin interface provides an alternative Query method:

public interface IDelayedExecutionPlugin
{
    List<Result> Query(Query query, bool delayedExecution);
}

The delayed execution can be used for queries that take some time to run. PowerToys Run will add a slight delay before the Query method is invoked, so that the user has some extra milliseconds to finish typing that command.

A delay can be useful for queries that perform:

  • I/O operations
  • HTTP requests
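A plugin implementing IDelayedExecutionPlugin would typically short-circuit until the delayed call arrives; a sketch, assuming the expensive work is an HTTP request:

```csharp
using System.Collections.Generic;
using Wox.Plugin;

public partial class Main : IDelayedExecutionPlugin
{
    public List<Result> Query(Query query, bool delayedExecution)
    {
        // Return early until PowerToys Run signals the delayed execution,
        // so the expensive work is not done on every keystroke.
        if (!delayedExecution || string.IsNullOrWhiteSpace(query.Search))
        {
            return new List<Result>();
        }

        // Perform the slow operation here, e.g. an HTTP request,
        // and map the response to a list of results.
        return new List<Result>
        {
            new Result { Title = $"Looked up: {query.Search}" },
        };
    }
}
```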

IContextMenu

The IContextMenu interface is used to add context menu buttons to the query results:

public interface IContextMenu
{
    List<ContextMenuResult> LoadContextMenus(Result selectedResult);
}
  • Every Result can be enhanced with custom buttons

ISettingProvider

If the plugin is sophisticated enough to have custom settings, implement the ISettingProvider interface:

public interface ISettingProvider
{
    Control CreateSettingPanel();

    void UpdateSettings(PowerLauncherPluginSettings settings);

    IEnumerable<PluginAdditionalOption> AdditionalOptions { get; }
}
  • CreateSettingPanel usually throws a NotImplementedException
  • UpdateSettings is invoked when the user updates the settings in the PowerToys GUI
    • Use this method to save the custom settings and update the state of the plugin
  • AdditionalOptions is invoked when the PowerToys GUI displays the settings
    • Use this property to define how the custom settings are rendered in the PowerToys GUI
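A sketch of how the demo plugin's "Count spaces" setting could be wired up; the namespace and the PluginAdditionalOption properties are taken from my reading of the PowerToys source and may differ between versions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Controls;
using Microsoft.PowerToys.Settings.UI.Library;

public partial class Main : ISettingProvider
{
    // Mirrors the "Count spaces" setting of the demo plugin.
    public bool CountSpaces { get; set; }

    // Defines how the setting is rendered in the PowerToys GUI.
    public IEnumerable<PluginAdditionalOption> AdditionalOptions => new[]
    {
        new PluginAdditionalOption
        {
            Key = nameof(CountSpaces),
            DisplayLabel = "Count spaces",
            Value = CountSpaces,
        },
    };

    // Invoked when the user updates the settings in the PowerToys GUI.
    public void UpdateSettings(PowerLauncherPluginSettings settings)
    {
        CountSpaces = settings?.AdditionalOptions?
            .FirstOrDefault(o => o.Key == nameof(CountSpaces))?.Value ?? false;
    }

    public Control CreateSettingPanel() => throw new NotImplementedException();
}
```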

Classes

Some classes of interest from the Wox.Plugin assembly:

  • PluginInitContext
  • Query
  • Result
  • ContextMenuResult

PluginInitContext

A PluginInitContext instance is passed as argument to the Init method:

public class PluginInitContext
{
    public PluginMetadata CurrentPluginMetadata { get; internal set; }

    public IPublicAPI API { get; set; }
}
  • PluginMetadata can be useful if you need the path to the PluginDirectory or the ActionKeyword of the plugin
  • IPublicAPI is mainly used to GetCurrentTheme, but can also ShowMsg, ShowNotification or ChangeQuery
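In practice, Init typically stores the context and reacts to theme changes to pick the right icon; a sketch, where the event signature and the ManagedCommon Theme enum are assumptions based on the official plugins:

```csharp
using System;
using ManagedCommon; // assumed home of the Theme enum
using Wox.Plugin;

public partial class Main
{
    private PluginInitContext? Context { get; set; }

    private string IconPath { get; set; } = "Images/demo.dark.png";

    public void Init(PluginInitContext context)
    {
        Context = context ?? throw new ArgumentNullException(nameof(context));
        // Pick the icon matching the current theme, and follow future changes
        Context.API.ThemeChanged += OnThemeChanged;
        UpdateIconPath(Context.API.GetCurrentTheme());
    }

    private void OnThemeChanged(Theme currentTheme, Theme newTheme) => UpdateIconPath(newTheme);

    private void UpdateIconPath(Theme theme) =>
        IconPath = theme is Theme.Light or Theme.HighContrastWhite
            ? "Images/demo.light.png"
            : "Images/demo.dark.png";
}
```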

Query

A Query instance is passed to the Query methods defined in the IPlugin and IDelayedExecutionPlugin interfaces.

Properties of interest:

  • Search returns what the user has searched for, excluding the action keyword.
  • Terms returns the search as a collection of substrings, split by space (" ")
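To make the two properties concrete, here is how a typed query would break down, assuming `demo` is the action keyword (the exact collection types may differ):

```csharp
// Suppose the user types: "demo hello world"
//
//   query.Search => "hello world"        (the action keyword is excluded)
//   query.Terms  => ["hello", "world"]   (split by space " ")

var words = query.Terms.Count;        // 2
var characters = query.Search.Length; // 11
```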

Result

A list of Result objects is returned by the Query methods defined in the IPlugin and IDelayedExecutionPlugin interfaces.

Example of how to create a new result:

new Result
{
    QueryTextDisplay = query.Search, // displayed where the user types queries
    IcoPath = IconPath, // displayed on the left side
    Title = "A title displayed in the top of the result",
    SubTitle = "A subtitle displayed under the main title",
    ToolTipData = new ToolTipData("A tooltip title", "A tooltip text\nthat can have\nmultiple lines"),
    Action = _ =>
    {
        Log.Debug("The actual action of the result when pressing Enter.", GetType());
        /*
        For example:
        - Copy something to the clipboard
        - Open a URL in a browser
        */
    },
    Score = 1, // the higher, the better query match
    ContextData = someObject, // used together with the IContextMenu interface
}

ContextMenuResult

A list of ContextMenuResult objects is returned by the LoadContextMenus method defined in the IContextMenu interface. These objects are rendered as small buttons, displayed on the right side of the query result.

Example of how to create a new context menu result:

new ContextMenuResult
{
    PluginName = Name,
    Title = "A title displayed as a tooltip",
    FontFamily = "Segoe Fluent Icons,Segoe MDL2 Assets",
    Glyph = "\xE8C8", // Copy
    AcceleratorKey = Key.C,
    AcceleratorModifiers = ModifierKeys.Control,
    Action = _ =>
    {
        Log.Debug("The actual action of the context menu result, when clicking the button or pressing the keyboard shortcut.", GetType());
        /*
        For example:
        - Copy something to the clipboard
        - Open a URL in a browser
        */
    },
}

Find the perfect Glyph to use from:

Actions

Examples of actions to use with Result or ContextMenuResult:

Action = _ =>
{
    System.Windows.Clipboard.SetText("Some text to copy to the clipboard");
    return true;
}
Action = _ =>
{
    var url = "https://conductofcode.io/";

    if (!Helper.OpenCommandInShell(DefaultBrowserInfo.Path, DefaultBrowserInfo.ArgumentsPattern, url))
    {
        Log.Error("Open default browser failed.", GetType());
        Context?.API.ShowMsg($"Plugin: {Name}", "Open default browser failed.");
        return false;
    }

    return true;
}

Logging

Logging is done with the static Log class, from the Wox.Plugin.Logger namespace. Under the hood, NLog is used.

Five log levels:

Log.Debug("A debug message", GetType());
Log.Info("An information message", GetType());
Log.Warn("A warning message", GetType());
Log.Error("An error message", GetType());
Log.Exception("An exceptional message", new Exception(), GetType());

The logs are written to .txt files, rolled by date, at:

  • %LocalAppData%\Microsoft\PowerToys\PowerToys Run\Logs\<Version>\

Dependencies

If you have the need to add third party dependencies, take a look at what is already used by PowerToys.

NuGet packages and the versions specified in the .props file are candidates to reference in your own .csproj file.

Packages of interest:

  • LazyCache
  • System.Text.Json

If the plugin uses any third party dependencies that are not referenced by PowerToys Run, you need to enable DynamicLoading.

In the plugin.json file:

{
    // ...
    "DynamicLoading": true
}
  • true makes PowerToys Run dynamically load any .dll files in the plugin folder

Tests

You can write unit tests for your plugin. The official plugins use the MSTest framework and Moq for mocking.

  • Project name: Community.PowerToys.Run.Plugin.<PluginName>.UnitTests
  • Target framework: net8.0-windows

The .csproj file of a unit test project may look something like this:

  • Apart from the actual test assemblies, some package references are also needed
  • As well as references to PowerToys and Wox .dll assemblies

Unit tests of the Main class may look something like this:

Some of the official plugins have unit test coverage:

Distribution

Unfortunately, the plugin manager in PowerToys Run does not offer support for downloading new plugins.

Community plugins are traditionally packaged in .zip files and distributed via releases in GitHub repositories.

The process is described in an unofficial checklist:

The Everything plugin by Lin Yu-Chieh (Victor) is next level and is distributed via:

  • Self-Extraction Installer (EXE)
  • Manual Installation (ZIP)
  • WinGet
  • Chocolatey

Linting

I have created a linter for PowerToys Run community plugins:

ptrun-lint

When running the linter on your plugin, any issues are reported with a code and a description. If nothing shows up, your plugin is awesome. The lint rules are codified from the guidelines in the Community plugin checklist.

Resources

Demo Plugin:

Awesome PowerToys Run Plugins:

Third-Party plugins for PowerToys Run:

dotnet new templates for community plugins:

Documentation:

Dev Documentation:

If you want to look under the hood, fork or clone the PowerToys repo:

Get the solution to build on your machine with the help of the documentation:

Retry flaky tests with dotnet test and PowerShell (2023-03-26)

At my current client, Nordic Leisure Travel Group, we have a large number of web tests that automate some of the QA effort. The tests are written in Selenium and Playwright. They run on a schedule in Jenkins and we get reports by email. They take quite some time to run. Unfortunately, some of the tests are a bit flaky and it is hard to make them stable.

Therefore I set out trying to mitigate failed test runs by rerunning the failing tests.

A question/answer on Stack Overflow inspired me to write my own test script.

My requirements were:

  • Use PowerShell
  • Use the dotnet test CLI command
  • Keep track of failing tests and retry them
  • Notify the test framework of the current retry iteration
  • Make it possible to accept a certain percentage of failing tests
  • Output the test result in a coherent way

This resulted in the following script:

The script runs dotnet test and uses the trx logger result file to collect failed tests. It then reruns the failed tests and reports the final result.

Parameters

  • Path - Path to the: project | solution | directory | dll | exe
  • Configuration - Build configuration for the environment-specific appsettings.json file.
    • Default value: Debug
    • Valid values: Debug | Release | Development | Production
  • Filter - Filter to run selected tests based on: TestCategory | Priority | Name | FullyQualifiedName
  • Settings - Path to the .runsettings file.
  • Retries - Number of retries for each failed test.
    • Default value: 2
    • Valid range: 1-9
  • Percentage - Required percentage of passed tests.
    • Default value: 100
    • Valid range: 0-100

Examples

Run regression tests:

.\test.ps1 -filter "TestCategory=RegressionTest"

Run FooBar smoke tests in Development:

.\test.ps1 .\FooBar.Tests\FooBar.Tests.csproj -filter "TestCategory=SmokeTest" -configuration "Development"

Retry failed tests once and report the run as green if 95% of the tests passed:

.\test.ps1 -retries 1 -percentage 95

Run tests configured with a .runsettings file:

.\test.ps1 -settings .\test.runsettings

Output

If the tests passed, or the required percentage of tests passed, the script returns exit code 0. Otherwise, the number of failed tests is returned as the exit code.

Output when successful:

Test Rerun Successful.

Output when required percentage of tests passed:

Test Rerun Successful.

Output when failed:

Test Rerun Failed.

Showcase:

.\test.ps1 -filter FullyQualifiedName=ConductOfCode.NUnitTests

Tests

To showcase the test script I have written some tests in three popular frameworks.

The script and tests are available in this repo:

NUnit

  • The Category attribute corresponds to the TestCategory filter in the script.
  • The Retry parameter in the TestContext contains the current retry iteration.

MSTest

  • The TestCategory attribute corresponds to the TestCategory filter in the script.
  • The Retry property in the TestContext contains the current retry iteration.

XUnit

Playwright

Playwright enables reliable end-to-end testing for modern web apps.

Web testing might be hard to get right and flaky tests can benefit from being retried.

Here is an example with Playwright and NUnit:

  • The SetUp method starts a trace recording if the test is in retry mode.
  • The TearDown method exports the trace into a zip archive if the test failed.

The trace zip archives of failing tests can be examined at:

A .runsettings file can be used to customize Playwright options:

Run the Playwright tests with the .runsettings file:

.\test.ps1 -filter FullyQualifiedName=ConductOfCode.PlaywrightTests -settings .\test.runsettings

Resources

If you use Azure Pipelines consider using the Visual Studio Test v2 task with the rerunFailedTests option:

Introducing LoFuUnit (2019-12-29)

About a year ago I was experimenting with local functions and reflection, to improve my unit tests and the way I write BDD specs. I had been using Machine.Specifications and Machine.Fakes for years. MSpec has served me well, but the lack of async support made me look for alternatives.

The experiment with local functions in tests led to the release of LoFuUnit:

Testing with Local Functions 🐯

in .NET / C# ⚙️

with your favorite Unit Testing Framework ✔️

Recently I published version 1.1.1 of LoFuUnit, so I thought it would be a good time to introduce the framework to you in the form of a blog post.

Tests

Let’s try LoFuUnit and see how tests are written. I’ll use the same challenge from a previous blog post about BDD frameworks for .NET / C#:

The subject

The stack, a last-in-first-out (LIFO) collection:

The tests will assert the behavior of:

  • Empty vs nonempty stack
  • Peek() and Pop() methods
  • InvalidOperationException

Stack

Tests with LoFuUnit.NUnit:

  • LoFuUnit.NUnit provides the [LoFu] attribute, apply it to your test methods
  • Write the test steps in local functions, they are implicitly invoked by the [LoFu] attribute
  • Think about the naming of test methods and test functions, it will determine the test output
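Based on those conventions, a test fixture for the stack could look like this sketch; the naming is mine, and the actual sample in the repo may differ:

```csharp
using System;
using System.Collections.Generic;
using LoFuUnit.NUnit;
using NUnit.Framework;

public class StackTests
{
    private Stack<string> _subject;

    [LoFu, Test]
    public void When_the_stack_is_empty()
    {
        _subject = new Stack<string>();

        // The local functions below are implicitly invoked by the [LoFu] attribute,
        // and their names determine the test output.
        void should_throw_when_Peek_is_invoked() =>
            Assert.Throws<InvalidOperationException>(() => _subject.Peek());

        void should_throw_when_Pop_is_invoked() =>
            Assert.Throws<InvalidOperationException>(() => _subject.Pop());
    }
}
```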

Test output:

Unit Test Sessions

Unit Test Sessions

Auto Mocks

Mocking is nice, automocking is even better

Auto-mocking is part of LoFuUnit. Let’s see how auto-mocked tests are written. I’ll use the same challenge from a previous blog post about Automocked base class for NUnit tests:

The subject:

Auto-mocked tests with LoFuUnit.AutoNSubstitute and LoFuUnit.NUnit:

  • LoFuUnit.AutoNSubstitute provides the base class LoFuTest<TSubject>, inherit your test fixtures from it
  • The Use<TDependency>() methods give you the ability to set up the mocks
  • The The<TDependency>() method lets you access the mocks and verify expectations

Test output:

Unit Test Sessions

Installation

LoFuUnit is distributed as 7 packages via NuGet. The code examples in this blog post used two of them.

LoFuUnit.NUnit:

LoFuUnit.AutoNSubstitute:

You can view the code examples from this blog post at https://github.com/hlaueriksson/ConductOfCode

Introducing Jekyll URL Shortener (2018-11-01)

I felt the need to create my own URL Shortener. I ended up with a template repository for making URL Shorteners with Jekyll and GitHub Pages.

The source code and documentation can be found here:

What?

URL shortening is a technique on the World Wide Web in which a Uniform Resource Locator (URL) may be made substantially shorter and still direct to the required page.

The Jekyll URL Shortener repository is a template for creating your very own URL Shortener:

✂️🔗 This is a template repository for making URL Shorteners with Jekyll and GitHub Pages. Create short URLs that can be easily shared, tweeted, or emailed to friends. Fork this repo to get started.

How?

To create your own URL Shortener, do this:

  1. Buy and configure a (short) domain name
  2. Fork or clone the Jekyll URL Shortener repository
  3. Configure the repository and code to your liking
  4. Create short links with Jekyll pages

Here follows a description of how I created my own URL Shortener…

1. Domain Name

I bought the domain name hlaueriksson.me from a Swedish domain name registrar. I will use this as an apex domain, so I configured A records with my DNS provider:

DNS

2. Fork / Clone

I cloned the Jekyll URL Shortener repository to https://github.com/hlaueriksson/hlaueriksson.me

3. Configure

The _config.yml file, located in the root of the repository, contains configuration for Jekyll.

I modified this file to fit my needs:

The Settings / GitHub Pages of the repository provides configuration for hosting the Jekyll site.

I modified these settings to fit my needs:

GitHub Pages

Short links are created as Jekyll pages in the root of the repository:

Why?

Why would you create your own URL Shortener?

With Jekyll URL Shortener you could:

  • Have a vanity or branded domain name
  • Create neat custom slugs for the short links
  • Create short links with emoji
  • Track short links with Google Analytics

Credit

The Jekyll URL Shortener is made possible by:

Disclaimer

The redirecting is done with the jekyll-redirect-from plugin.

Redirects are performed by serving an HTML file with an HTTP-REFRESH meta tag which points to your destination.

So, the HTTP status codes 301 (permanent redirect), 302, 303 or 307 (temporary redirect) are not used.

Easy Approach to Requirements Syntax and the segue to Behavior Driven Development (2017-07-31)

I was attending a conference six months ago and listened to a talk about quality. During the talk, I was introduced to EARS, the Easy Approach to Requirements Syntax. This way of writing requirements struck a chord with me, given my prior experience reading and writing requirement specifications.

When doing agile development, we write User Stories and define Acceptance Criteria.

As a <role>, I want <goal/desire> so that <benefit>

When doing BDD, we follow this format:

In order to <receive benefit>, as a <role>, I want <goal/desire>

It’s not always easy to go from user stories and acceptance criteria to start writing tests.

I think that with Easy Approach to Requirements Syntax in place, it will be easier to do Behavior Driven Development.

When doing Hypothesis Driven Development, we follow this format:

We believe <this capability>

Will result in <this outcome>

We will know we have succeeded when <we see a measurable signal>

So my hypothesis is:

I believe using Easy Approach to Requirements Syntax

Will result in easier implementation of Behavior Driven Development

I will know I have succeeded when business people can actually write the (SpecFlow) feature files themselves ☺

Easy Approach to Requirements Syntax

EARS was created during a case study at Rolls-Royce on requirements for aircraft engine control systems.

They identified eight major problems with writing requirements in an unstructured natural language:

  • Ambiguity (a word or phrase has two or more different meanings).
  • Vagueness (lack of precision, structure and/or detail).
  • Complexity (compound requirements containing complex sub-clauses and/or several interrelated statements).
  • Omission (missing requirements, particularly requirements to handle unwanted behavior).
  • Duplication (repetition of requirements that are defining the same need).
  • Wordiness (use of an unnecessary number of words).
  • Inappropriate implementation (statements of how the system should be built, rather than what it should do).
  • Untestability (requirements that cannot be proven true or false when the system is implemented).

To overcome or reduce the effects of these problems they came up with a rule set with five simple templates.

Requirements are divided into five types:

  • Ubiquitous
  • Event-driven
  • State-driven
  • Unwanted behaviors
  • Optional features

Ubiquitous

The <system name> shall <system response>

Event-driven

When <optional preconditions> <trigger>, the <system name> shall <system response>

State-driven

While <in a specific state>, the <system name> shall <system response>

Unwanted behaviors

If <optional preconditions> <trigger>, then the <system name> shall <system response>

Optional features

Where <feature is included>, the <system name> shall <system response>

The Stack Class

Let’s put this to the test with the Stack<T> Class as the example.

This is some of the documentation from MSDN:

Represents a variable size last-in-first-out (LIFO) collection of instances of the same specified type.

The capacity of the Stack<T> is the number of elements that the Stack<T> can store. Count is the number of elements that are actually in the Stack<T>.

Three main operations can be performed on a Stack<T> and its elements:

  • Push inserts an element at the top of the Stack<T>.
  • Pop removes an element from the top of the Stack<T>.
  • Peek returns an element that is at the top of the Stack<T> but does not remove it from the Stack<T>.

If we were to write a User Story in BDD format:

In order to store instances of the same specified type in last-in-first-out (LIFO) sequence

As a developer

I want to use a Stack<T>

If we were to write requirements with EARS templates:

Ubiquitous

The Stack<T> shall store instances of the same specified type in last-in-first-out (LIFO) order.

The Stack<T> shall return the number of elements contained when the property Count is invoked.

Event-driven

When the method Push is invoked, the Stack<T> shall insert the element at the top.

When the method Pop is invoked, the Stack<T> shall remove and return the element at the top.

When the method Peek is invoked, the Stack<T> shall return the element at the top without removing it.

State-driven

While an element is present, the Stack<T> shall return true when the method Contains is invoked.

While an element is not present, the Stack<T> shall return false when the method Contains is invoked.

Unwanted behaviors

If empty and the method Pop is invoked, then the Stack<T> shall throw InvalidOperationException.

If empty and the method Peek is invoked, then the Stack<T> shall throw InvalidOperationException.

Optional features

Where instantiated with a specified collection, the Stack<T> shall be prepopulated with the elements of the collection.
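The requirements above translate almost mechanically into tests. As a plain-code illustration (the post itself uses SpecFlow), here is an NUnit sketch covering one requirement of each of three types:

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

public class StackRequirements
{
    [Test] // Event-driven: When Push is invoked, insert the element at the top
    public void When_Push_is_invoked()
    {
        var stack = new Stack<int>(new[] { 1, 2 });
        stack.Push(3);
        Assert.That(stack.Peek(), Is.EqualTo(3));
    }

    [Test] // Unwanted behavior: If empty and Pop is invoked, throw
    public void If_empty_and_Pop_is_invoked()
    {
        Assert.Throws<InvalidOperationException>(() => new Stack<int>().Pop());
    }

    [Test] // Optional feature: Where instantiated with a collection, prepopulate
    public void Where_instantiated_with_a_specified_collection()
    {
        var stack = new Stack<int>(new[] { 1, 2, 3 });
        Assert.That(stack.Pop(), Is.EqualTo(3)); // last in, first out
    }
}
```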

Behavior Driven Development

Let’s take this to the next level with BDD and SpecFlow.

  • Each requirement has its own scenario
  • I’ve tagged the scenarios with the type of requirement for clarity

In my opinion, it was easy to write the tests. I copy-and-pasted the requirement into the SpecFlow feature file and then I knew exactly how many scenarios I needed to implement. I think the examples in the scenarios make the requirements easier to understand and reason about. Maybe this should be called Requirements by Example?

The BDD Cycle

When implementing the production code, we can use The BDD Cycle described in The RSpec Book.

The BDD Cycle

  • A photo of page 10 from my copy of The RSpec Book
  • As a .NET developer, you can replace Cucumber with SpecFlow and RSpec with Machine.Specifications

The BDD Cycle introduces two levels of testing. We can use SpecFlow to focus on the high-level behavior, the requirements. And use Machine.Specifications to focus on more granular behavior, unit testing code in isolation.

Resources

Managing content for a JAMstack site with Netlify CMS (2017-06-30)

Content for the modern web development architecture:

Headless CMS = Content Management System for JAMstack sites

This blog post covers:

Netlify CMS for a JAMstack site built with Hugo + jQuery + Azure Functions

This blog post is part of a series:

  1. Building a JAMstack site with Hugo and Azure Functions
  2. Managing content for a JAMstack site with Netlify CMS (this post)

JAMstack Recipes

The example used in this blog post is a site for jam recipes.

Example code:

Example site:

JAMstack

JAMstack is the modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. The previous blog post covered how to build a JAMstack site with Hugo and Azure Functions. This post will focus on how to add a Headless CMS to manage the content for the site.

Headless CMS

A Headless CMS gives non-technical users a simple way to add and edit the content of a JAMstack site. By being headless, it decouples the content from the presentation.

A headless CMS doesn’t care where it’s serving its content to. It’s no longer attached to the frontend, and the content can be viewed on any platform.

Read all about it over at: https://headlesscms.org

A Headless CMS can be:

  • Git-based
  • API-driven

In this blog post the focus is on Netlify CMS, an open source, Git-based CMS for all static site generators.

With a git-based CMS you are pushing changes to git that then triggers a new build of your site.

Netlify CMS

An open-source CMS for your Git workflow

Find out more at:

Why Netlify CMS?

  • Integrates with any build tool
  • Plugs in to any static site generator

Setup

You can follow the Quick Start to install and configure Netlify CMS.

Basically do five things:

  1. Sign up for Netlify:
  2. New site from Git
  3. Register a new OAuth application in GitHub
  4. Install authentication provider
  5. Update the code

New site from Git

Go here:

Then:

  1. Select a git provider
  2. Select a repository
  3. Configure your build

Create a new site

The build configuration for this site:

Continuous Deployment

  • Build command: hugo -s src/site
  • Publish directory: src/site/public
  • Build environment variables: HUGO_VERSION = 0.21

Refer to Common configuration directives when configuring other static site generators.

Register a new OAuth application in GitHub

Go here:

Then:

  • Register a new OAuth application

Register a new OAuth application

  • Authorization callback URL: https://api.netlify.com/auth/done

Take note of:

  • Client ID
  • Client Secret

Install authentication provider

Go here:

Then:

  • <Site> / Access / Authentication providers / Install provider

Install authentication provider

Copy from GitHub:

  • Key: Client ID
  • Secret: Client Secret

Code

The code for the site:

The source code in this example is a clone of the Git repository from the previous blog post.

Some code has changed:

WinMerge

Let’s talk about the highlighted files…

To add Netlify CMS to a Hugo site, two files should be added to the /static/admin/ folder:

  • index.html
  • config.yml

index.html:

  • Entry point for the Netlify CMS admin interface
  • Scripts and styles from CDN
  • This is a React app
  • This is version 0.4

config.yml:

  • The backend is the GitHub repo
  • The media and public folders are configured to suit Hugo. Images can be uploaded directly from the editor.
  • Collections define the structure for content types and how the admin interface in Netlify CMS should work
  • One content type for recipes is configured in this example
  • The fields correspond to the yaml-formatted front matter in the generated markdown files
  • The last field, body, is the content in generated markdown files
  • You can enable the Editorial Workflow here
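As a sketch, such a config.yml could look like this; the repo name, folder paths and field list are assumptions based on the recipe example, not the site's actual file:

```yaml
backend:
  name: github
  repo: owner/jamstack-recipes # assumption: the GitHub repo of the site

media_folder: "src/site/static/img" # where uploaded images are stored
public_folder: "/img"               # how the site references them

collections:
  - name: "recipe"
    label: "Recipe"
    folder: "src/site/content/recipe"
    create: true
    fields: # correspond to the yaml front matter of the markdown files
      - { label: "Title", name: "title", widget: "string" }
      - { label: "Image", name: "image", widget: "image" }
      - { label: "Body", name: "body", widget: "markdown" }
```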

Content:

  • The content type from the config.yml generates markdown files like this
  • The yaml-formatted front matter corresponds to the fields
  • The body field is the actual content

The following files needed to be modified to support the new site and the markdown files generated by the Netlify CMS.

config.toml:

  • The baseURL points to the new site hosted by Netlify

summary.html:

  • Images can be uploaded directly from the editor to a dedicated folder in the Git repo
  • The full URL to the image comes from the front matter

recipe.html:

  • The URL to the image was changed here too
  • Front matter formatted with yaml and toml has a slightly different structure
  • Access to the values of ingredients was changed

Content Manager

The Netlify CMS admin interface is accessed via the /admin/ slug.

If you are a collaborator on the GitHub repo you can log in:

Login with GitHub

After you authorize the application:

Authorize Application

You can view the content from the repo:

Content Manager

Add and edit the content from the repo:

Edit

Conclusion

JAMstack sites are awesome. Headless CMSs add even more awesomeness.

Smashing Magazine just got 10x faster by using Hugo and Netlify CMS to move from a WordPress to a JAMstack site.

Building a JAMstack site with Hugo and Azure Functions

Published 2017-05-31, updated 2017-06-30
https://conductofcode.io/post/building-a-jamstack-site-with-hugo-and-azure-functions

Modern web development architecture:

Static site generators + JavaScript + Serverless = JAMstack

This blog post covers:

Hugo + jQuery + Azure Functions = JAMstack

This blog post is part of a series:

  1. Building a JAMstack site with Hugo and Azure Functions (this post)
  2. Managing content for a JAMstack site with Netlify CMS

JAMstack Recipes

The example used in this blog post is a site for jam recipes.

Example code:

Example site:

JAMstack

Modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

Read all about it over at: https://jamstack.org

Three things are needed:

  • JavaScript
  • APIs
  • Markup

In this blog post the JavaScript is written with jQuery, the APIs implemented with Azure Functions and the markup generated with Hugo.

Why?

  • Better Performance
  • Higher Security
  • Cheaper, Easier Scaling
  • Better Developer Experience

Static site generators are The Next Big Thing and they are used At Scale.

Hugo

A fast and modern static website engine

Hugo is a static site generator.

It’s the second most popular according to https://www.staticgen.com

Why Hugo?

  • Extremely fast build times
  • Runs on Windows and is easy to install

Scaffold a Hugo site:

hugo new site .

Add content:

hugo new recipe/apple-jam.md

Code

The code for the site:

Configuration for the Hugo site is done in config.toml:

  • Uses the TOML format, located in the root folder
  • The baseURL points to where the site is hosted on GitHub Pages
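A sketch of the relevant part of config.toml (the baseURL value is an assumption based on the GitHub Pages setup described below):

```toml
# Sketch of config.toml — illustrative values
baseURL = "https://hlaueriksson.github.io/jamstack/"
languageCode = "en-us"
title = "JAMstack Recipes"
```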

Data:

  • Uses the TOML format, located in the data folder
  • This example is for the API URLs invoked by the JavaScript
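Such a data file might look like this (the file name api.toml is an assumption; the URLs are the ones configured later in this post):

```toml
# data/api.toml — sketch
ingredients = "https://jamstack.azurewebsites.net/api/Ingredients/{recipe}"
recipe = "https://jamstack.azurewebsites.net/api/Recipe"
```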

The HTML template for the header:

  • HTML file located in the layouts\partials folder
  • This example uses Bootstrap template with the Narrow jumbotron
  • Custom CSS in app.css, Bootstrap stylesheets from CDN
  • The body tag gets an id with the current page template, which will be used in the JavaScript
  • The navigation is populated by pages configured with navigation = true in the front matter

The HTML template for the footer:

  • HTML file located in the layouts\partials folder
  • Custom JS in app.js, third-party scripts from CDN

Content:

  • Uses the Markdown format with Front Matter
  • This example is for the Apple Jam page
  • The image and ingredients are stored as variables, available in templates via the Param method

Page Templates:

  • HTML file located in a layouts subfolder
  • Reference the header and footer partials
  • This example is for the recipe pages
  • Data for the JavaScript is stored as attributes in a dedicated element
  • The image and ingredients are accessed via the Param method
  • The page markdown content is accessed with the .Content variable

JavaScript:

  • Located in the static folder
  • This example uses jQuery
  • The recipe page will GET ingredients from an Azure Function
  • The submit page will POST recipes to an Azure Function
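As a sketch of the idea (the helper name and URL template handling are assumptions, not the post's actual app.js):

```javascript
// Build the ingredients API URL from the template stored in the Hugo data file.
function ingredientsUrl(template, recipe) {
  return template.replace('{recipe}', encodeURIComponent(recipe));
}

// With jQuery, the recipe page would then GET the ingredients:
//   $.getJSON(ingredientsUrl(template, 'apple-jam'), render);
// and the submit page would POST a new recipe:
//   $.post(submitUrl, JSON.stringify(recipe));
```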

Hugo has a theming system, so you don’t have to implement all templates yourself.

Hugo Themes:

Run and Build

Hugo provides its own webserver which builds and serves the site:

hugo server

  • In this example you can then browse http://localhost:1313/jamstack/

Build the Hugo site:

hugo --destination ../../docs

  • In this example the site is hosted by GitHub Pages from the docs folder in the git repo.

Troubleshoot the site generation with:

hugo --verbose

GitHub Pages

Get the site hosted on GitHub Pages by:

  1. Pushing the code to GitHub
  2. Configuring a publishing source for GitHub Pages

In this example the docs folder is used as publishing source:

GitHub Pages

Azure Functions

Azure Functions Core Tools is a command line tool for Azure Functions

The Azure Functions Core Tools provide a local development experience for creating, developing, testing, running, and debugging Azure Functions.

Create a function app:

func init

Create a function:

func function create

If you don’t like the terminal, take a look at Visual Studio 2017 Tools for Azure Functions

Code

The code for the API:

Configuration for a function is done in function.json:

  • The authLevel can be set to anonymous in this example
  • The route and methods are important to know when invoking the function from the JavaScript
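A function.json for the Ingredients function could look something like this (a sketch based on the settings above and the API route used later in this post):

```json
{
  "disabled": false,
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "anonymous",
      "route": "Ingredients/{recipe}",
      "methods": [ "get" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```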

The actual function is implemented in run.csx:

  • Uses the scriptcs format
  • This example will return hard coded ingredients for the given recipe

Run

Run the functions locally:

func host start

When running the Hugo site against local functions, specify CORS origins:

func host start --cors http://localhost:1313

Deployment

Get the functions hosted on Azure by:

  1. Pushing the code to GitHub, Bitbucket or Visual Studio Team Services
  2. Log in to the Azure portal
  3. Create a function app
  4. Set up continuous deployment

In this example the Azure Functions code is located in the \src\api\ folder in the git repo.

Therefore, deploying with a custom script is needed:

.deployment:

  • Run deploy.cmd during deployment

deploy.cmd:

  • Copy files from the \src\api folder to the repository root
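The .deployment file is a small ini file pointing Kudu at the custom script (deploy.cmd itself is project-specific and omitted here):

```ini
[config]
command = deploy.cmd
```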

During deployment the logs look like this:

Azure Logs

Configuration

Before the Hugo site and the JavaScript can invoke the Azure Functions, Cross-Origin Resource Sharing (CORS) needs to be configured.

In this example these origins are allowed:

  • https://hlaueriksson.github.io
  • http://localhost:1313

Azure CORS

Now the Hugo site can be configured to use these URLs:

  • https://jamstack.azurewebsites.net/api/Ingredients/{recipe}
  • https://jamstack.azurewebsites.net/api/Recipe

Conclusion

  • JAMstack is the modern web development architecture
  • Static site generators are a big thing right now
  • Hugo is fast and awesome
  • jQuery is still useful
  • Serverless is the next big thing
  • Azure Functions is awesome
Serverless with Azure Functions

Published 2017-04-30, updated 2022-09-14
https://conductofcode.io/post/serverless-with-azure-functions

I set out to learn about Azure Functions and this is the knowledge I have gathered so far.

To have something to work with, I decided to migrate the ASP.NET Core Web API for “My latest online activities” to Azure Functions.

As background, refer to these related blog posts:

To get some basic understanding, read the introduction over at Microsoft Docs:

You can create your functions in the browser on the Azure Portal, but I find it to be a bit cumbersome. My approach is to use the Azure Functions CLI tool to scaffold a project, put it on GitHub and deploy to Azure Functions with continuous deployment.

Azure Functions CLI

Command line tool for Azure Functions

The Azure Functions CLI provides a local development experience for creating, developing, testing, running, and debugging Azure Functions.

Read the docs at:

Install:

npm i -g azure-functions-cli

Install globally

Create Function App:

func init

Create a new Function App in the current folder. Initializes git repo.

Create Function:

func function create

Create a new Function from a template, using the Yeoman generator

Run:

func host start

Launches the functions runtime host

So that is what I did when I created the latest-functions project. I’ll come back to that later. First let’s talk about my CommandQuery package.

CommandQuery

Command Query Separation (CQS) for ASP.NET Core and Azure Functions

Remember the background:

To migrate the ASP.NET Core Web API project to an Azure Functions app, I first needed to extend the CommandQuery solution.

As a result I ended up with three packages on NuGet:

  • CommandQuery
    • Common base functionality
  • CommandQuery.AspNetCore
    • Command Query Separation for ASP.NET Core
  • CommandQuery.AzureFunctions
    • Command Query Separation for Azure Functions

In this blog post I will cover the CommandQuery.AzureFunctions package:

Provides generic function support for commands and queries with HTTPTriggers

To get more information about the project, read the documentation over at the GitHub repository.

Get started:

Sample code:

When I was writing the code specific to Azure Functions, I needed to add dependencies. It makes sense to depend on the same assembly versions as the Azure Functions hosting environment uses. Therefore, I ended up creating a project to gather that information.

AzureFunctionsInfo

⚡️ Information gathered on Azure Functions by executing Azure Functions ⚡️

I basically wanted to get information about available Assemblies and Types in the Azure Functions hosting environment.

The code and all the information gathered can be viewed at:

For example, this is some important information I found out:

The type TraceWriter comes from:

{
  "Type": "Microsoft.Azure.WebJobs.Host.TraceWriter",
  "Assembly": "Microsoft.Azure.WebJobs.Host, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
}

latest-functions

Okay, so now it is time to cover the actual Azure Functions app I created. The code was migrated from an existing ASP.NET Core Web API project.

Remember the background:

The app has one query function to get data on my latest:

  • Blog post
  • GitHub repo and commit
  • Instagram photo

The source code is available at:

The project was created with the Azure Functions CLI tool:

After the scaffolding, the generated code needed to be modified.

This is the end result:

function.json

Defines the function bindings and other configuration settings

  • authLevel set to anonymous
    • No API key is required
  • route query/{queryName} added
  • method post added
    • Array of the HTTP methods to which the function will respond
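Put together, that function.json might look like this (a sketch from the bullets above):

```json
{
  "disabled": false,
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "anonymous",
      "route": "query/{queryName}",
      "methods": [ "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```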

project.json

To use NuGet packages in a C# function, upload a project.json file to the function’s folder in the function app’s file system.

  • Only the .NET Framework 4.6 is supported, specify net46
  • CommandQuery.AzureFunctions, version 0.2.0
    • This will make the use of queries and query handlers possible in the function
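Those settings combine into a project.json along these lines (a sketch of the v1 function project.json format):

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "CommandQuery.AzureFunctions": "0.2.0"
      }
    }
  }
}
```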

bin folder

If you need to reference a private assembly, you can upload the assembly file into a bin folder relative to your function and reference it by using the file name (e.g. #r "MyAssembly.dll").

bin

  • The queries and query handlers used in this function are located in Latest.dll
  • Code: Latest

Make sure that private assemblies are built with <TargetFramework>net46</TargetFramework>.

run.csx

This is the code for the actual function. Written in scriptcs, you can enjoy a relaxed C# scripting syntax.

  • The #r "Latest.dll" directive references the query assembly from the bin folder
  • Import the namespaces System.Reflection and CommandQuery.AzureFunctions
  • Create a static instance of QueryFunction and inject a QueryProcessor with the query Assembly. This will make the IoC container find the correct query handlers for the queries.
  • Add the string queryName argument to the function signature so the corresponding route will work
  • Let the QueryFunction handle the query and return the result

When the code is done, only four things remain to get it running in the cloud:

  1. Push to GitHub, Bitbucket or Visual Studio Team Services
  2. Log in to the Azure portal
  3. Create a function app
  4. Set up continuous deployment

The end result of all this can be viewed at:

The code for the SPA that uses the function:

Testing

While building your functions, you want to test early and often.

Run the functions locally with the command:

func host start

If you need to specify CORS origins, use something like:

func host start --cors http://localhost:3000

When the functions are running locally you can manually test them with tools like Postman or curl.

Advice from Microsoft:

Postman

Postman

The Postman collection for this function:

curl

Commands for this function hitting the cloud endpoints:

curl -X POST http://latest-functions.azurewebsites.net/api/query/BlogQuery -H "content-type: application/json" -d "{}"

curl -X POST http://latest-functions.azurewebsites.net/api/query/GitHubQuery -H "content-type: application/json" -d "{'Username': 'hlaueriksson'}"

curl -X POST http://latest-functions.azurewebsites.net/api/query/InstagramQuery -H "content-type: application/json" -d "{}"

Resources

Other people are testing the functions with code:

If you are using CommandQuery together with Azure Functions you can unit test your command and query handlers.

For example like this:

(Yeah, I know! Not really unit testing, but you get the point)

Secure and explore ASP.NET Core Web APIs

Published 2017-03-31, updated 2017-11-02
https://conductofcode.io/post/secure-and-explore-aspnet-core-web-apis

How to create an ASP.NET Core Web API, secure it with JSON Web Tokens and explore it with Swagger UI and Postman.

You can view the example code in this post at https://github.com/hlaueriksson/ConductOfCode

ASP.NET Core Web API

I installed the new Visual Studio 2017 and created a new ASP.NET Core Web Application.

New ASP.NET Core Web Application

Then I added these dependencies:

The subject in this blog post is the StackController:

  • The controller provides a REST interface for an in-memory stack. It’s the same example code I have used in a previous blog post.

  • The [Authorize] attribute specifies that the actions in the controller requires authorization. It will be handled with JSON Web Tokens. The configuration for this will be done in the Startup class.

  • The actions are decorated with SwaggerResponse attributes. This makes Swashbuckle understand the types returned for different status codes.

The appsettings.json file has some custom configuration for the JWT authentication:

  • In this example we will use three things when issuing tokens; Audience, Issuer and the SigningKey

  • The values for Audience and Issuer can be an arbitrary string. They will be used as claims and the tokens that are issued will contain them.

  • The SigningKey is used when generating the hashed signature for the token. The key must be kept secret. You probably want to use the Secret Manager to secure the key.
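Based on the values used later in this post, the custom configuration might look like this (the section name TokenOptions is an assumption):

```json
{
  "TokenOptions": {
    "Audience": "http://localhost:50480",
    "Issuer": "ConductOfCode",
    "SigningKey": "cc4435685b40b2e9ddcb357fd79423b2d8e293b897d86f5336cb61c5fd31c9a3"
  }
}
```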

The TokenOptions class is the type safe representation of the configuration in the appsettings:

  • The Type defaults to Bearer, which is the schema used by JWT.

  • The expiration of the tokens defaults to one hour.

The TokenOptions will be used in two places in the codebase. Therefore, I extracted some convenience methods into TokenOptionsExtensions:

  • GetExpiration returns a DateTime (UTC) indicating when the issued token should expire.

  • GetSigningCredentials returns an object that will be used for generating the token signature. HmacSha256 is the algorithm used.

  • GetSymmetricSecurityKey returns an object that wraps the value of the SigningKey as a byte array.

The Startup class configures the request pipeline that handles all requests made to the application.

Swagger, SwaggerUI and JwtBearerAuthentication is configured here:

ConfigureServices:

  • The AddOptions method adds services required for using options.

  • The Configure<TokenOptions> method registers a configuration instance which TokenOptions will bind against. The TokenOptions from appsettings.json will be accessible and available for dependency injection.

  • The AddSwaggerGen method registers the Swagger generator.

Configure:

  • The UseSwagger method exposes the generated Swagger as JSON endpoint. It will be available under the route /swagger/v1/swagger.json.

  • The UseSwaggerUI method exposes Swagger UI, the auto-generated interactive documentation. It will be available under the route /swagger.

  • The InjectOnCompleteJavaScript method injects JavaScript to invoke when the Swagger UI has successfully loaded. I will get back to this later.

  • The UseStaticFiles method enables static file serving. The injected JavaScript for the Swagger UI is served from the wwwroot folder.

  • The UseJwtBearerAuthentication method adds JWT bearer token middleware to the web application pipeline. Audience and Issuer will be used to validate the tokens. The SigningKey for token signatures is specified here. The authorization to the StackController will now be handled with JWT.

Read more about Swashbuckle/Swagger here: https://github.com/domaindrivendev/Swashbuckle.AspNetCore

To issue tokens, let’s introduce the AuthenticationController:

  • An IOptions<TokenOptions> object is injected into the constructor. The configuration is used when tokens are issued.

  • The JwtSecurityToken class is used to represent a JSON Web Token.

  • The JwtSecurityTokenHandler class writes a JWT as a JSON Compact serialized format string.

  • Actual authentication is out of the scope of this blog post. You may want to look at IdentityServer4.

  • You probably want to require SSL/HTTPS for the API.

The response from the Token action looks like this:

{
  "token_type": "Bearer",
  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE0OTA5ODkyNzUsImlzcyI6IkNvbmR1Y3RPZkNvZGUiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjUwNDgwIn0.iSP0Go20rzg69yxERldCCl4MRpCfC1JwcJTstkcc_Ss",
  "expires_in": 3600
}

Audience, Issuer and Expiration are included in the JWT payload:

{
  "exp": 1490989275,
  "iss": "ConductOfCode",
  "aud": "http://localhost:50480"
}

If you copy the value of the access_token, you can use https://jwt.io to view the decoded content of the JWT:

jwt.io
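You can also decode the payload locally; a sketch in Node.js, not part of the post's code (note that this only decodes — it does not verify the signature):

```javascript
// Decode the payload (the middle, base64url-encoded segment) of the example token.
const token = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE0OTA5ODkyNzUsImlzcyI6IkNvbmR1Y3RPZkNvZGUiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjUwNDgwIn0.iSP0Go20rzg69yxERldCCl4MRpCfC1JwcJTstkcc_Ss';

function decodeJwtPayload(jwt) {
  // Node's base64 decoder tolerates missing padding.
  return JSON.parse(Buffer.from(jwt.split('.')[1], 'base64').toString('utf8'));
}

console.log(decodeJwtPayload(token));
// { exp: 1490989275, iss: 'ConductOfCode', aud: 'http://localhost:50480' }
```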

When accessing the StackController, a JWT should be sent in the HTTP Authorization header:

Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE0OTA5ODkyNzUsImlzcyI6IkNvbmR1Y3RPZkNvZGUiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjUwNDgwIn0.iSP0Go20rzg69yxERldCCl4MRpCfC1JwcJTstkcc_Ss

Read more about JWT here: https://jwt.io/introduction/

The request and response classes:

Swagger UI

We can explore the API and the StackController with Swagger UI.

The Swagger UI in this example is available at http://localhost:50480/swagger/

The Swagger specification file looks like this: swagger.json

Because the API is using JwtBearerAuthentication, we will now get a 401 Unauthorized if we don’t provide the correct HTTP Authorization header.

To fix this we can inject some JavaScript to Swagger UI with Swashbuckle. I was reading Customize Authentication Header in SwaggerUI using Swashbuckle by Steve Michelotti before I was able to do this myself.

There are two approaches and two scripts located in the wwwroot folder of the project:

wwwroot

authorization1.js

  1. JQuery is used to post to the AuthenticationController and get a valid JWT

  2. When the response is returned, the access_token is added to the authorization header

  3. The StackController actions should now return responses with status code 200

Inject the script to Swagger UI in Startup.cs:

app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "ConductOfCode");
    c.InjectOnCompleteJavaScript("/swagger-ui/authorization1.js");
});

The result looks like this:

authorization1.js

  • Enter a username and password, click the Get token button to set the authorization header

  • The Get token button only needs to be clicked once per page load

authorization2.js

  1. CryptoJS is used to generate a valid JWT

    • The JWT payload and the SigningKey must be known
  2. The generated token is added to the authorization header

  3. The StackController actions should now return responses with status code 200

Inject the scripts to Swagger UI in Startup.cs:

app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "ConductOfCode");
    c.InjectOnCompleteJavaScript("https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js"); // https://cdnjs.com/libraries/crypto-js
    c.InjectOnCompleteJavaScript("/swagger-ui/authorization2.js");
});
  • The crypto-js script is injected from a CDN.

The result looks like this:

authorization2.js

  • Enter audience, issuer, and signing key. A token is generated and the authorization header is set every time the Try it out! button for an action is clicked.

Postman

We can explore the API and the StackController with Postman.

Download: https://www.getpostman.com

Import the Swagger specification file:

Import

Then the API is available in a collection:

Collection

Because the API is using JwtBearerAuthentication, we will now get a 401 Unauthorized if we don’t provide the correct HTTP Authorization header.

There are two approaches to fix this.

Tests

The Token action in the AuthenticationController issues tokens.

Add some JavaScript in the Tests tab:

Tests

var data = JSON.parse(responseBody);
postman.setGlobalVariable("Authorization", data.token_type + " " + data.access_token);
  • When the response is returned, the access_token is stored in the global variable Authorization

  • The request to the Token action only needs to be sent once per token lifetime (one hour)

Add Authorization to all actions in the Headers tab:

Headers

Authorization:{{Authorization}}
  • The token is accessed via the global variable {{Authorization}}

The StackController actions should now return responses with status code 200.

Pre-request Script

We can add some JavaScript to Postman that generates a valid JWT. I was reading JWT Authentication in Postman before I was able to do this myself.

  1. CryptoJS is used to generate a valid JWT

    • The JWT payload and the SigningKey must be known
  2. The generated token is stored in the global variable Authorization

Add the content of the authorization3.js file to global variable authorize:

Globals

Create an environment and add variables:

Environment

  • Audience: http://localhost:50480
  • Issuer: ConductOfCode
  • SigningKey: cc4435685b40b2e9ddcb357fd79423b2d8e293b897d86f5336cb61c5fd31c9a3

Add JavaScript to all actions in the Pre-request Script tab:

Pre-request Script

eval(postman.getGlobalVariable('authorize'));
authorize();
  • The JavaScript is accessed from the global variable authorize and evaluated.

  • The function authorize is executed and the token is generated.

The StackController actions should now return responses with status code 200.

The collection can be exported:

Export

  • Perfect for version control

The export file looks like this: ConductOfCode.postman_collection.json

Troubleshooting

If you are having problems with JavaScript in Postman, read Enabling Chrome Developer Tools inside Postman.

Automocking and the Dependency Inversion Principle

Published 2017-02-28
https://conductofcode.io/post/automocking-and-the-dependency-inversion-principle

I had reason to revisit the automocked base class from a previous blog post. I am working with another code base and have new opportunities for automocking. We have a lot of internal classes. Approximately 30% of the classes are marked as internal. The old approach did not work anymore.

With an internal subject, I got this error:

CS0060

Inconsistent accessibility: base class 'WithSubject<HelloWorld>' is less accessible than class 'When_GetMessage'

The Dependency Inversion Principle

The D in SOLID stands for the Dependency Inversion Principle:

A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions.

In other words:

Depend on abstractions, not on concretions.

One way to enforce this between projects in C# is to make classes internal.

The internal access modifier

The internal access modifier:

Internal types or members are accessible only within files in the same assembly

This is useful:

A common use of internal access is in component-based development because it enables a group of components to cooperate in a private manner without being exposed to the rest of the application code.

How can the unit tests access the internal classes then?

The [InternalsVisibleTo] attribute:

types that are ordinarily visible only within the current assembly are visible to a specified assembly.

We will add the attribute to the AssemblyInfo file in the project under test, to make the unit test project a friend assembly.

We will use an IoC container to configure the creation of the internal classes. The clients will depend on public interfaces and the IoC container.

WithFakes

My favorite framework for testing is still Machine.Specifications in combination with Machine.Fakes for automocking support.

At work, we use:

I will mimic the Machine.Fakes WithFakes base class:

  • The test fixture will inherit from WithFakes

  • Use The<TFake>() method for creating fakes

My implementation will use:

Code

You can get the example code at https://github.com/hlaueriksson/ConductOfCode

WithFakes

The Subject<TSubject>() method gives access to the class under test. This is how the error Inconsistent accessibility: base class 'WithSubject<HelloWorld>' is less accessible than class 'When_GetMessage' is solved.

The The<TFake>() method gives access to the injected dependencies from the subject.

The With<TFake>() methods can be used to inject real or fake objects into the subject.

The subject

The interfaces are public, the concrete classes are internal.

The creation of internal classes is configured with StructureMap, the IoC container we are using.

The AssemblyInfo.cs is also modified to make the subject accessible for the unit tests:

[assembly: InternalsVisibleTo("ConductOfCode.Tests")]

The client

The client depends on interfaces and uses the IoC container to create concrete classes.

ConductOfCode.exe

The IoC container scans the assemblies for registries with configuration.

The tests

The Subject property gives access to the automocked instance via the Subject<TSubject>() method from the base class.

The With<TFake>() methods can be used to inject and setup mocks.

The The<TFake>() method is used for setup and verification of mocks.

The unit test sessions

Unit Test Sessions
