Official plugins include:
At the time of writing, there are 20 plugins out of the box.
If you think the official plugins are not enough, you can write your own. The easiest way to get started is to look at what others have done.
Browsing through some of the GitHub repos found above gives you an idea of what the source code of a plugin looks like.
As a demo, I created a simple plugin that counts the words and characters of the query.

Action keyword: demo
Settings: true | false

The source code:
Throughout this blog post, the demo plugin will be used as an example.
Before you create your own project, first take a look at the official checklist:
Key takeaways from the checklist:
- Name the project Community.PowerToys.Run.Plugin.<PluginName>
- Target net8.0-windows
- Implement a Main.cs class
- Include a plugin.json file

In Visual Studio, create a new Class Library project.
Then edit the .csproj file to look something like this:
- Target both x64 and ARM64 platforms
- Set UseWPF to include references to WPF assemblies
- Reference the required .dll assemblies

The .dll files referenced in the .csproj file are examples of dependencies needed, depending on what features your plugin should support.
Unfortunately, there are no official NuGet packages for these assemblies.
Traditionally, plugin authors commit these .dll files to the repo in a libs folder.
Both the x64 and ARM64 versions.
You can copy the DLLs for your platform architecture from the installation location:
C:\Program Files\PowerToys\
%LocalAppData%\PowerToys\
You can build the DLLs for the other platform architecture from source:
Other plugin authors like to resolve the dependencies by referencing the PowerToys projects directly, like the approach taken by Lin Yu-Chieh (Victor):
I have created a NuGet package that simplifies referencing all PowerToys Run plugin dependencies:
When using Community.PowerToys.Run.Plugin.Dependencies the .csproj file can look like this:
I have also created dotnet new templates that simplify creating PowerToys Run plugin projects and solutions:

Anyway, it doesn’t matter whether you create a project via templates or manually. The project should start out with these files:
Images\*.png
Main.cs
plugin.json
Create a plugin.json file that looks something like this:
The format is described in the Dev Documentation:
Create a Main.cs file that looks something like this:
The Main class must have a public, static string property named PluginID:
public static string PluginID => "AE953C974C2241878F282EA18A7769E4";
- A Guid without hyphens
- Must match the ID in the plugin.json file

In addition, the Main class should implement a few interfaces.
Let’s break down the implemented interfaces and the classes used in the example above.
Some interfaces of interest from the Wox.Plugin assembly:
- IPlugin
- IPluginI18n
- IDelayedExecutionPlugin
- IContextMenu
- ISettingProvider

The most important interface is IPlugin:
public interface IPlugin
{
List<Result> Query(Query query);
void Init(PluginInitContext context);
string Name { get; }
string Description { get; }
}
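For orientation, here is a minimal sketch of a Main class implementing IPlugin. It is a simplified take on the word/character counting demo, not its actual source:

```csharp
using System;
using System.Collections.Generic;
using Wox.Plugin;

public class Main : IPlugin
{
    public static string PluginID => "AE953C974C2241878F282EA18A7769E4";

    public string Name => "Demo";

    public string Description => "Counts words and characters";

    private PluginInitContext Context { get; set; }

    public void Init(PluginInitContext context)
    {
        // Save the context for later use, e.g. Context.API.GetCurrentTheme()
        Context = context;
    }

    public List<Result> Query(Query query)
    {
        // Count the words in the search, excluding the action keyword
        var words = query.Search.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;

        return new List<Result>
        {
            new Result
            {
                Title = $"{words} words",
                SubTitle = $"{query.Search.Length} characters",
            },
        };
    }
}
```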
- Query is the method that does the actual logic in the plugin
- Init is used to initialize the plugin, e.g. to save the PluginInitContext for later use
- Name ought to match the value in the plugin.json file, but can be localized

If you want to support internationalization you can implement the IPluginI18n interface:
public interface IPluginI18n
{
string GetTranslatedPluginTitle();
string GetTranslatedPluginDescription();
}
The IDelayedExecutionPlugin interface provides an alternative Query method:
public interface IDelayedExecutionPlugin
{
List<Result> Query(Query query, bool delayedExecution);
}
The delayed execution can be used for queries that take some time to run.
PowerToys Run will add a slight delay before the Query method is invoked, so that the user has some extra milliseconds to finish typing that command.
A delay can be useful for queries that perform expensive operations.
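A common pattern (sketched here, not taken from the demo plugin) is to return quickly from the regular Query method and defer the expensive work to the delayed overload:

```csharp
public List<Result> Query(Query query)
{
    // Cheap, synchronous results only
    return new List<Result>();
}

public List<Result> Query(Query query, bool delayedExecution)
{
    if (!delayedExecution || string.IsNullOrWhiteSpace(query.Search))
    {
        return new List<Result>();
    }

    // Expensive work, e.g. a web request or a file system search.
    // SearchFor is a hypothetical helper.
    return SearchFor(query.Search);
}
```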
The IContextMenu interface is used to add context menu buttons to the query results:
public interface IContextMenu
{
List<ContextMenuResult> LoadContextMenus(Result selectedResult);
}
- Each Result can be enhanced with custom buttons

If the plugin is sophisticated enough to have custom settings, implement the ISettingProvider interface:
public interface ISettingProvider
{
Control CreateSettingPanel();
void UpdateSettings(PowerLauncherPluginSettings settings);
IEnumerable<PluginAdditionalOption> AdditionalOptions { get; }
}
- CreateSettingPanel usually throws a NotImplementedException
- UpdateSettings is invoked when the user updates the settings in the PowerToys GUI
- AdditionalOptions is invoked when the PowerToys GUI displays the settings
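As a sketch, a boolean option could be wired up like this. The ShowDetails key is a made-up example:

```csharp
// requires: using System.Linq;
private bool _showDetails;

public IEnumerable<PluginAdditionalOption> AdditionalOptions => new List<PluginAdditionalOption>
{
    new PluginAdditionalOption
    {
        Key = "ShowDetails",
        DisplayLabel = "Show details",
        Value = true, // default value of the checkbox
    },
};

public Control CreateSettingPanel() => throw new NotImplementedException();

public void UpdateSettings(PowerLauncherPluginSettings settings)
{
    // Read the current value from the PowerToys GUI settings
    _showDetails = settings?.AdditionalOptions?
        .FirstOrDefault(x => x.Key == "ShowDetails")?.Value ?? true;
}
```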
Some classes of interest from the Wox.Plugin assembly:
- PluginInitContext
- Query
- Result
- ContextMenuResult

A PluginInitContext instance is passed as argument to the Init method:
public class PluginInitContext
{
public PluginMetadata CurrentPluginMetadata { get; internal set; }
public IPublicAPI API { get; set; }
}
- PluginMetadata can be useful if you need the path to the PluginDirectory or the ActionKeyword of the plugin
- IPublicAPI is mainly used to GetCurrentTheme, but can also ShowMsg, ShowNotification or ChangeQuery

A Query instance is passed to the Query methods defined in the IPlugin and IDelayedExecutionPlugin interfaces.
Properties of interest:
- Search returns what the user has searched for, excluding the action keyword
- Terms returns the search as a collection of substrings, split by space (" ")

A list of Result objects is returned by the Query methods defined in the IPlugin and IDelayedExecutionPlugin interfaces.
Example of how to create a new result:
new Result
{
QueryTextDisplay = query.Search, // displayed where the user types queries
IcoPath = IconPath, // displayed on the left side
Title = "A title displayed in the top of the result",
SubTitle = "A subtitle displayed under the main title",
ToolTipData = new ToolTipData("A tooltip title", "A tooltip text\nthat can have\nmultiple lines"),
Action = _ =>
{
Log.Debug("The actual action of the result when pressing Enter.", GetType());
/*
For example:
- Copy something to the clipboard
- Open a URL in a browser
*/
},
Score = 1, // the higher, the better query match
ContextData = someObject, // used together with the IContextMenu interface
}
A list of ContextMenuResult objects is returned by the LoadContextMenus method defined in the IContextMenu interface.
These objects are rendered as small buttons, displayed on the right side of the query result.
Example of how to create a new context menu result:
new ContextMenuResult
{
PluginName = Name,
Title = "A title displayed as a tooltip",
FontFamily = "Segoe Fluent Icons,Segoe MDL2 Assets",
Glyph = "\xE8C8", // Copy
AcceleratorKey = Key.C,
AcceleratorModifiers = ModifierKeys.Control,
Action = _ =>
{
Log.Debug("The actual action of the context menu result, when clicking the button or pressing the keyboard shortcut.", GetType());
/*
For example:
- Copy something to the clipboard
- Open a URL in a browser
*/
},
}
Find the perfect Glyph to use from:
Examples of actions to use with Result or ContextMenuResult:
Action = _ =>
{
System.Windows.Clipboard.SetText("Some text to copy to the clipboard");
return true;
}
Action = _ =>
{
var url = "https://conductofcode.io/";
if (!Helper.OpenCommandInShell(DefaultBrowserInfo.Path, DefaultBrowserInfo.ArgumentsPattern, url))
{
Log.Error("Open default browser failed.", GetType());
Context?.API.ShowMsg($"Plugin: {Name}", "Open default browser failed.");
return false;
}
return true;
}
Logging is done with the static Log class, from the Wox.Plugin.Logger namespace.
Under the hood, NLog is used.
Five log levels:
Log.Debug("A debug message", GetType());
Log.Info("An information message", GetType());
Log.Warn("A warning message", GetType());
Log.Error("An error message", GetType());
Log.Exception("An exceptional message", new Exception(), GetType());
The logs are written to .txt files, rolled by date, at:
%LocalAppData%\Microsoft\PowerToys\PowerToys Run\Logs\<Version>\

If you need to add third party dependencies, take a look at what is already used by PowerToys.
NuGet packages and the versions specified in the .props file are candidates to reference in your own .csproj file.
Packages of interest:
- LazyCache
- System.Text.Json

If the plugin uses any third party dependencies that are not referenced by PowerToys Run, you need to enable DynamicLoading.
In the plugin.json file:
{
// ...
"DynamicLoading": true
}
true makes PowerToys Run dynamically load any .dll files in the plugin folder.

You can write unit tests for your plugin.
The official plugins use the MSTest framework and Moq for mocking.
- Name the test project Community.PowerToys.Run.Plugin.<PluginName>.UnitTests
- Target net8.0-windows

The .csproj file of a unit test project may look something like this:
- Reference the required .dll assemblies

Unit tests of the Main class may look something like this:
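As a hedged sketch, assuming MSTest and that the Wox.Plugin Query type has a constructor taking the raw search string:

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Wox.Plugin;

[TestClass]
public class MainTests
{
    [TestMethod]
    public void Query_should_return_results()
    {
        var main = new Main();

        // Query with a raw search string
        var results = main.Query(new Query("search"));

        Assert.IsTrue(results.Any());
    }
}
```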
Some of the official plugins have unit test coverage:
Unfortunately, the plugin manager in PowerToys Run does not offer support for downloading new plugins.
Community plugins are traditionally packaged in .zip files and distributed via releases in GitHub repositories.
The process is described in an unofficial checklist:
The Everything plugin by Lin Yu-Chieh (Victor) is next level and is distributed via:
I have created a linter for PowerToys Run community plugins:

When running the linter on your plugin, any issues are reported with a code and a description. If nothing shows up, your plugin is awesome. The lint rules are codified from the guidelines in the Community plugin checklist.
Demo Plugin:
Awesome PowerToys Run Plugins:
Third-party plugins for PowerToys Run:
dotnet new templates for community plugins:
Documentation:
Dev Documentation:
If you want to look under the hood, fork or clone the PowerToys repo:
Get the solution to build on your machine with the help of the documentation:
Therefore I set out trying to mitigate failed test runs by rerunning the failing tests.
A question/answer on Stack Overflow inspired me to write my own test script.
My requirements were:
- Use the dotnet test CLI command

This resulted in the following script:
The script runs dotnet test and uses the trx logger result file to collect failed tests.
It then reruns the failed tests and reports the final result.
Path - Path to the: project | solution | directory | dll | exe
Default: . (current directory)

Configuration - Build configuration for environment specific appsettings.json file.
Default: Debug. Valid values: Debug | Release | Development | Production

Filter - Filter to run selected tests based on: TestCategory | Priority | Name | FullyQualifiedName

Settings - Path to the .runsettings file.

Retries - Number of retries for each failed test.
Default: 2. Valid values: 1-9

Percentage - Required percentage of passed tests.
Default: 100. Valid values: 0-100

Run regression tests:
.\test.ps1 -filter "TestCategory=RegressionTest"
Run FooBar smoke tests in Development:
.\test.ps1 .\FooBar.Tests\FooBar.Tests.csproj -filter "TestCategory=SmokeTest" -configuration "Development"
Retry failed tests once and report the run as green if 95% of the tests passed:
.\test.ps1 -retries 1 -percentage 95
Run tests configured with a .runsettings file:
.\test.ps1 -settings .\test.runsettings
If the tests passed, or the required percentage of tests passed, the script returns the exit code 0.
Otherwise, the number of failed tests is returned as the exit code.
Output when successful:

Output when required percentage of tests passed:

Output when failed:

Showcase:

To showcase the test script I have written some tests in three popular frameworks.
The script and tests are available in this repo:
NUnit:

- The Category attribute corresponds to the TestCategory filter in the script
- The Retry parameter in the TestContext contains the current retry iteration

MSTest:

- The TestCategory attribute corresponds to the TestCategory filter in the script
- The Retry property in the TestContext contains the current retry iteration

xUnit:

- The Trait attribute with the TestCategory parameter corresponds to the TestCategory filter in the script

Playwright enables reliable end-to-end testing for modern web apps.
Web testing might be hard to get right and flaky tests can benefit from being retried.
Here is an example with Playwright and NUnit:
- The SetUp method starts a trace recording if the test is in retry mode
- The TearDown method exports the trace into a zip archive if the test failed

The trace zip archives of failing tests can be examined at:
A .runsettings file can be used to customize Playwright options:
Run the Playwright tests with the .runsettings file:
.\test.ps1 -filter FullyQualifiedName=ConductOfCode.PlaywrightTests -settings .\test.runsettings
If you use Azure Pipelines consider using the Visual Studio Test v2 task with the rerunFailedTests option:
The experiment with local functions in tests led to the release of LoFuUnit:
Testing with Local Functions 🐯
in .NET / C# ⚙️
with your favorite Unit Testing Framework ✔️
Recently I published version 1.1.1 of LoFuUnit, so I thought it would be a good time to introduce the framework to you in the form of a blog post.
Let’s try LoFuUnit and see how tests are written. I’ll use the same challenge from a previous blog post about BDD frameworks for .NET / C#:
The subject
The stack, a last-in-first-out (LIFO) collection:
The tests will assert the behavior of:
- the Peek() and Pop() methods
- the InvalidOperationException thrown when the stack is empty
Tests with LoFuUnit.NUnit:
- LoFuUnit.NUnit provides the [LoFu] attribute; apply it to your test methods

Test output:


Mocking is nice, automocking is even better
Auto-mocking is part of LoFuUnit. Let’s see how auto-mocked tests are written. I’ll use the same challenge from a previous blog post about Automocked base class for NUnit tests:
The subject:
Auto-mocked tests with LoFuUnit.AutoNSubstitute and LoFuUnit.NUnit:
- LoFuUnit.AutoNSubstitute provides the base class LoFuTest<TSubject>; inherit your test fixtures from it
- The Use<TDependency>() methods give you the ability to set up the mocks
- The The<TDependency>() method lets you access the mocks and verify expectations

Test output:

LoFuUnit is distributed as 7 packages via NuGet. The code examples in this blog post used two of them.
LoFuUnit.NUnit:
LoFuUnit.AutoNSubstitute:
You can view the code examples from this blog post at https://github.com/hlaueriksson/ConductOfCode
Introducing Jekyll URL Shortener - A template repository for making URL Shorteners with #Jekyll and #GitHubPages https://t.co/u9iutTiG2O @jekyllrb @github
— Henrik Lau Eriksson (@hlaueriksson) November 1, 2018
The source code and documentation can be found here:
URL shortening is a technique on the World Wide Web in which a Uniform Resource Locator (URL) may be made substantially shorter and still direct to the required page.
The Jekyll URL Shortener repository is a template for creating your very own URL Shortener:
✂️🔗 This is a template repository for making URL Shorteners with Jekyll and GitHub Pages. Create short URLs that can be easily shared, tweeted, or emailed to friends. Fork this repo to get started.
To create your own URL Shortener, do this:
Here follows a description of how I created my own URL Shortener…
I bought the domain name hlaueriksson.me from a Swedish domain name registrar.
I will use this as an apex domain, so I configured A records with my DNS provider:

I cloned the Jekyll URL Shortener repository to https://github.com/hlaueriksson/hlaueriksson.me
The _config.yml file, located in the root of the repository, contains configuration for Jekyll.
I modified this file to fit my needs:
The Settings / GitHub Pages of the repository provides configuration for hosting the Jekyll site.
I modified these settings to fit my needs:

Short links are created as Jekyll pages in the root of the repository:
Why would you create your own URL Shortener?
With Jekyll URL Shortener you could:
The Jekyll URL Shortener is made possible by:
The redirecting is done with the jekyll-redirect-from plugin.
Redirects are performed by serving an HTML file with an HTTP-REFRESH meta tag which points to your destination.
So, the HTTP status codes 301 (permanent redirect), 302, 303 or 307 (temporary redirect) are not used.
When doing agile development, we write User Stories and define Acceptance Criteria.
As a <role>, I want <goal/desire>, so that <benefit>
When doing BDD, we follow this format:
In order to <receive benefit>, as a <role>, I want <goal/desire>
It’s not always easy to go from user stories and acceptance criteria to start writing tests.
I think that with Easy Approach to Requirements Syntax in place, it will be easier to do Behavior Driven Development.
When doing Hypothesis Driven Development, we follow this format:
We believe <this capability>
Will result in <this outcome>
We will know we have succeeded when <we see a measurable signal>
So my hypothesis is:
I believe using Easy Approach to Requirements Syntax
Will result in easier implementation of Behavior Driven Development
I will know I have succeeded when business people can actually write the (SpecFlow) feature files themselves ☺
EARS was created during a case study at Rolls-Royce on requirements for aircraft engine control systems.
They identified eight major problems with writing requirements in an unstructured natural language:
- Ambiguity (a word or phrase has two or more different meanings).
- Vagueness (lack of precision, structure and/or detail).
- Complexity (compound requirements containing complex sub-clauses and/or several interrelated statements).
- Omission (missing requirements, particularly requirements to handle unwanted behavior).
- Duplication (repetition of requirements that are defining the same need).
- Wordiness (use of an unnecessary number of words).
- Inappropriate implementation (statements of how the system should be built, rather than what it should do).
- Untestability (requirements that cannot be proven true or false when the system is implemented).
To overcome or reduce the effects of these problems they came up with a rule set with five simple templates.
Requirements are divided into five types:

Ubiquitous:
The <system name> shall <system response>

Event-driven:
When <optional preconditions> <trigger>, the <system name> shall <system response>

State-driven:
While <in a specific state>, the <system name> shall <system response>

Unwanted behavior:
If <optional preconditions> <trigger>, then the <system name> shall <system response>

Optional features:
Where <feature is included>, the <system name> shall <system response>
Let’s put this to the test with the Stack<T> Class as the example.
This is some of the documentation from MSDN:
Represents a variable size last-in-first-out (LIFO) collection of instances of the same specified type.
The capacity of the Stack<T> is the number of elements that the Stack<T> can store. Count is the number of elements that are actually in the Stack<T>.
Three main operations can be performed on a Stack<T> and its elements:

- Push inserts an element at the top of the Stack<T>.
- Pop removes an element from the top of the Stack<T>.
- Peek returns an element that is at the top of the Stack<T> but does not remove it from the Stack<T>.
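The three operations in a quick illustration:

```csharp
using System;
using System.Collections.Generic;

var stack = new Stack<string>();
stack.Push("first");
stack.Push("second");

Console.WriteLine(stack.Peek());  // second - still in the stack
Console.WriteLine(stack.Pop());   // second - removed from the stack
Console.WriteLine(stack.Count);   // 1
```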
If we were to write a User Story in BDD format:
In order to store instances of the same specified type in last-in-first-out (LIFO) sequence
As a developer
I want to use a Stack<T>
If we were to write requirements with EARS templates:
The Stack<T> shall store instances of the same specified type in last-in-first-out (LIFO) order.

The Stack<T> shall return the number of elements contained when the property Count is invoked.

When the method Push is invoked, the Stack<T> shall insert the element at the top.

When the method Pop is invoked, the Stack<T> shall remove and return the element at the top.

When the method Peek is invoked, the Stack<T> shall return the element at the top without removing it.

While an element is present, the Stack<T> shall return true when the method Contains is invoked.

While an element is not present, the Stack<T> shall return false when the method Contains is invoked.

If empty and the method Pop is invoked, then the Stack<T> shall throw InvalidOperationException.

If empty and the method Peek is invoked, then the Stack<T> shall throw InvalidOperationException.

Where instantiated with a specified collection, the Stack<T> shall be prepopulated with the elements of the collection.
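The requirements for the empty stack translate almost directly into tests, sketched here with plain NUnit rather than SpecFlow:

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

public class StackSpecs
{
    [Test]
    public void Pop_on_an_empty_stack_should_throw()
    {
        var stack = new Stack<int>();
        Assert.Throws<InvalidOperationException>(() => stack.Pop());
    }

    [Test]
    public void Peek_on_an_empty_stack_should_throw()
    {
        var stack = new Stack<int>();
        Assert.Throws<InvalidOperationException>(() => stack.Peek());
    }
}
```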
Let’s take this to the next level with BDD and SpecFlow.
In my opinion, it was easy to write the tests. I copy-and-pasted the requirements into the SpecFlow feature file and then I knew exactly how many scenarios I needed to implement. I think the examples in the scenarios make the requirements easier to understand and reason about. Maybe this should be called Requirements by Example?
When implementing the production code, we can use The BDD Cycle described in The RSpec Book.

The BDD Cycle introduces two levels of testing. We can use SpecFlow to focus on the high-level behavior, the requirements. And use Machine.Specifications to focus on more granular behavior, unit testing code in isolation.
Easy approach to requirements syntax (EARS) by Alistair Mavin et al. The six-page research paper.
EARS quick reference sheet [PDF] from Aalto University. A two-page summary.
EARS: The Easy Approach to Requirements Syntax [PDF] by John Terzakis. A 66-page presentation on EARS and how it is used at Intel.
The source code for the example in this blog post: GitHub
Headless CMS = Content Management System for JAMstack sites
This blog post covers:
Netlify CMS for a JAMstack site built with Hugo + jQuery + Azure Functions
This blog post is part of a series:

The example used in this blog post is a site for jam recipes.
Example code:
Example site:
JAMstack is the modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. The previous blog post covered how to build a JAMstack site with Hugo and Azure Functions. This post will focus on how to add a Headless CMS to manage the content for the site.
A Headless CMS gives non-technical users a simple way to add and edit the content of a JAMstack site. By being headless, it decouples the content from the presentation.
A headless CMS doesn’t care where it’s serving its content to. It’s no longer attached to the frontend, and the content can be viewed on any platform.
Read all about it over at: https://headlesscms.org
A Headless CMS can be:
In this blog post the focus is on Netlify CMS, an open source, Git-based CMS for all static site generators.
With a git-based CMS you are pushing changes to git that then triggers a new build of your site.
An open-source CMS for your Git workflow
Find out more at:
Why Netlify CMS?
You can follow the Quick Start to install and configure Netlify CMS.
Basically do five things:
Go here:
Then:

The build configuration for this site:

- Build command: hugo -s src/site
- Publish directory: src/site/public
- Build environment variable: HUGO_VERSION = 0.21

Refer to Common configuration directives when configuring other static site generators.
Go here:
Then:

https://api.netlify.com/auth/done

Take note of:

- Client ID
- Client Secret

Go here:
Then:
<Site> / Access / Authentication providers / Install provider

Copy from GitHub:

- Client ID
- Client Secret

The code for the site:
The source code in this example is a clone of the Git repository from the previous blog post.
Some code has changed:

Let’s talk about the highlighted files…
To add Netlify CMS to a Hugo site, two files should be added to the /static/admin/ folder:
- index.html
- config.yml

index.html:

- references version 0.4 of Netlify CMS

config.yml:

- fields correspond to the yaml-formatted front matter in the generated markdown files
- the last field, body, is the content in the generated markdown files

Content:

- the config.yml generates markdown files like this
- the yaml-formatted front matter corresponds to the fields
- the body field is the actual content

The following files needed to be modified to support the new site and the markdown files generated by the Netlify CMS.
config.toml:
- baseURL points to the new site hosted by Netlify

summary.html:

recipe.html:

- yaml and toml have a slightly different structure

The Netlify CMS admin interface is accessed via the /admin/ slug.
If you are a collaborator on the GitHub repo you can log in:

After you authorize the application:

You can view the content from the repo:

Add and edit the content from the repo:

JAMstack sites are awesome. Headless CMSs add even more awesomeness.
Smashing Magazine just got 10x faster by using Hugo and Netlify CMS to move from a WordPress to a JAMstack site.
Static site generators + JavaScript + Serverless = JAMstack
This blog post covers:
Hugo + jQuery + Azure Functions = JAMstack
This blog post is part of a series:

The example used in this blog post is a site for jam recipes.
Example code:
Example site:
Modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
Read all about it over at: https://jamstack.org
Three things are needed:
In this blog post the JavaScript is written with jQuery, the APIs implemented with Azure Functions and the markup generated with Hugo.
Why?
Static site generators are The Next Big Thing and they are used At Scale.
A fast and modern static website engine
Hugo is a static site generator.
It’s the second most popular according to https://www.staticgen.com
Why Hugo?
Scaffold a Hugo site:
hugo new site .

Add content:
hugo new recipe/apple-jam.md

The code for the site:
Configuration for the Hugo site is done in config.toml:
- baseURL points to where the site is hosted on GitHub Pages

Data:

- located in the data folder

The HTML template for the header:

- located in the layouts\partials folder
- references app.css and Bootstrap stylesheets from CDN
- the body tag gets an id with the current page template, which will be used in the JavaScript
- pages with navigation = true in the front matter are included in the navigation

The HTML template for the footer:

- located in the layouts\partials folder
- references app.js and third-party scripts from CDN

Content:

Page Templates:

- located in subfolders of the layouts folder
- render the .Content variable

JavaScript:

- located in the static folder
- GET ingredients from an Azure Function
- POST recipes to an Azure Function

Hugo has a theming system, so you don’t have to implement all templates yourself.
Hugo Themes:
Hugo provides its own webserver which builds and serves the site:
hugo server

The site is served at http://localhost:1313/jamstack/

Build the Hugo site:
hugo --destination ../../docs

The site is generated to the docs folder in the git repo.

Troubleshoot the site generation with:
hugo --verbose
Get the site hosted on GitHub Pages by:
In this example the docs folder is used as publishing source:

Azure Functions Core Tools is a command line tool for Azure Functions
The Azure Functions Core Tools provide a local development experience for creating, developing, testing, running, and debugging Azure Functions.
Create a function app:
func init

Create a function:
func function create

If you don’t like the terminal, take a look at Visual Studio 2017 Tools for Azure Functions.
The code for the API:
Configuration for a function is done in function.json:
- authLevel can be set to anonymous in this example
- route and methods are important to know when invoking the function from the JavaScript

The actual function is implemented in run.csx:
Run the functions locally:
func host start
When running the Hugo site against local functions, specify CORS origins:
func host start --cors http://localhost:1313

Get the functions hosted on Azure by:
In this example the Azure Functions code is located in the \src\api\ folder in the git repo.
Therefore, deploying with a custom script is needed:
.deployment:
- runs deploy.cmd during deployment

deploy.cmd:

- copies the \src\api folder to the repository root

During deployment the logs look like this:

Before the Hugo site and the JavaScript can invoke the Azure Functions, Cross-Origin Resource Sharing (CORS) needs to be configured.
In this example these origins are allowed:
https://hlaueriksson.github.io
http://localhost:1313
Now the Hugo site can be configured to use these URLs:
https://jamstack.azurewebsites.net/api/Ingredients/{recipe}
https://jamstack.azurewebsites.net/api/Recipe

To have something to work with, I decided to migrate the ASP.NET Core Web API for “My latest online activities” to Azure Functions.
As background, refer to these related blog posts:
To get some basic understanding, read the introduction over at Microsoft Docs:
You can create your functions in the browser on the Azure Portal, but I find it to be a bit cumbersome. My approach is to use the Azure Functions CLI tool to scaffold a project, put it on GitHub and deploy to Azure Functions with continuous deployment.
Command line tool for Azure Functions
The Azure Functions CLI provides a local development experience for creating, developing, testing, running, and debugging Azure Functions.
Read the docs at:
Install:
npm i -g azure-functions-cli

Install globally
Create Function App:
func init

Create a new Function App in the current folder. Initializes git repo.
Create Function:
func function create

Create a new Function from a template, using the Yeoman generator
Run:
func host start

Launches the functions runtime host
So that is what I did when I created the latest-functions project. I’ll come back to that later.
First let’s talk about my CommandQuery package.
Command Query Separation (CQS) for ASP.NET Core and Azure Functions
CommandQuery now has support for both #AspNetCore and #AzureFunctionshttps://t.co/qd8QHljnfx#CQS
— Henrik Lau Eriksson (@hlaueriksson) April 30, 2017
Remember the background:
To migrate the ASP.NET Core Web API project to an Azure Functions app, I first needed to extend the CommandQuery solution.
As a result I ended up with three packages on NuGet:
CommandQuery
CommandQuery.AspNetCore
CommandQuery.AzureFunctions
In this blog post I will cover the CommandQuery.AzureFunctions package:
Provides generic function support for commands and queries with HTTPTriggers
To get more information about the project, read the documentation over at the GitHub repository.
Get started:
Sample code:
When I was writing the code specific to Azure Functions, I needed to add dependencies. It makes sense to depend on the same assembly versions as the Azure Functions hosting environment uses. Therefore, I ended up creating a project to gather that information.
⚡️ Information gathered on Azure Functions by executing Azure Functions ⚡️
I created a project to get information about available Assemblies and Types in #AzureFunctionshttps://t.co/L9aSd7NZ2h@AzureFunctions
— Henrik Lau Eriksson (@hlaueriksson) April 30, 2017
I basically wanted to get information about available Assemblies and Types in the Azure Functions hosting environment.
The code and all the information gathered can be viewed at:
For example, this is some important information I found out:
"Newtonsoft.Json, Version=9.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed"
"System.Net.Http, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
The type TraceWriter comes from:
{
"Type": "Microsoft.Azure.WebJobs.Host.TraceWriter",
"Assembly": "Microsoft.Azure.WebJobs.Host, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
}

Okay, so now it is time to cover the actual Azure Functions app I created. The code was migrated from an existing ASP.NET Core Web API project.
Remember the background:
The app has one query function to get data on my latest:
The source code is available at:
The project was created with the Azure Functions CLI tool:
- Language: C#
- Template: HttpTrigger
- Name: Query

After the scaffolding, the generated code needed to be modified.
This is the end result:
function.json

Defines the function bindings and other configuration settings

- authLevel: anonymous
- route: query/{queryName} added
- methods: post added
project.json
To use NuGet packages in a C# function, upload a project.json file to the function’s folder in the function app’s file system.
- framework: net46
- dependency: CommandQuery.AzureFunctions, version 0.2.0

bin folder
If you need to reference a private assembly, you can upload the assembly file into a
binfolder relative to your function and reference it by using the file name (e.g.#r "MyAssembly.dll").

- Latest.dll

Make sure that private assemblies are built with <TargetFramework>net46</TargetFramework>.
run.csx
This is the code for the actual function. It is written in scriptcs, so you can enjoy a relaxed C# scripting syntax.
- The #r "Latest.dll" directive references the query assembly from the bin folder
- Using statements for System.Reflection and CommandQuery.AzureFunctions
- Create a QueryFunction and inject a QueryProcessor with the query Assembly. This will make the IoC container find the correct query handlers for the queries.
- Add a string queryName argument to the function signature so the corresponding route will work
- Let the QueryFunction handle the query and return the result

When the code is done, only four things remain to get it running in the cloud:
The end result of all this can be viewed at:
The code for the SPA that uses the function:
While building your functions, you want to test early and often.
Run the functions locally with the command:
func host start
If you need to specify CORS origins, use something like:
func host start --cors http://localhost:3000
When the functions are running locally you can manually test them with tools like Postman or curl.
Advice from Microsoft:

The Postman collection for this function:
Commands for this function hitting the cloud endpoints:
curl -X POST http://latest-functions.azurewebsites.net/api/query/BlogQuery -H "content-type: application/json" -d "{}"
curl -X POST http://latest-functions.azurewebsites.net/api/query/GitHubQuery -H "content-type: application/json" -d "{'Username': 'hlaueriksson'}"
curl -X POST http://latest-functions.azurewebsites.net/api/query/InstagramQuery -H "content-type: application/json" -d "{}"
Other people are testing the functions with code:
If you are using CommandQuery together with Azure Functions you can unit test your command and query handlers.
For example like this:
(Yeah, I know! Not really unit testing, but you get the point)
You can view the example code in this post at https://github.com/hlaueriksson/ConductOfCode
I installed the new Visual Studio 2017 and created a new ASP.NET Core Web Application.

Then I added these dependencies:
The subject in this blog post is the StackController:
The controller provides a REST interface for an in-memory stack. It’s the same example code I have used in a previous blog post.
The [Authorize] attribute specifies that the actions in the controller require authorization. It will be handled with JSON Web Tokens. The configuration for this will be done in the Startup class.
The actions are decorated with SwaggerResponse attributes. This makes Swashbuckle understand the types returned for different status codes.
The appsettings.json file has some custom configuration for the JWT authentication:
In this example we will use three things when issuing tokens: Audience, Issuer and the SigningKey.
The values for Audience and Issuer can be an arbitrary string. They will be used as claims and the tokens that are issued will contain them.
The SigningKey is used when generating the hashed signature for the token. The key must be kept secret. You probably want to use the Secret Manager to secure the key.
The TokenOptions class is the type safe representation of the configuration in the appsettings:
The Type defaults to Bearer, which is the schema used by JWT.
The expiration of the tokens defaults to one hour.
The TokenOptions will be used in two places in the codebase. Therefore, I extracted some convenience methods into TokenOptionsExtensions:
GetExpiration returns a DateTime (UTC) indicating when the issued token should expire.
GetSigningCredentials returns an object that will be used for generating the token signature. HmacSha256 is the algorithm used.
GetSymmetricSecurityKey returns an object that wraps the value of the SigningKey as a byte array.
The Startup class configures the request pipeline that handles all requests made to the application.
Swagger, SwaggerUI and JwtBearerAuthentication are configured here:
ConfigureServices:
The AddOptions method adds services required for using options.
The Configure<TokenOptions> method registers a configuration instance which TokenOptions will bind against. The TokenOptions from appsettings.json will be accessible and available for dependency injection.
The AddSwaggerGen method registers the Swagger generator.
Configure:
The UseSwagger method exposes the generated Swagger as JSON endpoint. It will be available under the route /swagger/v1/swagger.json.
The UseSwaggerUI method exposes Swagger UI, the auto-generated interactive documentation. It will be available under the route /swagger.
The InjectOnCompleteJavaScript method injects JavaScript to invoke when the Swagger UI has successfully loaded. I will get back to this later.
The UseStaticFiles method enables static file serving. The injected JavaScript for the Swagger UI is served from the wwwroot folder.
The UseJwtBearerAuthentication method adds JWT bearer token middleware to the web application pipeline.
Audience and Issuer will be used to validate the tokens.
The SigningKey for token signatures is specified here.
The authorization to the StackController will now be handled with JWT.
Read more about Swashbuckle/Swagger here: https://github.com/domaindrivendev/Swashbuckle.AspNetCore
To issue tokens, let’s introduce the AuthenticationController:
An IOptions<TokenOptions> object is injected into the constructor. The configuration is used when tokens are issued.
The JwtSecurityToken class is used to represent a JSON Web Token.
The JwtSecurityTokenHandler class writes a JWT as a JSON Compact serialized format string.
Actual authentication is out of the scope of this blog post. You may want to look at IdentityServer4.
You probably want to require SSL/HTTPS for the API.
The response from the Token action looks like this:
{
"token_type": "Bearer",
"access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE0OTA5ODkyNzUsImlzcyI6IkNvbmR1Y3RPZkNvZGUiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjUwNDgwIn0.iSP0Go20rzg69yxERldCCl4MRpCfC1JwcJTstkcc_Ss",
"expires_in": 3600
}
Audience, Issuer and Expiration are included in the JWT payload:
{
"exp": 1490989275,
"iss": "ConductOfCode",
"aud": "http://localhost:50480"
}
If you copy the value of the access_token, you can use https://jwt.io to view the decoded content of the JWT:

When accessing the StackController, a JWT should be sent in the HTTP Authorization header:
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE0OTA5ODkyNzUsImlzcyI6IkNvbmR1Y3RPZkNvZGUiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjUwNDgwIn0.iSP0Go20rzg69yxERldCCl4MRpCfC1JwcJTstkcc_Ss
Read more about JWT here: https://jwt.io/introduction/
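A JWT is just three base64url-encoded segments separated by dots, so the header and payload can be inspected without knowing any secret. As a sketch (in Node.js, using the example access_token above; this is essentially what jwt.io does):

```javascript
// Decode the header and payload of a JWT without verifying the signature.
// The token is the example access_token from the response above.
const token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE0OTA5ODkyNzUsImlzcyI6IkNvbmR1Y3RPZkNvZGUiLCJhdWQiOiJodHRwOi8vbG9jYWxob3N0OjUwNDgwIn0.iSP0Go20rzg69yxERldCCl4MRpCfC1JwcJTstkcc_Ss";

const [header, payload, signature] = token.split(".");

// JWT uses base64url encoding; Node's Buffer supports it directly
const decode = (part) => JSON.parse(Buffer.from(part, "base64url").toString("utf8"));

console.log(decode(header));  // { alg: 'HS256', typ: 'JWT' }
console.log(decode(payload)); // { exp: 1490989275, iss: 'ConductOfCode', aud: 'http://localhost:50480' }
```

Note that decoding requires no secret at all, which is why a JWT should never carry sensitive data; only the signature (verified with the SigningKey) protects it from tampering.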
The request and response classes:
We can explore the API and the StackController with Swagger UI.
The Swagger UI in this example is available at http://localhost:50480/swagger/
The Swagger specification file looks like this: swagger.json
Because the API is using JwtBearerAuthentication, we will now get a 401 Unauthorized if we don’t provide the correct HTTP Authorization header.
To fix this we can inject some JavaScript to Swagger UI with Swashbuckle. I was reading Customize Authentication Header in SwaggerUI using Swashbuckle by Steve Michelotti before I was able to do this myself.
There are two approaches and two scripts located in the wwwroot folder of the project:

JQuery is used to post to the AuthenticationController and get a valid JWT
When the response is returned, the access_token is added to the authorization header
The StackController actions should now return responses with status code 200
Inject the script to Swagger UI in Startup.cs:
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/swagger/v1/swagger.json", "ConductOfCode");
c.InjectOnCompleteJavaScript("/swagger-ui/authorization1.js");
});
The result looks like this:

Enter a username and password, click the Get token button to set the authorization header
The Get token button only needs to be clicked once per page load
CryptoJS is used to generate a valid JWT
SigningKey must be known
The generated token is added to the authorization header
The StackController actions should now return responses with status code 200
Inject the scripts to Swagger UI in Startup.cs:
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/swagger/v1/swagger.json", "ConductOfCode");
c.InjectOnCompleteJavaScript("https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js"); // https://cdnjs.com/libraries/crypto-js
c.InjectOnCompleteJavaScript("/swagger-ui/authorization2.js");
});
The crypto-js script is injected from a CDN. The result looks like this:

We can explore the API and the StackController with Postman.
Download: https://www.getpostman.com
Import the Swagger specification file:

Then the API is available in a collection:

Because the API is using JwtBearerAuthentication, we will now get a 401 Unauthorized if we don’t provide the correct HTTP Authorization header.
There are two approaches to fix this.
The Token action in the AuthenticationController issues tokens.
Add some JavaScript in the Tests tab:

var data = JSON.parse(responseBody);
postman.setGlobalVariable("Authorization", data.token_type + " " + data.access_token);
When the response is returned, the access_token is stored in the global variable Authorization
The request to the Token action only needs to be sent once per token lifetime (one hour)
Add Authorization to all actions in the Headers tab:

Authorization: {{Authorization}}
The StackController actions should now return responses with status code 200.
We can add some JavaScript to Postman that generates a valid JWT. I was reading JWT Authentication in Postman before I was able to do this myself.
CryptoJS is used to generate a valid JWT
SigningKey must be known
The generated token is stored in the global variable Authorization
Add the content of the authorization3.js file to global variable authorize:

Create an environment and add variables:

Audience: http://localhost:50480
Issuer: ConductOfCode
SigningKey: cc4435685b40b2e9ddcb357fd79423b2d8e293b897d86f5336cb61c5fd31c9a3
Add JavaScript to all actions in the Pre-request Script tab:

eval(postman.getGlobalVariable('authorize'));
authorize();
The JavaScript is accessed from the global variable authorize and evaluated.
The function authorize is executed and the token is generated.
The StackController actions should now return responses with status code 200.
The collection can be exported:

The export file looks like this: ConductOfCode.postman_collection.json
If you are having problems with JavaScript in Postman, read Enabling Chrome Developer Tools inside Postman.
With an internal subject, I got this error:

Inconsistent accessibility: base class 'WithSubject<HelloWorld>' is less accessible than class 'When_GetMessage'
The D in SOLID stands for the Dependency Inversion Principle:
A. High-level modules should not depend on low-level modules. Both should depend on abstractions. B. Abstractions should not depend on details. Details should depend on abstractions.
In other words:
Depend on abstractions, not on concretions.
One way to enforce this between projects in C# is to make classes internal.
The internal access modifier:
Internal types or members are accessible only within files in the same assembly
This is useful:
A common use of internal access is in component-based development because it enables a group of components to cooperate in a private manner without being exposed to the rest of the application code.
How can the unit tests access the internal classes then?
The [InternalsVisibleTo] attribute:
types that are ordinarily visible only within the current assembly are visible to a specified assembly.
We will add the attribute to the AssemblyInfo file in the project under test, to make the unit test project a friend assembly.
We will use an IoC container to configure the creation of the internal classes. The clients will depend on public interfaces and the IoC container.
My favorite framework for testing is still Machine.Specifications in combination with Machine.Fakes for automocking support.
At work, we use:
I will mimic the Machine.Fakes WithFakes base class:
The test fixture will inherit from WithFakes
Use The<TFake>() method for creating fakes
My implementation will use:
You can get the example code at https://github.com/hlaueriksson/ConductOfCode
WithFakes
The Subject<TSubject>() method gives access to the class under test.
This is how the error Inconsistent accessibility: base class 'WithSubject<HelloWorld>' is less accessible than class 'When_GetMessage' is solved.
The The<TFake>() method gives access to the injected dependencies from the subject.
The With<TFake>() methods can be used to inject real or fake objects into the subject.
The subject
The interfaces are public, the concrete classes are internal.
The creation of internal classes is configured with StructureMap, the IoC container we are using.
The AssemblyInfo.cs is also modified to make the subject accessible for the unit tests:
[assembly: InternalsVisibleTo("ConductOfCode.Tests")].
The client
The client depends on interfaces and uses the IoC container to create concrete classes.

The IoC container scans the assemblies for registries with configuration.
The tests
The Subject property gives access to the automocked instance via the Subject<TSubject>() method from the base class.
The With<TFake>() methods can be used to inject and setup mocks.
The The<TFake>() method is used for setup and verification of mocks.
The unit test sessions
