Dusted Codes
https://dusted.codes
Programming, Coffee and Indie Hacking · en-gb · Copyright 2026, Dustin Moris Gorski · Sun, 19 Nov 2023 00:00:00 +0000
.NET Blazor<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-11-19/dotnet-blazor-banner.png" alt=".NET Blazor"></p>
<p>.NET Blazor has been touted as a revolutionary framework that allows .NET developers to build interactive web applications using C# instead of JavaScript. It's mainly aimed at ASP.NET Core developers who want to build SPA-like apps leveraging the .NET ecosystem and a wealth of existing libraries and tools available via NuGet. It's Microsoft's latest instalment in an attempt to gain traction amongst frontend developers. With the <a href="https://devblogs.microsoft.com/dotnet/announcing-dotnet-8/">recent release of .NET 8</a> Microsoft announced even more improvements to Blazor, most notably introducing a new rendering mode called "Static Server-side Rendering (SSR)".</p>
<p>But what exactly is <a href="https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blazor">Blazor</a> and how does it enable C# to work in the browser? More interestingly, how does it compare to the traditional JavaScript based SPA frameworks with which it aims to compete?</p>
<h2 id="blazor-wasm">Blazor WASM</h2>
<p>Blazor has been on a journey and in order to understand Blazor's current state one has to look at its evolution from the beginning. <a href="https://devblogs.microsoft.com/dotnet/get-started-building-net-web-apps-in-the-browser-with-blazor/">Launched in 2018</a>, Blazor initially started as an experimental project, aiming to leverage WebAssembly to run C# directly in the browser, allowing developers to build SPAs using .NET. This idea was realized with Blazor WebAssembly, which allowed the .NET runtime to execute on the client.</p>
<p>Blazor WebAssembly, commonly abbreviated as Blazor WASM, offers the most SPA-like experience among all Blazor options. When a user first visits a Blazor WASM application, the browser downloads the .NET runtime along with the application's assemblies (lots of .dlls) and any other required content. The downloaded runtime is a WebAssembly-based .NET runtime (essentially a .NET interpreter) which is executed inside the browser's WebAssembly engine. This runtime is responsible for executing the compiled C# code entirely in the browser.</p>
<p>Although Blazor WASM applications are primarily written in C#, they can still interoperate with JavaScript code. This allows the use of existing JavaScript libraries and access to browser APIs which are not directly exposed to WebAssembly.</p>
<p>While Blazor WASM has received plenty of initial praise and has been improved over time, it's also been met with key criticisms which often revolve around the following areas:</p>
<ul>
<li>
<p><strong>Initial load time</strong>:<br>The requirement to download the .NET runtime and application assemblies upon the first visit can result in a <strong>significant initial load time</strong>. This is even more evident in complex apps with large dependencies and especially over slow networks.</p>
</li>
<li>
<p><strong>Performance</strong>:<br>Blazor WASM lags behind traditional JavaScript frameworks in terms of performance. The WebAssembly runtime is still generally slower than optimised JavaScript code for compute-intensive workloads.</p>
</li>
<li>
<p><strong>Compatibility</strong>:<br>While WebAssembly is widely supported in modern browsers there may still be issues with older browsers or certain mobile devices which can limit the reach of a Blazor WASM application.</p>
</li>
<li>
<p><strong>SEO challenges</strong>:<br>Besides the usual SEO challenges which all SPA frameworks come with, the longer load times and slower performance of Blazor WASM can further impact SEO rankings negatively.</p>
</li>
<li>
<p><strong>Complexities of interop with JavaScript</strong>:<br>While Blazor WASM allows for JavaScript interop, it can be cumbersome to use alongside complex JavaScript libraries or when there is a need for extensive interaction between C# and JavaScript functions. This complexity can lead to additional development overhead and potential performance bottlenecks. Unfortunately, due to several limitations, the need for JavaScript interop is very common, which somewhat undermines the whole premise of using Blazor in the first place.</p>
</li>
</ul>
<h2 id="blazor-server">Blazor Server</h2>
<p>To counteract some of these criticisms, <a href="https://devblogs.microsoft.com/dotnet/blazor-server-in-net-core-3-0-scenarios-and-performance/">Blazor Server was introduced a year after Blazor WebAssembly</a>, enabling server-side C# code to handle UI updates over a <a href="https://learn.microsoft.com/en-us/aspnet/signalr/overview/getting-started/introduction-to-signalr">SignalR</a> connection. Unlike in Blazor WASM, the client-side UI is maintained by the server in a .NET Core application. After the initial request, a WebSocket connection is established between the client and the server using ASP.NET Core and SignalR.</p>
<p>When a user interacts with the UI, the event is sent over the SignalR connection to the server. The server processes the event and any UI updates are rendered on the server. The server then calculates the diff between the current and the new UI and sends it back to the client over the persistent SignalR connection. This process keeps the client and server UIs in sync. Since the UI logic runs on the server, the actual rendering logic as well as the .NET runtime doesn't need to be downloaded to the client, resulting in a much smaller download footprint, directly addressing one of the major criticisms of Blazor WASM.</p>
<p>However, while innovative in its approach, Blazor Server has several downsides of its own which need to be considered:</p>
<ul>
<li>
<p><strong>Latency</strong>:<br>Since every UI interaction is processed on the server and requires a round trip over the network, any latency can significantly affect the responsiveness of a Blazor Server app. This can be particularly problematic for users with poor network connections or those geographically distant from the server.</p>
</li>
<li>
<p><strong>Scalability issues</strong>:<br>Each client connection with a Blazor Server app maintains an active SignalR connection (mostly via WebSockets) to the server. This can lead to scalability issues, as the server must manage and maintain state for potentially thousands of connections simultaneously.</p>
</li>
<li>
<p><strong>Server resource usage</strong>:<br>Blazor Server apps are much more resource-intensive because the server maintains the state of the UI. This can lead to higher memory and CPU usage, especially as the number of connected clients increases.</p>
</li>
<li>
<p><strong>Reliance on SignalR</strong>:<br>The entire operation of a Blazor Server app depends on the reliability of the SignalR connection. If the connection is disrupted, the app can't function. This reliance requires a robust infrastructure and potentially increases the complexity of deployment, especially in corporate environments with strict security requirements that may restrict WebSocket usage.</p>
</li>
<li>
<p><strong>No offline support</strong>:<br>Unlike Blazor WebAssembly apps, Blazor Server requires a constant connection to the server. If the client's connection drops, the app stops working, and the current state can be lost. This makes Blazor Server unsuitable for environments where offline functionality is required.</p>
</li>
<li>
<p><strong>ASP.NET Core Server requirement</strong>:<br>The reliance on SignalR also means that Blazor Server apps cannot be served from a Content Delivery Network (CDN) like other JavaScript SPA frameworks. Serverless deployments aren't possible and Blazor Server requires the deployment of a fully fledged ASP.NET Core server.</p>
</li>
</ul>
<h2 id="blazor-static-ssr">Blazor Static SSR</h2>
<p>Despite Blazor's versatility, both the WASM and Server rendering modes suffer from serious drawbacks which make Blazor a difficult choice over traditional SPA frameworks, which by comparison don't share any of Blazor's problems and are architecturally simpler too.</p>
<p>Being aware of these challenges, Microsoft tackled some of the primary concerns of Blazor WASM and Server by rolling out <a href="https://www.youtube.com/watch?v=YwZdtLEtROA">Blazor Static SSR</a>:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-11-19/blazor-wasm-vs-blazor-server-vs-blazor-ssr.png" alt="Blazor WASM vs. Blazor Server vs. Blazor Static Server-side Rendering"></p>
<p>Blazor Static SSR, as shown in the diagram above, is a third rendering option which operates entirely independently of WASM or SignalR, instead leveraging an open HTTP connection to stream UI updates to the client. This approach, known as <a href="https://www.debugbear.com/blog/server-side-rendering">server-side rendering</a>, involves generating web pages on the server and transmitting the fully composed HTML to the client, where it then gets wired back into the DOM to function as a dynamic application.</p>
<p>During the initial page load, Blazor Static SSR behaves similarly to a traditional server-side application by delivering a complete HTML page to the user's browser. Additionally, it fetches a <code>blazor.server.js</code> script that establishes a long-lived HTTP connection to an ASP.NET Core server. This connection is used to stream UI updates to the client. This architecture is more straightforward, much like a classic server-rendered website, yet it provides a dynamic, SPA-like experience by selectively updating portions of the DOM, eliminating the need for full page reloads.</p>
<p>The benefits over Blazor WASM and Blazor Server are twofold:</p>
<ul>
<li>
<p><strong>Reduced load times</strong>:<br>There's no need for users to download the full .NET runtime and application files when visiting the website, and as they navigate through the site, complete page reloads are avoided.</p>
</li>
<li>
<p><strong>Scalability</strong>:<br>No SignalR connection is required which greatly reduces the load on the server and removes many of the complexities around WebSocket connections.</p>
</li>
</ul>
<p>Nonetheless, Blazor Static SSR is not an actual SPA framework in the traditional sense. It doesn't allow for rich interactivity beyond web forms and simple navigation. It also doesn't allow for real-time updates as there is no code running on the client after the initial page has loaded:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-11-19/blazor-static-ssr-overview.png" alt="Blazor Static SSR Overview"></p>
<p>To combat this, starting with .NET 8 Blazor enables the mixing of different rendering modes and introduces a fourth rendering option called <strong>Auto mode</strong>.</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-11-19/blazor-mixed-modes.png" alt="Blazor Mixed Modes"></p>
<p>In order to add interactivity to a Blazor Static SSR website one has to go back to creating either Blazor WASM or Blazor Server components. The auto rendering option aims to counter the main issues of Blazor WASM's slow load times and Blazor Server's requirement for a SignalR connection by using both rendering modes at different times:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-11-19/blazor-auto-mode.png" alt="Blazor Static SSR Overview"></p>
<p>A Blazor component operating in Auto-mode starts off by establishing a SignalR connection to enable immediate interactivity and bypass extended load times. Concurrently, it discreetly fetches the .NET runtime and all necessary dependencies to function as a Blazor WASM application. For later visits, Blazor transitions from the Server to the WASM version, maintaining SPA responsiveness without further dependence on the SignalR connection.</p>
<p>It's a fascinating approach which undoubtedly doesn't lack creativity or ambition. Even so, Blazor Static SSR incorporated with interactive components poses some old and new challenges too:</p>
<ul>
<li>
<p><strong>No interactivity without WASM or SignalR</strong>:<br>The biggest drawback of Blazor Static SSR is that it still relies on Blazor WASM or SignalR to become an interactive framework, which means it inherits not just one, but all of the many unresolved downsides when running in Auto-mode.</p>
</li>
<li>
<p><strong>Increased complexity</strong>:<br>Combining three different rendering modes adds a lot of complexity on the server and presents a <a href="https://x.com/danpdc/status/1726042720160846158?s=20">steep learning curve for developers</a> who must comprehend and manage those complexities effectively.</p>
</li>
<li>
<p><strong>No serverless deployments</strong>:<br>Deployments from a CDN are still not possible due to the reliance on ASP.NET Core.</p>
</li>
<li>
<p><strong>No offline support</strong>:<br>Blazor Static SSR minimises full page reloads but still requires an active connection to stream updates to the UI.</p>
</li>
<li>
<p><strong>Caching challenges</strong>:<br>While static content is easily cacheable, dynamic content that changes frequently can be challenging to cache effectively, potentially missing out on valuable performance optimisations.</p>
</li>
</ul>
<p>Having said that, Blazor Static SSR also comes with a few benefits when it's not mixed with the WASM or Server modes:</p>
<ul>
<li>
<p><strong>SEO Friendliness</strong>:<br>Since SSR applications pre-load all the content on the server and send it to the client as HTML, they are inherently SEO-friendly. This allows search engines to crawl and index the content more efficiently.</p>
</li>
<li>
<p><strong>Fast initial load</strong>:<br>Blazor Static SSR can provide faster initial page loads compared to SPAs. This is because the HTML is ready to be rendered by the browser as soon as it's received, without waiting for client-side JavaScript to render the content.</p>
</li>
<li>
<p><strong>Stability across browsers</strong>:<br>SSR applications often have more consistent behavior across different browsers since they don't rely on client-side rendering, which can sometimes be unpredictable due to browser-specific JavaScript quirks.</p>
</li>
</ul>
<h2 id="blazor-vs-traditional-javascript-spas">Blazor vs. traditional JavaScript SPAs</h2>
<p>Overall, Blazor is a remarkable achievement with buckets of originality and technical finesse. However, with the exception of Blazor WASM, Blazor Server and Blazor Static SSR behave quite differently from traditional SPAs.</p>
<p>Neither Blazor Server nor Blazor Static SSR loads all the necessary HTML, JavaScript and CSS upfront. They have a hard dependency on an ASP.NET Core backend, can't be hosted serverless and require a constant connection to a server. The frontend is not separated from the backend and data is not fetched using APIs. Typical SPAs maintain state on the client side. The user's interactions with the application can change the state, and the UI updates accordingly without a server round trip. Since SPAs don't require page reloads for content updates, they can offer a smoother and faster user experience that is similar to desktop applications. With conventional SPAs the same code can often be shared between web and mobile apps, another advantage over Blazor Server or Static SSR. The clean separation between the frontend and the backend also makes the overall mental model simpler and allows the disciplines to be split efficiently between different teams.</p>
<h3 id="blazor-wasm-vs-javascript-spas">Blazor WASM vs. JavaScript SPAs</h3>
<p>Blazor WASM stands out as the only rendering option which fully aligns with the ethos of a conventional SPA. Unfortunately the heavy nature of having to run the .NET Runtime over WebAssembly puts it at a significant disadvantage over comparable JavaScript frameworks.</p>
<h3 id="blazor-server-vs-javascript-spas">Blazor Server vs. JavaScript SPAs</h3>
<p>While Blazor Server is technically intriguing, offering a unique approach to web development, it paradoxically combines the limitations of both a single-page application and a server-intensive architecture. To some extent Blazor Server represents a "worst of both worlds" scenario. Personally it's my least favourite option and I can't see any future in this design.</p>
<h3 id="blazor-static-ssr-vs-javascript-spas">Blazor Static SSR vs. JavaScript SPAs</h3>
<p>Blazor Static SSR deviates the most from the paradigm of a SPA. Apart from being placed under the Blazor brand it diverges significantly from the framework's initial architecture. <strong>Ironically this is where its strengths lie as well</strong>. Given that SPAs are inherently accompanied by their own set of challenges, the necessity for a SPA must be well-justified, or otherwise opting for a server-rendered application can be a more straightforward and preferable solution most of the time.</p>
<p>In my view, Blazor Static SSR is a compelling option that deserves to be its own framework, enabling .NET developers to enrich the functionality of everyday ASP.NET Core.</p>
<h2 id="a-word-of-caution">A word of caution</h2>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-11-19/dotnet-blazor-vs-javascript-spas.png" alt=".NET Blazor vs. JavaScript SPAs"></p>
<p>Would I opt for Blazor today? To be candid, probably not. While I maintain a hopeful stance on Blazor, I must remain truthful to myself. I've never been the person who blindly champions every Microsoft technology without critical thought. The truth is, currently Blazor is evolving into an unwieldy beast. In spite of its four rendering modes, intricate layers of complexity, and clever technical fixes, it still falls short when compared to established SPAs. This situation leads me to question the longevity of Microsoft's commitment and how long Blazor will be around. The parallels with Silverlight are hard to ignore, and without the .NET team delivering a technically sound framework, I find it hard to envision widespread adoption beyond a comparatively small group of dedicated C# enthusiasts who will accept any insanity over the thought of using JS.</p>
<h2 id="an-untapped-opportunity">An untapped opportunity?</h2>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-11-19/blazor-csharp-to-js-transpiler.png" alt=".NET Blazor reimagined?"></p>
<p>As I reach the end of this blog post I want to finish on a positive note. I dare to say it, but could C# learn another thing from F#? Thanks to <a href="https://fable.io">Fable</a>, an F# to JavaScript transpiler, F# developers have been able to create rich interactive SPAs using F# for quite some time. Developed in 2016, Fable was originally built on top of <a href="https://babeljs.io">Babel</a>, an ECMAScript 2015+ to JavaScript compiler. Wouldn't something similar work for C#? As I see it this could pave the way for a very appealing C# framework that circumvents the complexities around WASM and SignalR.</p>
<p><strong>Blazor not only in name but in glory too.</strong></p>
<p>In fact, I'm quite surprised that we haven't seen such a development yet, but perhaps it's a matter of perspective. Maybe it has been a case of the wrong team looking at the wrong problem all along? After all, the ASP.NET Core team excels in web development, not compiler design. Not every problem needs to be solved using SignalR or streaming APIs. Perhaps it's time to put a hold on more rendering modes and look at Blazor through a different lens?</p>
<p>In my view, without doubt, this is the best path forward and I shall remain hopeful until then.</p>
Creating a pretty console logger using Go's slog package<p>I had the privilege of attending <a href="https://www.gophercon.co.uk">GopherCon UK</a> last week, and among the many captivating talks, one that stood out to me was "Structured Logging for the Standard Library" presented by <a href="https://twitter.com/JonathanAmster2">Jonathan Amsterdam</a>.</p>
<p>The presentation provided an insightful dive into <a href="https://pkg.go.dev/log/slog">Go's <code>log/slog</code> package</a>. This talk couldn't have come at a better time given that I've just started on a new Go project, where I was eager to use Go's structured logging approach. The <code>slog</code> package in Go distinctly draws its inspiration from <a href="https://github.com/uber-go/zap">Uber's <code>zap</code></a>, making it a seamless transition for those who are well-acquainted with the latter. If you're already at ease with <code>zap</code>, you'll find yourself quickly at home with <code>slog</code> as well.</p>
<p>Currently, the <code>slog</code> library offers two built-in logging handlers: the <code>TextHandler</code> and the <code>JSONHandler</code>. The <code>TextHandler</code> formats logs as a series of <code>key=value</code> pairs, while the <code>JSONHandler</code> produces logs in JSON format. These handlers are greatly optimised for production scenarios, but can be somewhat verbose when troubleshooting applications during the development phase.</p>
<p>Recognizing this, I realized the necessity for a more visually friendly console logger tailored for local development purposes. Despite stumbling upon a code snippet within a blog post titled <a href="https://betterstack.com/community/guides/logging/logging-in-go/">A Comprehensive Guide to Logging in Go with Slog</a>, the implementation was both incomplete and flawed, rendering it unsuitable for my use case.</p>
<p>So my next question was: How difficult can it be to create one myself? Luckily, with a bit of clever hackery not that difficult at all!</p>
<h2 id="objectives">Objectives</h2>
<p>First let's address some of the shortcomings in the implementation outlined in the blog post referenced above. Unfortunately the custom handler fails to print preformatted attributes coming from the <code>WithAttrs</code> method. This means that attributes set on a parent logger are not propagated to child loggers at all. Additionally, the handler fails to handle groups established on a parent logger. Alongside these issues, there was no support for appending an <code>error</code> attribute, nor for other edge cases which the original <code>JSONHandler</code> intuitively dealt with.</p>
<p>This didn't come as a huge surprise, given the difficulty of crafting a custom <code>slog.Handler</code> as highlighted by the Go team themselves. In fact, writing a custom <code>slog.Handler</code> is not to be taken lightly, and the Go team anticipates that only a select number of package authors will find themselves in need of undertaking this task. To facilitate this, the Go team has thoughtfully provided a <a href="https://github.com/golang/example/tree/master/slog-handler-guide">comprehensive guide to writing slog handlers</a> to assist with this process.</p>
<p>Either way, I have no desire to write a complete <code>slog.Handler</code> myself for something which is only ever going to be relevant during development time. With this in mind I set myself the following requirements:</p>
<h4 id="requirements">Requirements:</h4>
<ul>
<li>Logs must be visually pleasing</li>
<li>Implementation must be complete</li>
<li>Only use packages from the standard library</li>
<li>Keep it super simple (as I'm lazy)</li>
</ul>
<h4 id="non-requirements">Non requirements:</h4>
<ul>
<li>Doesn't have to be very fast</li>
<li>Doesn't have to be very memory efficient</li>
</ul>
<p>It's good to keep in mind that this "pretty" handler is tailored for development purposes and doesn't require blazing speed or intense memory efficiency. Since I won't be generating millions of logs on my local machine, this greatly simplifies the upcoming solution.</p>
<h4 id="final-output">Final output:</h4>
<p>The final logs should look something like this:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-08-23/prettylog-example-1.png" alt="Example 1"></p>
<p>Here's another example by enabling debug logs and adding source information to them:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-08-23/prettylog-example-2.png" alt="Example 2"></p>
<p>Evidently, these logs are designed for human readability through colouring and spacing. The default log attributes (time, level, message) are presented in a single line, while extra structured attributes are attached as a JSON object.</p>
<p>If you like this log style then keep on reading.</p>
<h2 id="creating-a-pretty-console-logger">Creating a pretty console logger</h2>
<p>For the purpose of this blog post I am calling this package <code>prettylog</code>, but you can copy-paste this logger into your own codebase and call it whatever you want.</p>
<p>Let's start with the function that will add color to the console output:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">package</span> prettylog
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">import</span> (
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"fmt"</span>
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"strconv"</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">const</span> (
</span></span><span style="display:flex;"><span> reset = <span style="color:#ffa08f">"\033[0m"</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> black = <span style="color:#abfebc">30</span>
</span></span><span style="display:flex;"><span> red = <span style="color:#abfebc">31</span>
</span></span><span style="display:flex;"><span> green = <span style="color:#abfebc">32</span>
</span></span><span style="display:flex;"><span> yellow = <span style="color:#abfebc">33</span>
</span></span><span style="display:flex;"><span> blue = <span style="color:#abfebc">34</span>
</span></span><span style="display:flex;"><span> magenta = <span style="color:#abfebc">35</span>
</span></span><span style="display:flex;"><span> cyan = <span style="color:#abfebc">36</span>
</span></span><span style="display:flex;"><span> lightGray = <span style="color:#abfebc">37</span>
</span></span><span style="display:flex;"><span> darkGray = <span style="color:#abfebc">90</span>
</span></span><span style="display:flex;"><span> lightRed = <span style="color:#abfebc">91</span>
</span></span><span style="display:flex;"><span> lightGreen = <span style="color:#abfebc">92</span>
</span></span><span style="display:flex;"><span> lightYellow = <span style="color:#abfebc">93</span>
</span></span><span style="display:flex;"><span> lightBlue = <span style="color:#abfebc">94</span>
</span></span><span style="display:flex;"><span> lightMagenta = <span style="color:#abfebc">95</span>
</span></span><span style="display:flex;"><span> lightCyan = <span style="color:#abfebc">96</span>
</span></span><span style="display:flex;"><span> white = <span style="color:#abfebc">97</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> <span style="color:#ecc77d">colorize</span>(colorCode <span style="color:#d179a3">int</span>, v <span style="color:#d179a3">string</span>) <span style="color:#d179a3">string</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> fmt.<span style="color:#ecc77d">Sprintf</span>(<span style="color:#ffa08f">"\033[%sm%s%s"</span>, strconv.<span style="color:#ecc77d">Itoa</span>(colorCode), v, reset)
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>That's all that's required for straightforward coloured output, avoiding the need for an external dependency on the <a href="https://github.com/fatih/color">color</a> package. I'm not even going to use all the colours listed above but I've included them regardless so you can adjust your logs to your own liking.</p>
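<p>As a quick sanity check, <code>colorize</code> simply wraps its input in an ANSI colour escape sequence followed by a reset code. A throwaway <code>main</code> (repeating just the constants it needs so the snippet is self-contained) demonstrates it:</p>

```go
package main

import (
	"fmt"
	"strconv"
)

// Repeated from the prettylog package above so this snippet compiles on its own.
const (
	reset = "\033[0m"
	cyan  = 36
)

func colorize(colorCode int, v string) string {
	return fmt.Sprintf("\033[%sm%s%s", strconv.Itoa(colorCode), v, reset)
}

func main() {
	// Prints "INFO:" wrapped in ESC[36m ... ESC[0m, i.e. cyan text
	// on any ANSI-capable terminal.
	fmt.Println(colorize(cyan, "INFO:"))
}
```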
<p>Moving forward, we'll create a struct called <code>Handler</code> (later used as <code>prettylog.Handler</code>):</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">type</span> Handler <span style="color:#d179a3">struct</span> {
</span></span><span style="display:flex;"><span> h slog.Handler
</span></span><span style="display:flex;"><span> b <span style="color:#d179a3">*</span>bytes.Buffer
</span></span><span style="display:flex;"><span> m <span style="color:#d179a3">*</span>sync.Mutex
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The handler has three dependencies:</p>
<ul>
<li>A "nested" <code>slog.Handler</code> which we wrap to effectively fulfil most of our handler's logic.</li>
<li>A <code>*bytes.Buffer</code> with the purpose to capture the output from the "nested" handler.</li>
<li>A mutex to guarantee thread safe access to our <code>*bytes.Buffer</code>.</li>
</ul>
<p>All three dependencies will make more sense once we implement the <code>Handle</code> function.</p>
<p>The <code>slog.Handler</code> interface requires four methods to be implemented:</p>
<ul>
<li><code>Enabled</code></li>
<li><code>WithAttrs</code></li>
<li><code>WithGroup</code></li>
<li><code>Handle</code></li>
</ul>
<p>The <code>Enabled</code> method denotes whether a given handler handles a <code>slog.Record</code> of a particular <code>slog.Level</code>. The <code>WithAttrs</code> and <code>WithGroup</code> methods create derived handlers with predefined attributes and groups, which back the child loggers returned by <code>Logger.With</code> and <code>Logger.WithGroup</code>.</p>
<p>For all three methods we can use the implementation of our nested handler:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (h <span style="color:#d179a3">*</span>Handler) <span style="color:#ecc77d">Enabled</span>(ctx context.Context, level slog.Level) <span style="color:#d179a3">bool</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> h.h.<span style="color:#ecc77d">Enabled</span>(ctx, level)
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (h <span style="color:#d179a3">*</span>Handler) <span style="color:#ecc77d">WithAttrs</span>(attrs []slog.Attr) slog.Handler {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">&</span>Handler{h: h.h.<span style="color:#ecc77d">WithAttrs</span>(attrs), b: h.b, m: h.m}
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (h <span style="color:#d179a3">*</span>Handler) <span style="color:#ecc77d">WithGroup</span>(name <span style="color:#d179a3">string</span>) slog.Handler {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">&</span>Handler{h: h.h.<span style="color:#ecc77d">WithGroup</span>(name), b: h.b, m: h.m}
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The <code>Handle</code> method is where things get interesting.</p>
<p>Writing a log line is actually remarkably easy if one completely ignores groups and attributes to begin with:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">const</span> (
</span></span><span style="display:flex;"><span> timeFormat = <span style="color:#ffa08f">"[15:04:05.000]"</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (h <span style="color:#d179a3">*</span>Handler) <span style="color:#ecc77d">Handle</span>(ctx context.Context, r slog.Record) <span style="color:#d179a3">error</span> {
</span></span><span style="display:flex;"><span> level <span style="color:#d179a3">:=</span> r.Level.<span style="color:#ecc77d">String</span>() <span style="color:#d179a3">+</span> <span style="color:#ffa08f">":"</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">switch</span> r.Level {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">case</span> slog.LevelDebug:
</span></span><span style="display:flex;"><span> level = <span style="color:#ecc77d">colorize</span>(darkGray, level)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">case</span> slog.LevelInfo:
</span></span><span style="display:flex;"><span> level = <span style="color:#ecc77d">colorize</span>(cyan, level)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">case</span> slog.LevelWarn:
</span></span><span style="display:flex;"><span> level = <span style="color:#ecc77d">colorize</span>(lightYellow, level)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">case</span> slog.LevelError:
</span></span><span style="display:flex;"><span> level = <span style="color:#ecc77d">colorize</span>(lightRed, level)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Println</span>(
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">colorize</span>(lightGray, r.Time.<span style="color:#ecc77d">Format</span>(timeFormat)),
</span></span><span style="display:flex;"><span> level,
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">colorize</span>(white, r.Message),
</span></span><span style="display:flex;"><span> )
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">nil</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>What about printing all the attributes that are added to the <code>slog.Record</code> or a parent logger? This is where the bytes buffer and the nested handler come into play.</p>
<h5 id="the-concept-is-simple">The concept is simple:</h5>
<p>We'll invoke the <code>Handle</code> function of the nested handler, but have it write to the <code>*bytes.Buffer</code> instead of the final <code>io.Writer</code>. We'll exclude the default log attributes such as time, level, and message from the nested handler to prevent repetition. Then, we'll append the remaining output as an indented JSON string to our log line. Since loggers need to function correctly when a single <code>slog.Logger</code> is shared among multiple goroutines, we also need to synchronize read and write access to the <code>*bytes.Buffer</code> using the mutex.</p>
<p>Let's encapsulate this behaviour in a function called <code>computeAttrs</code>:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (h <span style="color:#d179a3">*</span>Handler) <span style="color:#ecc77d">computeAttrs</span>(
</span></span><span style="display:flex;"><span> ctx context.Context,
</span></span><span style="display:flex;"><span> r slog.Record,
</span></span><span style="display:flex;"><span>) (<span style="color:#d179a3">map</span>[<span style="color:#d179a3">string</span>]<span style="color:#d179a3">any</span>, <span style="color:#d179a3">error</span>) {
</span></span><span style="display:flex;"><span> h.m.<span style="color:#ecc77d">Lock</span>()
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">defer</span> <span style="color:#d179a3">func</span>() {
</span></span><span style="display:flex;"><span> h.b.<span style="color:#ecc77d">Reset</span>()
</span></span><span style="display:flex;"><span> h.m.<span style="color:#ecc77d">Unlock</span>()
</span></span><span style="display:flex;"><span> }()
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">:=</span> h.h.<span style="color:#ecc77d">Handle</span>(ctx, r); err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">nil</span>, fmt.<span style="color:#ecc77d">Errorf</span>(<span style="color:#ffa08f">"error when calling inner handler's Handle: %w"</span>, err)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">var</span> attrs <span style="color:#d179a3">map</span>[<span style="color:#d179a3">string</span>]<span style="color:#d179a3">any</span>
</span></span><span style="display:flex;"><span> err <span style="color:#d179a3">:=</span> json.<span style="color:#ecc77d">Unmarshal</span>(h.b.<span style="color:#ecc77d">Bytes</span>(), <span style="color:#d179a3">&</span>attrs)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">nil</span>, fmt.<span style="color:#ecc77d">Errorf</span>(<span style="color:#ffa08f">"error when unmarshaling inner handler's Handle result: %w"</span>, err)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> attrs, <span style="color:#d179a3">nil</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The <code>computeAttrs</code> function works as follows:</p>
<ol>
<li>It initially locks the mutex to ensure synchronized access across all goroutines utilizing the same logger or a child logger that shares the same <code>*bytes.Buffer</code>.</li>
<li>It defers the process of resetting the buffer (necessary to prevent outdated Attrs from previous <code>Log</code> calls) and releasing the mutex once the task is complete.</li>
<li>The <code>Handle</code> function of the inner <code>slog.Handler</code> is then invoked. This is where we compute a JSON object within the <code>*bytes.Buffer</code>, leveraging the capabilities of a <code>slog.JSONHandler</code>.</li>
<li>Lastly, the JSON buffer is transformed into a <code>map[string]any</code> after which the resulting object is returned to the caller.</li>
</ol>
<p>Now, let's revisit our own <code>Handle</code> function and integrate the following code:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>attrs, err <span style="color:#d179a3">:=</span> h.<span style="color:#ecc77d">computeAttrs</span>(ctx, r)
</span></span><span style="display:flex;"><span><span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> err
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>bytes, err <span style="color:#d179a3">:=</span> json.<span style="color:#ecc77d">MarshalIndent</span>(attrs, <span style="color:#ffa08f">""</span>, <span style="color:#ffa08f">" "</span>)
</span></span><span style="display:flex;"><span><span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> fmt.<span style="color:#ecc77d">Errorf</span>(<span style="color:#ffa08f">"error when marshaling attrs: %w"</span>, err)
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>By invoking <code>computeAttrs</code> we obtain the <code>attrs</code> map, which we then convert into a neatly formatted (indented) JSON string via marshaling. Admittedly, this isn't the most efficient approach (writing a JSON string into a buffer, deserialising it into an object, and then re-serialising it as a string), but I couldn't identify a more direct method to obtain an indented JSON string from the <code>slog.JSONHandler</code>. Fortunately, as highlighted earlier, this handler isn't designed for peak performance anyway.</p>
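<p>As an aside, one possible shortcut worth considering is the standard library's <code>json.Indent</code>, which re-indents the compact JSON already sitting in the buffer without the Unmarshal/MarshalIndent roundtrip. A minimal sketch of that idea (not part of the original handler, so treat it as an untested alternative):</p>

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// indentJSON re-indents already-valid compact JSON without first
// deserialising it into a map and re-serialising it.
func indentJSON(compact []byte) (string, error) {
	var out bytes.Buffer
	if err := json.Indent(&out, compact, "", "  "); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	pretty, err := indentJSON([]byte(`{"user":"alice","id":42}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(pretty)
}
```

<p>One behavioural difference to be aware of: <code>json.MarshalIndent</code> on a <code>map[string]any</code> sorts keys alphabetically, whereas <code>json.Indent</code> preserves the field order produced by the inner handler.</p>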
<p>Finally, we attach the formatted JSON string in a dark gray hue to our "pretty" log entry:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>fmt.<span style="color:#ecc77d">Println</span>(
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">colorize</span>(lightGray, r.Time.<span style="color:#ecc77d">Format</span>(timeFormat)),
</span></span><span style="display:flex;"><span> level,
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">colorize</span>(white, r.Message),
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">colorize</span>(darkGray, <span style="color:#b4ddff">string</span>(bytes)),
</span></span><span style="display:flex;"><span>)
</span></span></code></pre><p>The final <code>Handle</code> method looks as follows:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (h <span style="color:#d179a3">*</span>Handler) <span style="color:#ecc77d">Handle</span>(ctx context.Context, r slog.Record) <span style="color:#d179a3">error</span> {
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> level <span style="color:#d179a3">:=</span> r.Level.<span style="color:#ecc77d">String</span>() <span style="color:#d179a3">+</span> <span style="color:#ffa08f">":"</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">switch</span> r.Level {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">case</span> slog.LevelDebug:
</span></span><span style="display:flex;"><span> level = <span style="color:#ecc77d">colorize</span>(darkGray, level)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">case</span> slog.LevelInfo:
</span></span><span style="display:flex;"><span> level = <span style="color:#ecc77d">colorize</span>(cyan, level)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">case</span> slog.LevelWarn:
</span></span><span style="display:flex;"><span> level = <span style="color:#ecc77d">colorize</span>(lightYellow, level)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">case</span> slog.LevelError:
</span></span><span style="display:flex;"><span> level = <span style="color:#ecc77d">colorize</span>(lightRed, level)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> attrs, err <span style="color:#d179a3">:=</span> h.<span style="color:#ecc77d">computeAttrs</span>(ctx, r)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> err
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> bytes, err <span style="color:#d179a3">:=</span> json.<span style="color:#ecc77d">MarshalIndent</span>(attrs, <span style="color:#ffa08f">""</span>, <span style="color:#ffa08f">" "</span>)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> fmt.<span style="color:#ecc77d">Errorf</span>(<span style="color:#ffa08f">"error when marshaling attrs: %w"</span>, err)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Println</span>(
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">colorize</span>(lightGray, r.Time.<span style="color:#ecc77d">Format</span>(timeFormat)),
</span></span><span style="display:flex;"><span> level,
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">colorize</span>(white, r.Message),
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">colorize</span>(darkGray, <span style="color:#b4ddff">string</span>(bytes)),
</span></span><span style="display:flex;"><span> )
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">nil</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>Only one last task remains. Currently, the nested <code>slog.Handler</code> writes the time, log level, and log message in addition to other custom attributes. However, since our handler is responsible for displaying these three default attributes, we need to configure the inner <code>slog.Handler</code> to bypass the <code>slog.TimeKey</code>, <code>slog.LevelKey</code> and <code>slog.MessageKey</code> attributes.</p>
<p>The most straightforward approach is to provide a function to the <code>ReplaceAttr</code> property of the <code>slog.HandlerOptions</code>. However, we wish to preserve the ability for an application to specify its individual <code>ReplaceAttr</code> function and <code>slog.HandlerOptions</code>. Therefore we must apply a final touch of trickery to "merge" a custom <code>ReplaceAttr</code> function with our own requirements:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> <span style="color:#ecc77d">suppressDefaults</span>(
</span></span><span style="display:flex;"><span> next <span style="color:#d179a3">func</span>([]<span style="color:#d179a3">string</span>, slog.Attr) slog.Attr,
</span></span><span style="display:flex;"><span>) <span style="color:#d179a3">func</span>([]<span style="color:#d179a3">string</span>, slog.Attr) slog.Attr {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">func</span>(groups []<span style="color:#d179a3">string</span>, a slog.Attr) slog.Attr {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> a.Key <span style="color:#d179a3">==</span> slog.TimeKey <span style="color:#d179a3">||</span>
</span></span><span style="display:flex;"><span> a.Key <span style="color:#d179a3">==</span> slog.LevelKey <span style="color:#d179a3">||</span>
</span></span><span style="display:flex;"><span> a.Key <span style="color:#d179a3">==</span> slog.MessageKey {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> slog.Attr{}
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> next <span style="color:#d179a3">==</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> a
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#ecc77d">next</span>(groups, a)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>A helpful analogy for understanding the <code>suppressDefaults</code> function is to compare it to a middleware in an HTTP server. It takes in a <code>next</code> function that matches the same function signature as the <code>ReplaceAttr</code> property. It then performs filtering on <code>slog.TimeKey</code>, <code>slog.LevelKey</code>, and <code>slog.MessageKey</code> before continuing with <code>next</code> (if it's not nil).</p>
<p>With this in place, we're ready to create a constructor for our <code>prettylog.Handler</code> and assemble everything together:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> <span style="color:#ecc77d">NewHandler</span>(opts <span style="color:#d179a3">*</span>slog.HandlerOptions) <span style="color:#d179a3">*</span>Handler {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> opts <span style="color:#d179a3">==</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> opts = <span style="color:#d179a3">&</span>slog.HandlerOptions{}
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> b <span style="color:#d179a3">:=</span> <span style="color:#d179a3">&</span>bytes.Buffer{}
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">&</span>Handler{
</span></span><span style="display:flex;"><span> b: b,
</span></span><span style="display:flex;"><span> h: slog.<span style="color:#ecc77d">NewJSONHandler</span>(b, <span style="color:#d179a3">&</span>slog.HandlerOptions{
</span></span><span style="display:flex;"><span> Level: opts.Level,
</span></span><span style="display:flex;"><span> AddSource: opts.AddSource,
</span></span><span style="display:flex;"><span> ReplaceAttr: <span style="color:#ecc77d">suppressDefaults</span>(opts.ReplaceAttr),
</span></span><span style="display:flex;"><span> }),
</span></span><span style="display:flex;"><span> m: <span style="color:#d179a3">&</span>sync.Mutex{},
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The entire code can be found on <a href="https://github.com/dusted-go/logging/tree/main">GitHub</a>.</p>
<h2 id="final-result">Final result</h2>
<p>Below are a few examples of what those pretty logs look like.</p>
<p>Example of a logger with no <code>*slog.HandlerOptions</code>:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-08-23/prettylog-example-3.png" alt="Example 3"></p>
<p>Creating a child logger with an additional group of attributes attached to it:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-08-23/prettylog-example-4.png" alt="Example 4"></p>
<p>Making sure custom <code>ReplaceAttr</code> functions are supported:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2023-08-23/prettylog-example-5.png" alt="Example 5"></p>
<p>Hopefully this blog post proved useful. It certainly was a valuable exercise for delving into the new <code>log/slog</code> package and gaining a better understanding of Go's latest structured logging capabilities.</p>
<p><a href="https://dusted.codes/creating-a-pretty-console-logger-using-gos-slog-package">https://dusted.codes/creating-a-pretty-console-logger-using-gos-slog-package</a><br>Dustin Moris Gorski, Wed, 23 Aug 2023<br>Tags: golang, slog, logging</p>
<h1 id="how-fast-is-aspnet-core">How fast is ASP.NET Core?</h1>
<p>In recent years the .NET Team has been heavily advertising ASP.NET Core as one of the fastest web frameworks on the market. The source of those claims has always been the <a href="https://www.techempower.com/benchmarks">TechEmpower Framework Benchmarks</a>.</p>
<p>Take this slide from <a href="https://www.youtube.com/watch?v=2Ky28Et3gy0">BUILD 2021</a>, which <a href="https://twitter.com/coolcsh">Scott Hunter</a>, Director of Program Management for .NET, presented last year:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/dotnet-5-performance-claims.png" alt="Dubious .NET 5 performance claims"></p>
<p>According to him .NET is <strong>more than 10 times faster</strong> than Node.js.</p>
<p>Scott also claims that .NET is faster than Java, Go and <strong>even C++</strong>, which is a huge boast if this is true!</p>
<p>Only recently <a href="https://twitter.com/sebastienros">Sébastien Ros</a>, from the ASP.NET Core team, wrote this on Reddit:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/reddit-comment.png" alt="Reddit comment by member of the ASP.NET Core team"></p>
<p>In particular this sentence was super interesting to read:</p>
<blockquote>
<p>Finally, even with the fastest Go web framework, .NET is still faster when using a high level stack (middleware, minimal APIs, …).</p>
</blockquote>
<p>That is a bold claim and equally super impressive if true, so I was naturally curious to find out more about ASP.NET Core's performance and the TechEmpower Framework Benchmarks.</p>
<h2 id="techempower-benchmarks">TechEmpower Benchmarks</h2>
<p><a href="https://www.techempower.com">TechEmpower</a> is a software agency located in Los Angeles, California who run an independent framework benchmark on their servers. They publish all the <a href="https://www.techempower.com/benchmarks/">results on their website</a> as well as the <a href="https://github.com/TechEmpower/FrameworkBenchmarks">framework code on GitHub</a>.</p>
<p>The first thing that stood out to me was that the last official round (<a href="https://www.techempower.com/benchmarks/#section=data-r21">Round 21</a>) was captured on 19th July 2022. The round before that (<a href="https://www.techempower.com/benchmarks/#section=data-r20">Round 20</a>) ran in February 2021, which means there was a gap of more than a year between those two official rounds. I am not sure why there are so few official rounds, but I discovered that they also run a continuous benchmark which can be viewed on their <a href="https://tfb-status.techempower.com">Results Dashboard</a>. However, since the last official round was fairly recent, and the difference between the results from Round 21 and the <a href="https://www.techempower.com/benchmarks/#section=test&runid=da435f15-1b5b-4347-acbe-a68ced6efb39">last completed run from the continuous benchmarks</a> is not that big, I decided to stick with Round 21 for my further analysis.</p>
<p>TechEmpower divides their tests into the following categories:</p>
<ul>
<li>JSON serializers</li>
<li>Single query</li>
<li>Multiple queries</li>
<li>Cached queries</li>
<li>Fortunes</li>
<li>Data updates</li>
<li>Plaintext</li>
</ul>
<p>The <a href="https://www.techempower.com/benchmarks/#section=data-r21&test=fortune">Fortunes</a> benchmark is the gold standard of all benchmarks. It is the only one that tries to resemble a "real-world scenario": it involves reading from a database, sorting data by text, XSS prevention, and some server-side HTML template rendering too.</p>
<p>All the other test categories focus on an isolated aspect of a framework, which makes them interesting to read but of little use when ranking web frameworks by overall performance.</p>
<p>So let's take a closer look at the Fortunes benchmark from Round 21:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2022-11-14/techempower-benchmarks-round-21.png"><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/techempower-benchmarks-round-21.png" alt="TechEmpower Benchmark Results Top 20 from Round 21"></a></p>
<p>To my astonishment ASP.NET Core ranks 9th amongst the top 10 fastest frameworks! Two further flavours of the ASP.NET Core benchmark rank 13th and 14th out of the 439 completed benchmark runs. That is very impressive indeed!</p>
<h3 id="what-are-the-different-aspnet-core-benchmarks">What are the different ASP.NET Core benchmarks?</h3>
<p>Why does ASP.NET Core appear more than once in the benchmark results with varying performance metrics?</p>
<p>It turns out that there are in fact 15 different ASP.NET Core benchmarks which can be broadly subdivided into these four categories:</p>
<ul>
<li>ASP.NET Core stripped naked</li>
<li>ASP.NET Core with middleware</li>
<li>ASP.NET Core MVC</li>
<li>ASP.NET Core on Mono</li>
</ul>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/aspnet-core-benchmark-frameworks.png" alt="ASP.NET Core Benchmark Frameworks"></p>
<p>However, those are self-chosen names (by the .NET Team) and in order to get a real picture of what is being tested one has to look at the actual code itself. Luckily all the <a href="https://github.com/TechEmpower/FrameworkBenchmarks">code is publicly available on GitHub</a>.</p>
<p>I'm not interested in checking out 15 different implementations of various ASP.NET Core benchmarks so I decided to focus on the top performing ones by further narrowing down the 15 benchmarks into the best 7 out of the bunch:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/aspnet-core-benchmark-frameworks-without-mysql-and-without-mono.png" alt="ASP.NET Core Benchmark Frameworks without MySQL and without Mono tests"></p>
<p>I removed the Mono benchmarks and all the tests which used MySQL as the underlying database, because those tests performed significantly worse than the .NET Core with Postgres equivalents (which have the <code>pg</code> suffix in their labels).</p>
<p>Slowly the picture becomes clearer. The above screenshot also includes the framework "classification" which can be seen on the right hand side of the image. The top benchmark (which is the impressive one that ranks 9th overall) is classified as <strong>"Platform"</strong>. The next three benchmarks are classified as <strong>"Micro"</strong> and the last three benchmarks are classified as <strong>"Full"</strong>. There seems to be a very significant performance drop as one moves from the "Platform" tests down to the "Full" tests.</p>
<p>Similar to the naming of the framework benchmarks, the classification is neither standardised nor audited by TechEmpower employees. Anyone can submit code with an arbitrary name and classification and receive little to no scrutiny from the repository maintainers. At least that was my impression when I once submitted an F# benchmark test.</p>
<p>Only the code itself can be used as a reliable source of truth to draw conclusions from those tests.</p>
<p>Luckily the code for all ASP.NET Core (on .NET Core) benchmarks can be found inside the <a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore">/frameworks/CSharp/aspnetcore</a> folder of the GitHub repository.</p>
<p>On 19th July 2022 (when Round 21 took place) the ASP.NET Core benchmark was divided into two projects:</p>
<ul>
<li><a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/Benchmarks">/Benchmarks</a></li>
<li><a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/PlatformBenchmarks">/PlatformBenchmarks</a></li>
</ul>
<p>Both of these web applications are <strong>very different</strong> so it is important to understand which one is used by which benchmark. This can be done by inspecting the <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/config.toml"><code>config.toml</code></a> file and the associated <code>Dockerfile</code> for the respective test case.</p>
<p>For example, the best ranking ASP.NET Core benchmark (<code>aspcore-ado-pg</code>) has the following configuration:</p>
<h5 id="configtoml">config.toml</h5>
<pre><code>[ado-pg]
urls.db = "/db"
urls.query = "/queries/"
urls.fortune = "/fortunes"
urls.cached_query = "/cached-worlds/"
approach = "Realistic"
classification = "Platform"
database = "Postgres"
database_os = "Linux"
os = "Linux"
orm = "Raw"
platform = ".NET"
webserver = "Kestrel"
versus = "aspcore-ado-pg"
</code></pre>
<h5 id="aspcore-ado-pgdockerfile">aspcore-ado-pg.dockerfile</h5>
<pre><code>FROM mcr.microsoft.com/dotnet/sdk:6.0.100 AS build
WORKDIR /app
COPY PlatformBenchmarks .
RUN dotnet publish -c Release -o out /p:DatabaseProvider=Npgsql
FROM mcr.microsoft.com/dotnet/aspnet:6.0.0 AS runtime
ENV ASPNETCORE_URLS http://+:8080
# Full PGO
ENV DOTNET_TieredPGO 1
ENV DOTNET_TC_QuickJitForLoops 1
ENV DOTNET_ReadyToRun 0
WORKDIR /app
COPY --from=build /app/out ./
COPY PlatformBenchmarks/appsettings.postgresql.json ./appsettings.json
EXPOSE 8080
ENTRYPOINT ["dotnet", "PlatformBenchmarks.dll"]
</code></pre>
<p>The Dockerfile tells us that this test uses the <code>/PlatformBenchmarks</code> code:</p>
<pre><code>COPY PlatformBenchmarks .
</code></pre>
<p>From the <code>config.toml</code> file we can derive that the Fortune test invokes the <code>/fortunes</code> endpoint during the benchmark run.</p>
<p>Also the .NET Team specified this particular benchmark to be classified as a realistic approach in the <code>config.toml</code> file:</p>
<pre><code>approach = "Realistic"
</code></pre>
<h2 id="the-aspnet-core-platform-benchmark">The "ASP.NET Core Platform" Benchmark</h2>
<p>Cool, so what's inside this highly performant realistic ASP.NET Core application?</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/aspnet-core-source-code.png" alt="ASP.NET Core PlatformBenchmarks code repository"></p>
<p>At first glance I didn't recognise much of what I'd normally consider a typical ASP.NET Core application (I've been developing professionally on ASP.NET and later ASP.NET Core since 2010).</p>
<p>The only thing that looked slightly familiar was the use of Kestrel (the .NET web server) inside <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/PlatformBenchmarks/Program.cs#L62-L73">Program.cs</a>:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/kestrel-setup.png" alt="Kestrel setup"></p>
<p>To my surprise this was also the <strong>only thing</strong> which I could recognise as an "ASP.NET Core" thing. The web application itself is not even initialised via one of the many ASP.NET Core idioms. Instead it creates a custom <code>BenchmarkApplication</code> as the listener on the configured endpoint.</p>
<p>An untrained eye might be thinking that <code>builder.UseHttpApplication<T>()</code> is a method that comes with Kestrel, but that is not the case either. The extension method as well as the <code>HttpApplication</code> class <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/PlatformBenchmarks/HttpApplication.cs">which is in use here</a> are not things which you'd find in the actual ASP.NET Core framework. It is yet another custom class specifically written for this benchmark:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/fake-aspnet-core-http-application.png" alt="Fake HttpApplication"></p>
<p>Not even the interface <code>IHttpApplication</code> comes from ASP.NET Core. This is also a <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/PlatformBenchmarks/IHttpConnection.cs">custom made type</a> which was specifically designed for the benchmark tests.</p>
<p>Looking further into the <code>BenchmarkApplication.cs</code> I was shocked by the sheer <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/PlatformBenchmarks/BenchmarkApplication.cs">amount of finely tuned low level C# code</a> that was tailor made for this (extremely simple) application.</p>
<p>Everything inside the <code>/PlatformBenchmarks</code> folder is custom code which you won't find anywhere in an official ASP.NET Core package.</p>
<p>A good example is the <code>AsciiString</code> class which is used to statically initialise huge chunks of the expected HTTP responses in advance:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/ascii-strings.png" alt="AsciiString Usage"></p>
<p>Even though it is called <code>AsciiString</code>, it is a string in name only:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/ascii-string-implementation.png" alt="AsciiString Implementation"></p>
<p>In reality the <code>AsciiString</code> class is just a fancy (highly optimised) wrapper around a byte array which converts a string into bytes during initialisation. In the case of the Fortunes test the entire HTTP header (which the application is expected to return during a test run) is created upfront during application startup and then kept in memory for the entirety of the benchmark:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/http-header-trick.png" alt="Hardcoded HTTP Headers"></p>
<p>This is supposed to be a very simple application, something which a framework could probably squeeze into a single file of code, but the <code>/PlatformBenchmarks</code> project has <strong>many dozens of expertly crafted classes</strong> with all sorts of <strong>trickery</strong> applied to produce a desired outcome.</p>
<p>The extent to which the .NET Team went is extraordinary.</p>
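<p>To make the technique concrete, here is a minimal sketch of what such an <code>AsciiString</code>-style type might look like. This is my own hypothetical reconstruction of the idea (pay the encoding cost once at startup, serve raw bytes forever after), not the benchmark's actual code:</p>

```csharp
using System;
using System.Text;

// Hypothetical reconstruction of the AsciiString idea: a string-like wrapper
// that pays the ASCII encoding cost once, at initialisation, so that HTTP
// responses can later be written as raw, pre-encoded bytes.
public readonly struct AsciiString
{
    private readonly byte[] _data;

    public AsciiString(string s) => _data = Encoding.ASCII.GetBytes(s);

    private AsciiString(byte[] data) => _data = data;

    public int Length => _data.Length;

    public ReadOnlySpan<byte> Bytes => _data;

    // Pre-encoded fragments can be concatenated once at application startup.
    public static AsciiString operator +(AsciiString a, AsciiString b)
    {
        var bytes = new byte[a._data.Length + b._data.Length];
        a._data.CopyTo(bytes, 0);
        b._data.CopyTo(bytes, a._data.Length);
        return new AsciiString(bytes);
    }

    public override string ToString() => Encoding.ASCII.GetString(_data);
}
```

<p>Response fragments built this way are encoded exactly once and can be written straight to a socket without any per-request string processing.</p>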
<p>ASP.NET Core has many ways of implementing routing. There are Actions and Controllers, Endpoint Routing, Minimal APIs, or, if someone wanted to operate on the <strong>lowest level of ASP.NET Core</strong> (= Platform), they could work directly with the <code>Request</code> and <code>Response</code> objects from the <code>HttpContext</code>.</p>
<p>None of these options can be found in <code>/PlatformBenchmarks</code>:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/optimised-routing.png" alt="Highly optimised routing"></p>
<p>In fact, you won't find an <code>HttpContext</code> anywhere at all. It's almost like the .NET Team tried to avoid using ASP.NET Core at all costs, which is strange to say the least.</p>
<p>Sifting through the project reveals even more bizarre code which the .NET Team applied to "tweak" the benchmark score.</p>
<p>For instance take a look at the HTML templating implementation of the ASP.NET Core solution:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/aspnet-core-fortunes-test.png" alt="ASP.NET Core Fortunes output writer"></p>
<p>There is no HTML template at all. The whole point of the Fortunes benchmark is - amongst others - to test different web frameworks for how fast they can output templated HTML. In ASP.NET Core we have two templating engines, <a href="https://learn.microsoft.com/en-us/aspnet/core/mvc/views/razor?view=aspnetcore-7.0">Razor Views</a> and <a href="https://learn.microsoft.com/en-us/aspnet/core/razor-pages/?view=aspnetcore-7.0&tabs=visual-studio">Razor Pages</a>, neither of which is being used here.</p>
<p>Instead there are more hardcoded statically initialised byte arrays:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/html-template-cheat.png" alt="HTML Template Rendering Cheat"></p>
<p>Of course the question remains whether these sorts of tricks are allowed. The lines might be a bit blurry, but I am certain that this implementation pushes the boundaries of what one might consider a real templating engine.</p>
<p>Web frameworks don't have to participate in every category of the TechEmpower Benchmark tests. In fact it is encouraged to only enter the categories which apply to a particular framework. For example, if a low level ASP.NET Core implementation (a real one which uses ASP.NET Core with <code>HttpContext</code> and so on) doesn't have template rendering included then it shouldn't enter the competition for Fortunes. If a higher level framework such as ASP.NET Core MVC has HTML template rendering available then it can enter the Fortunes benchmark. Entering the Fortunes competition with random C# code that doesn't resemble a real web framework at all makes very little sense and really just tarnishes the credibility of the entire TechEmpower Framework Benchmark test.</p>
<p>Perhaps I am being a little bit overly critical here, but this line of code really got me thinking:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/writing-date-header.png" alt="Date Header Cheat"></p>
<p>Setting the <code>Date</code> HTTP header with a date time value is such a small task that you don't even need a framework to do this job. It should be no more than a single line of code:</p>
<pre><code>response.WriteHeader("Date", DateTime.UtcNow.ToString("R"))
</code></pre>
<p>However, the ASP.NET Core benchmark has a "<em>slightly more optimised</em>" <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/PlatformBenchmarks/DateHeader.cs">solution</a> to this task:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/date-header-implementation.png" alt="Date Header Code"></p>
<p>Setting a date time value has been so highly optimised that I can't even fit the entire code into a single screen. <strong>The creativity of finding ways to save computation cycles and therefore score higher in the benchmarks is truly astonishing.</strong> The <code>DateHeader</code> class is a static class (which means it only gets initialised once as a singleton and is then kept in memory) with a static <code>DateTimeOffset</code> value (of course already stored as a byte array). Additionally a <code>System.Threading.Timer</code> object is also statically initialised with a <strong>one second</strong> interval. This <a href="https://learn.microsoft.com/en-us/dotnet/api/system.threading.timer?view=net-7.0">Timer</a> will run on a separate thread and set a new date time value once every second:</p>
<pre><code>private static readonly Timer s_timer = new Timer((s) => {
    SetDateValues(DateTimeOffset.UtcNow);
}, null, 1000, 1000);
</code></pre>
<p>You wonder how this is an optimisation? Well, the TechEmpower Benchmark hits a web server many hundreds of thousands of times <strong>per second</strong> to really test the limits of each framework. The <code>DateHeader</code> class returns the exact same timestamp for all of those requests and thereby saves itself from computing a new timestamp hundreds of thousands of times. Then, after one second, the <code>Timer</code> (which runs on a separate thread) computes a new timestamp exactly once and caches it for the next 300k+ requests. I'm impressed by the ingenuity. In all fairness, the HTTP <code>Date</code> header doesn't accept timestamps more granular than a second, and the <a href="https://github.com/TechEmpower/FrameworkBenchmarks/wiki/Project-Information-Framework-Tests-Overview">TechEmpower guidelines</a> mention this as an accepted optimisation.</p>
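<p>Condensed to its essence, the trick looks roughly like this (a sketch under my own naming, not the benchmark's exact code):</p>

```csharp
using System;
using System.Text;
using System.Threading;

// Sketch of the cached-date-header technique: the RFC 1123 date string is
// re-encoded to bytes at most once per second by a background timer,
// instead of once per request.
public static class CachedDateHeader
{
    // The pre-rendered "Date: ..." header line, kept in memory as bytes.
    private static byte[] s_headerBytes = Render(DateTimeOffset.UtcNow);

    // Refreshes the cached bytes every 1000ms on a separate thread.
    private static readonly Timer s_timer =
        new Timer(_ => Volatile.Write(ref s_headerBytes, Render(DateTimeOffset.UtcNow)),
                  null, 1000, 1000);

    private static byte[] Render(DateTimeOffset now) =>
        Encoding.ASCII.GetBytes("Date: " + now.ToString("R") + "\r\n");

    // Per request: hand out the same cached bytes, no formatting work at all.
    public static byte[] HeaderBytes => Volatile.Read(ref s_headerBytes);
}
```

<p>Every request reads the same cached byte array; only the background timer ever pays the cost of formatting a new RFC 1123 timestamp.</p>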
<p>The only question I have is: if this benchmark is testing ASP.NET Core, why does it need to replicate something which ASP.NET Core already has <a href="https://github.com/dotnet/aspnetcore/blob/v5.0.17/src/Servers/Kestrel/Core/src/Internal/Http/DateHeaderValueManager.cs">out of the box</a>?</p>
<p>Now I ask myself, are all the ASP.NET Core benchmarks "tweaked" like this?</p>
<p>What about other frameworks?</p>
<p>I needed to further investigate this!</p>
<h2 id="aspnet-core-micro-benchmarks">ASP.NET Core Micro Benchmarks</h2>
<p>After dissecting the "Platform" benchmark it was time to look at the "Micro" frameworks:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/aspnet-core-benchmark-frameworks-without-mysql-and-without-mono.png" alt="ASP.NET Core Benchmark Frameworks without MySQL and without Mono tests"></p>
<p>Looking at the respective <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/aspcore-mw-ado-pg.dockerfile">Dockerfile</a> it turns out that the "Micro" benchmarks use the code from the <code>/Benchmarks</code> folder, which looks like an actual ASP.NET Core application:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/aspnet-core-benchmarks-folder.png" alt="ASP.NET Core Benchmarks folder"></p>
<p>This benchmark immediately has a different vibe than the one before. I'm very pleased to see that it's actually using elements which come from ASP.NET Core itself. The Fortunes tests are initialised via conventional middleware like this:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/fortunes-raw-middleware.png" alt="Fortunes Raw Middleware"></p>
<p>The <code>aspcore-mw-ado-pg</code> benchmark is what most .NET developers would probably call a low level "Platform" ASP.NET Core implementation. There is no higher level routing, no content negotiation, no other cross-cutting middlewares, no EntityFramework and still no actual HTML template rendering either, but at least it's ASP.NET Core.</p>
<p>The <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/Benchmarks/Middleware/FortunesRawMiddleware.cs">middleware</a> operates directly on the <code>HttpContext</code> to do basic routing:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/middleware-routing.png" alt="Middleware Routing"></p>
<p>This is okay and <a href="https://github.com/TechEmpower/FrameworkBenchmarks/wiki/Project-Information-Framework-Tests-Overview">in line with the TechEmpower guidelines</a>, because operating directly on the <code>HttpContext</code> is canonical for the framework (as opposed to the benchmark before):</p>
<blockquote>
<p>In some cases, it is considered normal and sufficiently production-grade to use hand-crafted minimalist routing using control structures such as if/else branching. This is acceptable where it is considered canonical for the framework.</p>
</blockquote>
<p>Although the middleware benchmark doesn't apply the <code>AsciiString</code> trickery any more, it still resorts to a "fake" templating engine:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/stringbuilder-templates.png" alt="StringBuilder Templates"></p>
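<p>In essence the "template" is just string concatenation. A condensed sketch of that approach (my own reconstruction, with hypothetical names) could look like this:</p>

```csharp
using System;
using System.Net;
using System.Text;

// Sketch of a "fake templating engine": the Fortunes HTML is assembled with
// a StringBuilder instead of being rendered from a template file on disk.
public static class FortunesHtml
{
    public static string Render((int Id, string Message)[] rows)
    {
        var sb = new StringBuilder();
        sb.Append("<!DOCTYPE html><html><head><title>Fortunes</title></head><body>");
        sb.Append("<table><tr><th>id</th><th>message</th></tr>");
        foreach (var row in rows)
        {
            sb.Append("<tr><td>").Append(row.Id).Append("</td><td>");
            // HTML-encoding is still required by the benchmark rules.
            sb.Append(WebUtility.HtmlEncode(row.Message));
            sb.Append("</td></tr>");
        }
        sb.Append("</table></body></html>");
        return sb.ToString();
    }
}
```

<p>It produces valid, encoded HTML, but nothing is ever read or parsed from a template file on disk.</p>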
<p>Overall it is a much more realistic (albeit not perfect) benchmark!</p>
<h2 id="aspnet-core-full-benchmarks">ASP.NET Core Full Benchmarks</h2>
<p>Finally it was time to check out the "MVC" benchmark. It also derives its code from the <code>/Benchmarks</code> folder, but instead of operating on the raw <code>HttpContext</code> it actually initialises the <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/Benchmarks/Startup.cs#L104-L114">minimum required MVC middleware</a> with the Razor View Engine:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/mvc-middleware.png" alt="MVC Core Middleware"></p>
<p>The Controller Action is also kept very realistic and finally uses the actual ASP.NET Core templating engine:</p>
<pre><code>[HttpGet("raw")]
public async Task&lt;IActionResult&gt; Raw()
{
    var db = HttpContext.RequestServices.GetRequiredService&lt;RawDb&gt;();
    return View("Fortunes", await db.LoadFortunesRows());
}
</code></pre>
<p>The Razor view matches what one would expect from this simple benchmark:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/mvc-view-template.png" alt="Razor View Template"></p>
<p>This is the most realistic ASP.NET Core application which actually meets the spirit of the Fortunes benchmark.</p>
<p>However, the results of this benchmark are <strong>very different</strong> to what Microsoft actively advertised to the .NET Community. The performance difference is enormous between a "fake" templating engine, where an HTML response is created <strong>in memory</strong> via a <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/CSharp/aspnetcore/Benchmarks/Data/StringBuilderCache.cs">cached StringBuilder</a>, and an actual templating engine which has to incur additional (expensive) I/O operations to read, parse and apply HTML templates from disk.</p>
<p>The latter only manages to serve <strong>184k requests/sec</strong> and <strong>only ranks 109<sup>th</sup></strong> overall in the TechEmpower Framework Benchmarks for the Fortunes test. That is a staggering difference and something to be kept in mind when comparing ASP.NET Core to frameworks written in Java, Go or C++.</p>
<h2 id="other-frameworks">Other frameworks</h2>
<p>Now that I've established a clearer picture of what the various ASP.NET Core benchmarks are, it was time to look at other frameworks too.</p>
<h3 id="java">Java</h3>
<p>The fastest Java benchmark which also uses Postgres as the underlying database is <a href="https://jooby.io">Jooby</a>.</p>
<p>Their <a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Java/jooby">benchmark implementation</a> is astonishingly simple. The entire Fortunes implementation is basically this block of code:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/jooby-fortunes.png" alt="Jooby Fortunes"></p>
<p>It uses a higher level router (<code>get("/fortunes", ctx -> {...})</code>) as well as conventional database access methods and a real templating engine too:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/jooby-template.png" alt="Jooby Template"></p>
<p>This is pretty much the Java equivalent to the ASP.NET Core MVC (aka Full) benchmark.</p>
<p>The interesting part is that this completely unoptimised fully fledged Java MVC framework <strong>ranks 12<sup>th</sup> overall</strong> in the Fortunes benchmark with an incredible <strong>404k requests/sec</strong>. It is essentially <strong>more than twice as fast</strong> as the ASP.NET Core equivalent, it still beats the "Micro" implementation of the ASP.NET Core benchmark (which skips all the expensive I/O operations by using a fake templating engine), and it even manages to compete with the infamous <code>/PlatformBenchmarks</code> application, which in all honesty is so different that a comparison is hardly meaningful.</p>
<p>No disrespect to ASP.NET Core (because 184k requests/sec is still an amazing result) but it doesn't come anywhere near this Java framework when it comes to performance. Credit where credit is due.</p>
<h3 id="go">Go</h3>
<p>What about Go?</p>
<p>Sébastien Ros (developer working on ASP.NET Core performance at Microsoft) specifically called out Go and claimed that ASP.NET Core is still faster than Go in a like-for-like comparison. I was personally very interested in this claim as I have migrated several .NET Core projects to Go and seen dramatic performance increases as a result of it.</p>
<p>At the time of writing this post the fastest Fortunes benchmark for Go is <a href="https://github.com/savsgio/atreugo">atreugo</a>.</p>
<p>Similar to Java, the actual <a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Go/atreugo">Go implementation</a> is kept extremely simple.</p>
<p>Routing is done via the framework provided idioms:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/atreugo-routing.png" alt="atreugo routing"></p>
<p>No shortcuts or trickery to be found here. The entire application for the Fortunes benchmark is basically <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Go/atreugo/src/views/views.go#L97-L123">less than 20 lines of code</a>.</p>
<p>Templating is done the proper way too:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/atreugo-template.png" alt="atreugo template"></p>
<p>So where does this leave us overall? Well, just like with the fastest Java framework, the Go benchmark also compares best to ASP.NET Core's "Full" implementation. Anything else would simply not be fair. You cannot compare a benchmark which spits out in-memory crafted HTML (which is not even part of ASP.NET Core) against one that actually uses a real templating engine and goes through expensive cycles of reading files from I/O, parsing them at runtime and executing their logic (loops, variables, etc.) for every request.</p>
<p>Nevertheless, the expensive Go implementation <strong>ranks 22<sup>nd</sup></strong> overall in the TechEmpower Fortunes Benchmark with an equally impressive <strong>381k requests/sec</strong>. Not quite as fast as the Java one but <strong>still more than 2x faster than the equivalent test in ASP.NET Core</strong>.</p>
<h3 id="c">C++</h3>
<p>Hopefully this shouldn't be a big surprise, but currently C++ with the <a href="https://github.com/drogonframework/drogon">drogon</a> framework <strong>leads the Fortunes</strong> benchmarks with a breathtaking <strong>616k requests/sec</strong>, which beats every other framework by a long stretch (except Rust, where the gap is not that big)! What makes this achievement even more astonishing is that it manages to do this with a <a href="https://github.com/drogonframework/drogon">fully fledged MVC implementation</a>. There are absolutely no shortcuts or trickery at play.</p>
<p>It even uses the <a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/C%2B%2B/drogon/drogon_benchmark/views">CSP templating engine</a> which looks like this:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-11-14/drogon-template.png" alt="drogon template"></p>
<p>I love .NET, but no amount of mental gymnastics can convincingly put .NET on top of C++. Any benchmark that suggests otherwise is not being honest with itself.</p>
<h3 id="rust-nodejs-kotlin-and-php">Rust, Node.js, Kotlin and PHP</h3>
<p>Since the .NET Team started to market ASP.NET Core as a much faster web framework than many others, I thought it would only be fair to probe those claims further.</p>
<h4 id="rust">Rust</h4>
<p><a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Rust/xitca-web">Rust</a> delivers <strong>588k requests/sec</strong> and comes <strong>2<sup>nd</sup></strong> in the overall Fortunes benchmark. It's the only other language platform which gives C++ a run for its money. The <a href="https://github.com/HFQR/xitca-web">xitca-web</a> framework accomplishes this unbelievable result with another proper <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Rust/xitca-web/src/main.rs#L130-L136">MVC-like implementation</a> and an <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Rust/xitca-web/templates/fortune.stpl">actual templating engine</a>.</p>
<h4 id="kotlin">Kotlin</h4>
<p>Another great result is achieved by a <a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Kotlin/vertx-web-kotlin-coroutines">Kotlin web framework</a> with a very honest <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Kotlin/vertx-web-kotlin-coroutines/src/main/kotlin/io/vertx/benchmark/App.kt#L147-L172">Fortunes implementation</a> which uses the <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/Kotlin/vertx-web-kotlin-coroutines/src/main/resources/templates/Fortunes.rocker.html">Rocker engine</a> for its HTML templating. It achieves <strong>350k requests/sec</strong> and comes <strong>29<sup>th</sup></strong> overall, which is still <strong>80 places ahead of the equivalent ASP.NET Core</strong> implementation.</p>
<h4 id="nodejs">Node.js</h4>
<p>One claim which turned out to be (partially) true is that <strong>ASP.NET Core is faster than Node.js</strong>. Although only <strong>3x and not 10x faster</strong> as was claimed, ASP.NET Core still convincingly beats <a href="https://github.com/lukeed/polkadot">Polkadot</a>, the highest ranking Node.js framework with an implementation comparable to the "Micro" benchmark in ASP.NET Core. With only <strong>125k requests/sec</strong> Node.js trails behind .NET.</p>
<h4 id="php">PHP</h4>
<p>Now this might actually take people by surprise, but if you haven't been paying attention then you might have missed all the work that has gone into PHP over the years, not least because Facebook invested a lot of effort into making PHP a better platform. It is now capable of serving an incredible <strong>309k requests/sec</strong> with its <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/PHP/mixphp/views/fortunes.php">MVC-like implementation</a> delivered by <a href="https://github.com/TechEmpower/FrameworkBenchmarks/tree/62aaac842e6bf51540bb838bb9ffaaad0d7c9e73/frameworks/PHP/mixphp">mixphp</a>. That is still significantly faster than ASP.NET Core's MVC framework and certainly deserves a mention too!</p>
<h4 id="justjs">Just(js)</h4>
<p>If you are a <strong>JavaScript</strong> developer don't feel too bad about the Node.js benchmarks, because <a href="https://github.com/just-js/just">Just(js)</a> will knock your socks off with a spectacular <strong>538k requests/sec</strong>. This is no joke, <a href="https://just.billywhizz.io">Just(js)</a> comes <strong>5<sup>th</sup> in the Fortunes benchmark</strong> and is the only framework here which competes in the realms of C++ and Rust. It is a remarkable achievement and <a href="https://just.billywhizz.io/blog/on-javascript-performance-01/">not something that happened by accident</a>. It is far ahead of every ASP.NET Core benchmark and had to be mentioned as part of this post!</p>
<h2 id="is-aspnet-core-actually-fast">Is ASP.NET Core actually fast?</h2>
<p><strong>Yes</strong>, it certainly is!</p>
<p>Especially if you think back to what Classic ASP.NET was during the .NET Framework times, it becomes very clear that ASP.NET Core is worlds apart from its darker past.</p>
<p>Make no mistake, <strong>ASP.NET Core is very fast</strong> and certainly doesn't need to shy away from a healthy competition. However, it is <strong>evidently not faster than Java, Go or C++</strong>. Perhaps it will get there one day but at the moment this is not the case. I am certain that we haven't seen the ceiling for ASP.NET Core just yet and I look forward to what the .NET Team will deliver next. ASP.NET Core is a great platform and even though it's not the fastest (yet), it is still a joy!</p>
<p>I wish Scott Hunter and the rest of the ASP.NET Core Team didn't feel the need to market ASP.NET Core based on soft lies and bad-faith claims to make ASP.NET Core stand out amongst its peers. I'm sure there is more to be proud of!</p>
<h4 id="sidenotes">Sidenotes</h4>
<p>One final interesting thing which came up during my research is that TechEmpower <a href="https://www.techempower.com/benchmarks/#section=environment">switched their cloud hosting environment from AWS to Azure</a> around the time when Microsoft got interested in the tests. TechEmpower also receives the physical hardware for all their on-premise tests from Microsoft today.</p>
<h2 id="update-after-twitter-storm-15112022">Update after Twitter storm (15/11/2022)</h2>
<h3 id="bad-microsoft">Bad Microsoft?</h3>
<p>No, I don't think the .NET Team had any malice in mind. I am confident the engineers were simply geeking out over performance improvements and then the marketing department probably got wind of it and started to conveniently cherry pick comparisons. It happens, but <a href="https://twitter.com/davidfowl/status/1592311942005542912?s=20&t=YLN5ldr4cC00jXoc4PhesA">David Fowler from the ASP.NET Core team confirmed they will be more mindful</a> about this going forward.</p>
<h3 id="fair-comparisons">Fair comparisons?</h3>
<p>The TechEmpower Framework Benchmarks cover over 300 different frameworks. Web frameworks consist of <strong>many layers with a huge variety of functionality</strong>. It's going to be impossible to have a 100% fair comparison when these web frameworks don't even have feature parity. I think there is still some value in having basic ground rules on the big things (e.g. db, templating engine, etc.) and accepting that those benchmarks won't be perfect. A <a href="https://www.reddit.com/r/dotnet/comments/yuxkk7/comment/iwcaa5q/?utm_source=share&utm_medium=web2x&context=3">Redditor perfectly pointed out all the other things</a> that one has to take into account. Has the garbage collector been turned off for the benchmarks? Are they using a fully HTTP compliant router? Should frameworks use the same templating engine? It's complicated, which is why I personally think it was quite unfair of Microsoft to "smear" other frameworks as being slow based on cherry picked results from greatly inconsistent implementations. That was precisely the point of me writing this post.</p>
<h3 id="aspnet-core-platform-benchmark">ASP.NET Core Platform Benchmark</h3>
<p>The .NET Team pointed out on Twitter that the "Platform" benchmark represents the lowest level of the "Platform", where some parts are used by Kestrel and others are not. I don't have an issue with that personally, but it seems the .NET Team cannot really articulate what "Platform" means. The platform of what? It is not ASP.NET Core, so perhaps they mean the ".NET Platform", or maybe "Platform" is just a conveniently chosen name for a random collection of low level APIs bundled together into a benchmark application. The point is that it is not ASP.NET Core as far as I know, and therefore labelling the test as "ASP.NET Core Platform" so that "ASP.NET Core" shows up at the top of the benchmark table is slightly disingenuous.</p>
<h3 id="postgres-pipelining">Postgres Pipelining</h3>
<p>The <a href="https://twitter.com/justjs14/status/1592189097782960133?s=20&t=YLN5ldr4cC00jXoc4PhesA">developer behind Just(js)</a> pointed me towards <a href="https://github.com/TechEmpower/FrameworkBenchmarks/issues/7019">an issue</a> where "the ASP.NET team made a big collective (successful) effort to force the removal of postgres pipelining from the benchmarks for no particularly good reason". Honestly I don't know enough about it, but it made me look into <a href="https://github.com/TechEmpower/FrameworkBenchmarks/issues/7402">other GitHub issues</a> and there is certainly an interest at Microsoft to <a href="https://github.com/TechEmpower/FrameworkBenchmarks/issues/4769">keep other frameworks in check</a>.</p>
<p>There is also <a href="https://github.com/TechEmpower/FrameworkBenchmarks/issues/4727">another issue from 2019</a> where someone pointed out that <strong>concatenating hardcoded strings is cheating</strong> and after <a href="https://twitter.com/ben_a_adams">Ben Adams</a> tried to defend the ASP.NET Core benchmark it was finally <a href="https://github.com/TechEmpower/FrameworkBenchmarks/issues/4727#issuecomment-489388402">ruled that it is indeed against the rules</a>. The Rust framework which was part of that discussion <a href="https://github.com/TechEmpower/FrameworkBenchmarks/pull/4729">made the necessary changes</a> afterwards, but Ben and the .NET Team never adjusted theirs.</p>
https://dusted.codes/how-fast-is-really-aspnet-core
[email protected] (Dustin Moris Gorski)
Mon, 14 Nov 2022 00:00:00 +0000
Tags: aspnet-core, dotnet-core, csharp
The type system is a programmer's best friend
<p>I am tired of <a href="https://blog.ploeh.dk/2011/05/25/DesignSmellPrimitiveObsession/">primitive obsession</a> and the excessive use of primitive types to model a domain.</p>
<p>A <code>string</code> value is not a great type to convey a user's email address or their country of origin. These values deserve much richer and dedicated types. I want a data type called <code>EmailAddress</code> which cannot be null. I want a single point of entry to create a new object of that type. It should get validated and normalised before returning a new value. I want that data type to have helpful methods such as <code>.Domain()</code> or <code>.NonAliasValue()</code> which would return <code>gmail.com</code> and <code>jane@gmail.com</code> respectively for an input of <code>jane+newsletter@gmail.com</code>. Such useful functionality should be embedded into those types. It provides safety, helps to prevent bugs and it immensely increases maintainability.</p>
<p>Well designed types with useful functionality guide a programmer to do the right thing.</p>
<p>For instance an <code>EmailAddress</code> could have two methods to check for equality:</p>
<ul>
<li><code>Equals</code> would return <code>true</code> if two (normalised) email addresses are identical.</li>
<li><code>EqualsInPrinciple</code> would return <code>true</code> for inputs of <code>jane@gmail.com</code> and <code>jane+newsletter@gmail.com</code> also.</li>
</ul>
<p>These type specific methods would be extremely handy in a variety of scenarios. A user login should not fail if the user registered with <code>Jane.Doe@gmail.com</code> but then logs in with <code>jane.doe@gmail.com</code>. Equally it would be super convenient to match a user who contacted customer support from their non-aliased email address (<code>jane@gmail.com</code>) to their registered account (<code>jane+newsletter@gmail.com</code>). Those are typical requirements which a simple <code>string</code> couldn't fulfil without a lot of additional domain logic scattered around a codebase.</p>
<p><em><strong>Note</strong>: According to the <a href="https://www.rfc-editor.org/rfc/rfc5321#section-2.3.11">official RFC</a> the part of an email address before the @-sign could be case-sensitive, but all major email hosts treat them as case-insensitive and so it's not unreasonable for a domain type to take this knowledge into consideration.</em></p>
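<p>A minimal sketch of such an <code>EmailAddress</code> type could look like this. The method names follow the ones mentioned above; the validation and normalisation rules are illustrative assumptions:</p>

```csharp
using System;

// Sketch of the EmailAddress domain type described above. Normalisation
// (trim + lower-case) and validation rules are illustrative assumptions.
public sealed class EmailAddress : IEquatable<EmailAddress>
{
    public string Value { get; }

    private EmailAddress(string normalised) => Value = normalised;

    // Single point of entry: validate and normalise before constructing.
    public static EmailAddress Create(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
            throw new ArgumentException("Email address is required.", nameof(input));
        var value = input.Trim().ToLowerInvariant();
        var at = value.IndexOf('@');
        if (at <= 0 || at == value.Length - 1)
            throw new ArgumentException("Not a valid email address.", nameof(input));
        return new EmailAddress(value);
    }

    public string Domain() => Value.Substring(Value.IndexOf('@') + 1);

    // Strips a "+alias" suffix from the local part: name+tag@host -> name@host
    public string NonAliasValue()
    {
        var at = Value.IndexOf('@');
        var plus = Value.IndexOf('+');
        return plus > 0 && plus < at
            ? Value.Substring(0, plus) + Value.Substring(at)
            : Value;
    }

    public bool Equals(EmailAddress other) =>
        other is object && Value == other.Value;

    public bool EqualsInPrinciple(EmailAddress other) =>
        other is object && NonAliasValue() == other.NonAliasValue();

    public override bool Equals(object obj) => Equals(obj as EmailAddress);
    public override int GetHashCode() => Value.GetHashCode();
}
```

<p>With this in place the equality semantics live in exactly one place, instead of being re-implemented wherever two email strings happen to be compared.</p>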
<h2 id="good-types-can-prevent-bugs">Good types can prevent bugs</h2>
<p>Ideally I want to go even further. An email address can be verified or unverified. It's common practice to validate an email address by sending a unique code to a person's inbox. These "business" interactions can be expressed through the type system as well. For example, let's have a second type called <code>VerifiedEmailAddress</code>. If you wish it can even inherit from an <code>EmailAddress</code>. I don't care, but ensure that there is only one place in the code which can yield a new instance of <code>VerifiedEmailAddress</code>, namely the service which is responsible for validating a user's address. All of a sudden the rest of the application could rely on this new type to prevent bugs.</p>
<p>Any function which is sending emails can lean on the safety of a <code>VerifiedEmailAddress</code>. Imagine what it would look like if an email address was expressed via a simple <code>string</code>. One would have to find/load the associated user account first, check for some obscure flag like <code>HasVerifiedEmail</code> or <code>IsActive</code> (which is the worst flag by the way because it tends to grow in meaning over time) and then hope that this flag was actually correctly set and not mistakenly initialised as <code>true</code> in some default constructor. There is too much room for error to go unchecked! Using a primitive <code>string</code> for an object which could get so easily expressed through its own type is simply lazy and unimaginative programming.</p>
<h2 id="rich-types-protect-you-from-future-mistakes">Rich types protect you from future mistakes</h2>
<p>Another great example is money! I've lost count of how many applications express monetary values using the <code>decimal</code> type. Why? There are so many issues with that choice that I find it incomprehensible. Where is the currency? Every domain that deals with people's money should have a dedicated type called <code>Money</code>. At the very least it should include the currency and some operator overloads (or other safety features) to prevent silly mistakes like multiplying $100 by £20. Besides, not every currency has <a href="https://en.wikipedia.org/wiki/ISO_4217">only two digits after the decimal point</a>. Some currencies such as the Bahraini or Kuwaiti dinar have three. If you deal with investments or bank loans in Chile then you better make sure that you render the <a href="https://en.wikipedia.org/wiki/Unidad_de_Fomento">Unidad de Fomento</a> to four decimal places. These concerns are already important enough to warrant a dedicated <code>Money</code> type, and that's not all.</p>
<p>Unless you build everything in house you will eventually have to deal with third party systems too. For instance, most payment gateways request and respond with money as <code>integer</code> values. Integer values don't suffer from the rounding issues often associated with <code>float</code> or <code>double</code> types and are therefore preferred over floating-point numbers. The only caveat is that values have to be transmitted in minor units (e.g. Cent, Pence, Diram, Grosz, Kopeck, etc.), which means that if your program deals with <code>decimal</code> values you'll have to constantly convert them back and forth when talking to an external API. As explained before, not every currency uses two decimal places, so it's not going to be a simple multiplication/division by 100 every time. Things can get difficult very quickly, and matters could be significantly simplified if those business rules were encapsulated into a single concise type:</p>
<ul>
<li><code>var x = Money.FromMinorUnit(100, "GBP")</code>: £1</li>
<li><code>var y = Money.FromUnit(1.50, "GBP")</code>: £1.50</li>
<li><code>Console.WriteLine(y.AsUnit())</code>: 1.5</li>
<li><code>Console.WriteLine(y.AsMinorUnit())</code>: 150</li>
</ul>
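<p>A minimal, runnable sketch of those conversions (written in Go for illustration; the currency exponents follow ISO 4217, everything else, including the method names, is an assumption):</p>

```go
package main

import "fmt"

// Minor-unit exponents per ISO 4217 (a small illustrative subset):
// GBP and USD use two decimal places, the Bahraini and Kuwaiti dinar three.
var exponents = map[string]int64{"GBP": 2, "USD": 2, "BHD": 3, "KWD": 3}

type Money struct {
	minor    int64 // amount in minor units (pence, fils, ...)
	currency string
}

// pow10 returns 10^exp using integer arithmetic.
func pow10(exp int64) int64 {
	p := int64(1)
	for i := int64(0); i < exp; i++ {
		p *= 10
	}
	return p
}

func FromMinorUnit(v int64, ccy string) Money { return Money{v, ccy} }

// FromUnit uses float64 for brevity; a real implementation would parse a
// decimal string to avoid rounding surprises. Non-negative amounts assumed.
func FromUnit(units float64, ccy string) Money {
	return Money{int64(units*float64(pow10(exponents[ccy])) + 0.5), ccy}
}

func (m Money) AsMinorUnit() int64 { return m.minor }

func (m Money) AsUnit() float64 {
	return float64(m.minor) / float64(pow10(exponents[m.currency]))
}

func main() {
	fmt.Println(FromMinorUnit(150, "GBP").AsUnit())  // 1.5 (two decimal places)
	fmt.Println(FromUnit(1.5, "BHD").AsMinorUnit()) // 1500 (three decimal places)
}
```

The point is that the per-currency exponent lives in one place, so no caller ever multiplies or divides by a hard-coded 100 again.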
<p>As if this was not already complicated enough, countries also have different conventions for formatting money. In the UK "Ten Thousand Pounds and Fifty Pence" would be represented as <code>10,000.50</code>, but in Germany "Ten Thousand Euro and Fifty Cent" would be shown as <code>10.000,50</code>. Just imagine the amount of money- and currency-related code that would be fragmented (and possibly duplicated with minor inconsistencies) across a codebase if those business rules were not put into a single <code>Money</code> type.</p>
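<p>The separator logic alone is simple enough to sketch (again in Go; two decimal places and the explicit separator arguments are simplifying assumptions, a real formatter would look separators up per locale):</p>

```go
package main

import (
	"fmt"
	"strconv"
)

// formatMinor renders a non-negative amount given in minor units
// (two decimal places assumed) with the supplied grouping and
// decimal separators.
func formatMinor(minor int64, groupSep, decimalSep string) string {
	units := strconv.FormatInt(minor/100, 10)
	cents := fmt.Sprintf("%02d", minor%100)
	// insert the grouping separator every three digits from the right
	var out []byte
	for i, c := range []byte(units) {
		if i > 0 && (len(units)-i)%3 == 0 {
			out = append(out, groupSep...)
		}
		out = append(out, c)
	}
	return string(out) + decimalSep + cents
}

func main() {
	fmt.Println(formatMinor(1000050, ",", ".")) // UK style:     10,000.50
	fmt.Println(formatMinor(1000050, ".", ",")) // German style: 10.000,50
}
```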
<p>Additionally a dedicated <code>Money</code> type could include many more features which would make working with monetary values a breeze:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">var</span> gbp = Currency.Parse(<span style="color:#ffa08f">"GBP"</span>);
</span></span><span style="display:flex;"><span><span style="color:#d179a3">var</span> loc = Locale.Parse(<span style="color:#ffa08f">"Europe/London"</span>);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">var</span> money = Money.FromMinorUnit(<span style="color:#abfebc">1000050</span>, gbp);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>money.Format(loc) <span style="color:#8f8f8f">// ==> £10,000.50</span>
</span></span><span style="display:flex;"><span>money.FormatVerbose(loc) <span style="color:#8f8f8f">// ==> GBP 10,000.50</span>
</span></span><span style="display:flex;"><span>money.FormatShort(loc) <span style="color:#8f8f8f">// ==> £10k</span>
</span></span></code></pre><p>Sure, modelling such a <code>Money</code> type takes a little effort to begin with, but once it has been implemented and tested to satisfaction the rest of a codebase can rely on much greater safety and avoid the majority of bugs which would otherwise creep in over time. Even if a small feature such as the guarded initialisation of a <code>Money</code> object through either <code>Money.FromUnit(decimal v, Currency c)</code> or <code>Money.FromMinorUnit(int v, Currency c)</code> doesn't seem like much, it makes successive developers think every time whether the value they received from a user input or an external API is one or the other, and therefore prevents bugs from the start.</p>
<h2 id="smart-types-can-prevent-unwanted-side-effects">Smart types can prevent unwanted side effects</h2>
<p>The great thing about rich types is that you can shape them in whichever way you want. If I haven't sparked your own imagination yet then let me show you another great example of how a dedicated type can save your team from a huge operational overhead and even prevent security bugs.</p>
<p>Every codebase that I've ever worked with had something like a <code>string secretKey</code> or <code>string password</code> somewhere as a parameter of a function. Now what could possibly go wrong with these variables?</p>
<p>Imagine you have this (pseudo-)code:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">try</span>
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">var</span> userLogin = <span style="color:#d179a3">new</span> UserLogin
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> Username = username
</span></span><span style="display:flex;"><span> Password = password
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">var</span> success = _loginService.TryAuthenticate(userLogin);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> (success)
</span></span><span style="display:flex;"><span> RedirectToHomeScreen(userLogin);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> ReturnUnauthorized();
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span><span style="color:#d179a3">catch</span> (Exception ex)
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> Logger.LogError(ex, <span style="color:#ffa08f">"User login failed for {login}"</span>, userLogin);
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The problem here is that if an exception is thrown during the authentication process then this application would (accidentally) write the user's cleartext password into the logs. Of course this code should never exist like this in the first place, and you'd hope it would get caught during a code review before going to production, but the reality is that this stuff happens. Most such bugs creep in incrementally over time.</p>
<p>Initially the <code>UserLogin</code> class could have had a different set of properties and this piece of code would probably have been fine during the initial code review. Years later someone might have modified the <code>UserLogin</code> class to include the cleartext password. This function would then not even have shown up in the diff submitted for review and voilà, you've just introduced a security bug. I am sure every developer with a few years of experience has run into a similar issue at some point during their career.</p>
<p>However this bug could have been easily prevented with the introduction of a dedicated type.</p>
<p>In C# (using this as my example language) the <code>.ToString()</code> method gets automatically called when an object gets written to a log (or anywhere else for that matter). Having this knowledge one could design a <code>Password</code> type like this:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">public</span> <span style="color:#d179a3">readonly</span> <span style="color:#d179a3">record</span> <span style="color:#c2d975">struct</span> Password()
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f">// implementation goes here</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">public</span> <span style="color:#d179a3">override</span> <span style="color:#d179a3">string</span> ToString()
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#ffa08f">"****"</span>;
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">public</span> <span style="color:#d179a3">string</span> Cleartext()
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> _cleartext;
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>It's only a minor change, but all of a sudden it would become impossible to accidentally output a cleartext password anywhere in the system. Isn't that great?</p>
<p>Of course you might still need the cleartext value during the actual authentication process but that is being made accessible via a very clearly named method <code>Cleartext()</code> so there is no ambiguity about the sensitivity of this operation and it automatically guides a developer to use this method with intention and care.</p>
<p>Dealing with a user's PII (e.g. National Insurance number, Tax number, etc.) would be the same principle. Model that information using dedicated types. Override default functions such as <code>.ToString()</code> to your benefit and expose sensitive data via accordingly named functions. You'll never leak PII into logs and other places that later might require a huge operation to scrub it out again.</p>
<p>These small tricks can go a long way!</p>
<h2 id="make-it-a-habit">Make it a habit</h2>
<p>Every time you deal with data that has particular rules, behaviours or dangers associated with them think about how you could help yourself with the creation of an explicit type.</p>
<p>Continuing from my example of the <code>Password</code> type we can go even further once again!</p>
<p>Passwords get hashed before being stored in the database, right? Sure thing, but that hash is (of course) not just a simple <code>string</code>. At some point we will have to compare a previously stored hash with a newly computed hash during the login process. The problem is that not every developer is a security expert who knows that naively comparing two hashes could make your code vulnerable to timing attacks.</p>
<p>The recommended way of checking the equality of two password hashes is by doing it in a non-optimised way:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#8f8f8f">// Compares two byte arrays for equality. The method is specifically written so that the loop is not optimized.</span>
</span></span><span style="display:flex;"><span>[MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.NoOptimization)]
</span></span><span style="display:flex;"><span><span style="color:#d179a3">private</span> <span style="color:#d179a3">static</span> <span style="color:#d179a3">bool</span> ByteArraysEqual(<span style="color:#d179a3">byte</span>[] a, <span style="color:#d179a3">byte</span>[] b)
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> (a == <span style="color:#d179a3">null</span> && b == <span style="color:#d179a3">null</span>)
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">true</span>;
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> (a == <span style="color:#d179a3">null</span> || b == <span style="color:#d179a3">null</span> || a.Length != b.Length)
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> <span style="color:#d179a3">false</span>;
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">var</span> areSame = <span style="color:#d179a3">true</span>;
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">for</span> (<span style="color:#d179a3">var</span> i = <span style="color:#abfebc">0</span>; i < a.Length; i++)
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> areSame &= (a[i] == b[i]);
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> areSame;
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p><em><strong>Note:</strong> Code example taken from the <a href="https://github.com/aspnet/Identity/blob/rel/2.0.0/src/Microsoft.Extensions.Identity.Core/PasswordHasher.cs#L70">original ASP.NET Core repository</a>.</em></p>
<p>So it would only make sense to encode this particular functionality into a dedicated type:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">public</span> <span style="color:#d179a3">readonly</span> <span style="color:#d179a3">record</span> <span style="color:#c2d975">struct</span> PasswordHash
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f">// Implementation goes here</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#d179a3">public</span> <span style="color:#d179a3">bool</span> Equals(PasswordHash other)
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> ByteArraysEqual(<span style="color:#d179a3">this</span>.Bytes(), other.Bytes());
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>If a <code>PasswordHasher</code> only returns values of type <code>PasswordHash</code> then even developers who don't know much about this topic will be forced to use a safe form of checking for equality.</p>
<p>Be thoughtful in how you model your domain!</p>
<p>Of course, it almost goes without saying that, as with everything in programming, there is no clear right or wrong, and there is always more nuance to people's individual use cases than I could possibly convey in a single post. My general suggestion is simply to think about how you could make the type system your best friend.</p>
<p>Many modern programming languages come with very rich type systems nowadays, and I think that, broadly speaking, we are heavily underutilising them as a way of improving our code.</p>
https://dusted.codes/the-type-system-is-a-programmers-best-friend
[email protected] (Dustin Moris Gorski)https://dusted.codes/the-type-system-is-a-programmers-best-friend#disqus_threadTue, 01 Nov 2022 00:00:00 +0000https://dusted.codes/the-type-system-is-a-programmers-best-friendarchitecturesoftware-designdddUsing Go generics to pass struct slices for interface slices<p>Have you ever tried to pass a struct slice into a function which accepts a slice of interfaces? In Go this won't work.</p>
<p>Let's have a quick look at an example. Let's assume we have an interface called <code>Human</code> and a function called <code>GreetHumans</code> which accepts a slice of humans and prints their names:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">package</span> main
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">import</span> <span style="color:#ffa08f">"fmt"</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">type</span> Human <span style="color:#d179a3">interface</span> {
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">Name</span>() <span style="color:#d179a3">string</span>
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> <span style="color:#ecc77d">GreetHumans</span>(humans []Human) {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">for</span> _, h <span style="color:#d179a3">:=</span> <span style="color:#d179a3">range</span> humans {
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Println</span>(<span style="color:#ffa08f">"Hello "</span> <span style="color:#d179a3">+</span> h.<span style="color:#ecc77d">Name</span>())
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>Then we have a separate struct which implements the <code>Human</code> interface:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">type</span> Hero <span style="color:#d179a3">struct</span> {
</span></span><span style="display:flex;"><span> FirstName <span style="color:#d179a3">string</span>
</span></span><span style="display:flex;"><span> LastName <span style="color:#d179a3">string</span>
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (h Hero) <span style="color:#ecc77d">Name</span>() <span style="color:#d179a3">string</span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> h.FirstName <span style="color:#d179a3">+</span> <span style="color:#ffa08f">" "</span> <span style="color:#d179a3">+</span> h.LastName
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>Nothing unusual so far.</p>
<p>Now one can create an object of type <code>Hero</code> and pass it into any function that requires a <code>Human</code>. That is expected.</p>
<p>However, the issue occurs when one deals with a slice of <code>Hero</code> and a function accepts a slice of <code>Human</code>.</p>
<p>This code won't compile:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> <span style="color:#ecc77d">main</span>() {
</span></span><span style="display:flex;"><span> heroes <span style="color:#d179a3">:=</span> []Hero{
</span></span><span style="display:flex;"><span> {FirstName: <span style="color:#ffa08f">"Peter"</span>, LastName: <span style="color:#ffa08f">"Parker"</span>},
</span></span><span style="display:flex;"><span> {FirstName: <span style="color:#ffa08f">"Bruce"</span>, LastName: <span style="color:#ffa08f">"Wayne"</span>},
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#ecc77d">GreetHumans</span>(heroes) <span style="color:#8f8f8f">// <-- Compilation error here</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p><img src="https://cdn.dusted.codes/images/blog-posts/2022-09-04/go-incompatible-assignment-error.jpg" alt="Go incompatible assignment error"></p>
<p>Even though all heroes are humans the compiler won't accept this assignment.</p>
<p>You wonder why? Simply because Go doesn't want to hide expensive operations behind convenient syntax:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-09-04/go-type-cast-explanation.jpg" alt="Go type cast explanation"></p>
<p>One has to iterate through a slice of <code>Hero</code> themselves and convert each object of <code>Hero</code> explicitly to a <code>Human</code> in order to cast the entire slice before passing it into the <code>GreetHumans</code> function. The cost of this conversion becomes immediately visible to the programmer.</p>
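<p>Before generics, that explicit conversion would have looked something like this (reusing the <code>Human</code> and <code>Hero</code> definitions from above; the helper name is made up):</p>

```go
package main

import "fmt"

type Human interface{ Name() string }

type Hero struct {
	FirstName string
	LastName  string
}

func (h Hero) Name() string { return h.FirstName + " " + h.LastName }

func GreetHumans(humans []Human) {
	for _, h := range humans {
		fmt.Println("Hello " + h.Name())
	}
}

// heroesToHumans makes the O(n) cost explicit: a new slice is
// allocated and every element is copied over as a Human value.
func heroesToHumans(heroes []Hero) []Human {
	humans := make([]Human, 0, len(heroes))
	for _, h := range heroes {
		humans = append(humans, h)
	}
	return humans
}

func main() {
	heroes := []Hero{
		{FirstName: "Peter", LastName: "Parker"},
		{FirstName: "Bruce", LastName: "Wayne"},
	}
	GreetHumans(heroesToHumans(heroes)) // compiles now
}
```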
<p>What did Go programmers do up until recently?</p>
<p>Well there were mainly three options:</p>
<ol>
<li>Create a conversion function for each individual type which implements the <code>Human</code> interface</li>
<li>Create a "generic" conversion function using <code>interface{}</code></li>
<li>Create a "generic" conversion function using reflection</li>
</ol>
<p>Option 1 is extremely tedious, and options 2 and 3 provide very weak (or basically no) type safety, because checks are only performed at runtime.</p>
<p>Since Go 1.18 one can use <a href="https://go.dev/blog/intro-generics">Generics</a> to tackle the issue.</p>
<p>First we can modify the <code>GreetHumans</code> function to use Generics and therefore not require any casting at all:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> GreetHumans[T Human](humans []T) {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">for</span> _, h <span style="color:#d179a3">:=</span> <span style="color:#d179a3">range</span> humans {
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Println</span>(<span style="color:#ffa08f">"Hello "</span> <span style="color:#d179a3">+</span> h.<span style="color:#ecc77d">Name</span>())
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>This makes it possible to pass the <code>heroes</code> slice into the <code>GreetHumans</code> function now.</p>
<p>However, sometimes it's not possible to make a function generic. Methods (functions on a type) require all type parameters to be on the type. <a href="https://go.googlesource.com/proposal/+/refs/heads/master/design/43651-type-parameters.md#No-parameterized-methods">Parameterized methods are not allowed</a>.</p>
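<p>To illustrate that restriction, here is a small sketch (type and method names are made up): a method cannot carry its own type parameter, so the parameter has to move onto the type itself:</p>

```go
package main

import "fmt"

type Human interface{ Name() string }

type Hero struct{ First, Last string }

func (h Hero) Name() string { return h.First + " " + h.Last }

// A method cannot declare its own type parameters, so this does NOT compile:
//
//	func (g Greeter) Greet[T Human](humans []T) { ... }
//
// Instead, the type parameter has to live on the type:
type TypedGreeter[T Human] struct{ Greeting string }

func (g TypedGreeter[T]) Greetings(humans []T) []string {
	msgs := make([]string, 0, len(humans))
	for _, h := range humans {
		msgs = append(msgs, g.Greeting+" "+h.Name())
	}
	return msgs
}

func main() {
	g := TypedGreeter[Hero]{Greeting: "Hello"}
	fmt.Println(g.Greetings([]Hero{{First: "Peter", Last: "Parker"}})) // [Hello Peter Parker]
}
```

This works, but it fixes <code>T</code> for the whole <code>TypedGreeter</code> value, which is not always what you want; hence the conversion function below.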
<p>The good news is that even in this case Generics can help to provide a much better conversion function than the previously listed options:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> CastToHumans[T Human](humans []T) []Human {
</span></span><span style="display:flex;"><span> result <span style="color:#d179a3">:=</span> []Human{}
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">for</span> _, h <span style="color:#d179a3">:=</span> <span style="color:#d179a3">range</span> humans {
</span></span><span style="display:flex;"><span> result = <span style="color:#b4ddff">append</span>(result, h)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span> result
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>Here the generic <code>CastToHumans</code> function provides type safety at compile time. It remains an expensive operation, but at least it can no longer be used improperly.</p>
<p>I wasn't sure if this was going to work and I was positively surprised to find out that it does indeed.</p>
<p>It's another neat use case of Generics in Go!</p>
https://dusted.codes/using-go-generics-to-pass-struct-slices-for-interface-slices
[email protected] (Dustin Moris Gorski)https://dusted.codes/using-go-generics-to-pass-struct-slices-for-interface-slices#disqus_threadSun, 04 Sep 2022 00:00:00 +0000https://dusted.codes/using-go-generics-to-pass-struct-slices-for-interface-slicesgolangBuilding a secure note sharing service in Go<p>Welcome to my first <a href="https://go.dev">Go</a> related article which I am releasing on my blog.</p>
<p>In this blog post I’ll be using Go to develop a super small web service which can be used by people or organisations to share secrets in a slightly more private way. We all have the occasional need to share a secret with a co-worker or another person. It may be an API key, a password or even some confidential data from a customer. When we share secrets via channels such as Slack, Teams or Email then we essentially send the secret to the servers of a complete stranger. We have no oversight over how the data is being handled, how long it will persist on third party servers and who the people are who have access to it. Sending secrets directly via Slack or Teams can also have other unwanted side effects. For instance, new employees who get added to an existing channel could discover previously shared confidential data via the channel's chat history. That could be a breach of security in itself if those employees didn't have the necessary clearance beforehand. Overall, secrets and/or confidential data should never be shared directly via (untrusted) third party channels.</p>
<p>I thought writing a small data sharing app could be a good way of learning Go. The goal is to create a small web service which can be run as a single binary or from a Docker container inside a company's own infrastructure. Why rely on an (untrusted) third party service (<a href="https://www.noterip.com">noterip</a>, <a href="https://safenote.co/private-message">safenote</a>, <a href="https://onetimesecret.com">onetimesecret</a>, <a href="https://password.link">circumvent</a> or <a href="https://privnote.com">privnote</a>) if one could run their own?</p>
<h2 id="the-foundation">The Foundation</h2>
<p>This is going to be an MVP so we’ll be making some fast gains by keeping the service extremely simple and making use of Redis as the main persistence layer. Redis seems to be a good fit for an MVP as it can be easily hosted in a container and can be used as a distributed data store that can serve multiple instances of our app. We can also make use of the TTL (time to live) feature which gives us a quick and dirty implementation of short lived, self destructing links.</p>
<p>Our web service will be a simple Go executable which can also run in a container and which will implement basic functionality to persist and retrieve a secret.</p>
<p>The entire solution will be <a href="https://github.com/dustinmoris/self-destruct-notes">open source with an OSS friendly Apache 2.0 license</a> so that people can fork it and make their own modifications to it.</p>
<p>I call this project <code>self-destruct-notes</code>:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-07-26/self-destruct-in-golang-1.png" alt="self-destruct-notes GitHub repository"></p>
<p>For the purpose of this blog post I'll keep the service very rudimentary and use as few third party dependencies as possible. I'm actually coding this project as I'm writing this blog post, so one can follow the evolution of this app through this article or the associated <a href="https://github.com/dustinmoris/self-destruct-notes/commits/main">commit history in Git</a>.</p>
<h2 id="creating-a-new-go-project">Creating a new Go project</h2>
<p>First let’s create a simple Go project to kick things off.</p>
<p>I'll start with a basic <code>main.go</code> file which yields a typical "hello world" message and then run a couple of <code>go mod</code> commands to initialise the project:</p>
<h4 id="maingo">main.go:</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">package</span> main
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">import</span> <span style="color:#ffa08f">"fmt"</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> <span style="color:#ecc77d">main</span>() {
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Println</span>(<span style="color:#ffa08f">"hello world"</span>)
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><h4 id="terminal">Terminal:</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">go</span> mod init github.com<span style="color:#d179a3">/</span>dustinmoris<span style="color:#d179a3">/</span>self<span style="color:#d179a3">-</span>destruct<span style="color:#d179a3">-</span>notes
</span></span><span style="display:flex;"><span><span style="color:#d179a3">go</span> mod tidy
</span></span></code></pre><p>Running <code>go run .</code> now will return <code>hello world</code>.</p>
<p>For the people who are new to Go (my main readership comes from .NET), the <code>go mod</code> commands are Go’s way of managing third party packages. They generate a <code>go.mod</code> and <code>go.sum</code> file when the service takes on any external dependencies. For now the <code>go.mod</code> file remains mostly empty:</p>
<h4 id="gomod">go.mod:</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>module github.com<span style="color:#d179a3">/</span>dustinmoris<span style="color:#d179a3">/</span>self<span style="color:#d179a3">-</span>destruct<span style="color:#d179a3">-</span>notes
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">go</span> <span style="color:#abfebc">1.17</span>
</span></span></code></pre><h2 id="creating-a-simple-hello-world-web-server">Creating a simple hello world web server</h2>
<p>Next I’m going to change the <code>main</code> function from a <code>hello world</code> console application to a <code>hello world</code> web server. We’re going to read the <code>PORT</code> environment variable to establish the port on which our HTTP server should listen. If that variable is not set then we’ll default to port <code>3000</code>.</p>
<p>The “web server” itself will be a basic HTTP handler of the form <code>func (http.ResponseWriter, *http.Request)</code>:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">package</span> main
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">import</span> (
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"fmt"</span>
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"net/http"</span>
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"os"</span>
</span></span><span style="display:flex;"><span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> <span style="color:#ecc77d">main</span>() {
</span></span><span style="display:flex;"><span> port <span style="color:#d179a3">:=</span> os.<span style="color:#ecc77d">Getenv</span>(<span style="color:#ffa08f">"PORT"</span>)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> <span style="color:#b4ddff">len</span>(port) <span style="color:#d179a3">==</span> <span style="color:#abfebc">0</span> {
</span></span><span style="display:flex;"><span> port = <span style="color:#ffa08f">"3000"</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> addr <span style="color:#d179a3">:=</span> <span style="color:#ffa08f">":"</span> <span style="color:#d179a3">+</span> port
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Printf</span>(<span style="color:#ffa08f">"Starting web server, listening on %s\n"</span>, addr)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> err <span style="color:#d179a3">:=</span> http.<span style="color:#ecc77d">ListenAndServe</span>(addr, http.<span style="color:#ecc77d">HandlerFunc</span>(webServer))
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#b4ddff">panic</span>(err)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> <span style="color:#ecc77d">webServer</span>(w http.ResponseWriter, r <span style="color:#d179a3">*</span>http.Request) {
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(<span style="color:#abfebc">200</span>)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(<span style="color:#ffa08f">"hello world"</span>))
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>If I hit <code>go run .</code> now then our web server will start and I'll be able to <code>curl localhost:3000</code> to get a <code>hello world</code> response:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-07-26/self-destruct-in-golang-2.png" alt="cURLing Go web server"></p>
<h2 id="adding-basic-routing">Adding basic routing</h2>
<p>Now that we have a super tiny web server running we can add basic routing for the three main endpoints which I'd like to support:</p>
<ul>
<li><code>GET /</code> (start page with a form to create a new note)</li>
<li><code>POST /</code> (handles the form post to persist a new note)</li>
<li><code>GET /{noteID}</code> (returns a previously saved note)</li>
</ul>
<p>So far our web server is just a single function and we had to use the <code>http.HandlerFunc</code> wrapper in order to implement the <code>http.Handler</code> interface on the function. Another way of defining the server would have been to create a struct type which implements the <code>http.Handler</code> interface by exposing a <code>ServeHTTP</code> method. Personally I prefer this option because I know that we will require some dependencies for Redis and other functionality later on, and managing those dependencies will be easier that way.</p>
<p>Below I have refactored the <code>webServer</code> function into a <code>Server</code> struct and added some basic routing to it:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">type</span> Server <span style="color:#d179a3">struct</span>{}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">ServeHTTP</span>(
</span></span><span style="display:flex;"><span> w http.ResponseWriter,
</span></span><span style="display:flex;"><span> r <span style="color:#d179a3">*</span>http.Request,
</span></span><span style="display:flex;"><span>) {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> r.Method <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"GET"</span> <span style="color:#d179a3">||</span> r.Method <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"HEAD"</span> {
</span></span><span style="display:flex;"><span> noteID <span style="color:#d179a3">:=</span> strings.<span style="color:#ecc77d">TrimPrefix</span>(r.URL.Path, <span style="color:#ffa08f">"/"</span>)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusOK)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Sprintf</span>(
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"You requested the note with the ID '%s'."</span>,
</span></span><span style="display:flex;"><span> noteID)))
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> r.Method <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"POST"</span> <span style="color:#d179a3">&&</span> r.URL.Path <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"/"</span> {
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusOK)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(<span style="color:#ffa08f">"You posted to /."</span>))
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusNotFound)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(<span style="color:#ffa08f">"Not Found"</span>))
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>As you can see I didn’t use any third party packages to add more complex routing capabilities. Our server only needs basic functionality which can be easily satisfied by the standard library. Although this is slightly more verbose than in other languages it keeps the Go code very simple and easy to understand.</p>
<p>Additionally I have also changed the <code>http.ListenAndServe</code> function call to accept a new <code>Server</code> object:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>http.<span style="color:#ecc77d">ListenAndServe</span>(addr, <span style="color:#d179a3">&</span>Server{})
</span></span></code></pre><p>Next I'm splitting the web server code into smaller functions. I am creating a new handler for posting notes, another handler for getting notes and a helper function to return a <code>404 Not Found</code> response.</p>
<p>After this refactoring the <code>ServeHTTP</code> function serves as the main routing handler and nothing else:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">ServeHTTP</span>(w http.ResponseWriter, r <span style="color:#d179a3">*</span>http.Request) {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> r.Method <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"GET"</span> <span style="color:#d179a3">||</span> r.Method <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"HEAD"</span> {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">handleGET</span>(w, r)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> r.Method <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"POST"</span> <span style="color:#d179a3">&&</span> r.URL.Path <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"/"</span> {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">handlePOST</span>(w, r)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">notFound</span>(w, r)
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">notFound</span>(w http.ResponseWriter, r <span style="color:#d179a3">*</span>http.Request) {
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusNotFound)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(<span style="color:#ffa08f">"Not Found"</span>))
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">handlePOST</span>(w http.ResponseWriter, r <span style="color:#d179a3">*</span>http.Request) {
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusOK)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(<span style="color:#ffa08f">"You posted to /."</span>))
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">handleGET</span>(w http.ResponseWriter, r <span style="color:#d179a3">*</span>http.Request) {
</span></span><span style="display:flex;"><span> noteID <span style="color:#d179a3">:=</span> strings.<span style="color:#ecc77d">TrimPrefix</span>(r.URL.Path, <span style="color:#ffa08f">"/"</span>)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusOK)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(fmt.<span style="color:#ecc77d">Sprintf</span>(<span style="color:#ffa08f">"You requested the note with the ID '%s'."</span>, noteID)))
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>That's a very quick and neat way to keep the Go web server code well structured and yet super simple at the same time.</p>
<h2 id="adding-html-views">Adding HTML views</h2>
<p>The next step is to add some basic HTML to the project. We need an HTML template which we can return on the <code>/</code> index page when someone visits the service.</p>
<p>It's up to an individual to decide how they like to structure their code, but I generally place all web content which needs to get shipped alongside the Go executable into a <code>/dist</code> folder. With this in mind I've added two initial HTML templates to the project:</p>
<ul>
<li><code>/dist/layout.html</code></li>
<li><code>/dist/index.html</code></li>
</ul>
<p>The <code>layout.html</code> is the main layout page which can be re-used by other templates. Other programming languages might call this a "master page" (e.g. MVC). The <code>index.html</code> is the template which mainly includes the HTML form for creating a new note.</p>
<p>For simplicity I've kept them extremely small and added only a tiny bit of CSS to make the UI passable to the eye. If you're not familiar with Go's templating language then I'd recommend having a quick look at the <a href="https://github.com/dustinmoris/self-destruct-notes/tree/main/dist">templates in my Git repository</a>.</p>
<p>In order to respond with an HTML template on the index (<code>/</code>) route I added one helper function to our <code>Server</code> struct and covered the <code>/</code> route in the <code>handleGET</code> handler:</p>
<h4 id="helper-function">Helper function:</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">renderTemplate</span>(
</span></span><span style="display:flex;"><span> w http.ResponseWriter,
</span></span><span style="display:flex;"><span> r <span style="color:#d179a3">*</span>http.Request,
</span></span><span style="display:flex;"><span> data <span style="color:#d179a3">interface</span>{},
</span></span><span style="display:flex;"><span> name <span style="color:#d179a3">string</span>,
</span></span><span style="display:flex;"><span> files <span style="color:#d179a3">...</span><span style="color:#d179a3">string</span>,
</span></span><span style="display:flex;"><span>) {
</span></span><span style="display:flex;"><span> t <span style="color:#d179a3">:=</span> template.<span style="color:#ecc77d">Must</span>(template.<span style="color:#ecc77d">ParseFiles</span>(files<span style="color:#d179a3">...</span>))
</span></span><span style="display:flex;"><span> err <span style="color:#d179a3">:=</span> t.<span style="color:#ecc77d">ExecuteTemplate</span>(w, name, data)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#b4ddff">panic</span>(err)
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The <code>renderTemplate</code> function is just a small convenience method to parse all requested template files and initialise a <code>*template.Template</code> object and then write the result combined with the <code>data</code> model to the HTTP response body.</p>
<p>This method could be optimised by computing all known templates in advance but for brevity I'll keep it this way for now.</p>
<p>If you're new to Go and wonder what <code>files ...string</code> means in the function declaration: this is a variadic parameter. The function accepts any number of trailing string arguments, which are available inside the function body as a slice. It's similar to what <code>params string[]</code> does in C#.</p>
<p>Using the <code>renderTemplate</code> function from within the <code>handleGET</code> method looks like this now:</p>
<h4 id="handleget">handleGET:</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">handleGET</span>(
</span></span><span style="display:flex;"><span> w http.ResponseWriter,
</span></span><span style="display:flex;"><span> r <span style="color:#d179a3">*</span>http.Request,
</span></span><span style="display:flex;"><span>) {
</span></span><span style="display:flex;"><span> path <span style="color:#d179a3">:=</span> r.URL.Path
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> path <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"/"</span> {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">renderTemplate</span>(
</span></span><span style="display:flex;"><span> w, r, <span style="color:#d179a3">nil</span>,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"layout"</span>,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"dist/layout.html"</span>,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"dist/index.html"</span>)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> noteID <span style="color:#d179a3">:=</span> strings.<span style="color:#ecc77d">TrimPrefix</span>(path, <span style="color:#ffa08f">"/"</span>)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusOK)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(fmt.<span style="color:#ecc77d">Sprintf</span>(<span style="color:#ffa08f">"You requested the note with the ID '%s'."</span>, noteID)))
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>In the <code>handleGET</code> function I check for the root path of the application and call the newly created <code>renderTemplate</code> function to complete the request. The <a href="https://github.com/dustinmoris/self-destruct-notes/blob/main/dist/index.html">index.html</a> template doesn't require any model at this point and therefore I kept the <code>data</code> argument <code>nil</code>. The <code>name</code> argument is set to <code>layout</code> because this is the <a href="https://github.com/dustinmoris/self-destruct-notes/blob/main/dist/layout.html#L1">name I chose</a> for the <code>layout.html</code> template at the top of the file.</p>
<p>Once everything is put together one should see the following UI when visiting <code>http://localhost:3000</code>:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-07-26/self-destruct-in-golang-3.png" alt="Self Destruct Notes Form UI"></p>
<h2 id="adding-a-redis-dependency">Adding a Redis dependency</h2>
<p>Now that the service can display a simple HTML page to create a note we need to implement the logic to actually handle the HTTP POST request and save the note on the backend. As mentioned before I'll use Redis for the purpose of this MVP.</p>
<p>First I'll add the required dependencies to the <code>go.mod</code> file and also reference them in the <code>main.go</code> file as part of the <code>import</code> declaration:</p>
<h4 id="gomod-1">go.mod</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>module github.com<span style="color:#d179a3">/</span>dustinmoris<span style="color:#d179a3">/</span>self<span style="color:#d179a3">-</span>destruct<span style="color:#d179a3">-</span>notes
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">go</span> <span style="color:#abfebc">1.17</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#ecc77d">require</span> (
</span></span><span style="display:flex;"><span> github.com<span style="color:#d179a3">/</span><span style="color:#d179a3">go</span><span style="color:#d179a3">-</span>redis<span style="color:#d179a3">/</span>cache<span style="color:#d179a3">/</span>v8 v8<span style="color:#abfebc">.4.3</span>
</span></span><span style="display:flex;"><span> github.com<span style="color:#d179a3">/</span><span style="color:#d179a3">go</span><span style="color:#d179a3">-</span>redis<span style="color:#d179a3">/</span>redis<span style="color:#d179a3">/</span>v8 v8<span style="color:#abfebc">.11.4</span>
</span></span><span style="display:flex;"><span>)
</span></span></code></pre><h4 id="maingo-1">main.go</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">import</span> (
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"fmt"</span>
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"net/http"</span>
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"os"</span>
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"strings"</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"github.com/go-redis/cache/v8"</span>
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"github.com/go-redis/redis/v8"</span>
</span></span><span style="display:flex;"><span>)
</span></span></code></pre><p>In order to access the Redis cache from our <code>Server</code> struct we also need to add a reference in the struct declaration itself:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">type</span> Server <span style="color:#d179a3">struct</span> {
</span></span><span style="display:flex;"><span> RedisCache <span style="color:#d179a3">*</span>cache.Cache
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>Finally we can initialise a Redis cache object from within the application's <code>main</code> function and subsequently pass it into the <code>server</code> object before launching the service:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>redisURL <span style="color:#d179a3">:=</span> os.<span style="color:#ecc77d">Getenv</span>(<span style="color:#ffa08f">"REDIS_URL"</span>)
</span></span><span style="display:flex;"><span><span style="color:#d179a3">if</span> <span style="color:#b4ddff">len</span>(redisURL) <span style="color:#d179a3">==</span> <span style="color:#abfebc">0</span> {
</span></span><span style="display:flex;"><span> redisURL = <span style="color:#ffa08f">"redis://:@localhost:6379/1"</span>
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>redisOptions, err <span style="color:#d179a3">:=</span> redis.<span style="color:#ecc77d">ParseURL</span>(redisURL)
</span></span><span style="display:flex;"><span><span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> <span style="color:#b4ddff">panic</span>(err)
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>redisClient <span style="color:#d179a3">:=</span> redis.<span style="color:#ecc77d">NewClient</span>(redisOptions)
</span></span><span style="display:flex;"><span><span style="color:#d179a3">defer</span> redisClient.<span style="color:#ecc77d">Close</span>()
</span></span><span style="display:flex;"><span>redisCache <span style="color:#d179a3">:=</span> cache.<span style="color:#ecc77d">New</span>(<span style="color:#d179a3">&</span>cache.Options{
</span></span><span style="display:flex;"><span> Redis: redisClient,
</span></span><span style="display:flex;"><span>})
</span></span><span style="display:flex;"><span>server <span style="color:#d179a3">:=</span> <span style="color:#d179a3">&</span>Server{
</span></span><span style="display:flex;"><span> RedisCache: redisCache,
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The code is mostly self-explanatory but I'll do a quick run-through anyway. Similar to the port, we read the Redis URL (connection string) from an environment variable which I chose to call <code>REDIS_URL</code>. If none was provided then we assume a default Redis instance listening on the default port <code>6379</code>. Using the <code>redis.ParseURL</code> function we can convert the "magic" string variable into a strongly typed object which encapsulates all the Redis options. Using the <code>redisOptions</code> object we can then initialise a Redis client. Go has a neat way of deferring the closure of the connection to the end of the function using the <code>defer</code> keyword:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">defer</span> redisClient.<span style="color:#ecc77d">Close</span>()
</span></span></code></pre><p>For the <code>main</code> function that happens when the application shuts down. This is not quite the same, but very similar to how one would write code using C#'s <code>using</code> statement.</p>
<p>Finally a cache object is being initialised using the <code>redisClient</code> and then passed into our <code>Server</code> struct.</p>
<h2 id="saving-notes">Saving notes</h2>
<p>Now that the <code>Server</code> struct has everything it needs we can implement the actual logic to persist a message. For that purpose we'll extend the <code>handlePOST</code> method, the handler invoked when someone sends an HTTP POST to the root (<code>/</code>) of the application.</p>
<p>For good measure let's run some basic validation first:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">handlePOST</span>(
</span></span><span style="display:flex;"><span> w http.ResponseWriter,
</span></span><span style="display:flex;"><span> r <span style="color:#d179a3">*</span>http.Request,
</span></span><span style="display:flex;"><span>) {
</span></span><span style="display:flex;"><span> mediaType <span style="color:#d179a3">:=</span> r.Header.<span style="color:#ecc77d">Get</span>(<span style="color:#ffa08f">"Content-Type"</span>)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> mediaType <span style="color:#d179a3">!=</span> <span style="color:#ffa08f">"application/x-www-form-urlencoded"</span> {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">badRequest</span>(
</span></span><span style="display:flex;"><span> w, r,
</span></span><span style="display:flex;"><span> http.StatusUnsupportedMediaType,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"Invalid media type posted."</span>)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> err <span style="color:#d179a3">:=</span> r.<span style="color:#ecc77d">ParseForm</span>()
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">badRequest</span>(
</span></span><span style="display:flex;"><span> w, r,
</span></span><span style="display:flex;"><span> http.StatusBadRequest,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"Invalid form data posted."</span>)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> form <span style="color:#d179a3">:=</span> r.PostForm
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f">//...</span>
</span></span></code></pre><p>If the HTTP request didn't set the <code>Content-Type</code> header to <code>application/x-www-form-urlencoded</code> then we return a <code>415 Unsupported Media Type</code> response. If the form data cannot be successfully parsed by the server (because it didn't adhere to the URL-encoded format) then we return a <code>400 Bad Request</code>.</p>
<p>If everything was okay then we can access the posted form data via the <code>r.PostForm</code> property and assign it to a <code>form</code> variable.</p>
<p>Next I'll attempt to read the <code>message</code> and <code>ttl</code> (time to live) fields from the form and initialise matching variables to hold their values:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>message <span style="color:#d179a3">:=</span> form.<span style="color:#ecc77d">Get</span>(<span style="color:#ffa08f">"message"</span>)
</span></span><span style="display:flex;"><span>destruct <span style="color:#d179a3">:=</span> <span style="color:#d179a3">false</span>
</span></span><span style="display:flex;"><span>ttl <span style="color:#d179a3">:=</span> time.Hour <span style="color:#d179a3">*</span> <span style="color:#abfebc">24</span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">if</span> form.<span style="color:#ecc77d">Get</span>(<span style="color:#ffa08f">"ttl"</span>) <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"untilRead"</span> {
</span></span><span style="display:flex;"><span> destruct = <span style="color:#d179a3">true</span>
</span></span><span style="display:flex;"><span> ttl = ttl <span style="color:#d179a3">*</span> <span style="color:#abfebc">365</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>A note will either persist for 24 hours or until read (but no longer than a maximum of 365 days). The <code>destruct</code> variable indicates whether a note should get destroyed immediately after it was opened or if it should remain until the end of the 24 hour period.</p>
<p>Using the initialised data we can create a <code>note</code> object of type <code>Note</code>:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">type</span> Note <span style="color:#d179a3">struct</span> {
</span></span><span style="display:flex;"><span> Data []<span style="color:#d179a3">byte</span>
</span></span><span style="display:flex;"><span> Destruct <span style="color:#d179a3">bool</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>... then inside <code>handlePOST</code> prepare a <code>note</code> which will be subsequently stored in a db:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>note <span style="color:#d179a3">:=</span> <span style="color:#d179a3">&</span>Note{
</span></span><span style="display:flex;"><span> Data: []<span style="color:#b4ddff">byte</span>(message),
</span></span><span style="display:flex;"><span> Destruct: destruct,
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The last step before completing the request is to actually write the note to Redis:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>key <span style="color:#d179a3">:=</span> uuid.<span style="color:#ecc77d">NewString</span>()
</span></span><span style="display:flex;"><span>err = s.RedisCache.<span style="color:#ecc77d">Set</span>(
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">&</span>cache.Item{
</span></span><span style="display:flex;"><span> Ctx: r.<span style="color:#ecc77d">Context</span>(),
</span></span><span style="display:flex;"><span> Key: key,
</span></span><span style="display:flex;"><span> Value: note,
</span></span><span style="display:flex;"><span> TTL: ttl,
</span></span><span style="display:flex;"><span> SkipLocalCache: <span style="color:#d179a3">true</span>,
</span></span><span style="display:flex;"><span> })
</span></span><span style="display:flex;"><span><span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Println</span>(err)
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">serverError</span>(w, r)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>The <code>RedisCache</code> object which our <code>Server</code> holds makes storing the note super easy.</p>
<p>By the way, <code>s.badRequest(...)</code> and <code>s.serverError(...)</code> are two further small convenience methods which I've added to the <code>Server</code> struct to make error responses slightly easier to write:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">badRequest</span>(
</span></span><span style="display:flex;"><span> w http.ResponseWriter,
</span></span><span style="display:flex;"><span> r <span style="color:#d179a3">*</span>http.Request,
</span></span><span style="display:flex;"><span> statusCode <span style="color:#d179a3">int</span>,
</span></span><span style="display:flex;"><span> message <span style="color:#d179a3">string</span>,
</span></span><span style="display:flex;"><span>) {
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(statusCode)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(message))
</span></span><span style="display:flex;"><span>}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">serverError</span>(
</span></span><span style="display:flex;"><span> w http.ResponseWriter,
</span></span><span style="display:flex;"><span> r <span style="color:#d179a3">*</span>http.Request,
</span></span><span style="display:flex;"><span>) {
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusInternalServerError)
</span></span><span style="display:flex;"><span>	w.<span style="color:#ecc77d">Write</span>([]<span style="color:#b4ddff">byte</span>(<span style="color:#ffa08f">"Oops, something went wrong. Please check the server logs."</span>))
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>Anyhow, coming back to <code>handlePOST</code> the last remaining job is to print a friendly message with a link to the newly created note at the end of a successful POST:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>noteURL <span style="color:#d179a3">:=</span> fmt.<span style="color:#ecc77d">Sprintf</span>(<span style="color:#ffa08f">"%s/%s"</span>, s.BaseURL, key)
</span></span><span style="display:flex;"><span>w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusOK)
</span></span><span style="display:flex;"><span>s.<span style="color:#ecc77d">renderMessage</span>(
</span></span><span style="display:flex;"><span> w, r,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"Note was successfully created"</span>,
</span></span><span style="display:flex;"><span> template.<span style="color:#ecc77d">HTML</span>(
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Sprintf</span>(<span style="color:#ffa08f">"<a href='%s'>%s</a>"</span>, noteURL, noteURL)))
</span></span></code></pre><p>There are a couple of new things in this code. First, there is the <code>BaseURL</code> field on the <code>Server</code> struct. This is something that I added when instantiating the server object in the <code>main</code> function. It makes the <code>Server</code> struct aware of the public URL which should be displayed to the user after a successful POST (after all, we won't show them a localhost URL in production).</p>
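<p>One detail worth guarding against: if <code>BaseURL</code> were ever configured with a trailing slash, a naive <code>Sprintf</code> join would produce a double slash in the generated link. A defensive join might look like the sketch below (the <code>noteLink</code> helper is my own illustration, not code from the project):</p>

```go
package main

import (
	"fmt"
	"strings"
)

// noteLink joins the public base URL and the note key. Trimming a trailing
// slash guards against a double slash when the base URL is configured as
// "https://example.com/". (Hypothetical helper for illustration only.)
func noteLink(baseURL, key string) string {
	return fmt.Sprintf("%s/%s", strings.TrimSuffix(baseURL, "/"), key)
}

func main() {
	fmt.Println(noteLink("https://example.com/", "abc-123")) // https://example.com/abc-123
}
```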
<p>The other thing is the <code>renderMessage</code> method, another helper which renders a new HTML page called <code>message.html</code> with an anonymous model:</p>
<h4 id="rendermessage">renderMessage</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">renderMessage</span>(
</span></span><span style="display:flex;"><span> w http.ResponseWriter,
</span></span><span style="display:flex;"><span> r <span style="color:#d179a3">*</span>http.Request,
</span></span><span style="display:flex;"><span> title <span style="color:#d179a3">string</span>,
</span></span><span style="display:flex;"><span> paragraphs <span style="color:#d179a3">...</span><span style="color:#d179a3">interface</span>{},
</span></span><span style="display:flex;"><span>) {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">renderTemplate</span>(
</span></span><span style="display:flex;"><span> w, r,
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">struct</span> {
</span></span><span style="display:flex;"><span> Title <span style="color:#d179a3">string</span>
</span></span><span style="display:flex;"><span> Paragraphs []<span style="color:#d179a3">interface</span>{}
</span></span><span style="display:flex;"><span> }{
</span></span><span style="display:flex;"><span> Title: title,
</span></span><span style="display:flex;"><span> Paragraphs: paragraphs,
</span></span><span style="display:flex;"><span> },
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"layout"</span>,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"dist/layout.html"</span>,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"dist/message.html"</span>,
</span></span><span style="display:flex;"><span> )
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><h4 id="messagehtml">message.html</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#8f8f8f">{{</span> <span style="color:#dedede">define</span> <span style="color:#ffa08f">"header"</span> <span style="color:#8f8f8f">}}</span>
</span></span><span style="display:flex;"><span><style>
</span></span><span style="display:flex;"><span> h3 <span style="color:#dedede">{</span>
</span></span><span style="display:flex;"><span> margin: 2rem 0 1rem 0;
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span></style>
</span></span><span style="display:flex;"><span><span style="color:#8f8f8f">{{</span> <span style="color:#dedede">end</span> <span style="color:#8f8f8f">}}</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#8f8f8f">{{</span> <span style="color:#dedede">define</span> <span style="color:#ffa08f">"content"</span> <span style="color:#8f8f8f">}}</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><h3><span style="color:#8f8f8f">{{</span> <span style="color:#dedede">.Title</span> <span style="color:#8f8f8f">}}</span></h3>
</span></span><span style="display:flex;"><span><span style="color:#8f8f8f">{{</span> <span style="color:#dedede">range</span> <span style="color:#dedede">$</span><span style="color:#dedede">i</span><span style="color:#dedede">,</span> <span style="color:#dedede">$</span><span style="color:#dedede">p</span> <span style="color:#dedede">:=</span> <span style="color:#dedede">.Paragraphs</span> <span style="color:#8f8f8f">}}</span>
</span></span><span style="display:flex;"><span><p><span style="color:#8f8f8f">{{</span> <span style="color:#dedede">$</span><span style="color:#dedede">p</span> <span style="color:#8f8f8f">}}</span></p>
</span></span><span style="display:flex;"><span><span style="color:#8f8f8f">{{</span> <span style="color:#dedede">end</span> <span style="color:#8f8f8f">}}</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><p><a href="/">Back to home</a></p>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#8f8f8f">{{</span> <span style="color:#dedede">end</span> <span style="color:#8f8f8f">}}</span>
</span></span></code></pre><p>Thanks to Docker we can easily spin up a new instance of Redis and try out the app so far:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>docker run -p 6379:6379 redis:latest
</span></span></code></pre><p>If everything went according to plan then creating a note should return a response like this now:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2022-07-26/self-destruct-in-golang-4.png" alt="Self Destruct Notes Success Message"></p>
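<p>As an aside, the anonymous-model pattern which <code>renderMessage</code> relies on can be tried out in isolation. The following self-contained sketch uses an inlined template instead of the real <code>layout.html</code>/<code>message.html</code> files, and shows how <code>html/template</code> escapes ordinary strings while letting <code>template.HTML</code> values through verbatim:</p>

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// renderMessagePage mirrors the renderMessage pattern: an anonymous struct
// model rendered by a template that ranges over a slice of paragraphs.
// (Sketch only; the real code parses templates from files on disk.)
func renderMessagePage(title string, paragraphs ...interface{}) (string, error) {
	tmpl := template.Must(template.New("message").Parse(
		`<h3>{{ .Title }}</h3>{{ range $i, $p := .Paragraphs }}<p>{{ $p }}</p>{{ end }}`))
	var buf bytes.Buffer
	err := tmpl.Execute(&buf, struct {
		Title      string
		Paragraphs []interface{}
	}{title, paragraphs})
	return buf.String(), err
}

func main() {
	// template.HTML marks the anchor tag as trusted, so it is not escaped.
	out, _ := renderMessagePage(
		"Note was successfully created",
		template.HTML("<a href='https://example.com/abc'>https://example.com/abc</a>"))
	fmt.Println(out)
}
```

A plain string passed as a paragraph would be HTML-escaped instead, which is exactly why the POST handler wraps the link in <code>template.HTML</code>.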
<h2 id="retrieving-notes">Retrieving notes</h2>
<p>At last we need to implement the logic to retrieve a previously saved note. This can be done inside the <code>handleGET</code> method. I would like to point out that <code>handlePOST</code> and <code>handleGET</code> are obviously not brilliant function names for a larger web application, but for this small app, where our service only ever has three endpoints to deal with, those names are appropriate enough. It keeps a nice balance between structure and simplicity for an app that won't need more than 250 lines of code when finished.</p>
<p>Reading a note from Redis is very easy using its key. In order to get the key we assume that everything that follows the forward slash in the URL will be part of the note ID:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>noteID <span style="color:#d179a3">:=</span> strings.<span style="color:#ecc77d">TrimPrefix</span>(path, <span style="color:#ffa08f">"/"</span>)
</span></span></code></pre><p>Then we can try to find the note inside Redis using its ID:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>ctx <span style="color:#d179a3">:=</span> r.<span style="color:#ecc77d">Context</span>()
</span></span><span style="display:flex;"><span>note <span style="color:#d179a3">:=</span> <span style="color:#d179a3">&</span>Note{}
</span></span><span style="display:flex;"><span>err <span style="color:#d179a3">:=</span> s.RedisCache.<span style="color:#ecc77d">GetSkippingLocalCache</span>(
</span></span><span style="display:flex;"><span> ctx,
</span></span><span style="display:flex;"><span> noteID,
</span></span><span style="display:flex;"><span> note)
</span></span><span style="display:flex;"><span><span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">badRequest</span>(
</span></span><span style="display:flex;"><span> w, r,
</span></span><span style="display:flex;"><span> http.StatusNotFound,
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Sprintf</span>(<span style="color:#ffa08f">"Note with ID %s does not exist."</span>, noteID))
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>If the user desired the note to be destroyed after it was opened then we should honour their request as well:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">if</span> note.Destruct {
</span></span><span style="display:flex;"><span> err <span style="color:#d179a3">:=</span> s.RedisCache.<span style="color:#ecc77d">Delete</span>(ctx, noteID)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Println</span>(err)
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">serverError</span>(w, r)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>If everything was fine so far then we finish the request by outputting the contents of the note itself:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusOK)
</span></span><span style="display:flex;"><span>w.<span style="color:#ecc77d">Write</span>(note.Data)
</span></span></code></pre><p>Voila, this completes the <code>handleGET</code> method and the application itself:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">func</span> (s <span style="color:#d179a3">*</span>Server) <span style="color:#ecc77d">handleGET</span>(
</span></span><span style="display:flex;"><span> w http.ResponseWriter,
</span></span><span style="display:flex;"><span> r <span style="color:#d179a3">*</span>http.Request,
</span></span><span style="display:flex;"><span>) {
</span></span><span style="display:flex;"><span> path <span style="color:#d179a3">:=</span> r.URL.Path
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> path <span style="color:#d179a3">==</span> <span style="color:#ffa08f">"/"</span> {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">renderTemplate</span>(
</span></span><span style="display:flex;"><span> w, r, <span style="color:#d179a3">nil</span>,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"layout"</span>,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"dist/layout.html"</span>,
</span></span><span style="display:flex;"><span> <span style="color:#ffa08f">"dist/index.html"</span>)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> noteID <span style="color:#d179a3">:=</span> strings.<span style="color:#ecc77d">TrimPrefix</span>(path, <span style="color:#ffa08f">"/"</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> ctx <span style="color:#d179a3">:=</span> r.<span style="color:#ecc77d">Context</span>()
</span></span><span style="display:flex;"><span> note <span style="color:#d179a3">:=</span> <span style="color:#d179a3">&</span>Note{}
</span></span><span style="display:flex;"><span> err <span style="color:#d179a3">:=</span> s.RedisCache.<span style="color:#ecc77d">GetSkippingLocalCache</span>(
</span></span><span style="display:flex;"><span> ctx,
</span></span><span style="display:flex;"><span> noteID,
</span></span><span style="display:flex;"><span> note)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">badRequest</span>(
</span></span><span style="display:flex;"><span> w, r,
</span></span><span style="display:flex;"><span> http.StatusNotFound,
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Sprintf</span>(<span style="color:#ffa08f">"Note with ID %s does not exist."</span>, noteID))
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> note.Destruct {
</span></span><span style="display:flex;"><span> err <span style="color:#d179a3">:=</span> s.RedisCache.<span style="color:#ecc77d">Delete</span>(ctx, noteID)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> err <span style="color:#d179a3">!=</span> <span style="color:#d179a3">nil</span> {
</span></span><span style="display:flex;"><span> fmt.<span style="color:#ecc77d">Println</span>(err)
</span></span><span style="display:flex;"><span> s.<span style="color:#ecc77d">serverError</span>(w, r)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">WriteHeader</span>(http.StatusOK)
</span></span><span style="display:flex;"><span> w.<span style="color:#ecc77d">Write</span>(note.Data)
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><h2 id="links">Links</h2>
<p>The full <a href="https://github.com/dustinmoris/self-destruct-notes">source code is available on GitHub</a> and licensed under a permissive Apache-2.0 license.</p>
<p>I have also added a <a href="https://github.com/dustinmoris/self-destruct-notes/blob/main/.github/workflows/build.yml">GitHub Action to build and publish</a> a <a href="https://hub.docker.com/repository/docker/dustinmoris/self-destruct-notes">public Docker image on Docker Hub</a>.</p>
<h2 id="next-steps">Next steps</h2>
<p>This web service was just a little toy project to dip my toes into Go. I am not planning on making it much better beyond this point, although if someone sent me a PR I would probably accept it. If I did want to improve it I'd probably swap Redis for a different database and also encrypt the note before persisting it, for additional security against database attacks.</p>
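<p>For the encryption idea, here is a rough sketch of what sealing a note with AES-256-GCM from Go's standard library could look like. This is purely illustrative and not part of the current service; key management (where the 32-byte key would come from) is deliberately left out:</p>

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"errors"
	"fmt"
	"io"
)

// encrypt seals plaintext with AES-256-GCM and prepends the random nonce
// to the returned ciphertext. (Sketch only; not part of the service.)
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Seal appends the encrypted data (plus auth tag) to the nonce.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt splits off the nonce and opens the sealed note.
func decrypt(key, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(ciphertext) < gcm.NonceSize() {
		return nil, errors.New("ciphertext too short")
	}
	nonce, sealed := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
	return gcm.Open(nil, nonce, sealed, nil)
}

func main() {
	key := make([]byte, 32) // demo only; a real key must be secret and random
	ct, _ := encrypt(key, []byte("secret note"))
	pt, _ := decrypt(key, ct)
	fmt.Println(string(pt)) // secret note
}
```

With something like this in place, the <code>Note.Data</code> field stored in Redis would only ever contain ciphertext.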
<h2 id="how-to-run">How to run</h2>
<p>The easiest way to start the public Docker image is by running this docker-compose:</p>
<h4 id="docker-composeyml">docker-compose.yml</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>version: <span style="color:#ffa08f">"3.9"</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>services:
</span></span><span style="display:flex;"><span> db:
</span></span><span style="display:flex;"><span> image: redis:latest
</span></span><span style="display:flex;"><span> ports:
</span></span><span style="display:flex;"><span> - <span style="color:#ffa08f">"6379:6379"</span>
</span></span><span style="display:flex;"><span> web:
</span></span><span style="display:flex;"><span> image: dustinmoris/self-destruct-notes:1.0.0
</span></span><span style="display:flex;"><span> environment:
</span></span><span style="display:flex;"><span> - PORT=3000
</span></span><span style="display:flex;"><span> - REDIS_URL=redis://:@db:6379/1
</span></span><span style="display:flex;"><span> ports:
</span></span><span style="display:flex;"><span> - <span style="color:#ffa08f">"3000:3000"</span>
</span></span><span style="display:flex;"><span> depends_on:
</span></span><span style="display:flex;"><span> - db
</span></span></code></pre><h4 id="terminal-1">Terminal</h4>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>docker compose up
</span></span></code></pre><h2 id="closing-words">Closing words</h2>
<p>I love how quickly I was able to build a web service using Go. The standard library, and the <code>net/http</code> package in particular, is super powerful and includes all the functionality one would expect in order to build rich web applications with ease. I also admire that there was no unnecessary boilerplate to begin with. The web project feels very lightweight and close to the metal. There are no layers of bloatware, no hard requirement or pull towards a certain paradigm and no unnecessary additional packages for simple use cases like mine. At the moment the entire web application is essentially a single <code>main.go</code> file and less than a handful of HTML pages. However, that doesn't mean that the project is unstructured or "dirty" in any way. It's production-ready code as far as I am concerned. It was very easy to add structure as the application grew, keeping things easy to understand and easy to maintain. Every single line of code has a purpose and adds functionality which makes sense.</p>
<p>If the application continued to grow then I could apply the same principles as I already did before and further break down the <code>handleGET</code> and <code>handlePOST</code> methods into smaller parts. I would continue to keep the main routing logic inside the <code>ServeHTTP</code> method of the <code>Server</code> struct, which is the main entry point of the (web) app.</p>
<p>I really appreciate the simplicity of Go and how it allowed me to add <strong>just enough complexity</strong> as I built out the app. For me Go enables "just-in-time complexity" coding, which feels very refreshing coming from .NET. I can start a project without any complexity at all and only add more as and when I need it.</p>
https://dusted.codes/building-a-secure-note-sharing-service-in-go
[email protected] (Dustin Moris Gorski)https://dusted.codes/building-a-secure-note-sharing-service-in-go#disqus_threadTue, 26 Jul 2022 00:00:00 +0000https://dusted.codes/building-a-secure-note-sharing-service-in-gogolangsecurityFund OSS through package managers<p>Open source software has become an integral part of our lives. Personally I don’t know a single app, website or other digital product which doesn’t make use of at least one open source project in their stack. We all know the value and importance of open source software and yet <a href="https://github.com/SixLabors/ImageSharp/discussions/2151">many open source maintainers</a> struggle with its sustainability. A big part of that struggle is the inability to cover the cost for the time and effort put into developing and maintaining a successful OSS project over a long period of time. Of course, very few if any software developers start an OSS project with the intent to make an income from it, not least because it would be a very bad business idea, but if a project grows in users and (user) demands over a long period of time then it inevitably turns from an occasional I-do-what-I-want hobby into something more akin to a real world job.</p>
<p>Unsurprisingly nobody likes to put in hard work for free. Unfortunately most OSS developers never manage to make a dime from their projects.</p>
<p>There may be many reasons why OSS maintainers find it difficult to get compensated for their hard work, but fundamentally I believe that the issue lies in the fact that there's no easy way to pay for OSS. Of course we have a <a href="https://ko-fi.com">few fringe projects</a> which <a href="https://www.buymeacoffee.com">try to fill the void</a>, but <a href="https://en.tipeee.com">none of them</a> are integrated into the daily tools which developers rely on every day. It requires some goodwill and someone going out of their way to make a financial contribution towards OSS. Ideally it should be the other way around.</p>
<h2 id="donations-dont-work">Donations don’t work</h2>
<p>Personally I don't think that the model of donations works that well. Initiatives like <a href="https://www.patreon.com">Patreon</a>, <a href="https://opencollective.com">OpenCollective</a> or <a href="https://github.com/sponsors">GitHub Sponsors</a> sound very noble, and for a select few they might even generate enough income to compensate their work, but for the vast majority of OSS developers they just don't help at all.</p>
<p>The problem with donations is that it promotes the wrong idea. It suggests that OSS is free by nature and only if a user feels charitable enough they might contribute a dollar or two to show some support. Fundamentally there is no set expectation to donate towards OSS and as a result most consumers won't pay anything at all.</p>
<p>It is important to understand that users are not to blame. They simply follow a social contract which has been understood by people since the beginning of time.</p>
<h3 id="digital-street-artists">Digital street artists</h3>
<p>Free open source projects are the digital equivalent of real world street artists.</p>
<p>We’ve all been to a town square with (free) street performances before. Only a small fraction of the audience will throw in a coin at the end of the show. Nobody thinks badly of you if you don't. After all, nobody asked the street performers to put on a show. In most cases street artists will even begin their performance without an audience to start with. Only if the street artist performs well enough will people slowly start to congregate around them. It is not that different from how most open source projects operate today. The social contract is the same. It is the street artists who are grateful for the (voluntary) donations from their audience and not the other way around.</p>
<h3 id="as-a-charity-you-need-to-beg">As a charity you need to beg</h3>
<p>I often hear the argument that the entire software industry relies on OSS projects and therefore everyone should pay. I agree with the sentiment (to some extent) but then donations are the wrong model to achieve this goal.</p>
<p>Let’s look at another example from a different field. Five out of ten adults (in the developed world) will develop cancer at some point in their life. Yet fewer than one out of ten people will ever donate a single penny towards cancer research during their lifetime. Nine out of ten people will be affected by cancer at least indirectly. Despite cancer being a major issue in the developed world, only very few people choose to voluntarily contribute towards the cause.</p>
<p>On the other hand people contribute towards a million other medical treatments and research projects by paying national insurance and income tax. The difference? It's done automatically via existing systems which we have in place.</p>
<p>What about Wikipedia? How many people use Wikipedia every day versus how many people choose to donate at least once a year? My point is that a user's willingness to pay <em>voluntarily</em> for a product or service is rarely related to its usefulness. Frequently people simply choose the path of least resistance. In the case of a free open source project that means no financial support at all.</p>
<p>Neither Cancer Research UK nor Wikipedia would exist if they didn't spend a lot of time and energy every year actively begging people for their support. This is not a model which we can reasonably expect from an average OSS developer.</p>
<p>Donations simply don't work.</p>
<h3 id="toxic-expectations">Toxic expectations</h3>
<p>Sometimes donations can be toxic too. As mentioned before, a donation is seen as a charitable act by the user towards the OSS maintainer. Open source developers often feel very grateful for their users' support and don't want to let them down. That feeling of gratitude and self-imposed expectations can put maintainers under immense pressure too. It's hard to prioritise your own personal needs if someone who voluntarily pays for your coffee asks for your time. Even if one's supporters are the most understanding, kind and friendly people, it can still be extremely mentally taxing if one thinks that they're letting their supporters down. It takes a mentally strong individual to not fall into this trap. I wouldn't be surprised if it is often the mental strain rather than the physical work which drives OSS maintainers into burnout.</p>
<p>Of course the situation isn't helped by the fact that some donors actually believe that their financial support entitles them to special treatment in return. The highest paying customer often demands the best table in the house. If someone tips their taxi driver well they probably expect help with their luggage also. I am not saying that this is right or wrong, but simply stating the fact that it's common enough for many people to believe that their donations buy them a favour too.</p>
<p>Whether it is intentional or not, the model of donations (often camouflaged as "sponsorships") can put open source maintainers on the back foot and can sometimes even feel more damaging than helpful under certain circumstances. Those concerns should not be taken lightly, and I wholeheartedly believe that donations (or sponsorships) are a completely inappropriate way to sustain open source.</p>
<h2 id="convenience-drives-behaviour">Convenience drives behaviour</h2>
<p>It turns out that the vast majority of cinema goers don't mind paying for films at all. There was a time when people were not so sure. In the late 90s and early 2000s ripping movies off the internet was so widespread that one might have thought the opposite was true. However, the problem was never people's willingness to pay for films. Initially it just happened to be so much easier to download a movie illegally off the internet than to buy a legitimate DVD. Today the tables have turned. Finding and ripping movies off the internet takes so much more effort than buying them on the Google or Apple store. Amazon Prime, Netflix, Disney+, Google Play and Apple TV have made film watching so incredibly convenient that for 99.9% of consumers any other alternative is just not worth their time.</p>
<p>We are a very busy society and convenience drives how we behave.</p>
<p>I believe that we should take this valuable lesson to the open source ecosystem and apply it in a similar way!</p>
<h2 id="package-managers-reimagined">Package managers reimagined</h2>
<p>Package managers are the central point of consuming third party OSS packages. For most programming languages they are the most straightforward and convenient way to publish, install or update an open source library. Package managers know everything about both sides of the transaction. They know who publishes a package and who consumes it. They know how many times a package gets installed in total or per party. They are already an established platform that developers could hardly live without today. They have fantastic, simple-to-use CLI tooling which could easily be extended to help with the sustainability of open source.</p>
<h3 id="introduce-sign-in-and-pricing-models">Introduce sign-in and pricing models</h3>
<p>Firstly, package managers such as npm or NuGet should introduce the ability for users to sign in with an account. It should be an optional feature so that anonymous access remains the default out-of-the-box experience. However, it would enable package authors to decide whether their package may be consumed by anonymous users or by authenticated users only. In the simplest form nothing would change. A package author could simply specify that their OSS library can be consumed by anyone without any limitations at all, just like today. In addition, however, they could start introducing pricing models based on a variety of options:</p>
<ul>
<li>Disable anonymous access (only logged-in users can install a package)</li>
<li>Allow the first X installs per user for free (e.g. for exploration)</li>
<li>Require the purchase of a license once a user installs a package more than X times</li>
</ul>
<p>If for whatever reason a package is prohibited from being installed then the CLI should output the reason, just like it already does today for a variety of non-commercial reasons.</p>
<p>Examples:</p>
<pre><code>> Please sign in to install package X
</code></pre>
<pre><code>> You've exceeded Y free installs for package X. Choose one of the following license options to continue:
1. $49 for a lifetime license
2. Don't install package
</code></pre>
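<p>To make the idea more concrete, a package author could declare such a policy through licensing metadata in their package manifest. The schema below is entirely hypothetical (no package manager supports anything like it today); it only illustrates how the options listed above might be expressed:</p>
<pre><code>{
  "name": "some-oss-library",
  "version": "3.1.0",
  "licensing": {
    "allowAnonymousInstalls": false,
    "freeInstallsPerUser": 50,
    "licenseOptions": [
      { "type": "lifetime", "price": 49, "currency": "USD" }
    ]
  }
}
</code></pre>
<p>A package manager CLI could then evaluate this metadata at install time and prompt the user accordingly.</p>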
<p>Purchasing a license should be as straightforward as buying a subscription in the App Store. Those licensing fees could be yearly, monthly or a one-off payment. I haven't thought through all the possibilities, but over time package platforms should evolve to give their users increasing flexibility to pick the right model for themselves.</p>
<h3 id="flexible-pricing">Flexible pricing</h3>
<p>Other ideas would be to limit the installation of a package only in certain scenarios. For example, a user may consume a package anonymously and without any limitations on their development machine, but require a license when installing it from a CI build.</p>
<p>Perhaps a package remains entirely free but users who want to have early access to bleeding edge features will require a license for the consumption of pre-release versions. Maybe a user that is a verified student will not be restricted at all.</p>
<p>Another option could be to offer the same package licensed under different conditions. A free download could be available for a GPL licensed version, with a differently licensed version offered for a fee. The possibilities are endless and each package author could pick an appropriate model between themselves and their user base.</p>
<h3 id="make-license-acquisition-easy-via-cli">Make license acquisition easy via CLI</h3>
<p>The biggest challenge with introducing a commercial layer into the already existing complexity of package management is not to make it so cumbersome that it becomes unusable. The key is that the vast majority (say 99%) of use cases should be satisfiable via the CLI. Nobody wants to open the browser, log into some platform and start entering card details or click through checkout screens in order to use a third party library. Package managers would have to capture a user's payment details once in advance and make the whole experience almost unnoticeable. There is already a good precedent for this model, and although it might not be 100% transferable to package management it would be a good starting point. I can already provision, update or extend expensive cloud infrastructure from the comfort of my CLI, and at no point am I confronted with pesky payment or checkout forms. I believe a very similar experience could be introduced into package management in order to make OSS more sustainable!</p>
<h3 id="downstream-dependencies">Downstream dependencies</h3>
<p>Downstream dependency management could pose another challenge which needs to be carefully thought through. Again, there are many ways this could be solved, but in its simplest form one could say that the end user must hold a valid (free or commercial) license for all downstream dependencies. Personally, this feels like the most straightforward and least ambiguous model, and also the easiest to start with from a package management platform's point of view.</p>
<h3 id="users-can-still-circumvent-and-cheat">Users can still circumvent and cheat</h3>
<p>What if users cheat?</p>
<p>Yeah, so what? There will always be a minority that will go out of their way to circumvent whatever restrictions have been imposed on them. However, they will have to jump through annoying hoops and take a path of constant resistance in order to achieve their goal. They will not be able to rely on the leading package managers for their language of choice. They will not receive frictionless updates to their dependencies. They might have to run their own hosting or pay a fee elsewhere to create a back-alley dependency channel. And they will have to go to extra lengths to stay anonymous or risk getting sued by the original package author or a big platform like npm itself.</p>
<p>If the legitimate path of acquiring a third party OSS package is deeply ingrained into developers' everyday tools and made ridiculously easy, then most programmers will simply not bother to go against the grain. It has worked for books, music and movies, and there is little reason to believe that it won't work for the open source community either.</p>
<h2 id="final-thoughts">Final thoughts</h2>
<p>Apart from package managers other platforms can play an active role in promoting the sustainability of OSS as well.</p>
<p>GitHub for example could allow repository owners to disable issue creation unless a user pays a minor fee. Users who submit a pull request which later gets merged could receive a credit towards their account, which in turn could be used as payment as well.</p>
<p>Maybe GitHub could create a new tab for "development requests" to track user desired work. Those could be new features, sample projects or expanded documentation. Repository owners could then set a funding goal for each development request and only commence with the work once the goal has been met.</p>
<p>I don't want to claim that I have thought of all the potential pros and cons of each of those ideas, but I would like to bring two major points home before I wrap up this post:</p>
<ol>
<li>There is no shortage of creative ideas which could immensely improve the sustainability of OSS.</li>
<li>It is the platforms which developers use every day that must get involved. Paying for OSS is only awkward because it has not yet been normalised by the platforms we use.</li>
</ol>
<p>GitHub, GitLab, npm, NuGet, etc. the ball is in your court!</p>
https://dusted.codes/fund-oss-through-package-managers
[email protected] (Dustin Moris Gorski)
Tue, 28 Jun 2022 00:00:00 +0000
oss, nuget, npm, github

Can we trust Microsoft with Open Source?<p>Oh boy, what a week of .NET drama again. Not bored yet? Read on, but for this one you’ll need some stamina, because this one is different.</p>
<p>Before I start with my actual blog post let me give you a short disclaimer. This is <strong>not</strong> an issue of some outspoken or sceptical community members making a fuss out of nothing. I know it is easy to see it this way with incomplete information, but trust me on this one, you couldn’t be more wrong if you believed that's the case this time. This is a <strong>bigger issue</strong>, an issue internally playing out at Microsoft right now as we speak. Many people from Microsoft itself, who have been working really hard to build trust with the OSS community and market .NET as a real viable OSS platform are struggling right now. These are your .NET heroes who you probably dearly cherish and these folks are currently unable to publicly speak their minds. They struggle and they want YOU to speak for them. In fact, they want you to speak out right now and make your voice heard!</p>
<p>You don’t believe me? Don’t take my word for it, but how do you explain this to me?</p>
<p><a href="https://twitter.com/shanselman/status/1451376901579182082"><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/tweet-1.png" alt="Tweet 1"></a>

<a href="https://twitter.com/shanselman/status/1451737603942739974"><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/tweet-2.png" alt="Tweet 2"></a></p>
<p><a href="https://twitter.com/davidfowl/status/1451759897708666881"><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/tweet-3.png" alt="Tweet 3"></a>

<a href="https://twitter.com/DamianEdwards/status/1451015493872087045"><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/tweet-4.png" alt="Tweet 4"></a></p>
<p><a href="https://twitter.com/condrong/status/1451754645563457537"><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/tweet-5.png" alt="Tweet 5"></a></p>
<h2 id="the-hot-hot-reload-issue">The hot "Hot Reload" issue</h2>
<p>Here is a short summary of what has happened…</p>
<p>For the majority of the last year the .NET Team was hugely focused on improving the “inner dev loop” in .NET. You might have heard many prominent .NET figures use exactly those words on countless public forums, live streams, conferences and public talks. It was one of the <a href="https://themesof.net">.NET Team’s highest priority items for .NET 6</a>:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/themes-of-dotnet.png" alt="Themes of .NET"></p>
<p>As such the team worked hard on making lots of great improvements to .NET, the SDK and the tooling around it. One of those big features was “Hot Reload” in the <code>dotnet watch</code> tool. I watched Scott Hunter give a talk and early demos of this feature many months ago when .NET 6 was still in its infancy.</p>
<p>Hot reload wasn’t a fringe feature that might or might not have made it into the release of .NET 6. It was literally one of the flagship features that was in the making for a long time, <strong>and it was complete</strong>. After all, you don’t accumulate <a href="https://github.com/dotnet/sdk/pull/22217">1000s of lines of perfectly fine code</a> by accident.</p>
<p><a href="https://twitter.com/davidfowl/status/1392367324586418176?s=20"><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/hot-reload-tweet.png" alt="Hot Reload Tweet"></a>

So what happened to it? Well, <a href="https://www.linkedin.com/in/julia-liuson-6703441/">someone at Microsoft who wields great power</a> made the decision that features such as hot reload cannot be given away for free as part of the open source .NET SDK anymore. These features must be reserved for proprietary commercial products such as Visual Studio. In fact, there is a bigger internal strategy being formed at Microsoft to make Visual Studio the main IDE for .NET again, because some people at Microsoft are annoyed that Visual Studio Code and other third party tools have been undermining Visual Studio for far too long. As a result the <a href="https://github.com/dotnet/sdk/pull/22217">hot reload feature was ripped out of the SDK</a> in a last-minute effort, breaking all promises and conventions of Microsoft’s release candidate policies and <a href="https://devblogs.microsoft.com/dotnet/update-on-net-hot-reload-progress-and-visual-studio-2022-highlights/">announcing that hot reload will be a Visual Studio only feature</a> going forward.</p>
<p>I know what you think. You might think I’m reading too much in-between the lines here, but trust me on this one that I am not wrong. This is exactly what is happening behind the scenes at Microsoft right now and the .NET team wants you to know it even though they can't say it. It’s not by accident that we’ve seen a fairly “coordinated” effort by Microsoft employees <a href="https://www.theverge.com/2021/10/22/22740701/microsoft-dotnet-hot-reload-removal-decision-open-source">leaking a lot of internal infighting with the Verge</a> and <a href="https://www.theregister.com/2021/10/22/microsoft_net_hot_reload_visual_studio/">other media outlets</a> yesterday.</p>
<p>The issue goes even deeper. Not only will hot reload not make it into .NET 6 or any future version of .NET, the entire <code>dotnet watch</code> tool will be discontinued in an effort to push Visual Studio as a viable product. This was not publicly announced yet and don’t count on anyone from Microsoft to publicly admit that, but if you <a href="https://github.com/dotnet/sdk/pull/22217/files#r733047263">pay close attention</a> or if you happen to talk to a Microsoft developer privately over a coffee then you might come to that conclusion exactly.</p>
<p>Why the change of direction now? The truth is this hasn’t just happened now. Microsoft has already made such <a href="https://blog.lextudio.com/the-rough-history-of-net-core-debuggers-b9fb206dc4aa">subtle moves in the past</a>. For example the <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance">Python extension for Visual Studio Code</a> was released as a proprietary product from the start and never open sourced. The once open source cross platform IDE <a href="https://www.monodevelop.com">MonoDevelop</a> was hard forked by Microsoft and rebranded as "Visual Studio for Mac". <a href="https://blog.lextudio.com/the-end-of-monodevelop-80b383dab34b">All improvements and feature work since then have been proprietary and closed source</a>. Yes, that is right: Visual Studio for Mac is a closed source version of the formerly open source MonoDevelop IDE, which Microsoft obtained when it acquired Xamarin. There was no need to turn a long-standing open source project into a proprietary commercial product, especially since MonoDevelop was available on Linux and Mac and Visual Studio for Mac, as the name suggests, isn't. These subtle moves and <a href="https://github.com/dotnet/core/issues/505">many</a> <a href="https://github.com/dotnet/core/issues/4788">others</a> which went mostly unnoticed have emboldened certain people at Microsoft to pursue a less open strategy again.</p>
<p>I'm glad I am not the only one who noticed...</p>
<p><a href="https://twitter.com/hhariri/status/1451841350123597829?s=200"><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/debugging-tweet.png" alt="Microsoft changing licenses to prevent debugging outside Visual Studio"></a></p>
<p>Rumours say more is to follow.</p>
<p>Visual Studio Code has been one of the most successful products Microsoft has ever released. It has become a staple for software developers around the globe. It was a success for everyone except Visual Studio. Have you ever noticed that .NET - a Microsoft owned product - is the worst supported programming platform on Visual Studio Code - which also happens to be another Microsoft owned product? <a href="https://dusted.codes/dotnet-for-beginners">I’ve been complaining about this for years</a>, because to me Visual Studio Code represents the future of .NET and a new gateway into growing the .NET platform beyond traditional die-hard Windows fans. However, Microsoft has been purposefully underfunding <a href="https://github.com/OmniSharp">OmniSharp</a> for years in an effort to push developers towards Visual Studio again.</p>
<p>Sure, you might think this is not a big deal and completely up to Microsoft to decide where they want to allocate their resources, and normally I would agree, but what if I tell you that internally at Microsoft employees are being actively punished by management if they contribute improvements or bug fixes to OmniSharp (the OSS .NET plugin for VS Code) which is seen as further undermining Visual Studio?</p>
<p>Ouch, that is a big accusation and hard to believe, so why don't you reach out to one of your favourite Microsoft employees who work on .NET or Roslyn or another OSS product under .NET and ask them for a comment? They most likely won’t admit it, because they are not allowed to, but more importantly <strong>they also won’t deny it</strong>.</p>
<p>As Damian Edwards said before, good that nobody let the cat out of the bag ;)</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/microsoft-org-chart.png" alt="Microsoft org chart"></p>
<p>The big issue here is that Microsoft has an internal struggle going on at the moment. On one hand they want to be seen as a new version of Microsoft who loves Open Source, but on the other hand they want to actively block advances in OSS projects like the .NET SDK which could undermine their own commercial offerings. Microsoft<sup>*</sup> doesn’t want features like hot reload to make it into the SDK. They won’t develop cool features like these any more and most frighteningly, they won’t accept pull requests or community contributions which could add these features back into the SDK - and that is a bleak outlook for the open source community in .NET.</p>
<p>*) Microsoft outside the .NET team</p>
<p>So here is my simple question…</p>
<h2 id="can-we-trust-microsoft-with-oss">Can we trust Microsoft with OSS?</h2>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/microsoft-loves-oss.png" alt="Microsoft loves Open Source"></p>
<p>I am not sure. It takes years to build trust and only a few moments to lose it all. Microsoft is a huge organisation and we as outsiders often get to see only a handful of selected people being tasked to spread a certain message via their huge followings in order to create a public image in favour of Microsoft, but what if those people leave?</p>
<p>Do you trust Microsoft with Open Source or do you actually trust people like Jon Galloway, Scott Hanselman, Scott Hunter, Guido van Rossum, David Fowler, Damian Edwards, Miguel de Icaza and a handful of other OSS champions who have been pushing the OSS message internally from the bottom up? What if these people leave .NET? Will Microsoft continue to play nicely with the community?</p>
<p>Would it worry you if Scott Hunter was moved away from .NET and someone new from a different division at Microsoft would have taken over? Remember your answer when you find out in which part of Microsoft Scott Hunter works today.</p>
<h2 id="what-can-we-do">What can we do?</h2>
<p>This is a call to action to all .NET community members out there. Anyone with a tiny bit of clout make your voice heard. <a href="https://github.com/dotnet/sdk/issues/22247">Comment and upvote the issues on GitHub</a>. <a href="https://twitter.com/haacked/status/1451580844578000898?s=20">Tweet your dissatisfaction</a> and make sure to tag high profile folks at Microsoft so they see what you think! Send Microsoft a clear message that this undermines the trust which we’ve lent them over the last few years and that the betrayal of this trust can have long term consequences. This is not a threat by any means but really just the reality of the matter. Many developers in technical leadership roles and with hiring powers in their respective organisations have only stayed on .NET due to the big changes which came with .NET Core. People love .NET for what it is today, a more open, cross platform and community led platform. If Microsoft starts to change their hearts again then people will as well.</p>
<p>Please help to send the right message and make your voice heard!</p>
<h2 id="-update-24102021-">!!! UPDATE 24/10/2021 !!!</h2>
<p>Thanks to everyone who read this blog post and shared it on social media or spread the message in other ways. Yesterday evening (London, UK time) something amazing happened which can best be described in a single picture:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/dotnet-community.png" alt=".NET Community working together"></p>
<p>Honestly, this felt quite like some biblical moment. The entire community stepped up and made their voices heard. Everyone was respectful and worked together to get a clear message across:</p>
<p><strong>We love the new open and cross platform .NET of today and we won't give up even an inch of it without a proper fight :)</strong></p>
<p>Apparently this message made its way all the way to the top of Microsoft and the result speaks for itself:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-10-23/hot-reload-back-tweet.png" alt="Hot reload is back"></p>
<p>Special thanks to Scott Hanselman and Scott Hunter who <strong>amongst many others</strong> worked tirelessly behind the scenes to make this happen.</p>
<p>Read Scott Hunter's <a href="https://devblogs.microsoft.com/dotnet/net-hot-reload-support-via-cli/">official blog post</a> for more information and don't be too hung up on the exact wording. It's corporate lawyer speak for admitting the mistake and thanking the community. We should take that as a huge win!</p>
<p>Congratulations to everyone!</p>
https://dusted.codes/can-we-trust-microsoft-with-open-source
[email protected] (Dustin Moris Gorski)
Sat, 23 Oct 2021 00:00:00 +0000
dotnet, aspnet-core, oss

.NET Basics<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-05-06/dotnet-for-dummies-2.png" alt=".NET for Dummies"></p>
<p>So you want to learn .NET but you are <a href="https://www.reddit.com/r/csharp/comments/n5av93/do_i_need_to_learn_asp_net_before_starting_asp/">confused about the differences between .NET Framework and .NET Core</a> or what the various versions of ASP.NET are or what the relationship is between C# and F#? If that is the case then you came to the right place. This guide will cover all the basics of .NET and shed some light on the various acronyms and buzz words behind it!</p>
<p>If you are new to .NET and you want to get a holistic overview of the entire platform and what parts really drive the framework and how they relate to each other then look no further because this blog post will cover them all!</p>
<div class="tip"><p><strong>Disclaimer:</strong> I am a (mostly) self taught developer and a non native English speaker. I have written this guide with the utmost care and to the best of my knowledge, but there is a good chance that I could have made a mistake or misrepresented some details in the guide below. If you find an issue then please be polite and let me know in the comments underneath or email me directly at hello [at] dusted.codes. I appreciate any type of feedback and will do my best to correct any mistakes as soon as I can. Thank you.</p></div>
<h2 id="table-of-contents">Table of contents</h2>
<ul>
<li><a href="#what-is-net">What is .NET?</a>
<ul>
<li><a href="#c-vs-f-vs-vbnet">C# vs. F# vs. VB.NET</a></li>
<li><a href="#il-code-and-the-cli">IL Code and the CLI</a></li>
<li><a href="#sdk-and-runtime-clr">SDK and Runtime (CLR)</a></li>
<li><a href="#so-what-is-net">So what is .NET?</a></li>
</ul>
</li>
<li><a href="#what-is-net-framework">What is .NET Framework?</a></li>
<li><a href="#what-is-mono">What is Mono?</a></li>
<li><a href="#what-is-net-core">What is .NET Core?</a></li>
<li><a href="#what-is-net-standard">What is .NET Standard?</a></li>
<li><a href="#why-is-there-no-net-core-4">Why is there no .NET Core 4?</a></li>
<li><a href="#what-is-aspnet">What is ASP.NET?</a></li>
<li><a href="#what-is-aspnet-core">What is ASP.NET Core?</a></li>
<li><a href="#where-do-i-start">Where do I start?</a></li>
<li><a href="#who-is-dotnet-bot">Who is dotnet-bot?</a></li>
<li><a href="#why-is-everything-net">Why is everything .NET?</a></li>
<li><a href="#useful-links">Useful links</a></li>
</ul>
<h2 id="what-is-net">What is .NET?</h2>
<p>.NET is the top level name of a Microsoft technology used for building software. It is a twenty-year-old platform which has seen a lot of change and innovation over the years and which spans many different domains. .NET can be used to develop web applications, games, IoT, machine learning, desktop and mobile applications and much more. The term ".NET" is often used in a very far reaching sense and applied synonymously to a wide range of smaller parts of the platform, such as the original .NET Framework, the newer .NET Core, or languages such as C#, F# and VB.NET.</p>
<p>When people talk about ".NET" they could mean anything from ASP.NET to PowerShell. In order to understand .NET one has to look at all the different pieces that make up the platform and look at them individually to get the full picture.</p>
<div class="tip"><p><strong>Tip:</strong> The official website for .NET is <a href="https://dotnet.microsoft.com" target="_blank">dotnet.microsoft.com</a>, but a much more memorable URL to all things .NET is <a href="https://dot.net" target="_blank">dot.net</a>, which will redirect a beginner to the most up to date resources around the Microsoft .NET platform.</p></div>
<h3 id="c-vs-f-vs-vbnet">C# vs. F# vs. VB.NET</h3>
<p>First of all .NET is not a programming language. It's a development platform. In fact, you can use three different officially supported programming languages to develop with .NET:</p>
<ul>
<li><a href="https://dotnet.microsoft.com/learn/csharp">C#</a> (An object oriented language, very similar to Java)</li>
<li><a href="https://dotnet.microsoft.com/learn/fsharp">F#</a> (A functional first language, similar to OCaml)</li>
<li><a href="https://docs.microsoft.com/en-gb/dotnet/visual-basic/">VB.NET</a> (An object oriented language which is the successor of VB6)</li>
</ul>
<p>VB.NET and C# were the first officially supported languages for .NET. VB.NET is the successor of Visual Basic 6.0 and, whilst extremely similar in syntax, it is a different language and incompatible with the classic version of Visual Basic.</p>
<p>C# is a C-like object oriented language which first borrowed a lot of its early concepts from Java and was initially seen as an imitation of it. Today things stand very differently and C# is often being praised for its innovation and modern features which Java trails behind. The name <a href="https://www.donnfelker.com/how-did-c-get-its-name/">C# is a word play from taking C++ and adding two more + signs</a> to it, so that it forms the hash character, or as .NET developers like to call it, the "sharp" sign.</p>
<div class="tip"><p><strong>Fun fact:</strong> Initially C# was developed under the name "Cool" which stood for <strong>C</strong>-like <strong>O</strong>bject <strong>O</strong>riented <strong>L</strong>anguage but then was renamed to C# for trademark reasons.</p></div>
<p>A few years after .NET's initial release Microsoft Research and <a href="https://twitter.com/dsyme">Don Syme</a> developed a completely new language called F#. Unlike C# and VB.NET, "F-Sharp" was designed as a functional first, multi paradigm language which took a lot of inspiration from OCaml, Erlang, Python, Haskell and Scala at the time. For many years F# has been the leading source of inspiration for new features in C# and was the first .NET language to introduce features such as LINQ, async and pretty much all of the new features from C# 7 onwards. It was also the first .NET language to go open source, before all of .NET became public.</p>
<p>All three programming languages are primarily <a href="https://en.wikipedia.org/wiki/Type_system#Static_and_dynamic_type_checking_in_practice">statically typed languages</a> which means that the compiler will provide type safety checks during development and compilation. Those type safety checks can prevent many hard to catch runtime errors which could otherwise occur.</p>
<p>The opposite to a statically typed language is a dynamic one. Famous examples of dynamic languages are Python or JavaScript. For example, in C# one cannot accidentally assign a string value to a float variable but in JavaScript this would be totally fine.</p>
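<p>The difference is easy to demonstrate. The snippet below is perfectly valid JavaScript, whereas the equivalent reassignment in C# would be rejected by the compiler before the program ever runs:</p>
<pre><code>// In a dynamically typed language a variable's type can change at runtime.
let price = 9.99;           // starts life as a number
console.log(typeof price);  // prints "number"

price = "nine ninety-nine"; // reassigning a string is perfectly legal
console.log(typeof price);  // prints "string"

// The C# equivalent would not compile:
//   float price = 9.99f;
//   price = "nine ninety-nine"; // compile-time error: cannot convert string to float
</code></pre>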
<div class="tip"><p><strong>Note:</strong> C# has the <code>dynamic</code> keyword which allows a developer to introduce dynamic behaviour into the language, but it remains an extremely rarely used feature, mostly reserved for exceptional cases where interop with other systems is required.</p></div>
<h3 id="il-code-and-the-cli">IL Code and the CLI</h3>
<p>A term which often gets mentioned alongside .NET is so called "IL code". IL stands for intermediate language, and IL code is the code into which C#, F# and VB.NET get translated when an application gets compiled.</p>
<p>In .NET it is perfectly possible (and not that uncommon) to have multiple projects of different languages mixed into a single solution. For example one can have several F# projects sitting alongside C# and talk to each other, forming a larger application.</p>
<p>This is possible because the CLI (Common Language Infrastructure, not to be mistaken for the "command line interface") is a language agnostic interface which allows code to be executed on different architectures without the code having to be rewritten for each specific platform.</p>
<p>At this point this might sound a little bit confusing, but it will become much clearer when I explain the .NET CLR in the next part.</p>
<div class="tip"><p><strong>Tip:</strong> One can use <a href="https://sharplab.io" target="_blank">sharplab.io</a> to translate .NET code into IL code and get a glimpse into the inner workings of the compiler.</p></div>
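<p>As a small illustration of what this translation looks like, here is a trivial C# method together with a simplified version of the IL which a tool such as sharplab.io would show for it (the exact output varies with compiler version and settings):</p>
<pre><code>public class Calculator
{
    public int Add(int a, int b) =&gt; a + b;
}

// Simplified IL for the Add method:
//
// .method public hidebysig instance int32 Add(int32 a, int32 b) cil managed
// {
//     ldarg.1    // push argument 'a' onto the evaluation stack
//     ldarg.2    // push argument 'b' (ldarg.0 would be 'this')
//     add        // pop both values and push their sum
//     ret        // return the value on top of the stack
// }
</code></pre>
<p>Whether this IL originated from C#, F# or VB.NET is irrelevant at this level, which is exactly why the languages can interoperate.</p>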
<h3 id="sdk-and-runtime-clr">SDK and Runtime (CLR)</h3>
<p>Software programmers write applications with the help of very high level languages which have human readable constructs such as <code>if</code>, <code>else</code>, <code>return</code>, <code>while</code>, <code>foreach</code>, <code>public</code>, <code>private</code> and so on. Of course this is not how a binary machine works, and therefore every application written in a high level programming language must get translated into native machine code at some point in time. We can broadly categorise programming languages into "<strong>compiled</strong>" and "<strong>interpreted</strong>" languages, and code into "<strong>managed</strong>" and "<strong>unmanaged</strong>" code.</p>
<h4 id="compiled-vs-interpreted">Compiled vs. Interpreted</h4>
<p>An interpreted language is a programming language where the code which gets written by a developer is the final code which gets shipped as the application. For example, in PHP a developer doesn't compile <code>.php</code> files into something else. PHP files are the final artefact which gets shipped to a web server, and only when an incoming request hits the server does the PHP engine read the <code>.php</code> files and interpret the code "just in time" to translate it into lower level machine code. If a developer makes a mistake then they will not know until the code gets executed. This has the benefit that a developer can quickly modify raw <code>.php</code> files and get a quick feedback loop during development, but on the other hand it means that many errors will not be found until the server runs the code. Typically an interpreted language also takes a slight performance hit, because the runtime has to do the entire compilation at the time of execution.</p>
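<p>JavaScript (another interpreted, dynamically typed language) shows the same late-failure behaviour. The script below starts executing happily and only fails once the broken line is actually reached:</p>
<pre><code>function greet(name) {
  // Calling a string as if it were a function is a bug, but no tool
  // complains about it before the script actually runs this line.
  return name();
}

console.log("Script started without any complaints");

try {
  greet("world"); // only now does the error surface
} catch (err) {
  console.log("Runtime error caught:", err.name); // "TypeError"
}
</code></pre>
<p>A compiled language would have rejected the equivalent code before it ever shipped.</p>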
<p>In contrast, a compiled language does all of the compilation work (or parts of it) ahead of time. For example, Go requires code to be compiled into native machine code before an application can run. This means that the compiler will notify a developer about many potential errors well ahead of execution, but equally it means that changes to the code require an additional step during development before they can be tested.</p>
<p>.NET is also a compiled platform, but unlike Go or Rust it doesn't compile directly into native machine code but into an intermediate language (IL code). The IL code is the artefact which gets shipped as the final application, and the .NET runtime performs the final compilation step from IL code to native machine code using the JIT (just-in-time compiler) at runtime. This is why .NET applications get deployed as <code>.dll</code> files and not raw <code>.cs</code>, <code>.fs</code> or <code>.vb</code> files. It is also how C#, F# and VB.NET can talk to each other: they communicate at the IL level and not before. The model which .NET follows is shared by many other programming languages (Java's bytecode being the most famous example) and provides some notable benefits. It is usually much faster than interpreted code, because much of the compilation is done well in advance, but still slightly slower than low level languages such as Rust or C++ which compile directly into native machine instructions.</p>
<p>The important takeaway is to understand why .NET has <code>.dll</code> files and that .NET requires a runtime to do the final compilation from IL code to machine code with the help of the JIT. In .NET this runtime is called the "CLR" (Common Language Runtime).</p>
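<p>To make the IL step a little more tangible, below is a small (purely illustrative) C# method alongside a slightly simplified version of the IL which the compiler emits for it. The method and class names are made up for this example:</p>
<pre><code>// C# source (what the developer writes):
public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

// Roughly the IL the C# compiler emits for Add (simplified):
//
//   .method public hidebysig static int32 Add(int32 a, int32 b) cil managed
//   {
//       ldarg.0   // push the first argument onto the evaluation stack
//       ldarg.1   // push the second argument
//       add       // pop both values, push their sum
//       ret       // return the value on top of the stack
//   }
//
// At runtime the JIT translates these stack-based instructions
// into native machine code for the concrete CPU architecture.
</code></pre>
<p>This stack-based IL is what actually ships inside a <code>.dll</code>, regardless of whether the source was C#, F# or VB.NET.</p>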
<div class="tip"><p><strong>Fun fact:</strong> The source code compiler for C# and VB.NET is called <a href="https://github.com/dotnet/roslyn" target="_blank">Roslyn</a>, whereas <a href="https://github.com/dotnet/fsharp/blob/main/docs/compiler-guide.md" target="_blank">F# has its own compiler</a> called the <code>Fsharp.Compiler.Service</code> and a console application called <code>fsc</code> (fsharp compiler) which can be used to invoke the <code>FSharp.Compiler.Service</code> from the command line.</p></div>
<h4 id="managed-vs-unmanaged">Managed vs. Unmanaged</h4>
<p>Managed code is simply code which requires execution under the <a href="#il-code-and-the-cli">CLI (Common Language Infrastructure)</a>. Unmanaged code is code which runs outside the CLI. Without going too far into detail, the point is that in theory one can invent any new framework (or language) which runs on .NET as long as it implements the CLI. If that is the case then <a href="https://en.wikipedia.org/wiki/List_of_CLI_languages">the language can be translated into IL code and get executed under the CLR (Common Language Runtime)</a>. All these abstractions make it possible to have not only more than one language for .NET but also multiple frameworks such as .NET Framework, .NET Core or Mono (more on this later).</p>
<div class="tip"><p><strong>Fun fact:</strong> Microsoft has also developed a dynamic language runtime (DLR) which runs on the CLR and therefore can support dynamic languages on top of .NET. The most notable examples are <a href="https://ironpython.net" target="_blank">IronPython</a> and <a href="http://ironruby.net" target="_blank">IronRuby</a>, which are implementations of the Python and Ruby programming language for .NET.</p></div>
<h4 id="sdk-vs-runtime">SDK vs. Runtime</h4>
<p>Now that the purpose of the CLR (.NET runtime) is a bit clearer one can also explain why .NET is available as two different installations:</p>
<ul>
<li><strong>SDK</strong> (Software Development Kit with the runtime)</li>
<li><strong>Runtime</strong> (only)</li>
</ul>
<p>The runtime is what an end user's machine or a server must have installed in order to run a .NET application. The SDK is the software development toolchain which allows programmers to develop .NET applications. In layman's terms, the SDK is what produces a <code>.dll</code> file and the runtime is what can run it.</p>
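<p>In concrete terms the split looks roughly like this (the project name <code>MyApp</code> is made up for illustration and the output path will vary with the target framework):</p>
<pre><code># The SDK compiles the source code into IL and packages it as a .dll
dotnet build -c Release

# The runtime (CLR) loads the IL and JIT-compiles it to machine code.
# Only the runtime needs to be installed on the machine for this step.
dotnet ./bin/Release/net5.0/MyApp.dll
</code></pre>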
<p>This is why the <a href="https://dotnet.microsoft.com/download/dotnet/6.0">official .NET page</a> offers both options to download:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-05-06/dotnet-download-page-2.png" alt=".NET 5 Download Page"></p>
<p>Software developers must download the .NET SDK in order to build .NET applications, but a web server or end user machine should only install the .NET runtime which is the much smaller installation.</p>
<p>Another point which becomes very clear from the download page above is the loose relationship between the different .NET languages. As one can see, C#, F# and VB.NET are at completely different stages of their lives and are therefore versioned differently. The languages can evolve independently and introduce new language features to their own liking, as long as the associated compiler translates the source code into valid IL.</p>
<p>A careful observer might have also noticed the disparity between the .NET version, the .NET SDK version and the .NET runtime version. The official .NET version normally refers to the .NET runtime version, because that is essentially the final execution runtime which needs to be installed on a machine. The SDK can have a different version because the development toolchain can improve faster than the runtime itself, supporting new features and better development workflows whilst still targeting the same version of .NET.</p>
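<p>This disparity is easy to observe with the <code>dotnet</code> CLI, which can list all SDKs and runtimes installed side by side on a machine (the output depends entirely on what is installed):</p>
<pre><code># List all installed SDK versions
dotnet --list-sdks

# List all installed runtime versions (these can differ from the SDK versions)
dotnet --list-runtimes

# Show the SDK version which would be used in the current directory
dotnet --version
</code></pre>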
<div class="tip"><p><strong>Side note:</strong> Not every programming language requires an SDK and runtime. For example languages such as <a href="https://www.rust-lang.org" target="_blank">Rust</a> or <a href="https://golang.org" target="_blank">Go</a>, which directly compile into native machine code, don't require a runtime. These languages only have a <a href="https://golang.org/dl/" target="_blank">single download option</a> available which normally represents the SDK for building software. Coming from one of these languges can make .NET feel unusual, but in essence .NET is not any different than for example Java, which also has the JDK (Java Development Kit) for building software and the JRE (Java Runtime Environment) for running it.</p></div>
<h4 id="summary-of-net-components">Summary of .NET components</h4>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-05-06/dotnet-compilation-steps.svg" alt=".NET Compilation Steps"></p>
<h3 id="so-what-is-net">So what is .NET?</h3>
<p>Coming back to the original question: what is .NET? .NET is the combination of all of the different parts described above. It's a platform which consists of languages, a CLI, a runtime (CLR) and an SDK for building software.</p>
<p>To make matters worse, .NET comes in three official versions:</p>
<ul>
<li>.NET Framework</li>
<li>Mono</li>
<li>.NET Core</li>
</ul>
<p>Even though .NET Framework, .NET Core and Mono are officially labelled as "frameworks", these flavours of .NET are simply different implementations of the CLI.</p>
<div class="tip">
<strong>Fun fact:</strong>
<p>In theory all three frameworks come with slightly different CLRs:</p>
<ul>
<li>CLR (Original .NET Framework CLR)</li>
<li>Mono CLR (Mono's implementation of the CLR)</li>
<li>CoreCLR (The actual name of .NET Core's CLR)</li>
</ul>
<p>The CLR is the platform specific part of .NET since it has to translate platform agnostic IL code into an architecture's specific machine code. This is why the .NET SDK and Runtime downloads come in so many different versions.</p><p>In order to support .NET on a completely new architecture (such as Apple Silicon for example) Microsoft only has to build a new architecture specific CLR (e.g. for macOS Arm64) and the rest will continue to work.</p>
</div>
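<p>This platform-specific split also surfaces in the SDK itself: when publishing a self-contained application one has to pick a runtime identifier (RID) which pins the output to a specific OS and CPU architecture, so that the matching CLR gets bundled. A couple of illustrative examples (support for the newer RIDs depends on the SDK version):</p>
<pre><code># Publish a self-contained app for 64-bit Linux
# (bundles the Linux x64 CLR alongside the application's IL)
dotnet publish -c Release -r linux-x64 --self-contained

# The same application published for Apple Silicon
dotnet publish -c Release -r osx-arm64 --self-contained
</code></pre>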
<h2 id="what-is-net-framework">What is .NET Framework?</h2>
<p>.NET Framework is the original .NET. It's the platform which was first released twenty years ago, which only worked on Windows, and which was regularly updated and shipped as part of Windows.</p>
<p>.NET Framework came in these major versions:</p>
<ul>
<li>.NET Framework 1.0</li>
<li>.NET Framework 1.1</li>
<li>.NET Framework 2.0</li>
<li>.NET Framework 3.0</li>
<li>.NET Framework 3.5</li>
<li>.NET Framework 4.0</li>
<li>.NET Framework 4.5</li>
<li>.NET Framework 4.6</li>
<li>.NET Framework 4.7</li>
<li>.NET Framework 4.8</li>
</ul>
<p><a href="https://devblogs.microsoft.com/dotnet/announcing-the-net-framework-4-8/">.NET Framework 4.8</a> was the last official version of .NET Framework and has been superseded by .NET 5 since then. There will be no new version of .NET Framework any more and all future development is conducted on .NET Core (renamed to .NET 5) and its future versions.</p>
<p>The original .NET Framework is what most people think of when they have negative associations with .NET. It was tightly coupled to Windows, the CLR could only run on Windows or Windows Server with IIS, it required Visual Studio to develop on it and it had absolutely no cross platform support. It was also relatively slow in execution and slow to evolve.</p>
<p>Overall it worked well on Windows but increasingly failed to meet modern software development demands.</p>
<h2 id="what-is-mono">What is Mono?</h2>
<p>When Microsoft was still at war with Linux and had absolutely no appetite to support .NET on any other platform than Windows a crazy-genius person called <a href="https://twitter.com/migueldeicaza">Miguel de Icaza</a> decided to develop the <a href="https://www.mono-project.com">Mono Project</a> - an open source cross platform alternative to .NET Framework.</p>
<p>Mono allowed Linux, macOS and other Unix developers to write and execute .NET applications on non-Windows systems. The Mono project was led by Miguel as a collaborative open source project and attracted many supporters and even investors (through Xamarin) from all around the world. Despite its initial struggles and without any help from Microsoft, which meant it was always lagging slightly behind the latest version of .NET Framework, Mono still managed to become a very successful standalone framework which grew popular with the ALT.NET (alternative .NET) movement and matured incredibly well over the years.</p>
<div class="tip"><p><strong>Fun fact:</strong> Mono was only possible because it implemented the <a href="#il-code-and-the-cli">Common Language Infrastructure (CLI)</a> which Microsoft released as an open standard (ECMA-335) in December 2000.</p></div>
<p>Only in 2016, when Microsoft changed its internal culture and adopted a more favourable relationship with the open source community, did it decide to acquire <a href="https://xamarin.com/">Xamarin</a> (the company which owned Mono) and hire Miguel de Icaza as the lead.</p>
<p>Today Mono is part of the official .NET ecosystem and powers strategic products such as <a href="https://dotnet.microsoft.com/apps/xamarin">Microsoft's mobile development toolkit</a> and <a href="https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor">Blazor</a>, a .NET WASM (WebAssembly) runtime for running .NET in the browser.</p>
<h2 id="what-is-net-core">What is .NET Core?</h2>
<p>.NET Core is Microsoft's latest reinvention of the legacy .NET Framework with the promise of true cross platform support and improved performance. .NET Core started as a complete re-write of the .NET Framework and was initially focused on the "core" parts of the framework which were needed for running web and console applications.</p>
<p>Since the <a href="https://devblogs.microsoft.com/dotnet/announcing-net-core-1-0/">release of .NET Core 1.0</a> Microsoft has steadily iterated over the product and eventually caught up with the original .NET Framework which led to the renaming of .NET Core to simply ".NET" again. The current version of .NET Core is called ".NET 5" and .NET 6 will be released later this year.</p>
<p>.NET Core has truly lived up to its promises and revived the entire .NET ecosystem from day one. It runs on Linux, macOS and Windows, it is being <a href="https://github.com/dotnet/core">openly developed on GitHub</a> <a href="https://github.com/dotnet/core/blob/main/LICENSE.TXT">under an OSS license</a>, and it's a more light-weight standalone product which is decoupled from Windows and incredibly fast in comparison to .NET Framework.</p>
<div class="tip">
<p><strong>Fun fact:</strong></p>
<p>.NET Core's CLR is called CoreCLR and has its own JIT which is called <a href="https://devblogs.microsoft.com/dotnet/ryujit-the-next-generation-jit-compiler-for-net/" target="_blank">RyuJIT</a>. Both projects are also open source and available on GitHub:</p>
<ul>
<li><a href="https://github.com/dotnet/runtime/tree/main/src/coreclr" target="_blank">CoreCLR</a></li>
<li><a href="https://github.com/dotnet/runtime/tree/main/src/coreclr/jit" target="_blank">RyuJIT</a></li>
</ul>
<p>The term Ryu means dragon in Japanese and is a reference to a book about compilers famously known as the <a href="https://en.wikipedia.org/wiki/Compilers:_Principles,_Techniques,_and_Tools" target="_blank">"Dragon Book"</a>.</p>
</div>
<p>Today .NET Core represents the foundation of all new innovation in .NET and is being released on a yearly schedule, with every second year producing an LTS (long term support) version (.NET 6 being the next one).</p>
<p>Without doubt, any new .NET developer should start with .NET 5 or higher when learning .NET.</p>
<h2 id="what-is-net-standard">What is .NET Standard?</h2>
<p><a href="https://devblogs.microsoft.com/dotnet/the-future-of-net-standard/">.NET Standard is basically a short lived invention from the past</a>, but because it still lingers around many corners of the internet it is worth quickly touching on as well.</p>
<p>Before .NET 5 became the unification of .NET Framework and .NET Core, Microsoft created a specification called ".NET Standard" which was meant to help developers build libraries compatible with both Framework and Core. .NET Standard was not a framework itself, but just a blueprint (specification) of available APIs.</p>
<p>It worked as follows: the higher the version of .NET Framework, the higher the version of .NET Standard it implemented. The same was true for .NET Core. This meant that a .NET developer could target a specific version of .NET Standard and then be confident that it would be compatible with certain versions of .NET Framework and .NET Core.</p>
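<p>In practice targeting .NET Standard was a one-line setting in a library's project file. A library with the following (hypothetical) <code>.csproj</code> could be referenced from both .NET Framework 4.6.1+ and .NET Core 2.0+ applications, because both implement .NET Standard 2.0:</p>
<pre><code>&lt;Project Sdk="Microsoft.NET.Sdk"&gt;
  &lt;PropertyGroup&gt;
    &lt;!-- Target the specification, not a concrete framework --&gt;
    &lt;TargetFramework&gt;netstandard2.0&lt;/TargetFramework&gt;
  &lt;/PropertyGroup&gt;
&lt;/Project&gt;
</code></pre>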
<p>It was a worthwhile idea, but it unfortunately always caused some confusion amongst .NET developers and was finally phased out with the unification of .NET 5.</p>
<h2 id="why-is-there-no-net-core-4">Why is there no .NET Core 4?</h2>
<p>.NET 5 became the first version of .NET to unify .NET Framework and .NET Core. As such it had to pick a version number which would reflect the natural progression of both frameworks and had to be higher than .NET Core 3.1 and .NET Framework 4.8. Simple as that.</p>
<div class="tip"><p><strong>Tip:</strong> You can visit <a href="https://versionsof.net" target="_blank">versionsof.net</a> to get an overview of all existing versions of .NET, including .NET Framework, .NET Core and Mono. It also highlights which versions are in long term support.</p></div>
<h2 id="what-is-aspnet">What is ASP.NET?</h2>
<p>ASP.NET is the name of .NET's web platform. It is a collection of .NET libraries to build rich web applications and comes with .NET Framework. ASP.NET inherited its name from <a href="https://en.wikipedia.org/wiki/Active_Server_Pages">ASP (Active Server Pages)</a>, which was Microsoft's initial server side scripting language for dynamic web pages. ASP was an interpreted language just like PHP and pretty much a Redmond copy of it. ASP.NET on the other hand is an object oriented framework which compiles into IL code like everything else in .NET Framework. It was first released with .NET Framework 1.0 and is only available on Windows.</p>
<p>Much like the rest of .NET Framework, ASP.NET is mostly seen as a legacy platform, which is tightly coupled to Windows and requires hosting in IIS (Internet Information Services) - a proprietary Microsoft web server.</p>
<h2 id="what-is-aspnet-core">What is ASP.NET Core?</h2>
<p>For a long time Microsoft was bleeding existing developers to new emerging technologies, such as Node.js, Docker or the Cloud. These tools were purposefully built to tackle the demands of the modern web and were for the most part incompatible with Microsoft's Windows-centric ASP.NET. As a result Microsoft decided to develop a new version of ASP.NET with the release of .NET Core. The aim was to provide a more light-weight, composable and cross platform compatible web platform which could compete with other technologies on a level playing field. ASP.NET Core was released with the first version of .NET Core and remained the main focus of its initial releases. It played a huge role in the success of .NET Core and the adoption of .NET by a whole new generation of developers, scoring consistently high in <a href="https://insights.stackoverflow.com/survey/2020">StackOverflow's yearly developer surveys</a>.</p>
<p>ASP.NET Core is the future of web in .NET and often forms the baseline library for many other web frameworks too. One of those is <a href="https://dotnet.microsoft.com/apps/aspnet/mvc">ASP.NET Core MVC</a>, an object oriented model-view-controller framework sitting on top of ASP.NET Core, which allows developers to build rich web applications in an object oriented class driven approach. Other notable ASP.NET Core web frameworks are <a href="https://github.com/CarterCommunity/Carter">Carter</a>, <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a>, <a href="https://saturnframework.org">Saturn</a>, <a href="https://github.com/xyncro/freya">Freya</a> or <a href="https://github.com/pimbrouwers/Falco/">Falco</a>, which have a slightly more light-weight and functional nature to building web applications in .NET.</p>
<p>In addition there are also .NET web frameworks that don't require ASP.NET Core at all. Famous examples of standalone web frameworks are <a href="https://websharper.com">WebSharper</a> or <a href="https://suave.io">Suave</a>.</p>
<p>Last but not least ASP.NET Core can also be used completely on its own in a more bare metal approach. It is a very popular choice and something which the ASP.NET Core team is currently focused on and will probably evangelise more in the future.</p>
<h2 id="where-do-i-start">Where do I start?</h2>
<p>Unless someone has a very good reason not to, everyone should start with .NET Core (now .NET 5). As a developer one should download the <a href="https://dotnet.microsoft.com/download">latest .NET SDK</a> and familiarise themselves with the <code>dotnet</code> command line tool.</p>
<p>The most important commands are <code>dotnet build</code>, which restores dependencies and builds an application, <code>dotnet run</code>, which builds and launches an application, and <code>dotnet test</code>, which runs unit tests.</p>
<p>The easiest way to get started is by creating a simple console application:</p>
<pre><code>dotnet new console
</code></pre>
<p>If someone wants to jump straight into web development then I'd recommend beginning with an empty ASP.NET Core application:</p>
<pre><code>dotnet new web
</code></pre>
<p>This is a good starting point to slowly explore ASP.NET Core as a whole and learn about the architecture of the framework and how to compose bigger applications.</p>
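<p>The exact files which <code>dotnet new web</code> scaffolds vary between SDK versions, but at the time of writing the empty template boils down to roughly the following sketch (slightly condensed here for readability):</p>
<pre><code>using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Hosting;

public class Startup
{
    // Configures the HTTP request pipeline
    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =&gt;
        {
            // A single endpoint responding to GET / with "Hello World!"
            endpoints.MapGet("/", async context =&gt;
                await context.Response.WriteAsync("Hello World!"));
        });
    }
}

public class Program
{
    public static void Main(string[] args) =&gt;
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(web =&gt; web.UseStartup&lt;Startup&gt;())
            .Build()
            .Run();
}
</code></pre>
<p>Running <code>dotnet run</code> in the project folder starts a Kestrel web server and responds to requests on the root path.</p>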
<p>A great resource to learn ASP.NET Core and find out about different project types is <a href="https://github.com/dodyg/practical-aspnetcore">practical-aspnetcore</a>. Another fantastic resource is <a href="https://recaffeinate.co/book/">The Little ASP.NET Core Book</a>, a short free e-book to help people learn about ASP.NET Core! For functional developers the <a href="https://safe-stack.github.io">SAFE Stack</a> is a great place to get started too!</p>
<h2 id="who-is-dotnet-bot">Who is dotnet-bot?</h2>
<p>When Microsoft started to open source .NET they didn't want all of the existing source code to get attributed to a single person. Consequently Microsoft created a new GitHub user called <a href="https://github.com/dotnet-bot">dotnet-bot</a> as a placeholder for the <a href="https://github.com/dotnet/coreclr/commit/ef1e2ab328087c61a6878c1e84f4fc5d710aebce">initial commit</a>. It has since evolved into .NET's official mascot:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-05-06/dotnet-bot.svg" alt="dotnet-bot"></p>
<p>You can design your own dotnet-bot mod at <a href="https://mod-dotnet-bot.net/create-your-bot/">mod-dotnet-bot.net</a> and also find existing swag in the <a href="https://github.com/dotnet-foundation/swag">official .NET Foundation repository</a>.</p>
<h2 id="why-is-everything-net">Why is everything .NET?</h2>
<p>Lastly, one might ask why everything is called something something .NET. Isn't that causing a lot of confusion, and isn't it part of the reason why guides like this have to be written in the first place? Well yes, but it's also important to remember that Microsoft is still a big corporation after all. That means that things have to be complicated enough to justify corporate prices and policies, and Microsoft also doesn't want to disenfranchise all the corporate consultants who have made an entire career out of distilling everything .NET.</p>
<p>Just think of all the certifications alone!</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-05-06/joke.gif" alt="Laughing at my own joke ;)"></p>
<p>Jokes aside, there is no real reason why everything is called something .NET. Someone at Microsoft probably really likes the name .NET, and because they are the boss everything will remain and continue to be .NET until they eventually retire :)</p>
<h2 id="useful-links">Useful links</h2>
<p>Finally a list of some useful links:</p>
<ul>
<li><a href="https://dot.net">.NET Homepage</a></li>
<li><a href="https://devblogs.microsoft.com/dotnet/">.NET Blog</a></li>
<li><a href="https://devblogs.microsoft.com/dotnet/category/conversations/">.NET Conversations</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/csharp/">C# Documentation</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/fsharp/">F# Documentation</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/visual-basic/">VB.NET Documentation</a></li>
<li><a href="https://nuget.org">NuGet.org</a> (npm for .NET)</li>
<li><a href="https://dotnetfoundation.org">.NET Foundation</a></li>
<li><a href="https://live.dot.net">Live .NET</a> (.NET community stand-ups)</li>
<li><a href="https://versionsof.net">Versions of .NET</a></li>
<li><a href="https://themesof.net">Themes of .NET</a> (High level topics which the .NET team is working on)</li>
<li><a href="https://sitesof.net">Sites of .NET</a> (Find all official .NET pages in one place)</li>
<li><a href="https://discoverdot.net">Discover .NET</a> (.NET community resources)</li>
<li><a href="https://builtwithdot.net">BuiltWithDot.Net</a> (Collection of projects which have been built with .NET)</li>
<li><a href="https://dotnetketchup.com">.NET Ketchup</a> (Collection of weekly .NET news)</li>
</ul>
https://dusted.codes/dotnet-basics
[email protected] (Dustin Moris Gorski)https://dusted.codes/dotnet-basics#disqus_threadMon, 24 May 2021 00:00:00 +0000https://dusted.codes/dotnet-basicsdotnetdotnet-coreaspnetaspnet-core
You don’t need Docker
<div class="tip"><strong>Note:</strong> In this blog post I use the terms Docker and containers interchangeably. I know that Docker is only one of many container technologies and not always the best suited one (e.g. Kubernetes), but for the purpose of this blog post I don't differentiate between them.</div>
<p><em><span style="font-size: 1.5em;">„</span>You don’t need Docker. I started my business on a small server under my desk. It took me more than 10 years before I reached the scale where I needed something like Docker. You’ll be fine for a very long time before you’ll have to worry about it!<span style="font-size: 1.5em;">“</span></em></p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2021-03-26/you-dont-need-docker.png" alt="You don't need Docker"></p>
<p>Who has never heard someone say something like this?</p>
<p><a href="https://dev.to/inductor/do-you-really-need-docker-or-kubernetes-in-your-system-11nk">I hear it all the time</a>. At least once a week I see someone <a href="https://twitter.com/FransBouma/status/1216736461166383105?s=20">tweet</a> or <a href="https://medium.com/@chintanaw/no-you-dont-need-cloud-docker-no-kubernetes-hell-no-ae2e422d0942">blog</a> about how <a href="https://launchyourapp.meezeeworkouts.com/2021/03/why-we-dont-use-docker-we-dont-need-it.html?m=1">Docker is not really needed</a> and how they managed to get away without it. To be honest they are not wrong. Nobody really <em>needs</em> Docker, but then again “need” is a very strong word. The real question is “Do you want Docker”?</p>
<h2 id="a-different-world">A different world</h2>
<p>It is true, many successful websites and web applications started without Docker. Many also started without the “Cloud”. Some probably even started without NoSQL databases (there was a time when MySQL and Oracle were king), no Redis, no SendGrid or MailChimp, no Stripe or Braintree, no Angular or React, definitely no Vue, no serverless functions, no queues and distributed systems, no CSS frameworks, no CI/CD pipelines and probably not even virtual machines! Heck, some websites probably didn't even use jQuery or JavaScript to begin with!</p>
<p>However, would anyone really want to start a new internet business without these tools today? I don't think so, at least not if they aim for success!</p>
<p>It's easy to forget but when companies like MySpace or Facebook started the world was a very different place. We had no social media (that’s kind of obvious from this example), no iPads and iPhones, no smartwatches, no home speakers, we had no fibre optic cables leading to our homes and we didn’t have mobile broadband either. The aspiration of internet domination wasn’t a thing yet. Even when Facebook already reached huge success they were still only one of many other social networks on the web. We had many individual (often national) versions of something similar to Facebook for a very long time before the world wide web became a little bit less wide again. Internet usage was very different too. <a href="https://www.telegraph.co.uk/technology/news/11272577/How-South-Korean-pop-star-Psy-broke-YouTube.html">Psy didn't even break YouTube yet</a>.</p>
<p>The world, our relationship with the internet and people's expectations were completely different than what they are today.</p>
<h3 id="instant-scale-was-not-a-threat">Instant scale was not a threat</h3>
<p>Do you remember <a href="https://en.wikipedia.org/wiki/Bulletin_board_system">online bulletin boards (BBS)</a>? Before Reddit we had many independent self-hosted internet forums. There was a time where there was no WordPress or Medium. The internet started off as a truly decentralised web with many independent websites, blogs, communities and even multiple search engines before Google took over. Internet users didn't all congregate in the same places then.</p>
<p><strong>Websites had time to grow</strong>. The risk of an indie blog's traffic spiking from 3 users per week to tens of thousands of users in a single day was just not credible. Today it doesn't matter how fringe or unknown a website is, any page could suddenly end up on Hacker News and learn what it means to get the infamous <a href="https://www.indiehackers.com/post/the-hacker-news-hug-50-000-unique-visitors-in-18-hours-65977e0636">Hacker News hug of death</a>. Maybe a few years ago it was possible to delay thinking about scale to a later stage, but today this is not possible if one wants to reap the benefits of sudden success.</p>
<h3 id="a-more-patient-crowd">A more patient crowd</h3>
<p>The first time I used an instant messenger was at the time of <a href="https://en.wikipedia.org/wiki/ICQ">ICQ</a>. The first time I downloaded music was from <a href="https://en.wikipedia.org/wiki/Napster">Napster</a>. Then I switched to <a href="https://en.wikipedia.org/wiki/Kazaa">KaZaA</a>, then <a href="https://en.wikipedia.org/wiki/LimeWire">LimeWire</a>, then <a href="https://en.wikipedia.org/wiki/EDonkey2000">eDonkey</a> and later to <a href="https://en.wikipedia.org/wiki/Giganews">Giganews</a>, which was a popular <a href="https://en.wikipedia.org/wiki/Usenet">Usenet</a> provider at that time. What they all had in common was the true nature of a decentralised web. They were all built on so-called <a href="https://en.wikipedia.org/wiki/Peer-to-peer">peer-to-peer networks</a>. When my friends went offline then I couldn't send them a message any more. Messages just wouldn't arrive. There would be no connection and it would time out. When enough "peers" turned their computers off then my downloads would pause. It was totally normal to download content over a period of multiple days if not weeks. There was absolutely no expectation for things to happen in an instant moment.</p>
<p>Nowadays nobody would accept a download taking longer than a few seconds, let alone a couple of weeks. Patience has gone down and expectations have gone up. What was once a luxury experience is now the baseline bar. If a video doesn't stream in at least 1080p then it might as well not exist. If the quality is not right then people turn away. Startups, indie hackers, open source projects and even hobbyists cannot afford to offer a degraded service if they want to get traction in current times.</p>
<h3 id="the-internet-was-a-toy">The internet was a toy</h3>
<p>When I got my first computer the internet was nothing more than a toy. My relationships with my family and friends did not rely on the availability of WhatsApp. Jobs were not impacted if StackOverflow, GitHub or Slack were a bit slow. Today I can't even book a doctor's appointment without going online.</p>
<p>As we have become increasingly more dependent on the web, service providers have gained a higher responsibility in keeping their services alive. Today it's rather questionable if a business can still offer a meaningful SLA with a server under someone's desk.</p>
<h3 id="tolerance-towards-failure">Tolerance towards failure</h3>
<p>Tolerance towards failure is another benefit which web applications had in the past. Today not so much anymore. Nobody expects Uber to be down. Binge watching is only possible because Netflix never goes offline. Music never stops playing when you're on Spotify. And if it did then people would lash out.</p>
<p>We've become so used to the high quality and availability of services that no glitches in the system go unnoticed anymore. No failure, no scalability issue and no data loss get past users without people grinding their teeth or writing angry tweets. Every incident has a lasting effect and can limit one's future potential of growth. For example, I have never hosted anything on GitLab myself but I sure know that they are infamous for <a href="https://github.com/sameersbn/docker-gitlab/issues/13">being</a> <a href="https://serverfault.com/questions/1049621/gitlab-push-very-slow-gitlab-ce">so</a> <a href="https://gitlabfan.com/why-gitlab-is-slow-and-what-you-can-do-about-it-bca9d61405bd">awfully</a> <a href="https://stackoverflow.com/questions/43226191/frequently-our-gitlab-is-getting-slow">slow</a> or losing <a href="https://gitlab.developers.cam.ac.uk/uis/devops/devhub/docs/-/wikis/reports/29th-March-2019-Incident-Report">production data</a> without full recourse.</p>
<h3 id="the-network-effect">The network effect</h3>
<p>Although everyone likes some quick gains, nobody likes to see their business outgrow their own ability of keeping up with demand. <strong>Scale plays a huge part in that realm.</strong> We are so interconnected that unless someone launches a product in a private invite-only group they won't be able to predict (or control) how fast they will grow. The internet has its own mind and nobody knows who will be famous tomorrow and what will go viral the day after. An innocent tweet, a <a href="https://remoteclan.com/s/27ihu5/my_product_scale_went_viral_150_000_views">short post on Reddit</a> or a routine launch on Product Hunt can <a href="https://medium.com/@vinayh/0-10-000-users-how-openvid-launched-on-product-hunt-575ff9ecf7a1">shift a new startup from 0 to 10,000 users</a> in the span of 3 months. That level of virality is insane. Imagine having 10,000 willing customers on your doorstep and they can't sign in because someone told you that you won't have to think about scale for a good while. Don't become a victim of your own success.</p>
<h2 id="do-you-delneeddel-want-docker">Do you <del>need</del> want Docker?</h2>
<p>Nobody really needs Docker (or containers per se) and I'm not going to claim that containers are the perfect silver bullet to all the issues listed above, but they remain an incredibly powerful tool which can address many of today's challenges in a very time and cost effective way. Sure, there is an upfront investment to be made in learning container technology and putting it into practice, but by no means is it any harder or more time intensive than learning a new CSS framework or the latest flavour of JS. If anything, skills like Docker are much more broadly applicable and more transferable between programming languages, tech stacks and jobs.</p>
<p>Containers make builds predictable, they make deployments reliable and they make horizontal scaling a breeze. Containers are a great way of providing stable and backwards compatible APIs whilst keeping code complexity low. Containers can reduce infrastructure cost by running multiple applications on the same box. They can accelerate a team's productivity by running different feature branches at the same time and launching testing environments independently of hardware. Containers make blue-green deployments mainstream and help to keep downtime low.</p>
<p>The question is, why would you not want Docker?</p>
https://dusted.codes/you-dont-need-docker
[email protected] (Dustin Moris Gorski)https://dusted.codes/you-dont-need-docker#disqus_threadMon, 12 Apr 2021 00:00:00 +0000https://dusted.codes/you-dont-need-dockerdockercontainersmicroservicesUsing .env in .NET<p>.NET (Core) comes with a lot of bells and whistles. One of them is the sheer number of ways to manage application secrets and settings. Developers can load application settings from a variety of sources using the <code>ConfigurationBuilder</code> class. The <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0">official ASP.NET Core documentation</a> lists the following options:</p>
<ul>
<li>Settings files, such as appsettings.json</li>
<li>Environment variables</li>
<li>Azure Key Vault</li>
<li>Azure App Configuration</li>
<li>Command-line arguments</li>
<li>Custom providers, installed or created</li>
<li>Directory files</li>
<li>In-memory .NET object</li>
</ul>
<p>Additionally .NET developers can choose between strongly typed configuration classes, <code>IOptions&lt;T&gt;</code> wrappers, <code>IOptionsSnapshot&lt;T&gt;</code> wrappers or the <code>IOptionsMonitor&lt;T&gt;</code> interface to access their settings. Theoretically there are also the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.options.ioptionschangetokensource-1?view=dotnet-plat-ext-5.0"><code>IOptionsChangeTokenSource&lt;T&gt;</code></a>, <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.options.ioptionsfactory-1?view=dotnet-plat-ext-5.0"><code>IOptionsFactory&lt;T&gt;</code></a>, <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.options.ioptionsmonitorcache-1?view=dotnet-plat-ext-5.0"><code>IOptionsMonitorCache&lt;T&gt;</code></a> interfaces and the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.options.optionsmanager-1?view=dotnet-plat-ext-5.0"><code>OptionsManager&lt;T&gt;</code></a> class, but most users will never need to use them.</p>
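<p>As a brief illustration of the strongly typed variant (a minimal sketch; the <code>ServerSettings</code> class, the <code>RedirectService</code> consumer and the <code>"Server"</code> section name are made up for this example), a settings class can be bound to a configuration section and then consumed through <code>IOptions&lt;T&gt;</code>:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code>// A strongly typed settings class mirroring the "Server" JSON section:
public class ServerSettings
{
    public int Port { get; set; }
    public bool ForceHttps { get; set; }
}

// Registration, typically inside ConfigureServices:
// services.Configure&lt;ServerSettings&gt;(config.GetSection("Server"));

// A consumer then receives the bound values via constructor injection:
public class RedirectService
{
    private readonly ServerSettings _settings;

    public RedirectService(IOptions&lt;ServerSettings&gt; options)
    {
        _settings = options.Value;
    }
}</code></pre>
<p>This is the basic pattern which the snapshot and monitor variants extend with reload semantics.</p>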
<p>Most modern cloud based applications don't even require half of those features. If anything, the <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/options?view=aspnetcore-5.0">ASP.NET Core options pattern</a> can feel a little bloated, which can overcomplicate an application and make it more difficult to understand for people outside one's team. The much simpler and often entirely sufficient alternative is environment variables!</p>
<div class="tip"><p><strong>Tip:</strong> If you would like to learn more about the different configuration implementations in ASP.NET Core then check out <a href="https://andrewlock.net/tag/configuration/" target="_blank">Andrew Lock's blog</a> where he wrote about <a href="https://andrewlock.net/using-multiple-instances-of-strongly-typed-settings-with-named-options-in-net-core-2-x/" target="_blank">several of the mentioned interfaces</a> and <a href="https://andrewlock.net/creating-singleton-named-options-with-ioptionsmonitor/" target="_blank">explained how and when to use them</a>.</p></div>
<h2 id="environment-variables">Environment variables</h2>
<p>In the cloud most settings are configured via environment variables. Their ease of configuration, widespread support and simplicity make environment variables a very compelling option. Setting environment variables during development is a little bit more tricky though. It's not any harder than in the cloud, but it's significantly more inconvenient when someone wants to quickly add, remove or edit a variable. Additionally there is a risk of collision when working on multiple applications at the same time. Environment variables like <code>LOG_LEVEL</code>, <code>SECRET_KEY</code> or <code>WEB_PORT</code> are common enough to appear in more than one project. Having to constantly change those values when switching context can become tiresome. Luckily environment variables can be configured at different levels: at the machine level, at the user level or for a single process. The latter is the preferred solution during development. Dotenv (<code>.env</code>) files are a great way of making that easy!</p>
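<p>The process level is also directly accessible from code. As a small sketch (the variable name is arbitrary), this is effectively what a <code>.env</code> loader does for every line it parses:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code>// Sets the variable for the current process only; it vanishes when
// the process exits and never touches the user or machine level
Environment.SetEnvironmentVariable("LOG_LEVEL", "Debug");

// The explicit equivalent of the call above:
Environment.SetEnvironmentVariable(
    "LOG_LEVEL", "Debug", EnvironmentVariableTarget.Process);

// The User and Machine targets persist beyond the process on Windows
// (and are no-ops on Linux/macOS), which makes them unsuitable here</code></pre>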
<h2 id="the-net-way">The .NET way</h2>
<p>Before explaining "Dotenv" files let's take a quick look at how configuration is typically done in .NET 5 (Core).</p>
<p>The framework strongly encourages developers to create an <code>appsettings.json</code> file in the root of their project and to configure their application settings in JSON:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> "Logging": {
</span></span><span style="display:flex;"><span> "Level": <span style="color:#ffa08f">"Debug"</span>
</span></span><span style="display:flex;"><span> },
</span></span><span style="display:flex;"><span> "Foo": <span style="color:#ffa08f">"foo"</span>,
</span></span><span style="display:flex;"><span> "Bar": <span style="color:#ffa08f">"bar"</span>,
</span></span><span style="display:flex;"><span> "Server": {
</span></span><span style="display:flex;"><span> "Port": <span style="color:#abfebc">8080</span>,
</span></span><span style="display:flex;"><span> "ForceHttps": <span style="color:#d179a3">true</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>However, not every cloud environment makes it easy to edit JSON files in a deployed application's directory, and therefore most .NET developers still end up using environment variables in production. The <code>ConfigurationBuilder</code> makes it possible to specify more than one source and load configuration settings from various places:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">var</span> config =
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">new</span> ConfigurationBuilder()
</span></span><span style="display:flex;"><span> .SetBasePath(Directory.GetCurrentDirectory())
</span></span><span style="display:flex;"><span> .AddJsonFile(<span style="color:#ffa08f">"appsettings.json"</span>, <span style="color:#d179a3">true</span>)
</span></span><span style="display:flex;"><span> .AddEnvironmentVariables()
</span></span><span style="display:flex;"><span> .Build();
</span></span></code></pre><p>The <code>AddEnvironmentVariables</code> instruction comes after <code>AddJsonFile</code>, which means that any environment variables which have been set would override a previously configured setting in <code>appsettings.json</code>. This is a common pattern and standard code seen in almost every .NET application.</p>
<p>One thing which is not obvious from the example above is the unidiomatic way in which environment variables must be declared to make this work. The .NET configuration architecture has been designed primarily with JSON files in mind, which means that .NET developers have to configure nested settings with a double underscore (<code>__</code>) in environment variables:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>LOGGING__LEVEL<span style="color:#d179a3">=</span><span style="color:#ffa08f">Debug</span>
</span></span><span style="display:flex;"><span>FOO<span style="color:#d179a3">=</span><span style="color:#ffa08f">foo</span>
</span></span><span style="display:flex;"><span>BAR<span style="color:#d179a3">=</span><span style="color:#ffa08f">bar</span>
</span></span><span style="display:flex;"><span>SERVER__PORT<span style="color:#d179a3">=</span><span style="color:#ffa08f">8080</span>
</span></span><span style="display:flex;"><span>SERVER__FORCEHTTPS<span style="color:#d179a3">=</span><span style="color:#ffa08f">true</span>
</span></span></code></pre><p>This is such an odd way of configuring environment variables that a name such as <code>SERVER__FORCEHTTPS</code> is almost a certain giveaway that the underlying application is built on .NET.</p>
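<p>For completeness (a small sketch using the <code>config</code> object built earlier), the double underscore maps onto the colon-separated key notation when the values are read back:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code>// SERVER__PORT=8080 surfaces as the configuration key "Server:Port"
// (configuration keys are case-insensitive)
var port = config["Server:Port"];             // "8080"
var forceHttps = config["Server:ForceHttps"]; // "true"

// Or, with the configuration binder package, converted to a typed value:
var portNumber = config.GetValue&lt;int&gt;("Server:Port");</code></pre>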
<h2 id="the-altnet-way-using-env">The ALT.NET way (using .env)</h2>
<p>I'd much rather keep my development environment as close to production as possible, and therefore use environment variables as the main configuration mechanism during development too.</p>
<p>What if instead of using a nested JSON document I could configure my application just like in production:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>LOG_LEVEL<span style="color:#d179a3">=</span><span style="color:#ffa08f">Debug</span>
</span></span><span style="display:flex;"><span>FOO<span style="color:#d179a3">=</span><span style="color:#ffa08f">foo</span>
</span></span><span style="display:flex;"><span>BAR<span style="color:#d179a3">=</span><span style="color:#ffa08f">bar</span>
</span></span><span style="display:flex;"><span>SERVER_PORT<span style="color:#d179a3">=</span><span style="color:#ffa08f">8080</span>
</span></span><span style="display:flex;"><span>SERVER_FORCE_HTTPS<span style="color:#d179a3">=</span><span style="color:#ffa08f">true</span>
</span></span></code></pre><p>Well, that's how developers would do it in many other programming languages where the use of <code>.env</code> files is more prevalent. A <code>.env</code> file is essentially just a flat file specifying environment variables like the ones above. When an engineer launches their application during development, the <code>.env</code> file gets parsed and all variables within it get set at the process level before anything else tries to read them. As the values are set at the process level they only persist for the currently executing process and vanish on shutdown.</p>
<p>This has several benefits over the <code>appsettings.json</code> approach. The first is its incredible simplicity. The second is predictability. There is no need to configure multiple configuration providers. An application retrieves its settings from one source and nowhere else. There is also no complexity around what happens when certain settings are stored in one location (e.g. <code>appsettings.json</code>) and other settings in another (e.g. environment variables). Will they merge or replace each other? If an application relies only on environment variables then this is not something to worry about.</p>
<p>Another benefit is how engineers think of configuration. The <code>appsettings.json</code> approach invites developers to create overly complex configuration hierarchies. They are easy to read and change during development, but more cumbersome to manage in production.</p>
<p>For example take this snippet as an illustration:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> "Databases": {
</span></span><span style="display:flex;"><span> "SqlServer": {
</span></span><span style="display:flex;"><span> "ConnectionString": <span style="color:#ffa08f">"foo-bar"</span>
</span></span><span style="display:flex;"><span> },
</span></span><span style="display:flex;"><span> "Redis": {
</span></span><span style="display:flex;"><span> "Endpoint": <span style="color:#ffa08f">"localhost:6379"</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>In JSON format this looks totally fine, but in reality it probably isn't. Apart from both being data persistence technologies, Redis and SQL Server have very little in common. In fact they are probably used for completely different application functionalities. Thus it makes very little sense to group them together under one universal <code>Databases</code> configuration node.</p>
<p>Remember, in production these will need to be configured as follows:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>DATABASES__SQLSERVER__CONNECTIONSTRING<span style="color:#d179a3">=</span><span style="color:#ffa08f">foo-bar</span>
</span></span><span style="display:flex;"><span>DATABASES__REDIS__ENDPOINT<span style="color:#d179a3">=</span><span style="color:#ffa08f">localhost:6379</span>
</span></span></code></pre><p>This notation makes it much more obvious that the original configuration structure is unfit for everyday use in production.</p>
<p>If environment variables were the primary configuration strategy during development too, then developers would presumably name them more sensibly:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>SQL_SERVER_CS<span style="color:#d179a3">=</span><span style="color:#ffa08f">foo-bar</span>
</span></span><span style="display:flex;"><span>REDIS_ENDPOINT<span style="color:#d179a3">=</span><span style="color:#ffa08f">localhost:6379</span>
</span></span></code></pre><p>Fortunately using <code>.env</code> in .NET is a straightforward alternative to <code>appsettings.json</code>.</p>
<h3 id="loading-env-files-in-c">Loading .env files in C#</h3>
<p>The code for loading and parsing a <code>.env</code> file is so simple that it hardly warrants the use of an external dependency via NuGet.</p>
<p>Personally I like to create a <code>DotEnv.cs</code> file in my C# project and copy the following code into it:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">namespace</span> YourApplication
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">using</span> System;
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">using</span> System.IO;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">public</span> <span style="color:#d179a3">static</span> <span style="color:#d179a3">class</span> <span style="color:#c2d975">DotEnv</span>
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">public</span> <span style="color:#d179a3">static</span> <span style="color:#d179a3">void</span> Load(<span style="color:#d179a3">string</span> filePath)
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> (!File.Exists(filePath))
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">return</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">foreach</span> (<span style="color:#d179a3">var</span> line <span style="color:#d179a3">in</span> File.ReadAllLines(filePath))
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span>                <span style="color:#8f8f8f">// Split on the first '=' only, so that values may contain '='</span>
</span></span><span style="display:flex;"><span>                <span style="color:#d179a3">var</span> parts = line.Split(
</span></span><span style="display:flex;"><span>                    <span style="color:#ffa08f">'='</span>,
</span></span><span style="display:flex;"><span>                    <span style="color:#abfebc">2</span>,
</span></span><span style="display:flex;"><span>                    StringSplitOptions.RemoveEmptyEntries);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">if</span> (parts.Length != <span style="color:#abfebc">2</span>)
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">continue</span>;
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> Environment.SetEnvironmentVariable(parts[<span style="color:#abfebc">0</span>], parts[<span style="color:#abfebc">1</span>]);
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>Then I add <code>DotEnv.Load("..")</code> at the beginning of the <code>Main</code> function inside my <code>Program.cs</code> file:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">public</span> <span style="color:#d179a3">static</span> <span style="color:#d179a3">class</span> <span style="color:#c2d975">Program</span>
</span></span><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">public</span> <span style="color:#d179a3">static</span> <span style="color:#d179a3">async</span> Task Main(<span style="color:#d179a3">string</span>[] args)
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">var</span> root = Directory.GetCurrentDirectory();
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">var</span> dotenv = Path.Combine(root, <span style="color:#ffa08f">".env"</span>);
</span></span><span style="display:flex;"><span> DotEnv.Load(dotenv);
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f">// Other code</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span>}
</span></span></code></pre><p>This makes sure that all environment variables get set before any class or function tries to access them.</p>
<p>Finally I specify environment variables as the only required configuration provider:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">var</span> config =
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">new</span> ConfigurationBuilder()
</span></span><span style="display:flex;"><span> .AddEnvironmentVariables()
</span></span><span style="display:flex;"><span> .Build();
</span></span></code></pre><p>Now I can add a <code>.env</code> file into the root of my application and configure environment variables like in production.</p>
<p>Of course the file doesn't have to be named <code>.env</code> and one can rename it to whatever suits best. Regardless of which name one settles on, don't forget to add it to the <code>.gitignore</code> file. Especially in open source projects you wouldn't want to commit development secrets into the public domain.</p>
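<p>For example, the relevant <code>.gitignore</code> entry is a single line (adjust it if you renamed the file):</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code># local development secrets, never committed
.env</code></pre>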
<h3 id="loading-env-files-in-f">Loading .env files in F#</h3>
<p>In F# the implementation is very similar to C#:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">namespace</span> YourApplication
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#d179a3">module</span> DotEnv <span style="color:#d179a3">=</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">open</span> System
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">open</span> System.IO
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">let</span> <span style="color:#dedede">private</span> parseLine<span style="color:#d179a3">(</span>line <span style="color:#d179a3">:</span> <span style="color:#d179a3">string</span><span style="color:#d179a3">)</span> <span style="color:#d179a3">=</span>
</span></span><span style="display:flex;"><span> Console.WriteLine <span style="color:#d179a3">(</span>sprintf <span style="color:#ffa08f">"Parsing: %s"</span> line<span style="color:#d179a3">)</span>
</span></span><span style="display:flex;"><span>        <span style="color:#d179a3">match</span> line<span style="color:#d179a3">.</span>Split<span style="color:#d179a3">(</span><span style="color:#ffa08f">'='</span><span style="color:#d179a3">,</span> 2<span style="color:#d179a3">,</span> StringSplitOptions.RemoveEmptyEntries<span style="color:#d179a3">)</span> <span style="color:#d179a3">with</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">|</span> args <span style="color:#d179a3">when</span> args<span style="color:#d179a3">.</span>Length <span style="color:#d179a3">=</span> 2 <span style="color:#d179a3">-></span>
</span></span><span style="display:flex;"><span> Environment.SetEnvironmentVariable<span style="color:#d179a3">(</span>
</span></span><span style="display:flex;"><span> args<span style="color:#d179a3">.[</span>0<span style="color:#d179a3">],</span>
</span></span><span style="display:flex;"><span> args<span style="color:#d179a3">.[</span>1<span style="color:#d179a3">])</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">|</span> <span style="color:#d179a3">_</span> <span style="color:#d179a3">-></span> <span style="color:#b4ddff">()</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">let</span> <span style="color:#dedede">private</span> load<span style="color:#b4ddff">()</span> <span style="color:#d179a3">=</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">lazy</span> <span style="color:#d179a3">(</span>
</span></span><span style="display:flex;"><span> Console.WriteLine <span style="color:#ffa08f">"Trying to load .env file..."</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">let</span> <span style="color:#dedede">dir</span> <span style="color:#d179a3">=</span> Directory.GetCurrentDirectory<span style="color:#b4ddff">()</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">let</span> <span style="color:#dedede">filePath</span> <span style="color:#d179a3">=</span> Path.Combine<span style="color:#d179a3">(</span>dir<span style="color:#d179a3">,</span> <span style="color:#ffa08f">".env"</span><span style="color:#d179a3">)</span>
</span></span><span style="display:flex;"><span> filePath
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">|></span> File.Exists
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">|></span> <span style="color:#d179a3">function</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">|</span> <span style="color:#d179a3">false</span> <span style="color:#d179a3">-></span> Console.WriteLine <span style="color:#ffa08f">"No .env file found."</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">|</span> <span style="color:#d179a3">true</span> <span style="color:#d179a3">-></span>
</span></span><span style="display:flex;"><span> filePath
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">|></span> File.ReadAllLines
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">|></span> Seq.iter parseLine
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">let</span> <span style="color:#dedede">init</span> <span style="color:#d179a3">=</span> load<span style="color:#b4ddff">()</span><span style="color:#d179a3">.</span>Value
</span></span></code></pre><p>The main difference is that the <code>load()</code> function has been made private and is lazily evaluated via the <code>init</code> value, meaning that the code inside <code>load</code> only gets executed once, regardless of how often <code>DotEnv.init</code> gets called. This allows the loading of environment variables before <code>Program.fs</code> gets invoked.</p>
<p>In functional programming it is very common to make use of static variables and functions. For example I often load my application settings using a static module like this:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span><span style="color:#d179a3">module</span> Config <span style="color:#d179a3">=</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">open</span> System
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">let</span> <span style="color:#dedede">private</span> get key <span style="color:#d179a3">=</span>
</span></span><span style="display:flex;"><span> DotEnv.init
</span></span><span style="display:flex;"><span> Environment.GetEnvironmentVariable key
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">let</span> <span style="color:#dedede">secretKey</span> <span style="color:#d179a3">=</span> get <span style="color:#ffa08f">"SECRET_KEY"</span>
</span></span><span style="display:flex;"><span> <span style="color:#d179a3">let</span> <span style="color:#dedede">redisEndpoint</span> <span style="color:#d179a3">=</span> get <span style="color:#ffa08f">"REDIS_ENDPOINT"</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f">// etc.
</span></span></span></code></pre><p>Those static values get initialised as soon as the assembly loads into the application domain, which is well in advance of any code being called in <code>Program.fs</code>. Therefore I have to place the <code>DotEnv.init</code> command inside the <code>get</code> helper function, making sure that settings from the <code>.env</code> file get initialised before the first <code>Environment.GetEnvironmentVariable</code> invocation. Given that <code>DotEnv.load()</code> is <code>lazy</code> it will only execute once and not reload the <code>.env</code> file on subsequent calls.</p>
<p>Additionally I must also put the <code>DotEnv.fs</code> file as the first compilation item in the <code>.fsproj</code> file:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>&lt;ItemGroup&gt;
</span></span><span style="display:flex;"><span>  &lt;Compile Include=<span style="color:#ffa08f">"DotEnv.fs"</span> /&gt;
</span></span><span style="display:flex;"><span>  &lt;Compile Include=<span style="color:#ffa08f">"Other.fs"</span> /&gt;
</span></span><span style="display:flex;"><span>  &lt;Compile Include=<span style="color:#ffa08f">"Stuff.fs"</span> /&gt;
</span></span><span style="display:flex;"><span>  &lt;Compile Include=<span style="color:#ffa08f">"Program.fs"</span> /&gt;
</span></span><span style="display:flex;"><span>&lt;/ItemGroup&gt;
</span></span></code></pre><p>All in all this completely replaces .NET's sprawling configuration pattern with an extremely simple solution. It's "cloud native", as Microsoft likes to call it, and extremely easy to understand.</p>
<p>Just like in C# don't forget to add the <code>.env</code> file to your <code>.gitignore</code> rules!</p>
<h2 id="side-notes">Side notes</h2>
<h3 id="what-if-an-environment-variable-changes">What if an environment variable changes?</h3>
<p>A lot of the complexity in .NET's configuration classes comes from the "need" to react to changes. A web server is a long running process, and if someone wants to change a value in <code>appsettings.json</code> then any functionality which relies on that setting also has to learn about the update. However most modern application hosting solutions, such as serverless functions or Kubernetes clusters, automatically reload an application on configuration changes, so while it might be an interesting problem to think about, it's more of a theoretical than a practical issue. The simple <code>.env</code> solution works just fine.</p>
<h3 id="why-not-load-environment-variables-via-x">Why not load environment variables via X?</h3>
<p>Could I not just load environment variables via:</p>
<ul>
<li>bash/PowerShell?</li>
<li>launchSettings.json?</li>
<li>this tool?</li>
<li>that tool?</li>
<li>etc.?</li>
</ul>
<p>Yes, there are many ways to load environment variables into the process before launching an application. However, in this blog post I wanted to show a way which satisfies two requirements (which most of these tools don't):</p>
<ol>
<li>It works for everyone, regardless of OS or IDE</li>
<li>It works during <kbd>F5</kbd> debugging from an IDE as well</li>
</ol>
<p>Ultimately it doesn't matter how you load environment variables, but if it can be done from within .NET so that it just works for everyone on every platform, and also works during debugging without any extra hacks then why not go with such a solution?</p>
<h3 id="existing-oss-projects-for-net">Existing OSS projects for .NET?</h3>
<p>If you were wondering whether there are any existing .NET OSS projects which support <code>.env</code> files then you will be pleased to hear that there are several, such as <a href="https://github.com/bolorundurowb/dotenv.net">dotenv.net</a>, <a href="https://github.com/tonerdo/dotnet-env">dotnet-env</a> and <a href="https://github.com/codeyu/net-dotenv">net-dotenv</a>. I have not used any of them but they all seem to be actively maintained.</p>
https://dusted.codes/dotenv-in-dotnet
[email protected] (Dustin Moris Gorski)https://dusted.codes/dotenv-in-dotnet#disqus_threadSun, 10 Jan 2021 00:00:00 +0000https://dusted.codes/dotenv-in-dotnetdotnetcsharpfsharpEqual pay for equal work<p>Remote work is on the rise, the economy is in decline and businesses are re-structuring themselves around what many consider the new normal. Increasingly more organisations are opening themselves up to the possibility of working remotely beyond just the temporary measure in the fight against COVID-19. Many begin to see remote work not only as a necessary consequence, but rather as a new opportunity to diversify their workforce, increase employee productivity and cut some cost. The biggest cost saving obviously comes from office space, and not just in terms of raw square footage but also in location. Where previously companies had to boast huge HQs in expensive areas, they can now spread themselves more thinly across more affordable cities. Another big cost saving can come from employee salaries. A more distributed workforce and more work from home opportunities will inevitably mean a bigger talent pool competing for the same open positions, effectively giving employers a bigger choice.</p>
<p>However, when it comes to existing employees' salaries, the directions taken by big tech companies couldn't stand in starker contrast. Companies like <a href="https://www.cnbc.com/2020/05/21/zuckerberg-50percent-of-facebook-employees-could-be-working-remotely.html">Facebook or Twitter</a> have made it clear that employees who choose to relocate to more affordable cities can expect big cuts to their existing salary, whereas companies like <a href="https://redditblog.com/2020/10/27/evolving-reddits-workforce/">Reddit announced that they won't reduce the salary of any of their 600 US workers</a> regardless of where they live.</p>
<p>These recent announcements have revived a <a href="https://www.helpscout.com/blog/remote-employee-compensation/">long-standing debate</a> around the topic of <a href="https://www.nityesh.com/equal-pay-for-equal-work-at-a-remote-company/">equal pay for equal work</a>. Even before COVID-19, <a href="https://about.gitlab.com/blog/2019/02/28/why-we-pay-local-rates/">GitLab had already sparked a lot of controversy</a> around the concept of "cost of living" adjusted salaries. At GitLab two engineers who perform the exact same work could be compensated vastly differently based on where they live. It doesn't require a lot of imagination to understand that such policies can alienate a lot of good people and further contribute to the perception of an ever-growing inequality gap. Personally I find cost-of-living adjusted salaries very problematic as they deflect from the real market forces which come into play. Those should be a decent minimum living wage and the forces of demand & supply. If someone is a professional in a niche market, a rare specialist in a scientific field or an exceptionally sought-after engineer then cost of living should have no say in determining their pay. Those so-called knowledge workers are often in higher demand than there is supply. A good indicator of the scarcity of one's profession is when their employer recruits talent from all across the world and advertises unique perks such as relocation bonuses, dental care or exceptional holiday packages. These benefits would not exist if there wasn't fierce competition for a particular skill.</p>
<p>Regardless of the real economics at play, it still comes down to a good old negotiation where each person has to stand up for their own beliefs and reach an agreement on their pay. If you find yourself in a situation where your current salary might be at risk due to a recent relocation, then hopefully the following write-up can help you to negotiate equal pay for equal work. It is a curated list of relevant points to formulate a strong argument for why one should receive the same, or perhaps even better, pay for the same work carried out from a remote position.</p>
<h2 id="same-duties-same-pay">Same duties, same pay</h2>
<p>If you're reading this blog post then you're most likely not being paid for your time or place of work. You are being paid for your knowledge, your skills, your duties, the challenges which come with your work and, most importantly, the responsibilities of your role. The professionalism and personal commitment which you bring to your everyday job will not diminish when you move to a new place. Whether you work remotely or not, you will still contemplate that tough work problem in your spare time. You will still lose sleep over that big presentation the next day. You will still stay up late to meet an important deadline by the end of the month. You do it because you take pride in your work. You do it because of a sense of personal accountability to your team. It is those qualities which have earned you the trust of your manager over the years. It is those qualities which earned you a pay rise long ago. Changing postcodes doesn't take away your accomplishments from the past. Changing postcodes shouldn't take away the things which you have already earned.</p>
<h2 id="same-value-same-pay">Same value, same pay</h2>
<p>The value of your contributions hasn't been compromised by moving house. Customers still pay the same price for the products which you have helped to build. Sales figures didn't drop when you went remote. Why should your employer reduce your pay when they are not passing that cost saving on to their customers by cutting their own prices? By the same principle that your employer won't reduce their prices when downsizing their office, you shouldn't accept a lower salary after moving house. The concept is the same, and you shouldn't accept one principle for them but another for yourself. As you continue to deliver the same value and quality as before, you equally deserve the same compensation in return.</p>
<h2 id="your-savings-their-savings">Your savings, their savings</h2>
<p>Let's be clear, when you transitioned from office to home you weren't the only one who benefited from a lower cost of living as a result. Whatever savings you might have made on your rent, your employer now makes too. Every person in an office requires an extra desk. Every additional person requires a fraction more bathroom space. More people mean bigger kitchens, bigger break-out areas, more meeting rooms, larger hallways, bigger stairways and more facilities to comply with fire safety regulations. There is more pressure on lifts to avoid bottlenecks during peak times. There is more building maintenance work to be done and more frequent cleaning intervals are required. A bigger workforce automatically means more individual needs. Extra bicycle storage rooms, canteens, car parks, private offices, outdoor spaces, a bigger selection of refreshments and a larger variety of office perks, to name just a few. Unless your employer is prepared to share their own operational savings directly with you, why should you share your personal operational savings with them? Fundamentally their savings are theirs, and your savings are yours.</p>
<h2 id="their-profit-your-profit">Their profit, your profit</h2>
<p>Have you ever received a pay rise when your employer re-structured themselves to benefit from lower tax? Have you ever received a pay rise when your employer opened up a new factory in a less regulated place? Have you ever received a pay rise when your employer outsourced a call centre to a country without a minimum wage? Probably not, because it was your employer and not you who took the risk. Guess what, when you are courageous enough to relocate to a different country, city or state, then any financial gains from your move are also only yours to claim. This shouldn't be a huge surprise as nothing comes without its own significant risk. Economic security, social safety nets, unemployment benefits, retirement support, access to public health services, and the cost of quality education or medical treatment are often some of the compromises which have to be taken into account. Simple things such as free playgrounds for children, access to public recreational grounds, well-maintained parks, a good tax-funded public transport system or basic luxuries such as political stability or the ability to safely park a new car on a public road are many other cost calculations not to be dismissed. Of course this is not always the case, but those considerations are for you to make. Nobody else, particularly not your employer, who has a financial interest in dismissing or downplaying those issues, should be making those calculations on your behalf. This is such a basic principle that it's almost offensive to suggest the opposite. Your employer should not decide what sort of lifestyle you deserve.</p>
<h2 id="higher-cost-higher-pay">Higher cost, higher pay</h2>
<p>Contrary to common belief, working from home is not cheap. First of all, working permanently from home requires an entire additional room. If your family lived in a three-bedroom house before, now you need four. Thanks to COVID-19 most will agree that working from a sofa is neither practical, nor sustainable, nor realistic by any means. A proper office desk and a good ergonomic chair are a necessity at the very least. Those requirements alone make a legitimate home office significantly more expensive than many would like to admit. Throw in a couple of monitors, a quality webcam, microphone, printer, shredder, peripheral devices, a whiteboard, noise-cancelling headphones, a mesh router and a backup laptop (assuming you'll be provided with a work laptop) and the true cost of a home office begins to reach eye-watering levels.</p>
<p>Some employers will try to provide you with those supplies, but any seasoned remote worker will tell you to decline such an offer. After all, you're still equipping your own personal home. You shouldn't have to pick from a selection of office desks which don't match your walls. You shouldn't have to settle on a chair which doesn't appeal to your eye. You shouldn't have to accept a black-bezelled screen when all your other equipment is in space grey. Most importantly though, you want to treat those things as your own. You don't want to change your monitors when you change your job. You don't want to have to go through your employer to make a warranty claim. You don't want to ask for permission to replace a worn-out chair. Instead you want a pay rise which allows you to set up your home office in the most productive way. If your employer is smart then they will not want to manage all of that inventory (which they never get to see) either.</p>
<p>Speaking of inventory, the true cost of working from home does not stop there. When working remotely nothing disrupts productivity more than a low-bandwidth internet connection. One cannot constantly have people cutting out in important meetings. Pair programming is not possible with a two-second lag. Worst of all is when one cannot work at all, because their internet provider has yet another outage in a short period of time. Working professionally requires a professional internet connection, whether from an office or from home. A fibre optic connection from a reputable provider with a high SLA is often 3-4 times more expensive than what most households have today. It's a significant cost which remote workers can hardly avoid paying.</p>
<p>Fibre optic is not always the holy grail though. About a year ago hundreds of <a href="https://www.irishnews.com/magazine/technology/2019/12/20/news/virgin-media-customers-hit-by-service-outage-after-cable-cut-1796372/">London households were affected by a major outage</a> because a residential construction site accidentally drilled into one of the main regional cables underground. Households, businesses and even entire hospitals were left without internet for several days. Now if this happened in a traditional office then employees would still go to work, stand by the coffee machine all day and still get paid. Meanwhile office management would go berserk and erratically try to work around a problem which cannot be easily fixed. At some point the issue would eventually get resolved and everyone would carry on. However, if the same happened in a remote work environment then employees would be expected to put adequate counter-measures into place. Unfortunately mine was one of the households affected by this outage at the time. My options were to either sign up with a co-working space and work from there, or pay extra for unlimited mobile data as a backup plan. I opted for the latter as it made more sense, and <a href="https://www.bbc.co.uk/news/business-48271553">time has shown</a> that <a href="https://www.bbc.co.uk/news/uk-england-hampshire-51752912">exceptional cases</a> tend to occur <a href="https://www.bbc.co.uk/news/technology-52448607">more regularly</a> when one cannot afford for them to happen.</p>
<p>Finally, remote workers - who essentially conduct business from home - also have to pay a higher home insurance premium and significantly increased utility bills. All in all, the recurring and one-off expenses are sky high and must be accounted for in a remote worker's pay.</p>
<h2 id="not-important-then-not-important-now">Not important then, not important now</h2>
<p>Think back to the time when you got hired for your current role. Did anyone ask you about your cost of living then? During your interview did someone ask you about your monthly rent? I'd be surprised if that was the case. Instead you were probably asked to traverse a binary tree. You had to solve a whiteboard puzzle to justify your pay. Cost of living was never a concern. Many of your peers received the same pay before the housing market went through the roof. Many of your peers might not have paid any rent at all. What if someone inherited a home or bought extremely cheaply a few years ago? Equal distribution was never your employer's aim. Adjusting salaries to the cost of living cannot just be arbitrarily introduced when it suits your employer the most. You are not worth less when you move.</p>
<h2 id="negotiate-like-a-friend">Negotiate like a friend</h2>
<p>Whatever one's situation is, however difficult a negotiation might be, treat people as if they were your friends. Always be kind, remain friendly, negotiate in good faith and explain your thoughts and concerns with respect. The best way to achieve your goal is by getting people on your side. Make yourself a friend. Imagine you were a manager and how far you would go yourself for someone whom you respect. How many hierarchies would you fight through in order to secure a valuable employee's pay? No supervisor or manager would want to lose a good employee on their team over a pay policy which is partially out of their control. People will jump through those hoops if you stand up for yourself in a positive and friendly way. Don't underestimate what an amicable negotiation can achieve. A hostile negotiator can only get what the other party has to give. A friendly negotiator, however, can get things which others thought were not even up for debate.</p>
https://dusted.codes/equal-pay-for-equal-work
[email protected] (Dustin Moris Gorski)https://dusted.codes/equal-pay-for-equal-work#disqus_threadSat, 02 Jan 2021 00:00:00 +0000https://dusted.codes/equal-pay-for-equal-workremote-worksoft-skills.NET for Beginners<p>Last night I came across this question on Reddit:</p>
<blockquote>
<p>Hello, I am just starting to learn c# and i am about to start with a course on Udemy by Mosh Hamedani called "C# Basics for Beginners: Learn C# Fundamentals by Coding".
I can see that he is using .NET Framework but i have read that .NET Core is newer and is the future? I am really not sure where to start and would appreciate if anyone could help me. I would like to learn C# to have a better understanding with OOP and to learn a programming language to help with my University course. Thank you</p>
<footer><cite><a href="https://www.reddit.com/r/csharp/comments/hkmnue/i_am_just_starting_to_learn_c_and_i_am_confused/">I am just starting to learn C# and i am confused about .NET framework & .NET Core</a>, Reddit</cite></footer>
</blockquote>
<p>First of all, congratulations to the author for learning C#, which is a great choice of programming language, and for putting in the extra effort to better understand the concepts of OOP (object oriented programming)! I genuinely hope that they'll have a great experience learning .NET and, most importantly, that they'll enjoy picking up some new skills and have fun along the way! I remember when I started to code (more than 20 years ago) and how much fun it was for me! It is also great to see how many people have replied with really helpful answers and provided some useful guidance in the original thread! It is a true testament to how supportive and welcoming the .NET community is! However, despite these positive observations I'm still a little bit gutted that such a question even had to be asked in the first place. I don't imply any fault on the author themselves - quite the contrary - I really sympathise with them and understand the sort of struggles which a new C# or F# developer may face. Questions like these demonstrate how complex .NET has become over the years. It is a good time to take a step back and reflect!</p>
<h2 id="high-cognitive-entry-barrier">High cognitive entry barrier</h2>
<p>.NET has a high cognitive entry barrier. What I mean by this is that a new developer has to familiarise themselves with rather boring and complex topics way too early in their educational journey. In particular, questions around the differences between .NET, .NET Core, Mono and Xamarin, the relation between C#, F# and VB.NET, and what a target framework or a runtime is add little to no benefit to one's initial learning experience. Most newcomers just want to read a tutorial, watch a video or attend a class and then be able to apply their newly learned skills in an easy and uncomplicated way. Ideally educational material such as purchased online courses or books from more than a few months ago should still be relevant today. Unfortunately neither of these holds true for .NET. As a matter of fact it's almost a given that any content which was written more than a year ago is largely outdated today. Just think about the <a href="https://devblogs.microsoft.com/dotnet/introducing-net-5/">upcoming release of .NET 5</a> and how it will invalidate most of the lessons taught just a few months ago. Another prominent example is the very short-lived <a href="https://docs.microsoft.com/en-us/dotnet/standard/net-standard">.NET Standard</a>. Once hugely evangelised and now de facto dead. I personally think Microsoft, the .NET team and to some extent the wider .NET community have failed to make .NET a more beginner-friendly platform. I say this from a good place in my heart. Programming is an extremely powerful skill which can allow people to uplift themselves from all sorts of backgrounds, and the harder we make it for beginners, the more exclusive we make the entry to our profession. I think the success of the next ten years will be defined by how accessible we make .NET to a whole new population of developers, and I think there's some real work to be done in order to improve the current situation!</p>
<h2 id="what-about-others">What about others?</h2>
<p>Before you think that the current complexity is normal in programming and nothing endemic to .NET itself, I'm afraid I have to prove you wrong. <a href="https://www.php.net">Take a look at PHP</a>. PHP has evolved a lot over the years and while it has introduced more complicated object oriented concepts and improved on its performance, it hasn't lost the beauty of its original simplicity at all. PHP is a great programming language to learn. Beginner content exists like sand on a beach. The online community, forums, subreddits and conferences are second to none. Most importantly, a student could pass a several-years-old PHP book on to another person and they would still get a lot of value from it. The language might have moved on, but many things which were taught at a beginner level are still 100% accurate today. This is a huge advantage which gets easily forgotten by more advanced developers like myself. Furthermore, the distinction between the language and the interpreter is not something which a new PHP developer has to understand. A beginner doesn't have to wrap their head around such relatively unimportant topics to get started. They just know that they've installed PHP and it works.</p>
<p>Another great example (and there are many great examples) is <a href="https://golang.org">Go</a>. Go is not a new language at all. It's <a href="https://opensource.googleblog.com/2009/11/hey-ho-lets-go.html">been around for more than 10 years</a> and despite huge improvements to the language, the compiler and the standard library it has remained faithful to its original simple design. Similar to PHP, a new developer doesn't have to think about complicated nuances between "target frameworks", weird gotchas if they write Go in one IDE or another, and they certainly don't have to look up a complicated version matrix to understand the intricate relations between an SDK, the runtime and newly available language features. There is just one version which maps to a well-defined list of features, bug fixes and improvements in Go. Its documentation is simple and easy to comprehend. It is a very beginner-friendly language.</p>
<h2 id="improving-net">Improving .NET</h2>
<p>Why is .NET so complicated? Well, the answer is perhaps not that easy itself. It's mostly history, legacy baggage, some bad decisions in the past and what I believe is an overly eager desire (almost an urgency) to change things for change's sake. I will name a few selected issues from my personal observation, describe the problems as I have perceived them and try to provide some constructive suggestions which I think could be a good improvement for the future.</p>
<p>Let me introduce you to the <strong>6 Sins of .NET</strong>:</p>
<ul>
<li><a href="#language-spaghetti">Language Spaghetti</a></li>
<li><a href="#version-overflow">Version Overflow</a></li>
<li><a href="#net-everywhere">.NET Everywhere</a></li>
<li><a href="#all-eyez-on-me">All Eyez on Me</a></li>
<li><a href="#architecture-break-down">Architecture Break Down</a></li>
<li><a href="#name-overload">Name Overload</a></li>
</ul>
<h3 id="language-spaghetti">Language Spaghetti</h3>
<p>If a person sets out to learn C# (like the author of the question above), what do they learn? Is it C# or .NET? The answer is both. C# doesn't exist without .NET and you cannot program .NET without C# (or F# or VB.NET for that matter). This is not a problem in itself, but it is certainly where some of the issues begin. A beginner doesn't just learn C# but also has to learn the inner workings of .NET. Things get even more confusing when C# isn't the initial .NET language to learn. Supposedly the loose relationship between .NET languages shouldn't really matter because they all compile into IL and become cross-compatible. <a href="https://github.com/dotnet/vblang/issues/300">Except when they don't</a>:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/csharp-unmanaged-constraint-leak.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/csharp-unmanaged-constraint-leak.png" alt="C# unmanaged constraint language leak"></a></p>
<p>The <a href="https://docs.microsoft.com/en-us/dotnet/standard/framework-libraries#base-class-libraries">BCL (Base Class Libraries)</a> provide the foundation for all three languages, yet they are only written in C#. That's not really an issue unless entire features were written with only one language in mind and are extremely cumbersome to use from another. For example, F# still doesn't have <a href="https://github.com/fsharp/fslang-suggestions/issues/581">native support for <code>Task</code> and <code>Task&lt;T&gt;</code></a>, converting between <code>System.Collections.*</code> classes and F# types is a painful undertaking, and F# functions and .NET <code>Action&lt;T&gt;</code> objects don't map very well.</p>
<p>The most frustrating thing though is when <a href="https://docs.microsoft.com/en-us/dotnet/csharp/nullable-references">changes to one language</a> force <a href="https://github.com/fsharp/fslang-suggestions/issues/577">additional complexity on another</a>.</p>
<p>Interop issues between those three languages are a big burden on new developers, particularly when they start with something other than C#. However, as painful as this might be, interop problems are not the only complexity which a beginner has to face. Try to explain to a new C# user in a meaningful way when they should use <a href="https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/inheritance">inheritance with standard classes</a>, <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/interfaces/">interfaces</a>, <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/abstract-and-sealed-classes-and-class-members">abstract classes</a> or <a href="https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/default-interface-methods-versions">interfaces with default method implementations</a>. The differences between those options have been so watered down that one cannot explain the distinctions in a single coherent answer anymore.</p>
<p>Take this <a href="https://stackoverflow.com/questions/2570814/when-to-use-abstract-classes">StackOverflow question</a> as an example. Everything described in the accepted (and most upvoted) answer below is also true for interfaces with default method implementations today:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/abstract-class-question-stack-overflow.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/abstract-class-question-stack-overflow.png" alt="Abstract class question on StackOverflow"></a></p>
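<p>To make the watered-down distinction concrete, here is a small sketch (the type names are invented for this example) showing how a C# 8 interface with a default method implementation mirrors a classic abstract class almost one-to-one:</p>

```csharp
using System;

// A classic abstract class: one abstract member plus shared behaviour.
public abstract class GreeterBase
{
    public abstract string Name { get; }
    public string Greet() => $"Hello, {Name}!";
}

// A C# 8 interface with a default method implementation: structurally it
// now offers the same mix of abstract members and shared behaviour.
public interface IGreeter
{
    string Name { get; }
    string Greet() => $"Hello, {Name}!";
}

public class ClassGreeter : GreeterBase
{
    public override string Name => "World";
}

public class InterfaceGreeter : IGreeter
{
    public string Name => "World";
}
```

<p>Real differences do remain - an abstract class can hold instance fields and constructors, while a type can implement many interfaces but inherit only one base class - but the once-simple one-line answer ("interfaces have no implementation") no longer holds.</p>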
<p>Another great example is the growing variety of C# data types. When is it appropriate to create a <a href="https://docs.microsoft.com/en-us/aspnet/web-api/overview/data/using-web-api-with-entity-framework/part-5">data class</a>, an <a href="https://www.c-sharpcorner.com/article/all-about-c-sharp-immutable-classes2/">immutable data class</a>, a <a href="http://mustoverride.com/tuples_structs/">mutable struct</a>, an <a href="https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/struct#readonly-struct">immutable struct</a>, a <a href="https://docs.microsoft.com/en-us/dotnet/api/system.tuple?view=netcore-3.1">tuple class</a>, the new concept of <a href="https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/value-tuples">named tuples</a> or the upcoming <a href="https://www.stevefenton.co.uk/2020/05/csharp-9-record-types/">record type</a>?</p>
<p>Seasoned .NET developers are very opinionated about when to use each of these types and, depending on who a beginner asks or which StackOverflow thread they read, they will most likely get very different and often contradictory advice. Anyone who doesn't think that this is a massive problem must be in serious denial. Learning a programming language is hard enough on its own. Learning a programming language where two mentors (whom you might look up to) give you contradictory advice is even harder.</p>
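<p>To illustrate the overlap, here is a sketch (type names invented for this example) of several ways to model the same two-field value in C# as of mid-2020 - before even counting the record type proposed for C# 9:</p>

```csharp
using System;

// A mutable "data class": the traditional default.
public class PointClass
{
    public int X { get; set; }
    public int Y { get; set; }
}

// An immutable data class: get-only properties set via the constructor.
public class ImmutablePoint
{
    public ImmutablePoint(int x, int y) { X = x; Y = y; }
    public int X { get; }
    public int Y { get; }
}

// An immutable value type (C# 7.2+ readonly struct).
public readonly struct PointStruct
{
    public PointStruct(int x, int y) { X = x; Y = y; }
    public int X { get; }
    public int Y { get; }
}

public static class Demo
{
    public static void Run()
    {
        var a = new PointClass { X = 3, Y = 4 };
        var b = new ImmutablePoint(3, 4);
        var c = new PointStruct(3, 4);
        (int X, int Y) d = (3, 4); // a named value tuple: no declared type at all
        Console.WriteLine($"{a.X} {b.X} {c.X} {d.X}");
    }
}
```

<p>Each option carries the same information; which one a beginner "should" use depends on mutability, copy semantics and equality behaviour - exactly the kind of nuance that invites conflicting guidance.</p>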
<p>The irony is that many of the newly added language and framework features are supposed to make .NET easier to learn:</p>
<p><a href="https://twitter.com/shanselman/status/1281856685657616384"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/tweet-dotnet-easier.png" alt="Make .NET easier"></a></p>
<p><em>(BTW, I have been a huge proponent of dropping the <code>.csproj</code> and <code>.sln</code> files for a very long time but previously Microsoft employees defended them as if someone offended their family, so it's nice to finally see some support for that idea too! :))</em></p>
<p>Don't get me wrong, I agree with Scott that <em>this particular feature</em> will make writing a first hello world app a lot easier than before. However, our friend Joseph Woodward makes a good point that nothing comes for free:</p>
<p><a href="https://twitter.com/joe_mighty/status/1281899362281566208"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/tweet-dotnet-reboot-3.png" alt="With great power comes great responsibility"></a></p>
<p>And he's not alone with this idea:</p>
<p><a href="https://twitter.com/buhakmeh/status/1281985930279223298"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/tweet-dotnet-hard-2.png" alt=".NET is hard"></a></p>
<p><a href="https://twitter.com/FransBouma/status/1282059062247555082"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/tweet-dotnet-hard-3.png" alt=".NET is hard"></a></p>
<p>It's not just about learning how to write C#. A huge part of learning .NET is reading other people's code, which is also getting inherently more difficult as a result:</p>
<p><a href="https://twitter.com/1amjau/status/1281908190167347200"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/tweet-dotnet-hard-4.png" alt=".NET is hard"></a></p>
<p>Whilst I do like and support the idea of adding more features to C#, I cannot ignore the fact that it also takes its toll.</p>
<p>Some people raised a good point that it might be time to consider making some old features obsolete:</p>
<p><a href="https://twitter.com/RustyF/status/1281860368369963008"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/tweet-dotnet-reboot-2.png" alt=".NET is hard"></a></p>
<p><a href="https://twitter.com/RehanSaeedUK/status/1281943982189228032"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/tweet-dotnet-reboot-1.png" alt=".NET is hard"></a></p>
<p>Whatever one's personal opinion is, "feature bloat" is certainly becoming a growing concern in the C# community and Microsoft would be stupid not to listen or at least take some notes.</p>
<p>Given that F# is already a <a href="https://dusted.codes/why-you-should-learn-fsharp">functional-first multi-paradigm language</a>, and C# is definitely heading in that direction too, maybe one day there's an opportunity to consolidate both languages into one? (YES, I dared to suggest it!) Either that, or Microsoft should establish 100% feature parity so that interop between the languages is seamless and the only differentiating factor remains their syntax - one geared towards a functional-first experience and the other towards the object oriented equivalent.</p>
<h3 id="version-overflow">Version Overflow</h3>
<p>As mentioned above, all three .NET languages evolve independently from .NET. C# is heading towards <a href="https://devblogs.microsoft.com/dotnet/welcome-to-c-9-0/">version 9</a>, F# is approaching <a href="https://devblogs.microsoft.com/dotnet/announcing-f-5-preview-1/">version 5</a> and VB.NET is already on <a href="https://docs.microsoft.com/en-us/dotnet/visual-basic/getting-started/whats-new#visual-basic-160">version 16</a>. Meanwhile .NET Framework is on <a href="https://dotnet.microsoft.com/download/dotnet-framework/net48">version 4.8</a> and .NET Core on <a href="https://dotnet.microsoft.com/download/dotnet-core/3.1">version 3.1</a>. Don't even get me started on <a href="https://www.mono-project.com">Mono</a>, <a href="https://dotnet.microsoft.com/apps/xamarin">Xamarin</a> or <a href="https://docs.microsoft.com/en-us/aspnet/core">ASP.NET</a>.</p>
<p>Now I understand that all these things are very different and I'm comparing apples with oranges, but how is a new developer supposed to know all of that? All these components are independent and yet correlated enough to overwhelm a new developer with screens like this:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-core-versions.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-core-versions.png" alt=".NET Core Versions"></a></p>
<p>Even .NET developers with 5+ years of experience find the above information hard to digest. I know this for a fact because I often ask this question in interviews and it's rare to get a correct explanation. The problem is not that this information is too verbose, or wrong, or unnecessary to know, but rather the unfortunate fact that this is just how big .NET has become. In fact it's even bigger, but I believe that the .NET team has already tried their best to condense this screen into the simplest form possible. I understand the difficulty - if you've got a very mature and feature-rich platform then there's a lot to explain - but nevertheless it's not a particularly sexy look.</p>
<p>In contrast to .NET this is the cognitive load which is thrown at a beginner in Go:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/go-versions.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/go-versions.png" alt="Go Versions"></a></p>
<p>It's much simpler in every way. Admittedly it's not an entirely fair comparison because Go gets compiled directly into machine code and therefore there isn't a real distinction between SDK and runtime, but my point is still the same. There is certainly plenty of room for improvement and I don't think what we see there today is the best we can do.</p>
<p>Maybe there's some value in officially aligning language, ASP.NET Core and .NET (Core) versions together and shipping one coherent release every time? Fortunately .NET 5 is a step in the right direction, but in my opinion there's still more to do!</p>
<h3 id="net-everywhere">.NET Everywhere</h3>
<p>Now this one will probably hit some nerves, but one of the <em>big</em> problems with .NET is that Microsoft is obsessed with the idea of <a href="https://www.hanselman.com/blog/NETEverywhereApparentlyAlsoMeansWindows311AndDOS.aspx">.NET Everywhere</a>. Every <a href="https://visualstudiomagazine.com/articles/2020/06/30/uno-visual-studio.aspx">new development</a> aims at <a href="https://devblogs.microsoft.com/dotnet/introducing-net-multi-platform-app-ui/">unifying everything</a> into <a href="https://dotnet.microsoft.com/learn/dotnet/what-is-dotnet">one big platform</a>, catering for <a href="https://dotnet.microsoft.com/apps/xamarin/mobile-apps">every</a> <a href="https://dotnet.microsoft.com/apps/iot">single</a> <a href="https://dotnet.microsoft.com/apps/gaming">possible</a> <a href="https://dotnet.microsoft.com/apps/cloud">use</a> <a href="https://dotnet.microsoft.com/apps/desktop">case</a> <a href="https://dotnet.microsoft.com/apps/machinelearning-ai">imaginable</a>:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-5-everywhere.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-5-everywhere.png" alt=".NET Everywhere"></a></p>
<p><em>(Thanks to the courtesy of <a href="https://twitter.com/ben_a_adams">Ben Adams</a> I've updated the graphic to represent the <a href="https://twitter.com/ben_a_adams/status/1286144227819257856">full picture of .NET</a>. Ben created this image for the purpose of his own blog which you can read on <a href="https://www.ageofascent.com/blog/">www.ageofascent.com/blog</a>.)</em></p>
<p>In many ways it makes a lot of sense, but the angle taken causes more harm than good. It's tempting to walk on a stage and boast to potential customers that your product can solve all their existing and future problems, but it's not always the best approach when you actually want to onboard new developers from all sorts of different industries.</p>
<p>Unifying the entire stack into a single .NET platform doesn't come without a price. For years things have been constantly moved around, new things have been created and others have been dropped. Only recently I had to refactor my ASP.NET Core startup code yet again:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-generic-web-host-builder.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-generic-web-host-builder.png" alt=".NET Core 3.1 - Refactoring web host to generic host"></a></p>
<p>Every attempt at unifying things for unification's sake makes simple code more verbose. Without a doubt the previous code was a lot easier to understand. I applaud the concept of a <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/host/generic-host?view=aspnetcore-3.1">generic host</a>, but one has to wonder how often a developer actually wants to combine two servers into one. In my opinion there must be a real benefit in order to justify complexity such as having a builder inside another builder! How often does a developer want to <em>create</em> a web server and not <em>run</em> it as well? I'm sure these edge cases do exist, but why can't they be hidden from the other 99.9% of use cases which normally take place?</p>
<p>As nice and useful as the builder pattern may be, and whatever architectural benefits the lambda expression might give, to a C# newbie who just wants to create a hello world web application this is insane!</p>
<p>Remember, this is the equivalent in Go:</p>
<pre><code>router := ... // Equivalent to the Startup class in .NET
if err := http.ListenAndServe(":5000", router); err != nil {
// Handle err
}
</code></pre>
<p>Microsoft's "Swiss Army Knife" approach creates an unnecessary burden on new .NET developers in many different ways.</p>
<p>For example, here's the output of all the default .NET templates which get shipped as part of the .NET CLI (minus the Giraffe one):</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-new-command.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-new-command.png" alt=".NET Project Templates"></a></p>
<p>They barely fit on a single screen. Again, it's great that Microsoft has Blazor as an answer to WASM, or that they have WPF as an option for Windows, but why is everything shipped together as one big ugly bundle? Why can't there just be a template for a console app or class library and then some text which explains how to download more? This is a classic example where ".NET Everywhere" is getting in most users' way!</p>
<p>Speaking of fitting things into a single screen...</p>
<p><a href="https://twitter.com/dylanbeattie/status/832326857798348800"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/visual-studio-chaos.png" alt="Visual Studio Chaos"></a></p>
<p>Whilst the above tweet was comical in nature, it's not far from the truth.</p>
<p>My honest constructive feedback is <strong><em>less is more</em></strong>!</p>
<p>It's rarely the case that a new .NET developer wants to be drowned in a sea of options before they can get started with some basic code. Most newcomers just want to set up the bare bones minimum to get up and running and write their first lines of code. They don't even care if it's a console app or an ASP.NET Core application.</p>
<p>I totally support the idea of a single .NET which can cater to all sorts of different needs, but it must be applied in a much less overwhelming way. Visual Studio's new project dialogue doesn't need to match Microsoft's marketing slides and the <code>dotnet new</code> command doesn't have to ship a template for every single type of app. It doesn't happen very often that a developer is first tasked to work on a line-of-business web application, then told to build a Windows Forms app and later asked to build a game. Absolutely nobody needs twenty different templates which span across ten different industries on their machine.</p>
<p>My advice would be to optimise the .NET experience for beginners and not worry about advanced developers at all. There's a reason why advanced users are called advanced. If a senior developer wants to switch from iOS to IoT development then they will know where to find the tools. Currently the .NET CLI ships FIFTEEN!!! different web application templates for a user to pick from. How is a new C# developer supposed to decide on the right template if even experienced .NET developers scratch their heads? Microsoft must understand that keeping every tool and IDE free of a million different options doesn't mean that users won't know those options exist.</p>
<p>In my opinion the whole mentality around ".NET Everywhere" is entirely wrong. It should be a positive side effect and not the goal.</p>
<h3 id="all-eyez-on-me">All Eyez on Me</h3>
<p>Another problem which I have observed is Microsoft's fear of being ignored. Microsoft has a solution for almost every problem which a developer might face. It's an amazing achievement and something to be really proud of (and I really mean it), but at the same time Microsoft has to learn how to give developers some space.</p>
<p>Microsoft does not miss a single opportunity to advertise a whole range of products when someone just looks at one. Doing .NET development? Oh look, here is Visual Studio which can help you with that! Oh and by the way you can deploy directly to IIS! In case you wonder, IIS comes with Windows Server which also runs in Azure! Aha, speaking of Azure, did you know that you can also click this button which will automatically create a subscription and deploy to the cloud? In case you don't like right-click-publish we also have Azure DevOps, which is a fully featured CI/CD pipeline! Of course there's no pressure, but if you <em>do sign up now</em> then we'll give you free credits for a month! Anyway, it's just a "friendly reminder" so you know that in theory we offer the full package in case you need it! C'mon look at me, look at me, look at me now!</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/look-at-me.gif"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/look-at-me.gif" alt="Look at me - Attention seeker"></a></p>
<p>Again, I totally understand why Microsoft does what they do (and I'm sure there's a lot of good intention there), but it comes across in completely the wrong way.</p>
<p>No wonder that the perception of .NET hasn't changed much in the outside world:</p>
<p><a href="https://twitter.com/blazorguy/status/1279092538490736640"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/dotnet-microsoft-bloatware.png" alt=".NET coming across the wrong way"></a></p>
<p>What Microsoft really tries to say is:</p>
<blockquote>
<p>Hey folks, look we're different now, we are cross platform, we run everywhere and we want you to have a great experience. Here's a bunch of things which can help you on your journey.</p>
</blockquote>
<p>Unfortunately what users <em>actually</em> understand is:</p>
<blockquote>
<p>Hey, so .NET is a Microsoft product and it mostly only works with other Microsoft products, so here's a bunch of stuff which you will have to use!</p>
</blockquote>
<p>On one hand Microsoft wants to create this new brand of an open source, cross platform development tool chain and yet on the other they push Visual Studio and Azure whenever they talk about .NET. This sends mixed messages and confuses new developers, detracting from .NET's actual brilliance and doing a massive disservice to the entire development platform. The frustrating thing is that the more .NET opens up to the world, the more Microsoft pushes its own proprietary tools. Nowadays when you watch some of the most iconic .NET employees giving a talk on .NET, it's only 50% .NET and the rest is advertising Windows, Edge and Bing. This is not what most .NET developers came to see and it doesn't happen in other programming communities such as Node.js, Rust or Go either. Besides that, if someone constantly advertises every new Microsoft flavour of the month then they also lose developer credibility over time.</p>
<p>The other thing is that <a href="https://www.reddit.com/r/csharp/comments/htlgsr/vs_or_vs_code_problem/">questions and answers like these</a> need to stop:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/visual-studio-vs-visual-studio-code-question.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/visual-studio-vs-visual-studio-code-question.png" alt="Visual Studio vs. Visual Studio Code"></a></p>
<p>This question and particularly the answer are really bad, because they demonstrate that whilst C# and .NET are not necessarily tied to Visual Studio and Windows anymore, those still remain the most viable option to date. This sentiment is not good but unfortunately true. I know from my own experience that the <a href="https://code.visualstudio.com/Docs/languages/csharp">Visual Studio Code plugin for C#</a> is nowhere near as good as it should be. The same applies to F#. Why is that? It's not that Visual Studio Code is less capable than Visual Studio, but rather that Microsoft has decided to give it a lower priority and less investment. I don't need to use <a href="https://www.jetbrains.com/go/">JetBrains GoLand</a> in order to be productive in Go, but I have to use <a href="https://www.jetbrains.com/rider/">Rider</a> for .NET.</p>
<p>Microsoft needs to decouple .NET from everything else and make it a great standalone development environment if they want to compete with the rest.</p>
<p>C#, F# and .NET will always be perceived as a very Microsoft and Windows centric development environment when even the <a href="https://docs.microsoft.com/en-us/dotnet/">official .NET documentation</a> page confirms a critic's worst fears:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/official-dotnet-docs.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/official-dotnet-docs.png" alt=".NET Documentation"></a></p>
<h3 id="architecture-break-down">Architecture Break Down</h3>
<p>Go is 10 years old.</p>
<p>.NET Core is 4 years old - not even half the age of Go.</p>
<p>Go is currently on version 1.14 and .NET Core is already on its third major version. Both have been written from the ground up, but Microsoft has had arguably more experience developing .NET Core given that it was re-written from scratch with the knowledge of more than 14 years of supporting .NET Framework. How on earth did .NET Core end up with so many architectural changes that it is already on version 3.x?</p>
<p>Microsoft prides itself on developing .NET Core extremely fast and offering a comprehensive out-of-the-box solution, but it is one of the most unstable and tiresome development environments which I have worked with in recent years. It is becoming increasingly exhausting trying to keep up with .NET Core's constant change. Microsoft would implement one thing one day and then completely replace it the next. Just think about the constant changes to the <code>Startup.cs</code> class, or the ever evolving <code>HttpClient</code> and its related helper types, the invention and demise of the JSON project file, .NET Standard or various JSON serialisers. The list goes on. Features such as CORS, routing and authorisation keep changing as more code gets rewritten and pushed down the pipeline and more types are being made obsolete from the <code>Microsoft.AspNetCore.*</code> namespace and replaced with new ones emerging in <code>Microsoft.Extensions.*</code>.</p>
<p>It's hard to keep up with .NET as an experienced developer, let alone as a beginner. This constant change significantly reduces the lifespan and usefulness of books, videos, online tutorials, StackOverflow questions, Reddit threads and community blog posts. It makes .NET not only an order of magnitude harder to learn but also financially less viable than other platforms.</p>
<p>Good software and framework architecture should provide a stable foundation which is open for extension, but closed for change (sound familiar?). There was no need to implement v1 of ASP.NET Core in such a way that it now requires constant architectural change in order to support new innovation. Why was endpoint routing not built in from day one? Why did Microsoft not provide an adequate and feature-complete replacement for <code>Newtonsoft.Json</code> before releasing and encouraging the usage of <code>System.Text.Json</code>? Why are lightweight and easy-to-understand routing handlers such as <code>MapGet</code> only an afterthought? Why is it that Microsoft never creates a GitHub issue for an existing successful .NET OSS project when they need something similar (maybe with a few changes or improvements) and rather invents their own competing in-house product, which <em>always</em> causes pain in the community and indirectly forces users to rewrite their codebase yet again?</p>
<p>There is no actual need for any of this, only self-imposed deadlines which force Microsoft to release ill-written software and badly thought-out framework features, placing an unnecessary burden on its current developer community. This constant change is extremely painful to say the least. It's the single worst feature of .NET and the main reason why I honestly couldn't recommend .NET to a programming novice in good faith.</p>
<p>There is really not much else to say other than <strong><em>slow down</em></strong>. I wish the .NET and ASP.NET Core teams would take this criticism (which isn't new) more seriously and realise how bad things have become. I know I keep banging on about Go, but surely there is some valuable lesson to be learned given how popular and successful Go has become in a relatively short amount of time? Maybe Go is too simple in comparison to .NET, but maybe the current pace of .NET is not the right approach either? It's important to remember that a less breaking .NET would pose a much smaller mental and financial burden on new developers from all across the world!</p>
<h3 id="name-overload">Name Overload</h3>
<p>This blog post wouldn't be complete without mentioning Microsoft's complete failure of naming .NET properly in a user and beginner friendly way. I mean what is ".NET" anyway? First and foremost it's a TLD, which has nothing to do with Microsoft! Secondly there is no clear or uniform way of spelling .NET. Is it ".NET" or "dot net"? Maybe it was "DOTNET" or it could be "dot.net" like the newly registered domain <a href="https://dot.net">dot.net</a>? My friends still tease me by calling it "DOT NOT" whenever I mention it to them!</p>
<p>Finally, when there was an opportunity to correct this long standing mistake by re-writing the entire platform and possibly giving it a new name, Microsoft decided to call it ".NET Core". If anyone thought it couldn't get any worse then Microsoft surely didn't disappoint! I cannot think of a more internet unfriendly name than ".NET Core". How do you even hashtag this? I've seen it all ranging from <a href="https://twitter.com/hashtag/dotnet">#dotnet</a> to <a href="https://twitter.com/hashtag/dot-net">#dot-net</a>, <a href="https://twitter.com/hashtag/dot-net-core">#dot-net-core</a>, <a href="https://twitter.com/hashtag/dotnet-core">#dotnet-core</a>, <a href="https://twitter.com/hashtag/netcore">#netcore</a>, <a href="https://twitter.com/hashtag/net-core">#net-core</a>, <a href="https://twitter.com/hashtag/dot-netcore">#dot-netcore</a> and <a href="https://twitter.com/hashtag/dotnetcore">#dotnetcore</a>.</p>
<p>I think everyone can agree that objectively speaking ".NET Core" was never a great name. Needless to say that ".NET Core" also completely messes with the internet history for ".NET Framework", which is exactly what everyone predicted before.</p>
<p>At least Microsoft is consistent with its naming. There's something comical in the fact that the only three supported languages in .NET are called CSharp, F# and VB.NET. Or was it C#, F Sharp and Visual Basic? Anyway, it was some combination of the three!</p>
<h2 id="final-words">Final words</h2>
<p>C#, F# and the whole of .NET make a great development platform to code with, but it has also become overly complex, which is holding new developers back. I've been working with it for many years and have mostly enjoyed myself, however I won't lie and say that things haven't gotten a little out of hand lately. It is telling that after 20 years of .NET the programming community still hasn't seen anything new or noteworthy since the creation of <a href="https://stackoverflow.com">stackoverflow.com</a>:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-07-05/famous-dotnet-websites.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-07-05/famous-dotnet-websites.png" alt="Famous .NET website question on Quora"></a></p>
<p>Meanwhile we've seen very prominent products being built with other languages, spanning multiple domains ranging from developer technologies (<a href="https://www.docker.com">Docker</a>, <a href="https://kubernetes.io">Kubernetes</a>, <a href="https://github.com/prometheus">Prometheus</a>) to static website generators (<a href="https://gohugo.io">Hugo</a>) and some of the most successful FinTech startups (<a href="https://monzo.com">Monzo</a>) in the world.</p>
<p>.NET is a great technology for experienced developers who grew up with the platform as it matured, but I'm not sure if I'd still enjoy learning it today as much as I did in 2008. Whilst the complexity allows me to charge great fees to clients for writing software in .NET, I'd probably not recommend it to a friend who wants to learn how to code, nor would I use it for building my own startup from the ground up.</p>
<p>The future success of .NET will be based on the developers which it can attract today.</p>
<p>The success of .NET in ten years will be based on the decisions made today.</p>
<p>I hope those decisions will be made wisely!</p>
<div class="tip"><strong>June 2021 Update:</strong><p>If you are actually learning .NET and were looking for beginner content to get started then check out my latest blog post on <a href="https://dusted.codes/dotnet-basics">.NET Basics</a> which teaches all the foundational concepts around .NET and explains the inner workings of the platform.</p></div>
https://dusted.codes/dotnet-for-beginners
[email protected] (Dustin Moris Gorski)https://dusted.codes/dotnet-for-beginners#disqus_threadWed, 22 Jul 2020 00:00:00 +0000https://dusted.codes/dotnet-for-beginnersdotnetdotnet-corecsharpfsharpvbnetGitHub Actions for .NET Core NuGet packages<p>Last weekend I migrated the <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe web framework</a> from <a href="https://www.appveyor.com">AppVeyor</a> to <a href="https://github.com/features/actions">GitHub Actions</a>. It proved incredibly easy despite me having some very specific requirements for how the final solution should work, including that it should be flexible enough to apply to all my other projects too. Even though it was mostly a very straightforward job, there were a few things I learned along the way which I thought were worth sharing!</p>
<p>Here's a quick summary of what I did, why I did it and most importantly how you can apply the same GitHub workflow to your own .NET Core NuGet project as well!</p>
<h2 id="overview">Overview</h2>
<ul>
<li><a href="#cicd-pipeline-for-net-core-nuget-packages">CI/CD pipeline for .NET Core NuGet packages</a>
<ul>
<li><a href="#branch-and-pull-request-trigger">Branch and pull request trigger</a></li>
<li><a href="#test-on-linux-macos-and-windows">Test on Linux, macOS and Windows</a></li>
<li><a href="#create-build-artifacts">Create build artifacts</a></li>
<li><a href="#push-nightly-releases-to-github-packages">Push nightly releases to GitHub packages</a></li>
<li><a href="#github-release-trigger-for-official-nuget-release">GitHub release trigger for official NuGet release</a></li>
<li><a href="#drive-nuget-version-from-git-tags">Drive NuGet version from Git Tags</a></li>
<li><a href="#speed">Speed</a></li>
</ul>
</li>
<li><a href="#environment-variables">Environment Variables</a></li>
<li><a href="#the-end-result">The End Result</a>
<ul>
<li><a href="#four-stages-of-a-release">Four stages of a release</a></li>
<li><a href="#workflow-yaml">Workflow YAML</a></li>
</ul>
</li>
</ul>
<h2 id="cicd-pipeline-for-net-core-nuget-packages">CI/CD pipeline for .NET Core NuGet packages</h2>
<p>First, let's look at the requirements which I set out for my final CI/CD pipeline to meet. Each of these points has a specific purpose which I think is applicable to most .NET Core NuGet projects and is therefore worth explaining in more detail.</p>
<h3 id="branch-and-pull-request-trigger">Branch and pull request trigger</h3>
<p>CI builds are the first formal check in the software development feedback loop which doesn't come from a developer's machine itself. They provide reproducible and reliable feedback and are arguably cheap to run in the cloud. As such, CI builds should run as frequently as possible so that new errors can be flagged up as soon as they occur.</p>
<p>On this premise I decided that each commit, regardless of whether it happened on a <code>feature/*</code>, <code>hotfix/*</code> or other branch, should trigger a CI build. Pull requests should trigger a CI build as well. It's a great way of validating the changes of a PR before deciding whether to merge them. As a matter of fact, it's highly recommended to enforce this rule through GitHub itself.</p>
<p>In GitHub, under <strong>Settings</strong> and then <strong>Branches</strong>, one can set up <a href="https://help.github.com/en/github/administering-a-repository/configuring-protected-branches">branch protection rules</a> for a repository:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-branch-protection-rules.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-branch-protection-rules.png" alt="GitHub Branch Protection Rules"></a></p>
<p><em>Note that the available CI options get automatically updated whenever a CI pipeline is executed and therefore might not show up before the first workflow run has completed.</em></p>
<p>We can configure a GitHub Action to trigger builds for commits and pull requests on all branches by providing the <code>push</code> and <code>pull_request</code> option and leaving the branch definitions blank:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>on:
</span></span><span style="display:flex;"><span> push:
</span></span><span style="display:flex;"><span> pull_request:
</span></span></code></pre><h3 id="test-on-linux-macos-and-windows">Test on Linux, macOS and Windows</h3>
<p>.NET Core is cross-platform compatible, so it's no surprise that a NuGet library is expected to work on Linux, macOS and Windows as well.</p>
<p>Running a CI job against multiple OS versions can be configured via a build matrix:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>jobs:
</span></span><span style="display:flex;"><span> build:
</span></span><span style="display:flex;"><span> runs-on: ${{ matrix.os }}
</span></span><span style="display:flex;"><span> strategy:
</span></span><span style="display:flex;"><span> matrix:
</span></span><span style="display:flex;"><span> os: [ ubuntu-latest, windows-latest, macos-latest ]
</span></span></code></pre><p>In this example I've named the "build" job <code>build</code>, which is an arbitrary value and can be changed to anything a user wants.</p>
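<p>As a side note, the same matrix can also fan out across multiple .NET Core SDK versions by adding a second dimension. The sketch below is an assumption of how that could look using the official <code>actions/setup-dotnet</code> action (the version numbers and step names are illustrative):</p>

```yaml
jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ ubuntu-latest, windows-latest, macos-latest ]
        dotnet: [ '2.1.x', '3.1.x' ]
    steps:
      - uses: actions/checkout@v2
      - name: Setup .NET Core SDK
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: ${{ matrix.dotnet }}
      - name: Test
        run: dotnet test
```

<p>Each combination of OS and SDK version then runs as its own job, so a single push validates the library against every supported environment at once.</p>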
<h3 id="create-build-artifacts">Create build artifacts</h3>
<p>A build artifact is downloadable output which can be created and collected on each CI run. It can be anything from a single file to an entire folder full of binaries. In the case of a .NET Core NuGet library it is a very useful feature to create a super early version of a NuGet package as soon as a build has finished:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-build-artifacts.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-build-artifacts.png" alt="GitHub Action Build Artifacts"></a></p>
<p>In combination with pull request triggers this is a super handy way of giving OSS contributors and OSS maintainers an easy way of downloading and testing a NuGet package as part of a PR.</p>
<p>It is also a nice way of letting users download and consume a "super early semi-official" NuGet package which came from the project's official CI pipeline when someone is in desperate need of applying a fix before an official release or pre-release has been created.</p>
<p>In GitHub a NuGet artifact can be easily created by first running the <code>dotnet pack</code> command as part of an earlier build step and subsequently using the <code>upload-artifact@v2</code> action to upload the newly created <code>*.nupkg</code> as an artifact:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>jobs:
</span></span><span style="display:flex;"><span> build:
</span></span><span style="display:flex;"><span> runs-on: ${{ matrix.os }}
</span></span><span style="display:flex;"><span> strategy:
</span></span><span style="display:flex;"><span> matrix:
</span></span><span style="display:flex;"><span> os: [ ubuntu-latest, windows-latest, macos-latest ]
</span></span><span style="display:flex;"><span> steps:
</span></span><span style="display:flex;"><span> ...
</span></span><span style="display:flex;"><span> ...
</span></span><span style="display:flex;"><span> - name: Pack
</span></span><span style="display:flex;"><span> if: matrix.os == 'ubuntu-latest'
</span></span><span style="display:flex;"><span> run: dotnet pack -v normal -c Release --no-restore --include-symbols --include-source -p:PackageVersion=$GITHUB_RUN_ID src/$PROJECT_NAME/$PROJECT_NAME.*proj
</span></span><span style="display:flex;"><span> - name: Upload Artifact
</span></span><span style="display:flex;"><span> if: matrix.os == 'ubuntu-latest'
</span></span><span style="display:flex;"><span> uses: actions/upload-artifact@v2
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> name: nupkg
</span></span><span style="display:flex;"><span> path: ./src/${{ env.PROJECT_NAME }}/bin/Release/*.nupkg
</span></span></code></pre><p>In the example above I'm using the pre-defined <code>GITHUB_RUN_ID</code> environment variable to specify the NuGet package version and a custom defined environment variable called <code>PROJECT_NAME</code> to specify which .NET Core project to pack and publish as an artifact. This has the benefit that the same GitHub workflow definition can be used across multiple projects with very minimal initial setup.</p>
<p>One might have also noticed that I used a wildcard definition for the project file extension <code>.*proj</code>. This has the additional benefit that the <code>dotnet pack</code> command will work for all types of .NET Core projects, which are <code>.vbproj</code>, <code>.csproj</code> and <code>.fsproj</code>.</p>
<p>Lastly I had to use version 2 (<code>@v2</code>) of the <code>upload-artifact</code> action in order to use wildcard definitions in the artifact's <code>path</code> specification. If you run into a "missing file" error when trying to upload an artifact then make sure that you're using the latest version of this action, as wildcards were not supported before version 2.</p>
<p>On another note, the <code>if: matrix.os == 'ubuntu-latest'</code> condition as part of the <code>Pack</code> and <code>Upload Artifact</code> steps has no special purpose except limiting the artifact upload to a single run from the previously defined build matrix. A single artifact upload is sufficient (the NuGet package doesn't change based on the environment where it has been packed) and therefore I simply chose <code>ubuntu-latest</code> because Ubuntu happens to be the fastest executing environment and therefore helps to keep the overall build time as low as possible. Windows workers generally seem to take longer to start than macOS or Ubuntu ones.</p>
<h3 id="push-nightly-releases-to-github-packages">Push nightly releases to GitHub packages</h3>
<p>You might have heard of the term "Nightly Build" before. A nightly build (or what I like to call a bleeding edge pre-release build) is a proper (formal) deployment of a build artifact to a place which makes general consumption almost as intuitive as an official release.</p>
<p>In the context of a NuGet package a "nightly release" is a NuGet library which normally gets pushed to a public NuGet feed which is just like the official <a href="https://www.nuget.org">NuGet Gallery</a>, but not the gallery itself. This is a common pattern amongst .NET Core libraries because developers can configure more than one NuGet feed in their project via a <code>NuGet.config</code> file (see the <a href="https://docs.microsoft.com/en-us/nuget/reference/nuget-config-file">NuGet.config reference</a> for more information) and therefore consume a nightly build package the same way as an official release. Most commonly I've seen self hosted <a href="https://inedo.com/proget">ProGet</a> feeds or cloud hosted <a href="https://www.myget.org">MyGet</a> feeds to distribute "nightly builds" alongside the official NuGet gallery. However, GitHub's relatively new <a href="https://github.com/features/packages">Packages</a> feature makes an attractive alternative.</p>
<p>Setting up a nightly build pipeline to <a href="https://github.com/features/packages">GitHub packages</a> is fairly easy:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>jobs:
</span></span><span style="display:flex;"><span> build:
</span></span><span style="display:flex;"><span> ...
</span></span><span style="display:flex;"><span> ...
</span></span><span style="display:flex;"><span> prerelease:
</span></span><span style="display:flex;"><span> needs: build
</span></span><span style="display:flex;"><span> if: github.ref == 'refs/heads/develop'
</span></span><span style="display:flex;"><span> runs-on: ubuntu-latest
</span></span><span style="display:flex;"><span> steps:
</span></span><span style="display:flex;"><span> - name: Download Artifact
</span></span><span style="display:flex;"><span> uses: actions/download-artifact@v1
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> name: nupkg
</span></span><span style="display:flex;"><span> - name: Push to GitHub Feed
</span></span><span style="display:flex;"><span> run: |<span style="color:#ffa08f">
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> for f in ./nupkg/*.nupkg
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> do
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> curl -vX PUT -u "$GITHUB_USER:$GITHUB_TOKEN" -F package=@$f $GITHUB_FEED
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> done</span>
</span></span></code></pre><p>Unlike build artifacts, nightly releases are not something which one would want to create on every build run. It makes sense to limit the creation of a pre-release/nightly deployment to a trigger which is at least one step closer to an official release than a casual git commit or a random pull request. If one uses <a href="https://nvie.com/posts/a-successful-git-branching-model/">Git flow</a> or another similar branching strategy then the <code>develop</code> branch can be a natural gatekeeper for a nightly release:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>if: github.ref == 'refs/heads/develop'
</span></span></code></pre><p>Anything which gets pushed into the <code>develop</code> branch is by definition on the roadmap for the next official release and is therefore a good trigger for a nightly build.</p>
<p>I've created a completely separate job called <code>prerelease</code> for this purpose alone. Just like the <code>build</code> job before, the name is arbitrary and can be changed to something else entirely. In addition, the <code>prerelease</code> job should only execute after a successful <code>build</code> run:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>needs: build
</span></span></code></pre><p>If I hadn't specified this then GitHub would try to run both jobs in parallel, which is not desired in this case.</p>
<p>The following two <code>steps</code> are fairly self explanatory:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>steps:
</span></span><span style="display:flex;"><span> - name: Download Artifact
</span></span><span style="display:flex;"><span> uses: actions/download-artifact@v1
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> name: nupkg
</span></span><span style="display:flex;"><span> - name: Push to GitHub Feed
</span></span><span style="display:flex;"><span> run: |<span style="color:#ffa08f">
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> for f in ./nupkg/*.nupkg
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> do
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> curl -vX PUT -u "$GITHUB_USER:$GITHUB_TOKEN" -F package=@$f $GITHUB_FEED
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> done</span>
</span></span></code></pre><p>First I'm using the <code>download-artifact@v1</code> action to obtain the artifact which has been uploaded under the name <code>nupkg</code> by the previous job. Then I use cURL to make an HTTP PUT request directly to GitHub's HTTP API in order to upload the downloaded <code>*.nupkg</code> package to the specified feed.</p>
<p>The name of the feed is determined through the <code>GITHUB_FEED</code> environment variable (more on this later). The <code>GITHUB_TOKEN</code> is a pre-defined token which GitHub automatically creates for every workflow run. The <code>GITHUB_USER</code> variable is another global setting which holds my GitHub username in one place.</p>
<h4 id="github-packages-issue-with-nuget">GitHub packages issue with NuGet</h4>
<p>Now one might wonder why I used a <code>curl</code> command to interact with GitHub's HTTP API when I could have used <code>dotnet nuget push</code> or <code>nuget push</code> instead. The short answer is that neither of these CLI commands works with GitHub's packages feed today.</p>
<p><em>The <code>dotnet nuget push</code> command only works if the worker image is set to <code>windows-latest</code>. However, because the start-up time of a Windows worker is significantly longer than <code>ubuntu-latest</code>, I'd rather trade a little bit of "cURL complexity" for an overall faster CI/CD pipeline. It is a personal choice and a trade-off which I'm happy to make in this particular case (more on the benefit of speed later).</em></p>
<h4 id="github-packages-feed">GitHub Packages Feed</h4>
<p>If everything went to plan then the NuGet packages will get uploaded to the user's or organisation's own GitHub packages feed:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-packages-feed.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-packages-feed.png" alt="GitHub Packages Feed"></a></p>
<p>The packages are tagged with the <code>GITHUB_RUN_ID</code> (unless it was a GitHub release):</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-package-versions.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-package-versions.png" alt="GitHub package versions"></a></p>
<p>This is by design. It makes it very easy to associate a certain package version with a specific nightly run. It also makes it very obvious that a package version is a nightly build and not an official release, and it's easy to tell when a newer version is available since the <code>GITHUB_RUN_ID</code> is an incremental counter.</p>
<h3 id="github-release-trigger-for-official-nuget-release">GitHub release trigger for official NuGet release</h3>
<p>GitHub has a wonderful <a href="https://help.github.com/en/github/administering-a-repository/about-releases">concept of releases</a>, which adds an extra layer on top of git tags and provides a nice UI to create, manage and view a release:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-view-release.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-view-release.png" alt="GitHub Release"></a></p>
<p>Personally I like to use GitHub releases as a formal and conscious step to create, document and publish a NuGet package. For that reason I've added the <code>release</code> option as an additional CI trigger:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>on:
</span></span><span style="display:flex;"><span> push:
</span></span><span style="display:flex;"><span> pull_request:
</span></span><span style="display:flex;"><span> release:
</span></span><span style="display:flex;"><span> types:
</span></span><span style="display:flex;"><span> - published
</span></span></code></pre><p>A GitHub release event can fire for multiple activity types, such as <code>created</code>, <code>edited</code>, <code>deleted</code> and many more. A deployment should only get kicked off when an actual release has been published, which is why the <code>published</code> type has to be specified explicitly.</p>
<p>If a repository doesn't use GitHub releases then one can add normal git tags as a CI trigger instead. Git tag triggers would also fire for a GitHub release (which uses git tags behind the scenes), but they would additionally run whenever a developer pushes a git tag manually from their client.</p>
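<p>For completeness, such a tag-based trigger could be sketched like this (assuming tags follow the <code>vX.X.X</code> convention used later in this post):</p>

```yaml
on:
  push:
    # Fire for any tag starting with 'v', e.g. v1.2.3
    tags:
      - 'v*'
```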
<p>In order to kick off a deployment for a GitHub release I created a job called <code>deploy</code>:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>deploy:
</span></span><span style="display:flex;"><span> needs: build
</span></span><span style="display:flex;"><span> if: github.event_name == 'release'
</span></span><span style="display:flex;"><span> runs-on: ubuntu-latest
</span></span><span style="display:flex;"><span> steps:
</span></span><span style="display:flex;"><span> - uses: actions/checkout@v2
</span></span><span style="display:flex;"><span> - name: Setup .NET Core
</span></span><span style="display:flex;"><span> uses: actions/setup-dotnet@v1
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> dotnet-version: <span style="color:#abfebc">3.1.301</span>
</span></span><span style="display:flex;"><span> - name: Create Release NuGet package
</span></span><span style="display:flex;"><span> run: |<span style="color:#ffa08f">
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> ...
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> ...
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> dotnet pack -v normal -c Release --include-symbols --include-source -p:PackageVersion=$VERSION -o nupkg src/$PROJECT_NAME/$PROJECT_NAME.*proj</span>
</span></span><span style="display:flex;"><span> - name: Push to GitHub Feed
</span></span><span style="display:flex;"><span> run: |<span style="color:#ffa08f">
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> for f in ./nupkg/*.nupkg
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> do
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> curl -vX PUT -u "$GITHUB_USER:$GITHUB_TOKEN" -F package=@$f $GITHUB_FEED
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> done</span>
</span></span><span style="display:flex;"><span> - name: Push to NuGet Feed
</span></span><span style="display:flex;"><span> run: dotnet nuget push ./nupkg/*.nupkg --source $NUGET_FEED --skip-duplicate --api-key $NUGET_KEY
</span></span></code></pre><p>Similar to the <code>prerelease</code> job the <code>deploy</code> job also requires the <code>build</code> job to finish first and additionally checks if the CI run was triggered by a GitHub release event:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>deploy:
</span></span><span style="display:flex;"><span> needs: build
</span></span><span style="display:flex;"><span> if: github.event_name == 'release'
</span></span></code></pre><p>The actual <code>deploy</code> steps are very similar to everything else we've done so far. First it pulls the latest changes via the <code>checkout@v2</code> action, then it installs the .NET Core SDK version <code>3.1.301</code> with the help of <code>setup-dotnet@v1</code> and finally it runs a <code>dotnet pack</code> command with the official release version which is specified via the <code>VERSION</code> variable (more on this in the next section).</p>
<p>Lastly I'm using cURL to push to the GitHub packages feed again, and the normal <code>dotnet nuget push</code> command to publish the final release to the official NuGet feed too.</p>
<h3 id="drive-nuget-version-from-git-tags">Drive NuGet version from Git Tags</h3>
<p>As mentioned above, the final NuGet version is determined through the <code>VERSION</code> variable. This variable doesn't exist by default and has to be created manually. Most .NET Core projects specify the package version in their <code>*.csproj</code>/<code>*.fsproj</code> file through the <code><Version></code>, <code><VersionSuffix></code> or <code><PackageVersion></code> property (if you don't know the differences check out <a href="https://andrewlock.net/version-vs-versionsuffix-vs-packageversion-what-do-they-all-mean/">Andrew Lock's blog post</a> for further information). The downside of this approach is that the project's version has to be kept in sync with the GitHub release or git tag version manually, and that it's mostly just meaningless metadata carried along in the project file, not required until a new release is being published.</p>
<p>In my opinion a much better approach is to remove all version properties from a .NET Core project file entirely and derive the final package version from the submitted git tag during deployment. This is better mainly because there is never a risk of the NuGet package version being out of sync with the provided git tag version. As a developer you probably know that any manual sync is doomed to fail, so why put that strain on ourselves if we can do without it!</p>
<p>Luckily obtaining the git tag version from within a GitHub action is fairly easy. Assuming that a release is being tagged in the format of <code>vX.X.X</code> this bash script will extract the actual version from the <code>GITHUB_REF</code> variable:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>- name: Create Release NuGet package
</span></span><span style="display:flex;"><span> run: |<span style="color:#ffa08f">
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> arrTag=(${GITHUB_REF//\// })
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> VERSION="${arrTag[2]}"
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> VERSION="${VERSION//v}"
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> dotnet pack -v normal -c Release --include-symbols --include-source -p:PackageVersion=$VERSION -o nupkg src/$PROJECT_NAME/$PROJECT_NAME.*proj</span>
</span></span></code></pre><p>For example, if I tagged a commit with <code>v1.2.3</code>, then the <code>GITHUB_REF</code> variable would contain <code>refs/tags/v1.2.3</code>.</p>
<p>The code <code>arrTag=(${GITHUB_REF//\// })</code> converts all forward slash <code>/</code> characters into a whitespace and subsequently splits the <code>GITHUB_REF</code> variable by the whitespace character into an array called <code>arrTag</code>:</p>
<pre><code>arrTag[0]: refs
arrTag[1]: tags
arrTag[2]: v1.2.3
</code></pre>
<p>The next couple of lines grab the version tag from the third array element (index 2) and then strip the leading <code>v</code> character from the value:</p>
<pre><code>VERSION="${arrTag[2]}"
VERSION="${VERSION//v}"
</code></pre>
<p>If the git tag didn't include the <code>v</code> character (e.g. just <code>1.2.3</code>) then the second line can be removed.</p>
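<p>The whole extraction can be sanity-checked locally with plain bash; here is a minimal sketch with a hard-coded <code>GITHUB_REF</code> value:</p>

```shell
#!/usr/bin/env bash
# Simulate the ref GitHub sets when a release is tagged v1.2.3
GITHUB_REF="refs/tags/v1.2.3"

# Replace every '/' with a space; the unquoted expansion lets word
# splitting turn the result into the array (refs tags v1.2.3)
arrTag=(${GITHUB_REF//\// })

VERSION="${arrTag[2]}"   # third element: v1.2.3
VERSION="${VERSION//v}"  # remove the 'v' character: 1.2.3

echo "$VERSION"
```

<p>Note that <code>${VERSION//v}</code> removes every <code>v</code> in the string, not just a leading one, which is fine for purely numeric version tags.</p>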
<p>In the end the <code>VERSION</code> variable holds the correct release version of the imminent deployment and can be used to tag the final NuGet package as part of the <code>dotnet pack</code> command:</p>
<pre><code>dotnet pack -c Release --include-symbols --include-source -p:PackageVersion=$VERSION -o nupkg src/$PROJECT_NAME/$PROJECT_NAME.*proj
</code></pre>
<p>This is a very effective way of correctly versioning NuGet releases and keeping them automatically synced with GitHub releases (or manual git tags). It also enforces that a release can only happen when a proper git tag has been created.</p>
<h3 id="speed">Speed</h3>
<p>Speed is paramount in a good CI/CD pipeline. The longer a single run takes, the more likely it is that multiple triggers will result in long queues of individual CI runs stacking up, preventing developers from getting a fast feedback loop.</p>
<p>There are a few things which can be done to speed up a .NET Core NuGet pipeline.</p>
<h4 id="ubuntu-over-windows">Ubuntu over Windows</h4>
<p>All jobs use the <code>ubuntu-latest</code> worker image except the first <code>build</code> job which uses a build matrix of three different OS versions to build and test against all major environments. Ubuntu workers start faster than others and therefore should be preferred over Windows images.</p>
<h4 id="avoid-redundant-dotnet-restores">Avoid redundant dotnet restores</h4>
<p>The <code>build</code> job has been optimised to not repeat the <code>dotnet restore</code> step unnecessarily by making use of the <code>--no-restore</code> and <code>--no-build</code> flags where possible:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>- name: Checkout
</span></span><span style="display:flex;"><span> uses: actions/checkout@v2
</span></span><span style="display:flex;"><span>- name: Setup .NET Core
</span></span><span style="display:flex;"><span> uses: actions/setup-dotnet@v1
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> dotnet-version: <span style="color:#abfebc">3.1.301</span>
</span></span><span style="display:flex;"><span>- name: Restore
</span></span><span style="display:flex;"><span> run: dotnet restore
</span></span><span style="display:flex;"><span>- name: Build
</span></span><span style="display:flex;"><span> run: dotnet build -c Release --no-restore
</span></span><span style="display:flex;"><span>- name: Test
</span></span><span style="display:flex;"><span> run: dotnet test -c Release --no-build
</span></span></code></pre><h4 id="avoid-redundant-nuget-caching">Avoid redundant NuGet caching</h4>
<p>Setting the environment variable <code>DOTNET_SKIP_FIRST_TIME_EXPERIENCE</code> to <code>true</code> prevents the .NET CLI from wasting time on redundant package caching.</p>
<h4 id="turn-off-telemetry">Turn off telemetry</h4>
<p>Maybe not a huge gain, but turning off .NET telemetry by setting the <code>DOTNET_CLI_TELEMETRY_OPTOUT</code> environment variable to <code>true</code> will shave off another few (milli)seconds.</p>
<h4 id="avoid-pulling-in-extra-dependencies">Avoid pulling in extra dependencies</h4>
<p>Not having to install extra utilities on a worker image means that the CI run doesn't have to waste extra time setting up additional tools. For example, instead of installing the standalone NuGet CLI one can use <code>dotnet nuget</code>, which comes out of the box when .NET Core is set up as a dependency. Another example is to use <code>curl</code> where it already exists instead of pulling in another HTTP client for negligible benefit.</p>
<h4 id="bash-over-powershell">Bash over PowerShell</h4>
<p>Running <code>bash</code> scripts is significantly faster than running PowerShell (<code>pwsh</code>), because PowerShell takes longer to load. Luckily all script blocks in GitHub Actions default to <code>bash</code> unless specified otherwise. Try to avoid PowerShell scripts where not strictly required (e.g. using a little bit more <code>bash</code> instead of fancier <code>pwsh</code> cmdlets for establishing the NuGet release version as seen above).</p>
<p>Overall these micro improvements mean that an incoming pull request takes approximately two minutes to successfully build against the entire build matrix and produce a NuGet artifact as well:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-action-run-time.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-action-run-time.png" alt="GitHub Action Build time for Giraffe"></a></p>
<h2 id="environment-variables">Environment Variables</h2>
<p>Now that the majority of the GitHub Action has been explained in detail we're just missing one final piece to complete the puzzle. Throughout this blog post I've been frequently referring to various environment variables which the script assumes to exist so that it is not specifically tied to a single project but rather applicable to many.</p>
<p>Some of those environment variables get created automatically by GitHub itself, but others have to be set up manually, which can be done either at the job level or globally at the top of the file:</p>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>env:
</span></span><span style="display:flex;"><span> DOTNET_SKIP_FIRST_TIME_EXPERIENCE: <span style="color:#d179a3">true</span>
</span></span><span style="display:flex;"><span> DOTNET_CLI_TELEMETRY_OPTOUT: <span style="color:#d179a3">true</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f"># Project name to pack and publish</span>
</span></span><span style="display:flex;"><span> PROJECT_NAME: Giraffe
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f"># GitHub Packages Feed settings</span>
</span></span><span style="display:flex;"><span> GITHUB_FEED: https://nuget.pkg.github.com/giraffe-fsharp/
</span></span><span style="display:flex;"><span> GITHUB_USER: dustinmoris
</span></span><span style="display:flex;"><span> GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f"># Official NuGet Feed settings</span>
</span></span><span style="display:flex;"><span> NUGET_FEED: https://api.nuget.org/v3/index.json
</span></span><span style="display:flex;"><span> NUGET_KEY: ${{ secrets.NUGET_KEY }}
</span></span></code></pre><p>The <code>PROJECT_NAME</code> variable is set to the .NET Core project name which is meant to get packed and published by this script. The script assumes that the solution follows the widely adopted folder structure of:</p>
<pre><code>src/
+-- MyProject/
    +-- MyProject.csproj
tests/
+-- MyProject.Tests/
    +-- MyProject.Tests.csproj
MySolution.sln
</code></pre>
<p>In this example the <code>PROJECT_NAME</code> variable must be set to <code>MyProject</code>. Don't worry if the solution contains more helper projects; everything will get built and tested by the script. The <code>build</code> job executes <code>dotnet build</code> and <code>dotnet test</code> from the root level of the repository, which means that it will pick up all projects from the repo's <code>.sln</code> file.</p>
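<p>The wildcard in the pack command's project path resolves to either a C# or an F# project file. A throwaway sketch (directory and project names are made up for illustration):</p>

```shell
#!/usr/bin/env bash
# Recreate a minimal version of the layout above in a temp dir
tmp=$(mktemp -d)
mkdir -p "$tmp/src/MyProject"
touch "$tmp/src/MyProject/MyProject.fsproj"

cd "$tmp"
PROJECT_NAME=MyProject
# The same glob the workflow passes to 'dotnet pack':
# MyProject.*proj matches MyProject.csproj and MyProject.fsproj alike
ls src/$PROJECT_NAME/$PROJECT_NAME.*proj
```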
<p>The <code>GITHUB_FEED</code> variable is a convenience pointer to the user's or organisation's GitHub feed. The <code>GITHUB_USER</code> variable requires a username of someone who has sufficient permissions to publish to the feed. As mentioned before, the <code>GITHUB_TOKEN</code> variable is auto-created by GitHub, which is why it's being assigned directly from <code>${{ secrets.GITHUB_TOKEN }}</code>.</p>
<p>Finally the <code>NUGET_FEED</code> variable points towards the official NuGet gallery and the <code>NUGET_KEY</code> variable receives a private secret which must be set up manually either at the project or organisation level of the GitHub repository:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-secrets.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-06-28/github-secrets.png" alt="GitHub Secrets"></a></p>
<p>I configured the <code>NUGET_KEY</code> as an organisation wide secret, which means I don't have to set up any additional secrets for each repository any more.</p>
<p>If this rings your security bells then you are not entirely wrong. If you're wondering whether a malicious attacker could modify the GitHub workflow YAML file as part of a pull request and force bad code into the public domain, let me assure you that this isn't possible. GitHub makes it very clear that <strong>it doesn't pass</strong> secrets to workflows which were triggered by a pull request from a fork:</p>
<blockquote>
<p>Secrets are not passed to workflows that are triggered by a pull request from a fork</p>
</blockquote>
<p>The value for the <code>NUGET_KEY</code> secret has to be generated on <a href="https://www.nuget.org">www.nuget.org</a>:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2020-06-28/nuget-key.png"><img src="https://cdn.dusted.codes/images/blog-posts/2020-06-28/nuget-key.png" alt="NuGet API Key"></a></p>
<h2 id="the-end-result">The End Result</h2>
<p>The end result is a pretty elaborate CI/CD pipeline. Every commit and pull request triggers a new run which builds and tests the code against Linux, macOS and Windows environments. Additionally each build produces a NuGet package as an artifact which can be downloaded and added to a local NuGet feed for test purposes or urgent matters. When features and bug fixes eventually get merged into the <code>develop</code> branch, a nightly build kicks off and publishes an early pre-release version into the organisation's own GitHub packages feed. Finally, when a release is published, an official NuGet package is produced and pushed not only to the GitHub packages feed, but also to the official NuGet gallery. Nightly build packages are tagged with the workflow run ID and official packages with the associated git tag version. Everything is optimised for speed.</p>
<h3 id="four-stages-of-a-release">Four stages of a release</h3>
<p>In total this set up supports the four stages of a release:</p>
<h4 id="1-yolo-release">1. YOLO Release</h4>
<p>Builds create a NuGet artifact, which is what I like to call the YOLO (you only live once) release. It's meant for super early testing or people who just don't give a damn :).</p>
<h4 id="2-nightly-builds">2. Nightly Builds</h4>
<p>Merges into the <code>develop</code> branch will create an official nightly build which gets pushed into GitHub packages. It's still bleeding edge, but slightly more mature than the YOLO release.</p>
<h4 id="3-official-pre-release-packages">3. Official pre-release packages</h4>
<p>The next step in the release pipeline is an official pre-release package. It is basically the same as an official release package except that the git version tag follows the pre-release convention (e.g. <code>v2.0.0-beta-23</code>). Those packages are released into the public NuGet gallery, but are clearly marked as not being a proper stable release.</p>
<h4 id="4-official-release-packages">4. Official release packages</h4>
<p>Finally the last release stage is a proper stable release. Same as before, it's triggered by a git tag, but this time with a stable release version (e.g. <code>v2.0.0</code>).</p>
<h3 id="workflow-yaml">Workflow YAML</h3>
<p>The final GitHub Action file where all the individual pieces are put together can be viewed on the official <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/.github/workflows/build.yml">Giraffe repository</a>.</p>
<p>Anyone is free to copy the <code>build.yml</code> file, apply custom changes to the environment variables at the top of the file and deploy it to their own .NET Core NuGet repository (it should just work)!</p>
<p>If the link above doesn't work or cannot be viewed then the entire <code>build.yml</code> file can also be seen in the script below:</p>
<h6 id="buildyml">build.yml</h6>
<pre style="color:#ccc;background-color:#1d1d1d;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code><span style="display:flex;"><span>name: .NET Core
</span></span><span style="display:flex;"><span>on:
</span></span><span style="display:flex;"><span> push:
</span></span><span style="display:flex;"><span> pull_request:
</span></span><span style="display:flex;"><span> release:
</span></span><span style="display:flex;"><span> types:
</span></span><span style="display:flex;"><span> - published
</span></span><span style="display:flex;"><span>env:
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f"># Stop wasting time caching packages</span>
</span></span><span style="display:flex;"><span> DOTNET_SKIP_FIRST_TIME_EXPERIENCE: <span style="color:#d179a3">true</span>
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f"># Disable sending usage data to Microsoft</span>
</span></span><span style="display:flex;"><span> DOTNET_CLI_TELEMETRY_OPTOUT: <span style="color:#d179a3">true</span>
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f"># Project name to pack and publish</span>
</span></span><span style="display:flex;"><span> PROJECT_NAME: Giraffe
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f"># GitHub Packages Feed settings</span>
</span></span><span style="display:flex;"><span> GITHUB_FEED: https://nuget.pkg.github.com/giraffe-fsharp/
</span></span><span style="display:flex;"><span> GITHUB_USER: dustinmoris
</span></span><span style="display:flex;"><span> GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
</span></span><span style="display:flex;"><span> <span style="color:#8f8f8f"># Official NuGet Feed settings</span>
</span></span><span style="display:flex;"><span> NUGET_FEED: https://api.nuget.org/v3/index.json
</span></span><span style="display:flex;"><span> NUGET_KEY: ${{ secrets.NUGET_KEY }}
</span></span><span style="display:flex;"><span>jobs:
</span></span><span style="display:flex;"><span> build:
</span></span><span style="display:flex;"><span> runs-on: ${{ matrix.os }}
</span></span><span style="display:flex;"><span> strategy:
</span></span><span style="display:flex;"><span> matrix:
</span></span><span style="display:flex;"><span> os: [ ubuntu-latest, windows-latest, macos-latest ]
</span></span><span style="display:flex;"><span> steps:
</span></span><span style="display:flex;"><span> - name: Checkout
</span></span><span style="display:flex;"><span> uses: actions/checkout@v2
</span></span><span style="display:flex;"><span> - name: Setup .NET Core
</span></span><span style="display:flex;"><span> uses: actions/setup-dotnet@v1
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> dotnet-version: <span style="color:#abfebc">3.1.301</span>
</span></span><span style="display:flex;"><span> - name: Restore
</span></span><span style="display:flex;"><span> run: dotnet restore
</span></span><span style="display:flex;"><span> - name: Build
</span></span><span style="display:flex;"><span> run: dotnet build -c Release --no-restore
</span></span><span style="display:flex;"><span> - name: Test
</span></span><span style="display:flex;"><span> run: dotnet test -c Release
</span></span><span style="display:flex;"><span> - name: Pack
</span></span><span style="display:flex;"><span> if: matrix.os == 'ubuntu-latest'
</span></span><span style="display:flex;"><span> run: dotnet pack -v normal -c Release --no-restore --include-symbols --include-source -p:PackageVersion=$GITHUB_RUN_ID src/$PROJECT_NAME/$PROJECT_NAME.*proj
</span></span><span style="display:flex;"><span> - name: Upload Artifact
</span></span><span style="display:flex;"><span> if: matrix.os == 'ubuntu-latest'
</span></span><span style="display:flex;"><span> uses: actions/upload-artifact@v2
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> name: nupkg
</span></span><span style="display:flex;"><span> path: ./src/${{ env.PROJECT_NAME }}/bin/Release/*.nupkg
</span></span><span style="display:flex;"><span> prerelease:
</span></span><span style="display:flex;"><span> needs: build
</span></span><span style="display:flex;"><span> if: github.ref == 'refs/heads/develop'
</span></span><span style="display:flex;"><span> runs-on: ubuntu-latest
</span></span><span style="display:flex;"><span> steps:
</span></span><span style="display:flex;"><span> - name: Download Artifact
</span></span><span style="display:flex;"><span> uses: actions/download-artifact@v1
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> name: nupkg
</span></span><span style="display:flex;"><span> - name: Push to GitHub Feed
</span></span><span style="display:flex;"><span> run: |<span style="color:#ffa08f">
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> for f in ./nupkg/*.nupkg
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> do
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> curl -vX PUT -u "$GITHUB_USER:$GITHUB_TOKEN" -F package=@$f $GITHUB_FEED
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> done</span>
</span></span><span style="display:flex;"><span> deploy:
</span></span><span style="display:flex;"><span> needs: build
</span></span><span style="display:flex;"><span> if: github.event_name == 'release'
</span></span><span style="display:flex;"><span> runs-on: ubuntu-latest
</span></span><span style="display:flex;"><span> steps:
</span></span><span style="display:flex;"><span> - uses: actions/checkout@v2
</span></span><span style="display:flex;"><span> - name: Setup .NET Core
</span></span><span style="display:flex;"><span> uses: actions/setup-dotnet@v1
</span></span><span style="display:flex;"><span> with:
</span></span><span style="display:flex;"><span> dotnet-version: <span style="color:#abfebc">3.1.301</span>
</span></span><span style="display:flex;"><span> - name: Create Release NuGet package
</span></span><span style="display:flex;"><span> run: |<span style="color:#ffa08f">
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> arrTag=(${GITHUB_REF//\// })
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> VERSION="${arrTag[2]}"
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> echo Version: $VERSION
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> VERSION="${VERSION//v}"
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> echo Clean Version: $VERSION
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> dotnet pack -v normal -c Release --include-symbols --include-source -p:PackageVersion=$VERSION -o nupkg src/$PROJECT_NAME/$PROJECT_NAME.*proj</span>
</span></span><span style="display:flex;"><span> - name: Push to GitHub Feed
</span></span><span style="display:flex;"><span> run: |<span style="color:#ffa08f">
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> for f in ./nupkg/*.nupkg
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> do
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> curl -vX PUT -u "$GITHUB_USER:$GITHUB_TOKEN" -F package=@$f $GITHUB_FEED
</span></span></span><span style="display:flex;"><span><span style="color:#ffa08f"> done</span>
</span></span><span style="display:flex;"><span> - name: Push to NuGet Feed
</span></span><span style="display:flex;"><span> run: dotnet nuget push ./nupkg/*.nupkg --source $NUGET_FEED --skip-duplicate --api-key $NUGET_KEY
</span></span></code></pre>
https://dusted.codes/github-actions-for-dotnet-core-nuget-packages
[email protected] (Dustin Moris Gorski)https://dusted.codes/github-actions-for-dotnet-core-nuget-packages#disqus_threadMon, 29 Jun 2020 00:00:00 +0000https://dusted.codes/github-actions-for-dotnet-core-nuget-packagesdevopsgithubdotnet-coreci-cdnugetAutomate frequently used CLI commands<p>Welcome back to my blog! I haven’t written anything here for more than a year now, which is something I very much regret, but I mostly blame real life for it! Luckily due to the COVID-19 pandemic everything has slowed down a bit and I get to spend more time on my blog, side projects and some writing again. It’s always hard to get back to something after a long break, so I thought I'd start with a short blog post on a little productivity tip to break the current blogging hiatus!</p>
<p>Every developer has a range of commonly used CLI commands which they have to run frequently in order to execute some everyday tasks. These commands often have a bunch of additional flags and arguments attached to them which are really hard to remember if you only need to run them once every so often. One way of making it easier to run these commands is to <a href="https://thorsten-hans.com/5-types-of-zsh-aliases">configure short Aliases</a>, which are much easier to remember if you're using something like <a href="https://www.zsh.org/">zsh</a> or <a href="https://www.gnu.org/software/bash/">bash</a> as your preferred shell (aliases work in the <a href="http://www.kornshell.org/">KornShell</a> and the <a href="https://en.wikipedia.org/wiki/Bourne_shell">Bourne shell</a> too). However, if you don't have a fancy Unix shell because you work in a restricted environment, on an older version of Windows, or you don't have the option of installing <a href="https://docs.microsoft.com/en-us/windows/wsl/">WSL (Windows Subsystem for Linux)</a> on your Windows machine, then there's another option which luckily works across most operating systems regardless of which version or shell you're using!</p>
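<p>For those who do have a Unix shell at hand, such aliases take only a line each to set up. A minimal sketch (the alias names and commands below are my own illustrative examples, not taken from this post):</p>

```shell
# Define short aliases for frequently used commands
# (names and commands here are arbitrary examples)
alias k='kubectl'
alias dcleanup='docker container prune -f && docker image prune -a -f'

# List all aliases defined in the current shell
alias
```

<p>Placed in <code>~/.zshrc</code> or <code>~/.bashrc</code>, these aliases become available in every new shell session.</p>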
<h2 id="good-old-shellbatch-scripts">Good old shell/batch scripts</h2>
<p>Creating a shell (or batch) script with an easy-to-remember name and placing it in a directory which has been added to your OS's <code>PATH</code> environment variable is a simple way of achieving pretty much the same thing for the majority of automation tasks!</p>
<p>First create a new folder called something like <code>useful-scripts</code> in your home directory:</p>
<h6 id="windows">Windows</h6>
<pre><code>mkdir %userprofile%\useful-scripts
</code></pre>
<h6 id="unix">Unix</h6>
<pre><code>mkdir $HOME/useful-scripts
</code></pre>
<p>This will create a new directory called <code>useful-scripts</code> under...</p>
<ul>
<li><code>/users/<username>/useful-scripts</code> on Linux or macOS</li>
<li><code>C:\Users\<username>\useful-scripts</code> on Windows</li>
</ul>
<p>Then add this path to your <code>PATH</code> environment variable and voilà, any shell or batch script placed inside this folder will be available as if it were a command itself.</p>
<h4 id="path-on-windows">PATH on Windows</h4>
<p>Note that when you're changing the <code>PATH</code> variable on Windows, the <code>set PATH=%PATH%;C:\Users\<username>\useful-scripts</code> command will only set the path for the current terminal session and the change will not persist beyond it. On Windows 7 or later you can use the <code>setx</code> command to permanently set the <code>PATH</code> variable, however it's not very intuitive, because the command <code>setx PATH %PATH%;C:\Users\<username>\useful-scripts</code> will merge all values from your system wide <code>PATH</code> into your user specific <code>PATH</code> variable. Using <code>setx /M</code> will do the opposite and merge your user specific <code>PATH</code> values into the system wide <code>PATH</code> variable. Neither is recommended as it may cause unwanted side effects.</p>
<p>The GUI still remains the easiest way of permanently editing your <code>PATH</code> variable in Windows today.</p>
<p>Alternatively you can use the following PowerShell script too:</p>
<pre><code>$currentPath = [Environment]::GetEnvironmentVariable("Path", "Machine")
$usefulScripts = "C:\Users\<username>\useful-scripts"
[Environment]::SetEnvironmentVariable("Path", $currentPath + ";" + $usefulScripts, "Machine")
</code></pre>
<h4 id="path-on-linux-and-macos">PATH on Linux and macOS</h4>
<p>Adding <code>/users/<username>/useful-scripts</code> to the <code>PATH</code> environment variable on Linux or macOS is fairly straightforward. You can either <a href="https://stackabuse.com/how-to-permanently-set-path-in-linux/">add it to the profile of your preferred shell</a> or set it system wide by adding it to <code>/etc/profile.d</code> on Linux or <code>/etc/paths.d</code> on macOS.</p>
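<p>For a per-user setup this boils down to a single line in your shell profile. A minimal sketch for bash or zsh (assuming the <code>useful-scripts</code> folder from earlier):</p>

```shell
# Make sure the directory exists, then append it to PATH
# for the current session
mkdir -p "$HOME/useful-scripts"
export PATH="$PATH:$HOME/useful-scripts"

# To persist the change, add the export line above
# to ~/.bashrc or ~/.zshrc
```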
<h2 id="useful-scripts">Useful Scripts</h2>
<p>Now let's explore some example scripts which might be useful to add to your newly created directory.</p>
<h3 id="switching-kubectl-context">Switching kubectl context</h3>
<p>A command which I run frequently is the one that authenticates the <code>kubectl</code> CLI with a specific Kubernetes cluster. I manage more than one Kubernetes cluster in Microsoft Azure as well as a few other ones in the Google Cloud Platform. Authenticating with an AKS cluster requires running the following command:</p>
<pre><code>az aks get-credentials --name <cluster-name> --resource-group <resource-group-name>
</code></pre>
<p>Authenticating with a cluster running inside the GKE requires a similar command:</p>
<pre><code>gcloud container clusters get-credentials <cluster-name> --region <region-name>
</code></pre>
<p>If you manage multiple AKS clusters under different Azure subscriptions, or multiple GKE clusters under different Google Cloud projects then you'll also need to authenticate with the desired subscription/project before executing the <code>get-credentials</code> command:</p>
<pre><code>az account set --subscription <subscription-name>
gcloud config set project <gcp-project-name>
</code></pre>
<p>Having to type out all of these commands every single time you want to connect to a specific cluster is a very tedious task and a good opportunity to shorten them into a more memorable abbreviation.</p>
<p>For example I'd like to only have to type <code>gcp-cluster-1</code> if I want to authenticate with <code>cluster-1</code> in GCP and only have to type <code>aks-cluster-a</code> when I'd like to authenticate with <code>cluster-a</code> in Azure.</p>
<p>Creating the following scripts inside <code>useful-scripts</code> will exactly enable this:</p>
<h6 id="usersltusernamegtuseful-scriptsgcp-cluster-1">/users/<username>/useful-scripts/gcp-cluster-1</h6>
<pre><code>gcloud config set project <project-1>
gcloud container clusters get-credentials <cluster-1> --region <region-name>
</code></pre>
<h6 id="usersltusernamegtuseful-scriptsgcp-cluster-2">/users/<username>/useful-scripts/gcp-cluster-2</h6>
<pre><code>gcloud config set project <project-2>
gcloud container clusters get-credentials <cluster-2> --region <region-name>
</code></pre>
<h6 id="usersltusernamegtuseful-scriptsaks-cluster-a">/users/<username>/useful-scripts/aks-cluster-a</h6>
<pre><code>az account set --subscription <subscription-name>
az aks get-credentials --name <cluster-1> --resource-group <resource-group-name>
</code></pre>
<p>Note that on Windows you'll have to add the <code>.bat</code> file extension for the scripts to work, and on macOS or Linux you'll have to give execute permission to the newly created scripts:</p>
<pre><code>chmod +x /users/<username>/useful-scripts/*
</code></pre>
<h3 id="cleaning-up-docker-containers-and-images">Cleaning up Docker containers and images</h3>
<p>Another immensely useful script is one which cleans up old Docker containers and removes unused Docker images from your machine. If you frequently create and run Docker images locally then you will quickly accumulate many unused containers and unused or outdated images over time.</p>
<p>In older versions of Docker it was a bit of a mouthful to find and remove old Docker containers and images, but even with the latest Docker APIs and the availability of the <code>prune</code> command it's still nice to condense them into a single command:</p>
<h6 id="delete-old-containers-before-docker-113">Delete old containers (before Docker 1.13)</h6>
<pre><code>docker rm $(docker ps -a -q)
</code></pre>
<h6 id="delete-untagged-images-before-docker-113">Delete untagged images (before Docker 1.13)</h6>
<pre><code>docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')
</code></pre>
<h6 id="delete-old-containers-docker--113">Delete old containers (Docker >= 1.13)</h6>
<pre><code>docker container prune -f
</code></pre>
<h6 id="delete-untagged-images-docker--113">Delete untagged images (Docker >= 1.13)</h6>
<pre><code>docker image prune -a -f
</code></pre>
<p>It can be more convenient to put both statements into a single <code>docker-cleanup</code> script:</p>
<pre><code>docker container prune -f
docker image prune -a -f
</code></pre>
<h3 id="output-all-custom-scripts">Output all custom scripts</h3>
<p>Creating these sorts of helper scripts is a nice way of simplifying and speeding up everyday workflows. One script which I always like to include is an <code>sos</code> script which outputs all of the other scripts, in case I ever forget any of them :)</p>
<h6 id="sosbat-on-windows">sos.bat on Windows</h6>
<pre><code>dir %USERPROFILE%\useful-scripts
</code></pre>
<h6 id="sos-on-linux-or-macos">sos on Linux or macOS</h6>
<pre><code>ls $HOME/useful-scripts
</code></pre>
<p>This will enable me to run <code>sos</code> in order to get a list of all other available "commands".</p>
https://dusted.codes/automate-frequently-used-cli-commands
[email protected] (Dustin Moris Gorski)https://dusted.codes/automate-frequently-used-cli-commands#disqus_threadTue, 02 Jun 2020 00:00:00 +0000https://dusted.codes/automate-frequently-used-cli-commandsdevopskubernetesdockerproductivityTips and tricks for ASP.NET Core applications<p>This is a small collection of tips and tricks which I find myself repeating in every ASP.NET Core application. There's nothing groundbreaking in this list, just some general advice and minor tricks which I have picked up over the course of several real world applications.</p>
<h2 id="logging">Logging</h2>
<p>Let's begin with logging. There are many logging frameworks available for .NET Core, but my absolute favourite is <a href="https://serilog.net/">Serilog</a> which offers a very nice structured logging interface for a <a href="https://github.com/serilog/serilog/wiki/Provided-Sinks">vast number of available storage providers</a> (sinks).</p>
<h3 id="tip-1-configure-logging-before-anything-else">Tip 1: Configure logging before anything else</h3>
<p>The logger should be the very first thing configured in an ASP.NET Core application. Everything else should be wrapped in a try-catch block:</p>
<pre><code>public class Program
{
public static int Main(string[] args) => StartWebServer(args);
public static int StartWebServer(string[] args)
{
Log.Logger =
new LoggerConfiguration()
.MinimumLevel.Warning()
.Enrich.WithProperty("Application", "MyApplicationName")
.WriteTo.Console()
.CreateLogger();
try
{
WebHost.CreateDefaultBuilder(args)
.UseSerilog()
.UseKestrel(k => k.AddServerHeader = false)
.UseContentRoot(Directory.GetCurrentDirectory())
.UseStartup<Startup>()
.Build()
.Run();
return 0;
}
catch (Exception ex)
{
Log.Fatal(ex, "Host terminated unexpectedly.");
return -1;
}
finally
{
Log.CloseAndFlush();
}
}
}
</code></pre>
<h3 id="tip-2-flush-the-logger-before-the-application-terminates">Tip 2: Flush the logger before the application terminates</h3>
<p>Make sure to put <code>Log.CloseAndFlush();</code> into the <code>finally</code> block of your try-catch block so that no log data is getting lost when the application terminates before all logs have been written to the log stream.</p>
<h3 id="tip-3-enrich-your-log-entries">Tip 3: Enrich your log entries</h3>
<p>Configure your logger to automatically decorate every log entry with an <code>Application</code> property which contains a unique identifier for your application (typically a human readable name which identifies your app):</p>
<pre><code>.Enrich.WithProperty("Application", "MyApplicationName")
</code></pre>
<p>This is extremely useful if you write logs from more than one application into a single log stream (e.g. a single Elasticsearch database). Personally I prefer to write logs from multiple (smaller) services of a coherent system into a single logging database and filter logs by properties.</p>
<p>Appending an additional <code>Application</code> property to all your application logs has the advantage that one can easily filter and view the overall health of a single application as well as get a holistic view of the entire system.</p>
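<p>For illustration, a single entry in such a shared log stream might look roughly like this once the enriched properties are serialised (an invented example; actual field names depend on the sink and formatter):</p>

```json
{
  "timestamp": "2019-02-28T10:15:30.000Z",
  "level": "Warning",
  "message": "Payment provider responded slowly",
  "properties": {
    "Application": "MyApplicationName"
  }
}
```

<p>Filtering the stream on <code>properties.Application</code> then yields the per-application view described above.</p>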
<p>Other really useful information which could be appended to your log entries is the application version and the environment name:</p>
<pre><code>Log.Logger =
new LoggerConfiguration()
.MinimumLevel.Warning()
.Enrich.WithProperty("Application", "MyApplicationName")
.Enrich.WithProperty("ApplicationVersion", "<version number>")
.Enrich.WithProperty("EnvironmentName", "Staging")
.WriteTo.Console()
.CreateLogger();
</code></pre>
<p>This will allow one to better visualise whether issues were resolved (or appeared) after a certain version was deployed and it will also make it very easy to filter out any logs which might have accidentally been written from a different environment (e.g. a developer debugging locally with the production connection string in their settings).</p>
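<p>Rather than hard-coding the version and environment name, both values can be derived at runtime. A minimal sketch (how these values are obtained here is just one option, not taken from the original configuration; the property names match the configuration above):</p>

```csharp
using System;
using System.Reflection;

// Derive the enrichment values at runtime instead of hard-coding them
Func<string> applicationVersion = () =>
    Assembly.GetEntryAssembly()?.GetName().Version?.ToString() ?? "unknown";

Func<string> environmentName = () =>
    Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";

// These would slot into the Serilog configuration like so:
//   .Enrich.WithProperty("ApplicationVersion", applicationVersion())
//   .Enrich.WithProperty("EnvironmentName", environmentName())
Console.WriteLine($"Version: {applicationVersion()}, Environment: {environmentName()}");
```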
<h2 id="startup-configuration">Startup Configuration</h2>
<p>In ASP.NET Core there are two main places where features and functionality get configured. First there is the <code>Configure</code> method which can be used to plug middleware into the ASP.NET Core pipeline and secondly there is the <code>ConfigureServices</code> method to register dependencies.</p>
<p>For example adding <a href="https://github.com/domaindrivendev/Swashbuckle.AspNetCore">Swagger</a> to ASP.NET Core would look a bit like this:</p>
<pre><code>public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();
services.AddSwaggerGen(
c =>
{
var name = "<my app name>";
var version = "v1";
c.SwaggerDoc(
version,
new Info { Version = version, Title = name });
c.DescribeAllEnumsAsStrings();
var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
c.IncludeXmlComments(xmlPath);
});
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app .UseSwagger()
.UseSwaggerUI(
c =>
{
var name = "<my app name>";
var version = "v1";
c.RoutePrefix = "";
c.SwaggerEndpoint(
"/swagger/{version}/swagger.json", name);
}
)
.UseMvc();
}
</code></pre>
<p>Middleware and dependencies are obviously two different things and therefore their configuration is split into two different methods, but from a developer's point of view it is very annoying that most features have to be configured in more than one place.</p>
<h3 id="tip-4-create-config-classes">Tip 4: Create 'Config' classes</h3>
<p>One nice way to combat this is by creating a <code>Config</code> folder in the root of your ASP.NET Core application and create <code><FeatureName>Config</code> classes for each feature/functionality which needs to be registered in <code>Startup</code>:</p>
<pre><code>public static class SwaggerConfig
{
private static string Name => "My Cool API";
private static string Version => "v1";
private static string Endpoint => $"/swagger/{Version}/swagger.json";
private static string UIEndpoint => "";
public static void SwaggerUIConfig(SwaggerUIOptions config)
{
config.RoutePrefix = UIEndpoint;
config.SwaggerEndpoint(Endpoint, Name);
}
public static void SwaggerGenConfig(SwaggerGenOptions config)
{
config.SwaggerDoc(
Version,
new Info { Version = Version, Title = Name });
config.DescribeAllEnumsAsStrings();
var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
config.IncludeXmlComments(xmlPath);
}
}
</code></pre>
<p>By doing this one can move all related configuration of a feature into a single place and also nicely distinguish between the individual configuration steps (e.g. <code>SwaggerUIConfig</code> vs <code>SwaggerGenConfig</code>).</p>
<p>Afterwards one can tidy up the <code>Startup</code> class by invoking the respective class methods:</p>
<pre><code>public void ConfigureServices(IServiceCollection services)
{
services
.AddMvc(MvcConfig.AddFilters)
.AddJsonOptions(MvcConfig.JsonOptions);
services.AddSwaggerGen(SwaggerConfig.SwaggerGenConfig);
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app .UseSwagger()
.UseSwaggerUI(SwaggerConfig.SwaggerUIConfig)
.UseMvc();
}
</code></pre>
<h3 id="tip-5-extension-methods-for-conditional-configurations">Tip 5: Extension methods for conditional configurations</h3>
<p>Another common use case is to configure different features based on the current environment or other conditional cases:</p>
<pre><code>public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
}
</code></pre>
<p>A neat trick which I like to apply here is to implement an extension method for conditional configurations:</p>
<pre><code>public static class ApplicationBuilderExtensions
{
public static IApplicationBuilder When(
this IApplicationBuilder builder,
bool predicate,
Func<IApplicationBuilder> compose) => predicate ? compose() : builder;
}
</code></pre>
<p>The <code>When</code> extension method will invoke a <code>compose</code> function only if a given <code>predicate</code> is true.</p>
<p>Now with the <code>When</code> method one can set up conditional middleware in a much nicer and more fluent way:</p>
<pre><code>public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app .When(env.IsDevelopment(), app.UseDeveloperExceptionPage)
.When(!env.IsDevelopment(), app.UseHsts)
.UseSwagger()
.UseSwaggerUI(SwaggerConfig.SwaggerUIConfig)
.UseMvc();
}
</code></pre>
<h2 id="exit-scenarios">Exit scenarios</h2>
<h3 id="tip-6-dont-forget-to-return-a-default-404-response">Tip 6: Don't forget to return a default 404 response</h3>
<p>Don't forget to register a middleware which will return a <code>404 Not Found</code> HTTP response if no other middleware was able to deal with an incoming request:</p>
<pre><code>public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app .When(env.IsDevelopment(), app.UseDeveloperExceptionPage)
.When(!env.IsDevelopment(), app.UseHsts)
.UseSwagger()
.UseSwaggerUI(SwaggerConfig.SwaggerUIConfig)
.UseMvc()
.Run(NotFoundHandler);
}
private readonly RequestDelegate NotFoundHandler =
async ctx =>
{
ctx.Response.StatusCode = 404;
await ctx.Response.WriteAsync("Page not found.");
};
</code></pre>
<p>If you don't do this then a request which couldn't be matched by any middleware will be left unhandled (unless you have another web server sitting behind Kestrel).</p>
<h3 id="tip-7-return-non-zero-exit-code-on-failure">Tip 7: Return a non-zero exit code on failure</h3>
<p>Return a non-zero exit code when the application terminates with an error. This will allow parent processes to pick up the fact that the application terminated unexpectedly and give them a chance to handle such a situation more gracefully (e.g. when your ASP.NET Core application is run in a Kubernetes cluster):</p>
<pre><code>try
{
// Start WebHost
return 0;
}
catch (Exception ex)
{
Log.Fatal(ex, "Host terminated unexpectedly.");
return -1;
}
finally
{
Log.CloseAndFlush();
}
</code></pre>
<h2 id="error-handling">Error Handling</h2>
<p>Every ASP.NET Core application is likely going to have to deal with at least three types of errors:</p>
<ul>
<li>Server errors</li>
<li>Client errors</li>
<li>Business logic errors</li>
</ul>
<p>Server errors are unexpected exceptions which get thrown by an application. <a href="https://dusted.codes/error-handling-in-aspnet-core">Normally these exceptions bubble up to a global error handler</a> which will log the exception and return a <code>500 Internal Server Error</code> response to the client.</p>
<p>Client errors are mistakes which a client can make when sending a request to the server. These normally include things like missing or wrong authentication data, badly formatted request bodies, calling endpoints which do not exist or perhaps sending data in an unsupported format. Most of these errors will get picked up by a built-in ASP.NET Core feature which will return a corresponding <code>4xx</code> HTTP error back to the client.</p>
<p>Business logic errors are application specific errors which are not handled by ASP.NET Core by default because they are very unique to each individual application. For example an invoicing application might want to throw an exception when a customer tries to raise an invoice with an unsupported currency whereas an online gaming application might want to throw an error when a user ran out of credits.</p>
<p>These errors are often raised from lower level domain code and might want to return a specific <code>4xx</code> or <code>5xx</code> HTTP response back to the client.</p>
<h3 id="tip-8-create-a-base-exception-type-for-domain-errors">Tip 8: Create a base exception type for domain errors</h3>
<p>Create a base exception class for business or domain errors and additional exception classes which derive from the base class for all possible error cases:</p>
<pre><code>public enum DomainErrorCode
{
InsufficientCredits = 1000
}
public class DomainException : Exception
{
public readonly DomainErrorCode ErrorCode;
public DomainException(DomainErrorCode code, string message) : base(message)
{
ErrorCode = code;
}
}
public class InsufficientCreditsException : DomainException
{
public InsufficientCreditsException()
: base(DomainErrorCode.InsufficientCredits,
"User ran out of free credit. Please upgrade your plan to continue using our service.")
{ }
}
</code></pre>
<p>Include a unique <code>DomainErrorCode</code> for each custom exception type which later can be used to identify the specific error case from higher level code.</p>
<p>Afterwards one can use the newly created exception classes to throw more meaningful errors from inside the domain layer:</p>
<pre><code>throw new InsufficientCreditsException();
</code></pre>
<p>This now has the benefit that the ASP.NET Core application can look for domain exceptions from a central point (e.g. custom error middleware) and handle them accordingly:</p>
<pre><code>public class DomainErrorHandlerMiddleware
{
private readonly RequestDelegate _next;
public DomainErrorHandlerMiddleware(RequestDelegate next)
{
_next = next ?? throw new ArgumentNullException(nameof(next));
}
public async Task InvokeAsync(HttpContext ctx)
{
try
{
await _next(ctx);
}
catch(DomainException ex)
{
ctx.Response.StatusCode = 422;
await ctx.Response.WriteAsync($"{ex.ErrorCode}: {ex.Message}");
}
}
}
</code></pre>
<p>Because every domain exception includes a unique <code>DomainErrorCode</code> the generic error handler can even implement a slightly different response based on the given domain error.</p>
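<p>To make that concrete, the catch block could translate specific error codes into different HTTP status codes before falling back to a generic one. A minimal sketch working with the integer values behind <code>DomainErrorCode</code>; which code maps to which status is an invented, application-level choice:</p>

```csharp
using System;
using System.Collections.Generic;

// Hypothetical mapping from domain error codes to HTTP status codes;
// the concrete mapping is invented for illustration only
var statusCodes = new Dictionary<int, int>
{
    [1000] = 402  // InsufficientCredits -> 402 Payment Required
};

// Unmapped domain errors fall back to 422 Unprocessable Entity,
// matching the middleware above
int ToStatusCode(int domainErrorCode) =>
    statusCodes.TryGetValue(domainErrorCode, out var status) ? status : 422;

Console.WriteLine(ToStatusCode(1000)); // prints 402
Console.WriteLine(ToStatusCode(2000)); // prints 422
```

<p>Inside the middleware's catch block this would be used along the lines of <code>ctx.Response.StatusCode = ToStatusCode((int)ex.ErrorCode);</code>.</p>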
<p>This architecture has a few benefits:</p>
<ul>
<li>The domain layer can throw meaningful exceptions</li>
<li>The domain layer works nicely with the higher level web layer without tight coupling</li>
<li>Domain exceptions are clearly distinguishable from other errors</li>
<li>Domain exceptions are self documenting</li>
<li>The web layer can handle all domain errors in a unified way without having to replicate the same try-catch block across multiple controllers</li>
<li>The additional error code in the response can be parsed and understood by third party clients</li>
<li>The custom exception types can be easily documented through Swagger</li>
</ul>
<h3 id="tip-9-expose-an-endpoint-which-returns-all-error-codes">Tip 9: Expose an endpoint which returns all error codes</h3>
<p>If you followed tip 8 and implemented a custom exception type with a unique error code for each error case then it can be extremely handy to expose all possible error codes through a single API endpoint. This will allow third party clients to quickly retrieve a list of the latest possible error codes and their meaning:</p>
<pre><code>[HttpGet("/error-codes")]
public ActionResult<IDictionary<int, string>> ErrorCodes()
{
var values = Enum
.GetValues(typeof(DomainErrorCode))
.Cast<DomainErrorCode>();
var result = new Dictionary<int, string>();
foreach(var v in values)
result.Add((int)v, v.ToString());
return result;
}
</code></pre>
<h2 id="other-tips-amp-tricks">Other Tips & Tricks</h2>
<h3 id="tip-10-expose-a-version-endpoint">Tip 10: Expose a version endpoint</h3>
<p>Another really useful thing to have in an API (or website) is a version endpoint. Often it can be extremely helpful for customer support staff, QA or other members of a team to quickly establish which version of an application is deployed to an environment.</p>
<p>This version is different from the customer facing API version, which often only includes the major version number (e.g. https://my-api.com/v3/some/resource).</p>
<p>Exposing an endpoint which displays the current application version and the build date and time is a nice way of quickly making this information accessible to relevant people:</p>
<pre><code>[HttpGet("/info")]
public ActionResult<string> Info()
{
var assembly = typeof(Startup).Assembly;
var creationDate = File.GetCreationTime(assembly.Location);
var version = FileVersionInfo.GetVersionInfo(assembly.Location).ProductVersion;
return Ok($"Version: {version}, Last Updated: {creationDate}");
}
</code></pre>
<h3 id="tip-11-remove-the-server-http-header">Tip 11: Remove the 'Server' HTTP header</h3>
<p>Whilst configuring their ASP.NET Core application one might as well remove the <code>Server</code> HTTP header from every HTTP response by deactivating that setting in Kestrel:</p>
<pre><code>.UseKestrel(k => k.AddServerHeader = false)
</code></pre>
<h3 id="tip-12-working-with-null-collections">Tip 12: Working with Null Collections</h3>
<p>My last tip on this list is not specific to ASP.NET Core but applies to all .NET Core development wherever a collection or <code>IEnumerable</code> type is being used.</p>
<p>How often do .NET developers write something like this:</p>
<pre><code>var someCollection = GetSomeCollectionFromSomewhere();
if (someCollection != null && someCollection.Count > 0)
{
foreach(var item in someCollection)
{
// Do stuff
}
}
</code></pre>
<p>Adding a one-line extension method can massively simplify the above code across an entire application:</p>
<pre><code>public static class EnumerableExtensions
{
public static IEnumerable<T> OrEmptyIfNull<T>(this IEnumerable<T> source) =>
source ?? Enumerable.Empty<T>();
}
</code></pre>
<p>Now the above <code>if</code> statement can be reduced to a single loop like this:</p>
<pre><code>var someCollection = GetSomeCollectionFromSomewhere();
foreach(var item in someCollection.OrEmptyIfNull())
{
// Do stuff
}
</code></pre>
<p>Or convert the <code>IEnumerable</code> to a <code>List</code> and use the <code>List<T>.ForEach</code> method to turn this into a one-liner:</p>
<pre><code>someCollection.OrEmptyIfNull().ToList().ForEach(i => i.DoSomething());
</code></pre>
<h2 id="what-tips-and-tricks-do-you-have">What tips and tricks do you have?</h2>
<p>So this is it, this was my brief post on some tips and tricks which I like to apply in my personal ASP.NET Core development. I hope this was at least somewhat useful to someone?! Let me know what you think and please feel free to share your own tips and tricks which make your ASP.NET Core development life easier in the comments below!</p>
https://dusted.codes/advanced-tips-and-tricks-for-aspnet-core-applications
[email protected] (Dustin Moris Gorski)https://dusted.codes/advanced-tips-and-tricks-for-aspnet-core-applications#disqus_threadThu, 28 Feb 2019 00:00:00 +0000https://dusted.codes/advanced-tips-and-tricks-for-aspnet-core-applicationsaspnet-coreloggingarchitecturemvcWhy you should learn F#<p>If you were thinking of learning a new programming language in 2019 then I would highly recommend having a close look at F#. No matter if you are already a functional developer from a different community (Haskell, Clojure, Scala, etc.) or a complete newbie to functional programming (like I was 3 years ago), I think F# can equally impress you. F# is a <a href="https://dotnet.microsoft.com/languages/fsharp">functional first language</a>. This means it is not a pure functional language but it is heavily geared towards the <a href="https://en.wikipedia.org/wiki/Functional_programming">functional programming paradigm</a>. However, because F# is also part of the <a href="https://dotnet.microsoft.com/languages">.NET language family</a> it is equally well equipped to write object oriented code. Secondly F# is - contrary to common belief - an extremely well designed <a href="https://fsharpforfunandprofit.com/why-use-fsharp/">general purpose language</a>. This means that F# is not only good for all sorts of "mathematical" stuff, but for so much more. Without doubt F# is, like most other functional (algebraic) languages, greatly suited to this kind of work, but such work is certainly not the main focus of F#'s creators, nor a very common use case amongst the people I know who work with F#. So what is F# really good for? Well, the honest answer is almost anything! F# is an extremely pragmatic, expressive, statically typed programming language.
Whether you want to build a distributed real time application, a service oriented web backend, a fancy looking single page app, mobile games, a line of business application or the next big social internet flop, F# will satisfy most if not all of your needs. As a matter of fact F# is probably a much better language for these types of applications than let's say Python, Java or C#. If you don't believe me then please continue reading and hopefully I will have convinced you by the end of this post!</p>
<h2 id="table-of-contents">Table of contents</h2>
<ul>
<li><a href="#domain-driven-development">Domain Driven Development</a></li>
<li><a href="#immutability-and-lack-of-nulls">Immutability and lack of Nulls</a></li>
<li><a href="#solid-made-easy-in-fsharp">SOLID made easy in F#</a></li>
<li><a href="#simplicity">Simplicity</a></li>
<li><a href="#asynchronous-programming">Asynchronous programming</a></li>
<li><a href="#net-core">.NET Core</a></li>
<li><a href="#open-source">Open Source</a></li>
<li><a href="#tooling">Tooling</a></li>
<li><a href="#fsharp-conquering-the-web">F# conquering the web</a></li>
<li><a href="#fsharp-everywhere">F# Everywhere</a></li>
<li><a href="#final-words">Final Words</a></li>
<li><a href="#useful-resources">Useful Resources</a></li>
</ul>
<h2 id="domain-driven-development">Domain Driven Development</h2>
<p>Before I started to write this article I asked myself why I like F# so much. Many reasons came to mind, but the one which really stood out was F#'s great capability for modelling a domain. After all, the majority of our work as software developers is to model real world processes into a digital abstraction of them. A language which makes this kind of work feel almost natural is immensely valuable and should not be missed.</p>
<p>Let's look at some code examples to demonstrate what I mean. For this task and for the rest of this blog post I'll be comparing F# with C# in order to show some of the benefits. I've chosen C# because many developers consider it one of the best object oriented languages, and mainly because C# is the language I am most proficient in myself.</p>
<h3 id="identifying-bad-design">Identifying bad design</h3>
<p>A common use case in a modern application is to read a customer object from a database. In C# this would look something like this:</p>
<pre><code>public Customer GetCustomerById(string customerId)
{
    // do stuff...
}
</code></pre>
<p>I have purposefully omitted the internals of this method, because from a caller's point of view the signature of a method is often all we know. Even though this operation is so simple (and very familiar) there are still a lot of unknowns around it:</p>
<ul>
<li>Which values are accepted for the <code>customerId</code>? Can it be an empty string? Probably not, but will it instantly throw an <code>ArgumentException</code> or still try to fetch some user data?</li>
<li>Does the ID follow a specific format? What if the <code>customerId</code> has the correct format but is all upper case? Is it case sensitive or will the method normalise the string anyway?</li>
<li>What happens if a given <code>customerId</code> doesn't exist? Will it return <code>null</code> or throw an <code>Exception</code>? There's no way to find out without checking the internal implementation of this method (docs, decompilation, GitHub, etc.) or by testing against all sorts of input.</li>
<li>What happens if the database connection is down? Will it return the same result as if the customer didn't exist or will it throw a different type of exception?</li>
<li>How many different exception types will this code throw anyway?</li>
</ul>
<p>The signature of this method answers none of these questions. This is pretty poor given that the sole purpose of a method's signature is to define a clear contract between the caller and the method itself. Of course there are many conventions which make C# developers feel safe, mostly by making broad assumptions about the underlying code, but at the end of the day these are only assumptions which can (and eventually will) lead to severe errors. If a library deviates only slightly from an established convention then there is a high chance of a bug catching its consumers out later.</p>
<p>If anything, conventions are rather weak workarounds for missing language features. Just like C# is often seen as a better language than JavaScript because of its static typing, many functional programming languages are seen as superior to C#, Java and others because of their domain modelling features.</p>
<p>There are ways of improving this code in C#, but none of them is particularly straightforward and most are rather cumbersome, which is why there is still plenty of code written like the example above.</p>
<h3 id="f-makes-correct-code-easy">F# makes correct code easy</h3>
<p>F#, on the other hand, has a rich type system which allows developers to express every possible outcome of a function. If a function might or might not return a <code>Customer</code> object then the function can return a value of type <code>Option<'T></code>.</p>
<p>The <code>Option<'T></code> type defines a return value which can either be something or nothing:</p>
<pre><code>let getCustomerById customerId =
    match db.TryFindCustomerById customerId with
    | true, customer -> Some customer
    | false, _ -> None
</code></pre>
<p>It is important to understand that <code>None</code> is not another way of saying <code>null</code>, because <code>null</code> is truly nothing (there is nothing allocated in memory), whereas <code>None</code> is an actual object/case of type <code>Option<'T></code>.</p>
<p>In this example the <code>TryFindCustomerById</code> method is a typical .NET member which has an <code>out</code> parameter defined like this:</p>
<pre><code>bool TryFindCustomerById(string customerId, out Customer customer)
</code></pre>
<p>In F# you can use simple pattern matching to extract the <code>out</code> parameter on success:</p>
<pre><code>match db.TryFindCustomerById customerId with
| true, customer -> Some customer
| false, _ -> None
</code></pre>
<p>The benefit of the <code>Option<'T></code> type is not only that it is more expressive (and therefore more honest about the true state of the function), but also that it forces the calling code to implement the case of <code>None</code>, which means that a developer has to think of this edge case straight from the beginning:</p>
<pre><code>let someOtherFunction customerId =
    match getCustomerById customerId with
    | Some customer -> () // Do something when the customer exists
    | None -> ()          // Do something when the customer doesn't exist
</code></pre>
<p>Another extremely useful type which comes with F# is the <code>Result<'T,'TError></code> type:</p>
<pre><code>let validateCustomerId customerId =
    match customerId with
    | null -> Error "Customer ID cannot be null."
    | "" -> Error "Customer ID cannot be empty."
    | id when id.Length <> 10 -> Error "Invalid Customer ID."
    | _ -> Ok (customerId.ToLower())
</code></pre>
<p>The <code>validateCustomerId</code> function will either return <code>Ok</code> with a normalised <code>customerId</code> or an <code>Error</code> containing a relevant error message. In this example <code>'T</code> and <code>'TError</code> are both of type <code>string</code>, but they don't need to be the same type, and you can even combine multiple types into a much richer return value such as <code>Result<Option<Customer>, string list></code>.</p>
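<p>To illustrate how such a richer return value composes, here is a small self-contained sketch of my own (the <code>Customer</code> record and the <code>lookup</code> parameter are hypothetical, not part of the original example) which produces a <code>Result<Option<Customer>, string list></code>:</p>
<pre><code>// Hypothetical customer record for illustration purposes
type Customer = { Id : string; Name : string }

// Same validation as before, but collecting errors in a string list
let validateCustomerId (customerId : string) =
    match customerId with
    | null -> Error [ "Customer ID cannot be null." ]
    | "" -> Error [ "Customer ID cannot be empty." ]
    | id when id.Length <> 10 -> Error [ "Invalid Customer ID." ]
    | _ -> Ok (customerId.ToLower())

// Result<Option<Customer>, string list>:
//   Ok (Some c) -> customer found
//   Ok None     -> valid ID, but no such customer
//   Error msgs  -> the ID itself was invalid
let findCustomer (lookup : string -> Customer option) customerId =
    match validateCustomerId customerId with
    | Error msgs -> Error msgs
    | Ok id -> Ok (lookup id)
</code></pre>
<p>An <code>Ok None</code> result now unambiguously means "the ID was valid, but no such customer exists", while <code>Error</code> carries one or more validation messages.</p>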
<p>The type system in F# allows for even more flexibility. One can easily create a new type which truly represents all possible outcomes of a function like <code>getCustomerById</code>:</p>
<pre><code>type DataResult<'T> =
    | Success of 'T option
    | ValidationError of string
    | DataError of Exception

let getCustomerById customerId =
    try
        match validateCustomerId customerId with
        | Error msg -> ValidationError msg
        | Ok id ->
            let customer =
                match db.TryFindCustomerById id with
                | true, customer -> Some customer
                | false, _ -> None
            Success customer
    with ex -> DataError ex
</code></pre>
<p>The custom defined <code>DataResult<'T></code> type declares three distinctive cases which the calling code might want to treat differently. By explicitly declaring a type which represents all these possibilities we can model the <code>getCustomerById</code> function in such a way that it removes all ambiguity about error- and edge case handling as well as preventing unexpected behaviour and forcing calling code to handle these cases.</p>
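<p>The consuming side could look like the following sketch (the <code>describe</code> function and its messages are my own illustration, assuming <code>DataResult<string></code> for brevity). Because <code>DataResult<'T></code> is a union type, the compiler warns if any of the three cases is left unhandled:</p>
<pre><code>open System

type DataResult<'T> =
    | Success of 'T option
    | ValidationError of string
    | DataError of Exception

// Hypothetical consumer which must handle every case explicitly
let describe (result : DataResult<string>) =
    match result with
    | Success (Some name) -> sprintf "Found customer: %s" name
    | Success None -> "Customer does not exist."
    | ValidationError msg -> sprintf "Invalid request: %s" msg
    | DataError ex -> sprintf "Data access failed: %s" ex.Message
</code></pre>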
<h3 id="f-makes-invalid-state-impossible">F# makes invalid state impossible</h3>
<p>So far we have always assumed that the <code>customerId</code> is a value of type <code>string</code>, but as we've seen this creates a lot of ambiguity around the allowed values for it and also forces a developer to write a lot of guard clauses to protect themselves from errors:</p>
<pre><code>public Customer GetCustomerById(string customerId)
{
    if (customerId == null)
        throw new ArgumentNullException(nameof(customerId));

    if (customerId == "")
        throw new ArgumentException(
            "Customer ID cannot be empty.", nameof(customerId));

    if (customerId.Length != 10 || customerId.ToLower().StartsWith("c"))
        throw new ArgumentException(
            "Invalid customer ID", nameof(customerId));

    // do stuff...
}
</code></pre>
<p>The correct way of avoiding this anti-pattern is to model the concept of a <code>CustomerId</code> as its own type. In C# you can either create a <code>class</code> or a <code>struct</code> to do so, but either way you'll end up writing a lot of boilerplate code to get the type to behave the way it should (e.g. <code>GetHashCode</code>, equality, <code>ToString</code>, etc.):</p>
<pre><code>public class CustomerId
{
    public string Value { get; }

    public CustomerId(string customerId)
    {
        if (customerId == null)
            throw new ArgumentNullException(nameof(customerId));

        if (customerId == "")
            throw new ArgumentException(
                "Customer ID cannot be empty.",
                nameof(customerId));

        var value = customerId.ToLower();

        if (value.Length != 10 || value.StartsWith("c"))
            throw new ArgumentException(
                "Invalid customer ID",
                nameof(customerId));

        Value = value;
    }

    // Lots of overrides to make a
    // CustomerId behave the correct way
}
</code></pre>
<p>Needless to say this is extremely annoying, which is exactly why it is so rarely seen in C#. A class is also less favourable, because code which accepts a <code>CustomerId</code> still has to deal with the possibility of <code>null</code>, even though a null customer ID is not really a thing. A <code>CustomerId</code> should never be <code>null</code>, just like an <code>int</code>, a <code>Guid</code> or a <code>DateTime</code> can never be <code>null</code>. Once you've finished implementing a correct <code>CustomerId</code> type in C# you'll end up with 200 lines of code which themselves open up a lot of room for further errors.</p>
<p>In F# we can define a new type as easily as this:</p>
<pre><code>type CustomerId = private CustomerId of string
</code></pre>
<p>This version of <code>CustomerId</code> is basically a wrapper around <code>string</code> which provides additional type safety, because one cannot accidentally pass a plain <code>string</code> into a parameter of type <code>CustomerId</code> or vice versa.</p>
<p>The <code>private</code> access modifier prevents code in other modules or namespaces from creating an object of type <code>CustomerId</code> directly. This is intentional, because now we can force creation to go through a dedicated function:</p>
<pre><code>module CustomerId =
    let create (customerId : string) =
        match customerId with
        | null -> Error "Customer ID cannot be null."
        | "" -> Error "Customer ID cannot be empty."
        | id when id.Length <> 10 -> Error "Invalid Customer ID."
        | _ -> Ok (CustomerId (customerId.ToLower()))
</code></pre>
<p>The above implementation is extremely concise and almost free of any noise. As a developer I didn't have to write a lot of boilerplate code and was able to focus on the actual domain, which is what I really want:</p>
<ul>
<li>The system has a type called <code>CustomerId</code>, which wraps a <code>string</code>.</li>
<li>The only way to create a <code>CustomerId</code> is via the <code>CustomerId.create</code> function, which does all of the relevant checks before emitting an object of <code>CustomerId</code>.</li>
<li>If a <code>string</code> violates the <code>CustomerId</code> requirements then a meaningful <code>Error</code> is returned and the calling code is forced to deal with this scenario.</li>
<li>The <code>CustomerId</code> object is immutable and non-nullable. Once successfully created all subsequent code can confidently rely on correct state.</li>
<li>The <code>CustomerId</code> type automatically gets structural equality, comparison and hashing, because the F# compiler generates them for union types, which means I didn't have to write a <code>GetHashCode</code> implementation, equality overrides, operator overloads and all of the other boilerplate which I would have to write in C#.</li>
</ul>
<p>This is a perfect example where F# can provide a lot of value with very few lines of code. Also because there is not much code to begin with there is very little room for making a mistake. The only real mistake I could have made is in the actual implementation of the <code>CustomerId</code> validation, which is more of a domain responsibility rather than a shortcoming of the language itself.</p>
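<p>Putting the pieces together, a minimal self-contained sketch of this pattern could look like this (the <code>value</code> unwrapper function is my own addition, not part of the original example):</p>
<pre><code>// Single-case union with a private constructor
type CustomerId = private CustomerId of string

module CustomerId =
    // The only way to obtain a CustomerId is through this function
    let create (customerId : string) =
        match customerId with
        | null -> Error "Customer ID cannot be null."
        | "" -> Error "Customer ID cannot be empty."
        | id when id.Length <> 10 -> Error "Invalid Customer ID."
        | _ -> Ok (CustomerId (customerId.ToLower()))

    // Unwrap the value in a controlled way (the constructor stays private)
    let value (CustomerId id) = id
</code></pre>
<p>Because F# generates structural equality for union types, two <code>CustomerId</code> values created from <code>"ABCDEFGHIJ"</code> and <code>"abcdefghij"</code> compare as equal without any hand-written overrides.</p>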
<p>C# developers are not used to modelling real world concepts like a <code>CustomerId</code>, an <code>OrderId</code> or an <code>EmailAddress</code> as their own types, because the language doesn't make it easy. These concepts are often represented by very primitive types such as <code>string</code> or <code>int</code> and are handled very loosely by the domain.</p>
<p>If you would like to learn more about <a href="https://en.wikipedia.org/wiki/Domain-driven_design">Domain Driven Design</a> in F# then I would highly recommend watching <a href="https://www.youtube.com/watch?v=Up7LcbGZFuo">Scott Wlaschin's Domain Modeling Made Functional</a> presentation from NDC London. It is a fantastic talk with lots of food for thought and also the source of some of the ideas introduced in this article:</p>
<iframe class="youTubeVideo" src="https://www.youtube.com/embed/Up7LcbGZFuo" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<h2 id="immutability-and-lack-of-nulls">Immutability and lack of Nulls</h2>
<p>One of the greatest features of F# is that <strong>objects in F# are immutable by default and cannot be null</strong>. This makes it a lot easier to reason about code and also implement bug free applications. Not having to think twice if an object has changed state after passing it into a function or having to check for nulls has a huge impact on how easily someone can write reliable applications.</p>
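<p>A quick sketch shows what "immutable by default" means in practice: plain <code>let</code> bindings cannot be reassigned, and mutation has to be opted into explicitly with the <code>mutable</code> keyword, which makes it easy to spot:</p>
<pre><code>// Bindings are immutable by default
let total = 42
// total <- 43   // does not compile: "This value is not mutable"

// Mutation must be opted into explicitly
let mutable counter = 0
counter <- counter + 1
</code></pre>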
<h3 id="saying-goodbye-to-nulls">Saying goodbye to nulls</h3>
<p><a href="https://en.wikipedia.org/wiki/Tony_Hoare">Tony Hoare</a>, who invented (amongst many other great things) the null reference, <a href="https://en.wikipedia.org/wiki/Null_pointer#History">called it his billion dollar mistake</a>. He even apologised for the creation of <code>null</code> at QCon London in 2009.</p>
<p><strong>The problem with <code>null</code> is that it doesn't reflect any real state and yet has too many meanings at the same time.</strong> It's never clear whether <code>null</code> means "unknown", "empty", "does not exist", "not initialised", "invalid", "some other error" or perhaps "end of line/file/stream/etc.". It is widely agreed today that the existence of <code>null</code> was a mistake, which is why languages like <a href="https://msdn.microsoft.com/en-us/magazine/mt829270.aspx">C# are slowly trying to move away from it in upcoming versions</a>.</p>
<p>Fortunately F# never had nulls to begin with. The only way to force a <code>null</code> into F# is by interoperating with C# and not properly fencing it off.</p>
<h3 id="immutability--mutability">Immutability > Mutability</h3>
<p>Mutability is another topic where functional programming really shines. The problem is not mutability per se, but <strong>whether objects are mutable by default or not</strong> in a given language. It can only be one of the two, and each programming language has to pick one.</p>
<p>Immutability has the benefit of making code a lot easier to understand. It also prevents a lot of errors, because no class, method or function can change the state of an object after it had been created. This is particularly useful when objects get passed around between many different methods where the internal implementation is not always known (third party code, etc.).</p>
<p>On the other hand mutability doesn't have many benefits at all. It makes code arguably harder to follow, introduces a lot more ways for classes and methods to overstep their responsibility and lets poorly written libraries introduce unexpected behaviour. The small benefit of being able to directly mutate an object comes at a rather high cost.</p>
<p>Now the question is which is easier: changing an object in an immutable-by-default language, or introducing immutability in a mutable-by-default one?</p>
<p>I can quickly answer the first one by looking at F# and how it deals with the desire of changing an object's state. In F# mutations are performed by creating a new object with the modified values applied:</p>
<pre><code>let c = { Name = "Susan Doe"; Address = "1 Street, London, UK" }
let c' = { c with Address = "3 Avenue, Oxford, UK" }
</code></pre>
<p>This is a very elegant solution which produces almost the same outcome as if mutability was allowed (but without the cost).</p>
<p>Introducing immutability in C# is a little bit more awkward.</p>
<p>C# has no language construct which creates an immutable object out of the box. First I have to create a new type, but I cannot use a <code>class</code>, because a <code>class</code> is a reference type which can be <code>null</code>. If <code>null</code> can be assigned to an object after it has been created then it is not immutable:</p>
<pre><code>public class Customer
{
    public string Name { get; set; }
    public string Address { get; set; }
}

// Somewhere later in the program:
var c = new Customer();
c = null;
</code></pre>
<p>This leads me to using a <code>struct</code>:</p>
<pre><code>public struct Customer
{
    public string Name { get; set; }
    public string Address { get; set; }
}
</code></pre>
<p>Now <code>null</code> is not an issue anymore, but the properties still are:</p>
<pre><code>var c = new Customer();
c.Name = "Haha gotcha!";
</code></pre>
<p>Let's make the setters private then:</p>
<pre><code>public struct Customer
{
    public string Name { get; private set; }
    public string Address { get; private set; }
}
</code></pre>
<p>Better, but not immutable yet. One could still do something like this:</p>
<pre><code>public struct Customer
{
    public string Name { get; private set; }
    public string Address { get; private set; }

    public void ChangeName(string name)
    {
        Name = name;
    }
}

var c = new Customer();
c.ChangeName("Haha gotcha!");
</code></pre>
<p>The problem is not that <code>ChangeName</code> is public, but the fact that there is still a method which can alter the object's state after it was created.</p>
<p>Let's introduce two private backing fields for the properties and remove the setters altogether:</p>
<pre><code>public struct Customer
{
    private string _name;
    private string _address;

    public string Name { get { return _name; } }
    public string Address { get { return _address; } }

    public Customer(string name, string address)
    {
        _name = name;
        _address = address;
    }
}
</code></pre>
<p>This looks better, but it's still not immutable. A class member can still change the <code>_name</code> and <code>_address</code> fields from the inside.</p>
<p>We can fix this by making the fields <code>readonly</code>:</p>
<pre><code>public struct Customer
{
    private readonly string _name;
    private readonly string _address;

    public string Name { get { return _name; } }
    public string Address { get { return _address; } }

    public Customer(string name, string address)
    {
        _name = name;
        _address = address;
    }
}
</code></pre>
<p>Now this is immutable (at least for now), but a bit verbose. At this point we might as well collapse the properties into <code>public readonly</code> fields:</p>
<pre><code>public struct Customer
{
    public readonly string Name;
    public readonly string Address;

    public Customer(string name, string address)
    {
        Name = name;
        Address = address;
    }
}
</code></pre>
<p>Alternatively with C# 6 (or later) we could also create readonly properties like this:</p>
<pre><code>public struct Customer
{
    public string Name { get; }
    public string Address { get; }

    public Customer(string name, string address)
    {
        Name = name;
        Address = address;
    }
}
</code></pre>
<p>So far so good, but unless someone knows C# very well they could have easily gotten this wrong.</p>
<p>Unfortunately, real world applications are never this simple.</p>
<p>What if the <code>Customer</code> type would look more like this?</p>
<pre><code>public class Address
{
    public string Street { get; set; }
}

public struct Customer
{
    public readonly string Name;
    public readonly Address Address;

    public Customer(string name, Address address)
    {
        Name = name;
        Address = address;
    }
}

var address = new Address { Street = "Springfield Road" };
var c = new Customer("Susan", address);
address.Street = "Gotcha";
</code></pre>
<p>At this point it should be evident that introducing immutability in C# is not as straightforward as someone might have thought.</p>
<p>This is another great example where the stark contrast between F# and C# really stands out. Writing correct code shouldn't be that hard and the language of choice can really make a difference.</p>
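<p>For contrast, here is my sketch of the same <code>Customer</code>/<code>Address</code> shape as F# records. The nested mutation gotcha from the C# example cannot happen, because record fields are immutable by default:</p>
<pre><code>// Same shape as the C# example, modelled as F# records
type Address = { Street : string }
type Customer = { Name : string; Address : Address }

let address = { Street = "Springfield Road" }
let c = { Name = "Susan"; Address = address }

// address.Street <- "Gotcha"   // does not compile: record fields are immutable

// "Changing" the address means creating a new customer value instead:
let moved = { c with Address = { Street = "Gotcha" } }
</code></pre>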
<h2 id="solid-made-easy-in-fsharp">SOLID made easy in F#</h2>
<p>Object oriented programming is all about producing <a href="https://en.wikipedia.org/wiki/SOLID">SOLID</a> code. In order to understand and write decent C# one has to read at least five different books, <a href="https://en.wikipedia.org/wiki/Software_design_pattern">study 20+ design patterns</a>, follow <a href="https://en.wikipedia.org/wiki/Composition_over_inheritance">composition over inheritance</a>, practise <a href="https://en.wikipedia.org/wiki/Test-driven_development">TDD</a> and <a href="https://en.wikipedia.org/wiki/Behavior-driven_development">BDD</a>, apply the <a href="https://jeffreypalermo.com/2008/07/the-onion-architecture-part-1/">onion architecture</a>, layer everything into tiers, <a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93presenter">MVP</a>, <a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93viewmodel">MVVM</a> and most importantly <a href="https://en.wikipedia.org/wiki/Single_responsibility_principle">single responsibility</a> all the things.</p>
<p>We all know the importance of these principles, because they are vital in keeping object oriented code in maintainable shape. Object oriented developers are so used to practising these patterns that it is unimaginable to them that someone could produce SOLID code without injecting everything through a constructor. I've been there myself. The first time I saw functional code it looked plain wrong to me. I think most C# developers are put off by F# when they look at functional code for the very first time and don't see anything which looks familiar to them. There are no classes, no constructors and most importantly no IoC containers.</p>
<p><strong>Functional code is often mistaken for procedural code to the inexperienced eye</strong>.</p>
<p>In functional programming everything is a function. The only design pattern which someone has to know is that a function is a first class citizen. Functions can be composed, instantiated, partially applied, passed around and executed.</p>
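<p>A tiny sketch (with made-up <code>add</code> and <code>double</code> functions) shows two of these building blocks, partial application and composition, in action:</p>
<pre><code>// Partial application: fixing the first argument creates a new function
let add x y = x + y
let addFive = add 5

// Composition: the >> operator glues two functions into one
let double x = x * 2
let addFiveThenDouble = addFive >> double

let result = addFiveThenDouble 10   // (10 + 5) * 2 = 30
</code></pre>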
<p>There is this <a href="//www.slideshare.net/ScottWlaschin/fp-patterns-ndc-london2014">famous slide</a> by <a href="https://twitter.com/ScottWlaschin">Scott Wlaschin</a> which nicely sums it up:</p>
<iframe src="//www.slideshare.net/slideshow/embed_code/key/oCM5TxRgKh1vme?startSlide=15" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe>
<p>This slide deck is from Scott Wlaschin's <a href="https://www.youtube.com/watch?v=srQt1NAHYC0">Functional programming design patterns</a> talk, which can be watched online.</p>
<h3 id="to-hell-with-interfaces">To hell with interfaces</h3>
<p>In C# everything requires an interface. For example, if a method needs to support multiple sort algorithms then the <a href="https://en.wikipedia.org/wiki/Strategy_pattern">strategy pattern</a> can help with this:</p>
<pre><code>public interface ISortAlgorithm
{
    List<int> Sort(List<int> values);
}

public class QuickSort : ISortAlgorithm
{
    public List<int> Sort(List<int> values)
    {
        // Do QuickSort
        return values;
    }
}

public class MergeSort : ISortAlgorithm
{
    public List<int> Sort(List<int> values)
    {
        // Do MergeSort
        return values;
    }
}

public void DoSomething(ISortAlgorithm sortAlgorithm, List<int> values)
{
    var sorted = sortAlgorithm.Sort(values);
}

public void Main()
{
    var values = new List<int> { 9, 1, 5, 7 };
    DoSomething(new QuickSort(), values);
}
</code></pre>
<p>In F# the same can be done with simple functions:</p>
<pre><code>let quickSort values =
    values // Do QuickSort

let mergeSort values =
    values // Do MergeSort

let doSomething sortAlgorithm values =
    let sorted = sortAlgorithm values
    ()

let main =
    let values = [ 9; 1; 5; 7 ]
    doSomething quickSort values
</code></pre>
<p>Functions in F# which have the same signature are interchangeable and don't require an explicit interface declaration. Both sort functions are of type <code>int list -> int list</code> and can therefore be passed into the <code>doSomething</code> function interchangeably.</p>
<p>This is literally all it takes to implement the strategy pattern in F#!</p>
<p>Let's look at a slightly more complex example to really demonstrate the strengths of F#.</p>
<h3 id="everything-is-a-function">Everything is a function</h3>
<p>One of the most useful patterns in C# and one of my personal favourites is the <a href="https://en.wikipedia.org/wiki/Decorator_pattern">decorator pattern</a>. It allows adding additional functionality to an existing class without violating the <a href="https://en.wikipedia.org/wiki/Open%E2%80%93closed_principle">open-closed principle</a> of the SOLID guidelines. A password policy is a perfect example for this:</p>
<pre><code>public interface IPasswordPolicy
{
    bool IsValid(string password);
}

public class BasePolicy : IPasswordPolicy
{
    public bool IsValid(string password)
    {
        return true;
    }
}

public class MinimumLengthPolicy : IPasswordPolicy
{
    private readonly int _minLength;
    private readonly IPasswordPolicy _nextPolicy;

    public MinimumLengthPolicy(int minLength, IPasswordPolicy nextPolicy)
    {
        _minLength = minLength;
        _nextPolicy = nextPolicy;
    }

    public bool IsValid(string password)
    {
        return
            password != null
            && password.Length >= _minLength
            && _nextPolicy.IsValid(password);
    }
}

public class MustHaveDigitsPolicy : IPasswordPolicy
{
    private readonly IPasswordPolicy _nextPolicy;

    public MustHaveDigitsPolicy(IPasswordPolicy nextPolicy)
    {
        _nextPolicy = nextPolicy;
    }

    public bool IsValid(string password)
    {
        if (password == null) return false;
        return password.ToCharArray().Any(c => char.IsDigit(c))
            && _nextPolicy.IsValid(password);
    }
}

public class MustHaveUppercasePolicy : IPasswordPolicy
{
    private readonly IPasswordPolicy _nextPolicy;

    public MustHaveUppercasePolicy(IPasswordPolicy nextPolicy)
    {
        _nextPolicy = nextPolicy;
    }

    public bool IsValid(string password)
    {
        if (password == null) return false;
        return password.ToCharArray().Any(c => char.IsUpper(c))
            && _nextPolicy.IsValid(password);
    }
}

public class Program
{
    public void Main()
    {
        var passwordPolicy =
            new MustHaveDigitsPolicy(
                new MustHaveUppercasePolicy(
                    new MinimumLengthPolicy(
                        8, new BasePolicy())));

        var result = passwordPolicy.IsValid("Password1");
    }
}
</code></pre>
<p>During the instantiation of the <code>passwordPolicy</code> object one can decide which policies to use. A different password policy can be created without having to modify a single class. While this works really well in C#, it is also extremely verbose. There is a lot of code which had to be written for arguably little functionality at this point. I also had to use an additional interface and constructor injection to glue policies together. The <code>passwordPolicy</code> variable is of type <code>IPasswordPolicy</code> and can be injected anywhere a password policy is required. This is as good as it gets in C#.</p>
<p>The only thing which I could have possibly improved (by writing a lot more boilerplate code) would have been to add additional syntactic sugar to compose a policy like this:</p>
<pre><code>var passwordPolicy =
    Policy.Create()
        .MustHaveMinimumLength(8)
        .MustHaveDigits()
        .MustHaveUppercase();
</code></pre>
<p>In F# the equivalent implementation is "just" functions again:</p>
<pre><code>let mustHaveUppercase (password : string) =
    password.ToCharArray()
    |> Array.exists Char.IsUpper

let mustHaveDigits (password : string) =
    password.ToCharArray()
    |> Array.exists Char.IsDigit

let mustHaveMinimumLength length (password : string) =
    password.Length >= length

let isValidPassword (password : string) =
    mustHaveMinimumLength 8 password
    && mustHaveDigits password
    && mustHaveUppercase password
</code></pre>
<p>Just as the C# <code>passwordPolicy</code> object implements the <code>IPasswordPolicy</code> interface, the <code>isValidPassword</code> function implements the <code>string -> bool</code> signature and can therefore be interchanged with any other function of the same signature.</p>
<p>The F# solution is almost embarrassingly easy when compared to the overly complex one in C#, yet I didn't have to compromise on any of the SOLID principles. Each function validates a single requirement (single responsibility) and can be tested in isolation. Functions can be swapped or mocked with any other function which also implements <code>string -> bool</code>, and I can create multiple new policies without having to modify existing code (open-closed principle):</p>
<pre><code>let isValidPassword2 (password : string) =
    mustHaveMinimumLength 12 password
    && mustHaveUppercase password
</code></pre>
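<p>One way to get the same fluent composability as the C# builder sketch is to treat a policy as a list of <code>string -> bool</code> rules. The following is a sketch of my own (the <code>isValid</code> and <code>defaultPolicy</code> names are hypothetical), restating the rules above in a self-contained form:</p>
<pre><code>open System

let mustHaveUppercase (password : string) =
    password |> Seq.exists Char.IsUpper

let mustHaveDigits (password : string) =
    password |> Seq.exists Char.IsDigit

let mustHaveMinimumLength length (password : string) =
    password.Length >= length

// A policy is just a list of string -> bool rules which must all pass
let isValid (rules : (string -> bool) list) (password : string) =
    rules |> List.forall (fun rule -> rule password)

let defaultPolicy =
    [ mustHaveMinimumLength 8; mustHaveDigits; mustHaveUppercase ]
</code></pre>
<p>Composing a stricter or looser policy is then a matter of building a different list, with no new types or classes involved.</p>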
<h3 id="inversion-of-control-made-functional">Inversion of Control made functional</h3>
<p>The only pattern which a functional developer has to understand is functions. To prove my point one last time I'll explore the <a href="https://en.wikipedia.org/wiki/Inversion_of_control">Inversion of Control principle</a> next.</p>
<p>First let's be clear about what the Inversion of Control principle is, because many developers wrongly confuse it with the dependency injection pattern. The Inversion of Control principle states that a class should never instantiate its dependencies itself. <a href="https://martinfowler.com/">Martin Fowler</a> uses the term <a href="https://martinfowler.com/bliki/InversionOfControl.html">Hollywood Principle</a> as in <em>"Don't call us, we'll call you"</em>.</p>
<p>There are three distinctive design patterns which follow the IoC principle:</p>
<ul>
<li>Dependency Injection</li>
<li>Factory</li>
<li>Service Locator</li>
</ul>
<p>The <a href="http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/">Service Locator is considered an anti-pattern</a>, so I won't cover it any further here.</p>
<p>The Factory pattern consists of two further sub-patterns:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Abstract_factory_pattern">Abstract Factory</a></li>
<li><a href="https://en.wikipedia.org/wiki/Factory_method_pattern">Factory Method</a></li>
</ul>
<p>The <a href="https://en.wikipedia.org/wiki/Dependency_injection">Dependency Injection</a> pattern breaks down into three more sub-patterns:</p>
<ul>
<li>Constructor Injection</li>
<li>Method Injection</li>
<li>Property Injection</li>
</ul>
<p>Despite <a href="https://martinfowler.com/articles/injection.html#ConstructorInjectionWithPicocontainer">Constructor Injection</a> being the most popular IoC pattern in object oriented programming, it is only one of many patterns which follow the Inversion of Control principle. Each of these patterns is extremely useful and satisfies a specific use case which Constructor Injection cannot address on its own.</p>
<p>This has nothing to do with F# directly, but I wanted to underline how the sheer number of different design patterns can sometimes be very confusing. It may take years for an OO software engineer to fully grasp all of these concepts and understand how and when each one plays an important role.</p>
<p>Now that I got this out of the way let's take a look at how C# handles Dependency Injection via Constructor Injection:</p>
<pre><code>public interface INotificationService
{
    void SendMessage(Customer customer, string message);
}

public class OrderService
{
    private readonly INotificationService _notificationService;

    public OrderService(INotificationService notificationService)
    {
        _notificationService =
            notificationService
            ?? throw new ArgumentNullException(
                nameof(notificationService));
    }

    public void CompleteOrder(Customer customer, ShoppingBasket basket)
    {
        // Do stuff
        _notificationService.SendMessage(customer, "Your order has been received.");
    }
}
</code></pre>
<p>Nothing should be surprising here. The <code>OrderService</code> has a dependency on an object of type <code>INotificationService</code> which is responsible for sending order updates to a customer.</p>
<p>There could be multiple implementations of the <code>INotificationService</code>, such as an <code>SmsNotificationService</code> or an <code>EmailNotificationService</code>:</p>
<pre><code>public class EmailNotificationService : INotificationService
{
    private readonly EmailSettings _settings;

    public EmailNotificationService(EmailSettings settings)
    {
        _settings = settings;
    }

    public void SendMessage(Customer customer, string message)
    {
        // Do stuff
    }
}
</code></pre>
<p>Typically in C# these dependencies would get registered in an IoC container. I've skipped that part to keep the C# example, which is already becoming large, from growing any further.</p>
<p>Now let's take a look at how dependency injection can be done in F#:</p>
<pre><code>let sendEmailNotification emailSettings customer message =
    ()

let sendSmsNotification smsService apiKey customer message =
    ()

let completeOrder notify customer shoppingBasket =
    notify customer "Your order has been received."
    ()
</code></pre>
<p>That's it - Dependency Injection in functional programming is achieved by simply passing one function into another (basically what I've already been doing in the examples before)!</p>
<p>The only difference here is that the <code>sendEmailNotification</code> and <code>sendSmsNotification</code> functions do not share the same signature at the moment. Not only is <code>emailSettings</code> of a different type than <code>smsService</code>, but both functions also differ in the number of parameters they need. The <code>sendEmailNotification</code> function requires three parameters in total and the <code>sendSmsNotification</code> requires four. Furthermore the <code>notify</code> parameter of the <code>completeOrder</code> function doesn't know which concrete function will be injected and therefore doesn't care about anything except the <code>Customer</code> object and the <code>string</code> message. So how does it work?</p>
<p>The answer is <strong>partial application</strong>. In functional programming one can partially apply parameters of one function in order to generate a new one:</p>
<pre><code>let sendEmailFromHotmailAccount =
    // Here I only apply the `emailSettings` parameter:
    sendEmailNotification hotmailSettings

let sendSmsWithTwillio =
    // Here I only apply the `smsService` and `apiKey` parameters:
    sendSmsNotification twilioService twilioApiKey
</code></pre>
<p>After partially applying both functions the newly created <code>sendEmailFromHotmailAccount</code> and <code>sendSmsWithTwillio</code> functions share the same signature again:</p>
<pre><code>Customer -> string -> unit
</code></pre>
<p>Now both functions can be passed into the <code>completeOrder</code> function.</p>
<p>There is no need for an IoC container either. If one doesn't want to repeatedly pass all dependencies into the <code>completeOrder</code> function then partial application can be utilised once again:</p>
<pre><code>let completeOrderAndNotify =
    emailSettings
    |> sendEmailNotification
    |> completeOrder

// Later in the program one would use:
completeOrderAndNotify customer shoppingBasket
</code></pre>
<p>If we compare this solution to the one from C# then there isn't much of a difference (except for simplicity). Classes mainly require their dependencies to be injected through their constructor and functions take other functions as a dependency. In C# all dependencies get registered only once at IoC container level. In F# all dependencies get "registered" only once through partial application. In both cases one can create mocks, stubs and fakes for their dependencies and unit test each class or function in isolation.</p>
<p>There are a few advantages to the functional approach though:</p>
<ul>
<li>Dependencies can get "registered" (partially applied) closer to the functions where they belong.</li>
<li>Simpler by having a lot less code.</li>
<li>No additional (third party) IoC container required.</li>
<li>Dependency Injection is a pattern which has to be taught in OO programming whereas passing a function into another function is the most fundamental/normal thing one could do in functional programming.</li>
</ul>
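To illustrate the testing point, here is a minimal self-contained sketch. The types and the fake notify function are purely illustrative (they are not taken from a real code base), but they show how a dependency can be "mocked" by simply passing a different function:

```fsharp
type Customer = { Name : string }
type ShoppingBasket = { Items : string list }

// Same shape as the completeOrder function from before:
let completeOrder notify (customer : Customer) (basket : ShoppingBasket) =
    notify customer "Your order has been received."

// A fake notify function records the message instead of sending anything,
// which makes completeOrder trivially testable in isolation:
let mutable capturedMessage = ""
let fakeNotify (_ : Customer) (message : string) =
    capturedMessage <- message

completeOrder fakeNotify { Name = "Jane" } { Items = [ "Book" ] }
// capturedMessage is now "Your order has been received."
```

No mocking framework is required: any function of the right signature is a valid test double.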
<p><a href="http://blog.ploeh.dk/">Mark Seemann</a>, author of <a href="https://www.manning.com/books/dependency-injection-in-dot-net">Dependency Injection in .NET</a>, did a fantastic talk on more advanced dependency patterns in F#. Watch his talk "<a href="https://www.youtube.com/watch?v=xG5qP5AWQws">From dependency injection to dependency rejection</a>" on YouTube:</p>
<iframe class="youTubeVideo" src="https://www.youtube.com/embed/xG5qP5AWQws" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<h2 id="simplicity">Simplicity</h2>
<p>If there is one theme which has been consistent throughout this blog post then it must be the remarkable simplicity of F#. No matter if it is creating a new immutable type, expressing the true state of a function, modelling a domain or applying advanced programming patterns, F# always seems to have a slight edge over C#.</p>
<p>The absence of classes, complex design patterns, IoC containers, mutability, inheritance, overrides and interfaces has a few more benefits which come in extremely handy at work.</p>
<p>First there is a lot less code to write. This makes applications smaller, faster to comprehend and much easier to maintain.</p>
<p>Secondly, it allows for blazingly fast prototyping. In F# one can very quickly hack one function after another until a desired prototype has been reached. Furthermore, the additional work to transition from prototype to production is minimal. Since everything is a function and code gets naturally compartmentalised into smaller functions, the difference between a prototype and a production-ready function is often very small.</p>
<h2 id="asynchronous-programming">Asynchronous programming</h2>
<p>Speaking of simplicity, F# makes asynchronous programming strikingly easy:</p>
<pre><code>let readFileAsync fileName =
    async {
        use stream = File.OpenRead(fileName)
        let! content = stream.AsyncRead(int stream.Length)
        return content
    }
</code></pre>
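Because async workflows are ordinary values, they compose like any other function. A small self-contained sketch (the <code>fetchLength</code> function is purely illustrative and stands in for an expensive I/O call):

```fsharp
// Async workflows are first-class values and can be combined before
// anything actually runs:
let fetchLength (text : string) =
    async {
        do! Async.Sleep 10
        return text.Length
    }

let lengths =
    [ "F#"; "makes"; "async"; "easy" ]
    |> List.map fetchLength   // build the workflows (nothing runs yet)
    |> Async.Parallel         // combine them into one concurrent workflow
    |> Async.RunSynchronously // execute and wait for all results
// lengths = [| 2; 5; 5; 4 |]
```

Note that `Async.Parallel` preserves the order of its inputs, so the results line up with the original list.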
<p>There is a lot of great content available which explains the differences and benefits of F#'s asynchronous programming model, so I won't rehash everything here, but I would highly recommend reading <a href="http://tomasp.net/">Tomas Petricek</a>'s article on <a href="http://tomasp.net/blog/csharp-async-gotchas.aspx/">Async in C# and F#: Asynchronous gotchas in C#</a>, followed by his blog series on <a href="http://tomasp.net/blog/csharp-fsharp-async-intro.aspx/">Asynchronous C# and F#</a>, including <a href="http://tomasp.net/blog/async-csharp-differences.aspx/">How do they differ?</a> and <a href="http://tomasp.net/blog/async-compilation-internals.aspx/">How does it work?</a>.</p>
<h2 id="net-core">.NET Core</h2>
<p>So far I've talked mostly about generic concepts of the functional programming paradigm, but there is a wealth of benefits which come specifically with F#. The obvious one is <a href="https://github.com/dotnet/core">.NET Core</a>. As we all know, Microsoft is putting a lot of work into its new open-source, cross-platform, multi-language runtime.</p>
<p>F# is part of .NET and therefore runs on all .NET runtimes, which include <a href="https://en.wikipedia.org/wiki/.NET_Core">.NET Core</a>, <a href="https://en.wikipedia.org/wiki/.NET_Framework">.NET Framework</a> and <a href="https://en.wikipedia.org/wiki/Xamarin">Xamarin</a> (Mono). This means that anyone can develop F# on Windows, Linux or macOS. It also means that F# developers have access to a large ecosystem of extremely mature and high quality libraries. Because F# is a multi-paradigm language (yes, you can write object oriented code too if you want) it can reference and call into any third party package, no matter if it was written in F#, C# or VB.NET.</p>
<h2 id="open-source">Open Source</h2>
<p>Long before Microsoft embraced the OSS community, F# was born out of <a href="https://www.microsoft.com/en-us/research/">Microsoft Research</a> as an open source project from the get-go. The open source community behind F# is very strong, with many contributions coming from outside Microsoft and driving the general direction of the language.</p>
<p>You can find all <a href="https://github.com/fsharp">F# source code</a> hosted on GitHub and start contributing by submitting an <a href="https://github.com/fsharp/fslang-suggestions">F# language suggestion</a> first. When a suggestion gets approved then an <a href="https://github.com/fsharp/fslang-design/tree/master/RFCs">RFC</a> gets created with a corresponding <a href="https://github.com/fsharp/fslang-design/issues">discussion thread</a>.</p>
<p>The F# language is under the direction of the <a href="https://fsharp.org/">F# Foundation</a>, with strong backing from Microsoft, which is still the main driver of development.</p>
<h2 id="tooling">Tooling</h2>
<p>There is no match when it comes to tooling. Microsoft's .NET languages have always benefited from excellent tooling. <a href="https://visualstudio.microsoft.com/vs/">Visual Studio</a> was the uncontested leader for a long time, but in recent years the competition has ramped up. <a href="https://www.jetbrains.com/">JetBrains</a>, the company which created <a href="https://www.jetbrains.com/resharper/">ReSharper</a>, has released a new IntelliJ-based cross-platform IDE called <a href="https://www.jetbrains.com/rider/">Rider</a>. Meanwhile Microsoft has developed a new open source editor called <a href="https://github.com/Microsoft/vscode">Code</a>. <a href="https://code.visualstudio.com/">Visual Studio Code</a> has quickly emerged as the <a href="https://insights.stackoverflow.com/survey/2018/#technology-most-popular-development-environments">most popular development environment</a> amongst programmers and boasts a huge marketplace of useful plugins. Thanks to <a href="https://twitter.com/k_cieslak">Krzysztof Cieślak</a> there is a superb F# extension called <a href="http://ionide.io/">Ionide</a>.</p>
<p>Visual Studio, JetBrains Rider and Visual Studio Code with Ionide are three of the world's best programming IDEs which are cross platform compatible, run on all major operating systems and support F#.</p>
<h2 id="fsharp-conquering-the-web">F# conquering the web</h2>
<p>As I mentioned at the very beginning, F# is not just a language for algebraic problems. Functional programming in general is a perfect fit for anything web related. A web application is essentially one large function with a single input (the HTTP request) and a single output (the HTTP response).</p>
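This idea can be captured in a few lines of F#. The types below are deliberately simplified illustrations, not the types of any real framework:

```fsharp
// A web application modelled as a plain function from request to response:
type HttpRequest  = { Path : string }
type HttpResponse = { Status : int; Body : string }

type WebApp = HttpRequest -> HttpResponse

let app : WebApp =
    fun req ->
        match req.Path with
        | "/ping" -> { Status = 200; Body = "pong" }
        | _       -> { Status = 404; Body = "Not Found" }

let response = app { Path = "/ping" }
// response.Body = "pong"
```

Real frameworks add asynchrony and composition on top of this shape, but the core model remains a function.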
<h3 id="f-on-the-backend">F# on the Backend</h3>
<p>F# has an abundance of diverse and feature rich web frameworks. My personal favourite is a library called <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a> (disclaimer: I am the core maintainer of this project). <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a> sits on top of <a href="https://www.asp.net/core">ASP.NET Core</a>, which means that it mostly piggybacks off the entire ASP.NET Core environment, its performance attributes and community contributions. In Giraffe a web application is composed through a combination of many smaller functions which get glued together via the Kleisli operator:</p>
<pre><code>let webApp =
    choose [
        GET >=>
            choose [
                route "/ping" >=> text "pong"
                route "/" >=> htmlFile "/pages/index.html"
            ]
        POST >=> route "/submit" >=> text "Successful" ]
</code></pre>
<p><a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a> has also recently joined the <a href="https://www.techempower.com/benchmarks/#section=data-r17&hw=ph&test=plaintext">TechEmpower Web Framework Benchmarks</a> and ranks with a total of <strong>1,649,957 req/sec</strong> as one of the fastest functional web frameworks available.</p>
<p>However, if Giraffe is not to your taste then there are many other great F# web libraries available:</p>
<ul>
<li><a href="https://saturnframework.org/">Saturn</a> (F# MVC framework built on top of Giraffe)</li>
<li><a href="https://suave.io/">Suave</a> (An entire web server written in F#)</li>
<li><a href="https://freya.io/">Freya</a></li>
<li><a href="https://websharper.com/">WebSharper</a></li>
</ul>
<p>ASP.NET Core and ASP.NET Core MVC are also perfectly compatible with F#.</p>
<h3 id="f-on-the-frontend">F# on the Frontend</h3>
<p>After making its mark on the server side of the web, F# has also seen a lot of innovation on the frontend.</p>
<p><a href="https://fable.io/">Fable</a> is an F# to JavaScript transpiler which is built on top of <a href="https://babeljs.io/">Babel</a>, which itself is an extremely advanced JavaScript compiler. Babel, which is hugely popular and <a href="https://opencollective.com/babel#contributors">backed by large organisations</a> such as Google, AirBnb, Adobe, Facebook, trivago and many more, is doing the heavy lifting of the compilation, whereas Fable is transpiling from F# to Babel's own abstract syntax tree. In simple terms you get the power of F# combined with the maturity and stability of Babel which allows you to write rich frontends in F#. <a href="https://twitter.com/alfonsogcnunez">Alfonso Garcia-Caro</a> has done a magnificent job in merging the F# and JavaScript communities and recently <a href="https://fable.io/blog/Introducing-2-0-beta.html">released Fable 2</a> which comes with a two-fold speed boost as well as a 30%-40% reduced bundle size.</p>
<p><a href="https://github.com/fable-compiler/Fable">Fable</a> and <a href="https://github.com/babel/babel">Babel</a> are also open source and have a thriving community behind them.</p>
<p>On a completely different front, Microsoft has been working on a new project called <a href="https://github.com/aspnet/Blazor">Blazor</a>. Blazor is a single-page web application framework built on .NET that runs in the browser via WebAssembly. It supports all major .NET languages, including F#, and is <a href="https://blogs.msdn.microsoft.com/webdev/2018/11/15/blazor-0-7-0-experimental-release-now-available/">currently in beta</a>.</p>
<p>With the availability of Fable and Blazor there is a huge potential of what an F# developer can do on the web today.</p>
<h2 id="fsharp-everywhere">F# Everywhere</h2>
<p>F# is one of very few languages which can truly run anywhere! Thanks to <a href="https://dotnet.microsoft.com/">.NET Core</a> one can develop F# on any OS and run on any system. It can run natively on Windows, Linux and macOS or via a <a href="https://www.docker.com/">Docker</a> container in a <a href="https://kubernetes.io/">Kubernetes</a> cluster. You can also run F# serverless functions in <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> or <a href="https://azure.microsoft.com/en-gb/services/functions/">Azure Functions</a>. <a href="https://visualstudio.microsoft.com/xamarin/">Xamarin App Development</a> brings F# to Android, iOS and Windows apps, and <a href="https://fable.io/">Fable</a> and <a href="https://blazor.net/">Blazor</a> into the browser. Since <a href="https://blogs.msdn.microsoft.com/dotnet/2018/05/30/announcing-net-core-2-1/">.NET Core 2.1</a> one can even run F# on <a href="https://alpinelinux.org/">Alpine Linux</a> and <a href="https://www.arm.com/">ARM</a>! Machine learning, IoT and games are yet other areas where F# can be used today.</p>
<p>The list of supported platforms and architectures has been growing every year and I'm sure it will expand even further in the future!</p>
<h2 id="final-words">Final Words</h2>
<p>I'd been meaning to write this blog post for a long time but never found the time until recently. My background is mainly C#, which I have been programming for more than ten years now and am still doing today. In the last three years I have taught myself F# and fallen madly in love with it. As a convert I often get asked what I like about F#, and therefore I decided to put everything into writing. The list is obviously not complete, only a recollection of my own take on the main benefits of F#. If you think I have missed something then please do not hesitate to let me know in the comments below. I see this blog post as an ever evolving resource to which I can point people who have an interest in F#.</p>
<p>This blog post is also part of the <a href="https://sergeytihon.com/2018/10/22/f-advent-calendar-in-english-2018/">F# Advent Calendar 2018</a>, which has been kindly organised by <a href="https://sergeytihon.com/">Sergey Tihon</a> again. Sergey not only organises the <a href="https://sergeytihon.com/2017/10/22/f-advent-calendar-in-english-2017/">yearly F# Advent Calendar</a>, but also runs a <a href="https://sergeytihon.com/category/f-weekly/">weekly F#</a> newsletter. <a href="https://sergeytihon.com/">Subscribe to his newsletter</a> or <a href="https://twitter.com/sergey_tihon">follow him on Twitter</a> to stay up to date with the latest developments in F#!</p>
<h2 id="useful-resources">Useful Resources</h2>
<h3 id="blog-and-websites">Blog and Websites</h3>
<ul>
<li><a href="https://fsharp.org/">F# Foundation</a></li>
<li><a href="https://docs.microsoft.com/en-gb/dotnet/fsharp/">F# Guide</a></li>
<li><a href="https://fsharpforfunandprofit.com/">F# for fun and profit</a></li>
<li><a href="http://blog.ploeh.dk/">ploeh blog</a></li>
<li><a href="http://tomasp.net/">Tomas Petricek</a></li>
<li><a href="https://sergeytihon.com/">Sergey Tihon Weekly F#</a></li>
<li><a href="https://atlemann.github.io/">F# all the things</a></li>
<li><a href="https://safe-stack.github.io/">SAFE Stack</a></li>
</ul>
<h3 id="videos">Videos</h3>
<ul>
<li><a href="https://www.youtube.com/watch?v=Up7LcbGZFuo&t=36s">Domain Modeling Made Functional</a></li>
<li><a href="https://www.youtube.com/watch?v=KPa8Yw_Navk">F# for C# programmers</a></li>
<li><a href="https://www.youtube.com/watch?v=srQt1NAHYC0">Functional Design Patterns</a></li>
<li><a href="https://www.youtube.com/watch?v=Fssvnaf8bMo">A gentle introduction to F#</a></li>
</ul>
<h3 id="books">Books</h3>
<ul>
<li><a href="https://www.manning.com/books/get-programming-with-f-sharp">Get Programming with F#</a> by <a href="https://cockneycoder.wordpress.com/">Isaac Abraham</a></li>
<li><a href="https://fsharpforfunandprofit.com/books/">Domain Modeling Made Functional</a> by <a href="https://fsharpforfunandprofit.com/">Scott Wlaschin</a></li>
<li><a href="https://www.amazon.com/Stylish-Writing-More-Productive-Elegant/dp/1484239997">Stylish F#</a> by <a href="http://www.kiteason.com">Kit Eason</a></li>
</ul>
<h3 id="conferences">Conferences</h3>
<ul>
<li><a href="https://skillsmatter.com/conferences/10869-f-sharp-exchange-2019">F# Exchange</a></li>
<li><a href="https://www.openfsharp.org/">Open FSharp</a></li>
<li><a href="https://fable.io/fableconf/">FableConf</a></li>
<li><a href="http://www.lambdadays.org/lambdadays2019">Lambda Days</a></li>
</ul>
https://dusted.codes/why-you-should-learn-fsharp
[email protected] (Dustin Moris Gorski)
Mon, 17 Dec 2018 00:00:00 +0000
Tags: fsharp, csharp, functional-programming
Drawbacks of Stored Procedures<p>A few weeks ago I had a conversation with someone about the pros and cons of stored procedures. Personally I don't like them and try to avoid stored procedures as much as possible. I know there are some good reasons for using stored procedures (sometimes), but I'm also very well aware of the downsides which stored procedures bring with them.</p>
<p>This was not the first time that I had such a conversation and therefore I thought that I would quickly summarise all the reasons (and problems) which I had encountered with stored procedures in the past and put them into one concise blog post for future reference.</p>
<h2 id="testability">Testability</h2>
<p>First and foremost business logic which is encapsulated in stored procedures becomes very difficult to test (if tested at all).</p>
<p>Some developers prefer to write a thin data access layer on top of stored procedures to work around this issue, but even then the extent of testing is mostly limited to a few integration tests. Writing unit tests for any business logic inside a stored procedure is not possible, because there is no way to clearly separate the domain logic from the actual data. Mocking, faking or stubbing won't be possible either.</p>
<h2 id="debugging">Debugging</h2>
<p>Depending on the database technology debugging stored procedures will either not be possible at all or extremely clunky. Some relational databases, such as SQL Server, have some debugging capabilities and others none. There's nothing worse than having to use a database profiler to track down an application issue or to debug your database via print statements.</p>
<h2 id="versioning">Versioning</h2>
<p>Versioning is another crucial feature which stored procedures don't support out of the box. Putting stored procedure changes into re-runnable scripts and placing them into a version control system is certainly advisable, but nothing inside a stored procedure tells us which version it is on, or whether another change was made after the latest script had been applied.</p>
<h2 id="history">History</h2>
<p>Similar to versioning, there's no history attached to stored procedures. Specifically if business logic spans across multiple stored procedures then it can be very difficult to establish the exact combination of different versions of different stored procedures at a given point in time.</p>
<h2 id="branching">Branching</h2>
<p>Branching is a wonderful feature which enables the isolation of related software changes until a certain piece of work has been completed. It also allows development teams to work on multiple changes simultaneously without breaking each other's code.</p>
<p>As soon as a stored procedure needs to change, a development team will either face the maintenance of multiple database instances for their affected branches or have to coordinate the deployment of different stored procedures throughout the entire development life cycle.</p>
<h2 id="runtime-validation">Runtime Validation</h2>
<p>Errors in stored procedures cannot be caught as part of a compilation or build step in a CI/CD pipeline. The same is true if a stored procedure goes missing or another database error creeps into the application during the development process (e.g. a missing permission to execute a stored procedure). In such a scenario a development team will often not know about the error until they run the application. Catching fundamental mistakes this late in the process can be very disruptive.</p>
<h2 id="maintainability">Maintainability</h2>
<p>Stored procedures introduce a rift (or disconnect) in coherent functionality, because the domain logic gets split between the application and the database layer. It's rarely clear where the line should be drawn (e.g. which part of a query should go into the application layer and which part into the database layer?). Code which is divided between two disconnected systems is harder to read, comprehend and therefore reason about.</p>
<h2 id="fear-of-change">Fear of change</h2>
<p>One of the biggest drawbacks of stored procedures is that it is extremely difficult to tell which parts of a system use them and which do not. Especially if software is broken down into multiple applications, it's often not possible to find all references in one go (or at all, if a developer doesn't have read access to all projects), and therefore it might be difficult to confidently establish how a certain change will affect the overall system. As a result stored procedures pose a huge risk of introducing breaking changes, and development teams often shy away from making any changes at all. Sometimes this can cripple new technological innovation.</p>
<h2 id="logging-and-error-handling">Logging and Error handling</h2>
<p>Robust software often relies on very sophisticated logging frameworks and/or error handling modules. An exception can get logged in several places, events can be raised based on different severity levels and custom notifications can be sent out to selective members of the team. However, business logic which is encapsulated inside a stored procedure cannot directly benefit from the same tools without having to either duplicate some of the code or introduce additional layers and workarounds.</p>
<h2 id="deployments">Deployments</h2>
<p>Not entirely impossible, but if a stored procedure has to change as part of a new application version then a zero downtime deployment becomes a lot more difficult. It's much easier to deploy and run two different versions of a web service than to run two different versions of a set of stored procedures.</p>
<h2 id="conclusion">Conclusion</h2>
<p>These are some of the issues which I have personally experienced when dealing with complex stored procedures in the past. There are obviously good reasons for using stored procedures too, but overall I feel that the majority of these drawbacks are pretty big trade-offs to swallow for very few benefits, most of which can also be achieved without stored procedures.</p>
<p>If you think I've missed something or misstated one of the downsides of stored procedures then please let me know in the comments below.</p>
https://dusted.codes/drawbacks-of-stored-procedures
[email protected] (Dustin Moris Gorski)
Thu, 29 Nov 2018 00:00:00 +0000
Tags: stored-procedures, database, architecture
ASP.NET Core Firewall<p>About a month ago I experienced an issue with one of my online services which is running in the Google Cloud and is also protected by <a href="https://www.cloudflare.com/">Cloudflare</a>. I had noticed a spike in traffic which only showed up in my Google Cloud dashboard but not in Cloudflare. It was odd, because all requests should normally route through Cloudflare's proxy servers, but it seemed like someone was circumventing the DNS resolution and hitting my service directly via its exposed IP address. It was a significant issue, because the endpoint which was being hit was quite expensive and I had specifically configured Cloudflare to rate limit a caller to a maximum of 100 requests per second. Unfortunately someone must have discovered my service's origin IP address and managed to bypass Cloudflare and the configured rate limit, and was able to issue thousands of requests per second, which put a huge strain on my rather cheap infrastructure.</p>
<p>After a quick Google search I discovered that <a href="https://blog.christophetd.fr/bypassing-cloudflare-using-internet-wide-scan-data/">bypassing Cloudflare</a> is not that difficult and actually quite <a href="https://support.cloudflare.com/hc/en-us/articles/115003687931-Warning-about-exposing-your-origin-IP-address-via-DNS-records">well documented on the internet</a>. To my rescue I also discovered that <a href="https://www.cloudflare.com/ips/">Cloudflare publishes a list of all their IPv4 and IPv6 addresses</a> which web administrators (is that even still a thing?) can use to set up IP address filtering on their web services to specifically prevent scenarios like this. I needed a quick solution and therefore went on another internet search for an ASP.NET Core middleware which would block all incoming requests which did not originate from a known Cloudflare address. The closest I could find was an article on a <a href="https://docs.microsoft.com/en-us/aspnet/core/security/ip-safelist?view=aspnetcore-2.1">Client IP safelist</a>, but it didn't allow me to "safelist" an entire IP address range like the ones which Cloudflare has made public (e.g. <code>103.21.244.0/22</code>).</p>
<p>Knowing that I couldn't afford to live with this issue for much longer, I decided to quickly hack together my own IP address filtering middleware. After a couple of hours of mad programming and copy-pasting from Stack Overflow I had a quick and dirty solution deployed to production. It wasn't perfect, but it worked. My initial hack was able to validate an incoming IP address against all of Cloudflare's published CIDR notations and either grant or deny access to the requested resource. I was really happy with how well it worked, and after my pressing issue had been solved I wanted to deploy the same solution to all of my other ASP.NET Core services too.</p>
<p>A week later I published a slightly more polished version of the middleware as a NuGet package called <a href="https://www.nuget.org/packages/Firewall/">Firewall</a>. Today I deployed another version with major architectural improvements which made <a href="https://github.com/dustinmoris/Firewall">Firewall</a> a much more flexible and useful library to a wider range of applications. In the rest of this blog post I would like to demonstrate some of the features which Firewall can do for an ASP.NET Core application.</p>
<h2 id="how-firewall-works">How Firewall works</h2>
<p>Firewall is an ASP.NET Core access control middleware. It primarily lets an application filter incoming requests based on their IP address and either grant or deny access. IP address filtering can be configured through a list of specific IP addresses and/or a list of CIDR notations:</p>
<pre><code>using Firewall;

namespace BasicApp
{
    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            var allowedIPs =
                new List<IPAddress>
                {
                    IPAddress.Parse("10.20.30.40"),
                    IPAddress.Parse("1.2.3.4"),
                    IPAddress.Parse("5.6.7.8")
                };

            var allowedCIDRs =
                new List<CIDRNotation>
                {
                    CIDRNotation.Parse("110.40.88.12/28"),
                    CIDRNotation.Parse("88.77.99.11/8")
                };

            app.UseFirewall(
                FirewallRulesEngine
                    .DenyAllAccess()
                    .ExceptFromIPAddressRanges(allowedCIDRs)
                    .ExceptFromIPAddresses(allowedIPs));

            app.Run(async (context) =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        }
    }
}
</code></pre>
<p>The main feature can be enabled through the <code>UseFirewall()</code> extension method, which registers the <code>FirewallMiddleware</code> in the ASP.NET Core pipeline.</p>
<p>Rules for the Firewall are configured through the so-called <code>FirewallRulesEngine</code>. The <a href="https://www.nuget.org/packages/Firewall/">Firewall NuGet package</a> comes with a set of default rules which are ready to use. For example the <code>ExceptFromCloudflare()</code> extension method will automatically configure the Firewall to retrieve the latest version of all of Cloudflare's IPv4 and IPv6 address ranges and subsequently validate incoming requests against them:</p>
<pre><code>app.UseFirewall(
    FirewallRulesEngine
        .DenyAllAccess()
        .ExceptFromCloudflare());
</code></pre>
<p>A list of all currently available rules can be found on the <a href="https://github.com/dustinmoris/Firewall/blob/master/README.md">project's documentation page</a>.</p>
<p>Rules are chained in the reverse order of how they get evaluated against an incoming HTTP request:</p>
<pre><code>var adminIPAddresses = new [] { IPAddress.Parse("1.2.3.4") };

app.UseFirewall(
    FirewallRulesEngine
        .DenyAllAccess()
        .ExceptFromCloudflare()
        .ExceptFromIPAddresses(adminIPAddresses)
        .ExceptFromLocalhost());
</code></pre>
<p>In the example above an incoming request is first checked whether it came from the same host, then whether it came from the web administrator's home address and lastly whether it came from one of Cloudflare's IP addresses before the request gets denied. A request needs to satisfy only one of the rules in order to pass validation.</p>
<p>The reverse order of validation might seem a little weird at first, but it is easily explained by looking at the underlying architecture, which is nothing more than a standard decorator composition pattern:</p>
<pre><code>// Pseudo code:
var rules =
    new LocalhostRule(
        new IPAddressRule(
            new CloudflareRule(
                new DenyAllAccessRule())));
</code></pre>
<p>The <code>FirewallRulesEngine</code> is only syntactic sugar on top of the decorator pattern which allows a user to compose a set of rules without having to new up a bunch of classes and dependencies.</p>
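<p>To make that concrete, here is a minimal sketch of how such a rule decorator could look. This is for illustration only and not Firewall's actual source code; it only assumes the <code>IFirewallRule</code> interface shown further below and the rule names from the pseudo code above:</p>
<pre><code>// Simplified sketch of the decorator composition (illustration only):
public interface IFirewallRule
{
    bool IsAllowed(HttpContext context);
}

// The innermost rule terminates the chain by denying all access:
public class DenyAllAccessRule : IFirewallRule
{
    public bool IsAllowed(HttpContext context) => false;
}

// Each outer rule either grants access itself or defers to the next rule:
public class LocalhostRule : IFirewallRule
{
    private readonly IFirewallRule _nextRule;

    public LocalhostRule(IFirewallRule nextRule)
    {
        _nextRule = nextRule;
    }

    public bool IsAllowed(HttpContext context) =>
        IPAddress.IsLoopback(context.Connection.RemoteIpAddress)
        || _nextRule.IsAllowed(context);
}
</code></pre>
<p>Because every rule holds a reference to the next one, evaluation naturally starts at the outermost (last chained) rule and falls through towards <code>DenyAllAccessRule</code>, which is exactly the reverse order described above.</p>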
<h2 id="custom-rules">Custom Rules</h2>
<p>Custom rules can either be configured via the <code>ExceptWhen</code> extension method or by creating a new class which implements the <code>IFirewallRule</code> interface:</p>
<pre><code>var adminIPAddress = IPAddress.Parse("1.2.3.4");

app.UseFirewall(
    FirewallRulesEngine
        .DenyAllAccess()
        .ExceptFromCloudflare()
        .ExceptWhen(ctx => adminIPAddress.Equals(ctx.Connection.RemoteIpAddress)));
</code></pre>
<p>More complex rules can be created by implementing <code>IFirewallRule</code>:</p>
<pre><code>public class IPCountryRule : IFirewallRule
{
    private readonly IFirewallRule _nextRule;
    private readonly IList<string> _allowedCountryCodes;

    public IPCountryRule(
        IFirewallRule nextRule,
        IList<string> allowedCountryCodes)
    {
        _nextRule = nextRule;
        _allowedCountryCodes = allowedCountryCodes;
    }

    public bool IsAllowed(HttpContext context)
    {
        const string headerKey = "CF-IPCountry";

        if (!context.Request.Headers.ContainsKey(headerKey))
            return _nextRule.IsAllowed(context);

        var countryCode = context.Request.Headers[headerKey].ToString();
        var isAllowed = _allowedCountryCodes.Contains(countryCode);
        return isAllowed || _nextRule.IsAllowed(context);
    }
}
</code></pre>
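<p>A custom rule like this can then be composed with the rest of the pipeline. The extension method below is a hypothetical helper (not part of the Firewall package) which merely shows how a new decorator slots into the existing rule chain:</p>
<pre><code>// Hypothetical extension method, for illustration purposes only:
public static class IPCountryRuleExtensions
{
    public static IFirewallRule ExceptFromCountries(
        this IFirewallRule nextRule,
        IList<string> allowedCountryCodes)
        => new IPCountryRule(nextRule, allowedCountryCodes);
}

// Usage:
app.UseFirewall(
    FirewallRulesEngine
        .DenyAllAccess()
        .ExceptFromCountries(new [] { "AT", "GB" }));
</code></pre>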
<p>There's a <a href="https://github.com/dustinmoris/Firewall/blob/master/README.md#custom-rules">complete example of creating a custom rule</a> available in the latest documentation.</p>
<h2 id="x-forwarded-for-http-header">X-Forwarded-For HTTP Header</h2>
<p>Firewall has more features like a <a href="https://dev.maxmind.com/geoip/geoip2/geolite2/">GeoIP2</a> powered <code>CountryRule</code>, detailed diagnostics for debugging and examples of how to load rule settings from external configuration providers, but one more ASP.NET Core feature which I wanted to specifically highlight here is the <code>UseForwardedHeaders</code> middleware.</p>
<p>If an application sits behind more than one proxy server (e.g. Cloudflare + a custom load balancer) then you'll need to enable the <code>ForwardedHeaders</code> middleware in order to retrieve the correct client IP address in the <code>HttpContext.Connection.RemoteIpAddress</code> property:</p>
<pre><code>public void Configure(IApplicationBuilder app)
{
    app.UseForwardedHeaders(
        new ForwardedHeadersOptions
        {
            ForwardedHeaders = ForwardedHeaders.XForwardedFor,
            ForwardLimit = 1
        });

    app.UseFirewall(
        FirewallRulesEngine
            .DenyAllAccess()
            .ExceptFromCloudflare());

    app.Run(async (context) =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
}
</code></pre>
<p>It is important to understand that this HTTP header is not guaranteed to be safe (like anything else which is client generated) and therefore it is not recommended to set <code>ForwardLimit</code> to a value greater than 1 unless the application is also set up with a list of trusted proxies (<code>KnownProxies</code> or <code>KnownNetworks</code>). If this is not done correctly then a malicious user could pretend to be a trusted source by setting the <code>X-Forwarded-For</code> header to a known (trusted) IP address.</p>
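<p>For example, if a second (internal) proxy sits between Cloudflare and the application, the configuration could look like the sketch below. The IP address is a placeholder and must be replaced with the actual address of your trusted proxy:</p>
<pre><code>app.UseForwardedHeaders(
    new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor,
        // Two hops: Cloudflare + an internal load balancer:
        ForwardLimit = 2,
        // Only trust forwarded headers appended by this known proxy
        // (10.0.0.100 is a placeholder address):
        KnownProxies = { IPAddress.Parse("10.0.0.100") }
    });
</code></pre>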
<p>If you think this short article was useful or if you've got your own ASP.NET Core website running behind Cloudflare then please go and check out the <a href="https://github.com/dustinmoris/Firewall">Firewall</a> project and secure your application against unwanted traffic too.</p>
https://dusted.codes/asp-net-core-firewall
[email protected] (Dustin Moris Gorski)https://dusted.codes/asp-net-core-firewall#disqus_threadMon, 22 Oct 2018 00:00:00 +0000https://dusted.codes/asp-net-core-firewallaspnet-corefirewallcloudflaresecurityOpen Source Documentation<p>Since January 2017, which soon will be two years ago, I've been maintaining an open source project called <a href="https://github.com/giraffe-fsharp/giraffe">Giraffe</a>. Giraffe is a functional web framework for F# which allows .NET developers to build rich web applications on top of Microsoft's <a href="https://docs.microsoft.com/en-us/aspnet/core/">ASP.NET Core</a> web framework in a functional first approach. Given that <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a> is targeted at .NET web developers, who practise F# and who also like to use ASP.NET Core as their underlying web stack (as opposed to maybe <a href="https://websharper.com/">WebSharper</a>, <a href="https://suave.io/">Suave</a>, <a href="https://freya.io/">Freya</a> or others) I would consider Giraffe a fairly niche product.</p>
<p>However as niche as it may be, it still attracted a reasonable amount of developers who use it in a personal or professional capacity every day and as such documentation has become an integral part of maintaining the project from the get go.</p>
<p>As someone who has never maintained an open source project before I didn't really have much experience with this topic and therefore went with a very straightforward, simple and sort of lazy solution in the beginning. I put all initial documentation into the <code>README.md</code> file inside my Git repository.</p>
<p>Almost two years, nearly 50 releases, 47 contributors, more than 800 GitHub stars and more than 100 merged pull requests later, the project's simple documentation approach hasn't changed much, and to be honest I have very little motivation to do something about it.</p>
<p>The main reason why I'm not particularly excited about migrating <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a>'s documentation to a different place is because I actually believe that the current way of how documentation is handled by Giraffe is a perfectly well working solution for its users, its contributors and myself.</p>
<p>In Giraffe the entire documentation - a full reference of what the web framework can or cannot do and how it integrates and operates within the ASP.NET Core pipeline - is placed in a single <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md"><code>DOCUMENTATION.md</code></a> file inside the <a href="https://github.com/giraffe-fsharp/Giraffe">project's Git repository</a>, right next to the <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/README.md"><code>README.md</code></a> file.</p>
<p>The documentation is not short either. There's quite a lot of content available and somehow this hasn't become a problem yet. If anything, the fact that all of the documentation is placed in a single large file has proven to be extremely advantageous.</p>
<p>Sometimes people ask me why I don't move the docs to a wiki page, like <a href="">GitHub's wiki feature</a>, or use <a href="">GitHub Pages</a> or perhaps even a third party tool like <a href="https://readthedocs.org/">Read the Docs</a> and my answer has always been the same: because they all suck for documentation!</p>
<p>I find them particularly bad for documentation because they often fail to sufficiently address one of the two most important aspects of what makes good documentation in my opinion:</p>
<ol>
<li>Help your users</li>
<li>Stay up to date (aka provide accurate information)</li>
</ol>
<p>The first point is without doubt the most important aspect of all. If a project's documentation doesn't sufficiently address point 1, then there's no point in even having documentation at all.</p>
<h2 id="a-single-documentationmd-file-helps-your-users">A single <code>DOCUMENTATION.md</code> file helps your users</h2>
<h3 id="discovery">Discovery</h3>
<p>Before users can read (and hopefully benefit from) your documentation they need to be able to find it in the first place. Having all of your documentation in a single <code>DOCUMENTATION.md</code> file makes that discovery process a lot easier.</p>
<p>First, the file is labelled in big capital letters stating "DOCUMENTATION". This is definitely a good start. Secondly it is right next to a file called <code>README.md</code>, which is a pretty well established (and understood) concept in the open source community. The chances that someone will find the <code>DOCUMENTATION.md</code> file which resides right next to the <code>README.md</code> file are considerably high, I would say. An additional reference from the <code>README.md</code> file pointing to the <code>DOCUMENTATION.md</code> file often helps to eliminate any leftover chance of someone not finding my project's documentation.</p>
<p>Furthermore the discovery process is massively boosted by the fact that the <code>DOCUMENTATION.md</code> file is stored inside the project's Git repository. Given that most open source projects are hosted by one of the big Git hosting providers (GitHub, BitBucket, GitLab, etc.) there's a high probability that the <code>DOCUMENTATION.md</code> file will end up very high on a search engine's results page - and let's face it, this is probably how the majority of your users will search for documentation in the first place anyway. It will probably even outrank any custom homepage or wiki page, which is no coincidence. GitHub's, BitBucket's and other Git hosting providers' main business model is to provide a user friendly hosting platform for your open source projects as well as a user friendly platform for your own users to easily discover, browse and contribute to your project. These platforms have discoverability at their heart and are extremely well SEO optimised. If my project's <code>DOCUMENTATION.md</code> file can benefit from that optimisation then I'm all up for it!</p>
<h3 id="well-understood-structure">Well understood structure</h3>
<p>Once users find the link to the <code>DOCUMENTATION.md</code> file and click on it they will be presented with the actual content. At this point it is the maintainer's responsibility to make sure that the content is written and presented in such a way that it addresses the needs of its readers.</p>
<p>The ease of use and the initial experience of your users will often be determined by how familiar and comfortable they are with your documentation's structure. Wiki pages, GitHub Pages, custom homepages, Read the Docs pages and other third party tools all have their own take on how to structure well laid out documentation. Is the menu on the top? Maybe on the left or right? Does it slide or collapse and where does it go when I open it on a mobile device with a much smaller screen size? Does it even have a menu? These questions seem extremely trivial, but they are often responsible for frustration amongst users when they are not implemented in a good way.</p>
<p>I once had to read the documentation of a third party software which had so many menu items that the menu exceeded my screen's vertical length. Unfortunately at that time there was an issue with the website which didn't let me scroll beyond the last visible item on the screen and I had to open my browser's developer tools to read the remaining items directly from the source code and open the links manually in a new tab. Needless to say this was a terrible experience.</p>
<p>I'm sure that this issue has been fixed by now and I'm not saying that all third party tools have such a bad user experience, but regardless of what their actual UX is, each of them has a slightly different approach to how they structure their content. This is all good in the context of normal (commercial) websites, but this approach often forgets that documentation is much simpler than the usual website on the internet. Documentation doesn't require half of the things which a normal website can do today. Documentation is a read only exercise. Most importantly, documentation already has an ancient universally understood structure which every human is very likely to be familiar (and comfortable) with: the table of contents.</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2018-10-08/book-table-of-contents.jpg" alt="Table of contents inside a book, Image by Dustin Moris Gorski" class="three-quarters" />
<p>A table of contents is so simple and effective that it is still used across various industries for any content which happens to be larger than a single page. Magazines, books, catalogues, contracts and manuals of all sorts use a table of contents in order to structure their content in a user friendly way.</p>
<p>A table of contents lets one structure a large document into smaller pieces without having to divide the content into multiple pages. If it works for print, e-books or large PDFs, then I don't see why it wouldn't work for a project's <code>DOCUMENTATION.md</code> file which is hosted on the web:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2018-10-08/giraffe-table-of-contents.png" alt="Table of contents for open source project, Image by Dustin Moris Gorski" class="three-quarters" />
<p>Instead of having to break up an online documentation into multiple pages which need to be maintained individually by a person or team, a table of contents allows one to have a single large <code>DOCUMENTATION.md</code> file which can be maintained a lot easier without losing the convenience of a structured document.</p>
<p>There's also no ambiguity where a user will find the menu (= table of contents). It's always at the top (or the front) of a document, regardless if it's been opened on a large screen, a small mobile device or if it's printed on paper.</p>
<h3 id="search-is-king">Search is king</h3>
<p>Arguably the most important feature of good documentation (apart from the content itself) is the capability of quickly finding a certain piece of information. I would even go as far as to say that a website's search capability can make or break good documentation.</p>
<p>This again sounds like an extremely trivial problem and yet I feel like we're constantly getting it wrong to a point where a user can never really know whether they can trust a website's search box or not. Will the search give me meaningful suggestions when I type? Will it suggest relevant content even when I mistype something? How does the algorithm prioritise results? What key words do I need to search for? When I type "nodejs" will it also show me results which have "node.js" in the title? I'd hope so, but I can never be 100% sure.</p>
<p>For example there's a huge difference in the quality of results which I get from searching a programming question on Google or directly via <a href="https://stackoverflow.com">StackOverflow.com</a>. I want to get an answer from <a href="https://stackoverflow.com">StackOverflow.com</a>, but I still choose to search for it via Google because I know I'll get much better results.</p>
<p>The loss of trust in a website's search box has a deep implication on the user experience. If I'm browsing a website and I want to search for a specific topic which I am interested in and I don't get a perfect match to my first query then I'm rarely satisfied with my initial results. Often I will start altering my search query several times before accepting that the topic which I'm looking for might not be covered by the current state of documentation. Most likely I will even go to Google and try several search queries there before making any conclusions.</p>
<p>Another realisation which I have made is that nowadays it doesn't even matter how good the actual search of a website is. The mistrust in search boxes is so engrained in users' behaviour that even if a wiki page (or any other tool) has a perfectly functioning search box users will still go to a search engine and double check their results. It's kind of sad that users have been trained over time that the best way to search for any information is to leave the actual website where they want to get the information from and search for it on another estate.</p>
<p>The only search which I have found to be more trusted than Google's is the browser's built-in text search (often quickly accessed through the <kbd>CTRL+F</kbd> or <kbd>CMD+F</kbd> shortcut). A single page <code>DOCUMENTATION.md</code> file can be easily and confidently searched by simply using a browser's built-in full text search - and regardless of what the results are - users know to trust it. From my experience this is the only time where a user <em>actually</em> trusts their initial search results and almost <em>never</em> re-validates via Google or any other search engine. In terms of user experience this is a huge benefit of keeping all documentation in a single large file. It makes finding information easy, predictable, trustworthy and extremely fast!</p>
<h3 id="other-benefits-for-users">Other benefits for users</h3>
<p>There's quite a few other benefits which users get from a single <code>DOCUMENTATION.md</code> file:</p>
<ul>
<li>Search works offline. Having the documentation open in one tab allows a user to make effective use of it even when there is a loss of connectivity (e.g. sitting on a train, plane, airport, etc.).</li>
<li>Downloadable - The entire documentation can be downloaded with a single click and is usable as it is. This is particularly true for a Markdown file, which is already human readable without any extra software. Given that most users probably have some sort of editor which can nicely format Markdown files, it's even better.</li>
<li>Print friendly - Some people like to have a print-out of the documentation. A single <code>DOCUMENTATION.md</code> file is extremely print friendly, since it doesn't need anything else in order to be user friendly for offline consumption.</li>
<li>Looks good on any screen size. A single large documentation Markdown file is extremely friendly towards all sort of screen sizes.</li>
</ul>
<h2 id="a-single-documentationmd-file-makes-it-easy-to-maintain">A single <code>DOCUMENTATION.md</code> file makes it easy to maintain</h2>
<p>The most annoying thing about documentation is that it is extremely difficult to keep up to date if nobody really owns this responsibility. For an open source project this normally means that the lead maintainer has to constantly update the documentation after a new release has been published, which often proves to be a process which doesn't scale very well.</p>
<p>If all of the documentation is hosted in a single <code>DOCUMENTATION.md</code> file inside a project's Git repository then this responsibility can be easily shared with other contributors. There is a huge benefit in being able to search and replace function names and other code samples as part of the normal re-factoring process from an IDE. Without even having to actively think about documentation the normal search and replace feature in an IDE will automatically include any findings from the <code>DOCUMENTATION.md</code> too. It is also extremely useful to have the documentation closely linked with the various branches of a project. This allows contributors and other maintainers to keep the documentation up to date as and when they make changes on a specific branch. Heck, it can even be required to update the documentation as part of a pull request, which gives the core maintainer huge leverage to distribute this responsibility to other contributors as well.</p>
<p>Apart from never being out of sync, another nice benefit is that there is never a delay between publishing the updated version of the documentation and the actual product release itself. As soon as a release is crafted and everything is merged back into master (from where the build will automatically deploy the latest version) the documentation has been merged into master as well and has therefore become the latest updated iteration which matches the released code.</p>
<h2 id="overall-experience">Overall experience</h2>
<p>Personally I've not found a single downside of having a single large documentation file written up in Markdown and kept close to my code inside my project's Git repository yet. It has proven to be an extremely powerful pattern which allows me to easily keep <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a>'s documentation up to date, well maintained and extremely user friendly for everyone who's been using it so far.</p>
<p>I don't pretend that I've invented something new here or that I'm the first one doing this, but I simply wanted to state all the benefits which I actively thought about when taking the decision to follow this pattern, and I thought it would be worth sharing as I think other projects could certainly benefit from better documentation too.</p>
https://dusted.codes/open-source-documentation
[email protected] (Dustin Moris Gorski)https://dusted.codes/open-source-documentation#disqus_threadMon, 08 Oct 2018 00:00:00 +0000https://dusted.codes/open-source-documentationdocumentationossgiraffeStop doing security (yourself)<p>Security has a very special place in software engineering. It is very different than other disciplines, because it is a problem space which cannot be approached with the same mindset as other problem spaces in our trade.</p>
<p>What makes security stand out so much is that it doesn't follow the same principles as the rest of software development. There is no diverse set of valid opinions; there is only one right opinion and the rest. There are not many ways to achieve a secure system. There's only one very narrow, very particular and very unforgiving way to achieve a secure system (based on the latest knowledge) and if you don't follow these steps very precisely then you'll end up with something which has more holes than a Swiss cheese.</p>
<p>The other difference (between security and the rest of software engineering) is that you cannot delegate that competency to a single person. Your organisation might have one software architect, maybe one principal developer, maybe one Java or one database specialist, but if you have only one security expert then you have a problem. Security is such a complex topic, that even the most knowledgeable person on the planet will not know everything that is important.</p>
<p>Unfortunately, by nature, a system can never be assumed (or claimed) to be 100% secure, because there is no physics on this planet which could back this up. Therefore, the best and really only way which an organisation can use to its defence is to get their security model in front of as many eyes as possible. This is not an opinion, but a fact.</p>
<p>The most secure encryption is not the one which has been developed by the most knowledgeable person, but the one which has been reviewed, hacked and revised by the most people. There is a reason why all sophisticated cryptography algorithms are open to the public. Cryptographers deliberately want their work to be exposed to the largest possible audience and have it validated by them, because even they don't know how secure it is until it's been tested. This also explains why we have organisations specialising in penetration testing or why big organisations run public bug bounty programs.</p>
<p>With that in mind, my universal recommendation to any organisation which is not deeply involved in the InfoSec community is always to not do security by yourself. This can really not be stressed enough, but if you are not an industry leading expert in security, then don't even think about implementing your own password hashing algorithm, don't secure your API with a custom built authentication scheme and please don't build your own identity provider.</p>
<p>There are three reasons why:</p>
<ol>
<li>You'll get it wrong</li>
<li>You don't have to do it, because others have already done it for you</li>
<li>You will get it wrong</li>
</ol>
<p>If this little pep talk hasn't been convincing enough yet then there is another crucial reason why you shouldn't do security yourself: Security is out of your control and you'll probably live better by not having to deal with it.</p>
<p>Imagine your development team has made the choice to use Amazon's DynamoDb as their main data persistence layer. Two years down the line Amazon releases a new, improved and more state-of-the-art NoSQL database, but the development team doesn't find out about it until a few months later when one of the team members hears about it at a conference. After the conference the team sits together and decides to migrate to the new NoSQL database, but they won't do it for another six months, because they simply don't have the time and resources right now. Half of the team is on summer holiday, one person is sick, the development manager is on her honeymoon (so they lack formal approval anyway) and the person who originally came up with the current database architecture left the company last year. It will take some time to come up with a good migration strategy and roll it out in an efficient way. Luckily that is not an issue, because the current NoSQL database still works perfectly well and there's no immediate pressure to make a hasty change. The team is not bothered, nobody is stressing out and everyone enjoys their holidays before the team tackles the project later in the year.</p>
<p>Now imagine the same scenario but replace "database" with "identity provider" and replace "state-of-the-art" with "not full of security holes" and the whole story looks very different.</p>
<p>If someone drops a <a href="https://en.wikipedia.org/wiki/Zero-day_(computing)">zero day</a> vulnerability which affects your system then time is against you before you have even realised it. At this point you are already on the losing end and the problem which you're trying to tackle is not prevention, but damage control. The longer it will take you to fix, update and roll out a new version of your affected software the more likely you are going to be hit hard by this situation.</p>
<p>It will be of very little consolation knowing that this whole disaster hasn't even been your fault. You might just have used a cryptographic implementation which was considered secure yesterday, but today you woke up to the news that some hackers have published a white paper on how to crack it in less than an hour. This was not a targeted attack against you, your business or your customers. This was simply a new discovery which has been dumped on the security community without much thought and now you and half of the world are affected by it. Before you think this type of stuff doesn't happen, let me quickly remind you of incidents like <a href="http://heartbleed.com/">Heartbleed</a>, <a href="https://en.wikipedia.org/wiki/Cloudbleed">Cloudbleed</a>, <a href="https://meltdownattack.com/">Meltdown</a>, <a href="https://spectreattack.com/">Spectre</a>, <a href="https://en.wikipedia.org/wiki/WannaCry_ransomware_attack">WannaCry</a>, <a href="https://krebsonsecurity.com/tag/zero-day/">and so on</a>.</p>
<p>In order to survive such a scenario you'll want to have certain things in place:</p>
<ul>
<li>You'd hope to hear about the news before it's too late. This usually requires someone being extremely active in the InfoSec community, following multiple industry leading experts on Twitter, reading InfoSec blogs and being subscribed to InfoSec related RSS feeds and mailing lists.</li>
<li>You have a security response team which has the skillset, communication channels and authority within your organisation to deal with this matter as fast as possible</li>
<li>You have an extremely fast development life cycle. Your security response team can get the latest version of a project, make the necessary changes, get sufficient test coverage and deployment to your production systems turned around in as little time as possible</li>
<li>Your security response team is ideally available around the clock. You don't know when the news might come to light and given the huge time differences between different places it can be mission critical to have security employees working around the clock, or at least be prepared for an uncomfortable situation in the middle of the night. Your security team might wake up one or two hours away from the office and needs to have sufficient access to deal with such a situation away from their office desk.</li>
</ul>
<p>Unless your business meets all of these requirements it might be flat-out irresponsible to even think of writing your own custom security software if you don't have the means to deal with the issues that come with it.</p>
<p>Now my main point is not to scare you away from writing your own software, but to create some awareness that in the context of security you are probably better off deferring these responsibilities to a third party which specialises in this field. Someone who lives and breathes security every day is much better equipped to deal with all the unknowns which we're confronted with every day.</p>
https://dusted.codes/stop-doing-security-yourself
[email protected] (Dustin Moris Gorski)https://dusted.codes/stop-doing-security-yourself#disqus_threadFri, 21 Sep 2018 00:00:00 +0000https://dusted.codes/stop-doing-security-yourselfsecuritycryptographyGiraffe 1.1.0 - More routing handlers, better model binding and brand new model validation API<p>Last week I announced the release of <a href="https://github.com/giraffe-fsharp/Giraffe/releases/tag/v1.1.0">Giraffe 1.0.0</a>, which (apart from some initial confusion around the transition to <a href="https://github.com/rspeele/TaskBuilder.fs">TaskBuilder.fs</a>) went mostly smoothly. However, if you have thought that I would be chilling out much since then, then you'll probably be disappointed to hear that today I've released another version of Giraffe with more exciting features and minor bug fixes.</p>
<p>The release of <a href="https://github.com/giraffe-fsharp/Giraffe/releases/tag/v1.1.0">Giraffe 1.1.0</a> is mainly focused around improving Giraffe's <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md#routing">routing API</a>, making <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md#model-binding">model binding</a> more functional and adding a new <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md#model-validation">model validation API</a>.</p>
<p>Some of these features address long-requested functionality, so let's not waste any more time and get straight down to it.</p>
<h2 id="routes-with-trailing-slashes">Routes with trailing slashes</h2>
<p>Often I've been asked how to make Giraffe treat a route with a trailing slash equal to the same route without a trailing slash:</p>
<pre><code>https://example.org/foo/bar
https://example.org/foo/bar/</code></pre>
<p>According to the technical specification a <a href="https://webmasters.googleblog.com/2010/04/to-slash-or-not-to-slash.html">route with a trailing slash is not the same as a route without it</a>. A web server might want to serve a different response for each route and therefore Giraffe (rightfully) treats them differently.</p>
<p>However, it is not uncommon that a web application chooses to not distinguish between two routes with and without a trailing slash and as such it wasn't a surprise when I received multiple bug reports for Giraffe not doing this by default.</p>
<p>Before version 1.1.0 one would have had to specify two individual routes in order to make it work:</p>
<pre><code>let webApp =
    choose [
        route "/foo"  >=> text "Foo"
        route "/foo/" >=> text "Foo"
    ]</code></pre>
<p>Giraffe version 1.1.0 offers a new routing handler called <code>routex</code> which is similar to <code>route</code> except that it allows a user to specify a <code>Regex</code> pattern in the route declaration.</p>
<p>This makes it possible to define routes with more complex rules such as allowing an optional trailing slash:</p>
<pre><code>let webApp =
    choose [
        routex "/foo(/?)" >=> text "Foo"
    ]</code></pre>
<p>The <code>(/?)</code> regex pattern denotes that there can be zero or one slash after <code>/foo</code>.</p>
<p>With the help of <code>routex</code> and <code>routeCix</code> (the case insensitive version of <code>routex</code>) one can explicitly allow trailing slashes (or other non-standard behaviour) in a single route declaration.</p>
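<p>For example, a short sketch (a hypothetical route, but using the <code>routeCix</code> handler just mentioned) which accepts <code>/foo</code>, <code>/FOO</code> and <code>/foo/</code> alike could look like this:</p>
<pre><code>let webApp =
    choose [
        // routeCix is the case insensitive variant of routex,
        // so this single declaration matches /foo, /FOO, /Foo/ etc.
        routeCix "/foo(/?)" >=> text "Foo"
    ]</code></pre>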
<h2 id="parameterised-sub-routes">Parameterised sub routes</h2>
<p>Another request which I have seen on several occasions was a parameterised version of the <code>subRoute</code> http handler.</p>
<p>Up until Giraffe 1.0.0 there was only a <code>routef</code> and a <code>subRoute</code> http handler, but not a combination of both.</p>
<p>Imagine you have a localised application which requires a language parameter at the beginning of each route:</p>
<pre><code>https://example.org/en-gb/foo
https://example.org/de-at/bar
etc.</code></pre>
<p>In previous versions of Giraffe one could have used <code>routef</code> to parse the parameter and pass it into another <code>HttpHandler</code> function:</p>
<pre><code>let fooHandler (lang : string) =
    sprintf "You have chosen the language %s." lang
    |> text

let webApp =
    choose [
        routef "/%s/foo" fooHandler
    ]</code></pre>
<p>This was all good up until someone needed to make use of something like <code>routeStartsWith</code> or <code>subRoute</code> to introduce additional validation/authentication before invoking the localised routes:</p>
<pre><code>let webApp =
    choose [
        // Doesn't require authentication
        routef "/%s/foo" fooHandler
        routef "/%s/bar" barHandler

        // Requires authentication
        requiresAuth >=> choose [
            routef "/%s/user/%s/foo" userFooHandler
            routef "/%s/user/%s/bar" userBarHandler
        ]
    ]</code></pre>
<p>The problem with the above code is that the routing pipeline will always check whether a user is authenticated (and potentially return an error response) before even knowing if the subsequent routes require it.</p>
<p>The workaround was to move the authentication check into each of the individual handlers, namely the <code>userFooHandler</code> and the <code>userBarHandler</code> in this instance.</p>
<p>A more elegant way would have been to specify the authentication handler only once before declaring all protected routes in a single group. Normally the <code>subRoute</code> http handler would make this possible, but not if routes have parameterised arguments at the beginning of their paths.</p>
<p>The new <code>subRoutef</code> http handler solves this issue now:</p>
<pre><code>let webApp =
    choose [
        // Doesn't require authentication
        routef "/%s/foo" fooHandler
        routef "/%s/bar" barHandler

        // Requires authentication
        subRoutef "/%s-%s/user" (
            fun (lang, dialect) ->
                // At this point it is already
                // established that the path
                // is a protected user route:
                requiresAuth
                >=> choose [
                    routef "/%s/foo" (userFooHandler lang dialect)
                    routef "/%s/bar" (userBarHandler lang dialect)
                ]
        )
    ]</code></pre>
<p>The <code>subRoutef</code> http handler can pre-parse parts of a route and group a collection of cohesive routes in one go.</p>
<h2 id="improved-model-binding-and-model-validation">Improved model binding and model validation</h2>
<p>The other big improvements in Giraffe 1.1.0 were all around model binding and model validation.</p>
<p>The best way to explain the new model binding and validation API is by looking at how Giraffe has done model binding in previous versions:</p>
<pre><code>[<CLIMutable>]
type Adult =
    {
        FirstName  : string
        MiddleName : string option
        LastName   : string
        Age        : int
    }
    override this.ToString() =
        sprintf "%s %s"
            this.FirstName
            this.LastName

    member this.HasErrors() =
        if this.Age < 18 then Some "Person must be an adult (age >= 18)."
        else if this.Age > 150 then Some "Person must be a human being."
        else None

module WebApp =
    let personHandler : HttpHandler =
        fun (next : HttpFunc) (ctx : HttpContext) ->
            let adult = ctx.BindQueryString<Adult>()
            match adult.HasErrors() with
            | Some msg -> RequestErrors.BAD_REQUEST msg next ctx
            | None     -> text (adult.ToString()) next ctx

    let webApp _ =
        choose [
            route "/person" >=> personHandler
            RequestErrors.NOT_FOUND "Not found"
        ]</code></pre>
<p>In this example we have a typical F# record type called <code>Adult</code>. The <code>Adult</code> type has an override for its <code>ToString()</code> method to output something more meaningful than .NET's default and an additional member called <code>HasErrors()</code> which checks if the provided data is correct according to the application's business rules (e.g. an adult must have an age of 18 or over).</p>
<p>There are a few problems with this implementation though. First you must know that the <code>BindQueryString<'T></code> extension method is a very loose model binding function, which means it will create an instance of type <code>Adult</code> even if some of the mandatory (non-optional) fields were not present in the query string (or badly formatted). While this "optimistic" model binding approach has its own advantages, it is not very idiomatic to functional programming and requires additional <code>null</code> checks in subsequent code.</p>
<p>Secondly the model validation has been baked into the <code>personHandler</code> which is not a big problem at first, but means that there's a lot of boilerplate code to be written if an application has more than just one model to work with.</p>
<p>Giraffe 1.1.0 introduces <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/RELEASE_NOTES.md#110">new http handler functions</a> which make model binding more functional. The new <code>tryBindQuery<'T></code> http handler is a stricter model binding function, which will only create an instance of type <code>'T</code> if all mandatory fields have been provided by the request's query string. It will also make sure that the provided data is in the correct format (e.g. a numeric value has been provided for an <code>int</code> property of the model) before returning an object of type <code>'T</code>:</p>
<pre><code>[<CLIMutable>]
type Adult =
    {
        FirstName  : string
        MiddleName : string option
        LastName   : string
        Age        : int
    }
    override this.ToString() =
        sprintf "%s %s"
            this.FirstName
            this.LastName

    member this.HasErrors() =
        if this.Age < 18 then Some "Person must be an adult (age >= 18)."
        else if this.Age > 150 then Some "Person must be a human being."
        else None

module WebApp =
    let adultHandler (adult : Adult) : HttpHandler =
        fun (next : HttpFunc) (ctx : HttpContext) ->
            match adult.HasErrors() with
            | Some msg -> RequestErrors.BAD_REQUEST msg next ctx
            | None     -> text (adult.ToString()) next ctx

    let parsingErrorHandler err = RequestErrors.BAD_REQUEST err

    let webApp _ =
        choose [
            route "/person" >=> tryBindQuery<Adult> parsingErrorHandler None adultHandler
            RequestErrors.NOT_FOUND "Not found"
        ]</code></pre>
<p>The <code>tryBindQuery<'T></code> http handler requires three parameters. The first is an error handling function of type <code>string -> HttpHandler</code> which will be invoked when model binding fails. The <code>string</code> parameter in that function will hold the specific model parsing error message. The second parameter is an optional <code>CultureInfo</code> object, which will be used to parse culture specific data such as <code>DateTime</code> values or floating point numbers. The last parameter is a function of type <code>'T -> HttpHandler</code>, which will be invoked with the parsed model if model binding was successful.</p>
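<p>As a quick sketch (reusing the <code>adultHandler</code> and <code>parsingErrorHandler</code> names from the example above), binding with an explicit culture might look roughly like this:</p>
<pre><code>open System.Globalization

let webApp =
    choose [
        // Passing Some culture makes tryBindQuery parse dates and
        // floating point numbers with Austrian German formatting:
        route "/person"
        >=> tryBindQuery<Adult>
                parsingErrorHandler
                (Some (CultureInfo.CreateSpecificCulture "de-AT"))
                adultHandler
    ]</code></pre>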
<p>By using <code>tryBindQuery<'T></code> there is no danger of encountering a <code>NullReferenceException</code> and no need for additional <code>null</code> checks any more. By the time the model has been passed into the <code>adultHandler</code> it has already been validated against any data contract violations (e.g. all mandatory fields have been provided, etc.).</p>
<p>At this point the semantic validation of business rules is still embedded in the <code>adultHandler</code> itself. The <code>IModelValidation<'T></code> interface can help to move this validation step closer to the model and make use of a more generic model validation function when composing the entire web application together:</p>
<pre><code>[<CLIMutable>]
type Adult =
    {
        FirstName  : string
        MiddleName : string option
        LastName   : string
        Age        : int
    }
    override this.ToString() =
        sprintf "%s %s"
            this.FirstName
            this.LastName

    member this.HasErrors() =
        if this.Age < 18 then Some "Person must be an adult (age >= 18)."
        else if this.Age > 150 then Some "Person must be a human being."
        else None

    interface IModelValidation<Adult> with
        member this.Validate() =
            match this.HasErrors() with
            | Some msg -> Error (RequestErrors.BAD_REQUEST msg)
            | None     -> Ok this

module WebApp =
    let textHandler (x : obj) = text (x.ToString())
    let parsingErrorHandler err = RequestErrors.BAD_REQUEST err
    let tryBindQuery<'T> = tryBindQuery<'T> parsingErrorHandler None

    let webApp _ =
        choose [
            route "/person" >=> tryBindQuery<Adult> (validateModel textHandler)
        ]</code></pre>
<p>By implementing the <code>IModelValidation<'T></code> interface on the <code>Adult</code> record type we can now make use of the <code>validateModel</code> http handler when composing the <code>/person</code> route. This functional composition allows us to entirely get rid of the <code>adultHandler</code> and keep a clear separation of concerns.</p>
<p>First the <code>tryBindQuery<Adult></code> handler will parse the request's query string and create an instance of type <code>Adult</code>. If the query string had badly formatted or missing data then the <code>parsingErrorHandler</code> will be executed, which allows a user to specify a custom error response for data contract violations. If the model could be successfully parsed, then the <code>validateModel</code> http handler will be invoked which will now validate the business rules of the model (by invoking the <code>IModelValidation.Validate()</code> method). The user can specify a different error response for business rule violations when implementing the <code>IModelValidation<'T></code> interface. Lastly if the model validation succeeded then the <code>textHandler</code> will be executed which will simply use the object's <code>ToString()</code> method to return a <code>HTTP 200</code> text response.</p>
<p>All functions are generic now, so adding more routes for other models is just a matter of implementing a new record type for each model and registering a single route in the web application's composition:</p>
<pre><code>let webApp _ =
    choose [
        route "/adult" >=> tryBindQuery<Adult> (validateModel textHandler)
        route "/child" >=> tryBindQuery<Child> (validateModel textHandler)
        route "/dog"   >=> tryBindQuery<Dog>   (validateModel textHandler)
    ]</code></pre>
<p>Overall the new model binding and model validation API aims at providing a more functional counterpart to <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/models/validation">MVC's model validation</a>, except that Giraffe prefers to use functions and interfaces instead of the <code>System.ComponentModel.DataAnnotations</code> attributes. The benefit is that data attributes are often ignored by the rest of the code, while a simple validation function can be used from outside Giraffe as well. F# also has the benefit of a stronger type system than C#, which means that things like the <code>[<Required>]</code> attribute have little use if there is already an <code>Option<'T></code> type.</p>
<p>Currently this new improved way of model binding in Giraffe only works for query strings and HTTP form payloads via the <code>tryBindQuery<'T></code> and <code>tryBindForm<'T></code> http handler functions. Model binding functions for JSON and XML remain with the "optimistic" parsing model due to the underlying serialization libraries (JSON.NET and <code>XmlSerializer</code>), but a future update with improvements for JSON and XML is planned as well.</p>
<p>In total you have the following new model binding http handlers at your disposal with Giraffe 1.1.0:</p>
<table>
<tr>
<th style="text-align: left; min-width: 120px;">HttpHandler</th>
<th style="text-align: left">Description</th>
</tr>
<tr>
<td><code>bindJson<'T></code></td>
<td>Traditional model binding. This is a new http handler equivalent of <code>ctx.BindJsonAsync<'T></code>.</td>
</tr>
<tr>
<td><code>bindXml<'T></code></td>
<td>Traditional model binding. This is a new http handler equivalent of <code>ctx.BindXmlAsync<'T></code>.</td>
</tr>
<tr>
<td><code>bindForm<'T></code></td>
<td>Traditional model binding. This is a new http handler equivalent of <code>ctx.BindFormAsync<'T></code>.</td>
</tr>
<tr>
<td><code>tryBindForm<'T></code></td>
<td>New improved model binding. This is a new http handler equivalent of a new <code>HttpContext</code> extension method called <code>ctx.TryBindFormAsync<'T></code>.</td>
</tr>
<tr>
<td><code>bindQuery<'T></code></td>
<td>Traditional model binding. This is a new http handler equivalent of <code>ctx.BindQueryString<'T></code>.</td>
</tr>
<tr>
<td><code>tryBindQuery<'T></code></td>
<td>New improved model binding. This is a new http handler equivalent of a new <code>HttpContext</code> extension method called <code>ctx.TryBindQueryString<'T></code>.</td>
</tr>
<tr>
<td><code>bindModel<'T></code></td>
<td>Traditional model binding. This is a new http handler equivalent of <code>ctx.BindModelAsync<'T></code>.</td>
</tr>
</table>
<p>The new model validation API works with any http handler which returns an object of type <code>'T</code> and is not limited to <code>tryBindQuery<'T></code> and <code>tryBindForm<'T></code> only.</p>
<h2 id="roadmap-overview">Roadmap overview</h2>
<p>To round up this blog post I thought I'd quickly give you a brief overview of what I am planning to tackle next.</p>
<p>The next release of Giraffe is anticipated to be version 1.2.0 (no date set yet), which will mainly focus on improved authentication and authorization handlers (policy based auth support), better CORS support and hopefully better Anti-CSRF support.</p>
<p>After that if nothing else urgent comes up I shall be free to go over two bigger PRs in the Giraffe repository which aim at providing a <a href="https://github.com/giraffe-fsharp/Giraffe/pull/218">Swagger integration API</a> and a <a href="https://github.com/giraffe-fsharp/Giraffe/pull/182">higher level API of working with web sockets</a> in ASP.NET Core.</p>
https://dusted.codes/giraffe-110-more-routing-handlers-better-model-binding-and-brand-new-model-validation-api
[email protected] (Dustin Moris Gorski)https://dusted.codes/giraffe-110-more-routing-handlers-better-model-binding-and-brand-new-model-validation-api#disqus_threadFri, 16 Feb 2018 00:00:00 +0000https://dusted.codes/giraffe-110-more-routing-handlers-better-model-binding-and-brand-new-model-validation-apigiraffeaspnet-corefsharpdotnet-corewebAnnouncing Giraffe 1.0.0<p>I am pleased to announce the release of <a href="https://github.com/giraffe-fsharp/Giraffe/releases/tag/v1.0.0">Giraffe 1.0.0</a>, a functional ASP.NET Core web framework for F# developers. After more than a year of building, improving and testing the foundations of Giraffe it makes me extremely happy to hit this important milestone today. With the help of <a href="https://github.com/giraffe-fsharp/Giraffe/graphs/contributors">32 independent contributors</a>, more than a hundred <a href="https://github.com/giraffe-fsharp/Giraffe/issues?q=is%3Aissue+is%3Aclosed">closed GitHub issues</a> and an astonishing <a href="https://github.com/giraffe-fsharp/Giraffe/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aclosed+is%3Amerged">79 merged pull requests</a> (and counting) it is fair to say that Giraffe has gone through many small and big changes, which made it, I believe, one of the best functional web frameworks available today.</p>
<p>The release of Giraffe 1.0.0 continues with this trend and also brings some new features and improvements along the way:</p>
<h2 id="streaming-support">Streaming support</h2>
<p>Giraffe 1.0.0 offers a new <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md#streaming">streaming API</a> which can be used to stream (large) files and other content directly to a client.</p>
<p>A lot of work has been put into making this feature work properly, such as supporting conditional HTTP headers and range processing capabilities. On top of that I was even able to help iron out a <a href="https://github.com/aspnet/Mvc/issues/7208">few bugs in ASP.NET Core MVC</a>'s implementation as well (loving the fact that ASP.NET Core is all open source).</p>
<h2 id="conditional-http-headers">Conditional HTTP Headers</h2>
<p>In addition to the new streaming API the <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md#conditional-requests">validation of conditional HTTP headers</a> has been exposed as a separate feature too. The <code>ValidatePreconditions</code> function is available as a <code>HttpContext</code> extension method which can be used to validate <code>If-{...}</code> HTTP headers from within any http handler in Giraffe. The function determines the context in which it is called (e.g. <code>GET</code>, <code>POST</code>, <code>PUT</code>, etc.) and returns a correct result denoting whether a request should be processed further or not.</p>
<h2 id="configuration-of-serializers">Configuration of serializers</h2>
<p>A much desired and important improvement was the ability to change the default implementation of <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md#serialization">data serializers</a> and <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md#content-negotiation">content negotiation</a>. Giraffe 1.0.0 allows an application to configure the default JSON or XML serializer via ASP.NET Core's services container.</p>
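<p>For illustration only, a minimal sketch of what such a registration might look like (the serializer type names and settings below are an assumption based on Giraffe's serialization documentation, not verbatim from this release):</p>
<pre><code>open Giraffe.Serialization
open Microsoft.Extensions.DependencyInjection
open Newtonsoft.Json

let configureServices (services : IServiceCollection) =
    // Register a customised JSON serializer in ASP.NET Core's
    // services container, which Giraffe will then resolve
    // instead of its default:
    let settings = JsonSerializerSettings(Formatting = Formatting.Indented)
    services.AddSingleton<IJsonSerializer>(NewtonsoftJsonSerializer settings)
    |> ignore</code></pre>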
<h2 id="detailed-xml-documentation">Detailed XML documentation</h2>
<p>For the first time Giraffe has detailed XML documentation for all public facing functions available:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2018-02-09/40125759872_9550519cbe_o.png" alt="giraffe-xml-docs, Image by Dustin Moris Gorski">
<p>Even though this is not a feature itself, it aims at improving the general development experience by providing better IntelliSense and more detailed information when working with Giraffe.</p>
<h2 id="giraffetasks-deprecated">Giraffe.Tasks deprecated</h2>
<p>When Giraffe introduced the <code>task {}</code> CE for the first time it was a copy of the single file project <a href="https://github.com/rspeele/TaskBuilder.fs">TaskBuilder.fs</a> written by <a href="https://github.com/rspeele">Robert Peele</a>. However, maintaining our own copy of the task CE is resource expensive and not exactly my personal field of expertise. Besides that, since the initial release Robert has made great improvements to TaskBuilder.fs whereas Giraffe's version has been lagging behind. When TaskBuilder.fs was published to NuGet it felt like a good idea to deprecate <code>Giraffe.Tasks</code> and revert back to the original.</p>
<p>This allows me and other Giraffe contributors to focus more on the web part of Giraffe and let Robert do his excellent work on the async/task side of things. Otherwise nothing has changed and Giraffe will continue to build on top of <code>Task</code> and <code>Task<'T></code>. If you use <code>Giraffe.Tasks</code> outside of a Giraffe web application then you can continue doing so by referencing <code>TaskBuilder.fs</code> instead.</p>
<p>Giraffe also continues to exclusively use the context insensitive version of the task CE (meaning all task objects are awaited with <code>ConfigureAwait(false)</code>). If you encounter type inference issues after the upgrade to Giraffe 1.0.0 then you might have to add an extra open statement to your F# file:</p>
<pre><code>open FSharp.Control.Tasks.ContextInsensitive</code></pre>
<p>This is normally not required though unless you have <code>do!</code> bindings in your code.</p>
<p>If you like the <code>task {}</code> computation expression then please go to the <a href="https://github.com/rspeele/TaskBuilder.fs">official GitHub repository</a> and hit the star button to show some support!</p>
<h2 id="tokenrouter-as-nuget-package">TokenRouter as NuGet package</h2>
<p><a href="https://github.com/giraffe-fsharp/Giraffe.TokenRouter">TokenRouter</a> is a popular alternative to Giraffe's default routing API aimed at providing maximum performance. Given the complexity of TokenRouter and the fact that Giraffe already ships a default routing API, it only made sense to decouple TokenRouter into its own repository.</p>
<p>This change will allow TokenRouter to become more independent and evolve at its own pace. TokenRouter can also benefit from having its own release cycle and be much bolder in introducing new features and breaking changes without affecting Giraffe.</p>
<p>If your project is using the TokenRouter API then you will need to add a new dependency to the <code>Giraffe.TokenRouter</code> NuGet package now. The rest remains unchanged.</p>
<h2 id="improved-documentation">Improved documentation</h2>
<p>Lastly, I have worked on improving the official <a href="https://github.com/giraffe-fsharp/Giraffe/blob/master/DOCUMENTATION.md">Giraffe documentation</a> by completely restructuring the document, providing a wealth of new information and focusing on popular topics by demand.</p>
<p>The documentation has also been broken out of the README, but remains as a Markdown file in the git repository for reasons which I hope to blog about in a separate blog post soon.</p>
<p>The complete list of changes and new features can be found in the <a href="https://github.com/giraffe-fsharp/Giraffe/releases/tag/v1.0.0">official release notes</a>.</p>
<p>Thank you for reading this blog and using Giraffe (and if you don't, then give it a try ;))!</p>
https://dusted.codes/announcing-giraffe-100
[email protected] (Dustin Moris Gorski)https://dusted.codes/announcing-giraffe-100#disqus_threadFri, 09 Feb 2018 00:00:00 +0000https://dusted.codes/announcing-giraffe-100giraffeaspnet-corefsharpdotnet-corewebExtending the Giraffe template with different view engine options<p>This is going to be a quick but hopefully very useful tutorial on how to create a more complex template for the <code>dotnet new</code> command line tool.</p>
<p>If you have ever built a <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe</a> web application then you've probably started off by installing the <a href="https://github.com/giraffe-fsharp/giraffe-template">giraffe-template</a> into your .NET CLI and then created a new boilerplate application by running the <code>dotnet new</code> command:</p>
<pre><code>dotnet new giraffe</code></pre>
<p>(<em>There is currently a <a href="https://github.com/dotnet/templating/issues/1373">bug with the .NET CLI</a> which forces you to specify the <code>-lang</code> parameter.</em>)</p>
<p>Previously this would have created a new Giraffe web application which would have had the <code>Giraffe.Razor</code> NuGet package included and a default project structure with MVC's famous Razor view engine.</p>
<p>As of today (after you've <a href="https://github.com/giraffe-fsharp/giraffe-template#updating-the-template">updated the giraffe-template</a> to the latest version) you can choose between three different options:</p>
<ul>
<li><code>giraffe</code> - Giraffe's default view engine</li>
<li><code>razor</code> - MVC Razor views</li>
<li><code>dotliquid</code> - DotLiquid template engine</li>
</ul>
<p>The <code>dotnet new giraffe</code> command now optionally supports a new parameter called <code>--ViewEngine</code> (or <code>-V</code> for short):</p>
<pre><code>dotnet new giraffe --ViewEngine razor</code></pre>
<p>If you are unsure which options are available you can always request help by running:</p>
<pre><code>dotnet new giraffe --help</code></pre>
<p>The output of the command line will display all available options and supported values as well:</p>
<pre><code>Giraffe Web App (F#)
Author: Dustin Moris Gorski, David Sinclair and contributors

Options:
  -V|--ViewEngine
      giraffe   - Default GiraffeViewEngine
      razor     - MVC Razor views
      dotliquid - DotLiquid template engine
      Default: giraffe</code></pre>
<p>If you do not specify a view engine then the <code>dotnet new giraffe</code> command will automatically create a new Giraffe web application with the default <code>GiraffeViewEngine</code> engine.</p>
<h2 id="creating-multiple-project-templates-as-part-of-one-dotnet-new-template">Creating multiple project templates as part of one dotnet new template</h2>
<p>There are many ways I could have programmed these options into the Giraffe template, but none of them are very obviously documented in one place. The <a href="#templating-engine-documentation">documentation of the dotnet templating engine</a><sup>1</sup> is fairly scattered across multiple resources and hard to understand if you have never worked with it before. As part of today's blog post I thought I'd quickly sum up the option which I believed was the cleanest and most straightforward one.</p>
<p>Each view engine has a significant impact on the entire project structure, such as NuGet package dependencies, folder structure, code organisation and files which need to be included. I didn't want to hack around with <code>#if</code> - <code>#else</code> switches and introduce complex add-, modify- or delete rules and consequently decided that the easiest and least error-prone way would be to create a <a href="https://github.com/giraffe-fsharp/giraffe-template/tree/master/src/content">completely independent template for each individual view engine</a> first:</p>
<pre><code>src
+-- giraffe-template.nuspec
|
+-- content
    |
    +-- .template.config
    |   +-- template.json
    |
    +-- DotLiquid.Template
    |   +-- Views
    |   +-- WebRoot
    |   +-- Program.fs
    |   +-- AppNamePlaceholder.fsproj
    |
    +-- Razor.Template
    |   +-- Views
    |   +-- WebRoot
    |   +-- Program.fs
    |   +-- AppNamePlaceholder.fsproj
    |
    +-- Giraffe.Template
        +-- WebRoot
        +-- Program.fs
        +-- AppNamePlaceholder.fsproj</code></pre>
<p>I split the content of the Giraffe template into three distinct sub templates:</p>
<ul>
<li><code>DotLiquid.Template</code></li>
<li><code>Razor.Template</code></li>
<li><code>Giraffe.Template</code></li>
</ul>
<p>As you can see from the diagram there's still only one <code>.template.config\template.json</code> file at the root of the <code>content</code> folder and only one <code>giraffe-template.nuspec</code> file.</p>
<p>The benefit of this structure is very simple. There is a clear separation of each template and each template is completely independent of the other templates, which makes maintenance very straightforward. I can work on each template as if they were small projects with full Intellisense and IDE support and being able to build, run and test each application.</p>
<p>The next step was to create the <code>--ViewEngine</code> parameter inside the <code>template.json</code> file:</p>
<pre><code>"symbols": {
    "ViewEngine": {
        "type": "parameter",
        "dataType": "choice",
        "defaultValue": "giraffe",
        "choices": [
            {
                "choice": "giraffe",
                "description": "Default GiraffeViewEngine"
            },
            {
                "choice": "razor",
                "description": "MVC Razor views"
            },
            {
                "choice": "dotliquid",
                "description": "DotLiquid template engine"
            }
        ]
    }
}</code></pre>
<p>All I had to do was to define a new symbol called <code>ViewEngine</code> of type <code>parameter</code> and data type <code>choice</code>. Then I specified all supported options via the <code>choice</code> array and set the <code>giraffe</code> option as the default value.</p>
<p>Now that the <code>ViewEngine</code> parameter has been created I was able to use it from elsewhere in the specification. The <code>sources</code> section of a <code>template.json</code> file denotes what source code should be installed during the <code>dotnet new</code> command. In Giraffe's case this was very easy. If the <code>giraffe</code> option has been selected, then the source code shall come from the <code>Giraffe.Template</code> folder and the destination/target folder should be the root folder of where the <code>dotnet new</code> command is being executed from. The same logic applies to all the other options as well:</p>
<pre><code>"sources": [
    {
        "source": "./Giraffe.Template/",
        "target": "./",
        "condition": "(ViewEngine == \"giraffe\")"
    },
    {
        "source": "./Razor.Template/",
        "target": "./",
        "condition": "(ViewEngine == \"razor\")"
    },
    {
        "source": "./DotLiquid.Template/",
        "target": "./",
        "condition": "(ViewEngine == \"dotliquid\")"
    }
]</code></pre>
<p>With this in place I was able to create a new <code>giraffe-template</code> NuGet package and deploy everything to the official NuGet server again.</p>
<p>This is literally how easy it is to support distinct project templates from a single dotnet new template.</p>
<h2 id="different-templates-with-same-groupidentifier">Different templates with same groupIdentifier</h2>
<p>Another very similar, but in my opinion less elegant way would have been to create three different <code>template.json</code> files and use the <code>groupIdentifier</code> setting in connection with the <code>tags</code> array to support three different templates as part of one. Unfortunately this option doesn't seem to be very well supported by the .NET CLI. Even though it works, the .NET CLI doesn't display any useful error message when a user makes a mistake or types <code>dotnet new giraffe --help</code> into the terminal. It also doesn't allow a default value to be set, which made it less attractive overall. I would only recommend going with this option if you need to <a href="https://github.com/dotnet/dotnet-template-samples/tree/master/06-console-csharp-fsharp">provide different templates based on the selected .NET language</a>, in which case it works really well again.</p>
<p>If you have any further questions or you would like to know more about the details of the Giraffe template then you can visit the <a href="https://github.com/giraffe-fsharp/giraffe-template">giraffe-template GitHub repository</a> for further reference.</p>
<p>This blog post is part of the <a href="https://sergeytihon.com/2017/10/22/f-advent-calendar-in-english-2017/">F# Advent Calendar in English 2017</a> blog series which has been kindly organised by <a href="https://twitter.com/sergey_tihon">Sergey Tihon</a>. Hope you all enjoyed this short tutorial and wish you a very Merry Christmas!</p>
<h4 id="templating-engine-documentation">1) Templating engine documentation</h4>
<p>Various documentation for the <code>dotnet new</code> templating engine can be found across the following resources:</p>
<ul>
<li><a href="https://docs.microsoft.com/en-us/dotnet/core/tools/custom-templates">Custom templates for dotnet new</a></li>
<li><a href="https://github.com/dotnet/dotnet-template-samples">dotnet-template-samples</a> (GitHub repo with a lot of useful examples)</li>
<li><a href="https://blogs.msdn.microsoft.com/dotnet/2017/04/02/how-to-create-your-own-templates-for-dotnet-new/">How to create your own templates for dotnet new</a> (Great blog post)</li>
<li><a href="https://github.com/dotnet/templating/wiki/Available-templates-for-dotnet-new">Available templates for dotnet new</a> (Community built templates)</li>
<li><a href="https://github.com/dotnet/templating">dotnet templating engine</a> (Official dotnet templating engine GitHub repository)</li>
</ul>
https://dusted.codes/extending-the-giraffe-template-with-different-view-engine-options
[email protected] (Dustin Moris Gorski)https://dusted.codes/extending-the-giraffe-template-with-different-view-engine-options#disqus_threadThu, 21 Dec 2017 00:00:00 +0000https://dusted.codes/extending-the-giraffe-template-with-different-view-engine-optionsgiraffetemplateaspnet-corefsharpEvolving my open source project from a one man repository to an OSS organisation<p>Over the last couple of months I have been pretty absent from my everyday life, such as keeping up with Twitter, reading and responding to emails, writing blog posts, working on my business and maintaining the Giraffe web framework. It was not exactly what I had planned for, but my wife and I decided to take a small break and <a href="https://www.instagram.com/dustedtravels/">wander through South America</a> for a bit before coming back for Christmas again. It was a truly amazing experience, but as much as travelling was fun, it was not great for <a href="https://github.com/giraffe-fsharp/Giraffe">my open source project Giraffe</a> which was just about to pick up momentum after the huge exposure via <a href="https://www.hanselman.com/blog/AFunctionalWebWithASPNETCoreAndFsGiraffe.aspx">Scott Hanselman</a>'s and the <a href="https://blogs.msdn.microsoft.com/dotnet/2017/09/26/build-a-web-service-with-f-and-net-core-2-0/">Microsoft F# team</a>'s blog posts (huge thanks btw, feeling super stoked and grateful for it!).</p>
<p>Initially I did not plan to slow down on Giraffe, but it turns out that trying to get a decent internet connection and a few hours of quality work in between adventure seeking, 16 hour bus journeys, one flight every 3 days on average, hostels, hotels, hurricanes and multi day treks through deserts and mountains is not that easy after all :) (who would have thought, ey?).</p>
<p>As a result issues and pull requests started to pile up quicker than I was able to deal with them and things got a bit stale. Luckily there were a few OSS contributors who did a fantastic job in picking up issues, replying to questions, fixing bugs and sending PRs for new feature requests when I was simply not able to do so - which meant that luckily things were moving at least at some capacity while I was away (a very special thanks to <a href="https://twitter.com/gerardtoconnor">Gerard</a> who has helped with the entire project beyond imagination and has been a huge contributor to Giraffe altogether; I think it's fair to say that Giraffe wouldn't be the same without his endless efforts).</p>
<p>However, even though the community stepped up during my absence I was still the only person who was able to approve PRs and get urgent bug fixes merged before pushing a new release to NuGet, and that understandably caused frustrations not only for users, but also for the very people who were kind enough to help maintaining it.</p>
<p>As the owner of the project and someone who really believes in the future of Giraffe it became very apparent that this was not acceptable going forward and things had to change if I really wanted Giraffe to further grow and succeed in the wider .NET eco system. So when <a href="https://github.com/giraffe-fsharp/Giraffe/issues/152">the community asked me to add additional maintainers</a> to the project I did not even hesitate for a second and decided that it was time to evolve the project from a one man repository to a proper OSS organisation which would allow more people having a bigger impact and control of the future of Giraffe.</p>
<h2 id="an-oss-project-is-only-as-good-as-its-community">An OSS project is only as good as its community</h2>
<p>I created <a href="https://github.com/giraffe-fsharp"><code>giraffe-fsharp</code></a> on GitHub and moved the <a href="https://github.com/giraffe-fsharp/Giraffe">Giraffe repository</a> from my personal account to its new home. Furthermore I have initially added three more contributors to the organisation who now all have the required permissions to maintain Giraffe without my every day involvement. This doesn't mean that I don't want to work on Giraffe any longer or that I want to work less on it, but it means that I just wanted to <strong>remove myself as the single point of failure</strong>, which is very important for a number of reasons.</p>
<p>First there is the obvious point that if I were to disappear out of the blue for whatever reason it would have a huge impact on anyone who has trusted the project and its longevity. If I were a large company, where changing tech can be very expensive, then I personally would not be able to justify the usage of a project which could literally drop dead over night. It is a real concern which I understand and therefore try to address with the transition to a proper OSS organisation. It's a first step to mitigate this very real risk and a commitment from my side to do whatever I can to keep Giraffe well and alive.</p>
<p>Secondly I simply cannot imagine that I as a single person could possibly grow the project better or faster than a larger collective of highly talented developers who are motivated to help me. I would be stupid to refuse this offer and it's in my personal interest to help them help me.</p>
<p>Thirdly and most importantly I don't want to lose the fun and joy of working on Giraffe. I strongly believe that <a href="https://dot.net">.NET Core</a> is an excellent platform and a future proof technology. I also believe that functional programming is yet to have its big moment in the .NET world, and with <a href="http://fsharp.org/">F# being .NET's only functional-first language</a> I think the potential for Giraffe is probably bigger than I can imagine today. I think it's only a matter of time before its usage outgrows my own physical capability of maintaining it, and the last thing I want is to burn out, suffer OSS fatigue or completely lose motivation for it. The only way to avoid this from happening is by accepting the help of others and delegating more responsibilities to other people who share the same vision and have an interest in the project's success.</p>
<p>Therefore it makes me proud to have met these people and to be able to expand Giraffe into a more structured organisation, which I believe is the right way of addressing all these issues effectively.</p>
<h2 id="more-people-need-more-structure">More people need more structure</h2>
<p>Now that more people are using Giraffe and more people are helping to maintain it, I think the next important step is to establish a workflow which provides, at a minimum, the same quality at which I would have maintained the project myself.</p>
<p>As such I have set up the following process to maximise quality, reduce risk and give organisations of all sizes the confidence to trust in Giraffe:</p>
<ul>
<li>All development work has to happen on individual branches (enforced via GitHub)</li>
<li>The <code>develop</code> branch <strong>must be at a releasable state</strong> at all times</li>
<li>Only a finished bug fix or feature enhancement can be pushed via a pull request into <code>develop</code> (enforced via GitHub)</li>
<li>Each PR against <code>develop</code> must be formally reviewed by at least one other core maintainer (enforced via GitHub)</li>
<li>Once enough PRs have flowed into <code>develop</code> and the team wants to schedule a new release, a pull request can be made against <code>master</code></li>
<li>Only an owner (currently me) has permissions to approve a PR on <code>master</code> and hence triggering a new automated release to NuGet (enforced via GitHub)</li>
</ul>
<p>I know this might seem like a lot of process for a fairly new project, but I think it is very important to establish a good working structure early on to keep the quality high no matter how big the project grows in the future. This process guarantees that at least three separate pairs of eyes (two core maintainers and one owner) will review every single line of code before it makes it into an official release. At the same time this process should allow frictionless collaboration between core maintainers up until the point of an official release without the necessity of my involvement. Things like a vacation, illness or other forms of temporary unavailability should not be an issue any longer.</p>
<h2 id="never-stop-growing">Never stop growing</h2>
<p>All of this is only a first step of what I hope will be a very long future for Giraffe. Nothing is set in stone and if we discover better or more efficient ways of working together then things might change in the future and I will most certainly blog about it in a follow up post again.</p>
<p>One other thing which is perhaps worth noting is that I also plan to join the <a href="https://www.dotnetfoundation.org/">.NET Foundation</a> or the <a href="http://foundation.fsharp.org/">F# Foundation</a> in the near future. I'll talk more about this in a follow up blog post as well!</p>
<h2 id="can-i-join-giraffe-fsharp">Can I join giraffe-fsharp?</h2>
<p>Short answer is yes, but you have to have contributed to the project as an outside collaborator first and shown an interest and familiarity with the project before getting an invite to become a member of the organisation. I certainly don't see an upper limit of motivated people who want to help and any form of contribution is more than welcome. If you like Giraffe and want to participate in it then feel free to go and <a href="https://github.com/giraffe-fsharp/Giraffe/fork">check out the code</a>, <a href="https://github.com/giraffe-fsharp/Giraffe/issues/new">discuss your ideas via GitHub issues</a> and send PRs for approved feature requests which will definitely see your code (after reviews) merged into the repository.</p>
<p>Please also be aware that any member of <a href="https://github.com/orgs/giraffe-fsharp/people">giraffe-fsharp</a> must have two factor authentication enabled on their account. I am big on security and I don't see why an OSS project should be treated any less seriously than a project run by a private company.</p>
<p>I hope this was a helpful insight into where I see the project going, what has happened lately and which steps I am taking to get Giraffe out of infancy and make it a serious contender among all other web frameworks in the wild (and not just .NET ;))!</p>
<p>Thanks to everyone who has helped, blogged, used or simply talked about Giraffe! There's no better feeling than knowing that people use and like something you've built, and I know it's time that Giraffe becomes not only mine but a community driven project which we can grow together.</p>
<p>P.S.: If you ever have a chance to pack your stuff and travel the world then I'd highly recommend you do so! It's one of the most amazing things one can do in life and probably for many people a once-in-a-lifetime experience. Go and explore different places, meet people, learn about new cultures and experience life from a different perspective. It's eye opening and very educational on so many levels! <a href="https://www.instagram.com/dustedtravels/">Follow me on Instagram</a> for inspiration ;)</p>
https://dusted.codes/evolving-my-open-source-project-from-a-one-man-repository-to-a-proper-organisation
[email protected] (Dustin Moris Gorski)https://dusted.codes/evolving-my-open-source-project-from-a-one-man-repository-to-a-proper-organisation#disqus_threadWed, 06 Dec 2017 00:00:00 +0000https://dusted.codes/evolving-my-open-source-project-from-a-one-man-repository-to-a-proper-organisationgiraffeossaspnet-corefsharpGiraffe goes Beta<p><img src="https://raw.githubusercontent.com/dustinmoris/Giraffe/develop/giraffe.png" alt="Giraffe Logo"></p>
<p>After 6 months of working on <a href="https://github.com/dustinmoris/Giraffe">Giraffe</a>, lots of improvements, <a href="https://github.com/dustinmoris/Giraffe/pulls?q=is%3Apr+is%3Aclosed">great community contributions</a> and running several <a href="https://buildstats.info/">private</a> as well as commercial web applications in production I am really happy to announce that <a href="https://github.com/dustinmoris/Giraffe/releases/tag/v0.1.0-beta-001">Giraffe is finally going Beta</a>. This might not sound like a big milestone to you, but given that I was very hesitant on prematurely labelling the project anything beyond Alpha and the many breaking changes which we had in the past it certainly feels like a big achievement to me! After plenty of testing, tweaking, re-architecting and some real world exposure I think we finally reached a point where I can confidently say that the external facing API will remain fairly stable as of now - and seems to work really well too ;).</p>
<h2 id="what-has-changed">What has changed</h2>
<h3 id="xmlviewengine">XmlViewEngine</h3>
<p>Since the last blog post I have made several improvements to the previously known <a href="https://dusted.codes/functional-aspnet-core-part-2-hello-world-from-giraffe#functional-html-view-engine">functional HTML engine</a>, which has been renamed to a more generic <a href="https://github.com/dustinmoris/Giraffe#renderhtml">XmlViewEngine</a> now. The new changes allow the <code>XmlViewEngine</code> to be used beyond simple HTML views, for things like generating dynamic XML such as <a href="https://github.com/dustinmoris/CI-BuildStats/blob/master/src/BuildStats/Views.fs">SVG images</a> and more. Personally I think the <a href="https://github.com/dustinmoris/Giraffe/blob/v0.1.0-beta-001/src/Giraffe/XmlViewEngine.fs">XmlViewEngine</a> is the most feature rich and powerful view engine you can find in any .NET framework today and I will certainly dedicate a whole separate blog post to that topic alone in the near future.</p>
<h3 id="continuations-instead-of-bindings">Continuations instead of bindings</h3>
<p>One particular big change which we recently had was the move from <a href="https://medium.com/@gerardtoconnor/carry-on-continuation-over-binding-pipelines-for-functional-web-58bd7e6ea009">binding HttpHandler functions</a> to <a href="https://github.com/dustinmoris/Giraffe/issues/69">chaining HttpFunc continuations</a> instead. The <code>HttpHandler</code> function has changed its signature from a <code>HttpContext -> Async<HttpContext option></code> to a <code>HttpFunc -> HttpFunc</code>, whereas a <code>HttpFunc</code> is defined as <code>HttpContext -> Task<HttpContext option></code>.</p>
<p>The main difference is that a <code>HttpHandler</code> function doesn't return <code>Some HttpContext</code> any longer (unless you want to immediately short circuit the chain) and is responsible for invoking the <code>next</code> continuation function from within itself. If you think this sounds very similar to ASP.NET Core's middleware then you are not mistaken. It is the same concept which brings several benefits such as better control flow and improved performance.</p>
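<p>To make the new model concrete, a handler in the continuation style looks roughly like this (a minimal sketch: the type aliases mirror the signatures above, and the handler names and header value are purely illustrative):</p>
<pre><code>type HttpFunc    = HttpContext -> Task<HttpContext option>
type HttpHandler = HttpFunc -> HttpFunc

// Does some work, then hands control to the next handler in the chain
let setCustomHeader : HttpHandler =
    fun (next : HttpFunc) (ctx : HttpContext) ->
        ctx.Response.Headers.["X-Powered-By"] <- StringValues "Giraffe"
        next ctx

// Short circuits the chain by returning Some ctx without invoking next
let shortCircuit : HttpHandler =
    fun (_ : HttpFunc) (ctx : HttpContext) ->
        Task.FromResult (Some ctx)</code></pre>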
<p>Even though this posed a fundamental change in architecture, we didn't have to compromise on how easy it is to compose a larger web application in Giraffe:</p>
<pre><code>let webApp =
choose [
GET >=>
choose [
route "/" >=> renderHtml indexPage
route "/signup" >=> renderHtml signUpPage
route "/login" >=> renderHtml loginPage
]
POST >=>
choose [
route "/signup" >=> signUpHandler
route "/login" >=> loginHandler
]
setStatusCode 404 >=> text "Not Found" ]</code></pre>
<p>All credit goes to <a href="https://twitter.com/gerardtoconnor">Gerard</a> who worked on this big change entirely from concept to implementation on his own.</p>
<h3 id="tasks">Tasks</h3>
<p>Another architectural change was that Giraffe works natively with <code>Task</code> and <code>Task<'T></code> objects now. Previously you would have had to convert from a C# <code>Task<'T></code> to an F# <code>Async<'T></code> workflow and then back again to a <code>Task<'T></code> before returning the flow to ASP.NET Core, but not any longer.</p>
<p>If you paid close attention in the previous example then you might have noticed that a <code>HttpFunc</code> is defined as <code>HttpContext -> Task<HttpContext option></code>. Apart from the additional convenience of not having to convert between tasks and asyncs any more, this change netted us a double-digit percentage performance improvement overall.</p>
<p>From now on you can reference the <code>Giraffe.Tasks</code> module and make use of the new <code>task {}</code> workflow which will allow you to write asynchronous code just as easily as it was with F# <code>async {}</code> before:</p>
<pre><code>open Giraffe.Tasks
open Giraffe.HttpHandlers
let personHandler =
fun (next : HttpFunc) (ctx : HttpContext) ->
task {
let! person = ctx.BindModel<Person>()
return! json person next ctx
}</code></pre>
<p>The original code for Giraffe's task implementation has been taken from <a href="https://github.com/rspeele/TaskBuilder.fs">Robert Peele's TaskBuilder.fs</a> and minimally modified to better fit Giraffe's use case for a highly scalable ASP.NET Core web application.</p>
<p>Again, all credit goes to <a href="https://github.com/gerardtoconnor">Gerard</a> and his endless efforts in improving Giraffe's overall architecture and performance.</p>
<h2 id="what-to-expect-next">What to expect next?</h2>
<h3 id="stability">Stability</h3>
<p>First of all, as Giraffe has officially entered the Beta phase you can expect a much more stable API with minimal to no breaking changes going forward.</p>
<h3 id="more-performance">More performance</h3>
<p>There are still many ways of improving the internals of <code>Giraffe.Tasks</code> which we think will yield further performance improvements. Additionally Gerard plans to implement an alternative trie-based routing API which promises another performance gain for web applications with large routing layers.</p>
<h3 id="more-sample-applications-and-templates">More sample applications and templates</h3>
<p>Another area I would like to focus on more in the future is providing more <a href="https://github.com/dustinmoris/Giraffe#demo-apps">sample applications</a> and templates which will help people get up and running with Giraffe in as little time as possible.</p>
<p>Also I would like to blog more about my own usage of Giraffe and showcase a few production applications with a closer look at some stats, deployments and general tips & tricks.</p>
<p>I hope you like the latest changes and are still as excited about Giraffe as I am, or are even considering building your next F# web application with the fastest functional .NET web framework you'll find anywhere today ;).</p>
<p>If you already use Giraffe for a commercial or hobby project please let me know in the comments below and I can feature you <a href="https://github.com/dustinmoris/Giraffe#live-apps">in the official GitHub repository</a> if you like.</p>
<p>Thanks for reading and stay tuned until next time!</p>
https://dusted.codes/giraffe-goes-beta
[email protected] (Dustin Moris Gorski)https://dusted.codes/giraffe-goes-beta#disqus_threadSat, 12 Aug 2017 00:00:00 +0000https://dusted.codes/giraffe-goes-betagiraffeaspnet-corefsharpFunctional ASP.NET Core part 2 - Hello world from Giraffe<p>This is a follow up blog post on the <a href="https://dusted.codes/functional-aspnet-core">functional ASP.NET Core</a> article from about two months ago. First of all I'd like to say that this has been the longest period I haven't published anything new to my blog since I started blogging in early 2015. The reason is because I have been pretty busy with a private project which I hope to write more about in the future, but more importantly I have been extremely busy organising my own wedding which took place at the end of last month :). Yes!, I've been extremely lucky to have found the love of my life and best friend and after being engaged for almost a year and a half we've finally tied the knot two weeks ago. Normally I don't blog about my private life here, but since this has been such a significant moment in my life I thought I should mention a few words here as well and let everyone know that the quiet time has been for a good reason and will not last for much longer now.</p>
<p>While this has primarily occupied the majority of my time I was also quite happy to see my <a href="https://github.com/dustinmoris/Giraffe">functional ASP.NET Core project</a> receive recognition from the community and some really great support from other developers who've been helping me in adding lots of new functionality since then. In this first blog post after my small break I thought I'd take the opportunity and showcase some of the work we've done since the initial release and explain some of the design decisions behind some features.</p>
<p>But first I shall say that the framework has been renamed to <strong>Giraffe</strong>.</p>
<h2 id="aspnet-core-lambda-is-now-giraffe">ASP.NET Core Lambda is now Giraffe</h2>
<p>ASP.NET Core Lambda was a good name in terms of being very descriptive of what it stood for, but at the same time there were plenty of other issues which led me and other people to believe that a different name would be a better fit.</p>
<p>Initially I named the project ASP.NET Core Lambda, because, at its core it was a functional framework built on top of (and tightly integrated with) ASP.NET Core, so I put one and one together and went with that name.</p>
<p>However, it quickly became apparent that "ASP.NET Core Lambda" wasn't a great name for the following reasons:</p>
<ul>
<li>ASP.NET Core Lambda is a bit of a tongue twister.</li>
<li>"ASP", ".NET", "Core" and "Lambda" are extremely overloaded words with more than one meaning. If the project turns out to be successful then any type of search or information lookup (e.g. Stack Overflow) would be an absolute nightmare with this name.</li>
<li>Specifically Lambda is associated with <a href="https://aws.amazon.com/lambda/">Amazon's serverless cloud offering</a> which would add even more to the confusion.</li>
<li>Finally the name is not very tasteful. Let's be honest, the mix of capitalized and pascal cased words, the additional whitespace and the dot in the word makes the name look very busy and simply doesn't resemble an elegant or tasteful product.</li>
</ul>
<p>As a result I decided to rename the project to something different and <a href="https://github.com/dustinmoris/Giraffe/issues/15">put the name up for a vote</a>, which ultimately led to <strong>Giraffe</strong>. Looking back I think it was a great choice and I would like to thank everyone who helped me in picking the new name, as well as suggesting other great names which made the decision not easy at all.</p>
<p>I think Giraffe is a much better name now, because it is short, it is very clear and distinctive and there is no ambiguity around the spelling or pronunciation. There is also no other product called Giraffe in the .NET space and not really anything else which it could be mistaken with. The name Giraffe also hasn't been taken as a NuGet package which made things really easy. On top of that Giraffe gave lots of creative room for creating a beautiful logo for which I used <a href="https://99designs.co.uk/">99designs.co.uk</a>. I set up a design challenge there and the winner impressed with this clever design:</p>
<p><img src="https://raw.githubusercontent.com/dustinmoris/Giraffe/develop/giraffe.png" alt="Giraffe Logo"></p>
<p>Now I can only hope that the product will live up to this beautiful logo and the new name, which brings me to the actual topic of this blog post.</p>
<h2 id="overview-of-new-features">Overview of new features</h2>
<p>There has been quite a few changes and new features since my last blog post and there's a few of which I am very excited about:</p>
<ul>
<li><a href="#dotnet-new-template">Dotnet new template</a></li>
<li><a href="#nested-routing">Nested routing</a></li>
<li><a href="#razor-views">Razor views</a></li>
<li><a href="#functional-html-view-engine">Functional HTML view engine</a></li>
<li><a href="#content-negotiation">Content negotiation</a></li>
<li><a href="#model-binding">Model binding</a></li>
</ul>
<h2 id="dotnet-new-template">Dotnet new template</h2>
<p>One really cool thing you can do with the new .NET tooling is to create <a href="https://github.com/dotnet/templating/wiki/%22Runnable-Project%22-Templates">project templates</a> which can be installed via NuGet packages.</p>
<p>Thanks to <a href="https://github.com/dsincl12">David Sinclair</a> you can install a Giraffe template by running the following command:</p>
<pre><code>dotnet new -i giraffe-template::*</code></pre>
<p>This will install the <a href="https://www.nuget.org/packages/giraffe-template">giraffe-template</a> NuGet package to your local templates folder.</p>
<p>Afterwards you can start using <code>Giraffe</code> as a new project type when running the <code>dotnet new</code> command:</p>
<pre><code>dotnet new giraffe</code></pre>
<p>This feature makes it significantly easier to get started with Giraffe now. The quickest way to get a working Giraffe application up and running is by executing these three commands:</p>
<ol>
<li><code>dotnet new giraffe</code></li>
<li><code>dotnet restore</code></li>
<li><code>dotnet run</code></li>
</ol>
<p>Everything should compile successfully and you should see a Hello-World Giraffe app running behind <a href="http://localhost:5000">http://localhost:5000</a>.</p>
<h2 id="nested-routing">Nested routing</h2>
<p>Another cool feature which has been added by <a href="https://twitter.com/stuartblang">Stuart Lang</a> is nested routing.</p>
<p>The new <code>subRoute</code> handler allows users to create nested routes which can be very useful when logically grouping certain paths.</p>
<p>An example would be when an API changes its authentication scheme and you want to group together routes which implement the same type of authentication. With the help of nested routing you can enable certain features like a new authentication scheme by declaring it only once per group:</p>
<pre><code>let app =
subRoute "/api"
(choose [
subRoute "/v1"
(oldAuthentication >=> choose [
route "/foo" >=> text "Foo 1"
route "/bar" >=> text "Bar 1" ])
subRoute "/v2"
(newAuthentication >=> choose [
route "/foo" >=> text "Foo 2"
route "/bar" >=> text "Bar 2" ]) ])</code></pre>
<p>In this example a request to <code>http://localhost:5000/api/v1/foo</code> will use <code>oldAuthentication</code> and a request to <code>http://localhost:5000/api/v2/foo</code> will end up using <code>newAuthentication</code>.</p>
<p>There is also a <a href="https://github.com/dustinmoris/Giraffe#subrouteci"><code>subRouteCi</code></a> handler which is the case insensitive equivalent of <code>subRoute</code>.</p>
<h2 id="razor-views">Razor views</h2>
<p>Next is the support of Razor views in Giraffe. <a href="https://github.com/nicolocodev">Nicolás Herrera</a> developed the first version of Razor views by utilising the <a href="https://github.com/toddams/RazorLight">RazorLight</a> engine. Shortly after that I realised that by referencing the <code>Microsoft.AspNetCore.Mvc</code> NuGet package I can easily re-use the original Razor engine in order to offer a more complete and original Razor experience in Giraffe as well. While under the hood the engine changed from <a href="https://www.nuget.org/packages/RazorLight/">RazorLight</a> to <a href="https://github.com/dotnet/aspnetcore">ASP.NET Core MVC</a> the functionality remained more or less the same as implemented by Nicolás in the first place.</p>
<p>In order to enable Razor views in Giraffe you have to register its dependencies first:</p>
<pre><code>type Startup() =
member __.ConfigureServices (svc : IServiceCollection,
env : IHostingEnvironment) =
Path.Combine(env.ContentRootPath, "views")
|> svc.AddRazorEngine
|> ignore</code></pre>
<p>After that you can use the <code>razorView</code> handler to return a Razor page from Giraffe:</p>
<pre><code>let model = { WelcomeText = "Hello World" }
let app =
choose [
route "/" >=> razorView "text/html" "Index" model
]</code></pre>
<p>The above example assumes that there is a <code>/views</code> folder in the project which contains an <code>Index.cshtml</code> file.</p>
<p>One of the parameters passed into the <code>razorView</code> handler is the mime type which should be returned by the handler. In this example it is set to <code>text/html</code>, but if the Razor page would represent something different (like an SVG image template for example) then with the <code>razorView</code> handler you can also set a different <code>Content-Type</code> as well.</p>
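<p>For example, if a Razor template rendered an SVG build badge, the handler could return it with a matching content type (a hypothetical sketch: the <code>Badge</code> view name and the model record are made up for illustration):</p>
<pre><code>let badgeModel = { Label = "build"; Status = "passing" }
let app =
    choose [
        route "/badge.svg" >=> razorView "image/svg+xml" "Badge" badgeModel
    ]</code></pre>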
<p>In most cases <code>text/html</code> is probably the desired <code>Content-Type</code> of your response and therefore there is a second handler called <code>razorHtmlView</code> which does exactly that:</p>
<pre><code>let model = { WelcomeText = "Hello World" }
let app =
choose [
route "/" >=> razorHtmlView "Index" model
]</code></pre>
<p>A more involved example with a layout page and a partial view can be found in the <a href="https://github.com/giraffe-fsharp/samples/blob/master/demo-apps/SampleApp/SampleApp/HtmlViews.fs">SampleApp</a> project in the <a href="https://github.com/giraffe-fsharp/samples">Giraffe samples repository</a>.</p>
<h3 id="using-dotnet-watcher-to-reload-the-project-on-razor-page-changes">Using DotNet Watcher to reload the project on Razor page changes</h3>
<p>If you come from an ASP.NET Core MVC background then you might be used to having Razor pages automatically re-compile on every page change during development, without having to manually restart an application. In Giraffe you can achieve the same experience by adding the <a href="https://www.nuget.org/packages/Microsoft.DotNet.Watcher.Tools">DotNet.Watcher.Tools</a> to your <code>.fsproj</code> and put a watch on all <code>.cshtml</code> files:</p>
<pre><code><ItemGroup>
<DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="1.0.0" />
</ItemGroup>
<ItemGroup>
<Watch Include="**\*.cshtml" Exclude="bin\**\*" />
</ItemGroup></code></pre>
<p>By adding the watcher to your project file you can start making changes to any <code>.cshtml</code> file in your project and immediately see the changes take effect during a running Giraffe web application (without having to manually restart the app).</p>
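<p>With the watcher tool referenced, the application is then started through the watcher from the project directory instead of a plain <code>dotnet run</code>:</p>
<pre><code>dotnet watch run</code></pre>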
<h3 id="dependency-on-microsoftaspnetcoremvc">Dependency on Microsoft.AspNetCore.Mvc</h3>
<p>One other thing which might sound a little bit strange is the dependency on the <code>Microsoft.AspNetCore.Mvc</code> NuGet package. It is essentially the full MVC library being referenced by Giraffe now and it has sparked a bit of confusion or disappointment amongst some users. Personally I think it really doesn't matter and I wanted to explain my thinking behind this design decision.</p>
<p>In order to get Razor views working in Giraffe there were three options available:</p>
<ul>
<li>Implement Giraffe's own Razor engine</li>
<li>Use someone else's custom Razor engine</li>
<li>Use the original Razor engine</li>
</ul>
<p>I certainly did not have an appetite for the first option, which is hopefully understandable, and therefore was left with the choice between the latter two.</p>
<p>At the time of writing there was only one .NET Core compatible custom Razor engine available, which is <a href="https://github.com/toddams/RazorLight">RazorLight</a>. From what I know RazorLight is a very nice library and definitely highly recommended, but not necessarily the right choice for Giraffe.</p>
<p>When you ignore the name of the NuGet package for a second then there is really not much difference between referencing <a href="https://www.nuget.org/packages/RazorLight/">RazorLight</a> or <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc">Microsoft.AspNetCore.Mvc</a> in Giraffe. Both require a new NuGet dependency in the project and both are libraries which expose some functionality to render Razor views. The ASP.NET Core MVC package might be slightly bigger and offer more functionality than what Giraffe actually needs, but that doesn't really matter, because Giraffe ignores the rest and only uses what is needed for the Razor support. I think it is pretty normal that any given library often implements far more functionality than what a single project actually makes use of.</p>
<p>In the case of Giraffe I was faced with a trade-off between a dependency which uses slightly more KBs disk space, but in return offers a complete and original Razor experience vs. a slightly smaller library which offers a custom implementation of Razor pages.</p>
<p>As far as I see this issue there is absolutely no disadvantage in Giraffe using the MVC NuGet package in order to get the original Razor experience in comparison to using any other Razor library. I also believe that this option is more in line with Giraffe's goal to be tightly integrated with the original ASP.NET Core experience. Users benefit by getting the original, well documented and understood Razor features which makes portability of existing Razor views also significantly easier.</p>
<p>For me it's really about making smart choices and I truly believe that the strength of Giraffe is by <strong>standing on the shoulders of giants</strong>, which is ASP.NET Core MVC in this case.</p>
<h2 id="functional-html-view-engine">Functional HTML view engine</h2>
<p>Speaking of the Razor view engine, another really cool feature which has been added to Giraffe is a new programmatic way of creating views. <a href="https://github.com/nojaf">Florian Verdonck</a> helped me a lot with Giraffe over the last few weeks and one of his contributions was to port <a href="https://github.com/SuaveIO/suave/blob/master/src/Suave.Experimental/Html.fs">Suave's experimental Html engine</a> to Giraffe.</p>
<p>I think the best way to describe the new <code>Giraffe.HtmlEngine</code> is by showing some code:</p>
<pre><code>open Giraffe.HtmlEngine

let model = { Name = "John Doe" }

let layout (content: HtmlNode list) =
    html [] [
        head [] [
            title [] (encodedText "Giraffe")
        ]
        body [] content
    ]

let partial () =
    p [] (encodedText "Some partial text.")

let personView model =
    [
        div [] [
            h3 [] (sprintf "Hello, %s" model.Name |> encodedText)
        ]
        div [] [partial()]
    ] |> layout

let app =
    choose [
        route "/" >=> (personView model |> renderHtml)
    ]</code></pre>
<p>This example demonstrates how easy it is to create complex views with features like layout pages, partial views and model binding. There's really nothing that can't be done with this programmatic way of defining view-model pages, and if you think something is missing then it is really easy to extend as well.</p>
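To make the underlying idea concrete, here is a heavily simplified, self-contained sketch of how such a programmatic view engine can work. The types and the <code>render</code> function below are my own invention for illustration, not Giraffe's actual implementation:

```fsharp
open System.Net

// A minimal, hypothetical HTML node type and renderer, illustrating the idea
// behind an engine like Giraffe.HtmlEngine (not its actual implementation).
type HtmlNode =
    | Element of string * HtmlNode list   // tag name and child nodes
    | Text    of string                   // already-encoded text content

let encodedText (s : string) = Text (WebUtility.HtmlEncode s)

let rec render (node : HtmlNode) =
    match node with
    | Text s -> s
    | Element (tag, children) ->
        let inner = children |> List.map render |> String.concat ""
        sprintf "<%s>%s</%s>" tag inner tag

// Views are just functions returning nodes, so they compose naturally
let div children = Element ("div", children)
let p   children = Element ("p", children)

let view = div [ p [ encodedText "Hello <world>" ] ]
let rendered = render view   // "<div><p>Hello &lt;world&gt;</p></div>"
```

Because every view is just an ordinary function returning data, layouts, partials and model binding fall out of plain function composition.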
<p>Kudos to the Suave guys for coming up with this brilliant view engine and thanks to Florian for suggesting this feature as well as liaising with the Suave guys and porting the code to Giraffe!</p>
<h2 id="content-negotiation">Content negotiation</h2>
<p>The next feature which I'd like to show off is something which I did myself for a change. When exposing a web service endpoint you often want to respect the client's requested response type which is typically communicated via the HTTP Accept header.</p>
<p>For example a client might send the following information with a HTTP Accept header:</p>
<pre><code>Accept: application/xml,application/json,text/html;q=0.8,text/plain;q=0.9,*/*;q=0.5</code></pre>
<p>In this example the client says the following:</p>
<ul>
<li>Please give me either an XML or a JSON response; both are equally my preferred choice</li>
<li>If you don't speak XML or JSON, then I'd like plain text as the next best option</li>
<li>If you don't speak plain text either, then please just send the response in HTML</li>
<li>If that doesn't suit you either, then just give me whatever you have</li>
</ul>
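The ranking above follows the header's quality values (<code>q</code>, defaulting to 1.0 when absent). As a rough, hypothetical sketch of how a server could compute that ranking (this is not Giraffe's actual code):

```fsharp
// Hypothetical sketch: rank the mime types of an Accept header by their
// q-values (default 1.0). Not Giraffe's actual implementation.
let parseAccept (header : string) =
    header.Split(',')
    |> Array.map (fun part ->
        let pieces = part.Trim().Split(';')
        let mime = pieces.[0].Trim()
        let q =
            pieces
            |> Array.tryPick (fun piece ->
                let piece = piece.Trim()
                if piece.StartsWith "q=" then Some (float (piece.Substring 2)) else None)
            |> Option.defaultValue 1.0
        mime, q)
    |> Array.sortByDescending snd

let ranked =
    parseAccept "application/xml,application/json,text/html;q=0.8,text/plain;q=0.9,*/*;q=0.5"
// application/xml and application/json rank first (q = 1.0),
// then text/plain (0.9), then text/html (0.8), then */* (0.5)
```

Since `Array.sortByDescending` is stable, mime types with equal q-values keep their original order, which matches the bullet list above.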
<p>In Giraffe you can use the <code>negotiate</code> and <code>negotiateWith</code> handlers to return the best matching response to the client based on the information passed through the request's <code>Accept</code> header.</p>
<p>The <code>negotiate</code> handler is very simple and speaks JSON, XML and plain text at the moment:</p>
<pre><code>[<CLIMutable>]
type Person =
    {
        FirstName : string
        LastName  : string
    }
    override this.ToString() =
        sprintf "%s %s" this.FirstName this.LastName

let app =
    choose [
        route "/foo" >=> negotiate { FirstName = "Foo"; LastName = "Bar" }
    ]</code></pre>
<p>By default the <code>negotiate</code> handler will check the request's <code>Accept</code> header and automatically serialize the model into either JSON, XML or plain text. If the client asks for <code>text/plain</code> then the <code>negotiate</code> handler will use the model's <code>ToString()</code> method, otherwise it will use a JSON or XML serializer. Other mime types like <code>text/html</code> are not supported out of the box, because there is no default way to serialize an object into HTML.</p>
<p>However, if you want to support a wider range of accepted mime types then you can use the <code>negotiateWith</code> handler to set custom negotiation rules.</p>
<p>Let's assume you want to support two additional mime types, <code>application/x-protobuf</code> for <a href="https://github.com/google/protobuf">Google's Protocol Buffers</a> serialization and <code>application/octet-stream</code> for generic binary serialization.</p>
<p>First you would want to implement two new <code>HttpHandler</code> functions which can return a response of those exact types:</p>
<pre><code>let serializeProtobuf x =
    // Implement protobuf serialization

let serializeBinary x =
    // Implement binary serialization

let protobuf (dataObj : obj) =
    setHttpHeader "Content-Type" "application/x-protobuf"
    >=> setBodyAsString (serializeProtobuf dataObj)

let binary (dataObj : obj) =
    setHttpHeader "Content-Type" "application/octet-stream"
    >=> setBodyAsString (serializeBinary dataObj)</code></pre>
<p>Then you can use the two new <code>HttpHandler</code> functions to set up custom negotiation rules and use them with the <code>negotiateWith</code> handler:</p>
<pre><code>let rules =
    dict [
        "*/*"                     , json
        "application/json"        , json
        "application/xml"         , xml
        "application/x-protobuf"  , protobuf
        "application/octet-stream", binary
    ]

let model = { FirstName = "Foo"; LastName = "Bar" }

let app =
    choose [
        route "/foo" >=> negotiateWith rules model
    ]</code></pre>
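Conceptually, a <code>negotiateWith</code>-style handler then only has to walk the client's ranked mime types and pick the first one for which a rule exists. A simplified, hypothetical sketch of that lookup, with plain strings standing in for real <code>HttpHandler</code> functions:

```fsharp
// Hypothetical sketch of rule lookup for content negotiation; the handler
// values are plain strings here instead of real HttpHandler functions.
let rules =
    dict [
        "*/*"                     , "json"
        "application/json"        , "json"
        "application/xml"         , "xml"
        "application/x-protobuf"  , "protobuf"
        "application/octet-stream", "binary"
    ]

// Pick the first accepted mime type for which a rule exists
let pickRule (accepted : string list) =
    accepted
    |> List.tryPick (fun mime ->
        match rules.TryGetValue mime with
        | true, handler -> Some (mime, handler)
        | _             -> None)

// text/html has no rule, so application/xml wins
let chosen = pickRule [ "text/html"; "application/xml"; "*/*" ]
// Some ("application/xml", "xml")
```

If none of the accepted mime types has a rule, `pickRule` returns `None`, which a real handler would translate into a <code>406 Not Acceptable</code> response.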
<p>You might find it more convenient to create a new negotiate handler altogether, which will make it much less verbose to use the custom rules in subsequent routes:</p>
<pre><code>let negotiate2 = negotiateWith rules

let app =
    choose [
        route "/foo" >=> negotiate2 { FirstName = "Foo"; LastName = "Bar" }
    ]</code></pre>
<p>Even though there's still loads of room for improvement, I think this might already be enough for a large number of web applications.</p>
<h2 id="model-binding">Model Binding</h2>
<p>While the <code>Accept</code> HTTP header denotes what mime types a client understands (typically more than just one), the <code>Content-Type</code> HTTP header specifies which mime type a client/server has chosen to send a message with. This is very useful information when it comes to model binding.</p>
<p>Giraffe exposes five different model binding functions which can deserialize the content of a HTTP request into a strongly typed object. Four of them can bind a specific request type into a typed model and the fifth method picks the most appropriate model binding function based on the request's <code>Content-Type</code> header.</p>
<p>It's easiest to demonstrate this with a quick example again. Let's assume we have the following record type in a web application:</p>
<pre><code>[<CLIMutable>]
type Car =
    {
        Name   : string
        Make   : string
        Wheels : int
        Built  : DateTime
    }</code></pre>
<p>Now I'd like to expose different endpoints which can be used to HTTP POST a car object to the web service:</p>
<pre><code>open Giraffe.HttpHandlers
open Giraffe.HttpContextExtensions

let submitAsJson =
    fun (ctx : HttpContext) ->
        async {
            let! car = ctx.BindJson<Car>()
            // Do stuff
        }

let submitAsXml =
    fun (ctx : HttpContext) ->
        async {
            let! car = ctx.BindXml<Car>()
            // Do stuff
        }

let submitAsForm =
    fun (ctx : HttpContext) ->
        async {
            let! car = ctx.BindForm<Car>()
            // Do stuff
        }

let submitAsQueryString =
    fun (ctx : HttpContext) ->
        async {
            let! car = ctx.BindQueryString<Car>()
            // Do stuff
        }

let submitHowYouLike =
    fun (ctx : HttpContext) ->
        async {
            let! car = ctx.BindModel<Car>()
            // Do stuff
        }

let webApp =
    POST >=>
        choose [
            route "/json"  >=> submitAsJson
            route "/xml"   >=> submitAsXml
            route "/form"  >=> submitAsForm
            route "/query" >=> submitAsQueryString
            route "/any"   >=> submitHowYouLike ]</code></pre>
<p>As you can see from the example, the model binding functions are extension methods on the <code>HttpContext</code> object and require opening the <code>Giraffe.HttpContextExtensions</code> module.</p>
<p>The <code>ctx.BindJson<'T>()</code> function will always try to retrieve an object by deserializing data from JSON. The <code>ctx.BindXml<'T>()</code> function behaves the same way but will try to deserialize from XML. The <code>ctx.BindForm<'T>()</code> function will bind a model from a request which has a <code>Content-Type</code> of <code>application/x-www-form-urlencoded</code> (typically a POST request from a HTML form element).</p>
<p>Sometimes you might want to bind a model from a query string, which could not only come from a HTTP POST but also from any other HTTP verb request. In this instance the <code>ctx.BindQueryString<'T>()</code> function can be used to bind the values from a query string to a strongly typed model.</p>
<p>Lastly, you might want to allow a client to submit an object via any of the above mentioned options on the same endpoint. In this case your endpoint has to pick the correct model binding based on the <code>Content-Type</code> HTTP header, which can be achieved with the <code>ctx.BindModel<'T>()</code> function.</p>
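The dispatch behind a <code>BindModel</code>-style function can be sketched as a simple match on the <code>Content-Type</code> header. The binder names below are invented for illustration; this is not Giraffe's actual implementation:

```fsharp
// Hypothetical sketch of how a BindModel-style function could dispatch on the
// request's Content-Type header (binder names invented for illustration).
type Binder = Json | Xml | Form

let pickBinder (contentType : string) =
    // Strip any parameters such as "; charset=utf-8" before matching
    match contentType.Split(';').[0].Trim().ToLowerInvariant() with
    | "application/json"                  -> Some Json
    | "application/xml" | "text/xml"      -> Some Xml
    | "application/x-www-form-urlencoded" -> Some Form
    | _                                   -> None

// Parameters after the mime type do not affect the choice of binder
let binder = pickBinder "application/json; charset=utf-8"   // Some Json
```

A query string binder is deliberately absent here, because query string binding does not depend on the request's <code>Content-Type</code> at all.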
<p>Since all model binding functions are extension methods of the <code>HttpContext</code> type they can be used from anywhere in a web application where you have access to the <code>HttpContext</code> object, which in Giraffe's case is every single <code>HttpHandler</code> function.</p>
<h2 id="whats-next">What's next?</h2>
<p>There were quite a few breaking changes since the first release, but the APIs are slowly maturing as I get more feedback and the framework gains more exposure. So far the library has been in an alpha stage and will probably remain there for another few weeks before I get around to finishing some more examples and test projects, which will eventually lead to the beta phase.</p>
<p>Once the project is in beta I will try to focus my effort more on collecting a lot of additional feedback before I feel confident enough to declare the first RC and subsequently the official version 1.0.0.</p>
<p>Even though breaking changes are not always the end of the world I would like to avoid drastic fundamental changes (as seen recently) once the project has entered the first stable release. Therefore I have been fairly reluctant to prematurely label Giraffe beyond an alpha and will probably want to enjoy the freedom of breaking stuff for a tiny bit longer. At the end of the day it's about setting the right expectations and I don't help anyone by labeling v1.0.0 too early when I know there's still a fair bit of danger to potentially move stuff around.</p>
<p>However, having said that I do want to stress that the underlying system (ASP.NET Core and Kestrel) have been very stable for a while now and as long as you don't mind that a namespace or method might still change in the near future then Giraffe is absolutely fit for production. So please go ahead and give it a try if you like what you've seen in this blog post so far :).</p>
<p>This basically brings me to the end of this follow up article and I thought what better way to finish it off than by sharing some of our memories from our wonderful wedding (in case you ever wondered what a British-Indian/Austrian-Polish wedding looks like ;)).</p>
<p>It was a very long day, which started off with a civil ceremony in the morning...</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34654715395_505f30f403_o.png" alt="Civil ceremony in the morning of the day, Image by Dustin Moris Gorski">
<p>Followed by a traditional Hindu ceremony shortly after lunch...</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34654707095_29a78034d4_o.jpg" alt="Drums during groom entrance at Hindu ceremony, Image by Dustin Moris Gorski">
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34524044991_34308216d4_o.jpg" alt="Groom entrance at Hindu ceremony, Image by Dustin Moris Gorski">
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34492683892_f8caa7cafe_o.jpg" alt="Prayers at Hindu ceremony, Image by Dustin Moris Gorski">
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34524041711_e19221ff3c_o.jpg" alt="Listening to Hindu priest cracking jokes, Image by Dustin Moris Gorski">
<p>I had no idea how much fun Hindu ceremonies can be! There are lots of really fun and merry traditions which take place as part of getting married. Then there's also a bit of banter between the two families. One of those little traditions is that the bride's family has to steal the groom's shoes before the ceremony ends so that the groom can't leave the house and take his newly wedded wife away from her family - at least not without having to pay to get his shoes back. Normally this results in a bit of shoe pulling between the bride's side and the groomsmen, but I think in our case it is fair to say that there was a bit of a cultural clash when someone from my family rugby tackled a guy who tried to sneak away with my shoes, lol...</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34271006590_4217b6d130_o.jpg" alt="Shoe fight, Image by Dustin Moris Gorski">
<p>Luckily nothing serious happened and after everyone had a great laugh we continued with the ceremony...</p>
<p>Until finally we were able to celebrate at the reception party in the evening...</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/33844938503_c21b395eed_o.png" alt="Entering at the reception party, Image by Dustin Moris Gorski">
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34654706015_1a1bd052ab_o.jpg" alt="Cake cutting, Image by Dustin Moris Gorski">
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34524043471_52399b9e9f_o.jpg" alt="Our first toast as a married couple, Image by Dustin Moris Gorski">
<p>Throughout the day our guests wrote us lovely (I think) messages on little papers and my family decided to throw all these messages into a wooden box with a nice bottle of red, which we had to seal ourselves with a hammer and nails. We are not allowed to open this box for seven years, and then we can enjoy a nicely matured bottle of vino while reading all those wonderful memories from our big day. What a brilliant idea!</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34525341611_cce8067f4c_o.jpg" alt="Sealing box of memories, Image by Dustin Moris Gorski">
<p>We had a fantastic day and before everyone stormed to the dance floor there was even a pretty impressive surprise firework display...</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-05-14/34492692812_5e79eb9549_o.jpg" alt="Surprise firework display, Image by Dustin Moris Gorski">
<p>Getting married was a lot of fun and it all worked out so much better than we could have hoped for :)</p>
<p>Now we are ready for new adventures...</p>
https://dusted.codes/functional-aspnet-core-part-2-hello-world-from-giraffe
Dustin Moris Gorski · Sun, 14 May 2017 00:00:00 +0000 · giraffe, aspnet-core, fsharp
Functional ASP.NET Core<p>In December 2016 I participated in the <a href="https://sergeytihon.wordpress.com/2016/10/23/f-advent-calendar-in-english-2016/">F# Advent Calendar</a> where I wrote a blog post on <a href="https://dusted.codes/running-suave-in-aspnet-core-and-on-top-of-kestrel">running Suave in ASP.NET Core</a>. As part of that blog post I introduced the <a href="https://www.nuget.org/packages/Suave.AspNetCore/">Suave.AspNetCore</a> NuGet package which makes it possible to run a <a href="https://suave.io/">Suave web application</a> inside ASP.NET Core via a new middleware.</p>
<p>So far this has been pretty good and as of last week the <a href="https://github.com/SuaveIO/Suave.AspNetCore">GitHub repository</a> has been moved to the official <a href="https://github.com/SuaveIO">SuaveIO GitHub organisation</a> as well. A while ago someone even tweeted me that the performance has been pretty good too:</p>
<p><a href="https://twitter.com/jamesjrg/status/809923555894902784" title="suave-aspnet-core-perf-tweet"><img src="https://cdn.dusted.codes/images/blog-posts/2017-02-07/32713608366_88c2eca85d_o.png" alt="suave-aspnet-core-perf-tweet, Image by Dustin Moris Gorski"></a></p>
<p>Even though this made me very happy there was still one thing that bugged me until today.</p>
<h2 id="why-i-created-suaveaspnetcore">Why I created Suave.AspNetCore</h2>
<p>My main motivation for running Suave inside ASP.NET Core was to benefit from the speed and power of <a href="https://github.com/aspnet/KestrelHttpServer">Kestrel</a> while still being able to build a web application in a functional approach. <a href="https://www.nuget.org/packages/Suave.AspNetCore/">Suave.AspNetCore</a> made this possible, but afterwards I realised that this was not my final goal yet.</p>
<p>Ultimately I would like to build a web application in ASP.NET Core with a functional framework which does not only benefit from Kestrel but also from the entire ASP.NET Core eco system, including other middleware such as <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/static-files">static files</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/security/authentication/identity">authentication</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction">authorization</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/security/">security</a>, the flexibility of the <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration">config system</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging">logging</a> or simply being able to retrieve information from the current <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/environments">hosting environment</a> and more.</p>
<p>There is a lot of great features in ASP.NET Core which have been carefully crafted by experts (e.g. security) and I wouldn't want to miss out on those based on the framework of my choice. Unfortunately with Suave and Suave.AspNetCore I am limited in what other middleware I can use in combination with a Suave ASP.NET Core web application.</p>
<h2 id="suave-and-aspnet-core---buddies-but-not-family">Suave and ASP.NET Core - Buddies but not family</h2>
<p>Suave doesn't naturally fit into ASP.NET Core the way MVC does.</p>
<p>The reason is because Suave is not only a web framework but more of a web platform similar to what ASP.NET Core is itself. It's probably never been intended to be integrated with ASP.NET Core in the first place.</p>
<p>Think of it like this: ASP.NET Core is a web platform which sets the ground work for building any web application, and MVC is a framework on top of that platform which enables building web applications with an object oriented <a href="https://msdn.microsoft.com/en-us/library/ff649643.aspx">Model-View-Controller design pattern</a>. Ideally, as an F# developer, I would like to replace the object oriented MVC framework with a functional equivalent, but keep the rest of ASP.NET Core's offering at the same time. This is very difficult with Suave at the moment.</p>
<p>Suave has its own HTTP server, its own Socket server and its own HTTP abstractions. As a result Suave's <a href="https://github.com/SuaveIO/suave/blob/master/src/Suave/WebSocket.fs">Socket</a> implementation is not compatible with <a href="https://github.com/aspnet/WebSockets">ASP.NET Core's web socket server</a> and Suave's <a href="https://github.com/SuaveIO/suave/blob/master/src/Suave/Http.fs#L526">HttpContext</a> is vastly different from ASP.NET Core's <a href="https://github.com/aspnet/HttpAbstractions/blob/master/src/Microsoft.AspNetCore.Http.Abstractions/HttpContext.cs">HttpContext</a>.</p>
<p>This is why the <a href="https://github.com/SuaveIO/Suave.AspNetCore/blob/master/src/Suave.AspNetCore/SuaveMiddleware.cs#L33">Suave.AspNetCore middleware</a> has to translate one <code>HttpContext</code> into another and then back again. It works, but it is not ideal, because there's a lot of information that gets lost along the way (e.g. the <code>User</code> object in ASP.NET Core is of type <code>IPrincipal</code> and in Suave it is a <code>Map<string, obj></code>). This is a limiting factor. For example there's no way to access the <code>Authentication</code> property, the <code>Session</code> object or the <code>Features</code> collection of the original ASP.NET Core <code>HttpContext</code> from inside a Suave application.</p>
<p>This means that even though I can run a Suave web application in ASP.NET Core, I still have to re-build a lot of the ground work that has been laid out for me by other ASP.NET Core middleware. In some cases this might be a minor problem, but in others I consider it a big issue. Especially when it comes to critical components such as security I would much rather want to rely on the implementation provided by Microsoft and other industry experts than (re-)inventing it myself.</p>
<p>None of this is Suave's fault though, because it has not been designed with ASP.NET Core in mind, which is more than fair, but for people like me who want to benefit from both platforms this is still an important issue to consider.</p>
<h2 id="why-aspnet-core">Why ASP.NET Core?</h2>
<p>If Suave is already a well working standalone product for building web applications in a functional way, why would I even bother with ASP.NET Core? Well this is a good question and I can only answer it for myself.</p>
<p>For me the main reasons for using ASP.NET Core are the following:</p>
<ul>
<li><a href="#performance">Performance</a></li>
<li><a href="#security">Security</a></li>
<li><a href="#laziness">Laziness</a></li>
<li><a href="#community">Community</a></li>
<li><a href="#impatience">Impatience</a></li>
<li><a href="#fear-of-missing-out">Fear of missing out</a></li>
<li><a href="#support">Support</a></li>
</ul>
<h3 id="performance">Performance</h3>
<p><a href="https://www.ageofascent.com/2016/02/18/asp-net-core-exeeds-1-15-million-requests-12-6-gbps/">Kestrel is extremely fast</a> and the Microsoft team is working hard on making it even faster. Depending on what type of application you are trying to build this can be a big or a small selling point.</p>
<p>Personally I am working on a project which anticipates a significant load at most times and therefore performance is key. I also have a few smaller side projects which run in Docker containers in the cloud and the quicker these applications can handle web requests, the less I have to pay for additional computation power.</p>
<h3 id="security">Security</h3>
<p>As mentioned before I am very reluctant to using 3rd party security code which hasn't been vetted, audited and tested to the same depth as the one provided by Microsoft and the ASP.NET Core team. They have years of invaluable experience and I trust them with security more than other individuals who I barely know or even myself. Call me pessimistic, but security is such a complex topic that I don't even think that a single person should do this on their own. It's one of those things where I believe you have to have a big team of experts and resources behind you to stand a chance against the various vulnerabilities of today.</p>
<h3 id="laziness">Laziness</h3>
<p>I am a lazy developer. I don't enjoy building the 100th logging framework or coming up with yet another configuration API. I want to build new original ideas and not waste my time on stuff which has been done by thousands of other developers before me. ASP.NET Core is a platform which offers many things out of the box and essentially saves me a lot of valuable time.</p>
<p>Additionally it offers easy integration points for other 3rd party code and has a huge developer base behind it which gives me access to even more useful tools and libraries which can benefit my projects.</p>
<h3 id="community">Community</h3>
<p>The community around ASP.NET (Core) is probably the biggest of all .NET web frameworks. There are significantly more people reporting issues, working on bug fixes and building new features into the product than anywhere else. The value of such a vibrant community shall not be underestimated. There's nothing better than having bugs fixed by other people before I even encounter them myself.</p>
<p>Another great side effect is that a lot of my questions might have already been answered on StackOverflow and that there is a ton of other useful blog posts explaining stuff that I don't have to work out myself. As an employer it might be also beneficial to pick a framework which has a bigger talent pool than others.</p>
<h3 id="impatience">Impatience</h3>
<p>I am very impatient and for me it is very important that certain issues get addressed fairly quickly. For instance in recent years many web servers were upgrading to support HTTP/2 which is an important improvement over HTTP/1.1. My chances of getting such critical updates are probably higher with a product which is used by thousands of developers than something else which is maybe a little bit more niche. This is not always true, but from my experience this is generally the case.</p>
<h3 id="fear-of-missing-out">Fear of missing out</h3>
<p>We don't know what the future holds for us. Tomorrow someone might release something incredibly awesome which might not be available on smaller platforms in its initial phase. I love working with bleeding edge technology and as such I do have a certain degree of FOMO when it comes to software innovations.</p>
<h3 id="support">Support</h3>
<p>Lastly ASP.NET Core, even though open source, remains an enterprise supported product and that has a lot of value as well. Over the years I had to replace many successful open source packages because the original maintainers got tired of supporting them and no one else stepped in to replace them, which meant they became stale and essentially completely out of date. There is no guarantee that Microsoft will support the ASP.NET (Core) platform forever, but from the looks of it I don't expect it to go away any time soon either. In fact the current signs seem to suggest the exact opposite, considering that ASP.NET itself already has more than a decade on its shoulders and Microsoft recently decided to invest in a completely new re-design to continue its success.</p>
<p>Not everyone might agree with my reasoning, but for me these are very compelling points to stick with ASP.NET Core as my preferred .NET web platform and try to use as much of its ready made features as possible.</p>
<h2 id="building-a-functional-framework-for-aspnet-core">Building a functional framework for ASP.NET Core</h2>
<p>Now that I have explained why I would like to build a web application with ASP.NET Core I have the only problem that as an F# developer there's no ideal framework available yet.</p>
<p>This got me thinking: what if I could build my own little micro framework in F# that borrows the functional design pattern of Suave, but embraces the power of ASP.NET Core?</p>
<h3 id="defining-a-functional-httphandler">Defining a functional HttpHandler</h3>
<p>A functional ASP.NET Core web application could look as simple as a function which takes in a <code>HttpContext</code> and returns a <code>HttpContext</code>. In functional programming everything is a function after all. Inside that function it would have full access to the <code>Request</code> and <code>Response</code> object as well as all the other objects to successfully process an incoming web request and return a response.</p>
<p>I could call such a web function a <code>HttpHandler</code>:</p>
<pre><code>type HttpHandler = HttpContext -> HttpContext</code></pre>
<p>Not every incoming web request can or should be handled by a <code>HttpHandler</code>. For example if the request was made to a route which wasn't anticipated, like a static file which should be picked up by the static file middleware, then a <code>HttpHandler</code> should be able to skip this particular request.</p>
<p>In this case there should be an option to return nothing so that another <code>HttpHandler</code> can try to satisfy the incoming request or the calling middleware can defer the <code>HttpContext</code> to the next middleware:</p>
<pre><code>type HttpHandler = HttpContext -> HttpContext option</code></pre>
<p>By making the <code>HttpContext</code> optional the function can now either return <code>Some HttpContext</code> or <code>None</code>.</p>
<p>Additionally the <code>HttpHandler</code> shouldn't block on IO operations or other long running tasks and therefore return the <code>HttpContext option</code> wrapped in an asynchronous workflow:</p>
<pre><code>type HttpHandler = HttpContext -> Async<HttpContext option></code></pre>
<p>This is slowly taking shape, but there's still something missing.</p>
<p>In ASP.NET Core MVC controller dependencies are automatically resolved during instantiation. This is very useful, because in ASP.NET Core dependencies are registered in a central place inside the <code>ConfigureServices</code> method of the <code>Startup.cs</code> class file.</p>
<p>Automatic dependency resolution is not really a thing in functional programming, because dependencies are normally functions and not objects. Functions can be passed around or partially applied which usually makes object oriented dependency management obsolete.</p>
<p>However, because most ASP.NET Core dependencies are registered as objects inside an <code>IServiceCollection</code> container, a <code>HttpHandler</code> can resolve these dependencies through an <code>IServiceProvider</code> object.</p>
<p>This is done by wrapping the original <code>HttpContext</code> and an <code>IServiceProvider</code> object inside a new type called <code>HttpHandlerContext</code>:</p>
<pre><code>type HttpHandlerContext =
    {
        HttpContext : HttpContext
        Services    : IServiceProvider
    }</code></pre>
<p>By changing the <code>HttpHandler</code> function definition we can take advantage of this new type:</p>
<pre><code>type HttpHandler = HttpHandlerContext -> Async<HttpHandlerContext option></code></pre>
<p>With that one should be able to build pretty much any web application of desire. If you have worked with Suave in the past then this should look extremely familiar as well.</p>
<h3 id="combining-smaller-httphandlers-to-bigger-applications">Combining smaller HttpHandlers to bigger applications</h3>
<p>In principle there's nothing you cannot do with an <code>HttpHandler</code>, but it wouldn't be very practical to build a whole web application in one function. The beauty of functional programming is the composition of many smaller functions into one bigger application.</p>
<p>The simplest combinator would be a bind function which takes two <code>HttpHandler</code> functions and combines them into one:</p>
<pre><code>let bind (handler : HttpHandler) (handler2 : HttpHandler) =
    fun (ctx : HttpHandlerContext) ->
        async {
            let! result = handler ctx
            match result with
            | None      -> return None
            | Some ctx2 -> return! handler2 ctx2
        }</code></pre>
<p>As you can see, the <code>bind</code> function takes two <code>HttpHandler</code> functions and returns a new function which accepts a <code>HttpHandlerContext</code>. First it evaluates the first handler and checks its result. If the result is <code>None</code> then it stops at this point and returns <code>None</code> as the final result. If the result is <code>Some HttpHandlerContext</code> then it takes the resulting context and uses it to evaluate the second <code>HttpHandler</code>. Whatever the second <code>HttpHandler</code> returns will be the final result in this case.</p>
<p>This pattern is often referred to as <a href="http://fsharpforfunandprofit.com/rop/">railway oriented programming</a>. If you are interested in learning more about it then please check out <a href="http://fsharpforfunandprofit.com/rop/">Scott Wlaschin's slides and video</a> on his website or this <a href="http://fsharpforfunandprofit.com/posts/recipe-part2/">lengthy blog post</a> on the topic.</p>
<p>One more thing the <code>bind</code> function should consider is whether a <code>HttpResponse</code> has already been written by the first <code>HttpHandler</code> before invoking the second <code>HttpHandler</code>. This is required to prevent a potential exception when a <code>HttpHandler</code> tries to make changes to the <code>HttpResponse</code> after another <code>HttpHandler</code> has already written to it. In this case the <code>bind</code> function should not invoke the second <code>HttpHandler</code>:</p>
<pre><code>let bind (handler : HttpHandler) (handler2 : HttpHandler) =
    fun (ctx : HttpHandlerContext) ->
        async {
            let! result = handler ctx
            match result with
            | None -> return None
            | Some ctx2 ->
                match ctx2.HttpContext.Response.HasStarted with
                | true  -> return Some ctx2
                | false -> return! handler2 ctx2
        }</code></pre>
<p>To round this off we can alias the <code>bind</code> function with the <code>>>=</code> operator:</p>
<pre><code>let (>>=) = bind</code></pre>
<p>With the <code>bind</code> function we can combine any number of <code>HttpHandler</code> functions into one.</p>
<p>The flow would look something like this:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-02-07/32713574026_7ef98d6280_o.png" alt="aspnet-core-lambda-http-handler-flow-cropped, Image by Dustin Moris Gorski">
<p>Another very useful combinator which can be borrowed from Suave is the <code>choose</code> function. The <code>choose</code> function lets you define a list of multiple <code>HttpHandler</code> functions which will be iterated one by one until the first <code>HttpHandler</code> returns <code>Some HttpHandlerContext</code>:</p>
<pre><code>let rec choose (handlers : HttpHandler list) =
    fun (ctx : HttpHandlerContext) ->
        async {
            match handlers with
            | [] -> return None
            | handler :: tail ->
                let! result = handler ctx
                match result with
                | Some c -> return Some c
                | None   -> return! choose tail ctx
        }</code></pre>
<p>In order to better see the usefulness of this combinator it's best to look at an actual example.</p>
<p>Let's define a few simple <code>HttpHandler</code> functions first:</p>
<pre><code>let httpVerb (verb : string) =
    fun (ctx : HttpHandlerContext) ->
        (if ctx.HttpContext.Request.Method.Equals verb
         then Some ctx
         else None)
        |> async.Return

let GET  = httpVerb "GET"  : HttpHandler
let POST = httpVerb "POST" : HttpHandler

let route (path : string) =
    fun (ctx : HttpHandlerContext) ->
        (if ctx.HttpContext.Request.Path.ToString().Equals path
         then Some ctx
         else None)
        |> async.Return

let setBody (bytes : byte array) =
    fun (ctx : HttpHandlerContext) ->
        async {
            ctx.HttpContext.Response.Headers.["Content-Length"] <-
                new StringValues(bytes.Length.ToString())
            do! ctx.HttpContext.Response.Body.WriteAsync(bytes, 0, bytes.Length)
                |> Async.AwaitTask
            return Some ctx
        }

let setBodyAsString (str : string) =
    Encoding.UTF8.GetBytes str
    |> setBody</code></pre>
<p>This already gives a good illustration of how easily one can create different <code>HttpHandler</code> functions to do various things. For instance the <code>httpVerb</code> handler checks if the incoming request matches a given HTTP verb. If it matches then it will proceed with the next <code>HttpHandler</code>, otherwise it will return <code>None</code>. The two functions <code>GET</code> and <code>POST</code> re-purpose <code>httpVerb</code> to specifically check for a GET or POST request.</p>
<p>The <code>route</code> function compares the request path with a given string and either proceeds or returns <code>None</code> as well. Both <code>setBody</code> and <code>setBodyAsString</code> write a given payload to the response of the <code>HttpContext</code>, which triggers a response being sent back to the client.</p>
<p>Each <code>HttpHandler</code> is kept very short and has a single responsibility. Through the <code>bind</code> and <code>choose</code> combinators we can combine many <code>HttpHandler</code> functions into one larger web application:</p>
<pre><code>let webApp =
    choose [
        GET >>=
            choose [
                route "/"     >>= setBodyAsString "Index"
                route "/ping" >>= setBodyAsString "pong"
            ]
        POST >>=
            choose [
                route "/submit" >>= setBodyAsString "Submitted!"
                route "/upload" >>= setBodyAsString "Uploaded!"
            ]
    ]</code></pre>
<p>Even though I've barely written any code this functional framework already proves to be quite powerful.</p>
<p>Lastly I need to create a new middleware which can run this functional web application:</p>
<pre><code>type HttpHandlerMiddleware (next     : RequestDelegate,
                            handler  : HttpHandler,
                            services : IServiceProvider) =
    do if isNull next then raise (ArgumentNullException("next"))

    member __.Invoke (ctx : HttpContext) =
        async {
            let httpHandlerContext =
                {
                    HttpContext = ctx
                    Services    = services
                }
            let! result = handler httpHandlerContext
            if result.IsNone then
                return!
                    next.Invoke ctx
                    |> Async.AwaitTask
        } |> Async.StartAsTask</code></pre>
<p>And finally hook it up in <code>Startup.cs</code>:</p>
<pre><code>type Startup() =
    member __.Configure (app : IApplicationBuilder) =
        app.UseMiddleware<HttpHandlerMiddleware>(webApp) |> ignore</code></pre>
<p>This web framework shows what I love so much about F#. With very little code I was able to quickly write a basic web application from the ground up. It looks very much like an ASP.NET Core clone of Suave, but with the difference that it fully embraces the ASP.NET Core architecture and its <code>HttpContext</code>.</p>
<h2 id="functional-aspnet-core-framework">Functional ASP.NET Core framework</h2>
<p>By extending the above example with a few more useful <code>HttpHandler</code> functions to return JSON, XML, HTML or even templated (DotLiquid) views someone could create a very powerful ASP.NET Core functional web framework, which could be easily seen as an MVC replacement for F# developers.</p>
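<p>As an illustration, a JSON handler could be sketched like this. This is only a hypothetical example, not the framework's actual implementation: it assumes Json.NET's <code>JsonConvert</code> for serialization and re-uses the <code>setBody</code> handler from above:</p>
<pre><code>// Hypothetical sketch of a JSON handler,
// assuming Newtonsoft.Json is referenced
let json (dataObj : obj) =
    fun (ctx : HttpHandlerContext) ->
        ctx.HttpContext.Response.Headers.["Content-Type"] <-
            new StringValues("application/json")
        let handler =
            JsonConvert.SerializeObject dataObj
            |> Encoding.UTF8.GetBytes
            |> setBody
        handler ctx</code></pre>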
<p>This is exactly what I did and I named it <a href="https://github.com/dustinmoris/AspNetCore.Lambda">ASP.NET Core Lambda</a>, because I simply couldn't think of a more descriptive or "cooler" name. It is a functional ASP.NET Core micro framework and I've built it primarily for my own use. It is still very early days and in alpha testing, but I already use it for two of my private projects and it works like a charm.</p>
<h3 id="how-does-it-compare-to-other-net-web-frameworks">How does it compare to other .NET web frameworks</h3>
<p>Now you might ask yourself how this compares to other .NET web frameworks, and particularly to Suave (since I've borrowed a lot of ideas from Suave and from Scott Wlaschin's blog).</p>
<p>I think this table explains it very well:</p>
<table>
<tr>
<th></th>
<th>Paradigm</th>
<th>Language</th>
<th>Hosting</th>
<th>Frameworks</th>
</tr>
<tr>
<th>MVC</th>
<td>Object oriented</td>
<td>C#</td>
<td>ASP.NET (Core) only</td>
<td>Full .NET, .NET Core</td>
</tr>
<tr>
<th>NancyFx</th>
<td>Object oriented</td>
<td>C#</td>
<td>Self-hosted or ASP.NET (Core)</td>
<td>Full .NET, .NET Core, Mono</td>
</tr>
<tr>
<th>Suave</th>
<td>Functional</td>
<td>F#</td>
<td>Primarily self-hosted</td>
<td>Full .NET, .NET Core, Mono</td>
</tr>
<tr>
<th>Lambda</th>
<td>Functional</td>
<td>F#</td>
<td>ASP.NET Core only</td>
<td>.NET Core</td>
</tr>
</table>
<p>MVC and NancyFx are both heavily object-oriented frameworks mainly targeting a C# audience. MVC is probably the most feature-rich framework, and together with NancyFx and its <a href="https://github.com/NancyFx/Nancy/wiki/Introduction">super-duper-happy-path</a> it is probably among the most widespread .NET web frameworks. NancyFx is also very popular with .NET developers who want to run a .NET web application self-hosted on Linux (this was a big selling point in pre .NET Core times).</p>
<p>Suave was the first functional web framework built for F# developers. It is a completely independent standalone product which was designed to be cross-platform compatible (via Mono) and self-hosted. A large part of the Suave library is composed of its own HTTP server and socket server implementation. Unlike NancyFx it is primarily meant to be self-hosted, which is why the separation between web server and web framework is not as clean cut as in NancyFx (e.g. the Suave NuGet library contains everything in one package while NancyFx has separate packages for different hosting options).</p>
<p><a href="https://github.com/dustinmoris/AspNetCore.Lambda">ASP.NET Core Lambda</a> is the smallest of all frameworks (and is meant to stay this way). It is also a functional web framework built for F# developers, but cannot exist outside of the ASP.NET Core platform. It has been tightly built around ASP.NET Core to leverage its features as much as possible. As a result it is currently the only (native) functional web framework which is a first class citizen in ASP.NET Core.</p>
<p>I think it has its own little niche where it doesn't really compete with any of the other web frameworks. It is basically aimed at F# developers who want to use ASP.NET Core in a functional way.</p>
<p>While all these web frameworks share some similarities they still have their own areas of application and target a different set of developers.</p>
<p>Watch this space for more updates on <a href="https://github.com/dustinmoris/AspNetCore.Lambda">ASP.NET Core Lambda</a> in the future and feel free to try it out and let me know how it goes! So far I've been very happy with the results and it has become my go-to framework for new web development projects.</p>
https://dusted.codes/functional-aspnet-core
[email protected] (Dustin Moris Gorski)https://dusted.codes/functional-aspnet-core#disqus_threadTue, 07 Feb 2017 00:00:00 +0000https://dusted.codes/functional-aspnet-corefsharpaspnet-corekestrelThank you Microsoft for being awesome<p>OK so I have to admit that I can have a pretty big mouth sometimes, which is certainly not a good quality to have. I don't shy away from being (overly) critical when things annoy me:</p>
<p><a href="https://twitter.com/dustinmoris/status/689747287938109440" title="tweet-0"><img src="https://cdn.dusted.codes/images/blog-posts/2017-01-24/31644773584_a733c139dc_o.png" alt="tweet-0, Image by Dustin Moris Gorski"/></a></p>
<p>As you can see, about a year ago I was a little bit upset that a great new product received a not so great new name. I really hoped that <a href="https://www.asp.net/core">ASP.NET Core</a> would have been named something much "cooler" for my taste.</p>
<p>So my frustration continued...</p>
<p><a href="https://twitter.com/dustinmoris/status/689760562180415488" title="tweet-1"><img src="https://cdn.dusted.codes/images/blog-posts/2017-01-24/31644772444_27955fafaf_o.png" alt="tweet-1, Image by Dustin Moris Gorski"/></a></p>
<p><em>(Let's be honest, it is probably a good thing that I don't have many Twitter followers.)</em></p>
<p>And even a year later I feel like I am still suffering sometimes:</p>
<p><a href="https://twitter.com/dustinmoris/status/817846764707389440" title="tweet-2"><img src="https://cdn.dusted.codes/images/blog-posts/2017-01-24/32109514750_f3d46e3f8e_o.png" alt="tweet-2, Image by Dustin Moris Gorski"/></a></p>
<p>But I don't care about the name anymore.</p>
<p>Because the product is actually really good.</p>
<p>Yes, the name is still an issue when you google for stuff, and yes, the tooling is still in preview, and yes, the support for F# is not even in RC yet, and yes, there's still lots of other little annoyances which would be nice to have fixed soon, but when I look past all that for a moment then I have to admit that the product is actually <strong>really freaking nice</strong>.</p>
<h2 id="being-harsh-is-easy">Being harsh is easy</h2>
<p>It is often very easy to be quick with criticism, but much harder to acknowledge someone's good work. It's already an effort to praise people who did a great job at work, let alone some third party who you are not even directly related to.</p>
<p>But as someone who depends a lot on these third parties, be it corporations like Microsoft who own a lot of the tech which I love and use every day, or other contributors from the open source community, I think it's important to give back to these people as well.</p>
<p>Particularly with companies like Microsoft many people feel entitled to be overly rude or critical if things don't work out immediately the way they want it to be (and not just Microsoft but really any company of similar size). It is so easy to forget good manners when one is thinking of a big company logo rather than actual human beings, because it is much easier to throw dirt at a logo than a person.</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-01-24/32448458306_cf66227c91_o.png" alt="Microsoft_logo, Image by Dustin Moris Gorski" class="two-third-width"/>
<p>But behind these logos there are still human beings working hard on the products which we use every day and they are not any different from us. Women and men who got into software development and IT for the same good reasons as we did, who share the same passion as we do and who make the same mistakes as us, even when they have only the best intentions.</p>
<p>Microsoft has made a lot of mistakes in the past (Vista, Windows 8, Silverlight, Windows Phone, Skype, you name it!) and we've not held back in letting them know how much these products sucked, but in recent years they've actually made an amazing turnaround and even taken the lead in some of the most exciting innovations of our field, and I am certainly not the only <a href="http://www.technobuffalo.com/2017/01/06/microsoft-is-killing-apple-in-every-corner-what-ive-learned-at-ces-2017/">one who has noticed it</a>.</p>
<h2 id="awesome-things-by-microsoft">Awesome things by Microsoft</h2>
<p>Here's a list of some of the many awesome things that happened at Microsoft in recent years (in no particular order):</p>
<ul>
<li><a href="https://blogs.msdn.microsoft.com/dotnet/2014/11/12/net-core-is-open-source/">.NET Core open source and on GitHub</a></li>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/">ASP.NET Core open source and on GitHub</a></li>
<li><a href="https://channel9.msdn.com/Series/ConnectOn-Demand/Introducing-Visual-Studio-Community-2015">Free Visual Studio Community Edition</a></li>
<li><a href="http://www.hanselman.com/blog/IntroducingVisualStudioCodeForWindowsMacAndLinux.aspx">Visual Studio Code, an open source free IDE for Windows, Mac and Linux</a></li>
<li><a href="http://open.microsoft.com/2016/08/19/powershell-is-open-sourced-and-available-on-linux/">PowerShell became open source and available on Linux</a></li>
<li><a href="https://techcrunch.com/2016/03/07/microsoft-is-bringing-sql-server-to-linux/">Microsoft made SQL Server available on Linux</a></li>
<li><a href="https://www.linuxfoundation.org/press-release/2016/11/microsoft-fortifies-commitment-to-open-source-becomes-linux-foundation-platinum-member/">Microsoft joined the Linux Foundation</a></li>
<li><a href="http://news.microsoft.com/2015/12/09/microsoft-offers-new-certification-for-linux-on-azure/#hdYyJXK4A7pp2Eh2.97">Microsoft even offers certification for Linux on Azure</a></li>
<li><a href="http://blogs.microsoft.com/blog/2016/02/24/microsoft-to-acquire-xamarin-and-empower-more-developers-to-build-apps-on-any-device/">Microsoft acquired Xamarin</a></li>
<li><a href="http://www.zdnet.com/article/microsoft-open-sources-xamarins-software-development-kit/">Two months later Microsoft open sourced Xamarin's SDK</a></li>
<li><a href="https://blogs.windows.com/buildingapps/2016/03/30/run-bash-on-ubuntu-on-windows/#f7WbyptVYXolXR4R.97">Bash on Ubuntu on Windows</a></li>
<li><a href="https://blog.docker.com/2016/09/dockerforws2016/">Docker on Windows</a></li>
<li><a href="https://www.youtube.com/watch?v=nSDmCPH3OWc">Microsoft Surface Pro</a></li>
<li><a href="https://www.youtube.com/watch?v=VpQTRCOECZw">Microsoft Surface Book</a></li>
<li><a href="http://creativity-online.com/work/microsoft-introducing-microsoft-surface-studio/49677">Microsoft Surface Studio</a></li>
<li><a href="http://www.techradar.com/news/office-365-crowned-king-of-all-productivity-apps">Office 365</a></li>
<li><a href="https://mva.microsoft.com/">Free Online Training in the Microsoft Virtual Academy</a></li>
<li><a href="https://azure.microsoft.com/en-gb/">Microsoft Azure</a></li>
<li><a href="https://www.microsoft.com/en-gb/windows/get-windows-10">Windows 10</a></li>
</ul>
<p>This is a lot of (radical) positive change in a comparatively short amount of time when you think about the size of the company!</p>
<p>For example, check out the <a href="https://www.youtube.com/watch?v=BzMLA8YIgG0">Surface Studio introduction video</a> which demonstrates 3D drawing on a Surface Studio:</p>
<p><a href="https://cdn.dusted.codes/images/blog-posts/2017-01-24/31644800474_16cf27d972_o.gif" title="surface-studio-3d-drawing"><img src="https://cdn.dusted.codes/images/blog-posts/2017-01-24/31644800474_16cf27d972_o.gif" alt="surface-studio-3d-drawing, Image by Dustin Moris Gorski"/></a></p>
<p>You have to admit that this is awesome. I mean I am not even a designer and I get a watery mouth by looking at it.</p>
<p>To be honest the whole Microsoft Surface product series is awesome and I speak from personal experience. Last Black Friday I bought my partner a new Surface Pro 4 (I accidentally spilled a drink over her old laptop) and boy was I impressed with her new device. It's slick, it's fast and when I tried out the drawing with the pencil then my jaw dropped and I was left with pure jealousy (and I say that even though I have a <a href="https://www.youtube.com/watch?v=nxKAN0JA0gw">Lenovo Yoga 900</a> which is a pretty neat device itself).</p>
<p>But that is not even why I am so pleased with Microsoft in recent days. As a .NET developer I am mostly impressed by the work the teams have put into .NET Core, ASP.NET Core, C#, F# and the tooling around it.</p>
<h2 id="net-core-aspnet-core-f-and-open-source">.NET Core, ASP.NET Core, F# and Open Source</h2>
<p>Especially the ASP.NET Core team led by <a href="https://twitter.com/shanselman">Scott Hanselman</a> and <a href="https://twitter.com/DamianEdwards">Damian Edwards</a> is an excellent example of how well Microsoft functions today. Don't get me wrong, I know that ASP.NET Core is still far away from being perfect and that there's still loads of stuff that needs to be done before everyone can use it, but I am still amazed by how far this team has come today.</p>
<p>They have taken a 10+ year old technology, completely revamped it and transitioned from an entirely closed and proprietary product team to one of the most approachable open source contributors that I have ever seen. Not only is <a href="https://github.com/aspnet/Home">their code on GitHub</a>, but they are putting an immense amount of time and effort into keeping a close relationship to the .NET community. They are active on GitHub, Slack, Gitter, Twitter and heck they even <a href="https://live.asp.net/">live stream a weekly standup</a> where they demo the latest stuff hot from the press, answer live questions to the community, give updates on the current development status and talk about the roadmap. They even spend time going through blog posts written by non-Microsofties where <a href="https://twitter.com/jongalloway">Jon Galloway</a> presents a whole list of community written content at the beginning of each standup. This has become a nice ritual where the Microsoft team gives back to other open source contributors and help nourish a friendly and healthy community.</p>
<p>I find this amazing work. I don't know many other teams in the world who put this much effort into their open source work.</p>
<p>This is not Microsoft trying to put some code on GitHub and then turning on the <a href="https://twitter.com/search?q=%23OpenSource">#OpenSource</a> marketing machinery. No, this is Microsoft showing the rest of the world how open source is done right (and I am not trying to undermine the open source work by others here, just trying to highlight how well it's done at MSFT).</p>
<p>Also I'd like to point out that this team is doing the difficult balancing act of developing a completely new technology which is supposed to entertain an already existing (and very spoiled) community as well as trying to attract new programmers from the open source field. I guess this is much harder than just launching a new product into the market like Go, Rust or Node.js which didn't have an existing developer base to please.</p>
<h3 id="fsharp">FSharp</h3>
<p>I have been a C# developer for almost the entirety of my professional career and only in the last year I really fell in love with functional programming and F#. For me F# is probably one of the most powerful languages on the market right now. It has the horse power of .NET under the hood with the beauty of a modern functional language on the surface and it has been <a href="http://fsharp.org/">open source, cross platform compatible and community driven</a> from the very beginning of its time.</p>
<p>So with all of that I've got something to say which was overdue for a while now:</p>
<h2 id="thank-you-microsoft-for-being-awesome">Thank you Microsoft for being awesome!</h2>
<p>Seriously, a special thanks to all of the Microsoft teams who work on .NET Core, ASP.NET Core, Visual Studio, Visual Studio Code, C#, F# and the entire goodness around it!!! Your products are awesome and because of your great work I enjoy working in the .NET eco-system more than ever before!</p>
<p>This is not me being sarcastic, I am dead serious, so thank you guys!</p>
<p>I hope when you read this you know that, despite all the negativity that Microsoft has to suck up every now and then, there are also a lot of developers who think you are doing an amazing job and that without you we would probably be some poor suckers who would have to work with Node.js or some other soul destroying technology!</p>
<h2 id="join-me-in-saying-thank-you">Join me in saying thank you</h2>
<p>If you are a .NET developer who loves working with C# or F#, who thinks that Visual Studio (Code) is one of the best IDEs, who thinks that ASP.NET Core and .NET Core are awesome new development stacks then please join me in saying thank you to these guys so that they know that all of their hard work is paying off.</p>
<p>Oh and before someone labels me a Microsoft fanboy or thinks that there's a hidden agenda behind this blog post then let me tell you this:</p>
<p><strong>Sometimes it just feels damn good to be nice.</strong></p>
<p>...something which I wish programmers would do much more for each other, particularly in such divisive times as today.</p>
https://dusted.codes/thank-you-microsoft-for-being-awesome
[email protected] (Dustin Moris Gorski)https://dusted.codes/thank-you-microsoft-for-being-awesome#disqus_threadTue, 24 Jan 2017 00:00:00 +0000https://dusted.codes/thank-you-microsoft-for-being-awesomemicrosoftaspnet-coredotnet-coreError Handling in ASP.NET Core<p>Almost two years ago I wrote a blog post on <a href="https://dusted.codes/demystifying-aspnet-mvc-5-error-pages-and-error-logging">demystifying ASP.NET MVC 5 error pages and error logging</a>, which became one of my most popular posts on this blog. At that time of writing the issue was that there were an awful lot of choices on how to deal with unhandled exceptions in ASP.NET MVC 5 and no clear guidance or recommendation on how to do it the right way.</p>
<p>Fortunately with ASP.NET Core the choices have been drastically reduced and there is also a much better <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/error-handling">documentation</a> on the topic itself. However, there's still a few interesting things to consider which I wanted to point out in this follow up blog post and clear up any remaining questions on ASP.NET Core error handling.</p>
<h2 id="aspnet-core--mvc">ASP.NET Core <> MVC</h2>
<p>First I thought it's worth mentioning that the relationship between <a href="https://www.asp.net/core">ASP.NET Core</a> and <a href="https://github.com/aspnet/Mvc">ASP.NET Core MVC</a> hasn't changed much since what it used to be in Classic ASP.NET.</p>
<p>ASP.NET Core remains the main underlying platform for building web applications in .NET Core while MVC is still an optional web framework which can be plugged into the ASP.NET Core pipeline. It's basically a NuGet library which sits on top of ASP.NET Core and offers a few additional features for the <a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller">Model-View-Controller design pattern</a>.</p>
<p>What that means in terms of error handling is that any exception handling capability offered by MVC will be limited to MVC. This will become much more apparent when we look at the ASP.NET Core architecture.</p>
<h2 id="aspnet-core-middleware">ASP.NET Core middleware</h2>
<p>ASP.NET Core is completely modular and the request pipeline is mainly defined by the installed middleware in an application.</p>
<p>For better demonstration let's create a new boilerplate MVC application and check out the <code>void Configure(..)</code> method inside the <code>Startup.cs</code> class file:</p>
<pre><code>public void Configure(
    IApplicationBuilder app,
    IHostingEnvironment env,
    ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseDatabaseErrorPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseStaticFiles();
    app.UseIdentity();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}</code></pre>
<p>Because that is a lot of boilerplate code for a simple web application I'll trim it down to the main points of interest:</p>
<pre><code>app.UseExceptionHandler("/Home/Error");
app.UseStaticFiles();
app.UseIdentity();
app.UseMvc(routes => ...);</code></pre>
<p>What you can see here is fairly self-explanatory, but there's a few key things to understand from this code. The <code>app.Use...()</code> (extension-) method calls enable several middleware by registering them with the <code>IApplicationBuilder</code> object. Each middleware is made responsible for invoking the next middleware in the request pipeline, which is why the order of the <code>app.Use...()</code> method calls matters.</p>
<p>For example this is a rough skeleton of the <a href="https://github.com/aspnet/StaticFiles/blob/master/src/Microsoft.AspNetCore.StaticFiles/StaticFileMiddleware.cs">StaticFileMiddleware</a>:</p>
<pre><code>public StaticFileMiddleware(RequestDelegate next, ...)
{
    // Some stuff before
    _next = next;
    // Some stuff after
}

public Task Invoke(HttpContext context)
{
    // A bunch of code to see if this middleware can
    // serve a static file that matches the HTTP request...

    // If not the code will eventually reach this line:
    return _next(context);
}</code></pre>
<p>I cut out some noise to highlight the usage of the <code>RequestDelegate</code> variable.</p>
<p>As you can see each middleware must accept a <code>RequestDelegate</code> object in the constructor and each middleware must implement a method of type <code>Task Invoke(HttpContext context)</code>.</p>
<p>The <code>RequestDelegate</code> is, as its name suggests, a delegate which represents the next middleware in the pipeline. ASP.NET Core defers the responsibility of invoking it to the current middleware itself. For example if the <code>StaticFileMiddleware</code> is not able to find a static file which matches the incoming HTTP request then it will invoke the next middleware by calling <code>return _next(context);</code> at the end. On the other hand if it was able to find the requested static file then it will return it to the client and never invoke the next or any subsequent middleware.</p>
<p>This is why the order of the <code>app.Use...()</code> method calls matters. When you think about it, the underlying pattern looks a little bit like an onion:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2017-01-19/31564566283_dca040a066_o.png" alt="aspnet-core-middleware-onion-architecture, Image by Dustin Moris Gorski">
<p>An HTTP request will travel from the top level middleware down to the last middleware, unless a middleware in between can satisfy the request and return an HTTP response earlier to the client. In contrast an unhandled exception travels from the bottom up. Beginning at the middleware where it got thrown, it bubbles up all the way to the top most middleware waiting for something to catch it.</p>
<p>In theory a middleware could also attempt to make changes to the response <em>after</em> it has invoked the next middleware, but this is normally not the case and I would advise against it, because it could result in an exception if the other middleware already wrote to the response.</p>
<h3 id="error-handling-should-be-the-first-middleware">Error handling should be the first middleware</h3>
<p>With that in mind it is clear that in order to catch any unhandled exception an error handling middleware should be the first in the pipeline. Only then can it guarantee a final catch if nothing else caught the exception before.</p>
<p>Because MVC is typically registered towards the end of the middleware pipeline it is also clear that exception handling features (like the infamous <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/filters#exception-filters">ExceptionFilters</a>) within MVC will not be able to catch every exception.</p>
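<p>In practice this simply means registering the error handling middleware before everything else in <code>Configure</code>. A minimal sketch, where <code>CustomErrorHandlerMiddleware</code> is a hypothetical placeholder for any exception handling middleware:</p>
<pre><code>public void Configure(IApplicationBuilder app)
{
    // Register the exception handling middleware first so that
    // it wraps every middleware registered after it and can catch
    // exceptions bubbling up from any of them.
    app.UseMiddleware<CustomErrorHandlerMiddleware>();

    app.UseStaticFiles();
    app.UseMvc();
}</code></pre>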
<p>For more information on middleware please check out the <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware">official documentation</a>.</p>
<h2 id="custom-exception-handlers">Custom Exception Handlers</h2>
<p>Now that middleware and exception handling hopefully makes sense I also wanted to quickly show how to create your own global exception handler in ASP.NET Core.</p>
<p>Even though there are already quite a few useful <a href="http://www.talkingdotnet.com/aspnet-core-diagnostics-middleware-error-handling/">exception handlers</a> available in the <a href="https://www.nuget.org/packages/Microsoft.AspNetCore.Diagnostics/">Microsoft.AspNetCore.Diagnostics</a> NuGet package, it might still make sense to create your own. For example one might want to have an exception handler which logs critical exceptions to <a href="https://sentry.io/welcome/">Sentry</a> by using Sentry's <a href="https://github.com/getsentry/raven-csharp">Raven Client for .NET</a>, or one might want to implement an integration with a bug tracking tool and log a new ticket for every <code>NullReferenceException</code> that gets thrown. Another option would be an integration with <a href="https://elmah.io/">elmah.io</a>.</p>
<p>There are many good reasons why someone might want to create additional exception handlers, and it might even be useful to have multiple exception handlers registered at once. For example the first exception handler logs a ticket in a bug tracking system and re-throws the original exception. The next exception handler logs the error in ELMAH and re-throws the exception again. The final exception handler catches the exception and returns a friendly error page to the client. By having each exception handler focus on a single responsibility they automatically become more re-usable across multiple projects, and it also makes it possible to use different combinations in different environments (think dev/staging/production).</p>
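<p>The chain described above can be modelled with plain delegates. The following is a conceptual sketch only (the handler names are made up for illustration), showing how an exception travels outwards through re-throwing handlers until the outermost one swallows it:</p>

```csharp
using System;
using System.Collections.Generic;

public static class ChainedHandlersDemo
{
    public static void Main()
    {
        var log = new List<string>();

        // The innermost step: a failing request handler.
        Action app = () => throw new Exception("boom");

        // A handler which does its work and re-throws, so that outer
        // handlers still see the original exception.
        Func<string, Action, Action> rethrowing = (name, next) => () =>
        {
            try { next(); }
            catch { log.Add(name); throw; }
        };

        // The outermost handler swallows the exception and "renders" a page.
        Func<Action, Action> friendlyPage = next => () =>
        {
            try { next(); }
            catch { log.Add("friendly error page"); }
        };

        // friendlyPage wraps elmah, which wraps the bug tracker, which wraps the app.
        var pipeline =
            friendlyPage(rethrowing("elmah", rethrowing("bug tracker", app)));

        pipeline();
        Console.WriteLine(string.Join(" -> ", log));
        // bug tracker -> elmah -> friendly error page
    }
}
```

The exception is seen first by the handler closest to the failing code and last by the handler registered first, which is exactly why a catch-all error page handler belongs at the very front of the pipeline.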
<p>A good example of writing your own exception handling middleware is the default <a href="https://github.com/aspnet/Diagnostics/blob/master/src/Microsoft.AspNetCore.Diagnostics/ExceptionHandler/ExceptionHandlerMiddleware.cs">ExceptionHandlerMiddleware</a> in ASP.NET Core.</p>
<p>A default exception handler boilerplate would look like this:</p>
<pre><code>using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

namespace SomeApp
{
    public sealed class CustomExceptionHandlerMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger _logger;

        public CustomExceptionHandlerMiddleware(
            RequestDelegate next,
            ILoggerFactory loggerFactory)
        {
            _next = next;
            _logger = loggerFactory
                .CreateLogger<CustomExceptionHandlerMiddleware>();
        }

        public async Task Invoke(HttpContext context)
        {
            try
            {
                await _next(context);
            }
            catch (Exception ex)
            {
                try
                {
                    // Do custom stuff here, which could be as simple
                    // as calling _logger.LogError.

                    // If you don't want to re-throw the original
                    // exception then return early instead:
                    // return;
                }
                catch (Exception ex2)
                {
                    _logger.LogError(
                        0, ex2,
                        "An exception was thrown attempting " +
                        "to execute the error handler.");
                }

                // Otherwise this handler will
                // re-throw the original exception
                throw;
            }
        }
    }
}</code></pre>
<p>In addition to the <code>RequestDelegate</code> the constructor also accepts an <code>ILoggerFactory</code> which can be used to instantiate a new <code>ILogger</code> object.</p>
<p>In the <code>Task Invoke(HttpContext context)</code> method the error handler does nothing other than immediately calling the next middleware. Only when an exception is thrown does it come into action by capturing it in the <code>catch</code> block. What you put into the <code>catch</code> block is up to you, but it is good practice to wrap any non-trivial code in a second try-catch block and fall back to basic logging if everything else is falling apart.</p>
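<p>To make this handler the first middleware it has to be registered before everything else. A minimal sketch of a <code>Configure</code> method, assuming the standard <code>UseMiddleware&lt;T&gt;</code> extension method from ASP.NET Core:</p>

```csharp
// Sketch: register the custom handler first so that it wraps every
// other middleware, including MVC at the end of the pipeline.
public void Configure(IApplicationBuilder app)
{
    app.UseMiddleware<CustomExceptionHandlerMiddleware>();

    // ...all other middleware (static files, MVC, etc.) comes afterwards
    app.UseMvc();
}
```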
<p>I hope all of this made sense and that this blog post was useful. Personally I find it extremely nice to see how well ASP.NET Core has evolved from its predecessor. If you have any more questions just drop me a comment below.</p>
https://dusted.codes/error-handling-in-aspnet-core
[email protected] (Dustin Moris Gorski)https://dusted.codes/error-handling-in-aspnet-core#disqus_threadThu, 19 Jan 2017 00:00:00 +0000https://dusted.codes/error-handling-in-aspnet-coreaspnet-coremvcerror-pageserror-loggingRunning Suave in ASP.NET Core (and on top of Kestrel)<p>Ho ho ho, happy F# Advent my friends! This is my blog post for the <a href="https://sergeytihon.wordpress.com/2016/10/23/f-advent-calendar-in-english-2016/">F# Advent Calendar in English 2016</a>. First a quick thanks to <a href="https://twitter.com/theburningmonk">Yan Cui</a> who has pointed out this calendar to me last year and a big thanks to <a href="https://twitter.com/sergey_tihon">Sergey Tihon</a> who is organising this blogging event and was kind enough to reserve me a spot this year.</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-12-15/31650186205_f39501871c_o.png" alt="santa-suave, Image by Dustin Moris Gorski">
<p>In this blog post I wanted to write about two technologies which I am particularly excited about: <a href="https://suave.io/">Suave</a> and <a href="https://www.asp.net/core">ASP.NET Core</a>. Both are frameworks for building web applications, both are written in .NET and both are open source and yet they are very different. <a href="https://github.com/SuaveIO/suave">Suave is a lightweight web server</a> written entirely in F# and belongs to the family of micro frameworks similar to <a href="http://nancyfx.org/">NancyFx</a>. <a href="https://github.com/aspnet/Home">ASP.NET Core</a> is Microsoft's new cloud optimised web framework which has been built from the ground up on top of <a href="https://www.microsoft.com/net/core">.NET Core</a> and all of its goodness. Both are fairly new cutting edge technologies and both are extremely fun to work with.</p>
<p>What I like the most about Suave is that it's written in F# for F#. It is really well designed and embraces functional concepts like <a href="http://fsharpforfunandprofit.com/rop/">railway oriented programming</a> in its core architecture. Lately I've been a big fan of functional programming and being able to build web applications in a functional way is not only very productive but also a heap of fun. ASP.NET Core is object oriented and more closely related to C#, but nonetheless an extraordinary new web framework. After more than 14 years of developing (the old) ASP.NET stack Microsoft has completely revamped the platform and built something new which is <a href="https://www.ageofascent.com/2016/02/18/asp-net-core-exeeds-1-15-million-requests-12-6-gbps/">extremely fast</a> and flexible. I love <a href="https://github.com/aspnet/KestrelHttpServer">Kestrel</a>, I love how ASP.NET Core is completely modular and extendable (via <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware">middleware</a>) and I love how it is cross platform compatible and supported by Microsoft (Mono you have served us well but I am glad to move on now). There's more than one good reason to go with either framework and that's why I really wanted to combine the two.</p>
<p>Ideally I would like to continue building web applications with Suave in F# and then plug them into the ASP.NET Core pipeline to run them on top of Kestrel and benefit from both worlds.</p>
<h2 id="suave-inside-aspnet-core-in-theory">Suave inside ASP.NET Core in theory</h2>
<p>In order to better understand Suave let's have a quick look at a simple web application:</p>
<pre><code>open System
open Suave
open Suave.Successful
open Suave.Operators
open Suave.RequestErrors
open Suave.Filters

let simpleApp =
    choose [
        GET >=> choose [
            path "/" >=> OK "Hello world from Suave."
            path "/ping" >=> OK "pong"
        ]
        NOT_FOUND "404 - Not found."
    ]

[<EntryPoint>]
let main argv =
    startWebServer defaultConfig simpleApp
    0</code></pre>
<p>Even in this simple example you can clearly see the core concept behind Suave. An application is always an assembly of one or more web parts. A <code>WebPart</code> is a function which takes an <code>HttpContext</code> and returns an <code>option</code> of <code>HttpContext</code> wrapped in an async workflow. Through combinators such as <code>choose</code> or <code>>=></code> (and many others) one can compose a complex web application with routing, model binding, view engines and anything else that someone might want to do. At the end there is one top level function of type <code>WebPart</code> which represents the entire application. In this example this function is called <code>simpleApp</code>.</p>
<p>In theory the one thing required to plug a Suave web app into ASP.NET Core would be to take an incoming HTTP request from ASP.NET Core and convert it into an <code>HttpContext</code> in Suave, execute the top level web part, and then translate the resulting <code>HttpContext</code> back into an ASP.NET Core response:</p>
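<p>The translation step can be modelled with plain functions. The following is a conceptual sketch only, using strings in place of the real Suave and ASP.NET Core context types: the adapter runs the web part and only falls through to the next middleware when it produces no result (Suave's <code>None</code>):</p>

```csharp
using System;
using System.Threading.Tasks;

public static class AdapterDemo
{
    // A WebPart-style function: returns a response for handled requests
    // and null (standing in for Suave's None) for everything else.
    public delegate Task<string> WebPart(string path);

    public static async Task Main()
    {
        WebPart suaveApp = path =>
            Task.FromResult(path == "/ping" ? "pong" : null);

        // The adapter middleware: run the WebPart and only fall through
        // to the next middleware if it didn't produce a response.
        async Task<string> Handle(string path, Func<string, Task<string>> next)
        {
            var result = await suaveApp(path);
            return result ?? await next(path);
        }

        Console.WriteLine(await Handle("/ping", _ => Task.FromResult("404")));
        Console.WriteLine(await Handle("/nope", _ => Task.FromResult("404")));
    }
}
```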
<img src="https://cdn.dusted.codes/images/blog-posts/2016-12-15/31534152341_073c5b4729_o.png" alt="suave-in-aspnetcore-concept, Image by Dustin Moris Gorski">
<p>The other thing which you get with Suave is a self hosted web server which is built into the framework and is the traditional way of starting a Suave web application. The <code>startWebServer</code> function takes a <code>SuaveConfig</code> object and the top level <code>WebPart</code> as input parameters. The config object allows web server specific configuration such as HTTP bindings, request quotas, timeout limits and many more things to be set.</p>
<p>When putting a Suave app into ASP.NET Core then it would be running on a different web server under the hood (Kestrel by default) and it wouldn't necessarily make sense to use an existing <code>SuaveConfig</code> in this scenario. Considering that ASP.NET Core offers other natural ways of configuring server settings, I think it is fair to skip the <code>SuaveConfig</code> when merging Suave into ASP.NET Core and mainly focusing on a smooth <code>WebPart</code> integration.</p>
<h2 id="suave-inside-aspnet-core-in-practice">Suave inside ASP.NET Core in practice</h2>
<p>Putting theory into practice I thought I could make it happen, and when a programmer says that it usually means googling for an existing solution first. I was lucky to discover <a href="https://github.com/Krzysztof-Cieslak/Suave.Kestrel">Suave.Kestrel</a>, a <em>super early alpha version</em> of the above concept written by <a href="https://github.com/Krzysztof-Cieslak">Krzysztof Cieślak</a>. Krzysztof is the developer behind the <a href="https://github.com/ionide">Ionide</a> project which makes F# development in Visual Studio Code possible in the first place, and therefore a massive thanks to him and his great contributions as well!</p>
<p>Even though this project was a good starting point there was still loads of work left to do. First I started off by using the existing code and trying to extend it, but I quickly realised that I was fighting more with the tools than writing any code, which led me to the decision of creating an entirely new project written in C#. Why C#? Because the Visual Studio tooling for F# projects in .NET Core is non-existent (at least at the moment). As much as I love F#, if I cannot properly debug or reason about my code then I would rather switch to C# and get the job done.</p>
<p>However, as a seasoned C# developer that was not a big problem, and in the end it wouldn't even matter whether the library was written in C#, F# or VB.NET as long as it allowed an easy integration from an F# point of view. Moments like this make me really appreciate the flexibility of the .NET framework.</p>
<h2 id="introducing-suaveaspnetcore">Introducing Suave.AspNetCore</h2>
<p>After my initial start on the project it took me another 3 months (mainly because I had absolutely no time) to finally release the first version of <a href="https://github.com/dustinmoris/Suave.AspNetCore">Suave.AspNetCore</a> and make it available. Since yesterday anyone can install <a href="https://www.nuget.org/packages/Suave.AspNetCore/">Suave.AspNetCore</a> as a NuGet package for a .NET Core application.</p>
<p>I decided to name the package <code>Suave.AspNetCore</code> because I thought it was a more representative name for what the NuGet package has to offer. While this library makes it perfectly possible to run Suave on top of Kestrel it is certainly not limited to it. <code>Suave.AspNetCore</code> gives a way of plugging a Suave <code>WebPart</code> into the ASP.NET Core pipeline and running it in any environment of one's desire. In theory Suave can run alongside NancyFx and ASP.NET Core MVC in the same ASP.NET Core application and let the middleware decide which framework is best suited to satisfy an incoming request.</p>
<h3 id="current-release-information">Current release information</h3>
<p>The current version should be able to deal with any incoming web request which can be handled by a Suave <code>WebPart</code>. One thing that is missing (but already in the works) is support for Suave's web socket implementation.</p>
<p>I shall also note that Suave and F# itself don't have an official stable release for .NET Core yet and therefore the project as a whole should be taken with some caution.</p>
<h2 id="suaveaspnetcore-in-action">Suave.AspNetCore in action</h2>
<p>Ok, enough of the talk, let's look at a demo.</p>
<p>First I'll start by using one of my existing <a href="https://github.com/dustinmoris/AspNetCoreFSharp">F# ASP.NET Core project templates</a> and <a href="https://blogs.msdn.microsoft.com/dotnet/2016/11/16/announcing-net-core-1-1/#user-content-upgrading-existing-net-core-10-projects">upgrade it to .NET Core 1.1</a>.</p>
<p>Then I add <code>Suave</code>, <code>Suave.AspNetCore</code> and <code>Newtonsoft.Json</code> to the dependencies:</p>
<pre><code>"dependencies": {
  "Microsoft.FSharp.Core.netcore": "1.0.0-alpha-*",
  "Microsoft.AspNetCore.Diagnostics": "1.1.0",
  "Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
  "Microsoft.Extensions.Logging.Console": "1.1.0",
  "Suave": "2.0.0-*",
  "Suave.AspNetCore": "0.1.0",
  "Newtonsoft.Json": "9.0.1"
}</code></pre>
<p>Next I move on to the <code>Startup.fs</code> file and create a Suave web application:</p>
<pre><code>module App =

    let catchAll =
        fun (ctx : HttpContext) ->
            let json =
                JsonConvert.SerializeObject(
                    ctx.request,
                    Formatting.Indented)
            OK json
            >=> Writers.setMimeType "application/json"
            <| ctx</code></pre>
<p>This app is very basic. You can see that it is a single web part which uses Json.NET to serialize the incoming request and then returns the JSON text as a successful response with a mime type of <code>application/json</code>.</p>
<p>It is not hugely interesting but it is a nice function which will handle every incoming web request and output the parsed request in a more or less readable way. It's at least a good way of quickly verifying that the incoming web request has been correctly converted into a Suave <code>HttpContext</code> object.</p>
<p>Finally I go to the <code>Startup</code> class and hook up the Suave <code>catchAll</code> web app into the ASP.NET Core pipeline via a middleware:</p>
<pre><code>type Startup() =
    member __.Configure (app : IApplicationBuilder)
                        (env : IHostingEnvironment)
                        (loggerFactory : ILoggerFactory) =
        app.UseSuave(App.catchAll) |> ignore</code></pre>
<p>Save all, <code>dotnet restore</code>, <code>dotnet build</code> and <code>dotnet run</code>:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-12-15/31648972855_6ca7d63719_o.png" alt="running-a-suave-aspnetcore-app, Image by Dustin Moris Gorski">
<p>If everything is correct then going to <code>http://localhost:5000/</code> should return a successful response like this:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-12-15/31649057355_63fb82d256_o.png" alt="suave-in-aspnetcore-simple-get-request-result, Image by Dustin Moris Gorski">
<p>You can <a href="https://github.com/dustinmoris/Suave.AspNetCore/tree/master/test/Suave.AspNetCore.App">check out the sample app in GitHub</a> and try it yourself!</p>
<h3 id="differences-between-vanilla-suave-and-suaveaspnetcore">Differences between vanilla Suave and Suave.AspNetCore</h3>
<p>After running a few tests of my own I noticed a few minor differences.</p>
<p>First I noticed that the original Suave web server converts all HTTP headers into lower case values. For example <code>Content-Type: text/html</code> would be stored as <code>content-type: text/html</code> in Suave's HTTP header collection. In contrast ASP.NET Core preserves the original casing. The <code>Suave.AspNetCore</code> middleware matches the original Suave behaviour by default, but this can be easily overridden by setting the <code>preserveHttpHeaderCasing</code> parameter to <code>true</code> in the <code>UseSuave</code> method:</p>
<pre><code>app.UseSuave(App.catchAll, true) |> ignore</code></pre>
<p>Another difference which I found was that Suave always sets the <code>host</code> variable to <code>127.0.0.1</code> for local requests, even when I explicitly call the API with <code>http://localhost:5000/</code>. I wasn't able to find out why or where this is happening and if there is a good reason for it. In this case I didn't align with the original Suave behaviour and kept the values provided by ASP.NET Core.</p>
<p>Other than this I haven't found any big differences and hope that it is (mostly) bug free. The project is open source and I am open to ideas, help or suggestions of any kind.</p>
<p>Merry Christmas everyone!</p>
https://dusted.codes/running-suave-in-aspnet-core-and-on-top-of-kestrel
[email protected] (Dustin Moris Gorski)https://dusted.codes/running-suave-in-aspnet-core-and-on-top-of-kestrel#disqus_threadThu, 15 Dec 2016 00:00:00 +0000https://dusted.codes/running-suave-in-aspnet-core-and-on-top-of-kestrelfsharpsuaveaspnet-corekestreldotnet-coreBuilding and shipping a .NET Core application with Docker and TravisCI<p>With the .NET Core ecosystem slowly maturing since the <a href="http://www.hanselman.com/blog/NETCore10IsNowReleased.aspx">first official release</a> this year I started to increasingly spend more time playing and building software with it.</p>
<p>I am a big fan of managed CI systems like <a href="https://www.appveyor.com/">AppVeyor</a> and <a href="https://travis-ci.org/">TravisCI</a> and one of the first things I wanted to work out was how easily I could build and ship a <a href="https://www.microsoft.com/net/core#windows">.NET Core</a> application with one of these tools. This was a major consideration for me, because I would have been less interested in building a .NET Core app if the deployment story wasn't great yet, and I am not keen on building my own CI server as I don't think this is the best use of a developer's time. Luckily I found that the deployment experience and TravisCI integration is extremely easy and intuitive, which is what I will be covering in this blog post today.</p>
<p>Up until now I was more or less tied down to AppVeyor as the only vendor which uses Windows Server VMs for its build nodes and therefore the only viable option of building full .NET framework applications. TravisCI and <a href="https://circleci.com/">other popular CI platforms</a> use Linux nodes for their build jobs and .NET support was limited to the <a href="http://www.mono-project.com/">Mono framework</a> at most. However, with .NET Core being the first officially Microsoft supported cross platform framework my options have suddenly increased from one to many. TravisCI already offered a good integration with Mono and now that <a href="https://docs.travis-ci.com/user/languages/csharp/#Testing-Against-Mono-and-.NET-Core">.NET Core is part of their default offering</a> I was keen to give it a shot.</p>
<p>In this blog post I will be covering what I believe is a typical deployment scenario for a .NET Core application which will be shipped as a Docker image to either the <a href="https://hub.docker.com/">official Docker Hub</a> or a private registry.</p>
<h2 id="1-creating-a-net-core-application">1. Creating a .NET Core application</h2>
<p>First I need to create a .NET Core application. For the purpose of this blog post I am just going to create a default hello world app and you can skip this step for the most part if you are already familiar with the framework. For everyone else I will quickly skim through the creation of a new .NET Core application.</p>
<p>Let's open a Windows command line prompt and navigate to <code>C:\temp</code> and create a new folder called <code>NetCoreDemo</code>:</p>
<pre><code>cd C:\temp
mkdir NetCoreDemo
cd NetCoreDemo</code></pre>
<p>Inside that folder I can run <code>dotnet new --type console</code> to create a new hello world console application:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-10-19/30273420421_c02db77a5e_o.png" alt="dotnet-new-console-app, Image by Dustin Moris Gorski" class="two-third-width">
<p>For a full reference of the <code>dotnet new</code> command check out the <a href="https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-new">official documentation</a>.</p>
<p>If you don't have the .NET Core CLI available you need to install the <a href="https://www.microsoft.com/net/core#windows">.NET Core SDK for Windows</a> (or your operating system of choice).</p>
<p>After the command has completed I can run <code>dotnet restore</code> to restore all dependencies followed by a <code>dotnet run</code> which will build and subsequently start the hello world application:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-10-19/30273420621_c451bc8473_o.png" alt="dotnet-restore-and-run, Image by Dustin Moris Gorski">
<p>This is literally all I had to do to get a simple C# console app running and therefore will stop at this point and move on to the next part where I will set up a build and deployment pipeline in TravisCI.</p>
<p>If you want to learn more about building .NET Core applications then I would highly recommend to check out the <a href="https://docs.asp.net/en/latest/tutorials/index.html">official ASP.NET Core tutorials</a> or read <a href="https://www.asp.net/community/articles">other great blog posts</a> by developers who have covered this topic extensively.</p>
<h2 id="2-setting-up-travisci-for-building-a-net-core-application">2. Setting up TravisCI for building a .NET Core application</h2>
<p>If you are not familiar with TravisCI yet (or a similar platform), then please follow the instructions to <a href="https://docs.travis-ci.com/user/for-beginners">set up TravisCI with your source control repository</a> and add a <code>.travis.yml</code> file to your project repository. This file will contain the entire build configuration for a project.</p>
<p>The first line in the <code>.travis.yml</code> file should be the <code>language</code> declaration. In our case this will be <code>language: csharp</code> which is the correct setting for any .NET language (including VB.NET and F#).</p>
<p>Next we need to set the correct environment type.</p>
<p>The standard TravisCI build environment runs on an <a href="https://docs.travis-ci.com/user/ci-environment/#Virtualization-environments">Ubuntu 12.04 LTS Server Edition 64 bit</a> distribution. This is no good for us because <a href="https://www.microsoft.com/net/core#ubuntu">.NET Core only supports Ubuntu 14.04 or higher</a>. Fortunately there is a new <a href="https://docs.travis-ci.com/user/trusty-ci-environment/">Ubuntu 14.04 (aka Trusty) beta environment</a> available. In order to make use of this new beta environment we need to enable <code>sudo</code> and set the <code>dist</code> setting to <code>trusty</code>:</p>
<pre><code>sudo: required
dist: trusty</code></pre>
<p>Next I want to specify what version of Mono and .NET Core I want to have installed when running my builds. At the moment I am only interested in .NET Core so I am going to skip Mono and set the <code>dotnet</code> setting to the currently latest SDK:</p>
<pre><code>language: csharp
sudo: required
dist: trusty
<strong>mono: none
dotnet: 1.0.0-preview2-003131</strong></code></pre>
<p>The next step is neither required nor necessarily recommended, but more of my personal preference: I disable the <a href="https://docs.microsoft.com/en-us/dotnet/articles/core/tools/telemetry">.NET Core Tools Telemetry</a> by setting the <code>DOTNET_CLI_TELEMETRY_OPTOUT</code> environment variable to <code>1</code> during the <code>install</code> step of the TravisCI lifecycle:</p>
<pre><code>install:
- export DOTNET_CLI_TELEMETRY_OPTOUT=1</code></pre>
<p>After that I have to set access permissions for two script files in the <code>before_script</code> step:</p>
<pre><code>before_script:
- chmod a+x ./build.sh
- chmod a+x ./deploy.sh</code></pre>
<p>The <a href="https://en.wikipedia.org/wiki/Chmod">chmod command</a> changes the access permissions of my build and deployment scripts to allow execution by any user on the system. TravisCI recommends to set <code>chmod ugo+x</code>, which is effectively the same as <code>chmod a+x</code>, where <code>a</code> is a shortcut for <code>ugo</code>.</p>
<p>Following <code>before_script</code> I am going to set the <code>script</code> step which is responsible for the actual build instructions:</p>
<pre><code>script:
- ./build.sh</code></pre>
<p>At last I am going to define a <code>deploy</code> step as well, which will automatically trigger only after the <code>script</code> step has successfully completed:</p>
<pre><code>deploy:
- provider: script
  script: ./deploy.sh $TRAVIS_TAG $DOCKER_USERNAME $DOCKER_PASSWORD
  skip_cleanup: true
  on:
    tags: true</code></pre>
<p>Here I am essentially calling a second script called <code>deploy.sh</code> and passing in three environment variables which I will explain in a moment. Additionally I defined the trigger to deploy for tags only. You can set up <a href="https://docs.travis-ci.com/user/deployment#Conditional-Releases-with-on%3A">different deploy conditions</a>, but in most cases you either want to deploy on each push to <code>master</code> or when a commit has been tagged. I chose the latter, because sometimes I want to publish an alpha or beta version of my application which is likely to be on a different branch than <code>master</code> and therefore the tag condition made more sense in my case.</p>
<p>The <code>TRAVIS_TAG</code> variable is a <a href="https://docs.travis-ci.com/user/environment-variables/#Default-Environment-Variables">default environment variable</a> which gets set by TravisCI for every build which has been triggered by a tag push and will contain the string value of the tag. <code>DOCKER_USERNAME</code> and <code>DOCKER_PASSWORD</code> are two custom <a href="https://docs.travis-ci.com/user/environment-variables/#Defining-Variables-in-Repository-Settings">environment variables which I have set through the UI</a> to follow TravisCI's recommendation to keep sensitive data secret:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-10-19/30401215765_13d4f6937d_o.png" alt="travisci-settings-page, Image by Dustin Moris Gorski">
<p>Another option would have been to <a href="https://docs.travis-ci.com/user/environment-variables/#Encrypting-environment-variables">encrypt environment variables</a> in the <code>.travis.yml</code> file to keep those values secret. Both options are valid as far as I know and it is up to you which one you prefer.</p>
<h4 id="tip">Tip:</h4>
<p>If you have to store access credentials to 3rd party platforms like a private registry or the official Docker Hub inside TravisCI then it is highly recommended to register a dedicated user for TravisCI and add that user as an additional collaborator to your Docker Hub repository, so that you can easily limit or revoke access when required:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-10-19/30347448781_1f15e0ded6_o.png" alt="docker-hub-collaborators, Image by Dustin Moris Gorski" class="two-third-width">
<p>After defining the <code>script</code> and <code>deploy</code> step I am basically done with the <code>.travis.yml</code> file.</p>
<p>Note that I purposefully didn't place the individual build and deployment instructions directly into the <code>script</code> step, because I wanted to separate the actual build instructions from the TravisCI configuration.</p>
<p>This has a few advantages:</p>
<ul>
<li>There is a clear distinction between environment setup and the actual build steps which are required to build and deploy the project. The <code>.travis.yml</code> file is the definition for the build environment and the <code>build.sh</code> and <code>deploy.sh</code> script files are the recipe to build and deploy an application.</li>
<li>The build and deploy scripts are completely independent from the CI platform and I could easily switch the CI provider at any given time.</li>
<li>The actual build and deployment scripts can be executed from anywhere. Both are generic bash scripts which developers can run on their personal machines to build, test and deploy a project.</li>
</ul>
<p>The last point is probably the most important in my view. Even though managed CI systems are super easy to integrate with, it can be a pain if you are tied down to a particular provider. Imagine you have a new developer joining your team and the first question they ask is how to build your project. It would be tedious to tell them to open up the <code>.travis.yml</code> file and follow all the instructions manually when you could just tell them to run <code>build.sh</code> instead.</p>
<p>If I put everything together then the final <code>.travis.yml</code> file will look something like this:</p>
<pre><code>language: csharp
sudo: required
dist: trusty
mono: none
dotnet: 1.0.0-preview2-003131

install:
- export DOTNET_CLI_TELEMETRY_OPTOUT=1

before_script:
- chmod a+x ./build.sh
- chmod a+x ./deploy.sh

script:
- ./build.sh

deploy:
- provider: script
  script: ./deploy.sh $TRAVIS_TAG $DOCKER_USERNAME $DOCKER_PASSWORD
  skip_cleanup: true
  on:
    tags: true</code></pre>
<p>One last thing that I wanted to mention is that even though I said we are going to use Docker to deploy the project I didn't have to <a href="https://docs.travis-ci.com/user/docker/">specify Docker as an extra service</a> anywhere in the <code>.travis.yml</code> file. This is because unlike the standard TravisCI environment the Trusty beta environment comes with Docker pre-configured out of the box.</p>
<h2 id="3-building-and-deploying-a-net-core-app-from-a-bash-script">3. Building and deploying a .NET Core app from a bash script</h2>
<p>Now that the build environment is set up in the <code>.travis.yml</code> file and we deferred the entire build and deployment logic to external bash scripts we have to actually create those scripts to complete the puzzle.</p>
<h4 id="buildsh">build.sh</h4>
<p>The <code>build.sh</code> script is going to be very quick:</p>
<pre><code>#!/bin/bash
set -ev
dotnet restore
dotnet test
dotnet build -c Release</code></pre>
<p>The first line is not necessarily required, but it is good practice to include <code>#!/bin/bash</code> at the top of the script so the shell knows which interpreter to run. The second line tells the shell to exit immediately if a command fails with a non-zero exit code (<code>set -e</code>) and to print shell input lines as they are read (<code>set -v</code>).</p>
<p>The last three commands use the normal <code>dotnet</code> CLI to <a href="https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-restore">restore</a>, <a href="https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-test">test</a> and <a href="https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-build">build</a> the application.</p>
<h4 id="deploysh">deploy.sh</h4>
<p>The <code>deploy.sh</code> script is going to be fairly easy as well. The first two lines are going to be the same as in <code>build.sh</code> and then I am assigning the three parameters that we are passing into the script to named variables:</p>
<pre><code>#!/bin/bash
set -ev
TAG=$1
DOCKER_USERNAME=$2
DOCKER_PASSWORD=$3</code></pre>
<p>Next I am going to use the <code>dotnet</code> CLI <a href="https://docs.microsoft.com/en-us/dotnet/articles/core/tools/dotnet-publish">publish</a> command to package the application and all of its dependencies into the publish folder:</p>
<pre><code>dotnet publish -c Release</code></pre>
<p>Now that everything is packaged up I can use the <code>docker</code> CLI to build an image with the supplied tag and the <code>latest</code> tag:</p>
<pre><code>docker build -t repository/project:$TAG bin/Release/netcoreapp1.0/publish/.
docker tag repository/project:$TAG repository/project:latest</code></pre>
<p>Make sure that <code>repository/project</code> matches your own repository and project name.</p>
<p>Lastly I have to authenticate with the official Docker registry and push both images to the hub:</p>
<pre><code>docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
docker push repository/project:$TAG
docker push repository/project:latest</code></pre>
<p>And with that I have finished the continuous deployment setup with Docker and TravisCI. The final <code>deploy.sh</code> looks like this:</p>
<pre><code>#!/bin/bash
set -ev
TAG=$1
DOCKER_USERNAME=$2
DOCKER_PASSWORD=$3
# Create publish artifact
dotnet publish -c Release src
# Build the Docker images
docker build -t repository/project:$TAG src/bin/Release/netcoreapp1.0/publish/.
docker tag repository/project:$TAG repository/project:latest
# Login to Docker Hub and upload images
docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
docker push repository/project:$TAG
docker push repository/project:latest</code></pre>
<h4 id="tip-1">Tip:</h4>
<p>Some projects follow a naming convention where version tags begin with a lowercase <code>v</code> in git, for example <code>v1.0.0</code>, but want to remove the <code>v</code> from the Docker image tag. In that case you can use this additional snippet to create a variable called <code>SEMVER</code> which will be the same as <code>TAG</code> without the leading <code>v</code>:</p>
<pre><code># Remove a leading v from the major version number (e.g. if the tag was v1.0.0)
IFS='.' read -r -a tag_array &lt;&lt;&lt; "$TAG"
MAJOR="${tag_array[0]//v}"
MINOR=${tag_array[1]}
BUILD=${tag_array[2]}
SEMVER="$MAJOR.$MINOR.$BUILD"</code></pre>
<p>Place that snippet after the <code>dotnet publish</code> command in the <code>deploy.sh</code> and use <code>$SEMVER</code> instead of <code>$TAG</code> when building and publishing the Docker images.</p>
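<p>Run in isolation with an example tag, the snippet behaves as follows (the <code>TAG</code> value here is just a stand-in for the git tag passed to the script):</p>

```shell
#!/bin/bash
# Example tag as it would arrive from git
TAG="v1.0.0"

# Split the tag on dots and strip the leading "v" from the major part
IFS='.' read -r -a tag_array <<< "$TAG"
MAJOR="${tag_array[0]//v}"   # "v1" -> "1"
MINOR=${tag_array[1]}        # "0"
BUILD=${tag_array[2]}        # "0"
SEMVER="$MAJOR.$MINOR.$BUILD"

echo "$SEMVER"               # prints 1.0.0
```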
<p>If you want to see a full working example you can check out <a href="https://github.com/dustinmoris/NewsHacker">one of my open source projects</a> where I use this setup to publish a Docker image of an F# .NET Core application.</p>
https://dusted.codes/building-and-shipping-a-dotnet-core-application-with-docker-and-travisci
[email protected] (Dustin Moris Gorski)https://dusted.codes/building-and-shipping-a-dotnet-core-application-with-docker-and-travisci#disqus_threadWed, 19 Oct 2016 00:00:00 +0000https://dusted.codes/building-and-shipping-a-dotnet-core-application-with-docker-and-traviscidotnet-coretraviscidockerLoad testing a Docker application with JMeter and Amazon EC2<p>A couple of months ago I blogged about <a href="https://dusted.codes/jmeter-load-testing-from-a-continuous-integration-build">JMeter load testing from a continuous integration build</a> and gave a few tips and tricks on how to get the most out of automated load tests. In this blog post I would like to go a bit more hands on and show how to manually load test a Docker application with JMeter and the help of <a href="https://aws.amazon.com/">Amazon Web Services</a>.</p>
<p>I will be launching two <a href="https://aws.amazon.com/ec2/">Amazon EC2 instances</a> to conduct a single load test. One instance will host a <a href="https://www.docker.com/">Docker</a> application and the other the <a href="http://jmeter.apache.org/">JMeter load test tool</a>. The benefit of this setup is that Docker and JMeter have their own dedicated resources and I can load test the application in isolation. It also allows me to quickly tear down the Docker instance and vertically scale it up or down to measure the impact of it.</p>
<h2 id="launching-a-docker-vm">Launching a Docker VM</h2>
<p>First I will create a new EC2 instance to host the Docker container. The easiest way of doing this is to go through the online wizard, select the Ubuntu 14.04 base image and paste the following bash script into the user data field to automatically install the Docker service during launch:</p>
<pre><code>#!/bin/bash
# Install Docker
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo bash -c 'echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" >> /etc/apt/sources.list.d/docker.list'
sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) -y
sudo apt-get install apparmor
sudo apt-get install docker-engine -y
sudo service docker start
# Run [your] Docker container
sudo docker run -p 8080:8888 dustinmoris/docker-demo-nancy:0.2.0
</code></pre>
<p>At the end of the script I added a <code>docker run</code> command to auto start the container which runs my application under test. Replace this with your own container when launching the instance.</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-09-27/29654177530_6e5a15a96d_o.png" alt="aws-launch-ec2-advanced-details, Image by Dustin Moris Gorski">
<p>Simply click through the rest of the wizard and a few minutes later you should have a running Ubuntu VM with Docker and your application container running inside it.</p>
<p>Make sure to map a port from the container to the host and open this port for inbound traffic. For example if I launched my container with the flag <code>-p 8080:8888</code> then I need to add the port 8080 to the inbound rules of the security group which is associated with this VM.</p>
<h2 id="launching-a-jmeter-vm">Launching a JMeter VM</h2>
<p>Next I am going to create a JMeter instance by going through the wizard for a second time. Just as before I am using Ubuntu 14.04 as the base image and the user data field to install everything I need during launch:</p>
<pre><code>#!/bin/bash
# Install Java 7
sudo apt-get install openjdk-7-jre-headless -y
# Install JMeter
wget -c http://ftp.ps.pl/pub/apache//jmeter/binaries/apache-jmeter-3.0.tgz -O jmeter.tgz
tar -xf jmeter.tgz</code></pre>
<p>Don't forget to <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html">open the default SSH port 22</a> in the security group of the JMeter instance.</p>
<p>Only a short time later both VMs are up and running, with Docker and JMeter fully operational and ready for some load tests.</p>
<h2 id="running-jmeter-tests">Running JMeter tests</h2>
<p>Running load tests from the JMeter instance is fairly straightforward now. I am going to remote connect to the JMeter instance, copy a JMeter test file onto the machine and then launch the <a href="http://jmeter.apache.org/usermanual/get-started.html#non_gui">JMeter command line tool</a> to run the load tests remotely. Afterwards I will download the JMeter results file and analyse the test data in my local JMeter GUI.</p>
<h3 id="download-putty-ssh-client-tools">Download PuTTY SSH client tools</h3>
<p>From here on I will describe the steps required to remote connect from a Windows desktop, which might be slightly different from what you'd have to do to connect from a Unix-based system. However, most things are very similar and it should not be too difficult to follow the steps from a Mac or Linux as well.</p>
<p>In order to SSH from Windows to a Linux VM you will have to download the <a href="http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html">PuTTY SSH client</a>. Whilst you are on the download page you should also download the <a href="https://the.earth.li/~sgtatham/putty/latest/x86/pscp.exe">PSCP</a> and <a href="https://the.earth.li/~sgtatham/putty/latest/x86/puttygen.exe">PuTTYgen</a> tools. One will be needed to securely transfer files between your Windows machine and the Linux VM, and the other to convert the SSH key from the <code>.pem</code> to the <code>.ppk</code> file format.</p>
<h3 id="convert-ssh-key-from-pem-to-ppk">Convert SSH key from .pem to .ppk</h3>
<p>Before we can use PuTTY to connect to the Ubuntu VM we have to convert the SSH key which has been associated with the VM from the <code>.pem</code> to the <code>.ppk</code> file format:</p>
<ol>
<li>Open <code>puttygen.exe</code></li>
<li>Click on the "Load" button and locate the <code>.pem</code> SSH key file</li>
<li>Select the SSH-2 RSA option</li>
<li>Click on "Save private key" and save the key as a <code>.ppk</code> file</li>
</ol>
<p>Once completed you can use the new key file with the PuTTY SSH client to remote connect to the EC2 instance.</p>
<h3 id="remote-connect-to-the-ec2-instance">Remote connect to the EC2 instance</h3>
<ol>
<li>Open <code>putty.exe</code></li>
<li>Type the public IP of the EC2 instance into the host name field</li>
<li>Prepend <code>ubuntu@</code> to the IP address in the host name field<br />(this is not necessarily required, but speeds up the login process later on)</li>
<li>On the left hand side in the tree view expand the "SSH" node and then select "Auth"</li>
<li>Browse for the <code>.ppk</code> private key file</li>
<li>Go back to "Session" in the tree view</li>
<li>Type in a memorable name into the "Saved Sessions" field and click "Save"</li>
<li>Finally click on the "Open" button and connect to the VM</li>
</ol>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-09-27/29322755824_be4874c21b_o.png" alt="putty-save-session, Image by Dustin Moris Gorski">
<p>At this point you should be presented with a terminal window, connected to the JMeter EC2 instance.</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-09-27/29949798345_8da765d601_o.png" alt="putty-ssh-terminal, Image by Dustin Moris Gorski">
<h3 id="upload-a-jmeter-test-file-to-the-vm">Upload a JMeter test file to the VM</h3>
<p>Now you can use the <code>pscp.exe</code> tool from a normal Windows command prompt to copy files between your local Windows machine and the Ubuntu EC2 instance in the cloud.</p>
<p>The first argument specifies the source location and the second argument the destination path. You can target remote paths by prepending the username and the saved session name to it.</p>
<p>For example I downloaded the <code>pscp.exe</code> into <code>C:\temp\PuTTY</code> and have an existing JMeter test plan saved under <code>C:\temp\TestPlan.jmx</code> which I would like to upload to the JMeter instance. I named the session in PuTTY <code>demo-session</code> and therefore can run the following command from the Windows command prompt:</p>
<pre><code>C:\temp\PuTTY\pscp.exe C:\temp\TestPlan.jmx ubuntu@demo-session:TestPlan.jmx</code></pre>
<p>Usually the upload is extremely fast. If you don't know how to create a JMeter test plan then you can follow the official documentation on <a href="http://jmeter.apache.org/usermanual/build-web-test-plan.html">building a basic JMeter web test plan</a>.</p>
<h3 id="running-jmeter-from-the-command-line">Running JMeter from the command line</h3>
<p>After uploading the <code>.jmx</code> file we can switch back to the PuTTY terminal and run the test plan from the JMeter command line.</p>
<p>If you followed all the steps from before then you can find JMeter under <code>apache-jmeter-3.0/bin/jmeter</code> on the EC2 instance. Use the <code>-n</code> flag to run it in non-GUI mode, the <code>-t</code> parameter to specify the location of the test plan and <code>-l</code> to set the path of the results file:</p>
<pre><code>apache-jmeter-3.0/bin/jmeter -n -t TestPlan.jmx -l results.jtl</code></pre>
<p>Run this command, wait and watch the test being executed until it's completed.</p>
<h3 id="download-the-jmeter-results-file">Download the JMeter results file</h3>
<p>Finally when the test has finished you can download the results file via the PSCP tool again:</p>
<pre><code>C:\temp\PuTTY\pscp.exe ubuntu@demo-session:results.jtl C:\temp\</code></pre>
<p>From here on everything should be familiar and you can retrospectively open the <code>results.jtl</code> from an available JMeter listener and analyse the data in the JMeter GUI.</p>
<p>With the help of a cloud provider like Amazon Web Services and Docker containers it is super easy to quickly spin up multiple instances and run many load tests in parallel without them interfering with each other. You can test different application versions or instance configurations simultaneously and optimise for the best performance.</p>
https://dusted.codes/load-testing-a-docker-application-with-jmeter-and-amazon-ec2
[email protected] (Dustin Moris Gorski)https://dusted.codes/load-testing-a-docker-application-with-jmeter-and-amazon-ec2#disqus_threadTue, 27 Sep 2016 00:00:00 +0000https://dusted.codes/load-testing-a-docker-application-with-jmeter-and-amazon-ec2jmeterdockerawscloudCreating a Slack bot with F# and Suave in less than 5 minutes<p>Slack has quickly gained a lot of popularity and become one of the leading team communication tools for developers and technology companies. One of its most compelling features is the large number of integrations with other tools and services which are essential for development teams and project managers. Even though the <a href="https://slack.com/apps">list of apps</a> is huge, sometimes you need to write your own custom integration, and Slack wants that to be as easy and simple as possible. In this blog post I will show you just how easy and fast this can be done.</p>
<h2 id="a-simple-slash-command">A simple slash command</h2>
<p>In most cases you will probably need to create one or more new slash commands which can be used by slack users to perform some actions.</p>
<p>For this tutorial let's assume I would like to create a new slash command to hash a given string with the SHA-512 algorithm. I would like my Slack users to be able to type <code>/sha512 &lt;some string&gt;</code> into a channel and a Slack bot to reply with the correct hash code.</p>
<p>The easiest way to achieve this is to create a new web service which will perform the hashing of the string and integrate it with the <a href="https://api.slack.com/slash-commands">Slash Commands API</a>.</p>
<h2 id="building-an-f-web-service-which-integrates-with-slash-commands">Building an F# web service which integrates with Slash commands</h2>
<p>Let's begin with the web service by creating a new F# console application and installing the <a href="https://www.nuget.org/packages/Suave">Suave web framework</a> NuGet package.</p>
<p>The <a href="https://api.slack.com/slash-commands">Slash Commands API</a> will make an HTTP POST request to a configurable endpoint and submit form data which provides all the relevant information to perform our action. First I want to model a <code>SlackRequest</code> type which will represent the incoming POST data from the Slash Commands API:</p>
<pre><code>type SlackRequest =
{
Token : string
TeamId : string
TeamDomain : string
ChannelId : string
ChannelName : string
UserId : string
UserName : string
Command : string
Text : string
ResponseUrl : string
}</code></pre>
<p>For this simple web service the only two relevant pieces of information are the token and the text which get submitted. The token represents a secret string value which can be used to validate the origin of the request and the text value represents the entire string which the user typed after the slash command. For example if I type <code>/sha512 dusted codes</code> then the text property will contain <code>dusted codes</code> in the POST data.</p>
<p>Inside this record type I'm also adding a little helper function to extract the POST data from a <code>Suave.Http.HttpContext</code> object:</p>
<pre><code>static member FromHttpContext (ctx : HttpContext) =
let get key =
match ctx.request.formData key with
| Choice1Of2 x -> x
| _ -> ""
{
Token = get "token"
TeamId = get "team_id"
TeamDomain = get "team_domain"
ChannelId = get "channel_id"
ChannelName = get "channel_name"
UserId = get "user_id"
UserName = get "user_name"
Command = get "command"
Text = get "text"
ResponseUrl = get "response_url"
}</code></pre>
<p>Next I'll create a function to perform the actual SHA-512 hashing:</p>
<pre><code>let sha512 (text : string) =
use alg = SHA512.Create()
text
|> Encoding.UTF8.GetBytes
|> alg.ComputeHash
|> Convert.ToBase64String</code></pre>
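<p>As a quick sanity check for this function, the same transformation (UTF-8 bytes, SHA-512 digest, Base64) can be reproduced with the <code>openssl</code> CLI; this is just a cross-check under the assumption that <code>openssl</code> is installed, not part of the service itself:</p>

```shell
#!/bin/bash
# Hash a string the same way the F# sha512 function does:
# UTF-8 encode, SHA-512 digest, then Base64 encode the raw bytes.
hash=$(printf '%s' "dusted codes" | openssl dgst -sha512 -binary | openssl base64 -A)
echo "$hash"
# A SHA-512 digest is 64 bytes, so the Base64 string is always 88 characters
echo "length: ${#hash}"
```

<p>Comparing this output against the bot's reply for <code>/sha512 dusted codes</code> is an easy way to verify the endpoint end to end.</p>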
<p>Finally I will create a new Suave WebPart to handle an incoming web request and register it with a new route <code>/sha512</code> which listens for POST requests:</p>
<pre><code>let sha512Handler =
fun (ctx : HttpContext) ->
(SlackRequest.FromHttpContext ctx
|> fun req ->
req.Text
|> sha512
|> OK) ctx
let app = POST >=> path "/sha512" >=> sha512Handler
[&lt;EntryPoint&gt;]
let main argv =
startWebServer defaultConfig app
0</code></pre>
<p>With that the web service - primitive as it is - is complete. The entire implementation is less than 60 lines of code:</p>
<pre><code>open System
open System.Security.Cryptography
open System.Text
open Suave
open Suave.Filters
open Suave.Operators
open Suave.Successful
type SlackRequest =
{
Token : string
TeamId : string
TeamDomain : string
ChannelId : string
ChannelName : string
UserId : string
UserName : string
Command : string
Text : string
ResponseUrl : string
}
static member FromHttpContext (ctx : HttpContext) =
let get key =
match ctx.request.formData key with
| Choice1Of2 x -> x
| _ -> ""
{
Token = get "token"
TeamId = get "team_id"
TeamDomain = get "team_domain"
ChannelId = get "channel_id"
ChannelName = get "channel_name"
UserId = get "user_id"
UserName = get "user_name"
Command = get "command"
Text = get "text"
ResponseUrl = get "response_url"
}
let sha512 (text : string) =
use alg = SHA512.Create()
text
|> Encoding.UTF8.GetBytes
|> alg.ComputeHash
|> Convert.ToBase64String
let sha512Handler =
fun (ctx : HttpContext) ->
(SlackRequest.FromHttpContext ctx
|> fun req ->
req.Text
|> sha512
|> OK) ctx
let app = POST >=> path "/sha512" >=> sha512Handler
[&lt;EntryPoint&gt;]
let main argv =
startWebServer defaultConfig app
0</code></pre>
<p>Now I just need to build, ship and deploy the application.</p>
<h2 id="configuring-slash-commands">Configuring Slash Commands</h2>
<p>Once deployed I am ready to add a new Slash Commands integration.</p>
<ol>
<li>
<p>Go into your team's Slack configuration page for custom integrations.
<br/>e.g.: <code>https://{your-team-name}.slack.com/apps/manage/custom-integrations</code></p>
</li>
<li>
<p>Pick Slash Commands and then click on the "Add Configuration" button:</p>
</li>
</ol>
<img class="half-width" src="https://cdn.dusted.codes/images/blog-posts/2016-08-22/29087335371_13517d5f78_o.png" alt="slack-slash-commands-add-configuration, Image by Dustin Moris Gorski">
<ol start="3">
<li>Choose a command and confirm by clicking on "Add Slash Command Integration":</li>
</ol>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-08-22/29087334921_64c34738d3_o.png" alt="slack-slash-commands-choose-a-command, Image by Dustin Moris Gorski">
<ol start="4">
<li>Finally type in the URL to your public endpoint and make sure the method is set to POST:</li>
</ol>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-08-22/29087335691_dd7ae72d98_o.png" alt="slack-slash-commands-integration-settings, Image by Dustin Moris Gorski">
<ol start="5">
<li>Optionally you can set a name, an icon and additional meta data for the bot and then click on the "Save Integration" button.</li>
</ol>
<p>Congrats, if you've got everything right then you should now be able to go into your team's Slack channel, type <code>/sha512 test</code> and get a successful response from your newly created Slack integration.</p>
<p>If you are interested in a more elaborate example with token validation and Docker integration then check out my <a href="https://github.com/dustinmoris/Glossiator/blob/master/Glossiator/Program.fs">glossary micro service</a>.</p>
https://dusted.codes/creating-a-slack-bot-with-fsharp-and-suave-in-less-than-5-minutes
[email protected] (Dustin Moris Gorski)https://dusted.codes/creating-a-slack-bot-with-fsharp-and-suave-in-less-than-5-minutes#disqus_threadMon, 22 Aug 2016 00:00:00 +0000https://dusted.codes/creating-a-slack-bot-with-fsharp-and-suave-in-less-than-5-minutesslacksuavefsharpBuildStats.info |> F#<p>After working on the project on and off for a few months I finally found enough time to finish the migration of <a href="https://buildstats.info/">BuildStats.info</a> to F#, Suave and Docker.</p>
<p>The migration was essentially a complete rewrite in F# and a great exercise to learn F# and <a href="https://suave.io/">Suave.io</a> as part of a small side project which runs in a production environment now. Working with F# and Suave was so much fun that I'm already planning to develop a couple more small projects in the very near future, but more on this at another time.</p>
<p>Apart from migrating to F# and Suave I had also <a href="https://hub.docker.com/r/dustinmoris/ci-buildstats/">dockerised the application</a> and switched my hosting from an <a href="https://azure.microsoft.com/en-gb/services/app-service/web/">Azure Web App</a> to <a href="https://aws.amazon.com/ecs/">Amazon EC2 Container Service</a>, because it is considerably cheaper than Microsoft's <a href="https://azure.microsoft.com/en-gb/services/container-service/">ACS</a> at the time of writing. I was also considering Docker Cloud and Google Container Service, but the fact that I can run a micro instance in Amazon for free for 12 months was the deciding factor which pushed me towards AWS and I am very happy so far.</p>
<p>Small side projects like this are always a great opportunity to try and learn new technologies beyond a simple hello world application and with that in mind I also decided to <a href="https://github.com/dustinmoris/CI-BuildStats/blob/master/api.raml">document the service endpoint with RAML</a>, which is a very intuitive language for describing web service APIs. <a href="https://dusted.codes/design-test-and-document-restful-apis-using-raml-in-dotnet">RAML was not entirely new to me</a>, but it was the first time I used version 1.0 and some of its new features.</p>
<p>Last but not least I also switched my CI system from <a href="https://www.appveyor.com/">AppVeyor</a> to <a href="https://travis-ci.org/dustinmoris/CI-BuildStats">TravisCI</a>. I wasn't really planning to do this, but because I wanted to build the application with Mono on Linux I had to make that transition as well. Nevertheless I am still a big fan of AppVeyor and will continue using it as my primary CI system for all Windows-based builds, and Travis has been just as great.</p>
<p>A lot of (good) things have been going on in my private life in the last 6 months and I didn't get as much time to blog and work on side projects as I wanted, but now that things have calmed down again I hope to find more time to keep this blog updated and talk about all the stuff that I am doing every day.</p>
<p>Wish you all a great weekend and stay tuned :)</p>
https://dusted.codes/buildstatsinfo-fsharp
[email protected] (Dustin Moris Gorski)https://dusted.codes/buildstatsinfo-fsharp#disqus_threadFri, 15 Jul 2016 00:00:00 +0000https://dusted.codes/buildstatsinfo-fsharpfsharpsuavedockerawsJMeter Load Testing from a continuous integration build<p>In the last two weeks I have been doing a series of load tests and the tool I've been using was <a href="http://jmeter.apache.org/">Apache JMeter</a>. JMeter is an open source, cross platform load testing tool written in Java. Unlike the <a href="https://httpd.apache.org/docs/2.4/programs/ab.html">Apache benchmarking tool</a> (aka AB) <a href="http://stackoverflow.com/a/10264501/1693158">JMeter has been specifically designed to load test</a> static and dynamic web services with high accuracy. Running those tests manually in the beginning of a new project might be fun, but if you have a high traffic web service which needs periodic testing then integrating those tests into your CI system might be a good idea.</p>
<p>In this short blog post I will share a few lessons I have learned along the way...</p>
<h2 id="creating-a-jmeter-test-plan-jmx-file">Creating a JMeter test plan (.jmx file)</h2>
<p>The first step is to create a test plan which represents the type of load you would like to test against. This should match the load you anticipate in your production system as closely as possible. If you have multiple instances running behind a load balancer then you can test the load of a single instance only. Also bear in mind that the maximum load your server can handle will depend on the size of the instance under test. If your production and test instances don't match up then you might have difficulty drawing meaningful conclusions after a test run.</p>
<p>Creating a JMeter test plan should be the only time you use the GUI to execute your tests. The GUI consumes many resources which will have an impact on your test run and could even limit you on the maximum load you'll be able to throw at your service. If you have never created a JMeter test plan then check out this great <a href="http://jmeter.apache.org/usermanual/build-web-test-plan.html">introduction tutorial which will walk you through the basic architecture of a JMeter test plan</a>.</p>
<p>Note that <a href="http://jmeter.apache.org/usermanual/component_reference.html#listeners">listeners</a> and <a href="http://jmeter.apache.org/usermanual/component_reference.html#assertions">assertions</a> are very resource-expensive elements and should be avoided in your test plan wherever possible. After a test run you can open the results file with any of the provided listeners and analyse the data retrospectively. For example, if you don't log the HTTP response data in the test results file, which is generally recommended to minimise resource consumption, then you might need to keep a <a href="http://jmeter.apache.org/usermanual/component_reference.html#Response_Assertion">response assertion</a> enabled to check whether an HTTP request was successful or not. In contrast you can probably disable all <a href="http://jmeter.apache.org/usermanual/component_reference.html#Duration_Assertion">duration assertions</a>, because you will most likely log the latency of each request and therefore be able to validate this metric afterwards.</p>
<h2 id="allocating-enough-memory-in-your-jvm">Allocating enough memory in your JVM</h2>
<p>If you run very large load tests then you might experience a Java <code>OutOfMemoryError</code>. This is not uncommon considering that the default memory allocation is only 512MB. A good rule of thumb is to set the Java heap size to ~80% of your available memory. If you intend to set the heap size to a value greater than 2GB then you will have to install the <a href="http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html">64-bit version of the JRE</a>.</p>
<p>You can change the heap size allocation by opening the <code>jmeter.bat</code> file inside the <code>/bin</code> folder and edit the following line:</p>
<pre><code>set HEAP=-Xms512m -Xmx512m</code></pre>
<p>In this example I set the Java heap size to a maximum value of 4GB:</p>
<pre><code>set HEAP=-Xms512m -Xmx4096m</code></pre>
<h2 id="configuring-jmeter-properties">Configuring JMeter properties</h2>
<p>Another good way of optimising your load tests is to configure JMeter to only save the data which is required for later analysis. This again will reduce the overall resource consumption and allow you to run much larger tests.</p>
<p>Inside the <code>/bin</code> folder there is a <code>jmeter.properties</code> file, which holds all the default properties for JMeter. You should not modify this file directly, because otherwise you would lose your custom settings when upgrading to a newer version. Instead it's recommended to create a <code>user.properties</code> file in the same folder (or open the pre-existing empty one) and save all custom settings there. Configuration settings inside the <code>user.properties</code> file take precedence over <code>jmeter.properties</code>.</p>
<p>Search for the values beginning with <code>jmeter.save.saveservice.</code> inside the <code>jmeter.properties</code> file. These properties specify the results file configuration and let you customise what data will be stored during a test run. Go through each of those lines and decide whether you want to keep the default value or change it. In the latter case copy the original line and save it with your custom value inside the <code>user.properties</code> file.</p>
<p>For instance if you are mostly interested in the average throughput, various latencies and the error rate then you could tailor the JMeter properties to those values:</p>
<pre><code>jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.assertion_results_failure_message=false
jmeter.save.saveservice.assertion_results=none
jmeter.save.saveservice.data_type=false
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.response_data.on_error=false
jmeter.save.saveservice.response_message=false
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=false
jmeter.save.saveservice.assertions=false
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.connect_time=false
jmeter.save.saveservice.samplerData=false
jmeter.save.saveservice.responseHeaders=false
jmeter.save.saveservice.requestHeaders=false
jmeter.save.saveservice.encoding=false
jmeter.save.saveservice.bytes=true
jmeter.save.saveservice.url=false
jmeter.save.saveservice.filename=false
jmeter.save.saveservice.hostname=true
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.sample_count=true
jmeter.save.saveservice.idle_time=true
jmeter.save.saveservice.timestamp_format=ms
jmeter.save.saveservice.default_delimiter=,
jmeter.save.saveservice.print_field_names=true</code></pre>
<p>One setting which is worth pointing out is the following:</p>
<pre><code>jmeter.save.saveservice.output_format=csv</code></pre>
<p>This lets you specify the format of the results file. By default it is set to <code>csv</code> and I would recommend keeping it this way, unless you want to store the response data in your results. In that case you'll have to change it to <code>xml</code>.</p>
<p>However, storing results in CSV has a few advantages:</p>
<ul>
<li>JMeter is quicker in saving data in CSV than in XML</li>
<li>It might be easier to analyse data with 3<sup>rd</sup> party tools such as MS Excel</li>
<li>Even though it's possible with XML, it is much easier to merge multiple results files from a distributed test run in CSV format</li>
</ul>
<h2 id="running-jmeter-from-the-command-line">Running JMeter from the command line</h2>
<p>Running JMeter from the command line is not only the recommended way of running your tests, but also the best way to execute them from an automated build step.</p>
<p>To run in non-GUI mode you have to invoke the <code>jmeter.bat</code> file with the <code>-n</code> switch. The <code>-t</code> parameter lets you specify a test plan and the <code>-l</code> parameter the location of the results file:</p>
<pre><code>jmeter -n -t MyTestPlan.jmx -l Results.jtl</code></pre>
<p>When the test run completes JMeter will exit with code 0. You can check for that code when invoking the load test from an automated step. The results file will contain all information for subsequent test analysis. When JMeter encounters an error you can inspect the <code>jmeter.log</code> file for more information.</p>
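<p>The exit code check itself is plain shell and can be sketched independently of JMeter; here the <code>jmeter</code> invocation is replaced by placeholder commands so the pattern can run anywhere:</p>

```shell
#!/bin/bash
# Generic CI pattern: run a command and report a failure on a non-zero exit code.
run_step() {
    "$@"
    local status=$?
    if [ $status -ne 0 ]; then
        echo "step failed with exit code $status" >&2
    fi
    return $status
}

# "true" stands in for a passing JMeter run, "false" for a failing one
run_step true  && echo "load test passed"
run_step false || echo "build marked as failed"
```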
<h2 id="combining-multiple-results-files-from-a-distributed-test-run">Combining multiple results files from a distributed test run</h2>
<p>Sometimes the load you want to generate is so big that you have to split the execution across two or more JMeter instances, because one instance is unable to simulate the load alone. In this case you can either use the <a href="http://jmeter.apache.org/usermanual/remote-test.html">JMeter client to control multiple remote instances</a> or configure your CI system to do it for you. Either way it works the same: the JMeter test plan is executed on each individual instance and produces a separate results file. When all tests have finished you'll have to collect the results files and merge them into one single file for analysis.</p>
<p>If you're using the CSV format you can simply append the contents of one file to the end of another; merging multiple CSV results files usually comes down to a few copy-paste bash commands. However, if you were using the XML format then you will have to append the data in the right place, which requires <a href="http://stackoverflow.com/a/35783873/1693158">a bit of additional scripting</a>.</p>
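<p>With CSV the merge can be sketched in two portable commands. The file names below are placeholders, and the two sample files merely stand in for real per-instance results:</p>

```shell
# Create two stand-in results files (header row + data rows)
printf 'timeStamp,elapsed,label\n1,120,home\n' > results-node1.csv
printf 'timeStamp,elapsed,label\n2,340,home\n' > results-node2.csv

# Keep the header from the first file once, then append data rows only
# (FNR > 1 skips the first line of each input file)
head -n 1 results-node1.csv > merged.csv
awk 'FNR > 1' results-node1.csv results-node2.csv >> merged.csv
```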
<h2 id="analysing-the-results-file-jtl">Analysing the results file (.jtl)</h2>
<p>You can always open a JMeter results file with one of the existing listeners from the JMeter GUI. If your results were saved in CSV format then you can also open them in a spreadsheet or almost any other statistical software.</p>
<p>If you want to analyse and evaluate your results from an automated build then you will have to do some more scripting. It mostly depends on what type of analysis you would like to perform, but in most cases you will be able to extract the relevant information with only a few simple commands.</p>
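<p>As a taste of how little scripting that can be, a hypothetical one-liner can pull the average and maximum response time straight out of a CSV results file. The sample data below is invented and assumes the default column layout, where the second column (<code>elapsed</code>) holds the response time in milliseconds:</p>

```shell
# Stand-in results file with three samples
printf 'timeStamp,elapsed,label\n1,100,home\n2,300,home\n3,200,home\n' > Results.jtl

# Average and maximum of the elapsed column (skipping the header row)
awk -F',' 'NR > 1 { sum += $2; if ($2 > max) max = $2; n++ }
           END { printf "avg=%dms max=%dms\n", sum / n, max }' Results.jtl
```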
<p>For that purpose you could use PowerShell as the native language on Windows machines, or if you are looking for something more portable then you could write a small C# or F# library which can be invoked directly from tools like <a href="http://cakebuild.net/">CAKE</a>, <a href="http://fsharp.github.io/FAKE/">FAKE</a>, the <a href="https://docs.microsoft.com/en-us/dotnet/fsharp/tutorials/fsharp-interactive/">FSI</a> or from a console application built on Mono. Just to give you a taste of how simple that can be, you can check out <a href="https://github.com/dustinmoris/JMeterResultsAnalyser">this small F# application</a>.</p>
<h2 id="blazemeter-for-everyone">BlazeMeter for everyone</h2>
<p>If everything above sounds like too much work and you are looking for a more hassle-free, extremely well-functioning and feature-rich solution, then I can highly recommend signing up at <a href="https://www.blazemeter.com/">BlazeMeter</a> and letting your tests run in the cloud by some real experts.</p>
https://dusted.codes/jmeter-load-testing-from-a-continuous-integration-build
[email protected] (Dustin Moris Gorski)https://dusted.codes/jmeter-load-testing-from-a-continuous-integration-build#disqus_threadFri, 17 Jun 2016 00:00:00 +0000https://dusted.codes/jmeter-load-testing-from-a-continuous-integration-buildjmeterload-testingCustom error handling and logging in Suave<p>Some years ago when you wanted to develop a .NET web application it was almost a given that it would run on IIS, but today we have a vast array of different web server technologies available to us. If you are an F# developer or in the process of learning F# (like me) then you will most likely come across the <a href="https://suave.io/">Suave</a> web framework.</p>
<p>Suave is a lightweight, non-blocking web server written in F#. It is fairly new to the .NET space, but works wonderfully well and is very idiomatic to functional paradigms. Even though it is still early days, it probably already has the coolest name and logo amongst its competitors:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-31/26774440854_cf77ef2521_o.png" alt="suave, Image by Dustin Moris Gorski" class="two-third-width">
<p>After working with Suave for more than a month now I can say that I find it very nice and intuitive. It is a well designed web framework which is easy to work with and allows rapid development once you know how it works. However, if you don't know how it works then you might struggle to get anything done at all. This is exactly what happened to me when I started adopting the framework. The documentation is almost non-existent and if you look for any advice on how to implement certain things then you are better off browsing the <a href="https://github.com/SuaveIO/suave">GitHub repository</a> directly. The lack of documentation is not an oversight, but rather a conscious design decision, which the Suave team explains as follows:</p>
<blockquote>
<p>In suave, we have opted to write a lot of documentation inside the code; so just hover the function in your IDE or use an assembly browser to bring out the XML docs.</p>
</blockquote>
<p>Even though I found my way around, I wish there had been a little more documentation or more code examples available to put me on the right track straight away. With that in mind I will jump straight into some code examples myself and try to explain proper error handling and logging in Suave.</p>
<h2 id="hello-world-in-suave">Hello World in Suave</h2>
<p>Before we get started we need a simple web application to begin with:</p>
<pre><code>open Suave
open Suave.Operators
open Suave.Successful
open Suave.Filters

let app =
    choose [
        GET >=> path "/" >=> OK "Hello World"
    ]

[<EntryPoint>]
let main argv =
    startWebServer defaultConfig app
    0</code></pre>
<p>When I navigate to <a href="http://localhost:8083">http://localhost:8083</a> I can see the "Hello World" message.</p>
<p>When I try to browse a path which doesn't exist then Suave will not serve the request by default. For example <a href="http://localhost:8083/foo">http://localhost:8083/foo</a> will not return anything. If you want Suave to return a 404 (or anything else) for non-existing paths then you can append a generic fallback case to the end of the webpart options:</p>
<pre><code>let app =
    choose [
        GET >=> path "/" >=> OK "Hello World"
        NOT_FOUND "Resource not found."
    ]</code></pre>
<p>The <code>NOT_FOUND</code> webpart and other pre-defined client error responses can be found in the <code>Suave.RequestErrors</code> namespace.</p>
<p>Now that I covered that, let's look at some proper error handling next.</p>
<h2 id="error-handling-in-suave">Error Handling in Suave</h2>
<p>In order to test error handling I need a route which throws an exception:</p>
<pre><code>let errorAction =
    fun _ -> async { return failwith "Uncaught exception!!" }

let app =
    choose [
        GET >=> path "/" >=> OK "Hello World"
        GET >=> path "/error" >=> errorAction
        NOT_FOUND "Resource not found."
    ]</code></pre>
<p>The <a href="https://github.com/SuaveIO/suave/blob/releases/v1.x/src/Suave/Web.fs#L14">default Suave error handler</a> will return a 500 Internal Server Error with a static message in plain text, unless you run your application locally, in which case it will return the exception message in HTML.</p>
<p>If you want to change the default behaviour and provide a different implementation, then you can do this by setting a custom function of type <code>ErrorHandler</code> as part of the web server configuration.</p>
<p>The <code>ErrorHandler</code> type is defined as follows:</p>
<pre><code>type ErrorHandler = Exception -> String -> HttpContext -> WebPart</code></pre>
<p>For example you can declare a new error handler like this:</p>
<pre><code>let customErrorHandler ex msg ctx =
    // Change implementation as you wish
    INTERNAL_ERROR ("Custom error handler: " + msg) ctx

let customConfig =
    { defaultConfig with
        errorHandler = customErrorHandler }

[<EntryPoint>]
let main argv =
    startWebServer customConfig app
    0</code></pre>
<p>Let's say you have a RESTful service and you want to return an error in JSON instead of plain text. You could do this by amending the code as follows:</p>
<pre><code>let JSON_ERROR obj =
    JsonConvert.SerializeObject obj
    |> INTERNAL_ERROR
    >=> setMimeType "application/json; charset=utf-8"

let customErrorHandler ex msg ctx =
    JSON_ERROR ex ctx</code></pre>
<p>The <code>JSON_ERROR</code> function is a custom helper function, which uses <a href="https://www.nuget.org/packages/newtonsoft.json/">Newtonsoft.Json</a> to serialize an object and pipe it through to the <code>INTERNAL_ERROR</code> function in combination with a mime type of "application/json; charset=utf-8".</p>
<p>You could go one step further and examine the Accept header of the incoming HTTP request and return the error response in a mime type which is supported by the client. The <code>ctx</code> parameter which is of type <code>Suave.Http.HttpContext</code> has all the relevant information to make that distinction:</p>
<pre><code>type AcceptType =
    | Json
    | Xml
    | Other

let getAcceptTypeFromRequest ctx =
    match ctx.request.header "Accept" with
    | Choice1Of2 accept ->
        match accept with
        | "application/json" -> Json
        | "application/xml"  -> Xml
        | _ -> Other
    | _ -> Other

let customErrorHandler ex msg ctx =
    match getAcceptTypeFromRequest ctx with
    | Json  -> JSON_ERROR ex ctx
    | Xml   -> XML_ERROR ex ctx
    | Other -> INTERNAL_ERROR msg ctx</code></pre>
<p><em>The <code>XML_ERROR</code> function doesn't exist by default and would need to be implemented similarly to the <code>JSON_ERROR</code> function from the previous example.</em></p>
<p>This code is not foolproof, but it gives you the idea. There is nothing you can't do with the error handler and you can tailor a custom implementation entirely to your own application's requirements.</p>
<p>Let's move on to logging now.</p>
<h2 id="custom-logging-in-suave">Custom logging in Suave</h2>
<p>Just like the error handler, the default logger can be overridden in the web server configuration, with the small difference that the logger needs to be of type <code>Suave.Logging.Logger</code>.</p>
<p>The <code>Logger</code> interface has only one method for logging events:</p>
<pre><code>abstract member Log : LogLevel -> (unit -> LogLine) -> unit</code></pre>
<p>It's simple but enough to build custom loggers for most use cases:</p>
<pre><code>let pressTheRedButton line =
    line |> ignore // Implement here

let logAsNormal level line =
    line |> ignore // Implement here

type CustomLogger() =
    interface Logger with
        member __.Log level line =
            match level with
            | LogLevel.Fatal -> pressTheRedButton line
            | _              -> logAsNormal level line

let customConfig =
    { defaultConfig with
        errorHandler = customErrorHandler
        logger       = CustomLogger() }</code></pre>
<p>The log level argument is an enum of type <code>Suave.Logging.LogLevel</code> and has the following options available:</p>
<ul>
<li>Verbose</li>
<li>Debug</li>
<li>Info</li>
<li>Warn</li>
<li>Error</li>
<li>Fatal</li>
</ul>
<p>It also provides a few helper methods to convert between its string and integer representations, as well as a few comparison overloads and implementations of the <code>IComparable</code> and <code>IEquatable</code> interfaces. Check the full <a href="https://github.com/SuaveIO/suave/blob/releases/v1.x/src/Suave/Logging/LogLevel.fs">implementation</a> for a complete overview.</p>
<p>Suave also provides a few <a href="https://github.com/SuaveIO/suave/blob/releases/v1.x/src/Suave/Logging/Logger.fs">default loggers</a> which can be used out of the box. There is a <code>ConsoleWindowLogger</code> and an <code>OutputWindowLogger</code> which do exactly what they say.</p>
<p>Additionally there is another useful logger called <code>CombiningLogger</code> which lets you combine multiple loggers at once. This allows you to build smaller and more specialised loggers instead of one big bloated implementation:</p>
<pre><code>let sendMessageToSlack line =
    line |> ignore // Implement here

let sendEmailToTeam line =
    line |> ignore // Implement here

type SlackLogger() =
    interface Logger with
        member __.Log level line =
            if level = LogLevel.Fatal then sendMessageToSlack line

type EmailNotifier() =
    interface Logger with
        member __.Log level line =
            if level >= LogLevel.Error then sendEmailToTeam line

let customConfig =
    { defaultConfig with
        errorHandler = customErrorHandler
        logger = CombiningLogger(
            [ SlackLogger()
              EmailNotifier()
              OutputWindowLogger(LogLevel.Info) ]) }</code></pre>
<p>Again, there is no limit to what you can do with the logger implementation.</p>
<p>A popular logging framework in F# is <a href="https://github.com/logary/logary">Logary</a> and there is even an <a href="https://www.nuget.org/packages/Logary.Adapters.Suave/">adapter for Suave</a> available. Funnily enough, the official documentation for the Suave adapter is a <a href="https://logary.github.io/adapters/suave/">bunch of Lorem Ipsum</a>. Maybe it's the whole F# community that doesn't like documentation that much :)? While that page is clearly still a work in progress, you can find a <a href="https://suave.io/logs.html">simple example</a> on the official Suave website in the meantime.</p>
<p>As you can see, the Suave web framework can be very simple and powerful once you know how it works. I have to admit that the source code is well written, concise and often self-explanatory. Once you are familiar with the framework you will quickly find what you need, but until then you might be missing some general pointers to topics which are common to every modern web application.</p>
https://dusted.codes/custom-error-handling-and-logging-in-suave
[email protected] (Dustin Moris Gorski)https://dusted.codes/custom-error-handling-and-logging-in-suave#disqus_threadTue, 31 May 2016 00:00:00 +0000https://dusted.codes/custom-error-handling-and-logging-in-suavefsharpsuaveerror-handlingloggingWatch US Netflix, Hulu and more from anywhere in the world<p>If you are reading this then I assume you are one of those unfortunate Netflix, Hulu and Co. users who does not live in the US and is upset about being treated like a second class citizen by those companies. Well, you're damn right to be upset, because you know you're paying the same amount of money as other users but getting a lot less for it, which is just not fair. You are being discriminated against because you don't live in the United States of America and the money in your pocket does not hold a picture of George Washington. It's a bloody pain and if you ask me it's an absolute disgrace that we still live in a time where we have to deal with these types of problems.</p>
<p>I don't even blame Netflix & Co., because these guys already get our money and they would be more than happy for us to watch whatever we want. It's some outdated laws and regulations made up by some poor souls in the content industry which force us into this miserable situation. Now the problem is that the internet is a global place and certainly does not know any borders. It has been the biggest engine of new economic growth for the last couple of decades and has made the world a much smaller place than it used to be. If the internet has taught us one thing then it is that anyone can have anything, anywhere in the world, with instant effect, and anyone who wants to convince us of the opposite is a dinosaur on the losing track. Patience and borders are foreign concepts which do not exist in the vocabulary of new generations, and for good reason. They are impediments to innovation, growth and evolution.</p>
<p>Obviously the case is not as simple and clear-cut as I make it sound, but that doesn't change the fact that there is a lot going awfully wrong at the moment, and I feel like a lot of effort is being made in the wrong direction. Instead of embracing the internet's full potential of global reach, companies are investing money and technology into setting up virtual borders and <a href="https://media.netflix.com/en/company-blog/evolving-proxy-detection-as-a-global-service">building detection software</a> for people who violate them. While everyone else is making great steps forward, the media industry is desperately trying to stay resistant and not adapt to the new economy at all.</p>
<h2 id="playing-cat-and-mouse-with-vpns-and-proxies">Playing cat and mouse with VPNs and Proxies</h2>
<p>You probably know that one popular way of circumventing content restrictions is by streaming media via a VPN or proxy server. Every device which is connected to the internet has a so-called <a href="http://whatismyipaddress.com/">IP address</a>. This address allows content providers to establish your geographic location and serve you a tailored view for your country. By using a proxy server you can pretend to be in a different location and trick a provider into serving you a much better offering than you would usually get. The concept is simple: instead of streaming directly from Netflix & Co. you connect to an intermediate server which is geographically located in the country of your choice, and let that server stream the content on your behalf and forward it back to you. Sounds good in theory, except that Netflix and Co. have <a href="https://torrentfreak.com/netflix-announces-crackdown-on-vpn-and-proxy-pirates-160114/">ways of detecting this spiel and will block you</a>, if not cancel your account altogether.</p>
<h3 id="how-does-netflix-detect-vpns-and-proxies">How does Netflix detect VPNs and Proxies?</h3>
<p>The fact that they can do this is quite clever, because it is certainly not an easy thing to do. I don't know exactly how they do it, but there is some basic theory which might give us an idea of how they detect whether you are using a VPN or not. First of all, they are probably collecting a growing list of IP addresses which they know belong to popular VPN and proxy services. Those IP addresses simply get blacklisted and blocked. Another way of detecting proxy services would be to monitor the number of users connecting from the same IP address over a period of time. If you've got hundreds or thousands of users streaming from the same IP address then chances are high that it is a proxy server. Of course it could be a bunch of people streaming from a Starbucks, but even a Starbucks has to close its doors at some point in the day. A genuine user probably goes to work sometimes, or at least has to sleep. If you detect streaming behaviour from an IP address which doesn't fit normal human patterns then it might be another indicator of dodgy activity. Now I am not saying that this is what Netflix does, and I am sure their detection system is much more sophisticated than this, but I want to share some ideas which demonstrate that detecting a VPN or proxy server is not always an impossible task.</p>
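<p>The "many users, one IP" heuristic can be illustrated with a toy log query. This is purely speculative and not how any real detection system works; the log format and data below are invented:</p>

```shell
# Invented access log: one "ip user" pair per streaming session
printf '203.0.113.5 alice\n203.0.113.5 bob\n203.0.113.5 carol\n198.51.100.7 dave\n' > access.log

# Count distinct users per IP address; an IP with very many distinct
# users over time looks suspiciously like a shared proxy server
sort -u access.log | awk '{ count[$1]++ } END { for (ip in count) print ip, count[ip] }' | sort
```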
<p>So what can one do to trick those detection systems? Well, for starters you'd have to stop sharing proxy servers and make an IP look as normal as possible. Luckily setting up a private proxy server is really not that difficult and can be super cheap as well. As a matter of fact you can set up your own private proxy server entirely for free, and I am going to show you how!</p>
<h2 id="amazon-web-services-netflixs-best-friend-and-ours">Amazon web services, Netflix's best friend (and ours)</h2>
<p>Alright, so you know Amazon, right? It's the book shop which no longer sells only books. If you are a regular reader of my blog then you might know what I am going to say now, but if you are a non-techie who came across this blog post through some other channel then you might be surprised to learn that Amazon was the pioneer of cloud computing. They are not only the largest, but also the most mature cloud operator in the world at the moment. Amazon is doing such a great job that <a href="https://media.netflix.com/en/company-blog/completing-the-netflix-cloud-migration">Netflix runs its entire infrastructure in the cloud</a> provided by Amazon.</p>
<p>The reason why I am telling you this is that Amazon's cloud has been so successful that they are going mainstream by making cloud services easily available to anyone, people like you and me. If Netflix can deliver the latest season of House of Cards to the entire world with the help of Amazon web services, then surely we can host one tiny proxy server in the same cloud as well and circumvent their detection software, right? Yes, we can.</p>
<h2 id="setting-up-a-proxy-server-with-aws-ec2-in-less-than-10-minutes">Setting up a proxy server with AWS EC2 in less than 10 minutes</h2>
<p>Let's start off with some good news first. Amazon offers a <a href="https://aws.amazon.com/free/">12 months free tier</a> for new subscribers to their web services. This is an amazing offer and exactly what we are going to use to set up our private proxy. If you ask yourself what happens after those 12 months then wait until the end of this blog post.</p>
<h3 id="step-1-sign-up-with-amazon-web-services">Step 1: Sign up with Amazon web services</h3>
<p>First you need to sign up with Amazon web services. If you already have an account then you can skip this step, otherwise follow the instructions:</p>
<ol>
<li>Go to the <a href="https://portal.aws.amazon.com/gp/aws/developer/registration/index.html">registration page</a> and log in with your regular Amazon account</li>
<li>After the login select the personal account option and fill in your address details</li>
<li>On the next page fill in your payment information. This is required for the case when you exceed the free tier limitation, but don't worry about this, because it will not happen if you stick to my instructions</li>
<li>The next step is pretty cool. You will be displayed a 4 digit PIN code on the screen and receive an automated phone call from Amazon. You will be asked to enter the PIN on your phone and if you typed it in correctly then the call will end immediately and you will be directed to the next step in the registration process</li>
<li>From the list of support plans pick the basic (free) plan and click continue</li>
<li>Now you are done and you should see a message that your account will be activated within a few minutes and usually you'll get an email notification as well</li>
</ol>
<h3 id="step-2-create-a-private-proxy-with-ec2">Step 2: Create a private proxy with EC2</h3>
<p>Once signed in you should see a list of available AWS Services. We are going to create a new EC2 instance which will become our private proxy server.</p>
<p>Click on the EC2 link from the menu:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26128609484_430ef48b25_o.png" alt="ec2-menu-item, Image by Dustin Moris Gorski" class="half-width">
<p>In the top right corner make sure you have selected a US region, because after all we want our proxy to stream US content:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26128609294_e3970caf88_o.png" alt="aws-select-us-region-from-dropdown, Image by Dustin Moris Gorski" class="two-third-width">
<p>Next click on the <strong>Launch Instance</strong> button:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26131065313_5a8f419d5c_o.png" alt="aws-launch-instance-button, Image by Dustin Moris Gorski" class="two-third-width">
<p>This opens up a 7-step wizard which will walk you through the configuration of a new EC2 instance. Don't worry, there is not much that needs to be done to get the proxy server up and running.</p>
<p>The first step lets you choose which image (AMI) to use for your new instance. An image is a snapshot of a pre-installed server. This allows you to create a new server which already has an operating system and other software installed so you don't have to do it manually each time.</p>
<p>For the purpose of the proxy server we don't need anything fancy and therefore can go with the <strong>Ubuntu Server</strong>, which is a free tier eligible Linux distribution:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26128609544_170b6329bf_o.png" alt="aws-ubuntu-free-tier-ami, Image by Dustin Moris Gorski" class="two-third-width">
<p>On the next screen you can pick the size of the new instance. If you don't know what this means, think of it like the horse power of your new server.</p>
<p>Again, because we don't need anything fancy we can happily go with the <strong>t2.micro</strong> instance which is free tier eligible:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26131065973_a186d89b6b_o.png" alt="aws-t2-micro-free-tier-instance, Image by Dustin Moris Gorski" class="two-third-width">
<p>Don't click on the <em>Review and Launch</em> button yet! Confirm and continue by clicking on the <strong>Next: Configure Instance Details</strong> button.</p>
<p>On the third step there's a bunch of information available, but luckily the default values are exactly what we need and you don't have to change any of them, except one thing. Scroll down to the bottom and expand the <strong>Advanced Details</strong> section by clicking on the little arrow next to it:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26141065614_171cd62fa3_o.png" alt="aws-ec2-instance-advanced-details, Image by Dustin Moris Gorski" class="half-width">
<p>You will be presented with a text field which can be used to specify additional commands which will run when launching the new EC2 instance. We will add a few commands which will automatically install and configure the <a href="https://tinyproxy.github.io/">Tinyproxy</a> software. Tinyproxy is a <a href="https://github.com/tinyproxy/tinyproxy">free and open source proxy server</a> for POSIX operating systems.</p>
<p>Copy the following code snippet into the text field:</p>
<pre><code>#!/bin/bash
# User data scripts already run as root on EC2, so no sudo is needed
apt-get update
apt-get install -y tinyproxy
printf '\nAllow <strong>xxx.xxx.xxx.xxx</strong>\n' >> /etc/tinyproxy.conf
/etc/init.d/tinyproxy stop
/etc/init.d/tinyproxy start</code></pre>
<p>Replace the <strong>xxx.xxx.xxx.xxx</strong> with <a href="http://ip4.me/">your own IP address</a>.</p>
<p>This is an important step, because by default Tinyproxy does not allow any connections other than from the host itself. Therefore we need to configure who is allowed to connect to the proxy server and because we want to keep it private you will enable your own IP address only. Make sense?</p>
<p>The end result should look something like this, except with your own IP address:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26721850636_210948911a_o.png" alt="aws-ec2-instance-advanced-details-commands, Image by Dustin Moris Gorski">
<p>Continue by clicking on the <strong>Next: Add Storage</strong> button.</p>
<p>There is literally nothing to do here or in the next step. Click <strong>Next: Tag Instance</strong> and immediately move on to <strong>Next: Configure Security Group</strong>.</p>
<p>This is the last important step during the configuration. Here we configure which ports will be open on the new instance. If this is the first EC2 instance you are going to create then you are likely not going to have any existing security groups set up yet.</p>
<p>By default the wizard will create a new security group for you and add one rule for port 22. This is the default port to SSH into your EC2 instance. Normally as a system administrator you would want to keep this port open, but for the simplicity of this setup we can overwrite it. We don't really need to SSH into the instance and if you really want to you can always edit the security group afterwards.</p>
<p>In the drop down select <strong>Custom TCP Rule</strong> and enter port <strong>8888</strong> into the <strong>Port Range</strong> field. Why port 8888? Because this is the default port which Tinyproxy listens to. Under <strong>Source</strong> pick the <strong>Custom IP</strong> option and enter your IP address in the field next to it and append "/32" to the end:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26748041215_4e9e0e1e54_o.png" alt="aws-create-security-group, Image by Dustin Moris Gorski">
<p>As you can see in the screen shot I also changed the security group name to something more meaningful. Feel free to pick your own name.</p>
<p>Click <strong>Review and Launch</strong> to get to the next screen and confirm there by clicking on <strong>Launch</strong>.</p>
<p>Almost done now! The final step is to create a private key pair. The private key pair is something you would need if you wanted to SSH into this instance, but as I said before, for the simple purpose of a proxy server you don't need to do this and therefore I will not go into any more detail. Just make sure you type in a meaningful name, something like "AWS Default Key Pair" or "AWS Proxy Server Key Pair" and hit the <strong>Download Key Pair</strong> button. Save this file somewhere in a secure place and keep it secret!</p>
<p>After that you should be able to click the <strong>Launch Instances</strong> button and let Amazon web services do the rest for you:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26475526790_e1c81d8106_o.png" alt="aws-launch-status, Image by Dustin Moris Gorski">
<p>When you click on the instance id you get redirected back to the EC2 console where you can see your instance being initialized. It may take a few minutes until everything is ready, and once completed you should see the status as "running" and all status checks OK:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26722908526_537699155d_o.png" alt="aws-ec2-instance-status-and-public-ip, Image by Dustin Moris Gorski">
<p>The public IP address that you see with your instance is your new private proxy IP address! Take a note of it, because you will need it in the last step.</p>
<h3 id="step-3-configure-your-device">Step 3: Configure your device</h3>
<p>The final step is to set up your device to connect via the proxy server. This is literally one single setting that you have to apply on your device. I have provided instructions for Windows, Android and iOS, but you can easily google for any other OS and find instructions online:</p>
<ul>
<li><a href="#change-proxy-server-settings-in-windows">Change proxy server settings in Windows</a></li>
<li><a href="#change-proxy-server-settings-in-android">Change proxy server settings in Android</a></li>
<li><a href="#change-proxy-server-settings-in-ios">Change proxy server settings in iOS</a></li>
</ul>
<h3 id="change-proxy-server-settings-in-windows">Change proxy server settings in Windows</h3>
<p>Follow these instructions even if you don't use Internet Explorer for browsing the internet. Proxy settings are system-wide and it doesn't matter through which browser you open the dialog.</p>
<ol>
<li>Open Internet Explorer by clicking the Start button and type Internet Explorer into the search box and click on the corresponding result from the list</li>
<li>Click the Tools button, and then click Internet Options</li>
<li>Click the Connections tab, and then click LAN settings</li>
<li>Select the Use a proxy server for your LAN check box</li>
<li>Enter the IP address of your proxy server and port 8888 into the text fields</li>
<li>Click OK and confirm everything</li>
</ol>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26662679492_251ba6a28d_o.png" alt="windows-proxy-settings, Image by Dustin Moris Gorski" class="two-third-width">
<h3 id="change-proxy-server-settings-in-android">Change proxy server settings in Android</h3>
<ol>
<li>Go into the Settings</li>
<li>Click on the Wi-Fi menu item</li>
<li>Long tap on the network which you are currently connected to</li>
<li>Select the Modify network option</li>
<li>Expand the Advanced options</li>
<li>Select the Manual option under Proxy</li>
<li>Type in the IP address of your proxy server into the Proxy hostname field and port 8888 into Proxy port</li>
<li>Save those settings</li>
</ol>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26731108956_c9c6772763_o.png" alt="android-proxy-settings, Image by Dustin Moris Gorski" class="half-width">
<h3 id="change-proxy-server-settings-in-ios">Change proxy server settings in iOS</h3>
<ol>
<li>Go into the Settings</li>
<li>Click on the Wi-Fi menu item</li>
<li>Click on the network which you are currently connected to</li>
<li>Select the Manual option under HTTP PROXY</li>
<li>Type in the IP address of your proxy server into the Server field</li>
<li>Type 8888 into the Port field</li>
</ol>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26483815010_8deb9ea5e2_o.png" alt="ios-proxy-settings, Image by Dustin Moris Gorski" class="two-third-width">
<h2 id="test-your-connection">Test your connection</h2>
<p>Now that everything is set up and running you should be able to stream US content from Netflix, Hulu and many more! A quick test to confirm that your proxy server is successfully running would be to <a href="https://www.iplocation.net/find-ip-address">google your own IP address</a> and see that your IP appears different, from a US location now:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-05-01/26664445162_4f4b7a3519_o.png" alt="proxy-server-active, Image by Dustin Moris Gorski">
<p>And what shall you do after your first 12 months of free tier eligibility? Well, I'd suggest you sign up for a new account under a different email address. It only takes a couple of minutes, which you have to invest every 12 months in order to keep running a free private proxy server. If that is too much effort then you might as well let your server run and pay for its usage. The t2.micro instance costs only <a href="https://aws.amazon.com/ec2/pricing/">$0.013 per hour</a>, which comes down to roughly $9.75 per month if you let it run continuously. However, in this case I'd suggest switching it on and off as you need it, reducing your cost to almost nothing.</p>
<p>Oh, and if you wonder about any of the IP addresses shown on the screenshots of this blog post then let me assure you that none of those servers exist anymore and you really have to go through the few minutes of setting up your own private proxy server ;)</p>
https://dusted.codes/watch-us-netflix-hulu-and-more-from-anywhere-in-the-world
[email protected] (Dustin Moris Gorski)https://dusted.codes/watch-us-netflix-hulu-and-more-from-anywhere-in-the-world#disqus_threadSun, 01 May 2016 00:00:00 +0000https://dusted.codes/watch-us-netflix-hulu-and-more-from-anywhere-in-the-worldawscloudnetflixAsynchronous F# workflows in NancyFx<p>I have been playing with <a href="http://fsharp.org/">F#</a> quite a lot recently and, to be honest, I have started to really like it. Besides the fact that F# is a very <a href="https://fsharpforfunandprofit.com/posts/why-use-fsharp-intro/">powerful language</a> it is a lot of fun too! As a result I began to migrate one of my smaller web services from C# and ASP.NET MVC 5 to F# and <a href="http://nancyfx.org/">NancyFx</a>. Nancy as a .NET web framework works perfectly fine with any .NET language, but has certainly not been written with F# in mind. While it works really well in C#, it can feel quite clunky with F# at times. There are <a href="https://suave.io/">other web frameworks</a> which are more idiomatic, but come with other tradeoffs instead.</p>
<p>However, when I started to migrate from C# to NancyFx and F# I had quite some difficulty implementing even the simplest things. <a href="https://github.com/NancyFx/Nancy/wiki/Async">Asynchronous routes</a> were one of those things. The fact that I am an F# beginner didn't help either. Unfortunately there is not a lot of documentation on NancyFx with F# available and therefore I thought I'd share some of my own examples in this blog post.</p>
<p>As I said before, some of the stuff is <strong>really simple</strong> and probably super easy for an experienced F# developer, but for me as a beginner it was not that obvious at first.</p>
<p>A great start on NancyFx and F# is <a href="https://mfranc.com/f/f-and-nancy-beyond-hello-world/">Michal Franc's blog post</a>, where he shows how to register a normal route in NancyFx with F#:</p>
<pre><code>type MyModule() as this =
    inherit NancyModule()
    do this.Get.["/"] <- fun ctx ->
        "Hello World" :> obj</code></pre>
<p>This code is not much different from the equivalent C# implementation. <code>MyModule</code> inherits from the base <code>NancyModule</code> class and assigns a function to the root path of the HTTP GET verb. The only difference between C# and F# is that I had to explicitly upcast the <code>string</code> value to an <code>object</code> to match the method's expected signature.</p>
<p>Registering an <a href="https://github.com/NancyFx/Nancy/wiki/Defining-routes">asynchronous route</a> is slightly different. The function expects an additional input parameter for the cancellation token and returns a <code>Task</code> or <code>Task&lt;T&gt;</code> object instead:</p>
<pre><code>do this.Get.["/foo", true] <- fun ctx ct ->
    Task.FromResult("bar" :> obj)</code></pre>
<p>This simple example works, but it is not asynchronous, because <code>Task.FromResult</code> merely wraps a value which has already been computed synchronously on the current thread. What I really want to do is to call an <a href="https://msdn.microsoft.com/en-us/library/dd233250.aspx">asynchronous workflow</a> from F# and execute it from an asynchronous route in NancyFx.</p>
<p><a href="http://stackoverflow.com/questions/12708504/is-asynchronous-in-c-sharp-the-same-implementation-as-in-f#answer-12708955">Asynchronous workflows in F# are different from Tasks in C#</a> and therefore need to be converted from an <code>Async&lt;'T&gt;</code> object into a <code>Task&lt;T&gt;</code> in NancyFx.</p>
<p>Luckily the .NET <a href="https://msdn.microsoft.com/en-us/library/ee370232.aspx">Control.Async class</a> offers plenty of predefined methods to make the translation very easy.</p>
<p>With <code>Async.RunSynchronously</code> one can execute an async workflow on the current thread and block until its result is available:</p>
<pre><code>do this.Get.["/foo"] <- fun ctx ->
    async {
        return "bar" :> obj
    }
    |> Async.RunSynchronously</code></pre>
<p>This works, but it is still not asynchronous. If you want to run the async workflow in a non-blocking fashion in Nancy then you can pipe it to <code>Async.StartAsTask</code>, which starts it asynchronously and returns a <code>Task&lt;'T&gt;</code> representing the ongoing computation:</p>
<pre><code>do this.Get.["/foo", true] <- fun ctx ct ->
    async {
        return "bar" :> obj
    }
    |> Async.StartAsTask</code></pre>
<p>In every case I had to upcast the string value to an object to match the expected return type. The <a href="https://msdn.microsoft.com/en-us/library/ee370232.aspx">Async class</a> is full of useful functions which can run and convert asynchronous workflows from F# to C# and vice-versa.</p>
<p>If you are building a Nancy application in F# then you might also be interested in <a href="https://github.com/simonhdickson/Fancy">Fancy</a> or Tiny Blue Robots' <a href="http://tinybluerobots.github.io/fsharp/2015/03/17/nancy-fsharp.html">blog post</a>, where both show some neat tricks on how to make Nancy feel a bit more functional.</p>
<p>With this I am going back to my F# work and wish everyone a <a href="https://fsharpforfunandprofit.com/posts/happy-fsharp-day/">happy F# day</a>!</p>
https://dusted.codes/asynchronous-fsharp-workflows-in-nancyfx
[email protected] (Dustin Moris Gorski)https://dusted.codes/asynchronous-fsharp-workflows-in-nancyfx#disqus_threadTue, 12 Apr 2016 00:00:00 +0000https://dusted.codes/asynchronous-fsharp-workflows-in-nancyfxnancyfxfsharpasyncFiltering the AWS Service Health Dashboard<p>If you run anything on <a href="https://aws.amazon.com/">Amazon Web Services</a> in production then you probably know the <a href="http://status.aws.amazon.com/">AWS Service Health Dashboard</a> very well.</p>
<p>Sometimes when you experience a service disruption you might find yourself scrolling through the page, looking for an update on one of your affected resources, and wishing that the icons were a little bit more distinguishable between healthy and previously unhealthy services:</p>
<img class="two-third-width" src="https://cdn.dusted.codes/images/blog-posts/2016-03-31/26168064985_56b3320748_o.png" alt="aws-service-health-dashboard, Image by Dustin Moris Gorski">
<p>Unless you have perfect eagle sight you probably struggle to quickly filter the page with the naked eye and could use some help separating the good icons from the bad. At least this is how I feel and therefore I created this little JavaScript snippet to blend out healthy icons from the page, leaving only the bad ones visible:</p>
<pre><code>[].forEach.call(document.querySelectorAll('[src="/images/status0.gif"]'), function(e) {e.style.display = "none";});</code></pre>
<img class="two-third-width" src="https://cdn.dusted.codes/images/blog-posts/2016-03-31/26168064855_c38f2f124e_o.png" alt="aws-service-health-dashboard-filtered, Image by Dustin Moris Gorski">
<p>You can either copy this into your browser's console and run it directly from there or <a href="https://dusted.codes/diagnosing-css-issues-on-mobile-devices-with-google-chrome-bookmarklets">create a permanent bookmarklet</a> for easy access in the future.</p>
https://dusted.codes/filtering-the-aws-service-health-dashboard
[email protected] (Dustin Moris Gorski)https://dusted.codes/filtering-the-aws-service-health-dashboard#disqus_threadThu, 31 Mar 2016 00:00:00 +0000https://dusted.codes/filtering-the-aws-service-health-dashboardawsgoogle-chromeAutomating CSS and JavaScript minification in ASP.NET MVC 5 with PowerShell<p>When I started <a href="https://github.com/dustinmoris/dustedcodes">building this blog</a> I kept things very simple in the beginning. First there was nothing but my <a href="https://dusted.codes/hello-world">Hello World</a> blog post and only later when I had more content I added more features over time. It didn't take me very long before I had to think about minifying static content such as CSS and JavaScript files to speed up page load times for my readers.</p>
<p>Minifying a CSS file is one of the most trivial tasks in web development and yet it is often more cumbersome than it has to be. It is too easy to forget to update the minified file when making a quick change to the original, or to update the minified file but forget to swap the file paths in the HTML source code, leaving the live website pointing to the uncompressed version. These are typical mistakes which happen with manual tasks and which every developer experiences at least once in their career. The best way to prevent these types of mistakes is to automate the entire process and remove the human error.</p>
<p>The default project template in a Classic ASP.NET MVC web application offers runtime minification via the <a href="https://www.nuget.org/packages/WebGrease">WebGrease</a> NuGet library. It is not great but does the job for the lazy programmer. Runtime minification is not ideal because it puts additional load on the web server instead of doing the compression on the build server at an earlier stage. ASP.NET Core takes a different approach and promotes the use of post-build commands by <a href="http://docs.asp.net/en/latest/client-side/using-gulp.html">utilizing Node modules</a> to minify static assets. This is a much more elegant solution and works just as well.</p>
<p>The same can be achieved in a Classic ASP.NET application, where it is perfectly feasible to use Node.js from a post-build event as well. Node is great when you care about cross platform compatibility, but it might be overkill when the application only builds on a Windows machine and Node is not used anywhere else in the project. If an entire team works on Windows then you might as well use a technology which is already available to everyone. <a href="https://en.wikipedia.org/wiki/Windows_PowerShell">PowerShell</a> would be one of those.</p>
<div class="tip"><p><strong>Tip:</strong> As an alternative to Node.js you can utilize <a href="http://cakebuild.net/" target="_blank">CAKE</a> or <a href="http://fsharp.github.io/FAKE/" target="_blank">FAKE</a> for cross platform compatible build events or entire build scripts. Both are open source and you get to use C# or F# through the entire project.</p></div>
<p>For my own website I am using PowerShell for the exact reason that I only work from a Windows machine and the website builds on a Windows server via <a href="https://www.appveyor.com/">AppVeyor</a>.</p>
<p>In this blog post I will show how to use PowerShell to minify static assets from a post-build event in a Classic ASP.NET MVC 5 application and how to switch between compressed and uncompressed versions in Release and Debug mode.</p>
<h2 id="minifying-css-and-javascript-with-powershell">Minifying CSS and JavaScript with PowerShell</h2>
<p>Let's begin with the PowerShell script. In the first step I want to recursively find all CSS files within a given folder and exclude already minified files:</p>
<pre><code>Get-ChildItem $SolutionDir -Recurse -Include *.css -Exclude *.min.css</code></pre>
<p><code>$SolutionDir</code> is a variable pointing to the root path of the solution. This variable will be assigned when calling the script from a post-build event. I will come back to this later again.</p>
<p>The next step is to iterate through all CSS files and minify them. This can be achieved by piping <code>|</code> the result from <code>Get-ChildItem</code> to a foreach loop <code>%</code> and call a function on each individual element:</p>
<pre><code>Get-ChildItem $SolutionDir -Recurse -Include *.css -Exclude *.min.css | % {
    Compress-CssFile -CssFilePath $_
}</code></pre>
<p>The <code>$_</code> symbol represents each individual element in a foreach loop, which then gets assigned to the <code>-CssFilePath</code> parameter of the <code>Compress-CssFile</code> function.</p>
<p>Next I have to implement the <code>Compress-CssFile</code> function:</p>
<pre><code>Function Compress-CssFile
{
    [CmdletBinding()]
    param
    (
        [string] $CssFilePath
    )

    # ToDo: Implement
}</code></pre>
<p>This is the basic skeleton of the function. The <code>param</code> section declares all parameters which can be passed into the function and the <code>[CmdletBinding()]</code> attribute defines that global flags such as <code>-Verbose</code> or <code>-Debug</code> will be inherited from the calling context.</p>
<p>The actual implementation can vary in many ways, but for this blog post I thought it would be a good exercise to use the public API of the <a href="http://cssminifier.com/">CSSMinifier</a> web service.</p>
<p>The API is very simple. All I have to do is send an HTTP POST request to <code>http://cssminifier.com/raw/</code> with the original CSS content in the body and subsequently receive the minified version from the body of the response.</p>
<p>The <code>Compress-CssFile</code> function only accepts the full file path of a CSS file and therefore I need to read all of its content first:</p>
<pre><code>$cssFile = Get-Item -Path $CssFilePath
$content = [System.IO.File]::ReadAllText($cssFile.FullName)</code></pre>
<p>Now with the content I can initialize an HTTP body object and invoke an HTTP POST request to the API:</p>
<pre><code>$body = @{input = $content}
$response = Invoke-WebRequest -Uri "http://cssminifier.com/raw/" -Method Post -Body $body</code></pre>
<p>Before processing any further I can validate if the request was successful:</p>
<pre><code>if ($response.StatusCode -ne 200)
{
    throw "Pick your own error message"
}</code></pre>
<p>If the request was successful I can grab the minified CSS content from the response and save it under the same location as the original file, but with the <code>.min.css</code> file extension instead:</p>
<pre><code>$compressedContent = $response.Content
$newFilePath = $CssFilePath.Replace(".css", ".min.css")
Set-Content -Path $newFilePath -Value $compressedContent -Force</code></pre>
<p>Note how I used the <code>-Force</code> flag on the <code>Set-Content</code> cmdlet to overwrite an existing file with the same name. This is required to update the minified file even if it already exists.</p>
<p>Finally I put all of the above PowerShell code into one file and add the <code>$SolutionDir</code> parameter at the top. I name the PowerShell file <code>MinifyCss.ps1</code> and save it in the root folder of my ASP.NET solution:</p>
<pre><code>[CmdletBinding()]
param
(
    [Parameter(Position = 0, Mandatory = $true)]
    [string] $SolutionDir
)

Function Compress-CssFile
{
    [CmdletBinding()]
    param
    (
        [string] $CssFilePath
    )

    $cssFile = Get-Item -Path $CssFilePath
    $content = [System.IO.File]::ReadAllText($cssFile.FullName)

    $body = @{input = $content}
    $response = Invoke-WebRequest -Uri "http://cssminifier.com/raw/" -Method Post -Body $body

    if ($response.StatusCode -ne 200)
    {
        throw "Pick your own error message"
    }

    $compressedContent = $response.Content
    $newFilePath = $CssFilePath.Replace(".css", ".min.css")
    Set-Content -Path $newFilePath -Value $compressedContent -Force
}

Get-ChildItem $SolutionDir -Recurse -Include *.css -Exclude *.min.css | % {
    Compress-CssFile -CssFilePath $_
}</code></pre>
<p>This script is ready now. Implementing the same functionality for JavaScript files is trivial. Simply copy the <code>MinifyCss.ps1</code> file and rename it to <code>MinifyJavaScript.ps1</code>. Change the implementation to point to the public <a href="https://javascript-minifier.com/">JavaScript Minifier</a> API and change the file extensions from <code>.css</code> to <code>.js</code>.</p>
<h2 id="calling-powershell-scripts-from-an-aspnet-post-build-event">Calling PowerShell scripts from an ASP.NET post-build event</h2>
<p>The next step is to call both PowerShell scripts from an ASP.NET post-build event.</p>
<p>This couldn't be any easier. Right click the project file of your ASP.NET project and select "Properties" from the menu or select the project file and hit <kbd>Alt</kbd> + <kbd>Enter</kbd> on your keyboard.</p>
<p>Go to the "Build Events" dialog and paste the following code into the post-build event command line:</p>
<pre><code>if $(ConfigurationName) == Debug (
    echo "Skipping CSS minification in debug mode."
    echo "Skipping JavaScript minification in debug mode."
) else (
    %windir%\System32\WindowsPowerShell\v1.0\powershell.exe -NoLogo -NonInteractive -Command "$(SolutionDir)MinifyCss.ps1" $(SolutionDir)
    %windir%\System32\WindowsPowerShell\v1.0\powershell.exe -NoLogo -NonInteractive -Command "$(SolutionDir)MinifyJavaScript.ps1" $(SolutionDir)
)</code></pre>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-03-26/26018307816_a33cee7d15_o.png" alt="aspnet-mvc-5-post-build-event-command-line, Image by Dustin Moris Gorski">
<p>This code block makes sure that we only execute the PowerShell scripts when the project doesn't build in Debug mode. This is desired because during development we might make frequent changes to the original CSS file and do not want to minify the content until we are ready to build in Release mode.</p>
<p>The <code>$(SolutionDir)</code> placeholder is a <a href="https://msdn.microsoft.com/en-us/library/c02as0cs.aspx">reserved MSBuild macro</a> which points to the root directory of the solution. It gets passed directly to the PowerShell script where it gets assigned to the equally named PowerShell variable. The rest happens in PowerShell.</p>
<h2 id="swap-between-original-and-minified-files-in-aspnet-mvc-razor-views">Swap between original and minified files in ASP.NET MVC Razor views</h2>
<p>The last piece in the puzzle is to swap between the original and the minified files in the ASP.NET MVC Razor views. In Debug mode we want to point to the original file, so that we can test CSS changes without any friction during development, but in all other cases we want to swap it for the minified version instead.</p>
<p>In order to distinguish between Debug and Release mode in an MVC razor view we need a little helper class:</p>
<pre><code>public static class BuildProperties
{
    public static bool IsDebugMode()
    {
#if DEBUG
        return true;
#else
        return false;
#endif
    }
}</code></pre>
<p>With this helper method we can easily switch between the <code>.css</code> and <code>.min.css</code> files in the HTML markup:</p>
<pre><code>@if (BuildProperties.IsDebugMode())
{
    &lt;link rel="stylesheet" type="text/css" href="~/Assets/Css/site.css"&gt;
}
else
{
    &lt;link rel="stylesheet" type="text/css" href="~/Assets/Css/site.min.css"&gt;
}</code></pre>
<p>If you <a href="https://dusted.codes/using-csharp-6-features-in-aspdotnet-mvc-5-razor-views">use C# 6.0 in your razor views</a> then you can write it even neater with this one liner where you don't have to repeat the file path twice:</p>
<pre><code>&lt;link rel="stylesheet" type="text/css" href=@($"~/Assets/Css/site{(BuildProperties.IsDebugMode() ? "" : ".min")}.css")&gt;</code></pre>
<p>Voilà, now you never have to worry about manually minifying static assets anymore. It just happens automatically during the Release build and the live website will reference the correct path to the minified file.</p>
https://dusted.codes/automating-css-and-javascript-minification-in-aspnet-mvc-5-with-powershell
[email protected] (Dustin Moris Gorski)https://dusted.codes/automating-css-and-javascript-minification-in-aspnet-mvc-5-with-powershell#disqus_threadSat, 26 Mar 2016 00:00:00 +0000https://dusted.codes/automating-css-and-javascript-minification-in-aspnet-mvc-5-with-powershellaspnetmvcpowershellcssjavascriptCircleCI build history charts and NuGet badges by Buildstats.info<p>Quick update on <a href="https://buildstats.info/">Buildstats.info</a>. Two weeks ago I added <a href="https://circleci.com/">CircleCI</a> to the list of supported CI systems for the <a href="https://github.com/dustinmoris/CI-BuildStats#build-history-chart">build history chart</a> and last weekend I implemented a new badge for <a href="https://github.com/dustinmoris/CI-BuildStats#nuget-badge">NuGet packages</a> too.</p>
<p>CircleCI is the third continuous integration system which is supported by the build history chart now. AppVeyor and TravisCI are the other two. If you have a public open source project which is built by one of those systems then you might want to check out the official <a href="https://github.com/dustinmoris/CI-BuildStats">documentation for the build history chart</a>. It's quite cool and lets you create SVG badges like the one I did for my blog:</p>
<p><a href="https://ci.appveyor.com/project/dustinmoris/dustedcodes/history?branch=master" title="dusted.codes build history"><img src="https://buildstats.info/appveyor/chart/dustinmoris/dustedcodes?branch=master" alt="Build History Chart" /></a></p>
<p>On a completely separate note I also added a new SVG badge for <a href="https://github.com/dustinmoris/CI-BuildStats#nuget-badge">NuGet packages</a>.</p>
<p>I did not think of adding NuGet support in the beginning, but since <a href="http://shields.io/">Shields.io</a> NuGet badges have been broken for more than two weeks now, I had to look for an alternative:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-02-29/25255668592_5362a02717_o.png" alt="shields.io-broken-nuget-badges, Image by Dustin Moris Gorski" class="half-width">
<p>The <a href="https://github.com/badges/shields/issues/655">issue has been reported</a>, but it doesn't look like it will get fixed any time soon and so I went with my own solution.</p>
<p>For my personal projects I like to display the current version of my NuGet packages as well as the total number of downloads. With Shields.io I had to use two individual badges, but with Buildstats.info I can display both in one:</p>
<p><a href="https://www.nuget.org/packages/Lanem/" title="Lanem NuGet package"><img src="https://buildstats.info/nuget/lanem" alt="NuGet badge for Lanem" /></a></p>
<p>This first version satisfies my own personal needs, but it will likely expand over the coming weeks to provide more functionality for other users as well.</p>
<p>I have a few ideas, but if you are looking for something in particular then please feel free to <a href="https://github.com/dustinmoris/CI-BuildStats/issues">file a feature request</a> on the public GitHub repository.</p>
https://dusted.codes/circleci-build-history-charts-and-nuget-badges-by-buildstatsinfo
[email protected] (Dustin Moris Gorski)https://dusted.codes/circleci-build-history-charts-and-nuget-badges-by-buildstatsinfo#disqus_threadMon, 29 Feb 2016 00:00:00 +0000https://dusted.codes/circleci-build-history-charts-and-nuget-badges-by-buildstatsinfocirclecinugetgithubsvgDon't dispose externally created dependencies<p>Today I wanted to blog about a common mistake which I often see during code reviews or when browsing open source projects in relation to the <code>IDisposable</code> interface. Heck, even Microsoft got it wrong when they wrote on <a href="http://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application">how to implement the UnitOfWork pattern</a> in an ASP.NET MVC application.</p>
<p>What I mean is this, in a very simplified way:</p>
<pre><code>public interface IRepository : IDisposable
{
    // Member definition
}

public class MyClass : IDisposable
{
    private readonly IRepository _repository;

    public MyClass(IRepository repository)
    {
        _repository = repository;
    }

    public void Dispose()
    {
        _repository.Dispose();
    }
}</code></pre>
<p>The issue with the above implementation is this little detail:</p>
<pre><code>public void Dispose()
{
    _repository.Dispose();
}</code></pre>
<p>Disposing an externally created dependency is a bad idea and usually leads to severe bugs in an application. <code>MyClass</code> did not create the instance of <code>IRepository</code>, but decided to dispose it on behalf of the creator. However, <code>MyClass</code> has no information on who or what created the instance or for how long it is supposed to live. What if the object has been set up as a singleton or as an instance per HTTP request? If that were the case then any subsequent code which relies on the <code>IRepository</code> would now be broken. <code>MyClass</code> has crossed its boundaries by interfering with the lifespan of an externally managed dependency.</p>
<p>It doesn't even matter what the current lifespan is. It could have been set up as a transient object, but the point is that a developer should be able to change the lifespan in the future without breaking the entire application. This applies not only to the repository but to any dependency which is managed outside a class. The rule of thumb is that <strong>you cannot dispose an object which you did not create</strong> yourself.</p>
<p>The question is what shall we do with dependencies which implement <code>IDisposable</code>?</p>
<h2 id="how-to-manage-a-dependency-which-implements-idisposable">How to manage a dependency which implements IDisposable?</h2>
<p>The intentions were definitely good. The repository implements <code>IDisposable</code> obviously for a good reason. <code>IDisposable</code> is usually used <a href="https://msdn.microsoft.com/en-us/library/system.idisposable">when a class has some expensive or unmanaged resources</a> allocated which need to be released after their usage. Not disposing an object can lead to memory leaks. In fact it is so important that the C# language offers the <a href="https://msdn.microsoft.com/en-us/library/yh598w02.aspx?f=255&amp;MSPPError=-2147217396">using</a> keyword to minimize the risk of not cleaning up resources in certain edge cases such as the occurrence of an exception. Generally it should be a red flag if an instance of <code>IDisposable</code> has not been wrapped in a <code>using</code> block, which is another good indicator that the code from above has a fundamental problem. I am actually surprised that no one at Microsoft picked this up before publishing the <a href="http://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application">article on the UnitOfWork</a> pattern.</p>
<p>In most cases the creator of an object will be the IoC container. Therefore it should be the IoC container's responsibility to dispose an object after its usage, except that this is very difficult to get right at this stage of the application. Either the object will live too long or too short, which brings us back to the initial problem.</p>
<p>Typically if something implements <code>IDisposable</code> we want to dispose it as soon as possible. Often we even want to dispose it way sooner than the lifespan of the class where it has been used, which is yet another indicator that disposing it in the <code>Dispose()</code> method of <code>MyClass</code> is not a good place. This makes me doubt if injecting the repository through the constructor is a good idea at all.</p>
<h2 id="taking-control-of-creating-and-disposing-an-idisposable">Taking control of creating and disposing an IDisposable</h2>
<p>Constructor injection is only one of many IoC patterns which we have to our availability. Another approach would be to use a factory which will issue a new repository on every request:</p>
<pre><code>public interface IRepository : IDisposable
{
    // Member definition
}

public interface IRepositoryFactory
{
    IRepository Create();
}

public class MyClass
{
    private readonly IRepositoryFactory _repositoryFactory;

    public MyClass(IRepositoryFactory repositoryFactory)
    {
        _repositoryFactory = repositoryFactory;
    }

    public string GetSomething()
    {
        using (var repository = _repositoryFactory.Create())
        {
            return repository.GetSomething();
        }
    }
}</code></pre>
<p>The constructor parameter has been changed from an <code>IRepository</code> to an <code>IRepositoryFactory</code>. The instantiation of the repository has been deferred to the actual time of usage and the <code>Dispose()</code> method has been made redundant. Additionally I was able to benefit from the <code>using</code> keyword and overall reduce the total amount of code. This solution is much more elegant, more robust and free of the initial error.</p>
<p>The big difference is that <code>MyClass</code> takes care of creating a repository now. Per definition a factory always creates a new instance, so there is no ambiguity about the object's lifespan. In terms of testability and extensibility nothing has really changed. The factory gets injected through the constructor which makes it easily exchangeable with another implementation or a mock in tests.</p>
<h2 id="dependency-injection-is-not-always-the-best-option">Dependency injection is not always the best option</h2>
<p>This is a great example where dependency injection is not always the best suited IoC pattern. <a href="https://en.wikipedia.org/wiki/Dependency_injection#Three_types_of_dependency_injection">Dependency injection in all three forms</a> (constructor, property and method injection) is only one of many possible ways of the <a href="https://en.wikipedia.org/wiki/Dependency_inversion_principle">dependency inversion principle</a>. Factories and other <a href="https://en.wikipedia.org/wiki/Creational_pattern">creational design patterns</a> are still useful and have their place in modern software architecture.</p>
<p>Subtle differences like the one from above can determine whether you have a severe bug in your application or not. Sometimes it is not even a question of stylistic preference.</p>
<p>Another good read on the proper use of the <code>IDisposable</code> interface is <a href="http://stackoverflow.com/questions/538060/proper-use-of-the-idisposable-interface#answer-538238">this amazing StackOverflow answer</a> by <a href="http://stackoverflow.com/users/12597/ian-boyd">Ian Boyd</a>.</p>
https://dusted.codes/dont-dispose-externally-created-dependencies
[email protected] (Dustin Moris Gorski)https://dusted.codes/dont-dispose-externally-created-dependencies#disqus_threadSat, 20 Feb 2016 00:00:00 +0000https://dusted.codes/dont-dispose-externally-created-dependenciesiocarchitecturedotnetdisposableSHA-256 is not a secure password hashing algorithm<p><a href="https://en.wikipedia.org/wiki/SHA-2#Cryptanalysis_and_validation">SHA-256</a> is not a secure password hashing algorithm. Neither is <a href="https://en.wikipedia.org/wiki/SHA-2#Cryptanalysis_and_validation">SHA-512</a>, regardless of how well it has been salted. Why not? Because both can be computed in the billions per minute with specialised hardware. If you are surprised to hear that, you should continue reading...</p>
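<p>To put some rough numbers behind that claim, here is a small back-of-the-envelope sketch. It is written in Python rather than .NET purely because it is easy to run anywhere; the hashing primitives behave the same. It contrasts the cost of a plain SHA-256 guess with a deliberately slow key derivation function such as PBKDF2 (the 0.25 second window and the 100,000 iteration count are arbitrary choices for illustration):</p>

```python
import hashlib
import time

# Count how many plain SHA-256 password guesses a single CPU core
# manages in a quarter of a second. This is a crude, unoptimised
# loop -- GPUs and ASICs are orders of magnitude faster still.
deadline = time.perf_counter() + 0.25
guesses = 0
while time.perf_counter() < deadline:
    hashlib.sha256(b"password%d" % guesses).hexdigest()
    guesses += 1

print("plain SHA-256: ~%d guesses/second" % (guesses * 4))

# One guess against PBKDF2 with 100,000 iterations does roughly
# 100,000 times the work, slowing down an attacker proportionally.
start = time.perf_counter()
hashlib.pbkdf2_hmac("sha256", b"password0", b"some-salt", 100_000)
print("PBKDF2(100k):  one guess took %.3f seconds" % (time.perf_counter() - start))
```

<p>Even this naive single-threaded loop manages a very large number of guesses per second, and dedicated cracking hardware pushes that far higher, which is exactly the gap a purpose-built password hashing algorithm is designed to close.</p>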
<h2 id="what-makes-a-good-password-hashing-algorithm">What makes a good password hashing algorithm?</h2>
<p>A password hash is the very last line of defence. Its only purpose is to prevent an attacker from gaining total control of a user's accounts when all other measures of security have been broken. This usually means to prevent the attacker from using the compromised data to access users' data on other websites, which could happen <a href="http://www.troyhunt.com/2011/06/brief-sony-password-analysis.html">when a user re-uses a password</a>. It is extremely important that a good hashing algorithm will resist all attempts of cracking it, at least for a significant period of time.</p>
<p>Since the attacker is in control of the raw user data there is nothing which can be done to prevent a crude brute force attack. However, a brute force attack is not an easy undertaking, and there are measures which can be put into place to prolong it and undermine its feasibility.</p>
<p>A good password hashing algorithm removes the slightest chance of a shortcut, leaving a brute force attack as the only option, and then puts additional barriers in its way.</p>
<h3 id="one-way-functions">One-way functions</h3>
<p>First of all this means that a password must <strong>always</strong> be stored with a cryptographic one-way function. If a password has been encrypted with an algorithm which allows decryption then there is no guarantee that an attacker has not already gained access to the secret key and immediately bypassed all gates of security.</p>
<p>Therefore encryption algorithms such as <a href="https://en.wikipedia.org/wiki/Advanced_Encryption_Standard">AES</a> and <a href="//dusted.codes/the-beauty-of-asymmetric-encryption-rsa-crash-course-for-developers">RSA</a> are not secure storage mechanisms for a password. The use of a one-way hash function is mandatory.</p>
<h3 id="pre-image-and-collision-attacks">Pre-image and collision attacks</h3>
<p>A password hash also needs to resist so-called <a href="https://en.wikipedia.org/wiki/Preimage_attack">pre-image</a> and <a href="https://en.wikipedia.org/wiki/Collision_attack">collision attacks</a>. In simple words, it must not be feasible to methodically find a value which computes to a given hash value. This rules out hash functions such as <a href="https://en.wikipedia.org/wiki/Collision_attack#Classical_collision_attack">MD5 and SHA-1, which have been proven to be vulnerable</a> to such attacks.</p>
<h3 id="lookup-tables-and-password-salting">Lookup tables and password salting</h3>
<p>Another shortcut is a <a href="https://en.wikipedia.org/wiki/Rainbow_table">lookup table</a>. A lookup table is a pre-computed table with hash values derived from commonly used passwords and dictionary entries. An attacker can easily match up a lookup table with the compromised hash values and look up the underlying plain text password. This is where the <a href="https://en.wikipedia.org/wiki/Salt_(cryptography)">concept of salting</a> comes into play.</p>
<p>A salt is a piece of text of a certain length and complexity which is added to the original value before computing the hash. The idea is that the salt itself is random enough to generate a hash which will not exist in any pre-computed lookup table.</p>
<p>The salt is usually stored in plain text next to the hash value. This is required to allow a genuine login scenario with the original password. It doesn't matter if an attacker can see the salt, because it still invalidates a pre-computed lookup table.</p>
<h3 id="random-salt-per-user">Random salt per user</h3>
<p>It is good practice to generate a random salt per user. If the same salt is shared among all users then an attacker only needs to generate one new lookup table and you are back at square one. However, if every user has an individual salt then it becomes significantly more difficult.</p>
<p>Additionally, a random salt per user prevents the use of reverse lookup tables. A reverse lookup table is similar to a lookup table except that it matches up the passwords of multiple users at once. This is possible because many users unknowingly pick the same (simple) password.</p>
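<p>As a brief illustration (the helper name is made up for this example), generating such a per-user random salt with the .NET crypto APIs could look like this:</p>

```csharp
using System;
using System.Security.Cryptography;

public static class SaltGenerator
{
    // Generates a cryptographically random salt for a new user.
    // 16 bytes (128 bit) is a commonly used size.
    public static byte[] NewSalt(int size = 16)
    {
        var salt = new byte[size];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }
        return salt;
    }
}
```

<p>The important detail is the use of a cryptographically secure random number generator rather than <code>System.Random</code>, whose output is predictable.</p>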
<h3 id="key-stretching-algorithms">Key-stretching algorithms</h3>
<p>By using salts and eliminating the possibility of pre-computed lookup tables an attacker is forced to go down the route of a brute force attack. Even though it is extremely difficult, it is not impossible. High-end hardware with <a href="http://www.zdnet.com/article/25-gpus-devour-password-hashes-at-up-to-348-billion-per-second/">fast GPUs can compute billions of hashes per minute</a>.</p>
<p>How can one protect a password from being brute forced like this? The idea is to slow down the hashing function. This technique is called <a href="https://en.wikipedia.org/wiki/Key_stretching">key stretching</a>: a specially crafted algorithm which is deliberately hardware intensive. Such algorithms usually come with an iteration factor which needs to be carefully adjusted to the hardware used on a web server. <strong>This is the currently recommended way of storing passwords</strong>.</p>
<p>Popular key-stretching algorithms are:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/PBKDF2">PBKDF2</a></li>
<li><a href="https://en.wikipedia.org/wiki/Bcrypt">bcrypt</a></li>
<li><a href="https://en.wikipedia.org/wiki/Scrypt">scrypt</a></li>
</ul>
<p>The .NET framework has built-in support for PBKDF2 in the form of the <a href="https://msdn.microsoft.com/en-gb/library/system.security.cryptography.rfc2898derivebytes%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396">Rfc2898DeriveBytes</a> class. There is also an open source library for <a href="https://bcrypt.codeplex.com/">bcrypt in .NET</a>.</p>
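<p>To make this concrete, here is a minimal sketch (illustrative, not a drop-in library) of how <code>Rfc2898DeriveBytes</code> could be used to hash and verify a password with a random salt. The class name, storage format and iteration count are my own choices for this example, and the work factor should be tuned to your own hardware:</p>

```csharp
using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    private const int SaltSize = 16;      // 128-bit random salt
    private const int HashSize = 32;      // 256-bit derived key
    private const int Iterations = 10000; // tune to your hardware

    // Returns "iterations.salt.hash" so all parameters travel with the record.
    public static string Hash(string password)
    {
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
        {
            var salt = Convert.ToBase64String(pbkdf2.Salt);
            var hash = Convert.ToBase64String(pbkdf2.GetBytes(HashSize));
            return $"{Iterations}.{salt}.{hash}";
        }
    }

    public static bool Verify(string password, string stored)
    {
        var parts = stored.Split('.');
        var iterations = int.Parse(parts[0]);
        var salt = Convert.FromBase64String(parts[1]);
        var expected = Convert.FromBase64String(parts[2]);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            var actual = pbkdf2.GetBytes(expected.Length);

            // Compare in constant time to avoid leaking timing information.
            var diff = 0;
            for (var i = 0; i < expected.Length; i++)
                diff |= expected[i] ^ actual[i];
            return diff == 0;
        }
    }
}
```

<p>Storing the iteration count alongside the salt and hash makes it possible to raise the work factor later without invalidating existing records.</p>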
<h3 id="summary">Summary</h3>
<p>As you can see good password hashing is more than just sticking a salt at the end of a password and shoving it into the SHA-256 hash function. In practical terms this is as bad as using MD5.</p>
<p>Correct password hashing is not too complicated either, but if it can be avoided altogether then even better. There is always the option of deferring password handling to a third party by using single sign-on options from trusted authorities such as Google, Facebook or Twitter.</p>
<p>If you have to do it yourself you should follow the guidelines from above and use a key derivation algorithm (i.e. key stretching) in combination with a random salt per user, and stick with a native implementation. Don't try to create your own algorithm, as you will only get it wrong and end up with a hash function which can be easily parallelised on the CPU, potentially making things even worse.</p>
<p>Last but not least, you should always encourage your users to choose strong and unique passwords and <a href="http://www.troyhunt.com/2011/03/only-secure-password-is-one-you-cant.html">not limit them in doing so</a>.</p>
https://dusted.codes/sha-256-is-not-a-secure-password-hashing-algorithm
[email protected] (Dustin Moris Gorski)https://dusted.codes/sha-256-is-not-a-secure-password-hashing-algorithm#disqus_threadMon, 08 Feb 2016 00:00:00 +0000https://dusted.codes/sha-256-is-not-a-secure-password-hashing-algorithmsecuritypassword-hashingcryptographybrute-force-attacksUnderstanding ASP.NET Core 1.0 (ASP.NET 5) and why it will replace Classic ASP.NET<p>ASP.NET has quite a few years on its shoulders now. Fourteen, to be precise. I only started working with ASP.NET in 2008, but even that is already 8 years ago. Since then the framework has gone through steady evolutionary change, finally leading us to its most recent descendant - <a href="https://get.asp.net/">ASP.NET Core 1.0</a>.</p>
<p>ASP.NET Core 1.0 is not a continuation of ASP.NET 4.6. It is a whole new framework, a side-by-side project which happily lives alongside everything else we know. It is an actual re-write of the current ASP.NET 4.6 framework, but much smaller and a lot more modular.</p>
<p>Only recently <a href="https://twitter.com/shanselman">Scott Hanselman</a> announced that the <a href="http://www.hanselman.com/blog/ASPNET5IsDeadIntroducingASPNETCore10AndNETCore10.aspx">final name will be ASP.NET Core 1.0</a>, after we got to know it as ASP.NET 5, ASP.NET vNext and Project K.</p>
<p>Some people claim that many things remain the same, but this is not entirely true. Those people mostly refer to MVC 6, a separate framework which can be plugged into ASP.NET Core but doesn't have to be. While MVC 6 remains very familiar, ASP.NET Core 1.0 is a fundamental change to the ASP.NET landscape.</p>
<p>If you are a follower of the <a href="https://live.asp.net/">live community standups</a> then you might have heard <a href="https://twitter.com/DamianEdwards">Damian Edwards</a> saying how the team gets a lot of pressure (from above) to get an RTM out of the door. I am not surprised and can understand why ASP.NET Core 1.0 is strategically so important to Microsoft. It is probably a lot more vital to the future of .NET than we might think of it today.</p>
<h2 id="aspnet-core-10---what-has-changed">ASP.NET Core 1.0 - What has changed?</h2>
<p>A better question would be what has not changed. ASP.NET Core 1.0 is a complete re-write. There is no <code>System.Web</code> anymore, nor anything which came with it.</p>
<p>ASP.NET Core 1.0 is open source. It is also cross platform. Microsoft invests a lot of money and effort into making it truly cross platform portable. This means there is a new <a href="https://github.com/dotnet/coreclr">CoreCLR</a> which is an alternative to <a href="http://www.mono-project.com/">Mono</a> now. You can develop, build and run an ASP.NET Core 1.0 application either on Mono or the CoreCLR on a Mac, Linux or Windows machine. It also means that Windows-only technologies such as PowerShell are no longer part of the ASP.NET Core 1.0 toolchain. Instead Microsoft heavily integrates Node.js, which can be utilized to run pre- and post-build events with <a href="http://gruntjs.com/">Grunt</a> or <a href="http://gulpjs.com/">Gulp</a>.</p>
<p>It is also the reason why things like the <code>.csproj</code> file got replaced by the <code>project.json</code> file and why all the new <a href="https://www.nuget.org/profiles/dotnetframework?showAllPackages=True">framework libraries ship as NuGet packages</a>. This was the only way to make development on a Mac a first class citizen.</p>
<p>But Microsoft went even further. Part of a great development experience is the editor of choice. Visual Studio was privileged to Windows users only. With <a href="https://code.visualstudio.com/">Visual Studio Code</a> Microsoft created a decent IDE for everyone now. Initially it was proprietary software but quickly became open source as well.</p>
<p>There are many more changes being made, but the common theme remains the same. Microsoft is dead serious about going open source and cross platform. Personally I think this is great. All of this is an amazing change and crucial to the long term success of ASP.NET.</p>
<h2 id="aspnet-core-10---why-did-everything-change">ASP.NET Core 1.0 - Why did everything change?</h2>
<p>One might wonder why this new direction towards the Mac and Linux community? Why does Microsoft invest so much money in attracting non-Windows developers? Visual Studio Code doesn't cost them anything, it is unlikely that they will use MS SQL server in their projects and there is a high chance that these web applications will end up somewhere on a Linux box in Amazon Web Services or the Google Cloud Platform. After all these are the technologies which non-Windows users are more familiar with.</p>
<p>My guess is that none of this matters for now. The truth is that an ASP.NET developer who cannot be monetized is still better than a non ASP.NET developer. This is particularly true if you think that the .NET community is (relatively speaking) shrinking. This is just my own speculation, but I think Microsoft fears losing .NET developers, and with them the people who are more willing to pay for other Microsoft products such as MS SQL Server or Microsoft Azure.</p>
<p>If you are a .NET developer you might think this sounds crazy, but think of it from a different angle. Windows desktop application development is slowly dying. There is no denying that. What is left is the mobile market and the web. Windows phones and tablets are still <a href="http://www.winbeta.org/news/windows-phone-market-share-drops-1-7-percent">a drop in the ocean</a> in comparison to the market shares of iOS and Android. This leaves the web as a last resort. Now the web is an interesting one. After Silverlight's death ASP.NET is the only Microsoft product which competes with other web technologies such as Node, Ruby, Python, Java and more. This is a tough battle for ASP.NET, because up until now you had to be a Windows user to be able to develop web applications with it.</p>
<h3 id="lack-of-portability">Lack of portability</h3>
<p>In the last few years this problem has become even more prominent with many <a href="http://www.infoworld.com/article/2840235/application-development/9-cutting-edge-programming-languages-worth-learning-next.html">new languages gaining more popularity</a> and putting ASP.NET into the shadows.</p>
<p>The biggest problem is that the .NET framework and ASP.NET are not cross platform compatible. As a web developer you are writing applications which can be understood by any browser, any OS and any device which is connected to the web. There are no limitations, but with ASP.NET you can only develop from a Windows machine. That doesn't make much sense when you think about it.</p>
<p>This limitation has an impact on the adoption of ASP.NET on several levels. Recruitment is a good example. There is a massive <a href="http://techcrunch.com/2015/06/09/software-is-eating-the-job-market/">shortage of good software developers</a> at the moment. Ask Ayende <a href="https://ayende.com/blog/172899/recruiting-good-people-is-hard">how hard it is to recruit</a> a new talent. Imagine how much harder it is if you limit your talent pool to Windows users only? Not only do you waste more time and resources on the recruitment process itself, but also have to pay higher salaries for developers where the demand is higher than the supply.</p>
<p>It can be difficult for companies which are heavily committed to the .NET stack to change directions now, but what about startups? Many of today's biggest internet businesses were born out of small startups. They use free open source technologies such as PHP, Ruby, Python, Java or Node.js. This has a doubly negative effect for Microsoft. Not only do they lose the opportunity to sell ASP.NET, but they also send out the message that if you want to build a successful business you pick an open stack over proprietary software.</p>
<p>ASP.NET is probably one of the most feature-rich and fastest technologies you can find, but why would a startup care about this in the beginning? If they do well they can deal with this stuff later, and if it doesn't go well then it's good they didn't have to pay for a Microsoft license, right?</p>
<h3 id="chasing-behind-innovation">Chasing behind innovation</h3>
<p>Another major implication of not being cross platform compatible is that current ASP.NET 4.6 developers are missing out on big innovations which are not immediately available on the Windows platform. Over the last years Microsoft was chasing after many innovations by providing its own version to the .NET community, but not always with success (Silverlight, AppFabric Cache, DocumentDb, Windows Phone, etc.). This is not a sustainable model.</p>
<p>As a result many ASP.NET developers live in silos today. We are at a point where Microsoft cannot keep up with the vast amount of technology anymore and ASP.NET developers miss out on big innovations such as containers and Docker and don't even realize it, because they know very little to nothing about it. This is a dangerous place to be.</p>
<p>Cross platform compatibility is more than just a fad. It is the key to innovation today and the only way to stay on top of the game!</p>
<p>So how does Classic ASP.NET fit into this new world? Not particularly well, to be honest. ASP.NET 4.6 has a really tough time keeping up with this fast moving environment.</p>
<p>Except we have ASP.NET Core 1.0 now...</p>
<h2 id="aspnet-core-10---reviving-aspnet">ASP.NET Core 1.0 - Reviving ASP.NET</h2>
<p>This is where ASP.NET Core 1.0 comes into the limelight. It is built on the same core principles which helped other languages to popularity:</p>
<ul>
<li>Free and open source</li>
<li>Cross platform compatible</li>
<li>Ease of access</li>
<li>Thin, fast, modular and extensible</li>
</ul>
<p>On the plus side, ASP.NET Core 1.0 can be developed with some of the greatest languages available right now, C# and F# in particular! This will make ASP.NET Core stand out from competing frameworks.</p>
<p>What will happen to ASP.NET 4.6? I don't know, but I would argue that ASP.NET 4.6 is a dead horse in the long run. There is very little value in betting any more money on it. Microsoft wouldn't say this yet, but it is pretty obvious. ASP.NET Core 1.0 is the new successor and the only viable solution to address the aforementioned problems.</p>
<p>ASP.NET 4.6 will be soon remembered as Classic ASP.NET. It will not entirely disappear, just like Classic ASP has never fully disappeared, but new development will likely happen in ASP.NET Core going forward. I find it extremely exciting and the benefits of ASP.NET Core are too compelling to not switch over as soon as possible.</p>
<p>The only thing we need to hope for is that Microsoft will not become impatient now and mess up the release with an immature product which will cause more churn than attraction. Microsoft, please take the time to bake something to be proud of!</p>
https://dusted.codes/understanding-aspnet-core-10-aka-aspnet-5-and-why-it-will-replace-classic-aspnet
[email protected] (Dustin Moris Gorski)https://dusted.codes/understanding-aspnet-core-10-aka-aspnet-5-and-why-it-will-replace-classic-aspnet#disqus_threadWed, 03 Feb 2016 00:00:00 +0000https://dusted.codes/understanding-aspnet-core-10-aka-aspnet-5-and-why-it-will-replace-classic-aspnetaspnet-coreaspnetdotnetASP.NET 5 like configuration in regular .NET applications<p><a href="https://get.asp.net/">ASP.NET 5</a> is Microsoft's latest web framework and the new big thing on the .NET landscape. It comes with a whole lot of <a href="https://github.com/aspnet/home/releases/v1.0.0-rc1-final">new features</a> and other changes which makes it very distinctive from previous ASP.NET versions. It is basically a complete re-write of the framework, optimized for the cloud and cross platform compatible. If you haven't checked it out yet then you are definitely missing out!</p>
<p>Some of the newly introduced features are the new <a href="https://docs.asp.net/en/latest/fundamentals/configuration.html">configuration options</a>, which come as part of the <a href="https://github.com/aspnet/Configuration">Microsoft.Extensions.Configuration</a> NuGet packages. They allow you a more flexible way of loading configuration values into an application and make it significantly easier to move an application across different environments without having to change configuration files or run through nasty configuration transformations as part of the process. <a href="http://www.hanselman.com/">Scott Hanselman</a> has nicely summarized this topic in one of <a href="http://www.hanselman.com/blog/BestPracticesForPrivateConfigDataAndConnectionStringsInConfigurationInASPNETAndAzure.aspx">his recent blog posts</a>.</p>
<p>It is needless to say that a lot of people are hugely excited about the new framework and many projects are being written in ASP.NET 5 right now, but there is also a significant amount of people who cannot easily migrate their existing projects to the new framework yet. This might be for various reasons but essentially means they are stuck on an older version of ASP.NET at least for a while.</p>
<p>However, it doesn't mean that those people cannot already benefit from some of the new ideas which have been publicly introduced in ASP.NET 5 such as the new configuration options.</p>
<h2 id="implementing-aspnet-5-like-configuration-options">Implementing ASP.NET 5 like configuration options</h2>
<p>The idea is to provide multiple sources from where an app can load configuration values. Additionally we want to put those sources into an order of precedence, where one source overrules another.</p>
<p>While ASP.NET 5 implements a <a href="https://en.wikipedia.org/wiki/Builder_pattern">Builder</a> pattern, you can easily achieve the same thing with a good old <a href="https://en.wikipedia.org/wiki/Decorator_pattern">Decorator</a>.</p>
<p>First we need an interface which provides a method to retrieve a config value:</p>
<pre><code>public interface IConfiguration
{
    string Get(string key);
}</code></pre>
<p>Next I implement a simple class which loads a value from an app- or web.config file:</p>
<pre><code>using System.Configuration;

public class AppConfigConfiguration : IConfiguration
{
    public string Get(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}
</code></pre>
<p>This is trivial and probably the same as what you have in your current application, just that it has been wrapped in a class which is behind an interface.</p>
<p>Finally we can provide an additional implementation which loads a value from the environment variables:</p>
<pre><code>using System;

public class EnvironmentVariablesConfiguration : IConfiguration
{
    private readonly IConfiguration _backupConfiguration;

    public EnvironmentVariablesConfiguration(
        IConfiguration backupConfiguration)
    {
        _backupConfiguration = backupConfiguration;
    }

    public string Get(string key)
    {
        var value = Environment.GetEnvironmentVariable(key);
        return value ?? _backupConfiguration.Get(key);
    }
}
</code></pre>
<p>This particular implementation wraps another <code>IConfiguration</code> implementation. If the setting does not exist in the environment variables then it defers to the next implementation. This could be anything and in this particular example will be the <code>AppConfigConfiguration</code>:</p>
<pre><code>container.Register<IConfiguration>(
    new EnvironmentVariablesConfiguration(
        new AppConfigConfiguration()));
</code></pre>
<p>It is really as simple as that. The order of precedence is determined by the order of the classes being put together.</p>
<p>This pattern can be extended as far as you like. For example I could add two more classes and compose something like this:</p>
<pre><code>container.Register<IConfiguration>(
    new EnvironmentVariablesConfiguration(
        new JsonFileConfiguration(
            new AzureTableStorageConfiguration(
                new AppConfigConfiguration()))));</code></pre>
<p>Now I can access configuration values from other classes through an <code>IConfiguration</code> dependency:</p>
<pre><code>var value = _configuration.Get("SomeKey");</code></pre>
<p>With this trick you can easily implement a "cloud optimized" configuration in any version of ASP.NET and follow good practice patterns no matter where you code!</p>
<h3 id="update">UPDATE:</h3>
<p>I created a complete <a href="https://github.com/dustinmoris/ASP.NET-4.6-Configuration-Demo">example of how this might look like in an ASP.NET 4.6 MVC 5 application</a> and uploaded it to GitHub.</p>
<p>In this example I provide 3 configuration sources and two different ways of composing them together. Once via the Decorator pattern as shown in this blog post and another one using a fluent Builder pattern which is much more similar to the way it is done in ASP.NET Core.</p>
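<p>Purely to illustrate the idea, a minimal fluent builder could look roughly like the following sketch. The class and method names are made up for this example (the GitHub demo may differ), and the <code>IConfiguration</code> interface from earlier is re-declared so the snippet is self-contained:</p>

```csharp
using System;
using System.Collections.Generic;

// Same contract as earlier in this post.
public interface IConfiguration
{
    string Get(string key);
}

// A minimal fluent builder: sources added later take precedence,
// mirroring the AddX(...) chaining style used by ASP.NET Core.
public class ConfigurationBuilder
{
    private readonly List<Func<string, string>> _sources =
        new List<Func<string, string>>();

    public ConfigurationBuilder Add(Func<string, string> source)
    {
        _sources.Add(source);
        return this;
    }

    public ConfigurationBuilder AddEnvironmentVariables()
    {
        return Add(key => Environment.GetEnvironmentVariable(key));
    }

    public IConfiguration Build()
    {
        return new CompositeConfiguration(_sources);
    }

    private class CompositeConfiguration : IConfiguration
    {
        private readonly List<Func<string, string>> _sources;

        public CompositeConfiguration(List<Func<string, string>> sources)
        {
            _sources = sources;
        }

        public string Get(string key)
        {
            // Walk the sources from last to first so later additions win.
            for (var i = _sources.Count - 1; i >= 0; i--)
            {
                var value = _sources[i](key);
                if (value != null) return value;
            }
            return null;
        }
    }
}
```

<p>The precedence rule is the same as with the decorator composition above, just expressed as a chain of <code>Add</code> calls instead of nested constructors.</p>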
https://dusted.codes/aspnet-5-like-configuration-in-regular-dotnet-applications
[email protected] (Dustin Moris Gorski)https://dusted.codes/aspnet-5-like-configuration-in-regular-dotnet-applications#disqus_threadTue, 12 Jan 2016 00:00:00 +0000https://dusted.codes/aspnet-5-like-configuration-in-regular-dotnet-applicationsaspnetdotnetaspnet-corearchitectureRunning NancyFx in a Docker container, a beginner's guide to build and run .NET applications in Docker<p>The quiet Christmas period is always a good time to explore new technologies and recent trends which have been on my list for a while. This Christmas I spent some time learning <a href="http://docs.asp.net/en/latest/conceptual-overview/aspnet.html">the latest ASP.NET framework</a>, in particular how to run ASP.NET 5 applications on Linux via the <a href="https://github.com/dotnet/coreclr">CoreCLR</a> and how to run a regular .NET 4.x web application via <a href="http://www.mono-project.com/">Mono</a> in a <a href="https://www.docker.com/">Docker</a> container. The latter is what I am going to talk about in this blog post today.</p>
<h2 id="what-is-docker">What is Docker?</h2>
<p>I assume you have some basic knowledge of what <a href="https://www.docker.com/">Docker</a> is, how it revolutionised the way we ship software into the cloud and what the benefits of a container over a VM are. If any of this doesn't make sense, then I would highly recommend first making yourself familiar with the basic concept of containers and why it is desirable to run applications in a container.</p>
<p>A few good resources to get you started are:</p>
<ul>
<li><a href="http://training.docker.com/">Docker Training</a></li>
<li><a href="https://docs.docker.com/">Docker Docs</a></li>
<li><a href="https://www.pluralsight.com/courses/docker-deep-dive">Docker Deep Dive on Pluralsight</a></li>
<li><a href="https://github.com/veggiemonk/awesome-docker">Awesome Docker (list of useful Docker resources)</a></li>
</ul>
<h2 id="setting-up-docker-on-windows">Setting up Docker on Windows</h2>
<p>First I want to get Docker running locally so I can run and debug applications in a development environment. Luckily this has been made extremely easy for us. All I need is to download the <a href="https://www.docker.com/docker-toolbox">Docker Toolbox</a> for Windows and follow the instructions.</p>
<h3 id="docker-toolbox">Docker Toolbox</h3>
<p>After installation I will have three new applications:</p>
<ul>
<li><a href="https://www.virtualbox.org/">VirtualBox</a></li>
<li><a href="https://kitematic.com/">Kitematic</a></li>
<li>Docker Quickstart Terminal</li>
</ul>
<p>If you have VirtualBox already installed then the installer will skip over this step. The important thing to know is that VirtualBox has an external API which can be used by other applications to manage VMs automatically. This is exactly what the Docker Machine does. It will create a new VM in VirtualBox with an image which has everything you need to run Docker there. Because it is all automated you never really have to worry about VirtualBox yourself.</p>
<p>Kitematic is a GUI client around the Docker Machine. At the moment it is very limited in functionality and therefore you will not need it either.</p>
<p>This leaves the Docker Terminal as the last application and the only thing which we will be using to run and manage Docker containers in a local environment.</p>
<h3 id="run-your-first-docker-command-from-the-terminal">Run your first Docker command from the Terminal</h3>
<p>After a successful installation let's run a first Docker command to see if things generally work. When you open the terminal for the first time it will initialize the VM in VirtualBox. This may take a few seconds but eventually you should end up at a screen like this:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24239896875_d1d6a0bc60_o.png" alt="docker-quickstart-terminal, Image by Dustin Moris Gorski">
<p>You don't have to open Kitematic or VirtualBox to get it running. As I said before, you can happily ignore those two applications, however, if you are curious you can look into VirtualBox and see the VM running as expected:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23872055229_7bb95b3ccb_o.png" alt="oracle-virtualbox-docker-default-vm-details, Image by Dustin Moris Gorski" class="half-width"><img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23613070973_747be21283_o.png" alt="oracle-virtualbox-docker-default-vm, Image by Dustin Moris Gorski" class="half-width"></p>
<p>It's a Linux box loaded from the boot2docker.iso.</p>
<p>Back to the terminal I can now type <code>docker version</code> to get some basic version information about the Docker client and server application:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24131827402_5d6d39cd40_o.png" alt="docker-version, Image by Dustin Moris Gorski">
<p>With that I am good to go with Docker now.</p>
<p>Maybe one thing which is worth mentioning at this point is the initial message in the Docker Terminal:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24131827722_fb7851eb06_o.png" alt="docker-host-ip-address, Image by Dustin Moris Gorski">
<p>The IP address which is shown in the terminal is the endpoint from where you can reach your application later in this tutorial.</p>
<h2 id="creating-a-nancyfx-web-application-for-docker">Creating a NancyFx web application for Docker</h2>
<p>Now it is time to actually create a .NET web application which can run on Mono.</p>
<p>First I create a new project using the template for a regular console application, targeting .NET Framework 4.6.1.</p>
<p>The project is entirely empty except the <code>Program.cs</code> file:</p>
<pre><code>class Program
{
    static void Main(string[] args)
    {
    }
}
</code></pre>
<p>Next I have to install 3 NuGet packages:</p>
<pre><code>Install-Package Nancy
Install-Package Nancy.Hosting.Self
Install-Package Mono.Posix
</code></pre>
<p>The first package installs the <a href="http://nancyfx.org/">NancyFx</a> web framework. Nancy is a lightweight .NET framework for building HTTP based services. You can think of it like a counterpart of ASP.NET, except it has nothing to do with ASP.NET, IIS or the System.Web namespace.</p>
<p>You can still host Nancy applications on IIS, but you can equally host it somewhere else like a console application. This is exactly what we will do and why we install <a href="https://www.nuget.org/packages/Nancy.Hosting.Self/">Nancy.Hosting.Self</a> as the second package.</p>
<p>The third package installs the <a href="https://www.nuget.org/packages/Mono.Posix/">POSIX interface for Mono and .NET</a>.</p>
<p>Having the Nancy packages installed I can now configure an endpoint and start a new <code>Nancy.Hosting.Self.NancyHost</code>:</p>
<pre><code>using System;
using Nancy.Hosting.Self;

class Program
{
    static void Main(string[] args)
    {
        const string url = "http://localhost:8888";
        var uri = new Uri(url);
        var host = new NancyHost(uri);
        host.Start();
    }
}
</code></pre>
<p>This console application will exit immediately after launching and therefore I need to add something to keep it open such as a <code>Console.ReadLine()</code> command. Additionally I want to stop the host when I know the application is going to shut down:</p>
<pre><code>host.Start();
Console.ReadLine();
host.Stop();
</code></pre>
<p>If I only wanted to run this on Windows I would be done now, but on Linux I want to wait for Unix termination signals instead.</p>
<p>A way to detect if the application is running on Mono is with this little helper method:</p>
<pre><code>private static bool IsRunningOnMono()
{
    return Type.GetType("Mono.Runtime") != null;
}
</code></pre>
<p>Another helper method exposes the Unix termination signals:</p>
<pre><code>private static UnixSignal[] GetUnixTerminationSignals()
{
    return new[]
    {
        new UnixSignal(Signum.SIGINT),
        new UnixSignal(Signum.SIGTERM),
        new UnixSignal(Signum.SIGQUIT),
        new UnixSignal(Signum.SIGHUP)
    };
}
</code></pre>
</code></pre>
<p>I add both methods to my <code>Program</code> class and change the <code>Main</code> method to support both Windows and Unix termination:</p>
<pre><code>host.Start();

if (IsRunningOnMono())
{
    var terminationSignals = GetUnixTerminationSignals();
    UnixSignal.WaitAny(terminationSignals);
}
else
{
    Console.ReadLine();
}

host.Stop();
</code></pre>
<p>This is what the final class looks like:</p>
<pre><code>using System;
using Nancy.Hosting.Self;
using Mono.Unix;
using Mono.Unix.Native;

class Program
{
    static void Main(string[] args)
    {
        const string url = "http://localhost:8888";
        Console.WriteLine($"Starting Nancy on {url}...");

        var uri = new Uri(url);
        var host = new NancyHost(uri);
        host.Start();

        if (IsRunningOnMono())
        {
            var terminationSignals = GetUnixTerminationSignals();
            UnixSignal.WaitAny(terminationSignals);
        }
        else
        {
            Console.ReadLine();
        }

        host.Stop();
    }

    private static bool IsRunningOnMono()
    {
        return Type.GetType("Mono.Runtime") != null;
    }

    private static UnixSignal[] GetUnixTerminationSignals()
    {
        return new[]
        {
            new UnixSignal(Signum.SIGINT),
            new UnixSignal(Signum.SIGTERM),
            new UnixSignal(Signum.SIGQUIT),
            new UnixSignal(Signum.SIGHUP)
        };
    }
}
</code></pre>
<p>All I am missing now is at least one Nancy Module which serves HTTP requests. This is done by implementing a new module which derives from <code>Nancy.NancyModule</code> and registering at least one route. I set up a "Nancy: Hello World" message on the root <code>/</code> endpoint and an OS version string on the <code>/os</code> endpoint:</p>
<pre><code>using System;
using Nancy;

public class IndexModule : NancyModule
{
    public IndexModule()
    {
        Get["/"] = _ => "Nancy: Hello World";
        Get["/os"] = _ => Environment.OSVersion.ToString();
    }
}
</code></pre>
<p>If I compile and run the application then I should be able to see the hello world message when visiting <a href="http://localhost:8888">http://localhost:8888</a> and see the OS version at <a href="http://localhost:8888/os">http://localhost:8888/os</a>:</p>
<p><img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23613071903_f5f5255aab_o.png" alt="nancy-hello-world-in-browser, Image by Dustin Moris Gorski" class="half-width"><img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23872053639_939d22a1fa_o.png" alt="nancy-os-version-in-browser, Image by Dustin Moris Gorski" class="half-width"></p>
<h2 id="running-nancyfx-in-a-docker-container">Running NancyFx in a Docker container</h2>
<p>The application is very simple but certainly enough to deploy the first version in a Docker container.</p>
<h3 id="create-a-dockerfile">Create a Dockerfile</h3>
<p>First I need to build a Docker image which will contain the entire application and all of its dependencies. For this I have to create a recipe which defines what exactly goes into the image. The recipe is a <code>Dockerfile</code>, an ordinary human-readable text file with instructions on how to compose an image. It is important to name the file exactly as shown, with a capital "D" and no file extension.</p>
<p>It is good practice to add the Dockerfile into your project folder, because it may change when your project changes:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24157345171_75b2f0c778_o.png" alt="dockerfile-in-project-tree, Image by Dustin Moris Gorski">
<p>I also want to include the Dockerfile in the build output, therefore I have to change the "Build Action" setting to "Content" and "Copy to Output Directory" to "Copy always":</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24157345131_839d132ffa_o.png" alt="dockerfile-properties, Image by Dustin Moris Gorski">
<p>Visual Studio 2015 creates text files with <a href="http://stackoverflow.com/questions/2223882/whats-different-between-utf-8-and-utf-8-without-bom">UTF-8-BOM encoding</a> by default. This adds an additional (invisible) BOM character at the very beginning of the text file and will cause an error when trying to build an image from the Dockerfile. The easiest way to change this is by opening the file in <a href="https://notepad-plus-plus.org/">Notepad++</a> and changing the encoding to UTF-8 (without BOM):</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23611655044_7e9916a8cf_o.png" alt="dockerfile-encoding, Image by Dustin Moris Gorski" class="two-third-width">
<p><em>You can also <a href="http://stackoverflow.com/questions/5406172/utf-8-without-bom#answer-5411486">permanently change Visual Studio to save files without BOM</a>.</em></p>
<p>Now that this is sorted I can open the file and start defining the build steps.</p>
<p>Every Dockerfile has to begin with the <a href="https://docs.docker.com/engine/reference/builder/#from">FROM</a> instruction. This defines the base image to start with. Docker uses a <a href="https://docs.docker.com/engine/introduction/understanding-docker/#how-does-a-docker-image-work">layering system</a> which is one of the reasons why Docker images are so light. You can find many official images to start with at the public <a href="https://hub.docker.com/">Docker Hub</a>.</p>
<p>Fortunately there is already an <a href="https://hub.docker.com/_/mono/">official Mono repository</a> which we can use. The most recent image is <a href="https://github.com/mono/docker/blob/39c80bc024a4797c119c895fda70024fbc14d5b9/4.2.1.102/Dockerfile">4.2.1.102</a> at the time of writing. As you can see the Mono image itself has the <a href="https://github.com/tianon/docker-brew-debian/blob/bd71f2dfe1569968f341b9d195f8249c8f765283/wheezy/Dockerfile">debian:wheezy</a> image from the <a href="https://hub.docker.com/_/debian/">official Debian repository</a> as its base. The Debian image has the empty <a href="https://hub.docker.com/_/scratch/">scratch</a> image as its base. When we use the Mono image we essentially build a new layer on top of an existing tree:</p>
<pre><code>scratch
  \___ debian:wheezy
        \___ mono:4.2.1.102
              \___ {our repository}:{tag}
</code></pre>
<p>If you look at the <a href="https://hub.docker.com/_/mono/">official Mono repository</a> you can see that the latest Mono image has multiple tags:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24213790396_54b8ec37c9_o.png" alt="mono-latest-image-tag, Image by Dustin Moris Gorski">
<p>It depends on your use case which tag makes the most sense for your application. Currently they have all been built from the same Dockerfile, but only tag <code>4.2.1.102</code> is explicit enough to always guarantee the exact same build. Personally I would choose this one for a production application:</p>
<pre><code>FROM mono:4.2.1.102</code></pre>
<p>The next two instructions are very straightforward. I want to create a new folder called <code>/app</code> and copy all relevant files, which are required to execute the application, into this folder. Remember that the Dockerfile gets copied into the build output folder. This means that I basically have to copy everything from the same directory where the Dockerfile sits into the <code>/app</code> folder:</p>
<pre><code>RUN mkdir /app
COPY . /app
</code></pre>
<p>My Nancy application has been configured to listen on port 8888. With the <a href="https://docs.docker.com/engine/reference/builder/#expose">EXPOSE</a> instruction I inform Docker that the container listens on this specific port:</p>
<pre><code>EXPOSE 8888
</code></pre>
<p>Finally I have to run the application with Mono:</p>
<pre><code>CMD ["mono", "/app/DockerDemoNancy.Host.exe", "-d"]
</code></pre>
<p>This is what the final Dockerfile looks like:</p>
<pre><code>FROM mono:4.2.1.102
RUN mkdir /app
COPY . /app
EXPOSE 8888
CMD ["mono", "/app/DockerDemoNancy.Host.exe", "-d"]
</code></pre>
<p>There is a lot more you can do with a Dockerfile. Check out the <a href="https://docs.docker.com/engine/reference/builder/">Dockerfile reference</a> for a complete list of available instructions.</p>
<h3 id="build-a-docker-image">Build a Docker image</h3>
<p>Building a Docker image is extremely easy. Back in the Docker Terminal I navigate to the <code>/bin/Release/</code> folder of my Nancy application:</p>
<pre><code>cd /c/github/docker-demo-nancy/dockerdemonancy.host/bin/release
</code></pre>
<p>Next I run the <code>docker build</code> command and tag the image with the <code>-t</code> option:</p>
<pre><code>docker build -t docker-demo-nancy:0.1.0 .
</code></pre>
<p>Don't forget the dot at the end. It is the path to the build context, the directory whose contents (including the Dockerfile) are sent to the Docker daemon. Because I already navigated into the <code>/bin/Release/</code> folder a single dot is all I need.</p>
<p>The build process will go through each instruction and create a new layer after executing it. The first time you build an image you are likely not going to have the <code>mono:4.2.1.102</code> image on disk and Docker will pull it from the public registry (Docker Hub):</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23611655124_274479aec6_o.png" alt="docker-build-command, Image by Dustin Moris Gorski">
<p>As you can see the FROM instruction requires Docker to download 6 different images. This is because the <code>mono:4.2.1.102</code> image and all of its ancestors (<code>debian:wheezy</code>) have 6 instructions in total, which result in 6 layered images.</p>
<p>A better way of visualizing this is by inspecting our own image.</p>
<p>Once the build is complete we can list all available images with the <code>docker images</code> command:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23613072363_d2a56c9f20_o.png" alt="docker-images-command, Image by Dustin Moris Gorski">
<p>With <code>docker history {image-id}</code> I can see the entire history of the image, each layer it is made of and the command which is responsible for the layer:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24213790946_7e34d73f98_o.png" alt="docker-history, Image by Dustin Moris Gorski">
<p>This is quite clever! Anyway, I am getting carried away here, the point is we just created our first Docker image!</p>
<p><em>If you want to upload the image into a repository on Docker Hub or another private registry you can use <a href="https://docs.docker.com/engine/reference/commandline/tag/"><code>docker tag</code></a> to tag an existing image with a new tag and <a href="https://docs.docker.com/engine/reference/commandline/push/"><code>docker push</code></a> to upload it to the registry.</em></p>
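<p>As a concrete sketch of that workflow (the Docker Hub username below is a placeholder, replace it with your own account; the commands are guarded so the script is also safe to run on a machine without Docker or the demo image):</p>

```shell
# Placeholder Docker Hub namespace; replace with your own account name.
DOCKER_USER="your-hub-username"
IMAGE="docker-demo-nancy:0.1.0"

# Only attempt this when Docker is installed and the image exists locally.
if command -v docker >/dev/null 2>&1 && docker inspect "$IMAGE" >/dev/null 2>&1; then
  # Add a second tag pointing at the same local image.
  docker tag "$IMAGE" "$DOCKER_USER/$IMAGE"
  # Upload it to the registry (requires a prior "docker login").
  docker push "$DOCKER_USER/$IMAGE"
else
  echo "Docker or the demo image is not available; nothing to push."
fi
```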
<h3 id="create-and-run-a-docker-container">Create and run a Docker container</h3>
<p>Running a Docker container couldn't be easier. Use the <code>docker run</code> command to create and run a container in one go:</p>
<pre><code>docker run -d -p 8888:8888 docker-demo-nancy:0.1.0
</code></pre>
<p>The <code>-d</code> option tells Docker to run the container in detached mode and the <code>-p 8888:8888</code> option maps the container's port 8888 to the host's port 8888.</p>
<p>Afterwards you can run <code>docker ps</code> to list all currently running containers:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24213790636_33b4fa1826_o.png" alt="docker-ps, Image by Dustin Moris Gorski">
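<p>Beyond plain <code>docker ps</code>, a couple of filter options are handy for finding and stopping containers later. A small sketch, assuming the image tag from above (<code>--filter ancestor=...</code> matches containers created from a given image; guarded so it also runs without Docker installed):</p>

```shell
# Image tag used earlier in this post.
IMAGE="docker-demo-nancy:0.1.0"

if command -v docker >/dev/null 2>&1; then
  # Show running containers created from our image.
  docker ps --filter "ancestor=$IMAGE"
  # Stop them all; -q prints only container IDs, xargs -r skips on empty input.
  docker ps -q --filter "ancestor=$IMAGE" | xargs -r docker stop
else
  echo "Docker is not installed; skipping container listing."
fi
```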
<p>Great, now pasting <code>{docker-ip}:8888</code> (the IP address from the beginning) into a browser should return the Nancy hello world message:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23613071903_f5f5255aab_o.png" alt="nancy-hello-world-in-browser-from-docker-container, Image by Dustin Moris Gorski">
<p>And going to <code>{docker-ip}:8888/os</code> should return "Unix 4.1.13.2":</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23611653874_8f7b112421_o.png" alt="nancy-os-version-in-browser-from-docker-container, Image by Dustin Moris Gorski">
<p>This is pretty awesome. With almost no effort we managed to run a Nancy .NET application on Mono in a Docker container!</p>
<h4 id="tip-map-the-docker-ip-address-to-a-friendly-dns">Tip: map the Docker IP address to a friendly DNS</h4>
<p>You can map the Docker IP address to a friendly DNS by editing your Windows hosts file:</p>
<ol>
<li>Open <code>C:\Windows\System32\drivers\etc\hosts</code> as an administrator</li>
<li>Add a new mapping to a memorable DNS, e.g: <code>192.168.99.100 docker.local</code></li>
<li>Save the file</li>
</ol>
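<p>On Linux or macOS the equivalent entry goes into <code>/etc/hosts</code> (editing it requires root). The sketch below works against a scratch copy so it is safe to run; the IP address is the example value from above:</p>

```shell
# Example docker-machine IP from this post; check yours with "docker-machine ip".
DOCKER_IP="192.168.99.100"
HOSTS_LINE="$DOCKER_IP docker.local"

# Demonstrate against a scratch copy instead of the real /etc/hosts.
cp /etc/hosts ./hosts.demo 2>/dev/null || touch ./hosts.demo

# Append the mapping only if it is not present yet (idempotent).
grep -q "docker.local" ./hosts.demo || echo "$HOSTS_LINE" >> ./hosts.demo
```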
<p>Now you can type <code>docker.local:8888</code> into your browser and get the same result:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24131827632_efc4fa767a_o.png" alt="docker-local-host-resolution, Image by Dustin Moris Gorski">
<h2 id="configure-environment-specific-settings-with-docker">Configure environment specific settings with Docker</h2>
<p>The last thing I would like to show in this blog post is how to manage environment specific variables with a Docker container.</p>
<p>I think it is pretty obvious that you must never change a Docker image when you promote your Docker container from one environment to another. This means that the app.config which has been packed into the image must be the same for every environment. Even though this is not a new practice I still see a lot of people transforming config files between environments. This has to stop and Docker makes it easy to load environment variables when launching a container.</p>
<p>Let's make a small change to the Nancy IndexModule:</p>
<pre><code>public IndexModule()
{
    <strong>var secret = Environment.GetEnvironmentVariable("Secret");</strong>

    Get["/"] = _ => "Nancy: Hello World";
    Get["/os"] = _ => Environment.OSVersion.ToString();
    <strong>Get["/secret"] = _ => secret ?? "not set";</strong>
}
</code></pre>
<p>It is a fairly straightforward change. I load an environment setting with the name "Secret" into a local variable and expose it later.</p>
<p>This environment setting could be anything, but typically it includes sensitive data like encryption keys, database connection strings or other environment specific settings such as error log paths.</p>
<p>Needless to say, exposing the secret to the public is only done for the purpose of this demo, to show that it works.</p>
<p>Now I need to compile the application and build a new Docker image again, following the same instructions as before. I tagged the new image with <code>docker-demo-nancy:0.2.0</code>.</p>
<p>Before I launch a new container I want to stop the current one to avoid a clash on port 8888, otherwise I would happily run them side by side.</p>
<p>After I ran <code>docker stop {container-id}</code> I launch a new container with:</p>
<pre><code>docker run -d -p 8888:8888 -e Secret=S3cReT docker-demo-nancy:0.2.0</code></pre>
<p>The <code>docker run</code> command takes one or more <code>-e</code> options to specify environment settings. There are a few <a href="https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables-e-env-env-file">more options for specifying environment settings</a>, but the only one which you would ever want to use in a live environment is the <a href="https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables-e-env-env-file"><code>--env-file</code></a> option to load all environment variables from an external file.</p>
<p>This has many advantages:</p>
<ul>
<li>You can easily ship environment settings to different environments</li>
<li>You can easily provide many environment settings</li>
<li>Sensitive data will not show up in logs</li>
<li>The path to the file can be static which makes it easier to configure a scheduler to run containers in production</li>
</ul>
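<p>A minimal sketch of the <code>--env-file</code> workflow (the file name and its values are made up for this demo; the run command is guarded so the script also works on a machine without Docker or the demo image):</p>

```shell
# Write one KEY=value pair per line; no quotes and no "export" keyword.
cat > demo.env <<'EOF'
Secret=S3cReT
ErrorLogPath=/var/log/parcel-api
EOF

# Pass the whole file to the container at launch time.
if command -v docker >/dev/null 2>&1 && docker inspect docker-demo-nancy:0.2.0 >/dev/null 2>&1; then
  docker run -d -p 8888:8888 --env-file demo.env docker-demo-nancy:0.2.0
else
  echo "Docker or the demo image is not available; demo.env written anyway."
fi
```

<p><em>Note that values in an env file are taken literally: there is no quoting and no variable interpolation.</em></p>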
<p>After launching the container with the secret setting I can run <code>docker inspect {container-id}</code> to load a whole bunch of information on the container. One piece of information is the environment variables which have been loaded for that container:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/24139693382_60ff29cf5f_o.png" alt="docker-inspect-env-vars, Image by Dustin Moris Gorski">
<p>Going to <a href="http://docker.local:8888/secret">docker.local:8888/secret</a> will expose the secret environment variable now:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2016-01-07/23620903853_fffa6cfb9f_o.png" alt="docker-secret-in-browser, Image by Dustin Moris Gorski">
<h2 id="recap">Recap</h2>
<p>This brings me to the end of my first blog post on running .NET applications in Docker. I hope I could shed more light on some of the Docker basics and demonstrate how quickly and easily you can build .NET applications for Docker.</p>
<p>For this demo I chose the NancyFx framework to build a web application, but I could have equally written a regular .NET application which runs on Mono, or used ASP.NET 5, which runs not only on Mono but also on the new cross-platform CoreCLR.</p>
<p>Obviously there is a lot more that comes into running .NET apps in Docker which I haven't covered in this blog post. Some of these things are debugging applications in a Docker container, building Docker images from your CI and managing containers in production. Watch out for further blog posts where I will drill down into some of those topics!</p>
<p><em>The full <a href="https://github.com/dustinmoris/Docker-Demo-Nancy">source code of the demo application</a> can be found on GitHub. I have also uploaded the Docker images to <a href="https://hub.docker.com/r/dustinmoris/docker-demo-nancy/">my public repository on Docker Hub</a>.</em></p>
https://dusted.codes/running-nancyfx-in-a-docker-container-a-beginners-guide-to-build-and-run-dotnet-applications-in-docker
[email protected] (Dustin Moris Gorski)https://dusted.codes/running-nancyfx-in-a-docker-container-a-beginners-guide-to-build-and-run-dotnet-applications-in-docker#disqus_threadThu, 07 Jan 2016 00:00:00 +0000https://dusted.codes/running-nancyfx-in-a-docker-container-a-beginners-guide-to-build-and-run-dotnet-applications-in-dockerdockernancyfxdotnetDiagnosing CSS issues on mobile devices with Google Chrome bookmarklets<p>Yesterday, when I was browsing my blog on my mobile phone, I discovered a small CSS issue on one of the pages. One of my recent blog posts had a horizontal scrollbar which shouldn't have been there. A page element caused an overflow, but it was not apparent which element was responsible.</p>
<p>When I tried to diagnose the issue on my computer I struggled to replicate it to the same extent as it was present on my mobile phone. I was too lazy to go through the entire page source to search for the needle in the haystack and decided to quickly create a <a href="https://support.google.com/chrome/answer/95745?hl=en">Google Chrome bookmarklet</a> to help me with the investigation.</p>
<p>First I had to make the invisible visible.</p>
<p>With a little bit of Google's help and playing around in the Google Chrome Console (<kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>J</kbd>) I put this little JavaScript snippet together:</p>
<pre><code>[].forEach.call(
    document.getElementsByTagName("*"),
    function(e) {
        e.style.outline = "1px solid red";
    });</code></pre>
<p>When I execute this in the console it will outline every element on the page:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-31/23989595201_cf8f8f3165_o.png" alt="page-with-outlined-elements, Image by Dustin Moris Gorski">
<p>With this it should be easy to spot the overflowing element. Now I had to find a way to execute it inside the mobile version of Google Chrome.</p>
<p>For this purpose I created a new bookmarklet on my Desktop, which then got automatically synchronised to my phone.</p>
<p>A bookmarklet in Google Chrome allows you to execute JavaScript code on an already rendered page.</p>
<p>This is how you create it:</p>
<ol>
<li>
<p><kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>O</kbd> in Google Chrome</p>
</li>
<li>
<p>Right click "Bookmarks bar" on the left</p>
</li>
<li>
<p>Click Add Page</p>
</li>
<li>
<p>Pick a random name, e.g.: Outline Elements</p>
</li>
<li>
<p>Type in the following snippet:</p>
<p><code>javascript: [].forEach.call(document.getElementsByTagName("*"), function(e) {e.style.outline = "1px solid red";}); </code></p>
</li>
<li>
<p>Done!</p>
</li>
</ol>
<p>When I click on the newly created bookmarklet I will get the same result as if I had executed the snippet in the console.</p>
<p>Only seconds later it appeared on my phone as well:</p>
<img class="half-width" src="https://cdn.dusted.codes/images/blog-posts/2015-12-31/23447431843_deb816c10b_o.png" alt="mobile-google-chrome-bookmarks-bar, Image by Dustin Moris Gorski">
<p>However, when I click on the bookmarklet from the bookmarks menu on my phone everything freezes and nothing happens. It turns out that I have to execute it from the address bar.</p>
<p>Just start typing the name of your bookmarklet and Google Chrome will auto-suggest the item for you:</p>
<img class="half-width" src="https://cdn.dusted.codes/images/blog-posts/2015-12-31/23991652851_1d9acee307_o.png" alt="outline-elements-bookmarklet-in-mobile-google-chrome, Image by Dustin Moris Gorski">
<p>Executing it from the address bar delivers the correct result:</p>
<img class="half-width" src="https://cdn.dusted.codes/images/blog-posts/2015-12-31/24074392275_2446d6a4fd_o.png" alt="page-with-outlined-elements-on-mobile-phone, Image by Dustin Moris Gorski">
<p>This little trick quickly helped me to find the overflowing element on my phone without having to modify the original website. I use the same technique to remove advertising banners and other blocking content on several websites which normally don't display the entire content when you are not logged in (or a paying customer).</p>
https://dusted.codes/diagnosing-css-issues-on-mobile-devices-with-google-chrome-bookmarklets
[email protected] (Dustin Moris Gorski)https://dusted.codes/diagnosing-css-issues-on-mobile-devices-with-google-chrome-bookmarklets#disqus_threadThu, 31 Dec 2015 00:00:00 +0000https://dusted.codes/diagnosing-css-issues-on-mobile-devices-with-google-chrome-bookmarkletscssgoogle-chromeDesign, test and document RESTful APIs using RAML in .NET<p>Building a RESTful API is easy. Building a RESTful API which is easy to consume is more difficult. There are three key elements which make a good API:</p>
<ul>
<li>Intuitive design</li>
<li>Good documentation</li>
<li>Documentation which actually matches the implementation</li>
</ul>
<p>Intuitive API design is obviously very important, but equally important and often neglected is good and complete documentation which makes it easier to build against your API.</p>
<p>Having said that, we all know how difficult it is to keep documentation up to date. It is only loosely coupled to the actual implementation, without any enforcement other than a human trial-and-error process.</p>
<p>In this article I would like to demonstrate how we can close this gap by building a RESTful API with a design- and test first approach using <a href="http://raml.org/">RAML</a>.</p>
<p>I will showcase an entire end to end scenario by building a simple demo API and covering the following steps:</p>
<ol>
<li><a href="#design-an-api-using-raml">Design an API using RAML</a></li>
<li><a href="#generate-a-client-from-the-raml-document">Generate a client from the RAML document</a></li>
<li><a href="#write-tests-using-the-auto-generated-client">Write tests using the auto-generated client</a></li>
<li><a href="#implement-the-api-to-satisfy-the-tests">Implement the API to satisfy the tests</a></li>
<li><a href="#document-and-review-an-api-using-the-anypoint-platform">Document and review an API using the Anypoint Platform</a></li>
</ol>
<p>But first let me briefly introduce you to RAML:</p>
<h2 id="raml">RAML</h2>
<p><a href="http://raml.org/">RAML</a> stands for RESTful API Modeling Language and <em>this</em> is exactly what it delivers.</p>
<p>If you have worked with <a href="http://swagger.io/">Swagger</a> or <a href="https://apiblueprint.org/">API Blueprint</a> before then this should be familiar, except that RAML is designed to be human readable and remarkably easy to use.</p>
<p>At the time of writing there are two public specifications:</p>
<ul>
<li><a href="https://github.com/raml-org/raml-spec/blob/master/versions/raml-08/raml-08.md">RAML 0.8</a></li>
<li><a href="https://github.com/raml-org/raml-spec/blob/master/versions/raml-10/raml-10.md/">RAML 1.0 RC</a></li>
</ul>
<p>In this blog post I will be using RAML 0.8 and assume that you are familiar enough with the spec to follow my simple examples as part of the demo.</p>
<p>For further reading I would recommend to go through the official RAML tutorials explaining the basic concepts and more advanced features in your own time and at your own pace:</p>
<ul>
<li><a href="http://raml.org/developers/raml-100-tutorial">RAML 100 Tutorial (Basics)</a></li>
<li><a href="http://raml.org/developers/raml-200-tutorial">RAML 200 Tutorial (Advanced)</a></li>
</ul>
<p>Now that you know what RAML is I will jump straight into the first part where I'll be using RAML to design an API:</p>
<h2 id="design-an-api-using-raml">1. Design an API using RAML</h2>
<p>As I mentioned earlier, for the purpose of this demo I would like to build a very rudimentary fake parcel delivery API with a single endpoint supporting two operations:</p>
<ul>
<li><em><strong>GET</strong> /status/{parcelId}</em> will return the status of a parcel</li>
<li><em><strong>PUT</strong> /status/{parcelId}</em> will update the status of a parcel</li>
</ul>
<p>RAML is a <a href="http://www.yaml.org/">YAML</a> based language and designed for human readability. The beauty of this is that you can write RAML in a basic editor without fancy syntax highlighting and it will be still easy to read and understand.</p>
<p>However, there is a good <a href="https://atom.io/">Atom</a> plugin called <a href="http://apiworkbench.com/">API Workbench</a> which I am using to kick start my API:</p>
<pre><code>#%RAML 0.8
title: Parcel Delivery API
version: v1
baseUri: http://localhost/raml-demo-api/{version}
protocols: [ HTTP, HTTPS ]</code></pre>
<p><em>Sidenote:
I will not go through every single line of the sample code, but provide enough context so that it should be easy to follow.</em></p>
<p>At the top of the document I specified the RAML version, followed by the title of the API, the API version and the basic URI with a version placeholder. This allows me to introduce breaking changes in the future. The API shall also be called from both protocols, HTTP and HTTPS.</p>
<p>Next I define a single endpoint to set and get status information for a given parcel ID:</p>
<pre><code>...
/status/{parcelId}:
  displayName: Parcel Status Information
  uriParameters:
    parcelId:
      displayName: Parcel ID
      type: string
      required: true
      minLength: 6
      maxLength: 6
      example: 123456</code></pre>
<p>After this I define the contract for the GET operation, which shall return the current status of a parcel:</p>
<pre><code>...
  get:
    description: Retrieves the current status for the specified parcel ID.
    responses:
      200:
        description: Current status.
        body:
          application/json:
            schema: |
              {
                "$schema": "http://json-schema.org/draft-04/schema#",
                "title": "Delivery Status",
                "type": "object",
                "properties": {
                  "status": {
                    "description": "The current status of the delivery.",
                    "type": "string"
                  },
                  "updated": {
                    "description": "The date time of the last status update.",
                    "type": "string"
                  }
                }
              }
            example: |
              {
                "status": "Parcel is out for delivery.",
                "updated": "2015-12-09T16:53:19.5168335+00:00"
              }</code></pre>
<p>As you can see there is probably not much I have to explain. The GET operation returns a successful response with the 200 HTTP status code and a JSON payload containing the current status. Note how RAML allows me to provide an example alongside the schema. This will be particularly useful at a later point.</p>
<p>I am pleased with that and apply something similar for the PUT verb:</p>
<pre><code>...
  put:
    description: Creates or updates the status for the specified parcel ID.
    body:
      application/json:
        schema: |
          {
            "$schema": "http://json-schema.org/draft-04/schema#",
            "title": "Status Update",
            "type": "object",
            "properties": {
              "status": {
                "description": "The new status update message.",
                "type": "string"
              }
            }
          }
        example: |
          {
            "status": "Delivered and signed by customer."
          }
    responses:
      201:
        description: The status has been successfully updated.</code></pre>
<p>The only difference is that the PUT operation expects a JSON object in the HTTP body and returns the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.2">201 status code</a> on success.</p>
<p>Now I'm almost done with designing my API. The only thing missing is to describe what happens if a consumer provides an invalid parcel ID. It might either not exist or be in the wrong format.</p>
<p>Because this applies to both operations, the GET and the PUT on my endpoint, I will define a <a href="http://docs.raml.org/specs/1.0/#raml-10-spec-resource-types-and-traits">trait</a> with two additional error responses, which can be shared by multiple endpoints in RAML:</p>
<pre><code>...
traits:
  <strong>- requiresValidParcelId:</strong>
      usage: |
        Apply this to any method which requires a valid Parcel ID in the request.
      responses:
        406:
          description: Parcel ID was in an incorrect format.
          body:
            application/json:
              <strong>schema: ErrorMessage</strong>
              example: |
                {
                  "message": "Parcel ID has to be 6 characters long and may only contain digits."
                }
        404:
          description: Could not find the specified parcel ID.
          body:
            application/json:
              <strong>schema: ErrorMessage</strong>
              example: |
                {
                  "message": "Parcel ID not found."
                }</code></pre>
<p>Every trait has a unique name. In this case I called it <code>requiresValidParcelId</code>.</p>
<p>Perhaps you noticed the <code>schema: ErrorMessage</code> in the declaration of the response body. Shared schemas are another feature of RAML. I created an <code>ErrorMessage</code> schema to describe the JSON payload of an error response:</p>
<pre><code>...
schemas:
  - ErrorMessage: |
      {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "title": "Error Message",
        "type": "object",
        "properties": {
          "message": {
            "description": "The error message of the error.",
            "type": "string"
          }
        }
      }</code></pre>
<p>Overall it follows the same principle as the success response in my previous example, except that this one is declared in a separate section, which allows it to be reused in multiple places within the RAML document.</p>
<p>At last I need to hook up my endpoint with the trait:</p>
<pre><code>...
/status/{parcelId}:
  displayName: Parcel Status Information
  uriParameters:
    parcelId:
      displayName: Parcel ID
      type: string
      required: true
      minLength: 6
      maxLength: 6
      example: 123456
  <strong>is: [ requiresValidParcelId ]</strong></code></pre>
<p>The entire end result looks as follows:</p>
<pre><code>#%RAML 0.8
title: Parcel Delivery API
version: v1
baseUri: https://raml-demo-api.azurewebsites.net/{version}
protocols: [ HTTP, HTTPS ]
schemas:
  - ErrorMessage: |
      {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "title": "Error Message",
        "type": "object",
        "properties": {
          "message": {
            "description": "The error message of the error.",
            "type": "string"
          }
        }
      }
traits:
  - requiresValidParcelId:
      usage: |
        Apply this to any method which requires a valid Parcel ID in the request.
      responses:
        406:
          description: Parcel ID was in an incorrect format.
          body:
            application/json:
              schema: ErrorMessage
              example: |
                {
                  "message": "Parcel ID has to be 6 characters long and may only contain digits."
                }
        404:
          description: Could not find the specified parcel ID.
          body:
            application/json:
              schema: ErrorMessage
              example: |
                {
                  "message": "Parcel ID not found."
                }
/status/{parcelId}:
  displayName: Parcel Status Information
  uriParameters:
    parcelId:
      displayName: Parcel ID
      type: string
      required: true
      minLength: 6
      maxLength: 6
      example: 123456
  is: [ requiresValidParcelId ]
  get:
    description: Retrieves the current status for the specified parcel ID.
    responses:
      200:
        description: Current status.
        body:
          application/json:
            schema: |
              {
                "$schema": "http://json-schema.org/draft-04/schema#",
                "title": "Delivery Status",
                "type": "object",
                "properties": {
                  "status": {
                    "description": "The current status of the delivery.",
                    "type": "string"
                  },
                  "updated": {
                    "description": "The date time of the last status update.",
                    "type": "string"
                  }
                }
              }
            example: |
              {
                "status": "Parcel is out for delivery.",
                "updated": "2015-12-09T16:53:19.5168335+00:00"
              }
  put:
    description: Creates or updates the status for the specified parcel ID.
    body:
      application/json:
        schema: |
          {
            "$schema": "http://json-schema.org/draft-04/schema#",
            "title": "Status Update",
            "type": "object",
            "properties": {
              "status": {
                "description": "The new status update message.",
                "type": "string"
              }
            }
          }
        example: |
          {
            "status": "Delivered and signed by customer."
          }
    responses:
      201:
        description: The status has been successfully updated.</code></pre>
<p><em>You can also explore the full <a href="https://github.com/dustinmoris/RAML-Demo/blob/master/api.raml">RAML document</a> in my <a href="https://github.com/dustinmoris/RAML-Demo/">RAML-Demo</a> GitHub repository.</em></p>
<p>RAML has a lot more to offer than what I showed in this basic example, but hopefully this gives you a rough idea of how intuitive and powerful it can be!</p>
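<p>For instance, larger specifications are usually split into smaller files with RAML's <code>!include</code> mechanism. The file paths below are made up purely for illustration, but the mechanism itself is standard RAML 0.8:</p>

```raml
#%RAML 0.8
title: Parcel Delivery API
version: v1
baseUri: http://localhost/raml-demo-api/{version}
schemas:
  # hypothetical file paths, shown only to illustrate !include
  - ErrorMessage: !include schemas/error-message.json
/status/{parcelId}: !include resources/parcel-status.raml
```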
<p>Additionally, there is a lot of useful tooling built around RAML. One of my favourites is the RAML Tools for .NET, which brings me to the second part of this blog post.</p>
<h2 id="generate-a-client-from-the-raml-document">2. Generate a client from the RAML document</h2>
<p>Now that I have a detailed specification of what my API should look like, it is time to open up Visual Studio and get my hands dirty.</p>
<p>First I create an empty test project and include the RAML file (api.raml) in a solution folder to keep everything together:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23265040193_87ebcfdf49_o.png" alt="RAML-Demo-Solution-Tree, Image by Dustin Moris Gorski">
<p>For the next part I have to install the <a href="https://github.com/mulesoft-labs/raml-dotnet-tools">RAML Tools for .NET</a> Visual Studio extension:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23865723396_88003023fd_o.png" alt="RAML-Demo-Visual-Studio-RAML-Extension, Image by Dustin Moris Gorski">
<p>After a successful install I have an additional context menu when I right click the "References" item underneath my test project:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23596120010_2895ee90f2_o.png" alt="RAML-Demo-Add-RAML-Reference, Image by Dustin Moris Gorski">
<p>A click on that menu item pops up a largely self-explanatory dialog:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23783503252_f6263a16ef_o.png" alt="RAML-Demo-Add-RAML-Reference-Dialog, Image by Dustin Moris Gorski">
<p>I select the Upload option and navigate to the api.raml inside my solution folder. After confirmation I am presented with an Import RAML dialog:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23865723466_db487ddba7_o.png" alt="RAML-Demo-Create-Client, Image by Dustin Moris Gorski">
<p>The import process automatically detected my single endpoint and the only thing I had to change was the default client name to "ParcelDeliveryApiClient" in case I want to import another API at a later point.</p>
<p>Hitting the Import button finishes the remaining work and once completed I am seeing a new API reference in my project tree:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23596119830_abde113f83_o.png" alt="RAML-Demo-RAML-References-in-Project, Image by Dustin Moris Gorski">
<p>This was a very smooth and painless process, and once the import has succeeded I am able to create an instance of <code>ParcelDeliveryApiClient</code> in a new class file:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23797380272_a633140864_o.png" alt="RAML-Demo-Aut-Generated-Client-in-Code, Image by Dustin Moris Gorski">
<p>Amazing, let's explore the auto-generated client by writing some tests in the next step!</p>
<h2 id="write-tests-using-the-auto-generated-client">3. Write tests using the auto-generated client</h2>
<p>With the <code>ParcelDeliveryApiClient</code> I can write integration tests against a real endpoint. At the moment I don't have a working API running anywhere so I define a provisional URI and a couple more parameters for my first test:</p>
<pre><code>[TestFixture]
public class ParcelDeliveryApiTests
{
    [Test]
    public async Task IntegrationTest()
    {
        const string endpoint = "http://localhost/raml-demo-api/v1/status";
        const string parcelId = "123456";
        const string status = "Parcel is out for delivery.";
    }
}</code></pre>
<p>Next I initialize the client and send a PUT request for the parcel ID 123456:</p>
<pre><code>[Test]
public async Task IntegrationTest()
{
    const string endpoint = "http://localhost/raml-demo-api/v1/status";
    const string parcelId = "123456";
    const string status = "Parcel is out for delivery.";

    <strong>var parcelDeliveryApiClient = new ParcelDeliveryApiClient(endpoint);
    var putResult =
        await parcelDeliveryApiClient.StatusParcelId.Put(
            new StatusParcelIdPutRequestContent
            {
                Status = status
            },
            parcelId);</strong>
}</code></pre>
<p>The amazing thing is that the entire client code has been auto-generated during the import. Every member of the class such as the <code>StatusParcelId</code> or the <code>Put(...)</code> method, as well as the <code>StatusParcelIdPutRequestContent</code> DTO class have been auto-magically created for me.</p>
<p>Remember, all I have done so far was to describe my API using RAML, and with a few additional clicks in Visual Studio I am now able to write fully fledged integration tests against an API which doesn't even exist yet!</p>
<p>I find this pretty cool.</p>
<p>The expected result is HTTP status code 201:</p>
<pre><code>[Test]
public async Task IntegrationTest()
{
    const string endpoint = "http://localhost/raml-demo-api/v1/status";
    const string parcelId = "123456";
    const string status = "Parcel is out for delivery.";

    var parcelDeliveryApiClient = new ParcelDeliveryApiClient(endpoint);
    var putResult =
        await parcelDeliveryApiClient.StatusParcelId.Put(
            new StatusParcelIdPutRequestContent
            {
                Status = status
            },
            parcelId);

    <strong>Assert.AreEqual(HttpStatusCode.Created, putResult.StatusCode);</strong>
}</code></pre>
<p>For this integration test I'd like to add one more check:</p>
<pre><code>[Test]
public async Task IntegrationTest()
{
    const string endpoint = "http://localhost/raml-demo-api/v1/status";
    const string parcelId = "123456";
    const string status = "Parcel is out for delivery.";

    var parcelDeliveryApiClient = new ParcelDeliveryApiClient(endpoint);
    var putResult =
        await parcelDeliveryApiClient.StatusParcelId.Put(
            new StatusParcelIdPutRequestContent
            {
                Status = status
            },
            parcelId);

    Assert.AreEqual(HttpStatusCode.Created, putResult.StatusCode);

    <strong>var getResult = await parcelDeliveryApiClient.StatusParcelId.Get(parcelId);

    Assert.AreEqual(HttpStatusCode.OK, getResult.StatusCode);
    Assert.AreEqual(status, getResult.Content.StatusParcelIdGetOKResponseContent.Status);</strong>
}</code></pre>
<p>After the PUT I am firing a GET with the same parcel ID and expect another successful response with the updated status.</p>
<p>When I run this test it will fail on the first assertion, because at the moment there is nothing behind the endpoint yet, but I am going to fix this very soon.</p>
<p>This test obviously doesn't cover everything from the RAML document, but at this point it should be clear that with the auto-generated client I can test every single aspect of my API without having to write any client code myself.</p>
<h3 id="coupling-the-raml-file-to-the-api">Coupling the RAML file to the API</h3>
<p>So how is this better than a normal integration test? The key benefit is that the client is a 1:1 replica of the RAML file. If the API changes I will have to update my tests, which requires me to update the client, which subsequently forces me to update the RAML first.</p>
<p>Besides that it took me only 10 seconds to generate a perfect abstraction of my API which can be used for more than just writing tests.</p>
<h2 id="implement-the-api-to-satisfy-the-tests">4. Implement the API to satisfy the tests</h2>
<p>I have to admit this part has very little to do with RAML, but I thought it would be great to provide a full end-to-end example as part of this blog post.</p>
<p>For that reason I'll make it short and fast forward the implementation to the point where it will satisfy the test from above:</p>
<pre><code>public class StatusModule : NancyModule
{
    // Simple in-memory store, just enough to make the test pass
    private static readonly Dictionary&lt;string, StatusInformation&gt; Statuses =
        new Dictionary&lt;string, StatusInformation&gt;();

    public StatusModule() : base("/v1/status")
    {
        Get["/{parcelId}"] = ctx =>
        {
            if (!Statuses.ContainsKey(ctx.parcelId))
                return HttpStatusCode.NotFound;

            return new JsonResponse(
                Statuses[ctx.parcelId],
                new DefaultJsonSerializer());
        };

        Put["/{parcelId}"] = ctx =>
        {
            dynamic obj = JsonConvert.DeserializeObject(Request.Body.AsString());

            Statuses[ctx.parcelId] = new StatusInformation
            {
                Status = obj.status.Value,
                Updated = DateTime.UtcNow.ToString("o")
            };

            return HttpStatusCode.Created;
        };
    }
}</code></pre>
<p>This (quick and dirty) snippet doesn't implement the entire API, but enough to make the test go green.</p>
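<p>One of the gaps is the 406 trait from the RAML document, which rejects malformed parcel IDs. The rule itself (exactly six characters, digits only) is easy to capture in a small helper; the <code>ParcelIdValidator</code> name below is my own invention, not part of the demo project, and is just a sketch of how that check could look:</p>

```csharp
using System.Text.RegularExpressions;

public static class ParcelIdValidator
{
    // Mirrors the RAML constraints on the parcelId URI parameter:
    // minLength: 6, maxLength: 6, digits only.
    private static readonly Regex ParcelIdPattern =
        new Regex("^[0-9]{6}$", RegexOptions.Compiled);

    public static bool IsValid(string parcelId)
    {
        return parcelId != null && ParcelIdPattern.IsMatch(parcelId);
    }
}
```

<p>A route handler (or a shared before-hook) could then return <code>HttpStatusCode.NotAcceptable</code> whenever <code>IsValid</code> returns <code>false</code>.</p>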
<p>After cheating myself through step 4 let's move on to the next and final part of this article and look at another important criterion of building a RESTful API.</p>
<h2 id="document-and-review-an-api-using-the-anypoint-platform">5. Document and review an API using the Anypoint Platform</h2>
<p>Before going any further let's quickly recap what I've done so far:</p>
<ul>
<li>I designed an API using RAML</li>
<li>Used the RAML Tools for .NET to auto-generate a client</li>
<li>Wrote an integration test with the generated client</li>
<li>Implemented enough of the API to satisfy my test</li>
</ul>
<p>It feels like I am almost done. So what about documentation? Well, RAML is already human-readable, it is accurate and tested against my actual API, and all my tests are passing as well, so am I actually done?</p>
<p>No, not yet, there are two more things I have to solve:</p>
<ul>
<li>At the moment the RAML document is only in my source control and likely not accessible to external consumers</li>
<li>Integration tests are great to give me confidence, but my stakeholders don't understand those green and red lights in Visual Studio (or CI system) and likely want to verify the API for themselves</li>
</ul>
<p>This is where I find the <a href="https://anypoint.mulesoft.com/apiplatform/">Anypoint Platform</a> extremely useful!</p>
<h3 id="anypoint-platform">Anypoint Platform</h3>
<p>Among many other features <a href="https://anypoint.mulesoft.com/">Anypoint</a> allows me to document and publish my API with an interactive designer (much like the API Workbench but even richer) and create a live Portal at no cost.</p>
<p>The designer is exceptionally well done. It offers many features like syntax highlighting, IntelliSense, instant RAML validation and auto-suggestion of available nodes:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23881341396_87c12df3f3_o.png" alt="RAML-Demo-Anypoint-Designer-Editor, Image by Dustin Moris Gorski">
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23548329229_70792a7a3b_o.png" alt="RAML-Demo-Anypoint-Designer-Suggested-Nodes, Image by Dustin Moris Gorski">
<p>Another brilliant feature is the interactive preview when editing a RAML file. It visually displays every characteristic of your API in a beautiful interface, such as these responses:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23611739780_4be138f177_o.png" alt="RAML-Demo-Anypoint-Designer-Preview-Responses, Image by Dustin Moris Gorski">
<p>It even goes as far as allowing me to interact with a mocked service while working on the RAML:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23881341456_e44ba5d047_o.png" alt="RAML-Demo-Anypoint-Designer-Preview, Image by Dustin Moris Gorski">
<p>When I click the Try It button it displays a form with all relevant parameters pre-populated with the values from the examples in my RAML:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-12-23/23907433735_0847e4e5c9_o.png" alt="RAML-Demo-Anypoint-Designer-TryIt-Request, Image by Dustin Moris Gorski">
<p>From the UI I can quickly run requests against my API with as little friction as possible.</p>
<p>Finally, you can publish your API to a Live Portal which is publicly accessible (it can be private as well), where users and stakeholders can try the API in a live environment.</p>
<p>Try it yourself by executing some PUT and GET requests via the <a href="https://anypoint.mulesoft.com/apiplatform/dustinmoris#/portals/organizations/1c966d9b-793c-46bc-a87a-427b9a4a9b4a/apis/45625/versions/47291">Live Portal of my demo API</a>.</p>
<p>Any technical or non-technical person can review the API and validate that it works as expected, and I as a developer cannot claim that a feature is done before it is available in the Live Portal.</p>
<p>This is how I envision API development in an agile environment.</p>
https://dusted.codes/design-test-and-document-restful-apis-using-raml-in-dotnet
[email protected] (Dustin Moris Gorski)https://dusted.codes/design-test-and-document-restful-apis-using-raml-in-dotnet#disqus_threadWed, 23 Dec 2015 00:00:00 +0000https://dusted.codes/design-test-and-document-restful-apis-using-raml-in-dotnetramlrestful-apidotnetanypointHow to make an IStatusCodeHandler portable in NancyFx<p>I am currently working on several micro services using the <a href="http://nancyfx.org/">NancyFx</a> framework with many projects sharing the same underlying architecture.</p>
<p>The <a href="https://github.com/NancyFx/Nancy/blob/master/src/Nancy/IStatusCodeHandler.cs">IStatusCodeHandler</a> interface is one of the core infrastructure components which is used by many Nancy projects to intercept and process a Nancy context before the final response gets returned to the client.</p>
<p>Having a few identical implementations of IStatusCodeHandler, I wanted to extract them into a re-usable NuGet package.</p>
<p>The problem is that Nancy automagically detects all implementations of IStatusCodeHandler and wires them up in the Nancy pipeline. In other words, if a library exposes an IStatusCodeHandler implementation then it gets automatically hooked into your Nancy application just by adding a reference to the assembly.</p>
<p>This is a nice feature, but makes it more difficult to include an IStatusCodeHandler in a shared library. I certainly don't want to modify an application's behaviour by simply adding a reference to the project.</p>
<p>Fortunately with some additional wiring you can easily expose an IStatusCodeHandler in a disabled state and allow your application to enable it only when required.</p>
<p>Here is an example of how to make an IStatusCodeHandler portable:</p>
<pre><code>public class NotFoundStatusCodeHandler : IStatusCodeHandler
{
    private static bool _isEnabled = false;

    public static void Enable()
    {
        _isEnabled = true;
    }

    public bool HandlesStatusCode(
        HttpStatusCode statusCode,
        NancyContext context)
    {
        return _isEnabled && statusCode == HttpStatusCode.NotFound;
    }

    public void Handle(HttpStatusCode statusCode, NancyContext context)
    {
        // Do work
    }
}</code></pre>
<p>I added a static Boolean field <code>_isEnabled</code> to the class definition. The field is initialised with <code>false</code> and only the <code>Enable()</code> method can change the value to <code>true</code>.</p>
<p>Inside the <code>IStatusCodeHandler.HandlesStatusCode</code> method we check for the <code>_isEnabled</code> flag before evaluating the rest of the condition.</p>
<p>This will not stop Nancy from detecting and adding the IStatusCodeHandler into the pipeline, but will essentially suppress its behaviour entirely.</p>
<p>Enabling the handler is as easy as adding one line of code to your application startup event:</p>
<pre><code>public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);

        NotFoundStatusCodeHandler.Enable();
    }
}</code></pre>
<p>The fact that the <code>_isEnabled</code> flag is static is not a problem, because its value either never changes or changes only once during application startup and then remains the same for the rest of the application's lifetime.</p>
<p>It is a very simple but effective trick to make IStatusCodeHandler classes re-usable across multiple projects.</p>
https://dusted.codes/how-to-make-an-istatuscodehandler-portable-in-nancyfx
[email protected] (Dustin Moris Gorski)https://dusted.codes/how-to-make-an-istatuscodehandler-portable-in-nancyfx#disqus_threadWed, 25 Nov 2015 00:00:00 +0000https://dusted.codes/how-to-make-an-istatuscodehandler-portable-in-nancyfxnancyfxarchitectureWhen to use Scrum? Waterfall vs. Scrum vs. Kanban vs. Scrumban<p>"<strong>Why</strong>", I love this word and it is probably my favourite question to ask. It is also a curse at the same time, because if it is not followed by a satisfactory answer I can easily find myself in an unhappy situation.</p>
<p>This summer I decided to look for a new career opportunity and as such I went through many job interviews with various technology companies. Being a big fan of agile methodologies I was keen to learn exactly how their teams work and which processes they follow. Surprisingly, I often heard the same answer. A development manager or CTO would very proudly explain to me that all their teams work in Scrum. I however, was not very impressed with this answer.</p>
<h2>All teams work in Scrum? <u>Why</u>?</h2>
<p>Do all their projects really have the same requirements?</p>
<p>I doubt it. My guess is that all the teams work in Scrum because of the unfortunate circumstance that Scrum has become the <a href="http://agilemethodology.org/">unchallenged prime example of agile methodologies</a>. The issue I have with this is that Scrum does not make you agile automatically. It can be agile when applied appropriately, but only if its strengths match your requirements, otherwise <a href="http://programmers.stackexchange.com/questions/89128/why-do-i-need-scrum-vs-a-less-formal-more-lightweight-process-for-my-team?newreg=666847df58ea47eeb2f37344135bd2e2#">it might be an impediment on its own</a>.</p>
<h3>Essence of Agile</h3>
<p><a href="http://www.oxforddictionaries.com/definition/english/agile">Agile means to move quickly, be flexible and adaptable to rapid change.</a>
</p>
<p>If you didn't know anything about Scrum, would it sound very flexible to you if all teams have to follow the same rigid process?</p>
<p>Probably not and hence I ask myself how agile is such a company after all?</p>
<p>Sometimes I hear that an organisation invested a lot into implementing Scrum and therefore is unlikely to change their process any time soon. But hang on, someone invested a lot into becoming more agile and then they are averse to change? I find it a bit strange, because to me agile is all about change.</p>
<h3>Fear of losing control</h3>
<p>Self organising teams may seem daunting to a traditional manager. This is because managers are generally in charge of a team's process which puts them in control. Luckily Scrum is very heavy in process.</p>
<p>When waterfall became unpopular it was easy to find love for Scrum. It promises to be more agile, but also introduces a lot of new processes such as roles within a team, repetitive meetings, velocity based release planning, monitoring burndown charts and defining sprint goals.</p>
<p>I can see how a traditional manager felt very comfortable with the volume of processes and consequently implemented it across the entire organisation.</p>
<h3>Self organising teams</h3>
<p>Rolling out Scrum to all teams, making everyone follow the same iterations and maximising consistency in the process leaves very little to no room for self organising teams. At the end of the day there is nothing left to self organise and a team follows the same rigid patterns and reporting schedules as they did before, but now under the hat of Scrum.</p>
<p>Alternatively a much better, or in my opinion a more agile approach would be to say "I don't care how you work". Provide a team with enough coaching to understand the differences in agile methodologies and empower them to pick the right tool for themselves.</p>
<p>The manager can stop worrying about all the boring semantics of a process and focus on more relevant and exciting parts of the business - mainly the results. Additionally this will force a team to make success measurable, which is key in agile software development.</p>
<h4>The power of trust</h4>
<p>There is nothing more powerful than delegating responsibility to a team. It triggers a higher sense of ownership, pride in one's own work, intrinsic motivation and increase in productivity.</p>
<p>The team is responsible for its own process and will take any action required to make it as smooth as possible. Situations where a team leaned back and waited for someone else to fix an issue belong in the past!</p>
<h3>Scrum is not a silver bullet</h3>
<p>Ultimately there is really no reason why every team should follow Scrum. The methodology of choice should meet a project's requirements rather than a manager's capability to control teams horizontally.</p>
<p>This also means that one team may change its workflow if the project changes or if the team switches to a different project.</p>
<p>So which project types are actually benefitting from Scrum?</p>
<h2>When to use Scrum</h2>
<h3>Scrum is expensive</h3>
<p>Scrum is a time-consuming process. It takes a long time for a team to align their story point estimations and base velocity. Daily stand-up meetings, frequent backlog refinement and sprint planning meetings come with a price as well. A sprint is a fixed commitment and does not allow for any change or interruptions. Highly skilled people spend a lot of time doing secondary tasks, which is not very cost-effective either.</p>
<p>However, Scrum really pays off for projects which:</p>
<ul>
<li>have stakeholders who frequently change their minds</li>
<li>require a quick feedback loop</li>
<li>use stakeholders' feedback to prioritise the next sprint</li>
<li>don't get many interruptions from everyday business</li>
<li>have a cross functional team</li>
</ul>
<h3>Scrum is great for projects with little baggage</h3>
<p>Greenfield projects and projects in their early days are great candidates for Scrum. Stakeholders may have busy schedules and cannot afford to see a team every day. Regular 2-4 week sprint review meetings give them an opportunity to view the latest deliverables, clarify new requirements and define or change the scope of the following iteration.</p>
<p>The project is new, which means there is almost no maintenance work and the team can focus on a few shippable features every sprint. The team is also less worried about filling up the sprint with as much work as possible. Instead they commit to a couple of features which they are confident to present to the stakeholders at the end of a sprint.</p>
<p>The team gets a full iteration to produce value without any interruptions. Anything after that may be subject to change. This is where a sprint really lives up to its name.</p>
<p>The heavy Scrum process is vital to this type of project. Every single meeting and the restrictions of a committed sprint serve a real purpose and are advantageous to the success of a project.</p>
<p>However, if a project doesn't have any of the requirements listed above then Scrum can very quickly become uneconomical.</p>
<h2>Waterfall is okay if nothing is expected to change</h2>
<p>If your project doesn't anticipate any change then Scrum is certainly overkill.</p>
<p>You might even question whether an agile approach is the right tool at all.</p>
<p>There is no shame in considering a more traditional waterfall workflow for some projects.</p>
<p>If you need to migrate an old C++ library to a more modern language and the library is well documented, free of ambiguity and doesn't need any change, then it might be easier to just get on with the work and not faff about with Scrum planning meetings and redundant sprint reviews.</p>
<h2>Maximum flexibility with Kanban</h2>
<p><a href="https://en.wikipedia.org/wiki/Kanban">Kanban</a> is great in many ways where Scrum has its limits. It is a <a href="https://en.wikipedia.org/wiki/Lean_manufacturing">lean system</a>, which means one of it's main principles is to eliminate any waste. There are no iterations, no sprint planning meetings and therefore no story pointing, only one continuous flow.</p>
<p>Kanban is extremely flexible and suits a wide range of projects. In particular it works well for projects which:</p>
<ul>
<li>have reached a certain level of maturity with many business as usual tasks, defects and smaller unrelated user stories</li>
<li>require maximum flexibility and frequent change of priorities</li>
<li>have multiple releases per week or per day</li>
<li>have many unscheduled releases</li>
<li>have less cross functional teams</li>
</ul>
<h3>Mature projects</h3>
<p>When a project enters the phase where core features have been implemented and there's naturally less ambiguity about the general course of the project then the advantages of a Scrum driven process will gradually disappear.</p>
<p>This is often, but not necessarily the point when a project has had its first release and customers started using the system. New changes get pushed directly to consumers and enable a new channel for immediate feedback outside of Scrum.</p>
<h3>Business as usual tasks</h3>
<p>At this stage the project also starts picking up many business as usual tasks such as bugs, minor improvements or 3rd line support from developers.</p>
<p>While these tasks are considered as noise or impediments in Scrum, they are still important and first class citizens in Kanban.</p>
<h3>Unscheduled releases</h3>
<p>Some tasks or bug fixes are critical and cannot wait for the next release at the end of an iteration. Unfortunately Scrum does not cater for these cases.</p>
<p>I have seen and practised many workarounds, but none of them are ideal:</p>
<ul>
<li>Either a team plans all sprints below velocity to accommodate for the unknown</li>
<li>...or the PO cancels the current sprint to deal with the interruption</li>
<li>...or the PO removes user stories from the sprint backlog to make room for the unscheduled release mid sprint</li>
<li>...or the team simply deals with the event and shrugs their shoulders when they move half-finished work into the next iteration</li>
</ul>
<p>In particular, when the last two circumstances become a habit, a sprint commitment loses all of its meaning and the team basically cries out for a more flexible workflow like Kanban.</p>
<h3>Less cross functional teams</h3>
<p><a href="//dusted.codes/death-of-a-qa-in-scrum">Scrum is not very compatible with non cross functional teams</a>. It either results in a lot of friction or in a waste of resources. Kanban doesn't entirely solve the problem, but is certainly a lot more friendly towards specialised roles and distributed efforts within a team.</p>
<p>Work tasks can be divided into several stages with different upper limits on current work in progress. This allows a team to tweak those limits to avoid blockages and optimise a smooth continuous flow.</p>
<p>Another benefit of Kanban is that bottlenecks are clearly visualised on the cumulative flow diagram and therefore quickly highlights inefficiencies or understaffed areas of the development cycle.</p>
<h3>Continuous integration</h3>
<p>Last but not least, Kanban is a perfect match for teams who aspire a continuous integration pipeline. Process flow and cycle time are key metrics in Kanban and teams iteratively improve on them to deliver innovation <a href="http://www.toyota-global.com/company/vision_philosophy/toyota_production_system/just-in-time.html">just-in-time</a> to the market.</p>
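<p>To make the cycle time metric concrete: it is simply the elapsed time from starting a work item until it is done, and teams typically track the average across recently completed items. A minimal sketch (the <code>KanbanMetrics</code> name is my own, purely for illustration):</p>

```csharp
using System;
using System.Linq;

public static class KanbanMetrics
{
    // Cycle time of a single work item: time from start of work until completion.
    public static TimeSpan CycleTime(DateTime started, DateTime done)
    {
        return done - started;
    }

    // Average cycle time across a set of completed items.
    public static TimeSpan AverageCycleTime((DateTime Started, DateTime Done)[] items)
    {
        return TimeSpan.FromTicks(
            (long)items.Average(i => (i.Done - i.Started).Ticks));
    }
}
```

<p>Tracking how this average moves over time is what lets a team verify that process tweaks actually shorten the flow.</p>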
<h3>Caveats</h3>
<p>Even though Kanban is applicable to many projects, it is not as widely used as Scrum. Due to its fast pace it requires a good and robust test automation suite, which makes it difficult for many teams to begin with.</p>
<h2>Scrumban</h2>
<p>While Scrum is a continuous sequence of time-boxed iterations, Kanban has only one continuous flow. The repetition of process establishes routine and familiarity. A well-accustomed team could struggle to adopt Kanban and give up the routine.</p>
<p>This is where <a href="https://www.agilealliance.org/what-is-scrumban/">Scrumban</a> comes into play, initially invented to help teams with the transition from Scrum to Kanban. In Scrumban a team primarily uses Scrum as their chosen way of work and mixes in concepts from Kanban to allow for specialised roles, work policies and a better flow.</p>
<p>One fundamental idea of Scrumban is to help teams and organizations develop their own set of Scrum-based processes and practices that work best for them.</p>
<h2>Which process will you pick?</h2>
<p>Back to my initial question, how agile can a company be if all teams have to follow Scrum? It may be possible that Scrum is the right fit for all their projects, but looking at the variety of different methodologies it is more than likely that this is not the case.</p>
<p>Scrum is a brilliant tool, but it is not an all-purpose solution to becoming more agile. If Scrum is applied inappropriately it could decrease productivity, as well as frustrate everyone involved with it. Eventually <a href="https://www.google.co.uk/search?btnG=1&pws=0&q=scrum+sucks">people will end up blaming the process</a> for the inadequacies of the implementor. Let's not stigmatise Scrum like we did with Waterfall.</p>
https://dusted.codes/when-to-use-scrum-waterfall-vs-scrum-vs-kanban-vs-scrumban
[email protected] (Dustin Moris Gorski)https://dusted.codes/when-to-use-scrum-waterfall-vs-scrum-vs-kanban-vs-scrumban#disqus_threadSun, 08 Nov 2015 00:00:00 +0000https://dusted.codes/when-to-use-scrum-waterfall-vs-scrum-vs-kanban-vs-scrumbanscrumkanbanscrumbanwaterfallagileDeath of a QA in Scrum<p>The world is changing fast and the software industry even faster. Today I am a software developer with almost 9 years of commercial experience and recently I was told that this is about the time when a developer actually becomes good. Whether that is true or not I leave to someone else, but there is certainly a level of maturity which I have today and I didn't have a couple of years ago.</p>
<p>In those 9 years I was lucky enough to work in different teams, with different technologies and different approaches of agile methodologies.</p>
<p>However, one thing which has never changed was my role within the teams. Regardless of my actual job title I was a hands-on developer among other roles such as testers, business analysts, product owners, architects and UX experts.</p>
<p>The interesting part is that software developers and testers were always separated into two different roles. One guy (or girl!) was supposed to write code and another guy/girl was supposed to test it.</p>
<p>This model sounds great in theory, but in my own (subjective) experience it never really worked.</p>
<h2>The separation of roles in Scrum</h2>
<p>This is not another debate on manual vs. automated testing. When I say it never really worked then I mean in the context of working in an agile Scrum team with automated regression QA.</p>
<p>More precisely the distinction of developers and QA caused a lot of friction in our Scrum process and mostly we ended up with many issues like:</p>
<ul>
<li>QA had very little work at the beginning of a sprint</li>
<li>At the end of a sprint we had a lot of QA tasks piling up</li>
<li>Many QA tasks didn't get done before the end of the sprint</li>
<li>Developers wrote features quicker than QA could test them</li>
<li>...and most importantly, it was impossible to have a developer write code for an entire iteration and on the last day have everything go through code review and QA without any issues</li>
</ul>
<p>Basically through the separation of roles we ended up with a lot of difficulties, bottlenecks and inefficient use of resources.</p>
<h3>A production line in Scrum</h3>
<p>What was happening is that a user story got divided into several work tasks and each task was worked on by a different person in the team. It felt a lot like a production line:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-09-28/21794643692_d42d0f0d78_o.png" alt="Scrum User Story Production Line, Image by Dustin Moris Gorski">
<p>While it was possible to do some parallel work on the development and QA task at the same time, it was not possible to close one before the other. We had inter-team dependencies.</p>
<p>Unfortunately this production line is not a hybrid approach of Scrum with a pinch of Kanban, it is more of a mini waterfall within Scrum:</p>
<ul>
<li>The team commits to deliver a fixed scope (sprint backlog) at a fixed date (end of sprint) with fixed resources (size of the team)</li>
<li>But the problem is that the scope is a set of unrelated work items instead of one or two <strong>shippable features</strong></li>
<li>Because each work item is divided into different phases (development, review, QA, etc.) it is difficult to estimate it as a whole and the team ends up over-committing</li>
<li>Dependencies between the phases and bottlenecks compound the problem</li>
<li>And finally the separation of roles leads to a lot of last minute testing, not allowing any room for re-iteration and unexpected issues</li>
<li>Eventually the sprint goal will be missed</li>
<li>And early warning signs like a sprint burndown are not meaningful as the team never really knows if they are on track</li>
</ul>
<p>When this happens the team usually doesn't think of completing one goal anymore. Individual people think of completing individual phases. The separation of roles is the equivalent of the separation of concerns and results in sub-teams within a team. While separation of concerns is a great coding practice, it is poison to Scrum.</p>
<h3>Scrum is one team, not a set of roles</h3>
<p>The problem also becomes very prominent if the team starts using language like "<strong>we</strong> developers do this..., <strong>they</strong> QA do that..." and vice versa.</p>
<p>Scrum is built on the fundamental belief that one self-contained team works together towards a goal. There cannot be any sub-teams, dependencies or bottlenecks within a team. Each member of the team (except the PO and BA if you like) must be capable of working on each phase of a user story, otherwise your Scrum is doomed to fail.</p>
<p>What does this mean for developers and QA then? Well, there is simply <em>not much space for a dedicated QA</em> anymore!</p>
<p>Yes, that's right, but before everyone starts to freak out: I don't mean getting rid of all QA by sacking them. That would be mad and a true waste of many years of experience, valuable domain knowledge, skills and employee loyalty. We need to get rid of QA as a separate role within a Scrum team!</p>
<h2>We are all developers</h2>
<p>I think it would be fair to say that we are all developers. At the end of the day writing good automation tests requires a lot of engineering skills and good coding practices as much as writing production code.</p>
<p>In order to make every team member responsible for writing production code as well as reviewing and writing tests we need to give everyone a title which reflects these responsibilities.
</p>
<h3>Make everyone a developer!</h3>
<p>If you really need to distinguish between different levels of expertise then give your team members a title which relates to their experience rather than a role. You can turn your automation QA into a junior or mid-level developer. They already know how to write maintainable automation tests so they are not far away from writing maintainable production code. With a little bit of guidance from a senior member of the team it should be possible to train your current QA into a full member of the team.</p>
<p>Personally I have not been through this particular change myself yet, but I witnessed a transition from manual testers to fully committed automation testers and the results were extremely good! The learning curve is very steep in the beginning, but to be honest the entire software industry is one big journey of change and it is not any different for a junior developer either.</p>
<p><strong>As someone said to me before... it takes about 8-9 years before a developer actually becomes good...</strong></p>
<p>Are you ready for the change?</p>
https://dusted.codes/death-of-a-qa-in-scrum
[email protected] (Dustin Moris Gorski)https://dusted.codes/death-of-a-qa-in-scrum#disqus_threadMon, 28 Sep 2015 00:00:00 +0000https://dusted.codes/death-of-a-qa-in-scrumscrumagiletestingDisplay build history charts for AppVeyor or TravisCI builds with an SVG widget<p>If you have ever browsed a popular GitHub repository (like <a href="https://github.com/nunit/nunit">NUnit</a> or <a href="https://github.com/twbs/bootstrap">Bootstrap</a>) then you must have seen the many SVG badges which can be used to decorate a repository's README file.</p>
<p>While some repositories keep it very simple:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-08-30/20384154514_4e48fdc582_o.png" alt="NUnit Project Badges, Image by Dustin Moris Gorski">
<p>Others can be quite fancy:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-08-30/20996898652_6205e41d46_o.png" alt="Bootstrap Project Badges, Image by Dustin Moris Gorski">
<p>These little widgets (often called badges) are more of a gimmick than anything useful, but we love them because they give us an opportunity to visually highlight statistics or achievements which we are proud of.</p>
<p>Having a few of <a href="https://github.com/dustinmoris">my own open source projects</a> I also wanted to decorate my README files with one or more fancy widgets.</p>
<p>I quite like the little build charts which you get in Visual Studio's Team Explorer and I thought I could create something similar for my <a href="http://www.appveyor.com/">AppVeyor</a> builds myself:</p>
<img src="https://ci-buildstats.azurewebsites.net/appveyor/chart/dustinmoris/dustedcodes" alt="Build history for Dusted Codes"/>
<p>A couple days later I also added support for <a href="https://travis-ci.org/">TravisCI</a> builds, uploaded everything to GitHub and hosted the widget in Windows Azure.</p>
<p>You can see it in action in <a href="https://github.com/dustinmoris/DustedCodes">one of my public GitHub repositories</a>.</p>
<p>Other repositories can use the widget as well by simply providing their own account and project name in the widget's URL. Additionally you can specify the number of builds to be shown and toggle whether the build statistics are displayed.</p>
<p>For a complete, up-to-date feature list and concrete code examples please visit the <a href="https://github.com/dustinmoris/CI-BuildStats">official project page</a>.</p>
<p>Ideas, contributions, bug reports or any other type of feedback are welcome!</p>
https://dusted.codes/display-build-history-charts-for-appveyor-or-travisci-builds-with-an-svg-widget
[email protected] (Dustin Moris Gorski)https://dusted.codes/display-build-history-charts-for-appveyor-or-travisci-builds-with-an-svg-widget#disqus_threadSun, 30 Aug 2015 00:00:00 +0000https://dusted.codes/display-build-history-charts-for-appveyor-or-travisci-builds-with-an-svg-widgetappveyortraviscigithubsvgUsing C# 6 features in ASP.NET MVC 5 razor views<p>Recently I upgraded my IDE to Visual Studio 2015 and made instant use of many new C# 6 features like the <a href="https://msdn.microsoft.com/en-us/library/dn986596.aspx">nameof keyword</a> or <a href="https://msdn.microsoft.com/en-us/library/dn961160.aspx">interpolated strings</a>.
</p>
<p>It worked (and compiled) perfectly fine until I started using C# 6 features in ASP.NET MVC 5 razor views:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-08-21/20768813781_9d305e366b_o.png" alt="Feature not available in C# 5 message, Image by Dustin Moris Gorski">
<blockquote>
<p>Feature 'interpolated strings' is not available in C# 5. Please use language version 6 or greater.</p>
</blockquote>
<p>The project compiles fine, but IntelliSense underlines my interpolated string in red and tells me that I can't use this feature in C# 5. Well, I know that myself, but the real question is: why does it think it is C# 5?</p>
<h2>It is the compiler's fault</h2>
<p>I knew I didn't have to change the .NET Framework version to .NET 4.6, because it is a language feature and not a .NET framework feature. The compiler is responsible for translating my C# 6 code into IL code which is supported by the framework.</p>
<p>Having said that, I don't get any errors at compile time even though I make heavy use of C# 6 features all over my project.</p>
<p>Maybe it is an IntelliSense bug in Visual Studio 2015? Not really, because when I start my project I get a yellow screen of death which matches the IntelliSense error:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-08-21/20575504479_95b11bae10_o.png" alt="Interpolated String Runtime Error in ASP.NET MVC 5, Image by Dustin Moris Gorski">
<h3>ASP.NET Runtime compiler</h3>
<p>The problem occurs at runtime when ASP.NET tries to compile the razor view. ASP.NET MVC 5 uses the <a href="https://msdn.microsoft.com/en-us/library/system.codedom.compiler.codedomprovider(v=vs.110).aspx">CodeDOM provider</a>, which doesn't support C# 6 language features.</p>
<h2>Solutions</h2>
<p>There are two solutions to fix the problem:</p>
<ol>
<li>Upgrade your application to MVC 6 (which is still in beta at the time of writing)</li>
<li>Reference the <a href="https://github.com/dotnet/roslyn">Roslyn compiler</a> in your project by using the <a href="https://msdn.microsoft.com/en-us/library/y9x69bzw(v=vs.110).aspx">compiler element</a> in your web.config</li>
</ol>
<p>The second option is as easy as installing the <a href="https://www.nuget.org/packages/Microsoft.CodeDom.Providers.DotNetCompilerPlatform/">CodeDOM Providers for .NET Compiler</a> NuGet package.</p>
<p>It replaces the CodeDOM provider with the new .NET Compiler Platform (aka Roslyn) compiler-as-a-service API. After installing the NuGet package in your MVC 5 project you will be able to use C# 6 features in razor views as well!</p>
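<p>For reference, installing the package registers the Roslyn-based compilers through the <code>system.codedom</code> section of the web.config. The exact type names, assembly versions and compiler options depend on the package version you install, so treat the entry below as an illustrative sketch rather than something to copy verbatim:</p>

```xml
<configuration>
  <system.codedom>
    <compilers>
      <!-- Added by the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package;
           version attributes are omitted here because they vary per package release -->
      <compiler language="c#;cs;csharp" extension=".cs"
                type="Microsoft.CodeDom.Providers.DotNetCompilerPlatform.CSharpCodeProvider,
                      Microsoft.CodeDom.Providers.DotNetCompilerPlatform" />
    </compilers>
  </system.codedom>
</configuration>
```

<p>If you ever need to remove or restore the behaviour, this is the element to look for in your web.config.</p>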
https://dusted.codes/using-csharp-6-features-in-aspdotnet-mvc-5-razor-views
[email protected] (Dustin Moris Gorski)https://dusted.codes/using-csharp-6-features-in-aspdotnet-mvc-5-razor-views#disqus_threadFri, 21 Aug 2015 00:00:00 +0000https://dusted.codes/using-csharp-6-features-in-aspdotnet-mvc-5-razor-viewsaspnetmvc-5csharp-6razorHow to use RSA in .NET: RSACryptoServiceProvider vs. RSACng and good practise patterns<p>In my last blog post I wrote a little <a href="//dusted.codes/the-beauty-of-asymmetric-encryption-rsa-crash-course-for-developers">crash course on RSA and how it works</a> without looking into any specific language implementations. Today I'd like to explore the native implementations of .NET and the new <a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacng(v=vs.110).aspx">RSACng</a> class which has been introduced with <a href="https://msdn.microsoft.com/library/ms171868.aspx#v46">.NET Framework 4.6</a>.</p>
<p>In .NET 4.6 you'll find three native RSA classes:</p>
<ol>
<li><a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsa(v=vs.110).aspx">RSA</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacryptoserviceprovider(v=vs.110).aspx">RSACryptoServiceProvider</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacng(v=vs.110).aspx">RSACng</a></li>
</ol>
<h2>RSA Class</h2>
<p>The RSA class in <em>System.Security.Cryptography</em> is an abstract class which cannot be instantiated itself. It is the base class for all other RSA implementations and has existed since .NET 1.1 in the mscorlib assembly.</p>
<p>It derives from the abstract class <a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.asymmetricalgorithm(v=vs.110).aspx">AsymmetricAlgorithm</a>, which itself implements <a href="https://msdn.microsoft.com/en-us/library/system.idisposable(v=vs.110).aspx">IDisposable</a>. This means every instance of an RSA implementation should be disposed after its usage to free up memory as soon as possible.</p>
<p>The base class defines many methods, but most likely you will be interested in one of these which come with the default RSA contract:</p>
<ol>
<li>Encrypting data</li>
<li>Decrypting cipher data</li>
<li>Signing data</li>
<li>Signing a hash of data</li>
<li>Validating signed data</li>
<li>Validating a signed hash</li>
<li>A factory method to instantiate an implementation of RSA</li>
</ol>
<h3>Differences between .NET Frameworks</h3>
<h4>.NET 3.5 and earlier</h4>
<p>In .NET 3.5 and earlier the RSA class was much smaller than it is today. It didn't have a contract for signing and validating data and only exposed two methods for encrypting and decrypting a value:</p>
<ul>
<li><a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsa.encryptvalue(v=vs.90).aspx">public abstract byte[] EncryptValue(byte[] rgb)</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsa.decryptvalue(v=vs.90).aspx">public abstract byte[] DecryptValue(byte[] rgb)</a></li>
</ul>
<p>Also notice how the methods accept a data object but give no indication of which padding scheme should be used. Using an <a href="http://rdist.root.org/2009/10/06/why-rsa-encryption-padding-is-critical/">encryption padding is critical to the security of your RSA implementation</a>, so this is a real omission in the base contract.</p>
<h4>After .NET 4.0</h4>
<p>Starting with the .NET 4.0 framework the RSA class has been significantly extended. In addition to all the signing methods it received two new methods for encrypting and decrypting a message:</p>
<ul>
<li><code>public virtual byte[] Encrypt(byte[] data, RSAEncryptionPadding padding)</code></li>
<li><code>public virtual byte[] Decrypt(byte[] data, RSAEncryptionPadding padding)</code></li>
</ul>
<p>Interestingly they are not mentioned in <a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsa(v=vs.100).aspx">the official MSDN documentation</a> on the web; however, when I decompile .NET 4.0's mscorlib I can see the two virtual methods:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-08-13/20377339999_8cd6511ee9_o.png" alt="Encrypt and Decrypt methods in .NET C# RSA Class, Image by Dustin Moris Gorski">
<p>This was a great addition for two reasons in particular:</p>
<ol>
<li>It allows you to specify which padding should be used.</li>
<li>It matches the implementation of the RSACryptoServiceProvider. This is very nice, because the RSACryptoServiceProvider never bothered to implement the EncryptValue and DecryptValue methods and made it impossible to program against the native RSA contract in previous .NET versions.</li>
</ol>
<h3>Factory methods</h3>
<p>The RSA class also implements two static factory methods to create an instance of RSA:</p>
<ol>
<li><a href="https://msdn.microsoft.com/en-us/library/7taa5dzy(v=vs.110).aspx">public static RSA Create()</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/5ws2s1f6(v=vs.110).aspx">public static RSA Create(string algName)</a></li>
</ol>
<p>In all versions of .NET the default implementation is the RSACryptoServiceProvider:</p>
<pre><code>using (var rsa = RSA.Create())
{
Console.WriteLine(rsa.GetType().ToString());
// Returns System.Security.Cryptography.RSACryptoServiceProvider
}</code></pre>
<h4>Overriding the default implementation</h4>
<p>The factory methods are designed to work with the <a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.cryptoconfig(v=vs.110).aspx">CryptoConfig</a> setting from the machine.config file.</p>
<aside><em>System.Security.Cryptography.CryptoConfig</em> is a machine wide setting and not supported in the web.config on an application level.</aside>
<p>In order to override the default implementation you need to <a href="https://msdn.microsoft.com/en-us/library/vstudio/693aff9y(v=vs.100).aspx">map a friendly algorithm name to a specific cryptography class</a>.</p>
<p>Be aware that you have different machine.config files for each architecture and .NET framework:</p>
<ul>
<li><strong>32-bit</strong><br />%windir%\Microsoft.NET\Framework\<em>{.net version}</em>\config\machine.config</li>
<li><strong>64-bit</strong><br />%windir%\Microsoft.NET\Framework64\<em>{.net version}</em>\config\machine.config</li>
</ul>
<p>For example if you have a 64-bit application built with .NET 4.0 you need to modify the machine.config under this path:<br /><strong>%windir%\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config</strong></p>
<p>For this demo I will map "SomeCustomFriendlyName" to the native RSACng class:</p>
<pre><code><mscorlib>
<cryptographySettings>
<cryptoNameMapping>
<cryptoClasses>
<cryptoClass
<strong>MyRSAImplementation</strong>="System.Security.Cryptography.RSACng,
System.Core, Version=4.0.0.0, Culture=neutral,
PublicKeyToken=b77a5c561934e089" />
</cryptoClasses>
<nameEntry
name="<strong>SomeCustomFriendlyName</strong>"
class="<strong>MyRSAImplementation</strong>" />
</cryptoNameMapping>
</cryptographySettings>
</mscorlib></code></pre>
<p>Now when I run this...</p>
<pre><code>using (var rsa = RSA.Create("SomeCustomFriendlyName"))
{
Console.WriteLine(rsa.GetType().ToString());
}</code></pre>
<p>...it will output "System.Security.Cryptography.RSACng".</p>
<p>If you don't specify a friendly algorithm name then the default <code>Create()</code> method will call <code>RSA.Create("System.Security.Cryptography.RSA")</code> under the covers.</p>
<p>Hence adding another nameEntry for "System.Security.Cryptography.RSA":</p>
<pre><code><mscorlib>
<cryptographySettings>
<cryptoNameMapping>
<cryptoClasses>
<cryptoClass
MyRSAImplementation="System.Security.Cryptography.RSACng,
System.Core, Version=4.0.0.0, Culture=neutral,
PublicKeyToken=b77a5c561934e089" />
</cryptoClasses>
<nameEntry
name="SomeCustomFriendlyName"
class="MyRSAImplementation" />
<strong><nameEntry
name="System.Security.Cryptography.RSA"
class="MyRSAImplementation" /></strong>
</cryptoNameMapping>
</cryptographySettings>
</mscorlib></code></pre>
<p>...will now also output "System.Security.Cryptography.RSACng" when I run <code>RSA.Create().GetType().ToString()</code>.</p>
<h2>RSACryptoServiceProvider vs. RSACng</h2>
<h3>Where to find</h3>
<p>RSACryptoServiceProvider has existed in mscorlib since .NET 1.1, while RSACng is a very recent addition in .NET 4.6 and lives in System.Core.</p>
<p><a href="https://github.com/dotnet/corefx">CoreFX is open source</a> and you can browse any implementation as well as the <a href="https://github.com/dotnet/corefx/blob/6b2d60061c87a2a3b0d11adafa1311ae18206259/src/System.Security.Cryptography.Cng/src/System/Security/Cryptography/RSACng.cs">RSACng class on GitHub</a>.</p>
<h3>Implementation</h3>
<p>Both classes are sealed and derive from the base RSA class and implement their members. However, both classes throw a NotSupportedException when calling EncryptValue and DecryptValue. You are forced to use the Encrypt and Decrypt methods which accept a padding.</p>
<p>Both classes are <a href="https://en.wikipedia.org/wiki/Federal_Information_Processing_Standards">FIPS</a> compliant!</p>
<p>Also worth mentioning is that both classes call into the OS's underlying cryptography providers (RSACryptoServiceProvider into the legacy CSP, the Crypto Service Provider, and RSACng into CNG, Cryptography Next Generation), which are unmanaged code, so both should offer broadly similar performance.</p>
<h3>Difference</h3>
<p>In short, the CNG implementation is...</p>
<blockquote>
<p>...the long-term replacement for the CryptoAPI. CNG is designed to be extensible at many levels and cryptography agnostic in behavior.</p>
<footer><cite><a href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa376210(v=vs.85).aspx">Cryptography API: Next Generation</a>, MSDN Microsoft</cite></footer>
</blockquote>
<p>One difference which you will quickly notice is that the key in RSACng is managed by the <a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.cngkey(v=vs.110).aspx">CngKey class</a> and can be injected into the constructor of RSACng, while RSACryptoServiceProvider is tied to its own key operations.</p>
<h3>Support</h3>
<p>CNG is supported beginning with Windows Server 2008 and Windows Vista. RSACng requires .NET framework 4.6 (or higher).</p>
<p>The Crypto API was first introduced in Windows NT 4.0 and enhanced in subsequent versions.</p>
<h2>Good practice patterns</h2>
<p>After all, it doesn't really matter which implementation you choose if you abstract it away and program against a contract. This is good practice anyway and luckily .NET offers the abstract RSA class for this purpose out of the box.</p>
<h3>Constructor injection vs. Factory method</h3>
<p>Constructor injection is the de facto standard pattern for dependency injection in most cases. However, when you deal with an object which implements the IDisposable interface, constructor injection can keep the object alive longer than it needs to be.</p>
<p>Imagine I have this class:</p>
<pre><code>public class MyClass
{
    private readonly RSA _rsa;

    public MyClass(RSA rsa)
    {
        _rsa = rsa;
    }

    public void SendSecureMessage(string message)
    {
        // Convert the message into data
        byte[] data = Encoding.UTF8.GetBytes(message);
        byte[] cipher = _rsa.Encrypt(data, RSAEncryptionPadding.Pkcs1);
        // More business logic
    }
}</code></pre>
<p>Whatever happens after <code>_rsa.Encrypt(...)</code> doesn't require the RSA object any longer and it should get disposed immediately.</p>
<p>I could call <code>_rsa.Dispose();</code> afterwards, but this would be a bad code smell, because:</p>
<ol>
<li>MyClass is not the owner of the RSA instance. It didn't create it and therefore shouldn't dispose it.</li>
<li>The RSA instance could have been injected somewhere else as well, therefore disposing it would cause a bug.</li>
<li>There is a <a href="http://dailydotnettips.com/benefit-of-using-in-dispose-for-net-objects-why-and-when/">benefit to the using statement</a> which the constructor injection pattern doesn't let me take advantage of</li>
</ol>
<p>This means we are better off by using a different dependency injection pattern. In this instance the factory pattern is more suitable:</p>
<pre><code>public interface IRSAFactory
{
    RSA CreateRSA();
}

public class MyClass
{
    private readonly IRSAFactory _rsaFactory;

    public MyClass(IRSAFactory rsaFactory)
    {
        _rsaFactory = rsaFactory;
    }

    public void SendSecureMessage(string message)
    {
        // Convert the message into data
        byte[] data = Encoding.UTF8.GetBytes(message);
        using (var rsa = _rsaFactory.CreateRSA())
        {
            byte[] cipher = rsa.Encrypt(data, RSAEncryptionPadding.Pkcs1);
        }
        // More business logic
    }
}</code></pre>
<p>I have created an IRSAFactory interface which defines a contract to create a new instance of RSA. Now I can inject this factory into MyClass and conveniently create a new RSA object on the fly and encapsulate it with the using statement to properly dispose it afterwards.</p>
<h3>Why not using the native RSA.Create() factory method?</h3>
<p>What is wrong with the native factory method? Why didn't I just do this:</p>
<pre><code>public class MyClass
{
    public void SendSecureMessage(string message)
    {
        // Convert the message into data
        byte[] data = Encoding.UTF8.GetBytes(message);
        using (var rsa = RSA.Create())
        {
            byte[] cipher = rsa.Encrypt(data, RSAEncryptionPadding.Pkcs1);
        }
        // More business logic
    }
}</code></pre>
<p>Theoretically this is absolutely fine, but it makes it more difficult to provide an alternative implementation in my unit test suite, because the native factory is designed to work with the machine.config and cannot be changed programmatically!</p>
<h3>Wrapper for RSA</h3>
<p>Alternatively I could have created a wrapper for the RSA class like this:</p>
<pre><code>public abstract class RSAWrapper : RSA
{
    private static RSA _overridenDefaultRSA = null;

    public static void OverrideDefaultImplementation(RSA rsa)
    {
        _overridenDefaultRSA = rsa;
    }

    public static new RSA Create()
    {
        return _overridenDefaultRSA ?? RSA.Create();
    }
}</code></pre>
<p>...and then use it like the original class:</p>
<pre><code>public class MyClass
{
    public void SendSecureMessage(string message)
    {
        // Convert the message into data
        byte[] data = Encoding.UTF8.GetBytes(message);
        using (var rsa = <strong>RSAWrapper</strong>.Create())
        {
            byte[] cipher = rsa.Encrypt(data, RSAEncryptionPadding.Pkcs1);
        }
        // More business logic
    }
}</code></pre>
<p>This would allow me to override the default implementation from the machine.config by calling into <code>OverrideDefaultImplementation(RSA rsa)</code> from my unit tests.</p>
<p>I personally prefer the first approach though, for the following reasons:</p>
<ol>
<li>By injecting an IRSAFactory object into MyClass it is obvious from the outside which dependencies MyClass has</li>
<li>It allows me to provide different mocks and stubs for different unit tests while running them in parallel. This would be very tricky with the static factory method.</li>
<li>At some point I'd have to initialise the RSA key by using <a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsa.fromxmlstring(v=vs.110).aspx">RSA.FromXmlString</a> or <a href="https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsa.importparameters(v=vs.110).aspx">RSA.ImportParameters</a>. With the IRSAFactory (which is an abstract factory btw) I could provide additional methods to do this and test the interaction between them as well.</li>
</ol>
<p>I hope this was useful and that I could shed some more light on RSA in .NET.</p>
<p>There are plenty of resources on the internet showing how to use the RSACryptoServiceProvider, so instead of reiterating the same topic I wanted to focus on patterns beyond the default examples.</p>
https://dusted.codes/how-to-use-rsa-in-dotnet-rsacryptoserviceprovider-vs-rsacng-and-good-practise-patterns
[email protected] (Dustin Moris Gorski)https://dusted.codes/how-to-use-rsa-in-dotnet-rsacryptoserviceprovider-vs-rsacng-and-good-practise-patterns#disqus_threadThu, 13 Aug 2015 00:00:00 +0000https://dusted.codes/how-to-use-rsa-in-dotnet-rsacryptoserviceprovider-vs-rsacng-and-good-practise-patternsdotnetrsasecurityasymmetric-encryptioncryptographyThe beauty of asymmetric encryption - RSA crash course for developers<p>With the rapid growth of the internet and the vast business which is handled over the web it is not surprising that security has become an inevitable topic for any software developer these days.</p>
<p>Unfortunately security and in particular cryptography is a complex science on its own. It is very difficult to get it right and extremely easy to get it wrong.</p>
<p>As a developer I personally find the topic very interesting and challenging and therefore strive to understand the concepts of cryptography more than just the bare minimum.</p>
<p>With this in mind I would like to break down one of the most important cryptographic algorithms which is used in modern web development nowadays: <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)">Asymmetric encryption with RSA.</a></p>
<h2>History</h2>
<p>While modern cryptology has a long history dating back to AD 800, asymmetric encryption is only a very recent discovery starting in the mid-70s.</p>
<p>Previously every symmetric encryption algorithm had the fundamental problem of secret key distribution: two parties had to find a way to share a secret key via an insecure channel before they could successfully exchange private messages.</p>
<p>Only in 1977 (a year after <a href="https://en.wikipedia.org/wiki/Whitfield_Diffie">Whitfield Diffie</a> and <a href="https://en.wikipedia.org/wiki/Martin_Hellman">Martin Hellman</a> introduced the <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman key exchange</a> algorithm) did three scientists, <a href="https://en.wikipedia.org/wiki/Ron_Rivest">Ron Rivest</a>, <a href="https://en.wikipedia.org/wiki/Adi_Shamir">Adi Shamir</a> and <a href="https://en.wikipedia.org/wiki/Leonard_Adleman">Leonard Adleman</a>, succeed in inventing the first publicly known asymmetric encryption algorithm, named RSA. GCHQ, a UK intelligence agency, had independently developed equivalent public-key algorithms back in 1973 and 1974, but this work was kept secret until 1997.</p>
<p>Today RSA is the de-facto standard asymmetric encryption algorithm and is used in many areas such as <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security">TLS/SSL</a>, <a href="https://en.wikipedia.org/wiki/Secure_Shell">SSH</a>, digital signatures and <a href="https://en.wikipedia.org/wiki/Pretty_Good_Privacy">PGP</a>.</p>
<h2>Basics</h2>
<p>Before cracking down the RSA algorithm I would like to scratch on some basics, which are essential to understand the nature of RSA.</p>
<h3>Public-key encryption</h3>
<p>Public-key encryption, as opposed to secret-key encryption, consists of a pair of keys - the public key which is used to encrypt a message and the private key, which is subsequently used to decrypt the cipher message.</p>
<p>Each private key has only one matching public key. A message encrypted with the public key can only be decrypted with the related private key.</p>
<h4>Alice, Bob and Eve</h4>
<p>Alice and Bob want to communicate privately and Eve wants to eavesdrop. Both, Alice and Bob have their individual public and private key pair.</p>
<p>Alice uses Bob's public key to encrypt a private message before sending it to Bob. Bob can use his private key to decrypt the message. Now Bob can use Alice's public key to reply to Alice without Eve being able to understand any of the transmitted data. Finally Alice decrypts Bob's message with her own private key.</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-28/18626777534_fc5524c031_o.gif" alt="Public Key Encryption, Image by Dustin Moris Gorski">
<p>The public key is available to everyone, while the private key is only known to the key holder. There is never the requirement to share a secret key via an insecure channel.</p>
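<p>A toy example makes this tangible. The following snippet (Python for brevity; the maths is language-agnostic) builds a textbook RSA key pair from deliberately tiny primes and round-trips a message. Real keys use primes hundreds of digits long together with proper padding, so never use anything like this for actual security:</p>

```python
# Toy textbook RSA with tiny primes -- for illustration only, never for real security.
p, q = 61, 53                 # two (tiny) primes, normally hundreds of digits long
n = p * q                     # modulus, part of both keys: 3233
phi = (p - 1) * (q - 1)       # Euler's totient of n: 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi -> 2753

message = 65                  # a message encoded as a number smaller than n
cipher = pow(message, e, n)   # Alice encrypts with Bob's public key (e, n)
plain = pow(cipher, d, n)     # Bob decrypts with his private key (d, n)

print(cipher, plain)          # -> 2790 65
```

<p>Encrypting with the public pair (e, n) and decrypting with the private exponent d recovers the original number, which is exactly the Alice-and-Bob exchange described above. (The three-argument <code>pow(e, -1, phi)</code> for the modular inverse needs Python 3.8 or later.)</p>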
<h4>Integrity and Authenticity</h4>
<p>Now theoretically Eve could equally use Alice's or Bob's public key and send them a private message. It wouldn't directly help her read anyone's encrypted messages, but she could trick either one of them into believing her message was from Bob or Alice and provoke them into making a mistake.</p>
<p>Eve could also intercept and tamper with the encrypted message, so that after decryption it will read something different.</p>
<p>Just by encrypting a message, neither Bob nor Alice can ever be sure that the message hasn't been modified by Eve (integrity) nor that the message wasn't sent by Eve in the first place (authenticity).</p>
<p>Luckily RSA offers a solution to this problem. Similar to encrypting a message, the algorithm can also be used for signing one. Bob can use his private key to compute a signature from his message. The signature is unique to the message; no other message would produce the same signature.</p>
<p>Alice can then use Bob's public key to verify if the message and the attached signature match. Because she uses Bob's public key she can be sure that the message came from Bob. Only Bob can create a correct signature with his private key. Additionally Alice will easily know if the message has been modified or not. If Eve changed the message or the attached signature, then Alice will not be able to verify the signature with Bob's public key and therefore know that someone has tampered with the data.</p>
<p>By providing an additional signature Alice and Bob can trust each other's messages.</p>
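<p>The signing direction can be sketched the same way, again in Python with the same toy key pair as above (real implementations sign a hash of the message and apply padding rather than signing raw numbers):</p>

```python
# Toy key pair: n = 61 * 53, public exponent e, private exponent d
n, e, d = 3233, 17, 2753

message = 42
signature = pow(message, d, n)          # Bob signs with his PRIVATE key

assert pow(signature, e, n) == message  # Alice verifies with Bob's PUBLIC key

tampered = (signature + 1) % n          # Eve tampers with the signature...
assert pow(tampered, e, n) != message   # ...and verification now fails
```

<p>Note the symmetry: signing raises to the private exponent and verifying raises to the public one, the mirror image of encryption and decryption.</p>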
<h4>Encrypting and signing</h4>
<p>Encrypting and signing are not mutually exclusive. For example Alice can first encrypt her message and then use the resulting cipher as the base for computing a valid signature. Afterwards Bob first verifies the signature against the cipher, and when that checks out he can proceed to decrypt the message. Now both have established a trusted and secure way of communicating privately.</p>
<h3>One-way functions</h3>
<p>The concept of encrypting a message with one key and not being able to decrypt with the same key is based on one-way functions. As the name suggests the characteristic of a one-way function is that it is not reversible other than with a trial and error approach.</p>
<p>This can be achieved if there is an infinite number of values which lead to the same result, if some information is lost as part of the algorithm, or if reversing the computation takes immensely longer than performing it.</p>
<h4>A simple example of a one-way function</h4>
<p>Let's say the initial value is 264. The one-way function reads as following:</p>
<p><em>You start from the centre of a map. Now take your value and divide it by its last digit. The result is a new value x. Now draw a line x centimetres north east and mark a new point on the map. Next take your original value and subtract x from it. The result is y. Draw another line, starting from the last point, y centimetres south west. The final point is the end result.</em></p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-28/19247927141_e7b4b378a8_o.gif" alt="Example of a one way function, Image by Dustin Moris Gorski">
<p>In this example we would divide 264 by 4 and retrieve 66 for x. Additionally we subtract 66 from 264 and retrieve y = 198. We draw both lines and determine the final point on the map, which represents the end result of the one-way function.</p>
<p>Now just from knowing the final point on the map and the definition of the function it is not possible to easily deduce the original value.</p>
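<p>As a quick sketch, the two steps of the map example can be computed in a few lines of C# (assuming the input's last digit is not zero):</p>

```csharp
using System;

// Walks through the map example: x = value / last digit, y = value - x.
// Assumes the input's last digit is non-zero.
int value = 264;
int lastDigit = value % 10;   // 4
int x = value / lastDigit;    // 66 -> line drawn north east
int y = value - x;            // 198 -> line drawn south west

Console.WriteLine($"x = {x} cm, y = {y} cm");
```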
<h3>Modular arithmetic</h3>
<p>Modular arithmetic is full of one-way functions. It is also known as clock arithmetic, because it can be illustrated by a finite amount of numbers arranged in a loop, like on a clock:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-28/19051127700_2dd7074ef4_o.gif" alt="Clock Arithmetic, Image by Dustin Moris Gorski">
<p>The dark circle represents the clock. The blue numbers represent the value 17. If you count from 1 to 17 clockwise around the loop, then you end up at the number 5. In other words 17 mod 12 equals 5.</p>
<p>The common short-cut for calculating the modulus is to divide the original value by the modulus base (12 in this example); the remainder is the modulus.</p>
<p>The modulus operation is a great one-way function, because it is fairly simple and an infinite number of possible input values produce the same result.</p>
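<p>A quick C# illustration of this many-to-one property (the input values here are arbitrarily chosen):</p>

```csharp
using System;

// 17, 29, 41 and 125 are all congruent to 5 modulo 12 -
// from the result 5 alone the original value cannot be recovered.
int[] inputs = { 17, 29, 41, 125 };
foreach (var v in inputs)
    Console.WriteLine($"{v} mod 12 = {v % 12}");
```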
<h3>Prime numbers</h3>
<p>Prime numbers are the last important building block of the RSA algorithm.</p>
<blockquote>
<p>A prime number (or a prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself.</p>
<footer><cite><a href="https://en.wikipedia.org/wiki/Prime_number">Prime number</a>, Wikipedia</cite></footer>
</blockquote>
<p><em>Example of prime numbers: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ...</em></p>
<p>What's important to know is that there is <a href="https://en.wikipedia.org/wiki/Prime_number#Euclid.27s_proof">an infinite amount of prime numbers</a> and no simple formula which generates them. Testing whether a given number is prime is feasible, but recovering the prime factors of a large number is not.</p>
<h3>Prime factorisation</h3>
<p>Factorisation is the process of decomposing a positive integer into a product of smaller numbers.</p>
<p>For example the number 759 can be decomposed into 3 and 253.<br />In other words 759 = 3 * 253.</p>
<p>If you continue the process you will eventually end up with all numbers being prime (= prime factorisation). In our case 253 can be further broken down into 11 and 23.</p>
<p>Finally you can say <strong>759 = 3 * 11 * 23.</strong></p>
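<p>A naive trial-division factorisation can be sketched in a few lines of C#. This works fine for small numbers like 759, but is hopeless for numbers of the size used in real RSA keys:</p>

```csharp
using System;
using System.Collections.Generic;

// Decomposes n into its prime factors by trial division.
static List<int> PrimeFactors(int n)
{
    var factors = new List<int>();
    for (int d = 2; d * d <= n; d++)
        while (n % d == 0)
        {
            factors.Add(d);
            n /= d;
        }
    if (n > 1) factors.Add(n); // whatever remains is prime
    return factors;
}

Console.WriteLine(string.Join(" * ", PrimeFactors(759))); // 3 * 11 * 23
```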
<p>Primes become rarer as numbers grow larger, and there is an ongoing race to find ever larger prime numbers.</p>
<h2>The RSA algorithm</h2>
<p>As genius as the algorithm is, it is fairly short and can be demonstrated with a relatively small example.</p>
<h3>Generating the public and private key</h3>
<p>First Alice needs to generate her own public and private key pair. For this Alice picks two prime numbers <strong>p = 17</strong> and <strong>q = 19</strong>.</p>
<p>She multiplies them together to retrieve the number <strong>n = 323</strong>.</p>
<p>Next she picks another prime <strong>e = 7</strong>.</p>
<p>
Lastly she needs to calculate the value <strong>d</strong>. This is done by working out the following equation:<br />
<em><strong>e</strong> * <strong>d</strong> mod ((<strong>p</strong> - 1) * (<strong>q</strong> - 1)) = 1</em><br />
<em>7 * <strong>d</strong> mod 288 = 1</em><br />
<strong>d = 247</strong>
</p>
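<p>With numbers this small, <strong>d</strong> can be found by a simple brute force search in C#; real implementations use the extended Euclidean algorithm instead:</p>

```csharp
using System;

int p = 17, q = 19, e = 7;
int n = p * q;                // 323
int phi = (p - 1) * (q - 1);  // 288

// Search for the modular inverse of e modulo phi,
// i.e. the smallest d with (e * d) mod phi = 1.
int d = 1;
while ((e * d) % phi != 1) d++;

Console.WriteLine($"n = {n}, d = {d}"); // n = 323, d = 247
```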
<h4>Private key</h4>
<p>The private key is <strong>d = 247</strong>.</p>
<p>p and q are kept secret as well. d can only be calculated if p and q are known.</p>
<h4>Public key</h4>
<p>
The public key is formed by <strong>n</strong> and <strong>e</strong>:<br />
<strong>n = 323</strong><br />
<strong>e = 7</strong>
</p>
<p>Note that n is the product of two prime numbers and can only be decomposed into p and q. If p and q are large enough then factorising n is de facto impossible.</p>
<h3>Encrypting a message</h3>
<p>The public key is used to encrypt a message. Let's say Bob wants to encrypt the message "I love Alice". Internally all data is represented in binary format and binary numbers can be converted into decimals.</p>
<p>In C# you can quickly convert it using this snippet:</p>
<pre><code>using System.Linq;
using System.Text;

var message = "I love Alice";
var binary = Encoding.ASCII.GetBytes(message);
var decimals = binary.Select(b => Convert.ToInt32(b)).ToArray();</code></pre>
<p>"I love Alice" is represented by this number sequence:<br />73, 32, 108, 111, 118, 101, 32, 65, 108, 105, 99, 101.</p>
<p>To keep this example short I will only encrypt the first letter of the message, which is represented by the number 73.</p>
<p>
The formula to encrypt <strong>m = 73</strong> is:<br />
<strong>c = m<sup>e</sup> mod n</strong><br />
c = 73<sup>7</sup> mod 323<br />
The cipher <strong>c = 112</strong>.
</p>
<h3>Decrypting the cipher</h3>
<p>
Alice receives the cipher 112 and can decrypt it using her private key <strong>d</strong> and the formula:<br/>
<strong>m = c<sup>d</sup> mod n</strong><br/>
m = 112<sup>247</sup> mod 323<br/>
The first number of the original message is <strong>m = 73</strong>.
</p>
<p>RSA is a brilliant one-way function which allows someone to reverse it only if the private key <strong>d</strong> is known.</p>
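<p>The whole round trip can be reproduced with <code>BigInteger.ModPow</code> from <code>System.Numerics</code>:</p>

```csharp
using System;
using System.Numerics;

BigInteger n = 323, e = 7, d = 247;

BigInteger m = 73;                          // first letter "I"
BigInteger c = BigInteger.ModPow(m, e, n);  // encrypt: c = m^e mod n
BigInteger m2 = BigInteger.ModPow(c, d, n); // decrypt: m = c^d mod n

Console.WriteLine($"cipher = {c}, decrypted = {m2}"); // cipher = 112, decrypted = 73
```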
<h2>Real world application</h2>
<p>RSA is considerably slow because it computes with very large numbers. In particular the decryption, where d is used in the exponent, is slow. There are ways to speed it up by remembering p and q, but it is still slow in comparison to symmetric encryption algorithms.</p>
<p>A common practice is to use RSA only to encrypt a secret key, which is then used with a symmetric encryption algorithm. Typically the message to encrypt is a lot longer than the secret key itself, therefore this is a very effective method to combine the security of an asymmetric algorithm with the speed of a symmetric one.</p>
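<p>A rough sketch of such a hybrid scheme using the <code>RSA</code> and <code>Aes</code> classes from <code>System.Security.Cryptography</code> in modern .NET (error handling and transporting the AES IV are omitted here for brevity):</p>

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// RSA protects only the small symmetric key ...
using var rsa = RSA.Create(2048);
using var aes = Aes.Create();
byte[] wrappedKey = rsa.Encrypt(aes.Key, RSAEncryptionPadding.OaepSHA256);

// ... while the fast symmetric AES cipher encrypts
// the actual (potentially much larger) message.
byte[] message = Encoding.UTF8.GetBytes("I love Alice");
using var encryptor = aes.CreateEncryptor();
byte[] cipher = encryptor.TransformFinalBlock(message, 0, message.Length);
```

<p>The recipient reverses the steps: first unwrap the AES key with the RSA private key, then decrypt the message with AES.</p>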
<h3>Key length</h3>
<p>With the fast development of computer chips the recommended key length for RSA changes over time.</p>
<blockquote>
<p>RSA claims that 1024-bit keys are likely to become crackable some time between 2006 and 2010 and that 2048-bit keys are sufficient until 2030. An RSA key length of 3072 bits should be used if security is required beyond 2030.</p>
<footer><cite><a href="https://en.wikipedia.org/wiki/Key_size#Asymmetric_algorithm_key_lengths">Asymmetric algorithm key lengths</a>, Wikipedia</cite></footer>
</blockquote>
<p>If you use RSA in your application, then you should periodically (every few years) recycle your keys and generate a new pair to meet current length recommendations and stay secure.</p>
<h2>Future</h2>
<p>The entire security of RSA is built on the fact that it is impractical to determine p and q by looking at n. If at some point in the future a mathematician finds a way to rapidly factor n, then RSA would become useless.</p>
<p>Another interesting way of cracking most of today's crypto systems could be with the help of <a href="https://en.wikipedia.org/wiki/Quantum_computing#Potential">quantum computing</a>, which as of now still remains a very theoretical topic.</p>
https://dusted.codes/the-beauty-of-asymmetric-encryption-rsa-crash-course-for-developers
[email protected] (Dustin Moris Gorski)https://dusted.codes/the-beauty-of-asymmetric-encryption-rsa-crash-course-for-developers#disqus_threadSun, 28 Jun 2015 00:00:00 +0000https://dusted.codes/the-beauty-of-asymmetric-encryption-rsa-crash-course-for-developerssecurityrsaasymmetric-encryptioncryptography

Running free tier and paid tier web apps on the same Microsoft Azure subscription

<p>Last week I noticed a charge of ~ £20 by MSFT AZURE on my bank statement and initially struggled to work out why I was charged this much.</p>
<p>I knew I'd have to pay something for this website, which is hosted on the shared tier in Microsoft Azure, but according to <a href="http://azure.microsoft.com/en-us/pricing/calculator/">Microsoft Azure's pricing calculator</a> it should have only come to £5.91 per month:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-14/18821999662_b71b95637e_o.png" alt="Windows Azure Shared Pricing Tier, Image by Dustin Moris Gorski">
<p>After a little investigation I quickly found the issue: a few on-and-off test web apps were running on the shared tier as well.</p>
<p>This was clearly a mistake. I was confident that I had created all my test apps on the free tier, but as it turned out, after I upgraded my production website to the shared tier all my newly created apps ran on the shared tier as well.</p>
<p>I simply didn't pay close attention during the creation process:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-14/18829751471_b072e0ceaa_o.png" alt="Windows Azure Create new Web App, Image by Dustin Moris Gorski">
<p>Evidently every new web app gets automatically assigned to my existing app service plan, which I had upgraded to the shared tier.</p>
<p>Luckily I learned my lesson after the first bill. However, my initial attempt to switch my test apps back to the free tier was not as simple as I thought it would be. I could not scale one app individually without affecting all other apps on the same plan:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-14/18640926409_dbf2790205_o.png" alt="Windows Azure change pricing tier, Image by Dustin Moris Gorski">
<p>The solution is to create a new app service plan and assign it to the free tier.</p>
<p>You can do this either when creating a new web app, by picking "Create new App Service plan" from the drop down:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-14/18204493134_e04eba21dd_o.png" alt="Windows Azure Create new App Service plan, Image by Dustin Moris Gorski">
<p>Or when navigating to the new Portal, where you have the possibility to manage your app service plans:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-14/18821999642_d779125c72_o.png" class="half-width" alt="Windows Azure switch to Azure Preview Portal, Image by Dustin Moris Gorski">
<img src="https://cdn.dusted.codes/images/blog-posts/2015-06-14/18640926369_1f679d0f4f_o.png" class="half-width" alt="Windows Azure New Portal App Service Plans Menu, Image by Dustin Moris Gorski">
<p>This wasn't difficult at all, but certainly a mistake which can easily happen to anyone who is new to Microsoft Azure.</p>
<p>Another very useful thing to know is that if you choose the same data centre location for all your app service plans, then you can easily move a web app from one plan to another. This could be very handy when having different test and/or production stages (Dev/Staging/Production).</p>
https://dusted.codes/running-free-tier-and-paid-tier-web-apps-on-the-same-microsoft-azure-subscription
[email protected] (Dustin Moris Gorski)https://dusted.codes/running-free-tier-and-paid-tier-web-apps-on-the-same-microsoft-azure-subscription#disqus_threadSun, 14 Jun 2015 00:00:00 +0000https://dusted.codes/running-free-tier-and-paid-tier-web-apps-on-the-same-microsoft-azure-subscriptionmicrosoft-azureapp-hosting-plan

Effective defense against distributed brute force attacks

<p>Protecting against brute force attacks can be a very tricky task.</p>
<p>Recently I was curious if there are any best practices to protect a website from distributed brute force attacks and I found a lot of interesting solutions:</p>
<h2>Lock an account after X failed login attempts</h2>
<p>The first method I found was very trivial. If a user reaches a certain limit of failed login attempts the website locks down the account and refuses any further access.</p>
<p>A genuine user can unlock his or her account by requesting a recovery link via email or by changing their password via the password reset function.</p>
<h3>Problems with this pattern</h3>
<ul>
<li>Introduces a targeted DOS attack surface. An attacker could easily lock out an account by purposefully providing the wrong password several times, either to block an account from using the service entirely or to force the user into a recovery path, where the attacker might have found further vulnerabilities.</li>
<li>Doesn't protect against more sophisticated attacks (typically an attacker would pick the most common password and try it on all accounts, then pick the second most common password, etc.)</li>
<li>Introduces a potential enumeration attack. An attacker can purposely provide a wrong password and determine whether a certain email address/username exists based on whether the account gets locked.</li>
</ul>
<h2>Blocking IP Addresses with too many failed login attempts</h2>
<p>This one is fairly simple and well known. If a certain IP address had too many failed login attempts, then further access from this IP address is denied.</p>
<h3>Problems with this pattern</h3>
<ul>
<li>It doesn't help against a distributed brute force attack.</li>
<li>Opens the door for another DOS attack.</li>
<li>There is a good chance that users behind a shared network will lock themselves out if enough users type in a wrong password within a short period of time.</li>
</ul>
<h2>Whitelist-/Blacklisting IP Addresses</h2>
<p>The idea is that a user can limit access to his/her account based on IP address rules. It can be as simple as allowing access to one single IP address, multiple addresses or more complex rules around IP address ranges or subnets.</p>
<h3>Problems with this pattern</h3>
<ul>
<li>Impractical for most websites or web services.</li>
<li>This pattern requires a user to put effort into security configuration instead of being secure by default.</li>
<li>Can become a maintenance nightmare.</li>
</ul>
<h2>Artificially increase the login time after each failed attempt</h2>
<p>This one I found very creative. Each failed login attempt causes the next failed login request to take longer by a factor of X. A successful login will proceed at normal speed at any point in time. This allows a website to throttle a distributed brute force attack while providing a good experience for the genuine user.</p>
<h3>Problems with this pattern</h3>
<ul>
<li>If a genuine user makes a mistake shortly after an attack, they might end up with a long response time.</li>
<li>The website ends up unnecessarily tying up threads on delayed requests, which can itself open the door to another DOS attack.</li>
</ul>
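<p>The throttling idea can be sketched as an exponential back-off per account. This is a hypothetical helper, and the base and cap values are arbitrary choices:</p>

```csharp
using System;

// Hypothetical helper: the delay doubles with each consecutive
// failed attempt for an account, capped at 30 seconds so a
// successful login after the attack isn't punished forever.
static TimeSpan LoginDelay(int consecutiveFailures) =>
    TimeSpan.FromSeconds(Math.Min(Math.Pow(2, consecutiveFailures), 30));

Console.WriteLine(LoginDelay(3).TotalSeconds);  // 8
Console.WriteLine(LoginDelay(10).TotalSeconds); // 30
```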
<h2>Implement a challenge like a CAPTCHA</h2>
<p>This approach tries to stop automated bots from brute forcing an account by implementing a challenge which supposedly can only be accomplished by a human. CAPTCHAs are a very popular solution, but there are many other creative approaches to filter humans from machines which work on the same assumption.</p>
<h3>Problems with this pattern</h3>
<ul>
<li>Bad user experience for the genuine user.</li>
<li>Machine learning and social engineering make it a tough challenge to come up with a good filter.</li>
</ul>
<h2>Additional verification step</h2>
<p>Digital signatures, two factor authentication and many other patterns require an additional step of verification. They are highly effective against brute force attacks, but have their own down sides and might be impractical for many web services.</p>
<h2>Combination of patterns?</h2>
<p>Quickly you will find that one pattern on its own might not do the trick. I tried to think of a good combination of patterns, weighing the potential pros and cons attached to them, and my best idea was the following:</p>
<h3>Monitor the average fail rate and CAPTCHAs</h3>
<p>The website determines a natural rate of login failure over a certain period of time. Once this metric has been established it starts monitoring and counting failed login attempts going forward. When the number of failed login attempts significantly deviates from the natural rate then a CAPTCHA will be displayed on all subsequent login requests.</p>
<p>If the rate recovers then the CAPTCHA will be hidden from the login screen again. A very transparent website could even show a notification to the user explaining why the CAPTCHA is being displayed and remind the user to set a strong password if not done yet.</p>
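<p>In code, the core of such a monitor could look like this. The baseline and tolerance values are placeholders that would have to be measured per site:</p>

```csharp
using System;

// Hypothetical check: require a CAPTCHA when the observed failure rate
// exceeds the established baseline by a given tolerance factor.
// Both baseline and tolerance are assumed example values.
static bool RequireCaptcha(
    int failedLogins, int totalLogins,
    double baselineFailureRate, double tolerance = 2.0) =>
    totalLogins > 0 &&
    (double)failedLogins / totalLogins > baselineFailureRate * tolerance;

Console.WriteLine(RequireCaptcha(30, 100, baselineFailureRate: 0.05)); // True
Console.WriteLine(RequireCaptcha(5, 100, baselineFailureRate: 0.05));  // False
```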
<h4>Pros</h4>
<ul>
<li>Effective against any type of brute force attack?</li>
<li>Good user experience.</li>
</ul>
<h4>Cons</h4>
<ul>
<li>Might be difficult to establish the initial variables.</li>
</ul>
<h2>Strict password policy</h2>
<p>Another very viable approach is to simply not fight brute force attacks at all. Make sure your users have strong passwords and make brute force attempts rather harmless.</p>
<p>A good password policy is probably a good idea in any case. As always, security comes in layers.</p>
<p>If you know any other effective defense systems against distributed brute force attacks I'd be interested in hearing them!</p>
https://dusted.codes/effective-defense-against-distributed-brute-force-attacks
[email protected] (Dustin Moris Gorski)https://dusted.codes/effective-defense-against-distributed-brute-force-attacks#disqus_threadFri, 01 May 2015 00:00:00 +0000https://dusted.codes/effective-defense-against-distributed-brute-force-attackssecuritybrute-force-attacks

Demystifying ASP.NET MVC 5 Error Pages and Error Logging

<blockquote><p><a href="https://elmah.io/?utm_source=dustedcodes&utm_medium=blog&utm_content=demystifying&utm_campaign=dustedcodes">elmah.io</a> loves this post and since we already use it as part of our official documentation for implementing custom error pages, we've decided to sponsor it. Visit <a href="https://elmah.io/?utm_source=dustedcodes&utm_medium=blog&utm_content=demystifying&utm_campaign=dustedcodes">elmah.io</a> - Error Management for .NET web applications using ELMAH, powerful search, integrations with Slack and HipChat, Visual Studio integration, API and much more.</p></blockquote>
<p>Custom error pages and global error logging are two elementary and yet very confusing topics in ASP.NET MVC 5.</p>
<p>There are numerous ways of implementing error pages in ASP.NET MVC 5 and when you search for advice you will find a dozen different StackOverflow threads, each suggesting a different implementation.</p>
<h2>Overview</h2>
<h3>What is the goal?</h3>
<p>Typically good error handling consists of:</p>
<ol>
<li>
Human friendly error pages
<ul>
<li>Custom error page per error code (e.g.: 404, 403, 500, etc.)</li>
<li>Preserving the HTTP error code in the response to avoid search engine indexing</li>
</ul>
</li>
<li>Global error logging for unhandled exceptions</li>
</ol>
<h3>Error pages and logging in ASP.NET MVC 5</h3>
<p>There are many ways of implementing error handling in ASP.NET MVC 5. Usually you will find solutions which involve at least one or a combination of these methods:</p>
<ul>
<li><a href="https://msdn.microsoft.com/en-us/library/system.web.mvc.handleerrorattribute%28v=vs.118%29.aspx">HandleErrorAttribute</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/system.web.mvc.controller.onexception%28v=vs.118%29.aspx">Controller.OnException Method</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/fwzzh56s(v=vs.140).aspx">Application_Error event</a></li>
<li><a href="https://msdn.microsoft.com/en-us/library/h0hfz6fc%28v=vs.85%29.aspx">customErrors element</a> in web.config</li>
<li><a href="https://msdn.microsoft.com/en-us/library/ms690497%28v=vs.90%29.aspx">httpErrors element</a> in web.config</li>
<li>Custom <a href="https://msdn.microsoft.com/en-us/library/ms178468%28v=vs.85%29.aspx">HttpModule</a></li>
</ul>
<p>All these methods have a historical reason and a justifiable use case. There is no golden solution which works for every application, so it is good to know the differences in order to understand which one is best applied where.</p>
<p>Before going through each method in more detail I would like to explain some basic fundamentals which will hopefully help in understanding the topic a lot easier.</p>
<h3>ASP.NET MVC Fundamentals</h3>
<p>The MVC framework is only a <a href="https://msdn.microsoft.com/en-us/library/ms227675%28v=vs.100%29.aspx">HttpHandler</a> plugged into the ASP.NET pipeline. The easiest way to illustrate this is by opening the Global.asax.cs:</p>
<pre><code>public class MvcApplication : System.Web.HttpApplication</code></pre>
<p>Navigating to the implementation of <code>HttpApplication</code> will reveal the underlying <code>IHttpHandler</code> and <code>IHttpAsyncHandler</code> interfaces:</p>
<pre><code>public class HttpApplication : IComponent, IDisposable, IHttpAsyncHandler, IHttpHandler</code></pre>
<p>ASP.NET itself is a larger framework to process incoming requests. Even though it could handle incoming requests from different sources, it is almost exclusively used with <abbr title="Internet Information Services">IIS</abbr>. It can be extended with <a href="https://msdn.microsoft.com/en-us/library/bb398986%28v=vs.140%29.aspx">HttpModules and HttpHandlers</a>.</p>
<p>HttpModules are plugged into the pipeline to process a request at any point of the <a href="https://msdn.microsoft.com/en-us/library/ms178473(v=vs.85).aspx">ASP.NET life cycle</a>. A HttpHandler is responsible for producing a response/output for a request.</p>
<p><a href="https://www.iis.net/">IIS</a> (Microsoft's web server technology) will create an incoming request for ASP.NET, which subsequently will start processing the request and eventually initialize the HttpApplication (which is the default handler) and create a response:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-04-06/16862010839_64d17c3268_o.gif" alt="IIS, ASP.NET and MVC architecture, Image by Dustin Moris Gorski">
<p>The key thing to know is that ASP.NET can only handle requests which IIS forwards to it. This is determined by the registered HttpHandlers (e.g. by default a request to a .htm file is not handled by ASP.NET).</p>
<p>And finally, MVC is only one of potentially many registered handlers in the ASP.NET pipeline.</p>
<p>This is crucial to understand the impact of different error handling methods.</p>
<h2>Breaking down the options</h2>
<h3>HandleErrorAttribute</h3>
<p>The HandleErrorAttribute is an MVC FilterAttribute, which can be applied to a class or a method:</p>
<pre><code>namespace System.Web.Mvc
{
[AttributeUsage(
AttributeTargets.Class | AttributeTargets.Method,
Inherited = true,
AllowMultiple = true)]
public class HandleErrorAttribute : FilterAttribute, IExceptionFilter
{
// ...
}
}</code></pre>
<p>Its error handling scope is limited to action methods within the MVC framework. This means it won't be able to catch and process exceptions raised from outside the ASP.NET MVC handler (e.g. exceptions at an earlier stage in the life cycle or errors in other handlers). It will equally not catch an exception if the action method is not part of the call stack (e.g. routing errors).</p>
<p>Additionally the <code>HandleErrorAttribute</code> only handles 500 internal server errors. For instance this will not be caught by the attribute:</p>
<pre><code>[HandleError]
public ActionResult Index()
{
throw new HttpException(404, "Not found");
}</code></pre>
<p>You can use the attribute to decorate a controller class or a particular action method. It supports custom error pages per exception type out of the box:</p>
<pre><code>[HandleError(ExceptionType = typeof(SqlException), View = "DatabaseError")]</code></pre>
<p>In order to get the <code>HandleErrorAttribute</code> working you also need to turn customErrors mode on in your web.config:</p>
<pre><code><system.web>
<customErrors mode="On" />
</system.web></code></pre>
<h4>Use case</h4>
<p>The <code>HandleErrorAttribute</code> is the most limited in scope. Many application errors will bypass this filter and therefore it is not ideal for global application error handling.</p>
<p>It is a great tool for action specific error handling like additional fault tolerance for a critical action method though.</p>
<h3>Controller.OnException Method</h3>
<p>The <code>OnException</code> method gets invoked if an action method from the controller throws an exception. Unlike the <code>HandleErrorAttribute</code> it will also catch 404 and other HTTP error codes and it doesn't require customErrors to be turned on.</p>
<p>It is implemented by overriding the <code>OnException</code> method in a controller:</p>
<pre><code>protected override void OnException(ExceptionContext filterContext)
{
filterContext.ExceptionHandled = true;
// Redirect on error:
filterContext.Result = RedirectToAction("Index", "Error");
// OR set the result without redirection:
filterContext.Result = new ViewResult
{
ViewName = "~/Views/Error/Index.cshtml"
};
}</code></pre>
<p>With the <code>filterContext.ExceptionHandled</code> property you can check if an exception has been handled at an earlier stage (e.g. the HandleErrorAttribute):</p>
<pre><code>if (filterContext.ExceptionHandled)
return;</code></pre>
<p>Many solutions on the internet suggest to create a base controller class and implement the <code>OnException</code> method in one place to get a global error handler.</p>
<p>However, this is not ideal because the <code>OnException</code> method is almost as limited as the <code>HandleErrorAttribute</code> in its scope. You will end up duplicating your work in at least one other place.</p>
<h4>Use case</h4>
<p>The <code>Controller.OnException</code> method gives you a little bit more flexibility than the <code>HandleErrorAttribute</code>, but it is still tied to the MVC framework. It is useful when you need to distinguish your error handling between regular and AJAX requests on a controller level.</p>
<h3>Application_Error event</h3>
<p>The <code>Application_Error</code> method is far more generic than the previous two options. It is not limited to the MVC scope any longer and needs to be implemented in the Global.asax.cs file:</p>
<pre><code>protected void Application_Error(Object sender, EventArgs e)
{
var raisedException = Server.GetLastError();
// Process exception
}</code></pre>
<p>As you may have noticed, it doesn't come from an interface, an abstract class or an overridden method. It is purely convention based, similar to the <code>Page_Load</code> event in ASP.NET Web Forms applications.</p>
<p>Any unhandled exception within ASP.NET will bubble up to this event. There is also no concept of routes anymore (because it is outside the MVC scope). If you want to redirect to a specific error page you have to know the exact URL or configure it to co-exist with "customErrors" or "httpErrors" in the web.config.</p>
<h4>Use case</h4>
<p>In terms of global error logging this is a great place to start with! It will capture all exceptions which haven't been handled at an earlier stage. But be careful, if you have set <code>filterContext.ExceptionHandled = true</code> in one of the previous methods then the exception will not bubble up to <code>Application_Error</code>.</p>
<p>However, for custom error pages it is still not perfect. This event will trigger for all ASP.NET errors, but what if someone navigates to a URL which isn't handled by ASP.NET? For example try navigating to <a href="http://{your-website}/a/b/c/d/e/f/g">http://{your-website}/a/b/c/d/e/f/g</a>. The route is not mapped to ASP.NET and therefore the <code>Application_Error</code> event will not be raised.</p>
<h3>customErrors in web.config</h3>
<p>The "customErrors" setting in the web.config allows to define custom error pages, as well as a catch-all error page for specific HTTP error codes:</p>
<pre><code><system.web>
<customErrors mode="On" defaultRedirect="~/Error/Index">
<error statusCode="404" redirect="~/Error/NotFound"/>
<error statusCode="403" redirect="~/Error/BadRequest"/>
</customErrors>
</system.web>
</code></pre>
<p>By default "customErrors" will redirect a user to the defined error page with an HTTP 302 Redirect response. This is really bad practice, because the browser does not receive the appropriate HTTP error code and is sent to the error page as if it were a legitimate page. The URL in the browser changes and the 302 HTTP code is followed by a 200 OK, as if there was no error. This is not only confusing but also has other negative side effects, like Google starting to index those error pages.</p>
<p>You can change this behaviour by setting the redirectMode to "ResponseRewrite":</p>
<pre><code><customErrors mode="On" redirectMode="ResponseRewrite"></code></pre>
<p>This fixes the initial problem, but will give a runtime error when redirecting to an error page now:</p>
<blockquote>
<header><strong>Runtime Error</strong></header>
<p>An exception occurred while processing your request. Additionally, another exception occurred while executing the custom error page for the first exception. The request has been terminated.</p>
</blockquote>
<p>This happens because "ResponseRewrite" mode uses <a href="https://msdn.microsoft.com/en-us/library/ms525800%28v=vs.90%29.aspx">Server.Transfer</a> under the covers, which looks for a file on the file system. As a result you need to change the redirect path to a static file, for example to an .aspx or .html file:</p>
<pre><code><customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="~/Error.aspx"/></code></pre>
<p>Now there is only one issue remaining with this configuration. The HTTP response code for the error page is still "200 OK". The only way to fix this is to manually set the correct error code in the .aspx error page:</p>
<pre><code><% Response.StatusCode = 404; %></code></pre>
<p>This is already pretty good in terms of custom error pages, but we can do better!</p>
<p>Noticed how the customErrors section goes into the system.web section? This means we are still in the scope of ASP.NET.</p>
<p>Files and routes which are not handled by your ASP.NET application will render a default 404 page from IIS (e.g. try <a href="http://{your-website}/not/existing/image.gif">http://{your-website}/not/existing/image.gif</a>).</p>
<p>Another downside of customErrors is that if you use a <a href="https://msdn.microsoft.com/en-us/library/system.web.mvc.httpstatuscoderesult%28v=vs.118%29.aspx">HttpStatusCodeResult</a> instead of throwing an actual exception then it will bypass the ASP.NET customErrors mode and go straight to IIS again:</p>
<pre><code>public ActionResult Index()
{
return HttpNotFound();
//throw new HttpException(404, "Not found");
}</code></pre>
<p>In this case there is no hack which can be applied to display a friendly error page which comes from customErrors.</p>
<h4>Use case</h4>
<p>The customErrors setting was for a long time the best solution, but still had its limits. You can think of it as a legacy version of httpErrors, which has been only introduced with IIS 7.0.</p>
<p>The only time when customErrors still makes sense is if you can't use httpErrors, because you are running on IIS 6.0 or lower.</p>
<h3>httpErrors in web.config</h3>
<p>The httpErrors section is similar to customErrors, but with the main difference that it is an IIS level setting rather than an ASP.NET setting and therefore needs to go into the system.webserver section in the web.config:</p>
<pre><code><system.webServer>
<httpErrors errorMode="Custom" existingResponse="Replace">
<clear/>
<error
statusCode="404"
path="/WebForms/Index.aspx"
responseMode="ExecuteURL"/>
</httpErrors>
</system.webServer></code></pre>
<p>It allows more configuration than customErrors but has its own little caveats. I'll try to explain the most important settings in a nutshell:</p>
<ul>
<li>httpErrors can be inherited from a higher level (e.g. set in the machine.config)</li>
<li>Use the <code><remove/></code> tag to remove an inherited setting for a specific error code.</li>
<li>Use the <code><clear/></code> tag to remove all inherited settings.</li>
<li>Use the <code><error/></code> tag to configure the behaviour for one error code.</li>
<li>
responseMode "ExecuteURL" will render a dynamic page with status code 200.
<ul>
<li>The workaround to set the correct error code in the .aspx page works here as well.</li>
</ul>
</li>
<li>responseMode "Redirect" will redirect with HTTP 302 to a URL.</li>
<li>
responseMode "File" will preserve the original error code and output a static file.
<ul>
<li>.aspx files will get output in plain text.</li>
<li>.html files will render as expected.</li>
</ul>
</li>
</ul>
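<p>The status code workaround mentioned in the list above can live in the error page itself. A minimal sketch for a WebForms code-behind (the page name is assumed to match the /WebForms/Index.aspx path from the config; <code>TrySkipIisCustomErrors</code> stops IIS from replacing the response with its own error page again):</p>

```csharp
using System;
using System.Web.UI;

public partial class Index : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // ExecuteURL re-runs this page with status code 200,
        // so set the real error code from within the page:
        Response.StatusCode = 404;

        // Prevent IIS from swapping this response for its own error page:
        Response.TrySkipIisCustomErrors = true;
    }
}
```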
<p>The main advantage of httpErrors is that it is handled at the IIS level. It will literally pick up all error codes and redirect to a friendly error page. If you want to benefit from master pages I would recommend going with the ExecuteURL approach and the status code fix. If you want rock-solid error pages which IIS can serve even when everything else burns, then I'd recommend going with the static file approach (preferably .html files).</p>
<h4>Use case</h4>
<p>This is currently the best place to configure friendly error pages in one location and to catch them all. The only reason not to use httpErrors is if you are still running on an older version of IIS (< 7.0).</p>
<h3>Custom HttpModule</h3>
<p>Last but not least I would like to quickly touch on <a href="https://msdn.microsoft.com/library/ms227673(v=vs.100).aspx">custom HttpModules in ASP.NET</a>. A custom HttpModule is not very useful for friendly error pages, but it is a great location to put global error logging in one place.</p>
<p>With an HttpModule you can subscribe to the <code>OnError</code> event of the <code>HttpApplication</code> object and this event behaves the same way as the <code>Application_Error</code> event from the Global.asax.cs file. However, if you have both implemented then the one from the HttpModule gets called first.</p>
<p>The benefit of the HttpModule is that it is reusable in other ASP.NET applications. Adding/Removing a HttpModule is as simple as adding or removing one line in your web.config:</p>
<pre><code><system.webServer>
    <modules>
        <add name="CustomModule" type="SampleApp.CustomModule, SampleApp"/>
    </modules>
</system.webServer></code></pre>
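<p>A minimal error-logging module matching the registration above could look like the following sketch (the <code>SampleApp.CustomModule</code> type name comes from the config; the actual logging call is just a placeholder for your logging framework of choice):</p>

```csharp
using System;
using System.Web;

namespace SampleApp
{
    public class CustomModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            // Equivalent to the Application_Error event in Global.asax.cs:
            application.Error += OnError;
        }

        private static void OnError(object sender, EventArgs e)
        {
            var application = (HttpApplication)sender;
            var exception = application.Server.GetLastError();

            // Placeholder: hand the exception to your logging framework of choice.
            if (exception != null)
                System.Diagnostics.Trace.TraceError(exception.ToString());
        }

        public void Dispose() { }
    }
}
```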
<p>In fact someone has already created a powerful, reusable and open source error logging module called <a href="http://elmah.github.io/">ELMAH</a>. Be sure to check out <a href="https://elmah.io/?utm_source=dustedcodes&utm_medium=blog&utm_content=demystifying&utm_campaign=dustedcodes">elmah.io</a> as well.</p>
<p>If you need application-wide error logging, I highly recommend taking a look at this project!</p>
<h2>Final words</h2>
<p>I hope this overview was helpful in explaining the different error handling approaches and how they are linked together.</p>
<p>Each of the techniques has a certain use case and it really depends on what requirements you have. If you have any further questions feel free to ask me here or via any of the social media channels referenced on my <a href="//dusted.codes/about">about</a> page.</p>
<p><strong>EDIT:</strong> There is a new blog post on <a href="https://dusted.codes/error-handling-in-aspnet-core">error handling in ASP.NET Core</a>.</p>
https://dusted.codes/demystifying-aspnet-mvc-5-error-pages-and-error-logging
[email protected] (Dustin Moris Gorski)https://dusted.codes/demystifying-aspnet-mvc-5-error-pages-and-error-logging#disqus_threadMon, 06 Apr 2015 00:00:00 +0000https://dusted.codes/demystifying-aspnet-mvc-5-error-pages-and-error-loggingaspnetmvcerror-pageserror-loggingGuard clauses without test coverage, a common TDD pitfall<p>Today I wanted to blog about a little mistake which can easily creep into otherwise good TDD practices.</p>
<p>For this demo I'd like to start with an empty unit test project and initialize a subject under test. Then I add enough code to make the solution build:</p>
<pre><code>using NUnit.Framework;

namespace MyClassLibrary
{
    public class DomainClass { }

    [TestFixture]
    public class DomainClassTests
    {
        [Test]
        public void Test1()
        {
            var sut = new DomainClass();
        }
    }
}</code></pre>
<p>For this example the method under test will have some very trivial business logic:</p>
<ol>
<li>Accept an argument</li>
<li>Check if the argument is NULL and return early, otherwise proceed</li>
<li>Call into a dependency</li>
<li>Return a result</li>
</ol>
<p>Let's finish the first test by checking for the simple case of a NULL argument:</p>
<pre><code>[TestFixture]
public class DomainClassTests
{
    [Test]
    public void DoSomeWork_With_Null_Input_Will_Return_Null()
    {
        // Arrange
        var sut = new DomainClass();

        // Act
        var actual = sut.DoSomeWork(null);

        // Assert
        Assert.IsNull(actual);
    }
}</code></pre>
<p>In true TDD practice you'd write just enough code to compile the project first, then let the test fail and finally return null to make it pass:</p>
<pre><code>public class DomainClass
{
    public object DoSomeWork(object arg)
    {
        return null;
    }
}</code></pre>
<p><strong>However, at this point I have often seen developers implement a little bit more code than is actually required by the test:</strong></p>
<pre><code>public class DomainClass
{
    public object DoSomeWork(object arg)
    {
        // This guard clause has not been enforced by a particular unit test yet:
        if (arg == null)
            return null;

        return arg;
    }
}</code></pre>
<p>It is very tempting to write a bit more code if you already know what the desired end result should look like. Guard clauses are so trivial that this is a very common mistake.</p>
<p>Someone might argue that the code is justified, because if you comment out the if statement the test will fail.</p>
<p>This is not true though, because the entire condition as a whole is not required yet and as we have proven above, a simple <code>return null;</code> statement satisfies the test as well.</p>
<p>This little mistake seems harmless now, but it could introduce a potential NullReferenceException later in the project, when a developer starts to refactor and removes seemingly redundant code.</p>
<p>This will essentially happen when we continue implementing the remaining unit tests and business logic for the method under test:</p>
<pre><code>public object DoSomeWork(object arg)
{
    if (arg == null)
        return null;

    var result = _dependency.ProcessData(arg);
    return result;
}</code></pre>
<p>Now at this point I could delete the if statement together with the "return null" and the first test will still succeed:</p>
<pre><code>public object DoSomeWork(object arg)
{
    // if (arg == null)
    //     return null;

    var result = _dependency.ProcessData(arg);
    return result;
}</code></pre>
<p>Why? Well because in the first test we didn't set up a stub for the dependency yet:</p>
<pre><code>[TestFixture]
public class DomainClassTests
{
    [Test]
    public void DoSomeWork_With_Null_Input_Will_Return_Null()
    {
        // Arrange
        var sut = new DomainClass();

        // Act
        var actual = sut.DoSomeWork(null);

        // Assert
        Assert.IsNull(actual);
    }
}</code></pre>
<p>It means that <code>var result = _dependency.ProcessData(arg);</code> will return null and make the test go green.</p>
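<p>The safer path is to only add the guard clause once a test actually forces it, for example by stubbing the dependency so that it blows up on null input. A self-contained sketch of the idea (the <code>IDependency</code> interface and the constructor injection are assumptions, since the original snippets never show how the dependency field is declared):</p>

```csharp
using System;

// Assumed abstraction for the dependency (not shown in the original snippets):
public interface IDependency
{
    object ProcessData(object arg);
}

// A stub that blows up on null input, mimicking a strict real dependency.
// With this in place, deleting the guard clause makes the null test fail loudly.
public class ThrowOnNullStub : IDependency
{
    public object ProcessData(object arg)
    {
        if (arg == null)
            throw new ArgumentNullException("arg");
        return arg;
    }
}

public class DomainClass
{
    private readonly IDependency _dependency;

    public DomainClass(IDependency dependency)
    {
        _dependency = dependency;
    }

    public object DoSomeWork(object arg)
    {
        if (arg == null)
            return null;

        return _dependency.ProcessData(arg);
    }
}

public static class Program
{
    public static void Main()
    {
        // Arrange: the stub forces the guard clause to carry its own weight
        var sut = new DomainClass(new ThrowOnNullStub());

        // Act
        var actual = sut.DoSomeWork(null);

        // Assert
        if (actual != null)
            throw new Exception("Expected null for null input");

        Console.WriteLine("Guard clause test passed");
    }
}
```

Now commenting out the guard clause no longer slips through: the stub throws and the test fails.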
<p>This is why it is best not to introduce little shortcuts in TDD, and to watch out for them during code reviews as well!</p>
https://dusted.codes/guard-clauses-without-test-coverage-a-common-tdd-pitfall
[email protected] (Dustin Moris Gorski)https://dusted.codes/guard-clauses-without-test-coverage-a-common-tdd-pitfall#disqus_threadThu, 19 Mar 2015 00:00:00 +0000https://dusted.codes/guard-clauses-without-test-coverage-a-common-tdd-pitfalltddguard-clausesMaking Font Awesome awesome - Using icons without i-tags<p><a href="http://fortawesome.github.io/Font-Awesome/">Font Awesome</a> is truly an awesome library. It is widely used across many websites and has made a web developer's life so much easier!</p>
<p>There is one thing which I don't find that awesome though. It pollutes your HTML with a lot of styling markup.</p>
<p>Let me explain the problem with an example...</p>
<h2>The Problem</h2>
<p>The default way to include an icon into your website is by adding an <i> tag into your HTML code; like this:</p>
<pre><code><i class="fa fa-car"></i></code></pre>
<p>
For example, the <a href="http://fortawesome.github.io/Font-Awesome/icon/car/">car icon</a> will render into:<br /><i class="fa fa-car"></i>
</p>
<p>This is great, but now you end up with many empty <i> tags in your HTML code, which have no structural purpose in the document whatsoever. Even worse, these tags are tightly coupled to a concrete theming technology, Font Awesome in this case.</p>
<p>This is not great and I personally think HTML markup should be theme agnostic.</p>
<h2>Solutions out in the wild</h2>
<p>A quick Google search brought up some solutions to the problem, though none of them came without drawbacks.</p>
<h3>#1 Shipping a modified copy of Font Awesome</h3>
<p>By providing your own copy of Font Awesome you'd obviously be able to do what you like and easily solve the problem. However, this approach has some major issues:</p>
<ol>
<li>I don't want to maintain a custom copy of Font Awesome</li>
<li>The library is quite big and I really want to benefit from the public CDN</li>
</ol>
<h3>#2 Add the classes to the target element</h3>
<p>The next best suggestion was to attach the classes to the target tag instead of the <i> tag.<br />For example:</p>
<pre><code><a class="fa fa-car" href="#">This is a link</a></code></pre>
<p>Result:<br /><a class="fa fa-car" href="#">This is a link</a></p>
<p>As you can see this works, but Font Awesome has now become the font for the entire anchor tag. Since Font Awesome only contains icon glyphs, the browser falls back to the default font for the regular text, which mostly comes from the serif family and is rarely what you want.</p>
<h3>#3 Setting the unicode character in your custom CSS</h3>
<p>The most popular solution was to set the Unicode character for the content property of the ::before pseudo-element:</p>
<pre><code>a:before {
font-family: FontAwesome;
content: "\f1b9";
display: inline-block;
padding-right: 3px;
vertical-align: middle;
}
</code></pre>
<p>The result will be visually perfect, but there are a few things which I don't like about this approach:</p>
<ol>
<li>I have to add custom CSS for each tag which I want to decorate</li>
<li>The unicode character can change</li>
<li>It is not readable! CSS code is still code and deserves the same best practices as any other code</li>
</ol>
<h2>Analysing the Font Awesome CSS classes</h2>
<p>After I wasn't really satisfied with any of the proposed solutions I tried to find a better one.</p>
<h3>Let's break it down</h3>
<p>Each icon consists of two CSS classes - the shared "fa" class and an icon-specific class like "fa-car". Setting only the icon-specific class will result in a not very meaningful unicode character:</p>
<pre><code><a class="fa-car" href="#">This is a link</a></code></pre>
<p>Result:<br /><a class="fa-car" href="#">This is a link</a></p>
<p>The icon isn't what we want, but at least the tag's original font remains as is. Using the Google Chrome developer tools I can quickly confirm that the icon-specific class is not doing any harm to the original tag:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-03-04/16710024065_9226643bf3_o.png" alt="CSS source code of the Font Awesome car icon, Image by Dustin Moris Gorski">
<p>Evidently this class only adds the content to the ::before pseudo-element of the target element. The conclusion is that the actual styling gets applied via the "fa" class:</p>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-03-04/16523945879_3588abcda2_o.png" alt="CSS source code of the Font Awesome fa class, Image by Dustin Moris Gorski">
<p>Now this makes sense. The content from the ::before pseudo-element gets rendered inside the original tag and therefore also picks up the styling from the fa class.</p>
<p>Everything from the fa class could equally go into the icon class as part of the ::before pseudo-element, but I can see why the Font Awesome team has extracted it into a shared class, because it is the same for every icon and would otherwise be a maintenance nightmare.</p>
<h2>The ultimate solution (or at least the best I came up with)</h2>
<p>After inspecting the code I realised that all I need is to create one additional class in a custom style sheet to fix the problem:</p>
<pre><code>.icon::before {
display: inline-block;
margin-right: .5em;
font: normal normal normal 14px/1 FontAwesome;
font-size: inherit;
text-rendering: auto;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
transform: translate(0, 0);
}</code></pre>
<p>It is a copy-paste of the original fa class with 3 changes to it:</p>
<ol>
<li>I named the class "icon" to avoid a conflict</li>
<li>I attached it to the ::before pseudo-element</li>
<li>I added a small margin-right, so that the icon doesn't stick to the original tag</li>
</ol>
<p>With this little trick it seems like I am able to tick all the boxes:</p>
<ul>
<li>I can continue to use the original Font Awesome CDN library</li>
<li>I don't need to add empty i-tags</li>
<li>It doesn't interfere with other CSS on targeted elements</li>
<li>I don't have to make a custom CSS change for each individual icon (unlike the unicode approach)</li>
<li>The code is readable!</li>
</ul>
<p>
Now it can be used like this:<pre><code><a class="icon fa-car" href="#">This is a link</a></code></pre>
</p>
<p>
Result:<br /><a class="icon fa-car" href="#">This is a link</a>
</p>
<p>Especially the fact that I don't have to use the unicode characters is very appealing. When I read the code "icon fa-car" it is pretty obvious what the rendered result will look like.</p>
<h2>Conclusion</h2>
<p>At the moment I haven't found a drawback with this approach, but there might be something which I have overlooked. If you know of any issues or if you know an even better approach, then please let me know! Feedback is much appreciated!</p>
https://dusted.codes/making-font-awesome-awesome-using-icons-without-i-tags
[email protected] (Dustin Moris Gorski)https://dusted.codes/making-font-awesome-awesome-using-icons-without-i-tags#disqus_threadWed, 04 Mar 2015 00:00:00 +0000https://dusted.codes/making-font-awesome-awesome-using-icons-without-i-tagsfont-awesomecssPHP UK Conference 2015<p>Last week was my first time at the <a href="http://phpconference.co.uk/">PHP UK Conference</a> in London. As a .NET developer who is very new to the PHP community I didn't have any particular expectations, but I think this year was a great time to be there!</p>
<h2>The Venue</h2>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-02-26/16626982306_ed7994ed96_o.jpg" alt="PHP UK Conference 2015 - The Venue, Image by Dustin Moris Gorski" class="half-width"><img src="https://cdn.dusted.codes/images/blog-posts/2015-02-26/16445626857_00f842eb7e_o.jpg" alt="PHP UK Conference 2015 - Open Bar, Image by Dustin Moris Gorski" class="half-width">
<p>The conference took place on Thursday and Friday (apparently for the first time, because in previous years it was Friday and Saturday) at <a href="http://www.thebrewery.co.uk/">The Brewery</a>, which is a great venue at a very central location in the city of London.</p>
<p>Also worth noting is that this year was the 10<sup>th</sup> anniversary of the event with a record high of more than 700 attendees coming from many different countries around the world. The hosting was excellent, food and drinks were available throughout the entire day and in the evenings there were hundreds of free beers up for grabs to celebrate the occasion.</p>
<p>As if this was not enough, the organisers even rented out the <a href="http://www.allstarlanes.co.uk/venues/brick-lane/karaoke/">All Star Lanes in Brick lane</a> to continue the celebration with some free bowling, free karaoke, more beer, more food and a cake on Thursday night.</p>
<p>I have to admit that due to a light cold I didn't make the most out of it, but I still had a fantastic time!</p>
<h2>The Tracks</h2>
<img src="https://cdn.dusted.codes/images/blog-posts/2015-02-26/16651908382_77919e91e6_o.jpg" alt="PHP UK Conference 2015 - Closing Keynote, Image by Dustin Moris Gorski" class="half-width"><img src="https://cdn.dusted.codes/images/blog-posts/2015-02-26/16651888422_9c337e1d3f_o.jpg" alt="PHP UK Conference 2015 - Opening Keynote, Image by Dustin Moris Gorski" class="half-width">
<p>Both days started off with two great key notes. The first keynote was by <a href="https://twitter.com/coderabbi">coderabbi</a>, who walked us through some of his own experiences, spoke about code reviews and peer coding and eventually spread his wisdom over a packed room.</p>
<p>On Friday <a href="https://twitter.com/miss_jwo">Jenny Wong</a>, a developer and (according to her own words) a "community junkie" kicked off the second day with a very light hearted and extremely inspirational speech about bringing developer communities together.</p>
<h3>Integrating Communities</h3>
<p>The message which I took away is, that it doesn't matter who you are, which technology you use or how experienced you are. We are all connected through the same passion which brought us together - the passion for coding. Sadly we are in an industry where people burn out every day. Rather than belittling and laughing at other developers (even if it is a Wordpress developer :)) we should help each other and help this community to grow and not shrink.</p>
<p>Definitely one of the sessions which stuck with me the most!</p>
<p>Some other sessions which I really enjoyed were "Composer Best Practices", "Application Logging & Logstash", "Modern Software Architectures" and "HHVM at Etsy".</p>
<p>They were perhaps not the most technically advanced sessions, but they all had a great speaker, who was able to present something interesting from a personal experience and give some insights on the topic which were beyond the usual literature which you'll find in books or on the internet.</p>
<p>I won't rehash all of them in this blog post, but I picked a few where I wanted to share some interesting take aways!</p>
<h3>Versioning</h3>
<p>In the session about Composer best practices <a href="http://seld.be/">Jordi Boggiano</a> started off by explaining the meaning and importance of <a href="http://semver.org">semantic versioning</a>.</p>
<p>We all know what a version number looks like. It consists of at least three numbers, separated by dots, where the parts usually stand for the major number, the minor number and the patch number (sometimes referred to as the build number).</p>
<p>e.g.: <strong>{major}.{minor}.{patch}</strong></p>
<p>What semantic versioning does is to give a clearly defined meaning to each of these numbers with the aim to ease the upgrade and migration pain when developers use 3<sup>rd</sup> party code libraries.</p>
<p>In a nutshell, major stands for compatibility breaks (even the smallest!), minor stands for new features (without affecting other features) and patch denotes bug fixes (for existing features). Sounds simple, but this convention really makes a difference when you are dealing with someone else's code and you want to understand the implications of a NuGet package update or similar.</p>
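<p>As a quick illustration of the rule (a deliberate simplification: real semantic versioning also defines pre-release tags, build metadata and special 0.x behaviour), deciding whether an upgrade may break you boils down to comparing the major number:</p>

```csharp
using System;

public static class SemVer
{
    // Parses a "{major}.{minor}.{patch}" string.
    public static (int Major, int Minor, int Patch) Parse(string version)
    {
        var parts = version.Split('.');
        return (int.Parse(parts[0]), int.Parse(parts[1]), int.Parse(parts[2]));
    }

    // A major bump signals a compatibility break, however small.
    public static bool IsBreakingUpgrade(string from, string to) =>
        Parse(to).Major > Parse(from).Major;

    public static void Main()
    {
        Console.WriteLine(SemVer.IsBreakingUpgrade("1.4.2", "2.0.0")); // major bump: breaking
        Console.WriteLine(SemVer.IsBreakingUpgrade("1.4.2", "1.5.0")); // minor bump: new features only
    }
}
```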
<p>It definitely changed my view on versioning artifacts and if this was new to you as well, then I hope it makes as much sense to you as it did to me :).</p>
<h3>HHVM and PHP 7</h3>
<p>The other talk I wanted to quickly share with you was HHVM at <a href="https://www.etsy.com/">Etsy</a> by <a href="https://twitter.com/jazzdan">Dan Miller</a>.</p>
<p>For those who don't know what <a href="http://hhvm.com/">HHVM</a> is, it is a virtual machine which compiles PHP code down to native code for better performance. HHVM is written by Facebook in <a href="http://hacklang.org/">Hack</a> and PHP, and Hack itself is written by Facebook as well.</p>
<p>HHVM stands for HipHop Virtual Machine and is a replacement for the initial HipHop compiler. The major difference is that HHVM is a just-in-time compiler, while HipHop wasn't.</p>
<h4>Why a just-in-time compiler for PHP?</h4>
<p>Now you probably ask yourself why Facebook preferred a just-in-time compiler over the original approach. While there are lots of reasons for it, one of the perhaps minor but interesting ones was the difficulty of introducing HipHop into the existing development culture. PHP developers are used to dropping in a replacement file on a server and knowing that the changes take immediate effect. With a compilation step in between this was no longer possible, which was a rather huge shift for engineers. A just-in-time compiler, on the other hand, allowed developers to continue the workflow they were used to.</p>
<h4>HHVM points of interest</h4>
<p>Another few interesting things I have learned:</p>
<ul>
<li>Facebook claims that HHVM increases throughput by 3 to 6 times. These benchmarks were taken against a much older PHP version though, and Etsy's experience was closer to 3x in comparison with PHP 5.</li>
<li>HHVM is used by other big companies like Wikipedia and Baidu as well.</li>
<li>HHVM is extremely solid. Etsy had no issues with HHVM whatsoever and did not encounter any bugs, even though their entire code base is in PHP and they expected at least some weird PHP constructs to cause errors - but to their pleasant surprise, none did.</li>
<li>BUT, they found many issues with 3<sup>rd</sup> party modules for HHVM. They had to fix some pretty major bugs for the Memcached and MySQL modules to get them running.</li>
<li>If a HHVM module is not yet in use by another big company, then expect to invest a fair amount of time for bug fixing.</li>
<li>Facebook seems to be very engaged with the project. There were stories told at the conference where people reported a bug and it was fixed only a few hours later, overnight. This is kind of cool stuff and music to a developer's ears :)!</li>
<li>
HHVM offers a variety of other extremely useful features like:
<ul>
<li>
<strong>HHVM Debugger</strong><br />
Allows you to set conditional breakpoints, e.g.: only hit the breakpoint for userid=123.
</li>
<li>
<strong>sgrep</strong><br />
Great tool for static code analysis which offers a simpler syntax than conventional regex.<br />
e.g.: <code>sgrep -e 'X && X'</code> will return all the code lines where the left- and right hand statement of a logical AND operator is the same.
</li>
<li>
<strong>spatch</strong><br />
Great tool for refactoring your code.<br />
Good PHP IDEs will offer you refactoring tools as well, but they all rely on text search and replace, which is why they won't give you 100% confidence that all changes have been made correctly across your entire code base.<br />
e.g.: Remove 2<sup>nd</sup> argument from a method, etc.
</li>
</ul>
</li>
<li>And last but not least: Some early performance benchmarks between HHVM and PHP 7 showed that PHP 7 gets very close to HHVM. In one benchmark it even outperformed HHVM, but they didn't dive too deeply into the details and the quality of these figures, so please don't pin this on the wall just yet.</li>
</ul>
<h2>Summary</h2>
<p>All in all the PHP UK conference was an amazing event and I am glad I had the opportunity to be part of it! Will I go next year again? It is definitely on my list! Hopefully I will see you next year PHP folks!</p>
https://dusted.codes/php-uk-conference-2015
[email protected] (Dustin Moris Gorski)https://dusted.codes/php-uk-conference-2015#disqus_threadThu, 26 Feb 2015 00:00:00 +0000https://dusted.codes/php-uk-conference-2015php-ukversioninghhvmHello World<p>Website has been launched.</p>
<p>More content and small incremental updates will follow over the next few weeks.</p>
https://dusted.codes/hello-world
[email protected] (Dustin Moris Gorski)https://dusted.codes/hello-world#disqus_threadMon, 16 Feb 2015 00:00:00 +0000https://dusted.codes/hello-world