{ "version": "https://jsonfeed.org/version/1", "title": "Muhammad Rehan Saeed", "home_page_url": "https://rehansaeed.com", "feed_url": "https://rehansaeed.com/feed.json", "description": "Software Developer at Microsoft, YouTuber, Open Source Contributor and Blogger", "icon": "https://rehansaeed.com/images/hero/Muhammad-Rehan-Saeed-1600x900.jpg", "author": { "name": "Muhammad Rehan Saeed", "url": "https://rehansaeed.com" }, "items": [ { "id": "https://rehansaeed.com/on-the-etiquette-of-pull-request-comments/", "content_html": "
Picture the scene. A hard working newbie developer is mashing their keyboard for hours trying to get something working smoothly. Eventually, perseverance and maybe more than a little help from StackOverflow leads to success. The developer carefully commits their code and crafts a pull request, ready for the world to bask in its glory.
\nThe next day the developer eagerly opens up their pull request and sees a dozen or so comments that all look something like this:
\n\n\nnit: Fix this.
\n\n
You may see this comment and not see any problems. I'd like to suggest that this comment is an example of somewhat bad etiquette. Let us discuss the etiquette of pull request comments.
\nI have a limited number of keystrokes before I die. Are my fellow developers not worthy of a share of those strokes? We're all in a rush to get things done, but spending the time to write a full comment is not a waste of time; it shows that you care!
\n\n\nFix this.
\n\n
What needs to be fixed? How does it need to be fixed? Why does it need to be fixed? Answering the what, why and how is important to getting your point across.
\nIn cases like this, saving yourself some keystrokes by assuming knowledge on the part of the developer submitting the PR is often a mistake. By not fully explaining yourself, you increase the chances of further questions being asked or even direct contact via messaging systems like Slack or Microsoft Teams. Since PR reviews are asynchronous in nature, this can increase the time it takes to review a PR by hours or even days.
\nIf you're writing pull request comments that are about code style, consider using a tool like StyleCop or Prettier to automate this. It also ends any conflict in a team about the style of code the team should adopt and makes everyone's code look the same for optimal readability. Thus you can avoid having comments like this in your pull requests:
\n\n\nPrefix your fields with _.
\n\n
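\nRules like the underscore prefix above can be enforced by tooling instead of review comments. Here is a hypothetical .editorconfig sketch (the rule, symbol and style names are my own; only the dotnet_naming_* keys and values are standard):

```ini
[*.cs]
# Flag private fields that are not prefixed with an underscore.
dotnet_naming_style.underscore_prefix.required_prefix = _
dotnet_naming_style.underscore_prefix.capitalization = camel_case

dotnet_naming_symbols.private_fields.applicable_kinds = field
dotnet_naming_symbols.private_fields.applicable_accessibilities = private

dotnet_naming_rule.private_fields_underscore.symbols = private_fields
dotnet_naming_rule.private_fields_underscore.style = underscore_prefix
dotnet_naming_rule.private_fields_underscore.severity = warning
```

With a rule like this in place, the IDE or build warns automatically and nobody needs to write the comment at all.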
The first time I saw 'nit:' in a PR review, I had to google its meaning. Evidently, it means that the comment is a minor point, by way of evoking the spectre of a blood-sucking parasite. It saves the writer around seven keystrokes but always stirs a mild sense of disgust in me personally. Why not spare someone that Google search (and the disgust) and just write 'Minor point:'?
\n\n\nnit: This can be done better.
\n\n
The way you word your PR comments can come across as passive-aggressive if you're not careful. Is the commenter below suggesting that I hadn't considered the use of 'X'? What if I already did and discounted it?
\n\n\nUse X here.
\n\n
Below are a couple of better ways to phrase the above comment. The comments assume nothing about the developer's intent and simply ask a question.
\n\n\nHave you considered doing X here?\nHave you thought about X?\nWould X help here?
\n\n
Text is an imperfect, low-bandwidth means of communication. One way to increase that bandwidth is to use emoji to get an emotional point across along with your technical one. It also lends a sense of fun to what can otherwise feel quite negative, since you are, after all, picking out all the mistakes you can spot in someone's hard work.
\n\n\nLooks like there is a 🐛 crawling on this line.
\n\n
Not every comment has to be a negative comment picking out a bug or mistake. It's useful to call out a positive change. Other reviewers in the team may see it and pick up on the practice too.
\n\n\n👏 This is awesome, we need to do this elsewhere.
\n\n
I sometimes see comments to the effect of the one below. Why is 'X' better? Says who?
\n\n\nX is a better way to do this.
\n\n
Now imagine there was a link below to a blog post providing evidence as to why 'X' is better. We'll even use better language and the odd emoji:
\n\n\nHave you considered using X, it's 🔥?
\n\n\n
Every PR comment is a teachable moment where you have the opportunity to take a little extra time and explain why you are requesting a particular change. As I've said above, take the time to add a link to a relevant article or blog post, but I've found that sometimes writing a paragraph or two can also help.
\nOne thing to watch out for though is that PR comments don't scale! If you find yourself making the same comment on a second person's PR, it's time to think about writing a blog post (as I've done in the past), a wiki entry or some documentation somewhere and sharing that with your team.
\n", "url": "https://rehansaeed.com/on-the-etiquette-of-pull-request-comments/", "title": "On the Etiquette of Pull Request Comments", "summary": "Commenting on a pull request is fraught with potential pitfalls. Here is a short guide to the etiquette of writing pull request comments.", "image": "https://rehansaeed.com/images/hero/Precious-PR-1600x900.jpg", "date_modified": "2022-08-02T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/open-uk-honouree/", "content_html": "Generally speaking I'm not the sort of person who generally receives awards or prizes, so I was rather surprised when out of the blue I was contacted by Open UK to receive a medal to signify being listed among their 2022 honours list.
\nI had never heard of Open UK before so I was initially very sceptical but it seems to be a legit organisation that is funded by the likes of arm, GitHub, Google, Huawei, Microsoft and Red Hat and partners with several open source organisations including the Linux Foundation.
\nI did a bunch of reading before I accepted any award to make sure they weren't doing anything shady and found that they seem to be lobbying the UK government in favour of Open Source which is fine by me!
\n\n\nThe 2022 #openukgennext #openukhonouree list is made up of individuals with broad ranging experience in Open Technology identified as being ones to watch in the UK!
\n
\n\nThey hail from all walks of Open Source Software, Open Hardware and Open Data. This is the list of those to watch for the future of Open Technology. All are earmarked as leading the next generation of Open Technology whether through social media, their jobs, community contributions, policy or in education.
\nThe British Honours system is something very specific to the UK and a means of rewarding an individual for their achievement or service. Medals are used within this system to recognise an activity or long or valuable service.
\nCongratulations to all of those listed. Enjoy the recognition of our New Year's Honour from your peers at OpenUK and we look forward to seeing all that you will achieve in Open Technology through 2022 and beyond.
\n\n
This is the second year the award has been handed out. I'm not certain what the selection criteria were, apart from the fact that I'm a developer in the UK, but I do have a fairly active GitHub profile so I suspect that is how they found me.
\n
Overall, I'm very thankful that I was recognised (albeit without knowing exactly why) and am happy to add the Open UK medal to my small collection of Microsoft MVP awards from before I became a Microsoft employee.
\n", "url": "https://rehansaeed.com/open-uk-honouree/", "title": "I Was Awarded as an Open UK Honouree", "summary": "I am very thankful for being awarded as an Open UK 2022 Honouree. Open UK is an organisation funded by tech companies to lobby the UK government in favour of open source.", "image": "https://rehansaeed.com/images/hero/Open-UK-Honours-1366x768.jpg", "date_modified": "2022-02-07T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/live-streaming-dotnet/", "content_html": "I started live streaming my software development learnings simultaneously on YouTube and Twitch a few months ago. I'm by no means a professional and only have a couple hundred subscribers and a few thousand views at this time but I've had a lot of fun learning and sharing my learnings with the world.
\nAfter streaming for a while, I discovered that I hadn't been writing much code at work, as my role at Microsoft has evolved into one that means more project-management-style work and leading a team of developers. So it's been rather liberating to force myself to allocate an hour to a live stream where I can go and learn something new or update one of my many open source projects with some new feature. I realized that I do indeed really like writing, thinking and talking about code and I want to continue to do more of it going forward.
\nI put together a few YouTube playlists of the live streams I've done so far.
\nThis is where it all started, just before the release of .NET 6 and C# 10. There were a lot of hidden and not-so-obvious features in this release that I haven't seen many blog posts or videos cover. I've been gathering these gems for the last year and talk about each one at length in this YouTube playlist.
\nhttps://www.youtube.com/playlist?list=PLUAZAVKVXTmQEF67lddyErymHlBDaPpjU
\nTwitter uses Snowflake IDs to generate unique identifiers for tweets. In this YouTube playlist I deep dive into how Twitter does this and use ASP.NET Core minimal APIs to create an API. I think Twitter Snowflake IDs are a really cool way of generating really clean-looking, globally unique IDs and it's worth looking into them as an alternative to ugly GUIDs.
\nhttps://www.youtube.com/playlist?list=PLUAZAVKVXTmTS0Z1z-fmHv0jaxbF_Tys-
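\nThe core bit-packing idea behind a Snowflake-style ID is simple enough to sketch in C#. This assumes Twitter's published layout of 41 bits of milliseconds since a custom epoch, 10 bits of machine ID and 12 bits of sequence number; the class and method names here are my own invention:

```csharp
public static class SnowflakeSketch
{
    // Pack timestamp, machine ID and sequence into one 64-bit value.
    // The layout leaves the sign bit unused, so IDs stay positive and
    // sort roughly by creation time.
    public static long Compose(long timestampMs, long machineId, long sequence) =>
        (timestampMs << 22) | ((machineId & 0x3FF) << 12) | (sequence & 0xFFF);
}
```

Because the timestamp occupies the high bits, sorting the IDs numerically sorts them by creation time, which is part of what makes them nicer to work with than random GUIDs.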
\nPulumi is a tool used to build, deploy, and manage your cloud applications using pretty much any language on any cloud. I used Pulumi with .NET to play around with Azure Container Apps which I found to be promising but very early in its development. I'm also now looking into creating an Azure Kubernetes Service cluster using Pulumi.
\nhttps://www.youtube.com/playlist?list=PLUAZAVKVXTmTAb2Vko40UMnnRLLW51UhS
\nSince I'm new to this, I'd love to hear your feedback and suggestions. Oh, and don't forget to subscribe and smash that like button! Sorry, it's obligatory to say that once you post videos to YouTube. I can't get over how silly it sounds when I say it.
\n", "url": "https://rehansaeed.com/live-streaming-dotnet/", "title": "Live Streaming .NET", "summary": "How I started live streaming my learnings on C#, .NET, ASP.NET Core and beyond to YouTube and Twitch.", "image": "https://rehansaeed.com/images/hero/My-YouTube-1920x1080.jpg", "date_modified": "2022-02-04T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/optimally-configuring-open-telemetry-tracing-for-asp-net-core/", "content_html": "Configuring tracing in Open Telemetry for ASP.NET Core can be a fairly simple process but never accept the defaults! There is always more we can do to make improvements.
\nIn this post, I'll show you how you can take the simplest setup for Open Telemetry tracing I showed you in 'Configuring Open Telemetry for ASP.NET Core' and move to a more fully featured example.
\nHere is a reminder of the simple setup I showed you in 'Configuring Open Telemetry for ASP.NET Core':
\npublic virtual void ConfigureServices(\n IServiceCollection services,\n IWebHostEnvironment webHostEnvironment)\n{\n // ...omitted\n services.AddOpenTelemetryTracing(\n builder =>\n {\n builder\n .SetResourceBuilder(ResourceBuilder\n .CreateDefault()\n .AddService(webHostEnvironment.ApplicationName))\n .AddAspNetCoreInstrumentation();\n if (webHostEnvironment.IsDevelopment())\n {\n builder.AddConsoleExporter(\n options => options.Targets = ConsoleExporterOutputTargets.Debug);\n }\n });\n}\n\nAnd the tracing output you can expect for a request/response cycle:
\nActivity.Id: 00-dde96d459fee4144a83818e054e221b1-cac69896c1bcd14f-01\nActivity.DisplayName: /favicon-32x32.png\nActivity.Kind: Server\nActivity.StartTime: 2021-02-01T10:28:25.4637044Z\nActivity.Duration: 00:00:00.0086712\nActivity.TagObjects:\n http.host: localhost:5001\n http.method: GET\n http.path: /favicon-32x32.png\n http.url: https://localhost:5001/favicon-32x32.png\n http.user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36\n http.status_code: 200\n otel.status_code: UNSET\n service.name: ApiTemplate\n service.instance.id: defe9269-04f2-4b49-a05c-ebddf2112993\n telemetry.sdk.name: opentelemetry\n telemetry.sdk.language: dotnet\n telemetry.sdk.version: 1.0.0.0\n\nThis time we're going to setup a more advanced configuration. We're going to start off by adding a lot more information to our ResourceBuilder we pass to the SetResourceBuilder function above.
private static ResourceBuilder GetResourceBuilder(IWebHostEnvironment webHostEnvironment)\n{\n var version = Assembly\n .GetExecutingAssembly()\n .GetCustomAttribute<AssemblyFileVersionAttribute>()!\n .Version;\n return ResourceBuilder\n .CreateEmpty()\n .AddService(webHostEnvironment.ApplicationName, serviceVersion: version)\n .AddAttributes(\n new KeyValuePair<string, object>[]\n {\n new("deployment.environment", webHostEnvironment.EnvironmentName),\n new("host.name", Environment.MachineName),\n })\n .AddEnvironmentVariableDetector();\n}\n\nThis time we want to start with an empty resource builder by calling CreateEmpty. We then add the application name and version, which we can retrieve from the current assembly. You may have multiple versions of your application running over time and it's important to have a way to differentiate between them.
We then add a few attributes to every span, including the environment name and machine name. The attribute names here are standardised, as defined by the Open Telemetry specification. I decided that the environment the application is running in and the machine name are important to know when troubleshooting issues.
\nFinally, we add an environment variable detector which we can use to add further attributes to every span using environment variables. Using ResourceBuilder.CreateDefault already included this in the simple example above, but since we started with an empty resource builder, we need to add it explicitly. Here is a PowerShell example of how you can add additional attributes to every span using the OTEL_RESOURCE_ATTRIBUTES environment variable:
$env:OTEL_RESOURCE_ATTRIBUTES = 'key1=value1,key2=value2'\n\nWe can now plug the GetResourceBuilder into our code below and add a few more goodies:
public virtual void ConfigureServices(\n IServiceCollection services,\n IWebHostEnvironment webHostEnvironment)\n{\n // ...omitted\n services.AddOpenTelemetryTracing(\n builder =>\n {\n builder\n .SetResourceBuilder(GetResourceBuilder(webHostEnvironment))\n .AddAspNetCoreInstrumentation(\n options =>\n {\n options.Enrich = Enrich;\n options.RecordException = true;\n });\n if (webHostEnvironment.IsDevelopment())\n {\n builder.AddConsoleExporter(\n options => options.Targets = ConsoleExporterOutputTargets.Debug);\n }\n });\n}\n\nprivate static void Enrich(Activity activity, string eventName, object obj)\n{\n if (obj is HttpRequest request)\n {\n var context = request.HttpContext;\n activity.AddTag("http.flavor", GetHttpFlavour(request.Protocol));\n activity.AddTag("http.scheme", request.Scheme);\n activity.AddTag("http.client_ip", context.Connection.RemoteIpAddress);\n activity.AddTag("http.request_content_length", request.ContentLength);\n activity.AddTag("http.request_content_type", request.ContentType);\n }\n else if (obj is HttpResponse response)\n {\n activity.AddTag("http.response_content_length", response.ContentLength);\n activity.AddTag("http.response_content_type", response.ContentType);\n }\n}\n\npublic static string GetHttpFlavour(string protocol)\n{\n if (HttpProtocol.IsHttp10(protocol))\n {\n return "1.0";\n }\n else if (HttpProtocol.IsHttp11(protocol))\n {\n return "1.1";\n }\n else if (HttpProtocol.IsHttp2(protocol))\n {\n return "2.0";\n }\n else if (HttpProtocol.IsHttp3(protocol))\n {\n return "3.0";\n }\n\n throw new InvalidOperationException($"Protocol {protocol} not recognised.");\n}\n\nNext, we configure AddAspNetCoreInstrumentation to enrich the spans with additional information about the current request and response using standardised attributes. Finally, we record details of exceptions from our controllers, which would otherwise be lost. This outputs the following:
Activity.Id: 00-3d0f70e71a8e6e5e87f156bdcf94b8c9-ccdd8d23a2e3ba93-01\nActivity.ActivitySourceName: OpenTelemetry.Instrumentation.AspNetCore\nActivity.DisplayName: /favicon-32x32.png\nActivity.Kind: Server\nActivity.StartTime: 2022-02-03T10:52:47.6513334Z\nActivity.Duration: 00:00:00.0077181\nActivity.TagObjects:\n http.host: localhost:5001\n http.method: GET\n http.target: /favicon-32x32.png\n http.url: https://localhost:5001/favicon-32x32.png\n http.user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36\n http.flavor: 2.0\n http.scheme: https\n http.client_ip: ::1\n http.request_content_length:\n http.request_content_type:\n http.status_code: 200\n otel.status_code: UNSET\n http.response_content_length: 628\n http.response_content_type: image/png\n deployment.environment: Development\n host.name: REHANS-MACHINE\n service.name: ApiTemplate\n service.version: 5.1.1.0\n service.instance.id: 4e364d08-4965-4d83-8afa-70769074ab0d\n\nThis time around you can see we've collected a lot more information. Now, this may not be 'optimal' for your application. Collecting additional information comes at a performance and monetary cost, so it's up to you to judge what extra information is useful to you, but I think most of the above is pretty essential information that would be valuable while debugging any issues.
\nIn this post, I showed a simple example of configuring Open Telemetry tracing and then went on to show a more advanced, real-world example.
\nOpen Telemetry is gaining popularity and traction, with even GitHub adopting it. So far in this blog series we've only discussed the basics of Open Telemetry and tracing in particular. When Open Telemetry metrics and logs come out of alpha/beta, I'll write another post discussing configuring those.
\n", "url": "https://rehansaeed.com/optimally-configuring-open-telemetry-tracing-for-asp-net-core/", "title": "Optimally Configuring Open Telemetry Tracing for ASP.NET Core", "summary": "How to optimally configure Open Telemetry traces for ASP.NET Core enriched with lots of extra information.", "image": "https://rehansaeed.com/images/hero/Open-Telemetry-1600x900.png", "date_modified": "2022-02-03T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/the-problem-with-csharp-10-implicit-usings/", "content_html": "::: tip Update (2021-10-14)\nMark Rendle made an interesting suggestion on Twitter after seeing this blog post. I've updated the post below with his code.\n:::
\nYesterday I livestreamed myself upgrading a project to .NET 6 and C# 10. Along the way I tried using a new C# 10 feature called implicit using statements and discovered that it wasn't quite as straightforward as I first thought, and that you should probably not use it under certain circumstances.
\nHere is the live stream for those who are interested (I'm eager to get any feedback on how I'm presenting, as it's not a natural skill for me):
\nhttps://www.youtube.com/watch?v=FjnS4oF8K3E
\nAdding the line below to your .csproj project file turns the feature on:
<ImplicitUsings>enable</ImplicitUsings>\n\nOnce enabled, depending on the type of project you have created you'll have the following global using statements added to your project implicitly.
\n| SDK | \nDefault namespaces | \n
|---|---|
| Microsoft.NET.Sdk | \nSystemSystem.Collections.GenericSystem.IOSystem.LinqSystem.Net.HttpSystem.ThreadingSystem.Threading.Tasks | \n
| Microsoft.NET.Sdk.Web | \nSystem.Net.Http.JsonMicrosoft.AspNetCore.BuilderMicrosoft.AspNetCore.HostingMicrosoft.AspNetCore.HttpMicrosoft.AspNetCore.RoutingMicrosoft.Extensions.ConfigurationMicrosoft.Extensions.DependencyInjectionMicrosoft.Extensions.HostingMicrosoft.Extensions.Logging | \n
| Microsoft.NET.Sdk.Worker | \nMicrosoft.Extensions.ConfigurationMicrosoft.Extensions.DependencyInjectionMicrosoft.Extensions.HostingMicrosoft.Extensions.Logging | \n
Sounds great; now you can delete a large portion of the using statements in your project, right? Well, not so fast. Here are some problems I discovered along the way.
\nI discovered the first problem while multi-targeting a class library project for a NuGet package. I had targeted .NET 4.7.2 as well as other target frameworks like .NET 6 for backwards compatibility and found that System.Net.Http could not be found. It turns out I hadn't referenced that particular NuGet package for .NET 4.7.2 and was now getting a build error.
I could add the System.Net.Http NuGet package for .NET 4.7.2 on its own and that would solve the problem, but I really didn't like having the overhead of another unnecessary package reference. That also means extra work for me to maintain updating the version number or relying on tools like Dependabot and Renovate to submit PRs to upgrade the version number for me.
<ItemGroup Label="Package References (.NET 4.7.2)" Condition="'$(TargetFramework)' == 'net472'">\n <PackageReference Include="System.Net.Http" Version="4.3.4" />\n</ItemGroup>\n\nMark Rendle on Twitter suggested another workaround after seeing this blog post. His suggestion was to remove the offending using statement in the .csproj file.
<ItemGroup>\n <Using Remove="System.Net.Http" />\n</ItemGroup>\n\nThis looks awfully strange to me. I'm not sure how I feel about adding or removing namespaces from C# project files yet. It doesn't seem very discoverable to me. So in this particular case I'm happy to avoid using implicit using statements for now.
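\nAs far as I know, the same mechanism works in the other direction too: you can add your own implicit usings with a Using item in the project file (System.Text is just an arbitrary example namespace here):

```xml
<ItemGroup>
  <Using Include="System.Text" />
</ItemGroup>
```

This has the same discoverability trade-off as the Remove above: the namespace is available everywhere, but nothing in your .cs files tells you where it came from.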
\nThe second problem is trying to understand which usings have been added. As you can see from the table above, you could go and look in the documentation to figure this out, but that's slow and time-consuming. An alternative is to build your project and then look in its obj directory under:
My.Project\\obj\\Debug\\net472\\My.Project.GlobalUsings.g.cs\n\nThat's not ideal either. I think Visual Studio should ideally show you these using statements somehow.
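\nFor reference, the generated file for a plain Microsoft.NET.Sdk project looks something like this (the exact contents vary by SDK and target framework, so treat this as a sketch rather than gospel):

```csharp
// <auto-generated/>
global using global::System;
global using global::System.Collections.Generic;
global using global::System.IO;
global using global::System.Linq;
global using global::System.Net.Http;
global using global::System.Threading;
global using global::System.Threading.Tasks;
```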
\nImplicit usings are enabled by default in the latest blank project templates shipped with .NET. Overall this is a cool feature that can remove the need for many duplicated lines of code in your project, but there is a little too much magic going on here for my liking, so I'll be more careful about using this feature in the future.
\n", "url": "https://rehansaeed.com/the-problem-with-csharp-10-implicit-usings/", "title": "The Problem with C# 10 Implicit Usings", "summary": "I tried using C# 10 implicit using statements and found that they had a fatal flaw which meant you couldn't use them under certain circumstances", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2021-10-13T09:24:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/windows-package-manager/", "content_html": "Winget is a package manager for Windows a bit like apt for linux or the open source Chocolatey for Windows. Version 1.1 of the Windows Package Manager (winget) was recently released. I've had my eye on it for a while now and its only recently gotten good enough to use for real.
\nIt now has the ability to install Windows Store applications and its library of apps that you can search for and install has gotten quite big. My PowerShell script to get a new machine started quickly with all the essential applications that I use as a .NET/Web developer is shown below.
\n# Environment Variables\n[System.Environment]::SetEnvironmentVariable('DOTNET_CLI_TELEMETRY_OPTOUT', '1', [EnvironmentVariableTarget]::Machine)\n\n# Windows Features\n# List features: Get-WindowsOptionalFeature -Online\nEnable-WindowsOptionalFeature -Online -FeatureName 'Containers' -All\nEnable-WindowsOptionalFeature -Online -FeatureName 'Microsoft-Hyper-V' -All\nEnable-WindowsOptionalFeature -Online -FeatureName 'VirtualMachinePlatform' -All\n\n# Office\nwinget install --id '9MSPC6MP8FM4' # Microsoft Whiteboard\nstart "https://github.com/zufuliu/notepad2/releases"\n\n# Utilities\nwinget install --id '7zip.7zip' --interactive --scope machine\nwinget install --id 'XP89DCGQ3K6VLD' # Microsoft Power Toys\nwinget install --id '9NJ3KMH29VGJ' # Enpass\nwinget install --id 'WinSCP.WinSCP' --interactive --scope machine\nwinget install --id '9WZDNCRFJ3PV' # Windows Scan\n\n# Peripherals\nwinget install --id 'Elgato.ControlCenter' --interactive --scope machine\nwinget install --id 'Elgato.StreamDeck' --interactive --scope machine\n\n# Browsers\nwinget install --id 'Google.Chrome' --interactive --scope machine\nwinget install --id 'Mozilla.Firefox' --interactive --scope machine\n\n# Communication\nwinget install --id 'Microsoft.Teams' --interactive --scope machine\nwinget install --id 'OpenWhisperSystems.Signal' --interactive --scope machine\nwinget install --id '9WZDNCRDK3WP' # Slack\nwinget install --id '9WZDNCRFJ140' # Twitter\nwinget install --id 'XP99J3KP4XZ4VV' # Zoom\n\n# Images\nwinget install --id '9N3SQK8PDS8G' # Screen To Gif\nstart https://www.getpaint.net/download.html # Paint.NET not yet available on winget\n\n# Media\nwinget install --id 'XPDM1ZW6815MQM' # VLC\nwinget install --id 'plex.plexmediaplayer' --interactive --scope machine\nwinget install --id 'OBSProject.OBSStudio' --interactive --scope machine\nwinget install --id 'dev47apps.DroidCam' --interactive --scope machine\nwinget install --id 'XSplit.VCam' --interactive --scope machine\n\n# 
Terminal\nwinget install --id 'Microsoft.WindowsTerminal' --interactive --scope machine\nwinget install --id 'Microsoft.Powershell' --interactive --scope machine\nwinget install --id 'JanDeDobbeleer.OhMyPosh' --interactive --scope machine\nwinget install --id '9P9TQF7MRM4R' # Windows Subsystem for Linux Preview\nwinget install --id '9NBLGGH4MSV6' # Ubuntu\nwinget install --id '9P804CRF0395' # Alpine\n\n# Git\nwinget install --id 'Git.Git' --interactive --scope machine\nwinget install --id 'GitHub.GitLFS' --interactive --scope machine\nwinget install --id 'GitHub.cli' --interactive --scope machine\nwinget install --id 'Axosoft.GitKraken' --interactive --scope machine\n\n# Azure\nwinget install --id 'Microsoft.AzureCLI' --interactive --scope machine\nwinget install --id 'Microsoft.AzureCosmosEmulator' --interactive --scope machine\nwinget install --id 'Microsoft.AzureDataStudio' --interactive --scope machine\nwinget install --id 'Microsoft.AzureStorageEmulator' --interactive --scope machine\nwinget install --id 'Microsoft.AzureStorageExplorer' --interactive --scope machine\n\n# Tools\nwinget install --id 'Docker.DockerDesktop' --interactive --scope machine\nwinget install --id 'Microsoft.PowerBI' --interactive --scope machine\nwinget install --id 'Telerik.Fiddler' --interactive --scope machine\n\n# IDEs\nwinget install --id 'Microsoft.VisualStudio.2022.Enterprise' --interactive --scope machine\nwinget install --id 'Microsoft.VisualStudioCode' --interactive --scope machine\n\n# Frameworks\nwinget install --id 'OpenJS.NodeJS' --interactive --scope machine\nwinget install --id 'Microsoft.dotnet' --interactive --scope machine\n\nA few things to note about my script. First, all apps with random-looking IDs like 9P9TQF7MRM4R are Windows Store applications. Second, for non-Windows Store applications I always use the --interactive flag because:
\n\nDon't accept the defaults!
\n\n
I never want a shortcut added to my desktop, extra toolbars or system tray icons, so never accept the defaults and always manually select the options you want in the installer. Maybe one day we can set the options we want from winget itself (we can dream!).
Finally, I set the scope of the installation to machine as opposed to user. I'm not sure which installers respect this setting but I always want all applications available to whoever is using the machine.
A few weeks ago Scott Hanselman blogged about creating dotnet new based projects directly from Visual Studio. Unfortunately, at that time Visual Studio 16.9 didn't properly support full solution templates and only supported project templates.
Happily, Microsoft just released Visual Studio 16.10 and one of the things they didn't talk about was that it now adds a user interface for creating solutions from dotnet new templates.
Given that I author the .NET Boxed solution and item templates, I thought I'd run through how it's done.
\nThe first step is to install a dotnet new based solution/project/item template NuGet package. Sadly, this step is still command line only but there are plans to add a UI so you can search for and install templates all through Visual Studio.
dotnet new --install Boxed.Templates\n\nNext we can fire up Visual Studio and go to the 'New Project' dialogue. You can select '.NET Boxed' from the 'Project type' menu on the top right to see all .NET Boxed project templates.
\n
The next step is where we can give the project a name as usual and decide where we want to store it on disk.
\n
Finally, we get to the new interesting bit, where we can select from the many options that .NET Boxed templates provide:
\n
Finally, we can hit 'Create' and start getting productive in Visual Studio.
\n
That's it! Simple, isn't it?
\n", "url": "https://rehansaeed.com/dotnet-boxed-visual-studio-integration/", "title": ".NET Boxed Visual Studio Integration", "summary": "You can now create .NET Boxed projects directly from Visual Studio. Here's a short post showing you how.", "image": "https://rehansaeed.com/images/hero/Visual-Studio-1366x768.png", "date_modified": "2021-05-27T11:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/code-coverage-and-frontend-testing/", "content_html": "I was recently asked an interesting question about unit test code coverage and frontend testing by a colleague:
\n\n\nPolicies describe 80% plus unit test coverage and our React devs are pushing back a lot, arguing there is little logic in React and it would be a waste of time.\nRehan, any advice/pointers for us on this?
\n
Code coverage in tests is always a controversial topic with developers, and I don't think there is a 'correct' answer. The answer you're going to get from most people is 'it depends' 🤷🏼.
\nIf you're developing a Mars rover, where one bad line of code could mean the end of the mission, or flight navigation software, then go ahead and aim for 100% code coverage, let alone 80%. The question is, how tolerant are you of the risk of a bug in the frontend React code? If something doesn't work, how long will it take you to fix it and how much will that cost you, versus the cost of writing the extra tests?
\nSpecifically regarding frontend unit testing, my team has been using Jest, an excellent unit testing framework built by Facebook that I highly recommend. Jest is special because it's a single NPM package with everything you need rolled into it, including an assertion library, a mocking framework and code coverage tools.
\nTo reduce the burden of writing tests you can leverage snapshot testing. A snapshot test takes very few lines of code to write and generates a file containing the rendered HTML for a Vue/React/Angular/Web/Other component. My team writes at least one snapshot test for every component we write and it hasn't been too onerous for us.
\nI just ran the code coverage tool built into Jest on one of our projects and it came to 78.7% coverage. It's worth mentioning that our projects are mostly very simple and don't contain much complex branching logic. Jest also allows you to set a code coverage limit which will fail a build if code coverage drops below a certain level set by you. Here is an example of configuring Jest in your package.json file to do just that:
"jest": {\n "coverageThreshold": {\n "global": {\n "branches": 70,\n "functions": 70,\n "lines": 70,\n "statements": -10\n }\n }\n}\n\nIn addition, if you want a robust system, it's also worth setting up some integration or functional tests. I recommend a tool called Cypress for this job. According to the testing pyramid you need fewer of these tests as compared to unit tests as they can be more brittle.
\n
I've found them very useful for making sure that you don't release something completely broken (it happens more often in the wild than any dev would like you to think). In fact, I use Cypress on this very blog to verify every commit and make sure I don't accidentally break it.
\n", "url": "https://rehansaeed.com/code-coverage-and-frontend-testing/", "title": "Code Coverage & Frontend Testing", "summary": "What is the correct level of code coverage for your project and what tools are best for quickly writing unit and integration tests for a frontend application.", "image": "https://rehansaeed.com/images/hero/Jest-Cypress-1600x900.png", "date_modified": "2021-05-10T11:23:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/web-component-custom-element-gotchas/", "content_html": "::: tip Update (04 May 2021)\nChris Holt from the FAST UI team at Microsoft got in touch with me with an alternative workaround to using a wrapper element when required to use a semantic HTML element like a section, so I've updated that section below.\n:::
Recently I've been writing web components and found several gotchas that make working with them that much more difficult. In this post, I'll describe some of the gotchas you can experience when using web components.
\nThis post is framework agnostic but I've been using a lightweight library called FAST Element built by Microsoft. It is similar to Google's LitElement in that it provides a very lightweight wrapper around native web component APIs. Overall the experience has been interesting but I'm not sure I'm willing to give up on Vue just yet. This post was written based on my experiences with it.
\nWhen writing a non-web component called custom-component using a framework like Vue, React or Angular, you quite often end up with HTML and CSS that looks like this:
<div class="custom-component">\n <h1>Hello</h1>\n <p>World</p>\n</div>\n\n.custom-component {\n // ...\n}\n\nThe rendered HTML from these frameworks looks exactly the same as above. However, when writing web components, you have an addition custom HTML element rendered into the DOM with the contents of the element being rendered into the shadow DOM which can introduce bugs for the unwary developer.
\n<custom-component>\n <!-- Shadow DOM -->\n <div class="custom-component">\n <h1>Hello</h1>\n <p>World</p>\n </div>\n</custom-component>\n\nThe first gotcha we encounter is that we now have an extra HTML element that we don't actually need. Extra DOM elements mean higher memory usage and slower performance. As I'll discuss in a moment, it can also mean a more complex layout. The fix for this is simple: we can remove the wrapper div inside our component, so our code now becomes:
<h1>Hello</h1>\n<p>World</p>\n\n:host {\n // ...\n}\n\nWe can use the :host pseudo-selector to style the custom-component HTML element and now when our component is rendered we get the following:
<custom-component>\n <!-- Shadow DOM -->\n <h1>Hello</h1>\n <p>World</p>\n</custom-component>\n\nRemoving the wrapper div is all well and good, but what if it's not a div but a semantic HTML element like section or article? Both of these tags have specific meanings for screen readers and search engines, and we must use them to support those tools. Well, in this case we have to bring back our wrapper element like so, and encounter our second gotcha:
<section class="custom-component">\n <h1>Hello</h1>\n <p>World</p>\n</section>\n\nOur HTML will now be rendered as:
\n<custom-component>\n <section class="custom-component">\n <h1>Hello</h1>\n <p>World</p>\n </section>\n</custom-component>\n\nNow if we want to style the component, we have to target the .custom-component class, where we can place most of our styles, but we also need to target :host to change some defaults.
The default value for display in a custom HTML element like <custom-component> is actually inline, which is usually not what you want (margin, padding and border will not work as you expect), so you'll need to explicitly set your own default. This is our third gotcha! I think it makes sense to be explicit and do this for every web component.
In addition, if the content of your web component does not extend beyond the boundary of the component itself, it's a good idea to add contain: paint for a small performance boost (The Mozilla Docs have more on contain).
:host {\n display: block;\n\n contain: paint;\n}\n\n.custom-component {\n // ...\n}\n\nOne alternative to the section wrapper above, pointed out by Chris Holt, is to use a template HTML element, which gives you the ability to add custom HTML attributes to the custom-component element itself.
<template role="section">\n <h1>Hello</h1>\n <p>World</p>\n</template>\n\nOur HTML will now be rendered as:
\n<custom-component role="section">\n <h1>Hello</h1>\n <p>World</p>\n</custom-component>\n\nIn the example above, I've added role="section" to tell search engines and screen readers to treat the custom-component HTML element like a section element. With this approach, we no longer have a wrapper HTML element which should help improve performance and lower memory usage (particularly on low powered phones). We also get the advantage of not having to add extra styles for the wrapper element. One downside is that we have to use the role attribute.
:host {\n display: block;\n\n contain: paint;\n // ...\n}\n\nAnother downside of this approach I discovered when trying to write a custom button component is that some CSS pseudo-selectors like :disabled will not work:
<custom-button role="button">\n <slot></slot>\n</custom-button>\n\n:host(:disabled) {\n // This will not work.\n color: red;\n}\n\nYou have to fallback to using a HTML button element instead which lights up the :disabled pseudo-selector:
<custom-button>\n <button class="custom-button">\n <slot></slot>\n </button>\n</custom-button>\n\n.custom-button:disabled {\n // This will work.\n color: red;\n}\n\nThe promise of web components is that they are lightweight and fast to run. The downside seems to be that there is more to think about when building a web component than with a standard framework-based component using Vue, React or Angular.
\n", "url": "https://rehansaeed.com/web-component-custom-element-gotchas/", "title": "Web Component Custom Element Gotchas", "summary": "Web components have certain gotchas with relation to custom elements and CSS. This post goes through them all and shows how you can avoid them.", "image": "https://rehansaeed.com/images/hero/Web-Components-1600x900.png", "date_modified": "2021-04-30T12:12:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/css-general-rules-of-thumb/", "content_html": "Learning CSS is difficult and as someone who has tried to teach CSS others, it's also difficult to point to good teaching resources. There isn't a simple video course I can point to and say "go and watch this". I think part of the problem is that there are so many ways to do things in CSS and also that there are so many little tricks you have to learn. As yet, the best advice I've been able to give is to go and read the last few years worth of CSS Tricks blog posts but that isn't really an easy or quick task or even one that most people would do.
\nIn this post, I wanted to give some super simple general rules of thumb that developers who are new to CSS can follow and get pretty far fairly quickly. I also wanted a public resource I could point to when I was reviewing CSS in pull requests. As these are rules of thumb, they won't be applicable everywhere but they should work in the general case.
\nThere are lots of ways of doing things in CSS. Here are some friendly defaults that make a good start.
\nUsing CSS type selectors that target a specific HTML tag makes your CSS brittle. If someone decides to change the tag used in their HTML, your CSS will also need to be changed.
\n<div></div>\n\ndiv {\n color: red;\n}\n\nA better way is to use CSS class selectors which will decouple your HTML from your CSS.
\n<div class="foo"></div>\n\n.foo {\n color: red;\n}\n\nYou should add CSS classes to any HTML elements you want to add CSS styles to. This means coming up with a naming convention for these CSS class names. Block Element Modifier (BEM) is a great naming convention to get you started. It's worth noting that there are many conventions out there, the key is to pick one and stay consistent.
\n/* Block component */\n.button {\n}\n\n/* Element that depends upon the block */\n.button__price {\n}\n\n/* Modifier that changes the style of the block */\n.button--orange {\n}\n.button--big {\n}\n\nIf you are using !important in your CSS, you are probably doing the wrong thing. It's usually used to force a particular style because a developer couldn't get the style to apply any other way. What you need to do instead is understand CSS Specificity. The problem you are having is probably because you have some styles elsewhere which are far too specific and are taking precedence. If you make those styles less specific, then you should not need to use !important, except in a few cases.
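As a rough illustration of why one rule beats another, here is a toy specificity counter (a sketch that only handles ids, classes and type selectors; real CSS specificity has more cases, such as attribute selectors and pseudo-classes):

```javascript
// Toy specificity counter: returns [ids, classes, types] for a simple
// selector. Tuples are compared left to right; the higher tuple wins.
function specificity(selector) {
  const ids = (selector.match(/#[\w-]+/g) || []).length;
  const classes = (selector.match(/\.[\w-]+/g) || []).length;
  const types = (selector.match(/(^|[\s>+~])[a-z][\w-]*/g) || []).length;
  return [ids, classes, types];
}

// Compare two selectors; the more specific one applies.
function moreSpecific(a, b) {
  const sa = specificity(a);
  const sb = specificity(b);
  for (let i = 0; i < 3; i++) {
    if (sa[i] !== sb[i]) return sa[i] > sb[i] ? a : b;
  }
  return b; // on a tie, the later rule in the stylesheet wins
}
```

A single id outweighs any number of class or type selectors, which is exactly why one overly specific rule elsewhere can tempt you to reach for !important.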
Keeping your CSS organised can help make reading it easier for yourself and others. Code style is a very subjective topic and everyone has their own opinions. I talk more about this here.
\nI find that people have a lot of trouble laying out content the way they want. This is a huge topic but here are some basic things I found useful.
\nIf your HTML isn't great, you are going to have a tough time with your CSS. This is usually because there are lots of extra div or span elements that you don't need. Get your HTML right first and avoid adding HTML elements just to get the layout right.
It's also a good idea to learn about Semantic HTML. Every HTML tag has a meaning for search engines and those who use assistive technologies like screen readers to navigate the web (there are more people using these than you think).
\nAs a general rule, CSS Grid can cater to 80% of your layout needs, so learn it well. In particular, pay close attention to grid-template-areas, which is a little more verbose to set up but makes your CSS Grid layout much easier to read and more flexible, because changing the layout only means changing the CSS for the container rather than each and every child of the container.
When you want to wrap content CSS FlexBox has your back.
\nUnderstanding the different display modes (inline, block, inline-block, grid, flex, etc.) is a must. In particular pay attention to the first two because they are the default for a lot of HTML elements.
If your element is hugging the top instead of taking all the available space, it's probably because it's using display: block and your good friend display: grid will fix that for you.
Get your hands away from the keyboard and put them down very slowly. If you are setting height and width, it's usually the wrong thing to do and makes your layout brittle and unwilling to flex.
Rather than explicitly setting a size, try to allow the contents of the container to set the size for you. This means changing the way you think about layout. In general there is an ordering to the CSS properties you should use for sizing your elements:
\n1. auto-fit/auto-fill and minmax() - Used in conjunction with CSS Grid's grid-template-columns, these will allow your grid to become responsive.\n2. grid-gap or gap (in newer browsers) - Used with CSS Grid and FlexBox (which only supports gap). This allows you to add spacing between elements in your container.\n3. padding - It's generally preferable to use CSS properties that only affect the current element, as opposed to the parent elements like margin does.\n4. margin\n5. min-height/max-height/min-block-size/max-block-size and min-width/max-width/min-inline-size/max-inline-size\n6. height and width - Use these with care if you really know what you're doing.\n\nThe web is a wild west: there are many features in CSS which have varying degrees of support in different browsers and the many different versions of each browser. Whenever you're considering using a new CSS feature you found online, it's a good idea to search on caniuse.com to see if it's supported in the web browsers you want to target for your application.
\nA colour in CSS can be written in several ways. Below, I'm setting the colour white in a few different ways. You should prefer hsl because it's easiest for a human to understand how it works.
color: white;\ncolor: #ffffff;\ncolor: rgb(255 255 255);\ncolor: hsl(0 0% 100%);\n\nMaking your web application accessible is a must, let's talk about how.
\nI've talked about HTML before but I'll repeat it here because it's so important. You must learn Semantic HTML and get your HTML right before thinking about the CSS.
\nNever set the outline on a focusable element to none. People need those outlines to see what they've focused on.
.foo {\n outline: none;\n}\n\nEnsure that the background-color and color you've chosen are accessible. It's pretty easy to do and you can read more here.
CSS seems on the face of it to be a simple programming language (and yes, it is one) but it has a lot of depth to it once you start using it. If you want to learn more, reading the CSS Tricks blog posts is a great way to learn, and there are also some decent courses on Frontend Masters, although you do have to pay to view those.
\n", "url": "https://rehansaeed.com/css-general-rules-of-thumb/", "title": "CSS General Rules of Thumb", "summary": "Getting to grips with CSS is difficult. This post describes some general rules of thumb that can guide you down the right path to success.", "image": "https://rehansaeed.com/images/hero/CSS-1600x900.png", "date_modified": "2021-04-20T12:24:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/system-for-grouping-and-sorting-css-properties/", "content_html": "There are no hard and fast rules for code style and as I've written about before it can get ugly when people have various opposing opinions on the subject. In CSS, which I'm quite fond of writing, I believe the answer is mostly given to us by using Prettier, the opinionated code formatter. Unfortunately, Prettier does not sort CSS properties for you and never will, so this post is one solution (not the correct solution because there is no correct solution).
\nThere are automated tools like postcss-sorting that can help with this but I think it'd be difficult to use in real life because there will always be exceptions to the hard coded rules.
\nBut why even bother to group and sort CSS properties? Well, I think it makes sense for two reasons. The first is that it makes it quicker to scan the CSS and find what you need. The second is that if you're working in a team environment, it can make it easier to work on CSS that has one overarching, consistent style.
\nI believe you can split CSS properties into a few groups:
\nHere is an example of the four groups in real life:
\n.card {\n /* Parent Layout */\n grid-area: card;\n\n /* Layout */\n display: grid;\n align-items: center;\n gap: 10px;\n grid-template-areas:\n "header header"\n "content content";\n grid-template-columns: 1fr 1fr;\n grid-template-rows: 1fr 1fr;\n justify-items: center;\n\n /* Positioning */\n position: absolute;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n inset-inline-start: 0;\n inset-inline-end: 0;\n inset-inline: 0;\n inset-block-start: 0;\n inset-block-end: 0;\n inset-block: 0;\n z-index: 10;\n\n /* Box Model */\n box-sizing: border-box;\n width: 100px;\n height: 100px;\n inline-size: 100px;\n block-size: 100px;\n margin: 10px;\n padding: 10px;\n\n /* Display */\n background-color: red;\n border: 10px solid green;\n color: white;\n font-family: sans-serif;\n font-size: 16px;\n text-align: center;\n}\n\nThe parent layout is any CSS layout properties that affect or come from the parent element. This usually boils down to grid-area if you're using grid-template-areas, which you totally should, because it allows you to change the layout of child elements without modifying the child elements' CSS too much.
.card {\n /* Parent Layout */\n grid-area: card;\n\n /* ... */\n}\n\nCSS Layout properties determine how the contents of the CSS class will be laid out. The common case is that you're using CSS Grid or FlexBox and want to group their respective properties together where they make the most sense.
\nI think it makes the most sense to start with the display property, because that determines the type of layout, followed by the other properties in alphabetical order.
.card {\n /* ... */\n\n /* Layout */\n display: grid;\n align-items: center;\n gap: 10px;\n grid-template-areas:\n "header header"\n "content content";\n grid-template-columns: 1fr 1fr;\n grid-template-rows: 1fr 1fr;\n justify-items: center;\n\n /* ... */\n\nCSS properties related to position come next. Similar to display, we put the position at the top and follow in alphabetical order. Again there is an exception to be made here with top, right, bottom and left which follow the order that margin and padding values take.
.card {\n /* ... */\n\n /* Positioning */\n position: absolute;\n top: 0;\n right: 0;\n bottom: 0;\n left: 0;\n inset-inline-start: 0;\n inset-inline-end: 0;\n inset-block-start: 0;\n inset-block-end: 0;\n inset-inline: 0;\n inset-block: 0;\n z-index: 10;\n\n /* ... */\n\nCSS properties that affect the box model can come next. Again, I'm using alphabetical order except for width and height where it makes more sense for them to go together with width always being first (there are a lot of exceptions to the rules in CSS).
.card {\n /* ... */\n\n /* Box Model */\n box-sizing: border-box;\n margin: 10px;\n padding: 10px;\n width: 100px;\n height: 100px;\n inline-size: 100px;\n block-size: 100px;\n\n /* ... */\n\nFinally, there are CSS display properties which affect the look and feel. This is also a kind of 'Other' category where you can place remaining properties which don't make sense in other groups.
\n.card {\n /* ... */\n\n /* Display */\n background-color: red;\n border: 10px solid green;\n color: white;\n font-family: sans-serif;\n font-size: 16px;\n text-align: center;\n}\n\nThis is just one method of grouping and ordering CSS properties that I've found useful in real life projects. There is no correct answer to this problem and I think the problem space is probably too complex to make a tool like Prettier do the work for you because there will always be exceptions to the rules.
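To make the grouping concrete, here is a toy sketch of what a sorter based on these groups might look like (the group lists are illustrative and deliberately incomplete, which is exactly where the exceptions to the rules would creep in):

```javascript
// Sketch of the grouping order described above. The membership lists
// are illustrative, not exhaustive; a real tool would need the full
// CSS property list plus all the ordering exceptions.
const groups = [
  ['grid-area'],                                             // Parent Layout
  ['display', 'align-items', 'gap', 'grid-template-areas',
   'grid-template-columns', 'justify-items'],                // Layout
  ['position', 'top', 'right', 'bottom', 'left', 'z-index'], // Positioning
  ['box-sizing', 'width', 'height', 'margin', 'padding'],    // Box Model
]; // anything else falls into the Display / 'Other' group

function groupRank(property) {
  const index = groups.findIndex((group) => group.includes(property));
  return index === -1 ? groups.length : index; // unknown -> Display group
}

function sortProperties(properties) {
  // Stable sort by group; ordering within a group is left to taste.
  return [...properties].sort((a, b) => groupRank(a) - groupRank(b));
}
```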
\n", "url": "https://rehansaeed.com/system-for-grouping-and-sorting-css-properties/", "title": "A System for Grouping & Sorting CSS Properties", "summary": "Grouping and Sorting CSS properties can make your CSS easier to read and helps with consistency in a team environment. There is no correct answer but something is better than nothing.", "image": "https://rehansaeed.com/images/hero/CSS-1600x900.png", "date_modified": "2021-04-20T10:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/exporting-open-telemetry-data-to-jaeger/", "content_html": "As I talked about in my first post, the end goal is to get nice visualisations from our Open Telemetry data, so we can spot patterns and learn something from the behaviours of our applications.
\nIn this post, I'll show how we can export the Open Telemetry traces, logs and metrics that we've collected to Jaeger and view them in the Jaeger dashboard.
\nThere are actually two methods of exporting our telemetry to Jaeger. The first uses Jaeger's proprietary protocol and is not something I plan to cover in this post. The second uses the Open Telemetry Protocol (OTLP), which is an open standard that we can use to export Open Telemetry data to any application that supports it.
\nJaeger is a pretty complex application that splits its responsibilities into several separate binaries. It splits the collection of telemetry into a binary called a 'collector'. If we want Jaeger to collect telemetry using the Open Telemetry Protocol, we need to use the Jaeger Open Telemetry collector. I'm not going to cover how to set up Jaeger in a full production environment. Instead, I'll be using the jaegertracing/opentelemetry-all-in-one Docker image, which makes running Jaeger with the Open Telemetry collector as easy as running this command:
docker run --name jaeger -p 13133:13133 -p 16686:16686 -p 4317:55680 -d --restart=unless-stopped jaegertracing/opentelemetry-all-in-one\n\nI've also opened up a few ports to the Docker container:
\n::: warning\nThe default port used by the Open Telemetry Protocol has recently been changed from 55680 to 4317. This change has not yet been made in Jaeger which uses the older port number.\n:::
In the last post I showed how we can collect Open Telemetry data in an ASP.NET Core application. I'm going to build on that example to export to Jaeger. To start with, we'll need to add an additional NuGet package called OpenTelemetry.Exporter.OpenTelemetryProtocol:
<PackageReference Include="OpenTelemetry.Exporter.Console" Version="1.0.1" />\n<PackageReference Include="OpenTelemetry.Exporter.OpenTelemetryProtocol" Version="1.0.1" />\n<PackageReference Include="OpenTelemetry.Extensions.Hosting" Version="1.0.0-rc2" />\n<PackageReference Include="OpenTelemetry.Instrumentation.AspNetCore" Version="1.0.0-rc2" />\n\nThen we can use the AddOtlpExporter method to configure where to export to. In this case, I'm exporting to http://localhost:4317 where port 4317 is the default port used by the Open Telemetry Protocol. Ideally, we'd retrieve this value from configuration but I'm keeping things simple in this example.
services.AddOpenTelemetryTracing(\n builder =>\n {\n builder\n .SetResourceBuilder(ResourceBuilder\n .CreateDefault()\n .AddService(webHostEnvironment.ApplicationName))\n .AddAspNetCoreInstrumentation()\n .AddOtlpExporter(options => options.Endpoint = new Uri("http://localhost:4317"));\n if (webHostEnvironment.IsDevelopment())\n {\n builder.AddConsoleExporter(options => options.Targets = ConsoleExporterOutputTargets.Debug);\n }\n });\n\nWe can now fire up the Jaeger dashboard in our browser which we can access at http://localhost:16686. If we execute some request/response cycles in our application where we have added Open Telemetry support, we can see the telemetry for each request/response:

If we drill down into a particular request/response trace, we can view the spans (in this simple example, there is only one) and all attributes associated with the span. This is the same data we saw in the debug output from my previous post:
\n
In this post, I've shown how you can quickly fire up Jaeger and how you can get an ASP.NET Core app to export Open Telemetry data to it. In my next and final post, I'll discuss optimally configuring Open Telemetry for an ASP.NET application.
\n", "url": "https://rehansaeed.com/exporting-open-telemetry-data-to-jaeger/", "title": "Exporting Open Telemetry Data to Jaeger", "summary": "How to optimally export Open Telemetry metrics, logs, and traces for .NET to Jaeger.", "image": "https://rehansaeed.com/images/hero/Open-Telemetry-1600x900.png", "date_modified": "2021-02-11T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/open-telemetry-for-asp-net-core/", "content_html": "Configuring Open Telemetry for ASP.NET Core is a fairly simple process. In this post, I'll show you the simplest setup for tracing Open Telemetry in ASP.NET Core and then move to a more fully featured example.
\nTo begin with, we'll just be exporting our Open Telemetry traces to the debug output so we can see what is being recorded but we'll soon move on to exporting to Jaeger in another post where we can see nice visualisations of our traces.
\nOpen Telemetry for ASP.NET Core ships as several NuGet packages. The OpenTelemetry.Extensions.Hosting package is the required core package to add Open Telemetry to your application.
You can optionally add packages beginning with OpenTelemetry.Instrumentation.* to collect extra span attributes e.g. the OpenTelemetry.Instrumentation.AspNetCore package adds span attributes for the current request and response.
You can also optionally add packages beginning with OpenTelemetry.Exporter.* to export trace data e.g. the OpenTelemetry.Exporter.Console package exports all trace data to the console or debug output of your application.
<ItemGroup Label="Package References">\n <PackageReference Include="OpenTelemetry.Exporter.Console" Version="1.0.1" />\n <PackageReference Include="OpenTelemetry.Extensions.Hosting" Version="1.0.0-rc2" />\n <PackageReference Include="OpenTelemetry.Instrumentation.AspNetCore" Version="1.0.0-rc2" />\n</ItemGroup>\n\nIn our Program.cs I've added a ConfigureServices method, where we can add Open Telemetry support with just a few lines of code using the AddOpenTelemetryTracing method.
public virtual void ConfigureServices(\n IServiceCollection services,\n IWebHostEnvironment webHostEnvironment)\n{\n // ...omitted\n services.AddOpenTelemetryTracing(\n builder =>\n {\n builder\n .SetResourceBuilder(ResourceBuilder\n .CreateDefault()\n .AddService(webHostEnvironment.ApplicationName))\n .AddAspNetCoreInstrumentation();\n if (webHostEnvironment.IsDevelopment())\n {\n builder.AddConsoleExporter(\n options => options.Targets = ConsoleExporterOutputTargets.Debug);\n }\n });\n}\n\nThe SetResourceBuilder method is your opportunity to add a set of common attributes to all spans created in the application. In the above case, we've added an application name.
The AddAspNetCoreInstrumentation method is where we enable collection of attributes relating to ASP.NET Core requests and responses.
Finally, we use AddConsoleExporter to export the trace data to the debug output. You could also output to the console but there is a lot of trace data and the console is already outputting log information which results in duplication, so I prefer not to do that. Note that we only do this if we are running in the development environment.
If we now start the application and execute a request/response cycle, we can see the following in our IDE's debug output window:
\nActivity.Id: 00-dde96d459fee4144a83818e054e221b1-cac69896c1bcd14f-01\nActivity.DisplayName: /favicon-32x32.png\nActivity.Kind: Server\nActivity.StartTime: 2021-02-01T10:28:25.4637044Z\nActivity.Duration: 00:00:00.0086712\nActivity.TagObjects:\n http.host: localhost:5001\n http.method: GET\n http.path: /favicon-32x32.png\n http.url: https://localhost:5001/favicon-32x32.png\n http.user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36\n http.status_code: 200\n otel.status_code: UNSET\n service.name: ApiTemplate\n service.instance.id: defe9269-04f2-4b49-a05c-ebddf2112993\n telemetry.sdk.name: opentelemetry\n telemetry.sdk.language: dotnet\n telemetry.sdk.version: 1.0.0.0\n\nThe first few lines give us some basic information about the span, including the span ID, start time and duration of the span. Under TagObjects is where we can see all attributes assigned to the span.
\nAll attributes starting with http tell us about the request and response while the attributes starting with service tell us about the application itself. This includes a unique identifier for the current instance of the application. This can be useful if we were running multiple instances of the application in Kubernetes or Docker Swarm for example.
Finally, we also get quite a lot of information about the Open Telemetry library used to collect the information. It may eventually be useful when multiple versions of the Open Telemetry protocol are released and there is some feature difference between them, but as of now it's not very useful. I haven't been able to find a way to turn it off, which is a shame, since it's a fair amount of information to send with absolutely every trace message.
\nI've shown a basic example of setting up Open Telemetry and discussed the defaults of what trace data is collected in ASP.NET Core. In my next post, I'll cover how you can fire up Jaeger and show how you can get an ASP.NET Core app to export Open Telemetry data to it.
\n", "url": "https://rehansaeed.com/open-telemetry-for-asp-net-core/", "title": "Open Telemetry for ASP.NET Core", "summary": "The basics of how to configure Open Telemetry metrics, logs, and traces for ASP.NET Core and export the traces.", "image": "https://rehansaeed.com/images/hero/Open-Telemetry-1600x900.png", "date_modified": "2021-02-01T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/deep-dive-into-open-telemetry-for-net/", "content_html": "Open Telemetry is an open source specification, tools and SDK's used to instrument, generate, collect, and export telemetry data (metrics, logs, and traces). Open Telemetry is backed by the Cloud Native Computing Foundation (CNCF) which backs a mind boggling array of popular open source projects. It's worth looking at the CNCF Landscape to see what I really mean. The SDK's support all the major programming languages including C# and ASP.NET Core.
\nIn this post, I'm going to discuss what Open Telemetry is all about, why you'd want to use it and how to use it with .NET specifically. With a typical application there are three sets of data that you usually want to record: metrics, logs and traces. Let's start by discussing what they are.
\nProvides insight into application-specific messages emitted by processes. In a .NET application, Open Telemetry support can easily be added if you use ILogger for logging which lives in the Microsoft.Extensions.Logging NuGet package. You'd typically already use this if you're building an ASP.NET Core application.
Provide quantitative information about processes running inside the system, including counters, gauges, and histograms. Support for metrics in Open Telemetry is still under development and being finalised at the time of writing. Examples of metrics are:
\nAlso known as distributed tracing, this records the start and end times for individual operations alongside any ancillary data relevant to the operation. An example of this is recording a trace of a HTTP request in ASP.NET Core. You might record the start and end time of a request/response and the ancillary data would be the HTTP method, scheme, URL etc.
\nIf an ASP.NET Core application makes database calls and HTTP requests to external APIs, these can also be recorded, provided the database and APIs (which run in totally separate processes) also support Open Telemetry tracing. It's possible to follow the trace of an HTTP request from a client, down to your API, down to a database and all the way back again. This allows you to get a deep understanding of where time is being spent or, if there is an exception, where it occurred.
\nCollecting metrics, logs and traces is only half of the equation, the other half is exporting that data to various applications that know how to collect Open Telemetry formatted data, so you can view it. The endgame is to be able to see your data in an easily consumable fashion using nice visualisations, so you can spot patterns and solve problems.
\nThe two main applications that can collect and display Open Telemetry compatible trace data are Jaeger and Zipkin. Zipkin is a bit older and doesn't have as nice a UI, so I'd personally recommend Jaeger. It looks something like this:
\n
The above image shows the trace from a 'frontend' application. You can see how it makes calls to MySQL, Redis and external APIs using HTTP requests. The length of each line shows how long it took to execute. You can easily see all of the major operations executed in a trace from end to end. You can also drill into each individual line and see extra information relevant to that part of the trace. I'll show you how you can run Jaeger and collect Open Telemetry data in my next blog post.
\nEach line in the Jaeger screenshot above is called a span and in .NET is represented by the System.Diagnostics.Activity type. It has a unique identifier and start and end times, along with the unique identifier of its parent span, so it can be connected to other spans in a tree structure representing an overall trace. Finally, a span can also contain other ancillary data that I will discuss further on.
::: tip\nUnfortunately, .NET's naming has significantly deviated from the official Open Telemetry specification, resulting in quite a lot of confusion on my part. Happily, I've been through that confusion, so you don't have to!
\nMy understanding is that .NET already contained a type called Activity, so the .NET team decided to reuse it instead of creating a new Span type like you'd expect. This means that a lot of naming does not match up with the Open Telemetry specification. From this point forward you can use the words 'span' and 'activity' interchangeably.\n:::
Recording your own traces using spans is pretty simple. First we must create an ActivitySource from which spans or activities can be recorded. This just contains a little information about the source of the spans created from it.
private static ActivitySource activitySource = new ActivitySource(\n    "companyname.product.library",\n    "1.0.0");\n\nThen we can call StartActivity to start recording and finally call Dispose to stop recording the span.
using (var activity = activitySource.StartActivity("ActivityName"))\n{\n    // Pretend to do some work.\n    await LongRunningAsync().ConfigureAwait(false);\n} // Activity gets stopped automatically at end of this block during dispose.\n\nAlong with our span we can record events. These are timestamped events that occur at a single point in time within your span.
\nusing (var activity = activitySource.StartActivity("ActivityName"))\n{\n    await LongRunningOperationAsync().ConfigureAwait(false);\n}\n\npublic async Task LongRunningOperationAsync()\n{\n    await Task.Delay(1000).ConfigureAwait(false);\n\n    // Log timestamped events that can take place during an activity.\n    Activity.Current?.AddEvent(new ActivityEvent("Something happened."));\n}\n\nWithin the LongRunningOperationAsync method, we don't have access to the current span. One way to get hold of it would be to pass it in as a method parameter. However, a better way that decouples the two operations is to use Activity.Current, which gives you access to the current span within the currently running thread.
One common pitfall I can foresee is that Activity.Current could be null, either because the caller decided not to create a span or because StartActivity returned null (which it does when no listener is sampling the ActivitySource). Therefore, we use the null conditional operator ?. to only call AddEvent if the current span is not null.
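On that note, a span only gets created at all if something is listening: ActivitySource.StartActivity returns null when no ActivityListener has been registered for the source. Here is a minimal, self-contained sketch of registering one; the source name is an assumption matching the earlier example:

```csharp
using System;
using System.Diagnostics;

public static class Program
{
    public static void Main()
    {
        // Without a registered listener, StartActivity below would return null.
        var listener = new ActivityListener
        {
            // Only listen to our own source.
            ShouldListenTo = source => source.Name == "companyname.product.library",
            // Sample everything, so IsAllDataRequested is true on created activities.
            Sample = (ref ActivityCreationOptions<ActivityContext> options) =>
                ActivitySamplingResult.AllDataAndRecorded,
            ActivityStopped = activity =>
                Console.WriteLine($"{activity.OperationName} stopped"),
        };
        ActivitySource.AddActivityListener(listener);

        var activitySource = new ActivitySource("companyname.product.library", "1.0.0");
        using (var activity = activitySource.StartActivity("ActivityName"))
        {
            // With the listener registered, activity is no longer null.
            Console.WriteLine(activity is null ? "activity is null" : "activity is recording");
        }
    }
}
```

In a real application an Open Telemetry exporter usually registers the listener for you; you only need to do this yourself when testing or consuming spans manually.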
Attributes are name value pairs of data that you can record as part of an individual span. The attribute names have a loose standard for how they are put together that I'll talk about further on.
\n::: tip\nTags in .NET are called Attributes in the Open Telemetry specification.\n:::
using (var activity = activitySource.StartActivity("ActivityName"))\n{\n    await LongRunningOperationAsync().ConfigureAwait(false);\n}\n\npublic async Task LongRunningOperationAsync()\n{\n    await Task.Delay(1000).ConfigureAwait(false);\n\n    // Log an attribute containing arbitrary data.\n    Activity.Current?.SetTag("http.method", "GET");\n}\n\nYou can add new attributes or update existing attributes using the Activity.SetTag method. There is also an Activity.AddTag method, but that appends a tag without checking whether one with the same name already exists, which can leave you with duplicates, so I'd avoid it in favour of SetTag.
IsRecording is a flag on a span that returns true if the end time of the span has not yet been set and false if it has, thus signifying whether the span has ended. In addition it can also be set to false if the application is sampling Open Telemetry spans i.e. you don't want to collect a trace for every single execution of the code but might only want a trace for say 10% of executions to reduce the significant overhead of collecting telemetry.
::: tip\nThe Activity.IsAllDataRequested property in .NET is called IsRecording in the Open Telemetry specification.\n:::
using (var activity = activitySource.StartActivity("ActivityName"))\n{\n    await LongRunningOperationAsync().ConfigureAwait(false);\n}\n\npublic async Task LongRunningOperationAsync()\n{\n    await Task.Delay(1000).ConfigureAwait(false);\n\n    // It's possible to optionally request more data from a particular span.\n    var activity = Activity.Current;\n    if (activity != null && activity.IsAllDataRequested)\n    {\n        activity.SetTag("http.url", "http://www.mywebsite.com");\n    }\n}\n\nIt's worth reading a bit more about Open Telemetry Sampling for more details. In most real world applications, collecting telemetry for every execution of your code is prohibitively expensive and unrealistic, so you will likely be using some form of sampling. Therefore, the IsRecording/IsAllDataRequested flag becomes something you should probably always check (as in the above example) before you add events or attributes to your span.
Note the attribute names http.method and http.url I used in the above examples. There are certain commonly used attribute names that have been standardised in the Open Telemetry specification.
Standardised attribute names use a lower_snake_case syntax, with . separator characters grouping related names into namespaces. Standardising the names of commonly used attributes gives applications like Jaeger the ability to show nice UI customisations. Attribute names have been categorised under a few different buckets; it's worth spending some time taking a look at them:
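For a flavour of those conventions, here are a few standardised attribute names drawn from the HTTP, database and network buckets, with illustrative values:

```
http.method = "GET"
http.url = "https://www.mywebsite.com/"
http.status_code = 200
db.system = "mysql"
db.statement = "SELECT * FROM orders"
net.peer.name = "example.com"
```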
There are many plugins for exporting data collected using Open Telemetry which I'll discuss in my next blog post about using Open Telemetry in ASP.NET Core. Therefore, it's highly unlikely that you'd need to manually write your own code to consume data collected using Open Telemetry.
\nHowever, if you're interested then Jimmy Bogard has a very well written blog post about using ActivitySource and ActivityListener to listen to any incoming telemetry. In short, you can easily subscribe to consume Open Telemetry data like so:
using var subscriber = DiagnosticListener.AllListeners.Subscribe(\n    listener =>\n    {\n        Console.WriteLine($"Listener name {listener.Name}");\n\n        listener.Subscribe(kvp => Console.WriteLine($"Received event {kvp.Key}:{kvp.Value}"));\n    });\n\nEarlier on, I spoke about how it's possible to record a trace across process boundaries, for example, collecting a trace from a client application through to a database and an API, both running in separate processes. Given what you now know about recording spans above, how is this possible?
\nThis is where the W3C Trace Context standard comes in. It defines a series of HTTP headers that pass information from one process to another about any trace that is currently being recorded. There are two HTTP headers defined in the specification:
\ntraceparent - Contains the version, trace-id, parent-id and trace-flags in an encoded form separated by dashes.\n- version - The version of the traceparent header format, which is always 00 at the time of writing.\n- trace-id - The unique identifier of the trace.\n- parent-id - The unique identifier of the span which is acting as the current parent span.\n- trace-flags - A set of flags for the current trace which determine whether the current trace is being sampled and the trace level.\ntracestate - Vendor-specific data represented by a set of name/value pairs.\n\nI'm not sure why but the HTTP headers are defined in lower-case. Here is an example of what these headers look like in an HTTP request:
\ntraceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01\ntracestate: asp=00f067aa0ba902b7,redis=t61rcWkgMzE\n\nIf you're interested in what it looks like to actually implement the W3C Trace Context, Jimmy Bogard has been implementing Open Telemetry for NServiceBus and shows how it can be done.
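To make the encoding concrete, here is a small sketch that splits the example traceparent value above into its four dash-separated fields:

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        // The example traceparent header value from the text.
        var traceparent = "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01";

        // version-traceid-parentid-traceflags, all lower-case hex.
        var parts = traceparent.Split('-');
        Console.WriteLine($"version: {parts[0]}");      // always 00 for now
        Console.WriteLine($"trace-id: {parts[1]}");     // 32 hex characters
        Console.WriteLine($"parent-id: {parts[2]}");    // 16 hex characters
        Console.WriteLine($"trace-flags: {parts[3]}");  // 01 = sampled
    }
}
```

In practice you'd never parse this yourself in .NET, since HttpClient and ASP.NET Core handle the header for you, but it's useful to see what travels over the wire.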
\nSimilar to attributes, baggage is another way we can add data as name value pairs to a trace. The difference is that baggage travels across process boundaries using a baggage HTTP header as defined in the W3C Baggage specification. It is also added to all spans in a trace.
baggage: userId=alice,serverNode=DF:28,isProduction=false\n\nSimilar to the way attributes can be recorded using the AddTag and SetTag methods, with baggage we can use the AddBaggage method. For some reason a SetBaggage method that would also update baggage does not exist.
using (var activity = activitySource.StartActivity("ActivityName"))\n{\n    await LongRunningOperationAsync().ConfigureAwait(false);\n}\n\npublic async Task LongRunningOperationAsync()\n{\n    await Task.Delay(1000).ConfigureAwait(false);\n\n    // Add baggage that will be propagated across process boundaries.\n    Activity.Current?.AddBaggage("http.method", "GET");\n}\n\nSo why would you use baggage over attributes? Well, if you have a globally unique identifier for a particular trace like a user ID, order ID or some session ID, it might be useful to add it as baggage because it's relevant to all spans in your trace. However, you must be careful not to add too much baggage because it will add overhead when making HTTP requests.
\nThe .NET team in their wisdom decided to take quite a large gamble on Open Telemetry. They not only repurposed their Activity type to represent a span but they also instrumented several libraries, so you don't have to.
The HttpClient already adds the W3C Trace Context HTTP headers from the current span automatically if a trace is being recorded. Also an ASP.NET Core application already reads W3C Trace Context HTTP headers from incoming requests and populates the current span with that information.
Since the .NET team has made it so easy to collect telemetry and integrated the Activity type into the base class libraries, I expect a lot of other libraries and applications to follow this example.
The ILogger interface from the Microsoft.Extensions.Logging NuGet package, commonly used in ASP.NET Core applications, is also able to collect logs compatible with Open Telemetry.
I've discussed that Open Telemetry is all about collecting Logs, Metrics and Trace data and gone fairly deep into collecting Trace data. In my next post, I'll cover how you can configure ASP.NET Core and Open Telemetry traces and logs.
\n", "url": "https://rehansaeed.com/deep-dive-into-open-telemetry-for-net/", "title": "Deep Dive into Open Telemetry for .NET", "summary": "How to use the Open Telemetry specification, tools and SDK's used to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) using .NET and ASP.NET.", "image": "https://rehansaeed.com/images/hero/Open-Telemetry-1600x900.png", "date_modified": "2021-01-19T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/automating-dotnet-security-updates/", "content_html": "Every few weeks Microsoft pushes out a .NET SDK update to patch zero day security vulnerabilities. It's important to keep up to date with these to ensure that your software is protected. The problem is, keeping up to date is a manual and boring process but what if you could automate it?
\nIn this post, I'll talk through how you can get most of the way to a fully automated solution with the last hurdle requiring some of your help.
\nThe first problem we need to solve is to enforce a specific version of the .NET SDK to be used to build our code. We can do this by adding a global.json file to the root of our repository. We can set the .NET SDK version in it like so:
{\n "sdk": {\n "version": "3.1.402"\n }\n}\n\n::: warning Security vs Convenience\nIf a developer doesn't have the version of the .NET SDK you've specified in your global.json file, Visual Studio will fail to load the projects and show a pretty good error in the output window telling you to update the SDK. It would be nice if it also contained a link to the exact SDK install you needed to smooth the experience.\n:::
Continuous integration servers like GitHub Actions, Azure Pipelines or AppVeyor all have a version of the .NET SDK pre-installed for your convenience. However, when a new version is released it takes them days to update to the latest version.
\nIn my opinion, it's just better to install the .NET SDK yourself, which is pretty easy to do. The trick is to read the .NET SDK version number from the global.json file, so that there is a single source of truth for the version number and it's easier to update.
It's worth noting that this adds a few seconds to your build time. However, if the build server already has the version installed, which is usually the case, it's very quick.
\nFor GitHub Actions, we can use the first party actions/setup-dotnet GitHub action to install the .NET SDK. You can provide it a hard coded version number but it turns out omitting this causes it to look up the version number from any global.json file it finds.
- name: 'Install .NET Core SDK'\n  uses: actions/setup-dotnet@v1\n\nAzure Pipelines has a similar first party UseDotNet task that can install the .NET SDK. It's a bit more verbose, as you need to set the useGlobalJson flag to true.
- task: UseDotNet@2\n  displayName: 'Install .NET Core SDK'\n  inputs:\n    packageType: 'sdk'\n    useGlobalJson: true\n\n.NET ships with a PowerShell and a Bash script to install the .NET SDK. They both accept an argument telling them to read the version number from the global.json file. Here is a short cross-platform PowerShell 7 (previously known as PowerShell Core) script that you can use:
if ($isWindows) {\n Invoke-WebRequest "https://dot.net/v1/dotnet-install.ps1" -OutFile "./dotnet-install.ps1"\n ./dotnet-install.ps1 -JSonFile global.json\n}\nelse {\n Invoke-WebRequest "https://dot.net/v1/dotnet-install.sh" -OutFile "./dotnet-install.sh"\n sudo chmod u+x dotnet-install.sh\n sudo ./dotnet-install.sh --jsonfile global.json\n}\n\nAppVeyor has some issues with installing the .NET SDK using the PowerShell and Bash scripts. For reasons I'm not too clear on, you have to set the installation directory. So here is the updated script I use for that:
\nif ($isWindows) {\n Invoke-WebRequest "https://dot.net/v1/dotnet-install.ps1" -OutFile "./dotnet-install.ps1"\n ./dotnet-install.ps1 -JSonFile global.json -InstallDir 'C:\\Program Files\\dotnet'\n}\nelse {\n Invoke-WebRequest "https://dot.net/v1/dotnet-install.sh" -OutFile "./dotnet-install.sh"\n sudo chmod u+x dotnet-install.sh\n if ($isMacOS) {\n sudo ./dotnet-install.sh --jsonfile global.json --install-dir '/Users/appveyor/.dotnet'\n } else {\n sudo ./dotnet-install.sh --jsonfile global.json --install-dir '/usr/share/dotnet'\n }\n}\n\nDependabot is an amazing tool that GitHub recently acquired. It automatically submits pull requests to your repository to update packages of various kinds including NuGet and NPM packages.
\nThis is where I need your help. The Dependabot GitHub repository has an open issue (dependabot-core#2442) to also do the same for the .NET SDK version in the global.json file. Upvoting the issue will really help raise its profile and get it implemented.
Security is hard. Keeping up to date is important but a never ending boring chore. It doesn't have to be that way. With a little extra work, we can get as close to making a .NET SDK update a three character commit every few weeks and with your help, maybe even that can be automated with Dependabot.
\n", "url": "https://rehansaeed.com/automating-dotnet-security-updates/", "title": "Automating .NET Security Updates", "summary": ".NET SDK updates are released every few weeks. In this post, I talk about how you can automate them.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2020-09-23T15:35:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/the-fastest-nuget-package-ever-published-probably/", "content_html": "::: tip Updated 2022-03-30 22:42\nThe GitHub CLI added support for creating labels, so updated the post with those new commands.\n:::
\n::: tip Updated 2021-04-12 09:50\nI appeared on the .NET Docs Show and ran through this blog post and much more. I've added a link to the show on YouTube below.\n:::
\n::: tip Updated 2020-09-17 09:29\nAdded GitHub CLI commands to create labels instead of doing it manually. The GitHub CLI also simplified some commands, so I've updated the post to make use of those simpler commands.\n:::
\nhttps://www.youtube.com/watch?v=A93Fn_qMLX4
\nSo, you want to publish a new NuGet package? You just want to get your code up into nuget.org as quickly as possible but there is so much that you have to setup to get there. Not any more! I'll show you how you can create a new project and publish a NuGet package with all the bells and whistles in a couple of minutes.
\nWe'll start off by creating a new GitHub repository using the new GitHub CLI.
\ngh repo create RehanSaeed/FastestNuGet --public --confirm\ncd FastestNuGet\n\nThe next step is to install the Dotnet Boxed project templates and then create a new project using the NuGet template. There are a lot of optional features you can toggle in this project template, which you can review by looking at the output of the dotnet new nuget --help command.
dotnet new --install Boxed.Templates\ndotnet new nuget --help\ndotnet new nuget --title "Project Title" --description "Project Description" --github-username RehanSaeed --github-project FastestNuGet\n\nNext we'll commit and push our newly created project to the main branch.
git add .\ngit commit -m "Initial"\ngit push --set-upstream origin main\n\nAs soon as we do this, we'll see two GitHub Actions have started.
\n
The Build GitHub Action has completed several actions you can see below. Note that these actions were completed on Windows, MacOS and Ubuntu Linux. This ensures that your code builds and passes tests on all platforms.

This resulted in a NuGet package being packaged up and pushed to GitHub packages. This is a nice place to store pre-release packages that you can use for testing.
\n
The other Release Drafter GitHub action created a draft release for us in GitHub releases.

Next we need to create some default labels that we can apply to pull requests. This will help us create automatic release notes for any NuGet packages we release. The bug, enhancement and maintenance labels will categorise changes in our release notes. The major, minor and patch labels will automatically generate a semantic versioning 2.0 compliant version number for us.

I've gone in to GitHub and deleted all the existing labels and then run a few GitHub CLI commands to create just the ones I want:
\ngh label create "dependencies" --description "Pull requests that update a dependency file." --color "0366d6" --force\ngh label create "documentation" --description "Pull requests or issues to add or modify documentation." --color "0075ca" --force\ngh label create "bug" --description "Issues describing a bug or pull requests fixing a bug." --color "ee0701" --force\ngh label create "enhancement" --description "Issues describing an enhancement or pull requests adding an enhancement." --color "a2eeef" --force\ngh label create "maintenance" --description "Pull requests that perform maintenance on the project but add no features or bug fixes." --color "fff89b" --force\ngh label create "major" --description "Pull requests requiring a major version update according to semantic versioning." --color "b23021" --force\ngh label create "minor" --description "Pull requests requiring a minor version update according to semantic versioning." --color "f99248" --force\ngh label create "patch" --description "Pull requests requiring a patch version update according to semantic versioning." --color "eaf42c" --force\n\nNow it's time to make a change and submit a new pull request (PR) to our repository. Notice I'm adding a major and enhancement label to the pull request.
git switch --create some-change\ngit add .\ngit commit -m "Some change"\ngit push --set-upstream origin some-change\ngh pr create --fill --label major --label enhancement\n\nNext, I'll check that the pull request passed all eight of its continuous integration build checks and merge the pull request.
\n
If we go back to GitHub Releases, we'll see that our draft GitHub release was automatically updated with details of our pull request! Notice that the enhancement label also caused our pull request to be categorised under 'New Features'.

Next, we'll want to publish an official release of our NuGet package to nuget.org but first, we need to get hold of a NuGet API key from nuget.org and add it as a secret named NUGET_API_KEY in GitHub secrets.

Finally I'll edit the release and change the tag name and display name for the release to 1.0.0. Normally, the major, minor and patch labels we applied earlier would generate this version for us but this is the first ever Git tag, so we'll need to do it ourselves.
In my last post 'The Easiest Way to Version NuGet Packages' I talked more about how we are using MinVer for taking the Git tags and versioning our DLL's and NuGet packages.
\n
Now bask in the glory of seeing your NuGet package on nuget.org. I also just noticed there is a Black Lives Matter (BLM) banner on the site! Those lives certainly do matter; check out my recent post on Racism in Software Development and Beyond for my take on the subject.
\n
That's not all! We didn't just push one NuGet package, we also pushed its symbols to the nuget.org symbol server. The NuGet package is also signed and has source link support, so developers can debug code in your NuGet package. If you look at the main ReadMe of your project, you'll see a badge showing you the status of the latest GitHub Action run on the main branch, and finally you'll also see a graph showing you how long each GitHub Action run took and its status over time.
\n
You can take a look at the repository at RehanSaeed/FastestNuGet to see all of the above in action.
\nHere is the complete script we ran to get from starting a new project to publishing on NuGet. I took lots of screenshots along the way but overall, you can do all this in about two minutes assuming you have everything installed.
\ngh repo create RehanSaeed/FastestNuGet --public --confirm\ncd FastestNuGet\n\ndotnet new --install Boxed.Templates\ndotnet new nuget --title "Project Title" --description "Project Description" --github-username RehanSaeed --github-project FastestNuGet\n\ngit add .\ngit commit -m "Initial"\ngit push --set-upstream origin main\n\n# View GitHub Actions Continuous Integration Build\nstart "https://github.com/RehanSaeed/FastestNuGet/actions"\n\n# View NuGet Package Published to GitHub Packages\nstart "https://github.com/RehanSaeed/FastestNuGet/packages"\n\n# Create major, minor, patch, bug, enhancement, maintenance labels\nstart "https://github.com/RehanSaeed/FastestNuGet/labels"\ngh label create "dependencies" --description "Pull requests that update a dependency file." --color "0366d6" --force\ngh label create "documentation" --description "Pull requests or issues to add or modify documentation." --color "0075ca" --force\ngh label create "bug" --description "Issues describing a bug or pull requests fixing a bug." --color "ee0701" --force\ngh label create "enhancement" --description "Issues describing an enhancement or pull requests adding an enhancement." --color "a2eeef" --force\ngh label create "maintenance" --description "Pull requests that perform maintenance on the project but add no features or bug fixes." --color "fff89b" --force\ngh label create "major" --description "Pull requests requiring a major version update according to semantic versioning." --color "b23021" --force\ngh label create "minor" --description "Pull requests requiring a minor version update according to semantic versioning." --color "f99248" --force\ngh label create "patch" --description "Pull requests requiring a patch version update according to semantic versioning." 
--color "eaf42c" --force\n\ngit switch --create some-change\ngit add .\ngit commit -m "Some change"\ngit push --set-upstream origin some-change\ngh pr create --fill --label major --label enhancement\n\n# View and Complete Pull Request\nstart "https://github.com/RehanSaeed/FastestNuGet/pull/1"\n\n# Add NUGET_API_KEY to GitHub Secrets\nstart "https://github.com/RehanSaeed/FastestNuGet/settings/secrets"\n\n# View and Publish Updated Draft Release\nstart "https://github.com/RehanSaeed/FastestNuGet/releases"\n\n# View NuGet Package Published to NuGet\nstart "https://www.nuget.org/packages/FastestNuGet/"\n\nI hope this Dotnet Boxed project template accelerates development of your next NuGet package. There are lots of optional features of the NuGet project template I haven't even shown like support for Azure Pipelines and Appveyor continuous integration builds and more, so please do go and take a look.
\n", "url": "https://rehansaeed.com/the-fastest-nuget-package-ever-published-probably/", "title": "The Fastest NuGet Package Ever Published (Probably)", "summary": "The fastest way to create a new NuGet package project and get it published with all the bells and whistles like continuous integration (CI) builds and drafted release notes.", "image": "https://rehansaeed.com/images/hero/NuGet-1366x768.png", "date_modified": "2020-07-08T08:34:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/the-easiest-way-to-version-nuget-packages/", "content_html": "The easiest way to version NuGet packages using semantic versioning in my opinion is to use MinVer. Getting started is literally as easy as adding the MinVer NuGet package. Getting finished is not too much more than that.
In this post I'll discuss the semantic versioning 2.0 standard and show you how you can semantically version your NuGet packages and the DLLs within them using MinVer and Git tags.
\nSemantic versioning is the idea that each part of a version number has some intrinsic meaning. Let's break down an example of a full version number into its constituent parts:
\n1.2.3-preview.0.4+b34215d3d2539837ac3e20fc3111ba7d46670064\n\n- 1 - The major version. Incrementing it signals breaking changes.\n- 2 - The minor version. Incrementing it signals backwards compatible new features.\n- 3 - The patch version. Incrementing it signals backwards compatible bug fixes.\n- preview.0.4 - The optional pre-release identifiers: a label, usually a name like alpha or beta (here preview), followed by a pre-release version number and the Git height i.e. the number of commits since the last release.\n- b34215d3... - The optional build metadata, in this case the Git commit SHA the package was built from.\n\nIsn't that cool! Every part in there has so much significance. Just by looking at a version number we can determine whether a release contains breaking changes, new features or bug fixes, whether it's a pre-release and even the exact commit it was built from.
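As a quick illustration of how those parts decompose, the following sketch splits the example version string on the SemVer separators (+ introduces the build metadata and the first - introduces the pre-release identifiers):

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        // The example version from the text.
        var version = "1.2.3-preview.0.4+b34215d3d2539837ac3e20fc3111ba7d46670064";

        var buildSplit = version.Split('+', 2);      // separate the build metadata
        var preSplit = buildSplit[0].Split('-', 2);  // separate the pre-release identifiers
        var numbers = preSplit[0].Split('.');        // major.minor.patch

        Console.WriteLine($"major: {numbers[0]}, minor: {numbers[1]}, patch: {numbers[2]}");
        Console.WriteLine($"pre-release: {preSplit[1]}");
        Console.WriteLine($"build metadata: {buildSplit[1]}");
    }
}
```

Note this naive split only works for versions that actually contain a pre-release label and build metadata; a general-purpose parser would need to handle their absence too.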
\nIn the past I've tried to generate version numbers in quite a few different ways, none of which were very satisfactory and none of which conformed to semantic versioning 2.0. I've tried using the current date and time to generate a version number. This tells you when the package was created but nothing more.
\n[Year].[Month].[Day].[Hour][Minutes]\n2020.7.2.0908\n\nI've also generated version numbers based on the automatically incrementing continuous integration (CI) build number but how do you turn one number into three? Well in my case I hard coded a major or minor version and used the CI build number for the patch version. Using this method lets you tie a package version back to a CI build and through inference a Git commit but it's less than ideal.
\n[Hard Coded].[Hard Coded].[CI Build Number]\n1.2.3\n\nMinVer leans on Git tags to help version your NuGet packages and the assemblies within them. Let's start by adding the MinVer NuGet package to a new class library project:
\n<ItemGroup Label="Package References">\n <PackageReference Include="MinVer" PrivateAssets="All" Version="2.3.0" />\n</ItemGroup>\n\nWe'll need an initial version number for our NuGet package, so I'll tag the current commit as 0.0.1 and push the tag to my Git repository. Then I'll build my NuGet package:
git tag -a 0.0.1 -m "Initial"\ngit push --tags\ndotnet build\n\nIf you now use an IL decompiler tool like dnSpy (which is free and open source) to take a peek inside the resulting DLL, you'll notice the following version assembly level attributes have been automatically added:
\n
[assembly: AssemblyVersion("0.0.0.0")]\n[assembly: AssemblyFileVersion("0.0.1.0")]\n[assembly: AssemblyInformationalVersion("0.0.1+362b09133bfbad28ef8a015c634efdb35eb17122")]\n\nIf you now run dotnet pack to build a NuGet package, you'll notice that it has the correct version. Note that 0.0.1 is a release version of our NuGet package, i.e. something we might want to push to nuget.org in this case.

Now lets make a random change in our repository and then rebuild and repack our NuGet package:
\ngit add .\ngit commit -m "Some changes"\ndotnet build\ndotnet pack\n\nNow MinVer has automatically generated a pre-release version of our NuGet package. The patch version has been automatically incremented, a pre-release name preview has been given with a pre-release version of 0. We also have a git height of one because we have made one commit since our last release and we still have the git commit SHA too:

If we crack open our DLL and view its assembly level attributes again, we'll see more details:
\n[assembly: AssemblyVersion("0.0.0.0")]\n[assembly: AssemblyFileVersion("0.0.2.0")]\n[assembly: AssemblyInformationalVersion("0.0.2-preview.0.1+7af23ee0f769ddf0eb8991d59ad09dcbc8d82855")]\n\nNow at this stage you could make some more commits and you'd see the major, minor and patch versions stay the same but the preview version, git height and git SHA would change. Eventually though, you will want to get another release version of your NuGet package ready. Well, this is as simple as creating another git tag:
\ngit tag -a 0.0.2 -m "The next amazing version"\ngit push --tags\ndotnet build\n\nNow you can simply take the latest 0.0.2 release and push it to nuget.org.
There is a more popular competitor to MinVer out there called Nerdbank.GitVersioning which is part of the .NET Foundation and is worth talking about because it works slightly differently. It requires you to have a version.json file in your repository to contain the version information, instead of using Git tags.
{\n    "$schema": "https://raw.githubusercontent.com/dotnet/Nerdbank.GitVersioning/master/src/NerdBank.GitVersioning/version.schema.json",\n    "version": "1.0-beta"\n    // This file can get very complicated...\n}\n\nIn my opinion, this is not as nice. Git tags are an underused feature of Git and using them to tag release versions of your packages is a great use case. Git allows you to check out code from a tag, so you can easily view the code in a package just by knowing its version.
\ngit checkout 0.0.1\n\nHaving the version number in a file also means lots of commits just to edit the version number.
\nMinVer is an easy way to version your NuGet packages and DLLs. It also comes with a CLI tool that you can use to version other things like Docker images, which I'll cover in another post. If you'd like to see an example of MinVer in action, you can try my Dotnet Boxed NuGet package project template by running a few simple commands to create a new project:
\ndotnet new --install Boxed.Templates\ndotnet new nuget --name "MyProject"\n\n",
"url": "https://rehansaeed.com/the-easiest-way-to-version-nuget-packages/",
"title": "The Easiest Way to Version NuGet Packages",
"summary": "Using MinVer is the easiest way to version your NuGet packages and DLL's using semantic versioning (SemVer).",
"image": "https://rehansaeed.com/images/hero/Semantic-Versioning-1600x900.png",
"date_modified": "2020-07-01T16:36:29.000Z",
"author": {
"url": "https://rehansaeed.com"
}
},
{
"id": "https://rehansaeed.com/racism-in-software-development-and-beyond/",
"content_html": "If you're of North African, Middle Eastern or South Asian origin, you have to send up to 90% more job applications than your white counterparts in the United Kingdom. This level of discrimination has been unchanged since the 1960's. Let that sink in for a moment.
\nI read this statistic in a Guardian article last year and it's been on my mind ever since. The study was carried out by the Centre for Social Investigation at Nuffield College, University of Oxford. You can read their full report here.
\nThose carrying out the study applied to nearly 3200 jobs, randomly varying the minority background of fictitious job applicants while holding their skills, qualifications and work experience constant.
\nThey looked at high and low skill jobs. What I found particularly interesting was that the high skill jobs they looked at included software engineers. They found that there was no difference between applying for high or low skill jobs: you were going to be equally discriminated against either way.
\n
If you don't live in the UK, then as the study points out there is a similar situation in other countries. Spain, Germany, Netherlands and Norway are called out by name and here is a similar study about the United States. Racism is a human condition, it affects everyone.
\nIf you are from a majority ethnic group, the next time you're at work (I hope we defeat this virus soon and can get back), take a look around at your colleagues. If one of them is from a minority, they were either very lucky or perhaps had to work harder than you to get the same job. They may even have had to be more qualified than you.
\nWhen physical or verbal racist abuse occurs, people know about it, they can deal with it and hopefully move on. When people are discriminated against economically by not being able to get a job or perhaps getting a lower paid job, there is no way for them to know that they've been discriminated against.
\nWe know that if you're an ethnic minority in Britain, you're more likely to live in poverty and we know that living in poverty means that you're more likely to have physical and mental health problems, you're less likely to do well in education, you're more likely to go to prison and you are liable to die sooner.
\nCOVID-19 has revealed the contrast even more starkly. People from Black, Asian and minority ethnic (BAME) backgrounds have suffered from higher rates of death than their white counterparts. If you're Black, you're 3.9 times more likely to die. In a report the government tried to hide, it was revealed that historical racism plays a part.
\nHit a man in the face and he'll bleed for a day, hit a man in the pocket and he'll bleed for a lifetime.
\nI was born and bred in East London but my parents are originally from Pakistan and I happen to be a Muslim too. As one of those people who has to send 40% more job applications according to the report above, this is kind of personal.
\nI have experienced direct verbal racist abuse before (most people from an ethnic minority have at least once) but it's rare, for me at least. I have also received abuse online on Twitter and even GitHub but that's par for the course online.
\nI do not know...I cannot know if I have ever been discriminated against when applying for jobs. If I had a different name, would I have been offered more chances? I don't think I would have but the statistics say differently. It did take me years of applying to get a job at Microsoft, but the same was true for one of my colleagues, and in the end I did get one.
\nWith that said I'm also one of the lucky ones. If I think back to the people that hired me and gave me a chance in my first few jobs, I'd really like to thank them.
\nClearly there are many problems but I would hate to leave things there without suggesting potential solutions, so here are three.
\nAnonymous job applications, where names are stripped from job applications, are not as far-fetched as you might think. They were trialled by 100 British firms in 2012, backed by the former deputy prime minister Nick Clegg. Some still use the special software that enables this system but there is resistance from some employers.
\nEventually, a candidate will have to meet someone in an in-person interview where there is no hiding their ethnicity but at least it will mean getting more ethnic minorities a foot in the door that they otherwise clearly are not getting according to the evidence.
\nIn the UK, if your company has more than 250 employees, you must report the gender pay gap between your male and female employees. This has highlighted the disparity with eight out of ten employers paying men more than women.
\nEmployers should be required to do the same for ethnicity. There is reportedly a £3.2bn gap in wages between ethnic minorities and their white colleagues doing the same jobs.
\nEmail your local member of parliament and urge them to support the requirement for large firms to publish an ethnicity pay gap and perhaps anonymous job applications too.
\nMy prime minister, Boris Johnson, once called black people "piccaninnies" with "watermelon smiles". When writing about the continent of Africa he said:
\n\n\nThe problem is not that we were once in charge, but that we are not in charge anymore.
\n\n
He's been anti-semitic too, writing an entire book about powerful Jews controlling the media and influencing elections.
\nHe's made Islamophobic remarks about Muslim women looking like bank robbers and letterboxes that caused an up-tick in violence towards Muslim women, with racists sometimes using Mr Johnson's exact words. He says this was apparently an effort to help them. I hope he doesn't try helping anyone else.
\nThat's not even the half of it, he's also insulted or abused half a dozen other groups of people. It's been fun to watch right wing politicians and political commentators contort themselves into strange shapes trying to defend his blatant racism.
\nThe sad thing is, if you put a racist in power, many others follow them in their wake. The Conservative party is now riddled with racist members, councillors and even members of parliament. The far right group Britain First has said that 5,000 of its members have joined the Conservative party.
\nThe solution for me is clear. If you put a racist in charge, they build a whole pyramid of racists underneath them.
\nAs many others have said, it's not enough to be a passive non-racist in society. Half a century on, structural and institutional racism still pervades our society. It is necessary for us all to be actively working against racism. We also need to recognise that a lot of the inequalities we see come from implicit biases that all human beings have, including our own selves. If we recognise that we ourselves are deficient, perhaps we can do something about it.
\n", "url": "https://rehansaeed.com/racism-in-software-development-and-beyond/", "title": "Racism in Software Development & Beyond", "summary": "If you're of North African, Middle Eastern or South Asian origin, you have to send up to 90% more job applications than your white counterparts in the United Kingdom. This level of discrimination has been unchanged since the 1960's.", "image": "https://rehansaeed.com/images/hero/Black-Glowing-Fist-1600x900.png", "date_modified": "2020-06-12T16:36:29.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/spicing-up-your-browser-console/", "content_html": "Wouldn't it be cool if when you opened the browser console up on a site, you saw a cool secret message? There are many sites that do this with quite a few business's advertising frontend development jobs in this way. I wanted to join in on the fun, so...I did. Here is my story of two hours I'm not getting back.
\nI've done some work on Colorful.Console which is an amazing C# console library that lets you write text in ASCII art using figlet fonts. I wanted to do the same for my blog. I was too lazy to use Colorful.Console and used a random online generator I found. I tried a couple of different fonts out and came up with this JavaScript code:
\nconst consoleOptions = 'background: #ffffff; color: #6b17e8';\n\n// Standard Figlet Font\nconsole.log('%c ____ _ ', consoleOptions);\nconsole.log('%c | _ \\\\ ___| |__ __ _ _ __ ', consoleOptions);\nconsole.log('%c | |_) / _ \\\\ \\\\'_ \\\\ / _` | \\\\'_ \\\\ ', consoleOptions);\nconsole.log('%c | _ < __/ | | | (_| | | | |', consoleOptions);\nconsole.log('%c |_| \\\\_\\\\___|_| |_|\\\\__,_|_| |_|', consoleOptions);\n\n// o8 Figlet Font\nconsole.log('%c oooooooooo oooo ', consoleOptions);\nconsole.log('%c 888 888 ooooooooo8 888ooooo ooooooo oo oooooo ', consoleOptions);\nconsole.log('%c 888oooo88 888oooooo8 888 888 ooooo888 888 888 ', consoleOptions);\nconsole.log('%c 888 88o 888 888 888 888 888 888 888 ', consoleOptions);\nconsole.log('%c o888o 88o8 88oooo888 o888o o888o 88ooo88 8o o888o o888o', consoleOptions);\n\nFor the standard font, I had to escape quite a few characters, such as quotes and backslashes, using a backslash \\, so watch out for that. The results in a browser console were pretty terrible and hard to read...

Notice that I passed options to the console.log API to set the background and foreground colour of the text. The Chrome browser adds a lot of space between lines and the font just looks a little anaemic and hard to read. I rooted around in the Character Map app in Windows to see if I could find a more substantial set of characters that would show up more brightly instead of using dashes, pipes and numbers. Then I found these: â â â â â âČ âș ⌠â.
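For reference, here is a minimal standalone sketch of the %c mechanism: the %c placeholder tells the console to style the text that follows it with the CSS string passed as the next argument. The styled helper is my own illustration, not a console API:

```javascript
// CSS applied by the console to any text after a %c placeholder.
const consoleOptions = 'background: #ffffff; color: #6b17e8';

// Hypothetical helper: pair a line of ASCII art with the CSS options.
function styled(line) {
  return ['%c' + line, consoleOptions];
}

// Each line gets the same styling applied.
console.log(...styled(' ____ '));
console.log(...styled('|  _ \\'));
```

In a terminal the %c directive is ignored, but browser consoles interpret it and render the rest of the line with the given CSS.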
I took the o8 figlet font text above and simply did a find and replace on it. I replaced the 8 character with â and I also replaced the o character with â:
console.log('%c ââââ ââââ ââââ ââââ', consoleOptions);\nconsole.log('%c âââââ âââ ââââ ââââ ââââââââ âââââââ ââ âââ ââââ ââ âââ ââââ âââââââ ââââââââ ', consoleOptions);\nconsole.log('%c ââ âââââ ââ âââ âââ âââ âââ ââââââââ âââ âââ âââ âââ âââ âââ ââââââââ âââ âââ ', consoleOptions);\nconsole.log('%c ââ âââ ââ âââ âââ âââ âââ âââ âââ âââ âââ âââ âââ âââ âââ âââ âââ âââ âââ ', consoleOptions);\nconsole.log('%c ââââ â ââââ ââââââ ââ âââââ âââââ âââââââ ââ âââââââââââââ âââââââââââââ âââââââ ââ âââââââââ', consoleOptions);\n\nconsole.log('%c ââââââââââ ââââ ', consoleOptions);\nconsole.log('%c âââ âââ ââââââââââ ââââââââ âââââââ ââ ââââââ ', consoleOptions);\nconsole.log('%c âââââââââ ââââââââââ âââ âââ ââââââââ âââ âââ ', consoleOptions);\nconsole.log('%c âââ âââ âââ âââ âââ âââ âââ âââ âââ ', consoleOptions);\nconsole.log('%c âââââ ââââ âââââââââ âââââ âââââ âââââââ ââ âââââ âââââ', consoleOptions);\n\nconsole.log('%c âââââââââ ââââ ', consoleOptions);\nconsole.log('%c âââ âââââââ ââââââââââ ââââââââââ ââââââââ ', consoleOptions);\nconsole.log('%c âââââââââ ââââââââ ââââââââââ ââââââââââ âââ âââ ', consoleOptions);\nconsole.log('%c âââ âââ âââ âââ âââ âââ âââ ', consoleOptions);\nconsole.log('%c ââââââââââ âââââââ ââ âââââââââ âââââââââ âââââââââ', consoleOptions);\n\nThis seemed to work great and created a cool effect:
\n
All you need to do now is copy and paste that text somewhere in the main part of your app. Now, when someone opens the browser console, they'll see your cool surprise. You can hit ||F12|| right now and take a look at mine.
\n", "url": "https://rehansaeed.com/spicing-up-your-browser-console/", "title": "Spicing up your Browser Console", "summary": "Use figlet fonts, ASCII art and browser console themes to spice up your browser console.", "image": "https://rehansaeed.com/images/hero/Figlet-Fonts-1600x900.png", "date_modified": "2020-06-10T18:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/choosing-a-static-site-generator/", "content_html": "I recently rebuilt this blog using a static site generator called Gridsome which is based on Vue.js and GraphQL. This is the story of all the static site generators I tried or read up on, how I moved from WordPress to Gridsome and what I discovered along the way.
\nI want blogging to be as low friction as possible. Quite frankly, if there is even a little friction, I'll stop writing posts, which is what has happened over the last couple of years: my output has definitely dropped.
\nI was using WordPress in the past which was achingly slow and a major barrier to writing posts. Thinking of writing a post in WordPress just put me off it altogether.
\nWordPress stores its posts in HTML. You can use markdown using the new Gutenberg plugin but if you want to edit the post after the fact, you're back to HTML.
\nThe other issue is that WordPress gets hacked all the time because people don't keep it and any installed plugins up to date. However, I found over the years that when I upgraded a plugin, there was a 50% chance that something was going to break and another 20% chance that I wouldn't find out about it until a week later.
\nPlugins were the bane of my life. My code formatting plugin became unsupported without an easy alternative that I could migrate to. This meant I had to stick to an older version of PHP and wait for another plugin to add a migration path.
\nFinally, I was paying for hosting on Azure Web apps which isn't the simplest or cheapest option I could have gone with. It was time to move...
\nSo why switch from a dynamic site like WordPress, to a statically generated site? I found so many reasons, some of which I hadn't thought of before I made the switch.
\nThe obvious ones are that a static site is going to be faster and cheaper to run. A nice side effect of being faster is that search engines will also give you a little more link juice and rank you higher.
\nThe biggest win for me was that I can now finally own my own content. It's strange to say that hosting my own WordPress site was not owning my own content but the fact is, I was not fully in control of the content that WordPress was generating. I became painfully aware of this after I had exported my posts from WordPress to markdown.
\nI had used certain plugins that formatted content strangely or I was using short codes which don't translate well. There were a myriad of issues. The amazing thing was that I could use VS Code to easily find and replace broken formatting with the help of some regular expression magic (I know enough to be dangerous!). The downside was that I had to manually go back, fix and check every blog post I'd ever written.
\nAt the end of the day, my content is now simple markdown files formatted to my liking. There is nothing simpler than a text file and I am finally in control. If I ever need to pick up and switch to a different static site generator, I can take my content, which is in an open, easy-to-move format, and move it easily. I'm never going back.
\nThe other thing that I really loved was building the site itself. I have a folder of bookmarks containing cool little things you can do on the web that I have collected over the years. I put nearly all of it into practice, which was a lot of fun but did take me a couple of months to finish. It turns out, I love HTML, CSS and JavaScript but particularly CSS where visual feedback is instant and very satisfying.
\nSurveying the static site generator landscape was a dizzying experience. There didn't seem to be a clear winner at first, so this is what I found:
\n::: warning\nI certainly didn't do an in-depth review of each static site generator. These are my personal views based on the limited knowledge, limited time looking at each project and in-built human biases I have.\n:::
\nThe one that started it all, Jekyll is backed by GitHub and has been going for a long time, which means it has a large community and lots of plugins. Hosting with GitHub pages was easy, since there were some nice integrations.
\nMy only problem with it was that it was built on Ruby which in my experience a year ago, does not play well on Windows. I hope that has changed but I suppose you could use the Windows Subsystem for Linux (WSL) to run Ruby instead.
\nKhalid Abuhakmeh uses Jekyll and his blog looks pretty amazing, so well worth a look.
\nGatsby is built with React and GraphQL. Plugins can be used used to hook up various data sources like markdown files and GraphQL makes querying extremely simple.
\nI'm not much of a React fan personally but if you are, it has a huge following, so community support and plugins are easy to find. Plus I cannot overstate how much simpler the GraphQL plugins make it to consume arbitrary content. Knowing GraphQL is important but it's not too difficult to learn.
\nStatiq is built on ASP.NET Core and uses Razor to write views. It's a very new static site generator and I believe is an evolution of another project called Wyam. If you're not too knowledgeable in web technologies and live in the C# and .NET space, this may be a really good choice for you.
\nI've heard a lot of good things about Hugo. It's built on Go. The community is pretty large and the project is pretty popular. If I hadn't gone with Gridsome, I would have liked to spend some more time with Hugo.
\nMy only issue with Hugo having used Kubernetes Helm templates, was that I found the Go templating quite difficult to read due to the liberal use of brackets everywhere. However, the system is fairly intuitive to use.
\n{{ define "main" }}\n <main aria-role="main">\n <header class="homepage-header">\n <h1>{{.Title}}</h1>\n {{ with .Params.subtitle }}\n <span class="subtitle">{{.}}</span>\n {{ end }}\n </header>\n <div class="homepage-content"></div>\n <div>\n {{ range first 10 .Site.RegularPages }}\n {{ .Render "summary"}}\n {{ end }}\n </div>\n </main>\n{{ end }}\n\nI took a pretty deep dive into Vuepress which is built on Vue.js. I'm a huge fan of Vue.js due to its single file components which allow you to write HTML, JavaScript and CSS in a single file. React has a lot to say about the first two but leaves CSS up to you, which has led to the whole CSS-in-JS movement. Personally, I think writing simple CSS or SCSS is where it's at.
\nIn the end though, I found that VuePress is not really geared towards building blogs; it's more geared towards documentation sites for open source projects. It does a really good job in that space.
\nNuxt.js is a well known framework which uses Vue.js in combination with server side rendering. Next.js is a similar project for React. What some don't know is that you can also create static sites using these tools.
\nNuxt.js recently released Nuxt Content which allows you to drive content from Markdown files. This came too late for me to try but I'd certainly take a deeper look the next time.
\nGridsome is the Vue.js equivalent of Gatsby, in that it also uses the power of GraphQL.
\nThere are a lot of plugins that can connect your static site to various sources of data using GraphQL. Want a data driven site? Want to consume some markdown, JSON, random images, content from WordPress or Ghost? Just install a plugin and write a simple GraphQL query.
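For a taste of what that looks like, a Gridsome-style page query over a markdown-backed collection of posts might look something like this. The Post collection and its fields are hypothetical; the real names depend on how you configure the source plugin:

```graphql
query {
  allPost(sortBy: "date", order: DESC) {
    edges {
      node {
        title
        path
        date(format: "D MMMM YYYY")
      }
    }
  }
}
```

The query result is then available to the page component as plain data, which is what makes consuming arbitrary content feel so simple.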
\nIn the end, I chose Gridsome for its simplicity. It's just HTML, SCSS and JavaScript at the end of the day (albeit in a single file component). It's the closest solution to the web and also why I really enjoy Vue.js.
\nIn my reading, I realized that a lot of people are using Netlify. It has a lot of nice little extra features; I personally didn't need any of them, but they're worth a look. Netlify has a free tier but does ramp up to being quite costly after that, so beware and make sure your site doesn't use a lot of bandwidth.
\nI chose to host my site on GitHub pages. It's free until GitHub thinks you're abusing it and asks you to move. It doesn't have any bells or whistles but it works. There are two issues I wish they would fix:
\n.github.io.master or main.With all that said, it's simple and there was one less thing I had to worry about, since my content was already on GitHub.
\nI picked Gridsome and am pretty happy but you may have a different knowledge set and something else might suit you. Whatever makes you happy! The key is that writing blog posts needs to be low friction, so that you actually end up doing it.
\nIn my next post, I'll talk about the features I look to build in a blog and which of them I think are essential to a good blog site.
\n", "url": "https://rehansaeed.com/choosing-a-static-site-generator/", "title": "Choosing a Static Site Generator", "summary": "I choose a static site generator from among a long list including Jekyll, Gatsby, Statiq, Hugo, VuePress, Next.js, Nuxt.js and Gridsome.", "image": "https://rehansaeed.com/images/hero/Static-Site-Generators-1600x900.png", "date_modified": "2020-06-09T19:19:49.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/asp-net-core-integration-testing-mocking-using-moq/", "content_html": "If you want to run an integration test for your ASP.NET Core app without also testing lots of external dependencies like databases and the like, then the lengthy official 'Integration tests in ASP.NET Core' documentation shows how you can use stubs to replace code that talks to a database or some other external service. If you want to use mocks using Moq, this is where you run out of guidance and runway. It does in fact require a fair amount of setup to do it correctly and reliably without getting flaky tests.
\nThe ConfigureServices and Configure methods in your application's Startup class must be virtual. This is so that we can inherit from this class in our tests and replace production versions of certain services with mock versions.
public class Startup\n{\n private readonly IConfiguration configuration;\n private readonly IWebHostEnvironment webHostEnvironment;\n\n public Startup(\n IConfiguration configuration,\n IWebHostEnvironment webHostEnvironment)\n {\n this.configuration = configuration;\n this.webHostEnvironment = webHostEnvironment;\n }\n\n public virtual void ConfigureServices(IServiceCollection services) => ...\n\n public virtual void Configure(IApplicationBuilder application) => ...\n}\n\nIn your test project, inherit from the Startup class and override the ConfigureServices method with one that registers the mock and the mock object with the IoC container.
I like to use strict mocks using MockBehavior.Strict; this ensures that nothing is mocked unless I specifically set up a mock.
public class TestStartup : Startup\n{\n private readonly Mock&lt;IClockService&gt; clockServiceMock;\n\n public TestStartup(\n IConfiguration configuration,\n IWebHostEnvironment webHostEnvironment)\n : base(configuration, webHostEnvironment)\n {\n this.clockServiceMock = new Mock&lt;IClockService&gt;(MockBehavior.Strict);\n }\n\n public override void ConfigureServices(IServiceCollection services)\n {\n services\n .AddSingleton(this.clockServiceMock);\n\n base.ConfigureServices(services);\n\n services\n .AddSingleton(this.clockServiceMock.Object);\n }\n}\n\nIn your test project, write a custom WebApplicationFactory that configures the HttpClient and resolves the mocks from the TestStartup, then exposes them as properties, ready for our integration test to consume them. Note that I'm also changing the environment to Testing and telling it to use the TestStartup class for startup.
Note also that I've implemented IDisposable's Dispose method to verify all of my strict mocks. This means I don't need to verify any mocks manually myself. Verification of all mock setups happens automatically when xUnit is disposing the test class.
public class CustomWebApplicationFactory&lt;TEntryPoint&gt; : WebApplicationFactory&lt;TEntryPoint&gt;\n where TEntryPoint : class\n{\n public CustomWebApplicationFactory()\n {\n this.ClientOptions.AllowAutoRedirect = false;\n this.ClientOptions.BaseAddress = new Uri("https://localhost");\n }\n\n public ApplicationOptions ApplicationOptions { get; private set; }\n\n public Mock&lt;IClockService&gt; ClockServiceMock { get; private set; }\n\n public void VerifyAllMocks() => Mock.VerifyAll(this.ClockServiceMock);\n\n protected override void ConfigureClient(HttpClient client)\n {\n using (var serviceScope = this.Services.CreateScope())\n {\n var serviceProvider = serviceScope.ServiceProvider;\n this.ApplicationOptions = serviceProvider\n .GetRequiredService&lt;IOptions&lt;ApplicationOptions&gt;&gt;().Value;\n this.ClockServiceMock = serviceProvider\n .GetRequiredService&lt;Mock&lt;IClockService&gt;&gt;();\n }\n\n base.ConfigureClient(client);\n }\n\n protected override void ConfigureWebHost(IWebHostBuilder builder) =>\n builder\n .UseEnvironment("Testing")\n .UseStartup&lt;TestStartup&gt;();\n\n protected override void Dispose(bool disposing)\n {\n if (disposing)\n {\n this.VerifyAllMocks();\n }\n\n base.Dispose(disposing);\n }\n}\n\nI'm using xUnit to write my tests. Note that the generic type passed to CustomWebApplicationFactory is Startup and not TestStartup. This generic type is used to find the location of your application project on disk and not to start the application.
I set up a mock in my test and I've implemented IDisposable to verify all of my mocks at the end of each test, but you can do this step in the test method itself if you like.
Note also that I'm not using xUnit's IClassFixture to boot up the application only once, as the ASP.NET Core documentation tells you to do. If I did so, I'd have to reset the mocks between each test and you would only be able to run the integration tests serially, one at a time. With the method below, each test is fully isolated and they can be run in parallel. This uses more CPU and each test takes longer to execute, but I think it's worth it.
public class FooControllerTest : CustomWebApplicationFactory&lt;Startup&gt;\n{\n private readonly HttpClient client;\n private readonly Mock&lt;IClockService&gt; clockServiceMock;\n\n public FooControllerTest()\n {\n this.client = this.CreateClient();\n this.clockServiceMock = this.ClockServiceMock;\n }\n\n [Fact]\n public async Task GetFoo_Default_Returns200OK()\n {\n this.clockServiceMock\n .Setup(x => x.UtcNow)\n .Returns(new DateTimeOffset(2000, 1, 1, 0, 0, 0, TimeSpan.Zero));\n\n var response = await this.client.GetAsync("/foo");\n\n Assert.Equal(HttpStatusCode.OK, response.StatusCode);\n }\n}\n\nSince I'm using xUnit, we also need to turn off shadow copying, so any separate files like appsettings.json are placed in the right place beside the application DLL file. This ensures that our application running in an integration test can still read the appsettings.json file.
{\n "shadowCopy": false\n}\n\nShould you have configuration that you want to change just for your integration tests, you can add a appsettings.Testing.json file into your application. This configuration file will only be read in our integration tests because we set the environment name to 'Testing'.
If you'd like to see an end-to-end working example of how this all works, you can create a project using the Dotnet Boxed API project template or the GraphQL project template.
\nI wrote this because there is little to no information on how to combine ASP.NET Core with Moq in integration tests. I've messed about with using IClassFixture, as the ASP.NET Core documentation tells you to do, and it's just not a good idea with Moq, which needs a clean slate before each test. I hope this stops others going through the same pain.
The 'dotnet new' CLI command is a great way to create projects from templates in dotnet. However, I think it could provide a much better experience than it currently does. I also suspect it isn't used much, mostly because templates authored for the dotnet new experience are not included in the Visual Studio File -> New Project experience. For template authors, the experience of developing templates could do with some improvements. I tweeted about it this morning and got asked to write a short gist about what could be improved, so this is that list.

I author Swagger API, GraphQL API, Microsoft Orleans and NuGet project templates in my Dotnet Boxed project. The project currently has 1,900 stars on GitHub and the Boxed.Templates NuGet package has around 12,149 downloads at the time of writing. The Dotnet Boxed templates are also some of the more complex templates using dotnet new. They all have a dozen or more optional features.
In the past, I also authored the ASP.NET Core Boilerplate project templates which are published as a Visual Studio extension. This extension currently has 159,307 installs which is an order of magnitude more than the 12,149 installs of my dotnet new based Boxed.Templates NuGet package.
I have read in the dotnet/templating GitHub issues that there is eventually going to be Visual Studio integration in which you'd be able to search and install dotnet new based templates on NuGet, and then create projects from those templates much as you would with Visual Studio today. Given the download counts of my two projects, this would be the number one feature I'd like to see implemented.
You could create a Visual Studio extension that wraps your dotnet new templates but having messed around with them in the past, it's a lot of effort. I'm in the template making business, not in the extension making business. Also, given the above rumour, I've held off going this route.
Currently there is no way to search for a list of all dotnet new based project templates on NuGet or on the Visual Studio marketplace. There is this list buried in the dotnet/templating GitHub project but the only people who are going to find that are template authors. It would be great if there was some kind of marketplace or store to find templates, rate them, provide feedback etc.
If you've seen the Vue CLI, it has a magical UI for creating projects from its templates. This is the benchmark by which I now measure all project creation experiences. Just take a look at its majesty:
\n
Imagine executing dotnet new ui, then seeing a nice browser dialogue popup like the one above where you could find, install and even create projects from templates. Creating a project would involve entering the name of your project, the directory where you want it to be saved and then toggling any custom options that the project template might offer.
That last bit is where having a UI shines. There aren't many dotnet new templates that use the templating engine to its full potential and have additional optional features. When you use the current command line experience, it's unwieldy and slow to set custom options. Having a custom UI with some check boxes and drop downs would be a far quicker and more delightful experience.
There are a bunch of cool missing or half implemented features in the dotnet new templating engine that could use finishing. Chief among these are called post actions. These are a set of custom actions that can be performed once your project has been created.
As far as I can work out, the only post action that works is the one that restores all NuGet packages in your project. This was implemented because the basic Microsoft project templates wanted to use them but I understand that they no longer do for reasons unknown to me. Happily I still use this one and it works nicely.
\nOther post actions that are half implemented (they exist and you can use them but they just print content to the console) are for opening files in the editor, opening files or links in the web browser or even running arbitrary scripts. The last one has the potential to be a security risk however, so it would be better to have a healthy list of built-in post actions for specific tasks. I'd love to be able to open the ReadMe.md file that ships with my project template.
\nIn terms of new post actions, I'd really like to see one that removes and sorts using statements. I have a lot of optional pieces of code in my project templates, so I have to have a lot of #if #endif code to tell the templating engine which lines of code to remove. It's particularly easy to get this wrong with using statements, leaving you with a fresh project that doesn't compile because you've removed one too many using statements by accident. To avoid this, I created my own unit testing framework for dotnet new projects called Boxed.DotnetNewTest.
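For illustration, a conditional using statement in a template source file looks something like this; StatusEndpoint is a hypothetical template parameter, and removing one line too many around blocks like this is exactly how a generated project ends up not compiling:

```csharp
#if (StatusEndpoint)
using MyProject.HealthChecks;
#endif
```

When the user disables the option, the templating engine strips everything between the directives, including the using statement.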
There is one page of documentation on how to create project templates in the official docs. There is a bunch more in the dotnet/templating wiki and some crucial bits of information in comments on GitHub issues. In particular, there is precious little information about how to conditionally remove code or files based on options the user selects. There is also very little about post actions. It would be great if this could be tidied up.
\nSecondary to the docs are the GitHub issues. There are currently 168 open issues, with a large number having only one comment from the original author. Given the lack of documentation, having questions answered is really important.
\nThe latest version of the dotnet CLI has fixed some bugs but there are still a few that really get in the way of a great experience:
- dotnet new foo --help outputs some pretty terrible looking text if you have any custom options.
- There are bugs around special files like Dockerfile, .gitignore and .editorconfig files.
- .csproj files require some workarounds to work.

The Vue CLI has really shown how great a new project creation experience can be. With a bit of work, the dotnet new experience could be just as great.
As I talked about in my previous post some time ago about dotnet new project templates, it's possible to enable feature selection, so that developers can toggle certain features of a project template on or off. Not many templates in the wild use this feature much though; quite often I've seen templates with no optional features or only a few. One reason is that it gets very complicated to test that toggling your optional features doesn't break the generated project in some way, by stopping it from building for example. This is why I decided to write a small unit test helper library for dotnet new project templates. It is unit test framework agnostic and can work with xUnit, NUnit, MSTest or any other unit test framework.
Below is an example showing how you can use it inside an xUnit test project.
\npublic class ApiTemplateTest\n{\n public ApiTemplateTest() => DotnetNew.Install<ApiTemplateTest>("ApiTemplate.sln").Wait();\n\n [Theory]\n [InlineData("StatusEndpointOn", "status-endpoint=true")]\n [InlineData("StatusEndpointOff", "status-endpoint=false")]\n public async Task RestoreAndBuild_CustomArguments_IsSuccessful(string name, params string[] arguments)\n {\n using (var tempDirectory = TempDirectory.NewTempDirectory())\n {\n var dictionary = arguments\n .Select(x => x.Split('=', StringSplitOptions.RemoveEmptyEntries))\n .ToDictionary(x => x.First(), x => x.Last());\n var project = await tempDirectory.DotnetNew("api", name, dictionary);\n await project.DotnetRestore();\n await project.DotnetBuild();\n }\n }\n\n [Fact]\n public async Task Run_DefaultArguments_IsSuccessful()\n {\n using (var tempDirectory = TempDirectory.NewTempDirectory())\n {\n var project = await tempDirectory.DotnetNew("api", "DefaultArguments");\n await project.DotnetRestore();\n await project.DotnetBuild();\n await project.DotnetRun(\n @"Source\\DefaultArguments",\n async (httpClient, httpsClient) =>\n {\n var httpResponse = await httpsClient.GetAsync("status");\n Assert.Equal(HttpStatusCode.OK, httpResponse.StatusCode);\n });\n }\n }\n}\n\nThe first thing it does in the constructor is install the dotnet new project templates in your solution. It needs to know the name of the solution file. It then walks the sub-directory tree below your solution file and installs all project templates for you.
If we then look at the first unit test, we start by creating a temporary directory in which to generate a project from our dotnet new project template; the directory is deleted at the end of the test. We then run dotnet new with the name of a project template, the name we want to give to the generated project and any custom arguments that particular project template supports. Using xUnit, I've parametrised the arguments, so we can run multiple tests while tweaking the arguments for each test. Running dotnet new returns a project object which contains some metadata about the project we've just created and which we can use to run further dotnet commands against the project.
Finally, we run dotnet restore and dotnet build against the project. So this test ensures that toggling the status-endpoint option on our project template doesn't stop the generated project from restoring NuGet packages or building successfully.
The second unit test method is where it gets really cool. If the project template is an ASP.NET Core project, we can use dotnet run to start the project listening on some random free ports on the machine. The unit test framework then gives you two HttpClient instances (one for HTTP and one for HTTPS) with which to call your newly generated project. In summary, not only can you test that the generated projects build, you can test that the features in your generated project work as they should.
This API is pretty similar to the ASP.NET Core TestHost API, which also gives you an HttpClient to test the API with. The difference is that this framework is actually running the app using the dotnet run command. I have experimented with using the TestHost API to run the generated project in memory, so it could run a bit faster, but the .NET Core APIs for dynamically loading DLL files need some work, which .NET Core 3.0 might solve.
You can download the Boxed.DotnetNewTest NuGet package or see the source code on GitHub.
\n", "url": "https://rehansaeed.com/unit-testing-dotnet-new-templates/", "title": "Unit Testing dotnet new Templates", "summary": "It's difficult to know if your 'dotnet new' based project will work if they have lots of options, in this post I show how to unit test them.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2019-08-21T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/gitattributes-best-practices/", "content_html": "::: tip Update (2020-08-04)\nAdded some more information about CRLF line endings for .cmd and .bat files. Also added a section about Git LFS support on GitHub.\n:::
If you've messed with Git for long enough, you're aware that you can use the .gitignore file to exclude files from being checked into your repository. There is even a whole GitHub repository with nothing but pre-made .gitignore files you can download. If you work with anything vaguely in the Microsoft space with Visual Studio, you probably want the 'Visual Studio' .gitignore file.
There is a lesser known .gitattributes file that controls a bunch of Git settings and that you should consider adding to almost every repository as a matter of course.
\nIf you've studied a little computer science, you'll have seen that operating systems use different characters to represent line feeds in text files. Windows uses a Carriage Return (CR) followed by the Line Feed (LF) character, while Unix based operating systems use the Line Feed (LF) alone. All of this has its origin in typewriters, which is pretty amazing given how antiquated they are. I recommend reading the Newline Wikipedia article for more on the subject.
\nNewline characters often cause problems in Git when you have developers working on different operating systems (Windows, Mac and Linux). If you've ever seen a phantom file change where there are no visible changes, that could be because the line endings in the file have been changed from CRLF to LF or vice versa.
\nGit can actually be configured to automatically handle line endings using a setting called autocrlf. This automatically changes the line endings in files depending on the operating system. However, you shouldn't rely on people having correctly configured Git installations. If someone with an incorrect configuration checked in a file, it would not be easily visible in a pull request and you'd end up with a repository with inconsistent line endings.
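For reference, this is the per-machine setting being described; a quick sketch of how it is set and inspected (the value shown is the common macOS/Linux recommendation, with 'true' being the common Windows one):

```shell
# core.autocrlf lives in each developer's own config, not in the repository,
# which is why you can't rely on everyone having it set consistently.
git config --global core.autocrlf input   # convert CRLF to LF on commit, leave checkouts alone
git config --global core.autocrlf         # prints: input
```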
\nThe solution to this is to add a .gitattributes file at the root of your repository and set the line endings to be automatically normalised like so:
# Set default behavior to automatically normalize line endings.\n* text=auto\n\n# Force batch scripts to always use CRLF line endings so that if a repo is accessed\n# in Windows via a file share from Linux, the scripts will work.\n*.{cmd,[cC][mM][dD]} text eol=crlf\n*.{bat,[bB][aA][tT]} text eol=crlf\n*.{ics,[iI][cC][sS]} text eol=crlf\n\n# Force bash scripts to always use LF line endings so that if a repo is accessed\n# in Unix via a file share from Windows, the scripts will work.\n*.sh text eol=lf\n\nOnly the first line is strictly necessary. The remaining lines hard code the line endings for Windows cmd and batch scripts to CRLF and for bash scripts to LF, so that they can be executed via a file share. It's a practice I picked up from the corefx repository.
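One gotcha: files committed before the .gitattributes file existed keep their old endings until you renormalize them. A minimal sketch in a scratch repository (file names and commit messages are made up for the demo, and git add --renormalize needs Git 2.16 or later):

```shell
# Set up a scratch repo containing a file committed with CRLF endings.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
printf 'hello\r\nworld\r\n' > readme.txt   # 14 bytes with CRLF
git add readme.txt
git commit -qm 'before normalization'

# Add the one strictly necessary rule, then rewrite the index to match it.
printf '* text=auto\n' > .gitattributes
git add --renormalize .
git commit -qm 'normalize line endings'

# The stored blob now uses LF only: 12 bytes instead of 14.
git show HEAD:readme.txt | wc -c
```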
\nIt's pretty common to want to check binary files into your Git repository. Building a website, for example, involves images, fonts, maybe some compressed archives too. The problem with these binary files is that they bloat the repository a fair bit. Every time you check-in a change to a binary file, you've now got both versions saved in Git's history. Over time this bloats the repository and makes cloning it slow. A much better solution is to use Git Large File Storage (LFS). LFS stores binary files in a separate file system. When you clone a repository, you only download the latest copies of the binary files and not every single changed version of them.
\nLFS is supported by most source control providers like GitHub, Bitbucket and Azure DevOps. It's a plugin to Git that has to be separately installed (it's a checkbox in the Git installer) and it even has its own CLI command 'git lfs', so you can run queries and operations against the files in LFS. You can control which files fall under LFS's remit in the .gitattributes file like so:
# Archives\n*.7z filter=lfs diff=lfs merge=lfs -text\n*.br filter=lfs diff=lfs merge=lfs -text\n*.gz filter=lfs diff=lfs merge=lfs -text\n*.tar filter=lfs diff=lfs merge=lfs -text\n*.zip filter=lfs diff=lfs merge=lfs -text\n\n# Documents\n*.pdf filter=lfs diff=lfs merge=lfs -text\n\n# Images\n*.gif filter=lfs diff=lfs merge=lfs -text\n*.ico filter=lfs diff=lfs merge=lfs -text\n*.jpg filter=lfs diff=lfs merge=lfs -text\n*.png filter=lfs diff=lfs merge=lfs -text\n*.psd filter=lfs diff=lfs merge=lfs -text\n*.webp filter=lfs diff=lfs merge=lfs -text\n\n# Fonts\n*.woff2 filter=lfs diff=lfs merge=lfs -text\n\n# Other\n*.exe filter=lfs diff=lfs merge=lfs -text\n\nSo here I've added a whole list of file extensions for various file types I want to be controlled by Git LFS. I tell Git that I want to filter, diff and merge using the LFS tool and finally the -text argument tells Git that this is not a text file, which is a strange way to tell it that it's a binary file.
A quick warning about adding LFS to an existing repository with binary files already checked into it. Those existing binary files will remain in Git rather than LFS unless you rewrite Git history, which would be bad and which you shouldn't do unless you are the only developer. Instead, you will have to add a one off commit that takes the latest versions of all binary files and adds them to LFS. Everyone who uses the repository will also have to re-clone it (I found this out the hard way in a team of 15 people. Many apologies were made over the course of a week). Ideally you add this from day one and educate developers about Git's treatment of binary files, so people don't check-in any binary files not controlled by LFS.
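That one off commit can be made with git-lfs's migrate command, which converts the latest versions of matching files to LFS pointers without rewriting history. A hedged sketch; the file patterns below are examples, not from this post:

```shell
# Convert the current versions of matching files to LFS in a single new
# commit, leaving all previous history untouched.
git lfs migrate import --no-rewrite "*.png" "*.jpg" "*.zip"

# List the files LFS now manages, to check nothing was missed.
git lfs ls-files
```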
\nGitHub does technically provide Git LFS for free but they limit the bandwidth to 1GB. If your repository is public and you have any traffic going to your site whatsoever, you will get through that very quickly. GitHub charges $5 per month for a data pack which gives you 50GB of bandwidth per month which I've found is enough for a moderately popular GitHub repository.
\nI really don't understand why GitHub charges for Git LFS because people who don't want to pay are just going to check binary files into Git instead, which presumably costs GitHub more bandwidth. Surely they should be encouraging its use by making it free?
\nWhen talking about the .gitattributes file, you will quite often hear some people talk about explicitly listing all binary files instead of relying on Git to auto-detect binary files (yes Git is clever enough to do that) like this:
# Denote all files that are truly binary and should not be modified.\n*.png binary\n*.jpg binary\n\nAs you saw above, we already do this with Git LFS but if you don't use LFS, read on as you may need to explicitly list binary files in certain rare circumstances.
\nI was interested, so I asked a Stack Overflow question and got great answers. If you look at the Git source code, it checks the first 8,000 bytes of a file to see if it contains a NUL character. If it does, the file is assumed to be binary. However, there are rare cases where you may still need to mark files as binary explicitly.
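You can see that detection at work with git diff --numstat, which prints line counts for text files but dashes for anything Git decided is binary. A small sketch in a scratch repository (file names are invented):

```shell
# Scratch repo with one text file and one file full of NUL bytes.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
printf 'plain text\n' > notes.txt
head -c 16 /dev/zero > image.bin    # NUL bytes in the first 8,000 => binary
git add .
git commit -qm 'first'

# Change both files and diff the two commits.
printf 'more text\n' >> notes.txt
head -c 16 /dev/urandom >> image.bin
git add .
git commit -qm 'second'
git diff --numstat HEAD~1 HEAD      # text shows line counts, binary shows "- -"
```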
\nThis is what the final .gitattributes file I copy to most repositories looks like:
###############################\n# Git Line Endings            #\n###############################\n\n# Set default behaviour to automatically normalize line endings.\n* text=auto\n\n# Force batch scripts to always use CRLF line endings so that if a repo is accessed\n# in Windows via a file share from Linux, the scripts will work.\n*.{cmd,[cC][mM][dD]} text eol=crlf\n*.{bat,[bB][aA][tT]} text eol=crlf\n\n# Force bash scripts to always use LF line endings so that if a repo is accessed\n# in Unix via a file share from Windows, the scripts will work.\n*.sh text eol=lf\n\n###############################\n# Git Large File System (LFS) #\n###############################\n\n# Archives\n*.7z filter=lfs diff=lfs merge=lfs -text\n*.br filter=lfs diff=lfs merge=lfs -text\n*.gz filter=lfs diff=lfs merge=lfs -text\n*.tar filter=lfs diff=lfs merge=lfs -text\n*.zip filter=lfs diff=lfs merge=lfs -text\n\n# Documents\n*.pdf filter=lfs diff=lfs merge=lfs -text\n\n# Images\n*.gif filter=lfs diff=lfs merge=lfs -text\n*.ico filter=lfs diff=lfs merge=lfs -text\n*.jpg filter=lfs diff=lfs merge=lfs -text\n*.png filter=lfs diff=lfs merge=lfs -text\n*.psd filter=lfs diff=lfs merge=lfs -text\n*.webp filter=lfs diff=lfs merge=lfs -text\n\n# Fonts\n*.woff2 filter=lfs diff=lfs merge=lfs -text\n\n# Other\n*.exe filter=lfs diff=lfs merge=lfs -text\n\nAll of the above are bits and pieces I've put together over time. Are there any other settings that should be considered best practice and added to any .gitattributes file?
::: warning Disclaimer\nI'm a Microsoft employee but my opinions in this personal blog post are my own and nothing to do with Microsoft. The information in this blog post is already publicly available and I talk in very general terms.\n:::
\nI recently had the unique opportunity to git clone the Windows OS repository. For me as a developer, I think that has got to be a bucket list (a list of things to do before you die) level achievement!
\nA colleague who was doing some work in the repo was on leave and the task of finishing the job unexpectedly fell on me. I asked around to see if anyone had any pointers on what to do and I was pointed towards an Azure DevOps project. The first thing I naively tried was running:
\ngit clone https://microsoft.fake.com/foo/bar/os\n\nThis gave me the very helpful error:
\nremote: This repo requires GVFS. Ensure the version of git you are using supports GVFS.\nfatal: protocol error: bad pack header\n\nThis triggered a memory in the dark recesses of my mind about GVFS (Git Virtual File System). The Windows OS repository is around 250GB in size. When you consider that there are tens or maybe hundreds of developers committing changes every day, you are not going to have a very pleasant developer experience if you just use Git and try to pull all 250GB of files. So GVFS abstracts away the file system and only downloads files when you try to access them.
\nThe Windows OS has a very large and thorough internal Wiki. This wiki has sections covering all areas of the Windows OS going back for years. After a short time searching the wiki I discovered a very thorough getting started guide for new developers.
\nThe getting started guide involves running some PowerShell files which install a very specific but recent version of Git and set up GVFS. Interestingly, you can also optionally point your Git client at a cache server to speed up git commands. There are a few cache servers all over the world to choose from. Finally, there is a VS Code extension specific to the OS repo that gives you some extra intelli-sense, very fancy.
\nEven though pulling the code using GVFS should in theory only pull what you need at any given time, it still took a fair amount of time to get started. Standard git commands still worked but took tens of seconds to execute, so you had to be pretty sure of what you were doing.
\nAt this point a colleague warned against using 'find in files', as this would cause GVFS to pull all files to disk. I think search would do the same. An alternative approach I used instead was to search via the Azure DevOps website where you can view all files in any repo.
\nOnce I'd had a chance to have a root around the repo, I realised that it was probably the largest folder structure I'd ever seen. There are many obscure sounding folders like 'ds' and 'net'. The reason for the wiki's existence became clear.
\nOther random things I found were that the repo contains an 'src' folder just like a lot of other repositories, there are a tonne of file extensions I've never seen or heard of before, and there are binaries checked into the repo, which seems suboptimal on the face of it. I even found the Newtonsoft.Json binary in there.
I was pleasantly surprised to see an .editorconfig file in the repo. It turns out that spaces are preferred over tabs and line endings are CRLF (I don't know what else I expected).
There is a tools folder with dozens of tools in it. In fact, I had to use one of these tools to get my job done. The tool I used was a package manager, a bit like NuGet. You can use a CLI tool to version and upload a folder of files. This made sense. The OS repo is not a mono repo in that it doesn't contain every line of code in Windows. There are many other repositories that package up and upload their binaries using this tool.
\nAfter some further reading on this package manager, I discovered that the Windows OS does some de-duplication of files to save space. I'm guessing they still have to fit Windows onto a DVD (how quaint, do people still use DVDs?), so file size is important.
\nWhile trying to figure out how to use the package manager, I accidentally executed a search through all packages. Text came streaming down the page like in the Matrix. Eventually I managed to fumble the right keys on the keyboard to cancel the search.
\nOnce I'd finished with my changes, I checked in and found that I had to rebase because newer commits were found on the server. I rebased as normal, except for the very long delay in executing git commands.
\nOnce I'd finally pushed the branch containing my changes up to the server, I created a pull request in Azure DevOps. As soon as I'd done that, I got inundated with emails from Azure Pipelines telling me that a build had started and various reviewers had been added to my pull request.
\nThe Azure Pipelines build only took 25 minutes to complete. A quick look shows a bunch of builds taking five hours or more. I'm guessing that my changes had only gone through a cursory initial build to make sure nothing was completely broken.
\nA few days later I got a notification telling me my pull request had been merged. All I did was change a few config files and upload a package or two, but it was an interesting experience nonetheless.
\n", "url": "https://rehansaeed.com/git-cloning-the-windows-os-repo/", "title": "Git Cloning the Windows OS Repo", "summary": "My experiences of cloning and working on the Windows OS Git repository.", "image": "https://rehansaeed.com/images/hero/Windows-1366x768.png", "date_modified": "2019-06-24T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/securing-asp-net-core-in-docker/", "content_html": "::: warning Update (2020-10-12)\nMaking the file system for an ASP.NET Core application read-only is great for security but under high load ASP.NET Core can buffer requests to disk. In this scenario, you will see exceptions. So weight the positives and negatives of doing this.\n:::
\nSome time ago, I blogged about how you can get some extra security when running Docker containers by making their file systems read-only. This ensures that should an attacker get into the container somehow, they won't be able to change any files. This only works with certain containers that support it however and unfortunately, at that time ASP.NET Core did not support running in a Docker container with a read-only file system. Happily, this is now fixed!
\nLet's see an example. I created a brand new hello world ASP.NET Core project and added this Dockerfile:
FROM microsoft/dotnet:2.2-sdk AS builder\nWORKDIR /source\nCOPY *.csproj .\nRUN dotnet restore\nCOPY . .\nRUN dotnet publish --output /app/ --configuration Release\n\nFROM microsoft/dotnet:2.2-aspnetcore-runtime\nWORKDIR /app\nCOPY --from=builder /app .\nENTRYPOINT ["dotnet", "ReadOnlyTest.dll"]\n\nI build the Docker image using this command:
\ndocker build -t read-only-test .\n\nIf I run this image with a read-only file system:
\ndocker run --rm --read-only -it -p 8000:80 read-only-test\n\nThis outputs the following error as read-only file systems are not supported by default:
\nFailed to initialize CoreCLR, HRESULT: 0x80004005\n\nIf I now run the same image with the COMPlus_EnableDiagnostics environment variable turned off:
docker run --rm --read-only -it -p 8000:80 -e COMPlus_EnableDiagnostics=0 read-only-test\n\nThe app now starts! The COMPlus_EnableDiagnostics environment variable (which is documented here) turns off debugging and profiling support, so I would not bake this environment variable into the Dockerfile. For some reason these features need a read/write file system to work properly. If you'd like to try this yourself, you can check out all the code in this repo.
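If you deploy with Docker Compose rather than docker run, the same two settings can be expressed declaratively. A hypothetical compose file for the image built above (the service name is made up):

```yaml
# docker-compose.yml mirroring the docker run flags used above.
services:
  api:
    image: read-only-test
    read_only: true                  # same as the --read-only flag
    ports:
      - "8000:80"
    environment:
      - COMPlus_EnableDiagnostics=0  # allow CoreCLR to start on a read-only file system
```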
I have a confession to make... I don't use Automapper. For those who don't know, Automapper is the number one object to object mapper library on NuGet by far. It takes properties from one object and copies them to another. I couldn't name the second place contender and, looking on NuGet, nothing else comes close. This post talks about object mappers, why you might not want to use Automapper and introduces a faster, simpler object mapper that you might want to use instead.
\nThis is a really good question. Most of the time, it boils down to using Entity Framework. Developers want to be good citizens and not expose their EF Core models in the API surface area because this can have really bad security implications (See overposting here).
\nI have received a lot of comments at this point in the conversation saying "Why don't you use Dapper instead? Then you don't need model classes for your data layer, you can just go direct to your view model classes via Dapper". Dapper is really great, don't get me wrong, but it's not always the right tool for the job; there are distinct disadvantages to using Dapper instead of EF Core:
\n- Writing ALTER TABLE scripts is extra work that can be automated. EF Core handles all that for me. Alternatively, Visual Studio Database Projects can also get around this problem.
- Picking the correct column types, NVARCHAR instead of VARCHAR and DATETIMEOFFSET instead of DATETIME2 or even DATETIME people! I've seen professional database developers make these mistakes all the time. Automating this ensures that the correct decision is made all the time.

You need to use the right tool for the right job. I personally use Dapper where there is an existing database with all the migrations etc. already handled by external tools, and use EF Core where I'm working with a brand new database.
\nAutomapper is great when you have a small project that you want to throw together quickly and the objects you are mapping to and from have the same or similar property names and structure.
\nIt's also great for unit testing because once you've written your mapper, testing it is just a matter of adding a one liner to test that all the properties in your object have a mapping setup for them.
\nFinally if you use Automapper with Entity Framework, you can use the ProjectTo method which uses the property mapping information to limit the number of fields pulled back from your database making the query a lot more efficient. I think this is probably the biggest selling point of Automapper. The alternative is to write your own Entity Framework Core projection.
Cezary Piatek writes a very good rundown of some of the problems when using Automapper. I'm not going to repeat what he says; his post is well worth a read.
\nI wrote an object mapper library that consists of a couple of interfaces and a handful of extension methods to make mapping objects slightly easier. The API is super simple and very light and thus fast. You can use the Boxed.Mapping NuGet package or look at the code on GitHub in the Dotnet-Boxed/Framework project. Let's look at an example. I want to map to and from instances of these two classes:
\npublic class MapFrom\n{\n public bool BooleanFrom { get; set; }\n public DateTimeOffset DateTimeOffsetFrom { get; set; }\n public int IntegerFrom { get; set; }\n public string StringFrom { get; set; }\n}\n\npublic class MapTo\n{\n public bool BooleanTo { get; set; }\n public DateTimeOffset DateTimeOffsetTo { get; set; }\n public int IntegerTo { get; set; }\n public string StringTo { get; set; }\n}\n\nThe implementation for an object mapper using the .NET Boxed Mapper is shown below. Note the IMapper interface which is the heart of the .NET Boxed Mapper. There is also an IAsyncMapper if for any reason you need to map between two objects asynchronously, the only difference being that it returns a Task.
public class DemoMapper : IMapper<MapFrom, MapTo>\n{\n public void Map(MapFrom source, MapTo destination)\n {\n destination.BooleanTo = source.BooleanFrom;\n destination.DateTimeOffsetTo = source.DateTimeOffsetFrom;\n destination.IntegerTo = source.IntegerFrom;\n destination.StringTo = source.StringFrom;\n }\n}\n\nAnd here is an example of how you would actually map a single object, array or list:
\npublic class UsageExample\n{\n    private readonly IMapper<MapFrom, MapTo> mapper = new DemoMapper();\n    \n    public MapTo MapOneObject(MapFrom source) => this.mapper.Map(source);\n    \n    public MapTo[] MapArray(List<MapFrom> source) => this.mapper.MapArray(source);\n    \n    public List<MapTo> MapList(List<MapFrom> source) => this.mapper.MapList(source);\n}\n\nI told you it was simple! Just a few convenience extension methods bundled together with an interface that makes it just ever so slightly quicker to write object mapping than rolling your own implementation. If you have more complex mappings, you can compose your mappers in the same way that your models are composed.
\nKeeping things simple makes the .NET Boxed Mapper fast. I put together some benchmarks using Benchmark.NET which you can find here. The baseline is hand written mapping code and I compare that to Automapper and the .NET Boxed Mapper.
\nI even got a bit of help from the great Jon Skeet himself on how to improve the performance of instantiating an instance when using the generic new() constraint, which it turns out is pretty slow because it uses Activator.CreateInstance under the hood.
This benchmark measures the time taken to map from a MapFrom object to the MapTo object which I show above.

| Method | Runtime | Mean | Ratio | Gen 0/1k Op | Allocated Memory/Op |
|---|---|---|---|---|---|
| Baseline | Clr | 7.877 ns | 1.00 | 0.0178 | 56 B |
| BoxedMapper | Clr | 25.431 ns | 3.07 | 0.0178 | 56 B |
| Automapper | Clr | 264.934 ns | 31.97 | 0.0277 | 88 B |
| Baseline | Core | 9.327 ns | 1.00 | 0.0178 | 56 B |
| BoxedMapper | Core | 17.174 ns | 1.84 | 0.0178 | 56 B |
| Automapper | Core | 158.218 ns | 16.97 | 0.0279 | 88 B |
This benchmark measures the time taken to map a List of MapFrom objects to a list of MapTo objects.

| Method | Runtime | Mean | Ratio | Gen 0/1k Op | Allocated Memory/Op |
|---|---|---|---|---|---|
| Baseline | Clr | 1.833 us | 1.00 | 2.0542 | 6.31 KB |
| BoxedMapper | Clr | 3.295 us | 1.80 | 2.0523 | 6.31 KB |
| Automapper | Clr | 10.569 us | 5.77 | 2.4872 | 7.65 KB |
| Baseline | Core | 1.735 us | 1.00 | 2.0542 | 6.31 KB |
| BoxedMapper | Core | 2.237 us | 1.29 | 2.0523 | 6.31 KB |
| Automapper | Core | 3.220 us | 1.86 | 2.4872 | 7.65 KB |
It turns out that Automapper does a really good job on .NET Core in terms of speed but is quite a bit slower on .NET Framework. This is probably down to the intrinsic improvements in .NET Core itself. .NET Boxed is quite a bit faster than Automapper on .NET Framework but the difference on .NET Core is much less at around one and a half times. The .NET Boxed Mapper is also very close to the baseline but is a bit slower. I believe that this is due to the use of method calls on interfaces, whereas the baseline mapping code is only using method calls on concrete classes.
\n.NET Boxed has zero allocations of memory while Automapper allocates a small amount per mapping. Since object mapping is a fairly common operation these small differences can add up over time and cause pauses in the app while the garbage collector cleans up the memory. There seems to be a trend I've seen in .NET for having zero allocation code. If you care about that, then this might help.
\nWhat I've tried to do with the .NET Boxed Mapper is fill a niche which I thought Automapper was not quite filling. A super simple and fast object mapper that's just a couple of interfaces and extension methods to help you along the way and provide a skeleton on which to hang your code. If Automapper fits your app better, go ahead and use that. If you think it's useful, you can use the Boxed.Mapping NuGet package or look at the code on GitHub in the Dotnet-Boxed/Framework project.
\n", "url": "https://rehansaeed.com/a-simple-and-fast-object-mapper/", "title": "A Simple and Fast Object Mapper", "summary": ".NET Boxed mapper is an object to object mapper that is simpler and faster than Automapper and makes zero allocations of memory, thus making the garbage collector do less work.", "image": "https://rehansaeed.com/images/hero/Dotnet-Boxed-1366x768.png", "date_modified": "2019-03-05T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/is-asp-net-core-now-a-mature-platform/", "content_html": "::: tip Update (12 January 2019)\nIt seems that Damian Edwards (The ASP.NET Core Project Manager) likes this post and agrees with the points I've made! It's great to hear that he's is in alignment with my thoughts and that's a great indication that the pain points of the platform will get solved in the future. Take a look at what he says in the ASP.NET Community Stand-up below:\n:::
\nhttps://www.youtube.com/watch?v=ho-VF2dAszI
\nI started using ASP.NET Core back when it was still called ASP.NET 5 and it was still in beta. In those early days every release introduced a sea change. The betas were not betas at all but more like alpha quality bits. I spent more time than I'd like just updating things to the latest version with each release.
\nCompared to the past, updates are moving at a glacial pace. Compared to the full fat .NET Framework though, it's been like moving from a camel to an electric car. When releases do come there is still a lot in each release. If you have a number of micro services using ASP.NET Core, it's not quick to get them all updated. Also, it's not just ASP.NET Core but all of the satellite assemblies built on top of .NET Core that keep changing too, things like Serilog and Swashbuckle.
\nWhat about other platforms? Well, I'm familiar with Node.js and the situation there is bordering on silly. Packages are very unstable and constantly being rev'ed. Keeping up and staying on latest is a constant battle almost every day. Each time you upgrade a package, there is also a danger that you will break something. With .NET Core, there are fewer packages and they are much more stable.
\nOverall, things move fast in software development in general and for me that's what keeps it interesting. ASP.NET Core is no exception.
\n.NET Core and ASP.NET Core started out very lightweight. There were few APIs available. You often had to roll your own code, even for basic features that should exist.
\nIn today's world, a lot of APIs have been added and where there are gaps, the community has filled them in many places. The .NET Framework still has a lot of APIs that have not been ported across yet. A lot of these gaps are Windows-specific and I'm sure a lot will be filled in the .NET Core 3.0 time frame.
\nWhen I make a comparison with Node.js and take wider community packages into consideration, I'd say that .NET Core has fewer APIs. Image compression APIs don't even exist on .NET for example. We were late to the party with Brotli compression, which was recently added to .NET Core and is soon going to be added to the ASP.NET Core compression middleware, so we'll get there eventually. We have GraphQL.NET which is very feature rich but it still lags slightly behind the JavaScript Apollo implementation, which has first party support (perhaps that comparison is a little unfair as GraphQL's reference implementation is native to JavaScript). When I wanted to add Figlet font support to Colorful.Console (Figlet fonts let you draw characters using ASCII art), I had to base my implementation off of a JavaScript one. I'm not the only one who translates JavaScript code to C# either.
\nWith all this said, Node.js and JavaScript in general have their own unique problems, otherwise I'd be using them instead of being a mainly .NET Core developer.
\nMaking .NET Core and ASP.NET Core open source has made a huge difference. We'd all occasionally visit the .NET Framework docs to understand how an API worked but today the place to go is GitHub, where you can not only see the code but read other people's issues and even raise issues of your own. There is often someone who has been there and done it all before you.
\nNot only that but a huge community has grown up with bloggers and new projects being more commonplace. It cannot be overstated how much this change has improved a developer's standard of living. Just take a look at the brilliant discoverdot.net site where you can see 634 GitHub .NET projects for all the evidence you need.
\nASP.NET Core's emphasis on performance is refreshing. It's doing well in the TechEmpower benchmarks with more improvements in sight. It's nice to get performance boosts from your applications every time you upgrade your application without having to do any work at all yourself.
\nWhile the platform is miles ahead of Node.js, there are newer languages like Go that are also quite nice to write code for but blazing fast too. However, I'm not sure you can be as productive writing Go as with .NET Core. Also, you've got to use the right tool for the job. There are definitely cases where Go does a better job.
\nOne interesting effort that I've been keeping an eye on for some time now is .NET Native where C# code is compiled down to native code instead of an intermediate language. This means that the intermediate language does not need to be JIT'ed and turned into machine code at runtime which speeds up execution the first time the application is run. A nice side effect of doing this is that you also end up with a single executable file. You get all the benefits of a low level language like Go or Rust with none of the major drawbacks! I've been expecting this to hit for some time now but it's still not quite ready.
\nThis is a subject that most people have never thought about much. It's trivial for an evildoer to insert some rogue code into an update to a package and have that code running in applications soon after. In fact, that's what happened with the event-stream NPM package recently. I highly recommend reading Jake Archibald's post "What happens when packages go bad".
\nWhat about .NET Core? Well, .NET is in a fairly rare position of having a large number of official packages written and maintained by Microsoft. This means that you need fewer third party packages and in fact you can sometimes get away with using no third party dependencies whatsoever. What this also means is that the third party dependencies you do end up using have fewer dependencies of their own in turn.
\nNuGet also recently added support for signed packages, which stops packages from being tampered with between NuGet's server and your build machine.
\nOverall this is all about reducing risk. There will always be a chance that somebody will do something bad. I'd argue that there is less of a risk of that happening on the .NET platform.
\nBing.com is running on ASP.NET Core and a site doesn't get much bigger than that. Stack Overflow is working on their transition to .NET Core. The Orchard CMS uses .NET Core. Even WordPress and various PHP applications can be run on .NET Core these days using peachpie.
\nFirst of all, let me say that every platform has gaps that are sometimes filled by the community. There are several missing APIs that seem obvious to me but have yet to be built or improved enough. Here are a few basic examples of things that could be improved and where maybe the small team of 20 ASP.NET Core developers (yes, their team is that small and they've done a tremendous job of building so much with so few resources, so they definitely deserve a pat on the back) could perhaps better direct their efforts.
\nThe response caching still only supports in-memory caching. If you want to cache to Redis using the IDistributedCache, bad luck. Even if you go with it and use the in-memory cache, if you're using cookies or the Authorization HTTP header, you've only got more bad luck as response caching turns itself off in those cases. Caching is an intrinsic part of the web, we need to do a better job of making it easier to work with.
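Until the middleware supports distributed caches, one workaround is to cache response bodies yourself via IDistributedCache. Here is a minimal sketch of that idea, assuming a Redis-backed IDistributedCache has been registered; the controller, cache key, expiry and the GetStatusJsonAsync call are all hypothetical:

```csharp
public class StatusController : ControllerBase
{
    private readonly IDistributedCache cache;

    public StatusController(IDistributedCache cache) => this.cache = cache;

    [HttpGet("status")]
    public async Task<IActionResult> Get()
    {
        // Check the distributed cache (e.g. Redis) before doing any work.
        var cached = await this.cache.GetStringAsync("status-response");
        if (cached != null)
        {
            return this.Content(cached, "application/json");
        }

        var json = await this.GetStatusJsonAsync(); // Hypothetical expensive call.
        await this.cache.SetStringAsync(
            "status-response",
            json,
            new DistributedCacheEntryOptions()
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(1),
            });
        return this.Content(json, "application/json");
    }

    private Task<string> GetStatusJsonAsync() => Task.FromResult("{}");
}
```

It works, but you lose all of the Cache-Control header handling that proper response caching middleware would give you, which is exactly why this gap is worth filling.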
Security is hard! HTTPS is hard! Dealing with certificates is hard! What if you could use some middleware and supply it with a couple of lines of configuration and never have to think about any of it ever again? Isn't that something you'd want? Well, it turns out that Nate McMaster has built a LetsEncrypt middleware that does just that but he needs some help to persuade his boss to build the feature, so up-vote this issue.
\nMicrosoft seems a bit late to the party; it's also one of the top voted feature requests on Azure's User Voice.
\nHTTP/2 support in ASP.NET Core is available in 2.2 but it's not battle-tested, so you can't run it at the edge, wide open to the internet, for fear of getting hacked.
\nHTTP/3 (formerly named QUIC) support has been talked about and the ground work for it has already been done, so that the Kestrel web server can support multiple protocols easily. Let's see how quickly we can get support.
\nOne interesting thing about adding support for more protocols to ASP.NET Core is that most people can't make use of them or don't need to. ASP.NET Core applications are often hidden away behind a reverse proxy web server like IIS or NGINX, which implement these protocols themselves. Even using something like Azure App Service means that you run behind a special fork of IIS. So I've been thinking, what is the point? Well, you could use Kubernetes to expose your ASP.NET Core app over port 80 and get the performance boost of not having to use a reverse proxy web server as a man in the middle. Also, contrary to popular belief, Kubernetes can expose multiple ASP.NET Core applications over port 80 (at least Azure AKS can).
\nServing static files is one of the most basic features. There are a few things that could make this a lot better. You can't use the authorization middleware to limit access to static files but I believe that's changing in ASP.NET Core 3.0. Serving GZIP'ed or Brotli'ed content is a must today. Luckily dynamic Brotli compression will soon be available. What's not available is serving pre-compressed static files.
\nThere is a lot less churn. There are a lot of open source projects you can leverage. A large enough developer base has now grown up, so you see a lot more GitHub projects, Stack Overflow questions, bloggers like myself and companies who make their living from the platform.
\nThere seems to be a trend at the moment where people are jumping ship from long standing platforms and languages to brand new ones. Android developers have jumped from Java to Kotlin (and have managed to delete half their code in the process, Java is so verbose!). The poor souls who wrote Objective-C have jumped to Swift. Where once applications would be written in C++, they are now written in Go or Rust. Where once people wrote JavaScript, they are still writing JavaScript (TypeScript has taken off but not completely)...ok that has not changed. .NET Core is the only one that seems to have bucked the trend by reinventing itself completely while not changing things too much, and still succeeding in the process.
\nSo yes, yes it is, is my answer.
\n", "url": "https://rehansaeed.com/is-asp-net-core-now-a-mature-platform/", "title": "Is ASP.NET Core now a Mature Platform?", "summary": "ASP.NET Core a large developer base, a large number of GitHub projects, Stack Overflow questions, bloggers and companies who use it. It's a mature platform.", "image": "https://rehansaeed.com/images/hero/Rocket-1366x768.jpg", "date_modified": "2018-12-18T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/pluralsight-vs-linkedin-learning-vs-frontendmasters-vs-egghead-io-vs-youtube/", "content_html": "I use a lot of video resources to keep up to date with .NET, JavaScript and other tech like Docker, Kubernetes etc. I've compiled here a list of these resources and my impressions of using them and often paying for them out of my own pocket. There is something useful in all of them but I tend to find that some cater for certain technologies better than others.
\nPluralSight is the service that tries to be all things for all people. It tended to be more .NET focused in the past but things are changing on that front. In more recent times, the breadth of topics covered has definitely gotten a lot wider. They've added IT operations courses for example (I recently watched a really good course on Bash, which strangely is not something that's easily available on the internet), as well as courses on Adobe products like Photoshop and Illustrator, which is handy.
\nIn terms of software development, the courses are very high quality but they also take a lot of time for the authors to produce, so don't expect long in-depth courses on newer technologies e.g. the courses on Kubernetes and Vue.js have only recently been added and are certainly more on the 'Getting Started' end of the spectrum. However, in time I expect the portfolio to fill out.
\nThere is also definitely still a .NET bias to the site; there aren't as many in-depth frontend JavaScript courses as I would like, for example, and the ones that do exist are not from the well known frontend developers in the community. Also, some of the courses can be quite old (the tech world does move so fast). You'd think you would find some decent courses on CSS for example but the courses available are pretty ancient.
\nThey have apps for all the usual platforms that let you download video offline which is a must for me, for when I travel on the London underground. The monthly cost is not prohibitive for the quantity of courses available at $35 per month. I've paid for it in the past but get it free right now as a Microsoft MVP.
\nI'd recommend this as a primary source of information when learning some new technology.
\nI only discovered that LinkedIn Learning existed last year when I learned that Microsoft MVPs get it for free. Apparently LinkedIn Learning used to be called Lynda.com, which I had heard of and trialled in the past. I've always thought of Lynda as a 'How to use X software' kind of resource. They've literally got hours and hours' worth of courses on Adobe Photoshop for example.
\nI was surprised at how much content they actually have. The ground is a bit thin when it comes to .NET content, however, and the courses that I have ended up watching are pretty short and to the point, without a huge amount of depth. I think this varies a lot though; I've seen Adobe Illustrator courses that are 14 hours long!
\nIn the end I've used LinkedIn Learning for learning Kubernetes, due to PluralSight's library being a bit thin on that subject and also GraphQL.NET where LinkedIn Learning has the only course available on the internet.
\nIt costs $25 per year to subscribe, so it's cheaper than the other offerings. Overall, I probably wouldn't pay for this service if I didn't get it for free. At best, I might subscribe for a month at $30 to view a particular course. I also feel like I should be spending more time exploring their content.
\nFrontend Masters does exactly what it says on the tin. They get industry leading frontend professionals to present courses on HTML, CSS, and JavaScript mainly, although they also delve into the backend with Node.js, Mongo and GraphQL courses.
\nThe quality and depth of these courses is extremely high. The format is unusual in that the expert is delivering the course to an actual audience of people and there are also question/answer sections at the end of each module. This means that the courses tend to be quite long. If you're like me and you want to know every gritty detail, then that's great.
\nThe library of courses is not very large but I'd definitely recommend this service to anyone interested in frontend or GraphQL Node.js development. The price is quite steep at $39 per month, considering the smaller number of targeted courses available. I'm waiting to see if they have a sale at the end of the year to drop hard cash on this learning resource.
\nEgghead.io is a unique learning resource. Its USP is that it serves a series of short two minute videos that make up a course. If you run the videos at 1.5x or 2x speed, you can be done learning something in 15 minutes! In the real world, each video was so concise and full of useful information that I found myself having to go back and watch things again. This is definitely the fastest way to learn something.
\nThe content is similar to Frontend Masters i.e. it's mainly focused on the frontend, with a few forays into Node.js, ElasticSearch, Mongo and Docker. Although, they tend to have a focus on JavaScript frameworks.
\nThe cost of this service is $300 per year but if you wait until the sale at the end of the year like I did, you can bag a subscription for $100 which I think is more reasonable. I'm coming up for renewal time and I'm not sure I will renew because I've pretty much watched all of the courses that I was interested in. Because the courses are very short and fairly limited in number, you can get through them pretty quickly. That said, it was definitely worth investing in a year's subscription. I might purchase a subscription again in a year or two when they add more content.
\nYouTube, Vimeo and Channel 9 have a wealth of videos that you should not ignore. Plus the best part is that it's all free. Here are some channels I find useful:
\nThe NDC Conferences seem to never end. They take place three times a year (at last count) but they release videos all year round, so it's a never ending battle to keep up. For that reason, I've been trying to avoid watching them lately. The best place to watch them is on Vimeo where you can easily download them offline in high quality.
\nYou have expert speakers who often repeat their talk multiple times, so you often end up wondering whether you've seen a talk already. The talks are often very high level and often non-technical talks about design, management, managing your career or just telling stories about how some software was built.
\nHonestly, it can be fun to watch but I don't feel like I learn a lot watching these talks, so I've been a lot more strict about what I do watch.
\nThe Google Developers YouTube channel clogs up your feed with a lot of pointless two minute videos throughout the year. Then once a year, they hold their developer conference where the talks are actually interesting. The videos are Google focused, so think Chrome, JavaScript, Workbox and Android.
\nMicrosoft holds developer conferences like Build and Ignite all the time. You can watch them on Channel 9 or YouTube. Microsoft builds a lot of tech, so talks are fairly varied.
\nAzure Friday is available on YouTube or Channel 9 and lets you keep up to date with Microsoft Azure's constantly evolving cloud platform. The videos are short and released once a week or so.
\nCSS Day is a conference that runs every year where CSS experts stand up and deliver a talk on a particular subject, often regarding some new CSS feature or some feature that has not yet been standardised. Well worth watching; none of the resources above do a good job of covering CSS in my opinion, except maybe Frontend Masters to some extent.
\nThe .NET Foundation videos can be found on YouTube. It's really two channels combined. One for .NET in general and one for ASP.NET.
\nThe .NET videos typically have very in depth discussions about what features to add to the .NET Framework. They also sometimes release a video explaining some new features of .NET. Not something I watch often but worth keeping an eye on occasionally.
\nThe ASP.NET Community Stand-up releases a video on most Tuesdays discussing new features being added to ASP.NET Core or sometimes .NET Core in general. Always worth watching.
\nThe Heptio YouTube channel is a bit like the ASP.NET Community Stand-up for Kubernetes. There are new videos every week but they vary a lot from beginner to extreme expert level and it's difficult to tell what the level is going to be. If you're interested in Kubernetes, it's worth watching the first 10 minutes of every show, so you can keep up to date with what's new in Kubernetes.
\nWith the Christmas period approaching, most of the paid for services will offer some kind of sale. Now is the time to keep an eye out for that and grab a bargain.
\n", "url": "https://rehansaeed.com/pluralsight-vs-linkedin-learning-vs-frontendmasters-vs-egghead-io-vs-youtube/", "title": "PluralSight vs LinkedIn Learning vs FrontendMasters vs Egghead.io vs YouTube", "summary": "A comparison between PluralSight, LinkedIn Learning, Frontend Masters, Egghead.io, YouTube and other resources for software developers.", "image": "https://rehansaeed.com/images/hero/Online-Learning-1366x768.png", "date_modified": "2018-11-12T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/optimally-configuring-asp-net-core-httpclientfactory/", "content_html": "::: warning Update (04 April 2021)\nUpdated all code for .NET 5 and mentioned Open Telemetry.\n:::
\n::: warning Update (20 August 2018)\nSteve Gordon kindly suggested a further optimisation to use ConfigureHttpClient. I've updated the code below to reflect this.\n:::
In this post, I'm going to show how to optimally configure an HttpClient using the new HttpClientFactory API in ASP.NET Core 2.1. If you haven't already, I recommend reading Steve Gordon's series of blog posts on the subject since this post builds on that knowledge. You should also read his post about Correlation IDs as I'm making use of that library in this post. The main aims of the code in this post are to:
- Use the HttpClientFactory typed client. I don't know why the ASP.NET team bothered to provide three ways to register a client; the typed client is the one to use. It provides type safety and removes the need for magic strings.
- Enable GZIP decompression of responses. HttpClient and ASP.NET Core do not support compression of GZIP requests, only responses. Doing some searching online some time ago suggested that this is an optimisation that is not very common at all, which I thought was pretty unbelievable at the time.
- The HttpClient should time out if the server does not respond within a set amount of time.
- The HttpClient should retry requests which fail due to transient errors.
- The HttpClient should stop performing new requests for a period of time when a consecutive number of requests fail, using the circuit breaker pattern. Failing fast in this way helps to protect an API or database that may be under high load and means the client gets a failed response quickly rather than waiting for a time-out.
- All of the above should be configurable from the appsettings.json file.
- The HttpClient should send a User-Agent HTTP header telling the server the name and version of the calling application. If the server is logging this information, this can be useful for debugging purposes.
- The X-Correlation-ID HTTP header from the response should be passed on to the request made using the HttpClient. This would make it easy to correlate a request across multiple applications.

It doesn't really matter what the typed client HttpClient looks like; that's not what we're talking about but I include it for context.
public interface IRocketClient\n{\n Task<TakeoffStatus> GetStatus(bool working);\n}\n\npublic class RocketClient : IRocketClient\n{\n private readonly HttpClient httpClient;\n\n public RocketClient(HttpClient httpClient) => this.httpClient = httpClient;\n\n public async Task<TakeoffStatus> GetStatus(bool working)\n {\n var response = await this.httpClient.GetAsync(working ? "status-working" : "status-failing");\n response.EnsureSuccessStatusCode();\n return await response.Content.ReadFromJsonAsync<TakeoffStatus>();\n }\n}\n\nHere is how we register the typed client above with our dependency injection container. All of the meat lives in these three methods. AddDefaultCorrelationId adds middleware written by Steve Gordon to handle Correlation IDs. AddPolicies registers a policy registry and the policies themselves (a policy is Polly's way of specifying how you want to deal with errors e.g. using retries, the circuit breaker pattern etc.). Finally, we add the typed HttpClient with configuration options, so we can configure its settings from appsettings.json.
public virtual void ConfigureServices(IServiceCollection services) =>\n services\n .AddDefaultCorrelationId() // Add Correlation ID support to ASP.NET Core\n .AddControllers()\n .Services\n .AddPolicies(this.configuration) // Setup Polly policies.\n .AddHttpClient<IRocketClient, RocketClient, RocketClientOptions>(this.configuration, "RocketClient")\n ...;\n\nThe appsettings.json file below contains the base address for the endpoint we want to connect to, a time-out value of thirty seconds is used if the server is taking too long to respond and policy settings for retries and the circuit breaker.
The retry settings state that after a first failed request, another three attempts will be made (this means you can get up to four requests). There will be an exponentially longer back-off or delay between each request. The first retry request will occur after two seconds, the second after another four seconds and the third occurs after another eight seconds.
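The back-off schedule above can be sketched in a couple of lines of C#; this simply mirrors the formula used by the retry policy registered later, with the BackoffPower of 2 and Count of 3 from the configuration:

```csharp
// Delay before retry attempt n is BackoffPower ^ n seconds.
// With BackoffPower = 2 and Count = 3 the delays are 2s, 4s and 8s.
IEnumerable<TimeSpan> delays = Enumerable
    .Range(1, 3)
    .Select(retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
```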
\nThe circuit breaker states that it will allow 12 consecutive failed requests before breaking the circuit and throwing a BrokenCircuitException for every attempted request. The circuit will be broken for thirty seconds.
Generally, my advice is when allowing a high number of exceptions before breaking, use a longer duration of break. When allowing a lower number of exceptions before breaking, keep the duration of break small.
\nAnother possibility I've not tried is to combine these two scenarios, so you have two circuit breakers. The circuit breaker with the lower limit would kick in first but only break the circuit for a short time, if exceptions are no longer thrown, then things go back to normal quickly. If exceptions continue to be thrown, then the other circuit breaker with a longer duration of break would kick in and the circuit would be broken for a longer period of time. I leave implementing this particular scenario to the reader.
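For anyone who wants to try it, a sketch of that double circuit breaker using Polly's policy wrapping might look like the following; the thresholds and durations are illustrative assumptions, not recommendations:

```csharp
// Trips quickly on a short burst of failures but recovers fast.
var quickCircuitBreaker = HttpPolicyExtensions
    .HandleTransientHttpError()
    .CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 3,
        durationOfBreak: TimeSpan.FromSeconds(5));

// Trips only on sustained failure but breaks for much longer.
var longCircuitBreaker = HttpPolicyExtensions
    .HandleTransientHttpError()
    .CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 12,
        durationOfBreak: TimeSpan.FromMinutes(2));

// Both breakers count failures independently and guard every request.
IAsyncPolicy<HttpResponseMessage> doubleBreaker =
    Policy.WrapAsync(longCircuitBreaker, quickCircuitBreaker);
```

The wrapped policy could then be added to the policy registry and applied with AddPolicyHandlerFromRegistry just like the single circuit breaker shown in this post.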
\nYou can of course play with these numbers, what you set them to will depend on your application.
\n{\n "RocketClient": {\n "BaseAddress": "http://example.com",\n "Timeout": "00:00:30"\n },\n "Policies": {\n "HttpCircuitBreaker": {\n "DurationOfBreak": "00:00:30",\n "ExceptionsAllowedBeforeBreaking": 12\n },\n "HttpRetry": {\n "BackoffPower": 2,\n "Count": 3\n }\n }\n}\n\nBelow is the implementation of AddPolicies. It starts by setting up and reading a configuration section in our appsettings.json file of type PolicyOptions. It then adds the PolicyRegistry, which is where Polly stores its policies. Finally, we add a retry and circuit breaker policy and configure them using the settings we've read from the PolicyOptions.
public static class ServiceCollectionExtensions\n{\n private const string PoliciesConfigurationSectionName = "Policies";\n\n public static IServiceCollection AddPolicies(\n this IServiceCollection services,\n IConfiguration configuration,\n string configurationSectionName = PoliciesConfigurationSectionName)\n {\n services.Configure<PolicyOptions>(configuration);\n var policyOptions = configuration.GetSection(configurationSectionName).Get<PolicyOptions>();\n\n var policyRegistry = services.AddPolicyRegistry();\n policyRegistry.Add(\n PolicyName.HttpRetry,\n HttpPolicyExtensions\n .HandleTransientHttpError()\n .WaitAndRetryAsync(\n policyOptions.HttpRetry.Count,\n retryAttempt => TimeSpan.FromSeconds(Math.Pow(policyOptions.HttpRetry.BackoffPower, retryAttempt))));\n policyRegistry.Add(\n PolicyName.HttpCircuitBreaker,\n HttpPolicyExtensions\n .HandleTransientHttpError()\n .CircuitBreakerAsync(\n handledEventsAllowedBeforeBreaking: policyOptions.HttpCircuitBreaker.ExceptionsAllowedBeforeBreaking,\n durationOfBreak: policyOptions.HttpCircuitBreaker.DurationOfBreak));\n\n return services;\n }\n}\n\npublic static class PolicyName\n{\n public const string HttpCircuitBreaker = nameof(HttpCircuitBreaker);\n public const string HttpRetry = nameof(HttpRetry);\n}\n\npublic class PolicyOptions\n{\n public CircuitBreakerPolicyOptions HttpCircuitBreaker { get; set; }\n public RetryPolicyOptions HttpRetry { get; set; }\n}\n\npublic class CircuitBreakerPolicyOptions\n{\n public TimeSpan DurationOfBreak { get; set; } = TimeSpan.FromSeconds(30);\n public int ExceptionsAllowedBeforeBreaking { get; set; } = 12;\n}\n\npublic class RetryPolicyOptions\n{\n public int Count { get; set; } = 3;\n public int BackoffPower { get; set; } = 2;\n}\n\nNotice that each policy is using the HandleTransientHttpError method which tells Polly when to apply the retry and circuit breakers. One important question is, what is a transient HTTP error according to Polly? 
Well, looking at the source code in the Polly.Extensions.Http GitHub repository, it looks like they consider any of the below as transient errors:
- An HttpRequestException is thrown. This can happen when the server is down.
- The response has HTTP status code 408 Request Timeout.
- The response has HTTP status code 500 Internal Server Error or above.

Finally, we can get down to configuring our HttpClient itself. The AddHttpClient method starts by binding the TClientOptions type to a configuration section in appsettings.json. TClientOptions is a derived type of HttpClientOptions which just contains a base address and time-out value. I'll come back to CorrelationIdDelegatingHandler and UserAgentDelegatingHandler in a moment.
We set the primary HttpClientHandler to be DefaultHttpClientHandler. This type just enables Brotli, GZIP and Deflate decompression of responses. Finally, we add the retry and circuit breaker policies to the HttpClient.
public static class ServiceCollectionExtensions\n{\n public static IServiceCollection AddHttpClient<TClient, TImplementation, TClientOptions>(\n this IServiceCollection services,\n IConfiguration configuration,\n string configurationSectionName)\n where TClient : class\n where TImplementation : class, TClient\n where TClientOptions : HttpClientOptions, new() =>\n services\n .Configure<TClientOptions>(configuration.GetSection(configurationSectionName))\n .AddSingleton<CorrelationIdDelegatingHandler>()\n .AddSingleton<UserAgentDelegatingHandler>()\n .AddHttpClient<TClient, TImplementation>()\n .ConfigureHttpClient(\n (serviceProvider, httpClient) =>\n {\n var httpClientOptions = serviceProvider\n .GetRequiredService<IOptions<TClientOptions>>()\n .Value;\n httpClient.BaseAddress = httpClientOptions.BaseAddress;\n httpClient.Timeout = httpClientOptions.Timeout;\n })\n .ConfigurePrimaryHttpMessageHandler(x => new DefaultHttpClientHandler())\n .AddPolicyHandlerFromRegistry(PolicyName.HttpRetry)\n .AddPolicyHandlerFromRegistry(PolicyName.HttpCircuitBreaker)\n .AddHttpMessageHandler<CorrelationIdDelegatingHandler>()\n .AddHttpMessageHandler<UserAgentDelegatingHandler>()\n .Services;\n}\n\npublic class DefaultHttpClientHandler : HttpClientHandler\n{\n public DefaultHttpClientHandler() => this.AutomaticDecompression =\n DecompressionMethods.Brotli |\n DecompressionMethods.Deflate |\n DecompressionMethods.GZip;\n}\n\npublic class HttpClientOptions\n{\n public Uri BaseAddress { get; set; }\n\n public TimeSpan Timeout { get; set; }\n}\n\nWhen I'm making a HTTP request from an API i.e. it's an API to API call and I control both sides, I use the X-Correlation-ID HTTP header to trace requests as they move down the stack. The CorrelationIdDelegatingHandler is used to take the correlation ID for the current HTTP request and pass it down to the request made in the API to API call. The implementation is pretty simple, it's just setting a HTTP header.
The power comes when you are using something like Application Insights, Kibana or Seq for logging. You can now take the correlation ID for a request and see the logs for it from multiple API's or services. This is really invaluable when you are dealing with a micro services architecture.
\npublic class CorrelationIdDelegatingHandler : DelegatingHandler\n{\n private readonly ICorrelationContextAccessor correlationContextAccessor;\n private readonly IOptions<CorrelationIdOptions> options;\n\n public CorrelationIdDelegatingHandler(\n ICorrelationContextAccessor correlationContextAccessor,\n IOptions<CorrelationIdOptions> options)\n {\n this.correlationContextAccessor = correlationContextAccessor;\n this.options = options;\n }\n\n protected override Task<HttpResponseMessage> SendAsync(\n HttpRequestMessage request,\n CancellationToken cancellationToken)\n {\n if (!request.Headers.Contains(this.options.Value.RequestHeader))\n {\n request.Headers.Add(this.options.Value.RequestHeader, this.correlationContextAccessor.CorrelationContext.CorrelationId);\n }\n\n // Else the header has already been added due to a retry.\n\n return base.SendAsync(request, cancellationToken);\n }\n}\n\nIt's often useful to know something about the client that is calling your API for logging and debugging purposes. You can use the User-Agent HTTP header for this purpose.
The UserAgentDelegatingHandler just sets the User-Agent HTTP header by taking the API's assembly name and version attributes. You need to set the Version and Product attributes in your csproj file for this to work. The name and version are then placed along with the current operating system into the User-Agent string.
Now the next time you get an error in your API, you'll know the client application that caused it (if it's under your control).
\npublic class UserAgentDelegatingHandler : DelegatingHandler\n{\n public UserAgentDelegatingHandler()\n : this(Assembly.GetEntryAssembly())\n {\n }\n\n public UserAgentDelegatingHandler(Assembly assembly)\n : this(GetProduct(assembly), GetVersion(assembly))\n {\n }\n\n public UserAgentDelegatingHandler(string applicationName, string applicationVersion)\n {\n if (applicationName == null)\n {\n throw new ArgumentNullException(nameof(applicationName));\n }\n\n if (applicationVersion == null)\n {\n throw new ArgumentNullException(nameof(applicationVersion));\n }\n\n this.UserAgentValues = new List<ProductInfoHeaderValue>()\n {\n new ProductInfoHeaderValue(applicationName.Replace(' ', '-'), applicationVersion),\n new ProductInfoHeaderValue($"({Environment.OSVersion})"),\n };\n }\n\n public UserAgentDelegatingHandler(List<ProductInfoHeaderValue> userAgentValues) =>\n this.UserAgentValues = userAgentValues ?? throw new ArgumentNullException(nameof(userAgentValues));\n\n public List<ProductInfoHeaderValue> UserAgentValues { get; set; }\n\n protected override Task<HttpResponseMessage> SendAsync(\n HttpRequestMessage request,\n CancellationToken cancellationToken)\n {\n if (!request.Headers.UserAgent.Any())\n {\n foreach (var userAgentValue in this.UserAgentValues)\n {\n request.Headers.UserAgent.Add(userAgentValue);\n }\n }\n\n // Else the header has already been added due to a retry.\n\n return base.SendAsync(request, cancellationToken);\n }\n\n private static string GetProduct(Assembly assembly) =>\n assembly.GetCustomAttribute<AssemblyProductAttribute>().Product;\n\n private static string GetVersion(Assembly assembly) =>\n assembly.GetCustomAttribute<AssemblyFileVersionAttribute>().Version;\n}\n\n<PropertyGroup Label="Package">\n <Version>1.0.0</Version>\n <Product>My Application</Product>\n <!-- ... 
-->\n</PropertyGroup>\n\nSetting the X-Correlation-ID and User-Agent HTTP headers is useful, but there is a newer set of HTTP headers, defined by the OpenTelemetry standard, that not only replaces them but also adds extra functionality. You can read more about OpenTelemetry in my series of blog posts on the subject.
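To actually use the handler, it can be wired into a typed client via IHttpClientFactory. A minimal sketch, assuming a hypothetical MyApiClient typed client and base address (the handler itself is the one shown above; everything else here is illustrative):

```csharp
// In Startup.ConfigureServices. MyApiClient and the base address are
// hypothetical; UserAgentDelegatingHandler is the handler shown above.
services.AddTransient<UserAgentDelegatingHandler>();
services
    .AddHttpClient<MyApiClient>(client =>
        client.BaseAddress = new Uri("https://example.com"))
    .AddHttpMessageHandler<UserAgentDelegatingHandler>();
```

Registering the handler as transient lets the HttpClientFactory pipeline resolve a fresh instance when it builds the handler chain.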
I realize that was a lot of boilerplate code to write, and it was difficult to split this topic across more than one blog post. To aid in digestion, I've created a GitHub sample project with the full working code.
\nThe sample project contains two APIs; one makes an HTTP request to the other. You can pass a query argument to decide whether the callee API will fail or not, and try out the retry and circuit breaker logic. Feel free to play with the configuration in appsettings.json and see what options work best for your application.
I discovered a hidden gem in ASP.NET Core a couple of weeks ago that can help to build up and parse URLs, called QueryHelpers. Here's how you can use it to build a URL using the AddQueryString method:
var queryArguments = new Dictionary<string, string>()\n{\n { "static-argument", "foo" },\n};\n\nif (someFlagIsEnabled)\n{\n queryArguments.Add("dynamic-argument", "bar");\n}\n\nstring url = QueryHelpers.AddQueryString("/example/path", queryArguments);\n\nNotice that there are no question marks or ampersands in sight. Where this really shines is when you want to add multiple arguments and then need to write code to work out whether to add a question mark or ampersand.
\nIt's also worth noting that the values of the query arguments are URL encoded for you too. The type also has a ParseQuery method to parse query strings but that's less useful to us as ASP.NET Core controllers do that for you.
Finally, .NET also has a type called UriBuilder that you should know about. It's more geared towards building up a full URL, rather than a relative URL as I'm doing above. It has a Query property that you can use to set the query string but it's only of type string, so much less useful than QueryHelpers.AddQueryString.
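To illustrate the parsing side mentioned above, a quick sketch (the query string values here are made up): ParseQuery returns a dictionary of StringValues keyed by argument name, with the values already URL decoded.

```csharp
using Microsoft.AspNetCore.WebUtilities;

// Parse a query string into a Dictionary<string, StringValues>.
var arguments = QueryHelpers.ParseQuery("?search=hello%20world&page=2");
var search = arguments["search"].ToString(); // "hello world" (decoded)
var page = arguments["page"].ToString();     // "2"
```

As noted, you rarely need this in controllers because model binding does it for you, but it is handy in middleware or tests.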
Let's talk about configuring your Entity Framework Core DbContext for a moment. There are several options you might want to consider turning on. This is how I configure mine in most microservices:
public virtual void ConfigureServices(IServiceCollection services) =>\n services.AddDbContextPool<MyDbContext>(\n options => options\n .UseSqlServer(\n this.databaseSettings.ConnectionString,\n x => x.EnableRetryOnFailure())\n .ConfigureWarnings(x => x.Throw(RelationalEventId.QueryClientEvaluationWarning))\n .EnableSensitiveDataLogging(this.hostingEnvironment.IsDevelopment())\n .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking))\n ...\n\nEnableRetryOnFailure enables retries for transient exceptions. So what is a transient exception? Entity Framework Core has a SqlServerTransientExceptionDetector class that defines that. It turns out that any SqlException with a very specific list of SQL error codes, or any TimeoutException, is considered a transient exception and thus safe to retry.
By default, Entity Framework Core will log warnings when it can't translate your C# LINQ code to SQL and it will evaluate parts of your LINQ query it does not understand in-memory. This is usually catastrophic for performance because this usually means that EF Core will retrieve a huge amount of data from the database and then filter it down in-memory.
\nLuckily, in EF Core 2.1 they added support for translating the GroupBy LINQ method to SQL. However, I found out yesterday that you have to write Where clauses after GroupBy for this to work. If you write the Where clause before your GroupBy, EF Core will evaluate your GroupBy in memory on the client instead of in SQL. The key is to know when this is happening.
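To make the clause ordering concrete, here is a sketch against a hypothetical Orders DbSet (the entity and property names are made up; only the ordering of the clauses matters):

```csharp
// Hypothetical Orders DbSet with CustomerId and Total columns.
// Where AFTER GroupBy: EF Core 2.1 can translate this to SQL,
// with the filter becoming a HAVING clause.
var bigSpenders = context.Orders
    .GroupBy(o => o.CustomerId)
    .Where(g => g.Sum(o => o.Total) > 1000)
    .Select(g => new { CustomerId = g.Key, Total = g.Sum(o => o.Total) })
    .ToList();

// Writing the Where before the GroupBy risks the grouping being
// evaluated in memory on the client instead of in SQL.
```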
One thing you can do is throw an exception whenever a query would be evaluated in-memory instead of in SQL. That is what passing QueryClientEvaluationWarning to the Throw method is doing.
EnableSensitiveDataLogging enables application data to be included in exception messages. This can include SQL, secrets and other sensitive information, so I am only doing it when running in the development environment. It's useful to see warnings and errors coming from Entity Framework Core in the console window when I am debugging my application using the Kestrel webserver directly, instead of with IIS Express.
\nIf you are building an ASP.NET Core API, each request creates a new instance of your DbContext, which is then disposed at the end of the request. Query tracking keeps track of entities in memory for the lifetime of your DbContext so that any changes to them can be saved; this is a waste of resources if you are just going to throw away the DbContext at the end of the request. By passing NoTracking to the UseQueryTrackingBehavior method, you can turn off this default behaviour. Note that if you are performing updates to your entities, don't use this option; it is only for APIs that perform reads and/or inserts.
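Tracking can also be toggled per query rather than context-wide. A sketch, assuming a hypothetical Products DbSet:

```csharp
// With the context default set to NoTracking, opt back in for a
// query whose result you do intend to modify and save:
var product = context.Products
    .AsTracking()
    .Single(p => p.ProductId == id);

// Conversely, with the default tracking behaviour, a read-only
// query can opt out on a case-by-case basis:
var products = context.Products
    .AsNoTracking()
    .ToList();
```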
You can also pass certain settings in connection strings. These are specific to the database you are using; here I'm talking about SQL Server. Here is an example of a connection string:
\nData Source=localhost;Initial Catalog=MyDatabase;Integrated Security=True;Min Pool Size=3;Application Name=MyApplication\n\nSQL Server can log or profile queries that are running through it. If you set the application name, you can more easily identify the applications that may be causing problems in your database with slow or failing queries.
\nCreating database connections is an expensive process that takes time. You can specify a minimum pool of connections that should be created and kept open for the lifetime of the application; these are then reused for each database call. Ideally, you should performance test with different values and see what works best for you. Failing that, you need to know how many concurrent connections you want to support at any one time.
\nIt took me a while to craft this setup; I hope you find it useful. You can find out more by reading the excellent Entity Framework Core docs.
\n", "url": "https://rehansaeed.com/optimally-configuring-entity-framework-core/", "title": "Optimally Configuring Entity Framework Core", "summary": "How to optimally configure your Entity Framework Core DbContext for best performance, resiliency and easy debugging for the developer.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2018-07-08T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/migrating-to-entity-framework-core-seed-data/", "content_html": "I was already using Entity Framework Core 2.0 and had written some custom code to enter some static seed data to certain tables. Entity Framework 2.1 added support for data seeding which manages your seed data for you and adds them to your Entity Framework Core migrations.
\nThe problem is that if you've already got data in your tables, then when you add a migration containing seed data, you will get exceptions thrown as Entity Framework tries to insert data that is already there. Entity Framework is naive; it assumes that it is the only thing editing the database.
\nMigrating to using data seeding requires a few extra steps that aren't documented anywhere and weren't obvious to me. Let's walk through an example. Assume we have the following model and database context:
\npublic class Car\n{\n public int CarId { get; set; }\n\n public string Make { get; set; }\n\n public string Model { get; set; }\n}\n\npublic class ApplicationDbContext : DbContext\n{\n public ApplicationDbContext(DbContextOptions options)\n : base(options)\n {\n }\n\n public DbSet<Car> Cars { get; set; }\n}\n\nWe can add some seed data by overriding the OnModelCreating method on our database context class. You need to make sure your seed data matches the existing data in your database.
protected override void OnModelCreating(ModelBuilder modelBuilder)\n{\n modelBuilder.Entity<Car>().HasData(\n new Car() { CarId = 1, Make = "Ferrari", Model = "F40" },\n new Car() { CarId = 2, Make = "Ferrari", Model = "F50" },\n new Car() { CarId = 3, Make = "Lamborghini", Model = "Countach" });\n}\n\nIf we run a command to add a database migration, the generated code looks like this:
\ndotnet ef migrations add AddSeedData\n\npublic partial class AddSeedData : Migration\n{\n protected override void Up(MigrationBuilder migrationBuilder)\n {\n migrationBuilder.InsertData(\n table: "Cars",\n columns: new[] { "CarId", "Make", "Model" },\n values: new object[] { 1, "Ferrari", "F40" });\n\n migrationBuilder.InsertData(\n table: "Cars",\n columns: new[] { "CarId", "Make", "Model" },\n values: new object[] { 2, "Ferrari", "F50" });\n\n migrationBuilder.InsertData(\n table: "Cars",\n columns: new[] { "CarId", "Make", "Model" },\n values: new object[] { 3, "Lamborghini", "Countach" });\n }\n\n protected override void Down(MigrationBuilder migrationBuilder)\n {\n migrationBuilder.DeleteData(\n table: "Cars",\n keyColumn: "CarId",\n keyValue: 1);\n\n migrationBuilder.DeleteData(\n table: "Cars",\n keyColumn: "CarId",\n keyValue: 2);\n\n migrationBuilder.DeleteData(\n table: "Cars",\n keyColumn: "CarId",\n keyValue: 3);\n }\n}\n\nThis is what you need to do:
\n1. Comment out the InsertData lines in the generated migration.\n2. Run the migration on your existing databases, so that Entity Framework records that the AddSeedData migration has been run.\n3. Uncomment the InsertData lines in the generated migration so that if you run the migrations on a fresh database, seed data still gets added. For your existing databases, since the migration has already been run on them, they will not add the seed data twice.\n\nThat's it, hope that helps someone.
\n", "url": "https://rehansaeed.com/migrating-to-entity-framework-core-seed-data/", "title": "Migrating to Entity Framework Core Seed Data", "summary": "How to migrate an existing database to use Entity Framework Core 2.1 Seed Data to insert static data into your tables while using migrations.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2018-07-01T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/net-boxed/", "content_html": ".NET Boxed is a set of project templates with batteries included, providing the minimum amount of code required to get you going faster. Right now it includes API and GraphQL project templates.
\nThe default ASP.NET Core API Boxed options will give you an API with Swagger, ASP.NET Core versioning, HTTPS and much more enabled right out of the box. You can totally turn any of that off if you want to, the point is that it's up to you.
\n
If you haven't read about or learned GraphQL yet, I really suggest you go and follow their short online tutorial. It's got some distinct advantages over standard RESTful APIs (and some disadvantages, but in my opinion the advantages carry more weight).
\nOnce you've done that, the next thing I suggest you do is to create a project from the ASP.NET Core GraphQL Boxed project template. It implements the GraphQL specification using GraphQL.NET and a few other NuGet packages. It also comes with a really cool GraphQL playground, so you can practice writing queries, mutations and subscriptions.
\n
This is the only GraphQL project template that I'm aware of at the time of writing and it's pretty fully featured with sample queries, mutations and subscriptions.
\n.NET Boxed used to be called ASP.NET Core Boilerplate. That name was kind of forgettable and there was another great project that had a very similar name. I put off renaming for a long time because it was too much work but I finally relented and got it done.
\nIn the end I think it was for the best. The new .NET Boxed branding and logo are much better and I've opened it up to .NET project templates in general, instead of just ASP.NET Core project templates.
\nThanks to Jon Galloway and Jason Follas for helping to work out the branding.
\n1. Run dotnet new --install "Boxed.Templates::*" to install the project template.\n2. Run dotnet new api --help to see how to select the features of the project.\n3. Run dotnet new api --name "MyTemplate" along with any other custom options to create a project from the template.\n\nThere are new features and improvements planned on the GitHub projects tab. ASP.NET Core 2.1 is coming out soon, so look out for updates, which you can see in the GitHub releases tab when they go live.
\n", "url": "https://rehansaeed.com/net-boxed/", "title": ".NET Boxed", "summary": ".NET Boxed is a set of project templates with batteries included, providing the minimum amount of code required to get you going faster.", "image": "https://rehansaeed.com/images/hero/Dotnet-Boxed-1366x768.png", "date_modified": "2018-05-13T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/the-dotnet-watch-tool-revisited/", "content_html": "I talked about using the dotnet watch tool with Visual Studio some time ago. Since then, a lot changed with the Visual Studio tooling and .NET Core 2.0 which broke the use of dotnet watch in Visual Studio, hence the reason for writing this post.
The dotnet watch tool is a file watcher for .NET that restarts the application when changes in the source code are detected. This is super useful when you just want to hack away at code and see the changes instantly when you refresh your browser. It increases productivity by tightening the magical inner loop, reducing the time taken to write some code and then see its effects. I also like using this tool because it opens a console window which lets you see all of your logs flashing by.

::: warning\nIn both cases you have to be careful to start the application by clicking Debug -> Start Without Debugging or hitting the ||CTRL+F5|| keyboard shortcut.\n:::
Setting up the 'dotnet watch' tool is as easy as installing the Microsoft.DotNet.Watcher.Tools NuGet package if you are using .NET Core 2.0. If you are using .NET Core 2.1 or above, this tool comes pre-installed in the .NET Core SDK.
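For .NET Core 2.0 projects, the package was referenced as a per-project CLI tool in the csproj file (the version number here is illustrative):

```xml
<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="2.0.0" />
</ItemGroup>
```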
Now, using PowerShell, you can navigate to your project folder and run the dotnet watch run command and you're set. But using the command line is a bit lame if you are using Visual Studio; we can do one better.
The launchSettings.json file is used by Visual Studio to launch your application and controls what happens when you hit ||F5||. It turns out you can add additional launch settings here to launch the application using the dotnet watch tool. You can do so by adding a new launch configuration as I've done at the bottom of this file:
{\n "iisSettings": {\n "windowsAuthentication": false,\n "anonymousAuthentication": true,\n "iisExpress": {\n "applicationUrl": "http://localhost:5000/",\n "sslPort": 44300\n }\n },\n "profiles": {\n "IIS Express": {\n "commandName": "IISExpress",\n "launchBrowser": true,\n "launchUrl": "http://localhost:5000/",\n "environmentVariables": {\n "ASPNETCORE_ENVIRONMENT": "Development",\n "ASPNETCORE_HTTPS_PORT": "44300"\n }\n },\n "dotnet run": {\n "commandName": "Project",\n "launchBrowser": true,\n "launchUrl": "http://localhost:5000/",\n "environmentVariables": {\n "ASPNETCORE_ENVIRONMENT": "Development",\n "ASPNETCORE_HTTPS_PORT": "44300"\n }\n },\n // dotnet watch run must be run without the Visual Studio debugger using CTRL+F5.\n "dotnet watch run": {\n "commandName": "Executable",\n "executablePath": "dotnet",\n "workingDirectory": "$(ProjectDir)",\n "commandLineArgs": "watch run",\n "launchBrowser": true,\n "launchUrl": "http://localhost:5000/",\n "environmentVariables": {\n "ASPNETCORE_ENVIRONMENT": "Development",\n "ASPNETCORE_HTTPS_PORT": "44300"\n }\n }\n }\n}\n\nNotice that I renamed the second launch profile (which already exists in the default template) to dotnet run because that's actually the command it's running and makes more sense.
The dotnet watch launch profile is running the dotnet watch run command as an executable and using the current working directory of the project. Now we can see the new launch profile in the Visual Studio toolbar like so:

I have updated the .NET Boxed family of project templates with this feature built in. Happy coding!
\n", "url": "https://rehansaeed.com/the-dotnet-watch-tool-revisited/", "title": "The Dotnet Watch Tool Revisited", "summary": "The dotnet watch tool is a file watcher for dotnet that restarts the application when changes in the source code are detected. You can use dotnet watch in Visual Studio by using the launchSettings.json configuration file.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2018-04-30T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/writing-code-asleep/", "content_html": "This is a quick post about something I tend to do quite often that has really helped me be more productive and I think has helped the wider developer community in some small way too. It's something that I haven't seen other developers I've worked with do, so I thought I'd call it out here.
\nWhen I come into work for a day of developing software, I generally get into the zone (if I don't have any distractions...) and am hacking away at something. Sometimes (quite often really but I'm loath to admit it) I get stuck. I'm probably working with some new language, framework or technology and I can't figure out how to do a thing.
\nAt this point I might try a few things that pop into my head, go to definition on some code to read the comments, even read the documentation if there is any. These are all good strategies to solve a problem. At what point though do you throw your hands up in despair and give up? One hour later? Maybe four? Maybe a couple of days? I've done all three in the past!
\nIt might be chance, I don't know but usually when I get to this point, it's lunch time or time to go home and spend time with the family. What do you do now? You're stuck and you're going to have to come back to your desk at some point and bang your head against the problem for a second day. Maybe a clear and fresh mind can solve the problem? It's scary and maybe a bit sad how often I think of a solution to a problem I was having while walking home or lying in bed.
\nBut what if even that does not help? The thought of having to go to work the next day where you will have to try again to solve a seemingly unsolvable problem can be quite depressing sometimes.
\nThe solution is to ask the kind people of the internet for help! Then go home and get some sleep. The chances are that come the morning, somebody has solved your problem for you! This is so obvious I feel kind of stupid for even writing this, but in my experience, when people hit a problem they don't always ask for help. In fact, I have a lot of anecdotal evidence for this:
\nAsk your question on Stack Overflow. The undisputed number one resource for every developer.
\nI have reviewed a lot of CVs in the last two years and there is a growing trend to list your GitHub and Stack Overflow profiles on there. Even if your profile is not listed, I can sometimes find it anyway (the internet is a stalker's dream).
\nIn all the Stack Overflow profiles I've seen, there is a worrying trend. Very few people actually ask many Stack Overflow questions! The thing is, asking questions is the easiest way to get points and build a very nice Stack Overflow profile too, so it's silly not to do it.
\nIn my three years actively using Stack Overflow (I was a lurker for a while), I've asked 191 questions and answered 143. I'm a bit behind in contributing but even my questions will help people as there will be others who had the same question and got an answer quickly because I had already asked it. In total, this has netted me almost 9,000 imaginary internet reputation, 7 gold badges, 86 silver ones and 152 bronze. Even if I had not answered any questions, and only asked them, I think I would have had a healthy reputation score of a few thousand.
\nIt's amazing how quickly you can sometimes get answers to your questions using Stack Overflow too, the fastest I've seen is literally 30 seconds! It's such an amazing resource, you literally have people sitting there waiting for you to ask a question so they can get imaginary internet points!
\nI get around 5,000 visitors to this blog every week at the moment, it's surprising to me how many people actually contact me directly asking for help. The emails are always the same, "I read your blog post and I'm working on project X, I need help urgently because I have some deadline". No supporting code, just a vague hint of what the problem might be, as if I can divine the solution through some kind of telekinesis. I help these people where I can but I've started to feel I'm hindering them by doing so, they need to learn to use Stack Overflow just like everybody else, so that's where I've started pointing them lately.
\nIf you're dealing with a project that uses GitHub or a forum of some kind, use it! Find an existing GitHub issue or forum post and add a comment to it or open a new issue if one cannot be found. One or more developers will get a notification of your problem and they might even point you in the right direction. Once again, it's amazing how quickly you can get a reply sometimes.
\nOnce again, I've seen a lot of GitHub profiles and very few people use the issues section to ask questions. You often have the developers who literally wrote the code you're using answer your question. What can be better than that?
\nThis post sounds silly, but developers don't generally ask for help for some reason. I used to work with a great junior developer who started with only a little VB script knowledge and would ask for help whenever he needed it, which was sometimes a dozen or more times a day. It was sometimes hard to get work done, but it was great because after a while he started to get really good, and then we had two minds to get work done and come up with ideas instead of one. Ask for help when you need it!
\n", "url": "https://rehansaeed.com/writing-code-asleep/", "title": "Writing Code while Asleep", "summary": "How to write code while you sleep using tools like Stack Overflow, GitHub and forums effectively.", "image": "https://rehansaeed.com/images/hero/While-Alive-Eat-Sleep-Code-Repeat-1366x768.png", "date_modified": "2018-01-29T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/writing-your-webpack-configuration-in-typescript/", "content_html": "::: warning\nBefore you get the wrong idea, let me say that Webpack is super powerful; it's what you probably should be using these days to deal with static assets, and I like its power...\n:::
\nHowever, Webpack configuration files are write once software, they are a mess, a complete and utter mess. There, I said it. It has a steep learning curve and plenty of magic. What's worse is that Webpack makes it intentionally harder than it needs to be. If you look online at examples, even in the Webpack docs themselves, you'll see a dozen examples that look completely different, this is for two reasons:
\nWhat you ended up with is eight ways to configure a rule or loader, which is insane.
\nI'm going off on a tangent here but there is a new module bundler in development called ParcelJS which has support for JavaScript, HTML, CSS, SASS and images built in from the start with zero configuration! Adding Babel, TypeScript or Autoprefixer is also much easier with no need to configure Parcel to work with them.
\nUnfortunately, it's not ready for prime time yet as it is lacking support for source maps, multiple entry points, code splitting and Vue components. I have high hopes for ParcelJS in the future!
\nHappily, as I'll show below, TypeScript can help you to completely avoid Webpack 1 syntax. Secondly, if you're already writing your application using TypeScript, then often the only JavaScript files you have left in your project end up being the Webpack configuration files. Converting your Webpack configuration to TypeScript removes the need to context switch between languages.
\nIt turns out that Webpack supports the use of TypeScript itself. However, the supported method requires you to add a couple of NPM packages as a dependency and you will not be able to use ES 2015 module syntax in your configuration file because it's not supported.
\nIn my opinion, a much simpler and cleaner way is to use the TypeScript tsc command line tool to transpile TypeScript to JavaScript before running Webpack. You could add this command as a simple NPM script in your package.json file. Here are the commands you need to use:
\ntsc --lib es6 webpack.config.ts\nwebpack --config webpack.config.js\n\nWebpack does not come with TypeScript typings, so you'll also need to install the @types/webpack NPM package. Finally, to remove all Webpack 1 syntax, you need to create some new types extending the Webpack types, which remove the Webpack 1 syntax, I stuck all of these typings in a webpack.common.ts file:
import * as webpack from "webpack";\n\n// Remove the Old Webpack 1 types to ensure that we are only using Webpack 2 syntax.\n\nexport type INewLoader = string | webpack.NewLoader;\n\nexport interface INewLoaderRule extends webpack.NewLoaderRule {\n loader: INewLoader;\n oneOf?: INewRule[];\n rules?: INewRule[];\n}\n\nexport interface INewUseRule extends webpack.NewUseRule {\n oneOf?: INewRule[];\n rules?: INewRule[];\n use: INewLoader | INewLoader[];\n}\n\nexport interface INewRulesRule extends webpack.RulesRule {\n oneOf?: INewRule[];\n rules: INewRule[];\n}\n\nexport interface INewOneOfRule extends webpack.OneOfRule {\n oneOf: INewRule[];\n rules?: INewRule[];\n}\n\nexport type INewRule = INewLoaderRule | INewUseRule | INewRulesRule | INewOneOfRule;\n\nexport interface INewModule extends webpack.NewModule {\n rules: INewRule[];\n}\n\nexport interface INewConfiguration extends webpack.Configuration {\n module?: INewModule;\n}\n\nexport interface IArguments {\n prod: boolean;\n}\n\nexport type INewConfigurationBuilder = (env: IArguments) => INewConfiguration;\n\nYou can then use these types in your Webpack configuration:
\nimport * as path from "path";\nimport * as webpack from "webpack";\nimport { INewConfiguration } from "./webpack.common";\n\nconst configuration: INewConfiguration = {\n // ...\n};\nexport default configuration;\n\nOr you can also pass arguments to your webpack configuration file like so:
\nimport * as path from "path";\nimport * as webpack from "webpack";\nimport { IArguments, INewConfiguration, INewConfigurationBuilder } from "./webpack.common";\n\nconst configurationBuilder: INewConfigurationBuilder = \n (env: IArguments): INewConfiguration => {\n const isDevBuild = !(env && env.prod);\n const configuration: INewConfiguration = {\n // ...\n };\n return configuration;\n };\nexport default configurationBuilder;\n\nIn this case, you can pass arguments to the webpack configuration file like so:
\n> webpack --env.prod\n\nI think most people have huge trouble getting to grips with Webpack. Once you understand that there are so many ways to supply the same config and how to translate between them, the learning curve gets shallower. You will be able to translate all of the examples you see online that inevitably use a different syntax to yours, so you can 'borrow' their code (it's what we software developers do for much of the day) and get stuff done.
\n", "url": "https://rehansaeed.com/writing-your-webpack-configuration-in-typescript/", "title": "Writing your Webpack configuration in TypeScript", "summary": "Learn how to write your Webpack configuration file using TypeScript to get intellisense and how to exclude Webpack 1 syntax from your TypeScript typings.", "image": "https://rehansaeed.com/images/hero/TypeScript-1366x768.png", "date_modified": "2018-01-03T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/useful-docker-images-part2/", "content_html": "\nKnowing what is happening in Docker and in your applications running on Docker is critical. To collect logs from my Swarm and monitor the health of it, I use the ELK-B stack which is made up of four pieces of software called ElasticSearch, LogStash (I recommend that you use Beats instead of LogStash), Kibana and various Beats.
\nElasticSearch is basically a NoSQL database that is geared towards storing JSON documents and searching across them. Kibana is a visualization tool that gives you a nice UI to view all of your data and produce nice visualizations and dashboards. There are several Beats, which are used to ship data into ElasticSearch from various sources.
\n
While you could use Docker to host ElasticSearch and Kibana, I use the Elastic Cloud at work; you could also use instances hosted by AWS or Azure. Using a hosted version takes some of the pain out of maintaining ElasticSearch. I had a look at the ElasticSearch Docker container and, if you really want to go down the Docker route and create an ElasticSearch cluster, it looks fairly straightforward but a bit unorthodox. There is a cost versus effort trade-off in this decision and it's up to you where you decide to go.
\nIn terms of Beats, I use three of them which I'll talk about below:
\nFilebeat is a tool used to ship Docker log files to ElasticSearch. The latest version 6.0 queries Docker APIs and enriches these logs with the container name, image, labels, and so on which is a great feature, because you can then filter and search your logs by these properties. You can then view these logs in a fully customizable Kibana dashboard. Filebeat ships with a sample Kibana dashboard that looks like this:
\n
As well as shipping Docker logs, I write the logs from my ASP.NET Core applications to disk (The best way to make sure you never lose log information) and then use Filebeat to ship these log files to ElasticSearch.
\nThe Dockerfile below is used to add Filebeat configuration files to the base Filebeat image and nothing more. The configuration files are pretty lengthy and heavily commented so I've omitted them:
FROM docker.elastic.co/beats/filebeat:6.0.0\nCOPY filebeat.yml filebeat.template.json /usr/share/filebeat/\nUSER root\nRUN chown filebeat /usr/share/filebeat/filebeat.yml && \\n chown filebeat /usr/share/filebeat/filebeat.template.json && \\n chmod go-w /usr/share/filebeat/filebeat.yml && \\n chmod go-w /usr/share/filebeat/filebeat.template.json\nUSER filebeat\n\nIn the Docker stack file below, I set up a shared volume called 'logs' in which my website container stores all of its log files. My custom Filebeat image then picks up logs from the 'logs' volume and pushes them to ElasticSearch. Filebeat is also configured so that one instance of the container runs on every Docker node, so that it can pick up Docker logs from every node in my Swarm.
\nversion: '3.3'\n\nservices:\n filebeat:\n image: my-custom-filebeat-image:latest\n deploy:\n mode: global # One docker container per node\n networks:\n - defaultoverlay\n volumes:\n - logs:/var/log/my-company-name\n \n website-name:\n image: website-name:latest\n ports:\n - "5000:80"\n networks:\n - defaultoverlay\n volumes:\n - logs:/var/log/my-company-name\n \nnetworks:\n defaultoverlay:\n \nvolumes:\n logs:\n\nMetricbeat can be used to monitor the CPU, memory and disk usage on your Docker nodes and then ship those logs to your ElasticSearch cluster. Once again, Metricbeat ships with a sample Kibana dashboard that looks like this:
\n
Here is an example of a custom Metricbeat Dockerfile which I use to configure Metricbeat:
FROM docker.elastic.co/beats/metricbeat:6.0.0\nCOPY metricbeat.yml metricbeat.template.json /usr/share/metricbeat/\nUSER root\nRUN chown metricbeat /usr/share/metricbeat/metricbeat.yml && \\n chown metricbeat /usr/share/metricbeat/metricbeat.template.json && \\n chmod go-w /usr/share/metricbeat/metricbeat.yml && \\n chmod go-w /usr/share/metricbeat/metricbeat.template.json\nUSER metricbeat
\nversion: '3.3'\n\nservices:\n  metricbeat:\n    image: my-custom-metricbeat-image:latest\n    command: metricbeat -e -system.hostfs=/hostfs\n    deploy:\n      mode: global # One docker container per node\n    networks:\n      - defaultoverlay\n    volumes:\n      - /proc:/hostfs/proc:ro\n      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro\n      - /:/hostfs:ro\n      - /var/run/docker.sock:/var/run/docker.sock\n\nnetworks:\n  defaultoverlay:\n\nHeartbeat is a ping monitor that can be pointed at any status endpoints in your APIs or websites. Failures get logged in ElasticSearch and show up in a nice graph. You can also use Kibana to set up alerts, so you can be notified of any downtime. Here is an example of what a Kibana dashboard containing Heartbeat data looks like:
\n
The Dockerfile is similar to the other Beats:
\nFROM docker.elastic.co/beats/heartbeat:6.0.0\nCOPY heartbeat.yml heartbeat.template.json /usr/share/heartbeat/\nUSER root\nRUN chown heartbeat /usr/share/heartbeat/heartbeat.yml && \\n    chown heartbeat /usr/share/heartbeat/heartbeat.template.json && \\n    chmod go-w /usr/share/heartbeat/heartbeat.yml && \\n    chmod go-w /usr/share/heartbeat/heartbeat.template.json\nUSER heartbeat\n\nThis Docker stack file is extremely simple. Only one instance of the image is required.
\nversion: '3.3'\n\nservices:\n  heartbeat:\n    image: my-custom-heartbeat-image:latest\n    networks:\n      - defaultoverlay\n\nnetworks:\n  defaultoverlay:\n\nPing monitors on the internet are super expensive for what they are because they send pings from various locations on the Earth. Heartbeat will not do that, so be aware of this difference. That said, there is nothing I can do if the pipe for the internet in Australia goes down, so in my opinion, Heartbeat reduces a lot of false positives.
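For reference, the heartbeat.yml baked into the custom image above might look something like the following sketch for Heartbeat 6.x. The monitored URL and the ElasticSearch host are placeholders, not values from this post:

```yaml
# A minimal heartbeat.yml sketch; the URL and hosts below are placeholders.
heartbeat.monitors:
  - type: http
    urls: ["http://website-name/status"]
    schedule: '@every 30s'

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```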
\nI have discovered that I have enough material for a third and final part to this series of blog posts. In the next part, you can expect to learn more about Redis and Metabase Docker images.
\n", "url": "https://rehansaeed.com/useful-docker-images-part2/", "title": "Useful Docker Images - Part 2", "summary": "How to run the ELK-B Stack, made up of ElasticSearch, Kibana, Filebeat, Metricbeat and Heartbeat using Docker and Docker Swarm.", "image": "https://rehansaeed.com/images/hero/Docker-1366x768.png", "date_modified": "2017-12-11T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/useful-docker-images-part1/", "content_html": "\nI have been running Docker Swarm in production for a few APIs and single page applications for a couple of months now. Here are some Docker images I've found generally useful. Most of these images are not specific to Docker Swarm. For each image, I'm also going to show a docker-stack.yml file that you can use to deploy the image and the settings I use for them. To deploy a Docker stack file, just run the following commands:
# To enable Docker Swarm mode on your local machine if you haven't already.\ndocker swarm init\n# To deploy a Docker stack file to your Swarm. The last argument is the stack name; my-stack is a placeholder.\ndocker stack deploy --compose-file docker-stack.yml my-stack\n\nThe Docker Swarm Visualizer image connects to the Docker socket and shows a really nice visualization showing all of the nodes in your Docker cluster (or just one on your development machine) and all of the containers running on it.
\n
A word of warning about using this image. It has full unimpeded access to your Docker socket, which lets it do basically anything that Docker can do (and that's a lot). This image is useful for development and testing purposes. If you want to use it in production, don't expose it to the internet; only run it on your local network, and only if you trust the users on that network. You don't want your Docker Swarm turning into a Bitcoin mining farm. Here is a Docker stack file you can use to deploy this image:
\nversion: '3.3'\n\nservices:\n  visualizer:\n    image: dockersamples/visualizer\n    ports:\n      - "8080:8080"\n    deploy:\n      placement:\n        constraints: [node.role == manager]\n      resources:\n        limits:\n          cpus: '0.1'\n          memory: 100M\n    networks:\n      - visualizeroverlay\n    volumes:\n      - "/var/run/docker.sock:/var/run/docker.sock"\n\nnetworks:\n  visualizeroverlay:\n\nThe container has to run on a manager node, so I've added that constraint and also added access to the Docker socket using a volume mount. I've also limited the resources the container can consume. Finally, I've given the service its own dedicated overlay network, so it can't talk to my other containers.
\nPortainer is a free and open source Docker image you can use to administer your Docker cluster. It has full support for standalone Docker and Docker Swarm. It lets you do everything from seeing what's running on your nodes, starting containers, viewing logs and shelling into your running Docker containers. I find the last two particularly useful.
\n
Portainer also has a visualization similar to the Visualizer image I spoke about earlier but it's not nearly as nice and is buried in a few sub-menus which is why I prefer Visualizer. It's basically competing with Docker Enterprise Edition (EE) which is a seriously expensive piece of kit, while this is totally free!
\nPortainer has user and team management built into it, so it's not wide open to the internet if you expose a port. Interestingly, Portainer also exposes an API. It's a possibility I haven't explored yet, but you could use said API to deploy your Docker applications from your CI/CD process. Here is a Docker stack file you can use to deploy this image:
\nversion: '3.3'\n\nservices:\n  portainer:\n    image: portainer/portainer\n    command: --host unix:///var/run/docker.sock\n    deploy:\n      placement:\n        constraints: [node.role == manager]\n    ports:\n      - "9000:9000"\n    networks:\n      - portaineroverlay\n    volumes:\n      - portainer:/data\n      - "/var/run/docker.sock:/var/run/docker.sock"\n\nnetworks:\n  portaineroverlay:\n\nvolumes:\n  portainer:\n\nOnce again, we are binding the image to the Docker socket using a volume mount but also giving Portainer another volume to store its data. We also set a constraint, so that the container runs on a manager node.
\n[Sonatype Nexus](https://hub.docker.com/r/sonatype/nexus3/) is an open source repository manager that can be used as a private Docker registry to store your images. In fact, it can also be used as a repository for NuGet, Maven, Ruby and NPM packages as well. It's pretty powerful stuff and has user management built in too.
\n
version: '3.3'\n\nservices:\n  nexus:\n    image: sonatype/nexus3:3.6.1\n    deploy:\n      resources:\n        reservations:\n          cpus: '2'\n          memory: 4GB\n    healthcheck:\n      test: ["CMD", "curl", "--fail", "http://localhost:8081/service/metrics/healthcheck"]\n      interval: 60s\n      timeout: 5s\n      retries: 3\n    ports:\n      - "8081:8081"\n      - "8082:8082"\n      - "8083:8083"\n    networks:\n      - nexusoverlay\n    volumes:\n      - artefacts:/nexus-data\n\nnetworks:\n  nexusoverlay:\n\nvolumes:\n  artefacts:\n\nSonatype Nexus has some pretty hefty minimum system requirements, so I've reserved the necessary CPU and memory. I've added three ports to support HTTP, HTTPS and a third port for my Docker registry; you can configure this in the admin menu when you add a Docker registry. Thankfully, it's just a matter of a few clicks to set up and here are my registry settings:
\n
I have also gone to the effort of setting up a health check. Health checks are a wonderful feature of Docker. The container will not start and join the network until the health check has succeeded. This has stopped failed production releases for me in the past for my ASP.NET Core apps. Use health checks people!
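If you build your own images, you can also bake the health check into the Dockerfile itself with the HEALTHCHECK instruction. This is only a sketch: the /status endpoint is a placeholder, and it assumes curl is available in the image:

```dockerfile
FROM microsoft/aspnetcore:2.0
# The endpoint is hypothetical; use whatever health endpoint your app exposes,
# and make sure curl (or an equivalent) actually exists in the base image.
HEALTHCHECK --interval=60s --timeout=5s --retries=3 \
    CMD curl --fail http://localhost/status || exit 1
```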
\nThis blog post is getting a bit long, so I'll split it into two pieces. In the next part, expect to hear about how you can use the ELK-B stack which is made up of a few bits of software: ElasticSearch, Kibana, Filebeat, Metricbeat and Heartbeat. Also, be sure to read Andrew Lock's piece on Rancher, which is a bit like Portainer. I'd never heard of Rancher, it'll be interesting to do a comparison.
\n", "url": "https://rehansaeed.com/useful-docker-images-part1/", "title": "Useful Docker Images - Part 1", "summary": "A guide to using the Docker Visualizer, Portainer and Sonatype Nexus Docker images to help manage a Docker Swarm.", "image": "https://rehansaeed.com/images/hero/Docker-1366x768.png", "date_modified": "2017-11-30T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/docker-labels-depth/", "content_html": "Docker image names are short and usually not very descriptive. You have the ability to label your Docker images to give them some extra metadata. You can add any information you like; labels are just key-value pairs. Here I've added author and company labels to my Dockerfile:
FROM microsoft/aspnetcore:2.0\nLABEL "author"="Muhammad Rehan Saeed"\nLABEL "company"="Acme Co."\nARG source\nWORKDIR /app\nEXPOSE 80\nCOPY ${source:-obj/Docker/publish} .\nENTRYPOINT ["dotnet", "Bridge.Turtle.dll"]\n\nNote that prior to Docker 1.10, it was recommended to combine all labels into a single LABEL instruction, to prevent extra layers from being created. This is no longer necessary, but combining labels is still supported.
This is great for static data like the author but not so great for dynamic data like an automated build number or git changeset number that you might want to use. That way you'll know exactly which build built the image and the source code it was built from. This can be valuable information when you're in a pickle with a production issue.
\nTo add dynamic labels you can pass them from the command line when you run the docker build command like so:
\ndocker image build --tag foo:1.0.0 --label "build"="123" --label "changeset"="0d9c7d3b77817caab3977b16d1d76bb3eb024837" .\n\nThe Open Containers Initiative (OCI) is a standards body defining open standards for container formats and runtimes. They've already defined a standard set of labels (they call them annotations) for you to use in your Docker images:
\nThe labels in the Open Containers Annotations specification and a few others I've seen use a kind of dot separated namespace. The official Docker documentation suggests that this is only required if your image is a "third party tool" which I think means if the image will ever be used as a base for another image:
\nAuthors of third-party tools should prefix each label key with the reverse DNS notation of a domain they own, such as com.example.some-label.\nThe com.docker.*, io.docker.*, and org.dockerproject.* namespaces are reserved by Docker for internal use.\nLabel keys should begin and end with a lower-case letter and should only contain lower-case alphanumeric characters, the period character (.), and the hyphen character (-). Consecutive periods or hyphens are not allowed.\nThe period character (.) separates namespace "fields". Label keys without namespaces are reserved for CLI use, allowing users of the CLI to interactively label Docker objects using shorter typing-friendly strings.
\n", "url": "https://rehansaeed.com/docker-labels-depth/", "title": "Docker Labels in Depth", "summary": "How to use Docker Labels with Docker run, Docker compose and Docker Swarm. Also talk about naming conventions and the Open Containers Initiative (OCI) spec.", "image": "https://rehansaeed.com/images/hero/Docker-1366x768.png", "date_modified": "2017-11-20T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/docker-read-file-systems/", "content_html": "For a little bit of added security you can make the file system of your container read-only, excluding any volumes you may have created. If anyone hacks into your container, they will be unable to change any files.
\nWhen using the docker run command using the CLI, you can simply use the following command:
\ndocker run --read-only redis\n\nTo set a read-only file system, you simply need to set the read_only flag to true, like so:
version: '3.3'\n\nservices:\n redis:\n image: redis:4.0.1-alpine\n networks:\n - myoverlay\n read_only: true\n \nnetworks:\n myoverlay:\n\nSo above, I have a Docker stack file for use with Docker Swarm showing how to start Redis with a read-only file system.
\nNot all images support having them started with a read-only file system. Some require access to write temp files and the like. You can usually get away with using a volume in this case because volumes are still writeable even if you enable the read-only file system. In my research, I found it hard to determine if an image supported the feature, so I simply tried it out and found that most failed.
\nI discovered that Redis was the only image that I was running that had full support, several Elastic Stack containers failed to start and even my ASP.NET Core images failed to start. I since raised a GitHub issue here, trying to find out why the container fails to start and seeing if there is any workaround.
\n", "url": "https://rehansaeed.com/docker-read-file-systems/", "title": "Docker Read-Only File Systems", "summary": "How to use a read-only file system in Docker to secure your Docker containers using the docker run CLI command and Docker compose or docker swarm.", "image": "https://rehansaeed.com/images/hero/Docker-1366x768.png", "date_modified": "2017-11-13T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/asp-net-core-caching-in-practice/", "content_html": "The Cache-Control HTTP header can be used to set how long your resource can be cached for. However, the problem with this HTTP header is that you need to be able to predict the future and know before hand when the cache will become invalid. For some use cases, like writing an API where someone could change the resource at any time that's just not feasible.
\nI recommend you read the response caching middleware documentation, it's not necessary as I do a quick overview next but the knowledge below builds upon it. The simple way to set the cache control header is directly on the action method like so:
\n[HttpGet, ResponseCache(Duration = 3600, Location = ResponseCacheLocation.Any)]\npublic IActionResult GetCats()\n\nAdding the ResponseCache attribute just adds the Cache-Control HTTP header but does not actually cache the response on the server. To do that you also need to add the response caching middleware like so:
public void Configure(IApplicationBuilder application) =>\n application.UseResponseCaching().UseMvc();\n\nInstead of hard coding all of your cache settings in the ResponseCache attribute, it's possible to store them in the appsettings.json configuration file. To do so, you need to use a feature called cache profiles which look like this:
[HttpGet, ResponseCache(CacheProfileName = "Cache1Hour")]\npublic IActionResult GetCats()\n\npublic class Startup\n{ \n    private readonly IConfiguration configuration;\n \n    public Startup(IConfiguration configuration) => this.configuration = configuration;\n\n    public void ConfigureServices(IServiceCollection services) =>\n        services.AddMvc(options =>\n        {\n            // Read cache profiles from the appsettings.json configuration file.\n            var cacheProfiles = this.configuration\n                .GetSection("CacheProfiles")\n                .Get<Dictionary<string, CacheProfile>>();\n            foreach (var keyValuePair in cacheProfiles)\n            {\n                options.CacheProfiles.Add(keyValuePair);\n            }\n        });\n \n    // Omitted\n}\n\n{\n    "CacheProfiles": {\n        "Cache1Hour": {\n            "Duration": 3600,\n            "Location": "Any"\n        }\n    },\n    // Omitted...\n}\n\nNow all your caching can be configured from a single configuration file.
\nCache-Control also has a new draft directive called immutable. When you add this to the HTTP header value, you are basically telling the client that this resource never changes even if it has expired. You might be asking, why do we need this? Well, it turns out that when you refresh a page in a browser, it goes off to the server and checks to see if the resource has expired or not.
Cache-Control: max-age=365000000, immutable\n\nIt turns out that you get a massive reduction in requests to your server by implementing this directive. Read more about it in these links:
\nThis directive has not yet been implemented in ASP.NET Core but I've raised an issue on GitHub here and there is also another issue here to add the immutable directive to the static files middleware. If you really wanted to, it's really easy to add this directive today, as you just need to append the word immutable onto the end of your Cache-Control HTTP header.
A word of warning! You need to make sure that your resource really never changes. You can do this in Razor by using the asp-append-version attribute on your script tags:
<script src="~/site.js" asp-append-version="true"></script>\n\nThis will append a query string to the link to site.js which will contain a hash of the contents of the file. Each time the file changes, the hash is changed and thus you can safely mark the resource as immutable.
\nE-tags are typically generated in three ways (Read the link to understand what they are):
\nHashing the response body. This can be expensive to compute for every request, although you can cache E-Tags generated using this method. Mads Kristensen wrote a nice blog post showing how it can be done.\nUsing the last modified time. The E-Tag can literally be the time the object was last modified which you can store in your database (I usually store created and modified timestamps for anything I store in a database anyway). This solves the performance problem above but now what is the difference between doing this and using the Last Modified HTTP header?\nUsing a revision number that you increment each time the resource changes.\n\nOne additional thing you need to be careful of is the Accept, Accept-Encoding and Accept-Language HTTP headers. Any time you send a different response based on these HTTP headers, your E-Tag needs to be different e.g. a JSON non-gzip'ed response in Mandarin needs to have a different E-Tag to an XML gzip'ed response in Urdu.
For option one, this can be achieved by calculating the hash after the response body has gone through GZIP compression. For the second and third options, you would need to append the value of the Accept HTTP headers to the last modified date or revision number and then hash all of that.
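Option one can be sketched as a pure function (the class and method names are mine, not framework API): hash the compressed response body together with the Accept headers that vary the representation:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class ETagSketch
{
    // Computes a strong E-Tag from the (already compressed) response body
    // plus the Accept, Accept-Encoding and Accept-Language header values.
    public static string Compute(byte[] compressedBody, params string[] acceptHeaders)
    {
        using (var sha256 = SHA256.Create())
        {
            byte[] headerBytes = Encoding.UTF8.GetBytes(string.Join("\n", acceptHeaders));
            var buffer = new byte[compressedBody.Length + headerBytes.Length];
            Buffer.BlockCopy(compressedBody, 0, buffer, 0, compressedBody.Length);
            Buffer.BlockCopy(headerBytes, 0, buffer, compressedBody.Length, headerBytes.Length);
            return "\"" + Convert.ToBase64String(sha256.ComputeHash(buffer)) + "\"";
        }
    }
}
```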
\nI'm assuming you already know about the Last-Modified and If-Modified-Since HTTP headers. If not, go ahead and read the links. Below is an example controller and action method that returns a list of cats.
\n[Route("[controller]")]\npublic class CatsController : ControllerBase\n{\n private readonly ICatRepository catRepository;\n private readonly ICatMapper catMapper;\n\n public CatsController(\n ICatRepository catRepository,\n ICatMapper catMapper)\n {\n this.catRepository = catRepository;\n this.catMapper = catMapper;\n }\n\n [HttpGet("")]\n public async Task<IActionResult> GetCats(CancellationToken cancellationToken)\n {\n var cats = await this.catRepository.GetAll(cancellationToken);\n var lastModified = cats.Count == 0 ? \n (DateTimeOffset?)null : \n cats.Max(x => x.ModifiedTimestamp);\n\n this.Response.GetTypedHeaders().LastModified = lastModified;\n\n var requestHeaders = this.Request.GetTypedHeaders();\n if (requestHeaders.IfModifiedSince.HasValue &&\n requestHeaders.IfModifiedSince.Value >= lastModified)\n {\n return this.StatusCode(StatusCodes.Status304NotModified);\n }\n\n var catViewModels = this.catMapper.MapList(cats);\n return this.Ok(catViewModels);\n }\n}\n\nAll of our cats have a ModifiedTimestamp, so we know when they were last changed. There are four scenarios that this action method handles:
\nNo If-Modified-Since HTTP header exists in the request, so we just return all cats.\nThe If-Modified-Since HTTP header exists and cats have been modified since that date, so we return all cats.\nThe If-Modified-Since HTTP header exists but no cats have been modified since that date, so we return a 304 Not Modified response.\nThere are no cats at all, so we return an empty list.\n\nIn all cases, except when we have no cats at all, we set the Last-Modified date to the latest date that any cat has been modified.
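The 304 decision in the action method above boils down to a small pure function. This is a sketch with hypothetical names, not code from the post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class NotModifiedSketch
{
    // Returns true when a 304 Not Modified response can be sent.
    public static bool CanReturn304(
        DateTimeOffset? ifModifiedSince,
        IReadOnlyCollection<DateTimeOffset> modifiedTimestamps)
    {
        // No items at all, or no If-Modified-Since header: send the full response.
        if (modifiedTimestamps.Count == 0 || !ifModifiedSince.HasValue)
        {
            return false;
        }

        // 304 only when nothing changed after the client's cached copy.
        return ifModifiedSince.Value >= modifiedTimestamps.Max();
    }
}
```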
Which caching HTTP headers you pick, depends on your data but at a minimum, I would add E-Tag's or Last-Modified. Add Cache-Control where possible, usually for static assets.
An .editorconfig file helps developers define and maintain consistent coding styles between different editors and IDEs for files with different file extensions. These configuration files are easily readable and they work nicely with version control systems. An .editorconfig file defines various settings per file extension such as charsets and tabs vs spaces.
Scott Hanselman recently wrote a blog post about this file. You can also find out more from the official docs at editorconfig.org and the Visual Studio Docs which I recommend you read.
\nI wrote a generic .editorconfig file supporting the following file types:
\nExtensive code style settings for C# and VB.NET have been defined that require the latest C# features to be used. In addition, it sets various more advanced C# style settings. All C# related code styles are consistent with StyleCop's default styles. You can find our more about the C# code style settings from the official docs and also in Kent Boogaart's blog post.
\nAll you have to do is drop it into the root of your project. Then any time you open a file in Visual Studio, the .editorconfig file settings will be used to help format the document and also raise warnings if your code style and formatting does not conform.
For Visual Studio Code, you can install the EditorConfig for VS Code extension to get support.
\nI noticed that Microsoft silently released several new C# code style settings. I'm not sure when they were released but they're available in the current Visual Studio 15.7 update. The majority of them are to enforce the use of newer C# 7.3 syntax. I updated my generic .editorconfig file to add these new settings with C# 7.3 as the default.
Microsoft also updated their documentation for .editorconfig settings pertaining to .NET, so I added links to the docs site, making it easy to see what each setting does and change it if it's not to your liking. I've also included a dozen undocumented settings. There is an open issue on GitHub to get them documented, so it's easy to see what they do.
In addition, while I was working on this, I added support for a few more file extensions, including yaml (yml was already there), json5 (If you haven't heard of json5, check it out), cmd and bat (If you haven't switched to PowerShell yet, what are you waiting for?).
Finally, Microsoft announced last week that the Visual Studio 15.8 update which is currently being released as preview 3 will automatically fix errors when you format the document using the Ctrl+K followed by Ctrl+D shortcut. This is huge! It means that you can drop a .editorconfig file in an existing codebase and with a few clicks or keyboard shortcuts (if that's how you roll) you can clean up your code base to use the latest C# 7.3 features and a code style that suits you.
Keeping up with the changes in the software development industry always feels like a losing battle. Just when I feel like I've started to catch up and learn the things I need to know to do my job (and hobby), a new version of something is released or even worse, something totally revolutionary comes out which means we have to start learning from first principles again.
\nWebAssembly is one such technology which is a game changer but still at the prototype stage. When it does hit, prepare for the whirlwind. Prepare to throw away most of what you knew and re-learn that which is now deemed the state of the art and the way to do things.
\nAt some point in history it was possible for a single human being to learn all scientific knowledge known to mankind if they could somehow get hold of the information. We've long since surpassed that point with a single technology like .NET, let alone software development in general.
\nThen you've got the cross-functional developers out there. The ones that try to learn everything required to build a complete application. I try to do this but I have the constant feeling that I'm behind, that I need to catch up. It's impossible to learn everything in any real depth, so you have to skim the surface of some technologies to get by. I wish I knew more about T-SQL, ElasticSearch, Webpack and pretty much all of the myriad of JavaScript frameworks.
\nI'm not complaining of course, having to learn is what keeps it interesting and what keeps us coming back for more.
\nThe list below is my personal list of RSS feeds that I follow to keep up to date. I've put this list together over several years and use an RSS feed reader called Feedly which keeps track of the articles I've read. Hopefully, it helps you keep up to date.
\nThese are sites which aggregate articles from various sources.
\nKeep up to date with product announcements and updates.
\nFind out how various online services are changing.
\nI used to do a lot more Windows development with WPF, Silverlight and UWP etc. I don't really follow anyone from those days anymore except Mike Taulty.
\nThese people don't really fit into any one category but are well worth reading:
\nVideos are a great way to learn but they do suck up a lot of time so be selective.
\nSchema.org defines a set of standard classes and their properties for objects and services in the real world. There are nearly 700 classes at the time of writing defined by schema.org. This machine readable format is a common standard used across the web for describing things.
\nWebsites can define Structured Data in the head section of their HTML to enable search engines to show richer information in their search results. Here is an example of how Google can display extended metadata about your site in its search results.
\n
Using structured data in html requires the use of a script tag with a MIME type of application/ld+json like so:
<script type="application/ld+json">\n{\n "@context": "http://schema.org",\n "@type": "Organization",\n "url": "http://www.example.com",\n "name": "Unlimited Ball Bearings Corp.",\n "contactPoint": {\n "@type": "ContactPoint",\n "telephone": "+1-401-555-1212",\n "contactType": "Customer service"\n }\n}\n</script>\n\nWindows UWP apps let you share data using schema.org classes. Here is an example showing how to share metadata about a book.
\nSchema.NET is Schema.org objects turned into strongly typed C# POCO classes for use in .NET. All classes can be serialized into JSON/JSON-LD. Here is a simple Schema.NET example that defines the name and URL of a website:
\nvar website = new WebSite()\n{\n AlternateName = "An Alternative Name",\n Name = "Your Site Name",\n Url = new Uri("https://example.com")\n};\nvar jsonLd = website.ToString();\n\nThe code above outputs the following JSON-LD:
\n{\n "@context":"http://schema.org",\n "@type":"WebSite",\n "alternateName":"An Alternative Name",\n "name":"Your Site Name",\n "url":"https://example.com"\n}\n\nThere are dozens more examples based on Google's Structured Data documentation with links to the relevant page in the unit tests of the Schema.NET project.
\nschema.org defines classes and properties, where each property can have a single value or an array of multiple values. Additionally, properties can have multiple types e.g. an Address property could have a type of string or a type of PostalAddress, which has its own properties such as StreetAddress or PostalCode that break up an address into its constituent parts.
To facilitate this, Schema.NET uses some clever C# generics and implicit type conversions, so that you can set either a single value or multiple values, and pass either a string or a PostalAddress:
// Single string address\nvar organization = new Organization()\n{\n Address = "123 Old Kent Road E10 6RL"\n};\n\n// Multiple string addresses\nvar organization = new Organization()\n{\n Address = new List<string>()\n { \n "123 Old Kent Road E10 6RL",\n "456 Finsbury Park Road SW1 2JS"\n }\n};\n\n// Single PostalAddress address\nvar organization = new Organization()\n{\n Address = new PostalAddress()\n {\n StreetAddress = "123 Old Kent Road",\n PostalCode = "E10 6RL"\n }\n};\n\n// Multiple PostalAddress addresses\nvar organization = new Organization()\n{\n Address = new List<PostalAddress>()\n {\n new PostalAddress()\n {\n StreetAddress = "123 Old Kent Road",\n PostalCode = "E10 6RL"\n },\n new PostalAddress()\n {\n StreetAddress = "456 Finsbury Park Road",\n PostalCode = "SW1 2JS"\n }\n }\n};\n\nThis magic is all carried out using the Value<T>, Value<T1, T2>, Value<T1, T2, T3> etc. types. These types are all structs for best performance too.
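The trick can be illustrated with a simplified stand-in for the real Values types (this struct is mine, not the actual Schema.NET implementation):

```csharp
using System.Collections.Generic;
using System.Linq;

// Accepts a single T1 or T2, or a list of either, via implicit conversions.
public readonly struct OneOrMany<T1, T2>
{
    public IReadOnlyList<object> Items { get; }

    private OneOrMany(IReadOnlyList<object> items) => this.Items = items;

    public static implicit operator OneOrMany<T1, T2>(T1 value) =>
        new OneOrMany<T1, T2>(new object[] { value });

    public static implicit operator OneOrMany<T1, T2>(T2 value) =>
        new OneOrMany<T1, T2>(new object[] { value });

    public static implicit operator OneOrMany<T1, T2>(List<T1> values) =>
        new OneOrMany<T1, T2>(values.Cast<object>().ToList());

    public static implicit operator OneOrMany<T1, T2>(List<T2> values) =>
        new OneOrMany<T1, T2>(values.Cast<object>().ToList());
}
```

With a property typed as OneOrMany&lt;string, PostalAddress&gt;, all four assignment styles shown above compile without any explicit wrapping.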
Download the Schema.NET NuGet package or take a look at the code on GitHub. At some point I'll find the time to write a quick ASP.NET Core tag helper that wraps Schema.NET.
\n", "url": "https://rehansaeed.com/structured-data-using-schema-net/", "title": "Structured Data using Schema.NET", "summary": "Schema.NET is Schema.org objects turned into strongly typed C# POCO classes for use in .NET.", "image": "https://rehansaeed.com/images/hero/Schema.NET-1366x768.png", "date_modified": "2017-07-02T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/asp-net-core-lazy-command-pattern/", "content_html": "::: tip TLDR\nMove your ASP.NET Core MVC action method logic into lazily loaded commands using the command pattern.\n:::
\nWhen writing your Controllers in ASP.NET Core, you can end up with a very long class if you're not careful. You may have written several action methods with a few lines of code in each, you may be injecting a few services into your controller and you may have commented your action methods to support Swagger. The point is it's very easy to do, here is an example:
\n[Route("[controller]")]\npublic class RocketController : Controller\n{\n private readonly IPlanetRepository planetRepository;\n private readonly IRocketRepository rocketRepository;\n\n public RocketController(\n IPlanetRepository planetRepository,\n IRocketRepository rocketRepository)\n {\n this.planetRepository = planetRepository;\n this.rocketRepository = rocketRepository;\n }\n \n [HttpGet("{rocketId}")]\n public async Task<IActionResult> GetRocket(int rocketId)\n {\n var rocket = await this.rocketRepository.GetRocket(rocketId);\n if (rocket == null)\n {\n return this.NotFound();\n }\n return this.Ok(rocket);\n }\n \n [HttpGet("{rocketId}/launch/{planetId}")]\n public async Task<IActionResult> LaunchRocket(int rocketId, int planetId)\n {\n var rocket = await this.rocketRepository.GetRocket(rocketId);\n if (rocket == null)\n {\n return this.NotFound();\n }\n var planet = await this.planetRepository.GetPlanet(planetId);\n if (planet == null)\n {\n return this.NotFound();\n }\n this.rocketRepository.VisitPlanet(rocket, planet);\n return this.Ok(rocket);\n }\n}\n\nThis is where the command pattern can come in handy. The command pattern moves logic from each action method and injected dependencies into their own class like so:
\n[Route("[controller]")]\npublic class RocketController : Controller\n{\n private readonly Lazy<IGetRocketCommand> getRocketCommand;\n private readonly Lazy<ILaunchRocketCommand> launchRocketCommand;\n\n public RocketController(\n Lazy<IGetRocketCommand> getRocketCommand,\n Lazy<ILaunchRocketCommand> launchRocketCommand)\n {\n this.getRocketCommand = getRocketCommand;\n this.launchRocketCommand = launchRocketCommand;\n }\n\n [HttpGet("{rocketId}")]\n public Task<IActionResult> GetRocket(int rocketId) =>\n this.getRocketCommand.Value.ExecuteAsync(rocketId);\n\n [HttpGet("{rocketId}/launch/{planetId}")]\n public Task<IActionResult> LaunchRocket(int rocketId, int planetId) =>\n this.launchRocketCommand.Value.ExecuteAsync(rocketId, planetId);\n}\n\npublic interface IGetRocketCommand : IAsyncCommand<int>\n{\n}\n\npublic class GetRocketCommand : IGetRocketCommand\n{\n private readonly IRocketRepository rocketRepository;\n\n public GetRocketCommand(IRocketRepository rocketRepository) =>\n this.rocketRepository = rocketRepository;\n\n public async Task<IActionResult> ExecuteAsync(int rocketId)\n {\n var rocket = await this.rocketRepository.GetRocket(rocketId);\n if (rocket == null)\n {\n return new NotFoundResult();\n }\n return new OkObjectResult(rocket);\n }\n}\n\nAll the logic and dependencies in the controllers gets moved to the command which now has a single responsibility. The controller now has a different set of dependencies, it now lazily injects one command per action method.
\nYou may have noticed the IAsyncCommand interface. I keep four of these handy to inherit from. They all define an ExecuteAsync method that executes the command and returns an IActionResult, but they take a differing number of parameters. I feel that if you need more than three parameters, you should use a class to represent them, so I've capped the interfaces at three parameters.
public interface IAsyncCommand\n{\n Task<IActionResult> ExecuteAsync();\n}\npublic interface IAsyncCommand<T>\n{\n Task<IActionResult> ExecuteAsync(T parameter);\n}\npublic interface IAsyncCommand<T1, T2>\n{\n Task<IActionResult> ExecuteAsync(T1 parameter1, T2 parameter2);\n}\npublic interface IAsyncCommand<T1, T2, T3>\n{\n Task<IActionResult> ExecuteAsync(T1 parameter1, T2 parameter2, T3 parameter3);\n}\n\nWhy do we use Lazy<T>? Well, the answer is that if we have multiple action methods on our controller, we don't want to instantiate the dependencies for every action method when only one of them is going to run. Registering our Lazy commands requires a bit of extra work in our Startup.cs. We can register lazy dependencies like so:
public void ConfigureServices(IServiceCollection services)\n{\n // ...Omitted\n services\n .AddScoped<IGetRocketCommand, GetRocketCommand>()\n .AddScoped(x => new Lazy<IGetRocketCommand>(\n () => x.GetRequiredService<IGetRocketCommand>()));\n}\n\nNow you might be thinking: how do I access the HttpContext or ActionContext if, for example, I want to set an HTTP header? Well, you can use the IHttpContextAccessor or IActionContextAccessor interfaces for this purpose. You can register them in your Startup class like so:
public void ConfigureServices(IServiceCollection services)\n{\n // ...Omitted\n services\n .AddSingleton<IHttpContextAccessor, HttpContextAccessor>()\n .AddSingleton<IActionContextAccessor, ActionContextAccessor>();\n}\n\nNotice that they can be registered as singletons. You can then use them to get hold of the HttpContext or ActionContext objects for the current HTTP request. Here is a really simple example.
public class SetHttpHeaderCommand : ISetHttpHeaderCommand\n{\n private readonly IHttpContextAccessor httpContextAccessor;\n\n public SetHttpHeaderCommand(IHttpContextAccessor httpContextAccessor) =>\n this.httpContextAccessor = httpContextAccessor;\n\n public Task<IActionResult> ExecuteAsync()\n {\n this.httpContextAccessor.HttpContext.Response.Headers.Add("X-Rocket", "Saturn V");\n return Task.FromResult<IActionResult>(new OkResult());\n }\n}\n\nAnother upside to the command pattern is that testing each command becomes super simple. You don't need to set up a controller with lots of dependencies that you don't care about. You only need to write test code for that single feature.
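To illustrate, a unit test for GetRocketCommand needs nothing more than a stubbed repository. The sketch below assumes xUnit and a hypothetical hand-rolled FakeRocketRepository; neither is part of the example above:

```csharp
public class GetRocketCommandTest
{
    [Fact]
    public async Task ExecuteAsync_RocketNotFound_ReturnsNotFound()
    {
        // FakeRocketRepository is an illustrative stub whose GetRocket
        // method returns the rocket passed in here (null in this case).
        var command = new GetRocketCommand(new FakeRocketRepository(rocket: null));

        var result = await command.ExecuteAsync(1);

        Assert.IsType<NotFoundResult>(result);
    }
}
```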
\nFor a full working example, take a look at the .NET Boxed API project template which makes full use of the Lazy Command Pattern.
\n", "url": "https://rehansaeed.com/asp-net-core-lazy-command-pattern/", "title": "ASP.NET Core Lazy Command Pattern", "summary": "Move your ASP.NET Core MVC action method logic into lazily loaded commands using the command pattern, to reduce Controller complexity.", "image": "https://rehansaeed.com/images/hero/ASP.NET-Core-Lazy-Command-Pattern-1366x768.png", "date_modified": "2017-04-08T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/dotnet-new-feature-selection/", "content_html": "In my last post I showed how to get started with using dotnet new to build project templates. In this post, I'm going to build on that knowledge and show how to add feature selection to your project template so developers can choose to add or remove bits of your template. If you check out my .NET Boxed API project template, you'll see that I have 17 features for you to set. If you run the help command against my template you'll see a description of each and instructions on how you can set them (I've cleaned up the CLI output; the current help command's output is pretty awful, but this is being addressed in the next version of dotnet new).
PS C:\\Users\\rehan.saeed> dotnet new api --help\nTemplate Instantiation Commands for .NET Core CLI.\n\nUsage: dotnet new [arguments] [options]\n\nArguments:\n template The template to instantiate.\n\nOptions:\n -l|--list List templates containing the specified name.\n -lang|--language Specifies the language of the template to create\n -n|--name The name for the output being created. If no name is specified, the name of the current directory is\nused.\n -o|--output Location to place the generated output.\n -h|--help Displays help for this command.\n -all|--show-all Shows all templates\n\n.NET Boxed API (C#)\nAuthor: Muhammad Rehan Saeed (RehanSaeed.com)\nOptions:\n -Ti|--Title: The name of the project which determines the assembly product name. If the Swagger feature is enabled,\n shows the title on the Swagger UI.\n string - Optional\n Default: Project Title\n -D|--Description: A description of the project which determines the assembly description. If the Swagger feature is\n enabled, shows the description on the Swagger UI.\n string - Optional\n Default: Project Description\n -Au|--Author: The name of the author of the project which determines the assembly author, company and copyright\n information.\n string - Optional\n Default: Project Author\n -Sw|--Swagger: Swagger is a format for describing the endpoints in your API. Swashbuckle is used to generate a\n Swagger document and to generate beautiful API documentation, including a UI to explore and test operations,\n directly from your routes, controllers and models.\n bool - Optional\n Default: true\n -T|--TargetFramework: Decide which version of the .NET Framework to target.\n .NET Core - Run cross platform (on Windows, Mac and Linux). 
The framework is made up of NuGet packages\n which can be shipped with the application so it is fully stand-alone.\n .NET Framework - Gives you access to the full breadth of libraries available in .NET instead of the subset\n available in .NET Core but requires it to be pre-installed.\n Both - Target both .NET Core and .NET Framework.\n Default: Both\n -P|--PrimaryWebServer: The primary web server you want to use to host the site.\n Kestrel - A web server for ASP.NET Core that is not intended to be internet facing as it has not been\n security tested. IIS or NGINX should be placed in front as reverse proxy web servers.\n WebListener - A Windows only web server. It gives you the option to take advantage of Windows specific\n features, like Windows authentication, port sharing, HTTPS with SNI, HTTP/2 over TLS\n (Windows 10), direct file transmission, and response caching WebSockets (Windows 8).\n Default: Kestrel\n -Re|--ReverseProxyWebServer: The internet facing reverse proxy web server you want to use in front of the primary\n web server to host the site.\n Internet Information Services (IIS) - A flexible, secure and manageable Web server for hosting anything on the\n Web using Windows Server. Select this option if you are deploying your site\n to Azure web apps. IIS is preconfigured to set request limits for security.\n NGINX - A free, open-source, cross-platform high-performance HTTP server and\n reverse proxy, as well as an IMAP/POP3 proxy server. It does have a Windows\n version but it's not very fast and IIS is better on that platform. If the\n HTTPS Everywhere feature is enabled, NGINX is pre-configured to enable the\n most secure TLS protocols and ciphers for security and to enable HTTP 2.0\n and SSL stapling for performance.\n Both - Support both reverse proxy web servers.\n Default: Both\n -C|--CloudProvider: Select which cloud provider you are using if any, to add cloud specific features.\n Azure - The Microsoft Azure cloud. 
Adds logging features that let you see logs in the Azure portal.\n None - No cloud provider is being used.\n Default: None\n -A|--Analytics: Monitor internal information about how your application is running, as well as external user\n information.\n Application Insights - Monitor internal information about how your application is running, as well as\n external user information using the Microsoft Azure cloud.\n None - Not using any analytics.\n Default: None\n -Ap|--ApplicationInsightsInstrumentationKey: Your Application Insights instrumentation key\n e.g. 11111111-2222-3333-4444-555555555555.\n string - Optional\n Default: APPLICATION-INSIGHTS-INSTRUMENTATION-KEY\n -H|--HttpsEverywhere: Use the HTTPS scheme and TLS security across the entire site, redirects HTTP to HTTPS and\n adds a Strict Transport Security (HSTS) HTTP header with preloading enabled. Configures the primary and reverse\n proxy web servers for best security and adds a development certificate file for use in your development environment.\n bool - Optional\n Default: true\n -Pu|--PublicKeyPinning: Adds the Public-Key-Pins (HPKP) HTTP header to responses. It stops man-in-the-middle\n attacks by telling browsers exactly which TLS certificate you expect. You must have two TLS certificates for this\n to work, if you get this wrong you will have performed a denial of service attack on yourself.\n bool - Optional\n Default: false\n -CO|--CORS: Browser security prevents a web page from making AJAX requests to another domain. This restriction is\n called the same-origin policy, and prevents a malicious site from reading sensitive data from another site.\n CORS is a W3C standard that allows a server to relax the same-origin policy. 
Using CORS, a server can explicitly\n allow some cross-origin requests while rejecting others.\n bool - Optional\n Default: true\n -X|--XmlFormatter: Choose whether to use the XML input/output formatter and which serializer to use.\n DataContractSerializer - The default XML serializer you should use. Requires the use of [DataContract] and\n [DataMember] attributes.\n XmlSerializer - The alternative XML serializer which is slower but gives more control. Uses the\n [XmlRoot], [XmlElement] and [XmlAttribute] attributes.\n None - No XML formatter.\n Default: None\n -S|--StatusController: An endpoint that returns the status of this API and its dependencies, giving an indication\n of its health. This endpoint can be called by site monitoring tools which ping the site or by load balancers\n which can remove an instance of this API if it is not functioning correctly.\n bool - Optional\n Default: true\n -R|--RequestId: Require that all requests send the X-Request-ID HTTP header containing a GUID. This is useful where\n you have access to the client and server logs and want to correlate a request and response between the two.\n bool - Optional\n Default: false\n -U|--UserAgent: Require that all requests send the User-Agent HTTP header containing the application name and\n version of the caller.\n bool - Optional\n Default: false\n -Ro|--RobotsTxt: Adds a robots.txt file to tell search engines not to index this site.\n bool - Optional\n Default: true\n -Hu|--HumansTxt: Adds a humans.txt file where you can tell the world who wrote the application. This file is a good\n place to thank your developers.\n bool - Optional\n Default: true\n\nAs you can see from the output, there are a few different types of feature you can create. You can also choose to make a feature required or optional. An optional feature, if not specified by the user, falls back to a default value. Here are the different types available:
\nYou can create a boolean feature by adding a symbols section to your template.json file. If you look at the example below, I've specified an optional bool symbol, with a default value of true.
{\n ...\n "symbols": {\n "Swagger": {\n "type": "parameter",\n "datatype": "bool",\n "isRequired": false,\n "defaultValue": "true",\n "description": "Your description..."\n }\n }\n}\n\nIn your code, you can then use the symbol name, in this case Swagger, as a pre-processor directive in C# code:
#if (Swagger)\nConsole.WriteLine("Swagger feature was selected");\n#else\nConsole.WriteLine("Swagger feature was not selected");\n#endif\n\nThis is really cool because, as a template author, you can still run the application and the project will work. If you define a Swagger constant in your project properties, your feature will turn on or off too. This makes debugging your project template very easy.
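As a template author you can simulate a feature selection while debugging by defining the constant yourself in the csproj. This is plain MSBuild, not part of the templating engine, so treat it as an illustrative sketch:

```xml
<PropertyGroup>
  <!-- Define the Swagger symbol so the #if (Swagger) branches compile
       while you are working on the template content itself. -->
  <DefineConstants>$(DefineConstants);Swagger</DefineConstants>
</PropertyGroup>
```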
If you want to use the symbol in files other than C#, where pre-processor directives do not exist, you can use the comment syntax specific to that file extension, so a JavaScript file would use the // syntax:
//#if (Swagger)\nconsole.log('Swagger feature was selected');\n//#else\nconsole.log('Swagger feature was not selected');\n//#endif\n\nMost file extensions that have their own comment syntax have been catered for. For text files where there is no comment syntax or for any file extension that the templating engine doesn't know about you can use the # character:
#if (Swagger)\nSwagger feature was selected\n#else\nSwagger feature was not selected\n#endif\n\nYou can look at this code in the templating engine for a full list of supported file extensions and comment types.
\nString symbols can be used to do simple file replace operations.
\n{\n ...\n "symbols": {\n "Title": {\n "type": "parameter",\n "datatype": "string",\n "isRequired": false,\n "defaultValue": "Default Project Title",\n "replaces": "PROJECT-TITLE",\n "description": "Your description..."\n }\n }\n}\n\nThe above symbol looks for a PROJECT-TITLE string and replaces it with whatever the user specifies or with the default value Default Project Title if the user doesn't set anything.
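For example, your template content can carry the placeholder anywhere a plain text replace makes sense, such as a csproj property (the file and property here are illustrative, not from the template):

```xml
<!-- Template content before instantiation; the engine replaces
     PROJECT-TITLE with the value of the Title symbol. -->
<PropertyGroup>
  <Product>PROJECT-TITLE</Product>
</PropertyGroup>
```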
A choice symbol is useful when you have more than two options and can't use bool.
\n{\n ...\n "symbols": {\n "TargetFramework": {\n "type": "parameter",\n "datatype": "choice",\n "isRequired": false,\n "choices": [\n {\n "choice": ".NET Core",\n "description": "Your description..."\n },\n {\n "choice": ".NET Framework",\n "description": "Your description..."\n },\n {\n "choice": "Both",\n "description": "Your description..."\n }\n ],\n "defaultValue": "Both",\n "description": "Your description..."\n }\n }\n}\n\nIn the example above, you have the choice of selecting a target framework, with a value of .NET Core, .NET Framework or Both. Each choice has its own description and the overall symbol also has its own description.
In the above example, you can't use the value '.NET Core' as a C# pre-processor variable because it contains a dot and a space. This is where a computed symbol comes in handy.
\n{\n ...\n "symbols": {\n "NETCore": {\n "type": "computed",\n "value": "(TargetFramework == \\".NET Core\\" || TargetFramework == \\"Both\\")"\n },\n "NETFramework": {\n "type": "computed",\n "value": "(TargetFramework == \\".NET Framework\\" || TargetFramework == \\"Both\\")"\n }\n }\n}\n\nHere I have set up two computed symbols which determine whether '.NET Core' or '.NET Framework' was selected individually in the previous choice symbol. I have named these symbols without a dot or space, i.e. NETCore and NETFramework, so I can use them as C# pre-processor symbols, the same way I showed above.
You can also use symbols to delete certain files or folders. In this example, I've extended my bool symbol example to additionally remove two files and a folder if the feature is deselected by the user.
\n{\n ...\n "symbols": {\n "Swagger": {\n "type": "parameter",\n "datatype": "bool",\n "isRequired": false,\n "defaultValue": "true",\n "description": "Your description..."\n }\n },\n "sources": [\n {\n "modifiers": [\n {\n "condition": "(!Swagger)",\n "exclude": [\n "Constants/HomeControllerRoute.cs",\n "Controllers/HomeController.cs",\n "ViewModelSchemaFilters/**/*"\n ]\n }\n ]\n }\n ]\n}\n\nYou do this by adding source modifiers. I've added one here with a condition and three file and folder exclusions. The exclusions use a globbing pattern.
\nThere are several other useful features of the templating engine which I'll cover in a follow up post as this is starting to get quite long. Feel free to take a look at the source code for my API template to see a full example.
\n", "url": "https://rehansaeed.com/dotnet-new-feature-selection/", "title": "dotnet new Feature Selection", "summary": "How to add feature selection to your dotnet new template using symbols (bool, string, choice, computed) and pre-processor directives.", "image": "https://rehansaeed.com/images/hero/Who-Wants-To-Be-A-Millionaire-1366x768.png", "date_modified": "2017-03-26T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/cleaning-up-csproj/", "content_html": "::: tip TLDR\nI show how to make csproj XML concise and pretty for hand editing.\n:::
I used project.json since Beta 7 and got used to hand editing it. I've continued that practice with .csproj files and I think you should too. Recent versions of Visual Studio have made a lot of performance improvements, but it's still a lot slower than hand editing a text file.
The NuGet package screen in Visual Studio is achingly slow; bulk editing the csproj by hand takes seconds. I can update NuGet package references, package properties etc. all in one go, rather than visiting multiple disparate UIs in Visual Studio. Finally, I create new projects by copying and pasting an existing csproj and tweaking it. Much faster than Visual Studio's New Project dialogue.
The Project File Tools Visual Studio extension gives you intellisense for NuGet packages in the new csproj projects. Unfortunately, due to MSBuild being around for so long and being so complex, intellisense for the rest of the project XML consists of a massive list of possible properties so it becomes less useful than it was in project.json.
After migrating my project.json projects to csproj using Visual Studio 2017 (you could also use the dotnet migrate command), I found that the XML generated was pretty ugly and contained superfluous elements you just didn't need. Here is an example csproj library project straight after migration:
<Project Sdk="Microsoft.NET.Sdk">\n\n <PropertyGroup>\n <Description>...</Description>\n <Copyright>Copyright © Muhammad Rehan Saeed. All rights Reserved</Copyright>\n <AssemblyTitle>Dotnet Boxed Framework</AssemblyTitle>\n <VersionPrefix>2.2.2</VersionPrefix>\n <Authors>Muhammad Rehan Saeed (RehanSaeed.com)</Authors>\n <TargetFrameworks>netstandard1.6;net461</TargetFrameworks>\n <TreatWarningsAsErrors>true</TreatWarningsAsErrors>\n <GenerateDocumentationFile>true</GenerateDocumentationFile>\n <AssemblyName>Boxed.AspNetCore</AssemblyName>\n <AssemblyOriginatorKeyFile>../../../Key.snk</AssemblyOriginatorKeyFile>\n <SignAssembly>true</SignAssembly>\n <PublicSign Condition=" '$(OS)' != 'Windows_NT' ">true</PublicSign>\n <PackageId>Boxed.AspNetCore</PackageId>\n <PackageTags>ASP.NET;ASP.NET Core;MVC;Boxed;Muhammad Rehan Saeed;Framework</PackageTags>\n <PackageReleaseNotes>Updated to ASP.NET Core 1.1.2.</PackageReleaseNotes>\n <PackageIconUrl>https://raw.githubusercontent.com/Dotnet-Boxed/Framework/main/Images/Icon.png</PackageIconUrl>\n <PackageProjectUrl>https://github.com/Dotnet-Boxed/Framework</PackageProjectUrl>\n <PackageLicenseUrl>https://github.com/Dotnet-Boxed/Framework/blob/main/LICENSE</PackageLicenseUrl>\n <PackageRequireLicenseAcceptance>true</PackageRequireLicenseAcceptance>\n <RepositoryType>git</RepositoryType>\n <RepositoryUrl>https://github.com/Dotnet-Boxed/Framework.git</RepositoryUrl>\n <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>\n <GenerateAssemblyCompanyAttribute>false</GenerateAssemblyCompanyAttribute>\n <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>\n </PropertyGroup>\n\n <ItemGroup>\n <ProjectReference Include="..\\Framework\\Framework.csproj" />\n </ItemGroup>\n\n <ItemGroup>\n <PackageReference Include="Microsoft.AspNetCore.Mvc.Abstractions" Version="1.1.2" />\n <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.2" />\n <PackageReference 
Include="Microsoft.Extensions.Caching.Abstractions" Version="1.1.1" />\n <PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="1.1.1" />\n <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />\n <PackageReference Include="StyleCop.Analyzers" Version="1.0.0">\n <PrivateAssets>All</PrivateAssets>\n </PackageReference>\n </ItemGroup>\n\n <ItemGroup Condition=" '$(TargetFramework)' == 'netstandard1.6' ">\n <PackageReference Include="System.Xml.XDocument" Version="4.3.0" />\n </ItemGroup>\n\n <ItemGroup Condition=" '$(TargetFramework)' == 'net461' ">\n <Reference Include="System.ServiceModel" />\n <Reference Include="System.Xml" />\n <Reference Include="System.Xml.Linq" />\n <Reference Include="System" />\n <Reference Include="Microsoft.CSharp" />\n </ItemGroup>\n\n</Project>\n\nThe top of the project contains a new SDK property. This imports some MSBuild targets and props files in your dotnet installation folder shown below:

If you root around in those files, you can find defaults for all kinds of settings. Here are some of the nuggets I discovered about the web projects:
\n- The NETStandard.Library version 1.6.1 NuGet package is referenced for you by default.\n- The wwwroot folder is excluded from compilation but included in the published output.\n- web.config, .cshtml and .json files are published by default.\n- Server garbage collection is turned on by default via the ServerGarbageCollection setting.\n- PreserveCompilationContext is set to true by default.\n- node_modules, jspm_packages and bower_components are excluded by default.\n\nYou don't need AssemblyInfo.cs anymore by default, as the csproj Package settings also set many of the assembly attributes. In fact, you didn't really need it with project.json either, but the default templates mostly included it for some reason. However, I still found I needed to resurrect it in some cases to use the InternalsVisibleTo attribute. InternalsVisibleTo allows my unit test projects to access internal members in my library project. After a dotnet migrate, you may see the following elements which stop certain assembly attributes from being generated. You can safely delete these.
<PropertyGroup>\n <!-- ...Omitted -->\n <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>\n <GenerateAssemblyCompanyAttribute>false</GenerateAssemblyCompanyAttribute>\n <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>\n</PropertyGroup>\n\nYou no longer need to explicitly add System.* references in your csproj. David Fowler recommends that you always reference the NETStandard.Library meta NuGet package, which gives you most System.* references. You get NETStandard.Library by default if you use the SDK attribute at the top of the csproj:
<Project Sdk="Microsoft.NET.Sdk">\n <!-- ...Omitted -->\n</Project>\n\nThis meant that I could remove the entire code block below except System.ServiceModel because that reference is not given to you by the NETStandard.Library NuGet package.
<ItemGroup Condition=" '$(TargetFramework)' == 'netstandard1.6' ">\n <PackageReference Include="System.Xml.XDocument" Version="4.3.0" />\n</ItemGroup>\n\n<ItemGroup Condition=" '$(TargetFramework)' == 'net461' ">\n <Reference Include="System.ServiceModel" />\n <Reference Include="System.Xml" />\n <Reference Include="System.Xml.Linq" />\n <Reference Include="System" />\n <Reference Include="Microsoft.CSharp" />\n</ItemGroup>\n\nFor some reason dotnet migrate produces overly verbose XML in some cases by outputting XML elements instead of attributes. I have a NuGet reference to StyleCop.Analyzers which is a build-time dependency that I don't want output to my bin directory. You prevent that by setting the PrivateAssets property, and you can turn this:
<PackageReference Include="StyleCop.Analyzers" Version="1.0.0">\n <PrivateAssets>All</PrivateAssets>\n</PackageReference>\n\nInto this:
\n<PackageReference Include="StyleCop.Analyzers" PrivateAssets="All" Version="1.0.0" />\n\nYou can label your PropertyGroup and ItemGroup elements using the Label attribute:
<PropertyGroup Label="Package">\n <!-- NuGet Packages Omitted -->\n</PropertyGroup>\n\nSo the question becomes, how should we label them? Well, the convention I use is to use the same label names as the ones in Visual Studio's project properties screen:
\n
This is what my csproj looks like at the end of all that. I've removed all the extra fluff you don't need and labelled the properties in a way that makes navigating the file with your eye that much quicker.
<Project Sdk="Microsoft.NET.Sdk">\n\n <PropertyGroup Label="Build">\n <TargetFrameworks>netstandard1.6;net461</TargetFrameworks>\n <TreatWarningsAsErrors>true</TreatWarningsAsErrors>\n <GenerateDocumentationFile>true</GenerateDocumentationFile>\n <CodeAnalysisRuleSet>../../../MinimumRecommendedRulesWithStyleCop.ruleset</CodeAnalysisRuleSet>\n </PropertyGroup>\n\n <PropertyGroup Label="Package">\n <VersionPrefix>2.2.2</VersionPrefix>\n <Authors>Muhammad Rehan Saeed (RehanSaeed.com)</Authors>\n <Product>Dotnet Boxed Framework</Product>\n <Description>...</Description>\n <Copyright>Copyright © Muhammad Rehan Saeed. All rights Reserved</Copyright>\n <PackageRequireLicenseAcceptance>true</PackageRequireLicenseAcceptance>\n <PackageLicenseUrl>https://github.com/Dotnet-Boxed/Framework/blob/main/LICENSE</PackageLicenseUrl>\n <PackageProjectUrl>https://github.com/Dotnet-Boxed/Framework</PackageProjectUrl>\n <PackageIconUrl>https://raw.githubusercontent.com/Dotnet-Boxed/Framework/main/Images/Icon.png</PackageIconUrl>\n <RepositoryUrl>https://github.com/Dotnet-Boxed/Framework.git</RepositoryUrl>\n <RepositoryType>git</RepositoryType>\n <PackageTags>ASP.NET;ASP.NET Core;MVC;Boxed;Muhammad Rehan Saeed;Framework</PackageTags>\n <PackageReleaseNotes>Updated to ASP.NET Core 1.1.2.</PackageReleaseNotes>\n </PropertyGroup>\n \n <PropertyGroup Label="Signing">\n <SignAssembly>true</SignAssembly>\n <AssemblyOriginatorKeyFile>../../../Key.snk</AssemblyOriginatorKeyFile>\n <PublicSign Condition=" '$(OS)' != 'Windows_NT' ">true</PublicSign>\n </PropertyGroup>\n\n <ItemGroup Label="Project References">\n <ProjectReference Include="..\\Boilerplate\\Boilerplate.csproj" />\n </ItemGroup>\n\n <ItemGroup Label="Package References">\n <PackageReference Include="Microsoft.AspNetCore.Mvc.Abstractions" Version="1.1.2" />\n <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.2" />\n <PackageReference Include="Microsoft.Extensions.Caching.Abstractions" Version="1.1.1" />\n 
<PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="1.1.1" />\n <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />\n <PackageReference Include="StyleCop.Analyzers" PrivateAssets="All" Version="1.0.0" />\n </ItemGroup>\n\n <ItemGroup Condition=" '$(TargetFramework)' == 'net461' " Label=".NET 4.6.1 Package References">\n <Reference Include="System.ServiceModel" />\n </ItemGroup>\n\n</Project>\n\n",
"url": "https://rehansaeed.com/cleaning-up-csproj/",
"title": "Cleaning Up CSPROJ",
"summary": "I show how to make the new Visual Studio 2017 .NET Core based csproj XML concise and pretty for hand editing.",
"image": "https://rehansaeed.com/images/hero/Cleaning-Up-CSPROJ-1366x768.png",
"date_modified": "2017-03-18T00:00:00.000Z",
"author": {
"url": "https://rehansaeed.com"
}
},
{
"id": "https://rehansaeed.com/cross-platform-devops-net-core/",
"content_html": "If you're a library author or writing a cross-platform application then .NET Core is great but it throws up the question, how do you test that your code works on all operating systems? Well the answer is simple, you build and test your code on each platform.
\nThis post builds on Andrew Lock's work where he shows in two blog posts how to build, test and deploy your .NET Core NuGet packages using AppVeyor (Windows) and Travis CI (Mac and Linux) continuous integration build systems.
\nIn Andrew's blog posts, he writes PowerShell (Windows) or Bash (Mac and Linux) scripts to build, test and deploy his code. There were two problems here.
\nI only want to write my shell script once; I don't want to have to learn Bash in-depth, and I don't want to write PowerShell if I can help it. Around the same time I was reading Andrew's blog posts, I read about Cake build.
\nCake lets you write your build, test and deployment script in C# and it provides lots of helper methods to get stuff done making your script very terse. You can get syntax highlighting and intellisense for your Cake scripts by installing the Visual Studio or Visual Studio Code extensions.
\nBuilding and testing your .NET Core code using Cake is dead simple. Grab the build.cake, build.ps1 and build.sh files from the Cake Getting Started guide and drop them at the root of your project. Here is an example of my project and the files we'll be dealing with in this post:

The build.ps1 and build.sh files are shell scripts that download the Cake executable and execute the build.cake C# script. They also take any parameters that are passed to them and pass them onto your cake script. Now paste the following into your build.cake file:
// Target - The task you want to start. Runs the Default task if not specified.\nvar target = Argument("Target", "Default");\n// Configuration - The build configuration (Debug/Release) to use.\n// 1. If a command line parameter is passed, use that.\n// 2. Otherwise if an Environment variable exists, use that.\nvar configuration = \n HasArgument("Configuration") ? Argument<string>("Configuration") :\n EnvironmentVariable("Configuration") != null ? EnvironmentVariable("Configuration") : "Release";\n// The build number to use in the version number of the built NuGet packages.\n// There are multiple ways this value can be passed, this is a common pattern.\n// 1. If a command line parameter is passed, use that.\n// 2. Otherwise if running on AppVeyor, get its build number.\n// 3. Otherwise if running on Travis CI, get its build number.\n// 4. Otherwise if an Environment variable exists, use that.\n// 5. Otherwise default the build number to 0.\nvar buildNumber =\n HasArgument("BuildNumber") ? Argument<int>("BuildNumber") :\n AppVeyor.IsRunningOnAppVeyor ? AppVeyor.Environment.Build.Number :\n TravisCI.IsRunningOnTravisCI ? TravisCI.Environment.Build.BuildNumber :\n EnvironmentVariable("BuildNumber") != null ? 
int.Parse(EnvironmentVariable("BuildNumber")) : 0;\n\n// A directory path to an Artefacts directory.\nvar artefactsDirectory = Directory("./Artefacts");\n\n// Deletes the contents of the Artefacts folder if it should contain anything from a previous build.\nTask("Clean")\n .Does(() =>\n {\n CleanDirectory(artefactsDirectory);\n });\n\n// Run dotnet restore to restore all package references.\nTask("Restore")\n .IsDependentOn("Clean")\n .Does(() =>\n {\n DotNetCoreRestore();\n });\n\n// Find all csproj projects and build them using the build configuration specified as an argument.\n Task("Build")\n .IsDependentOn("Restore")\n .Does(() =>\n {\n var projects = GetFiles("./**/*.csproj");\n foreach(var project in projects)\n {\n DotNetCoreBuild(\n project.GetDirectory().FullPath,\n new DotNetCoreBuildSettings()\n {\n Configuration = configuration\n });\n }\n });\n\n// Look under a 'Tests' folder and run dotnet test against all of those projects.\n// Then drop the XML test results file in the Artefacts folder at the root.\nTask("Test")\n .IsDependentOn("Build")\n .Does(() =>\n {\n var projects = GetFiles("./Tests/**/*.csproj");\n foreach(var project in projects)\n {\n DotNetCoreTest(\n project.GetDirectory().FullPath,\n new DotNetCoreTestSettings()\n {\n ArgumentCustomization = args => args\n .Append("-xml")\n .Append(artefactsDirectory.Path.CombineWithFilePath(project.GetFilenameWithoutExtension()).FullPath + ".xml"),\n Configuration = configuration,\n NoBuild = true\n });\n }\n });\n\n// Run dotnet pack to produce NuGet packages from our projects. Versions the package\n// using the build number argument on the script which is used as the revision number \n// (Last number in 1.0.0.0). 
The packages are dropped in the Artefacts directory.\nTask("Pack")\n .IsDependentOn("Test")\n .Does(() =>\n {\n var revision = buildNumber.ToString("D4");\n foreach (var project in GetFiles("./Source/**/*.csproj"))\n {\n DotNetCorePack(\n project.GetDirectory().FullPath,\n new DotNetCorePackSettings()\n {\n Configuration = configuration,\n OutputDirectory = artefactsDirectory,\n VersionSuffix = revision\n });\n }\n });\n\n// The default task to run if none is explicitly specified. In this case, we want\n// to run everything starting from Clean, all the way up to Pack.\nTask("Default")\n .IsDependentOn("Pack");\n\n// Executes the task specified in the target argument.\nRunTarget(target);\n\nAt the top of the script some arguments are defined. Values for these arguments can be set by passing them to the shell scripts on the command line, they can come from environment variables, or they can come from continuous integration build systems that Cake knows about (it knows all the common ones, including TFS, TeamCity, Jenkins and Bamboo). In the above script I show how to get a build number from AppVeyor or Travis CI if the script is currently being run on those systems. This makes the code very short, terse and to the point.
\nThe rest of the script is made up of a series of chained tasks which execute one after the other, starting with the task that has no dependencies. Alternatively, you can pass in a Target argument to specify which task the script should start executing from. A key thing to note is that the script does not need to know about any file names or file paths; everything is done by convention.
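The way a chain of dependent tasks resolves can be illustrated with a tiny task runner. This is a conceptual sketch in Python of the Cake-style behaviour described above (it is my illustration, not how Cake is implemented): each task names its dependency, and running a target first runs everything it depends on, much like RunTarget does.

```python
# A minimal sketch of how a Cake-style task chain resolves, assuming the
# tasks form a simple dependency chain (Clean -> Restore -> Build -> ...).
tasks = {}

def task(name, depends_on=None):
    """Register a task and the single task it depends on."""
    def register(func):
        tasks[name] = (depends_on, func)
        return func
    return register

def run_target(name, executed=None):
    """Run a task's dependency first, then the task itself."""
    executed = [] if executed is None else executed
    depends_on, func = tasks[name]
    if depends_on is not None:
        run_target(depends_on, executed)
    func()
    executed.append(name)
    return executed

@task("Clean")
def clean(): pass

@task("Restore", depends_on="Clean")
def restore(): pass

@task("Build", depends_on="Restore")
def build(): pass

print(run_target("Build"))  # ['Clean', 'Restore', 'Build']
```

Asking for the Build target runs Clean and Restore first, exactly the convention the script above relies on.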
\nOne very important effect of using Cake is that your build script is easily testable. I've used many continuous integration systems that have their own proprietary tasks, and when a slow build failed, debugging it was a nightmare since it could only be done on the build machine. Since Cake is just a script, you can run it on your local machine and test it to your heart's content, which gives you a quicker, tighter development loop.
\nAppVeyor is my favourite CI system, but it only works if you are hosting your code in a Git based repository and it only runs builds on Windows. All you need to do is sign up, enable AppVeyor for your Git repository and add an appveyor.yml file, which is in YAML format. Here is one of my commented appveyor.yml files:
version: '{build}'\n\npull_requests:\n # Do not increment build number for pull requests\n do_not_increment_build_number: true\n\nnuget:\n # Do not publish NuGet packages for pull requests\n disable_publish_on_pr: true\n\nenvironment:\n # Set the DOTNET_SKIP_FIRST_TIME_EXPERIENCE environment variable to stop wasting time caching packages\n DOTNET_SKIP_FIRST_TIME_EXPERIENCE: true\n # Disable sending usage data to Microsoft\n DOTNET_CLI_TELEMETRY_OPTOUT: true\n\nbuild_script:\n- ps: .\\build.ps1\n\ntest: off\n\nartifacts:\n# Store NuGet packages\n- path: .\\Artefacts\\**\\*.nupkg\n name: NuGet\n# Store xUnit Test Results\n- path: .\\Artefacts\\**\\*.xml\n name: xUnit Test Results\n\ndeploy:\n\n# Publish NuGet packages\n- provider: NuGet\n name: production\n api_key:\n secure: 73eFUWSfho6pxCy1VRP1H0AYh/SFiyEREV+/ATcoj0I+sSH9dec/WXs6H2Jy5vlS\n on:\n # Only publish from the main branch\n branch: main\n # Only publish if the trigger was a Git tag\n # git tag v0.1.0-beta\n # git push origin --tags\n appveyor_repo_tag: true\n\nIt basically executes the build.ps1 file at the root of my project and collects all the NuGet package and XML unit test result files in my artefacts folder. I also set some environment variables to turn off some lesser known .NET Core features for a faster build.
AppVeyor knows about NuGet, and I use AppVeyor as my primary build system to publish my NuGet packages (you don't want AppVeyor and Travis CI both publishing your NuGet packages). I could have created a task in my Cake file to publish NuGet packages and only executed that task when running on AppVeyor, but AppVeyor has a pretty easy to use configuration file, so I've chosen to do this step there instead.
\nTo publish packages to NuGet, you sign up and receive an API key. Of course, you don't want to share that with the whole world by checking it into GitHub or Bitbucket, so AppVeyor lets you encrypt it and paste the encrypted value into the appveyor.yml file.
Travis CI is very similar to AppVeyor but it targets both Mac and Linux. All you have to do is sign up, turn on Travis for your repository and stick a .travis.yml file in the root of your project. Here is mine:
language: csharp\nos:\n - linux\n - osx\n\n# .NET CLI require Ubuntu 14.04\nsudo: required\ndist: trusty\naddons:\n apt:\n packages:\n - gettext\n - libcurl4-openssl-dev\n - libicu-dev\n - libssl-dev\n - libunwind8\n - zlib1g\n\n# .NET CLI requires OSX 10.11\nosx_image: xcode7.2\n\n# Ensure that .NET Core is installed\ndotnet: 1.0.0-preview2-1-003177\n# Ensure Mono is installed\nmono: latest\n\nenv:\n # Set the DOTNET_SKIP_FIRST_TIME_EXPERIENCE environment variable to stop wasting time caching packages\n - DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true\n # Disable sending usage data to Microsoft\n - DOTNET_CLI_TELEMETRY_OPTOUT=true\n\n# You must run this command to give Travis permissions to execute the build.sh shell script:\n# git update-index --chmod=+x build.sh\nscript:\n - ./build.sh\n\nYou'll notice that we are specifying that we want to build our code on both Mac and Linux. Travis CI will actually run one build for each operating system. We then specify some details about the version of operating system we want to use and what we would like to install on them.
\nOnce again, I set the .NET environment variables to make the build a bit quicker, and finally we run the build.sh Bash script to kick things off. Note that you need to run the following command to give Travis permission to execute the build.sh file (this is Linux after all):
git update-index --chmod=+x build.sh\n\nAnother thing to note is that if you are still using the older xproj project system and your unit tests are using xUnit, then your tests will not run due to this bug. There is a very nasty workaround in the link.
If you want to learn how to add AppVeyor and Travis CI build status badges to your Git repository ReadMe or learn how to deploy to MyGet/NuGet using tags, I recommend going back to read Andrew's blog post which is still useful. If you're looking for more examples of Cake build scripts, you can take a look at the following Cake repositories:
\nIf you run dotnet new today, you can create a simple console app. The command has very few options, including selecting the language you want to use (C#, VB or F#). However, this is all about to change. Sayed I. Hashimi and Mike Lorbetske, who work in the .NET tooling team at Microsoft, have been kind enough to show me what they've been working on, with the intention of getting some feedback.

Microsoft is working on a new version of the dotnet new command with support for installing custom project templates from NuGet packages, zip files or folders. If you head over to the dotnet/templating GitHub repository you can follow the very simple instructions and try out a fairly complete version of this command which is temporarily called dotnet new3. The full dotnet new experience is due to be released in conjunction with Visual Studio 2017.

If you take a look at the screenshot above, you'll notice that there are a lot more options available. You can list all installed project templates and install new ones too.
\nCreating a new project template involves taking a folder containing your project (Mine is called Api-CSharp) and adding a .template.config folder to it containing two files.

The template.json file is where you specify metadata about your project template. This metadata is displayed when someone lists their installed project templates. A really basic one looks like this:
{\n "author": "Muhammad Rehan Saeed (RehanSaeed.com)",\n "classifications": [ "WebAPI", "Boxed" ], // Tags used to search for the template.\n "name": "Dotnet Boxed API",\n "identity": "Dotnet.Boxed.Api.CSharp", // A unique ID for the project template.\n "shortName": "api", // You can create the project using this short name instead of the one above.\n "tags": {\n "language": "C#" // Specify that this template is in C#.\n },\n "sourceName": "ApiTemplate", // Name of the csproj file and namespace that will be replaced.\n "guids": [ // GUID's used in the project that will be replaced by new ones.\n "837bc53e-0271-4e9c-b5b5-c60ea7a7c7b5",\n "113f2d04-69f0-40c3-8797-ba3f356dd812"\n ]\n}\n\nThe templating repository's Wiki page talks about what all of the properties mean in a lot more detail, but I've added some basic comments for your understanding.
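To make the sourceName and guids settings concrete, here is a rough sketch of what the templating engine conceptually does when a project is created from a template: it replaces the sourceName token with your new project name and swaps each listed GUID for a freshly generated one. This is a simplified Python illustration of the idea, not the actual engine, and the helper name is hypothetical.

```python
import uuid

# Conceptual sketch only: replace the sourceName and each listed GUID with
# fresh values. The real engine does much more (file renames, conditional
# content, parameter substitution, etc.).
def instantiate(content, source_name, new_name, guids):
    content = content.replace(source_name, new_name)
    for old_guid in guids:
        # Each project created from the template gets brand new GUIDs.
        content = content.replace(old_guid, str(uuid.uuid4()))
    return content

template = 'namespace ApiTemplate { /* 837bc53e-0271-4e9c-b5b5-c60ea7a7c7b5 */ }'
result = instantiate(
    template,
    source_name="ApiTemplate",
    new_name="MyApi",
    guids=["837bc53e-0271-4e9c-b5b5-c60ea7a7c7b5"])
print(result)
```

After instantiation, the namespace reads MyApi and the original GUID is gone, which is why GUIDs used by your template must be listed in the guids array.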
\nInstalling the above template from a folder is as easy as using the install command. You can also install templates from zip files and NuGet packages the same way.
\n
So how do you create a NuGet package containing a project template that's compatible with dotnet new? I'm assuming you are familiar with creating NuGet packages; if not, take a look at the NuGet documentation. You can create NuGet packages of your project templates by creating a Templates.nuspec file like the one below and placing all of your templates in a content folder beside it. The content folder is a special folder which NuGet understands to contain static files. If you look at the nuspec file below, you'll notice the packageType element. This is a new way to tell NuGet that this NuGet package contains project templates.
<?xml version="1.0" encoding="utf-8"?>\n<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">\n <metadata>\n <id>Boxed.Templates</id>\n <version>1.0.0</version>\n <description>My project description.</description>\n <authors>Muhammad Rehan Saeed (RehanSaeed.com)</authors>\n <packageTypes>\n <packageType name="Template" />\n </packageTypes>\n </metadata>\n</package>\n\nWhat I've not told you is that it's possible to add features to your project template that developers can turn on or off based on command line switches a bit like Yeoman does for Node based NPM packages. As many of you will know I already do this in my ASP.NET Core Boilerplate project template but I came up with my own custom method. dotnet new makes this all a lot easier and I'll cover how to do this in a later blog post.
Traditionally, to create project templates, you could use Visual Studio to create zip files containing your project template or if you were brave you could create Visual Studio extensions (VSIX) to enable installing them directly into Visual Studio and share them on the Visual Studio Marketplace.
\nThis new method makes creating project templates about as easy as it's ever going to get and allows really easy sharing, versioning and personalization of project templates. At some point I envisage a website (possibly the Visual Studio Marketplace) where you could go and install these NuGet based project templates.
\nI have been working on a brand new project template for building APIs using dotnet new, with a lot of help from the guys at Microsoft. My project templates are quite complex, so it's a good test of the system. The API comes jam-packed with security, performance and best practice features and also implements Swagger right out of the box. You can try installing it with dotnet new from NuGet.
Overall I'm really impressed with where the new project templating system is headed. It's very easy to do something simple, but it's also very powerful should you need to do something complicated. There are a few blog posts' worth of material here, so expect a few more posts in the coming weeks.
\n", "url": "https://rehansaeed.com/custom-project-templates-using-dotnet-new/", "title": "Custom Project Templates Using dotnet new", "summary": "How to create project templates using dotnet new and the template.json file. How to share project templates by creating NuGet packages.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2017-01-18T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/reactive-extensions-rx-part-8-timeouts/", "content_html": "In part six of this series of blog posts I talked about using Reactive Extensions for adding timeout logic to asynchronous tasks. Something like this:
\npublic async Task<string> WaitForFirstResultWithTimeOut()\n{\n Task<string> task = this.DownloadTheInternet();\n \n return await task\n .ToObservable()\n .Timeout(TimeSpan.FromMilliseconds(1000))\n .FirstAsync();\n}\n\nLast week I was working on a project and wanted to add a Timeout to my task but since it was an ASP.NET MVC project, I had no references to Reactive Extensions. After some thought I discovered another possible method of performing a timeout which may help in certain circumstances.
\nusing (var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMilliseconds(1000)))\n{\n try\n {\n return await this.DownloadTheInternet(cancellationTokenSource.Token);\n }\n catch (OperationCanceledException)\n {\n Console.WriteLine("Timed Out");\n }\n}\n\nI'm using an overload of CancellationTokenSource which takes a timeout value and passing the resulting CancellationToken to DownloadTheInternet. That method should periodically check the CancellationToken to see if it has been cancelled and, if so, throw an OperationCanceledException. In this example you'd probably use HttpClient, which handles this for you if you give it the CancellationToken.
The main reason this method is better is that the task is actually cancelled and stopped from doing any more work. In my Reactive Extensions example above, the task continues doing work but its result is just ignored.
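The same distinction exists in other stacks. As a rough analogue (my comparison, not from the original post), Python's asyncio.wait_for cancels the underlying task on timeout rather than merely abandoning its result, so the work genuinely stops:

```python
import asyncio

async def download_the_internet():
    # Simulate a long-running download; awaiting sleep is a cancellation point.
    await asyncio.sleep(10)
    return "the internet"

async def main():
    try:
        # wait_for cancels download_the_internet if it exceeds the timeout,
        # so the task stops working instead of running on with its result ignored.
        return await asyncio.wait_for(download_the_internet(), timeout=0.1)
    except asyncio.TimeoutError:
        return "Timed Out"

print(asyncio.run(main()))  # Timed Out
```

This mirrors the CancellationTokenSource approach: the timeout propagates into the operation itself rather than sitting outside it.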
\n", "url": "https://rehansaeed.com/reactive-extensions-rx-part-8-timeouts/", "title": "Reactive Extensions (Rx) - Part 8 - Timeouts", "summary": "Should you use Reactive Extensions (Rx) to do timeouts in .NET? It turns out it's better to use CancellationTokenSource in the Task Parallel Library (TPL).", "image": "https://rehansaeed.com/images/hero/Reactive-Extensions-1366x768.png", "date_modified": "2017-01-02T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/seo-friendly-urls-asp-net-core/", "content_html": "For some reason there are not a lot of Search Engine Optimization (SEO) blog posts or projects out there. Taking a few simple steps can make your site rank higher in Google or Bing search results so it's well worth doing. Here are a few other of my SEO related blog posts:
\nThis Mozilla blog post called '15 best practices for structuring URL's' is the best article on the subject of SEO friendly URL's I found and it's well worth a read.
\nEssentially you want a simple short URL that tells the user what they are clicking on at a glance. It should also contain keywords pertaining to what is on the page for better Search Engine Optimization (SEO). In short, a page will appear higher up in search results if the term a user searches for appears in the URL. Your URL should look like this:
\n
The URL contains an ID for a product and ends with a friendly title. The title contains alphanumeric characters with dashes instead of spaces. Note that the ID of the product is still included in the URL, to avoid having to deal with two friendly titles with the same name.
\nIf you elect to omit the ID, then you have to do a lot of footwork to make things work. Firstly, you have to use the title as a kind of primary key to get the product data from your database. Secondly, you have to figure out what to do when two pages have the same title. Each time you want to create a new title, you have to scan your data store to see if it already exists and, if it does, either raise an error and force the creation of a different title, or make it unique by adding a number on the end. This is a lot of work but does produce a nicer URL; the choice is yours.
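The uniqueness bookkeeping described above might look something like this sketch, assuming a hypothetical helper and an in-memory set standing in for the data store:

```python
def unique_title(title, existing):
    # If the title is free, claim it; otherwise append -2, -3, ... until
    # we find an unused variant, then record it as taken.
    candidate = title
    suffix = 2
    while candidate in existing:
        candidate = f"{title}-{suffix}"
        suffix += 1
    existing.add(candidate)
    return candidate

existing = set()
print(unique_title("fast-car", existing))  # fast-car
print(unique_title("fast-car", existing))  # fast-car-2
print(unique_title("fast-car", existing))  # fast-car-3
```

Every new title costs a lookup against the store, which is exactly the extra work that keeping the ID in the URL avoids.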
\nTake a look at the controller action below. It is a very simple example of how to use SEO friendly URLs. In our example we have a product class which has ID and title properties, where the title is just the name of the product.
\n[HttpGet("product/{id}/{title}", Name = "GetProduct")]\npublic IActionResult GetProduct(int id, string title)\n{\n // Get the product as indicated by the ID from a database or some repository.\n var product = this.productRepository.Find(id);\n\n // If a product with the specified ID was not found, return a 404 Not Found response.\n if (product == null)\n {\n return this.NotFound();\n }\n\n // Get the actual friendly version of the title.\n string friendlyTitle = FriendlyUrlHelper.GetFriendlyTitle(product.Title);\n\n // Compare the title with the friendly title.\n if (!string.Equals(friendlyTitle, title, StringComparison.Ordinal))\n {\n // If the title is null, empty or does not match the friendly title, return a 301 Permanent\n // Redirect to the correct friendly URL.\n return this.RedirectToRoutePermanent("GetProduct", new { id = id, title = friendlyTitle });\n }\n\n // The URL the client has browsed to is correct, show them the view containing the product.\n return this.View(product);\n}\n\nAll the work is done by the FriendlyUrlHelper which turns the product title which may contain spaces, numbers or other special characters (which would not be allowed in a URL without escaping them) into a lower-kebab-case title.
This generated friendly title is compared with the one that was passed in and, if it is different (someone may have omitted the friendly title or misspelled it), we perform a permanent redirect to the product with the same ID but with the friendly title. This is important for SEO purposes: we want search engines to find only one URL for each product. Finally, if the friendly title matches the one passed in, we return the product view.
\nThe FriendlyUrlHelper was inspired by a famous Stack Overflow question 'How does Stack Overflow generate its SEO-friendly URLs?'. The full source code for it is shown below.
/// <summary>\n/// Helps convert <see cref="string"/> title text to URL friendly <see cref="string"/>'s that can safely be\n/// displayed in a URL.\n/// </summary>\npublic static class FriendlyUrlHelper\n{\n /// <summary>\n /// Converts the specified title so that it is more human and search engine readable e.g.\n /// http://example.com/product/123/this-is-the-seo-and-human-friendly-product-title. Note that the ID of the\n /// product is still included in the URL, to avoid having to deal with two titles with the same name. Search\n /// Engine Optimization (SEO) friendly URL's gives your site a boost in search rankings by including keywords\n /// in your URL's. They are also easier to read by users and can give them an indication of what they are\n /// clicking on when they look at a URL. Refer to the code example below to see how this helper can be used.\n /// Go to definition on this method to see a code example. To learn more about friendly URL's see\n /// https://moz.com/blog/15-seo-best-practices-for-structuring-urls.\n /// To learn more about how this was implemented see\n /// http://stackoverflow.com/questions/25259/how-does-stack-overflow-generate-its-seo-friendly-urls/25486#25486\n /// </summary>\n /// <param name="title">The title of the URL.</param>\n /// <param name="remapToAscii">if set to <c>true</c>, remaps special UTF8 characters like 'è' to their ASCII\n /// equivalent 'e'. All modern browsers except Internet Explorer display the 'è' correctly. Older browsers and\n /// Internet Explorer percent encode these international characters so they are displayed as '%C3%A8'. 
What you\n /// set this to depends on whether your target users are English speakers or not.</param>\n /// <param name="maxlength">The maximum allowed length of the title.</param>\n /// <returns>The SEO and human friendly title.</returns>\n /// <code>\n /// [HttpGet("product/{id}/{title}", Name = "GetDetails")]\n /// public IActionResult Product(int id, string title)\n /// {\n /// // Get the product as indicated by the ID from a database or some repository.\n /// var product = ProductRepository.Find(id);\n ///\n /// // If a product with the specified ID was not found, return a 404 Not Found response.\n /// if (product == null)\n /// {\n /// return this.HttpNotFound();\n /// }\n ///\n /// // Get the actual friendly version of the title.\n /// var friendlyTitle = FriendlyUrlHelper.GetFriendlyTitle(product.Title);\n ///\n /// // Compare the title with the friendly title.\n /// if (!string.Equals(friendlyTitle, title, StringComparison.Ordinal))\n /// {\n /// // If the title is null, empty or does not match the friendly title, return a 301 Permanent\n /// // Redirect to the correct friendly URL.\n /// return this.RedirectToRoutePermanent("GetProduct", new { id = id, title = friendlyTitle });\n /// }\n ///\n /// // The URL the client has browsed to is correct, show them the view containing the product.\n /// return this.View(product);\n /// }\n /// </code>\n public static string GetFriendlyTitle(string title, bool remapToAscii = false, int maxlength = 80)\n {\n if (title == null)\n {\n return string.Empty;\n }\n\n int length = title.Length;\n bool prevdash = false;\n StringBuilder stringBuilder = new StringBuilder(length);\n char c;\n\n for (int i = 0; i < length; ++i)\n {\n c = title[i];\n if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9'))\n {\n stringBuilder.Append(c);\n prevdash = false;\n }\n else if (c >= 'A' && c <= 'Z')\n {\n // tricky way to convert to lower-case\n stringBuilder.Append((char)(c | 32));\n prevdash = false;\n }\n else if ((c == ' ') || (c == 
',') || (c == '.') || (c == '/') ||\n (c == '\\\\') || (c == '-') || (c == '_') || (c == '='))\n {\n if (!prevdash && (stringBuilder.Length > 0))\n {\n stringBuilder.Append('-');\n prevdash = true;\n }\n }\n else if (c >= 128)\n {\n int previousLength = stringBuilder.Length;\n\n if (remapToAscii)\n {\n stringBuilder.Append(RemapInternationalCharToAscii(c));\n }\n else\n {\n stringBuilder.Append(c);\n }\n\n if (previousLength != stringBuilder.Length)\n {\n prevdash = false;\n }\n }\n\n if (i == maxlength)\n {\n break;\n }\n }\n\n if (prevdash)\n {\n return stringBuilder.ToString().Substring(0, stringBuilder.Length - 1);\n }\n else\n {\n return stringBuilder.ToString();\n }\n }\n\n /// <summary>\n /// Remaps the international character to their equivalent ASCII characters. See\n /// http://meta.stackexchange.com/questions/7435/non-us-ascii-characters-dropped-from-full-profile-url/7696#7696\n /// </summary>\n /// <param name="character">The character to remap to its ASCII equivalent.</param>\n /// <returns>The remapped character</returns>\n private static string RemapInternationalCharToAscii(char character)\n {\n string s = character.ToString().ToLowerInvariant();\n if ("àåáâäãåą".Contains(s))\n {\n return "a";\n }\n else if ("èéêëę".Contains(s))\n {\n return "e";\n }\n else if ("ìíîïı".Contains(s))\n {\n return "i";\n }\n else if ("òóôõöøőð".Contains(s))\n {\n return "o";\n }\n else if ("ùúûüŭů".Contains(s))\n {\n return "u";\n }\n else if ("çćčĉ".Contains(s))\n {\n return "c";\n }\n else if ("żźž".Contains(s))\n {\n return "z";\n }\n else if ("śşšŝ".Contains(s))\n {\n return "s";\n }\n else if ("ñń".Contains(s))\n {\n return "n";\n }\n else if ("ýÿ".Contains(s))\n {\n return "y";\n }\n else if ("ğĝ".Contains(s))\n {\n return "g";\n }\n else if (character == 'ř')\n {\n return "r";\n }\n else if (character == 'ł')\n {\n return "l";\n }\n else if (character == 'đ')\n {\n return "d";\n }\n else if (character == 'ß')\n {\n return "ss";\n }\n else if (character == 'þ')\n {\n return "th";\n }\n else if (character == 'ĥ')\n {\n return "h";\n }\n else if (character == 'ĵ')\n {\n return "j";\n }\n else\n {\n return string.Empty;\n }\n }\n}\n\nThe difference between my version and the one in the Stack Overflow answer is that mine optionally handles non-ASCII characters using the boolean remapToAscii parameter. This parameter remaps special UTF8 characters like è to their ASCII equivalent e. If there is no equivalent, then those characters are dropped. All modern browsers except Internet Explorer and Edge display the è correctly. Older browsers like Internet Explorer percent encode these international characters so they are displayed as %C3%A8. What you set this to depends on whether your target users are English speakers and if you care about supporting IE and Edge. I must say that I was hoping Edge would have added support so that remapToAscii could be turned off by default but I'm sorely disappointed.
Using the third parameter you can specify a maximum length for the title with any additional characters being dropped. Finally, the last thing to say about this method is that it has been tuned for speed.
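As a rough cross-check of the algorithm, the core transformation (lower-case, keep alphanumerics, collapse runs of separators into single dashes, trim trailing dashes, clip to a maximum length) can be sketched in a few lines of Python. This is a simplified port for illustration only; it does not reproduce the remapToAscii handling or the exact max-length semantics of the C# version:

```python
import re

def get_friendly_title(title, max_length=80):
    # Lower-case the title, replace runs of non-alphanumerics with single
    # dashes, then trim leading/trailing dashes and clip to the maximum length.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug[:max_length].rstrip("-")

print(get_friendly_title("The SEO & Human Friendly Product Title!"))
# the-seo-human-friendly-product-title
```

Note that a regex-based version like this is concise but slower than the character-by-character loop above, which is why the C# helper is written the way it is.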
\nThis is a great little snippet of code to make your URLs human readable, while giving your site an SEO boost. It doesn't take much effort to use either. This helper class is available in the Boxed.AspNetCore NuGet package, or you can look at the source code on the .NET Boxed Framework GitHub page.
\n", "url": "https://rehansaeed.com/seo-friendly-urls-asp-net-core/", "title": "SEO Friendly URL's for ASP.NET Core", "summary": "An SEO friendly URL is human readable and gives your site a higher page rank. Learn how to implement SEO friendly URL's using ASP.NET Core.", "image": "https://rehansaeed.com/images/hero/Search-Engine-Optimization-SEO-1366x768.png", "date_modified": "2016-12-17T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/making-application-insights-fast-and-secure/", "content_html": "It's an application monitoring tool available on Microsoft's Azure cloud that you can use to detect errors and usage in your application. For ASP.NET Core apps, it can do this for both your C# and JavaScript code. It's main competitors are New Relic and RayGun.
\nFollowing the Getting Started guide for ASP.NET Core applications requires you to add the following HTML helper to your _Layout.cshtml file:
<head>\n @* ...Omitted *@\n\n @Html.ApplicationInsightsJavaScript(TelemetryConfiguration) \n</head>\n\nThis HTML helper adds an inline script containing the minified JavaScript in snippet.js.
\n<script type="text/javascript">\n var appInsights=window.appInsights||function(config){{\n function i(config){{t[config]=function(){{var i=arguments;t.queue.push(function(){{t[config].apply(t,i)}})}}}}var t={{config:config}},u=document,e=window,o="script",s="AuthenticatedUserContext",h="start",c="stop",l="Track",a=l+"Event",v=l+"Page",y=u.createElement(o),r,f;y.src=config.url||"https://az416426.vo.msecnd.net/scripts/a/ai.0.js";u.getElementsByTagName(o)[0].parentNode.appendChild(y);try{{t.cookie=u.cookie}}catch(p){{}}for(t.queue=[],t.version="1.0",r=["Event","Exception","Metric","PageView","Trace","Dependency"];r.length;)i("track"+r.pop());return i("set"+s),i("clear"+s),i(h+a),i(c+a),i(h+v),i(c+v),i("flush"),config.disableExceptionTracking||(r="onerror",i("_"+r),f=e[r],e[r]=function(config,i,u,e,o){{var s=f&&f(config,i,u,e,o);return s!==!0&&t["_"+r](config,i,u,e,o),s}}),t\n }}({{\n instrumentationKey: '{0}'\n }});\n\n window.appInsights=appInsights;\n appInsights.trackPageView();\n</script>\n\nThis script is responsible for:
\nFor most websites, this is fine and you can stop here. Here is what can be improved for the rest:
\nSo what does it take to move the above snippet.js file into a separate file? Well, it turns out that you can get snippet.js from the applicationinsights-js NPM package which you can add to your package.json like so:
{\n "dependencies": {\n "applicationinsights-js": "1.0.5"\n // ...\n }\n // ...\n}\n\nThe next step is to inject your instrumentation key into snippet.js and also the URL to the full application insights script which is missing from the snippet.js file in the NPM package. I do this using a gulp task like so:
var gulp = require('gulp'),\n sourcemaps = require('gulp-sourcemaps'), // Creates source map files (https://www.npmjs.com/package/gulp-sourcemaps/)\n replace = require('gulp-replace-task'), // String replace (https://www.npmjs.com/package/gulp-replace-task/)\n uglify = require('gulp-uglify'); // Minifies JavaScript (https://www.npmjs.com/package/gulp-uglify/)\n\ngulp.task('build-app-insights-js',\n function() {\n return gulp\n .src('./node_modules/ApplicationInsights-JS/JavaScript/JavaScriptSDK/snippet.js')\n .pipe(sourcemaps.init()) // Set up the generation of .map source files for the JavaScript.\n .pipe(\n replace({ // Carry out the specified find and replace.\n patterns: [\n {\n // match - The string or regular expression to find.\n match: 'CDN_PATH',\n // replacement - The string or function used to make the replacement.\n replacement: 'https://az416426.vo.msecnd.net/scripts/a/ai.0.js'\n },\n {\n match: 'INSTRUMENTATION_KEY',\n replacement: '11111111-2222-3333-4444-555555555555'\n }\n ],\n usePrefix: false\n }))\n .pipe(uglify()) // Minifies the JavaScript.\n .pipe(sourcemaps.write('.')) // Generates source .map files for the JavaScript.\n .pipe(gulp.dest('./wwwroot/js/')); // Saves the JavaScript file to the specified destination path.\n});\n\nFinally we can include the script in our HTML. Don't forget to include the crossorigin attribute on all your script tags, which allows full stack traces to be reported. You can read more about the crossorigin attribute here.
<script asp-append-version="true"\n crossorigin="anonymous"\n src="~/js/application-insights.js"></script>\n\nAs usual, all of the above is built in to the ASP.NET Core Boilerplate project template, available as a Visual Studio extension if you select the optional Application Insights feature.
\n", "url": "https://rehansaeed.com/making-application-insights-fast-and-secure/", "title": "Making Application Insights Fast & Secure", "summary": "Implementing Application Insights into your ASP.NET Core application with performance and security as a top priority in this advanced scenario.", "image": "https://rehansaeed.com/images/hero/Application-Insights-1366x768.png", "date_modified": "2016-12-11T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/the-dotnet-watch-tool/", "content_html": "The dotnet watch tool is a file watcher for .NET that restarts the application when changes in the source code are detected. If you are using IIS Express then, it actually does this restart for you already. The dotnet watch tool is only really useful if you like to run your app in the console. I personally like to do this over using IIS Express because I can see all my logs flashing by in the console like the movies which is occasionally useful if you get an exception.

::: warning\nIn both cases you have to be careful to start the application by clicking Debug -> Start Without Debugging or hitting the ||CTRL+F5|| keyboard shortcut.\n:::
Setting up the dotnet watch tool is as easy as installing the Microsoft.DotNet.Watcher.Tools NuGet package into the tools section of your project.json file like so (You may need to manually restore packages as there is a bug in the tooling which doesn't restore packages if you only change the tools section):
{\n //...\n\n "tools": {\n "Microsoft.DotNet.Watcher.Tools": "1.0.0-preview2-final"\n //...\n },\n\n //...\n}\n\nNow, using PowerShell, you can navigate to your project folder, run the dotnet watch run command and you're set. But using the command line is a bit lame if you are using Visual Studio; we can do one better.
The launchSettings.json file is used by Visual Studio to launch your application and controls what happens when you hit ||F5||. It turns out you can add additional launch settings here to launch the application using the dotnet watch tool. You can do so by adding a new launch configuration as I've done at the bottom of this file:
{\n "iisSettings": {\n "windowsAuthentication": false,\n "anonymousAuthentication": true,\n "iisExpress": {\n "applicationUrl": "http://localhost:8080/",\n "sslPort": 44300\n }\n },\n "profiles": {\n // Run the app using IIS Express. Use CTRL+F5 or Debug -> Start Without Debugging to edit code and refresh the browser \n // to see your changes while the app is running.\n "IIS Express": {\n "commandName": "IISExpress",\n "launchBrowser": true,\n "launchUrl": "https://localhost:44300/",\n "environmentVariables": {\n "ASPNETCORE_ENVIRONMENT": "Development"\n }\n },\n // Run the app in console mode using 'dotnet run'.\n "dotnet run": {\n "commandName": "Project",\n "commandLineArgs": "--server.urls http://*:8080",\n "launchBrowser": true,\n "launchUrl": "http://localhost:8080/",\n "environmentVariables": {\n "ASPNETCORE_ENVIRONMENT": "Development"\n }\n },\n // Use CTRL+F5 or Debug -> Start Without Debugging to use this launch profile. Launches the app using 'dotnet watch', \n // which allows you to edit code and refresh the browser to see your changes while the app is running.\n "dotnet watch": {\n "executablePath": "C:\\\\Program Files\\\\dotnet\\\\dotnet.exe",\n "commandLineArgs": "watch run --server.urls http://*:8080",\n "launchBrowser": true,\n "launchUrl": "http://localhost:8080/",\n "environmentVariables": {\n "ASPNETCORE_ENVIRONMENT": "Development"\n }\n }\n }\n}\n\nNotice that I renamed the second launch profile (which already exists in the default template) to dotnet run because that's actually the command it's running and makes more sense.
The dotnet watch launch profile is running dotnet watch run but it's also passing in the server.urls argument which lets us override the port number. Now we can see the new launch profile in the Visual Studio toolbar like so:

If you read my blog posts, you'll be seeing a trend by now. I built the above feature into the .NET Boxed project templates, so you can create a new project with this feature built-in, right out of the box. Happy coding!
\n", "url": "https://rehansaeed.com/the-dotnet-watch-tool/", "title": "The Dotnet Watch Tool", "summary": "The dotnet watch tool is a file watcher for dotnet that restarts the application when changes in the source code are detected.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2016-09-10T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/nginx-asp-net-core-depth/", "content_html": "\n\nThere are only two things a web server needs to be.....fast.....really fast.....and secure.
\n\n
NGINX (Pronounced engine-x) is a popular open source web server. It can act as a reverse proxy server for TCP, UDP, HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer and a HTTP cache.
\nNGINX in fact overtook Apache as the most popular web server among the top 1000 websites. After playing with it for a while now, I have to say that I can see why.
\nThere are two flavours of NGINX. The first is the open source version, which is free; the other is NGINX Plus, which provides some more advanced features (all of which can be replicated with open source plugins, but with a lot of effort) and proper support, at the cost of a few thousand dollars.
\nThere is a Windows version of NGINX but I wouldn't recommend using it in production, as it doesn't perform as well as the Linux version and isn't as well tested. You can, however, use it to try out NGINX.
\nAlternatively, if you are running on Windows 10 Anniversary Update, you can install Bash for Windows and install the Linux version. However, the process is not that straightforward. Again, the caveat is that it should only be used for testing and not in production.
\nNGINX has no UI; it's all command line driven. Don't let that put you off though, as there are only three commands you actually need:
\nVerify that your configuration file is valid (nginx -t).\nReload the configuration without stopping the server (nginx -s reload).\nStart the server. The default nginx.conf file is located in the NGINX installation folder; you can use that file or your own using (nginx -c [nginx.conf File Path]).\n\nIIS on the other hand does have a UI and what a travesty it is. It hasn't really changed for several years and really needs a usability study to hack it to pieces and start again.
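In practice, the three NGINX commands above look like this (the configuration file path is illustrative):

```shell
# Test that the configuration file is valid.
nginx -t
# Reload the configuration without stopping the server.
nginx -s reload
# Start NGINX with a specific configuration file instead of the default.
nginx -c /etc/nginx/nginx.conf
```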
\nThe command line experience for IIS is another matter. It has very powerful extensions you can install and the latest version of IIS even has an API that you can use to make simple HTTP calls to update it.
\nConfiguration is where NGINX shines. It has a single super simple nginx.conf file which is pretty well documented. IIS is also actually pretty simple to configure if you only rely on the web.config file.
The ASP.NET Core Documentation site has some very good documentation on how to get started on Ubuntu. Unfortunately, it's not as simple as just installing NGINX using apt-get install nginx; there are a few moving parts to the process and a lot more if you want to install any additional modules.
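The basic installation itself is short; it's the configuration afterwards that takes the time. A sketch of the first steps on Ubuntu:

```shell
# Install NGINX from the Ubuntu package repository.
sudo apt-get update
sudo apt-get install nginx
# Check that the service is running; the default welcome page
# should now be served at http://localhost/.
sudo service nginx status
```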
If you're on Windows, as I mentioned earlier you have the option of installing NGINX using Bash for Windows 10 Anniversary Update, but I couldn't get this working. Alternatively, you can download the NGINX executable for Windows. If you do this, beware that NGINX tries to start on port 80 and there are a number of things that already use that port on Windows:
\nOnce you have NGINX set up, you need to run your ASP.NET Core app using the Kestrel web server. Why does ASP.NET Core use two web servers? Well, Kestrel is not security hardened enough to be exposed to the internet and it does not have all of the features that a full blown web server like IIS or NGINX has. NGINX takes the role of a reverse proxy and simply forwards requests to the Kestrel web server. One day this may change. Reliably keeping your ASP.NET Core app running on Linux is also described in the ASP.NET Core Documentation.
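To make the two-server arrangement concrete, here is a minimal sketch of an ASP.NET Core 1.x Program.cs that runs Kestrel on a local port for NGINX to forward to. The port is an assumption on my part; it must match whatever address your nginx.conf proxies to:

```csharp
using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Kestrel listens on a local port only; NGINX receives the
        // public traffic and reverse proxies it to this address.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseUrls("http://localhost:1025")
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}
```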
\nYou've got NGINX running; all you need now is an nginx.conf file to forward requests from the internet to your ASP.NET Core app running on the Kestrel web server.
I have taken the time to combine the recommendations from the HTML5 Boilerplate project, the ASP.NET Core NGINX Documentation, the NGINX Docs and my own experience to build the nginx.conf (and mime.types) files below, tuned specifically for the best performance and security when targeting ASP.NET Core apps.
Not only that, but I've gone to extreme lengths to find out what every setting actually does and have written a short comment describing each and every one. The config file is self-describing; from this point forward it needs no further explanation.
\n# Configure the Nginx web server to run your ASP.NET Core site efficiently.\n# See https://docs.asp.net/en/latest/publishing/linuxproduction.html\n# See http://nginx.org/en/docs/ and https://www.nginx.com/resources/wiki/\n\n# Set another default user than root for security reasons.\n# user\t\t\t\t\t\txxx;\n\n# The maximum number of connections for Nginx is calculated by:\n# max_clients = worker_processes * worker_connections\nworker_processes\t\t\t1;\n\n# Maximum file descriptors that can be opened per process\n# This should be > worker_connections\nworker_rlimit_nofile\t\t8192;\n\n# Log errors to the following location. Feel free to change these.\nerror_log\t\t\t\t\tlogs/error.log;\n# Log Nginx process errors to the following location. Feel free to change these.\npid\t\t\t\t\t\t\tlogs/nginx.pid;\n\nevents {\n\n # When you need > 8000 * cpu_cores connections, you start optimizing\n # your OS, and this is probably the point where you hire people\n # who are smarter than you, this is *a lot* of requests.\n worker_connections\t\t8000;\n\n # This sets up some smart queueing for accept(2)'ing requests\n # Set it to "on" if you have > worker_processes\n accept_mutex\t\t\toff;\n\n # These settings are OS specific, by default Nginx uses select(2),\n # however, for a large number of requests epoll(2) and kqueue(2)\n # are generally faster than the default (select(2))\n # use epoll; # enable for Linux 2.6+\n # use kqueue; # enable for *BSD (FreeBSD, OS X, ..)\n\n}\n\nhttp {\n\n # Include MIME type to file extension mappings list.\n include mime.types;\n # The default fallback MIME type.\n default_type application/octet-stream;\n\n # Format for our log files.\n log_format main '$remote_addr - $remote_user [$time_local] $status '\n '"$request" $body_bytes_sent "$http_referer" '\n '"$http_user_agent" "$http_x_forwarded_for"';\n\n # Log requests to the following location. 
Feel free to change this.\n access_log logs/access.log main;\n\n # The number of seconds to keep a connection open.\n keepalive_timeout 29;\n # Defines a timeout for reading client request body.\n client_body_timeout 10;\n # Defines a timeout for reading client request header.\n client_header_timeout 10;\n # Sets a timeout for transmitting a response to the client.\n send_timeout 10;\n # Limit requests from an IP address to five requests per second.\n # See http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone\n limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;\n\n # Disables emitting Nginx version in error messages and in the 'Server' HTTP response header.\n server_tokens off;\n\n # To serve static files using Nginx efficiently.\n sendfile on;\n tcp_nopush on;\n tcp_nodelay off;\n\n # Enable GZIP compression.\n gzip on;\n # Enable GZIP maximum compression level. Ranges from 1 to 9.\n gzip_comp_level 9;\n # Enable GZIP over HTTP 1.0 (The default is HTTP 1.1).\n gzip_http_version 1.0;\n # Disable GZIP compression for IE 1 to 6.\n gzip_disable "MSIE [1-6]\\.";\n # Enable GZIP compression for the following MIME types (text/html is included by default).\n gzip_types # Plain Text\n text/plain\n text/css\n text/mathml\n application/rtf\n # JavaScript & JSON\n application/javascript\n application/json\n application/manifest+json\n application/x-web-app-manifest+json\n text/cache-manifest\n # XML\n application/atom+xml\n application/rss+xml\n application/xslt+xml\n application/xml\n # Fonts\n font/opentype\n font/otf\n font/truetype\n application/font-woff\n application/vnd.ms-fontobject\n application/x-font-ttf\n # Images\n image/svg+xml\n image/x-icon;\n # Enables inserting the 'Vary: Accept-Encoding' response header.\n gzip_vary on;\n\n # Sets configuration for a virtual server. 
You can have multiple virtual servers.\n # See http://nginx.org/en/docs/http/ngx_http_core_module.html#server\n server {\n\n # Listen for requests on specified port including support for HTTP 2.0.\n # See http://nginx.org/en/docs/http/ngx_http_core_module.html#listen\n listen 80 http2 default;\n # Or, if using HTTPS, use this:\n # listen 443 http2 ssl default;\n # Configure SSL/TLS\n # See http://nginx.org/en/docs/http/configuring_https_servers.html\n ssl_certificate /etc/ssl/certs/testCert.crt;\n ssl_certificate_key /etc/ssl/certs/testCert.key;\n ssl_protocols TLSv1.1 TLSv1.2;\n ssl_prefer_server_ciphers on;\n ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";\n ssl_ecdh_curve secp384r1;\n ssl_session_cache shared:SSL:10m;\n ssl_session_tickets off;\n # Ensure your cert is capable before turning on SSL Stapling.\n ssl_stapling on;\n ssl_stapling_verify on;\n\n # The name of the virtual server where you can specify one or more domains that you own.\n server_name\t\t\t\t\tlocalhost;\n # server_name example.com www.example.com *.example.com www.example.*;\n\n # Match incoming requests with the following path and forward them to the specified location.\n # See http://nginx.org/en/docs/http/ngx_http_core_module.html#location\n location / {\n\n proxy_pass http://localhost:1025;\n\n # The default minimum configuration required for ASP.NET Core\n # See https://docs.asp.net/en/latest/publishing/linuxproduction.html?highlight=nginx#configure-a-reverse-proxy-server\n proxy_cache_bypass $http_upgrade;\n # Turn off changing the URL's in headers like the 'Location' HTTP header.\n proxy_redirect off;\n # Forwards the Host HTTP header.\n proxy_set_header Host $host;\n # The Kestrel web server we are forwarding requests to only speaks HTTP 1.1.\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n # Adds the 'Connection: keep-alive' HTTP header.\n proxy_set_header Connection keep-alive;\n\n # Sets the maximum allowed size of the client request body.\n 
client_max_body_size 10m;\n # Sets buffer size for reading client request body.\n client_body_buffer_size 128k;\n # Defines a timeout for establishing a connection with a proxied server.\n proxy_connect_timeout 90;\n # Sets a timeout for transmitting a request to the proxied server.\n proxy_send_timeout 90;\n # Defines a timeout for reading a response from the proxied server.\n proxy_read_timeout 90;\n # Sets the number and size of the buffers used for reading a response from the proxied server.\n proxy_buffers 32 4k;\n\n }\n\n }\n\n}\n\ntypes {\n\n # An expanded list of MIME type to file extension mappings for Nginx.\n\n # Data Interchange\n application/atom+xml atom;\n application/json json map topojson;\n application/ld+json jsonld;\n application/rss+xml rss;\n application/vnd.geo+json geojson;\n application/xml rdf xml;\n\n # JavaScript\n application/javascript js;\n\n # Manifest files\n application/manifest+json webmanifest;\n application/x-web-app-manifest+json webapp;\n text/cache-manifest appcache;\n\n # Media files\n audio/midi mid midi kar;\n audio/mp4 aac f4a f4b m4a;\n audio/mpeg mp3;\n audio/ogg oga ogg opus;\n audio/x-realaudio ra;\n audio/x-wav wav;\n image/x-icon cur ico;\n image/bmp bmp;\n image/gif gif;\n image/jpeg jpeg jpg;\n image/png png;\n image/svg+xml svg svgz;\n image/tiff tif tiff;\n image/vnd.wap.wbmp wbmp;\n image/webp webp;\n image/x-jng jng;\n video/3gpp 3gp 3gpp;\n video/mp4 f4p f4v m4v mp4;\n video/mpeg mpeg mpg;\n video/ogg ogv;\n video/quicktime mov;\n video/webm webm;\n video/x-flv flv;\n video/x-mng mng;\n video/x-ms-asf asf asx;\n video/x-ms-wmv wmv;\n video/x-msvideo avi;\n\n # Microsoft Office\n application/msword doc;\n application/vnd.ms-excel xls;\n application/vnd.ms-powerpoint ppt;\n application/vnd.openxmlformats-officedocument.wordprocessingml.document docx;\n application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx;\n application/vnd.openxmlformats-officedocument.presentationml.presentation pptx;\n\n # 
Web Fonts\n application/font-woff woff;\n application/font-woff2 woff2;\n application/vnd.ms-fontobject eot;\n application/x-font-ttf ttc ttf;\n font/opentype otf;\n\n # Other\n application/java-archive ear jar war;\n application/mac-binhex40 hqx;\n application/octet-stream bin deb dll dmg exe img iso msi msm msp safariextz;\n application/pdf pdf;\n application/postscript ai eps ps;\n application/rtf rtf;\n application/vnd.google-earth.kml+xml kml;\n application/vnd.google-earth.kmz kmz;\n application/vnd.wap.wmlc wmlc;\n application/x-7z-compressed 7z;\n application/x-bb-appworld bbaw;\n application/x-bittorrent torrent;\n application/x-chrome-extension crx;\n application/x-cocoa cco;\n application/x-java-archive-diff jardiff;\n application/x-java-jnlp-file jnlp;\n application/x-makeself run;\n application/x-opera-extension oex;\n application/x-perl pl pm;\n application/x-pilot pdb prc;\n application/x-rar-compressed rar;\n application/x-redhat-package-manager rpm;\n application/x-sea sea;\n application/x-shockwave-flash swf;\n application/x-stuffit sit;\n application/x-tcl tcl tk;\n application/x-x509-ca-cert crt der pem;\n application/x-xpinstall xpi;\n application/xhtml+xml xhtml;\n application/xslt+xml xsl;\n application/zip zip;\n text/css css;\n text/html htm html shtml;\n text/mathml mml;\n text/plain txt;\n text/vcard vcard vcf;\n text/vnd.rim.location.xloc xloc;\n text/vnd.sun.j2me.app-descriptor jad;\n text/vnd.wap.wml wml;\n text/vtt vtt;\n text/x-component htc;\n\n}\n\nLike IIS, NGINX has modules that you can add to it, to provide extra features. There are a number of them out there. I've listed two that I care about and you should too.
\nInstalling modules is best done by downloading the NGINX source, as well as the source for the modules you need, and then compiling everything together. There is a feature called dynamic modules which lets you load additional separate modules after installing NGINX, but the link suggests third party modules may not be supported, so I didn't try it out.
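As a rough sketch, compiling NGINX with extra modules looks something like this (version numbers and paths are illustrative, not prescriptive):

```shell
# Download and unpack the NGINX source, plus the source for any modules.
wget http://nginx.org/download/nginx-1.11.3.tar.gz
tar -xzf nginx-1.11.3.tar.gz
git clone https://github.com/google/ngx_brotli.git

# Configure with the modules you want, then build and install.
cd nginx-1.11.3
./configure --with-http_ssl_module --with-http_v2_module --add-module=../ngx_brotli
make
sudo make install
```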
\nThe ngx_http_v2_module module lets you use HTTP 2.0. HTTP 2.0 gives your site a very rough ~3-5% performance boost, and that's before using any of its more advanced features, which not many people are using yet.
\nThe ngx_brotli module lets NGINX use the Brotli compression algorithm. If you haven't heard about Brotli, you should take note. Brotli is a compression algorithm built by Google that is perhaps set to take over from GZIP as the compression algorithm of the web. It's already fully supported in Firefox, Chrome and Opera, with only Edge lagging behind.
\nDepending on how much extra CPU power you are willing to use, Brotli can compress files and save you around 10-20% bandwidth over what GZIP can do! Those are some significant savings. Beware though: the highest compression levels can max out your CPU, which could DoS your site if someone makes too many requests, so be careful which compression level you choose.
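Once the module is compiled in, enabling Brotli is a small addition to the http block of nginx.conf. This fragment is a sketch based on the ngx_brotli module's documented directives; check the module's README for your version:

```nginx
# Enable Brotli compression for responses.
brotli on;
# Compression level ranges from 0 to 11; higher levels cost more CPU,
# so benchmark before picking one.
brotli_comp_level 5;
# Compress the same text-based MIME types you would GZIP
# (text/html is included by default).
brotli_types text/plain text/css application/javascript application/json image/svg+xml;
```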
\nI have updated the .NET Boxed project template, so you can now choose the web server (IIS or NGINX) you want to use. If you choose to use NGINX, you can have it pre-configured just for you, right out of the box.
\nThe main reason I've been taking a serious look at NGINX is hard cash. Running Linux servers in the cloud can cost around half the price of Windows servers. Also, you can nab yourself some pretty big performance wins by using the modules I've listed.
\nThere are some interesting overlaps between ASP.NET Core and NGINX. Both can serve static files, set HTTP headers, GZIP responses etc. I think ASP.NET Core is slowly going to take on more of the role that was traditionally the preserve of the web server.
\nThe cool thing is that because ASP.NET Core is just C#, we'll have a lot of power to configure things using code. NGINX lets you do more advanced configuration using the Lua language, and soon even in JavaScript. But putting that logic in the app, where it belongs and where you can do powerful things, makes sense to me.
\n", "url": "https://rehansaeed.com/nginx-asp-net-core-depth/", "title": "NGINX for ASP.NET Core In-Depth", "summary": "NGINX is a popular open source web server. It can act as a reverse proxy server for ASP.NET Core web apps. How to configure NGINX for ASP.NET Core.", "image": "https://rehansaeed.com/images/hero/NGINX-1366x768.png", "date_modified": "2016-08-21T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/asp-net-core-fluent-interface-extensions/", "content_html": "Last week Khalid Abuhakmeh wrote a very interesting blog post called Middleware Builder for ASP.NET Core which I highly recommend you read. In it, he attempts to write some extension methods to help with writing the Configure method in your ASP.NET Core Startup class with a fluent interface. I've taken his blog post to heart and gone on a mission to 'fluent all the things' in ASP.NET Core.
\nThis is an example of what your current Configure method might look like in a typical ASP.NET Core Startup class:
public void Configure(\n IApplicationBuilder application, \n IHostingEnvironment environment, \n ILoggerFactory loggerFactory)\n{\n if (environment.IsDevelopment())\n {\n // Do stuff on your local machine.\n loggerFactory\n .AddConsole(...)\n .AddDebug();\n application.UseDeveloperExceptionPage();\n }\n else\n {\n // Do stuff when running in your production environment.\n loggerFactory.AddSerilog(...);\n application.UseStatusCodePagesWithReExecute("/error/{0}/");\n }\n\n if (environment.IsStaging())\n {\n // Do stuff in the staging environment.\n application.UseStagingSpecificMiddleware(); \n }\n\n application\n .UseStaticFiles()\n .UseMvc();\n}\n\nAnd this is the same code using the shorter, and prettier fluent interface style:
\npublic void Configure(\n IApplicationBuilder application, \n IHostingEnvironment environment, \n ILoggerFactory loggerFactory)\n{\n loggerFactory\n .AddIfElse(\n environment.IsDevelopment(),\n x => x.AddConsole(...).AddDebug(),\n x => x.AddSerilog(...));\n\n application\n .UseIfElse(\n environment.IsDevelopment(),\n x => x.UseDeveloperExceptionPage(),\n x => x.UseStatusCodePagesWithReExecute("/error/{0}/"))\n .UseIf(\n environment.IsStaging(),\n x => x.UseStagingSpecificMiddleware())\n .UseStaticFiles()\n .UseMvc();\n}\n\nIn the above code, you can see that I've added UseIf and UseIfElse extension methods to the IApplicationBuilder, which let us use the fluent interface style. You'll also notice that ILoggerFactory has AddIf and AddIfElse extension methods.
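The UseIf and UseIfElse extension methods themselves are simple. A minimal sketch might look like this (the actual implementation in Boxed.AspNetCore may differ):

```csharp
using System;
using Microsoft.AspNetCore.Builder;

public static class ApplicationBuilderExtensions
{
    // Applies the action to the pipeline only when the condition is true,
    // returning the builder either way so the fluent chain continues.
    public static IApplicationBuilder UseIf(
        this IApplicationBuilder application,
        bool condition,
        Func<IApplicationBuilder, IApplicationBuilder> action) =>
        condition ? action(application) : application;

    // Applies ifAction when the condition is true, otherwise elseAction.
    public static IApplicationBuilder UseIfElse(
        this IApplicationBuilder application,
        bool condition,
        Func<IApplicationBuilder, IApplicationBuilder> ifAction,
        Func<IApplicationBuilder, IApplicationBuilder> elseAction) =>
        condition ? ifAction(application) : elseAction(application);
}
```

Both methods return the builder, which is what makes the chaining in the Configure method above possible.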
I didn't just stop there; I added similar AddIf and AddIfElse extension methods for IConfigurationBuilder. Here is the before and after:
public Startup(IHostingEnvironment hostingEnvironment)\n{\n this.hostingEnvironment = hostingEnvironment;\n var configurationBuilder = new ConfigurationBuilder()\n .SetBasePath(hostingEnvironment.ContentRootPath)\n .AddJsonFile("config.json")\n .AddJsonFile($"config.{hostingEnvironment.EnvironmentName}.json", optional: true);\n\n if (hostingEnvironment.IsDevelopment())\n {\n configurationBuilder.AddUserSecrets();\n }\n\n this.configuration = configurationBuilder\n .AddEnvironmentVariables()\n .AddApplicationInsightsSettings(developerMode: !hostingEnvironment.IsProduction())\n .Build();\n}\n\npublic Startup(IHostingEnvironment hostingEnvironment)\n{\n this.hostingEnvironment = hostingEnvironment;\n this.configuration = new ConfigurationBuilder()\n .SetBasePath(hostingEnvironment.ContentRootPath)\n .AddJsonFile("config.json")\n .AddJsonFile($"config.{hostingEnvironment.EnvironmentName}.json", optional: true)\n .AddIf(\n hostingEnvironment.IsDevelopment(),\n x => x.AddUserSecrets())\n .AddEnvironmentVariables()\n .AddApplicationInsightsSettings(developerMode: !hostingEnvironment.IsProduction())\n .Build();\n}\n\nAs if that wasn't enough, I also added the same AddIf and AddIfElse extension methods to IServiceCollection. In my experience, these would be used less often, but I've added them for completeness.
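The IServiceCollection variant follows the same pattern; here is a sketch (the service types in the usage comment are hypothetical, and the actual Boxed.AspNetCore implementation may differ):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public static class ServiceCollectionExtensions
{
    // Registers services only when the condition is true,
    // preserving the fluent chain.
    public static IServiceCollection AddIf(
        this IServiceCollection services,
        bool condition,
        Func<IServiceCollection, IServiceCollection> action) =>
        condition ? action(services) : services;
}

// Usage, e.g. swapping in a fake service during development
// (IFooService and FakeFooService are hypothetical examples):
// services.AddIf(
//     hostingEnvironment.IsDevelopment(),
//     x => x.AddSingleton<IFooService, FakeFooService>());
```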
You can get these extension methods and much more by installing the Boxed.AspNetCore NuGet package or create a project using the .NET Boxed project templates. Finally, if you are so inclined, you can also take a look at the code for these extension methods in the .NET Boxed Framework project.
\n", "url": "https://rehansaeed.com/asp-net-core-fluent-interface-extensions/", "title": "ASP.NET Core Fluent Interface Extensions", "summary": "Using the fluent interface style in with ASP.NET Core Fluent Interface Extension methods. Building on top of the work done by Khalid Abuhakmeh.", "image": "https://rehansaeed.com/images/hero/Fluent-Interface-All-The-Things-1366x768.png", "date_modified": "2016-06-26T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/azure-active-directory-versus-identity-server/", "content_html": "::: warning Disclaimer\nI looked into this subject for use by the company I work for, who had existing infrastructure I had to cater to, so the solution I chose is skewed in that direction. Even so I've tried to give an impartial summary of my own thoughts during my research. I originally asked this question on an Identity Server GitHub issue.\n:::
\nAzure Active Directory is a hosted identity solution, so there is far less setup (especially if like me, you discover that to your surprise, you are already using it for Office 365). Out of the box, it provides some very nice features that can get you started very quickly.
\nAzure Active Directory can connect to an on-premises Active Directory server very easily using something called Azure AD Connect. Most companies are not running everything in the cloud and have an on-premises AD server, so this is a pretty big killer feature.
\nSyncing the two directories happens transparently but there are a bunch of things that can be configured like the way passwords are synced. I'm not a system administrator so I've not set this up personally but most IT admins can do this pretty painlessly.
\nThe premium edition of Azure Active Directory has a monitoring and reporting capability called Connect Health so you can see who is logging into your system and when. You can also get alerts for any seemingly nefarious activity, like a report on the top 50 users with failed username and password attempts, as well as a report on whether Azure AD is syncing correctly with any on-premises AD server you might have. It's a pretty nice feature for IT Admins, while others might not care too much about it.
\n
The premium edition has two factor authentication built in right out of the box, so there is no need to set up a text message provider, and the costs of sending those messages are included.
\nMicrosoft is monitoring all logins and actively blocks activity from known attackers (a bit like Cloudflare for identity), so it should in theory provide some added security. There is not much detail about this though.
\nYou can manage users from the Azure portal but the UI is just about passable. If you are using Office 365, you're in a better position as it provides a better UI to manage users.
\nCustomization of the UI is very basic. You can provide a company logo and background image, which get displayed on the login screens but that's about it.
\nThe overall developer experience is pretty slick. Creating a new project in Visual Studio lets you enable integration with Azure AD by just logging in using your Azure credentials and selecting your Azure AD account. It doesn't get any easier than that for simple scenarios.
\nFor more complex scenarios, you will inevitably have to log into the Azure Portal and configure things a bit more. You often end up having to download and edit an XML configuration file from the Azure Portal. This is not the best experience in the world.
\nYou have to pay for the premium features and using the Azure Portal to do identity management is kind of a pain. Out of the box though, this is ridiculously fast to set up and can get you up to speed very quickly, while giving you a secure platform.
\nThe documentation is pretty good and there are samples on GitHub with Microsoft developers actively monitoring the issues which was helpful. Some links I found useful:
\nIdentityServer is the Swiss Army knife of Identity management. It can do everything but does require a small amount of setup and a little more knowledge of the identity space. It can do most things that I listed above and a lot more beyond.
\nIdentityServer can connect to one or more identity sources. It has to be noted that even if you are using Azure Active Directory, there may still be reasons for choosing IdentityServer which I had not initially considered. For example, if you have more than one source of user data, e.g. you are using AD and also a SQL database of users, then IdentityServer can be used to point to both of these sources of user information. In theory it should also make it easier to switch from AD to something else entirely, as it decouples things.
\nIt's possible to integrate Application Insights yourself and record things like logins and password resets. You could build a dashboard of graphs which looks like Connect Health. In fact, you can make it look exactly like Connect Health with very little effort.
\nTwo-Factor Authentication requires a third party provider to send text messages and of course this means that there will be a monetary cost. In addition there is a small amount of code you have to write to get things connected.
\nAzure Active Directory provides some built in support for blocking malicious activity, a bit like Cloudflare but for identity. With IdentityServer, you could use the real Cloudflare and get some added protection for very little effort.
\nUI customization is where IdentityServer shines. You have full access to the HTML and CSS and can fully customize the look and feel to your heart's content.
\nIdentityServer is built by Dominick Baier, Brock Allen and the open source community. I actually did a WCF course under the instruction of Dominick many years ago and I can tell you that IdentityServer is in capable hands.
\nAny questions or issues you have would be posted on the relevant IdentityServer GitHub project. Dominick, Brock and other community members often answer questions. Overall, it's run as a healthy open source project.
\nMicrosoft has attempted to build their own identity provider in the past but the solution wasn't the best. Having embraced open source, they now recommend IdentityServer themselves.
\nThe project is actively developed on GitHub and it has well known developers at the helm. There are code samples for all the authentication flows and you can get answers from the community. Some links I found useful:
\nFact: Security is really really hard. There are lots of different ways of doing authentication called 'flows'. I put this link here because I found it very useful for understanding them. Also, the following diagram is key to understanding this entire topic.
\n
What you decide to choose depends entirely on the problem you have. Which should you choose? Well, it depends on the number of developers, time, money and effort you can expend setting everything up. There is no one size fits all solution. Really, the differences in the two products above are the differences between a SaaS and PaaS solution.
\nWhich did I choose? While I was doing this research, I discovered to my surprise that the company I work for already had an Azure Active Directory linked to an on-premises Active Directory server because we were using Office 365. That made the choice much easier for us.
\n", "url": "https://rehansaeed.com/azure-active-directory-versus-identity-server/", "title": "Azure Active Directory Versus Identity Server", "summary": "A comparison between Azure Active Directory and Identity Server covering the advantages and disadvantages of both.", "image": "https://rehansaeed.com/images/hero/Azure-Active-Directory-Versus-Identity-Server-1366x768.png", "date_modified": "2016-05-20T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/social-taghelpers-for-asp-net-core/", "content_html": "Social media websites like Facebook, Twitter, Google+, Pintrest etc. provide ways to enhance the experience of sharing a page from your site through the use of meta tags. These provide metadata about what is on your page in a standardized format that these sites can use to better display your content. Here are two quick examples of the enhanced content that Twitter and Facebook display when you add these meta tags to your page:
\n

It turns out that most of the social media sites use only two standard sets of meta tags, namely Open Graph (Facebook) and Twitter Cards. I have built ASP.NET Core TagHelpers and ASP.NET 4.6 HTML Helpers which make it easy to add these meta tags to your site.
\nThis is nothing to do with social media meta tags but worth mentioning. The author meta tag has been around for many years and is a standard but very basic way of telling search engines and others, who authored your page. It's unclear where if anywhere this tag is used but as it's a standard I like to put it in anyway as it doesn't hurt to do so.
\n<meta name="author" content="Muhammad Rehan Saeed">\n\nOpen Graph is an open standard (it's set by Facebook and doesn't seem so open to me as I'll explain), containing several sets of meta tags which represent various things, such as:
\nWebsite
\nMusic Album
\nMusic Song
\nMusic Playlist
\nVideo Movie
\nVideo Episode
\nVideo TV Show
\nVideo Other
\nArticle
\nBook
\nProfile
\nHere is an example of what the meta tags for a page look like for the Website set. Note the type tag, which determines the name of the set used:
<meta property="og:type" content="website">\n<meta property="og:title" content=".NET Boxed">\n<meta property="og:url" content="http://example.com/">\n<meta property="og:image" content="http://example.com/1200x630.png">\n<meta property="og:image:type" content="image/png">\n<meta property="og:image:width" content="1200">\n<meta property="og:image:height" content="630">\n<meta property="og:site_name" content=".NET Boxed">\n\nWhat I find perplexing is that Facebook also have their own custom sets of meta tags over and above the ones in Open Graph. These are:
\nArticle
\nBooks Author
\nBooks Book
\nBooks Genre
\nBusiness
\nFitness Course
\nGame Achievement
\nMusic Album
\nMusic Playlist
\nMusic Radio Station
\nMusic Song
\nPlace
\nProduct
\nProduct Group
\nProduct Item
\nProfile
\nRestaurant Menu
\nRestaurant Menu Item
\nRestaurant Menu Section
\nRestaurant
\nVideo Episode
\nVideo Movie
\nVideo Other
\nVideo TV Show
\nAs you can see there is a lot more choice and detail here. What's confusing is that there is overlap between the Open Graph and Facebook meta tags. Both have sets covering music, video and books, with the Facebook sets requiring you to add far more detailed metadata. The Open Graph tags may play nicer with other social media sites that use these tags while the Facebook ones will obviously give the best experience for the user on Facebook. The above meta tags can be set using my tag helpers or HTML helpers depending on the version of ASP.NET you are using like so:
\n<open-graph-website site-name="My Website"\n title="Page Title"\n main-image="@(new OpenGraphImage(\n Url.AbsoluteContent("~/img/1200x630.png"),\n ContentType.Png,\n 1200,\n 630))"\n determiner="OpenGraphDeterminer.Blank">\n\n@Html.OpenGraph(new OpenGraphWebsite(\n "Page Title",\n new OpenGraphImage(\n Url.AbsoluteContent("~/1200x630.png"))\n {\n Height = 630, \n Type = ContentType.Png, \n Width = 1200 \n })\n {\n Determiner = OpenGraphDeterminer.Blank,\n SiteName = "My Site"\n });\n\nOf course there are tag helpers and HTML helpers for all of the above meta tag sets.
\nTwitter cards require meta tags representing one of several card types, each of which represents a different kind of content:
\nIf you have already added Open Graph meta tags, then Twitter can make use of them and you can omit some of the meta tags that Twitter requires. This makes adding a Twitter Card very easy and in fact, most of the time all you need to do is include a Twitter username and the card type. Here is an example of what Twitter card meta tags look like given that you already have Open Graph meta tags:
\n<meta name="twitter:card" content="summary_large_image">\n<meta name="twitter:site" content="@RehanSaeed">\n\nBelow is an example of how to generate the above code using my tag helpers or HTML helpers. I have used the Summary Large Image card (notice the double @ sign in the tag helper; @ is a special character in Razor and @@ escapes it):
<twitter-card-summary-large-image username="@@RehanSaeed">\n\n@Html.TwitterCard(new SummaryLargeImageTwitterCard("@RehanSaeed"));\n\nThere are also tag helpers and HTML helpers for all of the above Twitter cards. The other cards are a little more complicated than the summary card I have shown in my example above.
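For completeness, if a page has no Open Graph meta tags, the card needs its own title, description and image tags. A minimal standalone Summary Large Image card might look something like this (the values are placeholders):

```html
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@RehanSaeed">
<meta name="twitter:title" content="Page Title">
<meta name="twitter:description" content="A short description of the page.">
<meta name="twitter:image" content="https://example.com/1200x630.png">
```

With Open Graph tags present, Twitter falls back to og:title, og:description and og:image for the missing values.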
\nDue to the proliferation of Facebook's Open Graph and Twitter's card meta tags, other social media sites and search engines also use them. By implementing the above meta tags, you can cover most of the ground with very little effort.
\nDue to the difficulty of getting these meta tags correct, there are several validator tools that the various social media companies provide which let you confirm that you have not made any mistakes. Now, if you've used my tag helpers or HTML helpers, you should be ahead of the game and things should just work, but it's worth checking out:
\nWhen I was looking into implementing these tag helpers and HTML helpers, I looked at a few other efforts on GitHub. However, for some strange reason all of them used reflection behind the scenes. At this point I'd like to go on a short rant against using reflection. I've seen a lot of 'clever' code use reflection over the years and I've seen far too many developers hammer far too many nails with it. It's a very powerful tool but it gets abused far too often. Now, back to normal service. Using reflection made these libraries pretty slow at generating a few meta tags, not to mention that they don't support ASP.NET Core. My implementation uses a single StringBuilder and should be fairly fast. At some point I will even use object pooling to reuse copies of StringBuilder.
This tag or HTML helper is available in a few ways:
\nLast week I wrote part one of a blog post discussing a Subresource Integrity (SRI) tag helper I wrote for ASP.NET Core. It turns out the post was featured on the ASP.NET Community Standup and discussed at length by Scott Hanselman, Damian Edwards and Jon Galloway. Here is the discussion:
\nhttps://www.youtube.com/watch?v=Mu2jol8EmVo
\nThe overall impression from the standup was that the SRI tag helper I wrote was a good first step but there was more work to be done. It was, however, still more secure than "the rest of the internet" according to Jon Galloway. The main issue raised during the standup was that the first call made to get the resource could retrieve a version of it that was compromised.
\nMy initial thinking was that you could check the files at deployment time, when the tag helper first runs. The tag helper would then calculate the hash and cache it without any expiration time, so you would be good from then on. In hindsight, checking the files on every deployment is not great for the developer.
\nSo for the next iteration I have added a new alternative source attribute, basically a local file from which the SRI is calculated. Now the tag helper looks like this when in use:
\n<script\n    src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js"\n    asp-subresource-integrity-src="~/js/jquery.min.js"></script>\n\nYou can also customize the hashing algorithm used in your SRI. You can choose between SHA256, SHA384 and SHA512. By default, the tag helper uses the most secure option, SHA512, which seems to be supported by all browsers. Should you choose to use a different hashing algorithm or even use more than one algorithm, you can set the asp-subresource-integrity-hash-algorithms attribute which is just a flagged enumeration (Note that I am using ASP.NET Core RC2 syntax, where the name of the enumeration can be omitted):
<script\n src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js"\n asp-subresource-integrity-src="~/js/jquery.min.js"\n asp-subresource-integrity-hash-algorithms="SHA256 | SHA384 | SHA512"></script>\n\nWhat is it doing behind the scenes?
\nReads the local file specified by asp-subresource-integrity-src and calculates its SRI hash.\nAdds the integrity and crossorigin attributes to the script tag.\nCaches the hash using the distributed cache (IDistributedCache) built into ASP.NET Core with no expiry date. If you are using a distributed cache like Redis (which you should, for the pure speed of it) then the hash will be remembered.\n\nIn my last post I noted that SRI requires that the resource has a valid Access-Control-Allow-Origin HTTP header (usually with a * value). Microsoft's CDN does not supply this header for all its resources. I did reach out to Microsoft to see if this could be fixed. I've not heard back yet. I would imagine that with a CDN of that size, fixing this issue is a non-trivial thing, so it might take time, but I'll do some more chasing.
Last week, I noted that leaving out the scheme in the URL for your CDN resource e.g. //example.com/jquery.js caused Firefox to error and fail to load the resource completely, and I recommended that you always include the https:// scheme. It turns out that this was not Firefox causing the issue at all but a Firefox browser extension. I've yet to figure out which one, as I have quite a few installed (most of them security related because I'm paranoid), but it's probably an extension called HTTPS Everywhere which attempts to use HTTPS wherever it is available. To be on the safe side and avoid this problem, always specify the https:// scheme.
So what happens when a CDN script is maliciously edited or (much more likely) you messed up and your local copy of the CDN script is different from the one in the CDN? Well, this is where CDN script fallbacks come in. There is already a tag helper provided by ASP.NET Core that does this:
\n<script\n    src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js"\n    asp-subresource-integrity-src="~/js/jquery.min.js"\n    asp-fallback-src="~/js/jquery.min.js"\n    asp-fallback-test="window.jQuery"></script>\n\nI should also mention that although the fallback tag helper is cool and very simple to use, it adds inline script which is not compatible with the Content Security Policy (CSP) HTTP header. If you care about security (and you probably do if you are reading this), that means using the fallback tag helper is not possible. I myself prefer to move all my fallback checks to a separate JavaScript file.
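As a sketch of what such a separate file might contain (the function name and local path are mine, not part of ASP.NET Core), the check boils down to testing for the global that the CDN script should have created and injecting a script tag pointing at the local copy if it is missing:

```javascript
// fallback.js - loaded from its own <script> tag, so no inline script is
// needed and the Content Security Policy stays intact.
function loadFallback(cdnScriptLoaded, doc, localSrc) {
    if (cdnScriptLoaded) {
        return false; // The CDN copy loaded fine, nothing to do.
    }
    // The CDN copy failed or was blocked, so fall back to the local copy.
    var script = doc.createElement('script');
    script.src = localSrc;
    doc.head.appendChild(script);
    return true;
}

// In the browser: loadFallback(!!window.jQuery, document, '/js/jquery.min.js');
```

The function takes the document as a parameter only so the logic is easy to test; in a real fallback.js you would use the globals directly.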
\nThis tag helper is available in a few ways:
\nCan you trust your CDN provider? What if they get hacked and the copy of jQuery you are using hosted by them has some malicious script added to it? You would have no idea this was happening! This is where Subresource Integrity (SRI) comes in.
\nIt works by taking a cryptographic hash of the file hosted on the CDN and adding that to your script or link tags. So in our case if we are using jQuery, we would add an integrity and crossorigin attribute to our script tag like so:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js" \n integrity="sha256-ivk71nXhz9nsyFDoYoGf2sbjrR9ddh+XDkCcfZxjvcM=" \n crossorigin="anonymous"></script>\n\nThe cryptographic hashing algorithm used can be SHA256, SHA384 or SHA512 at the time of writing. In fact, you can use more than one at a time and browsers will pick the most secure one to check the file against.
\nThe official standard document states that currently only script or link tags are supported for your JavaScript or CSS. However, it also states that this is likely to be expanded to pretty much any tag with a src or href attribute such as images, objects etc.
Scott Helme has a great post on the subject which I highly recommend you read (It's where I learned about it).
\nI implemented a tag helper for ASP.NET Core which is as simple to use as this:
\n<script asp-subresource-integrity\n src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>\n\nDon't you love it when security is so easy! I'm a big believer in making security as easy as having a big red button that says 'on' and turning it on by default so people don't have to. It's the only way these things will get used! What is it doing behind the scenes?
\nDownloads the file from the CDN.\nCalculates a cryptographic hash of the file's contents.\nAdds the integrity and crossorigin attributes to the script tag.\n\nThere are actually two tag helpers: one supports any tag with a src attribute and another supports any tag with a href attribute. This is in preparation for when subresource integrity is opened up to tags other than script and link.
In the past, I have often omitted the scheme from the CDN URL like so:
\n<script src="//ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js"></script>\n\nHowever, I have noticed that Firefox does not like it when you use SRI and omit the scheme. It stops the file from loading completely. When you think about it, this makes sense. We are trying to confirm that the resource has not been changed and one of the ways to do this is to use HTTPS. It does not make sense to use SRI over HTTP.
\nThe other gotcha I found is that the resource must have the Access-Control-Allow-Origin HTTP header. It can be set to * or your individual domain name. Now, I have been using CDN resources provided by Google (for jQuery), Microsoft (for Bootstrap, jQuery Validation etc.) and MaxCDN (for Font Awesome) because they are free, because most browsers have probably already got a copy of the files from there and because they have very fast global edge nodes.
However, I have discovered that all of them provide the Access-Control-Allow-Origin HTTP header except Microsoft, which omits it on some of their resources. Strangely, they return the header for Bootstrap but not for the jQuery Validation scripts. I have reached out to them in my capacity as an MVP and hope to get the issue solved. In the meantime, if you are using Microsoft's CDN you can switch to another CDN or wait for them to fix the issue.
This tag helper is available in a few ways:
\nI needed to write a console application a while back and was investigating the best way to do this using the available NuGet packages. I'd seen the DNVM command line tool that Microsoft built for ASP.NET Core, really liked it and wanted something similar.
\n
I really like the old school ASCII art title and the use of colour. The .NET Framework does contain an enum called ConsoleColor, but it provides a very limited set of hard-coded colours and has some major omissions, such as the colour orange.
\nIn my hunt for a C# ASCII art generator, I discovered patorjk.com which is great for generating text using various Figlet fonts. Figlet fonts are basically .flf text files which contain instructions on how each letter in the ASCII character table can be printed out. It turns out these fonts are pretty ancient and there are libraries in every language writing out text using Figlet fonts.
\nI was just about to give up and write my own open source library when I discovered Colorful.Console, available on GitHub. Using this library you can very easily write console apps which look like this:
\n
Or this:
\n
The only thing missing was a method to write ASCII text using Figlet fonts, so I contributed some code to the project to get this done. The output, combined with the colour fade that Colorful.Console is capable of, created a pretty cool effect. Unbelievably, this takes only a couple of lines of code to write!
\n
The title image of this post is also generated using Colorful.Console but was a bit more complicated as it transitions through several colours. By default Colorful.Console includes a single Figlet font but there are dozens of others available which you can download and use yourself. They aren't all included by default because they would bloat the library quite a bit.
\nNow the only thing missing in my quest was a command line parser which could let me easily create commands, switches and flags so users could use my command line tool. The best tool I found was Command Line Parser available on GitHub. It's a pretty powerful and fully featured library that makes writing a command line interface very easy. Unfortunately, its output is pretty ugly and it does not let you customize the 'look and feel' of what is output to the console.
\nAt some point, I'd like to make another contribution to Colorful.Console so that it offers command line parsing too, taking inspiration from several command line parsing libraries to make something that's fully customizable and of course very colourful.
\nCommand line tools have been around for decades, it's a wonder that a NuGet package that does all of these things does not exist yet.
\n", "url": "https://rehansaeed.com/colorful-console/", "title": "Colorful.Console", "summary": "Colorful.Console is a C# library that wraps around the System.Console class, making your console apps more colourful. Write ASCII art using Figlet fonts.", "image": "https://rehansaeed.com/images/hero/Colorful.Console-1366x768.png", "date_modified": "2016-02-14T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/make-certificate/", "content_html": "Making your own certificate files is quite hard work. You have to use makecert.exe and pvk2pfx.exe, passing in some pretty cryptic arguments which you always have to go back and research.
Learning how to make a certificate and the different types of certificate is pretty important. I highly recommend reading this blog post from Jayway.com which has some very detailed instructions and is the basis of MakeCertificate.
\nTo make things easier I made a PowerShell script called MakeCertificate.ps1 which you can get on the MakeCertificate GitHub page. It asks you to pick the type of certificate you want to create; there are a few different types of certificates that MakeCertificate helps to make: Certificate Authority (CA) Certificates, SSL/TLS Server Certificates and Client Certificates. You are then asked a series of questions which, when answered, output three files.
It also outputs the command you need to execute using makecert.exe and pvk2pfx.exe to recreate the certificate.
Picking a logging framework for your new .NET project? I've tried all the best known ones, including log4net, NLog and Microsoft's Logging Application Block. All of these logging frameworks basically output plain text but recently I tried Serilog and was literally blown away by what you could do with it.
\nTake a look at the code below which makes use of the Serilog logger to log a geo-coordinate and an integer:
\nvar position = new { Latitude = 25, Longitude = 134 };\nvar elapsedMs = 34;\n\nlog.Information("Processed {@Position} in {Elapsed:000} ms.", position, elapsedMs);\n\nIf you configure Serilog correctly, you can get it to output its logs in JSON format, so the above line would log the following:
\n{\n "Timestamp": "2015-12-07T12:26:24.0557671+00:00",\n "Level": "Information",\n "MessageTemplate": "Processed {@Position} in {Elapsed:000} ms.",\n "RenderedMessage": "Processed { Latitude: 25, Longitude: 134 } in 034 ms.",\n "Properties": {\n "Position": \n { \n "Latitude": 25,\n "Longitude": 134\n }, \n "Elapsed": 34,\n "ProcessId": 123,\n "ThreadId": 123,\n "User": "Domain\\\\Username",\n "Machine": "Machine-Name",\n "Source": "My Application Name"\n }\n}\n\nWhat can you do with JSON formatted logs that you can't do with plain text? Well, if you store all your logs in something like Elastic Search, you can query your logs and ask it questions. So if we take the above example further we could find all log messages from a particular machine or user with an elapsed time of more than 10 milliseconds and a distance of 10 Km away from the specific location.
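For example, the "elapsed time of more than 10 milliseconds on a particular machine" query might look something like this in Elasticsearch's query DSL (the field paths assume events are indexed in the JSON shape shown above; your sink's mapping may differ):

```json
{
  "query": {
    "bool": {
      "must": [
        { "term": { "Properties.Machine": "Machine-Name" } },
        { "range": { "Properties.Elapsed": { "gt": 10 } } }
      ]
    }
  }
}
```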
\nNot only that but if you set up something like Kibana, then you can create visualisations for your logs which could grow to be gigabytes in size over time. You can create dashboards with cool charts and maps that look something like this:
\n
One major problem with exceptions is that loggers do not log all the properties of an exception and so throw away vital information. Take the DbEntityValidationException from EntityFramework as an example. This exception contains vital information buried not in the message but in a custom property called EntityValidationErrors. The problem is that when you do an exception.ToString() call, this vital information is not included in the resulting string. Even worse, it's not included in the debugger either. This is a pretty major failing in the .NET framework but alas we have to work around it.
There are literally dozens of questions on Stack Overflow asking how to deal with this problem and all the major logging frameworks fail in this regard. All of them call exception.ToString() and fail to log the EntityValidationErrors collection.
DbEntityValidationException is not the only culprit, half the exceptions in the .NET framework contain custom properties that are not logged. The Exception base class itself has a Data dictionary collection which is never logged either.
I wrote Serilog.Exceptions to solve this problem. So what happens when you log a DbEntityValidationException using this NuGet package added to Serilog itself? Well take a look yourself:
{\n "Timestamp": "2015-12-07T12:26:24.0557671+00:00",\n "Level": "Error",\n "MessageTemplate": "Hello World",\n "RenderedMessage": "Hello World",\n "Exception": "System.Data.Entity.Validation.DbEntityValidationException: Message",\n "Properties": {\n "ExceptionDetail": {\n "EntityValidationErrors": [\n {\n "Entry": null,\n "ValidationErrors": [\n {\n "PropertyName": "PropertyName",\n "ErrorMessage": "PropertyName is Required.",\n "Type": "System.Data.Entity.Validation.DbValidationError"\n }\n ],\n "IsValid": false,\n "Type": "System.Data.Entity.Validation.DbEntityValidationResult"\n }\n ],\n "Message": "Validation failed for one or more entities. See 'EntityValidationErrors' property for more details.",\n "Data": {},\n "InnerException": null,\n "TargetSite": null,\n "StackTrace": null,\n "HelpLink": null,\n "Source": null,\n "HResult": -2146232032,\n "Type": "System.Data.Entity.Validation.DbEntityValidationException"\n },\n "ProcessId": 123,\n "ThreadId": 123,\n "User": "Domain\\\\Username",\n "Machine": "Machine-Name",\n "Source": "My Application Name"\n }\n}\n\nIt logs every single property of the exception and not only that but it drills down even further into the object hierarchy and logs that information too.
\nYou're probably thinking it uses reflection right? Well...sometimes. This library has custom code to deal with extra properties on most common exception types and only falls back to using reflection to get the extra information if the exception is not supported by Serilog.Exceptions internally.
\nAdd the Serilog.Exceptions NuGet package to your project using the NuGet Package Manager or run the following command in the Package Console Window:
\nInstall-Package Serilog.Exceptions\n\nWhen setting up your logger, add the WithExceptionDetails line like so:
using Serilog;\nusing Serilog.Exceptions;\n\nILogger logger = new LoggerConfiguration()\n    .Enrich.WithExceptionDetails()\n    .WriteTo.Sink(new RollingFileSink(\n        @"C:\\logs",\n        new JsonFormatter(renderMessage: true)))\n    .CreateLogger();\n\nThat's it, it's one line of code!
\n", "url": "https://rehansaeed.com/logging-with-serilog-exceptions/", "title": "Logging with Serilog.Exceptions", "summary": "Log exception details and custom properties that are not output in Exception.ToString() using Serilog.Exceptions for .NET.", "image": "https://rehansaeed.com/images/hero/Serilog.Exceptions-1366x768.png", "date_modified": "2016-01-31T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/net-big-o-algorithm-complexity-cheat-sheet/", "content_html": "All credit goes to the creator of the Big-O Algorithm Complexity Cheat Sheet Eric Rowell and the many contributors to it. You can find the original here. I simply added .NET specific bits to it and posted it on GitHub here.
\nIt covers the space and time Big-O notation complexities of common algorithms used in Computer Science and specifically the .NET framework.
\nYou can see which collection type or sorting algorithm to use at a glance to write the most efficient code.
\nThis is also useful for those studying Computer Science at university or for technical interview tests where Big-O notation questions can be fairly common depending on the type of company you are applying to.
\nYou can download the cheat sheet in three different formats:
\n\n", "url": "https://rehansaeed.com/net-big-o-algorithm-complexity-cheat-sheet/", "title": ".NET Big-O Algorithm Complexity Cheat Sheet", "summary": "Shows Big-O time and space complexities of common algorithms used in Computer Science and the.NET Framework to write the most efficient code.", "image": "https://rehansaeed.com/images/hero/NET-Big-O-Algorithm-Complexity-Cheat-Sheet-1366x768.jpg", "date_modified": "2015-10-15T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/so-ive-been-awarded-microsoft-mvp-status/", "content_html": "Lucian Wischik contacted me out of the blue one day to update and test one of my NuGet packages using NuGet 3.0. We had a pleasant email exchange and he suggested I nominate myself for becoming a Microsoft MVP...so I did.
\nIt actually takes a fair amount of time to apply, you have to give details of all your online accounts and open source projects. Not only that but for each one you have to specify how many downloads or page views you have got. Only after doing this, did I realize how much of an online presence I really have.
\nA few months later and lo and behold I get another email out of the blue telling me I'm one of 4000 people being awarded Microsoft MVP status. So what does this mean, I thought to myself? Well it turns out you get a few freebies:
\nI hadn't realized but you have to reapply to become an MVP every year. I'm not sure how that works but I guess I'll find out a year later.
\nThanks to Lucian for suggesting I apply, my wife who has had to put up with me messing around with code all the time and all the people who downloaded some of my code and found it useful. I get a warm fuzzy feeling every time I see a new website pop-up using my project template or the download numbers for my various projects grow.
\n", "url": "https://rehansaeed.com/so-ive-been-awarded-microsoft-mvp-status/", "title": "So I've Been Awarded Microsoft MVP Status!", "summary": "Muhammad Rehan Saeed has been awarded Microsoft MVP (Most Valuable Professional) status.", "image": "https://rehansaeed.com/images/hero/Microsoft-MVP-1366x768.png", "date_modified": "2015-10-02T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/dynamically-generating-sitemap-xml-for-asp-net-mvc/", "content_html": "What is a sitemap.xml file used for? The official sitemaps.org site really does says it best:
\n\nSitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URL's for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URL's in the site) so that search engines can more intelligently crawl the site.
\nWeb crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URL's in the Sitemap and learn about those URL's using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site.
\n
<?xml version="1.0" encoding="UTF-8"?>\n<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n <url>\n <loc>http://www.example.com/</loc>\n <lastmod>2005-01-01</lastmod>\n <changefreq>monthly</changefreq>\n <priority>0.8</priority>\n </url>\n <!-- ... -->\n</urlset>\n\nAs you can see each URL in a sitemap contains four pieces of metadata:
\nurl - The URL itself.\nlastmod (Optional) - A last modified timestamp. This tells search engines whether or not they should re-index the page to reflect any changes that have been made.\nchangefreq (Optional) - A change frequency indicator (this can take the values: always, hourly, daily, weekly, monthly, yearly or never). This gives search engines an indication of how often they should come back and re-index the page.\npriority (Optional) - A number from zero to one indicating the importance of the page compared to other pages on the site.\n\nThe latter three values only give search engines an indication of when they can or should index or even re-index a page. It is not a guarantee that it will happen, although it makes it more likely.
\nSearch engines are black boxes. We only know what goes into them (Our sitemap) and what comes out the other end (The search results). I can make no promises that adding a sitemap will increase your sites search rankings but Google says:
\n\n\nUsing a sitemap doesn't guarantee that all the items in your sitemap will be crawled and indexed, as Google processes rely on complex algorithms to schedule crawling. However, in most cases, your site will benefit from having a sitemap, and you'll never be penalized for having one.
\n\n
There are tools online you can use to generate a static sitemap.xml file, which you can dump at the root of your site but you have to manually update these every time your site changes. This may be fine if your site does not change much but adding a dynamically generated sitemap.xml file is fairly simple process and worth the effort.
Dynamically generating a simple sitemap.xml file for ASP.NET MVC is really simple but adding all the bells and whistles requires a bit more work. We start with a SitemapNode and frequency enumeration which represents a single URL in our sitemap:
public class SitemapNode\n{\n public SitemapFrequency? Frequency { get; set; }\n public DateTime? LastModified { get; set; }\n public double? Priority { get; set; }\n public string Url { get; set; }\n}\n\npublic enum SitemapFrequency\n{\n Never,\n Yearly,\n Monthly,\n Weekly,\n Daily,\n Hourly,\n Always\n}\n\nNow we need to create a collection of SitemapNode's. In my example below, I add the three main pages of my site, Home, About and Contact. I then go on to add a collection of product pages. I am getting every product ID from my database and using that to generate a product URL. Note that I'm not using every property on the SitemapNode class since in my case I don't have an easy way to figure out a last changed date but I do specify a priority and frequency for my products.
Please note that the URL's must be absolute and I am using an extension method I wrote called AbsoluteRouteUrl to generate absolute URL's instead of relative ones. I have included that below too.
public IReadOnlyCollection<SitemapNode> GetSitemapNodes(UrlHelper urlHelper)\n{\n List<SitemapNode> nodes = new List<SitemapNode>();\n\n nodes.Add(\n new SitemapNode()\n {\n Url = urlHelper.AbsoluteRouteUrl("HomeGetIndex"),\n Priority = 1\n });\n nodes.Add(\n new SitemapNode()\n {\n Url = urlHelper.AbsoluteRouteUrl("HomeGetAbout"),\n Priority = 0.9\n });\n nodes.Add(\n new SitemapNode()\n {\n Url = urlHelper.AbsoluteRouteUrl("HomeGetContact"),\n Priority = 0.9\n });\n\n foreach (int productId in productRepository.GetProductIds())\n {\n nodes.Add(\n new SitemapNode()\n {\n Url = urlHelper.AbsoluteRouteUrl("ProductGetProduct", new { id = productId }),\n Frequency = SitemapFrequency.Weekly,\n Priority = 0.8\n });\n }\n\n return nodes;\n}\n\npublic class UrlHelperExtensions\n{\n public static string AbsoluteRouteUrl(\n this UrlHelper urlHelper,\n string routeName,\n object routeValues = null)\n {\n string scheme = urlHelper.RequestContext.HttpContext.Request.Url.Scheme;\n return urlHelper.RouteUrl(routeName, routeValues, scheme);\n }\n}\n\nNow all we have to do is turn our collection of SitemapNode's into XML:
public string GetSitemapDocument(IEnumerable<SitemapNode> sitemapNodes)\n{\n XNamespace xmlns = "http://www.sitemaps.org/schemas/sitemap/0.9";\n XElement root = new XElement(xmlns + "urlset");\n\n foreach (SitemapNode sitemapNode in sitemapNodes)\n {\n XElement urlElement = new XElement(\n xmlns + "url",\n new XElement(xmlns + "loc", Uri.EscapeUriString(sitemapNode.Url)),\n sitemapNode.LastModified == null ? null : new XElement(\n xmlns + "lastmod", \n sitemapNode.LastModified.Value.ToLocalTime().ToString("yyyy-MM-ddTHH:mm:sszzz")),\n sitemapNode.Frequency == null ? null : new XElement(\n xmlns + "changefreq", \n sitemapNode.Frequency.Value.ToString().ToLowerInvariant()),\n sitemapNode.Priority == null ? null : new XElement(\n xmlns + "priority", \n sitemapNode.Priority.Value.ToString("F1", CultureInfo.InvariantCulture)));\n root.Add(urlElement);\n }\n\n XDocument document = new XDocument(root);\n return document.ToString();\n}\n\nNow we add an action method to our HomeController to get to our sitemap. Note the route to get to the sitemap. It is recommended to place your sitemap at the root of your site at sitemap.xml. Also note that creating a route with a file extension at the end (.xml) is not allowed in MVC 5 and below (ASP.NET Core is fine), so you need to add the line below in your Web.config file.
[RoutePrefix("")]\npublic class HomeController : Controller\n{\n [Route("sitemap.xml")]\n public ActionResult SitemapXml()\n {\n var sitemapNodes = GetSitemapNodes(this.Url);\n string xml = GetSitemapDocument(sitemapNodes);\n return this.Content(xml, ContentType.Xml, Encoding.UTF8);\n }\n}\n\n<configuration>\n <system.webServer>\n <handlers>\n <add name="SitemapXml" path="sitemap.xml" verb="GET" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />\n </handlers>\n </system.webServer>\n</configuration>\n\nFor most people the above code will be enough. You can only have a maximum of 50,000 URL's in your sitemap and it must not exceed 10MB in size. I did some testing and if your URL's are fairly long and you supply all of the metadata for each URL, you can easily hit the 10MB mark with 25,000 URL's.
\nIt's not clear what happens if search engines come across a file that breaches these limits. I would have thought that the likes of Google or Bing would have a margin of error but it's better to be well under the limits than over. Not many sites have that many pages but you'd be surprised at how easy it is to hit these limits.
\nThis is where sitemap index files come in. The idea is that you break up your sitemap into pages and list all of these in an index file. When a search engine visits your sitemap.xml file, they retrieve the index file and visit each page in turn. Here is an example of an index file:
<?xml version="1.0" encoding="UTF-8"?>\n<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n <sitemap>\n <loc>http://www.example.com/sitemap1.xml</loc>\n <lastmod>2004-10-01T18:23:17+00:00</lastmod>\n </sitemap>\n <sitemap>\n <loc>http://www.example.com/sitemap2.xml.gz</loc>\n <lastmod>2005-01-01</lastmod>\n </sitemap>\n</sitemapindex>\n\nAs you can see you can optionally add a last modified date to each sitemap URL to tell search engines when a sitemap file has changed. This last modified date can be calculated from it's contents, you just need to take the latest last modified date from that particular page.
\nThis blog post has started to get a little long and I haven't even covered sitemap pinging yet, so I will not go into too much detail but I will refer you to where you can get at the full source code and worked example. Luckily, all of the code above and the code to generate a sitemap index file is available here:
\nAdding a sitemap is a great Search Engine Optimization (SEO) technique to improve your site's search rankings. My NuGet package makes it a really simple feature to add to your site. In my next blog post, I'll talk about sitemap pinging, which can be used to proactively notify search engines of a change in your sitemap.
\n", "url": "https://rehansaeed.com/dynamically-generating-sitemap-xml-for-asp-net-mvc/", "title": "Dynamically Generating Sitemap.xml for ASP.NET MVC", "summary": "How to dynamically generate a sitemap.xml file using ASP.NET MVC to improve the Search Engine Optimization (SEO) of your site and get better search rankings.", "image": "https://rehansaeed.com/images/hero/Sitemaps-1366x768.png", "date_modified": "2015-09-15T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/whats-new-in-asp-net-5-mvc-6-boilerplate/", "content_html": "I have just updated the ASP.NET Core Boilerplate Visual Studio extension with a new project template targeting ASP.NET Core. This post is just a quick one to talk about what's new and different about this version of the template compared to the ASP.NET 4.6 MVC 5 version.
\nWell, the obvious thing is that this template targets ASP.NET Core which is currently still in beta. In particular I am targeting Beta 6 which is the current stable version. I will be regularly updating the template with each new beta until ASP.NET Core is released sometime in November according to Microsoft.
\nThere are not too many new or improved features over the ASP.NET 4.6 MVC 5 version but here is a quick description:
\nSystem.Web, so it uses a lot less memory. ASP.NET Core is still in beta and there are a lot of third party libraries that don't yet support it. Support will be added as soon as it becomes available. I have contacted all three project owners and can confirm that support will be added soon.
\n\nThe new .NET Core runtime does not currently support the System.ServiceModel.Syndication namespace which is used to build an Atom feed. The .NET Core runtime is still being targeted but the Atom feed will not work and is excluded using #if pre-processor directives. I have raised this issue on the .NET team's GitHub page here. Please do go ahead and show your support for the feature.
There are other issues around ASP.NET Core missing features from MVC 5, including no support for HttpException, which I will be looking into adding soon. I am also looking into submitting any improvements I make to the ASP.NET Core GitHub project; so far, I've had one pull request accepted and a few suggestions acted on.
ASP.NET Core is still in beta but hopefully this project will give an understanding of what can be done with it. There are still missing features but it's surprisingly usable at the moment.
\n", "url": "https://rehansaeed.com/whats-new-in-asp-net-5-mvc-6-boilerplate/", "title": "Whats New in ASP.NET Core Boilerplate", "summary": "With the release of ASP.NET Core Boilerplate, this post discusses what's new and what is currently missing due to ASP.NET Core still being in beta.", "image": "https://rehansaeed.com/images/hero/ASP.NET-Core-Boilerplate-1366x768.png", "date_modified": "2015-08-23T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/minifying-html-for-asp-net-mvc/", "content_html": "Using Razor comments or blocks of code can cause extra carriage returns to appear in the generated HTML. This has been a problem in all versions of ASP.NET MVC for a while now.
\n<p>Paragraph 1</p>\n@* Razor Comment *@\n<p>Paragraph 2</p>\n\nThe above code generates the following HTML. You can imagine that with a lot of comments or code blocks you get a lot of ugly blank lines appearing in your mark-up.
\n<p>Paragraph 1</p>\n\n<p>Paragraph 2</p>\n\nIdeally it should generate the HTML below without any blank lines. If you really wanted a blank line to appear, you could add one yourself before the comment.
\n<p>Paragraph 1</p>\n<p>Paragraph 2</p>\n\nThe main problem with the above is that it makes your HTML look ugly and hard to follow. You often end up with several blank lines, which breaks up the flow of the mark-up.
\nAlso, given that every ASP.NET MVC site on the internet has this problem and probably contains at least two Razor comments and maybe a for-loop in the code somewhere, that is a lot of wasted extra bandwidth.
\nSo I made this suggestion for the next version of ASP.NET Core, to change the behaviour to the expected one above and it got accepted!
\nSo how much bandwidth has this single change saved the internet? That's the question I asked myself. According to httparchive.org, the average request is made up of around 57KB of HTML. If we assume that each page contains two comments and maybe a for-loop, that's six carriage returns (two sets of \r\n each), twelve characters or twelve bytes of wasted bandwidth. If we assume that all sites are using GZip compression, then we can make a conservative estimate of around six bytes of wasted bandwidth per request.
\n
Cisco forecasts that global IP traffic will pass the Zettabyte (1000 Exabytes) threshold by the end of 2016. If the average transfer size per request is 2162 KB and only 57 KB of that is HTML, we can work out that 257,465 Terabytes of the world's internet traffic per year is HTML.
\n
According to w3techs.com, 16.7% of all sites on the internet use ASP.NET as of 1st August 2015. Let's assume half of those (8.35%) will use ASP.NET Core in a few years' time. So, we can say that very roughly 21,498 Terabytes of the world's bandwidth is consumed on ASP.NET HTML requests per year.
\nIf the average wasted bandwidth is six bytes out of a total of 57 KB per HTML request, then we come to a grand total of around 2.3 Terabytes of bandwidth saved per year. I must admit, that's a lot of bandwidth but I still thought it would be a lot more than that.
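As a sanity check, the last step of that estimate can be reproduced with a few lines of C# (the class and method names here are mine, purely for illustration):

```csharp
using System;

public static class BandwidthEstimate
{
    // Wasted bytes per request divided by average HTML bytes per request,
    // multiplied by total yearly ASP.NET HTML traffic in Terabytes.
    public static double WastedTerabytesPerYear(
        double wastedBytesPerRequest,
        double htmlKilobytesPerRequest,
        double aspNetHtmlTerabytesPerYear)
    {
        double wastedFraction = wastedBytesPerRequest / (htmlKilobytesPerRequest * 1024d);
        return wastedFraction * aspNetHtmlTerabytesPerYear;
    }
}
```

Plugging in the assumed figures, WastedTerabytesPerYear(6, 57, 21498) comes out at roughly 2.2 Terabytes per year, in the same ballpark as the figure above; the small difference is rounding in the intermediate numbers.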
\nAn even better solution would be to minify the HTML. There are solutions like Web Markup Min for ASP.NET Core but it works at runtime and is a little involved to set up, so only the most determined developers use this feature.
\nThen there is Dean Hume's compile time minifier which sounded perfect. In Dean's post he gets savings of around 20-30% by minifying his HTML. If we applied a conservative 20% saving to all ASP.NET Core HTML requests, that would work out to be a 4300 Terabyte saving in global bandwidth per year!
\nSo far, I've only mentioned the bandwidth saving but downloading smaller HTML files will also mean quicker page load times. HTML is the first thing a browser downloads before it can go off and download all the other CSS, JavaScript, fonts and images a site needs to display a page. Making this download smaller is a small but effective way to get pages up quicker.
\nThese days, MVC has things like CSS and JavaScript minification built in as standard. To squeeze out even more performance, HTML minification is the next logical step.
\nSo I made this suggestion for ASP.NET Core to implement Dean's compile time minification of Razor views by default. So far, it's not been taken up but I live in hope and write this blog post to show how cool a feature it is.
\nPlease do go and post your support for this feature. We don't necessarily need to use Dean's technique, minifying HTML could just as easily be a Grunt or Gulp task.
\nThis post contains huge leaps of guesswork and estimation. I hope my maths is up to scratch but I would not be surprised if I were off by a decimal point or two. Still, we are talking huge numbers here and I hope I've convinced you that minifying HTML is worth the effort.
\n", "url": "https://rehansaeed.com/minifying-html-for-asp-net-mvc/", "title": "Minifying HTML for ASP.NET MVC", "summary": "How much bandwidth does minifying HTML save. Minifying HTML in ASP.NET MVC 5 is hard work. Minifying HTML should be a built in feature of ASP.NET Core.", "image": "https://rehansaeed.com/images/hero/The-Internet-Is-A-Series-Of-Tubes-1366x768.jpg", "date_modified": "2015-08-06T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/dynamically-generating-robots-txt-using-asp-net-mvc/", "content_html": "A robots.txt file is a simple text file you can place at the root of your site at http://example.com/robots.txt to tell search engine robots (also known as web crawlers) how to index your site. The robots know to look for this file at the root of every site before they start indexing the site. If you do not have this file in your site, you will be getting a lot of 404 Not Found errors in your logs.
The robots.txt uses the Robots Exclusion Standard which is a very simple format that can give robots instructions on what to index and what to skip. A very basic robots.txt file looks like this:
# Allow all robots to index this site.\nuser-agent: *\n\n# Tell all robots not to index any of the pages under the /error path.\ndisallow: /error/\n\n# Tell all robots to index pages under the /error/foo path.\nallow: /error/foo/\n\n# Add a link to the site-map. Unfortunately this must be an absolute URL.\nsitemap: http://example.com/sitemap.xml\n\nIn the above code, all comments start with the hash character. It tells all robots that they can index everything on the site except pages under the /error path because we don't want our error pages showing up in people's search results. The only exception to that rule is to allow the resources under the /error/foo path to be indexed.
The last line is interesting and tells robots where to find an XML file called a site-map. A site-map contains a list of URLs to all the pages in the site and is used to give search engines a list of URLs they can go through to index the entire site. It's a great SEO (Search Engine Optimization) technique to give your site a boost in its search rankings.
\nI will discuss creating a dynamic sitemap.xml file for ASP.NET Core in a future post. For now, all you need to know is that the site-map URL has to be an absolute URL according to the specification. This is a pretty terrible decision by whoever created the robots exclusion standard. It's really annoying that when you're creating a site, you have to remember to manually update this URL. If the URL was relative we would not have this problem.
Fortunately, it's really easy to dynamically create a robots.txt file, which auto-generates the site-map URL using the MVC UrlHelper. Take a look at the code below:
public class HomeController : Controller\n{\n [Route("robots.txt", Name = "GetRobotsText"), OutputCache(Duration = 86400)]\n public ContentResult RobotsText()\n {\n StringBuilder stringBuilder = new StringBuilder();\n \n stringBuilder.AppendLine("user-agent: *");\n stringBuilder.AppendLine("disallow: /error/");\n stringBuilder.AppendLine("allow: /error/foo");\n stringBuilder.Append("sitemap: ");\n stringBuilder.AppendLine(this.Url.RouteUrl("GetSitemapXml", null, this.Request.Url.Scheme).TrimEnd('/'));\n \n return this.Content(stringBuilder.ToString(), "text/plain", Encoding.UTF8);\n }\n \n [Route("sitemap.xml", Name = "GetSitemapXml"), OutputCache(Duration = 86400)]\n public ContentResult SitemapXml()\n {\n // I'll talk about this in a later blog post.\n }\n}\n\nI set up a route to the robots.txt path at the root of the site in my main HomeController and cached the response for a day for better performance (You can and should probably specify a much longer period of time if you know yours won't change).
I then go on to append my commands to the StringBuilder. The great thing is that I can easily use the UrlHelper to generate a complete absolute URL to the sitemap.xml path which is also dynamically generated in much the same way. Finally, I just return the string as plain text using the UTF-8 encoding.
Creating a route ending with a file extension is not allowed by default in ASP.NET. To get around this security restriction, you need to add the following to the Web.config file:
<?xml version="1.0" encoding="utf-8"?>\n<configuration>\n <!-- ...Omitted -->\n <system.webServer>\n <!-- ...Omitted -->\n <handlers>\n <!-- ...Omitted -->\n <add name="RobotsText" \n path="robots.txt" \n verb="GET" \n type="System.Web.Handlers.TransferRequestHandler" \n preCondition="integratedMode,runtimeVersionv4.0" />\n </handlers>\n </system.webServer>\n</configuration>\n\nDynamically generating your robots.txt file is pretty easy and only takes as many lines of code as you need to write your robots.txt file anyway. It also means that you don't need to pollute your project structure with yet another file at the root of it (This problem is fixed in MVC Core, where all static files must be added to the wwwroot folder). You can also dynamically generate your site-map URL so you don't need to remember to update it every time you change the domain.
You could argue that performance is an issue when compared to a static robots.txt text file but it's a matter of a few bytes and if you cache the response with a sufficient time limit then I think that even that problem goes away.
Once again, you can find a working example of this and much more using the ASP.NET Core Boilerplate project template.
\n", "url": "https://rehansaeed.com/dynamically-generating-robots-txt-using-asp-net-mvc/", "title": "Dynamically Generating Robots.txt Using ASP.NET MVC", "summary": "How to dynamically generate a robots.txt file using a simple ASP.NET MVC action method and only a few lines of code.", "image": "https://rehansaeed.com/images/hero/Robots.txt-1366x768.jpg", "date_modified": "2015-07-31T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/canonical-urls-for-asp-net-mvc/", "content_html": "The aim of this post is to give your site better search engine rankings using special Search Engine Optimization (SEO) techniques. Take a look at the URLs below and see if you can spot the differences between them:
\nThe second one has an HTTPS scheme, the third omits the trailing slash and the fourth has mixed-case characters. All of the URLs point to the same resource but it turns out that search engines treat every one of these URLs as unique and different. Search engines give each URL a page rank, which determines where the resource will show up in the search results. Another term you will also hear quite often is 'link juice'. Link juice conceptualizes how page rank flows between pages and websites.
\nIf your site exposes the above four different URLs to the single resource, your link juice is being spread across each one and as a result, that will have a detrimental impact on your page rank.
\nOne way to solve this problem is to add a canonical link tag to the head of your HTML page. This tells search engines what the canonical (actual) URL to the page is. The link tag contains your preferred URL for the page.
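For illustration, a canonical link tag for a hypothetical page whose preferred URL is the lower-case, trailing-slash form would look like this (the example.com address is made up):

```html
<!-- Placed in the <head> of every version of the page. -->
<link rel="canonical" href="http://example.com/one/two/" />
```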
\n\n\nOne thing you must decide early on is your preferred URL for every page. You must ask yourself the following questions and use the resulting URL in your canonical link tag.
\nWhen search engines follow a link to your page, regardless of which URL they followed to get to your page, all of the link juice will be given to the URL specified in your canonical link tag. Google goes into a lot more depth about this tag here.
\nUnfortunately, using the canonical link tag is not the recommended approach. The intention is that it should only be used to retrofit older websites, so they can become optimized for search engines.
\nAccording to both Google and Bing, the recommended approach when a non-preferred format of your page's URL is visited is to perform a 301 permanent redirect to the preferred canonical URL. According to them, you only lose a tiny amount of link juice by doing a 301 permanent redirect.
\nASP.NET MVC 5 and ASP.NET Core have two settings you can use to automatically create canonical URL's every time you generate URL's.
\n// Append a trailing slash to all URL's.\nRouteTable.Routes.AppendTrailingSlash = true;\n// Ensure that all URL's are lower-case.\nRouteTable.Routes.LowercaseUrls = true;\n\nservices.ConfigureRouting(\n routeOptions => \n { \n // Append a trailing slash to all URL's.\n routeOptions.AppendTrailingSlash = true;\n // Ensure that all URL's are lower-case.\n routeOptions.LowercaseUrls = true;\n });\n\nOnce you apply these settings and are using the UrlHelper to generate all your URL's, you will see that across your site all URL's are lower-case and all end with a trailing slash (This is just my personal preference you may not like trailing slashes).
This means that within your site, no 301 permanent redirects to canonical URLs are required because the URLs are already canonical. However, this just solves part of the problem. What about external links to your site? What happens when people copy and paste a link to your site and delete or add a trailing slash? What happens when someone types in a link to your site and puts in an upper-case character? The fact is you have no control over external links and when search engine crawlers follow those non-canonical links you will be losing valuable link juice.
\nEnter the RedirectToCanonicalUrlAttribute. This is an MVC filter you can apply, which will check that the URL from each request is canonical. If it is, it does nothing and MVC returns the view in its response as normal. If the URL is not canonical, it generates the canonical URL based on the above MVC settings and returns a 301 permanent redirect response to the client. The client can then make another request to the correct canonical URL.
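The transformation the filter applies can be illustrated with a simplified, pure C# function. This is only a sketch with a made-up helper name; the full attribute further down also handles query strings, the home page special case and the opt-out attributes:

```csharp
using System;

public static class CanonicalUrl
{
    // Simplified illustration of the canonical form used in this post:
    // lower-case the URL and ensure the trailing slash matches the setting.
    // Unlike the real filter, this ignores query strings and the home page.
    public static string MakeCanonical(string url, bool appendTrailingSlash, bool lowercaseUrls)
    {
        if (lowercaseUrls)
        {
            url = url.ToLowerInvariant();
        }

        bool hasTrailingSlash = url.EndsWith("/", StringComparison.Ordinal);
        if (appendTrailingSlash && !hasTrailingSlash)
        {
            url += "/";
        }
        else if (!appendTrailingSlash && hasTrailingSlash)
        {
            url = url.TrimEnd('/');
        }

        return url;
    }
}
```

Under the settings above, a request for HTTP://Example.com/One/Two would then be 301 redirected to its canonical form http://example.com/one/two/.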
You can take a look at the source code for the RedirectToCanonicalUrlAttribute, NoTrailingSlashAttribute and NoLowercaseQueryStringAttribute's (I shall explain in a minute) for MVC 5 below or the ASP.NET Core version here.
/// <summary>\n/// To improve Search Engine Optimization SEO, there should only be a single URL for each resource. Case \n/// differences and/or URL's with/without trailing slashes are treated as different URL's by search engines. This \n/// filter redirects all non-canonical URL's based on the settings specified to their canonical equivalent. \n/// Note: Non-canonical URL's are not generated by this site template, it is usually external sites which are \n/// linking to your site but have changed the URL case or added/removed trailing slashes.\n/// (See Google's comments at http://googlewebmastercentral.blogspot.co.uk/2010/04/to-slash-or-not-to-slash.html\n/// and Bing's at http://blogs.bing.com/webmaster/2012/01/26/moving-content-think-301-not-relcanonical).\n/// </summary>\n[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]\npublic class RedirectToCanonicalUrlAttribute : FilterAttribute, IAuthorizationFilter\n{\n private const char QueryCharacter = '?';\n private const char SlashCharacter = '/';\n\n private readonly bool appendTrailingSlash;\n private readonly bool lowercaseUrls;\n\n /// <summary>\n /// Initializes a new instance of the <see cref="RedirectToCanonicalUrlAttribute" /> class.\n /// </summary>\n /// <param name="appendTrailingSlash">If set to <c>true</c> append trailing slashes, otherwise strip trailing \n /// slashes.</param>\n /// <param name="lowercaseUrls">If set to <c>true</c> lower-case all URL's.</param>\n public RedirectToCanonicalUrlAttribute(\n bool appendTrailingSlash, \n bool lowercaseUrls)\n {\n this.appendTrailingSlash = appendTrailingSlash;\n this.lowercaseUrls = lowercaseUrls;\n } \n\n /// <summary>\n /// Gets a value indicating whether to append trailing slashes.\n /// </summary>\n /// <value>\n /// <c>true</c> if appending trailing slashes; otherwise, strip trailing slashes.\n /// </value>\n public bool AppendTrailingSlash\n {\n get { return this.appendTrailingSlash; }\n }\n\n 
/// <summary>\n /// Gets a value indicating whether to lower-case all URL's.\n /// </summary>\n /// <value>\n /// <c>true</c> if lower-casing URL's; otherwise, <c>false</c>.\n /// </value>\n public bool LowercaseUrls\n {\n get { return this.lowercaseUrls; }\n }\n\n /// <summary>\n /// Determines whether the HTTP request contains a non-canonical URL using <see cref="TryGetCanonicalUrl"/>, \n /// if it doesn't calls the <see cref="HandleNonCanonicalRequest"/> method.\n /// </summary>\n /// <param name="filterContext">An object that encapsulates information that is required in order to use the \n /// <see cref="RedirectToCanonicalUrlAttribute"/> attribute.</param>\n /// <exception cref="ArgumentNullException">The <paramref name="filterContext"/> parameter is <c>null</c>.</exception>\n public virtual void OnAuthorization(AuthorizationContext filterContext)\n {\n if (filterContext == null)\n {\n throw new ArgumentNullException(nameof(filterContext));\n }\n\n if (string.Equals(filterContext.HttpContext.Request.HttpMethod, "GET", StringComparison.Ordinal))\n {\n string canonicalUrl;\n if (!this.TryGetCanonicalUrl(filterContext, out canonicalUrl))\n {\n this.HandleNonCanonicalRequest(filterContext, canonicalUrl);\n }\n }\n }\n\n /// <summary>\n /// Determines whether the specified URl is canonical and if it is not, outputs the canonical URL.\n /// </summary>\n /// <param name="filterContext">An object that encapsulates information that is required in order to use the \n /// <see cref="RedirectToCanonicalUrlAttribute" /> attribute.</param>\n /// <param name="canonicalUrl">The canonical URL.</param>\n /// <returns><c>true</c> if the URL is canonical, otherwise <c>false</c>.</returns>\n protected virtual bool TryGetCanonicalUrl(AuthorizationContext filterContext, out string canonicalUrl)\n {\n bool isCanonical = true;\n\n Uri url = filterContext.HttpContext.Request.Url;\n canonicalUrl = url.ToString();\n int queryIndex = canonicalUrl.IndexOf(QueryCharacter);\n\n // If we are 
not dealing with the home page. Note, the home page is a special case and it doesn't matter\n // if there is a trailing slash or not. Both will be treated as the same by search engines.\n if (url.AbsolutePath.Length > 1)\n {\n if (queryIndex == -1)\n {\n bool hasTrailingSlash = canonicalUrl[canonicalUrl.Length - 1] == SlashCharacter;\n\n if (this.appendTrailingSlash)\n {\n // Append a trailing slash to the end of the URL.\n if (!hasTrailingSlash && !this.HasNoTrailingSlashAttribute(filterContext))\n {\n canonicalUrl += SlashCharacter;\n isCanonical = false;\n }\n }\n else\n {\n // Trim a trailing slash from the end of the URL.\n if (hasTrailingSlash)\n {\n canonicalUrl = canonicalUrl.TrimEnd(SlashCharacter);\n isCanonical = false;\n }\n }\n }\n else\n {\n bool hasTrailingSlash = canonicalUrl[queryIndex - 1] == SlashCharacter;\n\n if (this.appendTrailingSlash)\n {\n // Append a trailing slash to the end of the URL but before the query string.\n if (!hasTrailingSlash && !this.HasNoTrailingSlashAttribute(filterContext))\n {\n canonicalUrl = canonicalUrl.Insert(queryIndex, SlashCharacter.ToString());\n isCanonical = false;\n }\n }\n else\n {\n // Trim a trailing slash to the end of the URL but before the query string.\n if (hasTrailingSlash)\n {\n canonicalUrl = canonicalUrl.Remove(queryIndex - 1, 1);\n isCanonical = false;\n }\n }\n }\n }\n\n if (this.lowercaseUrls)\n {\n foreach (char character in canonicalUrl)\n {\n if (this.HasNoLowercaseQueryStringAttribute(filterContext) && queryIndex != -1)\n {\n if (character == QueryCharacter)\n {\n break;\n }\n\n if (char.IsUpper(character) && !this.HasNoTrailingSlashAttribute(filterContext))\n {\n canonicalUrl = canonicalUrl.Substring(0, queryIndex).ToLower() +\n canonicalUrl.Substring(queryIndex, canonicalUrl.Length - queryIndex);\n isCanonical = false;\n break;\n }\n }\n else\n {\n if (char.IsUpper(character) && !this.HasNoTrailingSlashAttribute(filterContext))\n {\n canonicalUrl = canonicalUrl.ToLower();\n isCanonical = 
false;\n break;\n }\n }\n }\n }\n\n return isCanonical;\n }\n\n /// <summary>\n /// Handles HTTP requests for URL's that are not canonical. Performs a 301 Permanent Redirect to the canonical URL.\n /// </summary>\n /// <param name="filterContext">An object that encapsulates information that is required in order to use the \n /// <see cref="RedirectToCanonicalUrlAttribute" /> attribute.</param>\n /// <param name="canonicalUrl">The canonical URL.</param>\n protected virtual void HandleNonCanonicalRequest(AuthorizationContext filterContext, string canonicalUrl)\n {\n filterContext.Result = new RedirectResult(canonicalUrl, true);\n }\n\n /// <summary>\n /// Determines whether the specified action or its controller has the <see cref="NoTrailingSlashAttribute"/> \n /// attribute specified.\n /// </summary>\n /// <param name="filterContext">The filter context.</param>\n /// <returns><c>true</c> if a <see cref="NoTrailingSlashAttribute"/> attribute is specified, otherwise \n /// <c>false</c>.</returns>\n protected virtual bool HasNoTrailingSlashAttribute(AuthorizationContext filterContext)\n {\n return filterContext.ActionDescriptor.IsDefined(typeof(NoTrailingSlashAttribute), false) ||\n filterContext.ActionDescriptor.ControllerDescriptor.IsDefined(typeof(NoTrailingSlashAttribute), false);\n }\n\n /// <summary>\n /// Determines whether the specified action or its controller has the <see cref="NoLowercaseQueryStringAttribute"/> \n /// attribute specified.\n /// </summary>\n /// <param name="filterContext">The filter context.</param>\n /// <returns><c>true</c> if a <see cref="NoLowercaseQueryStringAttribute"/> attribute is specified, otherwise \n /// <c>false</c>.</returns>\n protected virtual bool HasNoLowercaseQueryStringAttribute(AuthorizationContext filterContext)\n {\n return filterContext.ActionDescriptor.IsDefined(typeof(NoLowercaseQueryStringAttribute), false) ||\n filterContext.ActionDescriptor.ControllerDescriptor.IsDefined(typeof(NoLowercaseQueryStringAttribute), 
false);\n }\n}\n\n/// <summary>\n/// Requires that a HTTP request does not contain a trailing slash. If it does, return a 404 Not Found. This is \n/// useful if you are dynamically generating something which acts like it's a file on the web server. \n/// E.g. /Robots.txt/ should not have a trailing slash and should be /Robots.txt. Note, that we also don't care if \n/// it is upper-case or lower-case in this instance.\n/// </summary>\n[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]\npublic class NoTrailingSlashAttribute : FilterAttribute, IAuthorizationFilter\n{\n private const char QueryCharacter = '?';\n private const char SlashCharacter = '/';\n\n /// <summary>\n /// Determines whether a request contains a trailing slash and, if it does, calls the \n /// <see cref="HandleTrailingSlashRequest"/> method.\n /// </summary>\n /// <param name="filterContext">An object that encapsulates information that is required in order to use the \n /// <see cref="RequireHttpsAttribute"/> attribute.</param>\n /// <exception cref="ArgumentNullException">The <paramref name="filterContext"/> parameter is null.</exception>\n public virtual void OnAuthorization(AuthorizationContext filterContext)\n {\n if (filterContext == null)\n {\n throw new ArgumentNullException(nameof(filterContext));\n }\n\n string canonicalUrl = filterContext.HttpContext.Request.Url.ToString();\n int queryIndex = canonicalUrl.IndexOf(QueryCharacter);\n\n if (queryIndex == -1)\n {\n if (canonicalUrl[canonicalUrl.Length - 1] == SlashCharacter)\n {\n this.HandleTrailingSlashRequest(filterContext);\n }\n }\n else\n {\n if (canonicalUrl[queryIndex - 1] == SlashCharacter)\n {\n this.HandleTrailingSlashRequest(filterContext);\n }\n }\n }\n\n /// <summary>\n /// Handles HTTP requests that have a trailing slash but are not meant to.\n /// </summary>\n /// <param name="filterContext">An object that encapsulates information that is required in order to use the \n 
/// <see cref="RequireHttpsAttribute"/> attribute.</param>\n protected virtual void HandleTrailingSlashRequest(AuthorizationContext filterContext)\n {\n filterContext.Result = new HttpNotFoundResult();\n }\n}\n\n/// <summary>\n/// Ensures that a HTTP request URL can contain query string parameters with both upper-case and lower-case \n/// characters.\n/// </summary>\n[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]\npublic class NoLowercaseQueryStringAttribute : FilterAttribute\n{\n}\n\nAdding the RedirectToCanonicalUrlAttribute filter is easy. You can add it to the global filters collection so all requests will be handled by it like so:
GlobalFilters.Filters.Add(new RedirectToCanonicalUrlAttribute(\n    RouteTable.Routes.AppendTrailingSlash, \n    RouteTable.Routes.LowercaseUrls));\n\nThat's it! It's as simple as that! Now there are two special cases, which is where the NoTrailingSlashAttribute and NoLowercaseQueryStringAttribute filters come in.
Say you want to have the following action method where visiting http://example.com/robots.txt returns a text result. We want the client to think it's just visiting a static robots.txt file but in reality we are dynamically generating it (One reason for doing this is that a robots.txt file must contain an absolute URL and you want to use the UrlHelper to just handle that, no matter what domain the site is running under).
[NoTrailingSlash]\n[Route("robots.txt")]\npublic ContentResult RobotsText()\n{\n string content = this.robotsService.GetRobotsText();\n return this.Content(content, ContentType.Text, Encoding.UTF8);\n}\n\nAdding a trailing slash to robots.txt would just be weird. Also, the last thing you want to do when search engines try to visit http://example.com/robots.txt is 301 permanent redirect them to http://example.com/robots.txt/. So we add the NoTrailingSlashAttribute filter.
The RedirectToCanonicalUrlAttribute knows about the NoTrailingSlashAttribute filter. When a request is made to an action marked with it, such as the one above, the AppendTrailingSlash setting is ignored and the action works just like requesting a static robots.txt file from the file system.
Sometimes you want your query string parameters to be a mix of upper-case and lower-case. When you want to do this, simply add the NoLowercaseQueryStringAttribute attribute to the action method like so:
[NoLowercaseQueryString]\n[Route("action")]\npublic void Action(string mixedCaseParameter)\n{\n // mixedCaseParameter can contain upper and lower case characters.\n}\n\nIf you are using the ASP.NET Identity NuGet package for authentication, then take note, you need to apply the NoLowercaseQueryStringAttribute to the AccountController.
Once again, you can find a working example of this and much more using the ASP.NET Core Boilerplate project template or view the source code directly on GitHub.
\n", "url": "https://rehansaeed.com/canonical-urls-for-asp-net-mvc/", "title": "Canonical URL's for ASP.NET MVC", "summary": "Use canonical URL's in ASP.NET MVC for better Search Engine Optimization (SEO) using ASP.NET Core Boilerplate and the RedirectToCanonicalUrlAttribute.", "image": "https://rehansaeed.com/images/hero/Link-Juice-1366x768.png", "date_modified": "2015-07-14T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/building-rssatom-feeds-for-asp-net-mvc/", "content_html": "An RSS or Atom feed is a great way to push site updates to users. Essentially, it's just an XML document which is constantly updated with fresh content and links.
\nThere are numerous feed readers out there that all work in different ways but most just aggregate feeds from several sites into a single reading list. When a user subscribes to your site's feed and adds it to their list of subscriptions, each time you update your feed the fresh content will appear in their reading list.
\nFeed readers come in all shapes and sizes; even browsers have basic feed reading abilities. Here is a screen-shot of Firefox's bookmarks side-bar, after adding the Visual Studio Magazine feed (Go ahead and try it yourself in Firefox). The bookmarks under the Blogs folder update each time the feed updates.
\n
Feed reading websites like Feedly and NewsBlur are fairly popular. Increasingly though, feed readers are actually just apps running on phones or tablets, and these can even raise notifications when the feed changes and there is fresh content to read. Services like Feedly and NewsBlur have their own apps too.
\nThe latest version of RSS is 2.0, while the latest version of Atom is 1.0. Atom 1.0 is a web standard and you can read the official IETF Atom 1.0 specification here. RSS is not a web standard and the RSS 2.0 specification is actually owned by Harvard University.
\nAtom was created specifically to address problems in RSS 2.0 and is the newer and more well defined format. Both of these formats are now pretty ancient by web standards and enjoy widespread support. If you have a choice of format, go with Atom 1.0.
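\nFor comparison, here is a rough sketch of a minimal RSS 2.0 document using placeholder values:\n<?xml version="1.0" encoding="utf-8"?>\n<rss version="2.0">\n <channel>\n <title>ASP.NET Core Boilerplate</title>\n <link>http://example.com/</link>\n <description>This is the ASP.NET Core Boilerplate feed description.</description>\n <item>\n <title>Item 1</title>\n <link>http://example.com/item1/</link>\n <description>A summary of item 1</description>\n </item>\n </channel>\n</rss>\n\nNote how the RSS channel element plays much the same role as the Atom feed element, with item corresponding to entry.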
\nSo what does an Atom feed look like? Well, you can look at the official specification here, or there is a simple but fully featured example below.
\n<?xml version="1.0" encoding="utf-8"?>\n<feed xml:lang="en-GB" xmlns:media="http://search.yahoo.com/mrss/" xmlns="http://www.w3.org/2005/Atom">\n <title type="text">ASP.NET Core Boilerplate</title>\n <subtitle type="text">This is the ASP.NET Core Boilerplate feed description.</subtitle>\n <id>3D797739-1DED-4DB8-B60B-1CA52D0AA1A4</id>\n <rights type="text">© 2015 - Rehan Saeed</rights>\n <updated>2015-06-24T15:54:21+01:00</updated>\n <category term="Blog" />\n <logo>http://example.com/icons/atom-logo-96x48.png</logo>\n <author>\n <name>Rehan Saeed</name>\n <uri>https://rehansaeed.com</uri>\n <email>example@email.com</email>\n </author>\n <contributor>\n <name>Rehan Saeed</name>\n <uri>https://rehansaeed.com</uri>\n <email>example@email.com</email>\n </contributor>\n <link rel="self" type="application/atom+xml" href="http://example.com/feed/" />\n <link rel="alternate" type="text/html" href="http://example.com/" />\n <link rel="hub" href="https://pubsubhubbub.appspot.com/" />\n <icon>http://example.com/icons/atom-icon-48x48.png</icon>\n <entry>\n <id>6139F098-2E59-4405-9BC7-0AAB4CF78E23</id>\n <title type="text">Item 1</title>\n <summary type="text">A summary of item 1</summary>\n <published>2015-06-24T15:54:21+01:00</published>\n <updated>2015-06-24T15:54:21+01:00</updated>\n <author>\n <name>Rehan Saeed</name>\n <uri>https://rehansaeed.com</uri>\n <email>example@email.com</email>\n </author>\n <contributor>\n <name>Rehan Saeed</name>\n <uri>https://rehansaeed.com</uri>\n <email>example@email.com</email>\n </contributor>\n <link rel="alternate" type="text/html" href="http://example.com/item1/" />\n <link rel="enclosure" type="image/png" href="http://example.com/item1/atom-icon-48x48.png" />\n <category term="Category 1" />\n <rights type="text">© 2015 - Rehan Saeed</rights>\n <media:thumbnail url="http://example.com/item1/atom-icon-48x48.png" width="48" height="48" />\n </entry>\n <entry>\n <id>927406DD-E8DC-41ED-8154-30DE91B0877A</id>\n <title type="text">Item 
2</title>\n <summary type="text">A summary of item 2</summary>\n <published>2015-06-24T15:54:21+01:00</published>\n <updated>2015-06-24T15:54:21+01:00</updated>\n <author>\n <name>Rehan Saeed</name>\n <uri>https://rehansaeed.com</uri>\n <email>example@email.com</email>\n </author>\n <contributor>\n <name>Rehan Saeed</name>\n <uri>https://rehansaeed.com</uri>\n <email>example@email.com</email>\n </contributor>\n <link rel="alternate" type="text/html" href="http://example.com/item2/" />\n <link rel="enclosure" type="image/png" href="http://example.com/item2/atom-icon-48x48.png" />\n <category term="Category 2" />\n <rights type="text">© 2015 - Rehan Saeed</rights>\n <media:thumbnail url="http://example.com/item2/atom-icon-48x48.png" width="48" height="48" />\n </entry>\n</feed>\n\nAt the root of the XML we have the feed element which represents the Atom Feed. Within that, there is various meta-data about the feed at the top, including:
\ntitle - The title of the feed.\nsubtitle - A short description or subtitle of the feed.\nid - A unique ID for the feed. No other feed on the internet should have the same ID.\nrights - Copyright information.\nupdated - When the feed was last updated.\ncategory - Zero or more categories the feed belongs to.\nlogo - A wide 2:1 ratio image representing the feed.\nauthor - Zero or more authors of the feed.\ncontributor - Zero or more contributors of the feed.\nlink rel="self" - A link to the feed itself.\nlink rel="alternate" - A link to an alternative representation of the feed.\nlink rel="hub" - A link to the PubSubHubbub hub. I'll talk more about this further on.\nicon - A square 1:1 ratio image representing the feed.\n\nThe entry elements are where it gets interesting; these are the actual 'things' in your feed that you are describing. Each entry has meta-data which looks very similar to the meta-data we used to describe the feed itself.
\nid - A unique identifier for the entry. This can be a database row ID; it doesn't have to be a GUID.\ntitle - The title of the entry.\nsummary - A short summary of what the entry is about.\npublished - When the entry was published.\nupdated - When the entry was last changed.\nauthor - Zero or more authors of the entry.\ncontributor - Zero or more contributors of the entry.\nlink rel="alternate" - A link to an alternative representation of the entry.\nlink rel="enclosure" - An image representing the entry.\ncategory - The category of the entry.\nrights - Some copyright information.\nmedia:thumbnail - A thumbnail representing the entry. This is a non-standard extension to the Atom 1.0 specification created by Yahoo but is common enough to be used here.\n\nOne thing to note is that all of the links are full absolute URLs. Relative URLs are allowed but you have to specify a single base URI which is added to the start of all URLs. Unfortunately, this feature is buggy in Firefox and so should not be used.
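\nFor completeness, the single base URI mentioned above is declared using the xml:base attribute on the feed element. This is an illustrative fragment only (and, as noted, best avoided because of the Firefox bug):\n<feed xml:base="http://example.com/" xmlns="http://www.w3.org/2005/Atom">\n <link rel="alternate" type="text/html" href="/item1/" />\n</feed>\n\nWith the base URI in place, the relative href above resolves to http://example.com/item1/.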
\nThe Windows Communication Foundation (WCF) team at Microsoft has kindly implemented the SyndicationFeed class, giving us a nice API with which to generate the above Atom 1.0 XML (in actual fact, this class also represents an RSS 2.0 feed and can be used to generate RSS 2.0 XML too). Since it was the WCF team who built it, they put it in the System.ServiceModel namespace. It doesn't quite feel right there and will probably be split out into its own namespace (indeed, I've raised this very question for the new DNX Core version of the .NET Framework, which is currently missing SyndicationFeed). Creating a new feed is as simple as this:
SyndicationFeed feed = new SyndicationFeed()\n{\n // id (Required) - The feed universally unique identifier.\n Id = "3D797739-1DED-4DB8-B60B-1CA52D0AA1A4",\n // title (Required) - Contains a human readable title for the feed. Often the same as the title of the \n // associated website. This value should not be blank.\n Title = SyndicationContent.CreatePlaintextContent("ASP.NET Core Boilerplate"),\n // items (Required) - The entries to add to the feed. I'll cover how to do this further on.\n Items = this.GetItems(),\n // subtitle (Recommended) - Contains a human-readable description or subtitle for the feed.\n Description = SyndicationContent.CreatePlaintextContent(\n "This is the ASP.NET Core Boilerplate feed description."),\n // updated (Optional) - Indicates the last time the feed was modified in a significant way.\n LastUpdatedTime = DateTimeOffset.Now,\n // logo (Optional) - Identifies a larger image which provides visual identification for the feed. \n // Images should be twice as wide as they are tall.\n ImageUrl = new Uri("http://example.com/icons/atom-logo-96x48.png"),\n // rights (Optional) - Conveys information about rights, e.g. copyrights, held in and over the feed.\n Copyright = SyndicationContent.CreatePlaintextContent(\n string.Format("© {0} - {1}", DateTime.Now.Year, "Rehan Saeed")),\n // lang (Optional) - The language of the feed.\n Language = "en-GB",\n // generator (Optional) - Identifies the software used to generate the feed, for debugging and other \n // purposes. Do not put in anything that identifies the technology you are using.\n // Generator = "Sample Code",\n // base (Buggy) - Add the full base URL to the site so that all other links can be relative. This is \n // great, except some feed readers are buggy with it, INCLUDING FIREFOX!!! 
\n // (See https://bugzilla.mozilla.org/show_bug.cgi?id=480600).\n // BaseUri = new Uri("http://example.com")\n};\n\n// self link (Required) - The URL for the syndication feed.\nfeed.Links.Add(SyndicationLink.CreateSelfLink(\n new Uri("http://example.com/feed/"), \n ContentType.Atom));\n\n// alternate link (Recommended) - The URL for the web page showing the same data as the syndication feed.\nfeed.Links.Add(SyndicationLink.CreateAlternateLink(\n new Uri("http://example.com"), \n ContentType.Html));\n\n// hub link (Recommended) - The URL for the PubSubHubbub hub. Used to push new entries to subscribers \n// instead of making them poll the feed. See feed updated method below.\nfeed.Links.Add(new SyndicationLink(new Uri("https://pubsubhubbub.appspot.com/"), "hub", null, null, 0));\n\n// author (Recommended) - Names one author of the feed. A feed may have multiple author elements. A feed \n// must contain at least one author element unless all of the entry elements contain \n// at least one author element.\nfeed.Authors.Add(\n new SyndicationPerson()\n {\n // name (Required) - conveys a human-readable name for the person.\n Name = "Rehan Saeed",\n // uri (Optional) - contains a home page for the person.\n Uri = "https://rehansaeed.com",\n // email (Optional) - contains an email address for the person.\n Email = "example@email.com"\n });\n\n// category (Optional) - Specifies a category that the feed belongs to. A feed may have multiple category \n// elements.\nfeed.Categories.Add(new SyndicationCategory("CategoryName"));\n\n// contributor (Optional) - Names one contributor to the feed. An feed may have multiple contributor \n// elements.\nfeed.Contributors.Add(\n new SyndicationPerson()\n {\n Name = "Rehan Saeed",\n Uri = "https://rehansaeed.com",\n Email = "example@email.com"\n });\n\n// icon (Optional) - Identifies a small image which provides iconic visual identification for the feed. 
\n// Icons should be square.\nfeed.SetIcon(this.urlHelper.AbsoluteContent("http://example.com/icons/atom-icon-48x48.png"));\n\n// Add the Yahoo Media namespace (xmlns:media="http://search.yahoo.com/mrss/") to the Atom feed. \n// This gives us extra abilities, like the ability to give thumbnail images to entries. \n// See http://www.rssboard.org/media-rss for more information.\nfeed.AddYahooMediaNamespace();\n\nUnfortunately, the property to set the icon does not exist on the SyndicationFeed, even though it is part of the official specification. Luckily for you I have created a quick extension method (Usage shown above) which allows us to set the icon.
I have also created an extension method to add a Yahoo media thumbnail to an Atom entry. This is a non-standard extension but worth the effort. Using non-standard extensions requires adding a namespace to the feed element in the XML; that is what the AddYahooMediaNamespace method does towards the bottom.
The extension methods are shown below. They use extensibility points on SyndicationFeed that allow us to augment its functionality.
/// <summary>\n/// <see cref="SyndicationFeed"/> extension methods.\n/// </summary>\npublic static class SyndicationFeedExtensions\n{\n private const string YahooMediaNamespacePrefix = "media";\n private const string YahooMediaNamespace = "http://search.yahoo.com/mrss/";\n\n /// <summary>\n /// Adds a namespace to the specified feed.\n /// </summary>\n /// <param name="feed">The syndication feed.</param>\n /// <param name="namespacePrefix">The namespace prefix.</param>\n /// <param name="xmlNamespace">The XML namespace.</param>\n public static void AddNamespace(this SyndicationFeed feed, string namespacePrefix, string xmlNamespace)\n {\n feed.AttributeExtensions.Add(\n new XmlQualifiedName(namespacePrefix, XNamespace.Xmlns.ToString()), \n xmlNamespace);\n }\n\n /// <summary>\n /// Adds the yahoo media namespace to the specified feed.\n /// </summary>\n /// <param name="feed">The syndication feed.</param>\n public static void AddYahooMediaNamespace(this SyndicationFeed feed)\n {\n AddNamespace(feed, YahooMediaNamespacePrefix, YahooMediaNamespace);\n }\n\n /// <summary>\n /// Gets the icon URL for the feed.\n /// </summary>\n /// <param name="feed">The syndication feed.</param>\n /// <returns>The icon URL.</returns>\n public static string GetIcon(this SyndicationFeed feed)\n {\n SyndicationElementExtension iconExtension = feed.ElementExtensions.FirstOrDefault(\n x => string.Equals(x.OuterName, "icon", StringComparison.OrdinalIgnoreCase));\n return iconExtension.GetObject<string>();\n }\n\n /// <summary>\n /// Sets the icon URL for the feed.\n /// </summary>\n /// <param name="feed">The syndication feed.</param>\n /// <param name="iconUrl">The icon URL.</param>\n public static void SetIcon(this SyndicationFeed feed, string iconUrl)\n {\n feed.ElementExtensions.Add(new SyndicationElementExtension("icon", null, iconUrl));\n }\n\n /// <summary>\n /// Sets the Yahoo Media thumbnail for the feed entry.\n /// </summary>\n /// <param name="item">The feed entry.</param>\n /// 
<param name="url">The thumbnail URL.</param>\n /// <param name="width">The optional width of the thumbnail image.</param>\n /// <param name="height">The optional height of the thumbnail image.</param>\n public static void SetThumbnail(this SyndicationItem item, string url, int? width, int? height)\n {\n XNamespace ns = YahooMediaNamespace;\n item.ElementExtensions.Add(new SyndicationElementExtension(\n new XElement(\n ns + "thumbnail",\n new XAttribute("url", url),\n width.HasValue ? new XAttribute("width", width) : null,\n height.HasValue ? new XAttribute("height", height) : null)));\n }\n}\n\nCreating feed entries is just as simple and is done using the SyndicationItem class. An example of creating the first entry is shown below.
\nSyndicationItem item = new SyndicationItem()\n{\n // id (Required) - Identifies the entry using a universally unique and permanent URI. Two entries \n // in a feed can have the same value for id if they represent the same entry at \n // different points in time.\n Id = "6139F098-2E59-4405-9BC7-0AAB4CF78E23",\n // title (Required) - Contains a human readable title for the entry. This value should not be blank.\n Title = SyndicationContent.CreatePlaintextContent("Item 1"),\n // description (Recommended) - A summary of the entry.\n Summary = SyndicationContent.CreatePlaintextContent("A summary of item 1"),\n // updated (Optional) - Indicates the last time the entry was modified in a significant way. This \n // value need not change after a typo is fixed, only after a substantial \n // modification. Generally, different entries in a feed will have different \n // updated timestamps.\n LastUpdatedTime = DateTimeOffset.Now,\n // published (Optional) - Contains the time of the initial creation or first availability of the entry.\n PublishDate = DateTimeOffset.Now,\n // rights (Optional) - Conveys information about rights, e.g. copyrights, held in and over the entry.\n Copyright = new TextSyndicationContent(\n string.Format("© {0} - {1}", DateTime.Now.Year, "Rehan Saeed")),\n};\n\n// link (Recommended) - Identifies a related Web page. An entry must contain an alternate link if there \n// is no content element.\nitem.Links.Add(SyndicationLink.CreateAlternateLink(\n new Uri("http://example.com/item1"), \n ContentType.Html));\n// AND/OR\n// Text content (Optional) - Contains or links to the complete content of the entry. Content must be \n// provided if there is no alternate link.\n// item.Content = SyndicationContent.CreatePlaintextContent("The actual plain text content of the entry");\n// HTML content (Optional) - Content can be plain text or HTML. 
Here is a HTML example.\n// item.Content = SyndicationContent.CreateHtmlContent("The actual HTML content of the entry");\n\n// author (Optional) - Names one author of the entry. An entry may have multiple authors. An entry must \n// contain at least one author element unless there is an author element in the \n// enclosing feed, or there is an author element in the enclosed source element.\nitem.Authors.Add(this.GetPerson());\n\n// contributor (Optional) - Names one contributor to the entry. An entry may have multiple contributor elements.\nitem.Contributors.Add(this.GetPerson());\n\n// category (Optional) - Specifies a category that the entry belongs to. A entry may have multiple \n// category elements.\nitem.Categories.Add(new SyndicationCategory("Category 1"));\n\n// link - Add additional links to related images, audio or video like so.\nitem.Links.Add(SyndicationLink.CreateMediaEnclosureLink(\n new Uri("http://example.com/item1/atom-icon-48x48.png"), \n ContentType.Png, \n 0));\n\n// media:thumbnail - Add a Yahoo Media thumbnail for the entry. See http://www.rssboard.org/media-rss \n// for more information.\nitem.SetThumbnail("http://example.com/item1/atom-icon-48x48.png", 48, 48);\n\nitems.Add(item);\n\nNow it's actually possible to include a full HTML page inside a feed entry. Alternatively, you can provide plain text content or as I have done, provide a link to the full content. I have shown how to do all three in the comments above.
\nThe next step is to actually reply to the client with an HTTP response containing the Atom 1.0 XML. Although Atom is just XML, it has its own specific schema and its own MIME type, application/atom+xml. Furthermore, the XML must actually be returned using the UTF-8 character encoding as per the standard. So here is our controller's action returning the feed:
[OutputCache(Duration = 86400)]\n[Route("feed", Name = "GetFeed")]\npublic ActionResult Feed()\n{\n SyndicationFeed feed = this.feedService.GetFeed();\n return new AtomActionResult(feed);\n}\n\nThe above controller action is super simple: we take our SyndicationFeed and return it in a new AtomActionResult, which is where all the magic happens. We also cache the response for a day, which is great for performance if your feed does not change very often. So what is AtomActionResult? Well, here is the code:
/// <summary>\n/// Represents a class that is used to render an Atom 1.0 feed by using an <see cref="SyndicationFeed"/> instance \n/// representing the feed.\n/// </summary>\npublic sealed class AtomActionResult : ActionResult\n{\n private readonly SyndicationFeed syndicationFeed;\n\n /// <summary>\n /// Initializes a new instance of the <see cref="AtomActionResult"/> class.\n /// </summary>\n /// <param name="syndicationFeed">The Atom 1.0 <see cref="SyndicationFeed" />.</param>\n public AtomActionResult(SyndicationFeed syndicationFeed)\n {\n this.syndicationFeed = syndicationFeed;\n }\n\n /// <summary>\n /// Executes the call to the ActionResult method and returns the created feed to the output response.\n /// </summary>\n /// <param name="context">The context in which the result is executed. The context information includes the \n /// controller, HTTP content, request context, and route data.</param>\n public override void ExecuteResult(ControllerContext context)\n {\n context.HttpContext.Response.ContentType = "application/atom+xml";\n Atom10FeedFormatter feedFormatter = new Atom10FeedFormatter(this.syndicationFeed);\n XmlWriterSettings xmlWriterSettings = new XmlWriterSettings();\n xmlWriterSettings.Encoding = Encoding.UTF8;\n\n if (HttpContext.Current.IsDebuggingEnabled)\n {\n // Indent the XML for easier viewing but only in Debug mode. In Release mode, everything is output on \n // one line for best performance.\n xmlWriterSettings.Indent = true;\n }\n\n using (XmlWriter xmlWriter = XmlWriter.Create(context.HttpContext.Response.Output, xmlWriterSettings))\n {\n feedFormatter.WriteTo(xmlWriter);\n }\n }\n}\n\nThe above code is writing out the XML to the HTTP response in UTF-8 encoding and with the application/atom+xml MIME type. By default the XML is written out all in one line which is good for performance but not very good for legibility, so we also detect whether the application is being debugged and if so, indent the XML for better legibility.
After all our hard work, we can now navigate to the controller action and view our feed. Here is Internet Explorer's view of our Atom feed:
\n
RSS and Atom have been around for over a decade now, yet there is precious little information out there on how to create a feed. One of the areas that lacked information was the logo and icon images. All the specification says is that the ratios of the images should be a 2:1 rectangle and a 1:1 square respectively.
\nMy advice, and what I ended up doing, is to look at various example feeds on the internet and copy the image sizes they use. I ended up with images of 48x48 and 96x48, which seemed to be common sizes.
\nFirefox has a feature called 'Subscribe to this page' which is a button that users can add to their toolbar (The button is enabled by default on older versions of Firefox). The button detects whether the current page links to an RSS/Atom feed and if it does, the user can click on it to subscribe to the feed directly. Here is a quick screen-shot of the button:
\n
To add this feature, we need to place a link tag in the head of our page pointing to the Atom feed like so:
\n<link href="http://localhost/feed" rel="alternate" title="ASP.NET Core Boilerplate Feed" type="application/atom+xml">\n\nThis is a pretty minor feature I admit but it has potential. By doing this, we are linking our page to the Atom feed. This can be read by search engines too, so potentially there could be some benefit in terms of Search Engine Optimization (SEO). Of course this is impossible to prove as search engines jealously guard how they manage their search rankings.
\nThe problem with feeds is that you have to pull the information from them. You are never notified of new changes to the feed, so clients have to constantly poll the feed to check for any new feed entries.
\nThis is the problem that PubSubHubbub (I know, it has a terrible name!) solves. It's been developed by Google and it's actually an open standard, with the latest version of the standard being 0.4 at the time of writing.
\nThere are already major platforms supporting it. Mostly they are Google products, as you would expect, but WordPress, which powers a third of the world's websites, also supports it.
\nAt the heart of it, you now have a hub that knows how to speak the PubSubHubbub standard language. When a feed is updated with a new entry, the website sends a message to the hub to tell it that the feed has been updated. Clients can then register for updates with the hub and get notified instantly when there is an update.
\nThe coolest thing though is that all of this is super easy to implement, since Google provides us with a hub that we can use and we don't need to write our own. We just need to add a line of XML in our Atom feed telling clients that we support PubSubHubbub and the URL to the hub we want to use:
\n<link rel="hub" href="https://pubsubhubbub.appspot.com/" />\n\nNow when there is an update to the feed, we need to publish that update to the hub linked to above. We do that by calling the simple method below:
\n/// <summary>\n/// Publishes the fact that the feed has updated to subscribers using the PubSubHubbub v0.4 protocol.\n/// </summary>\npublic Task PublishUpdate()\n{\n HttpClient httpClient = new HttpClient();\n return httpClient.PostAsync(\n "https://pubsubhubbub.appspot.com/", \n new FormUrlEncodedContent(\n new List<KeyValuePair<string, string>>()\n {\n new KeyValuePair<string, string>("hub.mode", "publish"),\n new KeyValuePair<string, string>(\n "hub.url", \n "http://localhost/feed")\n }));\n}\n\nIt's as simple as that from the publisher's side. On the client side, subscribing to the changes in the feed is only a little more complicated than this. I won't cover that but you can find out more by reading the official specification.
\nThe Atom specification actually outlines how you can add paging to your feed. This is a great way to split up your feed if you are worried that it consumes too much bandwidth. Adding paging involves inserting the following links into the top of your feed. The links are the first, last, next and previous pages of your feed. Obviously, if you don't have a next or previous page, those links can be omitted.
\n<link rel="first" href="http://example.com/feed"/>\n<link rel="next" href="http://example.com/feed?page=4"/>\n<link rel="previous" href="http://example.com/feed?page=2"/>\n<link rel="last" href="http://example.com/feed?page=10"/>\n\nHere is the corresponding code to add the above links:
\nfeed.Links.Add(new SyndicationLink(new Uri("http://example.com/feed"), "first", null, null, 0));\nfeed.Links.Add(new SyndicationLink(new Uri("http://example.com/feed?page=10"), "last", null, null, 0));\n\nif (hasPreviousPage)\n{\n feed.Links.Add(new SyndicationLink(new Uri("http://example.com/feed?page=2"), "previous", null, null, 0));\n}\n\nif (hasNextPage)\n{\n feed.Links.Add(new SyndicationLink(new Uri("http://example.com/feed?page=4"), "next", null, null, 0));\n}\n\nOnce you are done building your feed and have published it online, don't forget to check FeedValidator.org to ensure that your feed conforms to the Atom 1.0 specification.
\nAs always, you can look at a full working example of all of this code on the ASP.NET Core Boilerplate GitHub page.
\n", "url": "https://rehansaeed.com/building-rssatom-feeds-for-asp-net-mvc/", "title": "Building RSS/Atom Feeds for ASP.NET MVC", "summary": "How to build a fully featured RSS/Atom feed for ASP.NET MVC, including Google's PubSubHubbub and the 'Subscribe to this feed' button.", "image": "https://rehansaeed.com/images/hero/RSS-And-Atom-1366x768.png", "date_modified": "2015-06-26T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/c-6-0-saving-developers-from-themselves/", "content_html": "If you haven't already taken a look at what's new in C# 6.0, you should certainly read this article. This blog post is going to cover how C# 6.0 can help reduce the number of bugs in your code by giving you the tools to avoid common developer mistakes.
\nIn my opinion, the changes introduced in C# 6.0 can be split into two separate groups. The first group of changes seems to be a declaration of war on curly braces ({}); you can now omit them in many cases. I personally am not too sure about this set of features: it reduces the lines of code you have to write a little and may save a few seconds, but at the cost of having to learn a new set of syntax. If you cast your mind back to being a newbie developer (or if you are one), lots of syntax to remember can be difficult to deal with.
This is a problem that C++ developers know well. C++ is a pretty old language now but is still undergoing rapid development with C++ 11, 14 and beyond. It's got to the stage where there are so many ways to skin a cat in C++ that even experienced developers can be slowed down when looking at code using older patterns and paradigms. The C# caretakers need to be careful that each new feature is genuinely worth the effort and not just bloat.
\nThe second set of features is what I am really interested in. These are features which will genuinely save you from yourself. They will stop developers making many common mistakes.
\nThe nameof operator simply gives you the name of any variable, type or member you pass into it. You can take a look at the simple example below:
string obiwan; \nConsole.WriteLine(nameof(obiwan));\n// Prints obiwan\n \nint kenobi = 2;\nConsole.WriteLine(nameof(kenobi));\n// Prints kenobi\n\nSo where can this help us? Well, I can think of a few examples, the first being throwing argument exceptions. Argument exceptions all take a parameter, which represents the name of the invalid parameter. In the past, we had to pass this as a string. The problem was that the parameter might get renamed and you might forget to update the string to reflect that.
\npublic void FightCrime(string hero) \n{\n if (hero == null) \n {\n throw new ArgumentNullException("hero");\n }\n\n // Omitted crime fighting code... \n}\n\npublic void FightCrime(string hero) \n{\n if (hero == null) \n {\n throw new ArgumentNullException(nameof(hero));\n }\n\n // Omitted crime fighting code... \n}\n\nWith the second example, if you used Visual Studio to rename the hero parameter, then the hero in the nameof operator will also be updated.
The INotifyPropertyChanged interface is notorious, if you are doing any WPF/Silverlight/WinRT/XAML development, for requiring strings to be passed to it. With the nameof operator, this becomes a thing of the past.
public class Ship : INotifyPropertyChanged\n{\n public event PropertyChangedEventHandler PropertyChanged;\n \n private string name;\n \n public string Name\n {\n get { return this.name; }\n set\n {\n this.name = value;\n this.OnPropertyChanged(nameof(this.Name));\n }\n }\n \n protected virtual void OnPropertyChanged(string propertyName)\n {\n PropertyChangedEventHandler eventHandler = this.PropertyChanged;\n if (eventHandler != null)\n {\n eventHandler(this, new PropertyChangedEventArgs(propertyName));\n }\n }\n}\n\nASP.NET MVC makes massive use of strings everywhere. This is a real problem when you want to rename something. In fact, I've taken to using constants everywhere. It's more work to set up but in the long run it's much easier to maintain. Here is an example of how we can use nameof to create a link and do away with strings and constants:
@Html.ActionLink("Home", "Index", "Home")\n\n@Html.ActionLink2("Home", nameof(HomeController.Index), nameof(HomeController))\n\npublic static MvcHtmlString ActionLink2(this HtmlHelper htmlHelper, string linkText, string actionName, string controllerName)\n{\n // Trim the 'Controller' suffix (10 characters) from the controller type name.\n return htmlHelper.ActionLink(linkText, actionName, controllerName.Substring(0, controllerName.Length - 10));\n}\n\nThe above example is a little contrived. In the real world, I would never use ActionLink and would use RouteLink instead. Naming your routes has better performance and is just easier to understand when you have multiple routes for the same URL (for GET and POST requests).
I think we've all used string.Format and got our arguments in the wrong positions or entered the index numbers incorrectly at some point in time. Well, that bug is now a thing of the past.
// Before C# 6.0 String Interpolation\nstring nameAndAge = string.Format("Name:{0}, Age:{1}", name, age);\n// After C# 6.0 String Interpolation\nstring nameAndAge = $"Name:{name}, Age:{age}";\n\nAs you can see, you can now use your parameters directly in the strings with full syntax highlighting and renaming support too. In fact the C# 6.0 code actually compiles down to doing a string.Format behind the scenes.
Every C# developer has at some point stared at the text from a NullReferenceException and thought in their head, this is a really rubbish message and leaves out vital information. In fact there is this post on UserVoice, asking Microsoft to improve their NullReferenceException messages. It turns out that Microsoft has thought of this. They haven't improved the message (they still should, please up-vote the UserVoice post) but they have introduced the Null-Conditional operator.
public string Truncate(string value, int length)\n{\n string result = value;\n if (value != null)\n {\n result = value.Substring(0, Math.Min(value.Length, length));\n }\n return result;\n}\n\npublic string Truncate(string value, int length)\n{ \n return value?.Substring(0, Math.Min(value.Length, length));\n\n // Wow, look at all this code I didn't have to write!\n}\n\nAs you can see there is a common theme with two of the three C# 6.0 features I've picked above. They give us tools to better deal with strings which have terrible IDE and language support. Making a typo in a string gives us no compile time errors and Visual Studio doesn't help either. Each of these features has allowed us to deal with strongly typed objects instead, which have full language and IDE support.
\nThe null-conditional operator is another great tool to help mitigate really common but minor bugs that catch out even the most experienced developers. These are great features that should help stop the silly mistakes all of us developers make. We are, after all, human.
\n", "url": "https://rehansaeed.com/c-6-0-saving-developers-from-themselves/", "title": "C# 6.0 - Saving Developers From Themselves", "summary": "C# 6.0 helps reduce human error and save developers from themselves using the nameof operator, string interpolation and the null-conditional operator.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2015-05-10T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/content-security-policy-for-asp-net-mvc/", "content_html": "This series of blog posts goes through the additions made to the default ASP.NET MVC template to build the ASP.NET Core Boilerplate project template. You can create a new project using this template by installing the Visual Studio template extension or visit the GitHub site to view the source code.
\nFor a true in-depth look into CSP, I highly recommend reading Mozilla's documentation on the subject. It really is the best resource on the web. I will assume that you've read the documentation and will be going through a few examples below.
\nContent Security Policy or CSP is a great new HTTP header that controls where a web browser is allowed to load content from and the type of content it is allowed to load. It uses a white-list of allowed content and blocks anything not in that list. It gives us very fine-grained control and allows us to run our site in a sandbox in the user's browser.
\nCSP is all about adding an extra layer of security to your site using a defence-in-depth strategy. It helps detect and mitigate Cross-Site Scripting (XSS) and various other content injection attacks.
\nSo what does this look like in a web browser? Well, here is an example of a Content-Security-Policy HTTP header shown in Chrome. I used the ASP.NET Core Boilerplate Visual Studio project template to create an ASP.NET MVC project that has CSP applied, right out of the box.

This is the HTTP header in the screenshot above. We'll discuss it in a lot more detail later in this post. Essentially it says, block everything, except scripts, images, fonts, Ajax requests and forms to or from my domain and also allow scripts from the Google and Microsoft CDN's.
\nContent-Security-Policy: default-src 'none';\n script-src 'self' ajax.googleapis.com ajax.aspnetcdn.com;\n style-src 'self' 'unsafe-inline';\n img-src 'self';\n font-src 'self';\n connect-src 'self';\n form-action 'self';\n report-uri /WebResource.axd?cspReport=true\n\nSo for example, you may only want to load CSS, JavaScript and images from your own trusted domain(s) and block everything else. You also might want to block any use of third-party plug-ins (Flash or Silverlight) or frames. Using this type of policy, the only way an attacker could compromise your site using an XSS attack would be to somehow get a malicious script from your own domain served up on your pages in separate script files, as in-line styles and scripts are blocked by CSP by default (you can turn this protection off, but I will go on to tell you why that is a bad idea later on).
\n<script src="http://evil.com/Script.js"></script>\n\nWith the above CSP HTTP header in place if an attacker did manage to inject the script above, browsers would throw CSP violation errors and the evil script would not be executed or even downloaded. You can see what that looks like in Chrome below.
\n
Even better, the browser never even downloads the evil script in the first place. You can compare the two screenshots of Fiddler below. The left side shows that the evil Script.js file was never even requested, but a Content Security Policy violation was logged to the URL highlighted (I'll talk more about this later). The right side shows the site with no CSP policy in effect. The browser tries to download the evil Script.js file and, as this is just a demo and I haven't gone to the trouble of setting up an evil website, it can't be found and returns a 404 Not Found.
\n
\n
There are a number of 'directives' that are used in the policy above. Mozilla has the full list of directives and how each is used here. Each directive controls access to a particular function in a web browser. I will not cover each one in detail as they all work in the same way, but I will cover the most important and unique directives below.
\nThe default-src directive lets us apply some default restrictions. For example, if I specified the following CSP policy, it would allow all types of content from my site's domain, as well as TrustedSite.com.
Content-Security-Policy: default-src 'self' TrustedSite.com\n\nNow the above policy is pretty loose. It tells a browser it can load frames, Ajax requests, Web Sockets, fonts, images, audio, video, plug-ins, scripts and styles from both of those domains. It may well be that you don't use most of the things on that list. A much better policy would be to block everything by default and then only allow the resources you actually use, as shown below.
\nContent-Security-Policy: default-src 'none'; \n script-src TrustedSite.com; \n style-src 'self'; \n img-src 'self'; \n font-src 'self'; \n connect-src 'self'; \n form-action 'self'\n\nYou can see that default-src has been set to none which blocks everything by default. Then we add other directives that allow, scripts from TrustedSite.com and styles, images, fonts, Ajax request and form submissions to my sites domain. This is a lot more secure and restrictive but it does require you to think more carefully about your policy.
The report-uri directive is another special instruction. It gives the web browser a URL where it can post details of any violations of a CSP policy in JSON format. This is vitally important: it allows us to find out about anyone trying to hack our site but, probably much more likely, it allows us to find out about any resources that we have accidentally blocked because our policy was too restrictive and we did not do enough testing. In the example below, we are telling the browser to post CSP violation errors in JSON format to WebResource.axd?cspReport=true.
Content-Security-Policy: default-src 'self'; report-uri /WebResource.axd?cspReport=true\n\nIf we take the evil script above and try to add it to our page with the above CSP policy, we get a CSP violation error and you can see the JSON sent to us by the Chrome browser below. Please do note that different browsers send slightly different reports. Some browsers, and indeed some versions of browsers, give more information than others.
\n{\n "csp-report": {\n "document-uri": "http://localhost:8080/",\n "referrer": "",\n "violated-directive": "default-src 'self'",\n "effective-directive": "script-src",\n "original-policy": "default-src 'self';report-uri /WebResource.axd?cspReport=true",\n "blocked-uri": "http://evil.com",\n "status-code": 200\n }\n}\n\nAs I've mentioned before in-line styles are not allowed when using CSP because there is a risk that an attacker could inject in-line styles into a compromised page. All styles must be referenced from external CSS files as shown below.
\n<link href="/Site.css" rel="stylesheet"/>\n\n<style>\n p {\n font-size:12pt;\n }\n</style>\n\nThere is an extension to this directive which allows inline styles but you should avoid it as it is unsafe. Indeed the setting you have to pass to the style-src directive is called unsafe-inline.
style-src 'self' 'unsafe-inline'\n\nJust like the style-src directive, the script-src directive causes in-line scripts to be blocked by default due to the risk of XSS attacks. Apart from in-line scripts, the JavaScript eval() function is also blocked by default.
Also, just like the style-src directive, there is a way to enable in-line scripts, which is also called unsafe-inline. There is also another extension called unsafe-eval, which allows access to the eval function. Once again, these should be avoided and I have covered them here only because you should be wary of those who tell you to use them.
script-src 'self' 'unsafe-inline' 'unsafe-eval'\n\nCSP can be a pretty dangerous HTTP header if you have misconfigured it. Imagine a user wanting to view a YouTube video on your site, but your CSP policy has blocked the video. All they see is a blank space where the video should be, with no indication that something is wrong, unless they are clever enough to use the browser developer tools. That's a pretty poor user experience.
\nTo combat this problem the W3C created the Content-Security-Policy-Report-Only HTTP header. This works just the same as Content-Security-Policy but it only reports violations of your policy and does not cause the browser to actually block anything.
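For example, a report-only version of the policy from earlier in this post would look like the following; violations are posted to the report-uri, but nothing is actually blocked:

```
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /WebResource.axd?cspReport=true
```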
So you're sold on CSP and want to know how you can implement this great new HTTP header on your ASP.NET MVC website. Well, to get started all you need to do is install the NWebsec.Mvc NuGet package.
\nNWebsec is a great collection of MVC filters which can be applied globally to all requests or to individual controllers or actions. NWebSec contains a series of MVC filters to support CSP but includes several other filters which I've already blogged about here.
\nHere is the CSP policy I have applied to the ASP.NET Core Boilerplate site and the code which is used to create it. This policy is applied to all responses from the site.
\nContent-Security-Policy: default-src 'none';\n script-src 'self' ajax.googleapis.com ajax.aspnetcdn.com;\n style-src 'self' 'unsafe-inline';\n img-src 'self';\n font-src 'self';\n connect-src 'self';\n form-action 'self';\n report-uri /WebResource.axd?cspReport=true\n\n// Content-Security-Policy - Add the Content-Security-Policy HTTP header to enable Content-Security-Policy.\nGlobalFilters.Filters.Add(new CspAttribute());\n// OR\n// Content-Security-Policy-Report-Only - Add the Content-Security-Policy-Report-Only HTTP header to enable logging of \n// violations without blocking them. This is good for testing CSP without enabling it.\n// To make use of this attribute, rename all the attributes below to their ReportOnlyAttribute versions e.g. CspDefaultSrcAttribute \n// becomes CspDefaultSrcReportOnlyAttribute.\n// GlobalFilters.Filters.Add(new CspReportOnlyAttribute());\n\n// default-src - Sets a default source list for a number of directives. If the other directives below are not used \n// then this is the default setting.\nfilters.Add(\n new CspDefaultSrcAttribute()\n {\n // Disallow everything from the same domain by default.\n None = true,\n // Allow everything from the same domain by default.\n // Self = true\n });\n\n// connect-src - This directive restricts which URIs the protected resource can load using script interfaces \n// (Ajax Calls and Web Sockets).\nfilters.Add(\n new CspConnectSrcAttribute()\n {\n // Allow AJAX and Web Sockets to example.com.\n // CustomSources = "example.com",\n // Allow all AJAX and Web Sockets calls from the same domain.\n Self = true\n });\n// font-src - This directive restricts from where the protected resource can load fonts.\nfilters.Add(\n new CspFontSrcAttribute()\n {\n // Allow fonts from example.com.\n // CustomSources = "example.com",\n // Allow all fonts from the same domain.\n Self = true\n });\n// form-action - This directive restricts which URLs can be used as the action of HTML form elements.\nfilters.Add(\n new 
CspFormActionAttribute()\n {\n // Allow forms to post back to example.com.\n // CustomSources = "example.com",\n // Allow forms to post back to the same domain.\n Self = true\n });\n// img-src - This directive restricts from where the protected resource can load images.\nfilters.Add(\n new CspImgSrcAttribute()\n {\n // Allow images from example.com.\n // CustomSources = "example.com",\n // Allow images from the same domain.\n Self = true,\n });\n// script-src - This directive restricts which scripts the protected resource can execute. \n// The directive also controls other resources, such as XSLT style sheets, which can cause the user agent to execute script.\nfilters.Add(\n new CspScriptSrcAttribute()\n {\n // Allow scripts from the CDNs.\n CustomSources = "ajax.googleapis.com ajax.aspnetcdn.com",\n // Allow scripts from the same domain.\n Self = true,\n // Allow the use of the eval() method to create code from strings. This is unsafe and can open your site up to XSS vulnerabilities.\n // UnsafeEval = true,\n // Allow inline JavaScript, this is unsafe and can open your site up to XSS vulnerabilities.\n // UnsafeInline = true\n });\n// style-src - This directive restricts which styles the user applies to the protected resource.\nfilters.Add(\n new CspStyleSrcAttribute()\n {\n // Allow CSS from example.com\n // CustomSources = "example.com",\n // Allow CSS from the same domain.\n Self = true,\n // Allow inline CSS, this is unsafe and can open your site up to XSS vulnerabilities.\n // Note: This is currently enabled because Modernizr does not support CSP and includes inline styles\n // in its JavaScript files. This is a security hole. If you don't want to use Modernizr, \n // be sure to disable unsafe inline styles. 
For more information see:\n // http://stackoverflow.com/questions/26532234/modernizr-causes-content-security-policy-csp-violation-errors\n // https://github.com/Modernizr/Modernizr/pull/1263\n UnsafeInline = true\n });\n\nNotice how there is one MVC filter for each CSP directive. This is actually a very elegant solution. Consider the fact that you may want the actions in a particular controller to be able to display YouTube videos. Note that YouTube makes use of iframes to embed videos and its embed mark-up is shown below.
\n<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/PGM_uBy99GA" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>\n\nWith the above CSP policy, the Chrome browser throws the following error.
\n
We need to add the frame-src and child-src directives, which can be applied to the specific controller. Note that child-src is a CSP 2.0 directive and frame-src is deprecated in CSP 2.0, but we still need to add it for older browsers.
[CspChildSrc(CustomSources = "*.youtube.com")]\n[CspFrameSrcAttribute(CustomSources = "*.youtube.com")]\npublic class HomeController : Controller\n{\n // Action methods omitted.\n}\n\nBut what if we only want to allow a YouTube video to display for a single action, rather than all of the actions in a controller? Well, it's as simple as moving the attributes to the action, rather than the controller.
\npublic class HomeController : Controller\n{\n [CspChildSrc(CustomSources = "*.youtube.com")]\n [CspFrameSrcAttribute(CustomSources = "*.youtube.com")]\n public ActionResult Index()\n {\n // This view displays a YouTube Video.\n return this.View();\n }\n}\n\nSetting up the reporting of CSP violations is a bit more complicated. You need to add the CspReportUriAttribute MVC filter and add a special function in your Global.asax.cs file to actually handle a violation as shown below.
filters.Add(new CspReportUriAttribute() { EnableBuiltinHandler = true });\n\n// Added to Global.asax.cs\nprotected void NWebsecHttpHeaderSecurityModule_CspViolationReported(object sender, CspViolationReportEventArgs e)\n{\n CspViolationReport violationReport = e.ViolationReport;\n // Log the CSP violation here.\n}\n\nThe CspViolationReport is a representation of the JSON CSP violation that the browser sends you. It contains several properties, which can tell you about the blocked URL, the violated directive, the user agent and a lot more. This is your opportunity to log this data in your preferred logging framework.
One final note, all of this code is available to view on GitHub here and is part of the ASP.NET Core Boilerplate project template.
\nCSP is a proper standard; you can read the W3C documentation here. According to the W3C at the time of writing, CSP is at Candidate Recommendation, which is "a version of the standard that is more firm than the Working Draft" and "The standard document may change further, but at this point, significant features are mostly locked".
\nIf you take a look at CanIUse.com, you can see that Firefox 23+, Chrome 25+, Safari 7+ and Opera 15+ already support the official Content-Security-Policy HTTP header, while the next version of IE (Spartan or IE12, who knows what they'll name it?) will come with full support too.
A number of older browser versions supported CSP using the X-Content-Security-Policy or X-WebKit-CSP HTTP headers (the X- prefix is commonly used for browser features which are not yet finalised), but these older implementations are buggy (their use can mean content on your site gets blocked, even though you allowed it!) and should not be used.
There is currently an 'Editor's Draft' of CSP 2.0, written by the W3C standards body. It was published on 13 November 2014.
\nThe intention of this version is to fill a few gaps and add a few new directives which allow control over web workers, embedded frames, application manifests, the HTML document's base URL, where forms can be posted and the types of plug-ins the browser can load. NWebSec supports most of these new directives already (except, notably, the plug-in types) and you can start using them today.
\nAs well as these changes, CSP 2.0 also tries to address the pain points in using CSP, and perhaps the reason for its slow take-up so far: the inability to safely use in-line CSS and JavaScript using the style and script tags in your HTML.
\nSo why would you want to use in-line styles and scripts in the first place? Well, do you use Modernizr? Yes? Well, Modernizr does not work with CSP (I discuss this below). It makes use of in-line styles to test for various web browser capabilities and so requires the unsafe-inline directive to function. There are other libraries that have a similar requirement. Another reason for using in-line styles and scripts is applying CSP to an existing web application, where you don't want to spend time moving code to separate files.
CSP 1.0 had the unsafe-inline directive which allowed the use of in-line style and script tags but it is pretty dangerous and makes CSP partially pointless. It gives attackers the ability to inject code into your site (Using another vulnerability in your site if there is one) and to pull off a Cross Site Scripting (XSS) attack. Using CSP 1.0 meant loading styles and scripts from separate CSS and JavaScript files. CSP 2.0 introduces two new ways to use in-line styles and scripts.
Nonces work a little like the Anti-Forgery Token in ASP.NET MVC. A cryptographically random string is generated and sent to the client in the CSP HTTP header, as well as in the HTML with the style or script tag like so:
\nContent-Security-Policy: default-src 'self'; \n script-src 'self' https://example.com 'nonce-Nc3n83cnSAd3wc3Sasdfn939hc3'\n\n<script>\nalert("Blocked because the policy doesn't have 'unsafe-inline'.")\n</script>\n<script nonce="EDNnf03nceIOfn39fn3e9h3sdfa">\nalert("Still blocked because nonce is wrong.")\n</script>\n<script nonce="Nc3n83cnSAd3wc3Sasdfn939hc3">\nalert("Allowed because nonce is valid.")\n</script>\n\nThere is a problem however, web browsers that only support CSP 1.0, will not understand the nonce directive and will block the in-line script above. To resolve this issue, we combine the nonce with the unsafe-inline directive. CSP 1.0 web browsers will execute the in-line script as before (insecure but backwards compatible), but CSP 2.0 browsers will disregard 'unsafe-inline' when they see the nonce and only execute in-line scripts with the nonce set. This gives an upgrade path for existing sites and they can benefit from CSP 2.0 without requiring a massive rewrite to get rid of in-line styles and scripts.
Nonces can be easily implemented by using the HTML helper provided by NWebSec. You can find out more about how this feature is implemented in NWebSec here.
\n<script @Html.CspScriptNonce()>document.write("Hello world")</script>\n<style @Html.CspStyleNonce()>\n h1 {\n font-size: 10em;\n }\n</style>\n\nThe big disadvantage with this approach is that the nonce is different for each response sent to the client. This means that you cannot cache any page using nonces. If your page is specific to a user, then you probably don't want to cache it anyway and it doesn't matter; otherwise, using nonces is not possible.
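Under the hood, a nonce is just a fresh, cryptographically random value generated for every response. Here is a minimal sketch of such generation (my own illustration; NWebSec's actual implementation will differ):

```csharp
using System;
using System.Security.Cryptography;

public static class NonceGenerator
{
    // Generates a base64-encoded, cryptographically random nonce.
    // A new value must be generated for every HTTP response.
    public static string CreateNonce()
    {
        byte[] bytes = new byte[16];
        using (RandomNumberGenerator rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return Convert.ToBase64String(bytes);
    }
}
```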
\nUsing hashes solves the caching problem we have with nonces. The server computes the hash of a particular style or script tag's contents and includes the base64 encoding of that value in the Content-Security-Policy header like so:
Content-Security-Policy: script-src 'sha512-YWIzOWNiNzJjNDRlYzc4MTgwMDhmZDlkOWI0NTAyMjgyY2MyMWJlMWUyNjc1ODJlYWJhNjU5MGU4NmZmNGU3OAo='\n\n<script>\n"alert('Hello, world.');"\n</script>\n\nAs you can see, the script itself remains unchanged and only the HTTP header changes. We can now happily cache the page with the in-line script in it. Unfortunately, at the time of writing, NWebSec does not support hashes at all. If you feel this feature is worthwhile, as I do, then you can raise an issue on NWebSec's issue list.
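To make the hashing concrete, here is a sketch of my own (using SHA-256 rather than the SHA-512 shown above) of how a CSP hash source can be computed from a script tag's contents:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class CspHashHelper
{
    // Hashes the exact text between the <script> tags (UTF-8 encoded)
    // and formats it as a CSP source expression.
    public static string GetScriptHashSource(string scriptContent)
    {
        using (SHA256 sha256 = SHA256.Create())
        {
            byte[] hash = sha256.ComputeHash(Encoding.UTF8.GetBytes(scriptContent));
            return "'sha256-" + Convert.ToBase64String(hash) + "'";
        }
    }
}
```

Note that if the script's text changes by even one character, the hash no longer matches and the browser blocks the script.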
\nSo for the reasons you've learned above, using in-line styles and scripts is not the way to go. Apart from the fact that CSP will block them, you also cannot minify and obfuscate in-line scripts very easily using ASP.NET MVC (there are ways, which I have looked into, but they aren't very good). So moving code to external CSS and JavaScript files will mean you can use CSP and you might get a small performance boost. So what's the problem? Well, CSP is not currently supported by a few major libraries.
\nAs I've said above, Modernizr makes use of in-line styles to test for various web browser capabilities and so requires the insecure 'unsafe-inline' directive to function. There is a fix for the problem but it's very old and can no longer be merged into the current branch of the Modernizr code. I would fix it myself but I'm not enough of a JavaScript guru to do so. What I have done is raise this Stack Overflow question, which asks for a workaround or fix and generally raises awareness.
\nSo far, I've received no responses on GitHub or Stack Overflow but there is hope. AngularJS (another popular JavaScript library) has a CSP-compatible mode which makes use of an external CSS file and is very easy to set up. There is no reason why Modernizr could not have something similar. Another alternative is that, if NWebSec supports hashes, we can work out the hashes of any scripts that Modernizr is using and include these in our CSP HTTP header.
\nBrowser Link is a very cool Visual Studio feature that allows you to update an MVC view while debugging and hit a refresh button to refresh any browsers using that page. Unfortunately, this handy feature is not compatible with CSP because Visual Studio adds in-line scripts to the bottom of the page you are debugging. This of course, causes CSP violation errors. A simple workaround is to either introduce the unsafe-inline directive while debugging or turn off browser link altogether.
I have raised this suggestion on Visual Studio's User Voice page to get the problem fixed. I understand that this area has been changed significantly in ASP.NET Core, so it may not be needed by the time we all upgrade.
\nSo far, not many websites in the wild have implemented CSP. I think there are a few reasons:
\nEarly implementations of CSP (using the X-Content-Security-Policy or X-WebKit-CSP HTTP headers) were buggy or had unexpected behaviour (the Content-Security-Policy HTTP header no longer has this problem).\nNot enough sites make use of the report-uri directive. If you find a violation, you can quickly discover if someone is attacking your site, if your CSP policy is not valid or if you have a bug in your site.\n\nThere is a really interesting white paper written in 2014 and titled 'Why is CSP Failing? Trends and Challenges in CSP Adoption' which goes over the issues I listed above in a lot more depth.
\nAccording to the white paper, CSP is deployed in enforcement mode on only 1% of the Alexa Top 100 sites. I believe things are about to change. All major browsers now support the CSP HTTP header, NWebSec makes it easy to add to an MVC project, this blog post tells you exactly how it works and the ASP.NET Core Boilerplate project template enables CSP by default, right out of the box.
\nThere are big names using CSP right now. Go ahead and check the HTTP response headers from sites like Facebook, CNN, BBC, Google, Huffington Post, YouTube, Twitter and GitHub. Things have moved on from when the white paper was written and CSP adoption is starting to gain traction. Read Twitter's case study on adopting CSP.
\nAnother interesting finding from the white paper was that browser extensions and even ISPs were injecting scripts into pages, which caused CSP violation reports. CSP may break some browser extensions that inject code into the page. You may consider this a good or a bad thing. From a security point of view, what you need to ask is: do you really trust any browser extension to modify your code? I don't know about you, but I don't want any extension's, and especially any ISP's, dirty fingers in my source code.
\nYou can use SSL/TLS, which will stop most ISPs from fiddling with your code, but some governments get around even this! So CSP gives us some extra protection from man-in-the-middle attacks by browser extensions and ISPs.
\nAdvertising can be a problem for CSP. Some ad providers are better than others. Some providers use resources whose locations are constantly changing which can cause CSP violation errors if your policy is too strict. CNN has adopted a novel workaround for this problem. It embeds all of its adverts into frames which show pages with no CSP restrictions or at least very liberal ones.
\nThere are special web crawlers that have been created to crawl all of the links on your domain, in an attempt to generate a valid CSP policy automatically. CSP Tools is one such project on GitHub which, given a list of URLs, can crawl the web pages and generate a CSP policy. Another approach the tool uses is to look at your CSP violation reports and come up with rules based on those.
\nBe careful using this approach, however; it may not catch everything. The best approach is to build up your CSP policy as you build your site from the ground up and then carry out some testing to make sure you have got it right. You can set the CSP policy to report-only mode, so that browsers don't actually block anything but do report CSP violation errors. Once you are happy that no violations are being reported, you can apply your policy. Finally, you need to keep an eye out for any CSP violation errors that do occur and get them fixed as soon as you see them.
\nThe CSP Tester Chrome extension is an example of a tool you can use to apply CSP policies to your site and view the effects in the browser console window.
\nAs I've mentioned before, the best way is to build CSP into your site as you build your site. You can use the report-uri directive to log any violations and get them fixed. You can also use the Content-Security-Policy-Report-Only HTTP header instead of Content-Security-Policy, to stop the browser from actually blocking anything if you are not confident in the level of testing you have done.
Wow, that was a long blog post. I wanted it to be as comprehensive as possible. I hope I've shown that now is the time to invest in implementing CSP and if you are developing a new site, then integrating CSP into it at an early stage will mean that you reap the benefits of a much greater level of security. The ASP.NET Core Boilerplate project template is a great place to start and will give you a working code example which tells a thousand words on its own.
\n", "url": "https://rehansaeed.com/content-security-policy-for-asp-net-mvc/", "title": "Content Security Policy (CSP) for ASP.NET MVC", "summary": "Content Security Policy (CSP) is a HTTP header which white-lists content the browser is allowed to load. This post discusses its application in ASP.NET MVC.", "image": "https://rehansaeed.com/images/hero/Content-Security-Policy-CSP-for-ASP.NET-MVC-1366x768.png", "date_modified": "2015-03-17T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/nwebsec-asp-net-mvc-security-through-http-headers/", "content_html": "This series of blog posts goes through the additions made to the default ASP.NET MVC project template to build the ASP.NET Core Boilerplate project template. You can create a new project using this template by installing the Visual Studio extension or visit the GitHub site to view the source code.
\nSecurity is hard at the best of times. Web security...well...it takes things to a whole new level of difficulty. It is ridiculously easy to slip up and leave holes in your site's defences.
\nThis blog post, as well as the ASP.NET Core Boilerplate project, is not a replacement for your own knowledge, but it does help by setting up more secure defaults and giving you a few more tools out of the box to help secure your site.
\nIf you have some time and want to learn more about web security I highly recommend Troy Hunt's Pluralsight course called Hack yourself first. Note that Pluralsight requires a paid subscription (I'm quite against posting links to paid content but this course is pretty good. You can also get a trial subscription if you're interested). Here is a free video by Troy which covers the same topic but in a little less depth.
\nI would also, highly recommend reading up on Troy Hunt's blog which has extensive examples of real life websites in the wild, written by major companies getting web security horribly wrong.
\nThe NWebSec NuGet packages written by André N. Klingsheim are a great way to add additional security to your ASP.NET MVC site. The ASP.NET Core Boilerplate project template includes them by default.
\nEverything is preconfigured and commented as much as possible out of the box but remember this is a project template to get you started. You still need to put the effort in to customize the site security to your own requirements and put in some time learning about what each of the security features does and how best to use it.
\nHTTP has been around for a very long time and so a fairly large number of HTTP headers have accumulated over time. Some are more useful than others, but many of them are aimed at making the web more secure.
\nAndré N. Klingsheim has a brilliant blog post called Security through HTTP response headers which is a must-read and fairly comprehensive. Go on, I'll wait for you to finish reading. NWebSec provides a host of ActionFilterAttributes (the rest of this post expects you to know what these are) which can be applied in three different ways: globally to all requests, to individual controllers, or to individual actions.
\nNWebSec's ActionFilterAttributes add and configure specific HTTP headers. Most of them are preconfigured in ASP.NET Core Boilerplate for you to apply globally, but some require you to take action.
\nThe X-Frame-Options HTTP header prevents click-jacking by stopping the page from being opened in an iframe, or by only allowing it from the same origin (your domain). There are three options to choose from:
SameOrigin - The X-Frame-Options header should be set in the HTTP response, instructing the browser to display the page when it is loaded in an iframe, but only if the iframe is from the same origin as the page.\nDeny - The X-Frame-Options header should be set in the HTTP response, instructing the browser to not display the page when it is loaded in an iframe.\nDisabled - The X-Frame-Options header should not be set in the HTTP response.\n\nWe can use NWebSec to set it to Deny, which blocks all iframes from loading the site; this is the most secure option and the default set in ASP.NET Core Boilerplate.
// Filters is the GlobalFilterCollection from GlobalFilters.Filters\nfilters.Add(\n new XFrameOptionsAttribute()\n {\n Policy = XFrameOptionsPolicy.Deny\n });\n\nYou should note that in newer browsers, this HTTP header has been superseded by the Content-Security-Policy HTTP header which I will be covering in my next blog post. However, it should still be used for older browsers.
This HTTP header is only relevant if you are using TLS. It ensures that the browser loads all content over HTTPS and refuses to connect in the case of certificate errors and warnings. You can read a complete guide to setting up your site to run with a free TLS certificate here.
\nFor this header, NWebSec currently does not provide an MVC filter that can be applied globally. Instead, we can use the OWIN extension method (From the NWebSec.Owin NuGet package) to apply it.
app.UseHsts(options => options.MaxAge(days:30).IncludeSubdomains());\n\nAs well as this header, MVC ships with the RequireHttpsAttribute. This forces an unsecured HTTP request to be re-sent over HTTPS. It does so without requiring any extra HTTP headers. Instead, this is a function of the MVC framework itself, which checks requests and simply redirects users if they send a normal HTTP request to an HTTPS URL. This attribute can be set globally (Using HTTPS throughout your site is a good idea these days) as shown below:
\nfilters.Add(new RequireHttpsAttribute());\n\nBoth of these lines of code have an overlapping purpose but work in different ways. The RequireHttpsAttribute uses the MVC framework, while the NWebSec option relies on browsers responding to the Strict-Transport-Security HTTP header. Security should be applied in thick layers, so it's worth using both features. ASP.NET Core Boilerplate assumes you are not using TLS by default but does include the above lines of code commented out with a liberal sprinkling of comments to make it easy to add back in.
This HTTP header stops IE9 and below from sniffing files and overriding the Content-Type header (MIME type) of a HTTP response. This filter is added by default in ASP.NET Core Boilerplate.
filters.Add(new XContentTypeOptionsAttribute());\n\nThis HTTP header stops the automatic downloading and opening of your HTML pages by browsers, which would otherwise go on to run the page as if it were part of your site. It forces the user to save the page and manually open the HTML document. This filter is added by default in ASP.NET Core Boilerplate.
\nfilters.Add(new XDownloadOptionsAttribute());\n\nNWebSec provides a number of other useful HTTP headers. The SetNoCacheHttpHeadersAttribute helps turn off caching by applying the Cache-Control, Expires and Pragma HTTP headers (Expires and Pragma have been superseded by Cache-Control but still need to be applied for backward compatibility).
Another useful filter provided is XRobotsTagAttribute. This adds the X-Robots-Tag HTTP header, which tells robots (Such as Google or Bing) not to index any action or controller this attribute is applied to. Note that ASP.NET Core Boilerplate includes a robots.txt file which you should use instead of this filter but I've added this here for completeness.
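If you go the robots.txt route instead, a minimal file keeping robots out of a checkout area might look like this (The /checkout path is a made up example, and remember robots.txt only stops well behaved robots):

```
User-agent: *
Disallow: /checkout
```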
A good place to use these attributes would be on a page where you want to post back credit card information because caching credit card information could be a security risk and you probably don't want search engines indexing your checkout pages either.
\npublic class CheckoutController : Controller\n{\n [SetNoCacheHttpHeadersAttribute, XRobotsTagAttribute(NoIndex = true, NoFollow = true)]\n public ActionResult Checkout(CardDetails cardDetails)\n {\n // Check out the customer's purchases securely.\n }\n}\n\nThe CspAttribute filter adds valuable support for the new Content-Security-Policy (CSP) HTTP header. I will be covering this extensively in my next blog post so I've only mentioned it here. There are other HTTP headers but they turn off browser security features and I'm not really sure why you would use those.
In the image below, you can see the ASP.NET Core Boilerplate site in action. I've taken a screenshot of the HTTP response headers. You will see the ones listed in this post among them.
\n
Using HTTP headers for security is just one extra tool in your arsenal to secure your site. As you will see in my next post about the new Content-Security-Policy (CSP) HTTP header, it can be a very powerful tool but not one to be used in isolation. You need to think about security across the whole spectrum of your site to catch all the glaring holes you may have missed.
\n", "url": "https://rehansaeed.com/nwebsec-asp-net-mvc-security-through-http-headers/", "title": "NWebSec ASP.NET MVC Security Through HTTP Headers", "summary": "The NWebSec NuGet packages help secure your ASP.NET MVC site using HTTP headers. The ASP.NET Core Boilerplate project template configures them out of the box.", "image": "https://rehansaeed.com/images/hero/NWebSec-1366x768.png", "date_modified": "2015-02-05T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/reactive-extensions-part7-sample-events/", "content_html": "It's been a while since I've done another Rx post. They've been pretty popular and thanks to the community for all the positive feedback. I was talking to a colleague yesterday who had been using standard C# events in WPF (The principles learned in this post can apply anywhere). He had subscribed to the TextChanged event in C# and was updating the user interface on the fly, whenever the user typed in a character of text. He was getting way too many events being fired and his user interface couldn't keep up with all the work it was being asked to do.
this.TextBox.TextChanged += this.OnTextBoxTextChanged;\n\nprivate void OnTextBoxTextChanged(object sender, TextChangedEventArgs e)\n{\n // Heavy User Interface updates that can cause the application to lock up.\n}\n\nThis is a very common scenario which I myself have come across several times. The solution to this problem is to take a sample of the events being fired and only update the user interface every few seconds. This is possible without Reactive Extensions (Rx) but you have to write a fair amount of boilerplate code (I know, I've done it myself).
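For comparison, the manual boilerplate might look something like this sketch, which uses a DispatcherTimer to hold back the work until the events calm down (Strictly a debounce rather than a sample, but it shows the amount of ceremony Rx saves you):

```csharp
private readonly DispatcherTimer timer =
    new DispatcherTimer() { Interval = TimeSpan.FromSeconds(3) };

public MainWindow()
{
    this.InitializeComponent();
    this.timer.Tick += this.OnTimerTick;
    this.TextBox.TextChanged += this.OnTextBoxTextChanged;
}

private void OnTextBoxTextChanged(object sender, TextChangedEventArgs e)
{
    // Restart the timer on every keystroke, so the heavy work only
    // happens once the user stops typing for three seconds.
    this.timer.Stop();
    this.timer.Start();
}

private void OnTimerTick(object sender, EventArgs e)
{
    this.timer.Stop();
    // Heavy user interface updates go here.
}
```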
\nReactive Extensions (Rx) can do this with a few easy to understand (This is the real bonus) lines of code. The first step is to wrap the WPF TextChanged event (I've shown how to do this in a previous post here).
public IObservable<TextChangedEventArgs> WhenTextChanged\n{\n get\n {\n return Observable\n .FromEventPattern<TextChangedEventHandler, TextChangedEventArgs>(\n h => this.TextBox.TextChanged += h,\n h => this.TextBox.TextChanged -= h)\n .Select(x => x.EventArgs);\n }\n}\n\nthis.WhenTextChanged\n .Sample(TimeSpan.FromSeconds(3))\n .Subscribe(x => Debug.WriteLine(DateTime.Now + " Text Changed"));\n\nThe final and most succinct step is to use the Sample method to only pick out the latest text changed event every three seconds and pass that on to the Subscribe delegate. It really is that easy and this blog post really is this short because of that!
Security is a big subject in the web world, largely because it's super easy to leave your site insecure and open to attack. The default ASP.NET MVC project template is pretty weak when it comes to security. It trades security for simplicity. The ASP.NET Core Boilerplate project template shifts that balance more in favour of security while still trying to be as simple as possible. Several insecure settings in the Web.config file have been changed and made secure by default.
\nThis series of blog posts goes through the additions made to the default ASP.NET MVC template to build the ASP.NET Core Boilerplate project template. You can create a new project using this template by installing the Visual Studio template extension or visit the GitHub site to view the source code.
\nIn the early stages of development, you want to see the full stack trace of your exceptions when an error occurs on a page. When it comes to releasing your site, you need to hide this sensitive information. Unbelievably, the default ASP.NET MVC template leaves this sensitive information wide open. To hide this, you need to add the customErrors section to your web.config file and turn it on.
The problem is that we still want this setting to be turned off when debugging. This is where configuration file transforms come in. This setting is off when the solution configuration is Debug and on when it is Release. The debug attribute in the compilation section is set in the same way.
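As a sketch, the two pieces might look something like this (The mode values are the interesting part; defaultRedirect and the error pages themselves are left out for brevity):

```xml
<!-- Web.config - errors are shown in full while developing. -->
<system.web>
  <customErrors mode="Off" />
</system.web>

<!-- Web.Release.config - the transform turns custom error pages on for Release builds only. -->
<system.web>
  <customErrors xdt:Transform="SetAttributes(mode)" mode="On" />
</system.web>
```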
By default, client-side JavaScript can access the cookies set by the default ASP.NET MVC template. Cookies can also be sent unencrypted over the wire, because SSL is not required. The httpCookies section can be added to secure your cookies (This can also be done in code but the point is that we are making it secure by default. You could easily forget to turn this on in code).
\n<!-- httpOnlyCookies - Ensure that external script cannot access the cookie. -->\n<!-- requireSSL - Ensure that the cookie can only be transported over SSL. -->\n<httpCookies httpOnlyCookies="true" requireSSL="false" />\n\nBy default we set requireSSL to false because we don't know if you are going to use SSL in your site or not. If you are using SSL, you need to turn this on.
By default ASP.NET shouts about itself a lot. It sends HTTP headers with each response telling the world and his dog what version of ASP.NET your site is hosted on and even what version of MVC you are using. Below is an example of the extra headers needlessly being sent with every response:
\n
To fix this problem you need to do a few things. The first is to set the enableVersionHeader setting on the httpRuntime section to false.
<!-- enableVersionHeader - Remove the ASP.NET version number from the response headers. Added security through obscurity. -->\n<httpRuntime targetFramework="4.5" enableVersionHeader="false" />\n\nSecond, you need to clear the custom headers as shown below.
\n<httpProtocol>\n <customHeaders>\n <!-- X-Powered-By - Remove the HTTP header for added security and a slight performance increase. -->\n <clear />\n </customHeaders>\n</httpProtocol>\n\nTroy Hunt is a great MVC security guru and definitely worth a read on this subject.
\nWe rename the ASP.NET session cookie from its default name of ASP.NET_SessionId to s. Now users of our site no longer have any idea what web server we are using (There are still ways to find out but we are making it harder) and we save a few more bytes being sent over the wire because we have a shorter name.
<!-- cookieName - Sets the name of the ASP.NET session cookie (Defaults to 'ASP.NET_SessionId'). -->\n<sessionState cookieName="s" />\n\nBy default, ASP.NET limits the size of a request to 4096 kilobytes (4 MB). This is to reduce the effects of denial of service attacks. You can reduce this limit further by setting the maxRequestLength attribute on the httpRuntime section. The template does not do this by default but does include a comment highlighting this.
<!-- maxRequestLength="4096" - The maximum length of the request in kilobytes. -->\n<httpRuntime maxRequestLength="4096"/>\n\nMachine keys are used by MVC to generate anti-forgery tokens, which you should be using with any form on your site. If your site is deployed to a server cluster, you need to generate a machine key and add it to the system.web section of your web.config file. This is to ensure that all machines in your server cluster are using the same machine key to generate anti-forgery tokens. This link tells you more about how to do this.
<machineKey decryptionKey="[YOUR DECRYPTION KEY GOES HERE]" validationKey="[YOUR VALIDATION KEY GOES HERE]"/>\n\nThe popular Elmah NuGet package is included and configured for error logging out of the box. However, to properly secure it, you should change the URL pointing to it (An attacker can only probe your Elmah page for vulnerabilities if they can find it). You should also use some form of authentication to limit the Elmah page to certain roles or users. Both of these steps can only be taken by the person using the template. However, where we can't write the code for you, we add liberal comments and add an entry into the check-list so you don't forget to do this. By default we also allow remote access to the Elmah pages; consider turning this off if you have local access to the machine the site is hosted on. Here are the relevant app settings:
\n<!-- If authentication is turned on, you can specify the exact roles that have access (eg. "Administrator"). -->\n<add key="elmah.mvc.allowedRoles" value="*" />\n<!-- If authentication is turned on, you can specify the exact users that have access (eg. "johndoe"). -->\n<add key="elmah.mvc.allowedUsers" value="*" />\n<!-- Configure the ELMAH.MVC access route. Note that you should probably change this to something else. \n This adds a little security through obscurity; attackers can't probe your Elmah page if they \n don't know where it is. -->\n<add key="elmah.mvc.route" value="elmah" />\n\nGlimpse is another great tool to help with debugging and diagnostics for your site. Like Elmah, Glimpse has its own URL which you should rename. Glimpse is turned off in 'Release' mode for security reasons but you could keep it turned on and use authentication to limit who can access it. The relevant section for Glimpse is shown below.
\n<!-- glimpse - Navigate to {your site}/glimpse and turn on Glimpse to see detailed information about your site.\n (See http://getglimpse.com/ for a video about how this helps with debugging).\n You can also install addons for Glimpse to see even more information. E.g. Install the Glimpse.EF6\n NuGet package to see your SQL being executed (See http://getglimpse.com/Extensions for all Glimpse extensions).\n For more information on how to configure Glimpse, please visit http://getglimpse.com/Help/Configuration\n or access {your site}/glimpse for even more details and a Configuration Tool to support you. \n Note: To change the glimpse URL, change the value in endpointBaseUri and also the glimpse URL under \n httpHandlers and handlers sections above. -->\n<glimpse defaultRuntimePolicy="On" endpointBaseUri="~/glimpse">\n</glimpse>\n\nI'm not quite sure why but configuring the ASP.NET MVC anti-forgery tokens cannot be done in the web.config file. The following code can be found in the Global.asax.cs file.
private static void ConfigureAntiForgeryTokens()\n{\n // Rename the Anti-Forgery cookie from "__RequestVerificationToken" to "f". \n // This adds a little security through obscurity and also saves sending a \n // few characters over the wire.\n AntiForgeryConfig.CookieName = "f";\n\n // If you have enabled SSL, uncomment this line to ensure that the Anti-Forgery \n // cookie requires SSL to be sent across the wire. \n // AntiForgeryConfig.RequireSsl = true;\n}\n\nWe are renaming the anti-forgery cookie from __RequestVerificationToken to f. This saves a few bytes and obscures the technology we are using a little. You can also require SSL for the anti-forgery cookie to be sent over the wire. This is commented out by default but if you are using SSL, set this to true for added security.
Enabling tracing while debugging your site is a fairly common occurrence. It can be done with a single line of config:
\n<system.web>\n <trace enabled="true"/>\n</system.web>\n\nYour tracing information can be easily viewed by navigating to http://YourSite/trace.axd as shown here:

The security angle on tracing is two-fold. The first is the most obvious: you could leave the trace.axd page open to anyone who knows to try that URL on your site, leaking valuable inside information about your site, as well as the versions of ASP.NET and .NET you are using. The fix for this is simple, you just need to remember to remove the tracing node from your web.config file.
Once again, you can use configuration file transforms to fix this problem. In your Web.Release.config file, you can add the following code to remove tracing but only when the site is built in release mode:
<system.web>\n <!-- customErrors - Turn on custom error pages instead of ASP.NET errors containing stack traces which are a security risk. -->\n <customErrors xdt:Transform="SetAttributes(mode)" mode="On"/>\n <!-- compilation - Turn off debug compilation. -->\n <compilation xdt:Transform="RemoveAttributes(debug)" />\n <!-- trace - Turn off tracing, just in case it is turned on for debugging. -->\n <trace xdt:Transform="Remove" />\n</system.web>\n\nThe second problem is that even if you do this, accessing http://YourSite/trace.axd causes a 500 Internal Server Error on your site! This gives an attacker a clue that you are using ASP.NET. The correct thing to do is for the site to respond with a 404 Not Found error page instead. It turns out that in release mode you have to remove the tracing HTTP handlers altogether to stop your site responding to this URL. You can do that by adding the following snippet to the Web.Release.config file:
<system.webServer>\n <!-- remove TraceHandler-Integrated - Remove the tracing handlers so that navigating to /trace.axd gives us a \n 404 Not Found instead of 500 Internal Server Error. -->\n <handlers>\n <remove xdt:Transform="Insert" name="TraceHandler-Integrated" />\n <remove xdt:Transform="Insert" name="TraceHandler-Integrated-4.0" />\n </handlers>\n</system.webServer>\n\nNavigating to a directory using IIS and ASP.NET MVC can cause a 403 Forbidden response to be returned. Actually, its a 403.14 Forbidden response to be exact. IIS is basically telling us that directory browsing is disabled (As it should be, directory browsing is a severe security risk. It can allow attackers to see your Web.config file with all your connection strings in it!). You can see what happens when I navigate to the physical /Content folder below:

So what is the problem? Well, a user would expect a 404 Not Found response if a resource is not found. A 403.14 Forbidden response tells a potential attacker that there is a folder there and that you are using IIS. Not the most useful information to an attacker but combine it with other information and it could be useful. The way to fix this is to handle 403.14 errors and replace the response with a standard 404 Not Found. We just need to add the code below:
\n<system.webServer>\n <!-- Custom error pages -->\n <httpErrors errorMode="Custom" existingResponse="Replace">\n <!-- Redirect IIS 403.14 Forbidden responses to the error controllers not found action.\n A 403.14 happens when navigating to an empty folder like /Content and directory browsing is turned off\n See http://www.troyhunt.com/2014/09/solving-tyranny-of-http-403-responses.html -->\n <error statusCode="403" subStatusCode="14" responseMode="ExecuteURL" path="/error/notfound" />\n <!-- ...Omitted Code... -->\n </httpErrors>\n</system.webServer>\n\nJust adding this is not enough however. If I fire up Fiddler, navigating to the /Content folder of the site now results in a 301 Document Moved response, followed by a 404 Not Found. We can do much better than that.

To get around the above issue, you need to turn off default document handling in IIS. Please do note, that this will stop IIS from returning the default document (Using whats called a courtesy redirect) when navigating to a folder e.g. navigating to /Folder which contains an index.html file will not return /Folder/index.html. This should not be a problem as we are using ASP.NET MVC controllers and actions and not physical files.
<system.webServer>\n <!-- Stop IIS from doing courtesy redirects used to redirect a link to a directory without\n a slash to one with a slash e.g. /Content redirects to /Content/. This gives a clue\n to hackers as to the location of directories. -->\n <defaultDocument enabled="false"/>\n</system.webServer>\n\nNow, navigating to /Content will return a simple and correct 404 Not Found and we don't have the courtesy redirect anymore either. Take a look at the same request in Fiddler:

IIS seems to have a lot of strange behaviours that have a detrimental effect on security. If you use the above settings however, you can cut out IIS's extra features that you don't need or want. Look out for the next post when I'll be discussing the very cool NWebSec NuGet package, which provides a whole host of comprehensive ASP.NET MVC security related filters which you can apply to your site.
\n", "url": "https://rehansaeed.com/securing-the-aspnet-mvc-web-config/", "title": "Securing the ASP.NET MVC Web.config", "summary": "The web.config file is insecure in the default ASP.NET MVC project template. This post talks you through securing the ASP.NET MVC Web.config file.", "image": "https://rehansaeed.com/images/hero/ASP.NET-Core-Boilerplate-1366x768.png", "date_modified": "2014-12-17T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/configuration-file-transforms-in-visual-studio-should-be-built-in/", "content_html": "::: tip Update 1\nI should point out that configuration file transforms have been available for some time for web projects only. I believe the feature was developed by the web team within Microsoft. This fact just makes it more strange, that this feature was not rolled out to the masses.\n:::
\n::: tip Update 2\nMicrosoft has just announced on their UserVoice site that SlowCheetah will be updated for Visual Studio 2015 and future versions will indeed have configuration file transforms built in! That's great news and shows another example of Microsoft listening to its developers.\n:::
\nI have used a number of IDEs in my time as a developer, from bare bones text editors like Sublime (Which you should seriously consider purchasing. It beats the pants off of Notepad and most Notepad competitors) to fully fledged development environments like NetBeans and Visual Studio. It has to be said though that Microsoft does a much better job than most and packs a lot of power into its punch.
\nIt is with puzzlement and confusion then that something as useful and common as transforms for configuration files (.config) is still not supported by Microsoft. You may be wondering what I'm babbling about. Well, every developer at some point has had to release their application on more than one environment, even if it's just your own development machine and wherever your application is released. In the past I've worked with as many as four distinct environments, each with its own application settings, database connection strings and other settings stored in .config files. This has been a nightmare.
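As an illustration of the sort of transform I mean, a Web.Release.config might swap a development connection string for a production one like this (The connection string name and server here are made up):

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the connection string attributes on the entry whose name matches. -->
    <add name="DefaultConnection"
         connectionString="Server=ProductionServer;Database=MyDatabase;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```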
\nHappily though, someone named Sayed Hashimi working for Microsoft has unofficially created a Visual Studio extension called Slow Cheetah. Go ahead, read through that page if you haven't already; I'm not going to describe the genius that is Slow Cheetah here (I'm not just talking about the ironic name, which I quite like. Sayed Hashimi seems to have a gift for odd names, as he is also a developer for the Side Waffle project).
\nUnfortunately, according to this post, Slow Cheetah is no longer going to be supported in future versions of Visual Studio. His reasons for dropping support are interesting. In the first paragraph, he suggests that the existence of Slow Cheetah has stopped Microsoft from building configuration file transforms into Visual Studio in the first place.
\nIt's been a few months since Sayed's post on GitHub. While I'm sure the community will step in and keep the tool updated for future versions of Visual Studio, this is really something that should have been built in to Visual Studio years ago!
\nDon't despair, there is hope. On the Visual Studio User Voice site, there is a suggestion to support configuration file transforms out of the box. What's more, the suggestion is currently number seven in the list of 'hot' ideas and number three if you remove the ideas that Microsoft has already commented on. Please do go and vote for this suggestion; Microsoft does read them and, more importantly, acts on them.
\nI spent some time a year ago teaching a junior developer the new C# async and await feature, as well as the Task Parallel Library (TPL). Not only did it blow his mind but I realized that so much of what I had learned had now become obsolete and that this new developer would never have to struggle with asynchronous code as I had. In my opinion, this feature is another one of those moments when we can consign another series of old methods to the dustbin of history. We just need to help Microsoft know about it.
\n", "url": "https://rehansaeed.com/configuration-file-transforms-in-visual-studio-should-be-built-in/", "title": "Configuration File Transforms In Visual Studio Should Be Built In", "summary": "Configuration File Transforms can be done with the Slow Cheetah Visual Studio extension by Sayed Hashimi. This feature should be built into Visual Studio.", "image": "https://rehansaeed.com/images/hero/Visual-Studio-1366x768.png", "date_modified": "2014-12-15T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/custom-visual-studio-project-templates/", "content_html": "Creating a custom Visual Studio project template that you can use from the 'New Project' dialogue is a great way to reduce the amount of repetitive code you have to write. It gets you running through the jungle with the leaves brushing against your face, instead of sitting at the starting block wondering where your shoes are.
\nThe Visual Studio Gallery is a great place to share your custom Visual Studio project template with the community. There are actually two ways to create a template. I recently had to create one for ASP.NET Core Boilerplate and found it less than simple. This post talks you through creating one and submitting it to the Visual Studio Gallery, allowing anyone to use it from the 'New Project' dialogue.
\n
The project template shown in the 'Web' section of Visual Studio's 'New Project' dialogue.
\nThe first method of creating a custom project template is to use the 'Export Template' wizard in Visual Studio. As shown in the steps below, you can create a .zip file which contains a template of your selected project.

The Visual Studio Export Template menu item for exporting a .zip project template.
The wizard allows you to select the project from your solution you want to export as a template.
\n
The export template 'Choose template type' screen.
\nYou can even specify an icon and preview image of the project template. This is perfect as this information also gets displayed in the Visual Studio 'New Project' dialogue.
\n
The export template 'Select template options' screen, where you can describe your template.
\nThe .zip file that is output must be copied into the following folder, for Visual Studio to pick it up:
C:/Users/[Your User Name]/Documents/Visual Studio [2010|2012|2013|2015]/Templates/ProjectTemplates\n\n
The .zip file output as part of the Visual Studio Export Template wizard.
This is a super easy process and you should really consider creating your own templates if you find yourself making the same old changes to every project you are creating. The downside is that it's not very customizable and these types of project templates cannot be submitted to the Visual Studio Gallery.
\nThe gallery website only supports project templates created using the VSIX package. Creating a VSIX package is a pretty involved process and will take a couple of hours.
\nThis Visual Studio extension claims to support exporting your projects into VSIX packages using a simple wizard interface. Unfortunately, this tool is only available for Visual Studio 2010 and the Microsoft developers of the extension seem to have stopped further development.
\nThis is a real shame. Here we have an excellent tool which the community really needs. A tool that makes creating a VSIX project template package, a matter of a few clicks. This should be built into Visual Studio out of the box!
\nThere is even an alternative method of creating project templates and sharing them with the community that I should mention. It's called SideWaffle (I like the name for some reason). It makes it slightly easier to create a template but it's still not as simple as a few clicks. It's certainly something worth taking a look at though.
\nThe first step is to create a .zip project template package using the steps above. Once you've done that, download and install the Microsoft Visual Studio 2013 SDK. This will give you new project templates to create VSIX packages. The first thing to do is create a new C# Project Template (Or a Visual Basic one if you prefer).

The new project dialogue, showing how to create a new VSIX C# project.
\nThe next step is to open the .zip version of your project template and copy the contents into your new project. Copy everything (Including the project file), except the 'Properties' folder as this will conflict with the existing one in the project.

An example of a VSIX C# project from the ASP.NET Core Boilerplate project template.
\nSelect all the files in the project and in the properties window, ensure that their 'Build Action' is set to 'Content'.
\n
The properties window for a file in a VSIX C# project. Note, that the file build action has been set to 'Content'.
\nThe .vstemplate file in the project contains all the information about the contents of the project template. If you select it and go to the 'Properties' window, you can set the 'Category' property which determines where your project template will appear in the 'New Project' dialogue.

The category a VSIX C# project is displayed under, in the Visual Studio New Project dialogue.
\nAn example of the .vstemplate file is shown below. Note that I've set the project template name, description and the default name of the project, which is shown in the 'New Project' dialogue and which the user can rename. The XML then goes on to declare the name of the .csproj file in the Project node, followed by each and every file and folder to be included in the template.
Note that the AssemblyInfo.cs file is a little special. There are two of them in the project. The one we are interested in was already located in the root of the project when we created it. We also need to add it to the XML below slightly differently than the other files (See below).
You can use the OpenInWebBrowser or OpenInEditor attributes on a ProjectItem to get the file to be opened in a web browser or text editor when the project is first created. I've used OpenInWebBrowser to open the ReadMe.html file containing basic information about the project, when the project is first created.
<?xml version="1.0" encoding="utf-8"?>\n<VSTemplate Version="3.0.0" Type="Project" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005" xmlns:sdk="http://schemas.microsoft.com/developer/vstemplate-sdkextension/2010">\n <TemplateData>\n <Name>ASP.NET Core Boilerplate</Name>\n <Description>A professional ASP.NET MVC template for building secure, fast, robust and adaptable web applications or sites. It provides the minimum amount of code required on top of the default MVC template provided by Microsoft. Find out more at RehanSaeed.com</Description>\n <Icon>MvcBoilerplateTemplate.ico</Icon>\n <ProjectType>CSharp</ProjectType>\n <RequiredFrameworkVersion>4.5</RequiredFrameworkVersion>\n <SortOrder>1000</SortOrder>\n <TemplateID>f2d50b53-cff3-41b4-8481-dac14c18ea48</TemplateID>\n <CreateNewFolder>true</CreateNewFolder>\n <DefaultName>WebProject</DefaultName>\n <ProvideDefaultName>true</ProvideDefaultName>\n </TemplateData>\n <TemplateContent>\n <Project File="MvcBoilerplate.csproj" ReplaceParameters="true">\n <ProjectItem ReplaceParameters="true" TargetFileName="Properties%5CAssemblyInfo.cs">AssemblyInfo.cs</ProjectItem>\n <Folder Name="App_Start">\n <ProjectItem ReplaceParameters="true" OpenInEditor="false">BundleConfig.cs</ProjectItem>\n <ProjectItem ReplaceParameters="true" OpenInEditor="false">FilterConfig.cs</ProjectItem>\n <ProjectItem ReplaceParameters="true" OpenInEditor="false">RouteConfig.cs</ProjectItem>\n <ProjectItem ReplaceParameters="true" OpenInEditor="false">Startup.Container.cs</ProjectItem>\n </Folder>\n <!-- Omitted lots of Folder and ProjectItem nodes... -->\n <ProjectItem ReplaceParameters="true" OpenInWebBrowser="true">ReadMe.html</ProjectItem>\n </Project>\n </TemplateContent>\n</VSTemplate>\n\nThe next step is to add a VSIX Project to our solution. This is the project that will actually build a .vsix file.

The new project dialogue, showing how to create a new VSIX project.
\nIf you open the .vsixmanifest file, you can fill in basic information about the template. This information will be displayed in the 'New Project' dialogue. I have specified an Icon, Preview Image and Licence file. All three of these files are added to the project.

The metadata for the VSIX project, describing the .vsix installer file and also the project shown in the solution explorer, showing the files included.
The 'Install Targets' tab lets you target a specific version of Visual Studio. I changed this to support Visual Studio 2012 and above by specifying the version range to be [11.0,]. More information here.
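For reference, that version range ends up in the InstallationTarget element of the .vsixmanifest file. This is a minimal sketch of the relevant fragment; the Id shown targets Visual Studio Professional and is only one of several possible values:

```xml
<!-- Sketch of a source.extension.vsixmanifest fragment.
     [11.0,] means Visual Studio 2012 (internal version 11.0) and above,
     with no upper bound. -->
<Installation>
  <InstallationTarget Id="Microsoft.VisualStudio.Pro" Version="[11.0,]" />
</Installation>
```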
The 'Dependencies' tab is very similar. Here you can specify which version of the .NET Framework your project depends on. I stuck with the defaults of .NET 4.5 only.
\n
The installation targets or versions of Visual Studio your VSIX extension will install to.
\nIf you click on the 'Assets' tab on the left, you can add a reference to your other 'C# Project Template' project.
\n
The VSIX C# projects you want to add to the VSIX extension as an asset to be installed.
\nSubmitting your new VSIX extension to the Visual Studio Gallery is a great way to share your new Project Template with the world. The ASP.NET Core Boilerplate extension received over a hundred downloads within two days of being submitted!
\nIf you've followed the steps above to create a VSIX package and tested that it works correctly, then submitting it to the site is really easy. Follow this link to the submission page and fill in the form. An example of what it looks like can be seen below:
\n
An example of the Visual Studio Gallery Submission Page.
\nOnce you have made your submission, you can go to the 'Online' section of the 'New Project' dialogue and see your project template listed.
\n
The project template shown in the 'Web' section of Visual Studio's 'New Project' dialogue.
\nWhether you use the template above, install it explicitly from the Visual Studio Gallery or install it from 'Extensions and Updates', you will then see the project appear in the 'New Project' dialogue under the category you specified in the settings above.
\n
The project template shown in the 'Online' section of Visual Studio's 'New Project' dialogue.
\nCreating a .zip project template is super simple and something everyone should know and use for larger projects. Creating VSIX project templates is a lot more involved but, to be honest, there is no reason why it should be. I hope Microsoft makes this process a lot easier.
These days there is a ridiculous range of devices that can access your website, from phone and desktop browsers to phone apps, operating systems and search engine bots. Most of them will require some kind of icon or image to display for your website. Some of them go even further and allow you to specify splash screens for when your page is loading or an RSS feed URL for the latest updates from your site.
\nA brain dump of all my knowledge regarding favicons and many other ASP.NET MVC features can be found in the ASP.NET Core Boilerplate project on GitHub. It's a professional ASP.NET MVC template for building secure, fast, robust and adaptable web applications or sites. It provides the minimum amount of code required on top of the default MVC template provided by Microsoft.
\nThis blog post tries to be as comprehensive as possible in explaining the absolute madness that is the internet favicon and its related 'bits', for want of a better word. So without further ado, here is a list of files that you need to add to support all the different devices that can access your site:
The list of all files required to support favicons and splash screen images on all devices.
\nNow you can add all these files to the root directory of your site and have a really messy project or you can add the files to a /content/icons folder in your project and add the following link and meta tags to the head section of your HTML pages:
<!-- Icons & Platform Specific Settings - Favicon generator used to generate the icons below http://realfavicongenerator.net -->\n<!-- shortcut icon - This file contains the following sizes 16x16, 32x32 and 48x48. -->\n<link rel="shortcut icon" href="/content/icons/favicon.ico">\n<!-- favicon-96x96.png - For Google TV https://developer.android.com/training/tv/index.html#favicons. -->\n<link rel="icon" type="image/png" href="/content/icons/favicon-96x96.png" sizes="96x96">\n<!-- favicon-32x32.png - For Safari on Mac OS. -->\n<link rel="icon" type="image/png" href="/content/icons/favicon-32x32.png" sizes="32x32">\n<!-- favicon-16x16.png - The classic favicon, displayed in the tabs. -->\n<link rel="icon" type="image/png" href="/content/icons/favicon-16x16.png" sizes="16x16">\n\n<!-- Android/Chrome -->\n<!-- manifest-json - The location of the browser configuration file. It contains locations of icon files, name of the application and default device screen orientation. Note that the name field is mandatory.\n https://developer.chrome.com/multidevice/android/installtohomescreen. -->\n<link rel="manifest" href="/content/icons/manifest.json">\n<!-- theme-color - The colour of the toolbar in Chrome M39+\n http://updates.html5rocks.com/2014/11/Support-for-theme-color-in-Chrome-39-for-Android -->\n<meta name="theme-color" content="#1E1E1E">\n<!-- favicon-192x192.png - For Android Chrome M36 to M38 this HTML is used. M39+ uses the manifest.json file. -->\n<link rel="icon" type="image/png" href="/content/icons/favicon-192x192.png" sizes="192x192">\n<!-- mobile-web-app-capable - Run Android/Chrome version M31 to M38 in standalone mode, hiding the browser chrome. 
-->\n<!-- <meta name="mobile-web-app-capable" content="yes"> -->\n\n<!-- Apple Icons - You can move all these icons to the root of the site and remove these link elements, if you don't mind the clutter.\n https://developer.apple.com/library/safari/documentation/AppleApplications/Reference/SafariHTMLRef/Introduction.html#//apple_ref/doc/uid/30001261-SW1 -->\n<!-- apple-touch-icon-57x57.png - Android Stock Browser and non-Retina iPhone and iPod Touch -->\n<link rel="apple-touch-icon" sizes="57x57" href="/content/icons/apple-touch-icon-57x57.png">\n<!-- apple-touch-icon-114x114.png - iPhone (with 2× display) iOS ≤ 6 -->\n<link rel="apple-touch-icon" sizes="114x114" href="/content/icons/apple-touch-icon-114x114.png">\n<!-- apple-touch-icon-72x72.png - iPad mini and the first- and second-generation iPad (1× display) on iOS ≤ 6 -->\n<link rel="apple-touch-icon" sizes="72x72" href="/content/icons/apple-touch-icon-72x72.png">\n<!-- apple-touch-icon-144x144.png - iPad (with 2× display) iOS ≤ 6 -->\n<link rel="apple-touch-icon" sizes="144x144" href="/content/icons/apple-touch-icon-144x144.png">\n<!-- apple-touch-icon-60x60.png - Same as apple-touch-icon-57x57.png, for non-retina iPhone with iOS ≥ 7. 
-->\n<link rel="apple-touch-icon" sizes="60x60" href="/content/icons/apple-touch-icon-60x60.png">\n<!-- apple-touch-icon-120x120.png - iPhone (with 2× and 3× display) iOS ≥ 7 -->\n<link rel="apple-touch-icon" sizes="120x120" href="/content/icons/apple-touch-icon-120x120.png">\n<!-- apple-touch-icon-76x76.png - iPad mini and the first- and second-generation iPad (1× display) on iOS ≥ 7 -->\n<link rel="apple-touch-icon" sizes="76x76" href="/content/icons/apple-touch-icon-76x76.png">\n<!-- apple-touch-icon-152x152.png - iPad 3+ (with 2× display) iOS ≥ 7 -->\n<link rel="apple-touch-icon" sizes="152x152" href="/content/icons/apple-touch-icon-152x152.png">\n<!-- apple-touch-icon-180x180.png - iPad and iPad mini (with 2× display) iOS ≥ 8 -->\n<link rel="apple-touch-icon" sizes="180x180" href="/content/icons/apple-touch-icon-180x180.png">\n\n<!-- apple-mobile-web-app-title - The name of the application if pinned to the iOS start screen. -->\n<!-- <meta name="apple-mobile-web-app-title" content=""> -->\n<!-- apple-mobile-web-app-capable - Hide the browser's user interface on iOS, when the app is run in 'standalone' mode. Any links to other pages that are clicked whilst your app is in standalone mode will launch the full Safari browser. -->\n<!-- <meta name="apple-mobile-web-app-capable" content="yes"> -->\n<!-- apple-mobile-web-app-status-bar-style - default/black/black-translucent Styles the iOS status bar. Using black-translucent makes it transparent and overlays it on top of your site, so make sure you have enough margin. -->\n<!-- <meta name="apple-mobile-web-app-status-bar-style" content="black"> -->\n\n<!-- Apple Startup Images - These splash screen images are only shown if apple-mobile-web-app-capable is set to true. 
https://gist.github.com/tfausak/2222823 -->\n<!-- apple-touch-startup-image-1536x2008.png - iOS 6 & 7 iPad (retina, portrait) -->\n<!-- <link rel="apple-touch-startup-image" href="/content/icons/apple-touch-startup-image-1536x2008.png" media="(device-width: 768px) and (device-height: 1024px) and (orientation: portrait) and (-webkit-device-pixel-ratio: 2)"> -->\n<!-- apple-touch-startup-image-1496x2048.png - iOS 6 & 7 iPad (retina, landscape) -->\n<!-- <link rel="apple-touch-startup-image" href="/content/icons/apple-touch-startup-image-1496x2048.png" media="(device-width: 768px) and (device-height: 1024px) and (orientation: landscape) and (-webkit-device-pixel-ratio: 2)"> -->\n<!-- apple-touch-startup-image-768x1004.png - iOS 6 iPad (portrait) -->\n<!-- <link rel="apple-touch-startup-image" href="/content/icons/apple-touch-startup-image-768x1004.png" media="(device-width: 768px) and (device-height: 1024px) and (orientation: portrait) and (-webkit-device-pixel-ratio: 1)"> -->\n<!-- apple-touch-startup-image-748x1024.png - iOS 6 iPad (landscape) -->\n<!-- <link rel="apple-touch-startup-image" href="/content/icons/apple-touch-startup-image-748x1024.png" media="(device-width: 768px) and (device-height: 1024px) and (orientation: landscape) and (-webkit-device-pixel-ratio: 1)"> -->\n<!-- apple-touch-startup-image-640x1096.png - iOS 6 & 7 iPhone 5 -->\n<!-- <link rel="apple-touch-startup-image" href="/content/icons/apple-touch-startup-image-640x1096.png" media="(device-width: 320px) and (device-height: 568px) and (-webkit-device-pixel-ratio: 2)"> -->\n<!-- apple-touch-startup-image-640x920.png - iOS 6 & 7 iPhone (retina) -->\n<!-- <link rel="apple-touch-startup-image" href="/content/icons/apple-touch-startup-image-640x920.png" media="(device-width: 320px) and (device-height: 480px) and (-webkit-device-pixel-ratio: 2)"> -->\n<!-- apple-touch-startup-image-320x460.png - iOS 6 iPhone -->\n<!-- <link rel="apple-touch-startup-image" 
href="/content/icons/apple-touch-startup-image-320x460.png" media="(device-width: 320px) and (device-height: 480px) and (-webkit-device-pixel-ratio: 1)"> -->\n\n<!-- Windows 7 Taskbar - This depends on your site, so no code here. See http://www.buildmypinnedsite.com/windows7 -->\n\n<!-- Windows 8 IE10 -->\n<!-- application-name - The name of the application if pinned to the start screen. -->\n<!-- <meta name="application-name" content=""> -->\n<!-- msapplication-TileColor - The tile colour which shows around your tile image (msapplication-TileImage). -->\n<!-- <meta name="msapplication-TileColor" content="#5cb95c"> -->\n<!-- msapplication-TileImage - The tile image. -->\n<!-- <meta name="msapplication-TileImage" content="/content/icons/mstile-144x144.png"> -->\n\n<!-- Windows 8.1 IE11 -->\n<!-- msapplication-config - The location of the browser configuration file. If you have an RSS feed, go to\n http://www.buildmypinnedsite.com and regenerate the browserconfig.xml file. You will then have a cool live tile! -->\n<meta name="msapplication-config" content="/content/icons/browserconfig.xml">\n\nNow don't be too scared: there are only 24 lines that you need; the rest is all comments describing what each line is for, which I'll go through in the rest of this post.
\nNow, go ahead and take another look above. That is 30 files and almost as many lines of code if you decide to have your files in a nice separate folder. Take a moment to let the insanity of this situation settle in. All we are really trying to do is set an icon for our site!
\nThis approach does use more bandwidth. Those 24 lines take up around 2.8KB if you decide to support everything, or around 1.4KB if you skip support for Apple splash screens, which account for about half the space due to their extremely verbose meta tags.
\nHowever, you should be using GZip compression for transferring your HTML pages over the internet (I'll be covering GZip compression in a subsequent post) so when compressed we are talking around 650 Bytes if you include everything or around 465 Bytes if you remove support for Apple splash screens.
\nAt the end of the day it's a trade-off and I'll leave that decision up to you. You can support all of it, none of it or anything in between. 650 Bytes for every page can add up to a fair amount of bandwidth, especially if you have a large number of requests coming into your site. If you had, say, a million requests, which is not unheard of when you consider that this overhead is added to every page, then you are looking at around 0.6GB of bandwidth, and that's before you add extra bandwidth usage from the images and the Android/Chrome/Windows XML/JSON configuration files.
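As a quick sanity check on those numbers, here is a back-of-the-envelope calculation (the 650 byte figure is the compressed size quoted above, and one million requests is just the example volume from this post):

```python
# Rough bandwidth estimate for the compressed favicon markup overhead.
overhead_bytes = 650          # compressed <head> overhead per page, from above
requests = 1_000_000          # example number of page requests
total_gb = overhead_bytes * requests / 1e9

print(f"{total_gb:.2f} GB")   # 0.65 GB, i.e. the "around 0.6GB" quoted above
```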
\nThe manifest.json and browserconfig.xml files are small, around 1KB each, but they can also be GZip compressed and, more importantly, they can be cached by the browser. A bigger problem is the image files. These files are up to 37KB in size and cannot be compressed further, as PNGs are already compressed, but they can be downloaded once and then cached. It's difficult to calculate how often these files will be downloaded and how much bandwidth this will use.
Then again, how do we measure the value of users who feel more engaged because they get a more customized and integrated experience when using your site? It's a difficult question and the answer will be different for every site.
\nFavicons were introduced in 1999 by Microsoft with Internet Explorer 5. You could add a favicon.ico file to the root of your site and it would get displayed next to the address bar.
Favicons use the ancient .ico image format, which began life as a part of Windows 1.0! A lot of people don't realize that an .ico file can actually contain several images of varying sizes and colour depths. Typically the image sizes include 16x16, 32x32, 48x48, 64x64, 128x128 and 256x256. Windows or your web browser can then choose the appropriate size for display. Most favicons are uncompressed images, so although the images are small, the file size is not as small as it could be.
These days favicons are no longer displayed in the address bar of your browser (IE being an exception). Miscreants were abusing this feature, using padlock favicons to trick unsavvy users into thinking the page was secure and had SSL enabled. Most browsers now only show icons on tabs or when a site is favourited. You can see a table of how desktop browsers use favicons here.
iOS devices can pin your site to the home screen and you can provide icons in a variety of sizes to support phones and tablets with differing screen resolutions. All the files shown above start with apple-touch-icon. Increasingly, websites are being built to look and feel like everyday phone apps. Apple lets you specify three additional meta tags which allow you to customize what happens after your site is pinned to the home screen:
<!-- apple-mobile-web-app-title - The name of the application if pinned to the iOS start screen. -->\n<meta name="apple-mobile-web-app-title" content="Your Site Title">\n<!-- apple-mobile-web-app-capable - Hide the browser's user interface on iOS, when the app is run in 'standalone' mode. \n Any links to other pages that are clicked whilst your app is in standalone mode will launch the full Safari browser. -->\n<meta name="apple-mobile-web-app-capable" content="yes">\n<!-- apple-mobile-web-app-status-bar-style - default/black/black-translucent Styles the iOS status bar. Using \n black-translucent makes it transparent and overlays it on top of your site, so make sure you have enough margin. -->\n<meta name="apple-mobile-web-app-status-bar-style" content="black">
The left screen shows a normal web browsing experience on iOS and the right shows a web app capable experience.
\nIf you decide to make your site web app capable by using the meta tag above, you can also set a splash screen which gets shown when your site is first launched. Once again, there are several sizes depending on the screen resolution and all the images start with apple-touch-startup-image. Find more information about iOS favicons, startup images and meta tags here.
If your site is pinned to the Windows 7 taskbar then you can customize the jump list items with additional links to pages on your site. There are lots of other additional features (see image below) but these require JavaScript and some additional work. Also, I'm not really sure how often people pin websites to the taskbar (I personally have never done it), so I'm not sure if it's worth it. Check out the Windows 7 Build My Pinned Site page for examples and more information.
Windows Phone 8 and Windows 8 take a very interesting approach. Pinning a site to the start screen of one of these devices gives you a very large tile. You can specify an icon for your tile and also an RSS feed URL. The RSS feed is polled and new updates are shown on your tile regularly (I've implemented this feature on this site, so pin this site to your Windows 8 home screen and give it a try). You can also specify a background colour which is used when showing the RSS feed items. Pretty cool, eh?
Now if Microsoft had taken Apple's approach it would be cluttering up the head section of your site with a lot of meta data. Microsoft splits off its tile configuration into a separate browserconfig.xml file (See example below). This is a much cleaner approach and very much welcomed. You can add this file to the root of your site or if you want to move it elsewhere, add a meta tag pointing to it (Note that this file was introduced in Windows 8.1 and Windows 8 still uses meta tags in the head of the page. Windows 8 is on its way out, so I probably would not support it).
<?xml version="1.0" encoding="utf-8"?>\n<browserconfig>\n <msapplication>\n <tile>\n <square70x70logo src="tiny.png"/>\n <square150x150logo src="square.png"/>\n <wide310x150logo src="wide.png"/>\n <square310x310logo src="large.png"/>\n <TileColor>#fff200</TileColor>\n </tile>\n <notification>\n <polling-uri src="http://notifications.buildmypinnedsite.com/?feed=https://rehansaeed.com/feed/&amp;id=1"/>\n <polling-uri2 src="http://notifications.buildmypinnedsite.com/?feed=https://rehansaeed.com/feed/&amp;id=2"/>\n <polling-uri3 src="http://notifications.buildmypinnedsite.com/?feed=https://rehansaeed.com/feed/&amp;id=3"/>\n <polling-uri4 src="http://notifications.buildmypinnedsite.com/?feed=https://rehansaeed.com/feed/&amp;id=4"/>\n <polling-uri5 src="http://notifications.buildmypinnedsite.com/?feed=https://rehansaeed.com/feed/&amp;id=5"/>\n <frequency>30</frequency>\n <cycle>1</cycle>\n </notification>\n </msapplication>\n</browserconfig>\n\nAndroid/Chrome recently introduced new favicons and browser settings. Interestingly, their solution looks very similar to Microsoft's approach. Microsoft includes all its settings in a browserconfig.xml file, which can be placed in the root of your site or referred to with a meta tag. Android/Chrome has taken a very similar step and introduced a manifest.json file which you can also optionally point to in your HTML as shown below.
<!-- manifest-json - The location of the browser configuration file. It contains locations of icon files, name of the application and default device screen orientation. Note that the name field is mandatory.\n https://developer.chrome.com/multidevice/android/installtohomescreen. -->\n<link rel="manifest" href="/content/icons/manifest.json">\n\nThe manifest.json file contains the name of the site, optional page orientation settings and the location of favicon images. Unfortunately, the name of the site is not an optional field but required according to the specification. The rest of the file is dedicated to specifying the location of the various favicon images of varying pixel densities. You can also optionally control the orientation of the site and how it appears on an Android device. One new feature is the ability to set the browser chrome theme colour. This can be done with the theme-color meta tag and examples of the results can be seen below:
<!-- theme-color - The colour of the toolbar in Chrome M39+\n http://updates.html5rocks.com/2014/11/Support-for-theme-color-in-Chrome-39-for-Android -->\n<meta name="theme-color" content="#1E1E1E">
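The manifest.json file itself is not shown in this post, so here is a minimal sketch of what one might contain. The field names follow Chrome's web app manifest documentation, but the values and icon path are placeholders:

```json
{
  "name": "Your Site Name",
  "icons": [
    {
      "src": "/content/icons/favicon-192x192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ],
  "display": "standalone",
  "orientation": "portrait"
}
```

Only the name field is mandatory; the icons, display and orientation members are optional.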
Older versions of Android (M31 to M38) don't use the manifest.json file but are very similar to iOS and even use some of the iOS icons if they are provided, as iOS icons tend to be a little higher resolution and more widely supported. Android also has the ability to hide the browser chrome to make your site behave like an app. Its meta tag has a different name:
<!-- favicon-192x192.png - For Android Chrome M36 to M38 this HTML is used. M39+ uses the manifest.json file. -->\n<link rel="icon" type="image/png" href="/content/icons/favicon-192x192.png" sizes="192x192">\n<!-- mobile-web-app-capable - Run Android/Chrome version M31 to M38 in standalone mode, hiding the browser chrome. -->\n<!-- <meta name="mobile-web-app-capable" content="yes"> -->\n\nFavicons are used in a few other places too, such as when pinning your site to the Windows taskbar or even on your television.
\nEven if you decide not to make your site web app capable and support just the basic iOS, Android and Windows icons and settings, you're still in for a fair amount of work to create all the right images and get all the meta tags just right.
\nThe ASP.NET Core Boilerplate project template can help you get started quickly as all of the above files and meta tags are built in from the start, all you need to do is delete the ones you don't want (A lot quicker than starting from scratch).
\nI also highly recommend using Real Favicon Generator in conjunction with Microsoft's Windows 8 Build My Pinned Site and Windows 7 Build My Pinned Site pages. These three sites combined can help you get most of the way there and fairly quickly.
\nThe Real Favicon Generator site above will generate a .ico file for you but to get a real pixel perfect icon I personally use Paint.NET in conjunction with the Icon plugin to edit .ico image files.
So what is the future? Higher screen resolutions and a wider variety of devices of different sizes are now the norm and each one seems to need its own images. Each manufacturer has added their own meta tags too.
\nOne approach would be to standardize a set of three or four image sizes and then also provide a colour meta tag. The image can then be shown in the centre and the colour shown around the image to fill in any gaps. This means that the image does not have to be resized, and this approach would also support splash screens and non-rectangular or odd-shaped icons. Indeed, this is the approach Microsoft has already taken with their Windows 8.1 Store App splash screens and it works well in my experience.
\nAn even better, web-standards-based approach is to use SVG favicons. These are vector images which do not lose fidelity even when scaled. Unfortunately, this feature is currently only supported by the Firefox desktop browser. If all browsers implemented it, we could go back to the days of Internet Explorer 5 when we only needed to create a single favicon.ico file. An SVG favicon can be set by adding the following tag:
\n<link rel="icon" type="image/svg+xml" href="favicon.svg"/>\n\nLet's all hope more browsers support this simple approach but don't hold your breath.
\n", "url": "https://rehansaeed.com/internet-favicon-madness/", "title": "Internet Favicon Madness (Updated)", "summary": "Add favicon's to your website to support iOS, Android, Windows 7, Windows 8, Windows Phone and more. Find out where icons are used on each platform and how.", "image": "https://rehansaeed.com/images/hero/Favicons-1366x768.png", "date_modified": "2014-11-24T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/asp-net-mvc-boilerplate/", "content_html": "ASP.NET Core Boilerplate is a professional ASP.NET MVC template for building secure, fast, robust and adaptable web applications or sites. It provides the minimum amount of code required on top of the default MVC template provided by Microsoft.
\n
The main benefits of using this template are:
\nTwo templates are provided: one for ASP.NET 4.6 MVC 5 and another for ASP.NET Core, which is currently under development and is missing some features due to ASP.NET Core still being in beta. For more information about what's new in the ASP.NET Core template, see here.
\n

The default MVC template that Visual Studio gives you does not make the best use of the tools available. It's insecure, slow, and has a very basic feature list (that's the point of it). ASP.NET Core Boilerplate provides you with a few more pieces of the puzzle to get you started quicker. It makes liberal use of comments and even gives you a check-list of tasks which you need to perform to make it even better.
\nThe rest of this article is going to briefly go through the improvements made over using the default MVC template. I'll then finish up with instructions on how you can use it. Also, look out for more posts in the future, where I will go through each feature in detail.
\nThe default MVC template is not as secure as it could be. There are various settings (Mostly in the web.config file) which are insecure by default. For example, it leaks information about which version of IIS you are using and allows external scripts to access cookies by default!
ASP.NET Core Boilerplate makes everything secure by default but goes further and uses various HTTP headers which are sent to the browser to restrict things further.
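As a hedged illustration of the kind of web.config hardening being described (these are standard ASP.NET settings, though the template's full set of changes is longer):

```xml
<system.web>
  <!-- Stop client-side script from reading cookies. -->
  <httpCookies httpOnlyCookies="true" />
  <!-- Remove the X-AspNet-Version HTTP response header. -->
  <httpRuntime enableVersionHeader="false" />
</system.web>
```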
\nIt also makes use of the new Content Security Policy (CSP) HTTP Header using the NWebSec NuGet packages. CSP revolutionizes web security and I highly recommend reading the above link.
\nSetting up SSL/TLS, so that your site runs over HTTPS, is made easy with step-by-step instructions and links.
\nThe default MVC template does a pretty poor job in the performance department. Probably because they don't make any assumptions about which web server you are using. Most of the world and dog that are writing ASP.NET MVC sites use IIS and there are settings in the web.config file under the system.webServer section which can make a big difference when it comes to performance.
ASP.NET Core Boilerplate does make that assumption. It turns on GZip compression for static and dynamic files being sent to the browser, making them smaller and quicker to download. It also uses Content Delivery Networks (CDNs) by default to make common scripts like jQuery quicker to download (you can turn this off of course, but the point is that ASP.NET Core Boilerplate is fast by default).
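For example, the IIS compression settings in question live under the system.webServer section; a minimal sketch:

```xml
<system.webServer>
  <!-- Compress both static files and dynamically generated responses (IIS 7+). -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>
```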
\nThat's not all! There are a bunch of other tweaks and examples of practices which can help improve the performance of the site. ASP.NET Core Boilerplate achieves a score of 96/100 on YSlow (it's not possible to get the full 100 as some of its criteria contradict each other and site scripts would need to be moved to a CDN).
\nThe default ASP.NET MVC template takes no consideration of Search Engine Optimization at all. ASP.NET Core Boilerplate adds a dynamically generated robots.txt file to tell search engines which pages they can index. It also adds a dynamically generated sitemap.xml file where you can help search engines even further by giving them links to all your pages.
ASP.NET MVC has some very useful settings for appending trailing slashes to URLs and making all URLs lower case. Unfortunately, both of these are turned off by default, which is terrible for SEO. This project turns them on by default.
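The two settings in question are the LowercaseUrls and AppendTrailingSlash properties on RouteCollection. A sketch of turning them on, assuming the standard RouteConfig class that the MVC template generates:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Both settings are off by default, which is bad for SEO.
        routes.LowercaseUrls = true;        // generate lower-case outbound URLs
        routes.AppendTrailingSlash = true;  // append a trailing slash to outbound URLs

        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}
```

Note that these properties only affect URLs generated by the routing system, which is why the canonical-URL filter mentioned below is still needed for incoming requests.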
\nIt also includes an MVC filter which helps to redirect non-canonical URLs (URLs without a trailing slash or with mixed-case characters, which are considered different URLs by search engines) to their canonical equivalent.
\n4% of the world population is estimated to be visually impaired, while 0.55% are blind. Get more statistics here. ASP.NET Core Boilerplate ensures that your site is accessible by adding aria attributes to your HTML mark-up and special short-cuts for people using screen readers.
\nWebsites need to reach as many people as possible and look good on a range of different devices. ASP.NET Core Boilerplate supports browsers as old as IE8 (IE8 still has around 4% market share and is mostly used by corporations too lazy to port their old websites to newer browsers).
\nASP.NET Core Boilerplate also supports devices other than desktop browsers as much as possible. It has default icons and splash screens for Windows 8, Android, Apple Devices and a few other device specific settings included by default.
\nAt some point your site is probably going to throw an exception and you will need to handle and log that exception to be able to understand and fix it. ASP.NET Core Boilerplate includes Elmah, the popular error logging add-in. It's all preconfigured and ready to use.
\nASP.NET Core Boilerplate uses popular Content Delivery Networks (CDN) from Google and Microsoft but what happens in the unlikely event that these go down? Well, ASP.NET Core Boilerplate provides backups for these.
\nNot only that but standard error pages such as 500 Internal Server Error, 404 Not Found and many others are built in to the template. ASP.NET Core Boilerplate even includes IIS configuration to protect you from Denial-of-Service (DoS) attacks.
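Custom error pages of this kind are typically wired up in web.config. A minimal sketch; the /error/* paths are hypothetical and depend on your routes:

```xml
<system.webServer>
  <!-- Serve friendly error pages instead of the IIS defaults. -->
  <httpErrors errorMode="Custom" existingResponse="Replace">
    <remove statusCode="404" />
    <error statusCode="404" path="/error/notfound/" responseMode="ExecuteURL" />
    <remove statusCode="500" />
    <error statusCode="500" path="/error/" responseMode="ExecuteURL" />
  </httpErrors>
</system.webServer>
```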
\nASP.NET Core Boilerplate makes use of Glimpse (As advertised by Scott Hanselman). It's a great tool to use as you are developing, to find performance problems and bugs. Of course, Glimpse is all preconfigured, so you don't need to lift a finger to install it.
\nDoing things right does sometimes take a little extra time. Using the Inversion of Control (IOC) pattern for example should be a default. ASP.NET Core Boilerplate uses the Autofac IOC container by default. Some people get a bit tribal when talking about IOC containers but to be honest, they all work great. I picked Autofac because it has lots of helpers for ASP.NET MVC and Microsoft even uses it for Azure Mobile Services.
\nASP.NET Core Boilerplate also makes use of the popular LESS format for making life easier with CSS. As an example, it can make overriding colours and fonts in the default Bootstrap CSS a cinch.
\nASP.NET MVC is a complicated beast. You can end up with lots of magic strings which can be a nightmare when renaming something. There are many ways of eliminating these magic strings but most trade maintainability for slower performance. ASP.NET Core Boilerplate makes extensive use of constants which are a trade-off between maintainability and performance, giving you the best of both worlds.
\nAn Atom 1.0 feed has been included by default. Atom was chosen over RSS because it is the newer and better specification. PubSubHubbub 0.4 support has also been built in, allowing you to push feed updates to subscribers.
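A feed like this is advertised to browsers and feed readers with a link element in the head of the page; a minimal sketch, where the /feed/ URL and title are placeholders:

```html
<!-- Advertise the Atom feed to browsers and feed readers. -->
<link rel="alternate" type="application/atom+xml" title="Site Feed" href="/feed/">
```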
\nThere is a lot more to implementing search in your application than it sounds. ASP.NET Core Boilerplate includes a search feature by default but leaves it open for you to choose how you want to implement it. It also implements Open Search XML right out of the box. Read Scott Hanselman talk about this feature here.
\nOpen Graph meta tags and Twitter Card meta tags are included by default. Not only that, but ASP.NET Core Boilerplate includes fully documented HTML helpers that allow you to generate Open Graph object or Twitter Card meta tags easily and correctly.
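For reference, this is roughly what the generated markup looks like. The property names come from the Open Graph and Twitter Card specifications; the values are placeholders:

```html
<!-- Open Graph - used by Facebook and others when your page is shared. -->
<meta property="og:title" content="Your Page Title">
<meta property="og:type" content="website">
<meta property="og:url" content="https://example.com/page/">
<meta property="og:image" content="https://example.com/content/icons/favicon-192x192.png">
<!-- Twitter Card - used when your page is shared on Twitter. -->
<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@YourTwitterHandle">
```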
\nThat's easy, just choose one of the following options:
\nFile -> New Project -> Web.\ngit clone https://github.com/Dotnet-Boxed/Templates\n\nYou can find release notes for each version here and a To-Do list of new features and enhancements coming soon here.
\nPlease report any bugs or issues on the GitHub issues page here.
\nAt some point, I will try to create a Visual Studio Deployment package (VSIX) and list this project template on the Visual Studio extensions site. To use the template, it will be as easy as choosing ASP.NET Core Boilerplate from the online templates in the File -> New Project -> Online Template menu. Unbelievably, it's actually pretty complicated to create one of these. I found the Export Template Wizard Visual Studio extension which can do this easily but it's not been updated since Visual Studio 2010.
I am also taking a look at creating separate Visual Studio templates which include ASP.NET Web API and OAuth authentication. This is, of course, an open source project, so I fully expect contributions and suggestions from the community.
\n", "url": "https://rehansaeed.com/asp-net-mvc-boilerplate/", "title": "ASP.NET Core Boilerplate", "summary": "ASP.NET Core Boilerplate is a professional ASP.NET MVC template for building secure, fast, robust and adaptable web applications or sites.", "image": "https://rehansaeed.com/images/hero/ASP.NET-Core-Boilerplate-1366x768.png", "date_modified": "2014-11-12T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/elysium-extra-1-1-released/", "content_html": "https://www.youtube.com/watch?v=PGM_uBy99GA
\nVersion 1.1.4 of Elysium Extra has just been released! If you've never heard of it, Elysium Extra is a Windows Presentation Foundation (WPF) SDK which provides a wide variety of controls and styles. Here are the relevant links to get started with the project:
\n
At the time of writing the NuGet package has been downloaded 900 times, which is pretty exciting given that it's been live for only a few months and the project had no theme support in its early life.
\nWPF has not been getting a lot of love recently. You only have to trawl the internet to see all the old WPF projects which have died or gone into hibernation with little or no new updates. I've also seen a lot of 'troll-like' comments on Microsoft comment boards asking why no more updates for WPF have been forthcoming.
\nMy personal opinion is that WPF is a very mature product and does not need as many new 'features'. Even so, there have been minor updates from Microsoft fairly recently as part of .NET 4.5. Let us not forget that Visual Studio is written in WPF and the technology is being maintained. There is a lot of noise being made about upstart XAML technologies like Windows Phone and Windows Store apps (I've written a few myself and they're great too), so sometimes it's easy to overlook WPF.
\nThe latest version of Elysium Extra adds full theming support. There is now a Dark and Light theme (A bit like Windows Store Apps). You can even change the Accent and Contrast colours dynamically on the fly. I've taken a screenshot of the sample application in the Dark theme with a nice red accent colour:
\n
So how do you change the theme? Well, you can do it in XAML by changing your App.xaml file like so:
\n<extra:ElysiumApplication x:Class="[YOUR NAMESPACE GOES HERE].App"\n xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"\n xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"\n xmlns:extra="http://schemas.extra.com/ui"\n AccentColor="Red"\n ContrastColor="LightBlue"\n SemitransparentContrastColor="LightCoral"\n Theme="Dark"\n StartupUri="MainWindow.xaml"/>\n\nIn the above sample code, I'm setting the theme to dark and changing the three theme related colours. This is all totally optional of course. You can even change the theme in code behind instead like so:
\npublic partial class App\n{\n public App()\n {\n this.Theme = Theme.Dark;\n this.AccentColor = Colors.Red;\n this.ContrastColor = Colors.LightBlue;\n this.SemitransparentContrastColor = Colors.LightCoral;\n }\n}\n\nOne final feature that I think is very cool is that individual controls can now have a different theme from the rest of the application. You can take a look at the example below where there are two text boxes but one of them has the theme explicitly set to Dark.
\n
<extra:Window x:Class="WpfApplication1.MainWindow"\n xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"\n xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"\n xmlns:extra="http://schemas.extra.com/ui"\n Height="100" \n Title="Main Window" \n Width="200">\n <StackPanel>\n <TextBox Margin="5"\n Text="Hello World"/>\n <TextBox extra:ThemeManager.Theme="Dark"\n Margin="5"\n Text="Hello World"/>\n </StackPanel>\n</extra:Window>\n\nIn the previous version of Elysium Extra, we were making judicious use of ResourceDictionary merging to allow us to split up our XAML files, so that each control has its own separate XAML file. This led to a large amount of duplication of objects in memory because the contents of the various ResourceDictionary instances were being instantiated multiple times.
There are a few different approaches to this WPF problem. One that most library writers take (including Microsoft) is to have only a single massive XAML file containing all styles and templates. I hope you like scrolling and never being able to find anything, because this is very difficult to maintain. Another approach, which the original Elysium project took, was to split your XAML files but then use .tt template files to generate a single ResourceDictionary, which I thought was an elegant approach.
Elysium Extra has taken a different route. There is a new SharedResourceDictionary type which only instantiates its contents once. You can use this type yourself too, in the same way you use ResourceDictionary. It's very useful if you are merging dictionaries from more than one location. Here is an example taken from Elysium Extra itself where we are merging two resource dictionaries:
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"\n xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">\n <ResourceDictionary.MergedDictionaries>\n <controls:SharedResourceDictionary Source="/Framework.UI;component/Themes/WPF/Base/Converter.xaml"/>\n <controls:SharedResourceDictionary Source="/Framework.UI;component/Themes/WPF/Base/Brush.xaml"/>\n </ResourceDictionary.MergedDictionaries>\n \n <!-- Your Code Here -->\n \n</ResourceDictionary>\n\n
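To see why this helps, here is a minimal sketch of how such a caching resource dictionary can be implemented (a simplified illustration of the general pattern, not the actual Elysium Extra code): the first time a given Source is requested it is loaded and cached in a static dictionary, and every subsequent merge reuses that same instance.

```csharp
using System;
using System.Collections.Generic;
using System.Windows;

// Simplified sketch only: a real implementation also has to cope with the
// designer, pack URIs and so on. The key idea is the static cache, which
// ensures each XAML file's contents are instantiated exactly once no matter
// how many times the dictionary is merged.
public class SharedResourceDictionary : ResourceDictionary
{
    private static readonly Dictionary<Uri, ResourceDictionary> Cache =
        new Dictionary<Uri, ResourceDictionary>();

    private Uri source;

    public new Uri Source
    {
        get { return this.source; }
        set
        {
            this.source = value;

            ResourceDictionary dictionary;
            if (!Cache.TryGetValue(value, out dictionary))
            {
                // First request for this URI: load it normally and cache it.
                dictionary = new ResourceDictionary() { Source = value };
                Cache.Add(value, dictionary);
            }

            // Subsequent requests merge the cached instance instead of
            // parsing and instantiating the XAML again.
            this.MergedDictionaries.Add(dictionary);
        }
    }
}
```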
So far, there are two contributing developers working on Elysium Extra (myself and zsKengren, who has contributed new controls which are still to be added to the library) and 22 people following the project according to GitHub. That is not nearly enough and I would like to see more community activity.
\nElysium Extra is a totally open source project. You can look at the source code and even use bits of it in your own projects freely. I occasionally get people contacting me to tell me that they want to use the project, or even how much it has helped them. That's great feedback and long may it continue!
\n", "url": "https://rehansaeed.com/elysium-extra-1-1-released/", "title": "Elysium Extra 1.1 Released", "summary": "Elysium Extra Version 1.1 is a Windows Presentation Foundation (WPF) SDK providing Metro styles for built in WPF controls and some custom controls.", "image": "https://rehansaeed.com/images/hero/Elysium-Extra-1366x768.png", "date_modified": "2014-11-05T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/model-view-viewmodel-mvvm-part4-inotifydataerrorinfo/", "content_html": "In this next part, I'm going to discuss validation of your view models using the INotifyDataErrorInfo interface. Validation is an often ignored part of the Model-View-ViewModel (MVVM) story. If you need to create a form for your users to fill in (which is probably most applications, I would have thought), then you probably need to validate user input in some way and the INotifyDataErrorInfo interface can get you there.
\n
That was a valid TextBox using INotifyDataErrorInfo.

That was an invalid TextBox using INotifyDataErrorInfo.
In the example above you can see a name text box which requires text in order to be in a valid state. In the valid state there is a big green tick next to the text box and, conversely, in an invalid state there is a big yellow warning sign, the text box background becomes pink and you get a nice tool-tip telling you what the error is. By the way, this example is taken from my Elysium Extra WPF project which is freely available on GitHub.
\nYou can see the interface and its corresponding event arguments below. If the name property in our view model changes and is empty, then the state of our view model is invalid: we raise the ErrorsChanged event, make the HasErrors property return true and make any calls to GetErrors return a list of the errors (In our case we only have one but there could be multiple errors).
namespace System.ComponentModel\n{\n public interface INotifyDataErrorInfo\n {\n bool HasErrors { get; }\n\n event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;\n\n IEnumerable GetErrors(string propertyName);\n }\n \n public class DataErrorsChangedEventArgs : EventArgs\n {\n public DataErrorsChangedEventArgs(string propertyName);\n\n public virtual string PropertyName { get; }\n }\n}\n\nThat's a fair amount of work and a base class to do all that makes life much easier. So what are the main aims of a base class implementing INotifyDataErrorInfo?
The base class should implement INotifyPropertyChanged. Handily, I showed how best to create a base class for that in my last article in this series, so our new base class can inherit from the NotifyPropertyChanges base class. It should also hide the ErrorsChanged C# event behind a Reactive Extensions (Rx) observable. So, without further ado, here is my implementation. Note that there are four classes:
\nnamespace Framework.ComponentModel\n{\n using System;\n using System.Collections;\n using System.Collections.Generic;\n using System.ComponentModel;\n using System.Diagnostics;\n using System.Linq;\n using System.Reactive.Linq;\n using System.Reflection;\n using System.Runtime.CompilerServices;\n using Framework.ComponentModel.Rules;\n\n /// <summary>\n /// Provides functionality to provide errors for the object if it is in an invalid state.\n /// </summary>\n /// <typeparam name="T">The type of this instance.</typeparam>\n public abstract class NotifyDataErrorInfo<T> : NotifyPropertyChanges, INotifyDataErrorInfo\n where T : NotifyDataErrorInfo<T>\n {\n private const string HasErrorsPropertyName = "HasErrors";\n\n private static RuleCollection<T> rules = new RuleCollection<T>();\n\n private Dictionary<string, List<object>> errors;\n\n /// <summary>\n /// Occurs when the validation errors have changed for a property or for the entire object. \n /// </summary>\n event EventHandler<DataErrorsChangedEventArgs> INotifyDataErrorInfo.ErrorsChanged\n {\n add { this.errorsChanged += value; }\n remove { this.errorsChanged -= value; }\n }\n\n /// <summary>\n /// Occurs when the validation errors have changed for a property or for the entire object. \n /// </summary>\n private event EventHandler<DataErrorsChangedEventArgs> errorsChanged;\n\n /// <summary>\n /// Gets the when errors changed observable event. Occurs when the validation errors have changed for a property or for the entire object. \n /// </summary>\n /// <value>\n /// The when errors changed observable event.\n /// </value>\n public IObservable<string> WhenErrorsChanged\n {\n get\n {\n return Observable\n .FromEventPattern<DataErrorsChangedEventArgs>(\n h => this.errorsChanged += h,\n h => this.errorsChanged -= h)\n .Select(x => x.EventArgs.PropertyName);\n }\n }\n\n /// <summary>\n /// Gets a value indicating whether the object has validation errors. 
\n /// </summary>\n /// <value><c>true</c> if this instance has errors, otherwise <c>false</c>.</value>\n public virtual bool HasErrors\n {\n get\n {\n this.InitializeErrors();\n return this.errors.Count > 0;\n }\n }\n\n /// <summary>\n /// Gets the rules which provide the errors.\n /// </summary>\n /// <value>The rules this instance must satisfy.</value>\n protected static RuleCollection<T> Rules => rules;\n\n /// <summary>\n /// Gets the validation errors for the entire object.\n /// </summary>\n /// <returns>A collection of errors.</returns>\n public IEnumerable GetErrors() => this.GetErrors(null);\n\n /// <summary>\n /// Gets the validation errors for a specified property or for the entire object.\n /// </summary>\n /// <param name="propertyName">Name of the property to retrieve errors for. <c>null</c> to \n /// retrieve all errors for this instance.</param>\n /// <returns>A collection of errors.</returns>\n public IEnumerable GetErrors(string propertyName)\n {\n Debug.Assert(\n string.IsNullOrEmpty(propertyName) ||\n (this.GetType().GetRuntimeProperty(propertyName) != null),\n "Check that the property name exists for this instance.");\n\n this.InitializeErrors();\n\n IEnumerable result;\n if (string.IsNullOrEmpty(propertyName))\n {\n List<object> allErrors = new List<object>();\n\n foreach (KeyValuePair<string, List<object>> keyValuePair in this.errors)\n {\n allErrors.AddRange(keyValuePair.Value);\n }\n\n result = allErrors;\n }\n else\n {\n if (this.errors.ContainsKey(propertyName))\n {\n result = this.errors[propertyName];\n }\n else\n {\n result = new List<object>();\n }\n }\n\n return result;\n }\n\n /// <summary>\n /// Raises the PropertyChanged event.\n /// </summary>\n /// <param name="propertyName">Name of the property.</param>\n protected override void OnPropertyChanged([CallerMemberName] string propertyName = null)\n {\n base.OnPropertyChanged(propertyName);\n\n if (string.IsNullOrEmpty(propertyName))\n {\n this.ApplyRules();\n }\n else\n {\n 
this.ApplyRules(propertyName);\n }\n\n base.OnPropertyChanged(HasErrorsPropertyName);\n }\n\n /// <summary>\n /// Called when the errors have changed.\n /// </summary>\n /// <param name="propertyName">Name of the property.</param>\n protected virtual void OnErrorsChanged([CallerMemberName] string propertyName = null)\n {\n Debug.Assert(\n string.IsNullOrEmpty(propertyName) ||\n (this.GetType().GetRuntimeProperty(propertyName) != null),\n "Check that the property name exists for this instance.");\n\n EventHandler<DataErrorsChangedEventArgs> eventHandler = this.errorsChanged;\n\n if (eventHandler != null)\n {\n eventHandler(this, new DataErrorsChangedEventArgs(propertyName));\n }\n }\n\n /// <summary>\n /// Applies all rules to this instance.\n /// </summary>\n private void ApplyRules()\n {\n this.InitializeErrors();\n\n foreach (string propertyName in rules.Select(x => x.PropertyName))\n {\n this.ApplyRules(propertyName);\n }\n }\n\n /// <summary>\n /// Applies the rules to this instance for the specified property.\n /// </summary>\n /// <param name="propertyName">Name of the property.</param>\n private void ApplyRules(string propertyName)\n {\n this.InitializeErrors();\n\n List<object> propertyErrors = rules.Apply((T)this, propertyName).ToList();\n\n if (propertyErrors.Count > 0)\n {\n if (this.errors.ContainsKey(propertyName))\n {\n this.errors[propertyName].Clear();\n }\n else\n {\n this.errors[propertyName] = new List<object>();\n }\n\n this.errors[propertyName].AddRange(propertyErrors);\n this.OnErrorsChanged(propertyName);\n }\n else if (this.errors.ContainsKey(propertyName))\n {\n this.errors.Remove(propertyName);\n this.OnErrorsChanged(propertyName);\n }\n }\n\n /// <summary>\n /// Initializes the errors and applies the rules if not initialized.\n /// </summary>\n private void InitializeErrors()\n {\n if (this.errors == null)\n {\n this.errors = new Dictionary<string, List<object>>();\n\n this.ApplyRules();\n }\n }\n }\n}\n\nnamespace 
Framework.ComponentModel.Rules\n{\n using System;\n\n /// <summary>\n /// A named rule containing an error to be used if the rule fails.\n /// </summary>\n /// <typeparam name="T">The type of the object the rule applies to.</typeparam>\n public abstract class Rule<T>\n {\n private string propertyName;\n private object error;\n\n /// <summary>\n /// Initializes a new instance of the <see cref="Rule<T>"/> class.\n /// </summary>\n /// <param name="propertyName">The name of the property this instance applies to.</param>\n /// <param name="error">The error message if the rules fails.</param>\n protected Rule(string propertyName, object error)\n {\n if (propertyName == null)\n {\n throw new ArgumentNullException(nameof(propertyName));\n }\n\n if (error == null)\n {\n throw new ArgumentNullException(nameof(error));\n }\n\n this.propertyName = propertyName;\n this.error = error;\n }\n\n /// <summary>\n /// Gets the name of the property this instance applies to.\n /// </summary>\n /// <value>The name of the property this instance applies to.</value>\n public string PropertyName => this.propertyName;\n\n /// <summary>\n /// Gets the error message if the rules fails.\n /// </summary>\n /// <value>The error message if the rules fails.</value>\n public object Error => this.error;\n\n /// <summary>\n /// Applies the rule to the specified object.\n /// </summary>\n /// <param name="obj">The object to apply the rule to.</param>\n /// <returns>\n /// <c>true</c> if the object satisfies the rule, otherwise <c>false</c>.\n /// </returns>\n public abstract bool Apply(T obj);\n }\n}\n\nnamespace Framework.ComponentModel.Rules\n{\n using System;\n\n /// <summary>\n /// Determines whether or not an object of type <typeparamref name="T"/> satisfies a rule and\n /// provides an error if it does not.\n /// </summary>\n /// <typeparam name="T">The type of the object the rule can be applied to.</typeparam>\n public sealed class DelegateRule<T> : Rule<T>\n {\n private Func<T, bool> rule;\n\n 
/// <summary>\n /// Initializes a new instance of the <see cref="DelegateRule<T>"/> class.\n /// </summary>\n /// <param name="propertyName">>The name of the property the rules applies to.</param>\n /// <param name="error">The error if the rules fails.</param>\n /// <param name="rule">The rule to execute.</param>\n public DelegateRule(string propertyName, object error, Func<T, bool> rule)\n : base(propertyName, error)\n {\n if (rule == null)\n {\n throw new ArgumentNullException(nameof(rule));\n }\n\n this.rule = rule;\n }\n\n /// <summary>\n /// Applies the rule to the specified object.\n /// </summary>\n /// <param name="obj">The object to apply the rule to.</param>\n /// <returns>\n /// <c>true</c> if the object satisfies the rule, otherwise <c>false</c>.\n /// </returns>\n public override bool Apply(T obj) => this.rule(obj);\n }\n}\n\nnamespace Framework.ComponentModel.Rules\n{\n using System;\n using System.Collections.Generic;\n using System.Collections.ObjectModel;\n\n /// <summary>\n /// A collection of rules.\n /// </summary>\n /// <typeparam name="T">The type of the object the rules can be applied to.</typeparam>\n public sealed class RuleCollection<T> : Collection<Rule<T>>\n {\n /// <summary>\n /// Adds a new <see cref="Rule{T}"/> to this instance.\n /// </summary>\n /// <param name="propertyName">The name of the property the rules applies to.</param>\n /// <param name="error">The error if the object does not satisfy the rule.</param>\n /// <param name="rule">The rule to execute.</param>\n public void Add(string propertyName, object error, Func<T, bool> rule) =>\n this.Add(new DelegateRule<T>(propertyName, error, rule));\n\n /// <summary>\n /// Applies the <see cref="Rule{T}"/>'s contained in this instance to <paramref name="obj"/>.\n /// </summary>\n /// <param name="obj">The object to apply the rules to.</param>\n /// <param name="propertyName">Name of the property we want to apply rules for. 
<c>null</c>\n /// to apply all rules.</param>\n /// <returns>A collection of errors.</returns>\n public IEnumerable<object> Apply(T obj, string propertyName)\n {\n List<object> errors = new List<object>();\n\n foreach (Rule<T> rule in this)\n {\n if (string.IsNullOrEmpty(propertyName) || rule.PropertyName.Equals(propertyName))\n {\n if (!rule.Apply(obj))\n {\n errors.Add(rule.Error);\n }\n }\n }\n\n return errors;\n }\n }\n}\n\nAn example of how you can use this base class is as follows.
\npublic class ZombieViewModel : NotifyDataErrorInfo<ZombieViewModel>\n{\n private string name;\n private int limbsRemaining;\n\n static ZombieViewModel()\n {\n Rules.Add(new DelegateRule<ZombieViewModel>(\n "Name",\n "Name cannot be empty.",\n x => !string.IsNullOrEmpty(x.Name)));\n Rules.Add(new DelegateRule<ZombieViewModel>(\n "LimbsRemaining",\n "A zombie can't have less than zero limbs.",\n x => x.LimbsRemaining >= 0));\n Rules.Add(new DelegateRule<ZombieViewModel>(\n "LimbsRemaining",\n "A zombie can only have up to four limbs.",\n x => x.LimbsRemaining <= 4));\n }\n\n public string Name\n {\n get => this.name;\n set => this.SetProperty(ref this.name, value);\n }\n\n public int LimbsRemaining\n {\n get => this.limbsRemaining;\n set => this.SetProperty(ref this.limbsRemaining, value);\n }\n}\n\nAs you can see, our view model has two properties and as shown in the last post in the series we are using the SetProperty method to raise PropertyChanged events. The only bit I've added for validation is in the static constructor containing the three validation rules.
The Name property has a single rule applied to it. When the name is empty a validation error is raised. The LimbsRemaining property has two rules and when it is less than zero or more than four, validation errors are raised auto-magically.
Under the covers, each time the PropertyChanged event is raised, we apply the corresponding rules relating to the property and, if a rule fails, we raise the ErrorsChanged event, raise a PropertyChanged event for the HasErrors property (Which is now true) and finally ensure that any calls to GetErrors now return the error shown in the rule.
The DelegateRule<T> class shown above is a really easy way to provide nice, simple rules. If you need something more complex you can create your own rule by inheriting from the Rule<T> base class. An example of this could be a custom rule to validate an email address or telephone number.
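For example, here is a sketch of such a custom rule. The RegexRule<T> name and its property selector are my own invention for illustration, and an abridged copy of this post's Rule<T> base class is included so the snippet stands alone:

```csharp
using System;
using System.Text.RegularExpressions;

// Abridged copy of the Rule<T> base class from this post, included only so
// that this sketch compiles on its own.
public abstract class Rule<T>
{
    protected Rule(string propertyName, object error)
    {
        this.PropertyName = propertyName;
        this.Error = error;
    }

    public string PropertyName { get; private set; }

    public object Error { get; private set; }

    public abstract bool Apply(T obj);
}

// Hypothetical custom rule: validates a string property against a regular
// expression, so the same class can cover email addresses, telephone
// numbers and similar formats.
public sealed class RegexRule<T> : Rule<T>
{
    private readonly Func<T, string> selector;
    private readonly Regex regex;

    public RegexRule(string propertyName, object error, Func<T, string> selector, string pattern)
        : base(propertyName, error)
    {
        this.selector = selector;
        this.regex = new Regex(pattern);
    }

    public override bool Apply(T obj)
    {
        // A null value never matches; the rule fails and its Error is reported.
        string value = this.selector(obj);
        return (value != null) && this.regex.IsMatch(value);
    }
}
```

It would then be registered in a view model's static constructor just like the DelegateRule<T> examples above, passing the property name, an error message, a selector such as x => x.Email and a suitable pattern.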
C# events are old school. Reactive Extensions (Rx) provides a cleaner and far more powerful drop-in replacement for C# events. I'm not going to go over the advantages of Reactive Extensions here but you can take a look at a series of blog posts I've done in the past.
\nWe can hide the ErrorsChanged C# event by explicitly implementing the interface (Click here for details on implicit versus explicit implementations of interfaces).
The ErrorsChanged C# event can still be accessed by first casting the object to INotifyDataErrorInfo. Validation in XAML languages, which uses this interface, continues to work. Our new Reactive Extensions (Rx) observable event called WhenErrorsChanged, of type IObservable<string> (the string is the property name), is now the default method of subscribing to error changed events and we've hidden away the old C# event.
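For example, here is a usage sketch (assuming the NotifyDataErrorInfo<T> base class above, the ZombieViewModel from earlier and the System.Reactive NuGet package):

```csharp
using System;
using System.ComponentModel;
using System.Reactive.Linq;

// Usage sketch only; it depends on the view model and base class defined
// earlier in this post. Each notification carries the name of the property
// whose errors changed.
var zombie = new ZombieViewModel();

IDisposable subscription = zombie.WhenErrorsChanged
    .Where(propertyName => propertyName == "LimbsRemaining")
    .Subscribe(propertyName => Console.WriteLine("Errors changed for " + propertyName));

zombie.LimbsRemaining = 5; // Breaks the 'up to four limbs' rule, triggering a notification.

// The hidden C# event is still reachable via a cast if a framework needs it:
((INotifyDataErrorInfo)zombie).ErrorsChanged += (sender, e) => { };

subscription.Dispose();
```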
The INotifyDataErrorInfo interface is supported by most XAML frameworks including WPF, Silverlight and Windows Phone. WinRT does not support the interface at the time of writing but you can bet that it will in future and, in the meantime, you can use the WinRT XAML Validation library in conjunction with the code above to plug this gap.
This interface used to be used for validation but was replaced by INotifyDataErrorInfo. The new interface provides a much nicer API which is easier to code against, as well as better performance. If you are still using the old interface, it's time to make the change.
I have been tweaking this base class for the last few years and feel I've got a fairly good balance. I've not seen too many implementations of this interface, though most blogs seem to cover INotifyPropertyChanged pretty well. I'd be very interested if anyone has any comments or thoughts on improvements. Feel free to sound-off in the comments.
Code is written to be read by humans, not machines, so it makes sense that following some basic ground rules for the look and feel of your code could make it easier to read and boost your productivity. This is particularly important if you work in teams where each developer can go off and write their code in entirely different ways. Reading these different styles hinders your productivity. In my opinion, all those few extra seconds here and there add up to extra hours or even days wasted over the course of a year per developer.
\n::: warning Disclaimer\nAt the end of the day, there are no rules for coding style. This is all a matter of personal preference.\n:::
\nI have recently been doing a fair amount of T-SQL and C++ and thought I'd look into some form of naming conventions for the two languages. If you've read my previous blog post 'Stop the Brace Wars, Use StyleCop', then you'll know how I feel about coding style in the C# language.
\nThe SQL language is an interesting case: it's a really old language from 1974, and in those days they didn't even have keyboards that could deal with upper and lower case letters!
\nA lot of examples you'll see in books have used all-caps for the SQL keywords like SELECT and WHERE. As is widely researched, all-caps is really bad for readability. However, as this Stack Overflow article shows, all-caps is still a really popular style of writing SQL.
SELECT s.Name, s.Size\nFROM Spaceship s\nWHERE s.Name = 'Death Star'\n\nselect s.Name, s.Size\nfrom Spaceship s\nwhere s.Name = 'Death Star'\n\nIn the above example, the all-capitals version doesn't look too bad. It's a very short SQL statement and the capitals help break up the three parts of the query. You could argue that the SQL keywords are coloured blue, so we don't need the capitalization, and that's a pretty good argument. As an aside, please remember that colour blindness affects approximately 1 in 12 men and 1 in 200 women in the world (source).
\nBut this is a really simple SQL statement; if you start writing a stored procedure of any complexity, things get ugly pretty fast (Imagine writing C# with upper case keywords, yuck!). Happily though, SQL developers seem to have cottoned on to this. A lot more real-world SQL examples on blogs and forums seem to be all lower-case. Even to me, all lower-case SQL does not look entirely correct (perhaps I've just been conditioned into all-caps), but it is easier to read for more complex T-SQL.
\nC++ is fairly similar to C#, so for me as a mainly C# developer it's easier to write in a similar style. However, I was not happy with this approach and wanted to see what was being done elsewhere and what was the more 'correct' approach, if there was one.
\nI found the Google C++ Style Guide which is a really detailed, yet simple set of guidelines for how to write your C++. Definitely worth a quick read.
\nCoding style is a deeply personal subject and a pretty important one too that is often overlooked. In my opinion, it's always worth spending a little time looking up the preferred methods (There will usually be more than one) of writing in any particular language and picking one of the most popular approaches.
\nIf you're working in a team, you'll reap the benefits pretty quickly. Code written by others will look just like yours, saving you precious seconds. Even if you're a solo developer, you'll benefit. Developers are inherently plagiarists, copying snippets of code written by others found online. Using a common coding style will mean that your style is more likely to be the same as the next snippet of code you or I shamelessly copy from the internet.
\n", "url": "https://rehansaeed.com/naming-conventions/", "title": "Naming Conventions", "summary": "Code is written to be read by humans, not machines. Naming conventions and standard code styling can boost productivity, particularly if working in teams.", "image": "https://rehansaeed.com/images/hero/Naming-Conventions-1366x768.png", "date_modified": "2014-09-04T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/model-view-viewmodel-mvvm-part3-inotifypropertychanged/", "content_html": "I know there have been lots of Model-View-ViewModel (MVVM) articles talking about INotifyPropertyChanged. I've read lots of them and this is the aggregation of all the knowledge I've learned plus some cool new stuff (I've not seen it done anywhere else but I could be wrong) which I've also covered in my Reactive Extensions (Rx) posts.
\nSo what are the main aims of a base class implementing INotifyPropertyChanged? Well, I think there are a few:
\nOne of those aims is to hide the PropertyChanged C# event behind a Reactive Extensions (Rx) observable. So, without further ado, here is my implementation.
\nnamespace Framework.ComponentModel\n{\n using System;\n using System.ComponentModel;\n using System.Diagnostics;\n using System.Reactive.Linq;\n using System.Reflection;\n using System.Runtime.CompilerServices;\n\n /// <summary>\n /// Notifies subscribers that a property in this instance is changing or has changed.\n /// </summary>\n public abstract class NotifyPropertyChanges : Disposable, INotifyPropertyChanged //, INotifyPropertyChanging\n {\n /// <summary>\n /// Occurs when a property value changes.\n /// </summary>\n event PropertyChangedEventHandler INotifyPropertyChanged.PropertyChanged\n {\n add { this.propertyChanged += value; }\n remove { this.propertyChanged -= value; }\n }\n\n /// <summary>\n /// Occurs when a property value is changing.\n /// </summary>\n // event PropertyChangingEventHandler INotifyPropertyChanging.PropertyChanging\n // {\n // add { this.PropertyChanging += value; }\n // remove { this.PropertyChanging -= value; }\n // }\n\n /// <summary>\n /// Occurs when a property value changes.\n /// </summary>\n private event PropertyChangedEventHandler propertyChanged;\n\n /// <summary>\n /// Occurs when a property value is changing.\n /// </summary>\n // private event PropertyChangingEventHandler PropertyChanging;\n\n /// <summary>\n /// Gets the when property changed observable event. Occurs when a property value changes.\n /// </summary>\n /// <value>\n /// The when property changed observable event.\n /// </value>\n public IObservable<string> WhenPropertyChanged\n {\n get\n {\n this.ThrowIfDisposed();\n\n return Observable\n .FromEventPattern<PropertyChangedEventHandler, PropertyChangedEventArgs>(\n h => this.propertyChanged += h,\n h => this.propertyChanged -= h)\n .Select(x => x.EventArgs.PropertyName);\n }\n }\n\n /// <summary>\n /// Gets the when property changing observable event. 
Occurs when a property value is changing.\n /// </summary>\n /// <value>\n /// The when property changing observable event.\n /// </value>\n // public IObservable<EventPattern<PropertyChangingEventArgs>> WhenPropertyChanging\n // {\n // get\n // {\n // return Observable\n // .FromEventPattern<PropertyChangingEventHandler, PropertyChangingEventArgs>(\n // h => this.PropertyChanging += h,\n // h => this.PropertyChanging -= h)\n // .AsObservable();\n // }\n // }\n\n /// <summary>\n /// Raises the PropertyChanged event.\n /// </summary>\n /// <param name="propertyName">Name of the property.</param>\n protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)\n {\n Debug.Assert(\n string.IsNullOrEmpty(propertyName) ||\n (this.GetType().GetRuntimeProperty(propertyName) != null),\n "Check that the property name exists for this instance.");\n\n PropertyChangedEventHandler eventHandler = this.propertyChanged;\n\n if (eventHandler != null)\n {\n eventHandler(this, new PropertyChangedEventArgs(propertyName));\n }\n }\n\n /// <summary>\n /// Raises the PropertyChanged event.\n /// </summary>\n /// <param name="propertyNames">The property names.</param>\n protected void OnPropertyChanged(params string[] propertyNames)\n {\n if (propertyNames == null)\n {\n throw new ArgumentNullException(nameof(propertyNames));\n }\n\n foreach (string propertyName in propertyNames)\n {\n this.OnPropertyChanged(propertyName);\n }\n }\n\n /// <summary>\n /// Raises the PropertyChanging event.\n /// </summary>\n /// <param name="propertyName">Name of the property.</param>\n protected virtual void OnPropertyChanging([CallerMemberName] string propertyName = null)\n {\n Debug.Assert(\n string.IsNullOrEmpty(propertyName) ||\n (this.GetType().GetRuntimeProperty(propertyName) != null),\n "Check that the property name exists for this instance.");\n\n // PropertyChangingEventHandler eventHandler = this.PropertyChanging;\n\n // if (eventHandler != null)\n // {\n // 
eventHandler(this, new PropertyChangingEventArgs(propertyName));\n // }\n }\n\n /// <summary>\n /// Raises the PropertyChanging event.\n /// </summary>\n /// <param name="propertyNames">The property names.</param>\n protected void OnPropertyChanging(params string[] propertyNames)\n {\n if (propertyNames == null)\n {\n throw new ArgumentNullException(nameof(propertyNames));\n }\n\n foreach (string propertyName in propertyNames)\n {\n this.OnPropertyChanging(propertyName);\n }\n }\n\n /// <summary>\n /// Sets the value of the property to the specified value if it has changed.\n /// </summary>\n /// <typeparam name="TProp">The type of the property.</typeparam>\n /// <param name="currentValue">The current value of the property.</param>\n /// <param name="newValue">The new value of the property.</param>\n /// <param name="propertyName">Name of the property.</param>\n /// <returns><c>true</c> if the property was changed, otherwise <c>false</c>.</returns>\n protected bool SetProperty<TProp>(\n ref TProp currentValue,\n TProp newValue,\n [CallerMemberName] string propertyName = null)\n {\n this.ThrowIfDisposed();\n\n if (!object.Equals(currentValue, newValue))\n {\n this.OnPropertyChanging(propertyName);\n currentValue = newValue;\n this.OnPropertyChanged(propertyName);\n\n return true;\n }\n\n return false;\n }\n\n /// <summary>\n /// Sets the value of the property to the specified value if it has changed.\n /// </summary>\n /// <typeparam name="TProp">The type of the property.</typeparam>\n /// <param name="currentValue">The current value of the property.</param>\n /// <param name="newValue">The new value of the property.</param>\n /// <param name="propertyNames">The names of all properties changed.</param>\n /// <returns><c>true</c> if the property was changed, otherwise <c>false</c>.</returns>\n protected bool SetProperty<TProp>(\n ref TProp currentValue,\n TProp newValue,\n params string[] propertyNames)\n {\n this.ThrowIfDisposed();\n\n if 
(!object.Equals(currentValue, newValue))\n {\n this.OnPropertyChanging(propertyNames);\n currentValue = newValue;\n this.OnPropertyChanged(propertyNames);\n\n return true;\n }\n\n return false;\n }\n\n /// <summary>\n /// Sets the value of the property to the specified value if it has changed.\n /// </summary>\n /// <param name="equal">A function which returns <c>true</c> if the property value has changed, otherwise <c>false</c>.</param>\n /// <param name="action">The action where the property is set.</param>\n /// <param name="propertyName">Name of the property.</param>\n /// <returns><c>true</c> if the property was changed, otherwise <c>false</c>.</returns>\n protected bool SetProperty(\n Func<bool> equal, \n Action action,\n [CallerMemberName] string propertyName = null)\n {\n this.ThrowIfDisposed();\n\n if (equal())\n {\n return false;\n }\n\n this.OnPropertyChanging(propertyName);\n action();\n this.OnPropertyChanged(propertyName);\n\n return true;\n }\n\n /// <summary>\n /// Sets the value of the property to the specified value if it has changed.\n /// </summary>\n /// <param name="equal">A function which returns <c>true</c> if the property value has changed, otherwise <c>false</c>.</param>\n /// <param name="action">The action where the property is set.</param>\n /// <param name="propertyNames">The property names.</param>\n /// <returns><c>true</c> if the property was changed, otherwise <c>false</c>.</returns>\n protected bool SetProperty(\n Func<bool> equal, \n Action action,\n params string[] propertyNames)\n {\n this.ThrowIfDisposed();\n\n if (equal())\n {\n return false;\n }\n\n this.OnPropertyChanging(propertyNames);\n action();\n this.OnPropertyChanged(propertyNames);\n\n return true;\n }\n }\n}\n\nAn example of how you can use this base class is as follows.
\npublic class CatCountViewModel : NotifyPropertyChanges\n{\n private int numberOfCats;\n\n public int NumberOfCats\n {\n get => this.numberOfCats;\n set => this.SetProperty(ref this.numberOfCats, value);\n }\n}\n\nAs I said before, performance is king. A slow application is a frustrating application. However, there has always been a problem. When you want to raise a property changed event, you have to pass in a string. The validity of that string cannot be checked at compile time, only at runtime, so typos can slip through and go unnoticed.
\nThere are a lot of implementations of INotifyPropertyChanged that use reflection or expression trees and, as this and this blog show, using reflection is a terribly slow way to raise an event and is best avoided.
\nLuckily, Microsoft introduced the CallerMemberNameAttribute, which means that, as in the above example, we don't need to pass a string for the property name; the compiler supplies it for us via the last optional parameter of the SetProperty method.
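To see the mechanism in isolation, here is a minimal sketch (the WhoCalledMe helper is hypothetical, purely for illustration):

```csharp
using System;
using System.Runtime.CompilerServices;

public static class CallerDemo
{
    // The compiler substitutes the name of the calling member at compile time,
    // so there is no runtime reflection cost and no magic string to mistype.
    public static string WhoCalledMe([CallerMemberName] string caller = null) => caller;

    public static string FromMethod() => WhoCalledMe(); // the compiler passes "FromMethod"
}
```

A property setter calling WhoCalledMe would receive the property's name in exactly the same way, which is how SetProperty picks up the property name for free.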
The SetProperty method uses the ref keyword to pass the parameter by reference (Passing parameters by reference is faster). It then checks whether the numberOfCats field is different from the value parameter (There is no point raising a property changed event if they are the same). Only then do we raise a property changed event.
But what about dependent properties, where one property affects the value of another? Well, let's take a look at another example.
\npublic class CatCounter : NotifyPropertyChanges\n{\n private int numberOfCats;\n\n public int NumberOfCats\n {\n get => this.numberOfCats;\n set => this.SetProperty(ref this.numberOfCats, value, "NumberOfCats", "NumberOfCatsDescription");\n }\n\n public string NumberOfCatsDescription => $"{this.NumberOfCats} Cats Counted";\n}\n\nYou can see that I've not done anything spectacular, just passed in the strings. As I'm using the params keyword, you can pass in as many strings as you want and the SetProperty method will raise a property changed event for each one.
If you give me a moment, I will explain why I think this is the right compromise to make. Let's make no mistake, you do need to compromise between performance and simplicity/maintainability. There are approaches which make this simpler and easier to understand, but they can and will degrade performance.
\nSo does using strings cause problems? First of all, if you use a Visual Studio add-in like ReSharper, this problem is solved as it checks that the strings match the property name for you. Secondly, as a backup, the OnPropertyChanged method in the implementation above contains some Debug.Assert statements (These are removed in Release mode and have no effect on performance) to check that the property names exist and are correct; if they are not, you get an error message. Thirdly, this is fairly rare in my experience and I can deal with the overhead of having a couple of extra strings.
Again, this is a choice I've made to go with performance over maintainability.
\nWhat if you want to wrap an object that looks like the one below with a class that supports INotifyPropertyChanged? This is a scenario I have not seen many people cover, but it occurs fairly often in my experience.
\npublic class CatCount\n{\n public int Count { get; set; }\n}\n\nAn example view model for the CatCount class can be found below.
\npublic class CatCountModel : NotifyPropertyChanges\n{\n private CatCount catCount;\n\n public int NumberOfCats\n {\n get { return this.catCount.Count; }\n set { this.SetProperty(() => this.catCount.Count == value, () => this.catCount.Count = value); }\n }\n}\n\nSo here we are providing the SetProperty method with two delegates. We can't use the ref keyword we used earlier because this gives us the compiler error "A property, indexer or dynamic member access may not be passed as an out or ref parameter". So we use delegates as an alternative, which is slightly slower than the ref keyword but almost as fast.
The first delegate determines if the cat count has actually changed. Only if it has (Remember, executing a delegate is far cheaper than updating the UI) do we call the second delegate, which actually sets the value. Finally, the SetProperty method raises a property changed event.
C# events are old school. Reactive Extensions (Rx) provides a cleaner and far more powerful drop-in replacement for C# events. I'm not going to go over the advantages of Reactive Extensions here but you can take a look at a series of blog posts I've done in the past.
\nWe can hide the PropertyChanged C# event by explicitly implementing the interface (Click here for details on implicit versus explicit implementations of interfaces).
The PropertyChanged C# event can still be accessed by first casting the object to INotifyPropertyChanged. Binding in XAML languages, which uses this interface, continues to work. Our new Reactive Extensions (Rx) observable event, WhenPropertyChanged of type IObservable<string> (The string is the property name), is now the default way of subscribing to property changed events, and the old C# event is hidden away.
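As a sketch of what consuming WhenPropertyChanged might look like (this assumes the CatCountViewModel and NotifyPropertyChanges classes defined above; the filter is just one possible use of the Rx operators):

```csharp
using System;
using System.Reactive.Linq;

CatCountViewModel viewModel = new CatCountViewModel();

// Subscribe only to changes of the NumberOfCats property, using standard
// Rx operators instead of a C# event handler.
IDisposable subscription = viewModel.WhenPropertyChanged
    .Where(propertyName => propertyName == "NumberOfCats")
    .Subscribe(propertyName => Console.WriteLine(propertyName + " changed"));

viewModel.NumberOfCats = 3; // The subscription above receives "NumberOfCats".

// Unsubscribing is just disposing, another advantage over C# events.
subscription.Dispose();
```

Being able to compose property changed notifications with Where, Throttle, CombineLatest and friends is where this approach really pays off.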
Take another look at the title of this paragraph: it says INotifyPropertyChanging and not INotifyPropertyChanged.
\nThis interface has a single event called PropertyChanging, which is raised just before a property changes. It is not actually used by any XAML framework but it complements the INotifyPropertyChanged interface and can be useful in your view models when you want to know that a property is about to change and act on it.
Given that we've written a base class, it is super easy to include it too. You should note that this interface only exists in the full .NET Framework and Silverlight. It does not exist on Windows Store or Windows Phone platforms.
\nAs we are writing a base class for a Portable Class Library (PCL), I've commented it out. However, if I were to create a full .NET or Silverlight class library, I would definitely put that code back in.
\nIf you find the interface useful and you too are using a Portable Class Library (PCL), you could take a copy of the INotifyPropertyChanging interface and include it with your base class. If Microsoft ever decide to include it into the PCL, you simply need to remove your class and use the one in the framework.
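For reference, the declarations you would be copying are small. This is a close approximation of their shape in the full .NET Framework (note that on platforms where System.ComponentModel already defines these types, redeclaring them in that namespace would clash, so you may prefer your own namespace there):

```csharp
namespace System.ComponentModel
{
    // The delegate and event arguments used by the interface.
    public delegate void PropertyChangingEventHandler(object sender, PropertyChangingEventArgs e);

    public class PropertyChangingEventArgs : EventArgs
    {
        private readonly string propertyName;

        public PropertyChangingEventArgs(string propertyName) => this.propertyName = propertyName;

        public virtual string PropertyName => this.propertyName;
    }

    // Raised just before a property value changes.
    public interface INotifyPropertyChanging
    {
        event PropertyChangingEventHandler PropertyChanging;
    }
}
```

Because the types live in the same namespace with the same names, swapping to the framework's own copy later really is just a matter of deleting your file.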
I have gone through many iterations to get to this base class. As I've shown, I've had very particular goals in mind. Your mileage may vary but I believe with the tools Microsoft have given us, this is a good compromise and covers all the scenarios I can think of. I'd be very interested if anyone has any comments or thoughts on improvements. Feel free to sound-off in the comments.
\n", "url": "https://rehansaeed.com/model-view-viewmodel-mvvm-part3-inotifypropertychanged/", "title": "Model-View-ViewModel (MVVM) - Part 3 - INotifyPropertyChanged", "summary": "A base class implementation for the INotifyPropertyChanged interface. Used in the Model-View-ViewModel (MVVM) pattern. Targeted for best performance.", "image": "https://rehansaeed.com/images/hero/MVVM-1366x768.png", "date_modified": "2014-06-18T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/model-view-viewmodel-mvvm-part2-idisposable/", "content_html": "View models these days interact with all kinds of precious resources like compasses and GPS. Implementing IDisposable is an important pattern you can follow to dispose of these resources cleanly, freeing them up to be used elsewhere and saving the user's battery (Particularly important on mobile devices). Using the IDisposable interface in the Model-View-ViewModel (MVVM) pattern is a wise decision.
Implementing IDisposable correctly is ridiculously hard. If you don't know how hard it really is, I recommend reading the top comment on this Stack Overflow article.
Implementing IDisposable is one of the rare times in C# where a developer has to use C# destructors, and also one of the few times when we have to tickle the garbage collector, calling GC.SuppressFinalize to stop it from trying to release the unmanaged resources twice.
\nHaving to write this code repeatedly is difficult and error prone, so how about a base class?
\nnamespace Framework.ComponentModel\n{\n using System;\n using System.Reactive;\n using System.Reactive.Linq;\n using System.Reactive.Subjects;\n\n /// <summary>\n /// Base class for members implementing <see cref="IDisposable"/>.\n /// </summary>\n public abstract class Disposable : IDisposable\n {\n private bool isDisposed;\n private Subject<Unit> whenDisposedSubject;\n\n /// <summary>\n /// Finalizes an instance of the <see cref="Disposable"/> class. Releases unmanaged\n /// resources and performs other cleanup operations before the <see cref="Disposable"/>\n /// is reclaimed by garbage collection. Will run only if the\n /// Dispose method does not get called.\n /// </summary>\n ~Disposable() => this.Dispose(false);\n\n /// <summary>\n /// Gets the when disposed observable event. Occurs when this instance is disposed.\n /// </summary>\n /// <value>\n /// The when disposed observable event.\n /// </value>\n public IObservable<Unit> WhenDisposed\n {\n get\n {\n if (this.IsDisposed)\n {\n return Observable.Return(Unit.Default);\n }\n else\n {\n if (this.whenDisposedSubject == null)\n {\n this.whenDisposedSubject = new Subject<Unit>();\n }\n\n return this.whenDisposedSubject.AsObservable();\n }\n }\n }\n\n /// <summary>\n /// Gets a value indicating whether this <see cref="Disposable"/> is disposed.\n /// </summary>\n /// <value><c>true</c> if disposed; otherwise, <c>false</c>.</value>\n public bool IsDisposed => this.isDisposed;\n\n /// <summary>\n /// Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.\n /// </summary>\n public void Dispose()\n {\n // Dispose all managed and unmanaged resources.\n this.Dispose(true);\n\n // Take this object off the finalization queue and prevent finalization code for this\n // object from executing a second time.\n GC.SuppressFinalize(this);\n }\n\n /// <summary>\n /// Disposes the managed resources implementing <see 
cref="IDisposable"/>.\n /// </summary>\n protected virtual void DisposeManaged()\n {\n }\n\n /// <summary>\n /// Disposes the unmanaged resources implementing <see cref="IDisposable"/>.\n /// </summary>\n protected virtual void DisposeUnmanaged()\n {\n }\n\n /// <summary>\n /// Throws a <see cref="ObjectDisposedException"/> if this instance is disposed.\n /// </summary>\n protected void ThrowIfDisposed()\n {\n if (this.isDisposed)\n {\n throw new ObjectDisposedException(this.GetType().Name);\n }\n }\n\n /// <summary>\n /// Releases unmanaged and - optionally - managed resources.\n /// </summary>\n /// <param name="disposing"><c>true</c> to release both managed and unmanaged resources;\n /// <c>false</c> to release only unmanaged resources, called from the finalizer only.</param>\n private void Dispose(bool disposing)\n {\n // Check to see if Dispose has already been called.\n if (!this.isDisposed)\n {\n // If disposing managed and unmanaged resources.\n if (disposing)\n {\n this.DisposeManaged();\n }\n\n this.DisposeUnmanaged();\n\n this.isDisposed = true;\n\n if (this.whenDisposedSubject != null)\n {\n // Raise the WhenDisposed event.\n this.whenDisposedSubject.OnNext(Unit.Default);\n this.whenDisposedSubject.OnCompleted();\n this.whenDisposedSubject.Dispose();\n }\n }\n }\n }\n}\n\nThere are several interesting facets to this implementation.
\n- Throws an ObjectDisposedException when you try to access a property or method after the object has been disposed. To achieve this, there is a ThrowIfDisposed helper method which can be added to the top of each property or method.\n- An IsDisposed property, which can be useful if we don't know whether the object is disposed or not.\n- A WhenDisposed observable property, which allows us to register for the dispose event.\n\nHere is an example of how the base class is used to dispose of both managed and unmanaged (COM object) resources.
\npublic class DisposableExample : Disposable\n{\n private ManagedResource managedResource;\n private UnmanagedResource unmanagedResource;\n\n public void Foo()\n {\n this.ThrowIfDisposed();\n\n // Do Stuff\n }\n\n protected override void DisposeManaged() =>\n this.managedResource.Dispose();\n\n protected override void DisposeUnmanaged()\n {\n Marshal.ReleaseComObject(this.unmanagedResource);\n this.unmanagedResource = null;\n }\n}\n\nAn example of how to dispose of an instance of the above object.
\nDisposableExample disposable = new DisposableExample();\ndisposable.WhenDisposed.Subscribe(x => Console.WriteLine("Disposed Event Fired"));\ndisposable.Dispose();\nConsole.WriteLine(disposable.IsDisposed);\n\nAs you can see, it looks a whole lot simpler and has some pretty cool helper functions and features. No more need to remember how to implement this complicated pattern.
\n", "url": "https://rehansaeed.com/model-view-viewmodel-mvvm-part2-idisposable/", "title": "Model-View-ViewModel (MVVM) - Part 2 - IDisposable", "summary": "Implementing IDisposable correctly is ridiculously hard. A Disposable base class can make it easier. Using IDisposable in Model-View-ViewModel (MVVM) really helps.", "image": "https://rehansaeed.com/images/hero/MVVM-1366x768.png", "date_modified": "2014-06-13T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/model-view-viewmodel-mvvm-part1-overview/", "content_html": "I have been meaning for some time to do a series of posts about Model-View-ViewModel (MVVM) and its potential base classes. Then I read Mike Taulty's post about why MVVM 'bits' are not built in to .NET.
\nMy aim in these posts will be to either pick off-the-shelf components which are best of breed, where there is no point reinventing the wheel, or build my own components where necessary.
\nAssuming you already know about the basic Model-View-ViewModel (MVVM) pattern described in the title image of this post, when we talk about MVVM, what do we really mean?
\nWell, there are several .NET platforms that all provide some basic low-level support for Model-View-ViewModel (MVVM): Windows Store, Windows Phone, Silverlight and Windows Presentation Foundation (WPF). It is all of these platforms that I'll be discussing and targeting my code towards.
\nIn Mike Taulty's post, he goes through a list of 'bits' which all come together to help with building an application that fits into the MVVM design pattern. I've added to that list below:
\n- IDisposable - When you have a scarce resource like a GPS, gyroscope or compass, you inevitably need to dispose of it somewhere. Implementing IDisposable properly is hard work. A base class would be handy.\n- INotifyPropertyChanged - This is the building block of all .NET based MVVM. There needs to be a base class for this that is high performance and yet simple and easy to use.\n- INotifyDataErrorInfo - Validation is an often overlooked part of an application. This handy interface makes doing validation of your view models a cinch.\n- IEditableObject (WPF only) - This interface helps with implementing undo and redo but is used specifically in the WPF data grid.\n- ObservableCollection<T> - This collection is a good start out of the box but why does it still not have an AddRange method? Why do we not have an ObservableDictionary<TKey, TValue> or a KeyedObservableCollection<TKey, TValue>? What if you have a collection of items implementing INotifyPropertyChanged and you want to know if one of those items changes, why can't the collection type help you there also?\n- ICommand - Most implementations out there provide a base class for ICommand and usually call it RelayCommand or DelegateCommand. They usually have another implementation with a generic argument RelayCommand<T> or DelegateCommand<T>. These are a quick way to add a command to your view model, where the implementation of the command is usually a method in your view model passed in as a delegate. Sometimes though, this is not enough. What if you have a largish command and want to split it off into a separate class? A base class for ICommand would be useful. What if you have a command that does async and await? ICommand doesn't support that but can we provide some help here?\n\nI'll pause just here as I think the above listed items are all base classes that could be used across the board on all the major platforms. They are at the very heart of MVVM in .NET.
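To make the ICommand point concrete, the typical RelayCommand that most frameworks provide looks something like this sketch (the names follow the common community convention rather than any specific framework):

```csharp
using System;
using System.Windows.Input;

public sealed class RelayCommand : ICommand
{
    private readonly Action execute;
    private readonly Func<bool> canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute = null)
    {
        if (execute == null)
        {
            throw new ArgumentNullException("execute");
        }

        this.execute = execute;
        this.canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter) => this.canExecute == null || this.canExecute();

    public void Execute(object parameter) => this.execute();

    // Call this when the result of CanExecute may have changed,
    // so bound controls re-query their enabled state.
    public void RaiseCanExecuteChanged()
    {
        EventHandler handler = this.CanExecuteChanged;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}
```

A view model exposes an instance of this as a property and XAML binds to it. The async scenario alluded to above needs more care, since ICommand.Execute is void-returning.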
The rest of the list below is more dependent on the type of application you are building.
\n- MessageBox's, MessageDialog's, Toast's etc. Giving the user information or asking them questions happens on all platforms. This problem is very similar to the Navigation problem.\n- ICommand's in XAML, because they don't provide a Command property or sometimes you want to fire a command based on some event or even a key press.\n\nThe last two things in the list are more abstract requirements for any MVVM framework.
\n- MessageBox's, GPS API's or other API's which make testing difficult. You don't want a MessageBox popping up in the middle of your test, do you?\n\nWow, that's a lot of stuff! All of this 'stuff' is related but covers a huge range of subjects. A lot of existing MVVM frameworks try to do all of this at once!
\nIn my humble opinion, because they do so much, they usually only cover, say, 70-80% of the full functionality. What business does an MVVM framework have including an IoC framework? There are lots of IoC frameworks out there that are far more powerful than anything we could write, but a lot of MVVM frameworks include one anyway.
\nSo ideally what we need is something modular, that you can plug bits into but also something that covers all the bases.
\nWhat's your opinion? I've looked at a lot of frameworks: MVVM Light, PRISM, etc. In my opinion, the top seven items are the most important but also the most neglected bits of MVVM. Is there some framework out there that does all this and more?
\nI'll discuss this and a lot more in the coming posts.
\n", "url": "https://rehansaeed.com/model-view-viewmodel-mvvm-part1-overview/", "title": "Model-View-ViewModel (MVVM) - Part 1 - Overview", "summary": "What really goes into using Model-View-ViewModel (MVVM) in .NET. Base classes for INotifyPropertyChanged, INotifyDataErrorInfo, IDisposable and a lot more.", "image": "https://rehansaeed.com/images/hero/MVVM-1366x768.png", "date_modified": "2014-05-14T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/portable-class-library-version-of-notificationsextensions-nuget-package/", "content_html": "I have recently got into creating NuGet packages, having had to create one for Elysium Extra. I discovered it was really easy to do too. I've just finished creating another one called, you guessed it, NotificationsExtensions.Portable, and I did it in 5 minutes!
NotificationsExtensions.Portable is a Portable Class Library (PCL) version of the other NotificationsExtensions NuGet packages. It's used to create Windows 8.1 or Windows Phone 8.1 Tile, Toast and Badge notification XML. This package is intended for use instead of, or as well as, the following NuGet packages:
This project helps to create XML representing Tile, Toast and Badge notifications on the Windows 8.1 and Windows Phone 8.1 platforms. You can take a look at the template catalogue to see the types of templates available on these platforms.
It's useful when trying to send notifications from the server side using Azure Mobile Services .NET Backend or some other .NET based push notification system, when you want to create notification XML in a standard .NET project and not a WinRT project. I personally use it for my London Travel Live and Currency Converter Pro apps.
\nNotificationsExtensions.Portable is available on NuGet. Simply follow the instructions below:
To install NotificationsExtensions.Portable, run the following command in the Package Manager Console:\n\nInstall-Package NotificationsExtensions.Portable -Version 1.0.0\n\nAll praise goes to the above two projects and the Microsoft developers who built them. The only changes I made to the code were to switch from XmlDocument to XDocument, remove a few WinRT specific references and stick it into a Portable Class Library (PCL).
In my previous posts on Reactive Extensions (Rx) I've outlined a few clear areas where Reactive Extensions can be used in the real world. I've uncovered areas where it provides a cleaner and improved API surface as compared to older .NET code. Namely, replacing C# events, wrapping existing C# events and replacing System.Threading.Timers (Or other Timer classes, of which there are a few in .NET).
Once you have your observables, you need to do something with them. In my last post on the subject I showed how and when you can await an observable.
\nIn this post I'm going to show how you can also go the other way around. You can turn tasks into an observable. I'll also show one clear reason to use this facility.
\nThe ToObservable extension method allows you to convert a Task or Task<T> into an IObservable<T>. Calling ToObservable on a Task returns an IObservable<Unit>. A Unit is a kind of empty object that does nothing; it only exists because there is no non-generic IObservable interface (Without the T).
IObservable<Unit> observable = Task.Run(() => Console.WriteLine("Working")).ToObservable();\n\nIObservable<string> observableT = Task.Run(() => "Working").ToObservable();\n\nIf you subscribe to the above observables, they will only ever return one value and then complete. You might be thinking, hang on just a second Rehan, what's the point of doing this?
\nSo when should we use this feature? Well, let's walk through some examples and see what happens. Let's assume we have the following contrived code:
\npublic Task<string> GetHelloString()\n{\n return Task.Run(\n async () =>\n {\n await Task.Delay(500);\n return "Hello";\n });\n}\n\npublic Task<string> GetWorldString()\n{\n return Task.Run(\n async () =>\n {\n await Task.Delay(1000);\n return "World";\n });\n}\n\nWhat happens in the case where we call both of these methods and want to get the first result back. How does this code look using the Task Parallel Library (TPL) as compared to Reactive Extensions (Rx).
\npublic async Task<string> WaitForFirstResultAndReturn()\n{\n Task<string> task1 = this.GetHelloString();\n Task<string> task2 = this.GetWorldString();\n\n return await await Task.WhenAny(task1, task2);\n}\n\npublic async Task<string> WaitForFirstResultAndReturn()\n{\n IObservable<string> observable1 = this.GetHelloString().ToObservable();\n IObservable<string> observable2 = this.GetWorldString().ToObservable();\n\n return await observable1.Merge(observable2).FirstAsync();\n}\n\nIn the Task Parallel Library (TPL) example, I use the WhenAny method to await the first task that completes and then await that task again to unwrap its result (Note the double await; WhenAny returns the completed Task<string>, not the string itself).
In the Reactive Extensions example above, I'm converting my tasks to observables, using the Merge method to combine them into a single observable and then using the FirstAsync method to await the first result (We covered await'ing observables in the last post).
Overall the two techniques look pretty similar, with the TPL having a slight edge in terms of simplicity.
\nHow about another example? Here we will try to await both of the results and put them together to get some meaningful result.
\npublic async Task<string> WaitForAllResultsAndReturnCombinedResult()\n{\n Task<string> task1 = this.GetHelloString();\n Task<string> task2 = this.GetWorldString();\n\n return string.Join(" ", await Task.WhenAll(task1, task2));\n}\n\npublic async Task<string> WaitForAllResultsAndReturnCombinedResult()\n{\n IObservable<string> observable1 = this.GetHelloString().ToObservable();\n IObservable<string> observable2 = this.GetWorldString().ToObservable();\n\n return await observable1.Zip(observable2, (x1, x2) => string.Join(" ", x1, x2));\n}\n\nIn the Task Parallel Library (TPL) example, I'm using the WhenAll method to await the results of both tasks which are returned as an array of strings. I then join these strings and return the results.
In the Reactive Extensions example above, I'm converting my tasks to observables, then using the Zip method to combine the results returned from both observables by providing it with a delegate which joins the two strings.
Again, both look pretty similar but with the pure TPL example being slightly simpler to understand.
\nOne more example; this time we'll return the first result but add a timeout to the equation.
\npublic async Task<string> WaitForFirstResultAndReturnResultWithTimeOut()\n{\n Task<string> task1 = this.GetHelloString();\n Task<string> task2 = this.GetWorldString();\n Task timeoutTask = Task.Delay(100);\n\n Task completedTask = await Task.WhenAny(task1, task2, timeoutTask);\n if (completedTask == timeoutTask)\n {\n throw new TimeoutException("The operation has timed out");\n }\n\n return ((Task<string>)completedTask).Result;\n}\n\npublic async Task<string> WaitForFirstResultAndReturnResultWithTimeOut()\n{\n IObservable<string> observable1 = this.GetHelloString().ToObservable();\n IObservable<string> observable2 = this.GetWorldString().ToObservable();\n\n return await observable1.Merge(observable2).Timeout(TimeSpan.FromMilliseconds(100)).FirstAsync();\n}\n\nIn the Task Parallel Library (TPL) example, I'm awaiting a third task which represents the timeout. If the timeout task finishes first, I raise a TimeoutException.
In the Reactive Extensions example, we merge the two observables again but this time use the Timeout method to achieve the same results.
Here we have a clear winner, the Reactive Extensions code is more concise and easier to follow.
\nWhat happens when we combine the two approaches?
\npublic async Task<string> WaitForFirstResultAndReturnResultWithTimeOut2()\n{\n Task<string> task1 = this.GetHelloString();\n Task<string> task2 = this.GetWorldString();\n\n return await await Task\n .WhenAny(task1, task2)\n .ToObservable()\n .Timeout(TimeSpan.FromMilliseconds(1000))\n .FirstAsync();\n}\n\nHere we use the ToObservable and Timeout methods right at the end (Note the double await again, since WhenAny yields the completed task rather than its result). As you can see, this combined approach gives us the best of both worlds and makes the code much easier to read.
One definite reason to convert Tasks to observables is to use the Timeout method. There may be other reasons but I'm having a hard time thinking of any right now. In fact, I'm having a hard time thinking of any other posts to make about Reactive Extensions (Rx). It's an interesting chunk of code and I've learned a lot writing this series of posts, as I hope you have too.
::: tip Update (2021-10-22)\nUpdated after I discovered that there is a new NuGet package called Microsoft.Bcl.HashCode which allows you to use System.HashCode in frameworks older than netstandard2.1.\n:::
::: tip Update (2019-06-12)\nI updated my HashCode implementation to cover a few more scenarios which I discuss below.\n:::
\n::: tip Update (2018-08-14)\nI updated this article to talk about a new HashCode class included in .NET Core 2.1 and licensing information for my code since I've been asked repeatedly.\n:::
Implementing GetHashCode is hard work and little understood. If you take a look on MSDN or Stack Overflow for a few pointers, you'll see a plethora of examples with all kinds of little used C# operators and magic numbers, with little explanation for developers (Especially the newbies) of what they are for and why we need them. This is surprising for a method which exists on the Object class and is the root of all that is good and wholesome in C#.
Before I continue, I recommend reading Eric Lippert's blog post about the subject. He does not show any code, just goes into when and where we need to implement the GetHashCode method. Eric does a much better job than I could do but in short, GetHashCode is implemented wherever you implement the Equals method and ideally your class should be immutable.
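The rule Eric's post leads to is worth stating in code: equal objects must produce equal hash codes, while unequal objects are merely encouraged to differ. A tiny sketch:

```csharp
using System;

DateTime a = new DateTime(2014, 6, 18);
DateTime b = new DateTime(2014, 6, 18);

// Equal objects must return equal hash codes...
bool contractHolds = a.Equals(b) && a.GetHashCode() == b.GetHashCode();
Console.WriteLine(contractHolds); // True

// ...but the reverse is not required: two unequal objects
// may legally share a hash code (a collision).
```

Break this contract and types like Dictionary and HashSet will silently lose your objects, because they look items up by hash bucket first and equality second.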
Now down to the nitty gritty. How do we make implementing GetHashCode easy? Well, suppose we have the following class:
public sealed class SuperHero\n{\n private readonly string name;\n private readonly int age;\n private readonly ReadOnlyCollection<string> powers;\n\n public SuperHero(string name, int age, IEnumerable<string> powers)\n {\n this.name = name;\n this.age = age;\n this.powers = new ReadOnlyCollection<string>(powers.ToList());\n }\n\n public int Age => this.age;\n\n public string Name => this.name;\n\n public ReadOnlyCollection<string> Powers => this.powers;\n\n public override bool Equals(object obj)\n {\n // ...\n }\n\n public override int GetHashCode()\n {\n // TODO\n }\n}\n\nIn our example we have an immutable object with a variety of fields of different types, including a collection. One possible implementation of GetHashCode according to the highest rated Stack Overflow post (If modified to fit our example and deal with null's) may be:
public override int GetHashCode()\n{\n unchecked\n {\n int hashCode = 17;\n\n hashCode = (hashCode * 23) + (this.name == null ? 0 : this.name.GetHashCode());\n\n hashCode = (hashCode * 23) + this.age;\n\n foreach (string power in this.powers)\n {\n hashCode = (hashCode * 23) + (power == null ? 0 : power.GetHashCode());\n }\n\n return hashCode;\n }\n}\n\nI don't know about you but that code looks awfully unwieldy to me. For a start, we've got two different magic numbers: 17 and 23. Why? As it happens, these are prime numbers, which reduce the chance of collisions between hashes (two un-equal objects should ideally have different hash codes, but hash collisions can and do occur).
\nWe've also got the unchecked C# keyword which stops overflow checking to improve performance (That's not something you see every day). Bear in mind that the whole point of the GetHashCode method is to allow things like the Dictionary type to quickly retrieve objects.
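To make the behaviour of unchecked concrete, here is a small standalone sketch (the class name UncheckedDemo is mine, not from the article) showing that in an unchecked context integer overflow silently wraps around rather than throwing:

```csharp
using System;

public static class UncheckedDemo
{
    public static void Main()
    {
        int max = int.MaxValue;

        // In an unchecked context, overflow silently wraps around,
        // which is exactly what we want when combining hash codes.
        unchecked
        {
            Console.WriteLine(max + 1); // -2147483648 (int.MinValue)
        }

        // In a checked context, the same addition throws at runtime.
        try
        {
            checked
            {
                Console.WriteLine(max + 1);
            }
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException thrown");
        }
    }
}
```

Note that non-constant expressions are unchecked by default in C#, so the explicit unchecked block in GetHashCode mainly guards against projects compiled with the /checked compiler switch.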
I personally would not be able to remember how to do this each time I need to implement GetHashCode and it seems like you could very easily introduce bugs by making a typo. How about a helper class (Well actually a struct for better performance)?
/// <summary>\n/// A hash code used to help with implementing <see cref="object.GetHashCode()"/>.\n/// </summary>\npublic struct HashCode : IEquatable<HashCode>\n{\n private const int EmptyCollectionPrimeNumber = 19;\n private readonly int value;\n\n /// <summary>\n /// Initializes a new instance of the <see cref="HashCode"/> struct.\n /// </summary>\n /// <param name="value">The value.</param>\n private HashCode(int value) => this.value = value;\n\n /// <summary>\n /// Performs an implicit conversion from <see cref="HashCode"/> to <see cref="int"/>.\n /// </summary>\n /// <param name="hashCode">The hash code.</param>\n /// <returns>The result of the conversion.</returns>\n public static implicit operator int(HashCode hashCode) => hashCode.value;\n\n /// <summary>\n /// Implements the operator ==.\n /// </summary>\n /// <param name="left">The left.</param>\n /// <param name="right">The right.</param>\n /// <returns>The result of the operator.</returns>\n public static bool operator ==(HashCode left, HashCode right) => left.Equals(right);\n\n /// <summary>\n /// Implements the operator !=.\n /// </summary>\n /// <param name="left">The left.</param>\n /// <param name="right">The right.</param>\n /// <returns>The result of the operator.</returns>\n public static bool operator !=(HashCode left, HashCode right) => !(left == right);\n\n /// <summary>\n /// Takes the hash code of the specified item.\n /// </summary>\n /// <typeparam name="T">The type of the item.</typeparam>\n /// <param name="item">The item.</param>\n /// <returns>The new hash code.</returns>\n public static HashCode Of<T>(T item) => new HashCode(GetHashCode(item));\n\n /// <summary>\n /// Takes the hash code of the specified items.\n /// </summary>\n /// <typeparam name="T">The type of the items.</typeparam>\n /// <param name="items">The collection.</param>\n /// <returns>The new hash code.</returns>\n public static HashCode OfEach<T>(IEnumerable<T> items) =>\n items == null ? 
new HashCode(0) : new HashCode(GetHashCode(items, 0));\n\n /// <summary>\n /// Adds the hash code of the specified item.\n /// </summary>\n /// <typeparam name="T">The type of the item.</typeparam>\n /// <param name="item">The item.</param>\n /// <returns>The new hash code.</returns>\n public HashCode And<T>(T item) =>\n new HashCode(CombineHashCodes(this.value, GetHashCode(item)));\n\n /// <summary>\n /// Adds the hash code of the specified items in the collection.\n /// </summary>\n /// <typeparam name="T">The type of the items.</typeparam>\n /// <param name="items">The collection.</param>\n /// <returns>The new hash code.</returns>\n public HashCode AndEach<T>(IEnumerable<T> items)\n {\n if (items == null)\n {\n return new HashCode(this.value);\n }\n\n return new HashCode(GetHashCode(items, this.value));\n }\n\n /// <inheritdoc />\n public bool Equals(HashCode other) => this.value.Equals(other.value);\n\n /// <inheritdoc />\n public override bool Equals(object obj)\n {\n if (obj is HashCode)\n {\n return this.Equals((HashCode)obj);\n }\n\n return false;\n }\n\n /// <summary>\n /// Throws <see cref="NotSupportedException" />.\n /// </summary>\n /// <returns>Does not return.</returns>\n /// <exception cref="NotSupportedException">Implicitly convert this struct to an <see cref="int" /> to get the hash code.</exception>\n [EditorBrowsable(EditorBrowsableState.Never)]\n public override int GetHashCode() =>\n throw new NotSupportedException(\n "Implicitly convert this struct to an int to get the hash code.");\n\n private static int CombineHashCodes(int h1, int h2)\n {\n unchecked\n {\n // Code copied from System.Tuple so it must be the best way to combine hash codes or at least a good one.\n return ((h1 << 5) + h1) ^ h2;\n }\n }\n\n private static int GetHashCode<T>(T item) => item?.GetHashCode() ?? 
0;\n\n private static int GetHashCode<T>(IEnumerable<T> items, int startHashCode)\n {\n var temp = startHashCode;\n\n var enumerator = items.GetEnumerator();\n if (enumerator.MoveNext())\n {\n temp = CombineHashCodes(temp, GetHashCode(enumerator.Current));\n\n while (enumerator.MoveNext())\n {\n temp = CombineHashCodes(temp, GetHashCode(enumerator.Current));\n }\n }\n else\n {\n temp = CombineHashCodes(temp, EmptyCollectionPrimeNumber);\n }\n\n return temp;\n }\n}\n\nThe helper struct can be used in our SuperHero class like so:
public override int GetHashCode()\n{\n return HashCode\n .Of(this.name)\n .And(this.age)\n .AndEach(this.powers);\n}\n\nNow isn't that pretty? All the nasty magic numbers and unchecked code have been hidden away. It is a very lightweight and simple struct, so although we create new instances of it, it's stored on the stack rather than the heap. What's more, this code is just as fast (I've timed it)! We're using generics, so there is no boxing or unboxing going on. We're still using the unchecked keyword, so overflow checking is still disabled.
One interesting edge case is what to do when hashing a collection and you get either a null or empty collection. Should you use a zero to represent both scenarios (zero is usually used to represent a null value) or differentiate them somehow? I managed to get a response from Jon Skeet himself on Stack Overflow:
\n\nif both states are valid, it seems perfectly reasonable to differentiate between them. (Someone carrying an empty box isn't the same as someone not carrying a box at all...)
\n\n
This is why we use the prime number 19 (it could have been any prime number) to represent an empty collection. Whether this matters or not depends on your use case. If an empty collection means something different in your scenario, then we've got you covered. Generally speaking though, if you are exposing a collection property in your class you should consider making it a getter only and initializing it in the constructor, so that it is never null in the first place but here we're trying to cover all scenarios.
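To see the null versus empty distinction in action, here is a minimal standalone re-implementation of the relevant parts of the struct above (HashDemo is my name for it, the behaviour mirrors OfEach, CombineHashCodes and the EmptyCollectionPrimeNumber constant):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class HashDemo
{
    private const int EmptyCollectionPrimeNumber = 19;

    // Mirrors HashCode.OfEach from the struct above:
    // null hashes to zero, empty combines in the prime 19.
    public static int OfEach<T>(IEnumerable<T> items)
    {
        if (items == null)
        {
            return 0; // a null collection hashes to zero
        }

        if (!items.Any())
        {
            // An empty collection combines in the prime 19,
            // giving it a different hash code to a null one.
            return Combine(0, EmptyCollectionPrimeNumber);
        }

        return items.Aggregate(0, (hash, item) => Combine(hash, item?.GetHashCode() ?? 0));
    }

    private static int Combine(int h1, int h2)
    {
        unchecked
        {
            return ((h1 << 5) + h1) ^ h2;
        }
    }

    public static void Main()
    {
        Console.WriteLine(OfEach<string>(null));               // 0
        Console.WriteLine(OfEach(Enumerable.Empty<string>())); // 19
    }
}
```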
If you are using .NET Core 2.1, consider using the System.HashCode struct instead of my code. If you're using an older framework, you can also try the Microsoft.Bcl.HashCode NuGet package. There are two ways to use it:
The Combine method can be used to create a hash code, given up to eight objects.
public override int GetHashCode() =>\n HashCode.Combine(object1, object2);\n\nThe Add method is similar to my code but it does not handle collections and is not fluent:
\npublic override int GetHashCode()\n{\n var hash = new HashCode();\n hash.Add(this.object1);\n hash.Add(this.object2);\n return hash.ToHashCode();\n}\n\nThere are several advantages to using the .NET Core HashCode:
It handles null's automatically. The disadvantages are:
My code can hash the items of a collection, but the HashCode.Add method does not. I have been asked repeatedly about the licensing of my code above. Developers have been asking permission to use it in a Kerbal Space Program plugin and even in the excellent Chocolatey project, which has been totally unexpected for me because this is code I wrote years ago. It just goes to show how fundamental GetHashCode is. Please consider the code as MIT licensed, do good with it and be excellent to each other!
So I've just finished extolling the wonderful virtues of TaskCompletionSource with a colleague and thought I'd share the joy more widely. Eventually this will turn into a post about how great Reactive Extensions (Rx) is, I promise.
TaskCompletionSource is a great tool to turn any asynchronous operation which does not follow the Task Parallel Library (TPL) pattern into a Task. The example below is something I've started to do in a few places.
\npublic Task<bool?> ShowDialog()\n{\n TaskCompletionSource<bool?> taskCompletionSource = new TaskCompletionSource<bool?>();\n\n Window window = new MyDialogWindow();\n EventHandler eventHandler = null;\n eventHandler = \n (sender, e) =>\n {\n window.Closed -= eventHandler;\n taskCompletionSource.SetResult(window.DialogResult);\n };\n window.Closed += eventHandler;\n window.Show();\n\n return taskCompletionSource.Task;\n}\n\nIn the example above we are creating a new window, registering for its Closed event and then showing the window. When the window is closed, we de-register from the closed event handler (To avoid a remaining reference to the window, causing a memory leak) and the DialogResult of the window is passed to the TaskCompletionSource using the SetResult method.
The TaskCompletionSource gives us a nice Task object which we can return at the end of the method. When we return the task, its status is WaitingForActivation. Only when the SetResult method is called as the window closes does the task's status change to RanToCompletion.
This whole operation has been wrapped up and packaged nicely in a Task<bool?> with a nice bow on top with the help of TaskCompletionSource. Now we can call the method and await the results from the method call, thus allowing us to savour the power and simplicity the TPL affords us.
bool? result = await ShowDialog();\n\nThere are other great ways to use TaskCompletionSource of course. Generally speaking though I have found myself using it to turn an operation where I am waiting for an event into a task. For more information on TaskCompletionSource or the TPL in general I highly recommend reading Stephen Toub's blog.
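The same pattern as ShowDialog above generalises to any event. As an illustrative sketch (the Downloader class and its Completed event are made up for this example, not part of any real API), here is an event-based class wrapped into a Task:

```csharp
using System;
using System.Threading.Tasks;

public class Downloader
{
    public event EventHandler<string> Completed;

    public void Finish(string result) => this.Completed?.Invoke(this, result);

    // Wraps the Completed event into a Task using the same pattern as ShowDialog:
    // register a handler, de-register it when it fires, then signal the result.
    public Task<string> WhenCompleted()
    {
        var taskCompletionSource = new TaskCompletionSource<string>();

        EventHandler<string> handler = null;
        handler = (sender, result) =>
        {
            this.Completed -= handler; // de-register to avoid a memory leak
            taskCompletionSource.SetResult(result);
        };
        this.Completed += handler;

        return taskCompletionSource.Task;
    }
}
```

Calling WhenCompleted returns a task in the WaitingForActivation state; when the event fires, the task moves to RanToCompletion carrying the event's payload.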
Having showed my colleague the above example and feeling very content, I suddenly realised that Reactive Extensions (Rx) can make the code even simpler. With the advent of the latest version of Reactive Extensions (Rx) you can now await observables and we can turn the method above into this:
\npublic async Task<bool?> ShowDialog()\n{\n var window = new MyDialogWindow();\n var closed = Observable\n .FromEventPattern<EventHandler, EventArgs>(\n h => window.Closed += h,\n h => window.Closed -= h);\n\n window.Show();\n await closed.FirstAsync();\n\n return window.DialogResult;\n}\n\nThe await keyword is just some syntactic sugar in the C# language that makes writing thorny asynchronous code effortless. The real meat of what drives it is the GetAwaiter method. The Reactive Extensions (Rx) team, seeing the genius that is the Task Parallel Library (TPL), took advantage. They added this method (actually an extension method) to IObservable<T>, allowing us to await an observable as seen in the example above.
However, there is a caveat which I shall explain. In the example above the Closed event could conceivably be fired any number of times (If the window was opened and closed a few times) and the observable wrapper around the Closed event never completes. So our observable returns multiple results but a task can only return a single result.
\nThe secret in our example is the FirstAsync method. We are actually awaiting the first result returned by our observable and don't care about any further results. By default awaiting an observable without the FirstAsync method above will actually await the last result before completion. If your observable does not complete, then you will be waiting forever!
Handily the Reactive Extensions (Rx) team has added several methods which you can use before you use await to modify the result of what you are awaiting. All of these methods end with the word Async. I've added a short list of these methods below (There are lots of overloads so I've just highlighted the main ones):
// Returns the first element of an observable sequence.\nstring result = await observable.FirstAsync();\n\n// Returns the first element of an observable sequence, or a default value if no such element exists.\nstring result = await observable.FirstOrDefaultAsync();\n\n// Returns the last element of an observable sequence. \n// This is the default action of awaiting an observable.\nstring result = await observable.LastAsync();\n\n// Returns the last element of an observable sequence, \n// or a default value if no such element exists.\nstring result = await observable.LastOrDefaultAsync();\n\n// Returns the only element of an observable sequence, and throws an exception if there is not exactly \n// one element in the observable sequence.\nstring result = await observable.SingleAsync();\n\n// Returns the only element of an observable sequence, or a default value if the observable sequence \n// is empty; this method reports an exception if there is more than one element in the observable sequence.\nstring result = await observable.SingleOrDefaultAsync();\n\n// Invokes an action for each element in the observable sequence and returns a Task that will get \n// signalled when the sequence terminates.\nawait observable.ForEachAsync(x => Console.WriteLine(x));\n\nAll of the above methods allow you to pick a single result from your observable. ForEachAsync is different though as it performs an action on each item and when your observable completes (If it does) then the task completes.
So we've learned how to await observables in different ways and how it can be another way of doing the same thing that TaskCompletionSource does but in a cleaner more elegant way.
We've also learned that there are some caveats that you need to be aware of when awaiting an observable i.e. that observables return multiple results and you have to pick one to return in your task.
\n", "url": "https://rehansaeed.com/reactive-extensions-part5-awaiting-observables/", "title": "Reactive Extensions (Rx) - Part 5 - Awaiting Observables", "summary": "How and where to use Task Parallel Library (TPL) async and await with Reactive Extensions (Rx). Also, how to use TPL for awaiting observables.", "image": "https://rehansaeed.com/images/hero/Reactive-Extensions-1366x768.png", "date_modified": "2014-03-27T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/wpf-metro-part4-elysium-extra/", "content_html": "https://www.youtube.com/watch?v=PGM_uBy99GA
\nIn this series of posts I'm going to do a quick review of a few different open source WPF Metro (or Modern if you prefer) style SDKs. Each provides styles and new controls to build your WPF application from scratch. Here are the home pages for the open source projects in question, where you can download the source code and the binaries to play with them yourself:
\n\n::: warning\nI am the developer of Elysium Extra. It was developed at the company where I work, where I had an excellent manager who saw the advantage of open sourcing the code and making it widely available to the .NET community.
\nMy intention was to submit this code to the Elysium project, however, the developer had been away for a couple of months, so I released Elysium Extra as a separate project. Happily though, the developer of Elysium has returned and resumed development of the SDK. I've yet to discuss how we can move forward with the two code bases.
\nThis post will be a bit longer as I need to outline the contents of Elysium Extra as it's a new SDK and needs a little explaining (It contains a lot of cool stuff!).\n:::
\nElysium Extra is an add-on project to Elysium. It fills in some gaps which existed in the Elysium SDK, improves some styles and adds lots of new styles, new controls and style guidelines.
\nAs the code has been used in a real life, large scale application (albeit heavily modified and cleaned up to make it ready for open sourcing), it should in theory (fingers crossed) be production quality. It also contains a lot of attached properties and other small bits of code which were found to be required when building a WPF application, that other SDKs do not contain. All of these have demos in the sample application.
\nThe look and feel picks up from where Elysium leaves off. Some control styles have been modified and improved, others tweaked for consistency.
\nThe window has been heavily modified to allow full customization. Pretty much every feature you can think of has been added. Changes include window border glow effects, saving window placement, full screen mode...the list goes on.
\nOther major differences between this SDK over Elysium are that the TextBox and ComboBox controls have been brought more into line with each other, so if you add them in a form they don't look out of place.
\nThe ability to change themes is unfortunately currently broken as a side effect of Elysium Extra having to wrap the Elysium XAML styles. This is pretty high up the list of things planned to be fixed though.
\n
Custom colours over and above what Elysium provides and guidelines on how to use them.
\nAs this SDK has been open sourced from a live application, there are lots of styles for icons, menu items and app-bar buttons. These are things you'd use in every application, like cut, copy and paste buttons.
\n

The ElasticPicker is a unique control that I'm most proud of. It can be used to filter a collection of things with a few clicks, all while looking fantastic and doing lots of flashy animations that will make your eyes light up. The screen-shots do not do it justice, so do watch the video for a better understanding.
\nIn the example below the ElasticPicker is bound to a collection of people and we have given the user the ability to filter based on four attributes of the people. Each attribute shows the number of people in that category; when the user selects a category, the numbers, sizes and colours of all of the other boxes change. It is a sight to behold.
\n

Icons are ridiculously underestimated in their ability to improve the look and feel of an application and also in the difficulty of getting them right. I've had to use PNG icons in the past and I can tell you finding an off the shelf icon that matches the look and feel of your application and indeed your other icons is hard work. Creating your own icons from scratch is even harder.
\nWith XAML combined with the Metro look and feel though, all this becomes easy. Nearly 3000 icons are provided out of the box. Icons are a single colour, making them all look consistent and in keeping with each other. It's simple to create your own, with the option of using Geometry, Text or PNG images with opacity masks.
\n![]()
The wizard control is the basis of all my screens. It can be used like the Frame and Page controls in Silverlight or Windows Phone, where you navigate between pages. The difference being that you can optionally structure your pages or WizardItem's into a linear or tree hierarchy and use the breadcrumb navigation bar to navigate between pages.
\nIt gets even better. Each WizardItem supports entrance and exit animations. So when you are moving pages, some really nice transition animations can be applied to any of your controls on the page. If you set backgrounds to your pages, they even fade into each other.
\n


Although not an entirely new control, the DataGrid has been augmented with quite a bit of extra functionality. There are a few new column types like the DateTime and ProgressBar columns. Attached properties have been used to add column specific group summaries, save column settings, add single click editing and single click row/cell de-selection, among other things.
\n
The ExpanderMenu is a great control that a colleague came up with. We've generally used it in the DataGrid. When a user clicks on a row, the ExpanderMenu appears in a nice sliding animation and can be used instead of or in conjunction with a ContextMenu.
\n
The ability to drag and drop in a completely MVVM friendly method has been added. It is super simple to use and you can get it working with just four lines of XAML! The DragManager is used to make something draggable and the DragCommand is used to fire a standard ICommand. It's super customizable and very easy to use.
\nFurthermore, styles have been created for the ListBoxItem and DataGridRow which allow re-ordering using drag and drop.
\n

The Fish Eye list box is just a standard list box, except that mousing over or selecting an item makes it grow, and it gives you a nice top level menu bar. The NotesListBox can be used to display several rich text notes, which expand gradually when you mouse over them.
\n

The Accordion is a bit like the Expander control, except that you have more than one and can control how many can open at any one time.
\nThe Accordion control has been taken from the left overs of the WPF Toolkit (buggy controls which did not make it into the proper .NET Framework). It has been fixed and totally restyled to resemble the Expander style.
\n
For those that have used the Flyout control in Windows 8, this is very much the same. In the SDK, it's just an Expander control with a custom style (I like to re-use existing controls as much as possible where it makes sense).
\n

Various input controls have been added or improved. Notable additions include the NumericUpDown controls which allow you to select a number and the ButtonTextBox, which can be used to add a button action to a text box.
\n




Validation is something that's built into WPF but is so often neglected by people styling controls. So far, this SDK supports validation on the TextBox, ComboBox, DatePicker and numeric up down controls such as IntegerUpDown. I recommend using the INotifyDataErrorInfo interface on your view models, or the NotifyDataErrorInfo base class provided, in conjunction with these controls.
\n
The HyperlinkButton is just a button with a custom style which underlines the text in the button on mouse over and changes the mouse cursor to a hand.
\n
Several new abilities have been added to the window. These include better customization of look and feel, full screen mode, saving window placement, setting the task-bar overlay icon using XAML and/or binding and a custom window drop shadow.
\n
This is a special Window which allows you to overlay another window with some content. Crucially this window is not modal but allows modal style interaction with the underlying window.
\n
The standard WPF MessageBox is ugly and uses a really bad design where you can only ask Yes or No style questions. The MessageDialog comes to the rescue with a Windows 8 style dialogue. You can ask any questions and even customize the allowed response buttons.
\n
The NotifyBox is like the Microsoft Outlook style email notifications. A small window fades in on the bottom right and slowly disappears, unless you mouse over it.
\n
The loading content control can be used to cover any content with a loading animation and loading message. You can toggle it on or off using a simple boolean value.
\n
You can add any fixed size content to the Paging control and it will separate it into pages which the user can switch between with a nice transition animation.
\n

The GraphTreeView is just a style for a TreeView, which shows the TreeViewItem's horizontally instead of vertically. In the past I've allowed users to switch between horizontal and vertical views on my TreeView's and users have been pretty appreciative.
\n
Animating controls has been made simple. All you need to do is add a couple of lines of XAML and your ItemsControl's will fade and slide in sequentially or randomly.
\n
The flip control is basically an ItemsControl which only shows the selected item. When the selected item changes, it flips between the items until it gets to the new selected item.
\n
There are several custom border controls available, although usage is discouraged because they are not very metro!
\n
The code is available on GitHub and I've tried to keep it up to date with new additions and fixes being made over the weekends.
\nI have tried to keep the code in keeping with the existing Elysium project, so when controls are moved to the parent Elysium project (I don't know how or when this will happen) this should be a fairly simple process.
\nThe code is fully StyleCop compliant. If you don't use StyleCop or even don't know what it is, I suggest you read this.
\nIf nothing happens in the next couple of months I will start to consider creating a NuGet package, along with all the niceties that make a developers life a little easier.
\nThis is a very young project with currently only one developer but I do intend to keep it working. Your mileage with this SDK may vary but any contributions or bug fixes are welcome.
\n", "url": "https://rehansaeed.com/wpf-metro-part4-elysium-extra/", "title": "WPF Metro Part 4 - Elysium Extra", "summary": "Elysium Extra is an excellent Windows Presentation Foundation (WPF) SDK providing Metro styles for built in WPF controls and some custom controls.", "image": "https://rehansaeed.com/images/hero/Elysium-Extra-1366x768.png", "date_modified": "2014-03-19T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/wpf-metro-part3-elysium/", "content_html": "https://www.youtube.com/watch?v=YChJguA6ai8
\nIn this series of posts I'm going to do a quick review of a few different open source WPF Metro (or Modern if you prefer) style SDKs. Each provides styles and new controls to build your WPF application from scratch. Here are the home pages for the open source projects in question, where you can download the source code and the binaries to play with them yourself:
\n\nThis is the third post in the series. I've been using Elysium at work for the last year and have been pretty impressed with it. There are a few gaps in terms of the styles it provides but that's where Elysium Extra comes in.
\nThis SDK takes its own view of what Metro ought to look like. It makes much more use of the accent colour in its controls which have a bold and striking look to them. Where colour is used, it is used in abundance, on the otherwise plain white canvas.
\nOnce again, this SDK supports themes out of the box. It gives you the option of a dark or light background, an accent colour and contrast colour.
\n

The standard Windows 8 buttons are all here, coming in three variants. The SDK also provides a Windows 8 style application bar which pops up from the bottom of the screen on right click. Buttons placed in the application bar have separate styles and look pretty pleasing. The developer has done a pretty good job of recreating the Windows 8 application bar experience.
\nThe standard alternative to the CheckBox control is here too.
\n

These are the alternatives to the standard WPF ProgressBar control and are quite pleasing to the eye.
\nThe code is available on GitHub and you could download the latest change set but it seems to be broken at the time of writing because the developer is doing some pretty major re-factoring (Use Team Foundation Server shelve-sets to avoid checking in broken code). That's a bit of a shame as the basics of this SDK are pretty solid.
\nLooking at the discussion boards, there seems to be a fair amount of activity but the developer seems to have gone on holiday as I've been unable to contact him to talk about Elysium Extra which I'll be discussing in my next post (Update: The developer is back and says he is continuing development after a brief pause).
\nThis is a solid SDK but it does lack basic styles for some WPF controls such as the DatePicker and DataGrid, for some this is a deal breaker but Elysium Extra covers these bases and more. Elysium does provide a NuGet package which is great and even an MSI which has full integration with Visual Studio!
\n

Reactive Extensions (Rx) is a huge library. In this series of blog posts I've tried to illustrate real world applications where using Rx can significantly improve and/or simplify your code. So far I've talked about using Rx to replace standard C# events and also wrapping C# events with observables.
\nIn this post I'm going to talk about another area where Rx provides a nicer API surface, as compared to an existing .NET API. In particular, I'm talking about the .NET timers. There are a few different timers available in the .NET Framework, the main ones being System.Threading.Timer and System.Timers.Timer. Each one has its pros and cons but I'm not going to go into which ones are better as that's a whole other conversation.
\nBelow is a very simple example of how to use the System.Timers.Timer (System.Threading.Timer is very similar). We new-up a Timer with the number of milliseconds we want the timer to wait before raising its Elapsed event, register for the Elapsed event and finally start the timer.
public void StartTimer()\n{\n Timer timer = new Timer(5000);\n timer.Elapsed += this.OnTimerElapsed;\n timer.Start();\n}\n\nprivate void OnTimerElapsed(object sender, ElapsedEventArgs e)\n{\n // Do Stuff Here\n Console.WriteLine(e.SignalTime);\n // Console WriteLine Prints\n // 11/03/2014 10:58:35\n // 11/03/2014 10:58:40\n // 11/03/2014 10:58:45\n // ...\n}\n\nNow here is how to do almost the exact same thing with Reactive Extensions. In this scenario we use the Interval method to specify the timer interval and simply subscribe to the observable. The only difference is that we don't get a DateTime with the time of the timer elapsed event but an integer representing the number of times the timer elapsed delegate has been fired (Much more useful in my opinion, as you can always derive the time from this number).
public void StartTimer()\n{\n Observable\n .Interval(TimeSpan.FromSeconds(5))\n .Subscribe(\n x =>\n {\n // Do Stuff Here\n Console.WriteLine(x);\n // Console WriteLine Prints\n // 0\n // 1\n // 2\n // ...\n });\n}\n\nBut that's not all, Reactive Extensions provides another method which can be quite useful. In the example below which looks almost exactly the same, we use the Timer method instead of Interval. This method only calls the timer elapsed delegate once.
public void StartTimerAndFireOnce()\n{\n Observable\n .Timer(TimeSpan.FromSeconds(5))\n .Subscribe(\n x =>\n {\n // Do Stuff Here\n Console.WriteLine(x);\n // Console WriteLine Prints\n // 0\n });\n}\n\nThe Timer method has lots of overloads which you can take a look at. The example below calls the timer elapsed delegate every five seconds but only starts to do so, after a minute has passed.
public void StartTimerInOneMinute()\n{\n Observable\n .Timer(TimeSpan.FromMinutes(1), TimeSpan.FromSeconds(5))\n .Subscribe(\n x =>\n {\n // Do Stuff Here\n Console.WriteLine(x);\n // Console WriteLine Prints\n // 0\n // 1\n // 2\n // ...\n });\n}\n\nThe final thing you need to know about the Interval and Timer methods is that they can optionally take an IScheduler as a final parameter. This allows the timer elapsed delegate to be run on any thread of your choosing. In the example below we are subscribing the event on the WPF UI thread. There are several other Scheduler's you can use, so just take a look.
public void StartTimerOnUIThread()\n{\n Observable\n .Interval(TimeSpan.FromSeconds(5), DispatcherScheduler.Current)\n .Subscribe(\n x =>\n {\n // Do UI Stuff Here\n });\n}\n\nIf you haven't started using Reactive Extensions yet, then here is yet another reason to get going. What I have not shown in this post is the sheer superhuman power you get when you use the LINQ methods to modify your observable just before you make the call to Subscribe.
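As a small taste of that power, here is a sketch (the class and method names are mine, invented for illustration) that reshapes a timer stream with the standard Where and Select operators before anything subscribes to it:

```csharp
using System;
using System.Reactive.Linq;

public static class FilteredTimer
{
    // Reshape a stream of timer ticks with the standard LINQ operators:
    // keep every other tick and project the tick count into a message.
    public static IObservable<string> EveryOtherTick(IObservable<long> ticks) =>
        ticks
            .Where(tick => tick % 2 == 0)
            .Select(tick => "Tick number " + tick);

    public static void Main()
    {
        // In a real program the source would keep firing every five seconds;
        // disposing the subscription stops the timer.
        using (EveryOtherTick(Observable.Interval(TimeSpan.FromSeconds(5)))
            .Subscribe(Console.WriteLine))
        {
            Console.ReadLine();
        }
    }
}
```

Because the pipeline is just an IObservable transformation, the same EveryOtherTick method works against any source, which makes it easy to test with Observable.Range instead of a real timer.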
Would you buy a sandwich without the filling? Would you buy a car without the radio? No? Then why are most Windows 8.1 laptops sold without touch screens?
\nI do a bit of Windows 8 and Windows Phone development on the side, so a touch screen is handy for testing but you're probably asking, what's so good about a touch screen? Well this is a little difficult to quantify. Having tried a laptop with one, there are some operations which are just more natural with your fingers. Resizing an image with a two finger gesture sticks out as one good example. You find yourself using it every now and then wherever appropriate.
\nWindows 8 seems to have gotten a bad rap in recent times, mostly due to the full screen start screen. I personally can't see the problem; it is just statistically more efficient and quicker to open an application than the old cramped Windows 7/Vista start menu and it has lots of cool features like pausing file transfers and mounting ISO files which I use all the time.
\n
Heat map of time to reach items in the Start menu from the Start button
(green items are the fastest to get to, red items are the slowest)

Heat map of time to reach tiles in the Start screen from the Start button
(green tiles are the fastest to get to, red tiles are the slowest)
That's not to say it doesn't have its problems, I like to use the Windows 8.1 mail application. I like it because I can pin it to my start screen and get notifications on the tile for any new emails. As a heavy desktop user though, I want to open the application on the desktop in a window and not in the full screen touch friendly window. Happily though :), this is going to be fixed in the next update where WinRT apps can now be run on the good old desktop.
\n
Windows 8 Bing Finance app running on Desktop
\nI have been looking to replace my trusty old non-touch screen Samsung laptop running Windows 8.1 with a new touch screen model. However, I like to shoot a few HE shells at some tanks while playing World of Tanks on the weekend in my down time. It's not the most demanding of games but I'd like to look forward to future updates and other games like Battlefield 4.
\nSo I've been doing my research on what my options are for a touch screen gaming laptop and was very surprised to discover that they are pretty non-existent. The closest thing you'll find is the Asus N550JV-CN201H, which is not really a gaming laptop with an nVidia GT750M low to mid-range graphics card. It'll play today's games at mid-range settings and will probably choke on tomorrow's games altogether.
\nWhy is this? Well after some reading I get the impression that laptop manufacturers believe that enthusiasts don't like glossy screens and adding a touch screen means adding a glossy layer on top of the screen. This, I think, is personal preference. I have seen one matte-screened touch screen laptop in the wild (though I can't remember where), so it is at least possible to have a matte screen. So why the delay?
\nWhat I really want is the MSI GS70 Stealth but with a touch screen. This is the world's thinnest 17" laptop with an Intel i7 CPU, a GT765M mid to high end graphics card and even a SSD RAID 0 array. Overall it's a very impressive beast packed into as small a space as possible.
\n
So if MSI is listening, the demand is here for touchscreen gaming laptops. Surprise us.
\n", "url": "https://rehansaeed.com/windows-8-touchscreen-gaming-laptops/", "title": "Windows 8.1 TouchScreen Gaming Laptops", "summary": "Would you buy a sandwich without the filling? No? Then why are most Windows 8.1 laptops sold without touchscreens? We need a touchscreen gaming laptop.", "image": "https://rehansaeed.com/images/hero/Fry-Shutup-And-Take-My-Money-1366x768.png", "date_modified": "2014-03-06T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/wpf-metro-part2-mahapps-metro/", "content_html": "https://www.youtube.com/watch?v=FLmCkKfIp84
\nIn this series of posts I'm going to do a quick review of a few different open source WPF Metro (or Modern if you prefer) style SDKs. Each of them provides styles and new controls to build your WPF application from scratch. Here are the home pages for the open source projects in question where you can download the source code and the binaries to play with them yourself:
\n\nThis is the second post in the series, in which I take a look at MahApps Metro. Again, this SDK provides a complete set of WPF styles and several custom controls, mostly inspired by the new Windows 8 WinRT applications.
\nThis SDK borrows heavily from the Windows 8 Metro design language. It provides fairly large controls (Windows 8 favours larger, touch-friendly buttons than typical desktop applications) with fairly thick borders, which gives each control a more imposing look. There are lots of brushes used by this SDK and the developer has provided a nice colour swatch.
\n
The Window itself positively glows as the developer has spent a lot of time customizing the window drop shadow. The title bar of the window also stands out as it is highlighted in the applications accent colour. Theme support is built in, with light or dark combined with an accent colour as shown below.
\n

(See Above) Similar to the ToggleSwitch in Windows 8, this control provides an optional replacement for the standard CheckBox control. Every Metro SDK seems to include this control.
\n(See Above) Again this control borrows from the Windows 8 application bar buttons. A staple of all Metro SDKs it seems.
\nThis is a great control that is sorely needed in most WPF applications. Internally it handles numbers of type Double but I suppose you could use a converter to work with Integer and other numeric types.
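As a sketch of that converter idea (the class name is mine and the control's Value property is assumed to be a double; adjust to the actual control being bound), a WPF IValueConverter could bridge an int view model property to the control:

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// Hypothetical converter bridging an int view model property to a
// numeric up/down style control whose Value property is of type double.
public class Int32ToDoubleConverter : IValueConverter
{
    // View model (int) -> control (double).
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture) =>
        System.Convert.ToDouble(value, culture);

    // Control (double) -> view model (int).
    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) =>
        System.Convert.ToInt32(value, culture);
}
```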
\nThere is a great selection of text box controls. They all seem to have the watermark capability (Again, a simple addition sorely missed in WPF). There is also the option of text boxes with buttons.
\n
A simple circular progress indicator control. I can stare at these things for hours. Very mesmerizing.
\nThe range slider is another great addition. It looks very much like the Slider control with an additional thumb button.
\n
This is a truly great message box. Instead of copying and restyling the standard Windows MessageBox, it has been totally reshaped and re-imagined ala Windows 8 to cover the entire parent Window. Not only that but you can place your own custom content in there. There are a few different options to choose from, the version below shows a text box.
\n
This Windows 8 style fly-out control does what it says on the tin, it flies in and out of the side of the Window. A nice feature is that it covers the title bar of the window but the caption buttons are still overlayed above the fly-out control.
\n
This control also exists in Windows 8, although it works slightly differently. It allows you to view several items one at a time and provides next and previous buttons to cycle between them.
\n
I downloaded the source code hosted on GitHub and took it for a whirl. All the XAML is pretty well layed out and there is a fair amount of code in the project but that's expected as there are lots of custom controls to play with. There is only a single dependency on System.Windows.Interactivity, so you only need to add references to two DLL's to your projects.
\nLooking at the discussion boards, there seems to be a fair amount of activity with the developers answering questions and even responding to feedback which is great to see. The last check-in was just four days ago at the time of writing so this project is still very much active. I can see this project getting better and better in the future.
\nOnce again I'm very impressed with this SDK, it provides styles for all the standard WPF controls and takes a pretty good stab at bringing a lot of Windows 8 Metro controls to the desktop. It does all this while looking and feeling great. They even provide a NuGet package, which is a nice touch.
\n


https://www.youtube.com/watch?v=Bk7mlEQI2rk
\nIn this series of posts I'm going to do a quick review of a few different open source WPF Metro (or Modern if you prefer) style SDKs. Each of them provides styles and new controls to build your WPF application from scratch. Here are the home pages for the open source projects in question where you can download the source code and the binaries to play with them yourself:
\n\nIn this first post I will be discussing Modern UI for WPF. This is a very lightweight framework which has a complete set of styles for the built in WPF controls.
\nThe look of the application seems to borrow quite heavily from Microsoft's Zune application, which is Microsoft's equivalent of Apple's iTunes. Zune was used by Microsoft as an early test bed for their Metro style. Microsoft then went on from there to build Windows 8. This is Zune below:
\n
The Window style is quite stunning, mostly achieved by providing some very nice soft backgrounds to go with the option of white or black themes. The theme is fairly flexible, allowing a selection of 'Accent' colours and even selection of small and large font sizes. One cool feature I discovered is that changing the them does not simple change the colours but the Developer has used a colour animation to transition from one colour to another.
\n

The controls themselves are more toned down, employing simple grey lines and a very minimalist design. Most of the controls are slightly on the small side compared to Windows 8 but having played with Zune I can tell you that the developer has recreated everything pretty accurately.
\n
All the built in WPF controls seem to have been styled and a type ramp is even included, giving a simple way to show text in different guises. Validation of input is supported via the TextBox control. It would have been nice for other controls to have been supported too but it is very nicely done.
\nThe main control of interest is the tab control called ModernMenu, this provides a very interesting multi-level tabbed experience. It allows you to split the tab contents into separate user controls using the URI to the XAML page, a bit like the Silverlight experience. Navigation is always a problem that needs to be solved in a WPF application and this is one very clean method of doing it.
\n
A Windows 8 style circular app-bar button. Comes in two sizes.
\n
The developer has inherited from the Window class to create three very useful window types.
\n

This is very easily the best progress indicator I have seen. It has several modes, allowing you to select the method of showing progress. Simply mesmerizing.
\n
This is a nice little control allowing you to use BB code to write some text.
\n
The source code seems pretty lightweight and nicely structured, with each controls style having it's own XAML file. There are only two DLL's which need to be deployed, they even provide a NuGet package which is great. Looking at the checking history, the single developer (The creator of XAML Spy no less) seems fairly active with the last check-in performed on 21st January 2014, so the project is still very much active. Looking at the discussion forums he seems to also be very helpful.
\nOverall I'm very impressed with this SDK, it's a labour of love from a single developer. It has a very high attention to detail and covers most bases for most developers, providing a full set of WPF styles.
\nIf the developer is reading, I would suggest adding a tab in his sample showing the colours and brushes available to the developers and expanding his validation to other input controls.
\nThis is a great project and it would be a pleasure to use any application developed using this SDK.
\n", "url": "https://rehansaeed.com/wpf-metro-part1-modern-ui-for-wpf/", "title": "WPF Metro Part 1 - Modern UI for WPF", "summary": "Modern UI for WPF is an excellent Windows Presentation Foundation (WPF) SDK providing Metro styles for built in WPF controls and several custom controls.", "image": "https://rehansaeed.com/images/hero/Modern-UI-for-WPF-1366x768.png", "date_modified": "2014-02-19T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/reactive-extensions-part3-naming-conventions/", "content_html": "Standard C# events do not have any real naming convention, except using the English language to suggest that something has happened e.g. PropertyChanged. Should a property returning an IObservable<T> have a naming convention? I'm not entirely certain but I'll explain why I have used one and why.
C# events are easily differentiated in a class from properties and methods because they have a different icon in Visual Studio Intelli-Sense. Visual Studio gives IObservable<T> properties no such differentiation. This may change in the future if Microsoft decides to integrate Reactive Extensions (Rx) more deeply into Visual Studio.
The second reason for using a naming convention is that I often wrap existing C# events with a Reactive Extensions event. It's not possible to have the same name for a C# event and an IObservable<T> property.
You will have noticed already, if you've looked at my previous posts, that I use the word 'When' prefixed before the name of the property. I believe this nicely indicates that an event has occurred and also groups all our Reactive Extensions event properties together under Intelli-Sense.
\npublic IObservable<string> WhenPropertyChanged\n{\n get { ... };\n}\n\nI have read in a few places people suggesting that so called 'Hot' and 'Cold' (See here for an explanation) observables should have different naming conventions. I personally feel that this is an implementation detail and I can't see why the subscriber to an event would need to know that an event was 'Hot' or 'Cold' (Prove me wrong). Also, trying to teach this concept to other developers and get them to implement it would mean constantly looking up the meanings (I keep forgetting myself), whereas using 'When' is a nice simple concept which anyone can understand.
\nThis is a pretty open question at the moment. What are your thoughts on the subject?
\n", "url": "https://rehansaeed.com/reactive-extensions-part3-naming-conventions/", "title": "Reactive Extensions (Rx) - Part 3 - Naming Conventions", "summary": "Reactive Extensions (Rx) Advantages of using IObservable property naming conventions and comparison between C# events.", "image": "https://rehansaeed.com/images/hero/Reactive-Extensions-1366x768.png", "date_modified": "2014-02-14T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/reactive-extensions-part2-wrapping-events/", "content_html": "Sometimes it is not possible to replace a C# event with a Reactive Extensions (Rx) event entirely. This is usually because we are implementing an interface which has a C# event and we don't own the interface.
\nHowever, as I'll show in this post, it's possible to create IObservable<T> wrappers for C# events and even to hide the C# events entirely from consumers of the class.
The method of wrapping C# events depends on the type of event handler used. Below are the three types of event handler and the methods of wrapping them with an observable event.
\nThe FromEventPattern method is used to wrap the event. Notice we have to specify delegates for subscribing (+=) and unsubscribing (-=) to the event.
public event EventHandler BunnyRabbitsAttack;\n\npublic IObservable<object> WhenBunnyRabbitsAttack\n{\n get\n {\n return Observable\n .FromEventPattern(\n h => this.BunnyRabbitsAttack += h,\n h => this.BunnyRabbitsAttack -= h);\n }\n}\n\nThis example is much the same as the last, except we have to deal with the event arguments. The FromEventPattern method returns an EventPattern<T> object, which contains the sender and the event arguments. We're only interested in the contents of the event arguments, so we use a Select to return just the BunnyRabbits property.
public event EventHandler<BunnyRabbitsEventArgs> BunnyRabbitsAttack;\n\npublic IObservable<BunnyRabbits> WhenBunnyRabbitsAttack\n{\n get\n {\n return Observable\n .FromEventPattern<BunnyRabbitsEventArgs>(\n h => this.BunnyRabbitsAttack += h,\n h => this.BunnyRabbitsAttack -= h)\n .Select(x => x.EventArgs.BunnyRabbits);\n }\n}\n\nSome C# events use a custom event handler. In this case we have to specify the type of the event handler as a generic argument in the FromEventPattern method.
public event BunnyRabbitsEventHandler BunnyRabbitsAttack;\n\npublic IObservable<BunnyRabbits> WhenBunnyRabbitsAttack\n{\n get\n {\n return Observable\n .FromEventPattern<BunnyRabbitsEventHandler, BunnyRabbitsEventArgs>(\n h => this.BunnyRabbitsAttack += h,\n h => this.BunnyRabbitsAttack -= h)\n .Select(x => x.EventArgs.BunnyRabbits);\n }\n}\n\nThe disadvantage of the above approach is that we now have two ways to access our event. One with the old style C# event and the other with our new Reactive Extensions event. With a bit of trickery we can hide the C# event in some cases.
\nThe INotifyPropertyChanged interface is very commonly used by XAML developers. It has a single event called PropertyChanged. To hide the PropertyChanged C# event we can explicitly implement the interface (Click here for details on implicit versus explicit implementations of interfaces). Secondly, we wrap the event as we did before.
Now the PropertyChanged C# event can only be accessed by first casting the object to INotifyPropertyChanged (Binding in XAML languages, which uses this interface, continues to work). Our new Reactive Extensions observable event is now the default method of subscribing for property changed events.
public abstract class NotifyPropertyChanges : INotifyPropertyChanged\n{\n event PropertyChangedEventHandler INotifyPropertyChanged.PropertyChanged\n {\n add { this.propertyChanged += value; }\n remove { this.propertyChanged -= value; }\n }\n\n private event PropertyChangedEventHandler propertyChanged;\n\n public IObservable<string> WhenPropertyChanged\n {\n get\n {\n return Observable\n .FromEventPattern<PropertyChangedEventHandler, PropertyChangedEventArgs>(\n h => this.propertyChanged += h,\n h => this.propertyChanged -= h)\n .Select(x => x.EventArgs.PropertyName);\n }\n }\n\n protected void OnPropertyChanged(string propertyName) =>\n this.propertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));\n}\n\nSo it may not always be possible to get rid of, dare I say it, legacy C# events, but we can certainly wrap them with Reactive Extensions observables and even hide them altogether.
\n", "url": "https://rehansaeed.com/reactive-extensions-part2-wrapping-events/", "title": "Reactive Extensions (Rx) - Part 2 - Wrapping C# Events", "summary": "Reactive Extensions IObservable wrappers for C# events and hiding the C# events entirely from subscribers using explicit interface implementations.", "image": "https://rehansaeed.com/images/hero/Reactive-Extensions-1366x768.png", "date_modified": "2014-02-13T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/reactive-extensions-part1-replacing-events/", "content_html": "For those who have not tried Reactive Extensions (Rx) yet, I highly recommend it. If I had to describe it in a few words it would be 'Linq to events'. If you have not already learned about it, this is by far the best resource on learning its intricacies.
\nI have spent a lot of time reading about Reactive Extensions but what I have not found in my research is examples or pointers on how or even where it should be used in preference to other code. One area where you should definitely consider using Reactive Extensions is as a direct replacement for bog standard C# events, which have been around since C# 1.0. This post will explain how.
\nHere is an example of a standard C# event using the standard recommended pattern:
\npublic class JetFighter\n{\n public event EventHandler<JetFighterEventArgs> PlaneSpotted;\n\n public void SpotPlane(JetFighter jetFighter)\n {\n EventHandler<JetFighterEventArgs> eventHandler = this.PlaneSpotted;\n if (eventHandler != null)\n {\n eventHandler(this, new JetFighterEventArgs(jetfighter));\n }\n }\n}\n\nNow this is how you replace it using Reactive Extensions:
\npublic class JetFighter\n{\n public event EventHandler<JetFighterEventArgs> PlaneSpotted;\n\n public void SpotPlane(JetFighter jetFighter)\n {\n EventHandler<JetFighterEventArgs> eventHandler = this.PlaneSpotted;\n if (eventHandler != null)\n {\n eventHandler(this, new JetFighterEventArgs(jetFighter));\n }\n }\n}\n\nNow this is how you replace it using Reactive Extensions:
Reactive Extensions also has the added concept of errors and completion, which C# events do not have. These are optional added concepts and not required for replacing C# events directly but worth knowing about, as they add an extra dimension to events which may be useful to you.
\nThe first concept is dealing with errors. What happens if there is an exception while you are spotting the plane and you want to notify your subscribers that there is a problem? Well you can do that, like this:
\npublic void SpotPlane(JetFighter jetFighter)\n{\n try\n {\n if (string.Equals(jetFighter.Name, &quot;UFO&quot;))\n {\n throw new Exception(&quot;UFO Found&quot;);\n }\n\n this.planeSpotted.OnNext(jetFighter);\n }\n catch (Exception exception)\n {\n this.planeSpotted.OnError(exception);\n }\n}\n\nHere we are using the OnError method to notify all the event's subscribers that there has been an exception.
So what about the concept of completion? Well, that's just as simple. Suppose that you have spotted all the planes and you want to notify all your subscribers that there will be no more spotted planes. You can do that like this:
\npublic void AllPlanesSpotted() => this.planeSpotted.OnCompleted();\n\nSo now all the code put together looks like this:
\npublic class JetFighter\n{\n private Subject<JetFighter> planeSpotted = new Subject<JetFighter>();\n\n public IObservable<JetFighter> PlaneSpotted => this.planeSpotted.AsObservable();\n\n public void AllPlanesSpotted() => this.planeSpotted.OnCompleted();\n\n public void SpotPlane(JetFighter jetFighter)\n {\n try\n {\n if (string.Equals(jetFighter.Name, &quot;UFO&quot;))\n {\n throw new Exception(&quot;UFO Found&quot;);\n }\n\n this.planeSpotted.OnNext(jetFighter);\n }\n catch (Exception exception)\n {\n this.planeSpotted.OnError(exception);\n }\n }\n}\n\nConsuming the Reactive Extensions events is just as easy and this is where you start to see the real benefits of Reactive Extensions. This is how you subscribe and unsubscribe (often forgotten, which can lead to memory leaks) to a standard C# event:
\npublic class BomberControl : IDisposable\n{\n private JetFighter jetfighter;\n\n public BomberControl(JetFighter jetFighter)\n {\n this.jetfighter = jetFighter;\n this.jetfighter.PlaneSpotted += this.OnPlaneSpotted;\n }\n\n public void Dispose() =>\n this.jetfighter.PlaneSpotted -= this.OnPlaneSpotted;\n\n private void OnPlaneSpotted(object sender, JetFighterEventArgs e)\n {\n JetFighter spottedPlane = e.SpottedPlane;\n }\n}\n\nI'm not going to go into it in too much detail; you subscribe using the += operator and unsubscribe using the -= operator.
This is how the same thing can be accomplished using Reactive Extensions:
\npublic class BomberControl : IDisposable\n{\n private IDisposable planeSpottedSubscription;\n\n public BomberControl(JetFighter jetFighter) =>\n this.planeSpottedSubscription = jetFighter.PlaneSpotted.Subscribe(this.OnPlaneSpotted);\n\n public void Dispose() =>\n this.planeSpottedSubscription.Dispose();\n\n private void OnPlaneSpotted(JetFighter jetFighter)\n {\n JetFighter spottedPlane = jetFighter;\n }\n}\n\nThe key things to note here are first, the use of the Subscribe method to register for plane spotted events. Second, the subscription to the event is stored in an IDisposable which can later be disposed of, to un-register from the event. This is where things get interesting, since we now have an IObservable<T>, we can use all kinds of Linq queries on it like this:
jetfighter.PlaneSpotted.Where(x => string.Equals(x.Name, &quot;Eurofighter&quot;)).Subscribe(this.OnPlaneSpotted);\n\nSo in the above line of code, I'm using a Linq query to only register for events where the name of the spotted plane is Eurofighter. There are a lot more Linq methods you can use but that's beyond the scope of this post and also where you should take a look at this website.
Reactive Extensions (Rx) is a pretty large library which does a lot of stuff that overlaps with other libraries like the Task Parallel Library (TPL). It brings no new capabilities but does bring new ways to do things (much like Linq), while writing less code and with more elegance. As a newcomer, it can be confusing to know where exactly it can be used effectively. Replacing basic events with IObservable<T> is definitely one area where we can leverage its power.
There is an on-going war among developers. This silent war has claimed countless hours of developer time through hours wasted in pointless meetings and millions of small skirmishes over the style of each developer's written code. This post outlines a peace treaty, a way forward if you will, but first I will outline the problem.
\nThis is the main battlefront where most time is wasted and where developers are most entrenched in their forward positions: whether to use underscores for your field names or the this keyword. I myself am in the this camp but neither has a clear advantage on the battlefield. The underscores make it marginally quicker to access your fields using Intelli-Sense, while the this keyword makes it quicker to differentiate instance members from static members.
private int _property;\n\npublic int Property\n{\n get { return _property; }\n}\n\nprivate int property;\n\npublic int Property\n{\n get { return this.property; }\n}\n\nThis lesser known conflict is where JavaScript styling has leaked into C#. The default formatting rules in Visual Studio usually quash this conflict but there are still those who see white space as wasted space and will go the extra mile by changing the Visual Studio settings to 'fix' this problem. I personally stick to the defaults and find the other method hard to read; a small sacrifice of a few extra lines is worth the gain in readability.
\npublic int Property\n{\n get { return this.property; }\n}\n\npublic int Property {\n get { return this.property; }\n}\n\nSome (like me) like to have all fields, properties, constructors and methods separated into their own groups. Others prefer fields grouped with properties, and members from implemented interfaces kept together. Again, there is no real right way: the former allows quick navigation to find what you need, while the latter allows quick navigation of members which relate to each other.
\nHere is one area where there is a clear advantage in one camp. As outlined in this Stack Overflow post, adding using statements inside the namespace can sometimes save a few seconds of troubleshooting. Yet even here, Visual Studio lets us down by having using statements outside the namespace as the default.
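To illustrate (a minimal sketch; the namespace and type names are invented), the inside-the-namespace placement looks like this:

```csharp
namespace MyCompany.MyApp
{
    // Inside the namespace, this directive is itself resolved relative
    // to the enclosing namespaces first: if a MyCompany.System namespace
    // existed, it (not the global System) would be imported here, which
    // surfaces such clashes immediately rather than in surprising ways.
    using System;

    public static class Program
    {
        public static void Main() => Console.WriteLine("Hello");
    }
}
```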
\nLooking at another developer's code can be an interesting experience. The style, look and feel of any code can vary wildly, even when written in the same language. Many times, I have found it difficult to find what I'm looking for and it takes time to adjust to each unique style.
\nMany a time, these large differences can occur even within the same team, reducing productivity. This leads to the inevitable 'standards' meetings, where a team of developers sit in a room and discuss underscores versus this and other differences at length. My own experience is that each side is entrenched, hunkering down into their positions, not wanting to have to change their writing style. In the end, the majority wins out or people continue working in their own way and everyone gets used to it.
I would argue that as there is no clearly superior coding style, it is a pointless waste of time arguing over it. However, there is something to be said for a common coding style. This is where StyleCop comes in. It is a set of style rules which can be applied to your C# code.
\nThere need be no lengthy discussion or arguing over it. Keep the default rules (turn off the comment rules if you choose) and you quickly have a set of standards that can be applied and tested not just in your team but universally by all C# developers around the world.
\nDownload some sample code from GitHub and be at ease with the familiar look of the hopefully well designed code. I paint a rosy picture but I see more and more developers using StyleCop. It takes a week to get used to the change in your writing style; I myself switched from underscores to this and have never looked back. You can too.
The Task Parallel Library, in conjunction with the async and await keywords, is great but there are some subtleties which you should consider. One of these is the use of the ConfigureAwait method.
If I wanted to get a list of the titles of the new posts from my RSS feed I could write the following code:
\nprivate async Task<IEnumerable<string>> GetBlogTitles()\n{\n // Current Thread = UI Thread\n HttpClient httpClient = new HttpClient();\n\n // GetStringAsync = ThreadPool Thread\n string rss = await httpClient.GetStringAsync("https://rehansaeed.com/feed/");\n\n // Current Thread = UI Thread\n List<string> blogTitles = XDocument.Parse(rss)\n .Descendants("item")\n .Elements("title")\n .Select(x => x.Value)\n .ToList();\n\n // Current Thread = UI Thread\n return blogTitles;\n}\n\npublic async Task UpdateUserInterface()\n{\n // Current Thread = UI Thread\n IEnumerable<string> blogTitles = await this.GetBlogTitles();\n\n // Current Thread = UI Thread\n this.ListBox.ItemsSource = blogTitles;\n}\n\nIf I was to call this method, then the entire method would execute on the calling thread except the bit where we call GetStringAsync which would go off and do its work on the ThreadPool thread and then we come back onto the original thread and do all our XML manipulation.
Now if this was a client WPF or WinRT application which has a UI thread, all of the XML manipulation we are doing would be done on the UI thread. This is placing extra burden on the UI thread which could mean application freeze ups if the UI thread is being heavily taxed. The solution is simple, we add ConfigureAwait(false) to the end of the call we are making to get the RSS XML. So now our new code looks like this:
private async Task<IEnumerable<string>> GetBlogTitles()\n{\n // Current Thread = UI Thread\n HttpClient httpClient = new HttpClient();\n\n // GetStringAsync = ThreadPool Thread\n string rss = await httpClient.GetStringAsync("https://rehansaeed.com/feed/").ConfigureAwait(false);\n\n // Current Thread = ThreadPool Thread\n List<string> blogTitles = XDocument.Parse(rss)\n .Descendants("item")\n .Elements("title")\n .Select(x => x.Value)\n .ToList();\n\n // Current Thread = ThreadPool Thread\n return blogTitles;\n}\n\npublic async Task UpdateUserInterface()\n{\n // Current Thread = UI Thread\n IEnumerable<string> blogTitles = await this.GetBlogTitles();\n\n // Current Thread = UI Thread\n this.ListBox.ItemsSource = blogTitles;\n}\n\nSo now all our XML manipulation is done on the ThreadPool thread along with the HTTP GET we are doing using the HttpClient. Notice however, that when we return the blog titles to the calling method we are back on the UI thread. Each time you do an await, the default behaviour is to continue on the thread we started with. By adding ConfigureAwait(false), we are overriding this behaviour to continue on whatever thread the Task was running on.
For more on the Task Parallel Library (TPL) I highly recommend reading Stephen Toub's blog.
\n", "url": "https://rehansaeed.com/configureawait-task-parallel-library/", "title": "ConfigureAwait in Task Parallel Library (TPL)", "summary": "The importance of using ConfigureAwait when using the Task Parallel Library (TPL) to improve performance and reduce context switching.", "image": "https://rehansaeed.com/images/hero/Microsoft-.NET-1366x768.png", "date_modified": "2014-02-07T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } }, { "id": "https://rehansaeed.com/just-start-the-first-blog-post/", "content_html": "I thought I should begin this first blog post with a few words on what I hope to achieve. I started creating this website because I wanted to create a space where I could post interesting things I found or learned whilst working as a software developer or just generally in life. I hope to not just blog about software development, best practices or cool code snippets, but also look into some of the other âsoft skillsâ a developer might need, such as software design and aesthetics which I have a particular interest in.
\nI have reached a point where I've helped build some pretty cool stuff in my career so far. I now want to share some of these things with the wider community. Hopefully, I'll have written some code which is of use to someone. Feel free to drop a comment now and again…
\n", "url": "https://rehansaeed.com/just-start-the-first-blog-post/", "title": "Just Start - The First Blog Post", "summary": "I thought I should begin this first blog post with a few words on what I hope to achieve.", "image": "https://rehansaeed.com/images/hero/Just-Start-1366x768.png", "date_modified": "2014-02-01T00:00:00.000Z", "author": { "url": "https://rehansaeed.com" } } ] }