The reason for this is that, moving forward, all new functionality is exposed through AzureRM, which is also what the new Azure Portal is built upon. Coincidentally, just days earlier we came across an issue running the old New-AzureWebsite cmdlet, and it took a while for Microsoft to recognise and fix it, so I wanted to move away from the old ways.
A lot of our Web Apps are deployed using a process along these lines:
Quite simple, and it has worked for years.
After working through a couple of sites and updating the scripts, I reached a rather old web app that was created long before AzureRM was a thing. That shouldn't be a problem though as AzureRM can be used to manage Web Apps regardless of when they were created.
However the first deployment of this site following the change resulted in a dreaded red warning message informing me that the deployment had failed. The PowerShell output indicated rather vaguely that the "staging" deployment slot couldn't be found - so it couldn't be deleted.
After confirming the deployment slot was indeed there by visiting https://mysite-staging.azurewebsites.net, I started digging deeper.
The output from Get-AzureWebsite showed the staging slot - that made sense as the site was there and running. But Get-AzureRmWebAppSlot didn't show it. And neither did the Azure Portal.
Being a staging slot it didn't have anything that wasn't about to be nuked, so I deleted the staging slot using Remove-AzureWebsite -Name mysite -Slot staging and recreated it using New-AzureWebsite -Name mysite -Slot staging.
As if by magic, this new staging slot popped up in the Azure Portal and could also be found using the AzureRM cmdlets!
I don't yet know if this is a common issue, or if there was a problem with this one site, but it appears that Azure got a little confused somewhere and AzureRM wasn't aware of this old deployment slot.
So, if you find yourself unable to create or delete a staging slot using New-AzureRmWebAppSlot, check that it doesn't already exist through Azure Service Management first.
Visual Studio 2017 introduces a new JavaScript language service that is used to power IntelliSense. The new language service is a big change and takes advantage of TypeScript under the hood. While all of this is great, it looks like there are some changes that could leave you scratching your head.
One key change I've discovered is that this new language service does not support reference directives, or at least it doesn't appear to work today. By that I mean using something like /// <reference path="ScriptFile1.js" /> to instruct IntelliSense that you'd like it to scan an external JavaScript file to provide you with auto-complete for the external functions, types and fields from the referenced file.
We use reference directives quite extensively throughout our projects, and since ditching Visual Studio 2015 (about 2 hours after Visual Studio 2017 hit release) I started wondering where my nice IntelliSense had gone. At first I put it down to reindexing of the several dozen JavaScript files, so didn't think too much of it. But days later, nothing has changed.
Whether this is a bug, or it has been left out of Visual Studio 2017 in favour of you using a module system or TypeScript definitions, I don't know. But I hope we can get it back in a future update.
For now, the great new Go To All feature helps quickly find and check the correct method signatures when I need to call an external method.
Brand change aside, I've now been able to put Application Insights through its paces, so I thought I would follow up with how things are going, and the issues that occurred along the way.
This is a tough starting point. Overall I am very impressed with what Application Insights offers and how it can help a DevOps team. It would be much easier to start by listing the niggles or issues first as that list is much shorter. But I don't want to kick off by listing the negatives.
At any time you can refer to the Azure Portal to check on the health of your app. There isn't a single healthy/unhealthy indicator you can look at, but by customizing the dashboard you can bubble up the metrics that are important to you and your application. Don't care about 404 errors? Ditch it. Don't need to track the daily visitors you've had? Ditch it. Start by thinking about what is important to you. If you monitor logs or other instrumentation manually right now, you could make a few tweaks to present this information on your dashboard so that you can see new events at a glance.
The web application we started off with is scaled across 6 web servers, all tucked away behind a reverse proxy powered by IIS Application Request Routing.

Application Insights knows this because it sees 6 streams of data coming from different servers, and can use this data to alert me to any problems. If one of the streams stops for a while, it can raise an alert (via e-mail and in the dashboard) so I can see that one of the nodes is misbehaving.
But that's only the basics - the tip of the iceberg really. Application Insights has a lot of power and machine learning capabilities behind the scenes and not only does it let you know when your custom alerts are triggered, but it can learn about your app and flag up potential issues early on.
For example, we've had a couple of bad releases over the last few months for one reason or another, and Application Insights is smart enough to detect an increase in exceptions in your application. It does this by monitoring normal usage and establishing a baseline of how your app works. Then when something abnormal happens (such as a steady flow of internal server errors, or 404 errors), it fires off an alert and e-mails you. A couple of times Application Insights has beaten our existing logging and monitoring methods and we've been able to start troubleshooting quicker than normal. As applications grow in size, complexity or scale to more and more servers, it is amazing to be told where issues are happening so quickly.
We've used Google Analytics for years and while it is a great product itself, Application Insights brings some extra tricks to the table. Yes, Google Analytics can show you visitor numbers and what pages they are visiting; but Application Insights adds to this a lot of server-side data that just isn't available from a client-side tool.
So not only do we see what pages visitors are looking at, their browser details, geo-location, operating system, page visits; but we can drill down to a request and actually see how long it took to process, where the time was spent in processing that request, and go as far as seeing the server-side dependencies. Those dependencies could be database queries, calls to WCF Services, HTTP services, or requests to Azure services such as querying Table Storage or retrieving a blob. All of these add to the response time and are invaluable in troubleshooting slow page load times.
While I limited myself to picking my top 3 likes above (because if you want to know what other cool stuff Application Insights can do, you can read about it yourself, or even try out the free tier), I can honestly only come up with two negatives so far.
I have Application Insights running in a reasonably large and well-visited web app, and within a week the data allowance of the free tier was behind us and we had to scale up to the Standard tier. Only weeks later, e-mails started coming in to say the 15m data point quota was almost being reached again.
I didn't want to cough up for the Premium tier, but luckily Application Insights gives you another option - data sampling. Instead of processing every event sent to Application Insights, you can throttle this back to consume a fraction of the live data. While it does not give you perfect insights, you do still get a good idea of how things are operating.
So far I have only had to scale back to 50% sampling and that's enough to stay within the quota based on a daily data point figure of between 500k and 900k. As traffic and usage increases though, I'm happy in knowing I can turn the dial to cut the sampling down to 33%, 25% or even lower.
Every site generates 404 errors. Browsers alone can cause a fair amount when they look for browser-specific files such as icons, or metadata. Take Safari for instance... I see regular requests for app icons at different sizes both in their normal and precomposed forms. Our app does not have all of the different combinations of these so it results in a bunch of 404 errors in Application Insights.
It would have been great if these could be filtered out in the dashboard with a click or two; but instead you need to filter them out in the application itself. I did this by writing a noise filter implementing ITelemetryProcessor, performing some basic logic checks in a bool OkToSend(ITelemetry item) method. It isn't a big job, but it does require access to the source code of the application and for some modifications to be made.
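As a rough illustration, here is a minimal filter along those lines, following the standard ITelemetryProcessor chaining pattern. The response code and URL fragment checked in OkToSend are assumptions for the sake of the example, not our exact rules:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class NoiseFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor next;

    // Processors form a chain; anything passed to "next" continues through.
    public NoiseFilter(ITelemetryProcessor next)
    {
        this.next = next;
    }

    public void Process(ITelemetry item)
    {
        if (!OkToSend(item))
        {
            return; // Swallow the item so it never counts against the quota.
        }

        this.next.Process(item);
    }

    private bool OkToSend(ITelemetry item)
    {
        // Illustrative rule: drop 404s caused by Safari probing for app icons.
        var request = item as RequestTelemetry;
        if (request != null
            && request.ResponseCode == "404"
            && request.Url.AbsolutePath.Contains("apple-touch-icon"))
        {
            return false;
        }

        return true;
    }
}
```

The filter then needs registering in ApplicationInsights.config (or in code) so it sits in the telemetry pipeline.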
At the moment I'm happy, and don't feel the need to look elsewhere. I think I'm getting all of the metrics I could ever need. And when it comes to the cost, I have a nice balance between data sampling and monetary spend.
However there are some changes on the cards and Microsoft have announced some big changes to their pricing tiers. I haven't run the numbers yet, but hope that the changes won't hit the wallet.
So stay tuned...
Microsoft's Connect(); // 2016 event kicked off today with the usual keynotes, and while there were some surprises around Microsoft joining the Linux Foundation, and Google joining the Microsoft .NET Foundation, it's the new toys I am more interested in.
So here are the big things:
Visual Studio 2017 RC
Any .NET Developer knows Visual Studio, and the latest candidate brings a brand-new installation experience, new Docker integration, the latest .NET Core tooling and so much more smart stuff that you'll still be discovering new bells and whistles when Visual Studio 2019 lands!
Visual Studio for Mac (Preview)
Xamarin Studio, under a new name, with some cool new tricks including .NET Core support. Or as Microsoft put it...
A mobile-first, cloud-first IDE. Made for the Mac.
Bring your apps written in any language to Visual Studio Mobile Center’s cloud and lifecycle services and you’ll get faster release cycles, higher-quality apps, and the time and data to focus on what users want.
Run dotnet migrate and you can then open the new project file in Visual Studio 2017 RC, or Visual Studio for Mac (Preview). The MSBuild/csproj support is considered "alpha" so use it at your own risk. Time to get downloading!
Azure Functions take this a step further and are positioned as their own product within the Azure platform rather than a lesser-known feature of App Service web apps. Without delving in to the inner workings, Azure Functions are small bits of code (hence the name) that stand alone, and can operate and be called independently of your other functions. Azure Functions are Microsoft's answer to AWS Lambda.
You can think of an Azure Function almost as a Web API method. Input comes in (akin to a HTTP request in the Web API world), it matches the function (based on the URI), some code runs, and there is some output (a HTTP response when talking about Web API).
Well no, Azure Functions is higher level than that. I'm sure Web API (or some component of it) is involved down in the lower levels somewhere, but you don't need to think about it, or anything else other than the code you want to write. Forget writing controllers, forget routing, forget a lot of things. Just jump straight in to solving your business problem and move on to the next.
If you have ever used IFTTT you're in the same ballpark with Azure Functions. You define your trigger (the "this" of If-this-then-that), your code runs and produces some output (the "that" of If-this-then-that). Microsoft provide a number of triggers out of the box:
With Azure Functions still in preview, I'm sure new triggers will appear by the time it reaches general availability, but even now the above triggers give you so much potential.
You may be wondering how you write your function. Well let me say that with this new Microsoft, they aren't forcing you to use C#. Yes C# is there and supported but so is NodeJS. There's even F# support in the works too for the functional crew amongst us.

Writing your function is as easy as it could be. No IDE to mess around with, no SDK to download (although I'm certain you could do both if you wish); simply point your browser at the Azure Portal and create away. Make changes until you are happy and with one click of the Save button your code is compiled and ready to be triggered. No packaging, no uploading, no deployment - click and run.
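To give you a feel for it, here is roughly what a minimal HTTP-triggered C# function looks like in the portal editor - essentially the stock sample shape from the preview, with nothing app-specific about it:

```csharp
using System.Net;

// Runs whenever the HTTP trigger fires; "req" is the incoming request and
// the returned HttpResponseMessage is handed back to the caller.
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Pull the "name" value from the query string, if one was supplied.
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string")
        : req.CreateResponse(HttpStatusCode.OK, $"Hello {name}");
}
```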
On the other side of your function, you can output in a number of different ways again. Other than sending back a traditional HTTP response to the caller, you could...
The magic of Azure Functions means you don't need to concern yourself with how any of these services work. That's all done for you using bindings which makes it as easy as writing a little JSON and setting some variables in your function.
For starters, you don't need to think about the infrastructure or the mountains of boilerplate code you need to write to create an ASP.NET Web API site. And you don't need to think about packaging and deploying your WebJob, or about WebJobs sharing resources with your web app.
With Azure Functions you can focus on what is important to you. Think of things in terms of events and action those events however you need to. Put the concepts of a web server, web app and frameworks to the back of your mind and dive in.
Some people call this going serverless but I personally dislike that term. No matter how much we are abstracted away from the underlying technology we all know there's a server involved. It is just no longer something we need to be involved in.
Unlike WebJobs where you are paying for a web app already, Azure Functions is priced on a combination of code execution time and the number of executions. The math is a little hard to get your head around to start with, but in simple terms, you choose how much memory your service needs and then pay a minuscule amount for the usage of that memory while you use it. If you don't call your function, you don't pay anything. Call it every second or require lots of memory and your costs will slowly increase.
However Microsoft are kind enough to give you a free monthly grant that lets you get started. That grant is rather generous and includes 1 million executions for free every month. With that you get 400,000 GB-s (gigabyte-seconds: the gigabytes of memory your function uses, multiplied by its execution time in seconds) of execution time.
According to Microsoft's pricing page, that means your function can execute for over 3 million seconds per month on the minimal 128MB memory tier (400,000 GB-s ÷ 0.125 GB = 3,200,000 seconds), which is a huge amount of time when you consider a 30-day month only contains 2,592,000 seconds.
With that, there's no excuse not to find a reason to give Azure Functions a try!
There are three tiers to choose from: Free, Standard and Premium. The main difference is that you need to hand Microsoft some money as the number of data points you record increases. The inclusive figures for each tier are:
You may notice I say other data points. That's because all tiers include unlimited session data meaning you can get metrics about user counts, session length, environment info and device details. It's once you get past this that you are in the other data point realm.
Each tier also includes:
Yes, that's a lot of data you could be collecting, but you may find you burn through this very quickly indeed just as I did if you don't monitor things carefully.
Luckily Microsoft do watch your back and will start e-mailing you when you reach 80% usage. Once you hit 100% you can have logging stop, or upgrade to a larger plan. On the Standard plan, you can also opt to use a PAYG model that will see you billed around £1 for an extra million data points, so if you do require more than 15 million data points but can't push the boat out to pay for the Premium offering, you could save a few pounds by paying for the extra data points as you use them.
As I said, I found 5 million data points ran out very quickly, but the site this was running on wasn't a small site at all. I had installed AI on a multi-tenanted site that ran across 6 web servers. That resulted in a lot of data coming in from around 40 IIS sites, and it chewed up those data points in less than a week.
Upgrading is this simple: pick your new tier from the Quota + pricing blade and save. Job done. Total time taken: ~30 seconds.
But what if you want to make the most of the free plan or can't afford to upgrade to a plan with a bigger allowance?
Out of the box, Application Insights records everything you throw at it which is great and is the best way to get true insights in to your application or web app.
However you may be happy with sampling the data and only processing a fraction of it. Everything outside of the sample size you configure is thrown away. Microsoft's description of sampling goes like this:
Sampling is a feature in Application Insights that allows you to collect and store a reduced set of telemetry while maintaining a statistically correct analysis of application data. It reduces traffic and helps avoid throttling. The data is filtered in such a way that related items are allowed through, so that you can navigate between items when you're performing diagnostic investigations. When metric counts are presented to you in the portal, they are renormalized to take account of the sampling, to minimize any effect on the statistics.
The settings here are accessed from the same Quota + pricing blade that you used to set your pricing tier above.
As shown in the image to the right, with a few clicks you can start sampling your data and stretch those free data points even further before you have to start coughing up any money.
This post has covered the initial pain points I went through and I hope it helps you to get up and running. When I get a little more time I will write another post about the basics of application monitoring and putting all of this data to some use.
Application Insights is performance monitoring, analytics, logging, exception handling, and more on steroids. When talking about web apps, Application Insights gives you an immense amount of detail for every request to your site so you can learn what your visitors are doing, and the experience they are receiving.
Don't be put off by this being a Microsoft product though. In true new-Microsoft style, Application Insights works with any platform and any language. While I'm sure there are some obscure platforms or languages you may have to roll your own integration with, Microsoft have covered almost all of the key options out of the box. It goes without saying that Visual Studio users get the first-class experience, but it won't take you long to get up and running regardless of the technology you use, or the provider you host with. You can even fully instrument your on-premises applications in minutes.
In short, no, it doesn't have to cost a penny. You can get started with a free account which should be plenty for a lot of smaller hobbyist sites.
But for businesses or your more popular sites you will want to look at the paid plans once you start capturing more data than the free plan provides. That threshold comes once you log 5 million data points over the month. Session data is free and unlimited, so these data points cover other types of info including (but not limited to):
While the service is in preview, you can get the Standard and Premium plans at a discounted price of £14.97 and £60.78 per month, and the payback can be hugely rewarding with the knowledge you'll unlock.
In minutes, seriously.
There are two paths to go down that depend on your situation and partly where your application runs.
Deploy Application Insights at run time to add instrumentation to an already running web app running on IIS, Azure Web Apps, or J2EE; with zero updates to your code.
Or if you don't mind making a few minor changes, set up Application Insights at development time to give you the ultimate control over web and desktop apps.
I went with development-time changes when adding Application Insights to one of our larger multi-tenanted sites only last week and it was pretty straightforward. After adding a few NuGet packages and tweaking a config file we had near-realtime activity appearing in the Azure Portal.
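For a sense of how little code is involved, the development-time route boils down to something like this. A sketch assuming the classic Microsoft.ApplicationInsights packages; the instrumentation key is a placeholder, and in a web app it would normally live in ApplicationInsights.config rather than in code:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Point the SDK at your Application Insights resource (placeholder key).
TelemetryConfiguration.Active.InstrumentationKey = "00000000-0000-0000-0000-000000000000";

// The web packages capture requests, dependencies and exceptions
// automatically; custom events can be tracked by hand where useful.
var telemetry = new TelemetryClient();
telemetry.TrackEvent("ApplicationStarted");
```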
I'm still learning my way around the power of it all. The basics come quickly and easily, but a lot of the power is tucked away in the analytics and proactive issue detection that comes as your data grows.
So now you know the basics, why not get started for free and kick the wheels?
The issue manifested itself in the laptop (Windows 7-based) connecting to the iPhone hotspot and almost immediately disconnecting. If you were quick, you would spot the blue tethering bar appear for a second on the iPhone before vanishing just as quickly.
Out of pure luck, Windows displayed a desktop notification saying it wasn't able to connect to the Wi-Fi hotspot, and the message included the hotspot name. That's what stood out: mixed in with the standard "John Smith's iPhone" naming was a non-ASCII character code right where the apostrophe should be.
Thinking this was odd I jumped in to Settings > General > About > Name and discovered the name contained a non-standard apostrophe. It wasn't the normal apostrophe that you access on the 123 section of the keyboard. This was one of the other apostrophes that you can access by long-pressing on that apostrophe key and selecting one of the presented options in the popup.
Here's an example to watch out for:

Notice in the example on the right, the apostrophe has a slight slant to it. It isn't a completely vertical mark as on the left. This is the telltale sign that something isn't right.
As soon as this unusual character was removed, Windows was happy and is tethering away nicely.
It's here! After many years of hard work by Microsoft, we have now moved in to a true cross-platform .NET world.
.NET Core is a brand new platform built from the ground up that allows developers to create web apps, services, and applications that run on Windows, Linux and Mac. Yes, you read that right: you can truly write once and run your code on the OS of your choice.
A box running Windows, Linux or OS X/MacOS is a useful starting point.
A note on Linux... If you are running a rare distribution of Linux your mileage may vary, but if you are using one of the mainstream distributions (RHEL, Ubuntu, Debian, Fedora, CentOS, openSUSE) you should be good.
For development you are going to need the .NET Core SDK. These are available as an installer for a few Operating Systems, otherwise you can get the .NET Core SDK binaries in an archive and extract them manually.
You will notice that the .NET Core SDK is labelled "Preview 2". The reason for this is that although .NET Core itself has been released, the tooling is a little behind and is still being baked. Although the tools are in preview, the .NET Core components are fully released and at version 1.0.
Next you will need a code editor or IDE. As we are talking cross-platform here, the obvious choice is Visual Studio Code. If you haven't already used Visual Studio Code, it's a relatively new code editor from Microsoft, which is ideal because it too is available on Windows, Linux and OS X.
After installing VS Code, the first thing I recommend you do is install the free C# extension from the Visual Studio Marketplace.
Windows users with the full Visual Studio 2015 IDE can grab Visual Studio 2015 Update 3 which brings you a bunch of .NET Core support and tooling.
From there, head over to the Microsoft .NET Core documentation site and take a look at the tutorials.
Download the .NET Core Installer, or .NET Core binaries.
These contain the runtime only and don't include any of the SDK or CLI tools required for development. This is what you would install on your production machines.
Here are the key blogs announcing this in more detail:
...a lightweight, low-ceremony, framework for building HTTP based services on .Net and Mono. The goal of the framework is to stay out of the way as much as possible and provide a super-duper-happy-path to all interactions.
Think of Nancy as ASP.NET MVC or Web API after going on a diet. A long, hard and life-changing diet.
Setup takes minutes, documentation is clear and precise, and there is almost no boiler-plate code to write to get things up and running. You can seriously put together a HTTP based service in minutes.
That's enough of the sales pitch and time to get on with the important stuff.
In my last post I covered an issue we found with some old code that was tightly coupled to a database and needed a bit of an overhaul. What we needed to do was move some business logic out of a widely-deployed add-in to our ERP package, and move it to a central service for ease of maintenance.
The thought of setting up a site using ASP.NET MVC or ASP.NET Web API had me shudder because the code for this service would be eclipsed by the amount of setup, configuration settings and plumbing these frameworks need. I needed something quicker and easier, and that's when I remembered a framework called Nancy mentioned on a recent podcast I had heard.
So I scanned over the documentation and chose to implement Nancy by self-hosting it inside of a Windows Service. No IIS needed, and it keeps this service small and relatively portable.
I won't go in to the ins-and-outs of creating a Windows Service as that is outside of the scope of this post, but you'll find plenty of guidance online if you search for it.
At the heart of a Nancy application are modules. These define the behavior of your application and can be compared to a Controller/ApiController in the ASP.NET frameworks.
Here's an overview of our Nancy module:
using Nancy;
using Nancy.ModelBinding;

public class PaymentOnAccountModule : NancyModule
{
    public PaymentOnAccountModule()
    {
        Get["/payment-on-account"] = _ => "We don't accept GET requests - go away";

        Post["/payment-on-account"] = _ =>
        {
            var model = this.Bind<PaymentOnAccountModel>();

            // Connect to the database
            // Perform update
            // Run other business logic
            var everythingWasOkay = true; // placeholder: the outcome of the logic above

            if (everythingWasOkay)
            {
                return HttpStatusCode.OK;
            }
            else
            {
                return HttpStatusCode.BadRequest;
            }
        };
    }
}
In its simplest form, a module is no more than a class inheriting from NancyModule, and defining the routes and actions in the constructor.
The first line of the constructor defines what happens if someone performs a GET request to /payment-on-account. This service doesn't need to return anything so we show the client a simple message.
The Post line is where the work happens. We start by using the model-binding functionality built-in to Nancy to bind the POSTed data in to a concrete PaymentOnAccountModel which is no different to a POCO you would write for ASP.NET MVC or Web API. This allows us to access the submitted data in a strongly-typed manner.
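The post doesn't show PaymentOnAccountModel itself, but it really is just a plain class; something along these lines, with property names that are purely illustrative:

```csharp
using System;

// A hypothetical shape for the model - the real property names aren't
// shown here, so treat these as placeholders.
public class PaymentOnAccountModel
{
    public string CustomerCode { get; set; }
    public decimal Amount { get; set; }
    public DateTime PaymentDate { get; set; }
}
```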
After doing our work, we need to respond to the client with the result. We keep it simple by returning either a 200 or 400 HTTP status code as an indication of success or failure.
So that's our module, but how do we make use of it?
Inside of the Service class for our Windows Service (the host application for Nancy in this case), we define the host at the class level:
NancyHost host;
Inside of the constructor we use:
var port = MyService.Properties.Settings.Default.HTTPPort;

StaticConfiguration.DisableErrorTraces = false;

var hostConfiguration = new HostConfiguration
{
    UrlReservations = new UrlReservations() { CreateAutomatically = true }
};

host = new NancyHost(hostConfiguration, GetUriParams(port));
As we already know hard-coding is bad, we define the HTTP port Nancy will use in a configuration file and load this to start with.
Next we define the configuration we want Nancy to use by creating a HostConfiguration object. UrlReservations = new UrlReservations() { CreateAutomatically = true } instructs Nancy to register the URLs with Windows automatically so you don't need to set these up manually. This action requires administrative rights at runtime, but as we're running as a Windows Service we can cover this requirement more easily than if this were an application running in user space.
Finally we create an instance of NancyHost, passing it the previously discussed host configuration, and a list of the URLs to listen on. GetUriParams is nothing special and simply builds a list of URIs that we want Nancy to listen on based on the machine's hostname and IP addresses. If you're testing this yourself, you can listen on "http://localhost:port" to start with.
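GetUriParams isn't shown here, but a minimal version might look like the following, assuming you only want the machine's hostname alongside localhost (enumerating its IP addresses would follow the same pattern):

```csharp
using System;
using System.Net;

private Uri[] GetUriParams(int port)
{
    // Build the list of base URLs Nancy should listen on.
    return new[]
    {
        new Uri($"http://localhost:{port}"),
        new Uri($"http://{Dns.GetHostName()}:{port}")
    };
}
```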
As it stands, everything we need is set up but Nancy won't respond just yet. We need to tell Nancy to start, and the logical place to do this is in the OnStart event of the Windows Service:
protected override void OnStart(string[] args)
{
    host.Start();
}
We will also stop Nancy if the service is stopped:
protected override void OnStop()
{
    host.Stop();
}
As soon as Windows starts this service, Nancy will be ready and listening.
Notice we haven't had to do any real configuration for Nancy. We haven't had to teach it about our module, wire the module up in a config file or anything. Nancy is smart enough to find the module based on the class inheriting from NancyModule.
Other than using Attribute Routing, ASP.NET MVC/Web API can't come close to this level of simplicity! Out of the box, Nancy will even respond with its own little 404 error page should a client get a URL wrong.
Back to our situation, once we followed the steps above we had a working API service wrapped in a Windows Service.
Dropped on to its final home in production it has been running perfectly for a week now and memory usage is a tiny 8MB so it isn't going to hog resources at all.
I've only touched on the absolute basics of Nancy here but it's all we needed. I will certainly consider Nancy in future when we need a HTTP service, especially for these microservice like applications.
For more information, go and check out the Nancy website, or the project over on GitHub.
That happened to me this week when I discovered a classic case of tight coupling between two objects that should have been kept at arm's length, or at least implemented in a less rigid way. Tightly coupled code normally results in a ripple effect when one element changes, as even the smallest change can break the other components. It's like a domino effect - you change one thing and everything else in close proximity starts to fall over.
Earlier this week I started planning for a database migration, where we aim to move some key business databases over to a new 2-node active/passive SQL Server cluster. Part of the planning was cataloging the databases, discovering which clients are using them, and what else may touch these databases as they stand today.
After scouring the SQL Server logs I found all of the expected clients - web servers, application servers and so on. But what I didn't expect to see were occasional connections from a number of Windows Servers running Remote Desktop Services. The only thing these run is our ERP package from SAP and the SAP client shouldn't know of our other databases at all.
It is perfectly normal for this client application to connect to our SQL Server though as it relies on a number of SAP databases, but these connections were to one of our own databases used by one of our web apps. Odd indeed.
Tracking things down thanks to the application name in the SQL Server connection string, it turned out to be an add-on we developed in-house for our ERP package. This was developed getting on for 10 years ago and "just works", so there hasn't been the need to touch or update it since, and it has slipped to the back of our minds.
Pulling down the source code from our repository I was happy to find it opened perfectly in Visual Studio 2015 and even built first time. Quite a surprise since it had been built back in the Visual Studio 2008/2010 days! However the happiness soon passed when I saw the reason for the database connections. One function of this add-on is to perform some additional business logic when an incoming payment was logged via the SAP client, and part of this workflow involved writing details of the transaction to a database directly.
That's not too uncommon I'm sure, but the biggest concern was that it used a hard-coded connection string, naming a specific hostname and database name right in the CS file. Had we moved these databases without checking the logs first, this client would have thrown a fit every time someone logged a new payment and it could have taken a while to track down.
So what's the problem with that?
Nothing at first. When you write code like this and hard code a connection string you'll get it built, tested and deployed without problem just like we did. But roll forward some time and a number of events could shatter this, namely:
The moment any of those things happen, without proper planning you will be fire-fighting and taking calls from users when they start receiving errors.
How should it be done?
There are a few small improvements that could be made to address the above scenarios, and a bigger one that we have ultimately chosen to follow, of which I will follow up with another blog post soon.
The quick fixes here would firstly remove the dependency on a specific SQL Server. At the moment the Windows hostname is hard coded right in to the connection string so if that server goes offline or changes for whatever reason, the client can't connect.
SQL Server has a little-known feature called Hostname Aliases. You can quite quickly configure an instance of SQL Server with an alternate name (or alias) that can be ported to another instance or server when required. Think of this as a CNAME record in DNS so you don't have to refer to the static Windows hostname.
We could create a new alias for my-addon.domain.tld with SQL Server, create a matching CNAME record in DNS to point my-addon.domain.tld to current-sql-server.domain.tld, and use my-addon.domain.tld in the connection string.
That's one part of it, but still having a hard-coded connection string is bad. It will require the application to be recompiled, versioned, packaged and deployed to all clients when a change needs to be made.
A cleaner option would be to store the connection string in an application configuration file, or (less desirably) the Windows Registry. With this you can alter the configuration settings independently of your code.
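In .NET that makes the call site a one-liner. A sketch, assuming a <connectionStrings> entry named "MyAddOnDb" (an illustrative name) exists in the add-in's App.config:

```csharp
using System.Configuration;

// Resolved at runtime from App.config, so the server or database can be
// changed without recompiling and redeploying the add-in.
var connectionString = ConfigurationManager
    .ConnectionStrings["MyAddOnDb"]
    .ConnectionString;
```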
But what about database or table schema changes?
Now we have a cleaner way of connecting to the database, it doesn't help things if someone makes changes to the database schema or, in the case of this add-on, the schema of the table it writes to. The client itself has to know the structure of the table and how to write to it. That doesn't cut it for me today, and needs to change.
In C# we have interfaces that can act as a contract in a client/server setup, defining the methods/actions/structure available to a client to use. We don't have this in the SQL world so we need another intermediary who can act as a gateway for dozens of instances of this add-on.
If we were developing an application to run on home computers and it needed to access data from an on-premises database we wouldn't allow our SQL Server to be published directly to the internet and allow direct SQL connections. We'd have to be mad!
In this scenario we would have a web service that the "client" communicated with to retrieve data and perform actions. So why not do the same thing here?
A simple web service of some sort would allow all database-interaction to be performed centrally by the service, leaving the client to simply pass data to the service. The client no longer needs to know about the SQL Server, the database, or the table. And if we needed to change the business logic later, we only need to update this single web service rather than updating code running on dozens of machines.
So that's what we did, and in the next post I will introduce Nancy and how we used Nancy to build a very small and lightweight web-based API service.
How often do you render your data with a plain <table> in your MVC view?
I know I probably do it at least a few times a week. But have you stopped to think for a moment about what else is out there that, a) could be easier to integrate, and b) would empower your users or visitors?
That's exactly what we did on a recent project, and the outcome has been very positive for both developers and users. From the start we already knew this internal sales tool would lead to any number of future requests for custom reports, or requests for assistance when someone forgets how to do something in Excel. So we had the idea of throwing out plain old tables and implementing a rich and powerful third-party component to give the power to the user, right there in the browser.
If you spend 5 minutes looking around right now you'll be surprised at just how many options there are in the form of jQuery plugins, let alone what you'll find for other popular Javascript frameworks. There are so many choices and sadly, as is quite common, many of them are abandonware.
So we started with what appeared to be the most popular free options and began testing them out. Some fell early on and we didn't bother going past playing with their online demos, and others we felt were technically suitable but lacked the community to drive it forward or to support us if we committed ourselves to that particular product.
Once we had a shortlist, we had a look at the big commercial options that generally come in the form of developer toolkits - a bundle full of dozens of different components that should provide you with all of the UI elements you may want for your web apps.
The one that stood out for us was Syncfusion, who provides components for ASP.NET WebForms, ASP.NET MVC, and Javascript. Oh and LightSwitch and Silverlight too but are many people still using those? They also have products for mobile and desktop development, so they could be your ideal one-stop-shop.
After building out some test pages for each candidate it was clear the commercial options were the best choice despite some varying price tags, and following a lot of testing and dialogue with Syncfusion we were given the opportunity to try their Essential Studio product (including support) for 3 months with no commitment to buy. A company who offers that on top of a standard 30-day trial must be confident that they'll stand up well against their competitors. So we jumped at the opportunity.
Here's an example of what their javascript-based grid can do:

Okay so it may not win awards for the prettiest grid but there are numerous themes to choose from, a Bootstrap theme to make it look at home if you use Bootstrap, and Syncfusion even offer a new theme studio where you can build your own theme with a few clicks. Or if you're like us, you can hack around and build your own CSS stylesheet by hand.
Looks aside, functionality isn't lacking. Pitched against a mid-level Excel user I don't think this grid would fall short, and we've found that since implementing it, we've had zero requests for an option to export the data to Excel for further manipulation. That wouldn't be hard though as exporting to Excel, Word and PDF is an out of the box feature - we just haven't flicked that switch yet.
With SharePoint Server 2016 on the horizon (and now in RC) I felt it was about time to take action and head off the (not-so-much) fun a SharePoint upgrade can bring.
In early 2015 we jumped on the Office 365 train initially as a cost saving effort when it came to renewing our Open Value Subscription that we licensed the Office Suite through. Doing so allowed us to license Office for our users at a lower cost, as well as including a few other Office 365 services to sweeten the deal.
One of the extra services we receive as part of our Office 365 subscription is SharePoint Online. If you are looking at SharePoint Online I think the first sentence on that page really sells it to the IT Pro.
SharePoint Online delivers the powerful features of SharePoint without the associated overhead of managing the infrastructure on your own.
It pretty much sums up the whole cloud movement everyone talks about, and being the old person with SharePoint experience at our company I felt it was a smart move to, erm, move to SharePoint Online. After all, we're already paying for it as part of our Office 365 subscription, so why not let Microsoft worry about maintaining our SharePoint setup, handle the patching and, hopefully soon, the upgrade to SharePoint Server 2016.
Anyway with top-level approval we kicked off by investigating the numerous options available in migrating to SharePoint Online. I won't go in to the detail of it all but there are some manual and long-winded options, and some very smart options depending on scale, amount of data, complexity of your setup and budget available.
After weeks of testing offerings from Sharegate, Thinkscape, AvePoint, Dell, and Metalogix, we settled on one of the newer Metalogix family members - Essentials for Office 365. Originally developed by MetaVis and acquired by Metalogix in 2015, Essentials for Office 365 ticked all of the boxes when it came to migration but brings much more to the table.
Some of the tools we tried are just that - migration tools - and once that job is done there is very little chance you would touch them again. But Essentials for Office 365 includes management, security and reporting functionality that will make your life easier well after the migration is over.
It is still early days but we've already migrated a few site collections and a fair amount of data, and so far it has happened without a single problem.
Once I've had more time to try out all of the other features of Essentials for Office 365 I may follow this up with another post. But for now, if you are involved in a SharePoint migration or your company is moving to Office 365, I recommend you download a trial of Essentials for Office 365 and give it a go.
I find I have already thrown out manual string concatenation and string.Format in a lot of my day-to-day activities and replaced them almost exclusively with the string interpolation that I covered a couple of months ago.
Another feature that has crept in and is already helping improve code readability is expression bodied members. If you haven't heard of expression bodied members, or simple expression bodies as some people call it, it is a way of writing the block of code normally found representing a function body as a simple lambda expression. Here's an example with a rather simple class:
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName
    {
        get
        {
            return $"{FirstName} {LastName}";
        }
    }

    public override string ToString()
    {
        return $"First name: {FirstName}; Last name: {LastName}";
    }
}
With the power of expression bodied members this could be simplified to:
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName => $"{FirstName} {LastName}";

    public override string ToString() => $"First name: {FirstName}; Last name: {LastName}";
}
Notice how the FullName property and the overridden ToString() method have lost the curly braces and the return keyword? After the normal property or method declaration, the => (aka the lambda arrow or fat arrow) indicates that the expression to the right will be returned as the value.
While it hasn't saved a great deal of keypresses, it does look a lot neater, especially where the code was initially short and simple. Expression bodies can't replace multi-line code blocks, so you won't be able to replace all of your current member declarations with expression-bodied ones. So while a nice improvement, it isn't a complete replacement for the traditional curly-brace code blocks.
You may have also noticed in the above example that FullName is a getter-only property. Expression bodies lend themselves perfectly to this use-case. It isn't possible to use an expression body to replace a normal get/set property so don't waste your time trying!
If you want to try these out but don't have access to Visual Studio 2015, I highly recommend LINQPad as a great code playground. You'll want to grab version 5 as LINQPad 5 fully supports the new language features in C# 6.0.
Originally we were manually executing PowerShell scripts to control the load balancing, and using a mixture of scripts and manual browser tests to do the rest. For our largest project we would be looking at close to 40 minutes of work to roll a new release out between three environments (not without proper testing in-between though!). And on a busy day we could release anywhere between 2-6 updates to our primary web app. What a waste of time!
After installing Octopus Deploy, it took a few tries to hone this process within the web-based management dashboard. But it was well worth the 10-15 minutes. With our environments and custom scripts now set up properly, we can have a new release pushed out to our development environment right from Visual Studio - without needing to touch Octopus Deploy at all. No one needs to think about it, no one notices the sites go offline, no one spots any downtime and unless you were expecting it you wouldn't know anything had happened.
That's how DevOps should be.
Today we have over 10 projects managed by Octopus Deploy with several environments spread across on-prem servers and up in Microsoft Azure. It deploys our web apps, Windows Services, and even some internally-developed command-line tools used by Windows Task Scheduler around the company.
Without Octopus Deploy we would be spending a great deal more of our time deploying new releases, meaning less time on the important part - developing new features and (occasional) bug fixes.
In a later post I will cover our plans to make the most of Octopus Deploy, and hopefully, how we streamline even further.
Seriously - back in the 'old' days (of a few months ago) we were manually pushing out several new releases per day across our dev, test and production environments, with each environment consisting of at least two nodes at any time. It sounds crazy when I think back on how things used to be.
On the road to where we are now, I toyed with a few different options to make life easier. I started by switching deployment of a couple of smaller projects to IIS Web Deploy and while it was a step in the right direction, it wasn't enough. CruiseControl.NET, Automatic and [a number of other tools](https://www.google.co.uk/search?q=.net+automated+deployment+tools) looked good and we tried a few out but none stuck like Octopus Deploy.
Octopus Deploy (@OctopusDeploy), for those who don't know, is a deployment automation tool specifically for .NET developers. It can integrate in to your development cycle in a number of ways and can be picked up quite quickly. That doesn't mean it is a simple product at all. While on the surface you can do a lot of common tasks with a few clicks of the mouse, its power really surfaces when you start utilising its PowerShell scripting abilities to tackle the more complex deployment scenarios.
Maybe I'll cover a little more on our specific uses in a later blog post, but for now I wanted to make sure Octopus Deploy was on your radar.
In my opinion Octopus Deploy is worth every penny when it comes to the licensing costs. But you don't have to take my word for it. Try it out yourself by grabbing a 45-day trial, after which the trial converts in to a FREE Community Edition that allows you to deploy up to 5 projects to up to 10 target machines.
We ran with the Community Edition for a couple of months but quickly upgraded to a paid version when we wanted to add some more projects.
And we haven't looked back since.
If you have already come across Octopus Deploy what did you think of it? And is it still part of your toolkit?
Null-conditional Operators are, according to the MSDN documentation, a way to check for null values before accessing a member or performing an operation on an index. Simplified, it means you don't have to litter your code with if-null checks before accessing a property of an object.
You know the code, it always goes something like this:
public decimal CalculateItemPrice(ShoppingCartItem item)
{
    if (item == null)
    {
        return 0M;
    }

    if (item.Pricing == null)
    {
        return 0M;
    }

    return item.Pricing.Price * item.Quantity;
}
We've all written something like that and if you look at whatever project you are currently working on you will probably find dozens of those null checks scattered throughout your code-base.
Now let's see how this could look if we use Null-conditional Operators:
public decimal CalculateItemPrice(ShoppingCartItem item)
{
    var price = item?.Pricing?.Price ?? 0;
    var quantity = item?.Quantity ?? 0;

    return price * quantity;
}
Cleaner right?
When accessing item?.Pricing?.Price, if item is null a null value is returned from the expression and the rest of the chain isn't executed.
Assuming item is not null, it then checks the Pricing property. If it is null, a null value is returned and execution of the chain stops.
Finally, if neither item nor Pricing is null, then Price is returned. If null is returned at any point in this chain, the good old null-coalescing operator ensures that price is set to zero.
The same applies when obtaining the quantity. If item is null, quantity defaults to zero.
When accessing the index of a collection (array/list etc.), the following code first checks that articles isn't null before indexing it - that's the index-access form. Following on, we then use member access to check whether articles[0] is null before reading CreatedOn. And to finish it off, if null is produced at either point we return DateTime.MinValue.
public DateTime GetDateOfFirstArticle(Article[] articles)
{
    return articles?[0]?.CreatedOn ?? DateTime.MinValue;
}
This is something you're probably less likely to do on a day-to-day basis, but the Null-conditional Operator can also simplify invoking delegates. Consider this regular piece of code:
var eventHandler = this.ValueChanged;
if (eventHandler != null)
{
    eventHandler(this, new EventArgs());
}
This can now be simplified like so:
this.ValueChanged?.Invoke(this, new EventArgs());
I'm not sure how it could be cleaner or simpler but I'm sure the .NET Team will have a go in a future version of C#.
I hope this quick run-through proves useful to you and helps you write shorter, cleaner code.
String interpolation is used to construct strings using a template-like syntax. If you've used Handlebars before, you'll see some similarities to how Handlebars binds values in your HTML templates.
Until now, the majority of us would have used string concatenation or String.Format to build our strings, mixing string literals and variables to construct the desired output.
Both methods had their drawbacks, the most common being difficult-to-read code littered with + symbols or {0}, {1}, {2}... placeholders. Neither really lent itself to improving code readability. Enter string interpolation, which is a very welcome improvement.
Instead of inserting the {0} placeholder in a string literal and passing in the same number of parameters to String.Format, you can now reference the variable (or conditional expression) straight in your string.
Here's a simple example containing all three options:
// Let's declare some variables
var firstName = "Bob";
var age = 25;

// String concatenation
Console.WriteLine("Hello " + firstName + ".\nYou are " + age + " year" + (age == 1 ? "" : "s") + " old.");

// String.Format
Console.WriteLine(string.Format("Hello {0}.\nYou are {1} year{2} old.", firstName, age, (age == 1 ? "" : "s")));

// New string interpolation
Console.WriteLine($"Hello {firstName}.\nYou are {age} year{(age == 1 ? "" : "s")} old.");
All three versions produce identical output, but notice how much cleaner the final line is compared to the others using string concatenation or String.Format. The big difference is that you prefix your string with a $ symbol.
You'll not only see that firstName and age are used to construct the final string, but you can also include a conditional expression by placing the expression inside the braces, wrapped in parentheses.
Yes you can and it's not as difficult as you'd think:
//String interpolation with formatting
Console.WriteLine($"Hello {firstName, 10}.\nYou are {age:D3} year{(age == 1 ? "" : "s")} old.");
The output of this is:
Hello        Bob.
You are 025 years old.
You'll need an environment that supports version 6.0 of C# for starters. Visual Studio 2015 is the prime candidate and is available in both paid and free editions. But if you are looking for something a little lighter on disk usage, and a whole lot easier to jump in to, go and download a free copy of LINQPad 5.
I hope you see the benefits of this great new feature. It's a welcome addition to the language.
Oh and if you prefer VB to C#, you aren't left out as string interpolation is also included in Visual Basic 14.
You've probably written statements like this loads of times:
<!-- A single statement -->
@{ var myValue = 2; }
<!-- An inline expression (mixed with HTML) -->
<div id="divResult">Your starting value is @myValue</div>
<!-- A multi-line statement block -->
@{
    var myValue = 2;
    var multiplier = 5;
    var result = myValue * multiplier;
}
Your result is @result
Those are quite basic examples and sometimes you have a more complex statement or you want to output an inline expression constructed from two or more variables or pieces of data.
I came across this exact scenario recently and was surprised to see that the Razor engine doesn't read ahead very far. Instead, it stops interpreting Razor syntax as soon as it encounters a space that isn't enclosed in a string.
So something like:
@Model.LastUpdatedBy ?? "N/A"
results in
Some User ?? "N/A"
Do you see what's gone wrong? The Razor engine outputs the result of @Model.LastUpdatedBy, but as the next character is a space, parsing of the syntax ends and the remainder of the line is treated as plain-old HTML.
The good news is there is a very simple, and rather obvious fix. To instruct the Razor engine to treat the whole line as a single statement you just need to wrap your code in parentheses.
@(Model.LastUpdatedBy ?? "N/A")
then gives you the expected result of:
Some User
If only all issues were this easy to resolve!
Specifically targeting .NET Framework 4.6, LINQPad 5 brings a bunch of new features to the table, including:
If you haven't tried LINQPad before, I highly recommend it as your new code scratchpad. I would be surprised if it doesn't save you time almost straight away in your development role.
But you may be wondering what all of this costs? Well the great thing is the standard edition of LINQPad is free. Yep, you get an immensely powerful code playground for nothing more than the time it takes to download and unzip a file.
If you're a hardcore developer or just like to have all of the bells and whistles turned on, grab one of the three paid editions that best suits your needs. With the Premium edition you'll get a fully integrated debugger with many of the features you've come to expect from the Visual Studio IDE (and at a fraction of the price).
For existing LINQPad users, LINQPad 5 is a paid upgrade, and existing LINQPad 4 owners can get a 40% discount if you act quickly. It's a no-brainer.
Download LINQPad now, and why not follow @linqpad and spread the word.
LINQPad is advertised as "The .NET Programmer’s Playground" and it really lives up to this. It is the ultimate code scratchpad if you develop in C# (don't worry, F# and VB are supported too) and it can save you some serious development time.
Imagine the scenario... you need to test out some new code - maybe you want to play around with the new C# 6.0 features (you'll want LINQPad 5 which is currently in beta for this) - so what do you do today? Fire up Visual Studio and create a new console app? Not now you've discovered LINQPad.
Instead, you open up LINQPad and start typing. Hit F5 and the output of your code is displayed below the editor - not in some separate DOS-style console after the application has been compiled. Want to visualise a complex object? No problem. Want to query a database with a few clicks? No problem.
You can throw so much at LINQPad and it just handles it all.
As much as I enjoy Visual Studio (especially the newly released Visual Studio 2015), LINQPad is simpler and quicker when it comes to testing something out or learning something new.
LINQPad is a free download and is always one of the first applications I stick on a new machine. While you can benefit a lot from the free version, additional amazing functionality can be unlocked for a pocket-friendly price.
So next time you need to try out a snippet of C#, F# or VB code, give LINQPad a try. You won't regret it.
True to their word, today Microsoft have released Remote Server Administration Tools for Windows 10 and you can grab it now over at the Microsoft Download Centre.
Available in 32-bit and 64-bit packages (yes it seems some people are still using 32-bit systems), the download weighs in around 40MB so it won't be long before you are happily managing your remote servers from the latest Windows desktop client.
Oddly, there are a number of features that this release does not include:
So if you are looking to manage any of these roles or features, it's back to Remote Desktop or PowerShell for you.
Download a copy here: Remote Server Administration Tools for Windows 10.
Without a compatible version, Windows 10 users are left in the cold when it comes to managing the roles and features installed on remote Windows Server 2012 and Windows Server 2012 R2 servers.
You can get versions of RSAT for Windows 7 SP1 and Windows 8.1 but these don't work on Windows 10, so if you are an early adopter it looks like you will need to rely on Remote Desktop or another workstation to manage your servers remotely for a little longer.
@LiamBramwell A new RSAT for Windows 10 will be released this month, along with Windows Server 2016 Technical Preview 3
— Gabriel Aul (@GabeAul) August 8, 2015
I'm hoping it lands sooner rather than later.
At first, hard coding looked like the way we'd go, but that's messy, right? Why not take a leaf out of MVC's book and use an Attribute to decorate the model, and render it using a HtmlHelper extension method? It's a very simple process and a pretty clean way of doing it.
The starting point is the HelpTextAttribute that you can use to decorate your model. It is nothing more than the DescriptionAttribute with a new name:
using System.ComponentModel;

namespace MySite
{
    /// <summary>
    /// Specifies the help message for a property
    /// </summary>
    public class HelpTextAttribute : DescriptionAttribute
    {
        /// <summary>
        /// Initializes a new instance of the HelpTextAttribute class with a help message.
        /// </summary>
        /// <param name="helpText">The help text</param>
        public HelpTextAttribute(string helpText) : base(helpText) { }
    }
}
Now you can decorate your model by adding [HelpText(string)] like so:
using System.ComponentModel.DataAnnotations;

namespace MySite.Models
{
    public class Company
    {
        [Required]
        [Display(Name = "Company Name")]
        [HelpText("The legal entity name for this company")]
        public string CompanyName { get; set; }
    }
}
The hard(er) part comes in displaying this help text inside your view. A HtmlHelper extension method does the work here to accept a lambda expression to target a property of your model:
using System;
using System.Linq.Expressions;
using System.Web.Mvc;

namespace MySite
{
    public static class HtmlHelperExtensions
    {
        public static string HelpTextFor<TModel, TValue>(this HtmlHelper<TModel> html, Expression<Func<TModel, TValue>> expression)
        {
            var memberExpression = expression.Body as MemberExpression;
            if (memberExpression == null)
            {
                throw new InvalidOperationException("Expression must be a valid member expression");
            }

            var attributes = memberExpression.Member.GetCustomAttributes(typeof(HelpTextAttribute), true);
            if (attributes.Length == 0)
            {
                return string.Empty;
            }

            var firstAttribute = attributes[0] as HelpTextAttribute;
            return html.Encode(firstAttribute?.Description ?? string.Empty);
        }
    }
}
All that's left to do is display the help text where you need it in your view:
@Html.HelpTextFor(m => m.CompanyName)
Of course this could be extended to add localisation support without too much trouble. Instead of passing the help text to the HelpTextAttribute you could pass the key of a resource string, and have the attribute (or the HelpTextFor method) fetch the text from whatever resource manager you are using. But if all you need is simple help text in a single language, YAGNI.
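For the curious, a localised variant might look something like this sketch, where HelpTexts is a hypothetical resource class generated from a .resx file rather than anything from the project above:

```csharp
using System.ComponentModel;

public class LocalizedHelpTextAttribute : DescriptionAttribute
{
    // "HelpTexts" is a hypothetical generated resource class; the key is
    // used as a fallback if no matching resource string is found.
    public LocalizedHelpTextAttribute(string resourceKey)
        : base(HelpTexts.ResourceManager.GetString(resourceKey) ?? resourceKey) { }
}
```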
That's right, thanks to over three thousand votes on UserVoice, if your MSDN subscription, VSO Professional license or other method of upgrade entitles you to Visual Studio Professional 2015 (or Enterprise, which has replaced Ultimate), you will quickly spot CodeLens appearing throughout your code, waiting quietly to serve. In my opinion it is one of the best features of Visual Studio and makes finding code changes and references a snap.
It's well worth reading up on as it can really improve your productivity and help cut some corners navigating large solutions.