mozzy.dev 2012-02-06T00:00:00+00:00 https://mozzy.dev Thomas Morris [email protected] Umbraco Codegarden 2023 2023-08-01T00:00:00+00:00 https://mozzy.dev/posts/umbraco-codegarden-2023/ I wrote an article for TPXimpact about my experience at Umbraco Codegarden 2023.

Recently, a couple of our team members had the opportunity to attend the annual Umbraco conference known as Codegarden. Hosted in Odense, Denmark, it brings together 800+ people in person, while a number of others were able to join online as part of a hybrid experience.

Read in full here: Umbraco Codegarden 2023

Unboxing Umbraco 11 2023-02-07T00:00:00+00:00 https://mozzy.dev/posts/umbraco-11/ I wrote an article for TPXimpact about Umbraco 11.

Not too long ago we reviewed Umbraco 9, calling it a significant milestone and a major step up from prior versions of Umbraco. In this article we’d like to share our thoughts on Umbraco 11 and see if it’s worth the upgrade.

Read in full here: Umbraco 11

Looking back at 2022 2023-01-03T00:00:00+00:00 https://mozzy.dev/posts/looking-back-at-2022/ As I start 2023, in typical fashion I find myself thinking about the year ahead, what kind of things to get excited about and perhaps introduce. I've even written about this in the past. However, this year I'm thinking it'd be useful to look back and see what I have achieved, in the hope that it provides me some relief from constantly searching for what's next.

From a work perspective, there's been a few highlights.

  • Got promoted to Technical Lead
  • Launched various new Umbraco websites
  • Taken on hosting for our internal tech talks
  • Rekindled umBristol and got involved with the community

From a personal point of view, there's been some big changes.

  • Started some renovation of our 1930s house
  • Elliot joined our world
  • Proposed to Hayley, my wonderful partner (she said yes)

And that's good enough for me. Dream big, but let's cherish and celebrate too.

Umbraco 9 - The latest major version from Umbraco 2021-12-02T00:00:00+00:00 https://mozzy.dev/posts/umbraco-9-latest-major/ I wrote an article for TPXimpact about Umbraco 9.

As of Sept 2021, Umbraco is now a fully-fledged .NET 5 (ASP.NET Core) CMS. A significant milestone, which brings a multitude of new features and benefits.

Read in full here: Umbraco 9: The latest major version from Umbraco

Breaking down Umbraco 2021-03-16T00:00:00+00:00 https://mozzy.dev/posts/breaking-down-umbraco/ I wrote an article for TPXimpact about why you might choose Umbraco for a project.

Umbraco’s benefits are many. Using technologies we’re already familiar with, the CMS allows our solutions to achieve a breadth of functionality and remain malleable post production.

Read in full here: Breaking down Umbraco

GitHub Actions 2020-08-19T00:00:00+00:00 https://mozzy.dev/posts/github-actions/ There are quite a few build and deploy options available to developers these days. I have previously written about using a combination of Team City and Octopus Deploy. These are still good tools, but they will likely require a bit of setup and probably a VM.

A more recent trend is to have your builds linked to your repository, keeping it all self-contained and in one place. There are pros and cons to both, but I'm going to show you how you might do that with GitHub Actions.

dotnet

I work mostly with .NET, so let's take a look at the workflow for that. With .NET Core you can now use Linux (and macOS) as your build platform. Here we're using ubuntu-latest.

We're also setting the relevant DOTNET_VERSION and running our commands, dotnet build and dotnet publish. Problems with the build will be visible within GitHub, and you can follow along with the progress.

Finally, we're pushing our published version of the app to Azure Web Apps using a publish profile. You could also choose to generate a NuGet package and push to a feed for distribution.
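A workflow along those lines might look like the following. Note this is a sketch rather than the exact workflow from the original post: the .NET version, app name and secret name are illustrative assumptions.

```yaml
name: Build and deploy

on:
  push:
    branches: [ main ]

env:
  DOTNET_VERSION: '3.1.x'        # assumed version for illustration
  AZURE_WEBAPP_NAME: my-web-app  # hypothetical app name

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup .NET
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}

      - name: Build
        run: dotnet build --configuration Release

      - name: Publish
        run: dotnet publish -c Release -o ./publish

      - name: Deploy to Azure Web Apps
        uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: ./publish
```

The publish profile is downloaded from the Azure portal and stored as a repository secret, so it never lives in source control.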

Note that you can also build .NET Framework apps. It will look a little different, but the concept is the same: you'll target windows-latest, as Windows is a prerequisite for .NET Framework, and the build command will be msbuild.

node.js

If you've got a FE repo, then you'll likely want to use npm to compile some assets and package them up. Here's how you can do that.

We've set up our NODE_VERSION and then run the commands npm install and npm run build. At that point, we might want to package up the assets or deploy them as an application.
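Here's a sketch of what that front end workflow can look like. The Node version, build script and output path are assumptions for illustration; your package.json will define the actual scripts.

```yaml
name: Build front end

on:
  push:
    branches: [ main ]

env:
  NODE_VERSION: '14.x'  # assumed version for illustration

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Install dependencies
        run: npm install

      - name: Build assets
        run: npm run build

      # keep the compiled assets so another job (or a person) can grab them
      - name: Upload build artifacts
        uses: actions/upload-artifact@v2
        with:
          name: assets
          path: dist  # hypothetical output folder
```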

other

There are lots of other options available to you, without going into all of them here, I encourage you to take a look through the docs and see what you might want for your needs.

If you're interested in what's installed on the runners, then take a look at this repo which contains the full list of software and spec for each.

The approach here can also be used for similar tools such as Bitbucket Pipelines or Azure Pipelines. Choose what fits the bill for you.

Have fun!

Building a bottle shop 2020-08-10T00:00:00+00:00 https://mozzy.dev/posts/custom-store-in-umbraco/ One of the things that is important to an ecommerce experience is the speed at which you can make informed decisions and place items in your cart. From there, the next phase will be to checkout and proceed with the order. Typically, this is classed as a conversion and you want as many as possible. You might do various tests to achieve that end goal and nudge the customer along that path.

What if you were open to the idea of not pushing the customer to checkout immediately and instead provided them time to make their choices, even over a few weeks? This was a concept that we played with at BeerBods, centred around a reservation of beer: building a case of beers you pick in your own time. The experience itself is more about exploration and finding the beer you want.

Not only does that shift the UX in a slightly different direction, it also brings some technical challenges. Namely: how do you reserve a beer, how is the stock controlled, and how does a customer come back to continue with their purchase?

Introducing Umbraco

Umbraco as a CMS allows for a lot of variation in terms of document types and property editors. Essentially, you can build a custom editing interface relatively easily to manage your content. In my case, I wanted to add products to Umbraco and have these editable by anyone in the team. There was no data warehouse, so adding directly into Umbraco was ok in this scenario. I was able to define the required information and set up some basic stock control with SKUs and pricing. Since we were already using Umbraco, it was comfortable to use on this front.

The product pages themselves can be rendered using traditional patterns and templates. The more interesting part came when adding the functionality to add a beer to your case.

Enter VueJS and Algolia

VueJS is a JavaScript framework that allows for building SPA (single page application) like functionality, introducing more dynamic and responsive user interfaces. It can be added with relative ease to an existing website, which made it a good option for building our 'shop' front. VueJS would be responsible for rendering components, handling FE state and making calls to our API.

Algolia is a search platform that has a number of pre-built components to integrate with its service. One of particular interest near enough provided the basis for our 'shop' and the required functionality: a list of results, with options to filter, sort and search. Adding this kind of service had a number of benefits: it was quick to get started, and by having an external, CDN-backed service handle search, it was very responsive for customers. We could push data to the service (whenever a product was updated) and query the results wherever we needed.

Combined, we were able to build a UI that was quick to load, easy to filter and visually interesting with a lot of customisation options. All of the API calls would be triggered from the FE and the state would update our UI to present back to the user.

Adding Fluidity

So, we've got our products and our UI for the front end. We now need to persist changes and provide a stateful system for the current user. This was done by implementing custom database tables and an API to enable adding items, removing items and saving these items for a later date.

These custom database tables can be exposed in our Umbraco instance by using a package called Fluidity. It provides a CRUD like experience for the editors so that we can see what's being added and when, as well as allowing us to add beers on the behalf of a customer.

It's important to note that the idea of persistence is two-fold: once on the FE, where we need to ensure state between sessions, and again on the BE, where we need to actually store the data and reserve the beer. This distinction means that we don't get people adding multiple beers and tying up stock until they commit to saving that data, at which point we've got their details and can follow up if necessary.

When it comes to purchase or if the customer comes back to their choices, we can load these from the API and allow the customer to proceed. The items can be locked in and an order can be made just like any other checkout, allowing the next stage of fulfilment in the warehouse.

If the customer didn't come back (which did happen), then we'd prompt them to finish their case via email. If after 30 days they hadn't made a purchase, then we would remove their reservation and put the beers back into the shop for other customers. This was done using Hangfire, a tool for running background tasks at desired times.
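As a rough sketch of how that expiry job can be wired up with Hangfire (this is illustrative, not the actual BeerBods code; IReservationService and its methods are hypothetical names):

```csharp
using System;
using Hangfire;

// A minimal sketch of the expiry job, assuming a reservation service
// with these methods exists somewhere in the solution.
public class ReservationExpiryJob
{
    private readonly IReservationService _reservations;

    public ReservationExpiryJob(IReservationService reservations)
    {
        _reservations = reservations;
    }

    public void ReleaseExpired()
    {
        // find reservations older than 30 days and return the stock to the shop
        var expired = _reservations.GetOlderThan(TimeSpan.FromDays(30));
        foreach (var reservation in expired)
        {
            _reservations.Release(reservation);
        }
    }
}

// Registered once on startup to run daily:
// RecurringJob.AddOrUpdate<ReservationExpiryJob>(
//     "release-expired-reservations",
//     job => job.ReleaseExpired(),
//     Cron.Daily());
```

Hangfire persists the schedule in its own database tables, so the job survives app restarts and you can monitor runs in the Hangfire dashboard.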

The result

What we were looking for was a responsive, easy to use 'shop' front for customers to pick beer and save until they were ready to make a purchase. We needed the ability to add beers to our shop and view details about what was in progress or had been ordered. By using a combination of Umbraco on the back end, and VueJS / Algolia on the front end, the result turned out how we wanted it to and we saw great customer interaction.

Note: This functionality is no longer live on the website but you can view the introductory blog post.

Remote working 2020-03-09T00:00:00+00:00 https://mozzy.dev/posts/remote-working/ For the past couple of years I've been working in a mostly remote position. There are a lot of positives, but it's important to know that it's not for everyone, and it does come with its own challenges that you'll need to think about.

Here's a few things that I've noticed can help. Your experiences may/will be different.

Tips on remote working

Keep in regular contact with your fellow workers. Provide updates to the team about your progress. Discuss using online tools for greater visibility of decisions made. It helps provide context and allows for further input as necessary. This works both ways.

Use face to face meetings via video. If there are multiple people joining at the other end, use a proper meeting room with good a/v equipment. Holding a laptop with others joining is not a positive experience and leaves people feeling less involved. Not everything needs a call. Embrace async communication.

You will at times feel isolated and need human interaction. Go to places that encourage this and create a different environment. This can be a shared office, coworking space or local coffee shop. When you know you need to focus, you'll welcome the solitude.

Try and work to your own schedule. Take a longer lunch if it helps you be more effective at what you do. If you need to go and do something, then do it and work around it. Define check in times and availability to your coworkers. This is not about being unavailable, it's about managing your time better.

Replace your commute with something for yourself. Go for a walk with the dog, cycle round the block, or grab a coffee from your local. It will help kickstart the day and allow your mind to separate home from work. Likewise, wearing your PJs all day, whilst comfortable, is an easy way to end up lazing about.

Invest in your space. Ideally you'll have somewhere that is a place for you, with a decent set of equipment and lighting. Working in a common room within your house/flat will likely make it harder to concentrate and may lead to distractions.

If your company has to emphasize the fact you need to work whilst you are at home, then they probably don't trust you to do any work, or are at least unfamiliar with the concept. Don't overcompensate by doing more work; just alleviate any concern by being open about what you are doing.

If you want to do more work because you're in the zone or whatever, by all means do so, just remember to take some time out and recharge.

Take notes and provide a plan of what you'll be working on. This can be as simple as you like. Working to a few bullet points and targets will add extra focus and satisfaction when complete.

Spend time with your family, friends and significant others. It can be really easy to slip into constantly thinking about work and failing to relax with the people around you. Enjoy life outside of work.

Hopefully these tips will put you on the right path to remote working bliss.

Mediatr and CQRS 2020-01-30T00:00:00+00:00 https://mozzy.dev/posts/mediatr-and-cqrs/ Introducing new changes

To begin, I think it would be useful to go over a scenario that I'm sure many people have come across before and why perhaps it would be useful to look into an alternative.

Let's say we've got a bunch of beers and we've allowed our users to see them along with their basic details and an option to buy. When someone is interested in that beer, they might want to know whether or not this beer is any good. It'd be great if we could allow our users to review a beer.

When using the repository pattern, we would have created a new repository, passed that into a service and called that service from within our controller to return the reviews and allow our users to review a beer. We will likely have used the same model for our read and write. This works, and you carry on as normal, happy that users can now add some extra context to beers.

public class BeerReviewService
{
    private Database _db => ApplicationContext.Current.DatabaseContext.Database;

    public void Create(BeerReview beerReview)
    {
        _db.Insert(beerReview);
    }

    public IList<BeerReview> GetAllReviews()
    {
        return _db.Fetch<BeerReview>("SELECT * FROM BeerReviews");
    }

    public IList<BeerReview> GetReviewsByBeerId(int beerId)
    {
        return _db.Fetch<BeerReview>("SELECT * FROM BeerReviews WHERE BeerId = @0", beerId);
    }
}

A couple of weeks later, someone has started taking advantage of this rating option and posted a bunch of fake reviews. You need to delete them. When you first built your service that functionality wasn't required, and for good reason you didn't add it in just for the hell of it. Except that now, you need to make a change to your rating service by adding some more code. A couple more weeks later, and people are asking to edit those reviews as they've added some typos. You need to introduce edit functionality, and the rating service grows some more. Whilst we've now got some honest reviews, not everyone agrees with them, and we'd like to allow people to upvote a review to show appreciation for how good a review is. Again, we need to go back to our rating service. This can go on and on, and soon enough the original code that you wrote bears little resemblance to the intentions of what was supposedly a simple implementation. The contract itself has completely shifted.

public class BeerReviewService
{
    private Database _db => ApplicationContext.Current.DatabaseContext.Database;

    public void Create(BeerReview beerReview)
    {
        _db.Insert(beerReview);
    }

    public void Edit(BeerReview beerReview)
    {
        _db.Save(beerReview);
    }

    public void Upvote(int reviewId)
    {
        var beerReview = GetReviewById(reviewId);
        if (beerReview == null)
        {
            return;
        }

        beerReview.UpVotes += 1;
        _db.Save(beerReview);
    }

    public void Delete(BeerReview beerReview)
    {
        if (beerReview == null)
        {
            return;
        }

        _db.Delete(beerReview);
    }

    public IList<BeerReview> GetAllReviews()
    {
        return _db.Fetch<BeerReview>("SELECT * FROM BeerReviews");
    }

    public IList<BeerReview> GetReviewsByBeerId(int beerId)
    {
        return _db.Fetch<BeerReview>("SELECT * FROM BeerReviews WHERE BeerId = @0", beerId);
    }

    public IList<BeerReview> GetReviewsByUserId(int userId)
    {
        return _db.Fetch<BeerReview>("SELECT * FROM BeerReviews WHERE UserId = @0", userId);
    }

    public BeerReview GetReviewById(int reviewId)
    {
        return _db.FirstOrDefault<BeerReview>("SELECT * FROM BeerReviews WHERE ReviewId = @0", reviewId);
    }
}

How about CQRS?

Taking a look at our reviews example, there were a few things we ended up wanting to do, but essentially we can split those into something that modifies something, or something that retrieves something.

CQRS, or Command Query Responsibility Segregation in full, is a pattern that separates command logic from query logic. If you need to make an update or create an entity, you would use a command. If you need to query for some data, you would create a query. Following this pattern helps with the Single Responsibility Principle. No longer are you likely to have a service or similar that combines both update and retrieval, providing a better separation of concerns. GraphQL has a similar concept: you can either query data or you can apply a mutation to change data.

One of the benefits of this is that the code itself should be self-documenting. The name alone should provide all you need to know. For example, CreateBeerReviewCommand will, as one might assume, create a review for a beer.

public class CreateBeerReviewCommand : IRequest<int>
{
    public int BeerId { get; set; }
    public int UserId { get; set; }
    public string Review { get; set; }
}

As for a query, the naming convention should likewise be self-explanatory: GetBeerReviewsQuery fetches a collection of reviews for a beer. Things have suddenly become a lot more descriptive and relate better to their function.

public class GetBeerReviewsQuery : IRequest<BeerReviewsModel>
{
    public int BeerId { get; set; }
}

Once you've got a command or a query, you need to handle that request. One way to do that, is to create a new handler which mimics the naming convention of your command (or query). So, you will have a CreateBeerReviewCommand and a CreateBeerReviewHandler which will perform any necessary logic (i.e. create a beer review) and return a response. You define the expected response, so in the CreateBeerReviewHandler we will return the id of the newly created review for example.

public class CreateBeerReviewHandler : IRequestHandler<CreateBeerReviewCommand, int>
{
    private Database _db => ApplicationContext.Current.DatabaseContext.Database;

    public int Handle(CreateBeerReviewCommand request)
    {
        var beerReview = new BeerReview()
        {
            BeerId = request.BeerId,
            UserId = request.UserId,
            Review = request.Review,
            LastUpdated = DateTime.Now
        };
        _db.Save(beerReview);

        return beerReview.ReviewId;
    }
}

An example

If we go back to our earlier example, when asked to allow users to edit their reviews, we can implement a new EditBeerReviewCommand with its own handler and its own logic.

public class EditBeerReviewCommand : IRequest
{
    public int ReviewId { get; set; }
    public string Review { get; set; }
}

public class EditBeerReviewHandler : IRequestHandler<EditBeerReviewCommand>
{
    private Database _db => ApplicationContext.Current.DatabaseContext.Database;

    public void Handle(EditBeerReviewCommand request)
    {
        // fetch by primary key, then update only if the review exists
        var beerReview = _db.SingleOrDefault<BeerReview>(request.ReviewId);
        if (beerReview != null)
        {
            beerReview.Review = request.Review;
            beerReview.LastUpdated = DateTime.Now;
            _db.Save(beerReview);
        }
    }
}

You'll notice that the code used to edit a beer review only needs to reference the review id and provide the newly updated review text. The models in this sense are a lot smaller than passing around a DB model, and they are created to perform a particular task. You do end up with more classes, but hopefully you can see the benefit in doing so.

Now, you might be thinking we've got a ton of classes going on here, and I'm going to have a lot of dependencies in my controller to achieve all of this. You need to ensure the right commands/queries are going to the right handlers, so it would be good to have something that could help mediate these dependencies. Luckily there is a NuGet package out there that makes things a bit simpler. Enter Mediatr by Jimmy Bogard (of AutoMapper fame).

Mediatr is a package that helps link up the relevant commands/queries to the correct handlers. This means that we can simplify our controllers to use an instance of IMediator, which we'll then use to send our request and get the relevant response. In the example below, we've got a beer page, and we're getting the reviews for that beer by creating a new GetBeerReviewsQuery.

public class BeerController : Controller
{
    private readonly IMediator _mediatr;

    public BeerController(IMediator mediatr)
    {
        _mediatr = mediatr;
    }

    public async Task<IActionResult> Index()
    {
        var beerId = 1234;
        var model = new BeerViewModel
        {
            BeerId = beerId,
            Name = "My beer",
            Description = "My awesome pale ale",
            Price = 2.99m,
            // Send is asynchronous, so we await the response
            Reviews = await _mediatr.Send(new GetBeerReviewsQuery { BeerId = beerId })
        };

        return View(model);
    }
}

In this scenario we've isolated the code to get reviews relating to a beer, and we're loading them via our handler for GetBeerReviewsQuery. It'll look something like this.

public class GetBeerReviewsHandler : IRequestHandler<GetBeerReviewsQuery, BeerReviewsModel>
{
    private Database _db => ApplicationContext.Current.DatabaseContext.Database;

    public BeerReviewsModel Handle(GetBeerReviewsQuery request)
    {
        var reviews = _db.Fetch<BeerReview>("SELECT * FROM BeerReviews WHERE BeerId = @0", request.BeerId);
        var response = new BeerReviewsModel()
        {
            Reviews = reviews,
            Count = reviews.Count,
            LastUpdated = DateTime.Now
        };

        return response;
    }
}

Vertical Slices

Now that we've split our application more closely into commands and queries relating to specific tasks, it might be a good idea to group those tasks together somehow. This technique involves organising your project into vertical slices rather than by type, i.e. we can define features and split them into verticals. You can keep code relating to beer reviews as one slice of your application, rather than simply having controllers, services, models and so on spanning multiple domains. When you need to modify the beer reviews code, you know where to find it, and the related code is positioned together as one piece. You may wish to use different projects, or release different features in a microservices architecture, in which case this idea helps with providing updates in isolation. From a testing point of view, the smaller, easier-to-read classes should result in code that is more maintainable as the project grows.

Going further

The examples provided here are relatively basic, to show the concepts and how you might work with this pattern. In a typical application you'll also need to think about caching, validation, logging and various other steps in the pipeline. Thankfully, Mediatr has the concept of pipelines and behaviours, so you can run code before or after handlers. I'll leave that for another time.
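To give a flavour of what that looks like, here is a sketch of a logging behaviour using MediatR's IPipelineBehavior interface. Note this interface is from the async versions of MediatR, so it differs slightly from the synchronous handlers shown above.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// A simple behaviour that wraps every handler, logging before and after.
// Sketch only; a real implementation would likely use an injected ILogger.
public class LoggingBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    public async Task<TResponse> Handle(
        TRequest request,
        CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        Console.WriteLine($"Handling {typeof(TRequest).Name}");

        // call the next behaviour in the pipeline, or the handler itself
        var response = await next();

        Console.WriteLine($"Handled {typeof(TRequest).Name}");
        return response;
    }
}
```

Registered with your DI container, this runs around every command and query without touching the handlers themselves, which is where cross-cutting concerns like validation and caching fit nicely.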

Enabling features in production 2019-12-18T00:00:00+00:00 https://mozzy.dev/posts/enabling-features-in-production/ Note: this article was written for 24days in Umbraco and the original can be found here: https://24days.in/umbraco-cms/2019/features-in-production/

When deploying features or new changes to our websites, we might not want them to be available to all users at once. It would be nice if we could slowly introduce new features to our users. We might even want to completely disable them and have them there in secret. In this article, I'm going to present a few different ways in which you can release features into production, even when they might not be fully complete.

Background

Before we look into the ways in which we can get our features into production, it's useful to think about how we might have typically developed new changes in our website before a release.

One of the most common ways is to use feature branches in a gitflow scenario, where you would build your new feature, deploy to a test environment and then try and merge those changes with anything else that has changed in the meantime. You'll probably bundle a number of changes together and then release those to production.

Some of the issues at this stage might be that you've got a tricky merge, or that you've tested something in isolation and other issues arise when you bring it all together. You also haven't really trialled those changes with actual users, so the changes may not be desired, or there could be problems with how it all works.

Benefits and options

Therefore, it's probably a good idea to highlight some of the benefits as to why you might want to integrate your new changes sooner rather than later.

  • your features get into production quicker
  • you can get feedback earlier
  • you can test things in reduced quantities
  • you should end up with a better product
  • your client will be happy

You might have seen reasons like this elsewhere, and that's because they actually run true for agile development and the scrum process, which in turn is a good fit for continuous delivery. A notion that your app is always in a deliverable state, since any new changes have been safely integrated somehow.

On to the ways in which we can enable our changes...

App settings

One of the easiest ways is to define application settings in your web.config (or somewhere similar), or provide settings in your Umbraco solution, to toggle features on and off. You'll need to check this setting within your code before including the new changes for your users.

This can be done with a feature helper, with which you can query the app setting and return true or false as to whether the new changes should be included or not.

public static class FeatureHelper
{
    public static bool IsFeatureEnabled(string featureFlag)
    {
        // check if we have enabled functionality via web.config
        bool.TryParse(ConfigurationManager.AppSettings.Get(featureFlag), out var setting);
        return setting;
    }
}

If you'd like to turn off entire controllers or action methods, one way to do that is via attributes. These decorate your actions and apply the filter before any of the other code is run. In the attribute logic, we call our feature helper to determine whether or not a feature should be applied.

public class FeatureAttribute : ActionFilterAttribute
{
    public string ConfigVariable { get; set; }

    public FeatureAttribute(string configVariable)
    {
        ConfigVariable = configVariable;
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // if the feature is not enabled, then redirect to 404
        if (!FeatureHelper.IsFeatureEnabled(ConfigVariable))
        {
            filterContext.HttpContext.Response.Redirect("/404/");
        }
        else
        {
            base.OnActionExecuting(filterContext);
        }
    }
}

This allows for features to be controlled relatively easily in terms of when they are available, but it's an all-or-nothing solution, which might not be desired when you want to tentatively introduce customers to your new feature.
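Usage then looks something like the following ("NewCheckout" is a hypothetical feature flag name, matching an app setting key in web.config):

```csharp
public class CheckoutController : Controller
{
    // this action is only reachable while the "NewCheckout"
    // app setting is set to true; otherwise it redirects to /404/
    [Feature("NewCheckout")]
    public ActionResult NewCheckout()
    {
        return View();
    }
}
```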

Session

We can apply changes based on a user's session data, which could be enabled via a campaign to help with testing. Ask people to click a link and become part of the session. You can then trigger an update that will kill off any session data if there are issues. The helper we used above for checking app settings can be amended to work with session data.

public static class FeatureHelper
{
    public static bool IsFeatureEnabled(string sessionFlag)
    {
        // check if we have a valid session to enable functionality
        var sessionVar = HttpContext.Current?.Session[sessionFlag];
        if (sessionVar == null)
        {
            return false;
        }

        // TryParse avoids throwing if the session value is malformed
        bool.TryParse(sessionVar.ToString(), out var setting);
        return setting;
    }
}

This is great for ad hoc testing, as everything can return to normal after the session. This is ideal if you don't have an account system set up, but it is helpful if you can target or segment your users if you have a hypothesis you want to test.

A/B testing would be a good scenario here. One set of users could get the feature enabled and the other set could continue as normal to see if anything is improved if your feature relates to performance or conversions. It's important that you can define what improvement looks like or can at least get some feedback from your users.

User preference

If you do have an account section, you could provide the user the choice as to whether they want to opt in to a newly released feature that you might want to test further before enabling for all. You would need to check against the current logged in user as to whether or not they would like to see those changes.

In Umbraco, you could define this as a member property or membership group so you can see within the backoffice which users have enabled your feature and segment your users that way. If your changes are more granular, then using properties is probably preferred.

Just to note, for members in Umbraco there is the concept of role based access, which is tied with membership groups, so could be an option if you wanted to hook in that way instead. This uses .NET authorisation under the hood.

Umbraco member properties

This puts the control onto your users, and they are more likely to be actively engaged with a feature and also understand that there may be issues. If they're not happy with how it works, then they can easily opt back out and carry on as normal. Within your website, you can provide an option for feedback based on whether a new feature has been turned on. They can submit reports and you can gain some quality insight as to how your users are actually using your changes.

Deployment slots

If you're doing your deployments via Azure App Services, then another option at your disposal is to use deployment slots. The main use case is to run a staging slot alongside your live slot, and to swap them when doing a deployment, ensuring that you've tested your changes, warmed up your site beforehand, and traffic is switched over without downtime.

There is another benefit though: it can allow you to drive traffic to either slot based on a percentage. That way you can test with 20% of users before rolling out to the other 80%, all managed through deployment. You can do similar with Traffic Manager or other load balancing rules.

Azure deployment slots

With this option, it's even more important that you can provide suitable metrics and logs to determine how your new changes are performing. Without reasonable guidance, you're in the dark as to whether or not it's a good idea to roll out.

Features as a service

With the rise of microservices, why not make use of a service that can handle our features? One such service is LaunchDarkly, which provides a dashboard for your features and has a number of libraries (inc. JS and .NET) that you can integrate with. As a solution, it'll give you a full feature management tool pretty quickly, and one that should provide plenty of options, albeit at a price. Useful to consider, and there may well be others out there. The concepts should still be the same as we've outlined in this article.

[Update]

Since originally writing this article, another option which I've found is the .NET Feature Management library. This is built on top of the .NET configuration system and provides a nice set of APIs for using in your applications. It looks like a great option. Here's a quick start guide to set up with Azure.

Conclusion

Testing features in production is a big part of continuous improvement and ties in well with the continuous delivery way of doing things. You can deliver changes quicker and have greater confidence in when a feature is enabled for all. Ideally, you can ramp up usage in a controlled manner.

You can gain feedback on your changes from real users and scenarios, which may in turn provide much greater insight as to how something works. The end goal being that your changes have been well integrated and are of better quality overall.

]]>
Cloudflare page rules 2019-07-16T00:00:00+00:00 https://mozzy.dev/posts/cloudflare-page-rules/ As you might have read, I recently moved my site over to a new domain. mozzy.dev

I needed a way to redirect requests from my old domain to the new one, but I didn't really want to go creating redirect rules at the server level (think IIS, Apache). In fact, tcmorris.net is simply a GitHub Pages site. It needed to be a 301 redirect and not just a meta refresh tag, which only reloads the browser and doesn't tell search engines that the content has moved permanently.

Way back, I used Cloudflare to provide free SSL on my site, and the DNS etc. is still handled there. So, I had a look around and came across Page Rules. These are neat little rules that you can apply to your site, such as 'Always Use HTTPS' or 'Auto Minify'. A lot of this is provided by Cloudflare to use across your whole site, but Page Rules allow you to be specific with individual pages if you wish.

The one that I was interested in was 'Forwarding URL', which allows you to forward requests and trigger a redirect of some sort (301/302). You can use wildcards in the matching rule to cover all pages for example, and then set a new destination along with the matching wildcard info. Example below for how I managed to get a 301 redirect working to my new domain.

Cloudflare Page Rule example
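In rough terms, the rule is made up of a wildcard match and a destination, with $1 carrying over whatever the wildcard matched (so deep links keep working):

```
If the URL matches:  tcmorris.net/*
Setting:             Forwarding URL (301 - Permanent Redirect)
Destination URL:     https://mozzy.dev/$1
```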

Rather neat I'm sure you'll agree.

For more info, have a look at their docs: Cloudflare examples

]]>
Introducing mozzy.dev 2019-07-15T00:00:00+00:00 https://mozzy.dev/posts/introducing-mozzydev/ It's been a while, but after the launch of Hylia by Andy Bell, I figured it was about time that I updated my personal site.

With that, the opportunity for a bit of a refresh arose, and so mozzy.dev was born.

Personal brand

I've not toyed too much with changing my username; it's mostly been mozzy16 from an early start, and then I tried tcmorris to give it a more professional spin.

The origins of mozzy16 are rather boring, a play on my surname and my birth date. tcmorris was just tied to my name.

After being called mozzy, moz or mo quite often, it seems appropriate to stay along those lines. This time just with a developer spin, since that's what I do.

Why is this important?

Here's some reading that you might find of interest from Paul Seal of codeshare.co.uk fame.

Andy tells us his story too: So long, HankChizlJaw

Personal sites

With a personal brand goes a personal site. Sure, you could write on Medium or some other online community, but in doing so you're effectively losing a level of control over your content. If I wanted to change anything on this site, I've got the source code, including all the posts and images that go with it. In migrating from Jekyll, all I needed to do was copy and paste some files.

If you're wanting to start a personal site, perhaps try something like Hylia as a starting point.

]]>
Using migrations in Umbraco 2018-10-28T00:00:00+00:00 https://mozzy.dev/posts/using-migrations-in-umbraco/ Migrations are a really handy way to deploy your database changes to new environments. Umbraco uses them extensively for processing any upgrades between versions, and you can use them too. If you've ever used Entity Framework, then this will probably be fairly familiar.

What does it do?

Umbraco has actually been doing this since v6, with the idea that running migrations via code versus having a suite of custom SQL scripts can offer more options.

There is a table called umbracoMigration which keeps track of all the migrations that have been run. It looks a bit like this:

  • add image
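Each migration that has run gets its own row; a rough sketch is below (the column names are from memory of the v7 schema and the values are made up for illustration):

```
id | name              | version | createDate
---+-------------------+---------+------------------
 1 | Umbraco           | 7.12.3  | 2018-09-01 09:00
 2 | MyCustomNamespace | 1.0.0   | 2018-10-01 09:00
```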

You'll notice that it has a name next to each migration, as well as the version. This is how we can see what state our database is in. So, for the above it's run up to 7.12.3 in Umbraco. When you deploy changes to a new server, if it's pointing to an older version then it will run the migration on startup. This is mostly used for schema changes, but you can also insert data or run your own custom code.

How does it work?

Umbraco has this concept of a MigrationRunner that you can trigger to execute your migrations. You set the target version, and the runner finds all the relevant migrations needed to get to that version and runs them.

In order to find these migrations, all you need to do is create a file that looks a bit like this:

/// <summary>
/// MyCustomTable migration
/// </summary>
[Migration("1.0.0", 1, MigrationNames.MyCustomNamespace)]
public class MyCustomTableMigration : MigrationBase
{
    private const string TableName = "MyCustomTable";
    private const string OtherTableName = "SomeOtherTable";
    private const string ColumnName = "MyCustomString";

    public MyCustomTableMigration(ISqlSyntaxProvider sqlSyntax, ILogger logger)
        : base(sqlSyntax, logger)
    {
    }

    /// <summary>
    /// Process database upgrade
    /// </summary>
    public override void Up()
    {
        // create a new table if it doesn't exist
        var tables = SqlSyntax.GetTablesInSchema(Context.Database).ToList();
        if (!tables.InvariantContains(TableName))
        {
            Create.Table(TableName);
        }

        // or you can run alterations on existing tables
        var columns = SqlSyntax.GetColumnsInSchema(Context.Database).ToArray();
        var columnExists = columns.Any(x =>
            string.Equals(x.TableName, OtherTableName) &&
            string.Equals(x.ColumnName, ColumnName));
        if (!columnExists)
        {
            Alter.Table(OtherTableName).AddColumn(ColumnName).AsString().Nullable();
        }
    }

    /// <summary>
    /// Process database downgrade
    /// </summary>
    public override void Down()
    {
        // drop the table
        Delete.Table(TableName);

        // remove the column from the existing table
        Delete.Column(ColumnName).FromTable(OtherTableName);
    }
}

The code above is saying add a new table and also alter an existing table by adding a new column of type string. The commands are exposed via the inherited MigrationBase which Umbraco provides. We're stating that this is v1.0.0 in a SemVer format. We're also saying that this migration has its own name (MigrationNames.MyCustomNamespace), which keeps all your migrations grouped for your custom code.

We need to tell Umbraco to run this code on startup. Here is how you can do that:

public class CustomMigrationEventHandler : ApplicationEventHandler
{
    protected override void ApplicationStarted(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
    {
        // check target version
        var rawTargetVersion = ConfigurationManager.AppSettings["app:MigrationVersion"] ?? "1.0.0";
        var targetVersion = SemVersion.Parse(rawTargetVersion);
        if (targetVersion != null)
        {
            HandleMigrations(targetVersion);
        }

        base.ApplicationStarted(umbracoApplication, applicationContext);
    }

    private void HandleMigrations(SemVersion targetVersion)
    {
        // get all migrations already executed
        var currentVersion = new SemVersion(0, 0, 0);
        var migrations = ApplicationContext.Current.Services.MigrationEntryService.GetAll(MigrationNames.MyCustomNamespace);

        // get the latest migration executed
        var latestMigration = migrations.OrderByDescending(x => x.Version).FirstOrDefault();
        if (latestMigration != null)
        {
            currentVersion = latestMigration.Version;
        }

        if (targetVersion <= currentVersion)
        {
            // already at (or past) the target, nothing to do
            return;
        }

        var migrationsRunner = new MigrationRunner(
            ApplicationContext.Current.Services.MigrationEntryService,
            ApplicationContext.Current.ProfilingLogger.Logger,
            currentVersion,
            targetVersion,
            MigrationNames.MyCustomNamespace);

        try
        {
            // run migrations
            migrationsRunner.Execute(UmbracoContext.Current.Application.DatabaseContext.Database);
        }
        catch (Exception e)
        {
            LogHelper.Error<CustomMigrationEventHandler>("Error running migration", e);
        }
    }
}

That's a fair chunk of code. Quick run through as to what's happening...

  • find out what version to target
  • figure out if that is a later version than what we have
  • if so, run the migrations
  • log an error if it blows up

Your custom table should get added the next time you start Umbraco. Notice that I've added an app setting for keeping track of the version, called app:MigrationVersion. You can add this via your web.config and then use it to target different versions, updating it through your deployment variables. This way, we opt in with a configurable value rather than changing our code each time.
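For reference, the app setting in web.config is just:

```xml
<appSettings>
  <add key="app:MigrationVersion" value="1.0.0" />
</appSettings>
```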

So, we've added our custom table and now want to make a change. What do we do? Well, we create a new migration which inherits from MigrationBase, has our code in and then bump up the version via the config value.

Any gotchas?

Yeah, a few things to be aware of.

In the migration examples above, we used static strings to denote the table and column names. You can also use a reference like so, which uses type T.

Create.Table<MyCustomTable>();

The slight issue with this is that if you want to remove MyCustomTable from your code, you're going to have to change your migration, as you don't particularly want to keep old references around just so that it will compile. The migration shouldn't change though: it should be able to run multiple times, and by the end you should be in the same state as the version asked for. The term for this is idempotent.

The other thing to note is that once you've racked up a few of these, it might be hard to tell which ones relate to which version. Umbraco handles this by using named folders that relate to the version they are targeting. Here's an alternative:

  • 001_InitialMigration.cs
  • 002_MyNextMigration.cs
  • 003_AddCustomerTable.cs

Essentially, just adding a prefix so that you can easily see at a glance which migrations have been added and what order they occurred. You could also map these to the SemVer version if you wanted.

When you deploy these changes to your site, they will run on startup and could mean a bit of downtime for your customers. Think about what that means to you and figure out if you can deploy at low traffic, or use a blue-green strategy where you alternate between two versions of the live site, allowing you to prep everything without your customers taking the hit.

Summary

Hopefully you now understand a little about what migrations are and can see how to utilise them on your projects. Following the patterns mentioned in this post, you should be deploying with ease.

References

]]>
App offline strategies for your website 2018-10-09T00:00:00+00:00 https://mozzy.dev/posts/app-offline-strategies/ It's undesirable, but sometimes you'll be in a scenario where you may need to take your site offline for maintenance. This could be an upgrade to your CMS of choice (e.g. Umbraco), processing of changes behind the scenes, or simply restricting users at high traffic.

app_offline.htm

The simplest way to do this is to place an app_offline.htm file in the root directory of the website. This will instruct .NET to route all traffic to that file and show it to the user. It'll need to be simple HTML and you'll need to inline any CSS that you want applied.
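A minimal sketch of such a file might look like this (styles inlined, since requests for external assets will also be routed to the offline page):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Down for maintenance</title>
  <style>
    body { font-family: sans-serif; text-align: center; padding-top: 4rem; }
  </style>
</head>
<body>
  <h1>Back soon</h1>
  <p>We're doing a spot of maintenance. Please check back shortly.</p>
</body>
</html>
```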

This is great if you want to restrict traffic altogether, but if you want to do stuff in the background then you are a bit stuck.

IIS Rewrite

So, one way you can get around that is by using a rewrite rule which excludes a certain IP. This allows whoever visits your site from the allowed IP to browse the website without obstruction; anyone else will get served the offline file. Below is an example transform that you can use to achieve that.

<rule name="App-offline" stopProcessing="true" enabled="true" xdt:Transform="InsertIfMissing" xdt:Locator="Match(name)">
  <match url="^(.*)$" ignoreCase="false" />
  <conditions logicalGrouping="MatchAll">
    <add input="{URL}" pattern="/offline.htm$" ignoreCase="false" negate="true" />
    <add input="{REMOTE_ADDR}" pattern="^127\.0\.0\.1$" ignoreCase="false" negate="true" />
  </conditions>
  <action type="Redirect" url="/offline.htm" redirectType="Found" />
</rule>

Maintenance manager

Perhaps you don't have a specific IP you want to allow and are interested in being able to turn this off and on. Well, the good news is that if you are using Umbraco, Kevin Jump has created a package for that, called Maintenance Manager for Umbraco. You can log in to Umbraco and you'll be given the option to apply the restriction to everyone, as well as enabling access for back office users if you wish.

You can download that here: https://our.umbraco.com/packages/backoffice-extensions/maintenance-manager/

Limited downtime?

There are a few ways to achieve this. One of the most common is blue-green deployments, where you alternate between active and inactive versions of the website: one is live while the other is updated, and then you switch over. Usually this will require some kind of load balancer, although with deployment slots in Azure you can achieve a similar thing.

You could use tooling such as Octopus Deploy to set up a variety of deployment strategies, with the ability to run custom scripts and actions throughout the deployment. You can also configure a deployment to run at scheduled times (e.g. 2am where there is expected to be little traffic).

]]>
Using Models Builder in a project 2018-06-03T00:00:00+00:00 https://mozzy.dev/posts/using-models-builder/ A while back I gave an overview of the different options that Models Builder provides you. I didn't go into a lot of detail, but I did cover why you might want to use one option over another.

Structure

Fast forward a little, and I think I've reached an approach that I'm happy with and allows me to build things in a way I want without adding extra complication. I'm more familiar with an MVC type scenario and so here's the kind of solution I normally end up with for Umbraco projects.

  • Project.Web.Core - class library
  • Project.Web.UI - website project (reference to the above)

There will likely be some other projects in there as well, e.g. tests.

I would typically place my models within the Core project and then utilise these within the views of the UI project. To do that, there are some configuration options to instruct Models Builder as to how to create models.

<add key="Umbraco.ModelsBuilder.Enable" value="true" />
<add key="Umbraco.ModelsBuilder.ModelsMode" value="AppData" />
<add key="Umbraco.ModelsBuilder.ModelsNamespace" value="Project.Web.Core.Models.Content" />
<add key="Umbraco.ModelsBuilder.ModelsDirectory" value="~/../Project.Web.Core/Models/Content/" />
<add key="Umbraco.ModelsBuilder.AcceptUnsafeModelsDirectory" value="true" />

The above will place the models generated by Models Builder into my Project.Web.Core project, so that I can then use them how I wish. Include them in the project and do a build, and they will be included in the project dll.

The good thing about this is if you already have a project that was using Model.Content.GetPropertyValue("propertyAlias"), it's not too difficult to change over to strongly typed models; Models Builder is essentially doing all of that for you. If you're starting fresh on a project, then there are a few other options for your model/mapping needs. None of the below generate models, so they are quite different in usage to Models Builder. They do offer more complex scenarios in terms of mapping and granular view models, and may be something you prefer.

Using the models

I've now got a bunch of files that provide me with the generated models from Umbraco. These are content models and they all inherit from PublishedContentModel. They are partial classes. When it comes to using these models, it might be instantiated via a controller or it might just be there for me via Umbraco's routing.

In a view, this is what we can do:

@inherits UmbracoViewPage<MyModel>

Then, within the view there will be intellisense and I will be notified of compilation errors. A nice benefit of using strongly typed models. Since we made use of UmbracoViewPage we also get access to all the Umbraco helper methods.

Ok, that sounds great. But there's something a little iffy about using a model that Umbraco generated for me and that is closely tied to the content in Umbraco. What if I wanted to extend it or add my own properties? Well, there's a solution for that. The official docs suggest creating another partial class and adding what we need there, but I prefer this method: we can just inherit from the model that Models Builder gave us.

public class FridayBeersViewModel : FridayBeers
{
    public FridayBeersViewModel(IPublishedContent content)
        : base(content) { }

    // custom properties
    public bool ShowBanner => BannerImage != null;
}

We've still got all the properties from the generated model and we can also build up a custom view model, which more closely matches the output in our views. As a side note, if your view model ends up with little relation to the generated model then you probably shouldn't be inheriting from it and can just roll your own.

In the controller, you would then have something like this.

public class FridayBeersController : BasePageController
{
    public ActionResult FridayBeers()
    {
        var model = new FridayBeersViewModel(CurrentPage);
        // apply any other updates on the model
        return CurrentTemplate(model);
    }
}

I'm hijacking the route so that I can amend the default behaviour of Umbraco. I'm also making use of the template-based routing in Umbraco, where the template name matches my action. BasePageController just inherits from SurfaceController, implements IRenderMvcController, and has a method called CurrentTemplate that resolves the template and passes the model to the view. Here's the example in Umbraco core: view on github.

Compositions

The above example works well for a page or a block of content, but we might want to break things up a little more and make better use of shared code. When we use compositions, what gets generated is an interface, and our content models implement its properties. A model can also make use of many compositions. This is great, because it means we can define the interface as the model within a view if we want to. For example, if we had an interface such as IMetaData, we can create a related partial view that handles the meta tags on our site.

@inherits UmbracoViewPage<IMetaData>

<meta name="description" content="@Model.MetaDescription" />
@* etc. *@

Or we might have a couple of pages that have a banner, and a few without. We just let our doc types use the compositions and they know how to render the banner.

ModelTypeAlias

I'm sure at one point during development of an Umbraco website you will have created a class with some constants in it, relating to the alias of the doc type: an easy way to vary your code depending on the doc type. Well, Models Builder provides this in a straightforward way.

public partial class ExamplePage : PublishedContentModel
{
    public new const string ModelTypeAlias = "ExamplePage";
    public new const PublishedItemType ModelItemType = PublishedItemType.Content;

    public ExamplePage(IPublishedContent content)
        : base(content)
    { }

    // other properties
}

To use this, just do ExamplePage.ModelTypeAlias when you need it.

Summary

I've outlined a few ways in which you can use Models Builder on a project beyond what you get out of the box, so that you can get the most out of it in a simple manner. As a pattern, this should allow for easy extensibility and a quick way of consuming the content from Umbraco.

Further reading

]]>
Joining BeerBods 2018-02-01T00:00:00+00:00 https://mozzy.dev/posts/joining-beerbods/ As of 12th Feb, I will be the newest member of the beer subscription company, BeerBods and will be taking up the role of CTO / Head Geek.

That sounds great, but what does it mean I hear you ask? Well, I'll be looking at their current website offering, giving it a refresh and building out new functionality. In terms of infrastructure and integrations, I'll be looking at ways to introduce tech in all aspects of the business. The focus will be on how BeerBods can become better, and how to grow as a company. There are a number of projects on the horizon and lots of great ideas, it'll be my task to understand and evaluate how these can come to fruition and be integrated from a tech point of view.

It's probably worth rewinding for a second, to mention a little more about BeerBods and how they got started. Back in 2011, Matt started having beer tastings in his shed. A year later, an idea formed about drinking better beer using the web as a platform. A simple website launched, and #beerbods was born on Twitter: every Thursday at 9pm you can enjoy a beer (everyone drinks the same beer) and talk about it online. Shortly after, it managed to become a fully fledged business and the orders were coming in. The social element and the stories behind what you are drinking are a core aspect of the business.

Now, those of you who know me will probably not be surprised to find out that I've managed to get a role working in the beer industry. I've been interested in the beer scene for a number of years. When I was at uni, I remember heading to the local supermarket and getting some mates round to rate the beer we just bought. Back in those days, it was more traditional ale of course, but the experience, finding new beers and social part of the tastings was very much part of it. Similar to what Matt and the BeerBods team aim to do.

I can't wait to get started.

]]>
Getting Started with Docker 2017-05-31T00:00:00+00:00 https://mozzy.dev/posts/getting-started-with-docker/ It's more than likely that you will have already heard about Docker, but what exactly is it and what does it offer you? Well, Docker is a container platform. But what is a container? A container is an isolated image that includes everything you need to run your app: your code, the runtime, any libraries and settings. Unlike a Virtual Machine (VM), you do not need to bundle a guest OS. This keeps things lightweight and you can run multiple containers on a single host. The premise is that containerised software will always run the same, regardless of the environment, so you shouldn't get the fabled 'works on my machine' issue.

Downloading Docker

Docker is available in two editions: Community Edition (CE) and Enterprise Edition (EE). Docker CE is available for Mac, Windows and Linux distros. I'm going to be talking about the Windows version in this article, so grab the stable version of Docker for Windows from here:

https://store.docker.com/editions/community/docker-ce-desktop-windows

Docker uses Windows-native Hyper-V virtualisation, so if you haven't enabled this feature it will prompt you to enable this and restart. Once that is done, you'll notice that Docker is running in your taskbar. A quick click on 'About Docker' and you should see confirmation of the current version.

Running Docker

To interact with Docker, you can use the docker commands in a terminal such as Powershell. Here's how to run the simple Hello World example.

PS C:\Users\tmorris> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Some useful commands:

  • docker version: see version information for Docker
  • docker ps: display running containers
  • docker rm <container>: remove a container
  • docker images: display images
  • docker rmi <image>: remove an image

To enable tab completion within Powershell, download posh-docker.

An example

Here is a slightly more involved example that shows a simple application running .NET Core in a Linux container. All it does is display a brief welcome by dotnetbot.

docker run microsoft/dotnet-samples

Docker dotnetbot

Options

Docker opens up a few possibilities in the developer workflow. Here are some use cases:

  • Create a consistent development environment between your team
  • Pull in your dependencies as neatly packaged Docker images
  • Isolate concerns, develop and deploy independently with a microservices architecture
  • Build and test your apps with Bitbucket Pipelines (or similar) to provide Continuous Integration (CI)
  • Scale up and manage your infrastructure with Docker Swarm

Summary

Docker is gaining momentum amongst the dev community. I've given a quick overview here about how to get started and how you can incorporate Docker into your workflow. Give it a try :)

References

]]>
Updating older Umbraco packages 2016-10-27T00:00:00+00:00 https://mozzy.dev/posts/updating-older-umbraco-packages/ In a recent post, I outlined some of the reasons for upgrading Umbraco and how best to approach the challenge. In that post, I mentioned that you are likely to have a number of issues with custom packages or custom data types. Below is a rundown of some of the examples I have encountered. If you have found others, it'd be great if you could add to the comments section and I'll look to edit this post with extra details in the hope that it can become a valuable resource.

Packages

This is basically a list of packages which I have come across via a previous upgrade to Umbraco 7+. Quite a few will have a version that works with Umbraco 7+, others will simply not work. Where possible, I have included a suitable upgrade path.

  • AttackMonkey Custom Menus - used to disable delete/copy/move etc. on some content nodes, to idiot-proof the CMS (e.g. stop users deleting the home page). v7 support: yes (v2+).
  • AttackMonkey Tab Hider - used to hide some of the tabs on content types from certain users (e.g. only admins can edit the SEO tabs). v7 support: no.
  • AttackMonkey Security - security helper, password strength validation. v7 support: no. Alternative: membership provider regex.
  • AutoFolders - used to automatically organise news type content into Year/Month folders. v7 support: no. Alternative: DateFolders / uDateFoldersy.
  • CMSImport - used to import content from an old CMS. v7 support: yes.
  • Config Tree - allows you to view all the site config files in the back office. v7 support: yes.
  • Contour - used to provide forms. v7 support: yes (use latest).
  • Contour Contrib - adds some additional functionality to Contour, e.g. Recaptcha. v7 support: yes.
  • DAMP - used as a replacement for the built-in media picker, as it offers more functionality. v7 support: no. Alternative: built in.
  • DocType Mixins - plugin to allow DocType composition in earlier versions of Umbraco. v7 support: sort of. Alternative: Doc Type Compositions.
  • Embedded Content - allows for repeating content structures within a single page. v7 support: no. Alternative: Archetype / Nested Content.
  • FamFamFam icons - adds additional icons for use in the content tree to represent Document Types. v7 support: no. Alternative: built in / packages.
  • Google Maps for Umbraco - Data Type that allows the selection and rendering of Google Maps. v7 support: no. Alternative: alternative packages available.
  • ImageGen - allows image resizing and compression. v7 support: yes. Alternative: ImageProcessor.
  • Mass Replacer - bulk find and replace actions for the Umbraco back office, occasionally used when site-wide brand names need standardising etc. v7 support: maybe.
  • Media Icons - displays a file type icon rather than the built-in ones in the Umbraco media library. v7 support: no.
  • Open Calais Autotag - can be used as a replacement for the tags data type in earlier versions of Umbraco. v7 support: no.
  • Path Fixup - a developer dashboard control to fix database issues. v7 support: n/a (fixed).
  • Repeatable Custom Content - a data type which allows adding repeatable custom content/child nodes. v7 support: no. Alternative: Archetype / Nested Content.
  • Robots.txt - allows you to view the robots.txt file for the site in the back office. v7 support: yes.
  • Structure Extensions - allows you to set default Document Types for child pages. v7 support: maybe. Alternative: built in.
  • uComponents - used for various additional Data Types. v7 support: no. Alternative: built in.
  • Yoyo CMS Tag Manager - a plugin that adds an additional section to the CMS allowing you to visually view and manage all of the tags used on the site. v7 support: yes (v3+).

Data Types

Ok, so this one is more of an extension to the packages section above, but I figured it'd be useful to separate the two.

  • DAMP - v7 support: no. Alternative: built-in media picker.
  • Embedded Content - v7 support: no. Alternative: Nested Content. Conversion: manual.
  • Form Picker - v7 support: yes.
  • Google Map - v7 support: no. Alternative: package.
  • Legacy MNTP - v7 support: no. Alternative: MNTP. Conversion: XML to CSV.
  • Short URL Field - v7 support: no. Alternative: textbox?
  • uComponents: Multiple Dates - v7 support: no. Alternative: package. Conversion: manual.
  • uComponents: Multi-URL Picker - v7 support: no. Alternative: Related Links. Conversion: manual.
  • uComponents: URL Picker - v7 support: no. Alternative: URL Picker. Conversion: XML to JSON.
  • XPath DropDownList - v7 support: no. Alternative: MNTP.

As mentioned before, this is by no means a complete list but should offer some guidance for others who come across a similar task. You might have an existing site or customer who uses some of these packages and been holding off an upgrade. It's easy enough to figure out what the outcome of such an upgrade will be, so I'd say give it a go and figure out what breaks (if anything).

]]>
Getting Started with Models Builder 2016-06-22T00:00:00+00:00 https://mozzy.dev/posts/getting-started-with-models-builder/ Models Builder comes included in Umbraco 7.4 and out of the box it should just work, but what if you want more options? Below is a rundown of all the different modes for Models Builder and what they actually mean in terms of code.

I realise that there is some documentation and posts on this already, but this is aimed as a primer and how it worked for me.

Enabling Models Builder

In order to get Models Builder working, there is one key web.config update that you need to do.

<add key="Umbraco.ModelsBuilder.Enable" value="true" />
<add key="Umbraco.ModelsBuilder.ModelsMode" value="PureLive" />

If you set the first one to false, then it simply doesn't do anything and your site runs as before without any of the Models Builder stuff. The next one is important in terms of selecting which mode you want Models Builder to run in. There are quite a few options here, which I've tried to go through and offer the usage case for each.

PureLive

This is the default option with Umbraco and is designed as an easy way to get started. You shouldn't have to do anything clever and you will have strongly typed models in your views. Except that this doesn't come with IntelliSense, and you can't use the models anywhere other than your views. This is all possible via some in-memory compilation at runtime.

The main point here is that it's really easy to edit your content types and templates from within Umbraco without the need for an application restart.

Dll Models

This option generates the models within ~/App_Data/Models and then compiles them into a single dll, which is added to the bin folder of your website project. A compiled dll allows for IntelliSense and you can use it throughout your website project. There are two options here: LiveDll and Dll mode. The first updates (causing an application restart) whenever content types are changed; the latter works as an opt-in update via the click of a button.

This one is mainly for working in Visual Studio, where you're happy to leave the models as a dll and not much else.

AppData Models

This option generates the models within ~/App_Data/Models, but they aren't added to a dll. If you want them compiled then you can include them within Visual Studio and get intellisense and use throughout the project. Again, two options: LiveAppData and AppData. Works the same as before.

Very similar to Dll mode but with a bit more selection over what gets compiled.

API Models

This one is a little different in that it doesn't generate the models in the Umbraco website. You decide the location of where they are generated and then reference so that they can be used within your website project. It also relies on a Visual Studio extension and NuGet package (Umbraco.ModelsBuilder.Api) so that it can connect to your Umbraco instance. This means that it is up to you to update your models whenever a change is made to the content types. You probably don't want to generate models in the Umbraco website anymore either so it is advisable to set Umbraco.ModelsBuilder.ModelsMode to Nothing when using this method.

This is probably useful if you normally have a separate project for all your controllers, models, custom code, etc.
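Whichever mode you pick, it is switched via an appSetting. As a rough sketch (the key names below are taken from the ModelsBuilder documentation of the time; check them against your installed version), the web.config fragment looks something like this:

```xml
<!-- web.config sketch: choose a ModelsBuilder mode -->
<appSettings>
  <add key="Umbraco.ModelsBuilder.Enable" value="true" />
  <!-- PureLive, Dll, LiveDll, AppData, LiveAppData or Nothing -->
  <add key="Umbraco.ModelsBuilder.ModelsMode" value="AppData" />
</appSettings>
```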

What to use?

At this point in time, the option I'm most likely to use is API mode, since it allows for a defined location and a separate project. That said, it's quite feasible I'd also reach for PureLive mode to take advantage of quick and easy updates without having to deploy or recompile any code.

]]>
Upgrading Umbraco 2016-05-17T00:00:00+00:00 https://mozzy.dev/posts/upgrading-umbraco/ During a recent project, I was tasked with upgrading a site from v4 of Umbraco to the latest and greatest. There are a few reasons for this and it doesn't come without its challenges, but I think the end result is definitely worth it.

Note: For reference, I am talking mainly about upgrading an existing implementation of Umbraco, not migrating to another instance. I'm also talking specifically about major releases, since patch updates rarely cause any issues and where possible you should always keep up to date.

Why Upgrade?

Well, who wouldn't want to make use of the updated UI and the much improved underlying architecture of Umbraco? There are many things which the HQ team (and others within the community) have been working on, and a lot can change in a relatively short time in the tech world.

  • Complete redesign
  • Works in mobile formats
  • Better editing experience
  • New service layer
  • Better performance
  • Greater support for latest technology
  • Packages!
  • etc.

On the other hand, there may be problems with how the site was originally built (think WebForms/XSLT) which make it difficult, or simply too time consuming, to get back onto the upgrade path and work with the best version of Umbraco. These choices are largely down to the client and will require some conversations. Obviously, we'd all like to work with the latest version and be on a common playground, but sometimes other things hinder us from taking these steps.

The good news is that with some planning, a little perseverance and some knowledge, it isn't actually as daunting as you might think. I've heard many a time that a site built 3-5 years ago was better off with a complete rewrite and a rework of the content, including fresh designs. Whilst this might be the path you go down more often than not, sometimes it's not feasible or sensible to migrate a bunch of content and then factor in all of the changes around that. For larger sites, this certainly becomes the case.

What's the process?

Since 7.3, an Umbraco upgrade can largely be done via NuGet without too much extra work. This is all handled by the migration scripts that were added and are included with each minor release from now on (even dating back to FourOneZero). When you update all the files to the latest version, Umbraco performs a check against the database, and if it finds that an update is needed you will be prompted to upgrade the database via the installer. Click the button and wait until you see the new Umbraco UI.

But things might not go as smoothly as that: sometimes the installer might fall over, or it will report that there are incompatible data types. What do you do then? For the data types, I found that you could map these to their newer counterparts, or simply take a note and update them once you have performed the base upgrade. Record what was there before and what you want to have afterwards. As for when it fails, find the error and look for support on the forums, or raise an issue on the tracker. Everyone that does this helps Umbraco to help others and makes the process better for all.

At this point I'm referring mostly to getting the back-office working and having a functioning CMS. We still need to look at the code that actually runs the website. Above I mentioned that you might be using old technology, or packages / data types that are no longer compatible. The prep work of figuring this out beforehand is important and will provide a decent comparison between where you are and where you want to be. I found that exporting the doc types, templates and macros from the database was a particularly useful exercise for this. It's quite possible that there aren't many custom implementations, and that once you've done the database side of things Umbraco works without any major adjustments.
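To sketch that export idea, a few audit queries go a long way. The table names below come from the legacy v4/v6 database schema, so verify them against your version before relying on the output:

```sql
-- Rough audit of what's in a legacy Umbraco database
SELECT alias, icon FROM cmsContentType;     -- document types in use
SELECT alias FROM cmsTemplate;              -- templates
SELECT macroAlias, macroName FROM cmsMacro; -- macros (XSLT, user controls, etc.)
```

Dump the results to a spreadsheet and you have a checklist to tick off as you rebuild each piece on the new version.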

Some of the other things you might want to look into:

  • update the icons
  • update data types (if required)
  • figure out best use of packages
  • assess the use of doc types (it's possible to change them and sort into folders)
  • assess the use of macros (could it just be a Razor partial?)
  • check over the content

Switching out WebForms/XSLT for MVC/Razor can be a time consuming task, but in the end it turns out to be an opportunity to amend some of the methods from before and update to new techniques. It's also a rather nice way of realising that XSLT was rather nasty, and in getting rid of it all you can breathe a sigh of relief that another Umbraco site has moved on from the not so distant past. Figuratively speaking, of course. Unfortunately, you won't be able to switch out too much when it comes to content structure, but you may be able to upgrade some of those old packages to their equivalent v7 counterparts. For example, Embedded Content now becomes Nested Content and everyone is happy.
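As a flavour of what that conversion looks like, a typical XSLT navigation macro collapses into a few lines of Razor in v7. A minimal sketch, using the standard v7 partial view macro base class (the visibility check and structure here are illustrative):

```razor
@inherits Umbraco.Web.Macros.PartialViewMacroPage
@* Razor replacement for an XSLT top-navigation macro *@
<ul>
    @foreach (var page in Model.Content.AncestorOrSelf(1).Children.Where(x => x.IsVisible()))
    {
        <li><a href="@page.Url">@page.Name</a></li>
    }
</ul>
```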

Conclusion

So, you've got an old site and are thinking about taking the leap. Assess your options, figure out what is important and how big the scale of the job might be. Take backups and compare. Embrace the changes and enjoy the new world of Umbraco. It's quite a good CMS you know.

]]>
Continuous delivery for .NET (revisited) 2015-07-03T00:00:00+00:00 https://mozzy.dev/posts/continuous-delivery-for-dotnet-revisited/ Last year I wrote about deployments and the idea of continuous delivery for .NET. During that article I spoke about how to set up and configure TeamCity and Octopus Deploy as tools for deployment as well as adding some notes around process.

Well, things change and constant improvements are made, so this is a look back on that article and an update as to how the process has been modified. The good news is that the tools chosen in the first article have become staples in the deployment process and what we are talking about here is a refinement. To reiterate the purpose...

TeamCity is the build tool, Octopus Deploy is the deployment tool.

TeamCity

As mentioned before we are using TeamCity to build the solution, perform tests and create an artifact for deployment. We also use a template so this can be shared across projects.

  1. Configure version (GitVersion)
  2. Fetch packages, build solution and run OctoPack
  3. Front end tasks (e.g. npm install / grunt / gulp)
  4. Perform tests and check code coverage
  5. Publish package to Octopus Deploy NuGet server

You may want to split this out into 2 or 3 configurations depending on requirements. Reasons for doing this would be if you wanted to run your unit tests separately (they might take a long time) or if you wanted to publish your artifact as a dependency (i.e. manual task to push the artifact which is created in your other configuration).

GitVersion

We are using this as a meta-runner within TeamCity; there are other options for how you use this tool, such as the command line or MSBuild tasks. What it does is create our SemVer or NuGet version automatically based on our git history. Previously we did a lot of the versioning manually, and then we came up with a PowerShell script to achieve a similar outcome, until we stumbled upon this tool which does it all for us. It follows GitFlow and GitHub Flow conventions and can be applied to other branching strategies with some configuration.
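Out of the box GitVersion works from conventions, but it can be tuned with a GitVersion.yml at the repository root. A tiny sketch (key names as per the GitVersion documentation; the values here are purely illustrative):

```yaml
# GitVersion.yml - overrides for the default GitFlow conventions
mode: ContinuousDelivery
next-version: 1.2.0
```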

Octopus Deploy

Used to deploy an artifact (NuGet package) to a given environment, as well as setting up a website in IIS and updating variables for different SQL connections. Our process here is largely the same, yet Octopus Deploy has grown up a little since then and now has support for lifecycles, automatic release creation and (in the next release) offline deployments, amongst other really useful features.

  1. Test SQL connection
  2. Grab NuGet package
  3. Create / update website in IIS
  4. Deploy files to website folder
  5. Update variables / apply transforms
  6. Test URL (200 OK)
  7. Notify status of deployment via Slack
  8. Clean up / apply retention policies

Steps 2, 3, 4 and 5 can actually be done via one step in Octopus Deploy (deploy a NuGet package), but I have split it out here for better readability. We have also added some basic tests around our deployment...

  • Test SQL connection : if we can't access the database using the provided connection string, we don't deploy
  • Test URL : we ensure that we get a 200 OK status back once we have deployed
  • Notify status : we use Slack for sending out a deployment status (could also send an email if you prefer traditional methods)

Depending on the project or usage case you might want to do other steps such as backup the database or website folder, grant permissions to a certain user, or install a package required for deployment via Chocolatey for example. There are loads of other options in the step template library: https://library.octopusdeploy.com/
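For instance, our "Test URL" step is little more than a short PowerShell script step. A sketch of the idea, with $SiteUrl standing in for an Octopus variable (names are illustrative):

```powershell
# Fail the deployment if the site doesn't answer with 200 OK
$response = Invoke-WebRequest -Uri $SiteUrl -UseBasicParsing
if ($response.StatusCode -ne 200) {
    throw "Expected 200 OK from $SiteUrl, got $($response.StatusCode)"
}
```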

Lifecycles

This was introduced in 2.6 and allows you to structure your deployment process between environments. For example, you could set it up so that a release can no longer be deployed straight to Production without any testing. This forces you to take a release through the proper deployment process and get sign-off before promoting.

So, an example lifecycle could be set up like this...

Internal testing

  • Dev (auto-deploy)
  • QA

Client testing (any 1)

  • UAT
  • Pre Prod

Go live (any 1)

  • Production
  • DR (backup servers)

Within this you can specify whether all of the environments need to be deployed to, or at least one in the lifecycle phase, as denoted in brackets above. You can also set different retention policies per phase, so you would probably want to keep all releases in the 'go live' phase, but maybe only the latest 5 in the 'internal testing' and 'client testing' phases.

3.0

A new version of Octopus Deploy is in pre-release. With this comes a bunch of changes and new features...

  • Deployment targets which allow for offline deployments, Azure websites, SSH and Linux support
  • Rebuilt with SQL Server rather than RavenDB
  • Improved performance
  • Something called delta compression, which only transfers things that have changed in your package and should make deployments a lot quicker.
  • Migration tool so you can export your configuration into JSON and import into other instances of Octopus Deploy
  • Changes to tentacle architecture, which means that the deployment aspect of a tentacle is no longer tightly coupled to the Octopus version. Enter Calamari, a command-line tool invoked by Tentacle during a deployment for doing deployment tasks. It is also open-source, so you can fork and make it your own.
]]>
ASP.NET vNext and Mac OS X 2015-06-22T00:00:00+00:00 https://mozzy.dev/posts/aspnet-vnext-and-mac-osx/ One of the highlights of ASP.NET vNext is that it will be cross-platform. What this means is a few things...

  • No more need for Windows VMs (designers will be happy)
  • Potential for cheaper hosting, can run ASP.NET on Linux
  • Open to more developers (plus benefits of open source also)
  • It's pretty cool.

Installing DNVM / DNX

The first place to go is here: https://github.com/aspnet/Home

This outlines the ideas behind the next version of ASP.NET and offers some instructions for installation. Below is an outline.

The key to getting set up is something called the .NET Version Manager (DNVM), a command line tool used to download the .NET Execution Environment (DNX). The DNX contains the code to bootstrap and run our applications.

A lot of this is built into the latest preview of Visual Studio 2015.

The steps to install DNVM on OS X are far easier if we have Homebrew installed. So to get that we need to run:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

To install the DNVM:

brew tap aspnet/dnx
brew update
brew install dnvm

Then, we need to ensure that the command is registered with our bash profile:

echo "source dnvm.sh" >> ~/.bashrc

You can run DNVM to get the latest version of DNX so that we can run our applications:

dnvm upgrade

Creating an app

Now we should have ASP.NET available to us on our Mac. The next thing we want to do is create a test application. To do this, we can use Yeoman to generate this for us. There are a few dependencies... so, to set these up we need to do some more command line entries.

Install node.js:

brew install node

Install Yeoman:

npm install -g yo

Install generator-aspnet

npm install -g generator-aspnet

Details of the ASP.NET vNext generator are here: https://github.com/OmniSharp/generator-aspnet

Now that we have the yo generator, we can type this into a command prompt:

yo aspnet

... and Yeoman will present you with a menu of application types to choose from.

Pretty cool huh?! In this example, I am going to choose to create a real basic console application, but the templates allow you to create web applications also.

After answering a few questions, we should have something that we can run and obtain the output from our test console application:

dnu restore
dnx . run
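For reference, the generated console application is driven by a project.json file. A stripped-down sketch of what the generator produces (exact dependency versions omitted, as they changed with each beta):

```json
{
  "version": "1.0.0-*",
  "dependencies": { },
  "commands": {
    "run": "run"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}
```

dnu restore pulls down anything listed in dependencies, and dnx . run invokes the run command.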

And that just about sums up how to get started with ASP.NET vNext on OS X.

]]>
Umbraco Codegarden review and the future 2015-06-19T00:00:00+00:00 https://mozzy.dev/posts/codegarden15-review/ Last week I attended my first Codegarden. It was great to be able to figure out what the fuss was all about. I really enjoyed being able to meet other Umbraco devs and be a part of it all. Rather than go in to all the details, I'll outline some of the highlights.

Keynote

If you hadn't already heard, here are the major announcements:

  • 7.3 beta is out and available to download
  • ASP.NET Identity including support for OAuth so that you can log in to the back office with your Facebook or Google account, for example.
  • Load balancing has been updated to work much better with scaling
  • Our Umbraco has been given a complete overhaul
  • New REST API for Content, Media, Members, and Relations using the HAL format. Here's a bit of reading around HAL: http://stateless.co/hal_specification.html
  • Content type editor changes within the back office making all those document types a joy to edit

I think one of the main things to take away from this is that Umbraco want to be able to iterate in a much faster and open manner, releasing earlier versions for testing and review with the community. So, the REST API and Content type editor are examples of this.

Sessions and open space

In my preview I noted some of the sessions which took my interest. As it turned out, there were some great sessions, although I felt the urge for some more in-depth talks with a greater emphasis on code. The last day in particular was what I found the most interesting and useful. The idea is that the schedule and topics are defined by the people attending: each person suggests a topic of interest, and people go along and discuss where the future of Umbraco lies. Being able to talk directly and openly with others who share similar issues and thoughts was a decent way to provoke ideas and knowledge for the direction of Umbraco and its community. It wasn't just code... there were also topics around quality of life, communication channels and the future of Codegarden.

Social

Part of what adds to the fun of Codegarden is that it is a 3 day event. Evenings are spent hanging out with other developers and there is enough time to not feel like you are there one minute and gone the next (even if it did go rather quickly).

This year, there was a march through Copenhagen to the meatpacking district where we were able to enjoy the clear night sky with a few drinks and good company. Umbraco bingo was as crazy as I thought it might be and the package competition had a nice comic twist to it.

Nyhavn

Until next year

Next year Codegarden will be moving to a new location, and I'm hoping that I'll be able to join you all for more Umbraco goodness. Until then, I intend to get much more involved with the community and start giving back. Following on from the open space at Codegarden around what you would like to see Umbraco achieve in the next 12 months, here are a few that I've come up with for myself.

  • I have already logged a number of issues re: Our Umbraco and they have been fixed very quickly (kudos on that), it is a simple process but helps shape things to how you might want them. It's also handy to find some bugs and make Umbraco aware.
  • Submit some pull requests and help fix bugs / add new features. #1 : http://issues.umbraco.org/issue/U4-6562
  • Attend and present at local events such as umBristol. Presenting is still quite a new concept to me and something which I struggle to get to grips with; I'm sure as I throw myself more into these kinds of situations I'll become more comfortable.
  • Kickstart some kind of annual event in the South West. There have been a number of people interested in this, and it would be awesome if we could make it happen!
  • Start creating some packages! Not just for the sake of it, but there are times when we as developers work on a project and it would be good to share some of the more useful custom implementations we have done.
  • Write an article for the totally awesome Skrift.io.
  • Build a new website for my football team on the latest version of Umbraco and using some of the latest methodologies (e.g. strongly typed models for one). This one is currently in progress and I will be hoping to release this within the next couple of months.

So there you have it, let's see how much of this I can actually complete.

]]>
Umbraco Codegarden preview 2015-06-03T00:00:00+00:00 https://mozzy.dev/posts/codegarden15-preview/ It seems like the Umbraco community has a particular buzz around this time of year. I have heard quite a few stories about Codegarden, the Umbraco conference which takes place every summer. It is of course situated in Denmark, the birthplace of Umbraco, and for a number of years now people have been flocking to Copenhagen to hear more about what is on the horizon for this open-source .NET CMS.

I have been working with Umbraco for a while now so it was probably about time that I managed to get a ticket... yes this will be my first Codegarden! Big thanks to Zone for making that happen.

After looking at the schedule, here are a few that I have picked out...

Umbraco Roadmap Panel

Find out about all that is coming up in future releases of Umbraco. Version 7 was a big step, but Umbraco is a product that keeps on moving. There have also been some rather big changes within Microsoft and .NET itself, so it could be that we hear about how Umbraco will be looking to integrate with vNext. I'm really looking forward to seeing what is in store.

How to develop a killer package

Lee and Matt outline all that is required to make a package and get it out on the appropriate channels. Kind of like an overview of how to contribute to Umbraco with new ideas or improvements. As someone who would like to get more involved in this side of things, it should prove to be a helpful guide.

Beyond the web

We primarily see Umbraco as a traditional CMS which maps directly to website content and its HTML. This session looks to help us realise that Umbraco can be used for other purposes, and that it can be tied to all sorts of other connected devices. One for the future maybe.

Securing Your Umbraco

We build Umbraco websites on a regular basis and many of these are live at present. All across the world, there could be someone who has malicious intentions towards your website. This session looks at preventing that. This is something I'm hoping I won't walk away from with too much shock, but it's definitely an important aspect of maintaining websites.

See you there

I have only listed a few here, but there is a great schedule and I'm sure there will be sessions outside of the above which will really take my attention. For instance, the sessions geared towards customising Umbraco, using ReactJS with Umbraco or how to deal with load balancing and then there are also the workshops...

And of course getting to meet all the people that I follow on Twitter whilst sharing a drink or two will be great too. (I'm a big fan of Mikkeller btw)

For all those that are attending this year, see you at Codegarden! And to everyone else, be sure to keep track of all of the updates and find out all that is happening with the world of Umbraco.

]]>
Awesome Umbraco 2015-05-01T00:00:00+00:00 https://mozzy.dev/posts/awesome-umbraco/ Many of you are probably aware that one of the great things about Umbraco is the fact that it is easily extendable and can be used for all sorts of flexible solutions. This is mostly down to the community and the people who are constantly building upon a solid foundation.

It would be a shame for all this great work to go unnoticed, and so Lee Kelleher has created a repository that showcases some of these packages and groups them by use.

A collection of awesome Umbraco 7 packages, resources and shiny things.

Next steps

Check out the Github repo and add some packages to the list!

]]>
Using a custom domain for accessing umbraco 2015-03-16T00:00:00+00:00 https://mozzy.dev/posts/custom-domain-for-umbraco/ By default, content editors access the back office by appending /umbraco to their domain. Sometimes you might not want your content editors to use that method and would like them to use an alternative URL. This could be for a few reasons...

  • You don't want Joe Public trying to access the Umbraco back-office on your site. This is security by obscurity and therefore isn't a perfect solution, but changing the link to Umbraco does partially lock it down. Note: if you set a hostname on your server for the URL you want to use and then point your hosts file at the server, it is possible to prevent /umbraco from being publicly accessible, i.e. it is only available locally on the server.
  • You want to create a different URL for your content editor to use when editing content. Perhaps it might be nicer for them to use, or they only want the Umbraco side of things to be under SSL.
  • You have multiple sites within one Umbraco instance. This actually makes it a lot easier to group sites into one entity, otherwise each site will have their own Umbraco URL. One URL to remember, one URL to use.

Below is an example of how to do this. We match the URL for Umbraco and then perform a redirect to our not found page if the host doesn't match our pattern. Note this part: (?!/Surface/) - this is important as it means that AJAX calls using Surface controllers still work.

Therefore, when editing the Umbraco site (assuming you will only have an SSL certificate for the admin domain), a content editor will use:

https://admin.example.com/umbraco/

Whereas, the site will be visible at:

http://www.example.com/

If you have done something similar or have alternatives that you use, then please feel free to leave a comment!

<!-- Restrict access to Umbraco -->
<rule name="Restrict access" stopProcessing="true">
  <match url="umbraco(?!/Surface/)" />
  <conditions logicalGrouping="MatchAny" trackAllCaptures="false">
    <add input="{HTTP_HOST}" matchType="Pattern" pattern="admin.example.com" ignoreCase="true" negate="true" />
  </conditions>
  <action type="Redirect" url="/not-found" appendQueryString="false" />
</rule>
]]>
Page Not Found in Umbraco 2015-03-02T00:00:00+00:00 https://mozzy.dev/posts/page-not-found-umbraco/ Content finders in Umbraco are really useful as they can intercept the routing and give you access to the context. There are a few uses for this, and one of those is finding the 'page not found' page that you have created in Umbraco. This differs from the more traditional way of setting a page id within your umbracoSettings.config file, and is useful for giving your client complete control over their page not found page. So, in the example below, there would be a corresponding document type in Umbraco called NotFoundPage.

using System.Linq;
using Umbraco.Core;
using Umbraco.Core.Models;
using Umbraco.Web;
using Umbraco.Web.Routing;

public class PageNotFoundContentFinder : IContentFinder
{
    public bool TryFindContent(PublishedContentRequest request)
    {
        // Have we got any content?
        if (request.PublishedContent == null)
        {
            // Get the root node for the domain
            var home = request.RoutingContext.UmbracoContext.ContentCache.GetByRoute(request.Domain.RootNodeId + "/");

            // Try and find the 404 node
            var notFoundNode = home.Children.FirstOrDefault(x => x.DocumentTypeAlias == "NotFoundPage");
            if (notFoundNode != null)
            {
                // Set the response status to HTTP 404
                request.SetResponseStatus(404, "404 Page Not Found");

                // Set the node to be the not found node
                request.PublishedContent = notFoundNode;
            }
        }

        // Hopefully we will have content at this point
        return request.PublishedContent != null;
    }
}
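
The finder then needs registering at startup. A sketch of the v7-era registration, inserting our finder ahead of the built-in not-found handlers (the class name here is illustrative):

```csharp
using Umbraco.Core;
using Umbraco.Web.Routing;

public class PageNotFoundStartup : ApplicationEventHandler
{
    protected override void ApplicationStarting(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
    {
        // Run our finder before the legacy NotFoundHandlers get a chance
        ContentFinderResolver.Current.InsertTypeBefore<ContentFinderByNotFoundHandlers, PageNotFoundContentFinder>();
    }
}
```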
]]>
Courier investigation 2015-02-24T00:00:00+00:00 https://mozzy.dev/posts/courier-investigation/ Installation

Download from here: https://our.umbraco.org/projects/umbraco-pro/umbraco-courier-2

  • Go to required instance of Umbraco to install on.
  • Go to Developer tab
  • Open up packages folder and select Install local package
  • Choose downloaded zip file
  • Confirm install
  • Wait for Umbraco to reload
  • Add location(s) you would like to promote to

Example location

This is configurable in courier.config

<repository name="Example QA site" alias="example-qa" type="CourierWebserviceRepositoryProvider" visible="true">
  <url>http://test.client.example.co.uk/</url>
  <user>0</user>
</repository>

Courier works on the basis of having a connection between Umbraco instances and being able to compare and push changes. If there is no connection, then a sync will not be possible. This also means that Courier needs to be installed on every environment that needs to be kept in sync.

Usage

Once the locations have been set up and proven to work (i.e. they are able to connect without error), the usage from a client perspective is rather simple.

Basic usage

There will be a new context-sensitive option when selecting content called Courier. When this is selected a new window will open asking for a target machine to deploy to, along with confirmation of what content will be transferred. Click the button to deploy and wait for Courier to package the content and transfer across to the other location.

Revisions

An alternative to using the context-sensitive option is to go into the newly added Courier section where you can add a revision. This allows you to choose more than just content, and lets you include things like Dictionary items, Document types, Languages and Media, among others. So, if you had created a bunch of content that had some media and potentially some files you wanted to deploy, you could create a revision to transfer elsewhere. The idea is that whatever needs to be transferred, can be. Courier will also help you out and sort out the dependencies if there are any. Once you have selected your options and created your revision (package), it will be available on other locations to transfer.

Configuration

Since Courier has its own section, there is the option to restrict access to revisions. It is possible that you may only want to give developers access to this section whilst giving clients access to the basic use of selecting content. If you do wish to give clients access to the Courier section, then you can also choose what you want them to be able to transfer. So, you probably wouldn't want them to know about the Datatypes or Document types, but you would probably want them to be able to transfer Files, Folders and Media.

By default, Courier is set up quite nicely and will cover most usage cases however there are some other configuration options that might be preferable.

Choose which folders can be included in revisions.

<folderItemProvider>
  <include>
    <!--<folder>~/media/assets/somefolder</folder>-->
  </include>
</folderItemProvider>

Choose which files can be included in revisions.

<fileItemProvider>
  <!--<folder>~/media/assets/somefolder</folder>-->
  <!--<file>~/media/assets/somefile.png</file>-->
</fileItemProvider>

Allow for children/parent media to be included.

<mediaItemProvider>
  <includeChildren>false</includeChildren>
  <includeParents>true</includeParents>
</mediaItemProvider>

Allow/deny access by IP/users. (in security element)

<filters>
  <ipfilter>
    <allow>*</allow>
  </ipfilter>
  <userfilter>
    <allow>*</allow>
    <!--<deny>editor</deny>-->
  </userfilter>
</filters>
]]>
Continuous delivery for .NET 2014-10-09T00:00:00+00:00 https://mozzy.dev/posts/continuous-delivery-for-dotnet/ One very important factor of being a developer is deploying your code to the appropriate environment without anything failing. In order to do this, we should automate as many tasks as possible to reduce human error. Of course, some degree of human interaction should happen, but by and large we shouldn't need to do much once we have set up our deployment process.

Simple guide

At its most basic, this should happen every time we work on a project.

  1. Create project + repo
  2. Write some code
  3. Push some code to repo
  4. Grab code and push to server

Now... parts 1, 2 and 3 will largely be the same wherever you are, but part 4 could be done in a number of different ways, and there are various settings / updates required to get a working website. Let's not go into too much detail about the different options, but look at one particular method for making this work.

TeamCity and Octopus Deploy

In order for us to deploy, we are going to use TeamCity and Octopus Deploy. There is a bit of configuration required, but this should be simpler than the traditional Web Deploy method.

It is worth noting that one of the goals in deployment is to allow different roles within the process. The devs can push code, and then the sysadmins or tech leads can manage deployments; publishing from Visual Studio is therefore completely out of the question. This has a few advantages in that the relevant people are in charge of their remit. For example, a dev doesn't have the ability to push changes to Production, so a degree of protection / sign-off is involved. Sysadmins can create a new server and see at a granular level what gets pushed if they choose to. Important server details are not exposed, which can be the case with Web Deploy.

Below is the basic overview of steps required in set up...

  1. Install TeamCity on build server (one time only)
  2. Install Octopus TeamCity plugin (one time only)
  3. Install Octopus Server on build server (one time only)
  4. Create servers for project (one time only for each project)
  5. Install Octopus Tentacle on required servers (one time only for each server)
  6. Install OctoPack in project via NuGet (one time only for each project)

As shown, a lot of these tasks you will not need to revisit and once completed, will allow you to deploy without the need for manual intervention.

TeamCity

Used to build the solution, perform tests and create an artifact for deployment. This will have continuous integration set up for the dev branch so that we don't have to trigger the release each time. We can use a template to share this across projects.

  1. Checkout repository and build solution (run OctoPack)
  2. Perform tests and check code coverage
  3. Publish package to Octopus NuGet server
  4. Create Octopus release (trigger Octopus Deploy)
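Step 4 above is typically just a call to the Octopus command line tool from the final TeamCity build step. A rough sketch follows — the project name, server URL, API key and version are placeholders, and the script prints the command rather than running it:

```shell
#!/bin/sh
# Sketch of the final TeamCity build step: create an Octopus release and
# trigger a deployment to the first environment. Values are placeholders.
PROJECT="MyWebsite"
SERVER="http://build-server:8080"
API_KEY="API-XXXXXXXXXXXX"
VERSION="1.0.42"   # typically taken from the TeamCity build number

# The command the build step would run (printed here rather than executed):
echo "octo create-release --project $PROJECT --server $SERVER \
  --apiKey $API_KEY --version $VERSION --deployto Development"
```

Keying the release version off the TeamCity build number keeps the package, the build and the Octopus release traceable to one another.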

Octopus Deploy

Used to deploy an artifact to a given environment, as well as setting up a website in IIS and updating variables for different SQL connections. This all happens on the server we are deploying to via a secure connection.

  1. Grab NuGet package
  2. Create website in IIS
  3. Deploy files to website folder
  4. Update variables / apply transforms
  5. Clean up / apply retention policies

When a release is created, Octopus Deploy will keep it indefinitely unless you tell it otherwise. This means that you can roll back quickly to a prior release, but it also means you could be left with a number of unwanted files on the server. Within the Octopus Portal you can therefore amend how many releases you want to keep and for how many days. More info can be found in the documentation.

Why use both?

It is true that you could use TeamCity to deploy to each environment and perform the configuration for you. However, this relies on Web Deploy being installed on the server you wish to deploy to. That is not a requirement for Octopus Deploy, which uses the idea of Tentacles to open a secure communication channel between the build server and the web server. This brings several security benefits and is generally easier to configure. You can also get Octopus Deploy to set up a site in IIS or perform PowerShell tasks for you, something TeamCity and other build platforms are not built for.

All in all, it is about using what is built for the task at hand. TeamCity can be used for building the code, running the tests and creating a single release package. Octopus Deploy can then be used to deploy this NuGet package wherever you want and ensure that the configuration is correct for the environment, even cleaning up files in the process.

If you would like to read more, there is a blog post detailing this approach, which explains why Octopus Deploy was created.

Example setup

  • 1 build server with TeamCity and Octopus server installed
  • 1 web/db server for internal Development/QA (shared web and db)
  • 1 web/db server for UAT (could be separate web and db)
  • 1 web/db server for Production (could be separate web and db)

In your project you would need the following transforms...

  • Web.Development.config
  • Web.QA.config
  • Web.UAT.config
  • Web.Production.config

These are run automatically by Octopus Deploy when the package is deployed to the web server. There are a couple of things to take note of, however: ensure that each transform name matches the environment name in Octopus Deploy, and make sure that the files are included in the NuGet package. For more notes on this, check the documentation.
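As an example, a Web.UAT.config transform that swaps the database connection string for the UAT environment might look like the following (the server and database names are made up for illustration):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the connectionString attribute on the entry whose name matches -->
    <add name="umbracoDbDSN"
         connectionString="Server=UAT-SQL;Database=MyWebsiteUAT;Integrated Security=True"
         xdt:Transform="SetAttributes(connectionString)"
         xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

The xdt:Locator finds the matching entry in web.config and the xdt:Transform overwrites just the named attribute, leaving the rest of the file untouched.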

Octopus Deploy also has the concept of variables, which can be set within the Octopus Portal (admin area). This means that you can define your variables and specify which environments they apply to. As long as these are turned on for your project, they will overwrite the matching name/value pairs within your config files (not just web.config). There is also the added benefit of being able to define variable sets and inherit these on a project. One such example could be a set of logging options that you only enable for UAT and Production, but tend to set up on every project. With a variable set, you can ensure they are included with little configuration. See the documentation for more examples.
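For example, given an appSettings entry like the one below, an Octopus variable with the same name, scoped to UAT and Production, would replace the value at deploy time. The key and value here are made up for illustration:

```xml
<appSettings>
  <!-- Local/dev value; overwritten at deploy time by the Octopus variable
       of the same name, in any environment the variable is scoped to -->
  <add key="log:Level" value="Debug" />
</appSettings>
```

Developers keep sensible local defaults in source control, while the real per-environment values live only in the Octopus Portal.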

]]>
What's Jekyll? 2012-02-06T00:00:00+00:00 https://mozzy.dev/posts/whats-jekyll/ Jekyll is a static site generator, an open-source tool for creating simple yet powerful websites of all shapes and sizes. From the project's readme:

Jekyll is a simple, blog aware, static site generator. It takes a template directory [...] and spits out a complete, static website suitable for serving with Apache or your favorite web server. This is also the engine behind GitHub Pages, which you can use to host your project’s page or blog right here from GitHub.

It's an immensely useful tool and one we encourage you to use here with Hyde.

Find out more by visiting the project on GitHub.

]]>