<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Codenizer]]></title><description><![CDATA[Talking code, continuous delivery and everything in between]]></description><link>https://blog.codenizer.nl/</link><image><url>http://blog.codenizer.nl/favicon.png</url><title>Codenizer</title><link>https://blog.codenizer.nl/</link></image><generator>Ghost 1.19</generator><lastBuildDate>Thu, 16 Apr 2026 18:54:50 GMT</lastBuildDate><atom:link href="https://blog.codenizer.nl/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[RoadCaptain beta release]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I'm pretty excited to be able to share with you the first beta release of RoadCaptain!</p>
<p>RoadCaptain is an app that makes riding on Zwift even more fun and can really push your limits in Watopia.</p>
<p>How? Simple: you are no longer limited to the fixed routes in Watopia, with</p></div>]]></description><link>https://blog.codenizer.nl/roadcaptain-post/</link><guid isPermaLink="false">6239f86207d49f053e881713</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Tue, 22 Mar 2022 16:25:34 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>I'm pretty excited to be able to share with you the first beta release of RoadCaptain!</p>
<p>RoadCaptain is an app that makes riding on Zwift even more fun and can really push your limits in Watopia.</p>
<p>How? Simple: you are no longer limited to the fixed routes in Watopia, with RoadCaptain you can build your own routes and explore Watopia even more.</p>
<p>Always wanted to do 3 laps on the Volcano as a warm up followed by blasting through the Jungle Loop? Now you can!</p>
<p>Read all about it <a href="https://blog.codenizer.nl/roadcaptain/">here</a>!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Handling production outages at Jedlix]]></title><description><![CDATA[Jedlix is the leading software platform for car-centric smart charging of Electric Vehicles in Europe. Read how we handle outages and keep improving.]]></description><link>https://blog.codenizer.nl/production-outage-handling-at-jedlix/</link><guid isPermaLink="false">5d47cf76127fd2072f50365e</guid><category><![CDATA[jedlix]]></category><category><![CDATA[oncall]]></category><category><![CDATA[outage]]></category><category><![CDATA[alerts]]></category><category><![CDATA[alerting]]></category><category><![CDATA[opsgenie]]></category><category><![CDATA[logs]]></category><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Tue, 06 Aug 2019 16:11:15 GMT</pubDate><media:content url="https://blog.codenizer.nl/content/images/2019/08/jedlix-logo-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://blog.codenizer.nl/content/images/2019/08/jedlix-logo-1.png" alt="Handling production outages at Jedlix"><p>Jedlix is the leading software platform for car-centric smart charging of Electric Vehicles (EVs) in Europe. Jedlix teams up with BMW, Tesla, Renault and multiple energy partners to unlock the value of the flexibility of EVs charging process at scale, reduce the Total Cost of Ownership of the cars, and enable their sustainable insertion into the energy grid.</p>
<p>At Jedlix we control the charging of EVs. You can imagine that if an EV driver leaves for work in the morning and the battery is nearly empty you can't simply roll to the petrol station and fill up. That's why we work hard to ensure our systems don't fail.</p>
<p>No matter how many safeguards you put in place in your CI/CD pipeline or the number of tests you have, every so often your platform decides it has a bad day and wants to see if you'll notice it's misbehaving.</p>
<p>At Jedlix we have invested a lot of effort in creating systems that are resilient to various types of outages, and in making them observable enough that we know when a system does misbehave.</p>
<p>That, unfortunately, doesn't entirely prevent the occasional outage from happening.</p>
<h2 id="observability">Observability</h2>
<p>As mentioned before we have done a lot of work on the observability of our systems. We use <a href="https://logz.io">Logz.io</a> as our log aggregator, which allows us to create alerts on certain patterns (or the absence thereof) in the logs of a system to indicate a <em>potential</em> outage.</p>
<p>Apart from logs our systems also push metrics that provide a different perspective on how the platform is performing: numbers of API requests, error rates, message processing rates and so on. For this we use a combination of <a href="https://www.influxdata.com/">InfluxDB</a> and <a href="https://grafana.com/">Grafana</a>. As with our logs, we define alerts that fire when metrics exceed certain thresholds.</p>
<p>When an alert triggers it will either go to our Slack <code>#operations</code> channel or, if the severity is high enough, directly to <a href="https://www.opsgenie.com/">OpsGenie</a> and the engineer who is on call.</p>
<blockquote>
<p>If you want to know more about how we do observability, that will be in a separate blog post soon!</p>
</blockquote>
<h2 id="analerttriggers">An alert triggers...</h2>
<p>You hope it doesn't, but it will. An alert triggers and OpsGenie comes to disturb your peace and quiet:</p>
<img src="https://blog.codenizer.nl/content/images/2019/08/alert-opsgenie---Copy.png" style="border:solid 1px #cccccc;max-width:350px;display:block;margin:0 auto 0 auto" alt="Handling production outages at Jedlix">
<p>So, now what?</p>
<p>There are many ways to deal with outages, and what that process looks like depends on the company and the people you work with. Perhaps you have a Network Operations Center (a <em>NOC</em>) that triages the alert and notifies whoever is responsible. Maybe you have a <em>hot-phone</em> that you physically pass around the team to share the on-call rotation, and it starts ringing.</p>
<p>At Jedlix we have an on-call rotation that includes all engineers but also, among others, our product owner and CTO. The reason for this is to ensure that everyone (well, mostly everyone) in the company is aware of what goes into running our systems, which helps when talking about improvements to our platform.</p>
<h2 id="handlinganoutage">Handling an outage</h2>
<p>To the meat of it: how do we actually deal with an alert?</p>
<p>Our process is mostly this:</p>
<ol>
<li>Acknowledge the alert</li>
<li>Provide updates on analysis and mitigating actions</li>
<li>Let everybody know when the outage is resolved</li>
</ol>
<p>But even more importantly there is one rule: <em>Don't panic!</em></p>
<p>The only responsibility of anyone who is on-call is to ensure the alert doesn't go unnoticed and to call in people to help resolve the outage. We are a team; it would not make sense to suddenly pile all the responsibility on a single person when an outage occurs.</p>
<h2 id="signaltonoiseratio">Signal-to-noise ratio</h2>
<p>Clear communication is essential in handling an outage. There are a lot of ways you can get this wrong and it took us some experimentation to get to something that works well for us.</p>
<p>We use Slack for our team communication. Originally, only a single message without much detail was posted in our <code>#operations</code> channel once the issue was resolved. You could argue that the resolution is the most important thing, but this leads to a lot of questions and uncertainty during the outage itself. A further risk (if you want to call it that) is that multiple people jump in because they are not aware someone is already investigating.</p>
<p>Our initial improvement was to post regular updates in the Slack channel, which gave spectators more information. One drawback of this approach was that during the outage other notifications would appear in the channel, making progress hard to follow.</p>
<p>The current approach is to use Slack's threads feature. The person handling the outage posts regular updates during the analysis and mitigation of the incident. This allows spectators to quickly catch up and see what has been done so far.</p>
<img src="https://blog.codenizer.nl/content/images/2019/08/thread-overview.png" style="border:solid 1px #cccccc;max-width:400px;display:block;margin:0 auto 0 auto" alt="Handling production outages at Jedlix">
<p>An additional benefit of keeping a log in the Slack thread is that it provides you with a clear timeline of the outage which is important input for the postmortem.</p>
<blockquote>
<p>While using Slack is convenient for the product development teams, it does not provide a good way to inform other people in the company. Some of our colleagues are not on Slack so how do we let them know? Do we let them know? The current agreement is that if users are impacted we send a notification in our company WhatsApp group. Perhaps not ideal, but it works for us.</p>
</blockquote>
<h2 id="thiswasbadletsnotdothatagain">This was bad, let's not do that again</h2>
<p>When the dust has settled after an outage we take a look at how to prevent it from happening again. One of the (very few) formal processes we have is to write up a postmortem document. The goal is to capture the timeline of what happened, the impact of the outage and what improvements we need to make.</p>
<p>Our early attempts at doing this were to get everybody in a room and try to hammer out the postmortem as fast as possible. As you might imagine, having a lot of people together leads to a lot of questions, proposals, opinions and generally a lot of feedback (valuable as it may be) unrelated to the outage itself.</p>
<p>Our current approach is that the first-responder initiates the postmortem and leads it. The only other people in the room are those directly involved. The first-responder will prepare the postmortem document and fill in as much as possible, starting with the timeline.</p>
<p>The goal of the postmortem is to identify whether and what we need to improve: to get notified earlier, to prevent the outage from happening again, or to make our systems more resilient to that failure. Obviously this isn't limited to technical improvements; it also covers our process and communication.</p>
<h2 id="improvingourimprovements">Improving our improvements</h2>
<p>Jedlix isn't a large company with a dedicated onboarding department for new joiners, and we don't have dedicated people who know the outage-handling process front to back. So how do we make sure that handling an outage and following up with a postmortem is something you can &quot;just&quot; do, instead of something that requires tribal knowledge?</p>
<p>At Jedlix we are fans of <a href="https://medium.com/@ricomariani/the-pit-of-success-cfefc6cb64c8">The pit of success</a> so naturally we want to apply this to our outages as well. One of the easier things we did here is link every alert to a runbook. This means you don't have to think about what you need to check; you follow a checklist instead, which greatly reduces human error and the risk that we forget to check something. On the postmortem side we went a little further and introduced automation.</p>
<h2 id="callintheforensicexaminer">Call in the forensic examiner</h2>
<p>Previously we talked about the postmortem document and noted that creating the timeline is one of its more tedious parts. Fortunately for us the timeline already lives in Slack (because that's where we put our status updates), and Slack happens to have a very nice API.</p>
<p>This allowed us to create <em>PostMortemBot</em>.</p>
<p>PostMortemBot gives everyone the option to generate a new postmortem document right from the Slack thread that contains all the updates, simply by typing <code>@postmortembot autopsy this</code>. It gathers all messages in the thread, including links, images and so on, and creates a Markdown document available for download that can then be put on our internal wiki:</p>
<img src="https://blog.codenizer.nl/content/images/2019/08/image--9-.png" style="border:solid 1px #cccccc;max-width:400px;display:block;margin:0 auto 0 auto" alt="Handling production outages at Jedlix">
<p>The resulting <em>autopsy report</em> looks like this:</p>
<img src="https://blog.codenizer.nl/content/images/2019/08/autopsy-report.png" style="border:solid 1px #cccccc;max-width:400px;display:block;margin:0 auto 0 auto" alt="Handling production outages at Jedlix">
<p><em>(note: this is just an example)</em></p>
<p>The first-responder can now fill in the other sections of the report and optionally remove entries from the timeline that are not useful or necessary.</p>
<p>You can imagine that it now becomes a lot easier to start the postmortem process as most of the (boring) work is already done for you.</p>
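<p>The heart of such a bot is small. As a sketch (the function, the message shape and the field names below are illustrative, not PostMortemBot's actual code), turning a fetched Slack thread into a Markdown timeline could look like this:</p>

```python
from datetime import datetime, timezone

def thread_to_markdown(messages, title="Postmortem"):
    """Render a list of Slack-style thread messages as a Markdown timeline.

    Each message is a dict with 'ts' (epoch seconds as a string, the way
    the Slack API returns timestamps), 'user' and 'text'.
    """
    lines = [f"# {title}", "", "## Timeline", ""]
    for msg in sorted(messages, key=lambda m: float(m["ts"])):
        stamp = datetime.fromtimestamp(float(msg["ts"]), tz=timezone.utc)
        lines.append(f"- **{stamp:%H:%M:%S}** ({msg['user']}): {msg['text']}")
    return "\n".join(lines)

# Example thread, hard-coded here for illustration
thread = [
    {"ts": "1565107200.000100", "user": "sander", "text": "Ack, investigating"},
    {"ts": "1565107500.000200", "user": "sander", "text": "Queue backlog cleared"},
]
print(thread_to_markdown(thread, title="Outage 2019-08-06"))
```

<p>In the real bot the messages would come from Slack's Web API (e.g. <code>conversations.replies</code>) rather than a hard-coded list.</p>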
<h2 id="doesitstophere">Does it stop here?</h2>
<p>No. Well... yes, for now.</p>
<p>The PostMortemBot already improves our lives, and after future outages we will have to evaluate whether this is enough automation or we need more. A possible feature could be a command to initiate an <em>outage situation</em>, which could automate some of the communication tasks. Perhaps a <code>@postmortembot statusupdate &quot;everything is under control&quot;</code> that sends a message to our WhatsApp group, who knows.</p>
<p>The main takeaway here is to not only use the postmortem to evaluate the outage but also the process itself. Improve where possible, automate <em>just enough</em>.</p>
<p>We hope you've gotten a nice overview of how we handle our outages. If you are interested in working with our team to drive renewables forward <a href="https://jedlix.com/en/carreer/">we are hiring</a>.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Running Kubernetes on Windows]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When I started down this path I've had quite a few people ask me <em>&quot;WHY?? DEAR $DIETY WHY DO YOU WANT THIS?&quot;</em></p>
<p>Fair enough, given the whole ecosystem around containers and especially Docker and Kubernetes is largely focused on Linux and *nix's we are a .Net shop and</p></div>]]></description><link>https://blog.codenizer.nl/k8s-on-windows/</link><guid isPermaLink="false">5c4ed157cb38a607d18e532c</guid><category><![CDATA[docker]]></category><category><![CDATA[k8s]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[hyper-v]]></category><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Mon, 28 Jan 2019 14:28:02 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When I started down this path I've had quite a few people ask me <em>&quot;WHY?? DEAR $DIETY WHY DO YOU WANT THIS?&quot;</em></p>
<p>Fair enough: the whole ecosystem around containers, and especially Docker and Kubernetes, is largely focused on Linux and *nixes, while we are a .NET shop and most of our developers work on Windows machines. Switching people over purely for the sake of running a Kubernetes (k8s) environment is not something I'd encourage.</p>
<p>One might say: <em>&quot;But you can create a k8s cluster in the cloud right?&quot;</em>, well, yes that is certainly possible and most cloud providers give you free credits to do that. The drawback is that it forces you to always have an Internet connection available (repeat after me: &quot;The network is always reliable, there is no latency&quot;). I've found that having resources available locally works well for me, but this is definitely something you'll need to decide for yourself.</p>
<p>Having said that, this is about setting up a k8s cluster on Windows 10.</p>
<h2 id="herebedragons">Here be dragons</h2>
<p>I want to be honest with you: figuring this out has given me a few more grey hairs and I fear I've worn out some of my choicest swear words. This was probably due more to my own stubbornness than to documentation not being available. Still, it's always a challenge to piece together the right information while discarding whatever knowledge you have from working on different platforms (I used to code on a Mac).</p>
<h2 id="commonapproach">Common approach</h2>
<p>What you'll find when searching for k8s on Windows is that people recommend <a href="https://github.com/kubernetes/minikube">minikube</a>, and that is usually a good option. Unfortunately I've had a lot of trouble getting this setup to work reliably on my machine. The main reason seems to be in the networking area and IPv4/IPv6 configurations. At home it works like a charm (IPv4-only network); however, at the office we have a combined IPv4/IPv6 network and things break down rapidly. I've tried disabling IPv6 on the network adapters (both host and the k8s VM) but to no avail.</p>
<h2 id="usedocker">&quot;Use docker&quot;</h2>
<p>Luckily we have the Internet. After searching how to resolve the IPv4/IPv6 problems I came across <a href="https://github.com/kubernetes/minikube/issues/2150#issuecomment-437329843">this comment</a> that made me go: &quot;well.... <em>BLEEP</em>...&quot;</p>
<p>So I set out to see if I could get the Docker way to work. (Narrator: it worked)</p>
<h2 id="startingout">Starting out</h2>
<p>There are a few things you'll need before setting off on this quest of getting a k8s cluster working on your machine:</p>
<ul>
<li>Windows 10 <a href="https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v#check-requirements">Enterprise, Pro or Education</a> edition with Hyper-V enabled (the link documents how to do that)</li>
<li>Docker Desktop (I tested with Version 2.0.0.2 (30215), stable channel, build 0b030e1); previous versions may work but <a href="http://jargon.net/jargonfile/y/Yourmileagemayvary.html">YMMV</a>. According to <a href="https://docs.docker.com/docker-for-windows/kubernetes/">the docs</a> it should work from version 18.06 Stable.</li>
<li><code>kubectl</code> (I recommend installing this using Chocolatey)</li>
</ul>
<h2 id="configuringdocker">Configuring docker</h2>
<p>After installing Docker (and rebooting! It's Windows after all) you will need to enable k8s. Open the Docker settings application and select the Kubernetes option; you should see this:<br>
<img src="https://blog.codenizer.nl/content/images/2019/01/docker-kubernetes-settings.png" alt="docker-kubernetes-settings"><br>
Make sure the <em>Enable Kubernetes</em> and <em>Show system containers (advanced)</em> options are selected.<br>
Docker will prompt you that it needs to install k8s support, click Ok when it does.</p>
<p>Once the installation finishes, the Docker side of the setup is done.</p>
<h2 id="verifyingkubernetesworks">Verifying Kubernetes works</h2>
<p>As Docker already manages the k8s cluster, we need to check that it is running properly and that you can access it.</p>
<p>From a PowerShell or Command window use <code>kubectl</code> to check connectivity:</p>
<pre><code class="language-PowerShell">$&gt; kubectl cluster-info
</code></pre>
<p>That should provide you with output similar to the following:</p>
<pre><code>Kubernetes master is running at https://localhost:6445
KubeDNS is running at https://localhost:6445/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
<p>If you're a command line hero, you'll be fine from this point on. However, I recommend also deploying the dashboard so you can get a quicker overview of your k8s environment.</p>
<h2 id="settingupthek8sdashboard">Setting up the k8s dashboard</h2>
<p>Open a PowerShell console and run the following commands (from <a href="https://github.com/kubernetes/dashboard/wiki/Installation#recommended-setup">the official docs</a>):</p>
<ul>
<li><code>kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kube-system</code></li>
<li><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml</code></li>
<li><code>kubectl proxy</code></li>
</ul>
<p>At this point you can access the dashboard on <a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login</a> but you'll be greeted with this:<br>
<img src="https://blog.codenizer.nl/content/images/2019/01/kubernetes-dashboard-login.png" alt="kubernetes-dashboard-login"><br>
Select the <code>Kubeconfig</code> option and select the <code>config</code> file from your <code>%USERPROFILE%\.kube</code> directory. After clicking Sign In you should see the dashboard. If not please check <a href="https://github.com/kubernetes/dashboard/wiki/Access-control">the</a> <a href="https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above">docs</a>.</p>
<blockquote>
<p>NOTE: The dashboard deployment is now using v1.10.1. You may want to check <a href="https://github.com/kubernetes/dashboard/releases">here</a> if that's the latest version before deploying.</p>
</blockquote>
<h2 id="manthehelm">Man the helm</h2>
<p>At this point you'll have a working k8s cluster running on your Windows machine, without minikube. The next step is deploying a helm chart to your cluster.</p>
<p>I've provided a sample helm chart that you can get <a href="https://github.com/sandermvanvliet/k8sdemo">here</a>; clone the repository and <code>cd</code> into the directory. Next, run <code>helm install .</code>, wait for the command to complete, and it should end with something like:</p>
<blockquote>
<p><code>NOTES:</code><br>
<code>1. Get the application URL by running these commands: http://k8s.demo.net/</code></p>
</blockquote>
<p>Now when you try to access that URL you'll be sadly disappointed: you'll get a connection refused error because there is nothing listening at that address.</p>
<h2 id="knockknock">Knock-knock</h2>
<p>This is weird: the helm chart contains an <code>ingress</code> specification so it should be accessible, right? When we check the dashboard it even shows up under the ingresses. What gives?</p>
<p>Well, by default there is no ingress controller active, so there is nothing that will accept HTTP(S) traffic and route it to our service. Fortunately we can solve this problem by adding the nginx ingress controller.</p>
<p>Open a PowerShell console and run the following command:</p>
<pre><code class="language-PowerShell">$&gt; helm install stable/nginx-ingress --name ingress-nginx --namespace ingress-nginx --wait
</code></pre>
<p>Once this command completes you should be able to connect to the service. Let's see:</p>
<pre><code class="language-PowerShell">$&gt; curl.exe -v &quot;http://k8s.demo.net/&quot;
* Could not resolve host: k8s.demo.net
* Closing connection 0
curl: (6) Could not resolve host: k8s.demo.net
</code></pre>
<p>Ok, well, great. It still doesn't work. Actually this was to be expected: your local machine is not using a DNS server that knows how to map the domain name <code>k8s.demo.net</code> to <code>127.0.0.1</code>, where the ingress controller is listening.</p>
<p>The easiest way to fix this is by adding an entry to the <code>HOSTS</code> file. Open up a PowerShell window with administrative privileges (you'll need this to edit the file) and run:</p>
<pre><code class="language-PowerShell">$&gt; notepad c:\windows\system32\drivers\etc\hosts
</code></pre>
<p>and add the following entry:</p>
<pre><code>127.0.0.1    k8s.demo.net
</code></pre>
<p>Save the file and run curl again:</p>
<pre><code class="language-PowerShell">$&gt; curl.exe -v &quot;http://k8s.demo.net/&quot;
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to k8s.demo.net (127.0.0.1) port 80 (#0)
&gt; GET / HTTP/1.1
&gt; Host: k8s.demo.net
&gt; User-Agent: curl/7.55.1
&gt; Accept: */*
&gt;
&lt; HTTP/1.1 200 OK
&lt; Server: nginx/1.15.6
&lt; Date: Mon, 28 Jan 2019 13:59:07 GMT
&lt; Content-Type: text/plain; charset=utf-8
&lt; Content-Length: 6
&lt; Connection: keep-alive
&lt; X-App-Name: http-echo
&lt; X-App-Version: 0.2.3
&lt;
hello
* Connection #0 to host k8s.demo.net left intact
</code></pre>
<p>Success!</p>
<h2 id="thefinishline">The finish line</h2>
<p>At this point you have a fully working kubernetes cluster running on your machine and you have deployed a service that you can call from your machine using a proper domain name.</p>
<p>There are a few things to keep in mind. To run the dashboard you'll have to run <code>kubectl proxy</code>; it's possible to expose the dashboard using <code>NodePort</code> but I haven't tested that yet.</p>
<p>The <code>HOSTS</code> file &quot;hack&quot; isn't very user friendly. If you're working with the same services all the time this shouldn't be too big of a deal, but there should be an easier way to do this, with a local DNS server for example.</p>
<p>For now, happy k8s'ing. If you have questions feel free to reach out via Twitter <a href="https://twitter.com/Codenizer">@Codenizer</a>.</p>
<h3 id="acknowledgements">Acknowledgements</h3>
<p>This wouldn't have been possible without a lot of people writing Github issues, official documentation and blog posts. I've added a list below that hopefully captures most of the resources I've used for this journey.</p>
<ul>
<li>Scott Hanselman's blog: <a href="https://www.hanselman.com/blog/HowToSetUpKubernetesOnWindows10WithDockerForWindowsAndRunASPNETCore.aspx">https://www.hanselman.com/blog/HowToSetUpKubernetesOnWindows10WithDockerForWindowsAndRunASPNETCore.aspx</a></li>
<li>Rancher docs that pointed me to nginx ingress: <a href="https://rancher.com/blog/2018/2018-05-18-how-to-run-rancher-2-0-on-your-desktop/">https://rancher.com/blog/2018/2018-05-18-how-to-run-rancher-2-0-on-your-desktop/</a></li>
<li>Docker documentation: <a href="https://github.com/docker/for-win">https://github.com/docker/for-win</a></li>
<li>Nginx ingress docs: <a href="https://kubernetes.github.io/ingress-nginx/examples/docker-registry/">https://kubernetes.github.io/ingress-nginx/examples/docker-registry/</a></li>
<li>k8s dashboard wiki: <a href="https://github.com/kubernetes/dashboard/wiki">https://github.com/kubernetes/dashboard/wiki</a></li>
<li>Github issue for HOSTS file: <a href="https://github.com/docker/for-win/issues/1901">https://github.com/docker/for-win/issues/1901</a></li>
<li>Setting up ingress on minikube: <a href="https://medium.com/@Oskarr3/setting-up-ingress-on-minikube-6ae825e98f82">https://medium.com/@Oskarr3/setting-up-ingress-on-minikube-6ae825e98f82</a></li>
<li>Helm: <a href="https://helm.sh/">https://helm.sh/</a></li>
<li>Hashicorp echo server: <a href="https://hub.docker.com/r/hashicorp/http-echo">https://hub.docker.com/r/hashicorp/http-echo</a></li>
</ul>
</div>]]></content:encoded></item><item><title><![CDATA[Visualizing API landscape and health]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Many people in recent years have jumped on the (micro) services bandwagon for many reasons. But, as with radioactively-spider-bitten-students, with great services come great responsibility... eh, well, observability.</p>
<p>As more and more services are introduced into your landscape, it becomes more difficult to keep track if services are running. More</p></div>]]></description><link>https://blog.codenizer.nl/visualizing-apis-with-se4/</link><guid isPermaLink="false">5c2b4607cb38a607d18e5325</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Tue, 01 Jan 2019 15:55:32 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Many people in recent years have jumped on the (micro) services bandwagon for many reasons. But, as with radioactively-spider-bitten-students, with great services come great responsibility... eh, well, observability.</p>
<p>As more and more services are introduced into your landscape, it becomes more difficult to keep track if services are running. More importantly, how do you quickly see which services are affected if one dies?</p>
<h2 id="healthywhatdoesitmean">Healthy, what does it mean?</h2>
<p>First things first: How do we know if a service is healthy?</p>
<p>Most services depend on other components to function properly, for example a database, a cache or another service. If any of these are not available or are not responding quickly enough, we can say that the service is <em>unhealthy</em>.</p>
<p>Still, each service knowing about the health of its own components doesn't do us a lot of good just yet. We're in <em>&quot;if a tree falls in the forest and nobody is there, does it make a sound?&quot;</em> territory right now.</p>
<p>A first approach here is to have each check log an error when it fails its criteria, something along the lines of <code>Health test failed</code>. We can then leverage our logging infrastructure (Splunk, ELK, Logz.io etc.) to trigger alerts for us.</p>
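<p>In code this pattern stays small. A minimal sketch (the check callables and the logger name are invented for illustration; the logged message is the pattern the alert matches on):</p>

```python
import logging

logger = logging.getLogger("healthchecks")

def run_health_checks(checks):
    """Run named check functions; log the agreed marker line on failure.

    `checks` maps a dependency name to a zero-argument callable that
    returns True when the dependency is healthy.
    """
    failed = []
    for name, check in checks.items():
        try:
            healthy = check()
        except Exception:
            healthy = False
        if not healthy:
            # This exact message is what the log-based alert matches on.
            logger.error("Health test failed: %s", name)
            failed.append(name)
    return failed

# Example: a healthy cache, an unhealthy database
failures = run_health_checks({
    "cache": lambda: True,
    "database": lambda: False,
})
print(failures)  # ['database']
```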
<p>And then suddenly your Slack <code>#operations</code> channel contains these little messages:</p>
<blockquote>
<p><span style="color:red">[ERROR]</span> ReallyImportantService is unhealthy</p>
</blockquote>
<p>Ok, well now.</p>
<p>So obviously we know the service failed but we still need to resort to our logs and figure out what <em>part</em> of the service is failing. Database? Messagebus? ¯\_(ツ)_/¯</p>
<h2 id="weneedtogodeeper">We need to go deeper</h2>
<p>What we need is a way to ask the service itself what's wrong instead of digging through our alerts and logs. Given that we're dealing with APIs that are accessible over HTTP anyway, maybe we should leverage that. We could create an endpoint on our API that provides us with the results of the health checks.</p>
<p>But why reinvent the wheel? Fortunately for us the folks at <a href="https://www.beamly.com/">Beamly</a> have already done a lot of the thinking for us. They came up with the <a href="https://github.com/beamly/SE4/blob/master/SE4.md">Simple Service Status Endpoints</a> or <em>SE4</em> that defines four endpoints to quickly get information about the health of a service.</p>
<p>For our goal of seeing what is going on we'll be using these two:</p>
<ul>
<li><code>/service/healthcheck/gtg</code> which tells us if the service is <em>OK</em> (<code>/gtg</code>: <em>good-to-go</em>)</li>
<li><code>/service/healthcheck</code> which gives a list of dependencies and if they are <em>OK</em></li>
</ul>
<p>I won't go into too much detail (the linked doc does that better) but we now have a common way of accessing health information for all our services. You can imagine that this helps a lot when you have a <em>lot</em> of services: you don't need to write each integration separately.</p>
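<p>To give a feel for the format: a <code>/service/healthcheck</code> response is essentially a list of named test results. The shape below is paraphrased for illustration; the linked spec is authoritative:</p>

```python
import json

# A sketch of what a service might return from /service/healthcheck
# (field names are illustrative, see the SE4 spec for the real schema)
response = json.loads("""
{
  "report_as_of": "2019-01-01T15:55:32Z",
  "tests": [
    {"test_name": "database", "test_result": "passed"},
    {"test_name": "user-api", "test_result": "failed"}
  ]
}
""")

# The /gtg endpoint boils down to: every single test passed
good_to_go = all(t["test_result"] == "passed" for t in response["tests"])
print(good_to_go)  # False: user-api is down
```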
<h2 id="mappingexpedition">Mapping expedition</h2>
<p>Unfortunately we're still not a lot closer to where we want to be: <em>a view of our landscape with health indicators.</em></p>
<p>Even though right now we can get detailed information about a service, we need a way to determine which service is using what. If you read the SE4 document, you'll see that each test has a <code>test_name</code> and we can use that to identify the dependency.</p>
<p>But how do we match identifiers to services?</p>
<p>You will probably have already adopted a naming convention for your services. Perhaps like this <code>https://environment-domain-api.mycompany.com/</code> to create <code>https://production-user-api.mycompany.com/</code>. Your application name would be <code>user-api</code> and this is a good candidate for an identifier to use in the <code>test_name</code> for services that depend on the User API.</p>
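<p>Under that (hypothetical) naming convention, extracting the environment and application name from a service URL is trivial:</p>

```python
from urllib.parse import urlparse

def parse_service_url(url):
    """Split 'https://production-user-api.mycompany.com/' into
    (environment, application) under the environment-domain-api convention."""
    host = urlparse(url).hostname
    subdomain = host.split(".")[0]        # 'production-user-api'
    environment, _, app = subdomain.partition("-")
    return environment, app               # ('production', 'user-api')

print(parse_service_url("https://production-user-api.mycompany.com/"))
```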
<p>The problem is, we still need to find all our services. But before we can find our services, what environments do we have?</p>
<p>When you run your services in The Cloud<sup>TM</sup>, you'll get the benefit of infrastructure management APIs. These allow you to inspect what you have deployed and where you can find these services (and many other things like configuration, firewall settings, you name it).</p>
<blockquote>
<p>For example if you run on Azure, you could query App Service instances, on AWS you could look for Application Load Balancers or you can use Consul or Zookeeper on your own infrastructure.</p>
</blockquote>
<p>My current discovery method is slightly less sophisticated: a config file.... but it does the job.</p>
<p>The key point here is that you use the data from service discovery to determine which services make up the landscape for a particular environment (and therefore what an environment means to you).</p>
<p>Pulling everything together we can now find out which services exist, what the relations are between them and what additional dependencies exist.</p>
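<p>A minimal sketch of that &quot;pulling together&quot; step, with a literal object standing in for the config file or discovery API (all names and the data shape are hypothetical):</p>

```javascript
// Discovery output: which services exist and what they depend on. In the post
// this comes from a config file; here it's a literal object (hypothetical data).
const discovered = {
  "user-api":  { healthy: true,  dependencies: { "database": true } },
  "order-api": { healthy: false, dependencies: { "user-api": true, "queue": false } },
};

// Fold services and their dependencies into one landscape of nodes and edges.
function buildLandscape(services) {
  const nodes = {};
  const edges = [];
  for (const [name, info] of Object.entries(services)) {
    nodes[name] = info.healthy;
    for (const [dep, ok] of Object.entries(info.dependencies)) {
      if (!(dep in nodes)) nodes[dep] = ok; // external dependencies become nodes too
      edges.push({ from: name, to: dep, healthy: ok });
    }
  }
  return { nodes, edges };
}

console.log(buildLandscape(discovered));
```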
<h2 id="visualizing">Visualizing</h2>
<p>We have our discovery, naming and endpoints, but still we have no visual representation of the landscape. The next step is figuring out how to show this.</p>
<p>Fortunately there are a lot of libraries available to do this for us, <a href="http://visjs.org/">vis.js</a> is one I've found to work well. We can transform our list of services and dependencies into their graph equivalents: nodes and edges. The healthy state of the services and dependencies is then used to color the nodes and edges red or green.</p>
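<p>The transformation itself is small. A sketch of producing the <code>{ nodes, edges }</code> shape that vis.js consumes (the input data shape is hypothetical):</p>

```javascript
// Turn services and their dependencies into vis.js nodes and edges, coloured
// green when healthy and red when not. The input data shape is hypothetical.
function toVisGraph(services) {
  const colour = ok => (ok ? "green" : "red");
  return {
    nodes: services.map(s => ({ id: s.name, label: s.name, color: colour(s.healthy) })),
    edges: services.flatMap(s =>
      (s.dependencies || []).map(d => ({ from: s.name, to: d.name, color: colour(d.healthy) }))
    ),
  };
}

// In the browser you'd hand this to vis.js:
// new vis.Network(container, toVisGraph(services), {});
```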
<p>A sample graph then looks like this:<br>
<img src="https://blog.codenizer.nl/content/images/2019/01/demo.png" alt="demo"></p>
<p>Of course this is a very simple example but I hope it gives you an idea about how it works in scenarios with many more services.</p>
<h2 id="wrappingup">Wrapping up</h2>
<p>We've seen how we moved from having no information on our service health to a visual overview that helps to quickly spot the problem and the scope of it. This can be a big benefit when alerts go off and you quickly need to diagnose a problem. It also allows you to easily communicate a problem scope to stakeholders.</p>
<p>At <strong>$company</strong> we're using <a href="https://github.com/sandermvanvliet/Voltmeter">Voltmeter</a> which is an implementation of what I've talked about in this post. Hopefully you'll find it useful and it allows you to get started using this yourself to monitor service health.</p>
</div>]]></content:encoded></item><item><title><![CDATA[VS2017 + Specflow = challenging!]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This week I tried to set up a Specflow acceptance test project in one of the applications I'm working on. I already knew that Specflow support for netcore is <a href="https://github.com/techtalk/SpecFlow/pull/649">work in progress</a>. So thinking myself very clever, I created a project targeting <code>net462</code> with the assumption that it would <em>JustWork<sup></sup></em></p></div>]]></description><link>https://blog.codenizer.nl/vs2017-specflow-challenging/</link><guid isPermaLink="false">5a51d8c108c32b7b6bf0062b</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Tue, 07 Nov 2017 20:45:32 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>This week I tried to set up a Specflow acceptance test project in one of the applications I'm working on. I already knew that Specflow support for netcore is <a href="https://github.com/techtalk/SpecFlow/pull/649">work in progress</a>. So thinking myself very clever, I created a project targeting <code>net462</code> with the assumption that it would <em>JustWork<sup>TM</sup></em></p>
<p>Of course, it didn't. The code behind files were not generated from the add-in on save 😕</p>
<p>Okay, so I knew you could let Specflow generate the code behind on build, or actually <a href="http://specflow.org/documentation/Generate-Tests-from-MsBuild/"><code>BeforeBuild</code></a>. Good! Let's try that!</p>
<p><code>&lt;Import Project=&quot;packages\specflow\2.2.1\tools\TechTalk.Specflow.targets&quot; /&gt;</code></p>
<p>that should just.... shit:</p>
<p><code>The imported project &quot;C:\git\project\packages\specflow\2.2.1\tools\TechTalk.Specflow.targets&quot; was not found. Confirm that the path in the &lt;Import&gt; declaration is correct, and that the file exists on disk.</code></p>
<p><strong>Problem #1:</strong><br>
<code>packages\</code> is not in the solution directory anymore because reasons (hint: VS2017 + new NuGet) 😶. Great, so where do we reference the Specflow targets from? Luckily there is a MSBuild property <code>$(NuGetPackageRoot)</code> that points to that path, sooooooo:</p>
<p><code>&lt;Import Project=&quot;$(NuGetPackageRoot)\specflow\2.2.1\tools\TechTalk.Specflow.targets&quot; /&gt;</code></p>
<p>whoop, project loads again! Let's build!</p>
<p>Hmpf. No tests in the test explorer, NCrunch shows nothing... bugger.</p>
<p><strong>Problem #2:</strong><br>
Code behind files still not being generated. Why didn't it work?<br>
A lot of digging turned up a <a href="https://stackoverflow.com/questions/43921992/how-can-i-use-beforebuild-and-afterbuild-targets-with-visual-studio-2017?answertab=votes#tab-top">StackOverflow post</a> that explained why the <code>BeforeBuild</code> wasn't called. TL;DR: in VS2017 the build system uses sensible defaults to reduce the noise in <code>.csproj</code> files. That has the unfortunate side effect that <code>BeforeBuild</code> isn't called, which Specflow depends on.</p>
<p>Cool, import magicking applied, let's build:</p>
<p><code>TechTalk.Specflow.targets(47,5): error : Object reference not set to an instance of an object.</code></p>
<p>$^$%^#@$@#$^$^ not again!</p>
<p>This one took some digging in the Specflow sources. This little bit of source code spelunking led me to <a href="https://github.com/techtalk/SpecFlow/blob/master/TechTalk.SpecFlow.Generator/Project/MsBuildProjectReader.cs#L126">MsBuildProjectReader.cs</a> and the aptly named method <code>IsNewProjectSystem()</code>. The method checks if the root node of the <code>.csproj</code> file contains an <code>Sdk</code> property, which new style projects do!<br>
Oh but I just removed that to fix the <code>BeforeBuild</code> problem.... shit.</p>
<p>I wonder if.... ok yeah, adding <code>Sdk=&quot;&quot;</code> still loads the project, let's see if it builds....</p>
<p><code>TechTalk.Specflow.targets(47,5): error : Object reference not set to an instance of an object.</code></p>
<p>(╯°□°）╯︵ ┻━┻</p>
<p>More source code spelunking later, it turns out that we need a <code>specflow.json</code> in the project because for VS2017 style projects Specflow is also changing to JSON instead of <code>app.config</code> (see <a href="https://github.com/techtalk/SpecFlow/blob/master/TechTalk.SpecFlow.Generator/Project/MsBuildProjectReader.cs#L111">here</a>)</p>
<p>Adding a <code>specflow.json</code> with just <code>{}</code> as its contents....</p>
<p><code>TechTalk.Specflow.targets(47,5): error : Object reference not set to an instance of an object.</code></p>
<p>Double (╯°□°）╯︵ ┻━┻</p>
<p>Ok, contents matter. Digging into <a href="https://github.com/techtalk/SpecFlow/blob/master/TechTalk.SpecFlow/Configuration/JsonConfig/JsonConfigurationLoader.cs">JsonConfigurationLoader</a> I created the following <code>specflow.json</code>:</p>
<pre><code class="language-json">{
	&quot;SpecFlow&quot;: {
		&quot;UnitTestProvider&quot;: {
			&quot;Name&quot;: &quot;xUnit&quot;
		}
	}
}
</code></pre>
<p>Hit build.....</p>
<p>BOOM! A wild <code>MyFeature.feature.cs</code> appears!</p>
<p>NCrunch suddenly turns green, TestExplorer shows a test, life is good!</p>
<h2 id="recap">Recap</h2>
<p>Pulling this all together, you'll need a <code>.csproj</code> that looks like this:</p>
<pre><code class="language-xml">&lt;Project Sdk=&quot;&quot;&gt;
  &lt;Import Project=&quot;Sdk.props&quot; Sdk=&quot;Microsoft.NET.Sdk&quot; /&gt;

  &lt;PropertyGroup&gt;
    &lt;TargetFramework&gt;net462&lt;/TargetFramework&gt;
  &lt;/PropertyGroup&gt;
  
  &lt;Import Project=&quot;Sdk.targets&quot; Sdk=&quot;Microsoft.NET.Sdk&quot; /&gt;
  
  &lt;Import Project=&quot;$(NuGetPackageRoot)\specflow\2.2.1\tools\TechTalk.Specflow.targets&quot; /&gt;
&lt;/Project&gt;
</code></pre>
<p>and a <code>specflow.json</code> like this:</p>
<pre><code class="language-json">{
	&quot;SpecFlow&quot;: {
		&quot;UnitTestProvider&quot;: {
			&quot;Name&quot;: &quot;xUnit&quot;
		}
	}
}
</code></pre>
<p>With this in place the code-behind files are generated on build. It's a far cry from the generate-on-save behaviour of the add-in, but at least it gets us going.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Configure client time zone in Dockerized Splunk]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When you use the official Splunk <a href="https://hub.docker.com/r/splunk/splunk/">Docker container</a> the default configuration is that the UI shows time in the UTC time zone. This can be quite confusing if you are not actually in that particular time zone (or deal with daylight saving time....)</p>
<p>Fortunately Splunk allows you to select the</p></div>]]></description><link>https://blog.codenizer.nl/splunk-tz-docker/</link><guid isPermaLink="false">5a51d8c108c32b7b6bf0062a</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Thu, 31 Aug 2017 12:03:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When you use the official Splunk <a href="https://hub.docker.com/r/splunk/splunk/">Docker container</a> the default configuration is that the UI shows time in the UTC time zone. This can be quite confusing if you are not actually in that particular time zone (or deal with daylight saving time....)</p>
<p>Fortunately Splunk allows you to select the time zone you want in the UI (it's in the top right corner, click &quot;Account settings&quot;). But if you frequently recycle the container this gets a bit tedious and it's something you'll easily forget.</p>
<p>As it turns out, you can set the default for all users via a configuration file that acts as a template for all users. The only thing that is needed is to create a <code>user-prefs.conf</code> file in the directory <code>/opt/splunk/etc/system/local</code>. (Found this solution <a href="https://answers.splunk.com/answers/439363/how-to-set-a-default-timezone-for-an-entire-multis.html#answer-438695">here</a>). The file looks like this:</p>
<pre><code>[general]
eai_app_only = False
eai_results_per_page = 25
tz = Canada/Alberta
</code></pre>
<p>In this example I've configured the default UI time zone to be Alberta, Canada (Mountain Time: UTC-07:00, or UTC-06:00 during daylight saving time). Simply set <code>tz</code> to the time zone you need.</p>
<p>Putting this together in a <code>docker-compose.yml</code> it will look like this:</p>
<pre><code>version: &quot;3.2&quot;

volumes:
  opt_splunk_etc:
  opt_splunk_var:

services: 
  splunk:
    hostname: splunkenterprise
    image: splunk/splunk:latest
    environment:
      SPLUNK_START_ARGS: --accept-license
      SPLUNK_ENABLE_LISTEN: 9997
      SPLUNK_ADD: tcp 1514
    volumes:
      - type: volume
        source: opt_splunk_etc
        target: /opt/splunk/etc
      - type: volume
        source: opt_splunk_var
        target: /opt/splunk/var
      - type: bind
        source: ./user-prefs.conf
        target: /opt/splunk/etc/system/local/user-prefs.conf
        read_only: true
    ports:
      - &quot;8000:8000&quot;
      - &quot;9997:9997&quot;
      - &quot;8088:8088&quot;
      - &quot;1514:1514&quot;
</code></pre>
<p>The interesting bit here is the <code>bind</code> volume that maps the <code>user-prefs.conf</code> file into the right location inside the container.</p>
<p>Easy as.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Don't be PASVe with Docker]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Today I've had the interesting challenge to help <a href="https://github.com/ester-coolblue">Ester</a> trying to figure out why a Specflow test wasn't behaving as it should. Of course the test ran fine in the CI pipeline but locally, no joy.</p>
<h3 id="somebackground">Some background</h3>
<p>The application that this test is for generates a file from data</p></div>]]></description><link>https://blog.codenizer.nl/donbe-pasve-with-docker/</link><guid isPermaLink="false">5a51d8c108c32b7b6bf00629</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Wed, 23 Aug 2017 10:57:58 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Today I've had the interesting challenge to help <a href="https://github.com/ester-coolblue">Ester</a> trying to figure out why a Specflow test wasn't behaving as it should. Of course the test ran fine in the CI pipeline but locally, no joy.</p>
<h3 id="somebackground">Some background</h3>
<p>The application that this test is for generates a file from data in a database, which it then uploads to a FTP server. Easy as.</p>
<p>The test we were looking at basically does the following:</p>
<ol>
<li>Clear the files from the FTP server</li>
<li>Set up test data in the database</li>
<li>Run the application</li>
<li>Verify that the file has been uploaded to the FTP server</li>
</ol>
<p>Because this test uses an FTP server, Ester already set up a local FileZilla server and configured the test and application to point to that server. So far, so good.</p>
<p>However the test failed at step 1...</p>
<p>Time to investigate!</p>
<h3 id="thestartofourjourney">The start of our journey</h3>
<p>Okay, so what failed exactly? The method that clears the files from the FTP server is nothing special, it just lists the files in the directory (which we get from settings) and then deletes them one by one.</p>
<p>The error we saw was that the file we attempted to delete didn't exist. Which is odd as we just got that file from a list (<code>NLST</code>) operation. Looking at the path supplied to the delete command it looked like this:<br>
<code>ftp://localhost/workdir/workdir/somefile.dat</code>. Apparently the directory <code>workdir</code> was included twice. How did that happen? We certainly didn't put it in the format string twice.</p>
<p>Apparently FileZilla includes the directory name in the result of a <code>NLST</code> operation. So if you do <code>NLST workdir</code> you get <code>workdir/file1</code>, <code>workdir/file2</code> etc instead of <code>file1</code>, <code>file2</code>. According to <a href="https://cr.yp.to/djb.html">djb</a> <code>NLST</code> should <a href="https://cr.yp.to/ftp/list.html">return an abbreviated list</a> whereas <code>LIST</code> returns the full path for every file.</p>
<p>So our first problem is a FTP server with weird behavior... nice!</p>
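<p>One way to defend against this difference in behaviour is to normalise the listing before using it. A sketch (hypothetical helper, not the actual code from the test suite):</p>

```javascript
// Strip a leading directory prefix from NLST entries, so both FileZilla-style
// results ("workdir/file1") and abbreviated ones ("file1") end up as bare names.
function normalizeNlst(dir, entries) {
  const prefix = dir.replace(/\/+$/, "") + "/";
  return entries.map(e => (e.startsWith(prefix) ? e.slice(prefix.length) : e));
}

console.log(normalizeNlst("workdir", ["workdir/file1", "file2"]));
```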
<h3 id="enterdocker">Enter Docker!</h3>
<p>Okay, so FileZilla doesn't work for us, let's pick another FTP server. Luckily <a href="https://github.com/nrjohnstone">Nathan</a> has already poked around and created a Docker container ready to use with <a href="https://www.pureftpd.org/project/pure-ftpd">pure-ftpd</a>. After copying the necessary <code>docker-compose.yml</code> and <code>DOCKERFILE</code> we were good to go:</p>
<pre><code>C:\src\application&gt; docker-compose up
Traceback (most recent call last):
...
...
  File &quot;site-packages\docker\transport\npipesocket.py&quot;, line 49, in connect
pywintypes.error: (2, 'WaitNamedPipe', 'The system cannot find the file specified.')
docker-compose returned -1
</code></pre>
<p>Not very encouraging.... what the bleep is <code>WaitNamedPipe</code> anyway?</p>
<p>Some Googling only revealed posts with this issue from at least a year ago. Probably updating Docker couldn't hurt.</p>
<p>Oh how wrong I was...</p>
<p>After downloading the latest Docker version and running the installer, it told me: <em>&quot;Hahaha! I only work on Windows 10!&quot;</em> Great. Okay so I should've read the site more carefully and downloaded the <a href="https://www.docker.com/products/docker-toolbox">Docker Toolbox</a> instead because that supports Windows 7.</p>
<p>One reboot later...</p>
<pre><code>Error: Unable to connect to system D-Bus (2/3): D-Bus not installed
</code></pre>
<p>Shit.</p>
<p>Then I saw another message that the Virtual Box service could not be found. Interesting... I opened up Virtual Box to see if I could start the Docker VM, that didn't work. Also it told me there was an update available. So on a hunch I updated Virtual Box, thinking that re-installing probably would correct the situation.</p>
<p>Another reboot later...</p>
<pre><code>DEVBOX ~
$ 
</code></pre>
<p>Yay! It works, Docker is up!</p>
<pre><code>DEVBOX ~ 
$ docker-compose up
...
...
Attaching to pureftpd-test...
</code></pre>
<p>Awww yis!<br>
By now, everything was working and I was able to connect to the FTP server successfully via IP address <code>192.168.155.165</code> (&lt;-- remember this).</p>
<h3 id="fixingthemtests">Fixing them tests</h3>
<p>Okay so now we had a working environment, correctly configured test suite and application to point to the FTP server. Time to run the test again and see if it works.</p>
<p>Hoo-rah!! Test cleanup worked, setup worked, running the application worked. Only the assertion failed... bummer!</p>
<p>Let's see if the file got created on the FTP server... nope. Weird. I thought we had everything working, right? Double checking the configuration it certainly looks that way. Time to debug, launch the application from Visual Studio directly and see if it is behaving weird.</p>
<p>Stepping through the file upload I noticed something weird:</p>
<pre><code>PASV
227 Entering Passive Mode (127,0,0,1,216,3004)
</code></pre>
<p><code>127.0.0.1</code>? But wait a minute, we're connecting to <code>192.168.155.165</code> aren't we?</p>
<p>No wonder the upload fails. The FTP server is telling the application to connect to localhost but nothing is listening there. What's going on?</p>
<p>It turns out <code>PASV</code> instructs the FTP server to tell the client the port <strong>and</strong> the IP address to connect to. But because the FTP server thinks it's running on localhost, it will tell that to the client causing a mismatch. Luckily there is <a href="https://tools.ietf.org/html/rfc2428#section-3">RFC-2428</a> that introduces the <code>EPSV</code> verb which only returns the port number to connect to.</p>
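<p>To make the difference concrete, here is a sketch of what a client has to do with a <code>PASV</code> reply (hypothetical helper, not part of the FTP library we used):</p>

```javascript
// A PASV reply embeds both the data port AND the IP address the server thinks
// it has: "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)", port = p1 * 256 + p2.
// That embedded address is exactly the value that went wrong here; EPSV returns
// only a port, so the client keeps the address it originally connected to.
function parsePasv(reply) {
  const m = reply.match(/\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)/);
  if (!m) throw new Error("not a PASV reply: " + reply);
  const [h1, h2, h3, h4, p1, p2] = m.slice(1).map(Number);
  return { host: `${h1}.${h2}.${h3}.${h4}`, port: p1 * 256 + p2 };
}

console.log(parsePasv("227 Entering Passive Mode (127,0,0,1,216,40)"));
// → host 127.0.0.1 (not the 192.168.155.165 we connected to), port 55336
```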
<p>So what causes this address confusion in the first place? Docker maps ports to localhost right? Yes, it does. However because of how Docker works on Windows 7 this doesn't quite work all the way.<br>
Normally a Docker container runs directly on the host OS, which makes mapping the ports and IPs straightforward. <code>127.0.0.1</code> in the container is mapped to <code>127.0.0.1</code> on the host.<br>
However, because Docker on Windows 7 runs inside a VirtualBox Linux VM, the host is not Windows but the Linux VM! And because VirtualBox maps ports not to <code>127.0.0.1</code> but to <em>real</em> IP addresses, the mismatch occurs.<br>
This image shows the difference between Docker on Windows 7 vs Windows 10:<br>
<img src="https://blog.codenizer.nl/content/images/2017/08/docker-vs-vbox-1.png" alt="Docker nested in VirtualBox nested in host OS"></p>
<p>Luckily the PASV vs EPSV issue is easily fixable, we just changed:<br>
<code>ftpClient.DataConnectionType = FtpDataConnectionType.PASV;</code><br>
to:<br>
<code>ftpClient.DataConnectionType = FtpDataConnectionType.EPSV;</code></p>
<p>FTP upload: solved!</p>
<h3 id="fixingthemtests2">Fixing them tests (2)</h3>
<p>Unfortunately, the test still didn't pass. The file we expected still didn't exist on the FTP server... or does it?</p>
<p>Looking at the code for the assertion it was using a FileHelper class. Wait a minute, <strong>File</strong>Helper? We're using a FTP server right?</p>
<p>So it turns out that the test was set up to expect the FTP location to be mapped to a local folder which then gets inspected. Major assumption there! But &quot;no worries&quot; (as our Aussie colleagues would say), we can just map the appropriate volume in the Docker container!</p>
<p>Yes, well, no.</p>
<p>Because Docker on Windows 7 is something of a hacktastic solution, mapping volumes actually doesn't quite work. We've specified the folder in the <code>docker-compose.yml</code> but it kept turning up as not mapped in the Docker container. Apparently we are out of luck here until we switch to Windows 10 (not something I'll do to satisfy a test though).<br>
<strong>Update:</strong> Although VirtualBox isn't used anymore, the IP address jumble hasn't gone away. (Thanks <a href="https://twitter.com/phermens">Pat</a>)</p>
<p>Changing to a different tack: why are we looking at the local file system anyway? Aren't we interested in whether the file exists <em>on the FTP server?</em></p>
<p>The easy solution was just to use the FtpHelper instead of the FileHelper and hey presto, test passes!</p>
<h3 id="lessonslearned">Lessons learned</h3>
<p>I think the most important thing to learn from investigating this test failure is to first try to understand what the test attempts to do and, for me personally, to draw a picture of what is going on. Something like this:<br>
<img src="https://blog.codenizer.nl/content/images/2017/08/test-overview-1.png" alt="Schematic overview of test case"><br>
would have helped a lot in figuring out which part(s) of the test were failing and identify the assertion that was checking the local directory instead of the FTP server.</p>
<p>Another important point is that you cannot depend on the behavior of 3rd party systems. Something that became painfully clear with FileZilla vs pure-ftpd for example. I think we need to have a look at the behavior of our 3rd party and their FTP server to ensure we can reliably replicate it in our test environments.</p>
<p>Also, where Docker is involved, there is some (arcane) knowledge you need to have when attempting to troubleshoot issues. I feel that the &quot;just use Docker, your problems will be solved&quot; attitude doesn't always hold up. Docker is a piece of software you'll need to grok in order to use it effectively. I'll definitely be looking at organizing some training/workshop around this in the next few weeks.</p>
<h3 id="conclusion">Conclusion</h3>
<p>Test that shit. Really, do it.</p>
</div>]]></content:encoded></item><item><title><![CDATA[NDC Oslo - Conference day 1 - part 2]]></title><description><![CDATA[<div class="kg-card-markdown"><p>All right, a bit of sleep and a lot of coffee, NDC day 2 is about to kick off so I've got a little bit of time to continue with the recap of day 1.</p>
<h2 id="whenfeatureflagsgobad">When feature flags go bad</h2>
<p>This talk by Edith Harbaugh (<a href="https://twitter.com/Edith_H">@Edith_H</a>) followed nicely on</p></div>]]></description><link>https://blog.codenizer.nl/ndc-oslo-conference-day-1-part-2/</link><guid isPermaLink="false">5a51d8c108c32b7b6bf00628</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Sat, 17 Jun 2017 12:55:10 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>All right, a bit of sleep and a lot of coffee, NDC day 2 is about to kick off so I've got a little bit of time to continue with the recap of day 1.</p>
<h2 id="whenfeatureflagsgobad">When feature flags go bad</h2>
<p>This talk by Edith Harbaugh (<a href="https://twitter.com/Edith_H">@Edith_H</a>) followed nicely on the one from Sam Newman and continued his story on feature toggles.</p>
<p>Feature toggles are a way to enable (or disable!) features in a deployed system (or application, or service) in a way that is non-disruptive to the usage of the application. This immediately brings us to the first key point about feature toggles: if you can only toggle at deployment time, it's configuration! Feature toggles are meant to be used on a running system without incurring downtime.</p>
<p>A nice example Edith shared with us is one from PagerDuty, they have certain feature flags around features that improve performance. But they disable them before they go to lunch. Why? Well everybody wants to have lunch in peace and quiet right? This shows us that adopting feature toggles allows you to have control over your live systems.</p>
<p>Although we might think of using feature toggles only as an on-or-off scenario, you can use this mechanism to give some users early access to certain features. Or even make sure some people <em>don't</em> get access (like your competitors!). A feature toggle itself is just a toggle, but the strength of this pattern is that it allows you to use a lot of information about a user to flip a toggle at run-time. Imagine only enabling a feature for people who visit NDC for the first time, are taller than 6 ft. and haven't shaved in a week.... Well you get the point.</p>
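<p>A minimal sketch of what such run-time evaluation can look like (the flag name and user attributes are of course hypothetical):</p>

```javascript
// The toggle itself is just on/off; the interesting part is the rule that
// decides it at run-time from whatever you know about the user.
const rules = {
  "beard-mode": user =>
    user.firstNdcVisit && user.heightFt > 6 && user.daysSinceShave >= 7,
};

function isEnabled(flag, user) {
  const rule = rules[flag];
  return rule ? rule(user) : false; // unknown flags default to off
}
```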
<p>What's really important when implementing feature toggles is that they should <strong>always be short lived</strong>. There is nothing more dangerous than a feature toggle that remains and is suddenly flipped. Ask the people at Knight Capital... oh wait, you can't because they went bankrupt<sup><a href="#ref-1">1</a></sup>. When you introduce a feature toggle, you introduce technical debt (the old feature). Be sure to remove them as quickly as possible.</p>
<h2 id="awsserverlesswithnetcore">AWS Serverless with .Net core</h2>
<p>One of the things I like so much about conferences is that you get to meet the people building the frameworks and tools you are using every day. Norm Johanson is part of the team that builds the AWS tooling for the .Net platform.</p>
<p>The focus of this talk was demonstrating how you can use C# and dotnet core to build functions that run on AWS Lambda. Lambda is the serverless computing platform built by Amazon and for a long time it's only been possible to run NodeJS workloads in there. However since November 2016 it is possible to use dotnet core, so now we can leverage C# to build functions and use the tools we love to do it.</p>
<p>The AWS tooling is supported in VS 2017 (via the extension gallery), VS 2015 (as downloadable package from Amazon) and the dotnet core CLI. The beauty is that the Visual Studio extensions use the dotnet core CLI to do all the work. This also makes it super easy to integrate this in CI pipelines, no Visual Studio install needed or custom PowerShelling involved. The extensions also give you project templates (in both VS and dotnet core CLI) that will help you bootstrap a Lambda function project.</p>
<p>In his demo Norm showed that you can easily deploy the functions to AWS from within Visual Studio. However he was quick to point out that this is nice for getting to know Lambda but you should actually leverage tools like CloudFormation to manage your deployments. AWS does provide a number of ready to use templates (or stacks as they call it) which makes setting that up a lot easier.</p>
<p>When building functions with dotnet core it is important to realise that only netcoreapp1.0 is currently supported because Amazon only wants long-term support platforms. The expectation is that when dotnet core 2.0 is released it will become available relatively soon.</p>
<p>To take this even further, it is also possible to run full ASP.Net Core apps in AWS Lambda! Using API gateway to expose the app you can run a webapp (or WebAPI) completely serverless. Be aware though that all static content is now also being processed by your lambdas so you might want to route those calls directly to S3 (pro-tip!)</p>
<p>Check out the video Q&amp;A with Norm about Lambda <a href="https://www.youtube.com/watch?v=DD0q8lh_FiU&amp;t=32s">here</a></p>
<h2 id="recap">Recap</h2>
<p>So, after day 1 we can already say that the session agenda at NDC is full of good stuff! We've gone from cowboys, to authorization, toggled our way through features and merge conflicts to running applications in serverless environments. And this was just day 1!<br>
What's really great is that everybody here is so relaxed, I've met some cool people and have been able to talk to a bunch of the speakers. My cunning plan<sup><a href="#ref-2">2</a></sup> is also starting to work out and I've recorded a bunch of video Q&amp;A's with the speakers. (Which you can find <a href="https://www.youtube.com/channel/UCy7P-QDoOsXVZA304_4kN_Q">here</a>)</p>
<sup>
<a name="ref-1">[1]</a> <a href="https://dougseven.com/2014/04/17/knightmare-a-devops-cautionary-tale/">https://dougseven.com/2014/04/17/knightmare-a-devops-cautionary-tale/</a><br>
<a name="ref-2">[2]</a> <a href="https://youtu.be/AsXKS8Nyu8Q?t=9">https://youtu.be/AsXKS8Nyu8Q?t=9</a>
</sup>
</div>]]></content:encoded></item><item><title><![CDATA[NDC Oslo - Conference day 1]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I don't think I've ever seen a conference open this way. Everybody gathered on the exhibition floor around a stage in the center with a massive screen hanging above it with a countdown timer. When the timer ran out we could hear the intro of the Dire Straits song <a href="https://www.youtube.com/watch?v=wTP2RUD_cL0&amp;feature=youtu.be&amp;t=35">&quot;</a></p></div>]]></description><link>https://blog.codenizer.nl/ndc-oslo-conference-day-1/</link><guid isPermaLink="false">5a51d8c108c32b7b6bf00627</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Wed, 14 Jun 2017 20:47:14 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>I don't think I've ever seen a conference open this way. Everybody gathered on the exhibition floor around a stage in the center with a massive screen hanging above it with a countdown timer. When the timer ran out we could hear the intro of the Dire Straits song <a href="https://www.youtube.com/watch?v=wTP2RUD_cL0&amp;feature=youtu.be&amp;t=35">&quot;Money for nothing&quot;</a>, what followed was a very NDC special version of the song! The lyrics were changed to introduce the speakers, really cool!</p>
<p>Once the song was done and the intro video of the NDC organizers was finished it was time for...</p>
<h2 id="thekeynote">The Keynote</h2>
<p>And then there was a cowboy on the stage... A what?? Yes, a cowboy. Well he sure looked like one but it was time for Dylan Beattie to kick off NDC Oslo.</p>
<p>What better way to start than asking if there are any questions? Not that he forgot the rest of his presentation and skipped to the end but rather to set the stage (ha ha, pun intended) for what he was going to talk about.</p>
<p>Because if we don't ask questions, we never start figuring anything out and we grind to a halt. Dylan talked a lot about what drives us as human beings (and us as developers even more) to go forth and find out &quot;hey, can I eat that?&quot;</p>
<p>From there he took us through the process of trying to answer a question. For example, &quot;What is the capital of Norway?&quot; Depending on the context you're asking it in, it can either be Oslo, <a href="https://www.nbim.no/">a ridiculously large number</a> or the letter 'N'.</p>
<p>In the process of answering questions us humans have, along the way, invented some pretty amazing stuff. From fire, books and calculus to boats, rockets and GPS. Toss in the World Wide Web and if you look really close, it was all done in support of answering questions. (To which we all know the ultimate answer is 42)</p>
<p>Dylan had one final piece of advice for us: &quot;Use flatscreens&quot;. In the style of Baz Lurhmann's <a href="https://www.youtube.com/watch?v=sTJ7AzBIJoI">&quot;Wear sunscreen&quot;</a> he told us to go forth, ask why a lot and enjoy NDC Oslo!</p>
<h2 id="implementingauthenticationandauthorization">Implementing authentication and authorization</h2>
<p>In this session by Dominick Baier &amp; Brock Allen the focus was more on authorization than authentication. It turns out that authentication is a relatively simple problem in this space, sure getting it right is a challenge but there are only so many variations.</p>
<p>The trick however is how to do authorization.</p>
<p>From their experiences (and to be honest my own as well), authorization is a very application specific problem, and although there are some reusable mechanisms there are always enough things that make that approach not work.</p>
<p>What happens a lot is that tokens generated by identity providers get abused to store permissions and claims. This leads to very large tokens and puts the responsibility of working out which claim means what with the identity provider instead of the application (and its business domain). For example <code>Delete</code> may mean something completely different in service A than in service B, but now you can't differentiate other than to push more permissions into the token.</p>
<p>Instead we should be looking at mapping identity to permissions in the context of the service and it's business domain. That means that this mapping should happen not in the identity provider but in an authorization provider. That provider can be a completely separate service (even centralised for many services) or even an in-memory one that lives inside of the context of the service itself.</p>
<p>Key is that mapping identity to permissions is a domain specific problem and should be solved there.</p>
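<p>To make the idea concrete, here is a minimal sketch (my own, not from the talk) of what such an in-memory authorization provider could look like; all names and permissions are hypothetical:</p>

```python
# Hypothetical sketch of an in-memory authorization provider: the service
# maps an identity (the subject from the token) to permissions that only
# have meaning inside this service's business domain.
class AuthorizationProvider:
    def __init__(self):
        # In a real service this mapping would live in a store owned by
        # the service, not inside the identity provider's token.
        self._permissions = {}

    def grant(self, subject, permission):
        self._permissions.setdefault(subject, set()).add(permission)

    def is_authorized(self, subject, permission):
        return permission in self._permissions.get(subject, set())


auth = AuthorizationProvider()
# "invoices:delete" means delete *in this service's domain*; another
# service can define its own "delete" without any token bloat.
auth.grant("alice", "invoices:delete")

print(auth.is_authorized("alice", "invoices:delete"))  # True
print(auth.is_authorized("bob", "invoices:delete"))    # False
```

<p>The same interface could just as well be backed by a separate (even centralised) authorization service, without the calling code changing.</p>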
<h2 id="serverlessrealityorbs">Serverless: Reality or BS</h2>
<p>Serverless systems are one of the main themes here at NDC, and this talk tries to shed some light on the different architectural styles and platforms to build serverless systems. Lynn Langit is an independent cloud architect who has experience building these systems on the well known platforms of Amazon, Google, and Microsoft.</p>
<p>Serverless systems started when AWS Lambda was introduced, or rather when S3 became available. The simplest form of serverless is a static site running from an S3 bucket! Let that sink in for a minute. Because AWS was the first provider to jump into this space, they currently have the platform that is furthest ahead compared to Google or Azure. However, both are catching up rapidly, in the case of Azure with some nice features like <a href="https://azure.microsoft.com/en-us/services/functions/">Logic Functions</a>.</p>
<p>Of course there are some caveats to building serverless platforms. The success largely depends on the type of workload you have. An example is the Australian census. Built as a government IT project, it was way too expensive, delivered too late and when it went live it crashed because millions of people tried to use it at the same time. Two students rebuilt it as part of a hackathon project using AWS lambda and simulated the load. It cost three days and $500 to build <em><strong>and run!</strong></em></p>
<p>So if you have peak load or run processes on a schedule, serverless computing can take you a long way. Instead of paying for servers sitting idle, you pay for actual usage and don't worry about managing servers (with the additional cost that is involved there).</p>
<p>You should also worry about vendor lock-in. These vendors are good at giving you freebies to get you on board, but once you are there and using many of their services it can become really hard to switch providers. Plan for this when designing your serverless architecture: have an exit strategy and consider whether using micro or nano instances would also be a possibility.</p>
<h2 id="featurebranchesandtoggles">Feature branches and toggles</h2>
<p>This talk by Sam Newman is about whether you should use feature branches or practice trunk-based development with feature toggles. Hint: it's the latter.</p>
<p>Some good examples of the pain of merging led us to the research done by the people behind the &quot;State of DevOps&quot; reports. They concluded that practicing trunk-based development leads to faster feature delivery, fewer problems with integrating changes from many developers and generally better software.</p>
<p>A good contrast was made with using pull requests in open source projects. These projects deal with untrusted committers that submit changes, and the pull request mechanism gives you as maintainer much better tools to manage that. However, most teams consist of developers who are trusted committers; they're your team mates! Therefore it makes more sense to not use PRs but instead make sure that everybody commits small changes frequently, reducing the pain of integration.</p>
<hr>
<p>Stay tuned for the rest of day 1, soon to follow!</p>
</div>]]></content:encoded></item><item><title><![CDATA[NDC Oslo - Workshop days]]></title><description><![CDATA[<div class="kg-card-markdown"><p>After arriving in Oslo on Sunday and doing the touristy walkabout through the city, Monday saw the start of the <em>Designing Microservices</em> workshop by <a href="http://www.samnewman.io/">Sam Newman</a>. Now personally I've been waiting for a good opportunity to jump on the microservices bandwagon for some time now and joining this workshop is</p></div>]]></description><link>https://blog.codenizer.nl/ndc-oslo-workshop-days-2/</link><guid isPermaLink="false">5a51d8c008c32b7b6bf00624</guid><category><![CDATA[NDC]]></category><category><![CDATA[NDCOslo]]></category><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Tue, 13 Jun 2017 19:54:37 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>After arriving in Oslo on Sunday and doing the touristy walkabout through the city, Monday saw the start of the <em>Designing Microservices</em> workshop by <a href="http://www.samnewman.io/">Sam Newman</a>. Now personally I've been waiting for a good opportunity to jump on the microservices bandwagon for some time now and joining this workshop is definitely preparation to doing just that.</p>
<p>Of course, with microservices being, in the opinion of many (yours truly included), a bandwagon thing, it doesn't hurt to try and understand what they actually are.</p>
<p>Now before I continue: these are some insights I've gained from the workshop, but I'm not Sam and I highly recommend doing this workshop yourself if you have the opportunity. He's way better at explaining this (and arguably funnier).</p>
<p>With that out of the way, let's get going!</p>
<p>So then, <strong>should you build microservices?</strong></p>
<p>Well the only answer to that is: It depends. (I have it on good authority that this is what a  consultant would answer<sup><a href="#ref-1">[1]</a></sup>). But in truth, it really does. In the two days of the workshop we kept coming back to this question and I'll try and explain why it depends.</p>
<p>The micro of microservices is a bit of a misnomer because it implies a tiny tiny service. How tiny? <a href="http://www.ben-morris.com/how-big-is-a-microservice/">Nobody</a> <a href="https://martinfowler.com/microservices/">really</a> <a href="https://particular.net/blog/goodbye-microservices-hello-right-sized-services">knows</a>. But no fear! There is a better definition. Sam gave us the following:</p>
<blockquote>
<p>Small <strong>Independently Deployable</strong> services that <strong>work together</strong>, modelled around a <strong>business domain</strong></p>
</blockquote>
<p>This reminded me a lot of Uncle Bob's definition of single responsibility, of which he says to consider it as a <em>single reason for change</em><sup><a href="#ref-2">[2]</a></sup>. One of the key points is that these services can be deployed without requiring <em>another</em> service or application to be deployed in lock-step with them. In a way, a monolithic system is in fact independently deployable; it's one unit of deployment.</p>
<p>But again, the definition mentions small. But what is small? When is a service small enough? I think that answer could either be 42<sup><a href="#ref-3">[3]</a></sup> or that you need to define that for yourself. Size largely depends on what works for your team and organisation.</p>
<p>It is more important that the services you create are designed around business domains. This helps to isolate functionality for that domain inside a single service (which can use many others!). Also, the responsibility for changing that service should lie with one team, not with many people making changes (and thus sharing responsibility, which doesn't work<sup><a href="#ref-4">[4]</a></sup>).</p>
<p>When you start figuring out which parts of your application can be isolated and turned into microservices it is a good idea to read up on Domain Driven Design. Designing microservices is largely identifying bounded contexts and defining the models you expose to your consumers.</p>
<p>Defining the boundaries of your services will be the hardest part, and what you need to keep in mind is the cost of change. Changing the shared model of your service when it's running in production comes at a pretty high cost; moving a sticky note on a wall in your design phase is pretty darn cheap. Also keep in mind that when your system is still new it is perfectly fine to build it as a modular monolith and split it out into services later, when the domains and contexts are clearer and less likely to change.</p>
<p><strong>But wait! You are at a technical conference! What about the technology?</strong></p>
<p>So far I've only talked about the design and businessy stuff but obviously there is more to it than that. Truth of the matter is, all the technology to build microservices already exists and has existed for a number of years already. It's more a matter of using it in the right way and applying practices rigorously.</p>
<p>Now you don't necessarily need to use REST to build microservices; XML-RPC or CORBA work just as well (and this is 1990s-era technology!). However, HTTP (and REST) gives you an advantage in providing a useful protocol for communication between services. Just be aware that in some circumstances it might make more sense to communicate over UDP, because you need very high volume and losing some data is fine. Keep your options open.</p>
<p>With the introduction of more services in your landscape you will need to focus more on how you deploy your services; manually deploying doesn't cut it anymore. Invest in a proper Continuous Integration / Continuous Delivery pipeline in which you can integrate automated testing. This helps reduce the time you need to spend getting services into production and provides more confidence that what is deployed actually works.</p>
<p>Once deployed to production, keeping track of how your landscape is performing becomes challenging. As with deployment, manually checking status is madness. Manual alerting is even worse (getting that call from a user is a poor alerting mechanism!). You should collect logs from your various services in a centralised location like ELK or Splunk so you can diagnose failures better, as looking for log files across many services becomes too painful.</p>
<p>Because in a microservices landscape an action performed by a user can cross multiple service boundaries, it can be challenging to stitch together log entries when something goes wrong. To improve this it is a good idea to implement correlation ids and attach them to your log entries. It makes it that much easier to track calls across multiple services. Tools like Zipkin can leverage this to help track down which service is taking a long time, for example.</p>
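<p>As an illustration (my own sketch, not from the workshop), attaching a correlation id to log entries can be as simple as a logging filter; the id would normally arrive on an incoming header such as <code>X-Correlation-ID</code>:</p>

```python
import logging
import uuid

# Sketch: a logging filter that stamps every log record with a
# correlation id, so that entries for one user action can be
# stitched together across services.
class CorrelationFilter(logging.Filter):
    def __init__(self, correlation_id):
        super().__init__()
        self.correlation_id = correlation_id

    def filter(self, record):
        record.correlation_id = self.correlation_id
        return True

# Generate an id at the edge of the system; downstream services would
# reuse the id they receive instead of generating a new one.
correlation_id = str(uuid.uuid4())

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(correlation_id)s %(name)s: %(message)s"))

logger = logging.getLogger("orders")
logger.addFilter(CorrelationFilter(correlation_id))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")  # every entry now carries the correlation id
```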
<p>One bit of advice Sam gave us, and I agree with this one: build this stuff in from the start! It is so much more difficult to do it afterwards (cost of change!). When you start building the first services and deploying them into production you will find out what works for your organisation and what you actually need. Use that knowledge in the next services you build so you improve your overall platform while you are moving forward, instead of trying to retrofit logging and monitoring onto existing services.</p>
<p>Another important thing to consider is using Infrastructure-as-Code to help you automate the deployment of your infrastructure. It also gives you more confidence that what is running in production is actually what you intended, instead of a collection of special snowflakes that nobody can remember how they were built. With this approach you can also more easily destroy and rebuild environments, which is really useful in, for example, testing scenarios or disaster recovery (to name a few; there are many more).</p>
<p>Of course there is a lot more to talk about on the technology side but I'll save that for the next blog post as this one got long already. More stuff from NDC to come, also I hope to do a lot more video Q-and-A sessions with some of the speakers so watch this space.</p>
<sub>
<a name="ref-1">[1]</a> Yes, I gave this answer and was slammed for it...<br>
<a name="ref-2">[2]</a> <a href="https://8thlight.com/blog/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html">https://8thlight.com/blog/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html</a><br>
<a name="ref-3">[3]</a> You have no idea where your <a href="https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy">towel is</a><br>
<a name="ref-4">[4]</a> Personal observation and gut feel. Shared responsibility makes it unclear who actually is responsible and leads to deflection of work (no, you need to talk to that guy...)
</sub>
</div>]]></content:encoded></item><item><title><![CDATA[Set TFS vNext build variables via the REST API]]></title><description><![CDATA[<div class="kg-card-markdown"><p>At $WORKPLACE we're currently busy porting over our XAML builds to the TFS 2015 build system and we wanted to be able to queue another build when the current build completes.<br>
Fortunately for us the REST API makes that a trivial exercise and that particular bit was running fairly quickly.</p></div>]]></description><link>https://blog.codenizer.nl/set-tfs-vnext-build-variables-via-the-rest-api/</link><guid isPermaLink="false">5a51d8c208c32b7b6bf0062d</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Wed, 04 May 2016 14:11:48 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>At $WORKPLACE we're currently busy porting over our XAML builds to the TFS 2015 build system and we wanted to be able to queue another build when the current build completes.<br>
Fortunately for us the REST API makes that a trivial exercise and that particular bit was running fairly quickly.</p>
<p>However...</p>
<p>Because we have a build that will automatically get a version number we needed to pass that version number on to the next build in line. &quot;Aha!&quot;, I hear you think, &quot;You can use variables for that!&quot; and yes, you are correct. TFS 2015 allows you, just as in XAML builds, to specify additional variables you can set at queue time.</p>
<p>Alas, the <a href="https://www.visualstudio.com/en-us/integrate/api/build/builds#Queueabuild">REST API documentation</a> doesn't specify how to do that.</p>
<p>But after a bit of poking around in the JSON data that is returned when you retrieve information about a build I noticed the <code>parameters</code> property which seemed to contain the variables used at build time.</p>
<p>So when you want to pass variables when queueing a build through the REST API you can use something like this:</p>
<pre><code class="language-json">{
  &quot;definition&quot;: {
    &quot;id&quot;: 25
  },
  &quot;sourceBranch&quot;: &quot;refs/heads/master&quot;,
  &quot;parameters&quot;: &quot;{ \&quot;version\&quot;: \&quot;1.2.3.4\&quot; }&quot;
}
</code></pre>
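<p>One thing that is easy to miss: <code>parameters</code> is a JSON-encoded <em>string</em>, not a nested object. A small sketch of building the payload (illustrative; the definition id and variables are example values):</p>

```python
import json

# Sketch: build the queue-build request body. Note that "parameters"
# must itself be a JSON-encoded string, not a nested JSON object.
def build_queue_payload(definition_id, branch, variables):
    return {
        "definition": {"id": definition_id},
        "sourceBranch": branch,
        "parameters": json.dumps(variables),  # double-encoded on purpose
    }

payload = build_queue_payload(25, "refs/heads/master", {"version": "1.2.3.4"})

# This dict would then be POSTed (Content-Type: application/json) to the
# queue-build endpoint, e.g.:
#   POST {tfs}/{collection}/{project}/_apis/build/builds?api-version=2.0
print(json.dumps(payload, indent=2))
```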
</div>]]></content:encoded></item><item><title><![CDATA[The purple crocodile is imaginary!]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I'm willing to bet the following is going to feel very familiar to you:</p>
<p>Here you are at Awesome Company, happily working on the SuperProjectOfTheMonth and about every 15 minutes you run into this little thing that's costing you just a microsecond of brain activity to notice before you get</p></div>]]></description><link>https://blog.codenizer.nl/the-purple-crocodile-is-imaginary/</link><guid isPermaLink="false">5a51d8c208c32b7b6bf0062f</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Mon, 25 Jan 2016 17:08:16 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>I'm willing to bet the following is going to feel very familiar to you:</p>
<p>Here you are at Awesome Company, happily working on the SuperProjectOfTheMonth and about every 15 minutes you run into this little thing that's costing you just a microsecond of brain activity to notice before you get on with whatever it was you're doing.</p>
<p>Now it'll only cost you a microsecond, I mean, I probably can't even blink that fast, but if that happens every 15 minutes I wouldn't get anything done, let alone remember what I was doing before.</p>
<p>Luckily for us, we work in IT and we can <strong>DoStuff<sup>TM</sup></strong></p>
<p>At this point I'd like to make another bet:</p>
<p>You get all excited about the massive improvement you're going to make when suddenly a wild impediment appears and says &quot;No we can't do this because <em>reasons</em>&quot;</p>
<p>If you're Dutch (or have lived here for some time) you may have come across a tv-commercial that goes like this:</p>
<blockquote>
<p>Two little kids are with their mother at a swimming pool and they seem to have lost their purple inflatable crocodile. When they go to the reception desk and see that it's propped against the wall behind the counter they ask if they can have it back.<br>
The man behind the counter responds that they will need to fill in a form to which the mother replies: &quot;But it's right there, that's the crocodile&quot;<br>
The man won't be budged and replies: &quot;But you still will need to fill in the form&quot; at which point the mother walks away with her children looking completely bewildered.</p>
</blockquote>
<p>Now this is only a tv-commercial and may seem a bit &quot;heh, funny&quot; but it provides me with a nice example of what I see happening quite frequently.</p>
<p>Remember the bet I made earlier? It's not just the fact that <em>&quot;reasons&quot;</em> pop up that can be demotivating; there is a more fundamental problem here. Given previous experiences, it may very well be that you won't even bother to change or do anything <strong>because people will say no anyway!</strong></p>
<p>I feel that this way of thinking gradually gets you into a situation where you muddle on and become oblivious to the things you could change to improve your life: <strong>The purple crocodile has become real!</strong></p>
<p>But fear not!</p>
<p>While it can feel like trying to swim up a waterfall you should make an effort to change. <em>&quot;But won't that be tedious?&quot;</em> I can hear you say. Yes it most likely will, but I'd like to offer some advice. When someone says <strong>No!</strong>, there is probably a reason behind it and it may be that the nay-sayer thinks that you are aware of these reasons. As a bonus they may even think you're stupid because you're too thick to understand them.<br>
So my advice is: <em>Find out what those reasons are.</em></p>
<p>Because when you do, you can convince people why change is necessary. It can also help you clarify the need for change, and you may learn something new as well. A new perspective never hurts!</p>
<p>And sometimes it pays off to just do what you need and show people how you improved things. Sometimes it's easier to ask for forgiveness than permission.</p>
<p>I hope that you now no longer believe in purple crocodiles, or if you do, that you give it a nice name and show how it, too, can change.</p>
</div>]]></content:encoded></item><item><title><![CDATA[What I learned from organizing a developer conference]]></title><description><![CDATA[<div class="kg-card-markdown"><p>At the end of 2014 I had been toying with the idea to organize a conference for the developer group I work with. At the time I had two basic goals:</p>
<ul>
<li>Provide a platform for the devs to have a talk about the cool stuff they are working on</li>
<li>Get</li></ul></div>]]></description><link>https://blog.codenizer.nl/what-i-learned-from-organizing-a-developer-conference/</link><guid isPermaLink="false">5a51d8c208c32b7b6bf0062c</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Fri, 12 Jun 2015 09:12:15 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>At the end of 2014 I had been toying with the idea to organize a conference for the developer group I work with. At the time I had two basic goals:</p>
<ul>
<li>Provide a platform for the devs to have a talk about the cool stuff they are working on</li>
<li>Get all the devs together (again)</li>
</ul>
<p>Now the second bullet might seem a bit odd to put there as us nerds have the unique ability to recognize others on sight, but we've had so many projects move through the building and people moving in and out of teams that it got a bit difficult to figure out who works where. Also knowing your fellow devs makes it a lot easier to solve problems by asking someone you know, to find a sparring partner or <a href="http://en.wikipedia.org/wiki/Rubber_duck_debugging">rubber duck</a>.</p>
<p>As far as the first goal is concerned, I believe that in our dev group there are a lot of smart people that do interesting stuff on their projects, but also have fiddled with new libraries or tools that could help us build better software. At the regular dev meetings it's always a bit of a challenge to find people who want to talk about these topics, so how was I going to get them to speak at a conference? But more on that later.</p>
<p>The experience I usually have when I get a new idea is that I get really enthusiastic and full of energy, you know, that conquer-the-world feeling. Having seen many ideas die a quick and messy death for various reasons, I really wanted to avoid that this time around. So one of the first things I did was to pitch my idea to other devs and see if it was something people would like to see, a bit of market research if you will.</p>
<p>I think that from pitching my idea I learned the first lesson: get feedback early.</p>
<p>Organizing something all by yourself can be very rewarding if you pull it off but it also means that you need to be careful that you don't suffer from tunnel vision and create something that you might find useful but others might not. One suggestion I got was to join the GoForIt days, our innovation challenge and to gather a group of people to help me hammer out the details of the conference.</p>
<h1 id="goingforit">Going for it</h1>
<p>During the 24 hours of innovation (or: GoForIt day) I worked with a bunch of people (hi Jim, Michael &amp; Henk!) to determine what the conference would look like, what it should be called, how many sessions there would be, etc. We had a lot of discussion about how large we should make it: just our dev group? Invite all devs? Do we invite only devs, or should we try to get testers (QA engineers) and BAs to join us as well?</p>
<p>I think these questions led to lesson no. 2: keep it real.</p>
<p>At first I wanted to aim big: get as many people as we can. 100+ attendees? Yes, please! However, as this was the first time the conference was held, we decided to manage our own expectations and scaled it down a bit; we figured that if we could get about 60 people to show up, that would be great.</p>
<p>But getting people to show up, that means having great sessions and that means getting speakers for those sessions. Which leads us to...</p>
<h1 id="volunteeringinvitingspeakers"><s>Volunteering</s> Inviting speakers</h1>
<p>Within any organization there are always the usual suspects who will show up to meet-ups and tech sessions, and a number of those will jump at any chance to do a demo or presentation about something. So to make sure I got enough speakers to fill all the slots, I had a talk with a couple of likely candidates and pitched the conference idea. Luckily for me the reactions I got were all positive, so filling up the slots actually went pretty smoothly...</p>
<p>... until I got a cancellation.</p>
<p>Cue third lesson: always have a backup.</p>
<p>It's a sensible idea to have backups of your data but I think this also applies to basically anything else. Always have a plan B. Think about what could go wrong with plan A, it helps you identify problems that may occur later and to come up with alternatives for when that happens.<br>
I'm not saying that you always (all right, yes I did say always) have to have a plan B but what helped me is that when the shit hits the fan I can think: all right, bad things happen but let's move on.</p>
<p>Luckily I was able to quickly find another speaker who was able to put together a presentation on very short notice.</p>
<h1 id="timingandgettingpeopletojoin">Timing (and getting people to join)</h1>
<p>One thing I haven't talked about so far is the actual set-up of the conference. During the GoForIt we had quite some discussion on how long the individual sessions should be and what the overall conference schedule would be like.</p>
<p>We decided to plan the conference for an afternoon instead of a full day, as we thought it more likely for people to show up if it doesn't take up their whole day. We often face quite some pressure within projects, which makes it difficult for people to join these types of events.</p>
<p>Deciding on the length of a session is always a bit difficult; as a speaker it's easy to fill an hour or maybe even two on your particular topic, and I've frequently found myself needing to trim a session to fit a time slot. For the conference we wanted to have enough time for both the presentation and a discussion; we wanted to make sure that the sessions were not just one-way sending of information. We thought that making every time slot 1 hour would work out nicely: 45 minutes of presentation and 15 minutes of discussion.</p>
<p>Did that work? Yes, I think so. From my own session and the others that I've seen, there was plenty of time available for people to ask questions, which is what we wanted. I did get feedback from a couple of the speakers that filling a 1 hour slot was too much and that it might prevent other people from hosting a session. It's definitely something we need to think on.</p>
<p>Having said that, one problem was that we tried to cram too many sessions into one afternoon.</p>
<p>When we planned the conference we wanted to have enough sessions and I think we might have gone a bit overboard with that. The conference schedule ended up like this:</p>
<table>
<tr><td colspan="2">Keynote</td></tr>
<tr><td>Session 1 - Room 1</td><td>Session 1 - Room 2</td></tr>
<tr><td>Session 2 - Room 1</td><td>Session 2 - Room 2</td></tr>
<tr><td>Session 3 - Room 1</td><td>Session 3 - Room 2</td></tr>
</table>
<p>Starting at 13:00, with every session taking an hour plus a break between sessions, meant that we finished at 17:45. At the time we didn't think this was a problem, but we ended the last sessions with about 5 people in either room... <em>ouch!</em></p>
<p>So I think the lesson here is: plan carefully!</p>
<p>It's easy with a project like this to focus too much on the content of the sessions, because that is one of the things you obviously know and care about. But organizing an event is about so much more than just that. I remember speaking to someone from Microsoft a couple of years ago about the TechEd and TechDays events, and one of the remarks was that they had to worry even about the temperature of the rooms! I wish I had remembered that particular conversation when I was doing the planning, and not now when I'm writing this blog.</p>
<p>The thing we ran into was that apparently Wednesday is a popular day to work at home or have a day off within our company. So it's likely we would have had more people join if we'd only planned the conference on, say, a Thursday.</p>
<p>Also I think that most people just found 17:45 too late to stay around, starting earlier on the day would, in hindsight, have been better.</p>
<h1 id="communicationcommunicationcommunication">Communication, communication, communication</h1>
<p>How do you announce an event like this? We figured that the newsletters that get e-mailed around every month would serve as a starting point; however, we have seen that a lot of people just don't read them. To make sure we got enough exposure we decided to print posters and bring the event to everyone's attention that way. We've also got these nice tv-screens in our company restaurant on which we were able to put a promotional video.</p>
<p>The effect of all this? Not a whole lot.</p>
<p>It turned out that when I spoke to people before the conference, a lot actually hadn't heard about it, despite our efforts at reaching as many people as we could. I think that we should have communicated the date and the general theme of the conference a lot sooner than the couple of weeks beforehand that we did.</p>
<p>I think that we also could have used the various team managers and lead developers more to promote the event and to encourage other people to join as well.</p>
<h1 id="conclusion">Conclusion</h1>
<p>The question that kept running through my head during the conference and afterwards at the drinks was: can I call the conference a success?</p>
<p>I think I can safely say yes. We had great sessions, people actually turned up to watch them and the overall feel was great!</p>
<p>Would I do this again? Oh yes! It has been a great experience for me and I think I've learned a couple of valuable lessons in organizing the conference.</p>
<h1 id="thanks">Thanks</h1>
<p>All right, this isn't an award ceremony but I would like to thank a couple of people, Michael &amp; Jim for the brainstorms and pulling this off, Henk for the reality checks, Wouter for being our fairy godfather and all the speakers for their enthusiasm.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Deployment hook in XL Deploy]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Now that we've migrated to TFS 2013 we now have Team Rooms available for all the projects. TFS provides a number of default events such as work item state changes and builds that have completed. Because we use XL Deploy to handle our automated deployments I wanted to include those</p></div>]]></description><link>https://blog.codenizer.nl/deployment-hook-in-xl-deploy/</link><guid isPermaLink="false">5a51d8c008c32b7b6bf00622</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Fri, 08 May 2015 16:03:39 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Now that we've migrated to TFS 2013 we now have Team Rooms available for all the projects. TFS provides a number of default events such as work item state changes and builds that have completed. Because we use XL Deploy to handle our automated deployments I wanted to include those events to the team room as well.</p>
<p>XL Deploy has a nice REST API but unfortunately there is no way to register a hook when a deployment completes. Fortunately XL Deploy has a way to add rules that can execute extra steps at various points in a deployment. I've used this to add a step to the end of a deployment plan that uses a little <a href="http://www.jython.org/">Jython</a> script to call a remote REST API whenever the deployment completes.</p>
<h2 id="addingarule">Adding a rule</h2>
<p>XL Deploy stores its rules in the xl-rules.xml file under the <code>$XLDEPLOYPATH/server/ext</code> folder, and this is where we'll add the necessary configuration to run our script.<br>
The rule definition looks like this:</p>
<pre><code class="language-XML"> &lt;rule name=&quot;DeploymentSucceeded&quot; scope=&quot;post-plan&quot;&gt;
    &lt;steps&gt;
        &lt;jython&gt;
          &lt;order&gt;99999&lt;/order&gt;
          &lt;description&gt;Execute application deployed hook&lt;/description&gt;
          &lt;script-path&gt;hooks/application-deployed.py&lt;/script-path&gt;
          &lt;jython-context&gt;
            &lt;application expression=&quot;true&quot;&gt;context.deployedApplication.name&lt;/application&gt;
            &lt;environment expression=&quot;true&quot;&gt;context.deployedApplication.environment.name&lt;/environment&gt;
            &lt;version expression=&quot;true&quot;&gt;context.deployedApplication.version.name&lt;/version&gt;
            &lt;hooksDict expression=&quot;true&quot;&gt;context.repository.read(&quot;Environments/Hooks&quot;)&lt;/hooksDict&gt;
          &lt;/jython-context&gt;
        &lt;/jython&gt;
    &lt;/steps&gt;
&lt;/rule&gt;
</code></pre>
<p>I've set the rule scope to <code>post-plan</code> to make sure that this step is only added after the validation and planning stages have completed. The rule adds a new jython step that instructs XL Deploy to call the script defined in <code>script-path</code>, which is a path relative to <code>$XLDEPLOYPATH/server/ext</code> on the XL Deploy server.</p>
<p>To provide the script with some useful information you can define a number of parameters in the <code>jython-context</code>. These parameters will be made available as variables in the Jython script. In this example I pass in the application, environment and version names. The <code>hooksDict</code> parameter points to a dictionary under <code>Environments/Hooks</code> in the XL Deploy repository, which contains the URL of the REST API we want to call.</p>
<pre><code>! Before you add this to your xl-rules.xml be sure to make a backup first!
</code></pre>
<p>Depending on the configuration of your XL Deploy instance you may need to restart the server before the rule is activated.</p>
<h2 id="thehooksdictionary">The Hooks dictionary</h2>
<p>To be able to configure the URL that the script will be calling I've added a dictionary under the Environments folder in the XL Deploy repository. This dictionary contains the following settings:</p>
<table>
<thead>
<tr><th>Key</th><th>Value</th></tr>
</thead>
<tbody>
<tr><td>host</td><td>localhost:8000</td></tr>
<tr><td>deployment-complete-url</td><td>/$application/$environment/$version/$username </td></tr>
</tbody>
</table>
<p>The <code>$application</code>, <code>$environment</code>, <code>$version</code> and <code>$username</code> placeholders will be replaced by the Jython script later on. If you leave <code>host</code> empty the script won't run, which makes it an easy way to disable the hook altogether.</p>
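<p>To make the placeholder substitution concrete, here's a minimal sketch using Python's <code>string.Template</code>, the same mechanism the Jython script below relies on. The values are the hypothetical ones used in the test scenario later in this post:</p>
<pre><code class="language-Python">from string import Template

# The URL template as stored in the Hooks dictionary
url_template = Template("/$application/$environment/$version/$username")

# Hypothetical values, as they would arrive via the jython-context
url = url_template.substitute(
    application="TestApp",
    environment="Production",
    version="TestPackage",
    username="admin",
)

print(url)  # -> /TestApp/Production/TestPackage/admin
</code></pre>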
<h2 id="creatingthescript">Creating the script</h2>
<p>Now that we have the rule in place we need to act whenever the step is triggered. The script is actually fairly simple: it collects the variables, figures out the current username running the deployment and calls a REST service:</p>
<pre><code class="language-Jython">import httplib
from string import Template

if hooksDict is not None and &quot;host&quot; in hooksDict.entries and &quot;deployment-complete-url&quot; in hooksDict.entries:
	username = context.task.username
	
	context.logOutput(&quot;Calling deployment-completed hook with: &quot; + username + &quot; deployed application &quot; + application + &quot; (&quot; + version + &quot;) to &quot; + environment)
	
	s = Template(hooksDict.entries[&quot;deployment-complete-url&quot;])
	url = s.substitute(application=application, version=version, environment=environment, username=username)

	conn = httplib.HTTPConnection(hooksDict.entries[&quot;host&quot;])
	conn.request(&quot;GET&quot;, url)
	r1 = conn.getresponse()
	conn.close()
</code></pre>
<pre><code>! Note that Jython is a Python-derived language and is whitespace-sensitive.
</code></pre>
<p>In the script you can see that the <code>deployment-complete-url</code> is retrieved from the dictionary and the placeholders are replaced with the actual values. This allows a bit of flexibility when defining a REST service endpoint.</p>
<h2 id="creatingtherestservice">Creating the REST service</h2>
<p>To demonstrate that this all works I put together a simple NodeJS app (and that's stretching the term app) to receive the request that the Jython script makes. It simply logs the request URL to a file and returns &quot;OK&quot;:</p>
<pre><code class="language-Javascript">var http = require(&quot;http&quot;);
var fs = require(&quot;fs&quot;);

var server = http.createServer(function(req, res) {
    fs.appendFile(&quot;/tmp/completed.log&quot;, req.url + &quot;\n&quot;, function(err) {
        if(err) {
          res.writeHead(500, { &quot;Content-Type&quot;: &quot;text/plain&quot; });
          res.end(err.message);
        }
        else {
            res.writeHead(200, { &quot;Content-Type&quot;: &quot;text/plain&quot; });
            res.end(&quot;OK\r\n&quot;);
        }
    });

});

server.listen(8000, '127.0.0.1');
</code></pre>
<p>This service is started by running:</p>
<pre><code>sauron@localhost&gt; node restservice.js
</code></pre>
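<p>If you want to check the round trip without XL Deploy or Node in the loop, the sketch below (plain Python 3, so <code>http.client</code> rather than Jython's <code>httplib</code>) stands up a throwaway server that mimics the Node service and fires the same GET request the hook script would:</p>
<pre><code class="language-Python">import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []

class HookHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        received.append(self.path)  # record the request URL, like the Node service logs it
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK\r\n")

    def log_message(self, fmt, *args):
        pass  # keep the console quiet

server = HTTPServer(("127.0.0.1", 0), HookHandler)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The same call the Jython hook script makes
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/TestApp/Production/TestPackage/admin")
response = conn.getresponse()
body = response.read().decode()
conn.close()
server.shutdown()

print(response.status, body.strip())  # -> 200 OK
print(received[0])  # -> /TestApp/Production/TestPackage/admin
</code></pre>
<p>The host, port and URL here are the hypothetical test values from this post; in a real setup they come from the Hooks dictionary.</p>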
<h2 id="testingthedeploymentcompletehook">Testing the deployment complete hook</h2>
<p>To test the rule and the actual script I created the following in a local XL Deploy instance:</p>
<ul>
<li>Application: TestApp</li>
<li>Package: TestPackage (without anything in it)</li>
<li>Environment: Production</li>
</ul>
<p>I also added the Hooks dictionary with the settings as described above.</p>
<p>On the deployment tab simply drag the package TestPackage into the left box and the environment Production into the right box and click Execute. Because there isn't actually anything in the package or the environment, the deployment will start immediately and you should see something like this:<br>
<img src="https://blog.codenizer.nl/content/images/2015/05/xldeploy-before.png" alt="XL Deploy, before deployment"><br>
And when the execution is complete you should see this:<br>
<img src="https://blog.codenizer.nl/content/images/2015/05/xldeploy-after.png" alt="XL Deploy, deployment completed"></p>
<p>If everything went OK then the file <code>/tmp/completed.log</code> (or wherever you've put it) should show:</p>
<pre><code>/TestApp/Production/TestPackage/admin
</code></pre>
<h2 id="wrapup">Wrap up</h2>
<p>While extending XL Deploy this way works and provides a nice, simple way to achieve what I wanted, I'm not entirely convinced it is the best approach. It may be that a plugin is better suited and could be more powerful, but that is something I need to look into in the future.</p>
<p>In the meantime, this hook now allows me to post messages to the TFS team room whenever someone deploys an application with XL Deploy!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Consuming WCF services with Reporting Services]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When looking to consume a WCF service in SQL Server Reporting Services (SSRS) I came across the XML datasource and a lot of articles showing me that while it is possible, it is a bit of a hassle with formatting the query and figuring out exactly which SOAP action to</p></div>]]></description><link>https://blog.codenizer.nl/consuming-wcf-services-with-reporting-services/</link><guid isPermaLink="false">5a51d8c208c32b7b6bf0062e</guid><dc:creator><![CDATA[Sander]]></dc:creator><pubDate>Sun, 11 May 2014 05:08:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>When looking to consume a WCF service in SQL Server Reporting Services (SSRS) I came across the XML datasource and a lot of articles showing me that while it is possible, it is a bit of a hassle with formatting the query and figuring out exactly which SOAP action to use and in what namespace.</p>
<p>That's why I thought I'd have a go at creating a data source for SSRS that truly consumes a WCF service and would give me better metadata, like which fields are in the result set, which parameters the SOAP operation needs, etc. So less of this:</p>
<pre style="padding-left: 30px;">&lt;Query&gt;
  &lt;Method Name="MyMethodRequest" Namespace="http://tempuri.org/"&gt;
    &lt;Parameters&gt;
      &lt;Parameter Name="Param1"&gt;&lt;DefaultValue&gt;ABC&lt;/DefaultValue&gt;&lt;/Parameter&gt;
    &lt;/Parameters&gt;
  &lt;/Method&gt;
  &lt;SoapAction&gt;http://tempuri.org/MyService/MyMethod&lt;/SoapAction&gt;
&lt;/Query&gt;</pre>
<p>but instead just:</p>
<pre style="padding-left: 30px;">MyMethod</pre>
<p>And let the extension figure out the rest for you.</p>
<p>I dug around on MSDN and found a quite <a title="Data Processing Extensions Overview" href="http://msdn.microsoft.com/en-us/library/ms152816.aspx" target="_blank">comprehensive article</a> about writing a custom Data Provider Extension (DPE) that provides a walkthrough on how to create such an extension. It didn't take me all that long to create my own extension and have it generate WCF clients at runtime. Easy as pie you'd say.</p>
<p>Well... not really. Deploying should be as simple as dropping the assembly in the correct location and editing the SSRS config file, but when I reloaded the web UI it didn't show my extension and the event log was riddled with errors. Yikes!</p>
<p>After a bit of digging it turns out that SSRS 2012 (the version I'm targeting) does not support the .Net 4.0 runtime but instead is based on .Net 2.0. Fortunately it did get an update so that .Net 3.5 is supported so we're good to go: WCF is supported in that version.</p>
<p>Having rebuilt the project against v3.5 I deployed the extension again but alas, the extension was still not visible. It took me a while to figure out, but I had switched the architecture from AnyCPU to x86, and that prevented the extension from loading. After I fixed that it worked like a charm!</p>
<p>I have put the sources for the extension up <a title="GitHub - WCF Data Source" href="http://github.com/sandermvanvliet/WCFDataSource" target="_blank">on GitHub</a> so that other people looking to use WCF services in SSRS can use it as well. Have a look here for details on how to deploy and use the extension.</p>
</div>]]></content:encoded></item></channel></rss>