Docker with Kubernetes in Azure

I had a really good workshop about using Docker with the orchestrator Kubernetes in Azure. Microsoft built a GitHub repository called Project Phoenix, which can be used to learn it step by step.

The workshop is about creating containers with Docker. Containers are something like virtual machines, but without a full operating system: they share the host’s kernel. They are a big benefit if you want to ship exactly the environment you tested in, and if you are using microservices.
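As a sketch of how such a container is described, here is a minimal two-stage Dockerfile for an ASP.NET Core service (the image tags and the project name are placeholders, not from the workshop):

```dockerfile
# Build stage: compile and publish the app with the .NET SDK image
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the runtime, no SDK, which keeps the image small
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyService.dll"]
```

You would build and start it with something like `docker build -t myservice .` followed by `docker run -p 8080:80 myservice`.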

For continuous integration and deployment we used Visual Studio Team Services. There you can build and release the whole application and put the image in a container registry. Everyone can then download the container image from the registry and start the application. If you want to use public container images, you can use Docker Hub. For closed projects we used Azure to set up a private registry.

After that we set up Kubernetes. If you have microservices, you can use Kubernetes to watch over your services. In Kubernetes a service is just a stable endpoint which is always available. The workload is done by containers grouped into pods. These pods can be destroyed and recreated at will; the client of your microservice will always call the service.
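A minimal sketch of that setup: a Deployment manages the pods, and a Service gives them the stable endpoint clients call (all names, labels and ports here are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 2                # pods can be destroyed and recreated at will
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: myregistry.azurecr.io/myservice:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myservice            # the stable endpoint clients always call
spec:
  selector:
    app: myservice           # routes traffic to whichever pods carry this label
  ports:
  - port: 80
```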

In general Kubernetes will help in the following aspects:

  • Cluster Management
  • Scheduling
  • Lifecycle & Health
  • Naming & Discovery
  • Load Balancing
  • Scaling
  • Image Repository
  • Continuous Delivery
  • Logging & Monitoring
  • Storage Volumes

Azure can set up the whole pipeline for building and deploying in the new “DevOps Project”. Just try it out. It looks really promising, because it supports multiple languages and can be set up in a few minutes. But be aware: it is a preview!

Feel free to leave a comment!


UI-Tests with CodedUi

Today I wrote some UI tests for my ASP.NET Core application. UI test frameworks let you record the steps you take and add some final assertions. The recording tool creates a “UI map”, which stores the mapping between the C# objects and the DOM (Document Object Model). I hit a wall really fast with the straightforward approach. I’ll show you what I went through.

In the example I have an index page with a list, and I can create items on a second page. The first try looks mostly like this:

var browser = BrowserWindow.Launch(new Uri("http://localhost:63238/"));
var div = browser.FindElement(By.Id("errors"));

The question is: what happens when I change an element? I could nest it, or need another click on a modal window first. Whatever the reason, I need to change ALL tests which contain that piece of code. This violates the SRP (Single Responsibility Principle). The example has only 2 pages and 2 elements to use. Just imagine this code in a real-world application.

The next point is the readability of the tests. They are really time-consuming to read and understand. Just try to figure out what the code does, or read the next test, which does the same.

var browser = BrowserWindow.Launch(new Uri("http://localhost:63238/"));
var index = new HomePage(browser);

The last test is called a DAMP (Descriptive And Meaningful Phrases) test. To accomplish that, write the navigation code for each page in one class. If you like to use the test recorder, use a “Coded UI Test Map” for each page in your application. Start with the general layout, which contains all actions that are present all the time.

Next, write what you can do on each page in a partial class. The return value of each action should always be a new page; this is also known as a fluent API. Here is my home page:

public class HomePage : SharedActionsAndElements
{
    public HomePage(BrowserWindow browserWindow) : base(browserWindow)
    {
    }

    public CreateContactPage CreateNewContact()
    {
        // perform the navigation click here, then hand back a page object
        return new CreateContactPage(BrowserWindow);
    }
}


With that your UI tests can adhere to the SRP and your code stays maintainable. Feel free to write a comment or ask questions if something is unclear.


The Richardson Maturity Model of REST

In this post I will talk a bit about the holy grail of REST (Representational State Transfer): the “Richardson Maturity Model”, which classifies REST APIs. The higher your level, the better you are. Let’s start with a bit of history about how I went through some of these levels.

At university the first thing we had to build as a team was a chat program. We took the straightforward approach: create some services which communicate with each other over a TCP/IP stream. This was our first distributed application; most of the time at university we built applications which only communicated with the file system and the console. The problem with the TCP stream is that it does not work well on the internet. In most cases the ports are blocked by a firewall for security reasons.

Some semesters later we learned about that “awesome” technology called Service-Oriented Architecture (SOA). It solved the problem of the blocked TCP ports: all data is enveloped in XML and tunneled through the HTTP ports, which are mostly open. If you did something like that, you are probably at least on Level 0. 🙂

In one company I worked for, they sent all data over a TCP/IP abstraction layer. This was probably Level -1. My task was to create a project from scratch, and I ended up using SignalR to communicate between client and backend. With SignalR I introduced resources to my application: I had a URL resource for people and different resources for competition management and other data. I leveled up to Level 1. In hindsight, the hardest part was getting management to approve my architecture and technology.

In the same project I often had a problem with saving entities: I only knew that if an identifier was not set, the entity was created, otherwise it was updated. For creating an entity we had to do things differently than for updating it. That bugged me back then, but I had no solution, because I had no time to educate myself and my hands were full with getting people to understand what we were doing with HTTP. I knew that HTTP has verbs like “POST”, “PUT” and “DELETE”, but SignalR didn’t support them, and I didn’t make the connection in my brain. Sometimes people just use good technology wrong: I should have used an ASP.NET controller for our RPC (Remote Procedure Call) style calls, and SignalR only for live messaging. This knowledge brought me up to Level 2.
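The distinction the verbs give you can be sketched with two raw HTTP requests (the URLs and payloads are made up for illustration):

```http
POST /api/people HTTP/1.1
Content-Type: application/json

{ "name": "Jane" }

PUT /api/people/42 HTTP/1.1
Content-Type: application/json

{ "name": "Jane Doe" }
```

POST creates and lets the server assign the identifier; PUT updates the resource addressed in the URL. The “is the identifier already set?” check simply disappears.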

This sounds really good, but it was not the end of the road. The project I worked on had really few developers (one), and my colleague with the vast amount of domain knowledge had to maintain our legacy product. One day, management figured we had to do a release at the end of the month. The short deadline with too few features implemented resulted in me getting 4 other developers to work with. In our meetings we explained the tickets and then implemented them in the sprint. Several times our frontend guru came in and asked me about the workflow for the user. I rolled my eyes, but I knew he couldn’t learn all the processes in one month; it took me several years and I still don’t fully understand them. I implemented the backend, and the process worked there. One day he broke the UI, the next day I broke the backend, because of the missing shared understanding. I couldn’t solve the real problem back then, but now I know that writing a descriptive API helps. The best example, in my point of view, comes from HTML: in Level 2, when we call a resource, we get the data of the resource. If we add control structures like in HTML, we level up to Level 3. This way the UI guru does not need to know what to do when. My API tells him!
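As an illustration of such a Level 3 response (the structure and field names are made up, not from the project), the controls travel along with the data:

```json
{
  "id": 42,
  "name": "Spring Tournament",
  "state": "draft",
  "_links": {
    "self":    { "href": "/competitions/42" },
    "edit":    { "href": "/competitions/42", "method": "PUT" },
    "publish": { "href": "/competitions/42/publish", "method": "POST" }
  }
}
```

The frontend only renders the actions that are present; what is allowed when stays a backend decision.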

I hope I could give you an example of what to improve in the REST API you are writing, and maybe you see some similarities. Feel free to share your experiences or ask if something was not clear!


Why start with HATEOAS?

In my future posts I am going to write a bit about Hypermedia As The Engine Of Application State (HATEOAS). A HATEOAS API does not only contain data, it also contains control structures. Take a ticketing system as an example: when you have created a ticket via an API with hypermedia, the response tells you that you can edit the ticket afterwards, delete it or submit it for review. But it will not show you the option for closing the ticket.
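For the ticket example, such a response might look like this sketch (the field and link names are assumptions): the “close” action is simply absent until the workflow allows it.

```json
{
  "id": 7,
  "title": "Printer is on fire",
  "status": "open",
  "_links": {
    "self":   { "href": "/tickets/7" },
    "edit":   { "href": "/tickets/7", "method": "PUT" },
    "delete": { "href": "/tickets/7", "method": "DELETE" },
    "review": { "href": "/tickets/7/review", "method": "POST" }
  }
}
```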

Now that we know what HATEOAS is, I would like to summarize some advantages of using hypermedia. One benefit is that you can separate the UI better from the domain logic. If you have complex domain logic, you do not always need to change the UI when the process changes. This is helpful if you have separate teams for UI and backend, or multiple UIs, like Android, iOS and a WPF client. The UI team can concentrate on styling the UI and the backend team can concentrate on implementing the business logic!

The second benefit regards architecture. The API not only tells you what you can do and when, it also tells you whom you need to call to get your job done. This way you can easily split your architecture into microservices. If your application becomes too big for one server, you can split it and move parts to other machines. The UI does not care, because the API always tells it the correct location to use.

Another aspect is localization. Because your UI does not decide where to draw which button, the API can contain all the text for the UI. So localization does not need to be implemented in both UI and backend.

A last aspect: if you do not care about the UI at all, you don’t need to write one. Some standards for describing hypermedia APIs already exist, and if you are using one of them, there is most likely an existing generic UI which you can use.

I hope that gives you a small overview on the benefits of HATEOAS.

Feel free to leave a comment!

Workshops done right

Last weekend I attended a rhetoric workshop held by @rhetorikhelden.

I had to record a video without any knowledge of how to perform in front of a crowd. I became nervous, spoke too fast and broke the flow of my speech all the time. It was funny listening to myself, but only because I know that I can perform better now.

What can you do to improve your free speech? The key aspects for me were:

  1. Talk more slowly and take breaks. Instead of filling the sentence with “uhms” or other filler words which do not belong there, pause!
  2. Create a story which you can use and fall back to. If you missed what you wanted to talk about, continue with the story.
  3. Deliver a message and repeat it. People will forget you if they do not remember what you told them! So repeat your message until they can never forget it!


So for your next talk in front of a crowd, try some of these methods and tell me if they helped!

Loading unreferenced assemblies at runtime for IoC with ASP.NET Core

Recently we wanted to load an assembly at runtime in our ASP.NET Core application. The assembly contained an installer for our IoC container, Castle Windsor. Unfortunately it could not and should not be referenced directly by our startup application, because that would have created circular dependencies.

We used the IoC initializer class from Uli Armbruster. When the class loads the IWindsorInstallers from the other assemblies, it throws an error for our assembly:

System.IO.FileNotFoundException: 'Could not load file or assembly 'coIT.CMS.Infrastructure.Persistence, Version=, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.'

The file was definitely there, but I was somehow unable to load it. So I tried loading the file manually with Assembly.Load(), but I got the same error. While googling for a solution to load assemblies in .NET Core, I found the following function:


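A sketch of what such a function looks like in .NET Core, using AssemblyLoadContext from System.Runtime.Loader (the exact code I used may have differed slightly):

```csharp
using System.IO;
using System.Reflection;
using System.Runtime.Loader;

public static class AssemblyLoader
{
    // Loads an assembly from a file path at runtime, bypassing the default
    // probing that caused the FileNotFoundException above.
    public static Assembly LoadFromPath(string assemblyPath)
    {
        // LoadFromAssemblyPath requires an absolute path
        return AssemblyLoadContext.Default.LoadFromAssemblyPath(
            Path.GetFullPath(assemblyPath));
    }
}
```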
With this, the assembly was loaded correctly, my IoC container could access the file, and everything worked as expected!

I hope I could help you guys. Feel free to leave a comment!

SSL and NameServers with Azure

Recently I wanted to add an Azure test website to “”, just to test how I have to set up SSL for our applications. There are 2 vital things which are required:

  1. access to the nameserver for your domain
  2. the SSL-certificate

First, set up the nameserver for your domain so that it has a CNAME pointing to your Azure website. A CNAME can only be used for subdomains or wildcard domains; if the root domain must point to Azure, an “A record” should be used, as written here.

The CNAME should point to something like “”. This way your DNS knows that when somebody calls “”, it should serve the content of “”. You can check if this works correctly with
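As an illustration with placeholder names (assume the Azure site is `myapp.azurewebsites.net` and the custom domain is `example.com`), the relevant records in the DNS zone look like this:

```
; CNAME for the subdomain, pointing at the Azure website
www.example.com.   3600  IN  CNAME  myapp.azurewebsites.net.

; A record for the root domain (a CNAME is not allowed at the zone apex)
example.com.       3600  IN  A      203.0.113.10
```

You can verify the record with `nslookup www.example.com` or `dig www.example.com CNAME`.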

In Azure go to “web app” -> “Custom domains”. There the hostname “” must be validated and added. With this done, the URL should show the correct website from Azure. Be aware that most browsers will show you that this site cannot be trusted, because SSL is missing: the certificate served by default is for *

You can fix this warning by going to “web app” -> “SSL Certificates”. Upload the private certificate with its password and add an SSL binding to the correct hostname. After that just enable “HTTPS only” under “Custom domains” and it is all set: the site will show a nice SSL symbol in your browser.
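The same portal steps can also be scripted with the Azure CLI; this is only a sketch, and the resource group, app name, certificate file and thumbnail placeholder are all assumptions:

```shell
# Upload the private certificate (PFX) for the web app
az webapp config ssl upload --resource-group my-rg --name my-app \
    --certificate-file ./mysite.pfx --certificate-password "$PFX_PASSWORD"

# Bind the uploaded certificate to the custom hostname via SNI
az webapp config ssl bind --resource-group my-rg --name my-app \
    --certificate-thumbprint <THUMBPRINT> --ssl-type SNI

# Redirect all plain-HTTP traffic to HTTPS
az webapp update --resource-group my-rg --name my-app --set httpsOnly=true
```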

Feel free to leave comments or ask questions!