Microsoft Accounts for business

Have you ever asked yourself why you have to choose between a work account and a personal account when logging into Microsoft sites? Or why you are sometimes asked whether it is a personal or a work account and sometimes not?

Let's start with the work accounts. They are created through a company and are owned by the company. Our company has the domain “co-IT.eu” and it is registered with Microsoft. A new login is created by an admin of the company, for example in the Partner Center. All of those accounts live in a Microsoft Active Directory that is reserved for business software, for example:

  • Azure Company Subscriptions
  • Office 365
  • Partner Center
  • Skype for Business
  • OneDrive for Business

 

The other Active Directory is the personal one. These accounts are not owned by a company, they are owned by you! So the big difference is: a personal account can be deleted by the person owning it, while a company account can be deleted by the company owning it. Anyone can create a personal account, but the email address is not allowed to end with a registered company domain. In my case I cannot create a personal account for our domain. Be aware, though, that it was possible in the past! So personal accounts with company emails exist, but they cannot be created anymore. The personal account is used for the consumer counterparts of the services listed above.

If you are a Microsoft Certified Professional, you have the privilege of helping your company with the Microsoft Partner Network. To do that, you must link your personal account to the company in the Partner Center.

If you are, like me, one of those people who have both accounts on a company email, then you can't link them in the Partner Center. Microsoft is working on it; in the meantime they recommend moving the MCP ID to a “real” personal account.

ASP.NET Core 2.1 MVC Testing Framework

Today I had the opportunity to use the new MVC testing framework for integration tests. In a previous post, I wrote about CodedUI and now I want to show a different approach.

When using CodedUI I stumbled over some problems, which could become a bit of a burden over time:

  • Test setup: UI tests can only be written against a running application.
  • Running tests: when the tests run on the local machine, CodedUI takes over the controls and it is not possible to keep working.
  • Continuous integration: CodedUI will not run out of the box, because it needs control over the mouse and access to a browser.

The testing framework lets you set up a test server in your code. Instantiate the test server in your SetUp method and use the Startup class of your application as the generic type argument.

[TestFixture]
public class BasicTests
{
    private WebApplicationFactory<Startup> _factory;

    [SetUp]
    public void Setup()
    {
        // Spins up an in-memory test server using the application's Startup class.
        _factory = new WebApplicationFactory<Startup>();
    }
}

The next step is to create a client in your test and send a request. In my case I just want to test that the root page works:

[Test]
public async Task Get_EndpointsReturnSuccessAndCorrectContentType()
{
    // Arrange
    var client = _factory.CreateClient();

    // Act
    var response = await client.GetAsync("/");

    // Assert
    response.EnsureSuccessStatusCode(); // Status Code 200-299
    Assert.AreEqual("text/html; charset=utf-8",
        response.Content.Headers.ContentType.ToString());
}

The good part here is that the tests run without a UI and just return the response from the web server.

When we need to customize the setup of our web server, we can inherit from WebApplicationFactory and override the ConfigureWebHost method. The following configuration sets up an in-memory database for Entity Framework Core, seeds the database and disables the “Authorize” filter. There is no need to test the ASP.NET authentication (Microsoft already does that); I would like to test my own code:

public class NoAuthenticationWebApplicationFactory<TStartup>
    : WebApplicationFactory<TStartup> where TStartup : class
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureServices(services =>
        {
            // Create a new service provider for the in-memory database.
            var serviceProvider = new ServiceCollection()
                .AddEntityFrameworkInMemoryDatabase()
                .BuildServiceProvider();

            // Add a database context (WorkshopContext) using an in-memory
            // database for testing.
            services.AddDbContext<WorkshopContext>(options =>
            {
                options.UseInMemoryDatabase("InMemoryDbForTesting");
                options.UseInternalServiceProvider(serviceProvider);
            });

            // Disable authentication by allowing anonymous requests everywhere.
            services.AddMvc(opts => { opts.Filters.Add(new AllowAnonymousFilter()); });

            // Build the service provider.
            var sp = services.BuildServiceProvider();

            // Create a scope to obtain a reference to the database
            // context (WorkshopContext).
            using (var scope = sp.CreateScope())
            {
                var scopedServices = scope.ServiceProvider;
                var db = scopedServices.GetRequiredService<WorkshopContext>();
                var logger = scopedServices
                    .GetRequiredService<ILogger<NoAuthenticationWebApplicationFactory<TStartup>>>();

                // Ensure the database is created.
                db.Database.EnsureCreated();

                try
                {
                    // Seed the database with test data.
                    Utilities.InitializeDbForTests(db);
                }
                catch (Exception ex)
                {
                    logger.LogError(ex,
                        "An error occurred seeding the database with test messages: {Message}",
                        ex.Message);
                }
            }
        });
    }
}

With that it is possible to customize your web server and run tests against it. Just use the new factory in your tests, as sketched below. With this setup we can:

  • customize our web server for each test
  • run the tests at any time without having to stop working
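
As an illustration, here is a minimal sketch of how a test using the customized factory might look. The factory and the root URL come from the examples above; the test names and the disposal in TearDown are just one possible way to wire it up.

[TestFixture]
public class SeededDatabaseTests
{
    private NoAuthenticationWebApplicationFactory<Startup> _factory;

    [SetUp]
    public void Setup()
    {
        // Customized factory: in-memory database, seeded test data, no authentication.
        _factory = new NoAuthenticationWebApplicationFactory<Startup>();
    }

    [TearDown]
    public void TearDown()
    {
        // The factory owns the test server, so dispose it after each test.
        _factory.Dispose();
    }

    [Test]
    public async Task Get_Root_ReturnsSuccess()
    {
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/");

        response.EnsureSuccessStatusCode();
    }
}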

In my next post I’ll write about how you can parse the webpages and navigate through them! Feel free to share your ideas about integration testing!

Exception Handling for Posts in ASP.NET Core

Visual Studio comes with a nice feature called scaffolding. If you use it, Visual Studio generates code for your controllers. A POST method looks like this:

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Create(IFormCollection collection)
{
    try
    {
        // TODO: Add insert logic here

        return RedirectToAction(nameof(Index));
    }
    catch
    {
        return View();
    }
}

Here we have some problems:

  • Violation of the Single Responsibility Principle (SRP): error handling and logic are both in the “Create” method.
  • Don't Repeat Yourself (DRY): every method will contain the same error handling.
  • Validation exceptions thrown from constructors will not be returned as error messages.

To address this, it is practical to write an ActionFilterAttribute. Here is the code:

public class PostExceptionFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext context)
    {
        if (context.HttpContext.Request.Method.Equals("POST"))
        {
            if (context.Exception is ArgumentException argumentException)
            {
                // Attach the error to the property named in the exception.
                context.ModelState.AddModelError(argumentException.ParamName, argumentException.Message);
                context.ExceptionHandled = true;
                context.Result = new ViewResult();
            }
            else if (context.Exception is Exception exception)
            {
                // Any other exception becomes a model-level error.
                context.ModelState.AddModelError("", exception.Message);
                context.ExceptionHandled = true;
                context.Result = new ViewResult();
            }
        }

        base.OnActionExecuted(context);
    }
}

Next the filter must be registered with MVC, so that it is applied everywhere. Change the following in ‘Startup.cs’:

services.AddMvc(options => { options.Filters.Add<PostExceptionFilterAttribute>(); });
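
With the filter registered, the scaffolded try/catch can disappear from the actions. The following is only a sketch of how a POST action might then look; the Contact type, its validating constructor and the _repository field are hypothetical and only illustrate that an ArgumentException thrown from the insert logic ends up in the ModelState.

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Create(IFormCollection collection)
{
    // Hypothetical domain type: its constructor throws an ArgumentException
    // when a value is invalid. PostExceptionFilterAttribute turns that
    // exception into a ModelState error and re-renders the view.
    var contact = new Contact(collection["Name"], collection["Email"]);
    _repository.Add(contact);

    return RedirectToAction(nameof(Index));
}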

Happy coding and feel free to leave a comment!

Docker with Kubernetes in Azure

I attended a really good workshop about using Docker with the orchestrator Kubernetes in Azure. Microsoft built a GitHub repository called Project Phoenix, which can be used to learn it step by step.

The workshop is about creating containers using Docker. Containers are something like virtual machines, except that they share the host's kernel instead of shipping a full operating system. You get a big benefit from them if you do not want to ship your testing environment and you are using microservices.

For continuous integration and deployment we used Microsoft Team Services. There you can build and release the whole application and put it in a container registry. Everyone can then download the container image from the registry and start the application. If you want to use public containers you can use Docker Hub. For closed projects we used Azure to set up a private registry.

After that we set up Kubernetes. If you have microservices, you can use Kubernetes to watch over them. In Kubernetes a service is just an endpoint which is always available. The workload is done by containers grouped into pods. These pods can be destroyed and recreated at will; the client of your microservice always calls the service.

In general Kubernetes will help in the following aspects:

  • Cluster Management
  • Scheduling
  • Lifecycle & Health
  • Naming & Discovery
  • Load Balancing
  • Scaling
  • Image Repository
  • Continuous Delivery
  • Logging & Monitoring
  • Storage Volumes

Azure can set up the whole pipeline for building and deploying in the new “DevOps Project”. Just try it out. It looks really promising, because it supports multiple languages and can be set up in a few minutes. But be aware: it is a preview!

Feel free to leave a comment!

UI Tests with CodedUI

Today I wrote some UI tests for my ASP.NET Core application. UI test frameworks let you record the steps you take, followed by some final assertions. The recording tool creates a “UI map”, which stores the mapping between the C# objects and the DOM (Document Object Model). I hit a wall really fast with the straightforward approach. I'll show you what I went through.

In the example I have an index page with a list, and items can be created on a second page. On the first try the result looks mostly like this:

var browser = BrowserWindow.Launch(new Uri("http://localhost:63238/"));
browser.FindElement(By.Id("create")).Click();
var div = browser.FindElement(By.Id("errors"));
div.TryFind().Should().BeFalse();

The question is: what happens when I change an element? I could nest it or need to make another click on a modal window. Whatever the reason, I need to change ALL tests which contain that part of the code. This violates the SRP (Single Responsibility Principle). The example used here has only two pages and two elements. Just imagine this code in a real-world application.

The next point is the readability of the tests. They are really time-consuming to read and understand. Just try to figure out what the code above does, then read the next test, which does the same thing:

var browser = BrowserWindow.Launch(new Uri("http://localhost:63238/"));
var index = new HomePage(browser);
index.CreateNewContact().HasError().Should().BeFalse();

The last test is called a DAMP (Descriptive And Meaningful Phrases) test. To accomplish that, write the navigation code for each page in its own class. If you like to use the test recorder, use a “Coded UI Test Map” for each page in your application. Start with the general layout, which contains all actions that are always present.

Next, write the steps you can take on each page in a partial class. The return value of each action should always be a new page object. This is also known as a fluent API. Here is my home page:

public class HomePage : SharedActionsAndElements
{
    public HomePage(BrowserWindow browserWindow) : base(browserWindow)
    {
    }

    public CreateContactPage CreateNewContact()
    {
        BrowserWindow.FindElement(By.Id("create")).Click();
        return new CreateContactPage(BrowserWindow);
    }
}
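
For context, here is a minimal sketch of how the base class and the CreateContactPage could look. Both are only illustrative; the real classes would also contain the shared layout actions and the elements of the create form.

public abstract class SharedActionsAndElements
{
    protected SharedActionsAndElements(BrowserWindow browserWindow)
    {
        BrowserWindow = browserWindow;
    }

    // The browser session shared by every page object.
    protected BrowserWindow BrowserWindow { get; }

    // Actions and elements of the general layout (navigation bar, footer, ...)
    // would go here, because they are available on every page.
}

public class CreateContactPage : SharedActionsAndElements
{
    public CreateContactPage(BrowserWindow browserWindow) : base(browserWindow)
    {
    }

    public bool HasError()
    {
        // The error container from the first example; TryFind returns
        // whether the element exists on the current page.
        return BrowserWindow.FindElement(By.Id("errors")).TryFind();
    }
}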

With that, your UI tests can adhere to the SRP and your code stays maintainable. Feel free to write a comment or ask questions if anything is unclear.

HATEOAS – The real REST

In this post I will talk a bit about the holy grail of REST (Representational State Transfer). The “Richardson Maturity Model” classifies REST APIs into levels: the higher your level, the better you are. Let's start with a bit of history about how I went through some of these levels.

At university the first thing we had to build as a team was a chat program. We took the straightforward approach: create some services which communicate with each other over a TCP/IP stream. This was our first distributed application; most of the time at university we built applications which only communicate with the file system and the console. The problem with the TCP stream is that it does not work well on the internet, because in most cases the ports are blocked by a firewall for security reasons.

Some semesters later we learned about that “awesome” technology called Service Oriented Architecture (SOA). It solved the problem with the blocked TCP ports: all data is enveloped in XML and tunneled through the HTTP ports, which are mostly open. If you did something like that, you are probably at least on Level 0. 🙂

In one company I worked for, they sent all data over a TCP/IP abstraction layer. This was probably Level -1. My task was to create a project from scratch, and I ended up using SignalR to communicate between client and backend. With SignalR I introduced resources to my application: I had a URL resource for people and different resources for competition management and other data. I leveled up again, to Level 1. In hindsight, I think the hardest part was getting management to approve my architecture and technology.

In the same project I often had problems with saving entities. I only knew that if an identifier was not set, the entity was created, otherwise it was updated. Creating an entity had to be handled differently than updating it. That bugged me back then, but I didn't have a solution: I had no time to educate myself and my hands were full getting people to understand what we were doing with HTTP and how. I knew that HTTP has verbs like “POST”, “PUT”, “DELETE” and many more, but SignalR didn't support them, and I didn't make the connection in my head. Sometimes people simply use good technology wrong: I should have used an ASP.NET controller for our RPC (Remote Procedure Call) style calls and SignalR only for live messaging. This knowledge brought me up to Level 2.

This sounds really good, but it was not the end of the road. The project I worked on had very few developers (one), and my colleague with the vast amount of domain knowledge had to maintain our legacy product. One day, management decided we had to do a release at the end of the month. The short deadline with too few features implemented resulted in me getting four additional developers to work with. In our meetings we explained the tickets and then implemented them in the sprint. Several times our frontend guru came in and asked me about the workflow for the user. I rolled my eyes, but I knew he couldn't learn all the processes in one month; it took me several years and I still don't fully understand them. I implemented the backend and the process worked there. One day he broke the UI, the next day I broke the backend, because of this missing understanding. I couldn't solve the real problem back then, but now I know that writing a descriptive API helps. The best example, in my point of view, comes from HTML. In Level 2, when we make a call to a resource, we get the data of the resource. If we add control structures like in HTML, we level up to Level 3. This way the UI guru does not need to know what to do when. My API tells him!

I hope this gave you an idea of what to improve in the REST API you are writing, and maybe you see some similarities. Feel free to share your experiences or ask if something was not clear!

Why start with HATEOAS?

In my future posts I am going to write a bit about Hypermedia As The Engine Of Application State (HATEOAS). With HATEOAS, the API does not only contain data, it also contains control structures. Take a ticketing system as an example: when you have created a ticket via an API with hypermedia, the response tells you that you can edit it afterwards, delete it or submit it for review, but it will not show you the option for closing the ticket.
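
As an illustration, here is a minimal sketch of what such a response could look like when modelled in C#. The Link and TicketResource types, the property names and the link relations are purely hypothetical; they only show the idea of shipping the allowed actions alongside the data.

public class Link
{
    public string Rel { get; set; }    // what the action means, e.g. "edit"
    public string Href { get; set; }   // where to call it
    public string Method { get; set; } // which HTTP verb to use
}

public class TicketResource
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string State { get; set; }

    // The control structure: only the actions that are currently allowed.
    public List<Link> Links { get; set; } = new List<Link>();
}

// Building the response for a freshly created ticket: it offers edit, delete
// and submit-for-review, but not close, because the workflow does not allow
// closing at this point.
var ticket = new TicketResource
{
    Id = 42,
    Title = "Broken login page",
    State = "created",
    Links =
    {
        new Link { Rel = "edit",              Href = "/tickets/42",        Method = "PUT" },
        new Link { Rel = "delete",            Href = "/tickets/42",        Method = "DELETE" },
        new Link { Rel = "submit-for-review", Href = "/tickets/42/review", Method = "POST" }
    }
};

A client that only renders the links it receives never has to hard-code the workflow; when the backend changes the rules, the available actions change with it.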

Now that we know what HATEOAS is, I would like to summarize some advantages of using hypermedia. One benefit is that you can separate the UI better from the domain logic: with complex domain logic, you do not always need to change the UI when the process changes. This is helpful if you have separate teams for UI and backend, or if you have multiple UIs, like Android, iOS and a WPF client. The UI team can concentrate on styling the UI and the backend team can concentrate on implementing the business logic!

A second benefit concerns architecture. The API not only tells you what you can do and when, it also tells you whom to call to get the job done. This makes it easy to split your architecture into microservices: if your single application becomes too big for one server, you can split it and move parts to other machines. The UI does not care, because the API always returns the correct location to use.

Another aspect is localization. Because your UI does not decide on its own which buttons to draw, the API can contain all the text for the UI, so it is not necessary to implement localization in both the UI and the backend.

A last aspect: if you do not care about the UI at all, you may not need to write one. Several standards for describing hypermedia APIs already exist, and if you use one of them there is most likely an existing generic UI you can use.

I hope that gives you a small overview of the benefits of HATEOAS.

Feel free to leave a comment!