Going Deep Enough with Microservices

Moving from a monolith architecture to microservices is a widely debated process, with many recommendations and nuggets of advice available on the web in blogs like this. There are so many different opinions out there mainly because where an enterprise finds their main complexities lie depends on the skillsets of their technologists, the domain knowledge within the business and the existing code base. During the years I’ve spent as a contractor in a very wide range of enterprises, I’ve seen lots of monolith architectures – all of them causing slightly different headaches because those responsible for developing them let different aspects of the architecture slip. After all, the thing that is often forgotten is that if a monolith is maintained well, then it can work. The reverse is also true – if a microservice architecture is left to evolve on its own, it can cause as many problems as a poorly maintained monolith.

Domains

One popular way to break things down is using Domain Driven Design. Two books which cover most concepts involved in this process are ‘Building Microservices’ by Sam Newman (http://shop.oreilly.com/product/0636920033158.do) and ‘Implementing Domain Driven Design’ by Vaughn Vernon (http://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577), which largely references ‘Domain Driven Design: Tackling Complexity in the Heart of Software’ by Eric Evans (http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215). I recommend Vaughn’s book over Evans’ as the latter is a little dry.

If you take on board even just half the content covered in these books, you’ll be on a reasonable footing to get started. You’ll make mistakes, but as Sam Newman points out (and I’ve seen for myself), that’s inevitable.

Something that seems to be left out of a lot of domain-driven discussions is what happens beyond the basic CRUD processes and domain logic in the application layer. Attention sits primarily with the thin interaction between a web interface and the domain processing by the aggregate in question. When dismantling a monolith architecture into microservices, focusing on just the application layer can give the impression of fast progress, but in reality half the picture is missing. It’s likely that in a few months there will be several microservices but, instead of operating solely in their sub-domains, they’ll still be tied to the database that the original monolith was using.

Context

It’s hugely important to pull the domain data out of the monolith store, for the very same reasons we segregate service responsibilities into sub-domains. Data pertaining to a given domain may exist in other domains as well, but changes will not necessarily be subject to the same domain rules, and individual records may have different properties. There may be a User record in several sub-domains, each with a Username property, but the logic for preventing duplicate Usernames should sit firmly in a single sub-domain (say, a Profile sub-domain). If a service in a different sub-domain needs to update the username, it should either call a public service from the Profile sub-domain or raise a ‘Username Updated’ event that the Profile sub-domain would handle, process and possibly respond to with a ‘Username Update Failed’ event of its own.
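
To make that flow concrete, here’s a minimal sketch of what the event contracts and the Profile sub-domain’s handler could look like. Every name in it (UsernameUpdated, UsernameUpdateFailed, IEventPublisher, IProfileRepository) is an illustrative assumption rather than a type from any particular framework.

using System;
using System.Threading.Tasks;

// Assumed abstractions for the sketch; swap in whatever messaging and storage you actually use.
public interface IEventPublisher
{
    Task Publish(object evt);
}

public interface IProfileRepository
{
    Task<bool> UsernameExists(string username);
    Task UpdateUsername(Guid userId, string username);
}

// Raised by a service in another sub-domain that wants the username changed.
public class UsernameUpdated
{
    public Guid UserId { get; set; }
    public string NewUsername { get; set; }
}

// Raised by the Profile sub-domain when the change breaks one of its rules.
public class UsernameUpdateFailed
{
    public Guid UserId { get; set; }
    public string AttemptedUsername { get; set; }
    public string Reason { get; set; }
}

// Lives inside the Profile sub-domain, which owns the duplicate-username rule.
public class UsernameUpdatedHandler
{
    private readonly IProfileRepository _profiles;
    private readonly IEventPublisher _publisher;

    public UsernameUpdatedHandler(IProfileRepository profiles, IEventPublisher publisher)
    {
        _profiles = profiles;
        _publisher = publisher;
    }

    public async Task Handle(UsernameUpdated evt)
    {
        if (await _profiles.UsernameExists(evt.NewUsername))
        {
            // The duplicate rule is enforced here and nowhere else.
            await _publisher.Publish(new UsernameUpdateFailed
            {
                UserId = evt.UserId,
                AttemptedUsername = evt.NewUsername,
                Reason = "Username already in use"
            });
            return;
        }

        await _profiles.UpdateUsername(evt.UserId, evt.NewUsername);
    }
}

The important part is that the duplicate check lives in exactly one place; other sub-domains only raise the event or call the Profile service.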

This example may be a little contrived – checking for duplicates could be something that’s implemented everywhere it’s needed. But consider what would happen if it became necessary to check for duplicates within another external system every time a Username is updated. That new logic could easily be encapsulated behind the call to the Profile service, whereas having to update every service that touches Usernames wouldn’t be good practice.

So if we’re now happy that the same data represented in different sub-domains could legitimately differ at any one time (given the previous two paragraphs), then we shouldn’t store the data for both sub-domains in the same table.

Local Data

In fact, we’re now pretty well removed from needing a classic relational database for storing data that’s local to the sub-domain. We’re dealing with data that is limited in scope and is intended for use solely by the microservices built to sit in that sub-domain. NoSQL databases are ideal for this scenario and, no matter which platform you’ve chosen to build on, there are excellent options available. One piece of advice I think is pretty sound is that if you are working in the cloud, you’ll usually get the best performance by using the data services provided by your cloud provider. Make sure you do your homework, though – some have idiosyncrasies that can impact performance if you don’t know about them.
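
As a rough illustration of what ‘local to the sub-domain’ data might look like, here’s a small sketch of a denormalised Profile document and the kind of store abstraction it would sit behind. The ProfileDocument shape and IDocumentStore interface are assumptions made for the example, not any specific NoSQL product’s API.

using System;
using System.Threading.Tasks;

// A document shaped for the Profile sub-domain's own needs, rather than a row
// in the shared, normalised schema the monolith used.
public class ProfileDocument
{
    public Guid UserId { get; set; }
    public string Username { get; set; }
    public string DisplayName { get; set; }
    public DateTime LastUpdatedUtc { get; set; }
}

// Assumed abstraction over whichever document store the sub-domain chooses.
public interface IDocumentStore<T>
{
    Task Save(string key, T document);
    Task<T> Load(string key);
}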

So now we have data stored locally to the sub-domain, but this isn’t where the work stops. It’s likely there’s a team of DBAs jumping around wondering why their data warehouse isn’t getting any new data.

The problem is that the relational database backing the monolith wasn’t just acting as a data-store for the application. There were processes feeding other data-stores for things like customer reporting, machine learning platforms and BI warehouses. In fact, anything that requires a historical view will be reading it from one or more stores that are loaded incrementally from the monolith’s relational database. Now that data is being stored in a manner best suited to each sub-domain, there is no longer a central source from which to pull that data into these downstream stores.

Shift of Responsibility

Try asking a team of DBAs if they fancy writing CLR-based stored procedures to detect changes and pull new records into their warehouse by querying whatever data-store technologies have been decided on in each case – I doubt they’ll be too receptive. The responsibility for getting data out of each local data-store now has to move closer to the application services.

The data guys are interested in recording historical and aggregated records, which is convenient, as there is a useful, well-known tool for informing different systems that something has happened – an event.

It’s been argued that using events to communicate across sub-domains is misusing an event stream as a message bus. My argument in this case is that the back-end historical data-store is still within the original sub-domain. The data being stored belongs specifically to that sub-domain and still holds the same context as when it was saved. There has been a transition to a new medium of storage, but that’s all.

So we are now free to raise events from our application microservices into event streams, which are then handled by a service specifically designed to transfer data from those events into whatever downstream stores were originally being fed from the monolith database. This gives us full extraction from the monolithic architecture and breaks the sub-domain’s dependency on the monolith database.
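
Here’s a minimal sketch of that transfer service, assuming an at-least-once event stream and a SQL Server warehouse. The dbo.UsernameHistory table and the connection string are invented for the example, and UsernameUpdated is the illustrative event from the earlier sketch.

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

// Subscribes to the sub-domain's event stream and loads the warehouse table that
// used to be fed directly from the monolith database.
public class ProfileHistoryProjection
{
    private readonly string _warehouseConnectionString;

    public ProfileHistoryProjection(string warehouseConnectionString)
    {
        _warehouseConnectionString = warehouseConnectionString;
    }

    // Called once per event read from the stream. Delivery is assumed to be
    // at-least-once, so the insert is keyed on the event id to stay idempotent.
    public async Task Handle(Guid eventId, UsernameUpdated evt)
    {
        using (var connection = new SqlConnection(_warehouseConnectionString))
        using (var command = connection.CreateCommand())
        {
            command.CommandText =
                @"IF NOT EXISTS (SELECT 1 FROM dbo.UsernameHistory WHERE EventId = @eventId)
                      INSERT INTO dbo.UsernameHistory (EventId, UserId, Username, RecordedAtUtc)
                      VALUES (@eventId, @userId, @username, @recordedAt)";
            command.Parameters.AddWithValue("@eventId", eventId);
            command.Parameters.AddWithValue("@userId", evt.UserId);
            command.Parameters.AddWithValue("@username", evt.NewUsername);
            // In practice you'd carry a timestamp on the event itself rather than
            // use the processing time.
            command.Parameters.AddWithValue("@recordedAt", DateTime.UtcNow);

            await connection.OpenAsync();
            await command.ExecuteNonQueryAsync();
        }
    }
}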

There is also the possibility that we can now provide more fine-grained detail of changes than was being recorded previously.

Gaps in the Monolith Database

Of course back-end data-stores aren’t the only consumers of the sub-domain’s data. Most likely there will be other application-level queries that used to read the data you’re now saving outside of the monolith database. How you manage these dependencies will depend on whether the read requests are coming from the same sub-domain or from another. If they’re from the same sub-domain, then it’s equally correct to pull the data either from an event stream or from microservices within that sub-domain; gradually, the sub-domain’s dependency on the monolith database will die. If the queries are coming from a different sub-domain, then it’s better to continue updating the monolith database as a consumer of the data stored locally to the sub-domain; the original table no longer holds data that your own sub-domain relies on, it’s simply kept up to date for those other consumers.

Switching

Obviously we don’t want any gaps in the data being sent to our back-end stores, so as we pull functionality into microservices and add new data-stores local to the sub-domain, we also need to build the pipeline for our new back-end processing of domain events into the warehouse. As this gets switched on, the loading processes from the original monolith can be switched off.

External Keys

Very few enterprise systems function in isolation. Most businesses make use of off-the-shelf packages or cloud-based services such as Salesforce. Mapping records into these systems usually means using the primary key of each record to create a reference. If this has happened, then the primary key from the monolith is most likely being relied on to hold things together, and moving away from the monolith database means the mechanism that generated those primary keys has probably been lost.

There are two options here and I’d suggest going with whatever is the easiest – they both have their merits and problems.

  1. Continue to generate unique IDs in the same way the monolith database did, and continue to use these IDs for reference across different systems. Don’t rely on the monolith for ID generation here; create a new process in the microservice that continues the same pattern (a minimal sketch follows this list).
  2. Introduce a new ID generation scheme and copy the new keys out to the external systems for reference. The original keys can eventually be dropped.
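
As a sketch of option 1, the generator below simply carries on issuing numeric keys from wherever the monolith’s identity column left off, so existing references in external systems such as Salesforce stay valid. How the last-used value is seeded and persisted (shown here as a plain in-memory counter) is an assumption; in reality it would live in the sub-domain’s own data-store or behind a small key-reservation service.

using System.Threading;

// Continues the monolith's numeric key pattern inside the microservice.
public class LegacyStyleIdGenerator
{
    private long _lastIssued;

    public LegacyStyleIdGenerator(long lastIdUsedByMonolith)
    {
        _lastIssued = lastIdUsedByMonolith;
    }

    public long NextId()
    {
        // Interlocked keeps this safe across threads in a single instance; multiple
        // instances would need the shared reservation service mentioned above.
        return Interlocked.Increment(ref _lastIssued);
    }
}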

Deeper than Expected

When planning the transition from a monolithic architecture to microservices, there may well be promises from the management team that time will be given to build each sub-domain out properly. Don’t take this at face value – Product Managers will still have their roadmaps to fulfill, and unfortunately perhaps only 30% of any given slice of functionality pulled out of a monolith is something an end user will ever see. Expect the process to be difficult no matter what promises are made.

What I really want to get across here is that extracting even a small amount of functionality into microservices carries with it a much deeper dive into the enterprise’s tech stack than just creating a couple of application services. It requires time and focus from more than just the Dev team, and before it can even be started there has to be an architectural plan spanning the full vertical slice of a sub-domain, from front end to warehoused historical data.

Consequences of Not Going Deep Enough

How difficult do you find it in your organisation to get approval for technical upgrade work, or for dealing with technical debt as a project (which I’m not advocating is a good strategy), or for doing anything which doesn’t have a directly measurable positive impact on new product? In my experience, it isn’t easy and I’m not sure it should be, but that’s for another post.

Imagine you’ve managed to extract maybe 70% of your application layer away from your monolith but you’re still tied to the same data model. Have you achieved what you set out to do? You certainly don’t have loose coupling, because everything is still tied together at the data level. You don’t have domain isolation. You’re preventing your data team from getting access to the juicy new events, because you don’t really need to be raising them (the changed data is already available everywhere). You’ve turned a monolith into an abomination: it isn’t really microservices, it isn’t a classic monolith, and it isn’t really any desired pattern at all. Even worse, the work you’re missing is pretty big and may not directly carry any new features with it. Will you get agreement to remove the coupling with the database as a project in itself?

How are your developers doing? How many of them see that the strategy is only going half way? How many are moaning about paying lip service to the architecture? Wasn’t that one of the reasons you started with microservices in the first place?

Can you deploy the microservices without affecting other sub-domains? What if there are schema changes? What if there are schema changes in two sub-domains and one needs to be rolled back after release because it wasn’t quite right? Wasn’t this something microservices were supposed to prevent?

How many dodgy hacks or ‘surprises’ are there in your new code where devs have managed to make domain-isolated services work with a single relational data model? How many devs waste time hand-wringing when they know they’re building something that is going to be technical debt the moment it goes live?

OK, so I’m painting a darker picture than you’ll probably experience, but each of these scenarios will almost certainly come up; you just might not get to hear about it.

The crux for me is thinking about the reasons for pursuing a microservice architecture: the flexibility, the loose coupling, the technology agnosticity (if that’s a real term), the speed of continuous delivery that you’re looking for. Unless you go deeper than the low-hanging fruit of the application layer, you’ll be cheating yourself out of these benefits. Sure, you’ll see improvements short term, but you are building something which is already technical debt. No matter what architecture you choose, if you don’t invest in maintaining it properly (or even building it properly in the first place), then it will ultimately become your albatross.

Events vs Commands

In the world of service-oriented architectures and CQRS-style processes there is a tendency for nearly everything to raise events. Going back a few years, however, before REST became fashionable, many interactions were made by RPC and were often the result of processing commands from a queue.

So when did commands become an anti-pattern? Well of course, they never did. These days we just have to understand when it’s more appropriate to send a command or raise an event.

Here’s a table to help you decide what you should be using:

| Events | Commands |
| --- | --- |
| An event is all about something that has already happened. | A command is all about something that the originating service wants to happen (although it might not be successful). |
| A service raising an event doesn’t care what happens to it; something consuming an event is not critical to the service’s function. | A service sending a command needs that command to be processed as part of its functionality. |
| An event could be consumed by one, many or no consumers. | A command is intended for one specific consumer. |
| An event can suggest loose coupling between services. | A command definitely indicates tight coupling – the originating service knows about the command target. |
| A service prevented from raising an event can only report that the event was not raised. | A service prevented from sending a command can report the failure to a team with specific domain knowledge about what will happen downstream if the command is not processed; the service may be designed to fail its own process if the command fails. |

A really good example of the right use of an event is communicating between services within a bounded context that something has happened. The originating service will have successfully completed its function before raising the event. Consumers of the event then do something additional that the originating service doesn’t really care about.

A good example of the right use of a command is where two different platforms need to be kept in sync with each other. When data is updated in one system a sync command is sent to update the other. If something stops that command getting sent (e.g. an auth issue between the service and a message queue) then the service can react and alert people to the issue, or it may be that the update in the originating service needs to fail.
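
Here’s a short sketch putting the two side by side. IEventPublisher, ICommandSender, ProfileUpdated and SyncProfileToCrm are all invented names for the example; the point is only how differently a failure is treated.

using System;
using System.Threading.Tasks;

// Assumed abstractions over whatever event stream and command queue are in use.
public interface IEventPublisher { Task Publish(object evt); }
public interface ICommandSender { Task Send(object command); }

public class ProfileUpdated { public Guid UserId { get; set; } }
public class SyncProfileToCrm { public Guid UserId { get; set; } }

public class ProfileService
{
    private readonly IEventPublisher _events;
    private readonly ICommandSender _commands;

    public ProfileService(IEventPublisher events, ICommandSender commands)
    {
        _events = events;
        _commands = commands;
    }

    public async Task CompleteProfileUpdate(Guid userId)
    {
        // Event: the work is already done; whether anyone consumes this is not our concern.
        await _events.Publish(new ProfileUpdated { UserId = userId });

        // Command: we need the other platform to be updated, so a failure to send is our problem.
        try
        {
            await _commands.Send(new SyncProfileToCrm { UserId = userId });
        }
        catch (Exception ex)
        {
            // Alert someone with the right domain knowledge, or fail the whole
            // operation, depending on how critical the sync is.
            throw new InvalidOperationException("CRM sync command could not be sent", ex);
        }
    }
}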

Both events and commands are important in a distributed system. Using them in the right places makes your intent much clearer and helps keep your system structured.

User Secrets in ASP.NET 5

Accidentally pushing credentials to a public repo has never happened to me, but I know a few people for whom it has. AWS have an excellent workaround for this in the form of credential stores that can be configured via the CLI or IDE, but that technique only works for IAM user accounts; it doesn’t allow you to connect to anything outside of the AWS estate.

Welcome to User Secrets in ASP.NET 5 – and they’re pretty cool.

User Secrets are part of the new ASP.NET configuration mechanism. If you open Visual Studio 2015 and create a new Web API project, for example, you’ll be presented with something somewhat different to previous versions. Configuration is carried out in Startup.cs, where we can conditionally load configuration from one or many sources, including .config and .json files, environment variables and the User Secret store. To access User Secrets, you want to modify the constructor like so:

public Startup(IHostingEnvironment env, IApplicationEnvironment appEnv)
{
    // Sources added later override earlier ones: a user secret beats config.json,
    // and an environment variable beats both.
    var builder = new ConfigurationBuilder(appEnv.ApplicationBasePath)
        .AddJsonFile("config.json")
        .AddUserSecrets()
        .AddEnvironmentVariables();

    Configuration = builder.Build();
}

In this example, the order of the calls to AddJsonFile(), AddUserSecrets() and AddEnvironmentVariables() makes a difference. If the property ‘Username’ is defined in config.json and also as a secret, then the value in config.json will be ignored in favour of the secret. Similarly, if there is a ‘Username’ environment variable set, that would win over the other two. The order in which sources are loaded dictates which wins: later sources override earlier ones.
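
Reading a value afterwards is the same regardless of which source supplied it. Depending on the exact beta you’re running, the call below may instead be a Get method on the configuration object; the indexer shown here is an assumption.

// 'Username' comes from whichever source won: the secret beats config.json, and an
// environment variable beats both, because it was added last.
var username = Configuration["Username"];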

To create a secret, first open a Developer Command Prompt for VS2015. This is all managed via the command line tool ‘user-secret’. To check if you have everything installed, at the prompt, type ‘user-secret -h’.

C:\Program Files (x86)\Microsoft Visual Studio 14.0>user-secret -h

If user-secret isn’t recognised then you may need to install the SecretManager command in the .NET Development Utilities (DNU). Do this by typing ‘dnu command install SecretManager’.

C:\Program Files (x86)\Microsoft Visual Studio 14.0>dnu command install SecretManager

In my case, this was again not recognised, even though I had just completed a full install of every component of Visual Studio 2015 Professional. If this is still not working for you, then you need to update the .NET Version Manager (DNVM). Do this by typing ‘dnvm upgrade’.

C:\Program Files (x86)\Microsoft Visual Studio 14.0>dnvm upgrade

Hopefully, you should get a similar response to this:

C:\Program Files (x86)\Microsoft Visual Studio 14.0>dnvm upgrade
Determining latest version
Downloading dnx-clr-win-x86.1.0.0-beta6 from https://www.nuget.org/api/v2
Installing to C:\Users\Peter\.dnx\runtimes\dnx-clr-win-x86.1.0.0-beta6
Adding C:\Users\Peter\.dnx\runtimes\dnx-clr-win-x86.1.0.0-beta6\bin to process PATH
Adding C:\Users\Peter\.dnx\runtimes\dnx-clr-win-x86.1.0.0-beta6\bin to user PATH
Native image generation (ngen) is skipped. Include -Ngen switch to turn on native image generation to improve application startup time.
Setting alias 'default' to 'dnx-clr-win-x86.1.0.0-beta6'

Now try installing the command. You should see all of your registered NuGet sources being queried for updates and then a whole host of System.* packages being installed. The very end of the response should look something like this:

Installed:
    10 package(s) to C:\Users\Peter\.dnx\bin\packages
    56 package(s) to C:\Users\Peter\.dnx\bin\packages
The following commands were installed: user-secret

Now when you run ‘user-secret -h’ you should get this:

Usage: user-secret [options] [command]

Options:
  -?|-h|--help  Show help information
  -v|--verbose  Verbose output

Commands:
  set     Sets the user secret to the specified value
  help    Show help information
  remove  Removes the specified user secret
  list    Lists all the application secrets
  clear   Deletes all the application secrets

Use "user-secret help [command]" for more information about a command.

You can see five possible commands listed, and getting help on any particular one is also explained. As an example, if you want to set a property ‘Username’ to ‘Guest’ then type this:

C:\Program Files (x86)\Microsoft Visual Studio 14.0>cd \MyProjectFolder
C:\MyProjectFolder>user-secret set Username Guest

Where MyProjectFolder is the location of a project.json file.

So there you have it. You’re ready to create secrets that can never be accidentally pushed into a public repo or shared anywhere they shouldn’t be. Just remember that emailing them to the dev sitting next to you might not be much better.

Useful links:

https://github.com/aspnet/Home/wiki/DNX-Secret-Configuration

http://stackoverflow.com/questions/30106225/where-to-find-dnu-command-in-windows

http://typecastexception.com/post/2015/05/17/DNVM-DNX-and-DNU-Understanding-the-ASPNET-5-Runtime-Options.aspx