Scale or Fail

I’ve heard a lot of people say something like “but we don’t need huge scalability” when pushed for a reason why their architecture is straight out of the ’90s. “We’re not big enough for devops” is another regular excuse. But while it’s certainly true that many enterprises don’t need to worry so much about high loads and high availability, there are some other, very real benefits to embracing early 21st century architecture principles.

Scalable architecture is simple architecture

Keep it simple, stupid! It’s harder to do than it might seem. What initially appears to be the easy solution can quickly turn into an unmanageable, tightly coupled tangle of dependencies where one bad line of code can affect a dozen different applications.

In order to scale easily, a system should be simple. When scaling, you could end up with dozens or even hundreds of instances, so any complexity is multiplied. Complexity is also a recipe for waste. If you scale a complex application, the chances are you’re scaling bits which simply don’t need to scale. Systems should be designed so hot functions can be scaled independently of those which are under-utilised.

Simple architecture takes thought and consideration. It’s decoupled for good reason – small things are easier to keep ‘easy’ than big things. An array of small things, all built with the same basic rules and standards, can be easily managed if a little effort is put into working out an approach which works for you. Once you have a few small things all being managed in the same way, growing to lots of small things is easy, if it’s needed.

Simple architecture is also resilient, because simple things tend not to break. And even if you aren’t bothered about a few outages, it’s better to only have the outages you plan for.

Scalable architecture is decoupled

If you need to make changes in anything more than a reverse proxy in order to scale one service, then your architecture is coupled and shows signs of inelasticity. Beyond being scalable, decoupled architecture is much easier to maintain, and keeps a much higher level of quality because it’s easier to test.

Decoupled architecture is scoped to a specific few modules which can be deployed together repeatedly as a single stack with relative ease, once automated. Outages are easy to fix, as it’s just a case of hitting the redeploy button.

Your end users will find that your decoupled architecture is much nicer to use as well. Rather than making dozens of calls to load and save data in a myriad of different applications and databases, a decoupled application makes only one or two calls to load or save the data to a dedicated store, then raises events for other systems to handle. It’s called eventual consistency and it isn’t difficult to make work. In fact it’s almost impossible to avoid in an enterprise system, so embracing the principle wholeheartedly makes the required thought processes easier to adopt.
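
To make that concrete, here’s a minimal sketch of the ‘save locally, then raise an event’ pattern. The IOrderStore and IEventPublisher abstractions (and the OrderPlaced event) are hypothetical rather than from any particular library – the point is a single write to the service’s own store followed by an event that other systems handle in their own time.

using System;
using System.Threading.Tasks;

public class Order
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
}

// Event raised for any other system that cares about new orders.
public class OrderPlaced
{
    public Guid OrderId { get; set; }
    public decimal Total { get; set; }
}

// Illustrative abstractions - swap in your own data store and message bus.
public interface IOrderStore { Task SaveAsync(Order order); }
public interface IEventPublisher { Task PublishAsync(object @event); }

public class OrderService
{
    private readonly IOrderStore _store;
    private readonly IEventPublisher _publisher;

    public OrderService(IOrderStore store, IEventPublisher publisher)
    {
        _store = store;
        _publisher = publisher;
    }

    public async Task PlaceOrderAsync(Order order)
    {
        // One call to the service's own dedicated store...
        await _store.SaveAsync(order);

        // ...then an event; downstream systems become consistent eventually.
        await _publisher.PublishAsync(new OrderPlaced { OrderId = order.Id, Total = order.Total });
    }
}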

Scalable architecture is easier to test

If you are deploying a small, well-understood stack with very well-known behaviours and endpoints, then it’s going to be a no-brainer to get some decent automated tests deployed. These can be triggered from a deployment platform with every deploy. As the data store is part of the stack and you’re following micro-architecture rules, the only records in the stack come from something in the stack. So setting up test data is simply a case of calling the APIs you’re testing, which in turn tests those APIs. You don’t have to test beyond the interface, as it shouldn’t matter (functionally) how the data is stored, only that the stack functions correctly.

Scalable architecture moves quicker to market

Given small, easily managed, scalable stacks of software, adding a new feature is a doddle. Automated tests reduce the manual test overhead. Some features can get into production in a single day, even when they require changes across several systems.

Scalable architecture leads to higher quality software

Given that in a scaling situation you would want to know your new instances are going to function, you need to attain a high standard of quality in what’s built. Fortunately, as it’s easier to test, quicker to deploy, and easier to understand, higher quality is something you get. Writing test-first code becomes second nature, even writing integration tests up front.

Scalable architecture reduces staff turnover

It really does! If you’re building software with the same practices which have been causing headaches and failures for the last several decades, then people aren’t going to want to work for you for very long. Your best people will eventually get frustrated and go elsewhere. You could find yourself in a position where you finally realise you have to change things, but everyone with the knowledge and skills to make the change has left.

Fringe benefits

I guess what I’m trying to point out is that I haven’t ever heard a good reason for not building something which can easily scale. Building for scale helps focus solutions on good architectural practices: decoupled, simple, easily testable micro-architectures. Are there any enterprises where these benefits are seen as undesirable? Yet, when faced with the decision of either continuing to build the same tightly coupled monoliths which require full weekends (or more!) just to deploy, or building something small, lightweight, easily deployed, easily maintained, and ultimately scalable, there are plenty of people claiming “Only in an ideal world!” or “We aren’t that big!”.

Bonkers!

What’s Slowing Your Business?

There are lots of problems that prevent businesses from responding to market trends as quickly as they’d like. Many are not IT related, some are. I’d like to discuss a few problems that I see over and over again, and maybe present some useful solutions. As you read this, please remember that there are always exceptions. But deciding that you have one of these exceptional circumstances is always easier when starting from a sensible basic idea.

Business focused targeting.

For many kinds of work, quicker is better. For software development, quicker is better. But working faster isn’t the same thing as delivering faster.

I remember working as technical lead for a price comparison site in the UK, where once a week each department would read out a list of the things they had achieved in the last week and how that had benefited the business. For many parts of the business there was a nice and easy line that could be drawn from what they did each week to a statistic of growth (even if some seemed quite contrived). But the development team was still quite inexperienced, and struggling to do CI, never mind CD. For the less experienced devs, being told to “produce things quicker” had the opposite effect. The traditional carrot and stick doesn’t have the same impact on software development as on other functions, because a lot of the time what speeds up delivery seems counter-intuitive.

  • Have two people working on each task (pair programming)
  • Focus on only one feature at a time
  • Write as much (or more) test code as functional code
  • Spend time discussing terminology and agreeing a ubiquitous language
  • Decouple from other systems
  • Build automated delivery pipelines

These are just a few examples of things which can be pushed out because someone wants the dev team to work faster. But in reality, having these things present is what enables a dev team to work faster.

Development teams feel a lot of pressure to deliver, because they know how good they can be. They know how quickly software can be written, but it takes mature development practices to deliver quickly and maintain quality. Without the required automation, delivering quickly will almost always mean a reduction in quality and more time taken fixing bugs. Then there are the bugs created while fixing other bugs, and so on. Never mind the huge architectural spirals because not enough thought went into things at the start. In the world of software, slow and steady may lose the first round, but it sets the rest of the race up for a sure win.

Tightly coupling systems.

I can’t count how often I’ve heard someone say “We made a tactical decision to tightly couple with <insert some system>, because it will save us money in the long run.”

No.

Just no.

Please stop thinking this.

Is it impossible for highly coupled systems to be beneficial? No. Is yours one of these cases? Probably not.

There are so many hidden expenses incurred by tightly coupled designs that it almost never makes any sense. The target system is quite often the one thing everything ends up being coupled with, because it’s probably the least flexible ‘off the shelf’ dinosaur which was sold to the business without any technical review. There are probably not many choices for how to work with it. Well, the bottom line is: find a way, or get rid. Otherwise you end up with dozens of applications all tightly bound to one central monster app. Changes become a nightmare of breaking everyone else’s code. Deployments take entire weekends. License fees for the dinosaur go through the roof. Vendor lock-in turns into shackles and chains. Reality breaks down. Time reverses, and mullets become cool.

Maybe I exaggerated with the mullets.

Once you start down this path, you will gradually lose whatever technical individuals you have who really ‘get’ software delivery. The people who could make a real difference to your business will gradually go somewhere their skills can make a difference. New features will not only cost you more to implement but they’ll come with added risk to other systems.

If you are building two services which have highly related functionality, i.e. they’re in the same sub-domain (from a DDD perspective), then you might decide that they should be aware of each other on a conceptual level, have some logic which spans both services and depends on both being ‘up’, and get versioned together. This might be acceptable and might not lead to war or famine, but I’m making no promises.

It’s too hard to implement Dev Ops.

No, it isn’t.

Yes, you need at least someone who understands how to do it, but moving to a Dev Ops approach doesn’t mean implementing it across the board right away. That would be an obscene way forwards. Start with the next thing you need to build. Make it deployable, make it testable with integration tests written by the developer. Work out how to transform the configuration for different environments. Get it into production. Look at how you did it, decide what you can do better. Do it better with the next thing. Update the first thing. Learn why people use each different type of technology, and whether it’s relevant for you.

Also, it’s never too early to do Dev Ops. If you are building one ‘thing’ then it will be easier to work with if you are doing Dev Ops. If you have the full stack defined in a CI/CD pipeline and you can get all your changes tested in pre-production environments (even infra changes) then you’re winning from the start. Changes become easy.

If you have a development team who don’t want to do Dev Ops then you have a bigger problem. It’s likely that they aren’t the people who are going to make your business succeed.

Ops do routing, DBAs do databases.

Your developers should be building the entire stack. They should be building the deployment pipeline for the entire stack. During deployment, the pipeline should configure DNS, update routing tables, configure firewalls, apply WAF rules, deploy EC2 instances, install the built application, run database migration scripts, and run tests end to end to make sure the whole lot is done correctly. Anything other than this is just throwing a problem over the fence to someone else.

The irony is that the people doing the developers’ ‘dirty work’ think this is how they support the business, when in reality this is how they allow developers to build software that can never work in a deployed state. This is why software breaks when it gets moved to a different environment.

Ops, DBAs, and other technology specialists should be responsible for defining the overall patterns which get implemented, and the standards which must be met. The actual work should be done by the developer. If for no other reason than the fact that when the developer needs a SQL script writing, there will never be a DBA available. The same goes for any out-of-team dependencies – they’re never available. This is one of the biggest blockers to progress in software development: waiting for other people to do their bit. It’s another form of tight coupling, building inter-dependent teams. It’s a people anti-pattern.

If your developers need help getting their heads around routing principles or database indexing, then get them in a room with your experts. Don’t get those people to do the dirty work for everyone else – that won’t scale.

BAU handle defects.

A defect found by a customer should go straight back to the team which built the software. If that team is no longer there, then whichever team was ‘given’ responsibility for that piece of software gets to fix the bug.

Development teams will go a long way to give themselves an easy life. That includes adding enough error handling, logging, and resilient design practices to make bug fixing a cinch, but only if they’re the ones who have to deal with the bugs.

Fundamental design flaws won’t get fixed unless they’re blocking the development team.

Everything else.

This isn’t an exhaustive list. Even now there are more and more things springing to mind, but if I tried to shout every one out then I’d have a book, not a blog post. The really unfortunate truth is that 90% of the time I see incredibly intelligent people at the development level being ignored by the business, by architects, even by each other, because even though a person hears someone saying ‘this is a good/bad idea’, being able to see past their own preconceptions to understand that point of view is often incredibly difficult. Technologists all too often lack the soft skills required to make themselves heard and understood. It’s up to those who have made a career from their ‘soft skills’ to recognise that and pay extra attention. A drowning person won’t usually thrash about and make a noise.

Getting FitNesse to Work

Sample code here.

Recently I’ve been looking into Specification by Example, which people keep defining to me as BDD done the right way. Specification by Example, fully implemented, includes the idea of an executable specification – a concept that has led me back to FitNesse, having given it the cold shoulder for the last six or seven years.

I’ve always thought of FitNesse as a great idea but I struggled to see how to use it correctly when as a developer I was mainly focused on continuous delivery and Dev Ops. I didn’t see where in the development cycle it fit or how tests in a wiki could also be a part of a CD pipeline. Revisiting FitNesse with a focus on Specification by Example gave me the opportunity to work some of this out and I think I’m very likely to suggest using this tool in future projects.

Before it’s possible to talk about FitNesse in a CI environment, there are a few basics to master. I don’t want to go into a full breakdown of all functionality, but I do want to communicate enough detail to allow someone to get started. In this post I’ll concentrate on getting FitNesse working locally and using it to execute different classes of test. In a later post, I’ll cover how to make FitNesse work with CI/CD pipelines and introduce a more enterprise level approach.

This post will focus on using FitNesse with the Slim test runner against .NET code but should be relevant for a wider audience.

Installing FitNesse

Installing and running FitNesse locally is really easy and even if you’re intending to deploy to a server, getting things running locally is still important. Follow these steps:

  1. Install Java
  2. Follow these instructions to install and configure FitNesse
  3. Create a batch file to run FitNesse when you need it. My command looks like:
    java -jar .\fitnesse-standalone.jar -p 9080
    

Once you’re running the FitNesse process, hit http://localhost:9080 (use whichever port you started it on) and you’ll find a lot of material to help you get to grips with things, including a full set of acceptance tests for FitNesse itself.

What FitNesse Isn’t

Before I get into talking about how I think FitNesse can be of use in agile development, I’d like to point out what it isn’t useful for.

Tracking Work

FitNesse is not a work tracking tool; it won’t replace a dedicated ticketing system such as Jira, Pivotal Tracker or TFS. Although it may be possible to see what has not yet been built by seeing what tests fail, that work cannot be assigned to an individual or team within FitNesse.

Unit Testing

FitNesse can definitely be used for unit testing and some tests executed from FitNesse will be unit tests, so this might seem a bit contradictory. When I say FitNesse isn’t for unit testing, I mean that it isn’t what a developer should be using for many unit tests. A lot of unit testing is focused on a much smaller unit than I would suggest FitNesse should be concerned with.

using System;
using NUnit.Framework;

[TestFixture]
public class TestingSomething
{
    [Test]
    [ExpectedException(typeof(ArgumentNullException))]
    public void Constructor_NullConnectionString_ThrowsExpectedException()
    {
        var something = new Something(null);
    }
}

This test should never be executed from FitNesse. It’s a completely valid test but other than a developer, who cares? Tests like this could well be written in a TDD fashion right along with the code they’re testing. Introducing a huge layer of abstraction such as FitNesse would simply kill the developer’s flow. In any case, what would the wiki page look like?

Deploying Code

FitNesse will execute code, and code can do anything you want it to, including deploying other code. It is entirely possible to create a wiki page around each thing you want to deploy and have the test outcomes driven by successful deployments. But really, do you want to do that? Download Go or Ansible – they’re far better at this.

What FitNesse Is

OK, so now we’ve covered a few things that FitNesse is definitely not, let’s talk about how I think it can most usefully fit into the agile development process.

FitNesse closes the gap between specification and executable tests, creating an executable specification. This specification will live through the whole development process and into continued support processes. Let’s start with creating a specification.

Creating a Specification

A good FitNesse based specification should be all about behaviour. Break down the system under test into chunks of behaviour in exactly the same way as you would when writing stories for a SCRUM team. The behaviours that you identify should be added to a FitNesse wiki. Nesting pages for related behaviours makes a lot of sense. If you’re defining how a service endpoint behaves during various exception states then a parent page of ‘Exception States’ with one child page per state could make some sense, but you aren’t forced to adhere to that structure and what works for you could be completely different.

Giving a definition in prose is great for communicating a generalisation of what you’re wanting to define and gives context for someone reading the page. What prose doesn’t do well is define specific examples – this is where specification by example can be useful. Work as a team (not just the ‘3 amigos’) to generate sufficient specific examples of input and output to cover the complete behaviour you’re trying to define. The more people you have involved, the less likely you are to miss anything. Create a decision table in the wiki page for these examples. You now have the framework for your executable tests.

Different Types of Test

In any development project, there are a number of different levels of testing. Different people refer to these differently, but in general we have:

  • Unit Tests – testing a small unit of code ‘in process’ with the test runner
  • Component Tests – testing a running piece of software in isolation
  • Integration Tests – testing a running piece of software with other software in a live-like environment
  • UI Tests – testing how a software’s user interface behaves; can be either a component test or an integration test
  • Load Tests – testing how a running piece of software responds under load, usually an integration test
  • Manual Tests – testing things that aren’t easily quantifiable within an automated test or carrying out exploratory testing

Manual tests are by their nature not automated, so FitNesse is probably not the right tool to drive these. Also FitNesse does not natively support UI tests, so I won’t go into those here. Load tests are important but may require resources that would become expensive if run continuously, so although these might be triggered from a FitNesse page, they would probably be classified in such a way so they aren’t run constantly. Perhaps using a top level wiki page ‘Behaviour Under Load’.

In any case, load tests are a type of integration test, so we’re left with three different types of test we could automate from FitNesse. So which type of test should be used for which behaviour?

The test pyramid was conceived by Mike Cohn and has become very familiar to most software engineers. There are a few different versions of the diagram floating around on the internet.

The pyramid shows that ideally there should be more unit tests than component tests, and more integration tests than system tests, etc. This assertion comes from trying to keep things simple (which is a great principle to follow); some tests are really easy to write and to run, whereas some tests take a long time to execute or even require someone to manually interact with the system. The preference should always be to test behaviour in the simplest way that proves success, but when we come to convince ourselves of a specific piece of behaviour, a single class of test may not suffice. There could be extensive unit test coverage for a class, but unless our tests also prove that class is being called, then we still don’t have our proof. So we’re left with the possibility that any given behaviour could require a mix of unit tests, component tests and integration tests to prove things are working.

Hooking up the Code

So, how do we get each of our different classes of tests to run from FitNesse? Our three primary types of test are unit, component and integration. Let’s look at unit tests first.

Unit Tests

A unit test executes directly against application code, in process with the test itself. External dependencies to the ‘unit’ are mocked using dependency injection and mocking frameworks. A unit test depends only on the code under test, calls will never be made to any external resources such as databases or file systems.

Let’s assume we’re building a support ticketing system. We might have some code which adds a comment onto a ticket.

namespace Ticketing
{
    public class CommentManager
    {
        private readonly Ticket _ticket;

        public CommentManager(Ticket ticket)
        {
            _ticket = ticket;
        }

        public void AddComment(string comment, DateTime commentedOn, string username)
        {
            _ticket.AddComment(new Comment(comment, username, commentedOn));
        }
    }
}
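
The examples in this post also rely on a Ticket class with a Comments collection and a Comment class, neither of which is shown above. A minimal sketch of what they might look like follows – the exact shape in the sample code may differ, and the parameterless constructor and settable properties are only there so Web API can model-bind the types later on.

using System;
using System.Collections.Generic;

namespace Ticketing
{
    public class Comment
    {
        public Comment() { }   // parameterless constructor for model binding

        public Comment(string text, string username, DateTime commentedOn)
        {
            Text = text;
            Username = username;
            CommentedOn = commentedOn;
        }

        public string Text { get; set; }
        public string Username { get; set; }
        public DateTime CommentedOn { get; set; }
    }

    public class Ticket
    {
        public Ticket()
        {
            Comments = new List<Comment>();
        }

        public string Number { get; set; }
        public List<Comment> Comments { get; private set; }

        public void AddComment(Comment comment)
        {
            Comments.Add(comment);
        }
    }
}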

To test this, we can use a decision table which would be defined in FitNesse using Slim as:

|do comments get added|
|comment|username|commented on|comment was added?|
|A comment text|AUser|21-Jan-2012|true|

This will look for a class called DoCommentsGetAdded, set the properties Comment, Username and CommentedOn and call the method CommentWasAdded() from which it expects a boolean. There is only one test line in this test which tests the happy path, but others can be added. The idea should be to add enough examples to fully define the behaviour but not so many that people can’t see the wood for the trees.

We obviously have to create the class DoCommentsGetAdded and allow it to be called from FitNesse. I added the Ticketing.CommentManager class to a solution called FitSharpTest, and I’m now going to add another class library to that solution called Ticketing.Test.Unit. I’ll add a single class called DoCommentsGetAdded.

using System;

namespace Ticketing.Test.Unit
{
    public class DoCommentsGetAdded
    {
        public string Comment { get; set; }
        public string Username { get; set; }
        public DateTime CommentedOn { get; set; }

        public bool CommentWasAdded()
        {
            var ticket = new Ticket();
            var manager = new CommentManager(ticket);
            int commentCount = ticket.Comments.Count;
            manager.AddComment(Comment, CommentedOn, Username);
            return ticket.Comments.Count == commentCount + 1;
        }
    }
}

To reference this class from FitNesse you’ll have to do three things:

  1. Install FitSharp to allow Slim to recognise .NET assemblies. If you use Java then you won’t need to do this part – FitNesse natively supports Java.
  2. Add the namespace of the class either in FitSharp’s config.xml or directly into the page using a Slim Import Table. I updated the config to the following:
    <suiteConfig>
      <ApplicationUnderTest>
        <AddNamespace>Ticketing.Test.Unit</AddNamespace>
      </ApplicationUnderTest>
    </suiteConfig>
  3. Add an !path declaration to your FitNesse Test page pointing at your test assembly.

To get your Test page working with FitSharp, the first four lines in edit mode should look like this:

!define TEST_SYSTEM {slim}
!define COMMAND_PATTERN {%m -r fitSharp.Slim.Service.Runner -c D:\Programs\Fitnesse\FitSharp\config.xml %p}
!define TEST_RUNNER {D:\Programs\Fitnesse\FitSharp\Runner.exe}
!path C:\dev\FitSharpTest\Ticketing.Test.Unit\bin\Debug\Ticketing.Test.Unit.dll

Notice the !path entry (line 4). The last step is to add your decision table to the page, so the whole page in edit mode looks like this:

!define TEST_SYSTEM {slim}
!define COMMAND_PATTERN {%m -r fitSharp.Slim.Service.Runner -c D:\Programs\Fitnesse\FitSharp\config.xml %p}
!define TEST_RUNNER {D:\Programs\Fitnesse\FitSharp\Runner.exe}
!path C:\dev\FitSharpTest\Ticketing.Test.Unit\bin\Debug\Ticketing.Test.Unit.dll

|do comments get added|
|comment|username|commented on|comment was added?|
|A comment text|AUser|21-Jan-2012|true|

Now make sure your solution is built (if you have referenced the debug assemblies then make sure you build in debug), save your changes in FitNesse, and hit the ‘Test’ button at the top of the page.

There are of course lots of different types of tables you can use to orchestrate different types of tests. The full list is beyond the scope of this post but if you dip into the documentation under ‘Slim’ then you’ll find it quite easily.

The mechanism always remains the same, however – the diagram below outlines the general pattern.

[Sequence diagram: FitNesse (Slim or NUnit) drives the test assembly, which in turn calls the assembly under test]

This is not really any different to what you would see if you were to swap Slim for NUnit. You have the system driving the test (Slim or NUnit), the test assembly and the assembly under test. The difference here is that rather than executing the test within your IDE using ReSharper (or another test runner) you’re executing the test from a wiki page. This is a trade-off, but like I said at the beginning: not all tests should be in FitNesse – we’re only interested in behaviour which a BA might specify. There will no doubt be other unit tests executed in the usual manner.
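
For comparison, a rough NUnit equivalent of the same behaviour test might look like the snippet below – the test assembly still calls the assembly under test in exactly the same way, only the driver changes (this isn’t from the sample code, just an illustration):

using System;
using NUnit.Framework;

namespace Ticketing.Test.Unit
{
    [TestFixture]
    public class CommentManagerTests
    {
        [Test]
        public void AddComment_AddsACommentToTheTicket()
        {
            // Same arrangement as the FitNesse decision table row.
            var ticket = new Ticket();
            var manager = new CommentManager(ticket);

            manager.AddComment("A comment text", new DateTime(2012, 1, 21), "AUser");

            Assert.AreEqual(1, ticket.Comments.Count);
        }
    }
}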

Integration Tests

Integration tests run against a shared environment which contains running copies of all deployables and their dependencies. This environment is very like production but generally not geared up for high load or the same level of resilience.

The sequence diagram for unit tests is actually still relevant for integration tests. All we’re doing for the integration tests is making the test assembly call running endpoints instead of calling classes ‘in process’.

Let’s set up a running service for our ‘add comment’ logic and test it from FitNesse. For fun, let’s make this a Web API service running in OWIN and hosted with TopShelf. Follow these steps to get running:

  1. Add a new command line application to your solution. Call it TicketingService (I’m using .NET 4.6.1 for this).
  2. Add NuGet references for OWIN self-hosting, Web API and Topshelf. Installing the Microsoft.AspNet.WebApi.OwinSelfHost and Topshelf packages will pull in the Owin, Microsoft.Owin and Web API dependencies the code below needs.
  3. Add a Startup class which should look like this:
    using System.Web.Http;
    using Owin;
    
    namespace TicketingService
    {
        public class Startup
        {
            public void Configuration(IAppBuilder appBuilder)
            {
                HttpConfiguration httpConfiguration = new HttpConfiguration();
                httpConfiguration.Routes.MapHttpRoute(
                       name: "DefaultApi",
                       routeTemplate: "api/{controller}/{id}",
                       defaults: new { id = RouteParameter.Optional }
                   );
                appBuilder.UseWebApi(httpConfiguration);
            }
        }
    }
    
  4. Add a Service class which looks like this:
    using System;
    using Microsoft.Owin.Hosting;
    
    namespace TicketingService
    {
        public class Service
        {
            private IDisposable _webApp;
    
            public void Start()
            {
                string baseAddress = "http://localhost:9000/";
    
                // Start OWIN host
                _webApp = WebApp.Start<Startup>(url: baseAddress);
            }
    
            public void Stop()
            {
                _webApp.Dispose();
            }
        }
    }
    
  5. Modify your Program class so it looks like this:
    using System;
    using Topshelf;
    
    namespace TicketingService
    {
        class Program
        {
            static void Main(string[] args)
            {
                HostFactory.Run(
                    c =>
                    {
    
                        c.Service<Service>(s =>
                        {
                            s.ConstructUsing(name => new Service()); 
    
                            s.WhenStarted(service => service.Start());
                            s.WhenStopped(service => service.Stop());
                        });
    
                        c.SetServiceName("TicketingService");
                        c.SetDisplayName("Ticketing Service");
                        c.SetDescription("Ticketing Service");
    
                        c.EnablePauseAndContinue();
                        c.EnableShutdown();
    
                        c.RunAsLocalSystem();
                        c.StartAutomatically();
                    });
    
                Console.ReadKey();
            }
        }
    }
    
  6. Make sure your TicketingService project is referencing your Ticketing project.
  7. Add a fake back end service to provide some persistence:
    using Ticketing;
    
    namespace TicketingService
    {
        public static class BackEndService
        {
            public static Ticket Ticket { get; set; } = new Ticket();
        }
    }
    
  8. And finally, add a controller to handle the ‘add comment’ call and return a count (yes, I know this is a bit of a hack, but it’s just to demonstrate things)
    using System.Web.Http;
    using Ticketing;
    
    namespace TicketingService
    {
        public class CommentController : ApiController
        {
    
            public void Post(Comment comment)
            {
                BackEndService.Ticket.AddComment(comment);
            }
    
            public int GetCount()
            {
                return BackEndService.Ticket.Comments.Count;
            }
        }
    }
    

Right, run this service and you’ll see that you can add as many comments as you like and retrieve the count.

Going back to our sequence diagram, we have a page in FitNesse already, and now we have an assembly to test. What we need to do is create a test assembly to sit in the middle. I’m adding Ticketing.Test.Integration as a class library in my solution.

It’s important to be aware of what version of .NET FitSharp is built on. The version I’m using is built against .NET 4, so my Ticketing.Test.Integration will be a .NET 4 project.

I’ve added a class to the new project called DoCommentsGetAdded which looks like this:

using System;
using RestSharp;

namespace Ticketing.Test.Integration
{
    public class DoCommentsGetAdded
    {
        public string Comment { get; set; }
        public string Username { get; set; }
        public DateTime CommentedOn { get; set; }

        public bool CommentWasAdded()
        {
            var client = new RestClient("http://localhost:9000");
            var request = new RestRequest("api/comment", Method.POST);
            request.AddJsonBody(new {Text = Comment, Username = Username, CommentedOn = CommentedOn});
            request.AddHeader("Content-Type", "application/json");
            client.Execute(request);

            var checkRequest = new RestRequest("api/comment/count", Method.GET);
            IRestResponse checkResponse = client.Execute(checkRequest);

            return checkResponse.Content == "1";
        }
    }
}

I’ve also added RestSharp via NuGet.

There are now only two things I need to do to make my test page use the new test class.

  1. Add the namespace Ticketing.Test.Integration to FitSharp’s config file:
    <suiteConfig>
      <ApplicationUnderTest>
        <AddNamespace>Ticketing.Test.Unit</AddNamespace>
        <AddNamespace>Ticketing.Test.Integration</AddNamespace>
      </ApplicationUnderTest>
    </suiteConfig>
  2. Change the !path property in the test page to point at the right test assembly:
    !define TEST_SYSTEM {slim}
    !define COMMAND_PATTERN {%m -r fitSharp.Slim.Service.Runner -c D:\Programs\Fitnesse\FitSharp\config.xml %p}
    !define TEST_RUNNER {D:\Programs\Fitnesse\FitSharp\Runner.exe}
    !path C:\dev\FitSharpTest\Ticketing.Test.Integration\bin\Debug\Ticketing.Test.Integration.dll

    |do comments get added|
    |comment|username|commented on|comment was added?|
    |A comment text|AUser|21-Jan-2012|true|
    

Now, make sure the service is running and hit the ‘Test’ button in FitNesse.

Obviously, this test was configured to execute against our localhost, but it could just as easily have been configured to execute against a deployed environment. Also, the code in the test class isn’t exactly what I would call first class – it’s just to get stuff working.
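
For example, the base address could be pulled from an environment variable so the same FitNesse page runs against localhost or a deployed environment – a rough sketch, where the variable name TICKETING_BASE_URL is made up:

using System;
using RestSharp;

namespace Ticketing.Test.Integration
{
    public static class TestEnvironment
    {
        // Hypothetical: fall back to localhost when no environment is specified.
        public static string BaseAddress
        {
            get
            {
                return Environment.GetEnvironmentVariable("TICKETING_BASE_URL")
                       ?? "http://localhost:9000";
            }
        }

        public static RestClient CreateClient()
        {
            return new RestClient(BaseAddress);
        }
    }
}

The test class would then create its RestClient via TestEnvironment.CreateClient() rather than new-ing one up against a hard-coded localhost address.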

The important thing to take from this is that the pattern in the original sequence diagram still holds true, so it’s quite possible to test this behaviour as either a unit test or as an integration test, or even both.

Component Tests

A component test is like an integration test in that it requires your code to be running in order to test. Instead of your code executing in an environment with other real life services, it’s tested against stubs which can be configured to respond with specific scenarios for specific tests, something which is very difficult to do in an integration environment where multiple sets of tests may be running simultaneously.

I recently wrote a short post about using Mountebank for component testing. The same solution can be applied here.

I’ve added the following controller to my TicketingService project:

using System.Net;
using System.Web;
using System.Web.Http;
using RestSharp;
using Ticketing;

namespace TicketingService
{
    public class TicketController : ApiController
    {
        public void Post(Ticket ticket)
        {
            var client = new RestClient("http://localhost:9999");
            var request = new RestRequest("ticketservice", Method.POST);
            request.AddHeader("Content-Type", "application/json");
            request.AddJsonBody(ticket);

            IRestResponse response = client.Execute(request);

            if (response.StatusCode != HttpStatusCode.Created)
            {
                throw new HttpException(500, "Failed to create ticket");
            }
        }
    }
}

There are so many things wrong with this code, but remember this is just for an example. The functioning of this controller depends on an HTTP endpoint which is not part of this solution. It cannot be tested by an integration test without that endpoint being available. If we want to test this as a component test, then we need something to pretend to be that endpoint at http://localhost:9999/ticketservice.

To do this, install Mountebank by following these instructions:

  1. Make sure you have an up to date version of node.js installed by downloading it from here.
  2. Run the Command Prompt for Visual Studio as an administrator.
  3. Execute:
    npm install -g mountebank
    
  4. Run Mountebank by executing:
    mb
    
  5. Test it’s running by visiting http://localhost:2525.

To create an imposter which will return a 201 response when data is POSTed to /ticketservice, use the following json:

{
    "port": 9999,
    "protocol": "http",
    "stubs": [{
        "responses": [
          { "is": { "statusCode": 201 }}
        ],
        "predicates": [
              {
                  "equals": {
                      "path": "/ticketservice",
                      "method": "POST",
                      "headers": {
                          "Content-Type": "application/json"
                      }
                  }
              }
            ]
    }]
}

Our heavily hacked TicketController class will function correctly only if this imposter returns 201, anything else and it will fail.

Now, I’m going to list the code I used to get this component test executing from FitNesse from a Script Table. I’m very certain that this code is not best practice – I’m trying to show the technicality of making the service run against Mountebank in order to make the test pass.

I’ve added a new project to my solution called Ticketing.Test.Component and I’ve updated the config.xml file with the correct namespace. I have two files in that project: one is called imposters.js, which contains the json payload for configuring Mountebank; the other is a new version of DoCommentsGetAdded.cs which looks like this:

using System;
using System.IO;
using System.Net;
using RestSharp;

namespace Ticketing.Test.Component
{
    public class DoCommentsGetAdded
    {
        private HttpStatusCode _result;

        public string Comment { get; set; }
        public string Username { get; set; }
        public DateTime CommentedOn { get; set; }

        public void Setup(string imposterFile)
        {
            var client = new RestClient("http://localhost:2525");
            var request = new RestRequest("imposters", Method.POST);
            request.AddHeader("Content-Type", "application/json");
            using (FileStream imposterJsFs = File.OpenRead(imposterFile))
            {
                using (TextReader reader = new StreamReader(imposterJsFs))
                {
                    string imposterJs = reader.ReadToEnd();
                    request.AddParameter("application/json", imposterJs, ParameterType.RequestBody);
                }
            }
            client.Execute(request);
        }

        public void AddComment()
        {
            var client = new RestClient("http://localhost:9000");
            var request = new RestRequest("api/ticket", Method.POST);
            request.AddJsonBody(new { Number = "TicketABC" });
            request.AddHeader("Content-Type", "application/json");
            IRestResponse restResponse = client.Execute(request);
            _result = restResponse.StatusCode;
        }

        public bool CommentWasAdded()
        {
            return _result != HttpStatusCode.InternalServerError;
        }

        public void TearDown(string port)
        {
            var client = new RestClient("http://localhost:2525");
            var request = new RestRequest($"imposters/{port}", Method.DELETE);
            request.AddHeader("Content-Type", "application/json");
            client.Execute(request);
        }
    }
}

I’ve updated my FitNesse Test page to the following:

!define TEST_SYSTEM {slim}
!define COMMAND_PATTERN {%m -r fitSharp.Slim.Service.Runner -c D:\Programs\Fitnesse\FitSharp\config.xml %p}
!define TEST_RUNNER {D:\Programs\Fitnesse\FitSharp\Runner.exe}
!path C:\dev\FitSharpTest\Ticketing.Test.Component\bin\Debug\Ticketing.Test.Component.dll

|Script:do comments get added|
|setup;|C:\dev\FitSharpTest\Ticketing.Test.Component\bin\Debug\imposters.js|
|add comment|
|check|comment was added|true|
|tear down;|9999|

A Script Table basically allows a number of methods to be executed in sequence and with various parameters. It also allows for ‘checks’ to be made (which are your assertions) at various stages – it looks very much like you would expect a test script to look.

In this table, we’re instantiating our DoCommentsGetAdded class, calling Setup() and passing the path to our imposters.js file, calling AddComment() to add a comment and then checking that CommentWasAdded() returns true. Then we’re calling TearDown().

Setup() and TearDown() are there specifically to configure Mountebank appropriately for the test and to destroy the Imposter afterwards. If you try to set up a new Imposter at the same port as an existing Imposter, Mountebank will throw an exception, so it’s important to clean up. Another option would have been to set up the Imposter in the DoCommentsGetAdded constructor and add a ~DoCommentsGetAdded() destructor to clean up – this would mean the second and last lines of our Script Table could be removed. I am a bit torn as to which approach I prefer, or whether a combination of both is appropriate. In any case, cleaning up is important if you want to avoid port conflicts.

Ok, so run your service, make sure Mountebank is also running and then run your test from FitNesse.

Again, this works because we have the pattern of our intermediary test assembly sat between FitNesse and the code under test. We can write pretty much whatever code we want here to call any environment we like or to just execute directly against an assembly.

Debugging

I spent a lot of time running my test code from NUnit in order to debug and the experience grated because it felt like an unnecessary step. Then I Googled to see what other people were doing and I found that by changing the test runner from:

!define TEST_RUNNER {D:\Programs\Fitnesse\FitSharp\Runner.exe}

to:

!define TEST_RUNNER {D:\Programs\Fitnesse\FitSharp\RunnerW.exe}

FitSharp helpfully popped up a .NET dialog box and waited for me to click ‘Go’ before running the test. This gives an opportunity to attach the debugger.

Summary

A high level view of what has been discussed here:

  • Three types of test which can be usefully run from FitNesse: Unit, Integration and Component.
  • FitNesse is concerned with behaviour – don’t throw NUnit out just yet.
  • The basic pattern is to create an intermediary assembly to sit between FitNesse and the code under test. Use this to abstract away technical implementation.
  • Clean up Mountebank Imposters.
  • You need FitSharp to run FitNesse against .NET code.
  • Debug with FitSharp by referencing the RunnerW.exe test runner and attaching a debugger to the dialog box when it appears.

Not Quite Enterprise

In this first post on FitNesse, I’ve outlined a few different types of test which can be executed and listed code snippets and instructions which should allow someone to get FitNesse running on their machine. This is only a small part of the picture. Having separate versions of the test suite sat on everyone’s machine is not a useful solution. A developer’s laptop can’t be referenced by a CI/CD platform. FitNesse can be deployed to a server. There are strategies for getting tests to execute as part of a DevOps pipeline.

This has been quite a lengthy post and I think these topics along with versioning and multi-environment scenarios will be best tackled in a subsequent post.

I also want to take a more process oriented look at how tests get created, who should be involved and when. So maybe I have a couple of new entries to work on.

Integration Testing Behaviour with Mountebank

Developer’s machine > dev shared environment > staging environment > UAT > production.

Probably not exactly how everyone structures their delivery pipelines but probably not that far off. It allows instant feedback on whether what a developer is writing actually works with the code other developers are writing. And that’s a really good thing. Unfortunately, it misses something…

Each environment (other than the developer’s own machine) is shared with other developers who are also deploying new code at the same time. So how do you get an integration test for component A that relies on component B behaving in a custom manner (maybe even failing) to run automatically, without impacting the people who are trying to build and deploy component B?

If we were writing a unit test we would simply inject a mocked dependency. Fortunately there’s now a fantastic piece of kit available for doing exactly this but on an integration scale: enter Mountebank.

This clever piece of kit will intercept a network call for ANY protocol and respond in the way you ask it to. Transparent to your component and as easy to use as most mocking frameworks. I won’t go into detail about how to configure ‘Imposters’ as their own documentation is excellent, but suffice to say it can be easily configured in a TestFixtureSetup or similar.
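
As a rough illustration of ‘easily configured in a TestFixtureSetup’, the sketch below uses NUnit and RestSharp to create and delete an imposter through Mountebank’s REST API on its default port (2525). The imposter here is a deliberately trivial one that returns 201 to everything on port 9999; the class and method names are made up for the example.

using NUnit.Framework;
using RestSharp;

[TestFixture]
public class ComponentBehaviourTests
{
    private RestClient _mountebank;

    [TestFixtureSetUp]   // [OneTimeSetUp] on newer versions of NUnit
    public void CreateImposter()
    {
        // Ask Mountebank to stand up an imposter on port 9999.
        _mountebank = new RestClient("http://localhost:2525");
        var request = new RestRequest("imposters", Method.POST);
        request.AddHeader("Content-Type", "application/json");
        request.AddParameter("application/json",
            "{ \"port\": 9999, \"protocol\": \"http\", \"stubs\": [ { \"responses\": [ { \"is\": { \"statusCode\": 201 } } ] } ] }",
            ParameterType.RequestBody);
        _mountebank.Execute(request);
    }

    [TestFixtureTearDown]
    public void RemoveImposter()
    {
        // Clean up so the port is free for the next test run.
        _mountebank.Execute(new RestRequest("imposters/9999", Method.DELETE));
    }

    // [Test] methods exercising the component would go here.
}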

So where does this fit into our pipeline? Personally, I think the flow should be:

Push code to repo > Code is pulled onto a build server > Build > Unit test > Integration test > Start deployment pipeline

The step where Mountebank comes in is obviously ‘integration testing’.

Keep in mind that installing the component and running it on the build agent is probably not a great idea, so make good use of the cloud or docker or both to spin up a temporary instance which has Mountebank already installed and running. Push your component to it, and run your integration tests. Once your tests have run then the instance can be blown away (or if constantly destroying environments gets a bit slow, maybe have them refreshing every night so they don’t get cluttered). Docker will definitely help keeping these processes efficient.

This principle of spinning up an isolated test instance can work in all kinds of situations, not just where Mountebank would be used. Calls to SQL Server can be redirected to a .mdf file for data dependent testing. Or DynamoDb tables can be generated specifically scoped to the running test.
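
As a small example of the SQL Server case, the test run can simply be pointed at a local .mdf file instead of a shared server – a sketch, where the use of LocalDB and the file name are assumptions:

using System.Data.SqlClient;

public static class TestDatabase
{
    // Hypothetical test-only connection string attaching a local .mdf file via LocalDB.
    public const string ConnectionString =
        @"Server=(localdb)\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\TicketingTests.mdf;Integrated Security=True";

    public static SqlConnection Open()
    {
        var connection = new SqlConnection(ConnectionString);
        connection.Open();
        return connection;
    }
}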

What we end up with is the ability to test more behaviours than we can in a shared environment where other people are trying to run their tests at the same time. Without this, our integration tests can get restricted to only very basic ‘check they talk to each other’ style tests which, although they have value, do not cover everything we’d like.

Going Deep Enough with Microservices

Moving from a monolith architecture to microservices is a widely debated process, with many recommendations and nuggets of advice available on the web in blogs like this. There are so many different opinions out there mainly because where an enterprise finds their main complexities lie depends on the skillsets of their technologists, the domain knowledge within the business and the existing code base. During the years I’ve spent as a contractor in a very wide range of enterprises, I’ve seen lots of monolith architectures – all of them causing slightly different headaches because those responsible for developing them let different aspects of the architecture slip. After all, the thing that is often forgotten is that if a monolith is maintained well, then it can work. The reverse is also true – if a microservice architecture is left to evolve on its own, it can cause as many problems as a poorly maintained monolith.

Domains

One popular way to break things down is using Domain Driven Design. Two books which cover most concepts involved in this process are ‘Building Microservices’ by Sam Newman (http://shop.oreilly.com/product/0636920033158.do) and ‘Implementing Domain Driven Design’ by Vaughn Vernon (http://www.amazon.com/Implementing-Domain-Driven-Design-Vaughn-Vernon/dp/0321834577), which largely references ‘Domain Driven Design: Tackling Complexity in the Heart of Software’ by Eric Evans (http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215). I recommend Vaughn’s book over Evans’ as the latter is a little dry.

If you take on board even just half the content covered in these books, you’ll be on a reasonable footing to get started. You’ll make mistakes, but as Sam Newman points out (and I’ve seen for myself) that’s inevitable.

Something that seems to be left out of a lot of domain driven discussions is what happens beyond the basic CRUD processes and domain logic in the application layer. Attention sits primarily with the thin interaction between a web interface and the domain processing by the aggregate in question. When dismantling a monolith architecture into microservices, focusing on just the application layer can give the impression of fast progress, but in reality half the picture is missing. It’s likely that in a few months there will be several microservices but instead of them operating solely in their sub-domains, they’ll still be tied to the database that the original monolith was using.

Context

It’s hugely important to pull the domain data out of the monolith store. This is for the very same reasons we segregate service responsibilities into sub-domains. Data pertaining to a given domain may exist in other domains as well but changes will not necessarily be subjected to the same domain rules and individual records may have different properties. There may be a User record in several sub-domains, each with a Username property but the logic around how duplicate Usernames are prevented should sit firmly in a single sub-domain. If a service in a different sub-domain needs to update the username, it should either call a public service from the Profile sub-domain or raise a ‘Username Updated’ event that the Profile sub-domain would handle, process and possibly respond with a ‘Username Update Failed’ event of its own.
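
As a sketch of what those messages between sub-domains might look like (the class and property names are illustrative only – the point is that sub-domains exchange events rather than sharing table rows):

using System;

// Raised by whichever service changes a username, for the Profile sub-domain to handle.
public class UsernameUpdated
{
    public Guid UserId { get; set; }
    public string NewUsername { get; set; }
    public DateTime UpdatedOn { get; set; }
}

// Raised by the Profile sub-domain if its rules (e.g. no duplicates) reject the change.
public class UsernameUpdateFailed
{
    public Guid UserId { get; set; }
    public string AttemptedUsername { get; set; }
    public string Reason { get; set; }
}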

This example may be a little contrived – checking for duplicates could be something that’s implemented everywhere it’s needed. But consider what would happen if it became necessary to check for duplicates within another external system every time a Username is updated. That logic could easily be encapsulated behind the call to the Profile service but having to update every service that updates Usernames wouldn’t be good practice.

So if we are now happy that the same data represented in different sub-domains could at any one time be different (given the previous two paragraphs) then we shouldn’t store the data for both sub-domains in the same table.

Local Data

In fact, we’re now pretty well removed from needing a classic relational database for storing data that’s local to the sub-domain. We’re dealing with data that is limited in scope and is intended for use solely by the microservices built to sit in that sub-domain. NoSQL databases are ideal for this scenario and no matter which platform you’ve chosen to build on there are excellent options available. One piece of advice I think is pretty sound is that if you are working in the cloud, you’ll usually get the best performance by using the data services provided by your cloud provider. Make sure you do your homework, though – some have idiosyncrasies that can impact performance if you don’t know about them.

So now we have data stored locally to the sub-domain, but this isn’t where the work stops. It’s likely there’s a team of DBAs jumping around wondering why their data warehouse isn’t getting any new data.

The problem is that the relational database backing the monolith wasn’t just acting as a data-store for the application. There were processes feeding other data-stores for things like customer reporting, machine learning platforms and BI warehouses. In fact, anything that requires a historical view of things will be reading it from one or more stores that are loaded incrementally from the monolith’s relational database. Now that data is being stored in a manner best suiting any given sub-domain, there isn’t a central source for that data to be pulled from into these downstream stores.

Shift of Responsibility

Try asking a team of DBAs if they fancy writing CLR-based stored procedures to detect changes and pull new records into their warehouse by querying whatever data-store technologies have been decided on in each case – I doubt they’ll be too receptive. The responsibility for getting data out of each local data-store now has to move closer to the application services.

The data guys are interested in recording historical and aggregated records, which is convenient as there is a useful, well-known tool for informing different systems that something has happened – an event.

It’s been argued that using events to communicate across sub-domains is misusing an event stream as a message bus. My argument in this case is that the back-end historical data-store is still within the original sub-domain. The data being stored belongs specifically to that sub-domain and still holds the same context as when it was saved. There has been a transition to a new medium of storage, but that’s all.

So we are now free to raise events from our application microservices into event streams which are then handled by a service specifically designed to transfer data from events into whatever downstream stores were originally being fed from the monolith database. This gives us full extraction from the monolithic architecture and breaks the sub-domain’s dependency on the monolith database.

There is also the possibility that we can now give more fine grained detail of changes than was being recorded previously.

Gaps in the Monolith Database

Of course back end data-stores aren’t the only consumers of the sub-domain’s data. Most likely there will be other application level queries that used to read the data you’re now saving outside of the monolith database. How you manage these dependencies will depend on whether the read requests are coming from the same sub-domain or another. If they’re from the same sub-domain then it’s equally correct to either pull the data from an event stream or from microservices within that sub-domain. Gradually, a sub-domain’s dependency on the monolith database will die. If the queries are coming from a different sub-domain then it’s better to continue to update the monolith database as a consumer of the data stored locally to the sub-domain – the original table no longer holds the master copy of the data relevant to the sub-domain you’re working on.
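
A sketch of that ‘monolith as a consumer’ idea: an event handler in the sub-domain keeps the old table up to date for readers in other sub-domains, using the UsernameUpdated event sketched earlier (the handler shape and the SQL are illustrative only):

using System.Data.SqlClient;

// Illustrative handler: the sub-domain owns the data; the old monolith table is just
// another downstream consumer kept up to date from domain events.
public class UsernameUpdatedMonolithUpdater
{
    private readonly string _monolithConnectionString;

    public UsernameUpdatedMonolithUpdater(string monolithConnectionString)
    {
        _monolithConnectionString = monolithConnectionString;
    }

    public void Handle(UsernameUpdated @event)
    {
        using (var connection = new SqlConnection(_monolithConnectionString))
        using (var command = new SqlCommand(
            "UPDATE dbo.Users SET Username = @username WHERE UserId = @userId", connection))
        {
            command.Parameters.AddWithValue("@username", @event.NewUsername);
            command.Parameters.AddWithValue("@userId", @event.UserId);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}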

Switching

Obviously we don’t want to have any gaps in the data being sent to our back-end stores, so as we pull functionality into microservices and add new data-stores local to the sub-domain, we also need to build the pipeline for our new back end processing of domain events into the warehouse. As this gets switched on, the loading processes from the original monolith can be switched off.

External Keys

Very few enterprise systems function in isolation. Most businesses make use of off-the-shelf packages or cloud based services such as Salesforce. Mapping records into these systems usually means using the primary key of each record to create a reference. If this has happened then the primary key from the monolith is most likely being relied on to hold things together. Moving away from the monolith database means the primary key generation has probably been lost.

There are two options here and I’d suggest going with whatever is the easiest – they both have their merits and problems.

  1. Continue to generate unique IDs in the same way as the monolith database did and continue to use these IDs for reference across different systems. Don’t rely on the monolith for ID generation here; create a new process in the microservice that continues the same pattern.
  2. Start generating IDs in a new way and copy the new keys out to the external systems for reference. The original keys can eventually be dropped.

Deeper than Expected

When planning the transition from monolithic architecture to microservices, there may well be promises from the management team that time will be given to build each sub-domain out properly. Don’t take this at face value – Product Managers will still have their roadmaps to fulfill, and unfortunately an end user will only ever see maybe 30% of any given slice of functionality being pulled out of a monolith. Expect the process to be difficult no matter what promises are made.

What I really want to get across here is that extracting even a small amount of functionality into microservices carries with it a much deeper dive into the enterprise’s tech stack than just creating a couple of application services. It requires time and focus from more than just the Dev team, and before it can even be started there has to be an architectural plan spanning the full vertical slice of a sub-domain, from front end to warehoused historical data.

Consequences of Not Going Deep Enough

How difficult do you find it in your organisation to get approval for technical upgrade work, or for dealing with technical debt as a project (which I’m not advocating is a good strategy), or for doing anything which doesn’t have a directly measurable positive impact on new product? In my experience, it isn’t easy and I’m not sure it should be, but that’s for another post.

Imagine you’ve managed to extract maybe 70% of your application layer away from your monolith but you’re still tied to the same data model. Have you achieved what you set out to do? You certainly don’t have loose coupling, because everything is still tied together at the data level. You don’t have domain isolation. You’re denying your data team access to the juicy new events because, with the changed data already available everywhere in the shared database, you never really need to raise them. You’ve turned a monolith into an abomination – it isn’t really microservices and it isn’t a classic monolith; it isn’t really any desired pattern at all. Even worse, the work you are missing is pretty big and may not directly carry with it any new features. Will you get agreement to remove the coupling with the database as a project in itself?

How are your developers doing? How many of them see that the strategy is only going half way? How many are moaning about paying lip service to the architecture? Wasn’t that one of the reasons you started with microservices in the first place?

Can you deploy the microservices without affecting other sub-domains? What if there are schema changes? What if there are schema changes in two sub-domains and one needs to be rolled back after release because it wasn’t quite right? Wasn’t this something microservices were supposed to prevent?

How many dodgy hacks or ‘surprises’ are there in your new code where devs have managed to make domain-isolated services work against a single relational data model? How many devs waste time hand-wringing because they know they’re building something that will be technical debt the moment it goes live?

OK, so I’m painting a darker picture than you’ll probably experience, but each of these scenarios will almost certainly come up – you just might not get to hear about it.

The crux for me is thinking about the reasons for pursuing a microservice architecture: the flexibility, the loose coupling, the technology agnosticism (if that’s a real term), the speed of continuous delivery that you’re looking for. Unless you go deeper than the low-hanging fruit of the application layer, you’ll be cheating yourself out of these benefits. Sure, you’ll see improvements short term, but you’re building something which is already technical debt. No matter what architecture you choose, if you don’t invest in maintaining it properly (or even building it properly in the first place) then it will ultimately become your albatross.

When Things Just Work

A particularly tricky epic hits the development team’s radar. The Product Manager has been mulling it over for a while, has a few use cases from end users and has scoped things pretty well. He’s had a session with the Product Owner, who has since fleshed things out into an initial set of high priority stories from a functional point of view. The Product Owner spends an hour with the Technical Lead and another Developer going over the epic. It quickly becomes apparent that they’re going to have to build some new infrastructure and a new deployment pipeline, so they grab one of the Architects to make sure their plans are in line with the technical roadmap. Some technical stories are generated and some new possibilities crop up. The Product Owner takes these to the Product Manager. Between them they agree what is expected functionally and re-prioritise now they have a clearer picture of what’s being built.

In the next grooming session the wider team hit the epic hard. There’s a lot of discussion and the Architect gets called back in when there are some disagreements. The QA has already spent some time considering how testing will work and she’s added acceptance criteria to all the stories, which are reviewed by the devs during the meeting. Each story is scored, and only when everyone is happy that there is a test approach, a deployment plan and no unresolved dependencies does a story get queued for planning.

It takes three sprints to get the epic completed, during which the team focus primarily on that epic. They pair program and work closely with the QA and Product Owner, who often seem like walking, talking specifications. There are a few surprises along the way and some work takes longer than expected, but everyone’s ok with that. Some work gets completed quicker than expected. The Product Manager has been promoting the new feature, so when it finally goes fully live the team get some immediate feedback on what users like and dislike. This triggers a few new stories to make some changes that go out in the next release.

Well, that all sounded really simple. Everyone involved did their bit, no-one expected anything unreasonable, the only real pressure came from the team’s own pride in their work and everyone went home on an evening happy. So why is it that so often this isn’t the case? Why do some teams or even entire companies expend huge amounts of effort but only succeed in increasing everyone’s stress levels, depriving people of a decent night’s sleep and releasing product with more bugs than MI5?

What Worked?

If you’re reading this then hopefully team management is something you’re interested in, and you’re probably well aware that there is never just one thing that makes a team work. In fact, Roy Osherove described three distinct states that a team can find itself in, each requiring a very different management style and exposing very different team dynamics.

If you haven’t already done so, take a look at Roy’s blog here: http://5whys.com/ and download his book here: https://leanpub.com/teamleader

Often team members are affected by influences external to the team – for example, the Product Manager is almost certain to be dealing with multiple teams and end users. Their day could have started pretty badly, and that will always colour their interactions with others – we’re only human. So at any given moment of interaction, the team could be in one of many states, some conducive to a good outcome and some leading towards a problem.

Let’s try pulling apart this idealistic scenario and see how many opportunities there were for derailment.

Early Attention

In our scenario, the epic isn’t straightforward, so the Product Manager has been thinking about what it means to his end users and where it fits into his overall plan. A lesser Product Manager might not have given it any attention yet, and instead of having some stories outlined there might be little more than a one-line description of the functionality. Lack of care when defining work signals to the rest of the team how little the epic matters. If the Product Manager doesn’t care enough about the work to put effort into defining it properly, then others will often care just as little.

Early Architectural Input

Right at the beginning, the Architect is asked how they see the work fitting in with the rest of the enterprise. Without talking tech early, the team could waste time pursuing an approach which is at best not the preferred one and at worst just won’t work.

Product Owner and Technical Lead

The Product Owner and the Technical Lead take on the initial task of getting to grips with the story together. These two are a perfect balance of product and development. Further up the seniority tree, the Product Manager and the Architect can often disagree vehemently, but this less senior pair rely on the relationship of mutual trust they’ve built. Nowhere is there a better meeting of the two disciplines. Lose either of these roles and the team will suffer.

Changing Things

Once things have been looked at from a technical point of view, the Product Owner goes back to the Product Manager to discuss some changes. If the Product Owner isn’t open to doing this then many opportunities for quick wins will be missed, and it becomes much more likely that the team will open at least one can of worms they’d rather not.

Grooming Wide

Although strictly speaking all the work that goes into defining a story is ‘grooming’ and doesn’t have to include the whole team, there should be a point (probably in a grooming session) where the majority of the team gets the chance to review things and give their own opinions. If this doesn’t happen then much of the team’s expertise is being wasted. Also, some team members will be specialists in certain areas and may see things that haven’t yet been considered. Lastly (and maybe most importantly), if the team are simply spoon-fed a spec and expected to build it, they aren’t being engaged in the work – they are far less likely to care about what is being built.

Ready for Planning

The team make sure that the stories have test approaches, reams of acceptance criteria, no unresolved dependencies, and that everyone believes the stories are clear enough to work on. This is a prerequisite for allowing the work to be planned into a sprint. Without this gate to progression, work can be started that simply can’t be finished, and working on something that can’t yet be delivered is a waste of time.

Questions with Answers

During the sprints, the Product Owner and QA are on hand all the time for direct, face-to-face discussions about the work. “Does this look right?” “Can we reduce the number of columns on the screen?” “Is this loading quickly enough?” – developers are at the sharp end and need answers there and then so they can build. Without Product and QA representation available, simple questions like these can stall a story for an hour, a day, or maybe too long to complete it in the sprint.

And More

This is one small, quite contrived scenario. In real life things are rarely as straightforward, but with every interaction and every piece of work within the team and beyond, there is the risk of some kind of derailment. To list every scenario would take forever, so how can problems be avoided?

Knowing Their Jobs

Each individual in this team knew where they fitted into the puzzle. Everyone worked together as a team. In fact things need to go a little further than that for the best outcome – team members should know what each other’s jobs entail. Take the Technical Lead role: they should be protecting their team from ill-conceived ideas (even when a lot of effort appears to have gone into those ideas). It’s part of their job to question whether proceeding with a given piece of work is the best thing to do. When they do raise concerns, these should be listened to and discussed with a level of maturity and mutual respect that befits someone who is fulfilling one of the requirements of their job. Equally, when a QA suggests that a feature seems flawed even though it passes its acceptance criteria, their concern should be addressed, even if nothing is ultimately changed. This is part of the QA’s job – raising issues of quality.

I’d like to outline what I see as an excellent balance of team members and responsibilities. I don’t mean to insinuate that this is the only way a team can work – in fact I encourage every team to find their own way. Regular retrospectives where the team look at how they’re performing, what works and what doesn’t, allow the team to form their own processes. This is nearly always preferable to mandating how things should be done as people are more likely to stick to their own ideas.

That having been said, I believe the lines are there to blur and if I were to define roles around those lines, it would be like this:

Product Manager

The Product Manager is responsible for the product roadmap. They are focussed on what end users are missing in the product as it stands, and they work to define feature sets that will better meet the end users’ requirements and better any competition. It’s a trade-off between effort, risk, how badly the feature is wanted and when it is technically feasible to implement.

It is vitally important that their roadmap is fully understood initially by the Product Owner and then by the senior members of the team. Decisions are constantly being made during delivery which benefit from a knowledge of product focus.

The Product Manager is not responsible for writing stories or for working with the development team beyond receiving demos of completed or near-completed work. They’re not responsible for the velocity of the team or for managing the team in any way, although they do have a vested interest in when things are going to be delivered and will almost certainly require an explanation if things can’t or don’t happen as quickly as they’d like.

Product Owner

The Product Owner can best be thought of as an agile Business Analyst. They are responsible for communicating the vision of the Product Manager to the team. They sit within the team and are answerable to the Technical Lead rather than the Product Manager. This might seem odd, but the team needs them to put understanding the product ahead of anything else. Without a good understanding of what is being built, the team cannot hope to build the right thing.

The Product Owner wants their stories to be understood, so they will want the team to be involved in deciding what information is recorded in a story and how it is formatted. Story design will usually evolve over a few sprints from discussions during retrospectives. It is the Product Owner’s sole responsibility to make sure that all functional details are described once and only once, and that none are left out. They will often work closely with the team’s QA to make sure all functional aspects have acceptance criteria before the story is presented in a grooming session. At a minimum, before a story is allowed to be planned into a sprint, every functional detail must have an acceptance criterion to go with it. It is quite acceptable for these acceptance criteria to be considered the specification, rather than there being two different descriptions of the same thing. This reduces duplication, which is desirable because duplication leaves a story open to misunderstanding.

QA

The QA should be focussing on automation as much as possible. The last thing a QA wants to do is spend all their time manually testing – it isn’t a good use of their time. Yes, there are a few things that are still best done by opening the app and having a look, and I’m not saying that automation should be relied on exclusively, but it should be the default approach: deciding that something can’t be automated should be the exception, not deciding that something could be. Something automated never needs to be tested again until the thing it covers changes. Something manual adds to the existing manual set of tests, and only so much of that can go on before the QA has no time left to really think about the quality of the product.

The QA should be happy that they understand how they are going to ensure the quality of every single story picked up by the team. They should have recorded acceptance criteria which act as a gateway to ‘done’ for the developers and a reminder for the QA about different aspects of the story.

Sometimes there will be several different levels of criticality within a piece of functionality. For example, imagine a piece of work on a back office system which includes modifications to an integration with a customer-facing system. It’s not the end of the world if the back office system needs a further tweak to make things absolutely perfect, but if the integration breaks then end users may be affected and data may be lost. The integration should definitely be tested with automation – for every field and every defined piece of logic involved. The internal system’s UI could probably be tested manually, depending on the circumstances.

The QA should make sure they spend time with the Product Owner looking at new stories and working quite abstractly to define acceptance criteria around the functionality. During grooming, technical considerations will throw up further acceptance criteria, and the team can decide what needs to be automated and what doesn’t – a decision the QA should make sure is made actively, not just left to a default recipe.

Architect

The Architect is not really a part of a specific team. They will have their own relationship with the Product Manager and will be familiar with the product road map. They will have three main considerations on their mind:

  1. They will be looking at the direction technology in general is heading and fighting the rot that long-lived systems suffer from. This isn’t because code breaks – quite the opposite. The problem is that working code can stay in place for years after anyone who had anything to do with writing it has left the company, even if that code is not particularly good. Replacing working systems is a difficult thing for businesses to see a need for, but if the technology stack being relied on isn’t continuously refreshed then it becomes ‘technical debt’: something no-one wants to touch and that is expensive to fix.
  2. They will be making sure the development teams have enough information about the desired approach to system building. If following a DDD approach, the teams need to know what the chosen strategy is for getting data from one sub-domain to another. They want to know what format they should be raising their domain events in and how those events should be versioned (a rough sketch of one possible envelope follows this list). Given new ideas and technologies, they need to know when it’s ok to start using them and, more importantly, when it’s too premature to do so.
  3. In conjunction with the Product Roadmap, they will be defining the Technical Roadmap. This document takes into consideration both what Product want to do and what the technical teams want to do. It’s almost a given that the Technical Roadmap will have to feed back into the Product Roadmap, as it shows what needs to be done to deliver. For this reason, it’s generally a good idea not to consider a Product Roadmap complete until it has been adjusted to accommodate the technical plan.
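As an illustration of the second point, the kind of guidance an Architect might hand to the teams could be as small as an agreed event envelope. This is only a sketch – the field names and the versioning policy are assumptions, not a standard.

    # A possible versioned event envelope. Field names and the version-handling
    # policy are assumptions made for illustration.
    import uuid
    from datetime import datetime, timezone

    def make_event(event_type, version, payload):
        """Wrap a payload in a consistent envelope so consumers in other sub-domains
        can route on type and version without inspecting the body."""
        return {
            "id": str(uuid.uuid4()),
            "type": event_type,          # e.g. "customer.address-changed"
            "version": version,          # bumped whenever the payload shape changes
            "occurred_at": datetime.now(timezone.utc).isoformat(),
            "payload": payload,
        }

    def handle(event):
        # Consumers choose behaviour by type and version, and skip versions they
        # don't yet understand rather than failing.
        if event["type"] == "customer.address-changed":
            if event["version"] == 2:
                pass   # handle the new payload shape
            elif event["version"] == 1:
                pass   # handle the old payload shape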

In the scenario at the start of this post, the Architect was consulted because additional infrastructure was going to be needed. This is something that could happen several times a week, and something the Architect should be prepared for. They need to give definite answers about what should be done right now, not what the decision would be if the question were asked in a month’s time – answering for the future leads to premature adoption of new concepts and technologies that aren’t always fully understood or fully agreed on.

Developers

The Developers are at the sharp end. They should have two main focusses, and make no mistake, both are as important as each other:

  1. They need to build the Product.
  2. They need to work out how they can become better developers.

Notice how I haven’t mentioned anything about requirements gathering. It’s classically the thing most developers overlook, but in a development team structured like this it is done by the Product Owner, the Product Manager and, to a lesser extent, the QA. It’s this support net of walking, talking specifications, combined with well-defined stories, that allows a developer to focus on doing what they do best: writing code.

Something that less experienced developers can overlook is that the Product is not just the code that makes it up; it has to be delivered. If you’re building a web application then it will ultimately have to be deployed, most likely into at least one test environment and then into production. Maybe this process is automated or is handed on to a ‘DevOps’ team, but it’s still down to the developers to build something that can actually be deployed. For example, if a piece of code that would normally talk to a database is changed so that it now only works with a new version of the database, how do we deploy? If we deploy the database first, the old code won’t work when it calls it. If we deploy the code first, its calls to the old database won’t work. There could be dozens of instances of the code, which could take hours to deploy one by one – it isn’t usually possible to deploy data changes at exactly the same time as deploying code. Basically, the Developers have to keep bringing their heads out of the code to think about how their changes impact other systems.
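One common way round that ordering problem – offered only as a sketch, with made-up table and column names and an assumed database client – is to write code that tolerates both the old and the new schema while the rollout is in flight, so the database and the code can be deployed in either order.

    # Sketch: code that works against both schema versions during a rolling deploy.
    # The database client and the 'customers' table are assumptions for illustration.
    def load_customer(db, customer_id):
        row = db.execute(
            "SELECT * FROM customers WHERE id = %s", (customer_id,)
        ).fetchone()
        record = dict(row)
        # Newer databases have split name columns; older ones only have 'name'.
        # Falling back means this code can ship before or after the schema change.
        if "first_name" in record:
            full_name = f"{record['first_name']} {record['last_name']}".strip()
        else:
            full_name = record["name"]
        return {"id": record["id"], "name": full_name}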

Technical Lead

Technical Lead is a role that is very close to my heart, as it’s the role I tend to find myself in. I believe that if a development team delivers a product, then a Technical Lead delivers the team. They are technically excellent but their focus is split – they are as much about growing the team as they are about making sure the team delivers what is expected.

There is a lot of noise about a good Technical Lead being a ‘servant leader’ and for the most part I think this is true. Still, there are times when it’s necessary to be direct, make a decision and tell people to do it. When to use each tactic is something only experience teaches, but get it wrong and the team will lose focus and cohesion.

It’s quite common to find that a Technical Lead has one or more developers in their team who are better coders than they are. It’s important for someone in the lead role to realise that this is not a slur on their abilities; the better coder is a resource to be used and a person to grow, not competition. It often happens because some developers don’t want to split their focus away from the technology and onto the people. They become incredible developers and often eventually move into a more architectural role. Because of their seniority, they can also stand in for the Technical Lead when needed, which allows a lead to occasionally focus on building something specific (it’s what we all enjoy doing, after all).

Ultimately it’s down to the Technical Lead to recognise what is missing or lacking and to make sure it is made available. They see what the team needs and make sure the team gets it (even if sometimes that’s a kick up the arse).

Blurring the Lines

Not every team has enough people to fill all these roles. Not every company is mature enough to have the staff for each of these roles. So this is where the lines begin to blur.

It isn’t always critical to have each of these roles filled as a full-time resource. Each company’s and team’s specific situation needs to be considered on its own merits. What is critical is understanding why these roles exist; that knowledge allows the same individual to wear different hats at different times. But beware of giving too much power to one person, or of removing the lines of discussion that generate great ideas.

So What Actually Makes it Work?

Working in a highly technical environment, I find my own opinion on this surprising. I don’t think it’s any individual’s technical ability, or how many different programming languages they can bring to bear. I don’t think it’s having every detail of every epic prepared completely before development work starts. Primarily, consideration must be given to the structure and dynamics of the team. The structure above is an excellent starting point, although as I’ve already mentioned, multiple roles may be taken on by a single individual. Beyond that, I believe a successful team will have these qualities:

Patience, mutual respect and a healthy dose of pragmatism.

If there is one constant in every team I’ve ever worked with or in, it’s that some things will go wrong. Mistakes will happen – no-one is immune (and if they claim otherwise, don’t believe them). Making progress doesn’t mean always building the most incredibly perfect solution; it means getting product out the door which end users will be happy with. So if there are mistakes, have the patience to realise they were inevitable. If you make the mistake, have enough respect for the rest of your team to realise that they can help fix things – believe that they aren’t judging you by your mistake. Finally, remember that the perfect solution is not the same as the perfect system – there’s more to delivering software than trying to find all the bugs.

A team with these values will ultimately always outperform a collection of technically excellent individuals.