My First Release Weekend

At the time of writing this post, I am 41 years old, I’ve been in the business of writing software for over 20 years, and I have never ever experienced a release weekend. Until now.

It’s now nearly 1 pm. I’ve been here since 7 am. There are a dozen or so different applications being deployed today, all highly coupled and maddeningly unresilient. For my part, I was deploying a web application and some config to a security platform. We again hit a myriad of issues which hadn’t been seen in prior environments and spent a lot of time scratching our heads. The automated deployment pipeline I built for the change takes roughly a minute to deploy everything, and yet it took us almost 3 hours to get to the point where someone could log in.

The release was immediately labelled a ‘success’ and everyone started singing its praises, even as subsequent deployments of other applications began to fail.

This is not success!

Success is when the release takes the 60 seconds the pipeline needs to run and it all just works! Success isn’t having to intervene to diagnose issues in an environment no-one’s allowed access to until the release weekend! Success is knowing the release is good because the deploy status is green!

But when I look at the processes being followed, I know that this pain is going to happen. So do others, who appear to expect and accept it, with hearty comments of ‘this is real world development’ and ‘this is just how we roll here’.

So much effort and failure thrown at releasing a fraction of the functionality which could have been out there if quality, not red tape, were the barrier to release.

And yet I know I’m surrounded here by some very intelligent people, who know there are better ways to work. I can’t help wondering where and why progress is being blocked.

Legislation and Off the Shelf Thinking

I’m always pleasantly surprised when I find an aspect of software delivery which I hadn’t previously considered, or seen as fully as I might have.

Today I was chatting with a colleague who, it turns out, has a long history in the business of superannuation (pensions, for those in the UK). I was expressing my very heartfelt belief that building a business’s core domain on an off the peg system is a risky undertaking. I talked about how I could understand a company purchasing a CRM system, as customer management is a well understood space – why spend money developing an in-house CRM system when you really only want it to do what every other CRM system does? I talked about how leveraging an off the peg system leaves the core domain at the mercy of the business which owns the system. I expressed dissatisfaction with the architecture of off the peg solutions: they are generally monolithic and impossible to marry with today’s continuous delivery practices.

It was at about this point that she explained that the business of superannuation is predominantly legislated by the Australian government, and that the main difference between funds is in the choice of investment opportunities. The services made available are mandated by legislation. The way a fund is managed is by and large mandated by legislation. So much is legislated that there are off the shelf systems available which cover pretty much every aspect of managing a superannuation fund, from managing fund investments to giving members access to manage their accounts online. These systems are also kept up to date with legislative changes as and when they happen, so funds stay within the bounds of the law simply by using the software.

Ok, so the off the shelf story always sounds rosier than it is in real life, but there’s an interesting point here. Because the core domain of the business is not proprietary business logic – because it is in fact not just well known but enforced by an external third party – an off the peg solution could perhaps model it perfectly well. Updates to the system from the vendor are driven by the same legislation that drives the business. The main differentiators between systems become UX and architecture.

The conversation has left me wondering whether there are any other businesses which are so highly legislated that the same logic would apply. It has also left me wondering whether a business which embraces an off the shelf solution for its core domain might find it difficult to embrace modern software delivery techniques. In fact, I wonder if such a business would ever reach the tipping point where it becomes necessary to raise the dev teams above simply hacking solutions together.

Scale or Fail

I’ve heard a lot of people say something like “but we don’t need huge scalability” when pushed for a reason why their architecture is straight out of the ’90s. “We’re not big enough for devops” is another regular excuse. But while it’s certainly true that many enterprises don’t need to worry so much about high loads and high availability, there are some other, very real benefits to embracing early 21st century architecture principles.

Scalable architecture is simple architecture

Keep it simple, stupid! It’s harder to do than it might seem. What initially appears to be the easy solution can quickly turn into an unmanageable, tightly coupled ball of dependencies where one bad line of code can affect a dozen different applications.

In order to scale easily, a system should be simple. When scaling, you could end up with dozens or even hundreds of instances, so any complexity is multiplied. Complexity is also a recipe for waste: if you scale a complex application, the chances are you’re scaling bits which simply don’t need to scale. Systems should be designed so hot functions can be scaled independently of those which are underutilised.

Simple architecture takes thought and consideration. It’s decoupled for good reason – small things are easier to keep ‘easy’ than big things. An array of small things, all built with the same basic rules and standards, can be easily managed if a little effort is put into working out an approach which works for you. Once you have a few small things all being managed in the same way, growing to lots of small things is easy, if it’s needed.

Simple architecture is also resilient, because simple things tend not to break. And even if you aren’t bothered about a few outages, it’s better to only have the outages you plan for.

Scalable architecture is decoupled

If you need to make changes in anything more than a reverse proxy in order to scale one service, then your architecture is coupled and shows signs of inelasticity. Quite apart from being scalable, decoupled architecture is much easier to maintain, and it keeps a much higher level of quality because it’s easier to test.

A decoupled architecture is scoped to a few specific modules which, once automated, can be deployed together repeatedly as a single stack with relative ease. Outages are easy to fix, as it’s just a case of hitting the redeploy button.

Your end users will find that your decoupled architecture is much nicer to use as well. Rather than making dozens of calls to load and save data across a myriad of different applications and databases, a decoupled application makes only one or two calls to load or save the data to a dedicated store, then raises events for other systems to handle. This is eventual consistency, and it isn’t difficult to make work. In fact it’s almost impossible to avoid in an enterprise system, so embracing the principle wholeheartedly makes the required thought processes easier to adopt.
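As a rough sketch of that pattern – and only a sketch, since the service, store, and event names here are hypothetical and a real system would use a proper message broker rather than an in-process bus – it might look something like this:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class EventBus:
    """A toy in-process event bus; a real system would use a message broker."""
    handlers: Dict[str, List[Callable[[dict], None]]] = field(default_factory=dict)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self.handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self.handlers.get(event_type, []):
            handler(payload)


class OrderService:
    """Owns its own store; other systems learn of changes via events."""

    def __init__(self, bus: EventBus) -> None:
        self.bus = bus
        self.orders: Dict[str, dict] = {}  # stand-in for the service's dedicated data store

    def place_order(self, order_id: str, item: str) -> None:
        # One write to the service's own store...
        self.orders[order_id] = {"item": item, "status": "placed"}
        # ...then one event for downstream systems (billing, fulfilment, etc.) to pick up later.
        self.bus.publish("order_placed", {"order_id": order_id, "item": item})


if __name__ == "__main__":
    bus = EventBus()
    # A downstream system reacts in its own time, via the event, not via a direct call.
    bus.subscribe("order_placed", lambda e: print(f"fulfilment picked up {e['order_id']}"))
    OrderService(bus).place_order("o-1", "widget")
```

The point is simply that the write path touches one store and raises one event; everything downstream catches up eventually, rather than the caller orchestrating a dozen systems synchronously.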

Scalable architecture is easier to test

If you are deploying a small, well understood stack with very well known behaviours and endpoints, then it’s a no-brainer to get some decent automated tests deployed. These can be triggered from a deployment platform with every deploy. As the data store is part of the stack and you’re following micro-architecture rules, the only records in the stack come from something in the stack. So setting up test data is simply a case of calling the APIs you’re testing, which in turn exercises those APIs. You don’t have to test beyond the interface, as it shouldn’t matter (functionally) how the data is stored, only that the stack functions correctly.
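To illustrate – purely as a sketch, with a made-up endpoint, payload, and base URL, and assuming the stack is already deployed and reachable over HTTP – such an interface-level test could be as small as this:

```python
import uuid

import requests  # assumes the stack exposes a plain HTTP API

BASE_URL = "http://localhost:8080"  # hypothetical address of the deployed stack


def test_create_and_fetch_customer():
    """Set up test data by calling the API under test, then verify through the same interface."""
    customer = {"id": str(uuid.uuid4()), "name": "Test Customer"}

    # Arrange + Act: the only way data gets into the stack is via its own API...
    created = requests.post(f"{BASE_URL}/customers", json=customer, timeout=5)
    assert created.status_code == 201

    # Assert: test at the interface only; how the stack stores the data is its own business.
    fetched = requests.get(f"{BASE_URL}/customers/{customer['id']}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Test Customer"
```

Because the test data was created through the same interface being tested, there’s no out-of-band fixture setup to keep in sync with the stack’s internals.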

Scalable architecture gets to market quicker

Given small, easily managed, scalable stacks of software, adding a new feature is a doddle. Automated tests reduce the manual test overhead. Some features can get into production in a single day, even when they require changes across several systems.

Scalable architecture leads to higher quality software

Given that in a scaling situation you would want to know your new instances are going to function, you need to attain a high standard of quality in what’s built. Fortunately, as it’s easier to test, quicker to deploy, and easier to understand, higher quality is something you get. Writing test-first code becomes second nature, even writing integration tests up front.

Scalable architecture reduces staff turnover

It really does! If you’re building software with the same practices which have been causing headaches and failures for the last several decades, then people aren’t going to want to work for you for very long. Your best people will eventually get frustrated and go elsewhere. You could find yourself in a position where you finally realise you have to change things, but everyone with the knowledge and skills to make the change has left.

Fringe benefits

I guess what I’m trying to point out is that I haven’t ever heard a good reason for not building something which can easily scale. Building for scale helps focus solutions on good architectural practices: decoupled, simple, easily testable micro-architectures. Are there any enterprises where these benefits are seen as undesirable? Yet, when faced with the decision of either continuing to build the same tightly coupled monoliths which require full weekends (or more!) just to deploy, or building something small, lightweight, easily deployed, easily maintained, and ultimately scalable, there are plenty of people claiming “Only in an ideal world!” or “We aren’t that big!”.

Bonkers!