Technical Excellence

“Don’t overengineer this. We need to move as fast as we can.”

Too many business representatives

The Agile Manifesto is built on 12 principles, the 9th of which (at the time of writing) is:

Continuous attention to technical excellence and good design enhances agility.

I regard the Agile Manifesto as a thing of truth. It has been compiled and refined by some of the best, most widely experienced minds in my field. I’m in awe that I’ve had the privilege to exchange a few thoughts with some of them, and that they give this knowledge freely in the hope that it will benefit others. I’m not here to argue their case (if you need convincing, that’s for another post, another time); I’m here to talk about one specific idea which the manifesto includes: technical excellence increases agility.

What is technical excellence?

Here are a few different concepts which contribute significantly toward technical excellence:

  • Clean code
  • Testable architecture
  • The right architecture
  • TDD
  • A unified error logging strategy
  • Automation and continuous delivery
  • A cross-discipline test approach
  • Well chosen and well implemented abstractions
  • Iterative, continuous improvement
  • Fixing problems when you find them
  • Fulfilling the non-functional requirements

Technical excellence is applying these practices in such a way as to benefit the business. The reason why this is often difficult to do is that the business rarely understands how these practices could possibly benefit them, and often thinks exactly the opposite.

One of the last slides of this talk from Martin Fowler shows how high quality code is a speed enabler, rather than speed and flexibility happening at the detriment of quality, as many businesses believe.

My list is not comprehensive, but if you get these things right, you’ll almost certainly be working as a technically excellent team. I’m not going to go into every one of these individually, but I’d like to start by talking about the last item in my list – the non-functional requirements.

Non-functional requirements

I’m willing to bet that you will have worked on more projects where non-functional requirements were fuzzy and poorly understood than projects where they were well defined and driving the architecture.

Non-technical people can find non-functional requirements dry and overly technical when compared to the juicy fun to be had with designing user interfaces. Often they fall into the realm of “a technical concern” which the business wants to ignore as much as possible, because they don’t directly bring revenue.

Performance and resilience are the two usual considerations when people talk about non-functional requirements, but more needs to be thought about than “it needs to be resilient”, or “it needs to respond quickly”. It might be that the thing you’re building doesn’t actually need to be particularly resilient or responsive, so why waste the effort in making it so? Even if it does, there is a price which comes with this. These factors have to be considered with a financial head on as well as with UX expertise – sometimes a small decline in perceived stability can drastically impact the amount of trust your users have in your application. Tying that small drop in performance to a hit in revenue takes cross-disciplined expertise – an understanding of user behaviours and of the financial workings of the business.

It’s also common to see a service which doesn’t need to be highly available or responsive being designed the same way as everything else, because “this is how we build things”.

Believing you can only ship one type of ‘thing’ is undervaluing delivery teams and costing the business money.

Further non-functional requirements include such things as:

  • Data protection
  • Security
  • Capacity
  • Scalability
  • Elasticity

Understanding which of these requirements are more important and which are less tells us how our architecture needs to look, which tells us what our delivery pipelines will look like. It tells us what skills our software teams will need, and it will shape our test approach. Getting these things wrong can increase the complexity of what you’re building by an order of magnitude, which will impact delivery times and cost.
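One way to make this concrete is to write the non-functional requirements and their relative priorities down as an explicit artifact the team reviews alongside the architecture. The sketch below is a minimal illustration in Python; the service, targets, and priority labels are all hypothetical, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class NonFunctionalRequirement:
    name: str
    target: str
    priority: str  # e.g. "must", "should", "nice to have"

# Hypothetical requirements for an imaginary order API. The value is not the
# exact numbers, but that the relative priorities are written down so the
# architecture, pipelines, and test approach can be argued against them.
order_api_nfrs = [
    NonFunctionalRequirement("availability", "99.5% monthly", "must"),
    NonFunctionalRequirement("p95 response time", "under 500 ms", "should"),
    NonFunctionalRequirement("elastic scaling", "not required", "nice to have"),
]
```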

Paying attention to non-functional requirements is one of the most basic behaviours of technically excellent teams.

Agility is directly affected by how easy it is to work with the chosen solution. The wrong approach brings added complexity which makes the solution rigid – it becomes harder to change direction, or sometimes even just to continue extending the solution as planned. When you experience pain while building something, it’ll be even harder to work with in future.

Clean code

Clean code is another important characteristic of technically excellent teams. Keeping code clean for a development team can be likened to cleaning a workspace while baking – it doesn’t have to happen, but not doing it will leave a huge headache later on. At some point someone will need to work in this space again and if things aren’t left in a way that is expected, they have to first waste time tidying up.

That last sentence is true whether we’re talking about baking or coding.

Wasting a bit of time tidying up is actually the best outcome when code isn’t kept clean. If there are time pressures (when aren’t there time pressures?) a developer may decide that they can’t take the time to tidy up and instead struggle to implement a change in a more complicated, less appropriate way. That adds to the complexity, and you have started producing legacy code in real time, getting further and further away from a solution that is easy to extend.

For decades we’ve been aware that normalising a database allows it to be built on and extended with the least amount of pain. Why is it taking us so long to realise the same applies to software?

There are other concepts which also fall under the umbrella of clean code: TDD, well chosen abstractions, fixing problems when you find them, iterative improvement. These all help us produce clean code; ignoring them will impact our ability to work with the solution, which kills our forward progress and cripples our agility.

Well chosen abstractions

There are more approaches for decomposing a solution into manageable chunks than I can name. I tend to fall into the Domain Driven Design camp, but other approaches exist and can be just as effective. Whatever your preference, the aim is the same: creating meaningful abstractions makes the code, the architecture, and the intent of the engineer easier to understand and work with.

Working on a messaging system and finding there is the concept of a queue is not surprising and is incredibly helpful to a developer. Finding something called “Terminator” is a little more surprising, and leaves a developer having to look further in the code to work out for themselves what it does – and they may not get it completely right.
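To make that concrete with a hypothetical contrast (these class names are invented for illustration, not taken from any real system):

```python
# A name drawn from the domain tells the next developer what to expect.
class DeadLetterQueue:
    """Holds messages that failed processing so they can be retried or inspected."""
    def __init__(self):
        self._messages = []

    def add(self, message):
        self._messages.append(message)

# A name like this forces them to read every caller to discover that it
# quietly discards undeliverable messages, and they may still guess wrong.
class Terminator:
    def handle(self, message):
        pass  # message silently dropped
```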

Uncertainty is a progress killer. Anything which blocks forward momentum takes time away from the considerations we should be paying a lot of attention to.

Equally, when dealing with the physical breakdown of a system into discrete services, if there isn’t a clear reason for the existence of each service, developers are left confused about how best to extend things. Just as bad, if the service-level abstraction isn’t “complete” then it will be overly chatty and tightly coupled with other services, making any change painful.

A unified error logging strategy

Writing error reporting code over and over for each different thing you build is a ridiculous waste of time. It also introduces one-off logic which a developer has to review to understand. Treat all your exceptions in the same way: build a single way to track, monitor, and alert on exceptions, and reuse it everywhere.
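As a minimal sketch of what that single, reusable mechanism might look like (Python, with hypothetical names such as handle_errors; your stack will differ):

```python
import functools
import logging

# One logger, configured once and reused by every service and job.
logger = logging.getLogger("app")

def handle_errors(func):
    """Wrap any entry point so every exception is reported the same way."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            # The same structure everywhere: function name plus full stack
            # trace, so monitoring and alerting only need to parse one format.
            logger.exception("Unhandled error in %s", func.__name__)
            raise
    return wrapper

@handle_errors
def import_orders(batch):
    # A hypothetical entry point; the decorator is reused on every one.
    raise RuntimeError("demonstration failure")
```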

This concept doesn’t just impact the quality of the software delivered, it helps developers while they build.

As wonderful as it would be for all bugs to be found locally on a development machine while everything is running with a debugger, this isn’t real life. Often we don’t see a problem until our software has been deployed to a staging environment, or maybe even into production.

Depending on your architecture, it could be that a significant portion of your bugs simply can’t ever be seen locally!

This means we are relying on our error handling strategy to tell us what went wrong. If it isn’t up to the job, then each problem will take significantly longer to fix. This impedes progress, which makes the solution very difficult to work with – killing our agility.

Automation and continuous delivery

Even as we’re about to move into the 21st year of the 21st century, I still see a lot of confusion about continuous delivery. The question I hear most is “What if we don’t want that feature to go live yet?”.

Deploying and releasing are two different things which do not necessarily have any bearing on each other beyond deployment happening at some point before release.

Delivering constantly (but not necessarily switching new features on) allows the business to release a feature as early as possible – as soon as they believe they will get a return from it.
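A simple feature toggle is enough to keep the two separate. The sketch below is Python with invented flag and function names; real systems often use a flag service rather than environment variables:

```python
import os

def feature_enabled(name: str) -> bool:
    """Read a toggle from the environment; features default to off."""
    return os.getenv(f"FEATURE_{name.upper()}", "off") == "on"

def legacy_checkout(basket):
    return {"flow": "legacy", "items": basket}   # current behaviour

def new_checkout(basket):
    return {"flow": "new", "items": basket}      # deployed, but not yet released

def checkout(basket):
    # The new code ships with every deployment; the business chooses when
    # to release it by flipping FEATURE_NEW_CHECKOUT to "on".
    if feature_enabled("new_checkout"):
        return new_checkout(basket)
    return legacy_checkout(basket)
```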

The constant pushing of new code also tells us early if we’ve introduced anything unexpected. If we have maintained a level of technical excellence, such a problem can be discovered quickly and dealt with fast. Contrary to common belief, time between failures is often not as important as time to fix.

I have only ever seen continuous delivery working well where teams are using a trunk-based source control strategy. The mindset that everything pushed to the remote is good enough to be released seems to be a major factor in making CD work. It combines techniques such as branching by abstraction and making small, iterative changes, as well as test automation.
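Branching by abstraction, for example, means the replacement grows on trunk behind an interface rather than on a long-lived branch. A minimal sketch, with hypothetical names:

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The abstraction both implementations sit behind."""
    @abstractmethod
    def charge(self, amount_pence: int) -> str: ...

class OldGateway(PaymentGateway):
    def charge(self, amount_pence: int) -> str:
        return "charged via old provider"   # existing behaviour keeps shipping

class NewGateway(PaymentGateway):
    def charge(self, amount_pence: int) -> str:
        return "charged via new provider"   # built up in small commits on trunk

def payment_gateway(use_new: bool) -> PaymentGateway:
    # Callers depend only on PaymentGateway; switching implementations is a
    # one-line change (or a feature toggle) once the new one is proven.
    return NewGateway() if use_new else OldGateway()
```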

I’ve seen it work best when developers are allowed some flexibility in what their committed code looks like, in the knowledge that it will be iterated on and made good within a few commits. This allows for some experimentation without having to create a feature branch. It also shows that the team are trusted to make good decisions about when to branch.

The bottom line

If you are working on something small and well defined, then it’s easy to output clean, extendable code with the right amount of tests. If you deliver small things regularly then you are working in a way that allows you to keep your codebase clean. Then, by paying attention to the non-functional requirements, you can make a conscious decision about what needs to be built and what doesn’t, rather than mindlessly hoping that the one path you’re following will be good enough and quick enough to deliver.

But none of this happens if you ignore technical excellence.

Ignoring technical excellence leads to overly complicated architecture, confusing abstractions at all levels, chatty and highly coupled systems, and a codebase which is difficult to extend and test. The result is low agility and long feature delivery times.

How to build a technically excellent team

I’m going to let you into what seems to be a very well kept secret: it’s likely that everyone in your team wants to be technically excellent. There are toxic people out there who are really not interested, and if you have some of them in the team, you have to deal with them directly. But the vast majority of people I’ve ever worked with have all wanted to do their best. Sometimes all they need is to see that the business sees their technical skills and knowledge as valuable.

Difficulties arise when non-technical team members (or “the business”) are driving technical decisions which they don’t have the experience and knowledge to fully understand. Always make sure a Technical Lead understands the business view on a technical decision, and allow them to weigh things up.

Believe technical people when they say “you are ignoring something important” – they often don’t mean “we must spend the next week doing nothing but this”, and instead are trying to highlight something which will impact them in ways you might not be able to understand.

Build a solid set of non-functional requirements for a piece of work, and the delivery team will have a much better understanding of what needs to be built. There will be fewer moments where technical people are wondering why things are the way they are.

Encourage the team to apply iterative learning to their development skills instead of focusing on analysing process failures. Focus on the team and their technical skills above the procedures they’re following – strict procedure is often a distraction.

Force yourself to buy into the idea that clean codebases are the easiest to work with, and they don’t happen without effort. Show that this is important by making it a team value and talk about how it directly impacts revenue in a positive way.

In other words: show some trust in your technical people, and give them room to grow.
