Why Agile fails — and what you can do about it

Glenn Wilson
5 min read · Aug 27, 2022


I recently watched my former colleague and peer Toby Irvine talk about Quality Assurance and Quality Control at an event. I won’t go into the details of QA and QC; Toby covers them much more eloquently than I ever could. But I want to draw on one part of his talk, where he asserts that Agile has failed to deliver what it promised. Although I agree with the sentiment, there are pockets where Agile does work, which suggests we should not proclaim it a complete failure. Throughout this post, I refer to Agile in its generic form as described by the Agile Manifesto (https://agilemanifesto.org/), hence the capital ‘A’.

Toby framed his assessment in the context of a learning and improvement cycle called PDCA, which was made popular in Japan during the 1950s. The architect of this particular cycle was W. Edwards Deming, who drew his inspiration from what he referred to as the ‘Shewhart cycle’. Walter Shewhart was a statistician who worked in telecommunications during the 1930s, using statistical analysis to improve customer experience. Deming, also a statistician, is widely regarded as the person who transformed Japanese manufacturing fortunes following the Second World War. Simply put, Deming’s cycle has four elements: plan, do, *check* and act. We plan some work, we do the work, we check that it has worked, we act on the output of the work, and then we plan again for the next iteration. This has become popularised as the aforementioned PDCA cycle.

Now, keen observers among you may have noticed that I italicised ‘check’ in the previous paragraph. Deming did not like the word ‘check’; he preferred ‘study’, and wrote about the Plan-Do-Study-Act (PDSA) cycle. There is a very good reason why Deming favoured ‘study’ over ‘check’: he believed that in order to derive learning from the ‘do’ step in the cycle, you need to study the outcomes in detail, not just verify the output of the activity — it’s not just a checkbox exercise. The word ‘study’ has a much deeper meaning than ‘check’. It suggests that we analyse the data from multiple angles. We answer questions such as: did we plan correctly? Did we do the activity according to the plan? Did the results meet our expectations? Did we make the right assumptions? Were the parameters correct? From these questions we can learn a great deal about the value of the work we’ve completed.
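To make the distinction concrete, here is a toy sketch in Python. Nothing below comes from Deming’s writing or from Toby’s talk; every name, field and number is invented purely to show the difference in shape between the two questions.

```python
# A toy sketch of one PDSA iteration. Every name and number here is
# hypothetical scaffolding, not a real framework or anything Deming wrote.

def check(results):
    # The shallow 'check': did the activity produce its output?
    return results["deployed"]

def study(plan, results):
    # The deeper 'study': interrogate the outcome from several angles.
    return {
        "followed_plan": results["work_done"] == plan["work_planned"],
        "met_expectations": results["observed_value"] >= plan["expected_value"],
        "assumptions_held": all(results["assumptions"].values()),
    }

plan = {"work_planned": "add checkout shortcut", "expected_value": 0.05}
results = {
    "deployed": True,
    "work_done": "add checkout shortcut",
    "observed_value": 0.01,  # e.g. uplift in conversion rate
    "assumptions": {"users_want_a_shortcut": False},
}

print(check(results))        # True -> "it worked", end of enquiry
print(study(plan, results))  # reveals which expectation or assumption failed
```

The ‘check’ returns a reassuring True and the enquiry stops there; the ‘study’ surfaces that the expected value was not met and that an underlying assumption was false.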

The question ‘did this work?’ does not go deep enough.

So let’s return to the main point of this post: why does Agile fail? According to Toby, the reason lies in the mechanics of Agile: we write code, we check it, we write more code, we check it, we write some more code, and so on. We continue this do-check cycle until, as Toby wonderfully observes, the money runs out. We avoid planning because we misinterpret ‘working software over comprehensive documentation’ and ‘responding to change over following a plan’ as an excuse to write code without planning our work effectively. When we do plan (in sprint planning, for example), we focus on selecting and prioritising development tasks to be completed within a sprint. Agile development falls into a never-ending cycle of code changes, detached from requirements driven by value. Agile fails.

But let’s look at this through a Deming lens. Instead of ‘checking’ the output, we ‘study’ the outcome of our work. To do that, we need some idea of why we wrote the code — what was the purpose of the code we wrote? What business value did we hope it would achieve? We may not have planned the details, but we would have an idea of the purpose of the products we are developing. Likewise, we know for whom we are developing our products and how our products satisfy their requirements. With this knowledge, we can deliver products that we believe meet the needs of our customers. We use analytics to examine the results of our code changes, not just to see whether they worked, but to see whether they delivered the expected value to our customers, based on the assumptions we made when we wrote the code. In other words, we study the effects of our efforts. We study a number of parameters to assess the true value of the code changes, rather than proclaiming success because we checked that they functionally worked.
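As an illustration of what that study might look like in practice, the sketch below compares a release’s observed metrics against the assumptions encoded when the code was written. The metric names, numbers and tolerance are all hypothetical.

```python
# A hedged sketch of 'studying' a release rather than 'checking' it.
# All metric names, numbers and thresholds are invented for illustration.

BEFORE = {"weekly_active_users": 12_000, "conversion_rate": 0.031, "support_tickets": 140}
AFTER = {"weekly_active_users": 12_400, "conversion_rate": 0.029, "support_tickets": 190}

# The assumptions we made when we wrote the code, expressed as the
# direction we expected each metric to move.
EXPECTATIONS = {
    "weekly_active_users": "up",
    "conversion_rate": "up",
    "support_tickets": "flat",
}

def direction(before, after, tolerance=0.02):
    # Classify the relative change as up, down or flat.
    change = (after - before) / before
    if abs(change) <= tolerance:
        return "flat"
    return "up" if change > 0 else "down"

for metric, expected in EXPECTATIONS.items():
    observed = direction(BEFORE[metric], AFTER[metric])
    verdict = "as assumed" if observed == expected else "assumption broken"
    print(f"{metric}: expected {expected}, observed {observed} -> {verdict}")
```

A functional ‘check’ would have passed this release; the study shows conversion moving the wrong way and support tickets climbing, both breaking the assumptions behind the change.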

Armed with Deming’s PDSA cycle, we start with an idea or a purpose from which we derive a plan to satisfy a value proposition. From this plan, we perform some tasks; in this case, we write code to satisfy the purpose. Next, we study the results of our tasks through a process of reflection. This is why retrospectives are so important and need to be carried out correctly. I have seen many instances of poor retrospectives that ask only: what worked? What didn’t work? Essentially, this is a ‘check’; no real study is being carried out. Instead, we ought to make the most of retrospectives to study the results of the changes we make and the effects they have on customers, on sales, on usage, on adoption, on costs, and so on. We must arrive at retrospective meetings primed with data that allows us to understand the effect our code changes have had.

The knowledge we gain from this reflective exercise leads to ideas that we can act upon: maybe we need to focus on one part of the product; maybe we observe that our code changes unexpectedly affected an important parameter negatively; or we recognise that the original assumptions underpinning specific changes were wrong. Acting on these observations allows us to create hypotheses about the changes that could improve the situation. These hypotheses provide the blueprint for planning our next step, and they generate the questions needed to validate our assumptions during the next study phase.
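One lightweight way to carry that learning into the next cycle is to record each hypothesis alongside the question that will validate it. Again, this is only a sketch under my own assumptions; the structure and field names are invented, not a prescribed format.

```python
# A sketch of carrying learning from Act back into Plan: each hypothesis
# is recorded with the question that will validate it in the next study
# phase. The fields are my own invention, not a prescribed format.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    observation: str  # what this cycle's study phase taught us
    change: str       # the change we will plan in response
    question: str     # what to ask of the data in the next study phase

backlog = [
    Hypothesis(
        observation="The checkout shortcut lowered conversion among new users",
        change="Show the shortcut only to returning users",
        question="Does new-user conversion recover to its pre-change baseline?",
    ),
]

for h in backlog:
    print(f"Plan next: {h.change}\nStudy against: {h.question}")
```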

By working with small work packages in each iteration of the PDSA cycle, thus reducing the number of parameters to evaluate, we can make small, steady and relatively fast progress towards a product that delights our customers. Greater customer satisfaction brings more sales and more revenue, and breaks the endless do-check cycle.

So, I posit that Agile does work. But if we believe it is not working, we may have been doing it wrong. We need to stop focussing solely on writing good code and checking that it works, and start studying the outcomes of our work so that we deliver value to our customers.
