By Peter Carleton
As a CRM consultancy we naturally get involved in a lot of change projects (new CRM, significant process overhaul, etc.). Over the years, we have found that many organisations don’t spend enough time focusing on the testing and requirements verification phase of change projects.
In this post we want to make the case for allocating appropriate time and money to testing whatever changes you are making: why test, and how much is enough? We'll also draw your attention to some warning signs that your project may not have sufficient testing time built into the programme. Most importantly, we want to help you use the best tool you have for assuring quality in the software and systems your organisation relies on.
There’s quite a lot of ground we could cover here but we’ll keep it to the essentials.
So why test?
The glib answer is: "test as much as you need to know it works". But that's easy to ignore, and there are tangible risks in not testing. Like most things, it comes down to cost: skipping testing ultimately means more work, and therefore more time and money.
Using a system that has problems, which you only discover when you start to use it, could lead to:
Extra work to fix bad data.
Extra work to undo workarounds and retrain staff.
More difficulty rectifying a problem due to developers/consultants going away after your project finishes.
Mistrust among operators, users, and anyone else who depends on it.
Having to choose between business-as-usual (BAU) work and fixing the problems.
The central idea is: if you fix problems earlier, you spend less time dealing with their accumulating consequences in the long run.
If you want another quotable: the bigger the problem, the easier it is to discover. The more you test, the more you find, but you tend to find the bigger, costlier problems first.
Which brings us to the question: How do you know how much testing is enough?
A law of diminishing returns applies to testing, so you can indeed do "too much". There are plenty of 'rules' out there for estimating how much time to allocate in a project schedule, typically ranging from 25% to 50% of total project time. Rather than worrying about percentages, a more realistic approach is to consider what you are "buying" with your testing. Look at it this way: testing is an investment that repays you with confidence that the system will do what you expect it to.
Here are some references that take that sort of approach:
“So a naive answer is that writing test carries a […] tax. But, we pay taxes in order to get something in return...” “…the additional benefits I get more than offset the additional cost…” - https://testing.googleblog.com/2009/10/cost-of-testing.html
“…my philosophy is to test as little as possible to reach a given level of confidence.” – Kent Beck, https://stackoverflow.com/a/153565
So how do you know how much testing is enough? You can create a Test Plan and find out. Gather up your most experienced system users and have them follow this basic step-by-step for each part of the system:
List a series of typical business activities that, when successfully demonstrated in the system, prove that every requirement has been met. These are your “test scenarios”.
Just checking: your requirements should (A) fully cover everything you expect the system to do, (B) be prioritised, and (C) be formally documented.
For each scenario, create the specific data you need to show that the system will behave in the way that you expect in those circumstances. You should also predict what the results will be and write those down too. These are your test cases.
Don’t stop until the people who are directly accountable for the success of the system, or of the dependent processes, are confident that the testing will prove the system does what they expect. You might want to consider incorporating formal sign-off as part of this step.
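To make the scenario/case distinction above concrete, here is a minimal sketch of how a test plan might be captured as structured records. Everything in it (the `TestCase` structure, the duplicate-check requirement, the field names and values) is an illustrative assumption, not part of any real CRM:

```python
from dataclasses import dataclass
from typing import Optional

# A test case pairs a business scenario with specific input data
# and the result we predict the system should produce.
@dataclass
class TestCase:
    scenario: str             # the business activity being demonstrated
    requirement: str          # the requirement this case helps prove
    input_data: dict          # specific data prepared for this case
    expected: dict            # predicted result, written down in advance
    actual: Optional[dict] = None  # filled in after the scenario is run
    signed_off: bool = False       # formal sign-off by the accountable owner

    def passed(self) -> bool:
        # A case passes only when the observed result matches the prediction.
        return self.actual == self.expected

# Hypothetical example: a duplicate-check scenario in a CRM.
case = TestCase(
    scenario="Create a new contact for an existing account",
    requirement="REQ-012: the system must flag duplicate email addresses",
    input_data={"email": "jane@example.com", "account": "Acme Ltd"},
    expected={"duplicate_warning": True},
)

# After running the scenario manually, record what actually happened.
case.actual = {"duplicate_warning": True}
print(case.passed())  # True
```

The point of writing the `expected` result down before running the scenario is that it forces the prediction the step above asks for; comparing it with `actual` afterwards gives you an unambiguous pass/fail rather than a vague impression that "it seemed fine".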
We hope this helps you get thinking about how you could incorporate testing into your next change project!