Anyone who has worked on a large IT project or software rollout knows the feeling: ‘go live’ is imminent, stakeholders are briefed, status reports are showing more green than red, and all the bugs have been found… or have they?
Nine out of ten organisations report finding severe defects that were not identified until after go-live. This was the finding of a 2023 survey by the IT research and advisory firm IDC/Foundry, commissioned by Acutest. The responses from 109 organisations of different sizes and from across all industry sectors showed that undetected bugs resulted in a range of negative outcomes. Most importantly, there was clear evidence that these oversights directly caused system outages and severely affected performance levels.
Assurance and testing
In our upcoming series of articles in Housing Technology, we want to encourage organisations to put assurance and testing at the heart of their change programmes. Why? Not only does this approach accelerate delivery, it also improves confidence in go-live and meets customers’ expectations. By the end of this series of articles, we want you to:
- Learn the survey’s key findings and why they matter for the housing sector;
- Understand what these findings reveal about an organisation’s approach to QA and testing;
- Recognise the common pitfalls in QA and how you can avoid them.
For a long time, the QA industry has been stressing the importance of testing early to resolve defects before they become more difficult to fix. Yet the survey suggests the message isn’t getting through:
- 93 per cent of organisations released software into ‘live’ with severe, previously-undetected bugs.
- 18 per cent of releases in companies with fewer than 1,000 employees contained severe defects while larger organisations reported severe defects in 13 per cent of releases.
- 51 per cent of organisations reported having application downtime.
- 46 per cent of organisations experienced performance errors because of undetected defects released into ‘live’.
In 2007, the Journal of Defence Software Engineering highlighted that just over 60 per cent of defects reported during testing are attributable to problems introduced at requirements specification and solution design. This value was confirmed by Acutest’s own review of its client base, which also revealed that 39 per cent of the defects found in ‘live’ started as flaws in requirements and design.
Test from day one…
Despite these years of repeated failures, 40 per cent of survey respondents didn’t agree that testing should start on the first day of a project (i.e. the point at which the defects that eventually appear in ‘live’ first become detectable).
What should be done to find these defects early and before they become difficult to fix?
Improve requirement reviews
Requirement and design reviews need to be effective at finding problems, not just a passive sign-off milestone in a project’s life cycle. Reviews need:
- Clear objectives from the start;
- The right stakeholders present and actively participating;
- Dedicated time set aside for the review;
- Scrutiny of each requirement;
- Findings that are documented, tracked and acted on.
All too often, requirements are documented in ways that don’t make it easy to find defects. These include requirements that prescribe how the technology should behave while omitting the business need entirely, and requirements with no success criteria (or criteria that can’t be measured), leading to applications that don’t perform as the business intended.
Embrace AI
Whatever development methodology is followed to deliver software solutions, requirements need to be captured in a way that makes them easy to understand and effective to review.
Clarity comes from creating requirements as user stories and scenarios that focus on what the business wants, why it needs it, who benefits from it, and how we will know when it has been delivered. It is the stakeholder, the ‘who’ identified in each user story and scenario, who needs to participate in reviews.
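To illustrate the structure, here is a sketch of what such a user story and scenario might look like. The system, roles and figures below are invented for illustration only, not taken from the survey:

```
User story:
As a housing officer (the ‘who’),
I want to see a tenant’s full repair history on one screen (the ‘what’),
so that I can resolve repair queries on the first call (the ‘why’).

Acceptance scenario (how we know it has been delivered):
Given a tenant with at least one completed repair,
When I open that tenant’s record,
Then the repair history is displayed, newest first,
And the page loads in under three seconds.
```

Written this way, the housing officer named in the story is the obvious person to take part in the review, and the scenario gives the reviewer something concrete to challenge.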
Software delivery teams are increasingly looking at generative AI to help them with capturing requirements. Using AI to collate information from disparate documents, emails, meeting notes and process flows into clear and consistent user stories and scenarios accelerates delivery and ensures consistency while reducing the load on busy teams. If used correctly, AI enables teams to generate an initial draft of user stories far faster than traditional methods, leaving more time for rigorous review of these requirements by subject-matter experts.
Not just if it works but how well it works
System downtime and performance issues were highlighted as the most common problems arising from defects found in ‘live’.
For many organisations, it’s easier to create requirements that describe how the solution should function than to capture those properties that don’t change what the solution does but determine how well it does it, such as its performance, resilience or security.
These ‘non-functional’ requirements are often addressed too late or not at all, but it’s these characteristics that are more likely to determine the long-term value of an IT solution than its functional requirements.
One way to address this is to develop a catalogue of non-functional user stories and use them as a checklist for all solution implementations, adapting them for each delivery rather than reinventing the wheel each time.
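A catalogue entry might look like the following sketch. The role, targets and load figures are hypothetical examples, intended to show the shape of a reusable non-functional story rather than recommended values:

```
Non-functional user story (performance):
As a contact-centre agent,
I need any tenant search to return results within two seconds
under peak load (e.g. 200 concurrent users),
so that call-handling times stay within target.

Reused across deliveries; only the load profile and
response-time targets are adapted for each solution.
```

Because the story template stays the same from one delivery to the next, only the measurable targets need to be agreed each time, which makes the non-functional review far quicker.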
Start as you mean to go on
In summary, the survey shows us that almost all companies releasing software into ‘live’ operation fall foul of severe, previously-undetected defects. The majority of defects found during testing, and a large proportion of those found in live operation, originate as flaws in requirements and design or as gaps in non-functional coverage.
These problems can be minimised by effective reviews of requirements and design documents at the very start of the project. Writing requirements as user stories aids clarity and concision, making reviews easier and more effective. Generative AI can accelerate the definition of user stories, freeing stakeholders to conduct effective reviews and remove defects at source.
The next two articles will look at what the survey revealed about budgeting and the cost of quality assurance and testing as well as how to make the best use of constrained resources.
Limited budgets for testing
Respondents to the survey identified cost as the most important factor in testing and assurance. Consequently, budgets committed to testing are tight and have to be stretched across multiple levels of testing, a situation faced by many in the housing sector, where expenditure on IT systems runs into millions of pounds.
Cost constraints often result in testing being done in-house using resources that aren’t always regarded as experts by their own organisations, with outside expertise only being brought in once an emergency has hit.
Andrew Lazenby is director of consulting services at Acutest (part of Trustmarque).