Exploding defects and how to avoid them with Agile testing methodology
Imagine this scenario. It’s two weeks until the scheduled release of your solution. For the first time, software testers are allowed to examine the code — putting a magnifying glass to three months of hard work by your development team. They find 10 major bugs threatening the viability of the release. Then they begin testing the interactions of those defective features.
A case of exploding defects
The issues multiply by five: 10 bugs become 50. As new code is released to fix them, interaction problems compound until eventually 150 bugs are driving the go/no-go decision one week from launch. And every day, the business asks you whether the release is threatened.
That’s a cruel punishment we wouldn’t inflict on our worst enemy. It’s unruly, stressful — testing the bond of any tight-knit project team. The good news? It’s avoidable.
But the reality is that, even today, too many businesses find themselves mired in a case of exploding defects. The reason harks back to the very history of software development and testing.
“Don’t go chasing Waterfall”
Back in the early days of software development, the Waterfall methodology was as catchy as TLC’s mid-’90s hit “Waterfalls.” Requirements were captured all at once, then designed and built in sequence. In a process as new as building software, enterprise stakeholders wanted every detail documented and configured before construction began.
In this era before Agile testing methodology, project owners and product teams operated under the assumption that software testing was only necessary after development was completed. Testers’ role? To validate that the features functioned as outlined in the original requirements document. Anything different was by definition a defect.
Software is a rapidly evolving marketplace of ideas. The problem with testing in Waterfall development? It didn’t leave room to adapt.
Even if the project requirements changed over the course of development, testers were kept in the dark, still comparing the software against the detailed requirements list. Any nuances or adjustments the team decided on during development but never documented lay outside the testers’ purview.
Ideas be nimble, ideas be quick.
Let’s be honest. Throughout the development lifecycle, product owners don’t hold fast to the first documented conception of their product. Needs change. And so do expectations. Software testers brought in at the tail end of a project were left to compare the feature list planned for development against the features in the finished app or platform, wondering, “Is this a defect or an intentional change?” Chasing down the answers ate up time and budget.
Made in the USA — the industrial roots of software development
The decision to involve testers at the end of the development lifecycle evolved from the process used to manufacture physical goods. In manufacturing, electrical or mechanical engineers would design the process to build a widget — take a smoke detector, for example.
Once the smoke detector was built, it was checked for quality. Testers weren’t involved along the way. There was no need. Their only purpose was validating the functionality of the device — not informing the build.
There’s a huge difference between building a smoke detector and building an app. Software isn’t built using plastic and metal. It’s structured with ideas. As fast as an opinion changes, so too can the output of software development. Along the way there are opportunities for logical oversights and a need for validation throughout the process.
Software can also be released far more frequently than a smoke detector manufacturing line can be retooled. Applying the same constraints used in the physical world no longer made sense. As it turns out, even the industrial world was rethinking when quality should be measured.
History lesson
In the ’90s, the sequence of quality assessment changed. Statistical Process Control (SPC) came into vogue. Following SPC meant verifying the process was in control and capable at all times, i.e., measuring during the process rather than at the end. This mentality, essentially the same one behind Agile software testing, preceded it by nearly a decade.
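To make the SPC idea concrete, here’s a minimal sketch of an in-process control check in Python. The baseline data, the 3-sigma limits and the readings are illustrative assumptions, not from the article; the point is simply that each measurement is checked the moment it’s taken, not at final inspection.

```python
# Minimal SPC-style control check: flag each measurement as it arrives,
# instead of inspecting the finished batch at the end.
# Baseline data, limits and readings are illustrative only.

from statistics import mean, stdev

# Measurements taken while the process was known to be in control.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]

center = mean(baseline)
sigma = stdev(baseline)
upper_limit = center + 3 * sigma  # classic "3-sigma" control limits
lower_limit = center - 3 * sigma

def in_control(measurement: float) -> bool:
    """Return True if this reading falls inside the control limits."""
    return lower_limit <= measurement <= upper_limit

# Measure *during* the process: each new reading is checked immediately.
for reading in [10.0, 10.3, 9.7, 11.9]:
    if not in_control(reading):
        print(f"Out of control at {reading}: investigate now, not at release.")
```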
Along came Agile testing methodology
Rather than continuing to develop, pass code over the metaphorical “wall” to testers, and wait for a defect report to be sent back as if by carrier pigeon, Agile reimagined this antiquated process.
Agile testing methodology embeds testers in the core project team from the start. That way, changing ideas can be tracked and features tested as they roll off the “assembly line.” In Agile, testing occurs during each two-week sprint rather than after the fact, once development is complete.
The Agile testing process acknowledges that the needs of the enterprise are constantly shifting during the ideation and build process. Pressing pause on the business during software development is a happy, albeit unrealistic, hope. The reality is: Business development and learning don’t stop when software development starts. Minds change. Requirements are adapted.
Product teams and testers needed a new way to serve the needs of businesses. Agile values shipping code over the extensive documentation that defined the old process. By documenting less, developers can do more.
Defusing a runaway defect reaction
As our exploding defects example illustrates, the ROI of testing throughout the process changes dramatically in Agile development. New features are implemented in every sprint, so a crucial part of ensuring software quality is re-testing existing functional and nonfunctional areas of a system after new implementations, enhancements, patches or configuration changes. The purpose? To validate that these changes have not introduced new defects. Those familiar with software development know this as regression testing; a quick sketch follows. And if you needed any more reasons to move testing further forward in the Agile lifecycle, here are six.
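Here’s a minimal sketch of what an automated regression check might look like, using Python’s pytest. The discount function and its rules are hypothetical, not from the article; what matters is that expectations established in earlier sprints keep running every sprint, so a new feature that breaks existing behavior fails immediately.

```python
# Minimal regression-test sketch (pytest). The discount function and its
# rules are hypothetical; the point is that established behavior is
# re-verified on every run, so new changes can't silently break it.

def apply_discount(price: float, loyalty_years: int) -> float:
    """Existing feature: 5% off per loyalty year, capped at 25%."""
    rate = min(loyalty_years * 0.05, 0.25)
    return round(price * (1 - rate), 2)

# Regression suite: these tests encode behavior shipped in earlier sprints.
# They run alongside the tests for each new feature, every sprint.

def test_no_loyalty_pays_full_price():
    assert apply_discount(100.0, 0) == 100.0

def test_discount_grows_with_loyalty():
    assert apply_discount(100.0, 2) == 90.0

def test_discount_is_capped():
    # Shipped two sprints ago; a new pricing feature must not break it.
    assert apply_discount(100.0, 10) == 75.0
```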
1. A red rope mentality. Moving software testing forward in the process gives quality testers a “red rope to pull” in challenging the development team should they identify logical oversights occurring during the build.
2. Leverage experience. Passive validation on the back-end of the project doesn’t leverage the vast experience and intuition of testers.
3. Aerial perspective. The difficulty of validating software increases exponentially if testers can’t track and understand the changes made to the initial requirements along the way.
4. Testers as users. Your testers are capable of more than just rote checks. They are the first users of your solution. Incorporating this experiential feedback at the beginning of the lifecycle allows functionality to be adjusted along the way (if it lands in scope).
5. Two words: Accumulated bugs. The risk of defects piling up increases when testers aren’t working in concert with developers, checking interactions with existing functionality as new features are developed.
6. High-level documentation. If software testers are involved throughout the development cycle, test cases can be documented at a high level. The alternative? Developers writing detailed documentation for QA testers operating with no context.
Read our full eBook and skip ahead to page 11.
Want more free resources from us? Visit our main site and subscribe today!