Walk like an Egyptian
The software industry is in a continuous state of change, and testing is no different.
There are few hard and fast rules and it’s fair to say that one size doesn’t fit all.
That’s not to say that there aren’t themes and trends that are sensible to follow.
Having recently completed a very thorough and engaging Agile refresh training course, I had the chance to take stock of our testing strategy against the one put forward by the trainer.
Invariably, Mike Cohn's test pyramid model appeared at the forefront.
The emphasis of this model is automation: heaviest at the base of the pyramid with unit tests, then service tests (also read API-layer or acceptance tests), and finally UI tests at the top, which should be fewest in number, as defects should be less prevalent higher up the pyramid. The model comes with a y-axis running upward to represent the cost of the tests, both in implementation effort and in speed of feedback.
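To illustrate the base of the pyramid, here is a minimal sketch of the kind of unit test the model says we should have most of. The `apply_discount` function and its tests are entirely hypothetical, invented for illustration; the point is only that tests at this layer are cheap to write and give feedback in milliseconds.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: return price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    # Tests at the pyramid's base run fast and in isolation,
    # so we can afford many of them.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)


if __name__ == "__main__":
    unittest.main()
```

A service-layer test of the same behaviour would exercise it through an API, and a UI test through the rendered page; both would be slower and costlier, which is exactly the trade-off the pyramid visualises.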
These are all valid points, but to follow this model, and this model alone, is dangerous.
Lisa Crispin has a fine overview of Agile testing available on her website: http://lisacrispin.com/downloads/AgileTestingOverview.pdf
She mentions the Agile Testing Quadrants and the shared responsibility for testing across the whole development team in Agile.
If you spend a little time Googling around the subject of the testing pyramid model, you can find a common thread of arguments for and against adopting the model.
The truth is that even if your organisation is running Agile, there will likely be many flavours of the methodology in use. The approach to testing is no different.
You have to find your own way: what works best for your team, your skill set and your customers.
There are, of course, variants of the testing pyramid model, such as the inverted pyramid (the ice cream cone), often used as an example of what is wrong with UI-heavy, unit-test-light development.
However, Noah Sussman made one that makes a little more sense.
I am a staunch believer in the quality and importance of humans as testers, with automation used to 'check' (another well-trodden point of friction for many testers). Del Dewar's post The Testing-Checking Synergy explains this better than I can:
"If you're not testing, you're not uncovering any information, and in not uncovering any information, you're simply confirming nothing more than speculation about what your product may (or may not) do, and there's an enormous amount of information that you don't (and may never) know about your product."
The only message I'm trying to articulate, I guess, is that there is so much value in testing throughout the SDLC, with automation enabling and informing testing, so that a product can be released with greater confidence in its quality.
Our approaches and practices should evolve over time; we learn from our mistakes, improve and move with the times. Staying relevant and continuously improving should be our key drivers.
I am eternally grateful to all those who take their time to share their thoughts and practices via social media, blogs, vlogs, podcasts, workshops, webinars and conference talks.