The first time I saw James Whittaker was in 2004 at an IIST conference. He dazzled us with live demos of bugs found in public software, including a technique of editing a saved HTML page to change a quantity combo box's values to negatives, resulting in a credit to one's VISA card.
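That trick is a classic case of parameter tampering: the server trusts a quantity value that the client is free to edit. Here is a minimal sketch of the vulnerable pattern and its fix, using a hypothetical `order_total` function (not the actual code of any vendor James demoed):

```python
def order_total(unit_price_cents: int, quantity: int) -> int:
    """Vulnerable: trusts the client-supplied quantity as-is."""
    return unit_price_cents * quantity

def order_total_safe(unit_price_cents: int, quantity: int) -> int:
    """Fixed: re-validates the quantity on the server side."""
    if quantity < 1:
        raise ValueError("quantity must be a positive integer")
    return unit_price_cents * quantity

# A shopper edits the saved HTML form so the combo box submits -3.
print(order_total(1999, -3))  # -5997 cents, i.e., a $59.97 credit
```

The fix is simply to never trust values the browser sends; anything a form submits can be edited before submission, so validation must be repeated on the server.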
And now, fresh into his job as Google’s test engineering director, I was thrilled to see him headlining STARwest’s keynotes with his challenging “All That Testing Is Getting in the Way of Quality” presentation.
After a brief audience survey, James convinced most of us that testers do not get respect in their current roles. Then he kicked us while we were down by suggesting the reason we lack respect is that all the recent software quality “game changers” were thought of by programmers:
- Today’s software is easier to update and fix than software from 10 years ago.
- Crash recovery – some software can fix itself.
- Reduction of dependencies via standards (e.g., most HTML 5.0 websites work on all browsers now).
- Continuous Builds – quicker availability of builds makes software easier to test, but that has nothing to do with testers.
- Initial Code Quality (e.g., TDD, unit tests, peer reviews)
- Convergence of the User and Test Community (e.g., crowdsourced testing, dogfood testing). Per James, “Testers have to act like users, users don’t have to act.”
Following the above were four additional “painful” facts about testing:
- Only the software matters. People care about what was built, not who tested it.
- The value of the testing is the activity of testing, not the artifact. Stop wasting your time creating bug reports and test cases. Start harnessing the testing that already exists (e.g., beta testing).
- The only important artifact is the code. Want your tests to matter? Make them part of the code.
- Bugs don’t count unless they get fixed. Don’t waste time logging bugs. Instead, keep the testers testing.
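The “make them part of the code” point is essentially the unit-testing argument: a test that lives in the repository and runs on every build matters in a way a test case sitting in a management tool does not. A minimal sketch, using a hypothetical `slugify` helper assumed purely for illustration:

```python
import re
import unittest

def slugify(title: str) -> str:
    """Turn a page title into a URL slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    """Checked in next to the code and executed by the continuous build."""

    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_is_dropped(self):
        self.assertEqual(slugify("C#, anyone?"), "c-anyone")
```

Run with `python -m unittest` and the build fails the moment `slugify` regresses; no separate test artifact needs to be maintained.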
The common theme here is that programmers are getting better at testing, and testers are not getting better at programming. The reason this should scare testers is, per James:
“It’s only important that testing get done, not who gets it done.”
I agree. And yes, I’m a bit scared.
After a cocky demo of some built-in bug-reporting tools in a private version of Google Maps, James finally offered his tip for tester survival: get a specialty and become an expert in some niche of testing (e.g., Security, Internationalization, Accessibility, Privacy), or learn how to code.
The hallway STARwest discussions usually brought up Whittaker’s keynote. However, apart from a few people, nearly everyone I encountered disagreed with his message, and some even laughed it off. One tester I had lunch with tests a system that warehouse operators use to organize their warehouses. He joked that his warehouse users would not be drooling at the opportunity to crowdsource test the next version of WarehouseOrganizer 2.0. In fact, they don’t even want the new version. Another tester remarked that his software was designed to replace manual labor and would likely result in layoffs...dog food testing? Awkward.
Thank you, STARwest, for bringing us such a challenging keynote. I thoroughly enjoyed it and think it was an important wake-up call for all of us testers. And now I’m off to study my C#!
I don't agree with all of his points, but there is one in particular that I do agree with:
“It’s only important that testing get done, not who gets it done.”
Personally, this doesn't scare me, it makes me feel relieved -- relieved that *someone* is saying this to testers in a way that's getting them thinking.
At the end of the day, unless you are doing "over-funded, deadlineless R&D," software is a widget, just like a rivet or a 2x4. It might be sold individually, or bundled into a bigger product. It might be used to fix something, build something, make something easier, or make something "cooler", but it's still just a widget... a product.
Nobody aside from the product producer cares how a product is produced (beyond curiosity, environmental consciousness, health & safety, or legal reasons). They care even less about how the quality of the product is established. All they care about is getting a product of adequate quality for their purpose at a price they are willing to pay.
The sooner testers realize that businesses will *always* try to find ways to produce their product as quickly and cheaply as possible while maintaining fitness for purpose, the more valuable they will be able to make themselves. Until they realize that and act accordingly, they *should* be scared (about the longevity of their careers, anyway).
I'm a tester, and I don't care who does the testing, as long as it adds the necessary value.
I do think there are ways to add value without necessarily being a coder or a technical specialist (for instance, being a great test designer who can teach test design to others), but (and I expect to take some heat for this) Whittaker isn't wrong... maybe inflammatory, possibly exaggerating in terms of the state of the industry as a whole, perhaps even overstating certain points... but not wrong -- at least not according to my experience.
Thanks for the great comments, Scott.
"Until they realize that and act accordingly, they *should* be scared "
I like the paragraph the statement above comes from. We should tell testers (including ourselves) what you mean by "acting accordingly." I have some ideas about it, but I would love to hear from you and others.
Good synopsis Eric.
Truth is that testing has not contributed to quality on par with the six other technologies I listed. I will list more at EuroSTAR that have also outperformed testing. Testers need to learn to be more impactful, and simply saying "user testing doesn't apply to me" means your lunch partner wasn't listening. What about the other five? Maybe you are just clinging to what you know and have lost objectivity.
Sorry testers but being important simply because you say you are important isn't good enough. Game up or become irrelevant. I am trying to HELP YOU HERE!
Scott, let's talk about what you don't agree with over a beer next time we are together.
I think there will always be a need for testers (especially manual testers) for large-scale enterprise software. For most of the stuff out there, which consists of trivial apps like shopping carts, restaurant websites, Twitter, to-do apps, etc., there's less of a need.
As long as people need complex mission/business-critical software that has multiple integration points and does more than just simple CRUD operations, it's best to have testers who can catch the edge cases that might be detrimental to running a business and affect cash flow.
And lastly, things like SOX require an audit team, such as a test team, to ensure that things work as expected so that we don't end up with another Enron, WorldCom, etc.
Hopefully there will always be a need for enterprise level software, but if in the future all people will use are shopping cart websites, Twitter, and Facebook then regardless of whether you're a tester or developer -- we're all screwed.
Otherwise I wouldn't worry too much, but learning how to code a little bit, so you can automate and script the repetitive testing and leave time to research, think of, and try more interesting test cases, is a good thing.
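As a concrete example of that "script the repetitive parts" advice, a data-driven check runs the same assertion over many inputs, so the tester spends the freed-up time designing new cases instead of re-executing old ones by hand. A minimal sketch (the `validate_zip` function is hypothetical, standing in for whatever system is under test):

```python
def validate_zip(code: str) -> bool:
    """Hypothetical system-under-test: accept 5-digit US ZIP codes."""
    return len(code) == 5 and code.isdigit()

# Repetitive checks captured once as data; add a row instead of re-testing by hand.
CASES = [
    ("90210", True),    # typical valid code
    ("9021", False),    # too short
    ("902101", False),  # too long
    ("9o210", False),   # letter 'o' instead of zero
    ("", False),        # empty input
]

for value, expected in CASES:
    actual = validate_zip(value)
    assert actual == expected, f"validate_zip({value!r}) returned {actual}"
print("all", len(CASES), "cases passed")
```

Each regression run is now one command, and new edge cases discovered during exploratory testing become one more row in the table.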
We have seen for a few years now in our business that the tester's role should be integrated into the development team's work, and that the tasks (and the mindset) should change from Quality Assurance-type work to Quality Leadership.
Taking that leadership is a huge responsibility and must happen at every stage of the software design cycle: mentoring the dev team, introducing new practices and new testing tools/frameworks and motivating people to use them, making sure the learning from retrospectives is taken into account, and so on, without excluding the need for hands-on testing.
I would be lying if I said James's keynote did not make me uncomfortable. It did, because I am seeing this trend in previous workplaces and my current one. This is about money, and companies will look at any solution that can show a cost-savings benefit. When implemented strategically correctly, testing early and in an automated fashion will save money. The need for manual testing, even offshore, will continue to diminish.
G'day Eric,
I blogged my response to your post.
tl;dr: The keynote seems to have been controversial, but not terribly substantial. If it gets testers thinking though, then great.
I have to disagree with aturner's comment that "When implemented strategically correctly, testing early and in an automated fashion will save money."
All it does is shift the cost and potential loss to a different place. Sure, it might be less, but it will still occur. And this is because testing and automation are only as good as the person who implemented them.
Meaning if your testing and automation work are done by an amateur developer, then guess what: you get amateur testing & automation. GIGO, and you still lose money.
From what I read in this post and a couple of others, JW brings up some salient points for the testing community. I think some of it is off base and heavily biased by his experiences (particularly with automation), but he is in the ballpark. By that I mean there is a shift going on now toward more specialized (technical or non-technical) work. It behooves us to keep up on the technology, methods, and tools to support those changes.
There will be a shakeout of the "certified" fake testers over the next few years. It is inevitable (thank you, Agent Smith). But all Development has done over the last few years, IMO, with the "advancements" in testing tools for them (xUnit, better IDEs, CI, etc.) is finally catch up to where they should have been 10+ years ago.
This is the one thing from agile that I really do like. It has forced Developers, and other groups, to start considering Testing an integral part of the process and not just a necessary evil. About fracking time!
Otherwise only time will tell, and for me I think I can keep my day job.
I wasn't there, but reading your account, I didn't find it "terribly enlightening". In terms of a vision for a new world (with various agile/XP practices), I like Brian Marick's vision for how to staff a team (see the section titled "Staffing" here: http://www.exampler.com/old-blog/2003/10/04/#agile-testing-project-7). Brian's description includes practices like TDD, CI, etc., and integrates good testers. At the same time, he has good arguments about why developers can do testing.
When I first started working, I was told that all QA jobs will eventually be outsourced. Here I am 9 years later, still working and feeling fairly confident that my job isn't going overseas anytime soon. So I'm a little skeptical of people who try to predict the future of an entire job role.
Eric -- As promised, a related blog post by me.
On the Alleged Death of Testing
James -- Happy to. I think mostly it's a matter of context, specifically related to the differences between orgs that develop software as their primary business, and orgs that develop software as a "necessary part of doing business". I guess "don't agree" might have been too simple. More appropriate would have been "don't agree as universally as the keynote implies". Make sense?
-- Scott
Good summary! Last night at a local user group meeting, recruiters confirmed the need for testers to grow their technical skills and/or specialize.
I listened to the webcast and thought Whittaker's ideas were interesting. I've always been a proponent of the need for testers to develop and developers to test. Not sure I agree with the idea of doing away with defect tracking. How can you know a defect won't be fixed unless you report it and discuss it?
For the most part, though, I agreed with Whittaker, given the proper context. There are many environments and applications where users do not expect and will not accept poorly tested code. But, when you aren't paying for the app, and the bug causes no monetary loss, and fixes are fast, people are more accepting.