- "Instead of figuring out what works, they are stuck investigating what doesn’t work.”
Ilya asked:
Why did you use "stuck" in reference to the other testers? Isn't "investigating what doesn’t work" more important than "figuring out what works" (other factors being equal)?
I love that question. It really made me think. Here is my answer:
- If stuff doesn’t work, then investigating why it doesn’t work may be more important than figuring out what works.
- If we’re not aware of anything that is broken, then figuring out what else works (or what else is not broken) is more important than investigating why something doesn’t work…because there is nothing broken to investigate.
When testers spend their time investigating things that don’t work, rather than figuring out what does work, that is less desirable than the opposite. Less desirable because it means we’ve got stuff that doesn’t work! Less desirable to whom? To the development team. It means there are problems in the way we are developing software.
An ultimate goal would be bug-free software, right? If skilled testers are not finding any bugs, and they are able to tell the team how the software appears to work, that is a good thing for the development team. However, it may be a bad thing for the tester.
- Many testers feel like failures if they don’t have any issues to investigate.
- Many testers are not sure what to do if they don’t have any issues to investigate.
- If everything works, many testers get bored.
- If everything works, there are fewer hero opportunities for many testers.
I don’t believe things need to be that way. I’m interested in exploring ways to have hero moments by delivering good news to the team. It sounds so natural but it isn’t. As a tester, it is soooooo much more interesting to tell the team that stuff just doesn’t work. Now that’s dysfunctional. Or is it?
And that is the initial thought that sparked my Avoid Trivial Bugs, Report What Works post.
Thanks, Ilya, for making me think.
Here's my worry about this whole "what works" versus "what doesn't work" argument: it's too easy to find whatever you are looking for.
If you are looking to find working features, you'll find them.
If you are looking for broken features, you'll find them.
And since I know that developers (and pretty much everyone else) tend to look for things that work, I like to depend on testers to look for things that don't work.
In the BBST, Cem Kaner defines testing as a "technical investigation ... to provide stakeholders with information about the quality of a product...".
In a previous paper (2004), he disputes the notion that the primary reason to test is to find bugs. Among the other tasks that testers carry out, he identifies:
- Assess conformance to specifications,
- Find safe scenarios for use with the product,
- Verify correctness of the product.
Granted, he also mentions other bug-related reasons to test. But I wanted to point out the alternative ones as well.
Regards,
Omar Navarro
http://rincondeltester.blogspot.com
Joe, good point. I agree that a tester should fill that niche (i.e., finding things that don't work). But if the only thing testers ever reported were those things that don't work...well, then we would never ship. So eventually, once stuff is working, testers can hopefully report what works. Until then, we are "stuck" investigating bugs. The more there is to investigate, the less likely we are to be shipping. That's why I like the word "stuck".
I'm getting away from your point, I know.
If the goal is to ship with zero defects, then sure, reporting bugs is going to stop you shipping. But if all that's reported are low-severity defects, then either (a) the testing is not focused correctly, or (b) the functionality is working well enough that all that's left is the polish.
I like the lo-tech testing dashboard idea, where areas of functionality are coloured as the testing progresses (a rough sketch follows this comment).
And there should be nothing stopping a tester from reporting that features worked well and giving the devs some praise.
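For what it's worth, that kind of dashboard can start out as little more than a table of areas and coarse status values. Below is a minimal sketch in Python, assuming each functional area is tracked with a rough coverage level, a quality assessment, and a one-line comment; the area names, statuses, and comments are invented for illustration, not taken from any real project.

```python
# A minimal sketch of a low-tech testing dashboard, kept as plain data.
# The areas, coverage levels, statuses, and comments below are hypothetical
# examples; in practice the testers fill these in as testing progresses.

from dataclasses import dataclass


@dataclass
class AreaStatus:
    area: str      # functional area of the product
    coverage: str  # rough test effort so far: "none", "some", "deep"
    quality: str   # coarse assessment: "works", "concerns", "broken"
    comment: str   # one line of context for the team


DASHBOARD = [
    AreaStatus("Login",     "deep", "works",    "No new issues in two builds."),
    AreaStatus("Checkout",  "some", "concerns", "Two low-severity display bugs."),
    AreaStatus("Reporting", "none", "broken",   "Blocked: export crashes on start."),
]


def print_dashboard(rows):
    """Render the dashboard as a simple text table."""
    print(f"{'Area':<12}{'Coverage':<10}{'Quality':<10}Comment")
    for r in rows:
        print(f"{r.area:<12}{r.coverage:<10}{r.quality:<10}{r.comment}")


if __name__ == "__main__":
    print_dashboard(DASHBOARD)
```

Run as-is, it just prints a small text table; the point is only that "what works" and "what doesn't work" can sit side by side in one report, with the colouring (green/yellow/red) carrying more weight than the tooling.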
The discussion so far seems to have omitted talking about RISK.
So here's my 2 cents:
It's usually most important to FIND BUGS in parts that are SUPPOSED to work no matter what.
I've heard horror stories of projects where testers focused on reporting easy-to-spot, hard-to-fix stuff until the deadline was up... and then the actual critical paths (the hard-to-spot stuff) were not tested at all.
Very nice post. Thank you for sharing this. I agree with you, Joe, when you say "I like to depend on testers to look for things that don't work". These days software quality testing is a crucial aspect of any product or application, and we have smart QA testing experts who understand how critical it is.