I’ve had the luxury of working on an AUT that hasn’t gone live for 3 years. Now that we’re live, the old familiar tester stress, guilt, and anger are back.
When the first major production bug was discovered I wanted to throw up. I felt horrible. Several people had to work around the clock to clean up corrupt data and patch the problem. I wanted to personally apologize to each person on my team, and to hundreds of users, for not catching the bug in testing…and I did apologize to a couple of individuals and offer my help. Apologies in these cases don’t help at all, other than for personal guilt and accountability.
During my selfish guilt, I opened my eyes and realized my fellow devs felt just as accountable as I did (if not more so), and never attempted to pass blame to me. I started asking myself who is really more at fault here: the tester who didn’t test the scenario, or the developer who didn’t code to handle it?
I think the tester is 75% responsible for the bug and the developer, 25%. However, the dev probably gets the brunt of the blame because they are a more prominent part of the development team. I would guess more end users have heard of software developers than have heard of software testers.
I wouldn't blame yourself. It's impossible to catch every single problem. Bugs are going to happen regardless.
As far as % responsibility goes, I think it's more like 90% the developer's fault and 10% due to testers/BL/BA/CEO/shareholders/George Bush/Beyonce/50 Cent/etc. And I'm sure most end users have heard of testers, but ultimately the fault does indeed fall on the developer's shoulders. We wrote the code and it didn't perform as intended.
I don't know what the bug or problem was, but I think the important thing, instead of griping and being angry, is to figure out what we can learn from it. Whenever there's a major problem, after it's resolved I do a post-mortem analysis and think about how we could've prevented it in the first place. Is there something I should've thought of doing or tested to avert this problem? Is there a pattern I can derive and apply to future problems like this? And so on.
Thanks for the comment, Purple Crayon Superhero.
I like your advice to learn from the bugs found in the wild. Sometimes testers do a poor job of predicting the way software will be used, and they miss important tests.
Don't think there is a need to do any blaming; leave that kind of stuff for bad relationships. I agree with the superhero that you should learn from it, and maybe improve your test beds to be more "real world"-like.
However, that doesn't mean there won't be production bugs in the future; there always will be. The software is just too complex, with too many states and too many different hardware and software versions to run it on, for you to catch all the bugs. Not to mention we're all people, so some things slip our attention.
As James Bach says in one of his lectures, there are infinite possible tests. You just can't cover them all in a short, finite time under the extreme pressure of a CEO who wants the testing to end and the software out now :)
On to more practical advice - when I used to do QA full time, I would show my test designs to my team leader, the developers in charge of the features, and usually another member of the QA department. Sometimes they actually read them and gave me feedback about the tests, which decreased the chance of missing something important. I also tried to stay in touch with tech support to be aware of what kinds of problems the customers were running into and what configurations they were using. Running the same stuff they ran helped make sure the important tests passed in the right environments.
HTH!
Ido Schacham,
Great suggestion to talk to your tech support staff. I'll bet they would love to vent a little about user complaints. Understanding these complaints will give us insight into user expectations and help focus future testing. I like it!