In one of James Whittaker’s recent webinars, he mentioned his disappointment when testers brag about bug quantities. Lately it has been popular not to judge a tester’s skill by bug count. I disagree.
Last Monday night I had a rare sense of tester accomplishment. After putting in a 14-hour day, I had logged 32 bugs, most of them showstoppers: a personal record. I felt great! I couldn’t wait to hear the team’s reaction the next day. Am I allowed to feel awesome? Did I really accomplish anything? Damn right I did. I found many of those bugs by refining my techniques throughout the day, as I became familiar with that dev’s mistakes. I earned my pride and felt like I made a difference.
But is it fair to compare the logged bug lists of two testers to determine which tester is better? I think it is...over time. Poor testers can hide on a team because there are so few metrics to determine their effectiveness. Bug counts are the simplest metric, and I think it’s okay to use them sometimes.
I work with testers of varying skill levels and I see a direct correlation between skill and bug count. When a tester completes an entire day of work without having logged a single bug, I see a problem. The fact is, one logged bug proves at least some testing took place. No logged bugs could mean the AUT is rock solid. But it could also mean the tester was more interested in their Facebook account that day.
“If it ain’t broke, you’re not trying hard enough.”
This silly little cliché actually has some truth to it. When I test something new, I start out gently, running the happy tests and following the scripted paths…the scenarios everybody discussed. If the AUT holds up, I bump it up a notch. And so on, until the bugs start to shake loose. That’s when testing gets fun. Logging all the stupid, brainless happy-path bugs is just busy work to get to the fun stuff. (Sorry, a little off subject there.)
Anyway, from one tester to another, don’t be afraid to celebrate your bug counts and flaunt them in front of your fellow testers…especially if it makes you feel good.
BTW, does anyone else keep a record of the most bugs they've logged in a day? Can you beat mine? Oh, and none of my 32 got rejected. :)
My opinions do not reflect those of my employer.
If a tester joins in the middle of a testing phase or later in the cycle, their bug counts will almost always be lower than the rest of the team's. "The early bird gets the worm" comes to mind in this scenario. BTW, yesterday I logged about 20 bugs in a 10-hour day.
I think it's a fine way to measure skill, over time, and if that's all you do.
I would hope, though, that the goal isn't to find more, but to aid in preventing more. Work with the devs to find out why it is that you're finding so many bugs.
But maybe you wouldn't find so many if your fellow testers would get off Facebook?
In the last year, I've logged more test days without a single bug reported than days with one. Do I think that's a problem? No. Maybe I spent most of that day pinpointing a showstopper defect I found the day before, so that when the dev comes back off leave tomorrow they don't have to spend a day recreating it, and we get the fix that'll free up the rest of the app for testing one day sooner. Maybe I spent the day exploring the side effects of another defect. Maybe I deliberately picked out the quick-to-run, unlikely-to-fail tests in order to game another metric, our run-rate, so as to gain enough breathing space for more valuable exploration the next day. Maybe I spent the day attempting to reproduce 27 other damned defects raised by shotgunners, only to discover that they hadn't checked their results properly and the defects were all actually genuine No Fault Found. (Actually, I do think a couple of these items are a problem, but because they're due to metric chasing rather than value chasing.)
14 hrs and 32 bugs?? Can you say Overachiever? :)
I want to be a QA person for a day just so I could try to beat your record.
Also, in my opinion, just comparing bug counts for testers is a bit like comparing the number of lines of code a dev has written.
The quality of the bugs is important as well as the quantity. A dev who writes a ton of code that is hard to read, debug, and modify, and that doesn't fully or correctly implement the feature, isn't doing good work even though a lot of code was written. I'd prefer less code; keeping things simple, readable, easy to modify, and correctly implementing the feature is an art form in itself.
Anyhow, I digress; back to testing. Perhaps another thing to consider is that the modules being tested are of different complexity, or written by developers of different skill levels, which might contribute to the number of bugs (must resist comparing a module written by myself vs. others... just kidding, I'm not that narcissistic).
I guess comparing bug counts to counting lines of dev code isn't a perfect analogy, since you normally do want a higher bug count from a tester. Okay, I think I just went around in circles and blew away my own original argument. Feel free to ignore my comment... I guess it's too late now that you've read this far...
Congrats on the personal record! :)
Once in a while, I do check bug counts, and it does make me worry when one tester has logged only 1 or 2 (or none!) while the rest are already at two-digit bug counts... for the same day! And it infuriates me when I check out the function he's testing and easily come across some bugs. In a way, checking the bug list size helps me catch whether someone's being too passive in his testing.
180 bugs in one day... yes, they were all real bugs that have since been fixed...
Basim, I am not sure if I agree with the statement "If a tester joins in the middle of a testing phase or later in the cycle, their bug counts will almost always be lower than the rest of the team's. 'The early bird gets the worm' comes to mind in this scenario."
It is possible that:
- The new person may bring a different view/perspective to the application and end up finding more bugs.
- He may find more bugs because he is doing more exploratory testing, whereas the early birds have been focusing on the scripted tests.
The bit that stands out for me here is "I became familiar with that dev's mistakes", a point that suggests a very experienced tester who uses a lot of intuition, perhaps?
What about the scenario where you get a good tester paired with a bad dev? How does the bug count from that pairing bear any comparison with the bug count from a good tester paired with a good dev? So whilst I'm definitely "for" tracking bug stats for a tester, I do think it's important to be careful that we don't compare apples with pears.
That said, I have to admit I've never found 32 bugs in one day, and having a degree of competition between testers has to be good for productivity within the team.
William Echlin
www.SoftwareTesting.net/blog
Great post, Eric. I take the same approach as you with testing. Try the happy paths, hit a few edge cases, ramp it up, and towards the end of my 'session' I give it a real kicking.
I'm not sure I agree entirely with measuring a tester's progress through bug counts alone. But I do kind of agree when you say 'over time'.
I once worked with a tester who raised 62 defects in one day. He got praise and a bonus. On closer inspection, though, all 62 were the same bug, appearing in 62 different windows. I personally would have raised one defect and listed all of the affected areas. Saves on triaging, retesting, reviewing, etc.
So it is easy to fake defect counts, and even over time we need to be careful. Continuously raising low-priority, niggly, hardly-an-issue bugs may not be what is needed or important.
I did a blog post on a related topic, about double-checking your audience's values.
http://pac-testing.blogspot.com/2009/03/double-check-your-audiences-values.html
Rob..
I found this post pretty refreshing: honest, and maybe not in keeping with "popular" opinion, which requires some bravery as well.
I've found 67 errors in one day; I had to work late to finish logging them... I reported showstoppers immediately and put aside the less important stuff until the end of the day. I usually work with an open notebook, taking notes as I work - some of them are pretty cryptic, but it works for me...
You'll notice I remember my "personal best" and yes, I'm always trying to beat my own "record".
As a manager, I can definitively tell everyone here that the best testers on my staff have the largest number of bugs logged. Good testers find more stuff than bad testers. There's no other way to put it.
I do NOT advocate evaluating testers by number of bugs for the very reasons cited by others here. It can lead to rewarding all kinds of Bad Behavior.
But if you're a manager and your team has been in place for a while, go look at your testers' bug lists. Now tell me if the people at the top with the most bugs are not ALSO your best people. It doesn't matter how I slice it: by severity level, complexity, application, whatever, the same people rise to the top every time.
It's gotta mean something....
Good post and the number of bugs you found is a good reason to celebrate!
- Linda
Linda, you're setting the bar pretty high. My record averaged out to about one bug every 26 minutes over a 14-hour day. I wonder what your average was with 67!
Yes, there are some great points in the above comments about all the variables to consider. Rob, I love your story about the same bug logged 62 times. I've seen that on a smaller scale... and probably done it myself. Sometimes it's not obvious to the tester the first few times they see it, but 62 dupes is embarrassing.
Of course, the downside is that the good testers have to waste time verifying all their bugs. The testers who didn't log any find themselves with even more time to waste on Facebook. It would be a sweet gig to let the strong testers log the bugs and the weak testers verify their fixes. The only problem is, the weak testers wouldn't catch the stuff broken by the fix!
I don't think you can judge a tester by their bug list size. Start using it for monetary reward purposes and you are heading down a slippery slope.
That doesn't mean you can't have fun with it though. We often give props to those who end the day with double digit bug counts.
One has to wonder, though, when this many bugs are found, whether the tester actually did much testing, since the large amount of time spent writing and logging bugs takes away from time spent increasing test coverage of the product.
Personally, I feel good when I find a lot of bugs. A lot of the stuff I find isn't something that gets written up in a formal bug report; it's more the result of a well-placed question with the right people within earshot (or around the table).
So be careful about thinking that bugs logged are an indication of the testing accomplished that day. Or expand your definition of testing.
:)