The Perfect Testing Job May Already Be Yours
Posted by Eric Jacobson at Tuesday, April 29, 2014

My uncle is an audiophile. He buys his equipment only after borrowing it to test in his house. He prefers vinyl and American-made audio equipment brands I’ve never heard of, uses dedicated amps, and rips only to FLAC. The sound quality of his system is impeccable.
Ever since I was a kid, I wanted a sound system as good as my uncle’s. When I was 14, I spent my paper route money on a pair of Boston Acoustics speakers and a Marantz receiver (instead of the flashy JVC models). The following year I bought a Magnavox CD player because it had the curved slot in the CD arm, which, at the time, meant it used a quality laser reader. Years later I added a Paradigm subwoofer after exhaustive research.
Although my home audio system doesn’t sound nearly as good as my uncle’s, it sounds better than most (at least I think so). I take pride in maintaining it and enjoy listening to music that much more.
The more I learn about testing, the more I compare my testing job to those of others. I feel pressure to modernize all our test approaches and implement the cool test techniques I’ve heard about. I’m embarrassed to admit I use a Kanban board without enforcing a WIP limit. Some in the industry advise:
"Try to do the right thing. If you cannot – leave!”
But I feel satisfaction making small changes. I enjoy the challenge of debate. I refine my ideas and find balance via contention. A poor process provides fodder for performance goals. Nirvana is boring.
Inspired by yet another Michael Bolton post. I’ll try to stop doing that.
The Power To Declare Something Is NOT A Bug
Posted by Eric Jacobson at Thursday, April 24, 2014

Many think testers have the power to declare something as a bug. This normally goes without saying. How about the inverse?
Should testers be given the power to declare something is NOT a bug?
Well…no, IMO. That sounds dangerous because what if the tester is wrong? I think many will agree with me. Michael Bolton asked the above question in response to a commenter on this post. It really gave me pause.
For me, it means maybe testers should not be given the power to run around declaring things as bugs either. They should instead raise the possibility that something may be a problem. Then, I suppose, they could also raise the possibility that something may not be a problem.
The second thing (here is the first) Scott Barber said that stayed with me is this:
The more removed people are from IT workers, the higher their desire for metrics. To paraphrase Scott, “the managers on the floor, in the cube farms, agile spaces or otherwise with their teams most of the time, don’t use a lot of metrics because they just feel what’s going on.”
It seems to me that those higher-up people, dealing with multiple projects, don’t have as much time to visit the cube farms, and they know summarized information is the quickest way to learn something. The problem is, too many of them think:
SUMMARIZED INFORMATION = ROLLED UP NUMBERS
It hadn’t occurred to me until Scott said it. That, alone, does not make metrics bad. But it helps me understand why I (as a test manager) don’t bother with them yet spend a lot of time fending off requests for them from out-of-touch people (e.g., directors, other managers). Note: by “out-of-touch” I mean out of touch with the details of the workers’ work, not out of touch in general.
Scott reminds us the right way to find the right metric for your team is to start with the question:
What is it we’re trying to learn?
I love that. Maybe a metric is not the best way of learning. Maybe it is. If it is, perhaps coupling it with a story will help explain the true picture.
Thanks Scott!
Nondeterministic Testing Instead Of Pass/Fail
Posted by Eric Jacobson at Tuesday, April 08, 2014

I heard a great interview with performance tester Scott Barber. Two things Scott said stayed with me. Here is the first.
Automated checks that record a time span (e.g., existing automated checks hijacked to become performance tests) may not need to result in Pass/Fail with respect to performance. Instead, they could just collect their time-span results as data points (see the sketch after this list). These data points can help identify patterns:
- Maybe the time span increases by 2 seconds after each new build.
- Maybe the time span increases by 2 seconds after each test run on the same build.
- Maybe the time span unexpectedly decreases after a build.
- etc.
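Here’s a minimal sketch of what that could look like. Everything in it is an assumption for illustration, not something from Scott’s interview: pytest as the runner, a stubbed checkout_flow() standing in for the existing functional check, and a local CSV file as the data store. The point is only that the check records its time span as a data point rather than judging it against a threshold.

```python
# Minimal sketch: record time spans as data points, not Pass/Fail.
# Assumptions (for illustration only): pytest as the runner, a stubbed
# checkout_flow() standing in for the real check, CSV as the data store.
import csv
import time
from datetime import datetime, timezone

RESULTS_FILE = "perf_datapoints.csv"  # hypothetical data store

def checkout_flow():
    # Stand-in for the existing functional check being "hijacked".
    time.sleep(0.1)

def record_duration(check_name, seconds):
    """Append one time-span data point; no threshold, no verdict."""
    with open(RESULTS_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), check_name, f"{seconds:.3f}"]
        )

def test_checkout_flow_duration():
    start = time.perf_counter()
    checkout_flow()  # the run still fails on functional errors
    elapsed = time.perf_counter() - start
    record_duration("checkout_flow", elapsed)
    # Deliberately no assert on elapsed: the timing is collected
    # for later pattern analysis, not resolved to Pass/Fail here.
```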
My System 1 thinking tells me to add a performance threshold that resolves automated checks to a mere Pass/Fail. Had I done that, I would have missed the full story, as Facebook reportedly did.
Rumor has it that Facebook had a significant production performance bug that resulted from reliance on a performance test that didn’t report gradual increases in the measured time span. It was only supposed to Fail if performance dropped below a threshold.
At any rate, I can certainly see the advantage of dropping Pass/Fail in some cases and forcing yourself to analyze collected data points instead.
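To make that concrete, here’s a companion sketch that reads the data points recorded above and describes patterns like the ones in the list, instead of returning a verdict. The CSV format matches the earlier sketch; the 2-second drift and 50% speed-up cutoffs are arbitrary values I chose for illustration.

```python
# Minimal sketch: describe patterns in collected time spans instead of
# resolving them to Pass/Fail. The CSV format matches the sketch above;
# the 2.0s drift and 50% speed-up cutoffs are illustrative assumptions.
import csv

def load_durations(path, check_name):
    """Read the recorded time spans (seconds) for one check, in order."""
    with open(path, newline="") as f:
        return [float(row[2]) for row in csv.reader(f) if row[1] == check_name]

def describe_trends(durations, drift=2.0, speedup_ratio=0.5):
    """Return human-readable observations, not a verdict."""
    findings = []
    for i, (prev, cur) in enumerate(zip(durations, durations[1:]), start=1):
        if cur - prev >= drift:
            findings.append(f"run {i}: time span grew by {cur - prev:.1f}s")
        elif prev - cur >= prev * speedup_ratio:
            findings.append(f"run {i}: time span unexpectedly fell "
                            f"from {prev:.1f}s to {cur:.1f}s")
    return findings or ["no notable pattern"]

if __name__ == "__main__":
    points = load_durations("perf_datapoints.csv", "checkout_flow")
    for finding in describe_trends(points):
        print(finding)
```

A tester still has to interpret the observations; an unexpected speed-up, like an unexpected slow-down, is a prompt to investigate, not a verdict.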
Human vs. Machine Test Evaluation Has A Double Standard
Posted by Eric Jacobson at Friday, April 04, 2014

I often hear skeptics question the value of test automation. Their questioning is healthy for the test industry and it might flush out bad test automation. I hope it continues.
But shouldn’t these same questions be raised about human testing (a.k.a. manual testing)? If these same skeptics judged human testing with the same level of scrutiny, might it improve human testing?
First, the common criticisms of test automation:
- Sure, you have a lot of automated checks in your automated regression check suite, but how many actually find bugs?
- It would take hours to write an automated check for that. A human could test it in a few seconds.
- Automated checks can’t adapt to minor changes in the system under test. Therefore, the automated checks break all the time.
- We never get the ROI we expect with test automation. Plus, it’s difficult to measure ROI for test automation.
- We don’t need test automation. Our manual testers appear to be doing just fine.
Now let’s turn them around to question manual testing:
- Sure, you have a lot of manual tests in your manual regression test suite, but how many actually find bugs?
- It would take hours for a human to test that. A machine could test it in a few seconds.
- Manual testers are good at adapting to minor changes in the system under test. Sometimes they aren’t even aware of their adaptations. Therefore, manual testers often miss important problems.
- We never get the ROI we expected with manual testing. Plus, it’s difficult to measure ROI for manual testing.
- We don’t need manual testers. Our programmers appear to be doing just fine with testing.