It’s just not fair.
The better we test, the more we appear to miss our deadlines.
Skilled testers provide more feedback than unskilled testers. The skilled testers find more bugs and raise more questions. The more bugs found, the more testing is required to verify the fixes. The more bugs fixed, the more regression testing is required.
The unskilled tester scratches the surface. If no bugs or questions are discovered and little feedback (e.g., test results) is produced, the unskilled tester calls it a day at 5PM every day and naively goes home to watch TV. It’s possible to get away with this, especially when the missed defects are never discovered in production; those that are may be written off as too difficult to catch in test. Poor performers can hide well in the test world. You may know some.
What can we do about this frustrating injustice?
Reduce feature ownership.
The above paradigm may be partly the result of feature ownership. If each tester is assigned certain features to test, and is therefore only responsible for seeing those features through to production, the unskilled tester is rewarded by easily meeting deadlines, while the skilled tester pulls her hair out trying to keep up.
Test managers have some control over this. They can ask the unskilled tester to assist the skilled tester with less cognitively demanding tasks, such as bug verification or regression testing. This helps reinforce the team mentality: nobody goes home until all features are fully tested. Most Agile teams are already doing this, but I suspect the unskilled testers still manage to provide less value on Stories they pull from the task board.
Deadlines are not the main goal.
In most trades, we reward people for getting work done on time. Perhaps in testing, we should stop doing this. It’s almost as if we should do the opposite: reward testers for managing to keep the team busy fixing problems and thus not meeting the deadline.
I’m exaggerating, of course, but when we tell testers to “get this tested well and on time”, there is a conflict of interest. To make matters worse, it’s easier to look at a clock and say “great job, tester, you completed the testing on time” than it is to look at a piece of software and say “great job, tester, you tested this well”.
Let’s not forget to celebrate the efforts of those testers who always seem to be swamped and having a tough time meeting team test deadlines. They need a break sometimes too.
1 comment:
The tester who is swamped seems to be stuck focusing on just a few initial features and trying to test the heck out of them, which is a good thing but might not be the best use of time, as other features won't get the same amount of time dedicated to testing.
IMO, the best way to optimize time is to do quick, high-level passes, kind of like a flyover, and then iteratively, based on the previous high-level passes, narrow down what needs to be tested in more detail.
That way you at least touch, to some degree, all the features that need to be tested in the first-level pass, instead of getting bogged down in one or two features and leaving 10 other features not tested at all.
Writing this, I imagined one of those unmanned airplane drones skimming at 10K feet, surveying the battlefield to pick out targets for testing. The second pass is a bomber at 5K feet taking out the code bunkers targeted by the drone. Finally, foot-soldier infantry is sent in on the ground to finish the job and drive out the remaining bugs still hiding underground.
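The commenter's flyover strategy can be sketched in code. This is a minimal illustration, not anything from the post itself: the feature names, the `bugs_in_smoke` field, and the fixed per-feature smoke budget are all hypothetical, and real teams would weight the deep-dive passes by risk, not just by bug counts.

```python
# Hypothetical sketch of the "flyover" approach: give every feature a
# shallow first pass, then concentrate the remaining time on the
# features that looked troubled in that pass.

def allocate_passes(features, total_hours, smoke_hours=1.0):
    """Spend a fixed smoke-test budget on every feature, then split
    the remaining time among features flagged in the first pass."""
    smoke_budget = smoke_hours * len(features)
    remaining = max(total_hours - smoke_budget, 0)
    flagged = [f for f in features if f["bugs_in_smoke"] > 0]
    plan = {f["name"]: smoke_hours for f in features}
    if flagged and remaining:
        # Weight deep-dive time by how buggy the flyover looked.
        total_bugs = sum(f["bugs_in_smoke"] for f in flagged)
        for f in flagged:
            plan[f["name"]] += remaining * f["bugs_in_smoke"] / total_bugs
    return plan

features = [
    {"name": "login", "bugs_in_smoke": 3},
    {"name": "search", "bugs_in_smoke": 0},
    {"name": "checkout", "bugs_in_smoke": 1},
]
print(allocate_passes(features, total_hours=11))
# → {'login': 7.0, 'search': 1.0, 'checkout': 3.0}
```

Every feature gets touched at least once, so nothing ships completely untested, while the buggiest areas still receive most of the remaining hours.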