We have a recurring conversation on both my project teams. Some Testers, Programmers, and BAs believe certain work items are “testable” while others are not. For example, some testers believe a service is not “testable” until its UI component is complete. I’m sure most readers of this blog would disagree.
A more extreme example of a work item that some believe is not “testable” is one asking Programmer_A to review Programmer_B’s code. But there are several ways to test that, right?
- Ask Programmer_A if they reviewed Programmer_B’s code. Did they find problems? Did they make suggestions? Did Programmer_B follow coding standards?
- Attend the review session.
- Install tracking software on Programmer_A’s PC that programmatically determines if said code was opened and navigated appropriately for a human to review.
- Ask Programmer_B what feedback they received from Programmer_A.
IMO, everything is testable to some extent. But that doesn’t mean everything should be tested. These are two completely different things. I don’t test everything I can test. I test everything I should test.
I firmly believe a skilled tester should have the freedom to decide which things they will spend time testing and when. In some cases it may make more sense to wait and test the service indirectly via the UI. In some cases it may make sense to verify that a programmer code review has occurred. But said decision should be made by the tester based on their available time and queue of other things to test.
We don’t need no stinkin’ “testable” flag. Everything is testable. Trust the tester.
Everything can be tested, but to me it makes much more sense to say something is "testable" when it's designed so that it is easy to test. That means more testing gets done in the same amount of time and with less effort. So the opposite of "testable" is not "not testable" but "difficult to test".
I think the testability of code often goes hand in hand with maintainability. The same applies there: everything can be maintained, but for some programs it's much harder.
Edu, ah yes! Good point about testability. I guess testability also depends on the skills of the tester. However, that is an excellent argument for declaring things testable. If certain things are declared testable at the design phase, maybe they can be designed with testability in mind.
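To make that concrete, here is a minimal sketch (mine, with made-up names, not from anyone's actual project) of what "designed with testability in mind" could look like: the price lookup is injected, so the logic can be exercised long before the UI or a real pricing backend exists.

```python
# Hypothetical example: a function designed for testability via dependency
# injection. The price lookup is passed in, so tests can supply a fake one
# instead of waiting on a database, web service, or UI.

from typing import Callable


def order_total(item_ids: list[str],
                price_lookup: Callable[[str], float],
                discount: float = 0.0) -> float:
    """Sum item prices via the injected lookup, then apply a discount."""
    subtotal = sum(price_lookup(item_id) for item_id in item_ids)
    return round(subtotal * (1.0 - discount), 2)


# A test can exercise the discount logic directly with a fake lookup:
def test_order_total_applies_discount():
    fake_prices = {"apple": 2.0, "bread": 3.0}
    total = order_total(["apple", "bread"], fake_prices.get, discount=0.1)
    assert total == 4.5


if __name__ == "__main__":
    test_order_total_applies_discount()
    print("ok")
```

The specific pattern isn't the point; the point is that a design decision like this, made up front, is what turns "difficult to test" into "easy to test".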
I agree with Edu's comment; it comes down to ease of testing and whether it makes sense to test something in this state or that state. Yes, everything CAN be tested, but perhaps a better discussion would be how we get the most value out of a test: value for the product, the team, and the organization.
I also wanted to comment on another item in your blog post, the item of installing software on a programmer's machine to verify certain things were done... that to me sounds more like a trust issue than a test issue. You say "trust the tester"; why not trust the entire team? If something is missed or done poorly, it will be evident. Unfortunately this can be frustrating and cause unnecessary work for one part of the team or another, but to truly function as a team, everyone should strive to trust one another. There is also something to be said for autonomy. On that point I would refer you to this article: http://www.blogger.com/comment.g?blogID=8951904624959546499&postID=8227474294328241427
In particular this section stands out for me: "To be truly intrinsically motivated and to gain a sense of achievement when they do make progress, people need to have some say in their own work. What's more, when employees have freedom in how to do the work, they are more creative. Two key aspects of autonomy are having the ability to make meaningful decisions in work and then feeling confident that — barring serious errors or dramatic shifts in conditions — those decisions will hold."
If people feel that they have no ownership, or that no matter what they do their work will be called into question, it can be very demotivating and thus detrimental to a project and a team.