Test This #8 - The Follow-On Journey
Posted by Eric Jacobson at Thursday, August 21, 2014

While reading Duncan Nisbet’s TDD For Testers article, I stumbled on a neat term he used: “follow-on journey”.
For me, the follow-on journey is a test idea trigger for something I otherwise would have just called regression testing. I suppose “follow-on journey” falls under the umbrella of regression testing, but it’s more specific and helps me quickly consider the next best tests I might execute.
Here is a generic example:
Your e-commerce product-under-test has a new interface that allows users to enter sales items into inventory by scanning their barcodes. Detailed specs provide us with lots of business logic that must take place to populate each sales item upon scanning its barcode. After testing the new sales item input process, we should consider testing the follow-on journey: what happens if we order sales items ingested via the new barcode scanner?
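To make the idea concrete, here is a minimal sketch of a follow-on journey test in Python. Everything in it (SalesItem, scan_barcode, place_order, the barcode value) is a hypothetical stand-in for whatever your product actually exposes:

from dataclasses import dataclass

# Hypothetical stand-ins for the product-under-test's real interfaces.
@dataclass
class SalesItem:
    sku: str
    in_inventory: bool = False

def scan_barcode(barcode: str) -> SalesItem:
    # Stub: the real implementation would apply the spec's business
    # logic to populate the sales item from the scanned barcode.
    return SalesItem(sku=barcode, in_inventory=True)

def place_order(item: SalesItem) -> bool:
    # Stub: the real implementation would submit an order downstream.
    return item.in_inventory

def test_order_item_ingested_via_barcode_scanner():
    # The new feature under test: ingest an item by scanning its barcode.
    item = scan_barcode("0012345678905")
    assert item.in_inventory
    # The follow-on journey: order the item that arrived via the new path.
    assert place_order(item)

The second assertion is the follow-on journey: the item didn’t just land in inventory, it also behaves correctly when the next workflow picks it up.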
I used the term while planning tests with another tester earlier today. The mental image of an affected object’s potential journeys helped us leap to some cool tests.
Don’t Bother Indicating “Pass” or “Fail”
Posted by Eric Jacobson at Tuesday, August 05, 2014

This efficiency didn’t occur to me until recently. I was doing an exploratory test session and documenting my tests via Rapid Reporter. My normal process had always been to document the test I was about to execute…
TEST: Edit element with unlinked parent
…execute the test, then write “PASS” or “FAIL” after it, like this…
TEST: Edit element with unlinked parent – PASS
But it occurred to me that if a test appears to fail, I tag said failure as a “Bug”, “Issue”, “Question”, or “Next Time”. As long as I do that consistently, there is no need to add “PASS” or “FAIL” to the documented tests. While debriefing about my tests post-session, the assumption will be that the test passed unless indicated otherwise.
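For example, a session’s notes might read like this (a hypothetical excerpt; the bug wording is made up):

TEST: Edit element with unlinked parent
TEST: Delete element with unlinked parent
BUG: Deleted element still appears in the parent’s child list

During the debrief, the first test is assumed to have passed, and the second test’s failure is already captured by the tagged BUG note, so neither needs an explicit result.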
Even though it felt like going to work without pants, it turned out, after a few more sessions, that not resolving each test to “PASS” or “FAIL” reduced administrative time and caused no ambiguity during test reviews. Cool!
Wait. It gets better.
On further analysis, resolving all my tests to “PASS” or “FAIL” may actually have been preventing me from testing: it was influencing me to frame everything as a check. Real testing does not have to result in “PASS” or “FAIL”. If I didn’t know what was supposed to happen after editing an element with an unlinked parent (as in the above example), then it didn’t really “PASS” or “FAIL”, right? Nevertheless, I may have learned something important, which made the test worth doing… I’m rambling.
The bottom line is, maybe you don’t need to indicate “PASS” or “FAIL”. Try it.