Don’t Describe The Bug In Your Repro Steps
Want your bug reports to be clear? Don’t tell us about the bug in the repro steps.
If your bug reports include Repro Steps and Results sections, you’re halfway to success. However, the other half requires getting the information into the right sections.
People have a hard time with this. Repro steps should be the actions leading up to the bug. But they should not actually describe the bug. The bug should get described in the Results section (i.e., Expected Results vs. Actual Results).
The beauty of these two distinct sections, Repro Steps and Results, is that they help us quickly and clearly identify the bug being reported. If you include the bug within the repro steps, our brains have to start wondering whether it is a condition necessary to lead up to the bug, the bug itself, some unrelated problem, or a sign that the author is just confused.
In addition to missing out on clarity, you also create extra work for yourself and the reader by describing the bug twice.
Don’t Do This:
Repro Steps:
- Create a new order.
- Add an item to your new order.
- Click the Delete button to delete the order.
- The order does not delete.
Expected Results: The order deletes.
Actual Results: The order does not delete.
Instead, Do This:
Repro Steps:
- Create a new order.
- Add an item to your new order.
- Click the Delete button to delete the order.
Expected Results: The order deletes.
Actual Results: The order does not delete.
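If you like tooling, the separation is easy to encode. Here is a minimal sketch in Python (the class and the lint heuristic are my own invention, not any real bug tracker’s schema) that keeps actions and results in distinct fields and flags repro steps that read like outcomes:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Hypothetical report structure: steps are actions, results are separate."""
    title: str
    repro_steps: list[str] = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""

    def results_hiding_in_steps(self) -> list[str]:
        # Crude, illustrative heuristic: action steps usually begin with an
        # imperative verb ("Create", "Click"), while outcome statements often
        # begin with "The ..." ("The order does not delete.").
        return [step for step in self.repro_steps
                if step.split()[0].lower() in ("the", "it", "nothing")]

report = BugReport(
    title="Order cannot be deleted",
    repro_steps=[
        "Create a new order.",
        "Add an item to your new order.",
        "Click the Delete button to delete the order.",
    ],
    expected_result="The order deletes.",
    actual_result="The order does not delete.",
)
assert report.results_hiding_in_steps() == []  # steps describe actions only
```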
CAST2011 was full of tester heavyweights. Each time I sat down in the main gathering area, I picked a table with people I didn’t know. One of those times I happened to sit down next to thetesteye.com blogger Henrik Emilsson. After enjoying his conversation, I attended his Crafting Our Own Models of Software Quality track session.
CRUSSPIC STMPL (pronounced Krusspic Stemple)…I had heard James Bach mention his quality criteria model mnemonic years ago. CRUSSPIC represents operational quality criteria (i.e., Capability, Reliability, Usability, Security, Scalability, Performance, Installability, Compatibility). STMPL represents development quality criteria (i.e., Supportability, Testability, Maintainability, Portability, Localizability).
Despite how appealing it is to taste the phrase CRUSSPIC STMPL as it exercises the mouth, I had always considered it too abstract to benefit my testing.
Henrik, on the other hand, did not agree. He began his presentation quoting statistician George Edward Pelham Box, who said “…all models are wrong, but some are useful”. Henrik believes we should each create models better suited to our own context.
With that, Henrik and his tester colleagues took Bach’s CRUSSPIC STMPL, and over the course of about a year, modified it to their own context. Their current model, CRUCSPIC STMP, is posted here. They spent countless hours reworking what each criterion means to them.
They also swapped out some of the criteria for their own. Of note was swapping out the fourth letter, an “S”, for a “C”: Charisma. When you think about some of your favorite software products, charisma probably plays an important role. Is it good-looking? Do you get hooked and have fun? Does the product have a compelling inception story (e.g., Facebook)? And to take CRUCSPIC STMP further, Henrik has worked in nested mnemonics. The Charisma quality item descriptors are SPACE HEADS (i.e., Satisfaction, Professionalism, Attractiveness, Curiosity, Entrancement, Hype, Expectancy, Attitude, Directness, Story).
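To make the model concrete, here is a minimal sketch (my own illustration, not Henrik’s or Bach’s actual tooling, and the groupings are just my reading of the mnemonics above) of how a quality criteria model can be encoded as plain data and used to generate test-idea prompts:

```python
# My reading of CRUCSPIC STMP as plain data (not Henrik's actual artifact).
QUALITY_MODEL = {
    "operational": ["Capability", "Reliability", "Usability", "Charisma",
                    "Security", "Performance", "Installability", "Compatibility"],
    "development": ["Supportability", "Testability", "Maintainability",
                    "Portability"],
}

# Nested mnemonic: SPACE HEADS, the descriptors under Charisma.
CHARISMA_DESCRIPTORS = ["Satisfaction", "Professionalism", "Attractiveness",
                        "Curiosity", "Entrancement", "Hype", "Expectancy",
                        "Attitude", "Directness", "Story"]

def test_idea_prompts(feature: str) -> list[str]:
    """Turn every criterion into a question to ask about a feature."""
    criteria = QUALITY_MODEL["operational"] + QUALITY_MODEL["development"]
    return [f"How could {feature} disappoint someone on {c}?" for c in criteria]

for prompt in test_idea_prompts("order deletion"):
    print(prompt)
```

The point isn’t the code; it’s that once the model is explicit, you can interrogate it one criterion at a time.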
Impressive. But how practical is it?
After Henrik’s presentation, I have to admit, I’m convinced the model is worth the effort it takes to build:
- Talking to customers - If quality is value to some person, a quality model can be used to help that person (customers/users) explain which quality criteria are most important to them. This, in turn, will guide the tester.
- Test idea triggers - Per Henrik, a great model inspires you to think for yourself.
- Evaluating test results – If Concurrency is a target quality criterion, did my test tell me anything about performing parallel tasks?
- Talking about testing – Reputation and integrity are important traits for skilled testers. When James Bach or Henrik Emilsson talk about testing, their intimate knowledge of their quality models gives them an air of sophistication that is hard to beat.
Yes, I’m inspired to build a quality criteria model. Thank you, Henrik!
Safety Language and the Preservation of Uncertainty
On the third day of CAST2011, Jeff (another tester) and I played the hidden picture exercise with James Bach. We were to identify a hidden picture by uncovering it one pixel at a time, revealing as few pixels as possible in a short amount of time. This forced us to think about the balance between coverage and gathering enough valuable information to stop. I won’t tell you our approach, but eventually we felt comfortable stating our conclusion. James challenged us to tell him with absolute certainty what the hidden picture was. I responded with, “It appears to be a picture of…”, which to my delight was followed by praise from the master. He remarked on my usage of safety language.
Two days earlier, Michael Bolton’s heady CAST2011 keynote kept me struggling to keep up. He discussed the studies of scientists and thinkers and related their findings to software testing. To introduce the conference theme, he concluded that what we call a fact is actually context dependent.
Since CAST2011 focused on the Context-Driven testing school, we heard a lot about testing schools (or ways of thinking about testing). For example, the Factory testing school believes tests should be scripted and repeatability is important. Some don’t like the label “Factory” but James Bach pulled a red card and argued the label “Factory” can be a good label under the right circumstances (e.g., manufacturing). I never really understood why I should care about testing schools until Bolton (and Bach) explained it this way…
Schools allow us to say “that person is of a different school” rather than “that person is a fool”.
I’ll try to paraphrase a few of Michael’s ideas:
- The world is a complex, messy, variable place. Testers should accept reality and know that some ambiguity and uncertainty will always exist. A tester’s job is to reduce damaging uncertainty. Testers can at least provide partial answers that may be useful.
- If quality is value to some person, then who should test the quality for various people? This is why it’s important for testers to learn to observe people and determine what is important to them. Software engineering professor Cem Kaner calls testing a social science for this reason.
- Cultural anthropologist Wade Davis believes people thrive by learning to read the world beyond them.
Per Michael, if the above points are true, testers should use safety language. I really liked this part of the lesson. Instead of saying “it fails under these circumstances”, a tester should say “it appears to fail” or “it might fail under these circumstances”. Instead of “the root cause is…”, a tester should say “a root cause is…”. When dealing with an argument, say “I disagree” instead of “you’re wrong” and end by saying “you may be right”. This type of safety language helps to preserve uncertainty and I agree that testers should use it wherever possible.
Be Careful With That CAST2011 Kool-Aid
After three days at CAST2011, I finally caught up on the #CAST2011 Twitter feed. It was filled with great thoughts and moments from the conference, which reflected most of what I experienced. There was only one thing missing: critical reaction.
In Michael Bolton’s thought-provoking keynote, I was reminded of Jerry Weinberg’s famous tester definition, “A tester is someone who knows that things can be different”. Well, before posting on what I learned at CAST2011, I’ll take a moment to document four things that could have been different.
Here are some things I got tired of hearing at CAST2011.
- Commercial test automation tools are the root of all evil. Quick Test Pro (QTP) was the one that took the most heat (it always is). Speakers liked to rattle off all the commercial test automation tools they could think of and throw them into a big book-burning fire. The reason I’m tired of this is that I’ve had great success using QTP as a test automation tool, and I didn’t use any of its record/playback features. I’ve been using my QTP tests to run some 600 checks for the last 24 iterations, and they have worked great. I think any tool can suck when used in the wrong context. These tools can also be effective in the right context.
- Physical things are shiny and cool and new all over again. One presentation was about organizing work items with different colored stickies on a whiteboard instead of on a computer (you’ve heard that before). One was about writing tests on different colored index cards. Someone suggested using giant Lego blocks to track progress. In each of these cases, one can see the complexity grow (e.g., let’s stick red things on blue things to indicate the blue things are blocked; one guy entered his index card tests into a spreadsheet so he could sort them). Apparently Einstein used to leave piles of index cards all over his house to jot his ideas down on. I’m thinking maybe that was because Einstein didn’t have an iPhone. IMO, this obsession with using office supplies to organize complex work is silly. This is why we invented computers, after all. Use software!
- Down with PowerPoint! It’s popular these days to be anti-PowerPoint and CAST2011 speakers jumped on that too. Half the speakers I saw did not bother to use PowerPoint. I think this is silly. There is a reason PowerPoint grew to such popularity. It works! I would much rather see an organized presentation that someone took the time to prepare, rather than watching speakers fumble around through their file structure looking for pictures or videos to show, which is what I saw 3 or 4 times. One speaker actually opened PowerPoint, mumbled something about hating it, then didn’t bother to use slideshow mode. So we looked at his slides in design view. PowerPoint presentations can suck, don’t get me wrong, but they can also be brilliant with a little creativity. Just watch some TED talks.
- Traditional scripting testers are wrong. You know the ones, those testers who write exhaustive test details so a guy off the street can execute their tests. Oh wait…maybe you don’t know the ones. Much time was spent criticizing that approach. I’m tired of it because I don’t really think those people are much of a threat these days. I’ve never worked with one and they certainly don’t attend CAST. Why spend time bashing them?
I’m not bitter. I learned from and loved all the speakers. Jon Bach and his brother, James, put on an excellent tester conference that I was extremely grateful to attend. I was just surprised we couldn’t get beyond the above.
Positive posts to come. I promise.
BTW - Speaking of candor… to Jon Bach’s credit, he opened Day 2 with a clever self-deprecating bug report addressing conference concerns he had collected on Day 1. Things like the name tag print being too small and the breakfast lacking protein. Most of these issues were fixed, and he even used PowerPoint to present them. Go Jon! I was very impressed.
In my experience, 95% of the test cases we write are read and executed only by ourselves. If we generally target ourselves as the audience, we should strive…
…to write as little as possible while still being able to remember the test.
I like the term “test case fragment” for this. I heard it in my Rapid Software Testing class. On the 5% chance someone asks us about a particular test, we should be able to confidently translate our chicken scratches into a detailed test. That’s my target.
If we agree with the above, couldn’t we improve our efficiency even more by coming up with some type of test case shorthand?
For example:
- Instead of “Expected Results”, I write… “E:”.
- Instead of writing statements like “the user who cancelled their order blocks the user who logged in first”, I prefer to assign variables: “UserA blocks UserB”.
- Instead of multiple steps, I prefer one-step test cases. To get there, I take some shortcuts.
- Rather than specifying how to get to the start condition (e.g., find existing data vs. create data), I prefer the flexibility of the word “with”, as in “With a pending order”. How the order becomes pending is not important for this test.
- To quickly remember the spirit of the test, I prefer state-action-expected as in “With a pending order, delete the ordered product. E: user message indicates product no longer exists.”
- Deliberately vague is good enough. When I don’t have enough information to plug in state-action-expected, I capture the vague notion of the test, as in “Attempt to corrupt an order”. In that case I drop the “E:” because it is understood, right? Expected Result: corruption handled gracefully.
It may be a stretch to call these examples “shorthand”, but I think you get the idea.
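And if the shorthand ever needed to be expanded for someone else, even a throwaway script could do it. Here is a sketch (my own, assuming the informal “With <state>, <action>. E: <expected>” format described above):

```python
import re

def parse_shorthand(note: str) -> dict:
    """Expand a state-action-expected shorthand note into labeled fields."""
    case = {"state": None, "action": "", "expected": None}
    # Split off the expected result, if an "E:" marker is present.
    if "E:" in note:
        action_part, expected = note.split("E:", 1)
        case["action"] = action_part.strip().rstrip(".")
        case["expected"] = expected.strip()
    else:
        case["action"] = note.strip()  # deliberately vague: no "E:" given
    # Split off the starting state, if the note begins "With <state>,".
    match = re.match(r"[Ww]ith (.+?),\s*(.+)", case["action"])
    if match:
        case["state"], case["action"] = match.group(1), match.group(2)
    return case

print(parse_shorthand(
    "With a pending order, delete the ordered product. "
    "E: user message indicates product no longer exists."))
# {'state': 'a pending order', 'action': 'delete the ordered product',
#  'expected': 'user message indicates product no longer exists.'}
```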
What test case shorthand do you use?