I'm in love with the word “appears” when describing software. The word “appears” allows us to describe what we observe without making untrue statements or losing our point. And I think it leads to better communication.
For example, let’s say the AUT is supposed to add an item to my shopping cart when I click the “Add Item” button. Upon black box testing said feature, it looks like items are not added to the shopping cart. There are several possibilities here (see the sketch after this list).
• The screen is not refreshing.
• The UI is not correctly displaying the items in my cart (i.e., the database indicates the added items are in my cart).
• The item is added to the wrong database location.
• The added item is actually displaying on the screen, just not where I expect it to.
• A security problem is preventing me from seeing the contents of my cart.
• Etc.
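To make this concrete, here is a minimal sketch of a black box check whose failure message only claims what was observed. It assumes Selenium and an entirely hypothetical cart page; the URL, element IDs, CSS selectors, and item name are all made up.

```python
# A hedged black box check: report what the UI shows, not a conclusion
# about the database, the refresh, or security. The URL, selectors, and
# item name below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/store")         # hypothetical AUT

driver.find_element(By.ID, "add-item").click()  # hypothetical button id
items = driver.find_elements(By.CSS_SELECTOR, "#cart .item")

if not any("Widget" in item.text for item in items):
    # "does not appear" -- we observed the cart UI, nothing more.
    print(f"BUG: the item does not appear to be added to my cart "
          f"(cart UI lists {len(items)} items after clicking Add Item)")

driver.quit()
```

Notice the message stays true under every bullet above: whether the real culprit is the refresh, the UI, the database, or security, the item still does not appear in the cart.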
The possibilities are endless. But so are the tests I want to execute. So like I said in my previous post, after determining how much investigation I can afford, I need to log a bug and move on. If I describe the actual result of my test as “the item was not added to my cart”, one could argue, “yes it was, I see it in the cart when I refresh...or look in the DB, etc.”. The clarification is helpful but the fact is, a bug still exists.
Here is where my handy little word becomes useful. If I instead describe the actual result as “the item does not appear to be added to my cart”, it becomes closer to an indisputable fact. Once you begin scrutinizing your observation descriptions, you may find yourself (as I did) making statements that are later proven untrue, and these can distract from the message.
Think about this a little before you decide this post appears to suck.
My opinions do not reflect those of my employer.
Language is wonderful. By making a slightly less accurate statement, it becomes much more difficult to disprove.
There's always a tradeoff for a tester as they try to decide how much information is necessary to add to a bug.
In your example, I'd say it was clearly worth it to try pressing the refresh button.
I'd also argue that it would be worth it to check the database to make sure the data is actually in the cart -- assuming this was not extremely difficult or time-consuming to do.
Should the QA person run the client using an IDE debug tool and find the actual spot in code where an error occurs? Well, again, I can argue there are times when they should -- but likely not all the time.
Sometimes the tester's time is better spent finding bugs and not investigating them. Sometimes the developer's time is better spent coding and not investigating bugs. I don't think there's a general answer that will satisfy every situation.
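For what it's worth, the database cross-check can be cheap when the tester has read access. Here is a minimal sketch, assuming a SQLite database with a hypothetical cart_items table; the file name, columns, user id, and SKU are all made up.

```python
# A hypothetical database cross-check: did the row actually land in the
# cart table? The schema and test data below are made up for illustration.
import sqlite3

conn = sqlite3.connect("store.db")  # hypothetical database file
row = conn.execute(
    "SELECT quantity FROM cart_items WHERE user_id = ? AND sku = ?",
    (42, "WIDGET-1"),               # hypothetical test user and item
).fetchone()

if row is None:
    print("The item does not appear in the cart table either, so this is "
          "probably not just a display problem.")
else:
    print(f"The row is present (quantity={row[0]}); the UI appears to be "
          "the misbehaving layer.")

conn.close()
```

Either way, the bug report can still say the item “does not appear” in the cart, now with a note about where it was and wasn't found.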