Most testers/devs agree that multiple problems should not be documented in the same bug report. Should the same logic apply to tests? Perhaps multiple verifications that could each fail should not live in the same test.
Let’s say each of our tests includes multiple verifications. If Test #1 gets an overall PASS result, it tells us everything verified in Test #1 worked as expected. I’m okay with this. However, if it gets a FAIL result, it tells us only that at least one thing did not work as expected. Anyone who sees the failed test cannot really understand the problem without drilling down into some kind of documented result details. And how do we know when we can execute this test again?
The simpler our tests, the easier they are to write, and the fewer working things they depend on in order to execute. I'll use an exaggerated example. The following test verifies that a user can log in and edit an account.
1. Log in. Expected: Log in page works.
2. Edit an account. Expected: Account is edited.
What is this test really interested in? What if the log in page is broken, but you know a workaround that gets you to the same logged-in state? Can you still test editing an account? And if editing works, should this test pass?
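To make the contrast concrete, here is a rough automated analogue (a minimal pytest-style sketch; `login` and `edit_account` are hypothetical stand-ins for the application under test, not anything from a real app):

```python
# Hypothetical stand-ins for the application under test.
def login(user, password):
    return True  # pretend the log in step succeeded

def edit_account(account_id, name):
    return True  # pretend the edit succeeded

# Combined test: two verifications in one test. If the first
# assertion fails, the edit is never even attempted, and the
# single FAIL result doesn't say which behavior broke.
def test_login_and_edit_account():
    assert login("user", "secret")
    assert edit_account(account_id=1, name="new name")

# Split tests: one action, one expected result each. A failure
# in either points directly at the broken behavior.
def test_login():
    assert login("user", "secret")

def test_edit_account():
    assert edit_account(account_id=1, name="new name")
```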
My ideal manual test is structured as follows (I’ll discuss automated tests next week).
Do A. Expect B.
It has one action and one expected result. Everything else I need to know prior to my action is documented in my test’s Preconditions. This helps me focus on what I am actually trying to test. If I discover upstream bugs along the way, good, I’ll log them. But they need not force this specific test to fail. Get it? Have you thought about this? Am I missing anything?
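(As a preview of next week's automated-test post, here is a minimal pytest-style sketch of the same structure; `login`, `edit_account`, and the fixture names are hypothetical stand-ins. The precondition lives in a fixture, so if login is broken the run reports a setup error for this test instead of failing the verification itself.)

```python
import pytest

# Hypothetical stand-ins for the application under test.
def login(user, password):
    return {"user": user}  # pretend we got a logged-in session

def edit_account(session, account_id, name):
    return True  # pretend the edit succeeded

@pytest.fixture
def logged_in_session():
    # Precondition: everything I need working *before* the action
    # under test. If login is broken, pytest reports this test as
    # an ERROR during setup, not as a FAIL of the edit-account
    # verification itself.
    session = login("user", "secret")
    assert session is not None
    return session

def test_edit_account(logged_in_session):
    # Do A (one action). Expect B (one expected result).
    assert edit_account(logged_in_session, account_id=1, name="new name")
```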
4 comments:
I guess it depends on how structured and repeatable you want your manual tests to be. I would argue that manual tests should be somewhat open-ended. You have your requirements, and those should guide the "expected" results, but in terms of what you test, I don't think a strict, repeatable set of steps buys you much after you run it the first time.
Run your test, then write down what you did and what you found so that whoever runs this same test in the future can alter it in a meaningful way.
When a test that has multiple verifications fails, the failure assertion message should make clear why it failed, so you shouldn't have to dig into the test to figure that out.
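For instance, in a pytest-style sketch (hypothetical helpers, not from the original post), each assertion can carry a message that names the broken step, so the failure output is self-explanatory:

```python
# Hypothetical helpers, purely to illustrate assertion messages.
def login(user, password):
    return False  # simulate a broken log in step

def edit_account(account_id, name):
    return True

def test_login_and_edit_account():
    # If this fails, the output already says which verification
    # broke, without anyone opening the test to trace its steps.
    assert login("user", "secret"), "step 1 failed: log in page works"
    assert edit_account(account_id=1, name="new name"), "step 2 failed: account is edited"
```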
I do concur that a simple test that does X and expects Y is the simplest and clearest way to conduct tests. However, although the benefit is that it's simple, the disadvantage is also that it's simple.
Sometimes, if you want the best bang for the buck and want to get stuff done, reusability is the way to go, even if it means slightly complicating things. Reusability is one of the tenets of OO (object-oriented) programming and of using classes/objects.
Alex,
I guess I didn't explain my idea well in this post. I agree that, in general, open-ended manual tests are better than strict steps. My post was not trying to imply any level of detail in a test. Instead, I was trying to say something about the number of verifications. I see these two (verifications vs. detail) as independent of each other.
I'm trying to answer the question of when something called a "test" should begin and end. I have seen functional tests with 10 or more steps, each of which can pass or fail. I believe such a test would be more valuable broken into 10 independent tests.
sillypants,
Ah, it sounds like you are talking about automated tests (or unit tests), since you mention assertion messages. Either that, or your manual test case app is more sophisticated than mine. I use Mercury TestDirector to execute manual tests. It forces a validation for each test step, but these easily get lost behind the overall result.
At any rate, I certainly agree with you on automated tests.