If I could take one test automation rule to my grave, this would be it.
I know, I know, it seems so tempting to break this rule at first; TestA puts the product-under-test in the perfect state for TestB. Please don't fall into this trap.
Here are some reasons (I can think of) to keep your tests mutually exclusive:
- The Domino Effect – If TestB depends on TestA, and TestA fails, there is a good chance TestB will also fail, but not because the functionality TestB checks is broken. And so on down the chain.
- Making a Check Mix – Once you have a good number of automated checks, you'll want the freedom to break them into various suites. You may want a smoke test suite, a regression test suite, a root check for a performance test, or other test missions that require only a handful of checks. Dependencies will not allow this.
- Authoring – While coding an automated check (a new check or an update to an existing one), you will want to execute that check over and over without having to execute the whole suite.
- Easily Readable – When you review your automation coverage with your development team or stakeholders, you’ll want readable test methods. That usually means each test method’s setup is clear. Everything needed to understand that test method is contained within the scope of the test method.
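To make that last point concrete, here's a minimal sketch in Python with pytest. The AppClient and its methods are made up for illustration; the point is that setup, action, and assertion all live inside one test method:

```python
import pytest

# Hypothetical application client; the names are illustrative, not a real library.
from myapp.client import AppClient


@pytest.fixture
def client():
    app = AppClient()   # common setup
    yield app
    app.close()         # common teardown


def test_update_user_changes_name(client):
    # Setup specific to this check lives inside the test method,
    # so the check can run alone, in any order, and a reader needs
    # nothing outside this method to understand it.
    user = client.create_user(name="alice")

    # The one behavior under test.
    client.update_user(user.id, name="bob")

    # Observe the result.
    assert client.get_user(user.id).name == "bob"
```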
I can sympathize with your plight, but I disagree with the rigidity with which you take your stand.
I think it's perfectly OK to have a set of tests within a self-contained suite that are dependent on each other. Some pros:
- No extra setup/teardown that re-exercises tests you've already run
- It's easy to follow the tests in the order that they are written
- It's more similar to a real life cycle.
Let's take testing a simple CRUD API for example.
Following your stance, you create tests with this setup/teardown:
Setup: Common interface setup
Test: Create /foo1
Teardown: Common interface teardown; Delete /foo1
Setup: Common interface setup; Create /foo2
Test: Read /foo2
Teardown: Common interface teardown; Delete /foo2
Setup: Common interface setup; Create /foo3
Test: Update /foo3
Teardown: Common interface teardown; Delete /foo3
Setup: Common interface setup; Create /foo4
Test: Delete /foo4
Teardown: Common interface teardown; None
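In pytest, that independent layout might look something like this (ApiClient and its methods are made up, just to show the shape):

```python
import pytest

# Hypothetical client for the CRUD API; illustrative only.
from crud_api import ApiClient


@pytest.fixture
def api():
    client = ApiClient()   # common interface setup
    yield client
    client.close()         # common interface teardown


def test_create(api):
    api.create("/foo1")
    assert api.read("/foo1") is not None
    api.delete("/foo1")    # per-test cleanup


def test_read(api):
    api.create("/foo2")    # per-test setup
    assert api.read("/foo2") is not None
    api.delete("/foo2")


def test_update(api):
    api.create("/foo3")
    api.update("/foo3", value=42)
    assert api.read("/foo3")["value"] == 42
    api.delete("/foo3")


def test_delete(api):
    api.create("/foo4")
    api.delete("/foo4")
    assert api.read("/foo4") is None
```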
-----------
Whereas you can make a single suite of tests like:
Setup: Common interface setup
Test: Create /foo1
Test: Read /foo1
Test: Update /foo1
Test: Delete /foo1
Teardown: Common interface teardown
----------
This eliminates the extra baggage of setup/teardown but keeps the dependencies within the single suite. You can build several different self-contained suites and run them in parallel to reduce time.
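As a sketch of that suite in pytest, which by default runs tests in the order they appear in the file (again with the made-up ApiClient):

```python
import pytest

from crud_api import ApiClient  # same hypothetical client as above


@pytest.fixture(scope="module")
def api():
    client = ApiClient()   # common interface setup, once for the suite
    yield client
    client.close()         # common interface teardown, once for the suite


# pytest runs tests in file order, so each test below depends on
# the state left behind by the previous one.

def test_create(api):
    api.create("/foo1")
    assert api.read("/foo1") is not None


def test_read(api):
    assert api.read("/foo1") is not None


def test_update(api):
    api.update("/foo1", value=42)
    assert api.read("/foo1")["value"] == 42


def test_delete(api):
    api.delete("/foo1")
    assert api.read("/foo1") is None
```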
There are some cons to this approach in addition to the ones you have already outlined:
- You don't get as much parallelism during test runs (the tests within a suite must run serially, in order)
- Doesn't exercise edge cases that might be caught without dependencies (e.g. create /foo during setup and immediately delete it)
- If a test fails midway through the suite, you can't trust the outcome of the tests after it until it passes again.
To me, it's a give and take. You have to weigh the risks and rewards of either way; in some cases it may make sense to go with one over the other. Just use logic and common sense, and do what the team as a whole thinks is best and works for them.
I do not agree with you. Very often you have to break your rule to get meaningful tests.
Let's have a look at the following two tests:
Test A creates a user for your application.
Test B checks if the user was created.
Test A can run without Test B. Test B is dependent on Test A.
If you stick to your rule, tests get very complicated, as you cannot break them down into small parts.
How will you test the following:
User A enters User B into the application.
User B changes himself to User C.
User D tries to delete User B, etc.
It is nearly impossible without breaking your rule.
The way to go is: take snapshots of the system after each action, e.g.
Insert the user, take a snapshot.
Change the user, take a snapshot.
Delete the user, take a snapshot.
Doing so lets you change the test that deletes the user without always having to run Test A and Test B before it.
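Roughly, as a sketch (the client, dump_state, and load_state are made up to show the idea; the snapshot files would come from a previously verified run):

```python
import json
from pathlib import Path

from myapp.client import AppClient  # made-up client, for illustration


def snapshot(client, name):
    # Serialize the whole system state after an action.
    state = client.dump_state()
    Path(f"snapshots/{name}.json").write_text(json.dumps(state, indent=2))


def restore(client, name):
    # Load a previously captured state instead of re-running earlier tests.
    state = json.loads(Path(f"snapshots/{name}.json").read_text())
    client.load_state(state)


def test_delete_user():
    client = AppClient()
    # Start from the snapshot taken after the "change user" step,
    # so Test A and Test B do not have to run first.
    restore(client, "after_change")

    client.delete_user("User C")
    assert client.get_user("User C") is None
    snapshot(client, "after_delete")
```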
I can't follow your point, Maurus. Can you explain it a different way? I'm arguing that tests should not be dependent on each other.
What are you actually trying to test in the long user scenario test with UserA, UserB, UserC, and UserD? Whatever it is, one test method can put the product into a state, trigger the thing you are testing, and observe the results.
Maurus, I'm also struggling with your proposition. What is your first TestA testing? It seems to me, TestA would create a user then assert that the user was created. That sounds like one test to me.
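Something like this, as a sketch with a made-up client:

```python
from myapp.client import AppClient  # made-up client, for illustration


def test_create_user():
    client = AppClient()
    client.create_user(name="bob")             # Maurus's Test A...
    assert client.get_user("bob") is not None  # ...and his Test B, in one test
```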
ReplyDeleteB.D. Goad, thanks for challenging my position. I suspect this will be a difficult discussion to have via blog post comments but I'll try.
ReplyDelete"No extra setup/teardown that re-exercises tests you've already run" - I'm not suggesting one re-test something already accounted for in an existing test method. It seems to me, having various test methods that use the same setup/teardown is one of the main advantages of following the XUnit test pattern. Maybe I'm misunderstanding.
"It's easy to follow the tests in the order that they are written". Is following tests in order a goal? Should that be easy? Isn't it easier to not have to follow tests in an order? Easier still to understand a test by just looking at that test?
"Its more similar to a real-life-cycle." I think you're saying tests that run in a specific order can uncover problems better b/c they imitate users. This gives me pause more than the above two pros. However, IMO, that's something human testers can do much better than machines. I don't believe a goal of automation is to act like users.