As with manual tests, for each automated test we write, we must decide where the test begins and ends and how much it attempts to verify.
My previous post uses an example test of logging into an app and then editing an account. My perfect manual test would stick to the main test goal and verify only the account edit. But if I were to automate this, I would verify both the log-in page and the account edit. In fact, I would repeat the log-in page verification even though it would then occur in every test. I do this for two reasons.
1. Automated verification is more efficient than manual verification.
2. Automated tests force extra steps, and those steps can’t work around bugs on their own.
In most cases, automated verification is more efficient than manual verification. Once the verification rules are programmed into the automated test, one no longer has to mentally determine whether the verification passes or fails. Sure, one can write just as many rules into a manual test, but it still takes a significant amount of time to visually check the UI. Worse yet, recording the results of manual verifications takes so much administrative work that I often get lazy and assume I will remember what I observed.
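To make the efficiency point concrete, here is a minimal sketch in Python. `LoginPage` is a hypothetical stand-in for whatever state your UI automation library scrapes from the AUT; the point is that once these rules are coded, every run re-checks them and records the result with no mental effort or note-taking:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the state a UI automation library
# would read from the AUT's log-in page.
@dataclass
class LoginPage:
    title: str
    has_username_field: bool
    has_password_field: bool
    error_text: str

def verify_login_page(page: LoginPage) -> None:
    """Verification rules coded once. Every test run re-checks
    them, and the test runner records pass/fail automatically."""
    assert page.title == "Log In", f"unexpected title: {page.title!r}"
    assert page.has_username_field, "username field missing"
    assert page.has_password_field, "password field missing"
    assert page.error_text == "", f"unexpected error: {page.error_text!r}"
```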
With manual tests, we can think on the fly and use our AUT knowledge to get the correct precondition state for each test. Automated tests, however, do not think on the fly, so we have to ensure each one begins and ends in some known state (e.g., AUT is closed). This forces our automated tests to have a great deal more steps than our manual tests. An upstream bug may not have much impact on a manual test because a human can find a workaround. However, that same upstream bug will easily break an automated test if the test author did not plan for it. Thus, multiple verifications per test can help us debug our automated tests and easily spot upstream bugs (in both our AUT and our automated test library).
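Putting both reasons together, here is a sketch of the log-in-then-edit-account test. The AUT is stubbed out so the example runs as-is; in a real suite you would drive the actual UI. The shape is what matters: reset to a known state, verify the upstream log-in step even though every test repeats it, then verify the main goal:

```python
import unittest

class FakeAut:
    """Stub AUT so this sketch is runnable; a real test would drive the UI."""
    def __init__(self):
        self.state = "closed"
        self.account_name = "old name"

    def launch(self):
        self.state = "login_page"

    def log_in(self, user, pw):
        self.state = "home_page"

    def edit_account(self, name):
        self.account_name = name

    def close(self):
        self.state = "closed"

class EditAccountTest(unittest.TestCase):
    def setUp(self):
        # Known precondition state: AUT starts closed, then gets launched.
        self.aut = FakeAut()
        self.aut.launch()

    def test_edit_account(self):
        # Upstream verification, repeated in every test on purpose: if a
        # log-in bug appears, this assert pinpoints it, instead of letting
        # the account-edit assert fail mysteriously downstream.
        self.assertEqual(self.aut.state, "login_page")

        self.aut.log_in("eric", "secret")
        self.assertEqual(self.aut.state, "home_page")

        # The main test goal: verify the account edit.
        self.aut.edit_account("new name")
        self.assertEqual(self.aut.account_name, "new name")

    def tearDown(self):
        # Known postcondition state for the next test.
        self.aut.close()

if __name__ == "__main__":
    unittest.main()
```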