If there were a testing conference that consisted of only lightning talks, I would be the first to sign up. Maybe I have a short attention span or something. STARwest’s spin on lightning talks is “Lightning Talk Keynotes” in which (I assume) Lee Copeland handpicks lightning talk presenters. He did not disappoint. Here is my summary:
Michael Bolton
Michael started with RST’s formal definition of a Bug: “anything that threatens the value of the product”. Then he shared his definition of an Issue: “anything that threatens the value of the testing” (e.g., a tool I need, a skill I need). Finally, Bolton suggested, maybe issues are more important than bugs, because issues give bugs a place to hide.
Hans Buwalda
His main point was, “It’s the test, stupid.” Hans suggested that when teams adopt test automation, it’s important to separate the testers from the test automation engineers. Don’t let the engineers dominate the process: no matter how fancy the programming is, what it tests is still more important.
Lee Copeland
Lee asked his wife why she cuts the ends off her roasts, and lays them against the long side of the roast, before cooking them. She wasn’t sure because she learned it from her mother. So they asked her mother why she cuts the ends off her roasts. Her mother had the same answer, so they asked her grandmother. Her grandmother said, “Oh, that’s because my oven is too narrow to fit the whole roast in it”.
Lee suggested most processes begin with “if…then” statements (e.g., if the software is difficult to update, then we need specs). But over time, the “if” part fades away. Finally, Lee half-seriously suggested all processes should have a sunset clause.
Dale Emory
If an expert witness makes a single error in an otherwise perfect testimony, it raises doubts in the jurors’ minds. If 1 out of 800 automated tests throws a false positive, people accept that. If it keeps happening, people lose faith in the tests and stop using them. Dale suggests the following prioritized steps to prevent this:
- Remove the test.
- Try to fix the test.
- If you are sure it works properly, add it back to the suite.
In summary, Dale suggests the reliability of the test suite is more important than its coverage.
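Dale’s first step can be sketched with Python’s standard `unittest` module. This is my own minimal illustration (the test names are hypothetical, not from the talk): an unreliable test is pulled out of the suite by quarantining it, so it stops crying wolf while the remaining green results stay trustworthy.

```python
import unittest

class CheckoutTests(unittest.TestCase):
    # Quarantined per Dale's step 1: removed from the active suite
    # rather than left in to throw intermittent false positives.
    @unittest.skip("quarantined: intermittent false positive, under investigation")
    def test_total_includes_tax(self):
        self.fail("the flaky assertion would run here")

    # The reliable tests keep running and keep their credibility.
    def test_total_of_empty_cart_is_zero(self):
        self.assertEqual(sum([]), 0)
```

Running the suite reports the quarantined test as skipped, not failed. Once it is fixed and you are sure it works properly (Dale’s steps 2 and 3), removing the decorator restores it to the suite.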
Julie Gardiner
Julie showed a picture of one of those sliding piece puzzles; the kind with one empty slot so adjacent pieces can slide into it. She pointed out that this puzzle could not be solved if it weren’t for the empty slot.
Julie suggested slack is essential for improvement, innovation, and morale and that teams may want to stop striving for 100% efficiency.
Julie calls this “the myth of 100% efficiency”.
Note: as a fun gimmicky add-on, she offered to give said puzzle to anyone who went up to her after her lightning talk to discuss it with her. I got one!
Bob Galen
Sorry, I didn’t take any notes other than “You know you’ve arrived when people are pulling you”. Either it was so compelling I didn’t have time to take notes, or I missed the take-away.
Dorothy Graham
Per Dorothy, coverage is a measure of some aspect of thoroughness. 100% coverage does not mean running 100% of all the tests we’ve thought of. Coverage is not a relationship between the tests and themselves. Instead, it is a relationship between the tests and the product under test.
Dorothy suggests, whenever you hear “coverage”, ask “Of what?”.
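Dorothy’s “Of what?” question is easy to see in code. Here is a minimal sketch of my own (the function is hypothetical, not from her talk) where the answer changes depending on which kind of coverage you mean:

```python
# A hypothetical function used only to illustrate "coverage of what?".
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9  # 10% member discount
    return price

# This single test executes every line of apply_discount,
# so *line* coverage is 100%...
member_price = apply_discount(100, True)
assert member_price == 90.0

# ...yet the is_member=False branch was never taken, so *branch*
# coverage is only 50%. A second test is needed to close that gap:
guest_price = apply_discount(100, False)
assert guest_price == 100
```

Same tests, same product, but “100% coverage” is true of lines and false of branches; the claim is meaningless until you say what it measures.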
Jeff Payne
Jeff began by suggesting, “If you’re a tester who doesn’t know how to code, in 5 years you’ll be out of a job”. He said 80% of all tester job posts require coding, and this is because we need more automated tests.
Martin Pol
My notes are brief here but I believe Martin was suggesting, in the near future, testers will need to focus more on non-functional tests. The example Martin gave is the cloud; if the cloud goes down, your services (dependent on the cloud) will be unavailable. This is an example of an extra dependency that comes with using future technology (i.e., the cloud).
Hi,
Don't these two takeaways sound somewhat contradictory?
From Hans Buwalda's talk "Don't let the automation engineers dominate...";
and Jeff Payne's talk "...we need more automated tests."
I find myself more inclined toward what Hans Buwalda says; I'd love to hear what you have to say about it.
-Ferret
Ferret,
I think I agree with you. Those views are a bit contradictory. Buwalda believes testers should be decoupled from the automation engineers and Payne believes testers should be automation engineers.
It would be interesting to hear them clarify our understanding or to debate the issue with each other.
IMO, test automation is most effective when done by a tester who also has programming skills. They are better able to strike a balance between what they need tested and how feasible it is to automate.
If the two are separated (i.e., automator and tester), there is more risk of misunderstanding and inefficiency. Test intent may be lost in translation. The tester may request a seemingly simple check, when in truth it is incredibly inefficient to automate. The two may never know.