The Value of Merely Imagining a Test – Part 2
Posted by Eric Jacobson at Thursday, November 12, 2015
I’m a written-test-case hater. That is to say, in general, I think writing detailed test cases is not a good use of tester time. A better use is interacting with the product-under-test.
But something occurred to me today:
The value of a detailed test case increases if you don’t perform it and decreases when you do perform it.
- The increased value comes from mentally walking through the test, which forces you to consider as many details as you can without interacting with the product-under-test. This is more valuable than doing nothing.
- The decreased value comes from interacting with the product-under-test, which helps you learn more than the test case itself taught you.
What’s the takeaway? If an important test is too complicated to perform, we should at least consider writing a detailed test case for it. If you think you can perform the test, consider skipping the detailed test case and instead focus on performing it, taking notes to capture your learning as it occurs.
The Value of Merely Imagining a Test – Part 1
Posted by Eric Jacobson at Thursday, November 12, 2015
An import bug escaped into production this week. The root cause analysis took us to the usual place: “If we had more test time, we would have caught it.”
I’ve been down this road so many times, I’m beginning to see things differently. No, even with more test time we probably would not have caught it. Said bug would have only been caught via a rigorous end-to-end test that would have arguably been several times more expensive than this showstopper production bug will be to fix.
Our reasonable end-to-end tests include so many fakes (to simulate production) that their net just isn’t big enough.
However, I suspect a mental end-to-end walkthrough, without fakes, may have caught the bug. And possibly, attention to the “follow-through” may have been sufficient. The “follow-through” is a term I first heard Microsoft’s famous tester, Michael Hunter, use. The “follow-through” is what might happen next, per the end state of some test you just performed.
Let’s unpack that. Pick any test; let’s say you test a feature that allows a user to add a product to an online store. You test the hell out of it until you reach a stopping point. What’s the follow-on test? The follow-on test is to see what can happen to that product once it has been added to the online store. You can buy it, you can delete it, you can let it get stale, you can discount it, etc… I’m thinking nearly every test has several follow-on tests.
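To make that concrete, here is a minimal sketch of what a couple of follow-on checks might look like; the Store class, its methods, and the prices are all invented for illustration, not anything from the post.

```python
# Hypothetical follow-on checks for an "add a product to an online store" feature.
# The Store class and its methods are invented for illustration only.

class Store:
    def __init__(self):
        self.products = {}

    def add_product(self, name, price):
        self.products[name] = price

    def buy(self, name):
        return self.products[name]

    def delete(self, name):
        del self.products[name]

    def discount(self, name, percent):
        self.products[name] *= (1 - percent / 100)


def test_follow_on_buy():
    store = Store()
    store.add_product("widget", 10.00)   # the original test's end state...
    assert store.buy("widget") == 10.00  # ...becomes the follow-on test's starting point


def test_follow_on_discount_then_buy():
    store = Store()
    store.add_product("widget", 10.00)
    store.discount("widget", 25)
    assert store.buy("widget") == 7.50
```

Each follow-on check starts where the previous test stopped, which is the whole point of the “follow-through.”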
Test Environment Responsibilities
Posted by Eric Jacobson at Wednesday, September 23, 2015
We never have enough of them. They never mirror production. They never work.
My opinions at my current company:
- Who should own test environments? Testers.
- Who should build test environments? NOT testers. DevOps.
- Who should request test environments? Testers.
- Who should populate, backup, or restore the test data in test environments? Testers.
- Who should configure test environments to integrate with other applications in the system? NOT testers. DevOps.
- Who should deploy code to test environments? NOT testers. Whoever (or whatever) deploys code to production.
- Who should control (e.g., request) code changes to test environments? Testers.
- Who should create and maintain build/deploy automation? NOT testers. DevOps.
- Who should push the “Go” button to programmatically spin up temporary test environments? Testers or test automation.
Fiddling with test environments is not testing work, IMO. It only subtracts from test coverage.
Testers Are Never Done - Like Scientists
Posted by Eric Jacobson at Friday, August 21, 2015
I’m a Dr. Neil deGrasse Tyson fanboy. In this video, he pokes fun at a common view of scientists: a view that when scientists think they’ve figured something out, they stop investigating and just sit around, proud of themselves. Neil says, “[scientists] never leave the drawing board”. They keep investigating and always embrace new evidence, especially when it contradicts current theories.
In other words, a scientist must trade closure for a continued search for truth. “Done” is not the desired state.
As a tester, I have often been exhausted, eager to make the claim, “it works…my job here is done.” And even when faced with contradicting evidence, I have found myself brushing it away, or hoping it is merely a user problem.
Skilled testers will relate. Test work can chew us up and spit us out if we don’t have the right perspective. Don’t burden yourself by approaching test work as something you are responsible for ending.
Acceptance Criteria
When User Stories have Acceptance Criteria (or Acceptance Tests), they can help us plan our exploratory and automated testing. But they can only go so far.
Four distinct Acceptance Criteria do not dictate four distinct test cases, automated or manual.
Here are three flavors of Acceptance Criteria abuse I’ve seen, and how to avoid them:
- Skilled testers use Acceptance Criteria as a warm-up, a means of getting better test ideas for deeper and wider coverage. The better test ideas are what need to be captured (by the tester) in the test documentation...not the Acceptance Criteria. The Acceptance Criteria is already captured, right? Don’t recapture it (see below). More importantly, try not to stop testing just because the Acceptance Criteria passes. Now that you’ve interacted with the product-under-test, what else can you think of?
- The worst kind of testing is when testers copy Acceptance Criteria from User Stories, paste it into a test case management tool, and resolve each to Pass/Fail. Why did you copy it? If you must resolve them to Pass/Fail, why not just write “Pass” or “Fail” next to the Acceptance Criteria in the User Story? Otherwise you have two sources of the same information: someone is going to revise the Acceptance Criteria in the User Story, and the copy in your test case management tool is going to go stale.
- You don’t need to visually indicate that each of your distinct Acceptance Criteria has Passed or Failed. Your Agile team probably has a definition of “Done” that includes all Acceptance Criteria passing. If the User Story is marked Done, that already means all the Acceptance Criteria passed. We will never open a completed User Story and ask, “which Acceptance Criteria passed or failed?”.
The Best Software Testing Tool? That’s Easy…
Posted by Eric Jacobson at Thursday, May 21, 2015
Notepad.
After experimenting with a Test Case Management application’s Session-Test tool, a colleague of mine noted the tool’s overhead (i.e., the non-test-related waiting and admin effort forced by the tool). She said, “I would rather just use Notepad to document my testing.” Exactly!
Notepad has very little overhead. It requires no setup, no license, no logging in, and few machine resources; it always works; and it doesn’t tempt us into wasting time on trivial things like making test documentation pretty (e.g., let’s make passing tests green!).
Testing is an intellectual activity, especially if you’re using automation. The test idea is the start. Whether it comes to us in the midst of a discussion, requirements review, or while performing a different test, we want to document it. Otherwise we risk losing it.
Don’t overlook the power of Notepad.
Don’t Bother Calculating ROI For Test Automation
Posted by Eric Jacobson at Wednesday, April 29, 2015
Whilst searching for ways to measure the value of test automation, I read Doug Hoffman’s (I’m sure classic) Cost Benefits Analysis of Test Automation paper.
The first several pages were great. He discussed intangibles that should be left out of an ROI calculation, like an immediate reduction in perceived productivity of the test organization as the automation is first developed. He went on to list falsely expected benefits, like the automation of existing manual tests. Then he compared fixed automation costs, like scripting tools, to variable automation costs, like test maintenance.
Finally, Doug got to the formulas. After careful analysis of some 30+ factors, one can start calculating automation ROI and efficiency benefits. I rubbed my hands together and excitedly turned the page. Then…I think I puked into my mouth a little when I saw the formulas that followed.
In the end, I latched onto one powerful statement Doug made, almost in passing. He said:
If the benefits of automation are required, then the ROI computation is unnecessary, the investment is required…it’s an expense.
If the benefits include reducing risk by performing automated checking that would not be possible by humans (e.g., complex math, millions of comparisons, diffs, load, performance, precision), then say no more…
I don’t want to suffer through the computations.
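For what it’s worth, here is a minimal sketch of the kind of check that falls in that “say no more” bucket because no human could feasibly perform it by hand: a million comparisons against a trusted reference. The fast_sqrt function and the tolerance are stand-ins invented for illustration.

```python
# A check no human could feasibly perform: a million comparisons of an
# implementation-under-test against a trusted reference.
import math

def fast_sqrt(x):
    # stand-in for some optimized implementation under test
    return x ** 0.5

def test_fast_sqrt_matches_reference():
    for i in range(1, 1_000_000):
        assert math.isclose(fast_sqrt(i), math.sqrt(i), rel_tol=1e-12), f"mismatch at {i}"
```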
If You Interrupt Testing, It Will Cost Us 2 Lost Bugs
Posted by Eric Jacobson at Wednesday, April 22, 2015
Look at your calendar (or that of another tester). How many meetings exist?
My new company is crazy about meetings. Perhaps it’s the vast numbers of project managers, product owners, and separate teams along the deployment path. It’s a wonder programmers/testers have time to finish anything.
Skipping meetings works, but is an awkward way to increase test time. What if you could reduce meetings or at least meeting invites? Try this. Express the cost of attending the meeting in units of lost bugs. If you find, on average, about 1 bug per hour of testing, you might say:
“Sure, I can attend your meeting, but it will cost us 1 lost bug.”
“This week’s meetings cost us 9 lost bugs.”
Obviously some meetings (e.g., design, user story review, bug triage) improve your bug finding, so be selective about which meetings you charge with a lost-bug cost.
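If you want to put a number on it, the arithmetic is just your observed bug-per-hour rate multiplied by meeting hours; a throwaway sketch, with both figures assumed:

```python
# Back-of-the-envelope "lost bugs" cost of a week's meetings,
# assuming roughly 1 bug found per hour of testing (the example rate above).
bugs_per_test_hour = 1.0
meeting_hours_this_week = 9

lost_bugs = bugs_per_test_hour * meeting_hours_this_week
print(f"This week's meetings cost us {lost_bugs:.0f} lost bugs.")
```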
Last week I started testing an update to a complex legacy process. At first, my head was spinning (it still kind of is). There are so many inputs and test scenarios...so much I don’t understand. Where to begin?
I think doing something half-baked now is better than doing something fully-baked later. If we start planning a rigorous test based on too many assumptions, we may not understand what we’re observing.
In my case, I started with the easiest tests I could think of:
- Can I trigger the process-under-test?
- Can I tell when the process-under-test completes?
- Can I access any internal error/success logging for said process?
- If I repeat the process-under-test multiple times, are the results consistent?
If there were a spectrum showing where a test’s focus falls, it might run from learning by merely observing the something-under-test (no manipulation) on the left to learning by manipulating it on the right.
My tests started on the left side of that spectrum and worked toward the right. Now that I can get consistent results, let me see if I can manipulate the process and predict its results:
- If I pass ValueA to InputA, do the results match my expectations?
- If I remove ValueA from InputA, do the results return as before?
- If I pass ValueB to InputA, do the results match my expectations?
As long as my model of the process-under-test matches my above observations, I can start expanding complexity (a rough code sketch of this progression follows the list below):
- If I pass ValueA and ValueB to InputA and ValueC and ValueD to InputB, do the results match my expectations?
- etc.
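Here is a rough sketch of that progression as code; run_process, the InputA/InputB names, and the expected outputs are hypothetical placeholders, not the actual legacy process.

```python
# Hypothetical checks that move from "can I observe it?" toward "can I predict it?".
# run_process, the InputA/InputB names, and the expected outputs are placeholders.

def run_process(**inputs):
    # stand-in for triggering the real process-under-test and collecting its results
    return {"status": "complete", "output": sorted(inputs.values())}


def test_process_completes_at_all():
    assert run_process()["status"] == "complete"


def test_results_are_consistent_across_runs():
    assert run_process(InputA="ValueA") == run_process(InputA="ValueA")


def test_single_input_matches_expectation():
    assert run_process(InputA="ValueA")["output"] == ["ValueA"]


def test_expanded_complexity_matches_expectation():
    result = run_process(InputA="ValueA", InputB="ValueC")
    assert result["output"] == ["ValueA", "ValueC"]
```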
Now I have something valuable to discuss with the programmer or product owner. “I’ve done the above tests. What else can you think of?”. It’s much easier to have this conversation when you’re not completely green, when you can show some effort. It’s easier for the programmer or product owner to help when you lead them into the zone.
The worst is over. The rest is easy. Now you can really start testing!
Sometimes you just have to do something to get going. Even if it’s half-baked.
Now that my engineering team is automating beyond the unit test level, the question of which tests to automate comes up daily. I wish there were an easy answer.
If we make a distinction between checking and testing, no “tests” should be automated. The question instead becomes, which “checks” should be automated? Let’s go with that.
I’ll tell you what I think below, ranking the more important at the top:
Consider automating checks when they…
- can only feasibly be checked by a machine (e.g., complex math, millions of comparisons, diffs, load, performance, precision). These are checks machines do better than humans.
- are important. Do we need to execute them prior to each build, deployment, or release? This list of checks will grow over time. The cost of not automating is less time for the “testing” that helps us learn new information.
- can be automated below the presentation layer. Automating checks at the API layer is considerably less expensive than at the UI layer. The automated checks will provide faster feedback and be less brittle (see the API-level sketch after this list).
- will be repeated frequently. A simplified decision: is the time it takes a human to program, maintain, execute, and interpret the automated check’s results over a span of time (e.g., 2 years) less than the time it takes a human to perform said check over that same span? This overlaps with the “are important” item above.
- check something that is at risk of failing. Do we frequently break things when we change this module?
- are feasible to automate. We can’t sufficiently automate tests for things like usability and charisma.
- are requested by my project team. Have a discussion with your project team about which checks to automate. Weigh your targets with what your team thinks should be automated.
- can be automated using existing frameworks and patterns. It is much cheaper to automate the types of checks you’ve already successfully automated.
- are checks. “Checks” are mundane for humans to perform. “Tests” are not because they are different each time.
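To illustrate the “below the presentation layer” point above, here is a minimal sketch of an API-level check using Python’s requests library; the endpoint URL, the 201 response, and the JSON shape are all assumptions invented for illustration.

```python
# A minimal API-level check, assuming a hypothetical /api/products endpoint
# that returns JSON; the URL and response shape are invented for illustration.
import requests

def test_add_product_api():
    base = "http://localhost:8080"   # hypothetical test environment
    new_product = {"name": "widget", "price": 10.0}

    create = requests.post(f"{base}/api/products", json=new_product, timeout=5)
    assert create.status_code == 201

    fetched = requests.get(f"{base}/api/products/widget", timeout=5).json()
    assert fetched["price"] == 10.0
```

The same check at the UI layer would need a browser driver, element locators, and waits, which is exactly the extra expense and brittleness that bullet is pointing at.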
What am I missing?
Getting Manual Testers Involved in Automation
Posted by Eric Jacobson at Friday, March 27, 2015
Most of the testers at my new company do not have programming skills (or at least are not putting them to use). This is not necessarily a bad thing. But in our case, many of the products-under-test are perfect candidates for automation (e.g., they are API rich).
We are going through an Agile transformation. Discussions about tying programmatic checks to “Done” criteria are occurring and most testers are now interested in getting involved with automation. But how?
I think this is a common challenge.
Here are some ways I have had success getting manual testers involved in automation. I’ll start with the easiest and work my way down to those requiring more ambition. A tester wanting to get involved in automation can:
- Do unit test reviews with their programmers. Ask the programmers to walk you through the unit tests. If you get lost ask questions like, “what would cause this unit test to fail?” or “can you explain the purpose of this test at a domain level?”.
- Work with automators to inform the checks they automate. If you have people focused on writing automated checks, help them determine what automation might help you. Which checks do you often repeat? Which are boring?
- Design/request a test utility that mocks some crucial interface or makes the invisible visible. Bounce ideas off your programmers and see if you can design test tools to speed things up. This is not traditional automation. But it is automation by some definitions.
- Use data-driven automation to author/maintain important checks via a spreadsheet. This is a brilliant approach because it lets the test automator focus on what they love, designing clever automation, and it lets the tester focus on what they love, designing clever inputs. Show the tester where the spreadsheet is and how to kick off the automation (a sketch of the harness side follows this list).
- Copy and paste an automated check pattern from an IDE, rename the check, and change the inputs and expected results to create new checks. This takes little to no coding skill. This is a potential end goal. If a manual tester gets to this point, buy them a beer and don’t push them further. This leads to a great deal of value, and going further can get awkward.
- Follow an automated check pattern but extend the framework. Spend some time outside of work learning to code.
- Stand up an automation framework, design automated checks. Support an Agile team by programming all necessary automated checks. Spend extensive personal time learning to code. Read books, write personal programs, take online courses, find a mentor.
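As an example of the spreadsheet-driven idea above, here is a minimal sketch of the harness an automator might maintain while testers own the rows; the CSV columns and the calculate_discount function are invented for illustration.

```python
# Data-driven checks: testers own the rows of a spreadsheet (saved as CSV),
# the automator owns this harness. The CSV columns (price, percent, expected)
# and calculate_discount are invented for illustration.
import csv

def calculate_discount(price, percent):
    # stand-in for the real logic under test
    return round(price * (1 - percent / 100), 2)

def run_checks(csv_path="discount_checks.csv"):
    failures = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            actual = calculate_discount(float(row["price"]), float(row["percent"]))
            if actual != float(row["expected"]):
                failures.append((row, actual))
    return failures

if __name__ == "__main__":
    for row, actual in run_checks():
        print(f"FAIL: expected {row['expected']} for {row}, got {actual}")
```

The tester adds or edits rows in the spreadsheet and kicks off the run; the automator only touches the harness when the check logic itself changes.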
Egads! It’s been several months since my last post. Where have I been?
I’ve transitioned to a new company and an exciting new role as Principal Test Architect. After spending months trying to understand how my new company operates, I am beginning to get a handle on how we might improve testing.
In addition to my work transition, each member of my family and I have just synchronously suffered through this year’s nasty flu, and then another round of stomach flu shortly thereafter. The joys of daycare…
And finally, now that my son, Haakon, has arrived, I’ve been adjusting to my new life with two young children. 1 + 1 <> 2.
It has been a rough winter.
But alas, my brain is once again telling me, “Oh, that would make a nice blog post”. So let’s get this thing going again!