About five years ago, my tester friend, Alex Kell, blew my mind by cockily declaring, “Why would you ever log a bug? Just send the Story back.”
Okay.
My dev team uses a Kanban board that includes “In Testing” and “In Development” columns. Sometimes bug reports are created against Stories. But other times Stories are just sent left: for example, a Story “In Testing” may have its status changed back to “In Development”, like Alex Kell’s maneuver above. This is normally done using the Dead Horse When-To-Stop-A-Test Heuristic. We could also send an “In Development” Story left if we decide the business rules need to be firmed up before coding can continue.
So how does one know when to log a bug report vs. send it left?
I proposed the following heuristic to my team today:
If the Acceptance Test Criteria (listed on the Story card) are violated, send it left. It seems to me that logging a bug report for something already stated in the Story (e.g., Feature, Work Item, Spec) is mostly a waste of time.
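For what it's worth, here is how I picture the heuristic as code. It's just a minimal sketch; Story, send_left, log_bug, and triage are hypothetical names made up for illustration, not anything in TFS, JIRA, or any other tracker.

```python
# A minimal sketch of the triage heuristic. All names here are hypothetical,
# purely illustrative stand-ins, not any real tracker's API.
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    acceptance_criteria: list[str] = field(default_factory=list)
    status: str = "In Testing"

def send_left(story: Story, note: str) -> None:
    """Move the card back a column and record why on the card itself."""
    story.status = "In Development"
    print(f"Sent left: {story.title} ({note})")

def log_bug(story: Story, description: str) -> None:
    """Anything the Story never promised gets its own bug record."""
    print(f"Bug logged against '{story.title}': {description}")

def triage(story: Story, violated_criterion: str | None, description: str) -> None:
    """Send the Story left when a stated acceptance criterion is violated;
    otherwise log a bug report."""
    if violated_criterion in story.acceptance_criteria:
        send_left(story, note=f"Acceptance criterion not met: {violated_criterion}")
    else:
        log_bug(story, description)

story = Story("Export report as PDF",
              acceptance_criteria=["Exported file opens in a PDF reader",
                                   "Export completes within 10 seconds"])
triage(story, "Exported file opens in a PDF reader", "exported file is corrupt")  # sent left
triage(story, None, "UI freezes if the network drops mid-export")                 # bug report
```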
Thoughts?
I wish more people would embrace this! I picked up that approach several years ago when using a SaaS solution (Rally) to manage test efforts. A "failed" test immediately implied dev ownership. The only time we had defects was when we needed to "migrate" a failed test to a future sprint or when defects came in as work items from production.
Why not just sit together with the developer, and talk to each other?
I fought for that so hard at my last job, but my manager was a strong supporter of entering a new ticket so that we could "show off" that QA found the issue. "If it's not in JIRA, it didn't happen" was basically his motto. At times we were instructed to enter a new ticket if the bug fix came back still broken.
I had a discussion about this with a developer just the other day. I was using this same heuristic, but he disagreed due to how it affects reporting.
The conversation:
Dev: "We want to show the client we've closed all stories in the sprint."
QA: "I'd be happy to do that, but what will we say when they point out the acceptance criteria have not been met?"
Dev: "If the acceptance criteria haven't been met, it's a defect and should go into Backlog."
QA: "Well, if the specified functionality has not been implemented, I would say the story is not complete and therefore should be re-opened. Issues found beyond that will be a defect."
Dev: "They're all issues, wouldn't you agree? If it's not as it should be, it's a defect."
I didn't resist further. It takes a little longer and gives a less accurate view of a project, but the team needed some smoke and mirrors to keep things going.
In situations where additional information (e.g., error messages or log entries) isn't needed, I think I could get behind this idea.
I guess you’re saying that by sending it left, any developer would be able to perform the same task and receive the same failure. If that’s the case, why didn’t the developer catch it in the first place? Maybe that was a rhetorical question. :)
If you have your team next to you, then sure, "send it left": a quick verbal explanation and it should be picked up.
I would suggest adding a (history) comment, though, or whatever is possible on the board, just to make sure we don't leave our team member in the dark about which acceptance criterion isn't being met, should he or she not have time to fix it at that specific moment.
In the end, it's all about communication between team members and, in our case, the sprint goal.
Good suggestion, Dennis. I agree. My team uses Microsoft TFS, so our Stories have a "notes" tab. I have been indicating why I sent it back on the notes tab. I might say "acceptance criteria #4 not met".
Tim, we can joke (and we should) about what those silly programmers do sometimes. In all seriousness, they are human too. So yes, they sometimes just plain forget to check things. I'm sure the same would be true if programmers double-checked our work.
Renze, it's fashionable to suggest pair programming/testing as a solution to most dev team dysfunctions, but I'll play the skeptic.
Are you saying if testers merely sit with developers, the output is always perfect?
Because I can imagine another tester (or programmer), from outside the pair bubble, taking a look at the output and discovering a problem.