During a recent phone call with Adam White, he said something I can't stop thinking about. Adam recently took his test team through an exercise to track how much of their day was actually spent testing. The results were scary. Then Adam said it: "If you're not operating the product, you're not testing." I can't get that out of my head.
Each day I find myself falling behind on the tests I wanted to execute, typically because I'm fulfilling one of the following obligations:
- Requirement walkthrough meetings
- System design meetings
- Writing test cases
- Test case review meetings
- Creating test data and preparing for a test
- Troubleshooting build issues
- Writing detailed bug reports
- Bug review meetings
- Meetings with devs because the tester doesn't understand the implementation
- Meetings with devs because the developer doesn't understand the bug
- Meetings with the business because requirement gaps are discovered
- Collecting and reporting quality metrics
- Managing official tickets to push bits between various environments and satisfy SOX compliance
- Updating the status and other values of tested requirement, test case, and bug entities
- Attempting to capture executed exploratory tests
- Responding to important emails (which arrive several per minute)
Nope, I don't see "testing" anywhere in that list. Testing is what I attempt to squeeze in every day between this other stuff. I want to change this. Any suggestions? Can anyone relate?
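For what it's worth, the exercise Adam ran with his team is easy to reproduce: log minutes against each activity for a week and see what fraction actually went to operating the product. Here's a minimal sketch of one way to do it; the activity names and numbers are made up purely for illustration.

```python
# A quick tally of where the week went. Activity names and minutes here are
# purely illustrative; substitute your own log entries.
from collections import defaultdict

minutes_by_activity = defaultdict(int)

def record(activity: str, minutes: int) -> None:
    minutes_by_activity[activity] += minutes

# One (made-up) day's entries
record("requirements walkthrough", 60)
record("bug review meeting", 45)
record("writing bug reports", 50)
record("email", 40)
record("operating the product", 35)

total = sum(minutes_by_activity.values())
for activity, minutes in sorted(minutes_by_activity.items(), key=lambda kv: -kv[1]):
    print(f"{activity:28s} {minutes:4d} min  ({minutes / total:.0%})")
```

Even a crude log like this makes the "how much of my day was actually testing?" number hard to ignore.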
12 comments:
I disagree.
While the physical act of testing a product might not be in your day, you are providing opportunities through your other actions to build a better product. In the end, that's what it is all about.
In your requirements reviews are you looking for missing requirements, duplicate requirements, vague requirements, contradictory requirements?
In your system design meetings, are you looking for missing or erroneous integration points?
In writing your test cases and reviewing test cases, are you bringing a critical viewpoint to what, when, where and how you will test?
I could go point by point on the rest of your list, but I don't think I need to. All of these activities, if pursued without the critical questioning of testers, will open the potential for numerous flaws and defects in the downstream product, which in turn makes "physical" testing much more difficult and time-consuming.
I'd rather spend my time upstream in the details, trying to help the BAs and devs get it right first, than get a faulty product much closer to launch and only then start finding tons of bugs, or be pressured to put a faulty product into production. I consider many of these activities to be "static testing", so by that definition you are testing.
It could be worse, i.e., you could just be "testing" all day.
At least those items break up the monotony of doing the same thing all day.
I'd hate to be coding all day although lately I do sort of relate in the sense that I'd just want a couple of days where I can get my workload under control.
As I was writing this blog comment I got an e-mail invite to an integration meeting for next week, and I see you're on it as well. Hah.
Sounds like we need interns.
Yes, I have a suggestion: refine, at least to some degree, Adam's statement.
If you're not interacting with the product, you're not doing test execution. You may, however, be doing important work on test design—modelling the test space, determining coverage, identifying or refining oracles, setting up test equipment or other aspects of your lab procedures. There can be great value in questioning the product in the course of that activity. Your observations and analyses may lead to better testing when the time comes, or may help to inform important decisions about the product and project.
That said, your observation of what you're actually doing is important, especially when someone comes along asking, "Why is testing taking so long?" while simultaneously making lots of other demands on your time.
Hi Eric,
What is the goal of the activities you listed?
Who benefits from them? Who needs to benefit?
Whose bugs are being discussed with developers when "Writing detailed bug reports" is one of the items in your list?
What do I mean by all this? Question everything that you spend time doing. Question the people that are requiring you to do these things. If questioning them does not turn some sort of light bulb on about the lack of testing, do something different that you know will bring about a beneficial change, and then discuss it. (It is better to beg forgiveness than to ask permission - Unknown author.)
More often than not, "we do things this way" is simply because "we have ALWAYS done things this way". Change does not occur by chance; there needs to be a deliberate choice to change. You have a bird's eye view of a lot of the problems in your organization. Use that view to find out where you can possibly effect change, then just do it.
As for relating to this, sort of. I see it every day. I think part of the nature of the beast in software testers is that we really want to serve our stakeholders. We are driven to try to make everyone happy with the end results. Unfortunately, the old fable about the man, his son, and a donkey is a stark reality. We cannot fully please everyone; we need to focus on the bare minimum of what we need to give each of them in order to actually provide a service to all of them. It is about finding a balance between the activities and the needs.
And realizing the truth about the situation is almost halfway to the solution because you are already thinking about it. Keep going in that direction and you will find your answers.
No easy answers here, but the biggest problem is you have too many bugs. Cut down the number of bugs and you'll knock out about a third of that list. Of course, you're not putting the bugs in the code. Maybe get the developers to write more of your tests so that you don't have to execute them manually.
One thing you can do right now is quit writing test cases. Do your exploratory testing as normal and record the results instead (use the Session Based stuff you taught me!).
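For anyone unfamiliar with the session-based approach, the idea is to record a short note per testing session rather than scripting cases up front. A bare-bones record might look something like the sketch below; the fields (and the example values) are just one plausible set, not a standard format.

```python
# Bare-bones session record for session-based exploratory testing notes.
# Field names and example values are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class SessionNote:
    charter: str                                   # what the session set out to explore
    tester: str
    duration_minutes: int
    areas: list[str] = field(default_factory=list)
    bugs: list[str] = field(default_factory=list)  # IDs of bugs raised
    notes: str = ""

note = SessionNote(
    charter="Explore the checkout flow with invalid payment data",
    tester="Eric",
    duration_minutes=90,
    areas=["checkout", "payment validation"],
    bugs=["BUG-1423"],
    notes="Error messages inconsistent between card types.",
)
print(note)
```

A pile of these at the end of the week tells you what was actually covered without the overhead of maintaining scripted cases.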
On reporting - find out which reports are actually looked at and if you (or others) are getting any actionable data out of them...really. If really not, then stop. Or produce less, more specific, to help solve a particular problem. For example, you have lots of bugs. Figure out if the majority of the bugs are of a certain type. Track those until they go down to a manageable level and then track the next highest, etc.
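If it helps, here's a rough sketch of the kind of tally I mean, assuming your bug tracker can export a CSV with some sort of status and type columns (those field names are placeholders; use whatever your tracker actually exports).

```python
# Count open bugs by type so you can see which bucket to drive down first.
# The CSV columns ("status", "type") are hypothetical placeholders.
import csv
from collections import Counter

def top_bug_types(export_path: str, n: int = 3) -> list[tuple[str, int]]:
    counts = Counter()
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("status", "").strip().lower() != "closed":
                counts[row.get("type", "unclassified")] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for bug_type, count in top_bug_types("bugs_export.csv"):
        print(f"{bug_type}: {count} open")
```

Track the biggest bucket until it shrinks to a manageable level, then move on to the next one.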
Build issues! (No comment)
I've had success with doing this kind of stuff lately, but I know how hard it will be to implement these things in your situation. If there's anything I can do to help, let me know.
Eric,
Good blog post. I find myself doing many of the tasks you outline. But I see these as valuable, as part of the process, as part of driving some 'quality' into the system.
Do you not see these tasks as worthwhile?
I agree, time is not spent 'testing' the product, but surely these activities all lead to a better end release. If they don't end in good software, then yep - there needs to be some way to cull them from our days.
Rob..
Here's a question for you:
I know you've been at your job for a few years. Does having a few more years of experience mean that you are less in charge of test execution and more in charge of test strategy?
It looks to me like your list has a lot to do with test strategy. JW recently had a post about this called "testing sucks". I'd like to know your thoughts on it.
Marlena,
That’s a great question and I enjoyed reading JW’s post again. You’ve given me a different perspective on it. You’ve also made me feel much better about doing said non-test-execution tasks.
I guess being a senior tester does give most testers the opportunity to think more about strategy, and they naturally do so if they’re worth a darn. However, for me, thinking about test strategy does not have to be decoupled from operating the product. I agree with Matt Archer when he says “Most of the time we dream up our best tests while interacting with the system”. Even if I could dream up the most awesome tests in the world without operating the product, I guess those tests are useless until I execute them, right?
Hi Eric,
I too find that, as time goes by, I have less time to test. In fact, I'm starting to feel like a test co-ordinator. One useful skill that could help is risk management.
Ray
www.testertroubles.com
Eric,
I'd like to refine my statement to match more closely with what some of the others have said in the comments.
All the upstream activities are useful. What you've listed doesn't seem "unusual" for someone leading a test team to be doing. But each item takes time away from increasing your coverage of the product.
In RST we were taught that exploratory testing is made up of learning, test design and test execution.
You can sit around and talk with dev about how something should look and work (learning), architect test cases (design), collect metrics (learning?), respond to emails, create test plans (test design), but until someone actually uses the product (test execution), you know very little about the state of the product. Your learning is a little more impeded.
Your list provides a great framework for answering the question "Why is testing taking so long?", as someone pointed out. Did you happen to track the amount of time you spend on each of those items each week?
One suggestion would be to carve out some time just for operating the product. Are you finding all these meetings useful? Decline some of them - see what happens. Can you encourage your team members to clarify their understanding with the developers without your involvement?
As someone pointed out, your goal may be to build a better product. In my context it's the state of the product that people really care about. I make suggestions on how we might improve the process for next time, but I don't have control over the people who would ultimately implement any suggestions.
If your mission is to "drive quality upstream" then by all means keep doing these activities and add some more to your list (making sure unit tests are written, run and updated).
But if the mission of your test team is to provide accurate and timely information on the state of the product (which is my team's goal), then I would suggest you spend more time actually operating the product.
Perhaps you need more testers :)
I find myself in the same situation recently. Actual hands-on time with the product has reduced due to a reduction in the size of the test team, and I now find myself spending more time learning about the product, understanding requirements, planning testing, implementing test automation, etc. Putting the product through its paces, either by testing it or exposing it to a pre-release user group, is most likely to root out any bugs, and I now rely on other team members (developers, the product manager, etc.) to help out with testing, and just try my best to co-ordinate their efforts in between my own testing efforts.
I think prioritisation of risk is the key to this dilemma - just make sure all the high-risk areas are given due care and attention, and make sure management is aware of the risks of a lack of testing resources.
I also agree with the points above - although time spent away from hands-on testing won't directly result in bug reports, it does help to reduce the risk of bugs appearing in the first place, or ensures that they are caught upstream of manual scripted/exploratory testing.
Well, in my opinion test execution requires a proper methodology and set of processes. Testing is an intensive activity and requires expertise along with skill. Thanks.