The second thing Scott Barber said that stayed with me (here is the first) is this:
The more removed people are from IT workers, the greater their desire for metrics. To paraphrase Scott: “the managers on the floor, in the cube farms, agile spaces, or otherwise with their teams most of the time, don’t use a lot of metrics because they just feel what’s going on.”
It seems to me that those higher up, dealing with multiple projects, don’t have (as much) time to visit the cube farms, and they know summarized information is the quickest way to learn something. The problem is, too many of them think:
SUMMARIZED INFORMATION = ROLLED UP NUMBERS
It hadn’t occurred to me until Scott said it. That alone does not make metrics bad, but it helps me understand why I (as a test manager) don’t bother with them, yet spend a lot of time fending off requests for them from out-of-touch people (e.g., directors, other managers). Note: by “out-of-touch” I mean out of touch with the details of the workers’ day-to-day, not out of touch in general.
Scott reminds us the right way to find the right metric for your team is to start with the question:
What is it we’re trying to learn?
I love that. Maybe a metric is not the best way of learning. Maybe it is. If it is, perhaps coupling it with a story will help explain the true picture.
Thanks, Scott!
I enjoyed this post about metrics, especially since I've seen them manipulated and misused in the past. I do think there should be an addition to the "what are we trying to learn?" question, and that is: "what are we going to do with what we learn?" In my opinion, unless you have a planned response, the question does not help. Maybe a response is implied, but I've seen many cases where a metric is reported and no action is taken on it. Where's the value then?
People get away with some cheating when reporting project metrics to their customers, and customers should be wary of it. At the same time, if the customer has some metrics of their own with which to measure the project outcome themselves, they will not be misled. Nice article.
I will also recommend a book that deals with such issues: "Software Testing & quality assurance: from traditional to cloud computing".
Metrics are what you make them. I think they can be very valuable, assuming your intent is to learn from them. Where it gets dicey is using metrics that are simply the wrong measurement for what you're trying to fix. Too often, individuals also rely on out-of-date measurement approaches (how many lines of code do you write a day?). It's tricky, though, because everyone is trying to improve: the software development and verification teams owe stakeholders relevant information. The question is what that information is, and how one gathers and disseminates it in a lean way. Ahhh... so many "opportunities"!
While I agree that metrics can be used by those who are "out of touch," they can also be useful to the ground troops. One of the issues we brought metrics in to solve was that our teams felt test automation was making a difference, but it didn't really sink in until we were able to put together the metrics showing the positive impact of their blood, sweat, and tears. Those metrics also allowed us to take those best practices to other groups and align them to the same strategy. I think the best thing to do with those "out-of-touch" people is to show them a trend instead of a metric: show them a graph with axis labels but no values and explain the trend; that's really what most managers are going for. If you provide data that allows someone to react to a single data point, you're doing a disservice to the whole organization.
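As a rough illustration of that "labels but no values" idea, here is a minimal sketch assuming Python with matplotlib; the metric (weekly escaped defects) and every number in it are hypothetical:

```python
import matplotlib.pyplot as plt

# Hypothetical metric: escaped defects per week -- illustrative data only.
weeks = list(range(1, 13))
escaped_defects = [14, 13, 15, 11, 10, 9, 9, 7, 6, 6, 5, 4]

fig, ax = plt.subplots()
ax.plot(weeks, escaped_defects, marker="o")

# Label the axes and title so the trend is readable in context...
ax.set_xlabel("Week")
ax.set_ylabel("Escaped defects")
ax.set_title("Defect trend since automation rollout")

# ...but hide the tick values so viewers discuss the slope
# rather than reacting to any single data point.
ax.tick_params(axis="both", labelbottom=False, labelleft=False)

plt.show()
```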
Metrics are useless if they do not tell a story.
Metrics are worse than useless if they tell the wrong story.
Trying to explain complex topics like ROI on test automation to 'upper management' without metrics is an exercise in futility.
The adage 'You cannot manage what you don't measure' holds a lot of truth.
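For what it's worth, here is a minimal sketch of the back-of-the-envelope arithmetic that often anchors that ROI conversation; this is a generic model, not anything from the post, and all the figures are made up:

```python
# Back-of-the-envelope ROI for test automation -- all numbers hypothetical.

build_cost = 400          # one-time hours to write the automated suite
maintenance_per_run = 2   # upkeep hours per regression cycle
manual_run_cost = 40      # hours for a fully manual regression pass
automated_run_cost = 1    # hours of supervision per automated run
runs_per_year = 50

# Hours saved each run, net of supervision and maintenance.
net_savings_per_run = manual_run_cost - automated_run_cost - maintenance_per_run
annual_savings = runs_per_year * net_savings_per_run

# Classic ROI: (gain - investment) / investment, here over the first year.
roi = (annual_savings - build_cost) / build_cost

print(f"Annual savings: {annual_savings} hours")   # Annual savings: 1850 hours
print(f"First-year ROI: {roi:.0%}")                # First-year ROI: 362%
```

Even a crude model like this turns "automation is worth it" into a claim management can interrogate: change any assumption and the ROI updates with it.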