Toxic Metrics: What can kill Agility in your team?

In the article Metrics – How to measure your team’s agility, we talked about why and how to measure a team using the four Domains of Agility. We used one of our favorite phrases, which is a warning: metrics shape behavior. This matters because we have to be very careful with metrics which may be insignificant or may even get in the way of our work. In this article, we’ll address toxic metrics: those which contaminate the work environment and reduce the team’s agility and motivation.

Metrics shape behavior

Appraising the individual

It isn’t uncommon to come across companies which appraise individuals within the team. We’ve seen boards with things like: total number of tasks delivered by so-and-so, number of bugs fixed by so-and-so. We’ve even seen “Employee of the Month” photos, awarded to the person who delivered the most functionality in the Sprint. In principle this may seem like a good metric, since it pushes people to deliver even more, as if they were salespeople in a store trying to hit targets.

The problem here is that we’re dealing with complex environments and knowledge workers. Sharing information and helping each other is fundamental if the team is going to perfect the product. Individual appraisal stimulates competition and reduces cooperation. If I’m appraised on the tasks I deliver, the features I ship or the defects I fix, I’ll have no time to stop and help another team member. Nor will I have time to stop and check whether we’re fulfilling the business objectives or whether the product will really satisfy users and clients.

Comparing Teams

Another common mistake is to use metrics to compare teams working on different products and in different contexts. For example, take two teams, one responsible for the sales portal and the other for client support, both of which use Net Promoter Score (NPS) to gauge client satisfaction. Comparing the NPS of these two teams isn’t a good idea: when clients buy products they’re in a good mood and happy with the purchase, but when someone contacts client support it’s because something is wrong. The problem may even originate in the sales portal itself, such as missing information or wrong orders. Either way, it will reflect negatively on the client support NPS.
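
To see why these numbers aren’t comparable, it helps to recall how NPS is computed: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). Below is a minimal sketch, with made-up survey scores purely for illustration:

    def nps(scores):
        """Net Promoter Score from 0-10 survey answers.

        Promoters score 9-10, detractors 0-6; passives (7-8) count
        toward the total but pull in neither direction. The result
        ranges from -100 to +100.
        """
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    # Hypothetical answers: buyers tend to be happy, support callers
    # tend to have a problem.
    sales_portal = [9, 10, 9, 8, 10, 9, 7, 10]
    client_support = [3, 6, 9, 2, 7, 10, 4, 5]
    print(nps(sales_portal))    # 75.0
    print(nps(client_support))  # -37.5

The support team starts from a structurally unhappier population, so its lower NPS says nothing about which team is doing a better job.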

Comparing teams with User Story Points

Using User Story Points or any other effort metric to compare performance across teams is an abomination. Unless you want to be fooled, avoid this under any circumstances. Story points are relative units, calibrated within each team, so a point in one team bears no relation to a point in another. The purpose of these estimates is to find the team’s own velocity, helping it discuss and negotiate which stories go into the next Sprint and which stay out.
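
To make the legitimate use concrete, here is a minimal sketch (assuming the team simply records the points it completes each Sprint): velocity as a rolling average that the same team uses for its own Sprint planning, never for cross-team comparison.

    def velocity(points_per_sprint, window=3):
        """Rolling average of story points completed in recent Sprints.

        Meaningful only within a single team, as a forecast of how much
        it can take into the next Sprint -- never for comparing teams.
        """
        recent = points_per_sprint[-window:]
        return sum(recent) / len(recent)

    # Hypothetical Sprint history for one team:
    print(velocity([21, 34, 29, 25]))  # ~29.3 -> a sensible next-Sprint budget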

Comparing team specialties

I’ve also experienced another type of comparison with negative effects: a team in which the developers of the iOS app were competing with the developers of the Android app. The ratings on the App Store and Google Play were the main gauge for this confrontation. The result? The developers refused to help each other. In fact, whenever they were together they turned their backs on each other, as if a duel might break out at any moment.

Does this mean that if a rating for an app is very high at the App Store and very low at Google Play, I shouldn’t take this into consideration?

Of course not: ratings are important for measuring client satisfaction. But instead of using them to promote competition, use them to promote cooperation. For instance: what are we doing that makes the rating high in one store and low in the other? How can the developers work together to improve the experience of users in the store with the low rating? And so on.

Vanity metrics

Not long ago, I got an e-mail from a company saying how successful some of their apps were. They’d hit the million-downloads mark. Out of curiosity, I decided to look at the numbers for their apps on Google Play and came across a dreadful situation. The apps’ average rating was 2.7 and the number of people who still had them installed was barely in the thousands. People downloaded the apps, realized they weren’t what they expected, gave them a poor rating and uninstalled them from their smartphones.

As Eric Ries writes in his book The Lean Startup, the number of downloads is usually a vanity metric. By itself, it suggested the company was in good shape, but in reality, the rejection rate (not a vanity metric) was over 90%.
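
The arithmetic behind that claim is simple. One plausible way to define the rejection rate here is the share of downloads that didn’t stay installed; a minimal sketch with illustrative figures in line with the story above:

    def rejection_rate(downloads, active_installs):
        """Share of downloads that did not stay installed.

        Unlike the raw download count, this says something about whether
        the product actually meets users' expectations.
        """
        return 100 * (downloads - active_installs) / downloads

    # Illustrative figures only: a million downloads, few survivors.
    print(rejection_rate(1_000_000, 80_000))  # 92.0 -> over 90% rejected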

Other vanity metrics include: number of visitors to the site, number of registered users, number of app logins, number of hits, etc.

Use metrics which allow you to identify scenarios and make business decisions. In this article, we talk about some of them.

Measuring too many things

I’ve been in companies which use vast numbers of indicators. The time spent measuring, evaluating, analyzing and maintaining them was enormous, and the results were tiny. The upkeep of these metrics cost more than the benefits they brought.

Your metrics should always lead to action. Measuring for measuring’s sake makes no sense, and it’s an unnecessary cost.

Not measuring a Domain of Agility

In the article Metrics – How to measure your team’s agility we wrote about the importance of measuring every Domain of Agility. Measuring one and ignoring another is a sign that the team will soon break up. Some examples we’ve experienced:

  • Efficiency without Quality: The team delivered a lot and brought the company significant returns. However, quality was very low: many bugs per delivery and very few automated tests. Over time, this low quality took its toll and the team became highly inefficient.
  • Efficiency without Efficacy: The team delivered plenty every Sprint. The problem was that the number of users and the returns were ridiculously low.
  • Efficacy and Efficiency with no Atmosphere: This team delivered a lot, brought the company plenty of return, but the team members were at each other’s throats. Daily Meetings and Retrospectives had been abolished and after a while, people started leaving the team. The result: efficacy and efficiency plummeted.
  • Quality without Efficiency or Efficacy: A company used the number of production errors as a metric. The fewer the better. The problem was, the “champion” was also the champion of NOT delivering.

Conclusion

Metrics will always shape people’s behavior, so choose them very carefully to prevent them from poisoning your product or company.
