From Agile to Radical: measuring team performance

Image by Gert Altmann from Pixabay

In his book “Slow Productivity,” Cal Newport provides an analysis of the history of performance management. He starts in the manufacturing era, where productivity was easy to measure: in practice, it was the number of widgets per hour a factory worker could produce using all the equipment available.

When knowledge work became the primary occupation, things changed significantly, as measuring the productivity of a knowledge worker is a very difficult proposition. In many contexts, it’s virtually impossible to do in any actionable fashion. Of course, after some time, sometimes years, we can look back and identify breakthrough contributions, but that doesn’t make the measurement actionable: all the indicators are lagging ones.

The answer to measuring the productivity of knowledge workers became a pseudo-metric: perceived busyness. For a manager, the idea was that if your people were running around as if squeezing the last drop out of every minute of their day, you could be confident that your team was productive.

In my view, this analysis is spot on; many of the organizations I work with worship busyness. When I was in industry, some of my colleagues would joke that they would sleep when they were dead, as they clearly didn’t have the time now. They were working as hard as they could, even if I couldn’t help but suspect that they were creating an illusion of busyness rather than accomplishing tangible and significant outcomes.

The challenge with knowledge work is that activities that can be automated often already are, and we need knowledge workers for the tasks that are, by definition, non-repeatable and often “wicked problems.” In these situations, breakthroughs are hard, or rather impossible, to plan. Over the years, in my research, I’ve noticed several times that there are ideas that simply need time to germinate and that I need to noodle on for a long time before I have an insight that allows me to write a paper around them. And we all know the situation where a new insight or a solution to a problem suddenly pops into your head while you’re doing something completely different, like taking a shower.

When assessing team performance, we have a choice: either we accept that knowledge work is unmeasurable, or we determine that it’s possible to measure the impact after some period of time. I’m firmly in the latter camp, as in my own research I can see the impact of my efforts, even if it sometimes takes years.

All organizations I work with have top-level KPIs that they’re pursuing, such as revenue, margin, number of customers, recurring revenue, net promoter score, and so on. At the team level, many software teams track a number of feature-level metrics that allow them to understand how their component or subsystem is performing in the field. Often, these are quality-related, such as defect density, the number of defects slipping to the field, test coverage and the like.

The challenge is that the team-level metrics and the top-level business KPIs aren’t connected in any way. Consequently, a team may really move the needle on their metrics, but the impact on the business is simply unknown. The team metrics move, the business KPIs move, but the connection between the two isn’t known and is impossible to determine due to all the confounding factors.

In the SaaS world, many companies build a hierarchical value model where the top-level business KPIs are connected to team-level metrics by an intermediate product KPI layer. To ensure the correct dependencies between the team, the product and the business levels, these companies continuously experiment and measure to collect quantitatively validated relationships between metrics and KPIs at all levels.
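
To make this concrete, the sketch below shows one way such a hierarchical value model could be represented. The KPI names, metric names and weights are purely illustrative assumptions on my part; in a real setting, the weights would come from the kind of experiments described above.

```python
# Minimal sketch of a hierarchical value model: business KPIs at the top,
# product KPIs in the middle, team-level metrics at the bottom. All names
# and weights are invented for illustration; in practice, the weights are
# quantitatively validated through experiments.

value_model = {
    "monthly_recurring_revenue": {        # business-level KPI
        "trial_conversion_rate": 0.6,     # validated weight of each child
        "customer_churn_rate": -0.4,
    },
    "trial_conversion_rate": {            # product-level KPI
        "page_load_time_ms": -0.3,        # team-level metrics
        "onboarding_completion_rate": 0.5,
    },
    "customer_churn_rate": {
        "defects_slipped_to_field": 0.4,
        "p95_response_time_ms": 0.2,
    },
}

def downstream_metrics(kpi, model=value_model):
    """Recursively collect the team-level metrics that feed a given KPI."""
    children = model.get(kpi)
    if children is None:                  # leaf node: a team-level metric
        return {kpi}
    leaves = set()
    for child in children:
        leaves |= downstream_metrics(child, model)
    return leaves

# Prints the four team-level metrics that ultimately influence recurring revenue.
print(downstream_metrics("monthly_recurring_revenue"))
```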

For example, one company would periodically and intentionally slow down its solution for a small slice of its customer base to measure how customers experiencing a slower product behave differently from customers who experience normal performance. That allowed the company to connect team-level metrics, such as system performance, to business-level KPIs, such as revenue and customer satisfaction.
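
As an illustration of how such an experiment could be analyzed, the sketch below compares a hypothetical slowed-down cohort with a control cohort. All the numbers are invented for the purpose of the example and don’t come from the company in question.

```python
# Sketch of analyzing a deliberate-slowdown experiment: a small cohort gets an
# artificially slowed product, the rest serves as control. Figures are invented;
# a real analysis would also include significance testing.

control = {"users": 50_000, "conversions": 1_500, "avg_latency_ms": 800}
slowed  = {"users": 5_000,  "conversions": 120,   "avg_latency_ms": 1_600}

rate_control = control["conversions"] / control["users"]   # 3.0%
rate_slowed  = slowed["conversions"] / slowed["users"]     # 2.4%

latency_delta_ms = slowed["avg_latency_ms"] - control["avg_latency_ms"]
conversion_delta = rate_slowed - rate_control

# First-order estimate: conversion change per extra 100 ms of latency.
sensitivity = conversion_delta / (latency_delta_ms / 100)
print(f"{sensitivity:+.4%} conversion per +100 ms of latency")
# -> -0.0750% conversion per +100 ms of latency
```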

Once a company has quantitatively established the relationship between the business level and the teams, it becomes feasible to start measuring team performance quantitatively and regularly. For one of the companies I worked with, every team knew at the end of each sprint exactly how much money it had made for the company. Using A/B testing, the teams were able to move the conversion KPI by small amounts, and every per mille of conversion improvement translated directly into a positive revenue impact.
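
To illustrate why even a per mille matters, here’s a back-of-the-envelope calculation. All figures are hypothetical placeholders, not data from the company mentioned above.

```python
# What a one per mille (0.1 percentage point) conversion improvement is worth,
# using invented figures.

monthly_visitors = 2_000_000
avg_monthly_revenue_per_customer = 40.0   # euros, assumed

uplift = 0.001                            # one per mille of visitors now converting
extra_customers = monthly_visitors * uplift
extra_revenue = extra_customers * avg_monthly_revenue_per_customer

print(f"+{extra_customers:,.0f} customers/month -> +EUR {extra_revenue:,.0f}/month")
# -> +2,000 customers/month -> +EUR 80,000/month
```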

In this situation, if a team hasn’t moved the needle for several sprints, as a leader you have a clear case for discussing team performance with the members and working on mitigation actions to address the lack of performance. Please note that these teams will still tend to be very busy. However, as one of my managers often said, activity isn’t the same as progress.

A second approach is to organize teams around business KPIs. Here, one team owns one of the top-level KPIs and has as its sole responsibility to move the needle on that KPI with no, or minimal, detrimental impact on the other top-level KPIs. In this case, it’s also quite feasible to measure team performance, as the team either moves the needle on its KPI or it doesn’t.

A remaining question is whether a team that moves the needle on its KPI does so sufficiently to warrant a high performance rating. This often requires an analysis of the KPI’s revenue impact in relation to the cost of the team: each team has to contribute such that its cost is justifiable in terms of the benefits it brings to the company. For instance, even the net promoter score has a clear revenue impact, as a higher score leads to a lower cost of customer acquisition and lower attrition, increasing the total lifetime value of customers.
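
As a rough illustration of that reasoning, the sketch below uses a simple, commonly used lifetime-value approximation (monthly margin divided by monthly churn, minus acquisition cost) with invented figures to show how lower attrition and cheaper acquisition raise customer lifetime value.

```python
# Simple customer-lifetime-value sketch: monthly margin over the expected
# customer lifetime (1 / monthly churn), net of acquisition cost. The figures
# are invented to illustrate the NPS argument above, not real data.

def lifetime_value(monthly_margin, monthly_churn, acquisition_cost):
    expected_lifetime_months = 1 / monthly_churn
    return monthly_margin * expected_lifetime_months - acquisition_cost

# Lower churn and cheaper acquisition, as a higher net promoter score would give:
ltv_before = lifetime_value(monthly_margin=50, monthly_churn=0.04, acquisition_cost=400)
ltv_after  = lifetime_value(monthly_margin=50, monthly_churn=0.03, acquisition_cost=350)

print(f"LTV before: EUR {ltv_before:,.0f}, after: EUR {ltv_after:,.0f}")
# -> LTV before: EUR 850, after: EUR 1,317

# A team's cost is justified when, roughly, (ltv_after - ltv_before) times the
# number of customers affected exceeds the team's cost over the same horizon.
```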

As most teams these days consist of knowledge workers, it’s often difficult to measure the performance of individuals and teams. For a long time, managers have addressed this by focusing on a pseudo metric: perceived busyness. With the growing availability of data and our ability to process larger and larger volumes, it’s increasingly feasible to create hierarchical value models where team-level metrics can be connected to business-level KPIs. Alternatively, we can organize teams around these business KPIs and measure their impact in that way. To end with a quote from Thomas Edison: “Being busy doesn’t always mean real work.”

Want to read more like this? Sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch) or Twitter (@JanBosch).