There’s no such thing as customer value

Photo by Kelly Sikkema on Unsplash

As alluded to earlier, I work with quite a few companies in a consulting, advisory, board, research collaboration or other capacity. In many of these engagements, the discussion turns to what to build or how to extend the product with new functionality. When I ask why we’re prioritizing certain types of functionality, the typical answer is that the prioritized work will maximize customer value.

The notion of customer value is very hard to disagree with. Of course, we all want to deliver value to customers and improve their lives, their business or whatever other aspect our offering is concerned with. The problem is that “customer value” is mostly a feel-good concept. When we start to drill into it and try to define what constitutes customer value, the glossiness rapidly crumbles. In practice, there’s a severe lack of agreement within companies and teams as to what constitutes customer value. And even where there’s qualitative agreement at the level of storytelling, things rapidly fall apart as soon as we try to quantify it.

For the last decade or so, I have, together with some colleagues, conducted research on value. My conclusion is that in the majority of companies, there’s no agreement at all as to what constitutes customer value. This leads to significant inefficiencies for at least three reasons.

First, when we lack a proper understanding of what adds value to customers, the organization easily ends up prioritizing development work and other activities based on the beliefs of product managers and others, and those beliefs frequently turn out to be wrong. Our research provides considerable evidence that half or more of the features in a typical system are never or hardly ever used and shouldn’t have been built in the first place. This is an enormous problem, as it suggests that half of all R&D effort is wasted.

Second, a lack of clear understanding of what constitutes customer value quickly results in teams building different narratives as to what adds value. This can easily lead to conflicting work across teams. For example, the team working on improving user experience or performance may have its contributions undone by the security team adding functionality that improves security but deteriorates user experience or performance.

Third, even if the organization has quantitatively defined what KPIs are to be tracked and optimized for, development initiatives will frequently improve some of these KPIs and deteriorate others. Often, a clear prioritization is lacking as to the relative importance of different KPIs, resulting in deadlocks or different prioritizations in different teams.

Our research shows that there are at least three strategies that can be used to overcome these challenges. First, apply value modeling. This is the process where the team identifies the measurable factors constituting the value delivered to the customer and then defines the relative priority of each of these factors. Although this seems obvious and easy, it proves to be very difficult for most teams as it forces an agreement on priorities and relevant KPIs.

One of the key challenges during value modeling is that groups tend to include too many factors in an effort to be inclusive of all participants. This leads to the “vital few versus worthwhile many” trap. When facilitating value modeling workshops, we work hard to reduce the set of factors to optimize for to the smallest possible number, preferably one. This is sometimes referred to as the North Star metric.
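To make the idea concrete, here is a minimal sketch of what a value model can look like once a team has agreed on its factors and their relative priorities. The factor names, weights and KPI movements below are hypothetical assumptions for illustration, not taken from any real product.

```python
# Hypothetical value model: a small set of measurable factors with
# agreed relative weights. A negative weight means "lower is better".
VALUE_MODEL = {
    "conversion_rate": 0.5,                # higher is better
    "avg_session_minutes": 0.3,            # higher is better
    "support_tickets_per_1k_users": -0.2,  # fewer is better
}

def value_score(kpis):
    """Weighted sum over the agreed factors; assumes each KPI is
    normalized to a comparable scale (e.g. a z-score)."""
    return sum(weight * kpis[factor] for factor, weight in VALUE_MODEL.items())

# Compare two candidate initiatives by their predicted KPI movement.
initiative_a = {"conversion_rate": 0.4, "avg_session_minutes": -0.1,
                "support_tickets_per_1k_users": 0.5}
initiative_b = {"conversion_rate": 0.1, "avg_session_minutes": 0.6,
                "support_tickets_per_1k_users": -0.2}

print(value_score(initiative_a), value_score(initiative_b))
```

The point of the exercise isn’t the arithmetic, which is trivial, but that writing the weights down forces the team to agree on them explicitly.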

Second, we need to adopt short development cycles and experimental approaches. Each decision to develop some functionality or take some action is based on a hypothesis as to what the impact of the initiative will be. However, as we all know, hypotheses have a significant likelihood of being incorrect. Consequently, the less we invest until we can collect evidence concerning the hypothesis, the more effective we’ll be.
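One common way to test such a hypothesis cheaply is to ship the change to a small slice of traffic and compare conversion against a control group. The sketch below uses a pooled two-proportion z-test (a standard normal approximation, stdlib only); the traffic numbers are made up for illustration.

```python
from math import erf, sqrt

def ab_test_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-sided p-value for a difference in conversion rates,
    using the pooled two-proportion z-test (normal approximation)."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# Hypothetical experiment: 2,400 users per arm, 120 vs. 150 conversions.
p = ab_test_p_value(120, 2400, 150, 2400)
print(round(p, 3))
```

If the p-value is large, the evidence doesn’t support the hypothesis and the team can stop investing before the feature is fully built out.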

Third, collect quantitative evidence to correlate feature- and system-level KPIs with higher-level system and business KPIs. It’s often hard to translate the value of adding or improving a feature into its top-level impact on the company. In the age of DevOps, however, this can be accomplished by continuously collecting lower-level and higher-level data over multiple releases and conducting multivariate analyses. Doing so lets us make more evidence-based statements about the impact of individual features and teams.
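A multivariate analysis of this kind can be as simple as an ordinary least-squares regression of a business KPI on feature-level KPIs across releases. The sketch below uses synthetic data, with assumed “true” weights only to generate it; in practice the rows would come from real release telemetry.

```python
import numpy as np

# Synthetic illustration: rows are releases, columns are feature-level
# KPIs; y is a higher-level business KPI. The "true" weights are an
# assumption used only to generate the data.
rng = np.random.default_rng(0)
n_releases, n_features = 24, 3
X = rng.normal(size=(n_releases, n_features))
true_weights = np.array([0.8, 0.0, -0.3])
y = X @ true_weights + rng.normal(scale=0.1, size=n_releases)

# Ordinary least squares with an intercept term.
X1 = np.column_stack([X, np.ones(n_releases)])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(np.round(coef[:n_features], 2))
```

The fitted coefficients estimate how much each lower-level KPI contributes to the business KPI, which is exactly the kind of quantitative relationship that can then drive prioritization.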

The notion of customer value is a feel-good term that in most companies doesn’t mean much beyond the opinion of the individual using it. It’s easy to get everyone to embrace customer value as a goal, but everyone has their own interpretation of what it means. This leads to significant inefficiencies in organizations, including wasted development effort, conflicting work across teams and poor prioritization. To remedy this, we encourage companies to adopt value modeling, short development cycles and experimental approaches, and to track both lower- and higher-level KPIs so that quantitative relationships between them can be established and used to prioritize development work. As Jeff Bezos so beautifully said, “We see our customers as invited guests to a party, and we’re the hosts. It’s our job every day to make every important aspect of the customer experience a little bit better.”

Want to read more like this? Sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch), Medium or Twitter (@JanBosch).