Strategic digital product management: conclusion


We started this series with an exploration of the importance of extending our focus from only how to build the systems we’re responsible for to the why and what of these systems. When we only look at the how, we tend to focus on efficiency: cranking out as many features as possible per unit of R&D resource. More important than efficiency, however, is effectiveness.

The separation between the how and the what has caused, in most organizations, a very low return on the investment of R&D resources. First, research by us as well as others shows that somewhere between 70 and 90 percent of all R&D resources are spent on commodity functionality that no customer really cares about, except that it needs to work. This functionality has typically been in the product for several generations, so it's surprising that it still requires so many resources. Second, research shows that somewhere between half and two-thirds of the new features and functionality added to a system are never used, or used so seldom that the R&D investment in them should be considered wasted.

Over the years, my colleagues and I have developed a variety of techniques and approaches to address this issue, including the Hypex model, the Three Layer Product Model (3LPM), various approaches to A/B testing, value modeling and so on. However, the uptake of these techniques has been far from universal in the software-intensive systems companies we predominantly work with. My assessment, after many years of banging my head against the wall, isn't that the techniques and approaches don't work (they do), but that product management and R&D have a Kiplingesque relationship: East is East and West is West, and never the twain shall meet.

When product management and R&D are separated from each other, the main mode of interaction is through requirements and specifications. This requires product management to make a series of decisions about the product while making often unfounded guesses about the impact of these decisions on customers. And, of course, R&D does its best to build, in the best way possible, functionality that many know is completely useless, simply because that's what the specification says.

In my view, the best way to address this is by adopting three main principles: removing the dichotomy between product management and R&D, treating requirements as hypotheses and value modeling. The dichotomy between product management and R&D is at the heart of the challenge I believe needs to be addressed. As we discussed when we wrote about organizing product management, the central tenet needs to be to move the interaction between product management and R&D from inter-team or inter-functional coordination to intra-team coordination. The product management activities and R&D activities need to occur within the same team to optimally align and coordinate.

The dichotomy between product management and R&D has made the requirement specification the boundary object between the two functions. The problem is that product management tends to claim that the requirements are written in blood and immutable, so as not to look like idiots in front of R&D. Meanwhile, R&D has no ambition to challenge product management on the relevance or viability of these requirements, in order to maintain a reasonable working relationship across organizational boundaries.

However, requirements are simply hypotheses about what would add value to customers and we need to treat them as such. Once we treat each requirement as a hypothesis, we can start doing two important things. First, we can engage in a discussion around the impact, preferably measurable, that the unit of functionality is expected to have if built and included in the system. Second, once we’ve agreed upon the expected impact, we can conduct experiments to validate that belief. These experiments typically are concerned with developing a thin slice of the full feature and measuring the impact of that slice to establish more confidence that the hypothesis indeed holds and the requirement will have the expected impact.

More likely than not, the hypothesis will fail to hold. Then we can remove the functionality related to the requirement, hypothesis and experiment from the system and move on to the next hypothesis. The primary advantage is that instead of developing the entire feature, we've built only a thin slice, spent a fraction of the R&D effort, established that the hypothesis doesn't hold and avoided wasting further effort.
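To make this concrete, here is a minimal sketch, in Python, of how a requirement could be captured as a hypothesis with a measurable expected impact and then evaluated against data from a thin-slice experiment. All names, metrics and thresholds (FeatureHypothesis, expected_lift, the example conversion numbers) are illustrative assumptions, not part of the Hypex model or any particular toolset.

```python
from dataclasses import dataclass


@dataclass
class FeatureHypothesis:
    """A requirement rephrased as a testable hypothesis (illustrative sketch)."""
    name: str
    metric: str            # the customer-value metric the feature is expected to move
    expected_lift: float   # minimum relative improvement we expect, e.g. 0.05 = 5%


def relative_lift(control: float, treatment: float) -> float:
    """Relative improvement of the treatment group over the control group."""
    return (treatment - control) / control


def evaluate(hypothesis: FeatureHypothesis, control: float, treatment: float) -> str:
    """Decide whether to build the full feature, drop the slice or keep experimenting.

    In practice you would also check statistical significance before deciding;
    this sketch only compares the observed lift with the expected lift.
    """
    lift = relative_lift(control, treatment)
    if lift >= hypothesis.expected_lift:
        return f"{hypothesis.name}: lift {lift:.1%} meets target -> build the full feature"
    if lift <= 0:
        return f"{hypothesis.name}: lift {lift:.1%} -> remove the thin slice, next hypothesis"
    return f"{hypothesis.name}: lift {lift:.1%} below target -> refine the slice and re-test"


# Example: a thin slice of a hypothetical 'smart search' feature, exposed to a subset of users.
hypothesis = FeatureHypothesis("smart search", "task completion rate", expected_lift=0.05)
print(evaluate(hypothesis, control=0.62, treatment=0.63))
```

The point of the sketch is simply that the expected impact is written down and agreed before any code is built, so the thin-slice data can falsify the hypothesis cheaply.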

Finally, we need to establish quantitative metrics or KPIs that capture the value of the system to customers. It never ceases to amaze me how little agreement there is within companies about the quantitative, measurable goals for delivering value to customers, about what constitutes customer value in quantitative terms and about the relative priority of these metrics when we have to trade them off against each other. In various posts, I've brought up the topic of value modeling, but the end goal is that every agile team, every sprint, can quantitatively determine what value it delivered to customers. That requires a hierarchical value model where value factors aren't just related horizontally, i.e. with a relative priority at the same level, but also vertically, where lower-level factors contribute to higher-level ones.
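As an illustration only, the sketch below shows one way such a hierarchical value model could be computed: leaf-level factors are measured per sprint, weighted horizontally against their siblings and aggregated vertically into higher-level value. The factor names, weights and scores are invented for the example; a real model would be negotiated between product management and R&D and tied to the company's own metrics.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class ValueFactor:
    """A node in a hierarchical value model (illustrative sketch)."""
    name: str
    weight: float                                 # relative priority among siblings (horizontal)
    score: float = 0.0                            # normalized 0..1 measurement for leaf factors
    children: list[ValueFactor] = field(default_factory=list)

    def value(self) -> float:
        """Leaf: its own measured score. Inner node: weighted average of children (vertical)."""
        if not self.children:
            return self.score
        total_weight = sum(child.weight for child in self.children)
        return sum(child.weight * child.value() for child in self.children) / total_weight


# Hypothetical model: customer value composed of two higher-level factors,
# each fed by lower-level metrics that an agile team can measure every sprint.
model = ValueFactor("customer value", 1.0, children=[
    ValueFactor("product performance", 0.6, children=[
        ValueFactor("p95 response time (normalized)", 0.7, score=0.80),
        ValueFactor("crash-free sessions", 0.3, score=0.95),
    ]),
    ValueFactor("feature adoption", 0.4, children=[
        ValueFactor("weekly active usage of new features", 0.5, score=0.40),
        ValueFactor("task completion rate", 0.5, score=0.63),
    ]),
])

print(f"Value delivered this sprint: {model.value():.2f}")
```

The horizontal weights express the relative priority of sibling factors, while the recursive aggregation expresses how lower-level factors contribute to higher-level ones, giving each team a single number it can track sprint over sprint.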

Product management is a challenging, multi-dimensional set of activities that live on the boundary between business strategy and technology strategy as well as sales and R&D. In this series, we’ve discussed five dimensions of product management: principles, scopes, activities, factors and roles. For each of these dimensions, we focused on three aspects that we considered to be the highest priority.

Although product management has been studied for decades, digitalization, including software, data and AI, has caused a significant change in the way we conduct it. We can create fast feedback loops through the adoption of DevOps, run experiments such as A/B tests, use the data collected from systems in the field and employ machine learning models, trained on that same data, to provide solutions superior to what we can create algorithmically. This series has sought to study and discuss the implications for product management and the rest of the organization. While this may not seem so relevant to some in product management, in the age of digital, as Eric Pearson said, it's no longer the big beating the small but rather the fast beating the slow.

Want to read more like this? Sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch) or Twitter (@JanBosch).