In my research and consulting engagements with companies, a recurring theme is the ambition to become more data-driven in their way of working. After working on this topic with a variety of companies, my fellow researchers and I defined the adoption process that companies go through when introducing data-driven development practices. In an earlier blog post, I discussed this process briefly, but I realize that it deserves more attention.
The figure below illustrates the five steps that companies go through. The first step is to model the value of a feature in a quantitative way. Second, the company builds the first version of a data collection and analysis infrastructure. Third, the development process becomes more iterative, meaning that a feature is built in multiple slices rather than in one big step. The fourth step is to accelerate the feedback loop. Finally, the company typically starts to connect feature-level metrics to higher-level business KPIs, resulting in a hierarchical value model. In this article, we discuss only the first step.
Figure: summary of the adoption process
The first step in this process is to model the expected value of a feature. As I wrote in an earlier post, in many companies it is surprisingly difficult to agree on what the relevant factors are and what their relative priority is. Consequently, we have developed an approach specifically for modeling feature value. In this approach, we first work with the company to pick a suitable feature. For this feature, we identify the relevant factors that we believe will be affected by it. These factors can be changes in customer behaviour or changes in system behaviour. Then, we discuss the relative priority of these factors and agree on a value function that describes the agreed-upon expected value of the feature.
A value function can look like:
Vf = 0.3 * number_of_users + 0.4 * successful_completion + 0.3 * upsell_success
In this case, the team has decided that there are three factors: the number of users using the feature, the number of successful completions of the steps defined in the feature, and the number of successful upsells caused by the feature. In addition, the team has defined the relative priority of these factors. The latter is important because, when the feature is developed iteratively, some factors may improve while others decline. At that point, it is important to be able to determine whether the new version of the feature is successful or not.
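To make this concrete, the value function above can be sketched as a small piece of Python. This is a minimal illustration, not part of our method as described: it assumes the factor measurements have been normalized to a comparable scale (say, 0 to 1) so that the weights express only relative priority, and the factor names simply mirror the example formula.

```python
# Weights as agreed by the team; factor names mirror the example
# value function Vf = 0.3*number_of_users + 0.4*successful_completion
#                     + 0.3*upsell_success.
WEIGHTS = {
    "number_of_users": 0.3,
    "successful_completion": 0.4,
    "upsell_success": 0.3,
}

def feature_value(factors: dict) -> float:
    """Weighted sum of (assumed normalized) factor measurements."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

# Two iterations of the same feature: completions declined while users
# and upsells improved. The value function gives a single verdict.
v1 = feature_value({"number_of_users": 0.50,
                    "successful_completion": 0.80,
                    "upsell_success": 0.20})
v2 = feature_value({"number_of_users": 0.60,
                    "successful_completion": 0.70,
                    "upsell_success": 0.30})
print(v1, v2)       # 0.53 vs 0.55
print(v2 > v1)      # the new slice counts as an improvement only if Vf rose
```

Note how the weights settle the otherwise ambiguous case: completions dropped, but under this team's priorities the second iteration still scores higher overall.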
After having applied this step with several companies, we have noticed that there are at least four challenges:
- Difficulties in agreeing on value factors and their weights: Even if the process outlined above seems trivial, in practice it is extremely challenging for a team to reach consensus on the relevant factors and their relative priority. The process surfaces deeply held beliefs about the system, its customers and its features that, surprisingly, are far from agreed upon within the team.
- Painful quantification of value: Interestingly, especially in organizations where product managers were used to ordering features from R&D teams without much discussion, product managers are quite reluctant to explain their reasoning and quantify the expected value of features. It is very easy to end up in a situation where the product manager feels put under a spotlight and questioned. Proper facilitation, where everyone feels safe to bring up viewpoints without creating tension, is therefore very important.
- Lack of end-to-end understanding of value: Even if the team, including product management, agrees on the relevant factors and their relative priority, the relationship between the value of the feature and the business impact is frequently hard to define in specific terms. This matters because R&D investments typically need to deliver a 5-20x return in realized business value.
- Illusion of alignment: In several companies, the workshops showed that teams were operating under an illusion of alignment that sometimes was even created intentionally. By abstracting topics of contention to a level of vagueness that everyone can stand behind, teams and organizations create a (false) sense of unity. Getting precise, detailed and quantitative punctures this illusion and may trigger quite tense discussions.
To conclude, adopting data-driven development is a process that, based on our research and collaboration with companies, consists of several steps. The first step, and the focus of this post, is to define the value of a feature quantitatively and precisely. Last year I wrote a short book about this; feel free to contact me if you're interested. And of course, if you would like a workshop at your company to model feature value, please do not hesitate to reach out.