How To Double Your R&D Effectiveness – Part II


Reflecting on software-intensive companies during the summer break made me think about how investment decisions are made and what new approaches, such as continuous integration and deployment as well as data analytics, mean for these decisions. A key principle that I learned while working in Silicon Valley is to minimize investment between validation points. What does this mean? It means that whenever an investment decision is taken, the focus should be on investing as little as possible until you can validate that the investment really results in the desired outcome.

This principle has a number of components. The first is that you are crystal clear on what the desired outcome is, not just as an individual, but for the organization as a whole. I have been involved in many feature prioritization sessions as well as new product initiatives, and the process very easily turns political. When it does, people defend or attack features or new products based on their implicit beliefs, but these beliefs are often not actually aligned with those of others in the organization. The result is the formation of coalitions that seek to get a feature or new product approved without any internal agreement, even within the coalition, on the expected and desired result of the initiative.

Associated with the first component is the demand that any proposal for an R&D investment (or any investment, for that matter) specify, in detail and quantitatively, what the expected outcome is. If a new feature is expected to increase the time each customer uses the system, that should be measured. If it is concerned with increasing system performance, then that should be measured. We refer to this as the value function: an equation consisting of a weighted set of factors that clearly expresses the value expected from the feature or product.
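As a minimal sketch of what such a value function might look like in code, consider the following. The factor names, weights and numbers are all illustrative assumptions, not drawn from any real product:

```python
# Hypothetical value function: a weighted sum of measurable factors.
# The factor names and weights below are illustrative assumptions.

def value(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of the factor measurements we care about."""
    return sum(weights[name] * factors.get(name, 0.0) for name in weights)

# Example: a feature expected to increase session time and performance.
weights = {"session_minutes": 0.6, "p95_latency_improvement": 0.4}

before = {"session_minutes": 12.0, "p95_latency_improvement": 0.0}
after = {"session_minutes": 14.5, "p95_latency_improvement": 0.2}

# A positive delta suggests the feature delivers the expected value.
delta = value(after, weights) - value(before, weights)
print(f"value delta: {delta:+.2f}")
```

The point of writing the function down explicitly, even in this toy form, is that the weights force the organization to agree in advance on how much each factor matters.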

The second component of the principle is that we need a mechanism for fast deployment of partially realized functionality, including the instrumentation to measure the factors that we are interested in improving. Interestingly, in many companies there is a natural hesitation and even active resistance towards continuous deployment and the collection of quantitative data. One of the predominant reasons is that, in many cases, it will show that the emperor has no clothes. All kinds of beliefs held in the organization will be shown to be incorrect. As a consequence, people will look to avoid being exposed and having their positions affected negatively.
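What "instrumentation" can mean in practice is often very lightweight. The sketch below, with made-up names throughout, shows an in-process recorder that a partially realized feature could use to measure one of the factors in its value function:

```python
import time
from collections import defaultdict

# Minimal in-process instrumentation sketch; all names are illustrative.
# Each partially deployed feature records the factors we want to improve.
metrics: dict[str, list[float]] = defaultdict(list)

def record(factor: str, value: float) -> None:
    metrics[factor].append(value)

def timed(factor: str):
    """Decorator that records a function's wall-clock time under `factor`."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                record(factor, time.perf_counter() - start)
        return inner
    return wrap

@timed("search_latency_s")
def search(query: str) -> list[str]:
    return [query.upper()]  # stand-in for the real feature logic

search("hello")
print(len(metrics["search_latency_s"]))  # one measurement recorded
```

In a real deployment the recorded values would be shipped to a shared store rather than kept in memory, but the discipline is the same: no feature goes out without the measurement that can validate it.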

The action associated with this component is for the organization to adopt full transparency of all the data collected from its systems and products. Only through full transparency, careful analysis of the results and an agreed interpretation of those results will the organization become truly data driven. Once this is the case, the company can become a true learning organization in which people formulate hypotheses and then test them in order to learn more about their customers and the systems that they provide to these customers.
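Formulating and testing a hypothesis on collected data can be sketched very simply. The numbers below are invented, and the Welch-style t statistic is only a rough first check, not a substitute for a proper analysis:

```python
from math import sqrt
from statistics import mean, stdev

# Sketch of testing a hypothesis on collected data (numbers are made up):
# hypothesis: the new feature increases average session minutes.
control = [11.2, 12.8, 10.9, 13.1, 12.0, 11.7]     # without the feature
treatment = [13.4, 14.1, 12.9, 15.0, 13.7, 14.3]   # with the feature

diff = mean(treatment) - mean(control)

# Welch's t statistic as a rough indicator of whether the difference
# stands out against the variance in the two groups.
se = sqrt(stdev(control) ** 2 / len(control)
          + stdev(treatment) ** 2 / len(treatment))
t = diff / se
print(f"mean difference: {diff:.2f} minutes, t = {t:.1f}")
```

The important organizational step is not the statistics but the agreement, before the data comes in, on which metric decides whether the hypothesis holds.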

The final component is a willingness to change course based on the data collected: to proceed in small steps and adjust based on the feedback after each step. For traditional, hierarchical organizations, this is notoriously difficult. Achieving alignment between the different functions and hierarchies in the organization tends to be extremely effort consuming, and the thought of going through the process every couple of weeks feels like an impossibility. In many organizations, the yearly plan will be executed come hell or high water.

The final action is for the organization to decentralize decision making concerning R&D investment and to assign it to cross-functional teams that have a clear and agreed definition of the desired outcome. When a team knows which factors it is optimizing for and can get frequent, quantitative and accurate feedback on its efforts, it does not need a management hierarchy to make the decisions. The team can then actively experiment and find ways to move closer to the desired outcome.
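The experiment loop such a team runs can be caricatured in a few lines. Everything in this sketch is a toy assumption: the "system" is a noisy function with an unknown sweet spot, and each iteration is a small investment followed by a validation point where the change is kept only if the measured factor improved:

```python
import random

# Toy sketch of a team's experiment loop; everything here is illustrative.
random.seed(42)

def measure(config: float) -> float:
    """Stand-in for a real metric: peaks when config is near 0.7."""
    return -(config - 0.7) ** 2 + random.gauss(0, 0.001)

config, best = 0.2, measure(0.2)
for _ in range(50):
    candidate = config + random.uniform(-0.1, 0.1)  # a small step
    score = measure(candidate)
    if score > best:  # validation point: keep only measured improvements
        config, best = candidate, score

print(f"converged near {config:.2f}")
```

No step here costs more than one iteration of effort, which is exactly the point: the team never invests more than one small step before the next validation.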

To conclude, organizations have traditionally taken investment decisions that allocated many resources per decision. Instead, we need to move towards small investments and frequent validation of the impact of the allocated resources. One company I worked with had a yearly release process and allocated hundreds of person-years based on the yearly plan and roadmap. After adopting this principle, the company would never invest more than 10-20 person-weeks of effort before validating with customers that the direction each team was working on was relevant. Adopting the principle outlined here will make your R&D investments at least twice as effective. Imagine what you could accomplish if suddenly half of your resources were available to do useful work! So, adopt the principle of minimizing investment between validation points.
