Having spent quite a bit of this summer thinking about machine learning and artificial intelligence, I've come to believe that a very important transformation is underway: a shift from a focus on the qualitative to a focus on the quantitative. The moment we start A/B testing, deploying multi-armed bandits or training machine learning models, the very first thing we need to do is define, in precise, quantitative terms, which factors we're optimizing for and what their relative priority is. And, of course, which factors aren't allowed to move outside a certain set of boundaries.
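To make this concrete, here's a minimal sketch of what such a quantitative definition could look like. The metrics, weights and guardrail thresholds are purely illustrative assumptions, not taken from any real system: a composite objective combines the factors we optimize for according to their relative priority, while guardrails mark the boundaries that other factors aren't allowed to cross.

```python
# Hypothetical example: a quantitative objective with weighted factors and guardrails.
# Metric names, weights and thresholds are illustrative assumptions, not real data.

WEIGHTS = {            # relative priority of the factors we optimize for
    "conversion_rate": 0.6,
    "avg_session_minutes": 0.3,
    "items_per_order": 0.1,
}

GUARDRAILS = {         # factors that may not move outside these boundaries
    "p95_latency_ms": lambda v: v <= 250,
    "churn_rate": lambda v: v <= 0.05,
}

def objective(metrics: dict) -> float:
    """Weighted sum of the optimization factors; -inf if a guardrail is violated."""
    for name, within_bounds in GUARDRAILS.items():
        if not within_bounds(metrics[name]):
            return float("-inf")   # variant is unacceptable, regardless of its score
    return sum(w * metrics[name] for name, w in WEIGHTS.items())

# Comparing two A/B-test variants under this definition:
variant_a = {"conversion_rate": 0.042, "avg_session_minutes": 7.1,
             "items_per_order": 2.3, "p95_latency_ms": 210, "churn_rate": 0.04}
variant_b = {"conversion_rate": 0.045, "avg_session_minutes": 6.8,
             "items_per_order": 2.4, "p95_latency_ms": 290, "churn_rate": 0.04}

print(objective(variant_a), objective(variant_b))  # B scores -inf: latency guardrail violated
```

Writing the definition down in this form forces exactly the discussion this post is about: which metrics appear in it, how heavily each one counts and which boundaries are non-negotiable.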
In many ways, this isn’t the first time we’re moving in this direction. Earlier, the notion of key performance indicators was widely used to control teams, departments and companies. Google and, before that, Intel have made extensive use of the OKR mechanism (Objectives and Key Results), which combines a qualitative objective with quantitative key results.
Still, it seems to me that there’s at least one real change between earlier initiatives and today’s trend: the way we interact with software-intensive systems. Until now – and this still is the primary way of working – we’ve developed systems by defining how they should conduct their operations. Using requirement specifications and similar techniques, we’d describe in qualitative terms what the system was expected to do and then design it, defining how it should accomplish the intended goal.
The primary difference now is that we tell a system what to accomplish in quantitative terms, give it a set of data to train on and then expect it to figure out how to achieve the desired outcome. This is painting it a bit black-and-white, as we still need to provide the machine learning or deep learning models, but the nuts and bolts of how the system accomplishes the outcome come from training on data rather than from an engineer developing code.
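As a minimal sketch of that shift – using scikit-learn and synthetic data purely for illustration – the contrast looks roughly like this: instead of an engineer writing the decision rule, we state the outcome to predict, hand over data and let training determine the mapping.

```python
# Illustrative sketch: we specify the desired outcome (the target) and supply data;
# training determines the mapping, rather than an engineer hand-coding the rule.
# Features, coefficients and thresholds below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 3))                 # e.g. usage, tenure, support tickets
y = (0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1_000)) > 0  # "will renew"

# Old style: an engineer writes the rule explicitly.
def handcoded_rule(x):
    return x[0] > 0.2 and x[2] < 1.0

# New style: we state the outcome to predict and let the model learn the "how" from data.
model = LogisticRegression().fit(X, y)
sample = X[0]
print("handcoded:", handcoded_rule(sample), " learned:", model.predict(sample.reshape(1, -1))[0])
print("learned coefficients:", model.coef_)     # the 'how', derived from data
```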
The challenge of focusing on outcomes is that they need to be defined quantitatively. The surprising fact that I’ve experienced at company after company is that there’s very little alignment within teams, departments and organizations on the relevant factors and their relative priority. Although I’ve raised this concern in an earlier blog post on “the illusion of alignment”, the problem isn’t just that teams need to agree on what to optimize for, but that any use of machine and deep learning requires a carefully defined set of quantitative success criteria. You can’t ask a system to train itself without defining what success looks like.
Although many think that the machine learning itself is the hard part, in many cases the real challenge is selecting the features to measure, track and optimize for. This concerns both the input features and the output features we’re looking to affect positively. When it comes to feature selection, two aspects are particularly challenging.
The first is that, in many cases, it’s simply unknown which features are relevant to accomplishing the desired outcome. In that case, the team needs to engage in experimentation to gain reliable, quantitative insights into the causal relationships between input features and desired outcomes. Although this may seem a trivial exercise, in many situations there are confounding factors that influence the causal relationships and make it very difficult to distinguish correlation from causation.
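A small simulation can illustrate the point; all numbers below are made up. When a hidden confounder (say, user engagement) drives both the adoption of a feature and the outcome, a naive observational comparison vastly overstates the effect, whereas a randomized experiment on the same population recovers something close to the true causal effect.

```python
# Simulated illustration (synthetic numbers, assumptions only): why randomized experiments
# are needed to separate causation from correlation in the presence of a confounder.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
engagement = rng.normal(size=n)                      # hidden confounder

# Observational data: highly engaged users both adopt the feature AND convert more.
adopted = (engagement + rng.normal(size=n)) > 0
conversion = 0.10 + 0.05 * engagement + 0.01 * adopted + rng.normal(scale=0.02, size=n)
naive_effect = conversion[adopted].mean() - conversion[~adopted].mean()

# Randomized experiment: feature exposure is assigned independently of engagement.
treated = rng.random(n) < 0.5
conversion_rct = 0.10 + 0.05 * engagement + 0.01 * treated + rng.normal(scale=0.02, size=n)
causal_effect = conversion_rct[treated].mean() - conversion_rct[~treated].mean()

print(f"naive (confounded) estimate: {naive_effect:.3f}")   # ~0.07: inflated by the confounder
print(f"randomized estimate:         {causal_effect:.3f}")  # ~0.01: the true causal effect
```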
The second is, as I’ve discussed in an earlier post, the alignment within the team on the desired outcome. Even though different team members may hold different opinions, here too it may well be that the optimal outcome function – in terms of features and their relative priority – is genuinely unknown. In that case, the team has to start with a first guess and then iterate through different definitions and models to figure out the best overall mix.
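As a purely illustrative sketch – the metrics, variants and weights are assumptions, not real data – the iteration can start as simply as scoring the same experiment variants under a few candidate outcome definitions and seeing where they disagree; the disagreements show where the team still has a definition to settle rather than a model to train.

```python
# Illustrative sketch: iterating over candidate outcome functions (all numbers are made up).
# Each candidate weights the same metrics differently; where the rankings disagree, the team
# still has a definition problem to resolve, not a modeling problem.

variants = {   # metrics observed for three experiment variants (synthetic)
    "A": {"conversion": 0.042, "retention": 0.61, "revenue_per_user": 3.10},
    "B": {"conversion": 0.047, "retention": 0.58, "revenue_per_user": 3.05},
    "C": {"conversion": 0.044, "retention": 0.63, "revenue_per_user": 2.90},
}

candidates = {  # first guesses at the outcome function, to be iterated on
    "growth-first":    {"conversion": 0.7, "retention": 0.2, "revenue_per_user": 0.1},
    "retention-first": {"conversion": 0.2, "retention": 0.7, "revenue_per_user": 0.1},
    "balanced":        {"conversion": 0.4, "retention": 0.4, "revenue_per_user": 0.2},
}

def score(metrics, weights):
    return sum(weights[m] * metrics[m] for m in weights)

for name, weights in candidates.items():
    ranking = sorted(variants, key=lambda v: score(variants[v], weights), reverse=True)
    print(f"{name:16s} ranks the variants: {ranking}")
```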
To conclude: as companies are increasingly becoming data- and AI-driven, there’s a growing need to define desired outcomes in quantitative terms. This is the only way an organization will be able to exploit the benefits of digital technologies in general and machine and deep learning in particular. It requires a change far beyond the data science team: everyone in a leadership or decision-making role needs to take a more quantitative approach to their job. You can no longer leave this to subordinates; you’ll have to take it on yourself as well. Or, in short, quantify yourself!
To get more insights earlier, sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch) or Twitter (@JanBosch).