This week, I spent time at companies building fabulously complex systems that operate in even more complex ecosystems. It is clear that both the business leaders and the technologists in these companies are struggling to manage the complexity of the systems they sell to customers. Whether in telecommunications, automotive, healthcare or any other industry, we are clearly reaching the limits of what humans can intellectually manage.
At the same time, during this week, I also got to spend time with my friends at Peltarion, which provides a platform for what we call “operational AI”. When exploring the outstanding progress we are making with machine learning systems in general and deep learning systems specifically, it is clear that these systems are not “built” or “architected”, but rather “grown”. The problems these systems solve are so complex that “designing” solutions is either not possible or prohibitively expensive.
The field of complex systems and chaos theory has been around for a long time, and everyone knows the story of the butterfly in Australia causing a hurricane off the coast of Florida. Rather than claiming that it is impossible to make any statements about complex systems, my main concern is that it is difficult to make accurate predictive statements about their behavior because of nonlinear relations between inputs and outputs, feedback loops and emergent behavior.
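This sensitivity to initial conditions can be illustrated with a classic toy example (an illustration of mine, not something from the discussion above): the logistic map, a one-line nonlinear system in which a one-billionth change in the starting value produces a completely different trajectory within a few dozen iterations.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), a minimal
    standard example of chaotic, nonlinear system behavior."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)
```

The two trajectories agree closely for the first iterations and then diverge completely, which is exactly why accurate long-range prediction of such systems is so hard.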
The interesting observation is that almost all our processes in companies, in industries and in society as a whole are organized around the principle that we can design and architect systems top-down and that we can predict the implications of our design decisions. In the certification community, for instance, the use of deep learning systems is considered highly suspect precisely because we cannot explain or predict the behavior of the system, especially for input data outside the scope of the training data. Similarly, many laws proposed by governments prescribe a specific solution to a specific issue but frequently fail to accomplish the desired outcome because of complex system behaviors.
The solution to this challenge is to focus on the desired outcomes rather than on specific solutions. Once we are clear on what we are looking to accomplish, in precise and measurable terms, it becomes much easier to experiment with different solutions and to evaluate which of the proposed approaches deliver the desired outcomes and which fail to do so. This experimentation can be driven by humans, as with A/B/n experiments in, for instance, SaaS companies, or by machine learning algorithms when the data is available and training and validation can be conducted without negative side effects.
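As a minimal sketch of what such outcome-driven experimentation can look like, the following simulates an A/B/n experiment; the function name `run_ab_experiment` and the choice of conversion rate as the outcome metric are illustrative assumptions on my part.

```python
import random

def run_ab_experiment(conversion_rates, n_users_per_variant, seed=42):
    """Simulate an A/B/n experiment: expose each variant to a group of
    users and record the observed outcome (here: conversion rate)."""
    rng = random.Random(seed)
    observed = {}
    for variant, true_rate in conversion_rates.items():
        conversions = sum(rng.random() < true_rate
                          for _ in range(n_users_per_variant))
        observed[variant] = conversions / n_users_per_variant
    # Pick the variant whose measured outcome is best; a real system
    # would also apply statistical significance testing here.
    winner = max(observed, key=observed.get)
    return observed, winner
```

The point is that the variants themselves can be anything — hand-designed by humans or proposed by an algorithm — as long as the outcome metric is defined precisely enough to rank them.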
The final concern that I want to raise is that many companies lack a clear, explicit description of the desired outcomes. Many companies talk about system performance and customer value, but few provide the specificity necessary for these terms to mean something. For instance, how do we trade off average throughput for the entire customer base against response time for individual customers? Or what is the right balance between security and usability? For us to grow systems through experimentation and trial-and-error, we need to define precisely what constitutes “better”. And we need to know the guardrail metrics that cannot be violated. Once we have that, we can use different approaches to creating systems.
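To make this notion of “better” plus guardrails concrete, here is a small sketch; the metric names (`throughput`, `p99_response_ms`) and the bound values are hypothetical examples, not prescriptions.

```python
def within_guardrails(metrics, guardrails):
    """Return True if no guardrail metric is violated; each guardrail
    is an inclusive (lower, upper) bound on a named metric."""
    return all(low <= metrics[name] <= high
               for name, (low, high) in guardrails.items())

def pick_better(current, candidate, primary_metric, guardrails,
                higher_is_better=True):
    """A candidate counts as 'better' only if it stays within all
    guardrails AND improves the primary outcome metric."""
    if not within_guardrails(candidate, guardrails):
        return current
    if higher_is_better:
        improved = candidate[primary_metric] > current[primary_metric]
    else:
        improved = candidate[primary_metric] < current[primary_metric]
    return candidate if improved else current

current = {"throughput": 1000, "p99_response_ms": 180}
candidate = {"throughput": 1400, "p99_response_ms": 450}
guardrails = {"p99_response_ms": (0, 250)}  # response time must stay under 250 ms
# The candidate improves throughput but violates the response-time
# guardrail, so it is rejected in favor of the current version.
chosen = pick_better(current, candidate, "throughput", guardrails)
```

Whether the experimenting is done by humans or by a learning algorithm, this is the contract it needs: one primary outcome metric to improve and a set of guardrail bounds that must never be crossed.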
To conclude, in many industries we are reaching the limits of the complexity that humans can intellectually manage. It is important to shift our perspective from the top-down architected and designed system to the complex (eco-)system viewpoint, where the consequences of our decisions are not obvious and we need to experiment our way forward, potentially through multiple trials, to achieve the outcomes we are looking for. It is time to explore growing rather than building systems.