During the last year, I’ve been in several discussions that, to a large extent, boiled down to “why is this product so stupid?”. The stupidity came down to the system’s failure to anticipate user actions, its inability to learn to function better in a specific context, or its total reliance on the user to initiate activities, even when it was completely obvious what needed to be done. A few examples:
A representative of a company building radars relayed the story of being asked by a customer why the radar, once placed in a specific location, functioned exactly the same after 2 minutes, 2 hours, 2 days and 2 months. Why wasn’t the radar learning from its context and improving its ability to detect objects by identifying which elements in the environment are static and using that knowledge to better distinguish new objects?
A user of a route planning system complained about frequently being late for meetings because he wasn’t proactively warned about traffic jams that didn’t exist when he looked up the expected travel time the day before. Why doesn’t the system warn me, he lamented, about an unexpected traffic jam caused by an accident, so that I can leave earlier?
A company using expensive, high-tech equipment complained about the system being unable to adapt to and learn their very predictable schedule of operations. The equipment required adjustment time between different types of usage and, even though the company ran virtually the same schedule day after day, the system didn’t learn to initiate the reconfiguration and subsequent adjustment by itself.
All these systems were built to specifications drawn up before the start of development. All of them passed the validation and verification tests with flying colors. And yet, they fail to delight customers and users, and they deliver significantly less efficiency and effectiveness than they could.
Being in the age of AI, we have a set of tools in our toolbox that can help address the stupidity of products. Using different forms of learning and experimentation, we can build systems that detect patterns, develop hypotheses that these patterns are consistent, run experiments with proactive behavior, measure the effect of each experiment and then learn from the outcome.
A theoretical AI researcher may claim that this is simply reinforcement learning and, at its core, that’s a correct conclusion. However, it would also violate the Einstein principle of making everything as simple as possible, but not simpler. The key challenge in making systems smarter isn’t basic reinforcement learning, but rather our ability to realize the aforementioned activities and behaviors in systems without causing safety or security risks, without annoying the user (remember Clippy?), while managing the stochastic nature of feedback and focusing on the things that actually add value for the user.
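To make this concrete, here’s a minimal sketch of such a loop in Python. Everything in it is hypothetical: the class name, the thresholds and the `is_safe` callback are my own illustrative assumptions, not any real product’s API. It only illustrates the detect–hypothesize–experiment–measure–learn cycle described above, including simple guardrails against unsafe or annoying behavior:

```python
import random
from collections import Counter

class ProactiveBehaviorLoop:
    """Hypothetical sketch: detect a recurring pattern, hypothesize that it
    is stable, trial a proactive action, measure feedback and learn from it."""

    def __init__(self, min_occurrences=5, confidence_to_act=0.7, annoyance_budget=3):
        self.observations = Counter()     # how often each (context, action) pair is seen
        self.context_totals = Counter()   # how often each context is seen at all
        self.confidence = {}              # learned belief that acting proactively helps
        self.rejections = Counter()       # how often the user rejected a suggestion
        self.min_occurrences = min_occurrences
        self.confidence_to_act = confidence_to_act
        self.annoyance_budget = annoyance_budget  # max rejected suggestions per context

    def observe(self, context, user_action):
        """Record what the user did in a given context (pattern detection)."""
        self.observations[(context, user_action)] += 1
        self.context_totals[context] += 1

    def hypothesis(self, context):
        """Return the dominant action for a context if the pattern looks consistent."""
        candidates = [(a, n) for (c, a), n in self.observations.items() if c == context]
        if not candidates or self.context_totals[context] < self.min_occurrences:
            return None
        action, count = max(candidates, key=lambda x: x[1])
        # Only hypothesize if the action dominates the context, e.g. >80% of the time.
        return action if count / self.context_totals[context] > 0.8 else None

    def maybe_act(self, context, is_safe):
        """Run an experiment: propose the action proactively, within guardrails."""
        action = self.hypothesis(context)
        if action is None or not is_safe(action):
            return None  # never experiment outside the validated safe envelope
        if self.rejections[context] >= self.annoyance_budget:
            return None  # stop suggesting once the user has said no too often (Clippy!)
        if self.confidence.get(context, 0.5) < self.confidence_to_act:
            # Exploration phase: only occasionally trial the proactive behavior.
            if random.random() > 0.2:
                return None
        return action

    def record_outcome(self, context, accepted):
        """Measure the (stochastic) effect of the experiment and update the belief."""
        prior = self.confidence.get(context, 0.5)
        # An exponentially weighted update smooths noisy, stochastic feedback.
        self.confidence[context] = 0.9 * prior + 0.1 * (1.0 if accepted else 0.0)
        if not accepted:
            self.rejections[context] += 1
```

The equipment example above would map onto a loop like this: a context such as “weekday, 07:45” accumulates observations of the same reconfiguration, the hypothesis stabilizes, and the system starts initiating the adjustment itself, backing off if the operators reject it.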
Still, customers increasingly expect their products to get better every day they use them. I want my car, my phone, my computer, my apps and my wearables to get better every day. I want my devices to learn from me and my behavior and deliver more value by adjusting accordingly. To achieve this, it’s not enough to adopt DevOps and run A/B tests; it also requires fully autonomous experimentation by systems, at speeds that R&D organizations simply cannot match.
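As a sketch of what fully autonomous experimentation could look like, the snippet below uses a simple epsilon-greedy bandit, one standard technique for this; the variant names and the reward signal are invented for illustration. Unlike a human-designed A/B test that waits for an analyst to declare a winner, the system picks, measures and re-weights variants continuously:

```python
import random

class AutonomousExperiment:
    """Hypothetical sketch: an epsilon-greedy bandit that continuously
    compares behavior variants, without waiting for a human-run A/B test."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.pulls = {v: 0 for v in variants}
        self.mean_reward = {v: 0.0 for v in variants}

    def choose(self):
        """Mostly exploit the best-known variant, occasionally explore."""
        if random.random() < self.epsilon:
            return random.choice(list(self.pulls))
        return max(self.mean_reward, key=self.mean_reward.get)

    def update(self, variant, reward):
        """Fold the measured outcome into a running average per variant."""
        self.pulls[variant] += 1
        n = self.pulls[variant]
        self.mean_reward[variant] += (reward - self.mean_reward[variant]) / n

# Usage: the system itself runs thousands of micro-experiments per day.
exp = AutonomousExperiment(["warn_30_min_early", "warn_60_min_early", "no_warning"])
for _ in range(10_000):
    variant = exp.choose()
    reward = random.random()  # stand-in for a real signal, e.g. "user arrived on time"
    exp.update(variant, reward)
```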
Our systems shouldn’t nudge us into different behaviors, as many social media apps tend to do, but rather act proactively on our behalf and to our benefit. I want the systems that I use and interact with to take the lead and remove the burden of always remembering and initiating activities from my shoulders and free me to focus on the things that I’m uniquely good at. Please stop building stupid systems and focus on adding smart, proactive behavior instead of yet another feature.
To get more insights earlier, sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch) or Twitter (@JanBosch).