
One of the key challenges for the companies I work with is the burden of regulatory compliance. Especially in Europe, there are numerous rules, acts and regulations to follow. One company in the automotive industry claims it needs to comply with somewhere between 250 and 300 of these to sell its products worldwide.
Because of the GDPR and the Data Act, the regulatory burden has already gone up considerably. With the AI Act and Cyber Resilience Act (CRA), among others, going into effect, this will only get worse in the coming years. The consequences can be quite significant, as several of the regulations can lead to fines of up to 10 percent of revenue.
Although these rules, acts and regulations don’t explicitly state that approaches like DevOps aren’t allowed, it’s often simplest to ensure compliance once and then not touch the system after it’s been put into operation. This goes against everything digitalization and its enabling technologies – software, data and AI – stand for: continuous evolution of systems through DevOps, fast data-driven feedback loops and AI techniques such as reinforcement learning.
In our recent interview study as well as our earlier work with European software-intensive companies, we’ve identified several challenges concerning AI in a regulatory compliance context. These include difficulty of interpretation, risk avoidance, need for human oversight, non-deterministic behavior and lack of automation.
First, especially for new rules, acts and regulations, interpretation is often a significant challenge. In one case, I was told of a company asking five different law firms for an interpretation of the Data Act, only to get five different interpretations back. Typically, interpretations get clarified through lawsuits and rulings, but with the high fines at stake, few companies are willing to go down that road.
The lack of clarity around the interpretation of the regulations leads to the second challenge: risk avoidance. When risking 10 percent of revenue in fines, few leaders dare to ride too close to the edge in terms of compliance. Instead, companies make decisions that keep them far from the point where they would start to violate compliance. This often leads to a desire to maintain the status quo and avoid any changes and innovations, as these might bring risk with them. The consequence is a significant slowdown in innovation, which obviously exposes companies to disruption.
Global companies in particular address this challenge by moving their innovation activities to locales where rules and regulations are less strict. For example, several European automotive companies have deployed their autonomous driving solutions in the US, as the regulatory burden is much lower there. Other companies deploy innovative solutions in places such as Dubai for the same reason.
Third, many regulations and acts require a human in the loop to provide the necessary oversight. Although this approach is often well-intended and considered necessary from a legal liability perspective, the challenge is that many of the interpretations make it very difficult to replace humans with AI agents, especially if these agents are continuously evolving.
The fourth challenge is that machine learning is non-deterministic by nature, which can lead to different outcomes in different situations, even if the same input data is provided. This is raised as a significant concern, especially in the safety-critical context, but the fact is that humans are non-deterministic as well. Algorithmic software can also behave in different ways during operations due to errors in the code or the surrounding infrastructure. Rather than assuming perfect behavior for components, we need architectures that monitor them and can degrade gracefully when they fail to operate according to expectations.
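The architectural idea above – monitor components rather than assume perfect behavior, and degrade gracefully when they misbehave – can be sketched in a few lines. This is a minimal, hypothetical illustration: the component, fallback and tolerance are all assumptions, not a prescription for any particular system.

```python
import random


def ml_component(x: float) -> float:
    """Stand-in for a non-deterministic ML component (hypothetical)."""
    return x * 2 + random.uniform(-0.1, 0.1)


def fallback(x: float) -> float:
    """Simple, verified rule-based fallback behavior."""
    return x * 2


def supervised_call(x: float, tolerance: float = 0.5) -> float:
    """Run the ML component, but degrade gracefully to the fallback
    when its output drifts outside the expected envelope."""
    result = ml_component(x)
    expected = fallback(x)
    if abs(result - expected) > tolerance:
        # Graceful degradation: use the conservative fallback instead
        return expected
    return result
```

The point is not the arithmetic but the pattern: the supervisor accepts that the monitored component may misbehave and guarantees a bounded outcome either way.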
Finally, there’s the lack of automation. Most rules, acts and regulations require some form of evidence of compliance. Unlike the principle of “innocent until proven guilty,” the burden of proving compliance is on the company providing the product or offering. Gathering the evidence manually is extremely labor-intensive; large companies may employ hundreds of people working on this full time and thousands more who spend part of their time on these tasks.
Although there are some solutions where evidence is automatically gathered by tooling inserted into the development toolchain, most companies base their evidence collection on human labor. This naturally leads them to minimize effort in this area, which results in fewer releases as well as avoidance of unproven technologies.
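To make the toolchain-based alternative concrete, here is a minimal sketch of what automated evidence collection in a build pipeline could look like: the evidence record becomes a by-product of the build rather than manual work. All field names and inputs are illustrative assumptions, not any specific regulation's required format.

```python
import datetime
import hashlib
import json


def sha256_of(data: bytes) -> str:
    """Content hash so the evidence ties to an exact artifact."""
    return hashlib.sha256(data).hexdigest()


def collect_evidence(artifact: bytes, source_rev: str, checks: dict) -> str:
    """Emit a JSON evidence record for one build (hypothetical schema)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "artifact_sha256": sha256_of(artifact),
        "source_revision": source_rev,
        # e.g. results of static analysis or test runs fed in by the pipeline
        "checks": checks,
    }
    return json.dumps(record, indent=2)


evidence = collect_evidence(b"binary-bytes", "abc123", {"tests_passed": True})
```

Run on every build, such a step yields a continuous, queryable evidence trail instead of a periodic manual collection effort.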
The fourth and final main area of challenges for companies seeking to become AI-first is regulatory compliance. There are five main concerns: difficulty of interpretation, risk avoidance, need for human oversight, non-deterministic behavior and lack of automation. When applying AI solutions in areas where regulatory compliance is important, both regulators and, where applicable, certification agencies are often unclear on how to achieve compliance using agents. This causes companies to avoid or slow down AI adoption, as it’s unclear how to deploy the solutions. Without deployment, the benefits can’t be realized, so there’s little point in building the solutions at all. As I’ve expressed my concerns about the heavy regulatory burden for especially European companies, I’ll end with a quote by George Allan: “We should value innovation and freedom over regulation.”
Want to read more like this? Sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch) or X (@JanBosch).