The AI-driven company: conclusion


Over the last few months, this series has explored the transition toward becoming an AI-driven company, with a particular focus on software-intensive systems industries such as automotive, industrial automation, telecommunications, energy and manufacturing equipment. These sectors face a unique constellation of challenges: long product lifecycles, safety-critical functions, complex supply chains, heterogeneous technology stacks and deeply embedded organizational traditions. When used optimally, the emerging generation of AI technologies can bring enormous benefits here, as AI supports not only software and data but also the design, deployment, maintenance and continuous improvement of physical products.

The series discussed four maturity or capability ladders. The first one is concerned with the company’s business processes. Early phases typically involve small pilots, often disconnected from strategic priorities. Higher maturity levels involve automation of decision-making, deployment of local AI-first solutions at the team or product-line level and eventually the emergence of super-agents and AI-first organizational designs. For software-intensive firms, the real value emerges when AI becomes integrated into the operational core – predictive maintenance, production optimization, logistics, energy management and customer insights.

The second maturity ladder focuses on the R&D process. In the first stage, AI is used as a personal assistant by engineers. In the second stage, an AI agent compensates for knowledge gaps in the team and starts to act as a team member. In the third stage, the team is replaced by a set of AI agents supervised by an architect or engineer. At the fourth level, we give an intent to a set of agents and they generate the system we asked for, typically working iteratively. Finally, at the highest level, the AI agents remain part of the system and, during runtime, continuously experiment with improving its functionality through reinforcement learning, A/B testing and runtime regeneration of code.
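To make that highest level a bit more tangible, here's a minimal sketch in Python of runtime experimentation via A/B testing; the controller functions, KPI and thresholds are placeholders I invented for illustration, not something taken from the study:

```python
import random
from statistics import mean

def current_controller(request):
    # Existing, human-engineered implementation (placeholder values).
    return {"latency_ms": 120}

def agent_generated_controller(request):
    # Candidate variant regenerated by an AI agent at runtime (placeholder values).
    return {"latency_ms": 95}

def run_ab_experiment(requests, traffic_share=0.1):
    """Route a small share of traffic to the candidate and compare a KPI."""
    baseline, candidate = [], []
    for req in requests:
        if random.random() < traffic_share:
            candidate.append(agent_generated_controller(req)["latency_ms"])
        else:
            baseline.append(current_controller(req)["latency_ms"])
    # Promote the candidate only if it improves the KPI by a clear margin.
    if candidate and baseline and mean(candidate) < 0.95 * mean(baseline):
        return "promote_candidate"
    return "keep_baseline"

if __name__ == "__main__":
    print(run_ab_experiment(requests=[{} for _ in range(1000)]))
```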

The third ladder is concerned with AI in the product. Software-intensive systems firms increasingly augment mechanical and electronic components with software, connectivity and ML/AI capabilities. We often see companies start with static ML, where offline-trained models are embedded in the product. The next stage is dynamic ML, where models are updated with new data during operation. Then, we move to multi-ML, where multiple models are integrated to deliver value in a coordinated fashion. The fourth stage introduces automated pipelines for retraining, deployment and monitoring. Finally, we have AI system generators, where AI agents continuously seek to improve the performance of the system at runtime.
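As a rough illustration of that fourth stage, a minimal automated retrain-validate-deploy cycle could look as follows (a Python sketch with placeholder functions standing in for real data collection, training and validation):

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    version: int
    accuracy: float

def collect_field_data():
    # Placeholder: pull the latest labeled observations from connected products.
    return [{"features": [0.1, 0.2], "label": 1}]

def retrain(previous, data):
    # Placeholder: train a new candidate model on fresh field data.
    return ModelVersion(version=previous.version + 1, accuracy=0.93)

def validate(candidate, threshold=0.90):
    # Gate deployment on a minimum quality bar measured on a held-out set.
    return candidate.accuracy >= threshold

def deploy(candidate):
    print(f"Deploying model v{candidate.version} to the fleet")

def retraining_cycle(current):
    """One pass of an automated retrain-validate-deploy cycle."""
    data = collect_field_data()
    candidate = retrain(current, data)
    if validate(candidate):
        deploy(candidate)
        return candidate
    return current  # Keep the current model if the candidate fails validation.

if __name__ == "__main__":
    model = retraining_cycle(ModelVersion(version=1, accuracy=0.91))
```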

For companies producing complex systems, such as robotics, vehicles and industrial equipment, these maturity levels reflect a fundamental shift: products no longer evolve only through engineering cycles, but continuously through data and algorithms.

The fourth capability ladder is concerned with the business ecosystem in which the company operates. Here, the transition is from purely human-driven inter-company relations to a situation where AI agents take over more and more of the responsibilities of humans. In the end, humans provide oversight and ensure regulatory compliance, but the operations in the business ecosystem are conducted by agents.

Of course, maturing along these ladders is far from trivial. In this series, we discussed several challenges. The first is to pick suitable and valuable use cases. Most firms initiate AI efforts with isolated pilots, often selected based on individual enthusiasm rather than strategic relevance. In industries with high engineering costs and long product cycles, misallocation of effort can delay progress by years.

Second, data quality, availability and real-time access remain significant bottlenecks in many embedded and cyber-physical systems. In many companies, data is fragmented across disciplines and functions. Also, the data coming back from the field is often limited due to a lack of clarity around data ownership and a lack of legal agreements. Finally, particularly in safety-critical systems, data traceability can be very difficult to realize, especially when some level of dynamism and learning at runtime is allowed.

Third, unlike purely digital companies, software-intensive systems firms must ensure that AI behaves safely when interacting with physical components. This creates challenges in testing, validation, verification, regulatory compliance and responsibility assignment when adding AI to the product.

Fourth, AI adoption often threatens established roles, workflows and power structures. The “expert fallacy,” the assumption that deep domain knowledge is a complete substitute for data-driven approaches, is still prevalent in many companies and acts as a brake on progress. Furthermore, organizations struggle to recruit and retain the necessary skills in ML engineering, MLOps, data engineering and AI-centric architecture.

Finally, AI introduces new forms of risk. Regulatory demands (like the AI Act, the Data Act and cybersecurity regulations) are particularly challenging for embedded and safety-critical domains. Companies must demonstrate accountability, robustness and auditability across the lifecycle of AI-enabled functions, which is quite challenging and requires companies to accept a certain amount of risk and learn the best way to operate simply by doing.

As this entire series is based on a large interview study, it's worth highlighting a few key enablers that surfaced from it. First, successfully adopting AI is a highly cross-functional challenge. It can't be assigned to a single function but requires C-level support to ensure the involvement of business, engineering and operations.

Second, whereas we earlier focused on the models and their performance, the real challenge is to architect systems in such a way that the models are integrated into the product and that all the enablers, such as data pipelines, monitoring, versioning, validation and so on, are built according to the needs of the models.
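One way to think about this, sketched below in Python purely as an illustration (the interfaces are assumptions on my part, not a prescribed design), is that a model only becomes part of the architecture once its enablers, i.e. the data pipeline, monitoring, versioning and validation, are explicit, first-class elements around it:

```python
from typing import Any, Iterable, Protocol

class DataPipeline(Protocol):
    def latest_batch(self) -> Iterable[Any]: ...

class Monitor(Protocol):
    def record(self, prediction: Any) -> None: ...

class ModelRegistry(Protocol):
    def current_version(self) -> str: ...
    def rollback(self) -> None: ...

class Validator(Protocol):
    def approve(self, version: str) -> bool: ...

class IntegratedModel:
    """A model only creates value once all of its enablers are wired in."""

    def __init__(self, model, pipeline: DataPipeline, monitor: Monitor,
                 registry: ModelRegistry, validator: Validator):
        self.model = model
        self.pipeline = pipeline
        self.monitor = monitor
        self.registry = registry
        self.validator = validator

    def predict(self, features):
        prediction = self.model(features)
        self.monitor.record(prediction)  # Every prediction feeds the monitoring loop.
        return prediction
```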

Third, the role of engineers in particular transitions from building the system to supervising the building of it. The ability to supervise agents, evaluate the generated artefacts and iterate with the agents toward a suitable system is a skill that's quite different from traditional engineering.

Finally, we shouldn’t underestimate the power of the AI learning loop. Companies that reach continuous-ML architectures gain compounding benefits: faster innovation cycles, more accurate models and increasingly personalized or optimized products. This creates competitive moats that late adopters struggle to overcome. So, you really need to get going now, if you’re not in motion already.

Looking forward, there are a few trends and strategic directions that are becoming increasingly important to relate to. The first is that new products should be designed around data and algorithms from the outset. This affects sensor selection, compute architecture, connectivity footprints, diagnostics and safety strategies. We can already see this in how ADAS and AD systems are architected, but AI-first design principles will differentiate next-generation systems in many industries.

Second, the AI learning loop is increasingly the key differentiator for companies. We need the infrastructure that enables automated data collection, retraining, deployment, monitoring and rollback for the models that sit inside physical products. Once we have the learning loop going and are delivering value to customers, slower-moving competitors will have a very hard time catching up to us.
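As a minimal sketch of the monitoring and rollback part of that loop (in Python, with made-up thresholds and KPI values), the deployed model's live KPI can be compared against the previous version and control handed back automatically when it degrades:

```python
from collections import deque

class LearningLoopMonitor:
    """Track a live KPI for the deployed model and signal rollback on degradation."""

    def __init__(self, baseline_kpi, window=500, tolerance=0.05):
        self.baseline_kpi = baseline_kpi    # KPI of the previous, trusted model version.
        self.tolerance = tolerance          # Allowed relative degradation.
        self.recent = deque(maxlen=window)  # Sliding window of field observations.

    def record(self, kpi_sample):
        self.recent.append(kpi_sample)
        if len(self.recent) < self.recent.maxlen:
            return "collecting"
        current = sum(self.recent) / len(self.recent)
        if current < self.baseline_kpi * (1 - self.tolerance):
            return "rollback"               # Hand control back to the previous version.
        return "healthy"

# Usage: feed per-prediction KPI samples coming back from the fleet.
monitor = LearningLoopMonitor(baseline_kpi=0.92, window=3)
for sample in [0.80, 0.81, 0.79]:
    status = monitor.record(sample)
print(status)  # "rollback", as the new model underperforms the baseline
```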

Third, both for economic reasons and to ensure continuous evolution, we need to engage AI agents and move from systems built by humans to system generators. Humans will be concerned with improving the system generators rather than the systems these generate. The toolchains that support this evolution will yield long-term benefits but are, today, incredibly hard to develop.

Finally, we can generate AI-enabled systems, but if safety, traceability and regulatory alignment aren't in place, there won't be a viable market for our products. We need to integrate compliance into our development and operational pipelines, rather than treating it as a post-hoc activity, as this allows for much faster innovation and lower friction.
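As a small illustration of what compliance in the pipeline can mean in practice (a Python sketch; the list of required artifacts is an assumption of mine, not taken from any specific regulation), a deployment step can simply refuse to proceed until the evidence it needs exists:

```python
# The required artifacts below are illustrative assumptions, not a list from any regulation.
REQUIRED_EVIDENCE = {"hazard_analysis", "data_lineage", "model_card", "test_report"}

def compliance_gate(evidence):
    """Block deployment unless all required compliance artifacts are present."""
    missing = REQUIRED_EVIDENCE - set(evidence)
    if missing:
        print(f"Deployment blocked; missing evidence: {sorted(missing)}")
        return False
    return True

# Usage: run the gate in the same pipeline that deploys the model,
# so compliance is checked continuously rather than after the fact.
if compliance_gate({"hazard_analysis", "data_lineage"}):
    print("Deploy to fleet")
```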

The transition to becoming an AI-driven company represents one of the most significant shifts in the history of software-intensive systems. While the complexity of integrating AI into mechanical, electrical, embedded and cloud-based subsystems shouldn’t be underestimated, the potential benefits concerning performance, flexibility, safety, personalization and lifecycle cost are substantial.

Although we call this a transformation, it will likely be a continuous process as technologies evolve, regulations advance and expectations from customers and society change constantly. The key question is no longer whether to embrace AI, but how quickly and effectively the organization can progress from experiments and prototypes to scalable, safe and data-driven systems. It’s at times like these that companies either disrupt their industry or get disrupted by others. It’s up to you to decide who you’re going to be. To end with a quote by Elon Musk: “When something is important enough, you do it even if the odds aren’t in your favor.”

Want to read more like this? Sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch) or X (@JanBosch).