
As we approach the end of 2025, it’s clear that the context in which software-intensive companies operate is shifting faster than ever. Over the year, working closely with businesses in domains such as automotive, telecom, industrial equipment and security, I’ve seen a growing divergence. Some organizations are accelerating, embracing data and AI to transform their products and R&D systems, while others remain stuck in incremental improvements, understandably cautious but increasingly vulnerable to global competition.
Across the Software Center companies and the various startups and scale-ups I engage with, the speed of learning has once again proven to be the most important predictor of future performance. In a world where models evolve weekly, customers expect continuous updates and regulatory requirements tighten, staying with the old way of doing things is the real risk. We need to change faster, but this isn’t about hero culture or pushing people harder. It’s about designing systems – organizational, technical and data-driven – that allow companies to operate in radically different ways.
As an end-of-year reflection, I thought I’d share five technology breakthroughs that made 2025 pretty exciting from my perspective. In addition, I pose five questions that I believe would be good to reflect on during the Christmas break as we prepare for the new year.
1. AI code generation becomes a reliable contributor
During the year, AI-based development tools matured significantly. Based on my own experience, as well as what can be read in the media, such tools now generate 30–40 percent of routine code, test scaffolding and documentation. I know of cases where engineers contribute to code bases that they have little to no experience in and that may even have been written in languages that they don’t know (well). Teams move faster with higher quality, not just because of automation but because engineers get to focus on the hard problems.
2. Automated architecture and system modeling reach practical viability
Several research prototypes and early industrial tools can now generate interface definitions, executable models and alternative architectural designs in software. Not only that, AI-driven co-design environments can now propose alternatives in mechanics, electronics and system control loops. The boundaries between software, electronics and mechanics are starting to become more fluid as these tools identify better designs and allocate functionality more optimally across the different technologies.
3. High-quality synthetic data becomes standard practice
Using real-world data is hard in many use cases, either because of regulatory compliance, such as GDPR and the Data Act, or because of low incidence rates, as with rare events in autonomous driving. Therefore, it’s good to see that we’re getting better at generating domain-specific synthetic data, especially for sensor-heavy products, that closely mirrors real-world data. It allows companies to escape the bottleneck of rare-event collection and avoid violating regulations.
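To make the idea concrete, here is a minimal sketch of rare-event augmentation: generating labeled synthetic sensor readings in which rare events, which would take years to collect in the field, are injected at a configurable rate. The function name, distributions and labels are illustrative assumptions, not taken from any particular tool.

```python
import random

def generate_synthetic_sensor_data(n_samples, rare_event_rate=0.01, seed=42):
    """Generate labeled synthetic sensor readings: mostly nominal values,
    with occasional rare-event spikes injected at a chosen rate.
    Distributions here are placeholders for a domain-specific model."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        if rng.random() < rare_event_rate:
            # Rare event: a large deviation the downstream model must detect.
            value = rng.gauss(5.0, 1.0)
            label = "rare_event"
        else:
            # Nominal operation: readings cluster around the baseline.
            value = rng.gauss(0.0, 0.5)
            label = "nominal"
        samples.append((value, label))
    return samples

data = generate_synthetic_sensor_data(10_000, rare_event_rate=0.02)
rare = [s for s in data if s[1] == "rare_event"]
print(f"{len(rare)} rare events out of {len(data)} samples")
```

The point of the sketch is the knob: instead of waiting for a rare event to occur once per million kilometers, you dial its incidence up to whatever the training process needs, while keeping the nominal distribution realistic.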
4. Real-time compliance validation enters the CI/CD pipeline
With the AI Act, Data Act, cybersecurity requirements and sector-specific regulations intensifying, compliance is shifting from annual audits to continuous validation. Tools that scan artifacts, pipelines and model behavior in real time materially reduce compliance overhead and risk. For example, I’m involved with Kosli, concerning regulations in fintech, and ExplorAI, concerning the AI Act, to develop solutions that reduce manual effort by companies.
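The shape of such a check is simple enough to sketch: a CI/CD stage that validates each release artifact against a set of machine-readable rules and fails the pipeline on any violation. The rules and artifact fields below are hypothetical; real ones would encode the applicable regulation (an AI Act risk class, data-retention limits, and so on) and this is not how Kosli or ExplorAI specifically work.

```python
import sys

# Hypothetical compliance rules; real ones would be derived from the
# applicable regulation and the company's own policies.
RULES = {
    "sbom_present": lambda a: bool(a.get("sbom")),
    "model_card_present": lambda a: a.get("model_card") is not None,
    "no_personal_data": lambda a: not a.get("contains_personal_data", False),
}

def validate_artifact(artifact):
    """Return the names of all rules the artifact violates."""
    return [name for name, check in RULES.items() if not check(artifact)]

artifact = {
    "name": "perception-model-v3",
    "sbom": ["numpy==1.26", "onnxruntime==1.17"],
    "model_card": {"intended_use": "lane detection"},
    "contains_personal_data": False,
}

violations = validate_artifact(artifact)
if violations:
    print(f"Compliance check failed: {violations}")
    sys.exit(1)  # non-zero exit fails the CI/CD stage
print("Compliance check passed")
```

Because the check runs on every commit rather than once a year, a violation surfaces minutes after it is introduced, when it is cheap to fix, instead of during an audit.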
5. The first real AI-first product patterns emerge
In 2025, several companies launched products architected around AI from the outset. Perception, decision-making and user interaction rely on ML components as first-class citizens. These early examples foreshadow a phase where entirely new product categories will emerge – particularly in mobility, industrial automation and engineering tools. Although it’s still early days, we see companies deploying ML into cameras, ECUs, controllers and other constrained devices as the cost and time required to reach production-quality performance have dropped dramatically. Tools that monitor ML model performance in production and trigger safe retraining have become stable enough for early adoption, allowing companies to deploy ML that evolves rather than degrades over time.
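The monitor-and-retrain loop mentioned above can be sketched in a few lines: compare the distribution of a production signal against its training baseline and trigger retraining when the shift exceeds a threshold. The metric here is a deliberately simple mean-shift score in units of the baseline's standard deviation; production systems would use richer tests (Kolmogorov–Smirnov, population stability index) and a gated, safe retraining path, but the control loop is the same.

```python
import statistics

def drift_score(baseline, production):
    """Shift of the production mean from the baseline mean, measured in
    baseline standard deviations. A stand-in for a proper drift test."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(production) - mu) / sigma

def should_retrain(baseline, production, threshold=1.0):
    """Trigger retraining once drift exceeds the configured threshold."""
    return drift_score(baseline, production) > threshold

baseline = [0.10, 0.20, 0.15, 0.18, 0.12, 0.22, 0.17]  # training-time signal
stable = [0.14, 0.19, 0.16, 0.20]     # production looks like training
drifted = [0.60, 0.70, 0.65, 0.72]    # production has shifted

print(should_retrain(baseline, stable))   # False: keep serving
print(should_retrain(baseline, drifted))  # True: trigger safe retraining
```

This is what "evolves rather than degrades" means in practice: the deployed model is wrapped in a loop that notices when the world has moved and responds, instead of silently losing accuracy.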
As I wrote last week, I’m worried about the situation in Europe and the slow changes I’m seeing in industry. As most of us are soon taking some time off during the holiday season, I wanted to share some questions for all of us to reflect on while we’re out, in the hope that 2026 will show a significant acceleration.
Question 1. Where can AI realistically double our productivity and what stands in the way?
This isn’t about ambition but about identifying concrete domains, such as coding, testing, documentation or ML in the product itself, where AI can unlock measurable acceleration. As I shared in [the series during the fall](https://bits-chips.com/article/the-ai-driven-company-conclusion/), AI isn’t only relevant in R&D but should also be used to reinvent business processes as well as the business ecosystems in which we operate.
Question 2. Are we still organized around agile rituals or do we use fast data-driven learning loops?
Many of you will know that I feel that Agile has reached the end of its useful life. Many of the principles, such as fast feedback loops, are still relevant, but rather than practicing Agile mechanically, we need to focus on continuous, data-driven learning loops. The real shift is toward continuous value delivery, fast experimentation and learning from operational data.
Question 3. Do we treat data as a strategic asset with clear ownership, quality control and lifecycle management?
Although the embedded-systems industry increasingly collects vast amounts of data, the actual use of that data is still quite poorly realized in many of the companies I work with. Whenever data and opinions collide, the opinions still win in most cases I’ve been part of. Of course, we need to move toward using data as a strategic asset.
Question 4. How much of our compliance burden can we automate?
Although there’s significant progress in integrating compliance checks into the CI/CD chain, the amount of manual effort in most companies is still incredibly high. Especially in highly regulated industries, the cost of compliance can reach 10 percent of revenue, according to some studies. The only way that I see to reduce cost and risk while enabling faster releases is through automation.
Question 5. What are we doing to increase organizational learning speed?
For all the talk about AI, the fact remains that it’s humans and organizational change that are on the critical path. Adoption is slow, not because the technology isn’t there, but because the people in organizations fail to adopt it at the necessary pace. We need shorter feedback loops, smaller batches, faster experimentation and more autonomy to change faster. Rather than optimizing for efficiency, we need to optimize for learning speed.
2025 confirmed what many of us already sensed: the companies that treat AI and data as structural enablers are pulling ahead. The good news is that the tools are becoming more accessible, the workflows clearer and the potential benefits more quantifiable. But capturing this potential requires more than technology adoption. It demands rethinking how products are conceived, how R&D is organized, how data flows through the organization and how compliance is integrated into the development process. As we head into 2026, the challenge and opportunity for software-intensive companies is the same: design organizations and systems that learn faster than the competition. Those who do will help define the next decade of innovation. To end with a quote from Peter Senge: “A learning organization is an organization that’s continually expanding its capacity to create its future.”
Want to read more like this? Sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch) or X (@JanBosch).