Outdated belief #7: Post-deployment is relevant only for (serious) quality issues


A few decades ago, the first reports were published on software errors resulting in financial losses exceeding 1 billion euros. Since then, many more accounts of software errors costing hundreds of millions or more have been in the news. The response in the larger community was twofold. First, test the heck out of every piece of software going out and take whatever time is needed to achieve that. Second, once a product containing software has left the factory, do everything you can to avoid having to change the software.

The reason to avoid changing software in a shipping product was, again, twofold. First, the cost of validating the software was often very high, not least because significant human effort was required to conduct all the tests necessary to get to production quality. As most development followed a waterfall-style process, the validation phase typically found numerous errors. The subsequent problem was that, statistically, 25 percent of all code changes intended to remove errors introduced new ones, meaning that the list of issues to fix simply kept growing if you weren't careful. So, once you had a shippable version of the software with good quality, you wanted to avoid messing up the code and introducing new defects.

The second reason to avoid updating software post-deployment was that it required either a recall, where the product, eg a vehicle, had to be brought to a service station, or a visit by a service technician to the site where the system was installed, eg a medical scanner. This typically resulted in hundreds of euros of expenses for each instance of the product for every software update. As the business model usually didn't include a way to get paid for these updates, this became a non-recoverable expense that every company wanted to avoid as much as possible.

Over the last decades, obviously, many things have changed to the point that the above is now outdated. Three of the main changes causing this are Agile, SaaS and connectivity. First, the introduction of agile ways of working across many industries brought with it the need to continuously test code and to be at production quality at all times. This has caused a major increase in the adoption of automated testing solutions. These solutions are a classical case of trading OPEX for CAPEX: they're quite costly to put in place, but once you have them, the cost of running tests is orders of magnitude lower than before. As a consequence, it becomes entirely feasible to run tests continuously and to stay at production quality even while the software is under development.
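To make this concrete, below is a minimal sketch of what such an automated quality gate can look like, here using Python and pytest (the function under test and its limits are made up for illustration). In a CI pipeline, a suite like this runs on every commit and a change only lands on the mainline if all checks pass, which is what keeps the code at production quality while it's still under development.

```python
# test_release_gate.py -- minimal, hypothetical regression suite.
import pytest


def parse_speed_limit(raw: str) -> int:
    """Toy function under test: parse and validate a speed limit (km/h)."""
    value = int(raw.strip())
    if not 0 <= value <= 130:
        raise ValueError(f"speed limit out of range: {value}")
    return value


def test_valid_input_is_parsed():
    assert parse_speed_limit(" 100 ") == 100


def test_out_of_range_input_is_rejected():
    with pytest.raises(ValueError):
        parse_speed_limit("999")
```

Writing the checks is the one-time CAPEX; running `pytest test_release_gate.py` on every commit costs next to nothing afterwards.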

Second, cloud-based software-as-a-service (SaaS) offerings adopted continuous deployment and DevOps as the default mode of working. The business model is typically continuous, eg funded by advertising income or subscription fees, meaning that every improvement in the product results in immediate positive business outcomes. Thus, constant evolution and improvement of the offering by adding new features, running A/B tests to optimize them and generally using a continuous stream of data for decision-making is the norm. This has led to users expecting continuous improvement not just in SaaS offerings but in all their products, including mobile devices, computers, televisions, kitchen equipment and vehicles.
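As an illustration of the mechanics, here's a minimal sketch of deterministic A/B assignment in Python (the experiment name, user IDs and 50/50 split are hypothetical). Hashing a stable user ID together with the experiment name means every user consistently sees the same variant without storing any assignment state; the metrics logged per variant then feed the data-driven decision-making described above.

```python
import hashlib


def assign_variant(user_id: str, experiment: str,
                   treatment_share: float = 0.5) -> str:
    """Deterministically map a user to 'A' (control) or 'B' (treatment)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    # Interpret the first 8 bytes as an integer and scale it to [0, 1).
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return "B" if bucket < treatment_share else "A"


if __name__ == "__main__":
    for uid in ("user-1", "user-2", "user-3"):
        print(uid, assign_variant(uid, "new-checkout-flow"))
```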

Third, the cost of connectivity has dropped continuously over the years and there typically is no economic reason not to include some form of it in any product costing a few tens of euros or more. Once systems are connected, the cost of software updates drops to close to zero and, similar to SaaS offerings, it becomes feasible to bring data back from the field to inform future product development. This is becoming increasingly accepted and several of the companies I work with push software to the field regularly and have started forms of A/B testing in embedded systems.
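The same deterministic-hash idea carries over to pushing software to the field: a staged rollout gate can decide, per device, whether it receives a new version yet. The sketch below is a hypothetical Python example (device IDs, version string and rollout percentages are made up); the cohort is widened step by step, with field data reviewed between steps.

```python
import hashlib


def in_rollout(device_id: str, version: str, rollout_percent: int) -> bool:
    """True if this device should install `version` at the current stage."""
    digest = hashlib.sha256(f"{version}:{device_id}".encode()).digest()
    # A device's bucket is fixed per version, so devices updated at an
    # earlier stage remain included as the percentage grows.
    return int.from_bytes(digest[:2], "big") % 100 < rollout_percent


if __name__ == "__main__":
    devices = [f"device-{i}" for i in range(1000)]
    for pct in (1, 10, 100):  # eg 1% canary, 10% early wave, then everyone
        n = sum(in_rollout(d, "fw-2.4.1", pct) for d in devices)
        print(f"{pct:3d}% stage: {n} of {len(devices)} devices update")
```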

For the companies that haven't yet adopted continuous deployment, the primary inhibitor is the lack of a business model that allows them to generate revenue from shipping improvements to the field. However, there's a clear trend here as well: companies are complementing exclusive monetization through the initial sale with all kinds of continuous business models. Most higher-end car brands offer complementary services with an associated monthly fee, as do the major mobile phone brands. The one area where the adoption of DevOps has been inhibited is products requiring some form of certification, but changes are rapidly materializing there as well.

Continuous post-deployment updates of software in products in the field are becoming the new normal in many industries, and where this hasn't happened yet, the trends are clearly pointing in that direction. This is a good thing for producers and users alike, as newly developed functionality sitting unused in your software repository is a humongous waste, similar to factories keeping large inventories of parts: it locks up dead capital and doesn't do anyone any good. So, for all of you working on an offering, how can you make sure it gets better every day it gets used?

To get more insights earlier, sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch), Medium or Twitter (@JanBosch).