The traditional approach to software development is to first agree on what to build, then implement the functionality and finally test and fix defects until the quality reaches an acceptable level. As anyone who has spent time in this field knows, no software is ever flawless, but the goal is to remove the critical and major defects and bring the number of minor defects down to a reasonable level.
As part of the continuous value delivery to customers, we need to push out updates to systems in the field very regularly, i.e., at least every four weeks but potentially much more frequently. Being able to do so without causing major quality issues requires a different approach to system quality. The basic principle is that the software, as well as the system as a whole, is always at production quality and we never allow the quality to drop below that level.
To achieve this, for every check-in of new code to the source code control system, we need to verify that it doesn’t cause any quality concerns. This requires that we build the new executable and then test it to ensure that no defects were introduced, meaning that the preexisting functionality still works as intended and that the new code works as intended.
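A minimal sketch of such a per-commit quality gate, written in Python, might look as follows. The build and test commands are assumptions standing in for your project's real entry points (e.g. a Make target or a CI job):

```python
import subprocess

# Minimal per-commit quality gate sketch: build the new executable, then run
# the test suite against it. The check-in is accepted only if both succeed.
# The default commands ("make", "make test") are placeholder assumptions;
# substitute your project's actual build and test invocations.
def commit_gate(build_cmd=("make",), test_cmd=("make", "test")) -> bool:
    """Return True only if the check-in builds and passes all tests."""
    if subprocess.run(build_cmd).returncode != 0:
        return False  # build broke: reject the check-in immediately
    return subprocess.run(test_cmd).returncode == 0
```

In practice this gate runs on a CI server rather than on the engineer's machine, so every contribution is verified in the same controlled environment.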
Of course, for any system of non-trivial complexity, building and testing it takes longer than the average time between check-ins from the various teams and their engineers. In response, most companies adopt a staged model in which each check-in is tested to a limited extent to catch the most obvious mistakes. Subsequent testing activities then combine multiple new contributions and test the system with those included.
A useful way to visualize the complete set of testing activities between individual engineers checking in code and releasing it to the customer is the Continuous Integration Visualization Technique (CIVIT). In the example below, each box indicates one testing activity. Here, after acceptance testing (the leftmost column of activities), testing is performed every hour, every day and every week. Of course, the test suite for the one-hour test is much smaller than the daily and weekly ones, but the idea is to continuously build up confidence in the quality of the code. In this example, the release organization still conducts separate, manual testing before releasing the software, but depending on the criticality of the system, it's also possible to release the software automatically once it passes the most comprehensive test activity.
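The cadence structure described above can be sketched as a cumulative suite selection, where each longer-running activity includes everything the shorter ones cover and more. The suite names and schedule below are illustrative assumptions, not part of CIVIT itself:

```python
from datetime import datetime

# CIVIT-style cumulative cadence sketch: the hourly run uses the smallest
# suite, while the daily and weekly runs layer progressively larger suites
# on top of it, continuously building confidence in the code.
# Suite names and the midnight/Sunday schedule are illustrative assumptions.
SUITES = {
    "hourly": ["smoke"],
    "daily":  ["smoke", "regression"],
    "weekly": ["smoke", "regression", "performance", "reliability"],
}

def suites_for(now: datetime) -> list:
    """Pick the test suites to run at a given point in time."""
    if now.weekday() == 6 and now.hour == 0:  # Sunday midnight: weekly run
        return SUITES["weekly"]
    if now.hour == 0:                         # any other midnight: daily run
        return SUITES["daily"]
    return SUITES["hourly"]                   # every other hour: smoke only
```

The key design choice is that the larger suites are supersets of the smaller ones, so a defect caught weekly can later be guarded by promoting its test into a more frequent suite.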
One aspect not shown is the root-cause analysis process included in this way of working. Every defect that slips through to the field is analyzed from the perspective of improving the test suite to ensure that this defect or similar ones will be caught going forward. Working in this way allows companies to transition from a situation where they’re managing hundreds or even thousands of known defects to a situation where the number of known defects in the field is in the single digits.
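One way to make this root-cause loop concrete is to track every escaped defect and refuse to close it until a regression test guarding against it has been added. The following is a minimal sketch under that assumption; all field and function names are hypothetical:

```python
# Sketch of the root-cause analysis loop for escaped defects: each defect
# found in the field stays open until a regression test covering it (or
# similar defects) has been added to a suite. Names are illustrative.
escaped_defects = []

def report_field_defect(defect_id: str, description: str) -> None:
    """Record a defect that slipped through to the field."""
    escaped_defects.append({"id": defect_id, "desc": description, "test": None})

def close_with_test(defect_id: str, new_test: str) -> bool:
    """A defect may only be closed by linking a test that guards against it."""
    for d in escaped_defects:
        if d["id"] == defect_id:
            d["test"] = new_test
            return True
    return False

def open_defects() -> list:
    """Defects still awaiting a root-cause analysis and a guarding test."""
    return [d["id"] for d in escaped_defects if d["test"] is None]
```

Enforcing the "no close without a test" rule is what drives the known-defect count from hundreds down toward single digits over time.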
The second process that needs to be instantiated in this context is the maintenance of the test suites. This includes maintaining traceability between requirements and test cases, assigning each test case to the most appropriate suite (run more or less frequently), removing obsolete and duplicate test cases, and addressing flaky tests.
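Flaky tests in particular can be identified mechanically: a test that both passes and fails across repeated runs of identical code is neither a reliable pass nor a genuine failure. A minimal rerun-based classifier might look as follows; the `run` callable is an assumption standing in for your real test executor:

```python
# Rerun-based flaky-test detection sketch: execute the same test several
# times against unchanged code and compare outcomes. Mixed results mean
# the test is flaky and should be repaired or quarantined.
# `run` is a hypothetical stand-in for the actual test executor.
def classify(run, attempts: int = 5) -> str:
    """Classify a test as 'pass', 'fail' or 'flaky' from repeated runs."""
    results = {run() for _ in range(attempts)}  # set of distinct outcomes
    if results == {True}:
        return "pass"
    if results == {False}:
        return "fail"
    return "flaky"  # both outcomes observed on identical code
```

Quarantining flaky tests keeps them from eroding trust in the hourly and daily suites while they await repair.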
Digitalization implies more frequent, if not continuous, delivery of value to customers. The main carrier of value is software, resulting in DevOps or continuous deployment. To ensure quality, we need to establish and maintain a build and test infrastructure that limits the number of quality issues that slip through to the field. This requires not only the infrastructure itself but also supporting processes such as root-cause analysis and test case maintenance. As John Ruskin said: “Quality is never an accident. It’s always the result of intelligent effort.”