Building software-intensive systems from scratch is far from trivial. One of the main reasons is that it’s hard to capture concisely and precisely what functionality the system should provide. Even if all stakeholders individually have a clear understanding of what they want, that doesn’t mean their expectations are aligned. In many cases, there are conflicting needs and wishes that have to be managed.
Traditionally, this is addressed by embarking on a requirements elicitation process in which the needs and wishes of all stakeholders are captured and conflicts are identified and resolved. The result is a clear and crisp requirements specification that precisely states what the system should provide in terms of functional and non-functional behavior.
The challenge with requirements engineering is that it has at least three inherent problems. First, the assumption that capturing requirements and then building the system will result in everyone being happy is fundamentally false as requirements change all the time. The rule of thumb in software engineering is that requirements change at a rate of 1 percent per month. For a complex system with 10,000 requirements (not atypical for the industries I work in), that means 100 new or changed requirements per month. Assuming a development project lasting 36 months (a typical car project), that means 3,600 new and changed requirements – an overhaul of more than a third of the requirements.
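To make the arithmetic explicit, here’s a minimal sketch. The 1 percent per month rate and the project figures are the rule-of-thumb numbers from the paragraph above, not measured data:

```python
# Rule-of-thumb requirements churn, using the figures from the text.
TOTAL_REQUIREMENTS = 10_000    # a complex system, not atypical in automotive
MONTHLY_CHANGE_RATE = 0.01     # roughly 1 percent of requirements change per month
PROJECT_MONTHS = 36            # a typical car project

changed_per_month = TOTAL_REQUIREMENTS * MONTHLY_CHANGE_RATE    # 100
changed_over_project = changed_per_month * PROJECT_MONTHS       # 3,600

print(f"{changed_over_project:.0f} new or changed requirements "
      f"({changed_over_project / TOTAL_REQUIREMENTS:.0%} of the specification)")
```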
Second, there’s the assumption that requirements can precisely capture the required functionality of the system. In practice, however, requirements capture only a small part, and 90 percent or more of what the system should do is left to the interpretation of the engineers building it. The standard response has been to write more elaborate specifications, describing things in more detail. This doesn’t work in practice, as it only adds more statements that can be freely interpreted by others. It tends to lead to lots of discussion and disagreement about the correct interpretation of each requirement.
Third, requirements describe what customers think they want, but not what they really use and benefit from in practice. In requirements engineering, one of the cardinal rules is that requirements should specify problem domain functionality and avoid describing solution domain functionality. In practice, however, even the requirements in the problem domain already describe solutions based on stakeholder understanding of what a solution should look like. There’s ample evidence of the gap between espoused theory (what people say they do) and theory-in-use (what people really do). Requirements tend to capture the “espoused” theory as they’re based on what stakeholders say they want.
So, the first outdated belief in software is that requirements matter and are critical for product success. As you might gather, I’m not at all convinced that this is the case. What should we do instead? Although I won’t provide a ready-made solution, the overall idea is to move away from the notion that the system is defined, built and put into operation, and to accept that any software-intensive system will be in perpetual beta, meaning it continues to evolve for its entire economic life. In many domains, it’s better to accept that the system’s full scope will only become clear over time, that things change constantly and that there are better ways than upfront requirements to manage the growth of functionality. At least three principles go a long way toward addressing this challenge.
The first is to focus on outcomes rather than requirements. In earlier posts, I’ve talked about value modeling as an approach to describing the measurable outcomes we’re looking to accomplish. Quantitatively described outcomes have several benefits over traditional requirements. First, the desired outcomes tend to be much more stable than requirements. Second, the precision of these outcomes forces any conflicts between stakeholders to surface so that they can be addressed and a decision can be made. Third, quantification forces stakeholders to put relative priorities on the desired outcomes, stating which ones are more important than others.
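As a rough illustration, a quantitative value model can be as simple as a set of measurable outcomes with targets and relative weights. The metric names and numbers below are invented for the example, not taken from an actual value model:

```python
# A minimal value-model sketch: measurable outcomes with targets and
# relative priorities (weights). All names and figures are illustrative only.
value_model = {
    "monthly_active_users":  {"target": 50_000, "weight": 0.40, "higher_is_better": True},
    "crash_rate_per_1k_hrs": {"target": 0.5,    "weight": 0.35, "higher_is_better": False},
    "energy_use_kwh_per_km": {"target": 0.15,   "weight": 0.25, "higher_is_better": False},
}

def value_score(measurements: dict) -> float:
    """Weighted score of measured outcomes against their targets (1.0 = all targets met)."""
    score = 0.0
    for name, spec in value_model.items():
        measured = measurements[name]
        ratio = measured / spec["target"] if spec["higher_is_better"] \
                else spec["target"] / measured
        score += spec["weight"] * min(ratio, 1.0)  # cap so one metric can't mask another
    return score

print(value_score({"monthly_active_users": 42_000,
                   "crash_rate_per_1k_hrs": 0.6,
                   "energy_use_kwh_per_km": 0.14}))
```

The weights make the relative priorities explicit, and putting numbers on the targets is exactly what forces conflicting stakeholder expectations to surface early.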
The second principle is that if we accept that we can’t know all the requirements before the start of development and that some are even unknowable, it’s much better to take an evolutionary approach with small steps. This allows us to work experimentally: rather than talking about requirements, we use the notion of hypotheses. A hypothesis is an unproven statement about the world that can be tested with an experiment. In our context, we state that building some functionality will have a measurable (and typically positive) impact on one or more of our desired outcomes, with an acceptable negative impact on the others. By building a slice of the functionality and measuring the effect, we develop a better understanding of the relationship between the functionality and its measurable impact on system and user behavior. In earlier work, we developed the HYPEX model, which describes this in more detail.
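A minimal sketch of what such a hypothesis might look like in code follows. The fields and the accept/reject rule are my own simplification for illustration, not the HYPEX model itself, and the feature and numbers are invented:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """An unproven, testable statement: building <feature> moves <outcome_metric> by <expected_gain>."""
    feature: str
    outcome_metric: str               # one of the outcomes in the value model
    expected_gain: float              # e.g. 0.05 means a 5% improvement is expected
    max_acceptable_regression: float  # tolerated negative impact on other outcomes

def evaluate(h: Hypothesis, measured_gain: float, worst_side_effect: float) -> str:
    """Compare the measured effect of a feature slice against the hypothesis."""
    if worst_side_effect > h.max_acceptable_regression:
        return "reject: side effects too large"
    if measured_gain >= h.expected_gain:
        return "accept: keep and extend the feature"
    return "inconclusive: refine the slice or drop the feature"

h = Hypothesis(feature="one-click checkout",
               outcome_metric="conversion_rate",
               expected_gain=0.05,
               max_acceptable_regression=0.01)
print(evaluate(h, measured_gain=0.03, worst_side_effect=0.004))
```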
Third, in those cases where you can’t get away from specifying the functionality of part of the system, capture the desired functionality in test cases that can be executed automatically. This approach is used in one of the Software Center companies and works well for them. The reasons for its success are twofold. First, it forces an in-depth discussion between stakeholders, product management and development, which minimizes the waste caused by rework. Second, as most companies use a continuous integration and test approach, once the test case is part of the test suite, any later change to the system that causes the case to fail is caught immediately and fixed right away. This ensures continuous feature growth without breaking existing functionality.
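As an illustration of what requirements-as-executable-tests can look like, assuming a standard pytest-based continuous integration suite, here is a hedged sketch. The shipping rule and the `calculate_shipping` function are hypothetical, purely to show the shape such a test takes:

```python
# Hypothetical executable specification: instead of the prose requirement
# "an order of 100 EUR or more ships for free", the agreement is captured
# as a test that runs on every commit.
import pytest

def calculate_shipping(order_total_eur: float) -> float:
    """Stand-in implementation; in practice this lives in the product code."""
    return 0.0 if order_total_eur >= 100.0 else 4.95

@pytest.mark.parametrize("order_total, expected_shipping", [
    (99.99, 4.95),   # just below the threshold: standard shipping fee
    (100.00, 0.0),   # at the threshold: free shipping
    (250.00, 0.0),   # well above the threshold: still free
])
def test_free_shipping_above_threshold(order_total, expected_shipping):
    assert calculate_shipping(order_total) == expected_shipping
```

Once a test like this is in the suite, any later change that breaks the agreed behavior fails the build immediately, which is what keeps the feature growth continuous.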
I’m of course aware that many industries, such as defense and automotive, have set up their business ecosystem interfaces around requirements specifications and that changing this will take time. Still, even in these industries, the first examples of agile contracting and subscription models that allow for continuous approaches are appearing. So, we’re moving away from traditional requirements-driven approaches and adopting more continuous, outcome-based and data-driven ones. Where would you like to be? Desperately holding on to outdated beliefs and practices or proactively inventing the future? I know what I prefer!
To get more insights earlier, sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch), Medium or Twitter (@JanBosch).