{"id":1859,"date":"2024-06-17T12:15:44","date_gmt":"2024-06-17T12:15:44","guid":{"rendered":"https:\/\/janbosch.com\/blog\/?p=1859"},"modified":"2024-06-17T12:15:45","modified_gmt":"2024-06-17T12:15:45","slug":"from-agile-to-radical-experiment","status":"publish","type":"post","link":"https:\/\/janbosch.com\/blog\/index.php\/2024\/06\/17\/from-agile-to-radical-experiment\/","title":{"rendered":"From Agile to Radical: experiment"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/janbosch.com\/blog\/wp-content\/uploads\/2024\/06\/experiment-5594881_1920-1024x683.jpg\" alt=\"\" class=\"wp-image-1861\" srcset=\"https:\/\/janbosch.com\/blog\/wp-content\/uploads\/2024\/06\/experiment-5594881_1920-1024x683.jpg 1024w, https:\/\/janbosch.com\/blog\/wp-content\/uploads\/2024\/06\/experiment-5594881_1920-300x200.jpg 300w, https:\/\/janbosch.com\/blog\/wp-content\/uploads\/2024\/06\/experiment-5594881_1920-768x512.jpg 768w, https:\/\/janbosch.com\/blog\/wp-content\/uploads\/2024\/06\/experiment-5594881_1920-1536x1024.jpg 1536w, https:\/\/janbosch.com\/blog\/wp-content\/uploads\/2024\/06\/experiment-5594881_1920.jpg 1920w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Image by Gerd Altmann from Pixabay<\/figcaption><\/figure>\n\n\n\n<p>One of the worst misconceptions in software engineering is the assumption that if we build software based on a requirement specification, test it according to the spec and deliver it to our customers, we\u2019ve delivered value to these customers. 
This may be the case when a small team of consultants develops software for a single, competent customer, but when things scale in terms of the number of customers, teams and features, it rapidly becomes much less clear what constitutes value.<\/p>\n\n\n\n<p>Our research, as well as research by others, shows that roughly half to two-thirds of all new features are never or hardly ever used. Consequently, the R&amp;D effort invested in building these is a complete waste. How do we end up in a situation where half or more of all new features are simply waste? What happens in companies that causes work to be prioritized in such a way that we do the wrong things?<\/p>\n\n\n\n<p>This was the core of the <a href=\"https:\/\/bits-chips.nl\/article\/strategic-digital-product-management\/\">product management post series<\/a> where we explored this challenge as well as solution approaches. However, in my experience, at heart, the main issue is that the so-called experts in the company or at customers simply have an incorrect mental model concerning the impact of the activities they\u2019re advocating. Assuming good intent, which I think is reasonable in most contexts, they push for R&amp;D efforts that they think will have the best possible impact on the outcomes. The problem is that these assumptions are mostly, or at least partially, wrong.<\/p>\n\n\n\n<p>There are at least three reasons why our beliefs about the impact of R&amp;D efforts are incorrect: fuzzy definitions, lack of feedback loops and politics. First, most companies I work with, when asked what value their offering provides to customers, respond with rather imprecise descriptions. These descriptions tend to fall into the \u201cworthwhile many versus vital few\u201d trap in that the aspects offered up by the people I talk to are relevant but not always the highest-priority \u201cvital few\u201d factors. In addition, the descriptions tend to be qualitative in nature and not very precise in definition. 
For instance, one company I work with referred to \u201ccustomer confidence,\u201d meaning the customer wouldn\u2019t have to worry about the problem their offering claimed to solve. Finally, there often is no willingness to trade off between various factors or aspects. For instance, how much \u201ccustomer confidence\u201d are we willing to sacrifice to gain market share? Or how are price and quality related to each other?<\/p>\n\n\n\n<p>Second, although we tend to make predictions about the impact of new functionality, sometimes very precise ones, we seldom go back afterward to determine whether they were indeed accurate. Instead, we\u2019re busy pitching the next set of features we want to see built and lobbying for these to be prioritized. Without a feedback loop, there\u2019s no learning. Consequently, we\u2019re stuck in our flawed world model and never get corrected.<\/p>\n\n\n\n<p>Third, companies are made up of humans, and humans constantly maneuver a social grid in which most people in our network need to get what they ask for at least occasionally, lest they become our enemies or at least detractors. As a result, in many contexts, I\u2019ve noticed that ideas, concepts and features get prioritized even though almost everyone knows that they won\u2019t have any impact whatsoever and are, in fact, pure R&amp;D waste. The work is prioritized only to placate certain individuals. As long as the game is played based on these principles, it\u2019s impossible to increase R&amp;D effectiveness.<\/p>\n\n\n\n<p>To address this, we need to accept a very basic principle: most of the time, we have very little, if any, idea about the impact a feature or function will have. 
Our offerings tend to operate in quite complex contexts with other systems and are used by humans who are difficult, if not impossible, to predict and who have a huge internal discrepancy between what they say they do (espoused theory) and what they actually do (theory-in-use). It feels incredibly uncomfortable to admit that some things simply are unknowable, especially as we\u2019re expected to be experts in our field and our reputation depends on acting the part. It is, however, the starting point of any recovery process. Until we admit that the impact of our R&amp;D efforts is unknowable, we can\u2019t make progress toward resolving this issue.<\/p>\n\n\n\n<p>Once we admit that things are unknowable, the next step is to stop viewing the world in terms of requirements and instead start to look at it in terms of hypotheses and experiments. Rather than prioritizing requirements, our role is to collect and define hypotheses. These tend to be of the form \u201cif we build this, the effect will be that.\u201d The next step is to prioritize our hypotheses for evaluation. Evaluation takes place through experiments.<\/p>\n\n\n\n<p>Although we tend to think about experiments in terms of the scientific method and hard, quantitative data, in this context it\u2019s preferable to look at them as techniques to iteratively build more confidence in the hypothesis. So, initially, an experiment may take the form of \u201cpresent the idea to ten customers and gauge whether there\u2019s sufficient interest.\u201d In this case, we can state beforehand that at least six have to claim that this is of interest and something they\u2019re willing to pay for.<\/p>\n\n\n\n<p>Once we\u2019ve established that customers say they\u2019re interested in a specific feature, the next step becomes measuring their actual behavior. Here, an experiment can be a small-scale prototype or an A\/B test where only a few people are exposed to the feature. 
This allows us to measure at a small scale whether people do what they say they would do. If this is also successful, based on success criteria developed before the experiment is conducted, we can scale things up.<\/p>\n\n\n\n<p>A scaled-up approach can, for example, be a full-fledged A\/B test where a significant percentage of customers and users is exposed to the new feature with the intent of measuring engagement and behavior. If a large-scale A\/B test is successful, we\u2019ve established that the feature is viable and relevant and that it needs to be productized (beyond the point that the A\/B test required).<\/p>\n\n\n\n<p>Half of all R&amp;D effort is waste because we prioritize the wrong things for development. This is because we\u2019re not clear on what we\u2019re seeking to accomplish, we don\u2019t learn from past mistakes and misconceptions and we\u2019re subject to politics and social forces, causing us to prioritize efforts we know are wasteful from the beginning. Instead, we have to start with the belief that the impact of new features and functions is, by and large, unknowable. That insight then leads us to work with hypotheses and experiments instead of requirements. Rather than one experiment, we encourage a sequence of experiments that incrementally build up confidence in the validity of the hypothesis. To end with a quote by John Finley: \u201cMaturity of mind is to endure uncertainty.\u201d<\/p>\n\n\n\n<p><em>Want to read more like this? 
Sign up for my newsletter at\u00a0<a href=\"mailto:jan@janbosch.com\">jan@janbosch.com<\/a>\u00a0or follow me on\u00a0<a href=\"https:\/\/janbosch.com\/blog\">janbosch.com\/blog<\/a>, LinkedIn (<a href=\"https:\/\/www.linkedin.com\/in\/janbosch\/\">linkedin.com\/in\/janbosch<\/a>),\u00a0<a href=\"https:\/\/janbosch.medium.com\/\">Medium<\/a>\u00a0or Twitter (<a href=\"https:\/\/twitter.com\/JanBosch\">@JanBosch<\/a>).<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>One of the worst misconceptions in software engineering is the assumption that if we build software based on a requirement specification, test it according to the spec and deliver it to our customers, we\u2019ve delivered value to these customers. This may be the case when a small team of consultants develops software for a single, &#8230; <a title=\"From Agile to Radical: experiment\" class=\"read-more\" href=\"https:\/\/janbosch.com\/blog\/index.php\/2024\/06\/17\/from-agile-to-radical-experiment\/\" aria-label=\"Read more about From Agile to Radical: experiment\">Read 
more<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"generate_page_header":"","footnotes":""},"categories":[4,8,3],"tags":[],"_links":{"self":[{"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1859"}],"collection":[{"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/comments?post=1859"}],"version-history":[{"count":2,"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1859\/revisions"}],"predecessor-version":[{"id":1862,"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1859\/revisions\/1862"}],"wp:attachment":[{"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/media?parent=1859"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/categories?post=1859"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/janbosch.com\/blog\/index.php\/wp-json\/wp\/v2\/tags?post=1859"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}