A decade or so ago, Microsoft analyzed the results of its experiments and learned that roughly 65–70% of ideas failed to improve the metric they were intended to improve.
This problem still lingers today. Worse, many teams don’t recognize they are failing — their metrics tell them they are successful. Odds are this is you.
“Every feature built by a software team is built because someone believes it will have value, yet many of the benefits fail to materialize.” — Microsoft ThinkWeek paper
I once held a meeting with my client and boldly proclaimed they were not providing any value. You can imagine their reaction. It rattled them and provoked a response laden with confusion, anger, and defensiveness.
They worked hard every day doing the work laid out before them. They delivered the requirements on time. They were following their leadership’s guidance. They cared about the users. How dare I suggest they weren’t providing any value.
My assertion sparked a two-week conversation that led to a change in how they approached product management. It was their inflection point.
A false sense of success is dangerous. The common metrics leveraged by software development teams, often labeled as agile metrics, are a trap. How success is commonly measured is misleading.
If you, like many teams, are measuring success at the end of your iteration by how well you delivered your commitments, you may not be delivering the value you think you are.
To help you find your inflection point, I will break down the value dilemma and give you the key to unlocking this enigma.
The Value Dilemma
In 2001, the Agile Manifesto said we need to deliver valuable software. Year after year, the respondents of the Annual State of Agile Survey proclaim business value is one of their top success measures. The question is, what does business value mean and how do you measure it?
Value isn’t in what you do. Value is in what is achieved as a result of what you do.
Modern software development teams talk frequently about delivering business value. The problem is that their focus tends to stop at delivery. Because value is what is achieved as a result of what you do, the awareness and measurement of that value sit beyond where most teams operate.
That’s the dilemma — value can only be validated post-deployment and many development teams’ measures of success often stop at deployment.
If your teams aren’t following the outcome of their deployments to where value is validated, how are they measuring value delivered? One way teams do this is via Business Value Points.
Measuring Value Via Business Value Points
Business Value Points (BVPs) are gaining momentum in my part of the world. That's interesting, since articles have been written about BVPs since at least 2009. Nevertheless, it was still a topic at the 2021 Minimum Viable Conference put on by the Agile Alliance.
BVPs are subjective values assigned to proposed work items that help rank priorities based on relative perceived value.
Proponents, like the speaker at the Minimum Viable Conference, admit BVPs are used in lieu of trying to estimate real, tangible value because they're easier. BVPs are also better than straight prioritization because they weight the proposed work, putting items in relative order much like stories are estimated with story points.
Tracking business value with subjective points is easy to do. The Product Owner or the team collectively assigns value points to each work item. The team sums the points of all the work committed to for a sprint or iteration. The number of points delivered compared to the points planned is the percentage of business value delivered.
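As a minimal sketch, the arithmetic behind this practice looks like the following. The work items and point values here are hypothetical, invented purely to illustrate the calculation:

```python
# Hypothetical sprint backlog with subjectively assigned Business Value Points.
planned = {
    "export to CSV": 8,
    "faster search": 13,
    "dark mode": 5,
}
delivered = {"export to CSV", "faster search"}  # "dark mode" slipped

points_planned = sum(planned.values())                 # 26
points_delivered = sum(planned[i] for i in delivered)  # 21
pct = 100 * points_delivered / points_planned

print(f"Business value 'delivered': {pct:.0f}%")  # prints 81%
```

Note that the resulting percentage describes only plan completion; nothing in the calculation touches what the delivered items actually achieved.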
The quest to have a quantifiable way to measure the value delivered using BVPs follows the Streetlight Effect: we search where the light is, not where the answer lies. It's easy to measure. It's easy to understand.
But Business Value Points do not represent actual business value. They do not indicate whether you improved the outcome you intended to improve. All they indicate is you delivered a percentage of the total estimated value points you planned to deliver.
The problem isn’t so much the subjective estimation and ranking of work — there is still value in that. The problem is declaring the business value earned at delivery.
Delivery of planned commitments is not business value.
If value isn’t in what you do, then delivering all your commitments on time, and all the effort involved in achieving that, isn’t value. Tracking business value points instead of story points doesn’t change anything.
When you measure success at delivery, you’re measuring success by what you do — what you deliver. Whether you’re measuring stories, story points, or value points delivered, the measure is more about planned versus actual than the value of what was delivered.
Value Can Only Be Known Post-deployment
When Microsoft surmised two-thirds of ideas failed to improve the metric they intended to improve, they did so by validating the impact of a change post-deployment.
The value wasn’t in delivering the work. Despite how confident someone was about the idea pre-development, the value was in what was achieved as a result of what they delivered.
The Agile Manifesto said, “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
Who decides what’s valuable? The customer or the user decides what’s valuable.
When do they know if it’s valuable? After validating what is achieved as a result of what was delivered — post-deployment.
Microsoft learned that we are mostly wrong in predicting the business value of our ideas. That means what you score as an 8 in BVPs before development isn’t likely to be as impactful post-deployment.
This demonstrates why treating requirements as guaranteed value is dangerous. We should have the mindset that we're working from assumptions that need to be validated.
How Should You Measure Value?
Traditional return on investment (ROI) calculations can be difficult and time consuming, and they're not always worth the effort. Not all work has a financial impact. How do you measure the ROI of risk reduction, policy adherence, or customer satisfaction?
Knowing real, tangible value is hard. Even if you have valid numbers to work with, judgment is still required to assess relative value and priority. Some level of subjectivity will always be involved.
The key to understanding how to measure value is knowing that value can only be known post-deployment.
Did your output have the outcome you intended it to have? Did it move the business measure closer to its target? Do you know how to measure that?
Whenever possible, use real before-and-after data.
You’re spending $x now and the proposed change should reduce costs by n%. You can measure that impact over time to validate the value delivered.
You’re receiving 100 support calls on a related topic and you want to eliminate those by improving something within the user experience. You can measure that. If you dig deeper, you can pull the cost avoidance out of that as well.
Did user feedback improve after the change? Did your risk scores go down?
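The support-call example above can be sketched as a simple before/after calculation. All figures here are illustrative, including the per-call handling cost, which you would pull from your own support operation:

```python
# Hypothetical before/after validation of a UX change intended to
# reduce support calls on a given topic.
calls_before = 100       # monthly calls before the change
calls_after = 35         # monthly calls observed post-deployment
cost_per_call = 12.50    # assumed fully loaded cost of handling one call

calls_avoided = calls_before - calls_after
monthly_cost_avoidance = calls_avoided * cost_per_call

print(f"Calls avoided per month: {calls_avoided}")          # prints 65
print(f"Cost avoidance: ${monthly_cost_avoidance:.2f}/mo")  # prints $812.50/mo
```

The point is less the arithmetic than the timing: the inputs only exist after deployment, which is exactly where value is validated.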
Real, tangible ROI is always preferred. It's not always worth the effort to calculate exactly, but don't let that give you an easy out either.
Business Value Points are subjective. You believe one thing is more valuable than another. You may have some data to support your belief. You may not have any data at all; it just seems logical, or your experience and knowledge lead you in a particular direction.
These spider-senses are valid. Your knowledge and experience are valuable. Applying your subjective assessment to BVPs is valuable IF you assess your scoring of the features post-deployment.
What score would you give the thing now that it’s been in the user’s hands for a while? Does it seem to provide the value you intended? Even if you don’t have solid data, how is your post-assessment compared to your pre-assessment? How do you learn from that?
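One lightweight way to run this exercise is to record both scores and compare the deltas once a feature has been in users' hands for a while. The features and scores below are hypothetical:

```python
# Hypothetical pre-development vs. post-deployment BVP re-scoring.
scores = {
    "export to CSV": {"pre": 8, "post": 3},   # overestimated
    "faster search": {"pre": 5, "post": 8},   # underestimated
    "dark mode": {"pre": 13, "post": 2},      # overestimated
}

for feature, s in scores.items():
    delta = s["post"] - s["pre"]
    print(f"{feature}: pre={s['pre']}, post={s['post']}, delta={delta:+d}")

overestimated = sum(1 for s in scores.values() if s["post"] < s["pre"])
print(f"Overestimated {overestimated} of {len(scores)} features")
```

Even without hard data behind the post scores, the pattern of deltas over time is the feedback loop most teams using BVPs never close.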
Other Measurement Considerations
I want to be clear: other measures are still valuable. Deployment frequency, cycle time (the time it takes to move ideas through the development cycle), and team health are leading indicators, meaning improvements in these operational areas are expected to indirectly support the delivery of value.
Be careful what you measure, though. Productivity measures can easily become detached from outcome measures. The more your success measures require you to follow the outcome weeks or months post-deployment, the better.
It’s worth repeating. Value isn’t in what you do. Value is in what is achieved as a result of what you do.
Development team success measures need to look beyond the delivery date. They should be accountable for the outcomes.
You can deliver all commitments on time with perfect quality. You can deploy daily. You can continuously improve your processes. But if what you deliver isn’t achieving the business objectives, where’s the value in that?
Do you use Business Value Points today? Do you claim business value earned based on what you delivered compared to what you planned? If so, I encourage you to revisit your assessment weeks or months later on that same work. Compare the post-deployment assessment to your pre-development assessment. Feel free to reach back to me and report your findings.
I write about strategy, business, leadership, product development, and Agile. Challenge what you know. Get blog updates and short weekly insights in your inbox — subscribe on this page.