Measuring the Business Value of Technology Investments

I’ve spent my entire technical management career searching for the answer to one of the great mysteries: “Am I getting good value for my technology spend?” While I have yet to discover a practical way to answer this question, my experience has taught me some rules of thumb that seemed to calm the doubts, at least for a while.

First, shipping more often is ALWAYS perceived as going faster. So more often than not, the shift to Agile, with its emphasis on showing (and shipping) working software, helps demonstrate value. That trend culminates in “continuous delivery,” which is both the ideal and increasingly commonplace.

Second, while we almost always have to choose between scope and schedule, organizations never really remember if you missed your scope. However, they ALWAYS remember when you miss your schedule. I’ve always found it ironic that the word “execution” has such a poignant double meaning when managing your reputation as an engineering leader.

The search for the holy grail is ongoing. But as I think more about it, I’m reminded of a great discussion held at an Agile Transformation meetup I attended several years ago. The topic was metrics. And a lively brainstorming session produced a variety of organization and team-based measurements that together told the Agile maturity story of a given organization.

As we talked through the totality of the list, we noted how dangerous any one measurement can be because of the “law of unanticipated consequences.” We also discussed the overwhelming feeling that imposing too many metrics can bring to a team. We were able to reduce the list to a set of balanced metrics that certainly ring true but also leave a lot to the imagination in translating them into practical, measurable outcomes:

  • Cycle Time, or Time-To-Value
  • Value
  • Quality

Each of these is a blog post of its own, but one in particular jumped out at me because the discussion suddenly clarified a new (to me) idea. And so, the remainder of this monologue will be about measuring “value.”

In principle, measuring the new business value delivered by your engineering investment is simple — just watch to see if your revenue goes up, right? Unfortunately, in practice, I’ve noticed that there are lots of reasons why you can’t separate the signal from the noise:

1. Many things (pricing, promotions, advertising, market conditions, competition, etc.) can change that affect revenue — we don’t get to hold those things constant and only change the feature set available to the market.

2. In many situations, features built by development don’t get deployed right away, so there can be long delays before they even have a chance to impact revenue.

3. Even if available in the product and deployed, some features must be operationalized or advertised before they can impact sales or customer retention. In some situations, I’ve seen proof that while sales or operations constantly demand and get more features, those features don’t get used because they are not promoted or even enabled.

So, what else can we do if product revenue is not a valuable measure of increased value delivered by engineering? If, in finance terms, we want to calculate an ROI, how do we measure the R?

This is where the “aha!” moment happened — while thinking about exactly that, I realized that Scrum already provides a team-based mechanism for dealing with these questions: story-point estimation, which we use to quantify the complexity or cost of building a story. The scale is arbitrary; all that matters is that you can say one story is more or less complex than another. Velocity, or story points completed per sprint, normalizes for a team, making it possible to gauge more accurately how much work can get done in any given sprint. The product owner is there to answer questions and guide the team, but ultimately the team reaches a consensus estimate based on the Fibonacci series (1, 2, 3, 5, 8, 13…).
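The velocity calculation itself is trivial; here is a minimal sketch, where the sprint history numbers are purely hypothetical:

```python
# Hypothetical sprint history: story points completed in each of the
# last four sprints (Fibonacci-based estimates summed per sprint).
completed_points = [21, 18, 24, 20]

# Velocity normalizes for a team: the average story points completed
# per sprint, used to gauge how much work fits in the next sprint.
velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 20.75
```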

Why not use the same approach for estimating value? Why not ask the product owner to work with the team to provide similar relative value estimates? The product owner would lead the discussion and drive the decision, but they need to explain to the rest of the team why some stories have more value than others. In doing so, they will help the team members understand the use cases and believe in their work’s worthiness. Similarly, product owners will finally have a mechanism for showing technical team members that many stories that end up written to fulfill technical demands often do not have much demonstrable business value. That doesn’t mean they don’t need to get done, but it helps the team understand that if they focus too much on such work, their value delivery suffers.

Imagine a backlog of 10 stories with various story and value point estimates:

Looking at stories this way allows us to understand that they fall into four key types:

  • Type 1: High Value, Low Complexity — the “no brainer” stories; just do them!
  • Type 2: Low Value, Low Complexity — beware! Teams are tempted to work on these because they feel they are getting a lot done, but they are also not changing the product’s value enough. Postpone doing these until later if you can, or use them to round out each sprint.
  • Type 3: High Value, High Complexity — these stories are the big rocks that tend to matter most but are easy to postpone. Teams should probably tackle them early in the release.
  • Type 4: Low Value, High Complexity — this is the land of “technical debt”; like eating your vegetables, they should be done but are often postponed.

This approach also gives us a way to talk about Return on Investment, or ROI. There have been so many times when I have watched teams deadlock like this:

“What’s the priority of this story?”

“Depends on how hard it is to do. How hard is it?”

“I won’t answer that until you tell me how important it is. What’s its priority?”

Lather, rinse, repeat…

I see that there are often two value systems at work here. At the risk of offending many, I think that engineers tend to prioritize building the highest value stories first. Product owners, on the other hand, tend to think more broadly in terms of Return on Investment (or ROI) and the time and effort it takes to realize that value. Even though these stereotypes may not be accurate, you will see different people set different priorities but have no common language for explaining why they differ. Let’s see how this might look for the example above, ordered from top priority to bottom priority:

What was the first thing I noticed when I did this exercise? The highest and lowest in each are almost exactly opposite! No wonder we can’t agree!
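The divergence between the two orderings can be sketched in a few lines; the four stories and their point values below are hypothetical stand-ins, not the original ten-story example:

```python
# Hypothetical stories: (name, value_points, story_points).
stories = [
    ("A", 13, 13),  # high value, high cost
    ("B", 8, 2),    # good value, cheap
    ("C", 5, 8),    # modest value, expensive
    ("D", 3, 1),    # low value, trivial
]

# One ordering: highest value first (value points alone).
by_value = sorted(stories, key=lambda s: s[1], reverse=True)

# Another ordering: highest value per unit of effort first
# (value points / story points, an ROI-style ratio).
by_roi = sorted(stories, key=lambda s: s[1] / s[2], reverse=True)

print([s[0] for s in by_value])  # ['A', 'B', 'C', 'D']
print([s[0] for s in by_roi])    # ['B', 'D', 'A', 'C']
```

Note how the top and bottom of the two lists flip: the same backlog, two defensible priority orders, and no shared vocabulary to explain the difference.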

Let’s switch gears and get back to the high-level metrics related to “Value.” If we now take Value points as a given, we have some new things we can calculate and name. I’d like to stick with physics analogies:

  • Value Points / Story Points = “Story Impact” (much better than ROI, IMHO)
  • Value Points / Sprint = “Team Power”
  • Value Points Developed = “Potential Energy”
  • Value Points Shipped = “Kinetic Energy”
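A quick sketch of how these four metrics fall out of the raw counts; every number here is an assumption for illustration only:

```python
# Hypothetical totals over a two-sprint window.
value_points_developed = 40  # value points for stories built
story_points_completed = 25  # story points for the same stories
value_points_shipped = 30    # value points actually in production
sprints = 2

# "Story Impact": value delivered per unit of complexity (ROI-like).
story_impact = value_points_developed / story_points_completed

# "Team Power": value points produced per sprint.
team_power = value_points_developed / sprints

# "Potential Energy": value built but waiting to reach users.
potential_energy = value_points_developed

# "Kinetic Energy": value actually released and in use.
kinetic_energy = value_points_shipped

print(story_impact, team_power, potential_energy, kinetic_energy)
# 1.6 20.0 40 30
```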

Whether serving as a development manager or leading a development consulting practice, I was always looking for a way to quantify the value delivered and whether important decisions such as process changes, outsourcing to lower-cost geographies, and team staffing or structure changes were affecting the productivity of my teams. Unfortunately, every measure of productivity I have ever seen has led to perverse, unanticipated consequences when used. Has anyone else tried using source lines of code? With that measure, I can “prove” that assembly language was the most productive language ever invented.

Remember when I mentioned that “shipping more often” was almost always perceived as “delivering value faster”? Measured over a long enough time, the average team power is the same — the real difference is in Cycle Time, or Time-to-Value: how fast can I go from idea to feature in use?

What next? Try this idea out in practice. Put it to use in a few future projects. Use the terminology to improve communication both within the teams and with business decision-makers or clients trying to understand if they are still getting good value for their money.

Original post found here.

Authored by Craig Knighton:

Happiness for Craig is building successful products with collaborative teams. For him, there is magic to be found at the intersection of business and technology. Craig helps clients nurture new ideas, often in healthcare, and to plan scalable mobile and SaaS architectures. He aims to determine how organizations can successfully implement new technologies like artificial intelligence and cloud architectures.

An electrical engineer with an MBA and a 25-year career as a software developer, Craig is sought out by clients — from startups to enterprises — for his technical and strategic expertise. His previous titles include VP of Engineering at Gearworks, VP of Engineering and Technical Operations at LiquidSpace, and VP of Development at Spok.

Long interested in improving patient experiences through the thoughtful application of software, Craig is actively involved in founding a non-profit charitable organization that will deliver technology solutions that facilitate better services for children with special health needs. In the past Craig volunteered in the emergency department at Coon Rapids Mercy Hospital.

When he isn’t helping businesses build innovative solutions, Craig enjoys his view of the Mississippi River or taking his Harley Davidson for a ride.



Trusted guidance, global expertise, secure integration. We design and develop custom software solutions that deliver digital transformation at scale.
