What is the Value vs. Effort Matrix? Explanation, Guide, and How to Avoid Its Pitfalls

*This article is part of the prioritization chapter of our product roadmapping guide. Check out the full Product Roadmapping 101 guide here.*


The value vs. effort matrix is one of the most popular frameworks for prioritizing features from your backlog to your product roadmap.

The trouble is that, in many implementations, it doesn’t work.

Controversial opinion: the Value vs. Effort Matrix can lead you astray if you’re not careful.

So if you’re going to use this method, make sure you understand how to avoid its pitfalls.

Here’s what the matrix is, why it’s often implemented poorly, and how you can fix it so you don’t build the wrong thing.

TL;DR

  • You create the matrix by scoring each feature on value and effort and then plotting them on a two-dimensional chart.
  • You can then see which features are quick wins and which are time sinks. You’d prioritize the former and not the latter.
  • There are some serious limitations with this method, mostly because people score “value” and “effort” badly and because they do it without talking to their customers.
  • But it’s still probably better than just going with your gut.
  • Download our Value vs. Effort Matrix template to get started.

What is the Value vs. Effort Matrix?

The Value vs. Effort Matrix (also known as the Value vs. Complexity Matrix or the Impact vs. Effort model) is a decision-making tool that product managers use to prioritize features based on their potential value and the effort required to complete them.

It is a 2x2 matrix with value on one axis and effort on the other axis. It helps you quickly see your entire product backlog in terms of each feature's costs and benefits.

The Value vs. Effort Matrix, with “value” on the x-axis and “effort” on the y-axis.

To use it, you first score tasks based on their relative value and effort. Then you plot them on the matrix. At its best, the matrix can give you a visual that helps you see which features give you a lot of bang for your buck, and which could be a waste of resources.

What is Value?

In the matrix, the “value” dimension refers to the benefits or impact a feature will have. How you measure (estimate) this depends a ton on what your business goals are. So it’s flexible. It could include:

  • How many of your users will be affected
  • The extent to which customer engagement or satisfaction would be impacted
  • The potential impact on customer acquisition
  • Differentiation of your product from others on the market
  • Market demand for a feature

And more.

How do you estimate business value? This is really the most important question of the entire Value vs. Effort Matrix method.

Many people advocate assigning value impact scores based on your gut feeling using numerical scales (I’ve personally seen scales from 1 to 5, 1 to 20, and 1 to 100).

Those are arbitrary scales. Another option is to count the number of users that will be impacted by the new feature. Still another option is to add up the monthly recurring revenue (MRR) of every customer that asked for a feature to get that feature’s cumulative MRR. These methods are less arbitrary and involve less guessing.

However you estimate it, use the same method for all the features in a roadmapping session.
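To make the arithmetic concrete, here’s a minimal sketch of the cumulative-MRR approach in Python. The features, customers, and MRR figures are invented for illustration; the point is that the “value” score is a sum over the customers who actually asked, not a gut-feel number.

```python
# Minimal sketch: summing cumulative MRR per feature.
# The request data below is made up for illustration; in practice it would
# come from wherever you track feedback (a spreadsheet, a tool like Savio, etc.).
from collections import defaultdict

requests = [
    {"feature": "Dark mode", "customer": "Acme", "mrr": 500},
    {"feature": "Dark mode", "customer": "Globex", "mrr": 1200},
    {"feature": "SSO", "customer": "Initech", "mrr": 3000},
]

cumulative_mrr = defaultdict(int)
for req in requests:
    cumulative_mrr[req["feature"]] += req["mrr"]

# Rank features by the total MRR of the customers asking for them.
for feature, mrr in sorted(cumulative_mrr.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: ${mrr:,}/mo")
```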

What is Effort?

Effort is how much work a feature will be. This dimension is sometimes called “complexity” or “cost”. It’s basically how much of your resources and dev time you’ll need to build the feature.

You can use rough T-shirt sizes (Small, Medium, Large, Extra Large) to estimate the effort for each feature, or you can get more specific and use story points or product development hours.
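If you use T-shirt sizes but still want to sort or plot features numerically, you’ll need to map sizes onto a scale. Here’s a minimal sketch, with arbitrary point values chosen purely for illustration:

```python
# Hypothetical mapping from T-shirt sizes to rough story-point equivalents.
# The specific numbers are arbitrary; pick whatever scale your team uses.
TSHIRT_TO_POINTS = {"S": 2, "M": 5, "L": 13, "XL": 21}

estimates = {"Dark mode": "M", "SSO": "XL", "CSV export": "S"}

effort_scores = {feature: TSHIRT_TO_POINTS[size] for feature, size in estimates.items()}
print(effort_scores)  # {'Dark mode': 5, 'SSO': 21, 'CSV export': 2}
```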

What do the quadrants mean?

Once you’ve estimated value and effort, you can plot each feature on the matrix. Then, you can cut it into the following four quadrants:

  • High-value, low-effort. This is your “Quick Win” quadrant. These features are the low-hanging fruit: easy to build and likely to deliver solid value. Many recommend building these features first.
  • High-value, high-effort. These are your “Big Bets” or “Strategic Initiatives”: large projects that you think will pay off with big value. They could include major UI changes or new functionality. You’d probably want to work through the projects in this quadrant carefully, one at a time.
  • Low-value, low-effort. This is your “Fill-ins” or “maybes” quadrant. These projects might not offer a ton of value, but at least they’re easy. You can slot them onto your roadmap after the other two quadrants to fill small gaps.
  • Low-value, high-effort. Finally, these are your “Time Sinks” or “Money Pits”: projects that would take a long time to do but would create only limited value. In general, you’d want to make these your last priority.
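To make the quadrant logic concrete, here’s a minimal sketch in Python. The scores and the midpoint threshold are invented for illustration; the cut points are simply wherever you draw the lines on your own matrix.

```python
# Minimal sketch: bucketing features into the four quadrants.
# Scores (1-10 scales) and the midpoint threshold are made up for illustration.
def quadrant(value: float, effort: float, midpoint: float = 5.0) -> str:
    """Classify a feature scored on value and effort into a quadrant."""
    if value >= midpoint and effort < midpoint:
        return "Quick Win"
    if value >= midpoint and effort >= midpoint:
        return "Big Bet"
    if value < midpoint and effort < midpoint:
        return "Fill-in"
    return "Time Sink"

features = {"Dark mode": (8, 3), "SSO": (9, 8), "CSV export": (3, 2), "Rewrite search": (2, 9)}
for name, (value, effort) in features.items():
    print(f"{name}: {quadrant(value, effort)}")
```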


Note: Savio helps B2B SaaS Customer Success, Product, and Sales teams organize and prioritize product feedback and feature requests. Learn more about Savio here.

Strengths of the Value vs. Effort Matrix

The Value vs. Effort Matrix is popular precisely because it can be a powerful tool for product teams and other stakeholders when used properly.

  • It’s lean. Much of the value of the matrix is in its ease and simplicity. You don’t need any elaborate calculations. This simplicity allows individuals and teams to focus on the most important aspects of decision-making, without getting bogged down in unnecessary details or complexity.
  • It’s clear and the criteria are useful. By scoring tasks based on their value and effort, the matrix provides a clear framework for prioritizing tasks. This helps individuals and teams focus on tasks that are most likely to deliver value, while deprioritizing tasks that are low-value and high-effort.
  • It’s visual. The matrix provides a clear visual representation of the relative value and effort of each task, making it easy to see which tasks are easy and provide value, and which are time sinks. This helps teams quickly assess the potential costs and benefits of each feature.

Weaknesses of the Value vs. Effort Matrix

Simple isn’t always better. There are some serious limitations that you should consider when using this method. (Some veteran PMs even say that it just doesn’t work.)

I’ll go into detail on these weaknesses and their pitfalls so you know how to avoid them and the costly mistakes they can lead to.

1. We’re not good at estimating value

This is perhaps the biggest weakness of the method: estimates of value can often be a shot in the dark. In general, our gut feelings are bad at this. Research consistently suggests that we overestimate benefits when making future plans.

But more specifically, I’ve seen how most PMs actually do this method. In many cases, they simply put down a number on an arbitrary scale based on little data. They trust their gut feeling.

It often doesn’t work, as a paper from Microsoft on testing features makes clear:

It is humbling to see how bad experts are at estimating the value of features (us included).

What you can do: As a first step, ensure that other teams have a chance to validate any scores you set arbitrarily. That can mean asking your CS, Sales, or other customer-facing teams to weigh in on the value or impact a feature will have.

To go even further, try to use more specific metrics for impact and value. For example, in Savio, you keep track of which of your customers (or prospects or churned customers) asked for a feature. Then, Savio pulls in revenue data from your single source of truth. For any given feature request, you can quickly see the cumulative MRR of all the customers that asked for that feature. 
In Savio, you can see MRR right from your feature request list. It’s a much more precise measure of “value”.

Cumulative MRR is a much more specific way of estimating “value” than arbitrarily assigning a score on a made-up scale.

2. We tend to underestimate effort

One of the well-known biases that humans face is the planning fallacy—the tendency for teams to be overly optimistic about the time and resources that it will take to accomplish a task in the future. For product managers, it’s easy to underestimate the time needed for features.

What you can do: Send draft effort estimates to your dev team to validate so you know they’re accurate. Also consider writing detailed specs ahead of time for larger features to get even more accurate estimates.

3. Some features are important even if they don’t score well on “value”

Another problem with this method is that it doesn’t take into account different kinds of value. That can affect your prioritization.

For example, customer feature requests are valuable to customers directly. But important strategic initiatives that may set you up for a new market may not be as valuable to your current customers (or at least, that value may not be as easy to quantify). Fixing tech debt may also be valuable in an abstract way, but it may not rank high on metrics for value to customers.

You may find that tech debt and strategic initiatives get underprioritized, depending on how you define (and score) “value”.

What you can do: Consider building tech debt, customer requests, and strategic initiatives “buckets” into your dev budget so that you always have some space to prioritize from each of them.

4. Scores aren’t static

Some PMs go through and rank each feature on value and effort once, and those scores never change. But “value” can easily shift over time as what your customers want and what the market demands change.

What you can do: Re-evaluate your scores regularly so they’re up to date when you’re ready to roadmap.

5. Doesn’t directly tie scores to customer needs

Finally, this is my biggest issue with the Value vs. Effort model: it’s not directly tied to what your customers tell you they want.

It can be. If you choose a value metric that’s based on your customer feedback and feature requests, then sure. For example, if you decide your “value” score will be the number of Enterprise customers that asked for each feature, then yes, it’s directly related to what your customers want.

But most PMs don’t do that—most just give an arbitrary score, sometimes only based on what they think their customers want. That’s a mistake. And it could lead you to build the wrong features.

What you can do: Make sure your customers are represented in the value score. That could mean doing a survey to see what they want. Or just building a feedback system to collect their requests over time.

However you do it, make sure you’re incorporating that voice of the customer.

How to create a Value vs. Effort Matrix: Step-by-step

Making the matrix is quite simple. (Download our Value vs. Effort template here to get started.)

1. Make a list of the features you’re considering and want to prioritize.

Here’s an example list of feature requests in Google Sheets.

2. Score each one on “value” or “impact” and on “effort” or “cost”.

In this example, each of the features has been scored on effort and impact. Note that we've just used “High” and “Low” as scoring options, but you can use any scoring system that makes sense to you.

3. Now, plot each feature on a chart with value on one axis and effort on the other.

Features placed on a value vs. effort matrix

You’ve made your matrix. Now you’ll be able to see, visually, which features are the low-hanging fruit and which aren’t as obviously good.
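If you’d rather generate the chart than draw it by hand, here’s a minimal sketch using matplotlib. The feature names and 1-10 scores are invented, and the midpoint lines at 5 are just one way to split the quadrants.

```python
# Minimal sketch: plotting a value vs. effort matrix with matplotlib.
# Feature names and scores are made up for illustration.
import matplotlib.pyplot as plt

features = {"Dark mode": (8, 3), "SSO": (9, 8), "CSV export": (3, 2), "Rewrite search": (2, 9)}

for name, (value, effort) in features.items():
    plt.scatter(value, effort)
    plt.annotate(name, (value, effort), textcoords="offset points", xytext=(5, 5))

# Midpoint lines split the chart into the four quadrants.
plt.axvline(5, color="grey", linestyle="--")
plt.axhline(5, color="grey", linestyle="--")
plt.xlabel("Value")
plt.ylabel("Effort")
plt.title("Value vs. Effort Matrix")
plt.show()
```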

How to do it in Savio

Savio is designed to help you prioritize feature requests. We have our own method for figuring out the priority, but you can still implement this value vs. effort method if you’d like.

The nice thing is that if you’re already using Savio, you already have your big list of feature requests ready to go. (Step 1, check ✅)

1. Score each feature for effort

You can change a feature’s effort score by editing its fields. You can also change the buckets to best suit your workflow.

In Savio, you can easily assign an effort score (which you can customize).

2. Choose your value metric

We like the cumulative MRR metric because it's the sum of the MRR from each customer that asked for a feature. Higher cumulative MRR means higher customer impact in terms of the amount of money your customers are spending on your product.

But you might also choose:

  • Number of requests for a feature
  • Opportunity revenue
  • Number of churned customers requesting a feature

Or some other metric. These are already included in your feature requests, pulled from your customer source of truth (Intercom, Help Scout, Salesforce, etc.).

3. Filter and sort

Next, dig into your feature request data to find features with high value for low effort.

For example, filter to find feature requests that are low effort, and then sort them by value to see the lowest effort feature requests that are the highest value.


Here, we’ve filtered to find only features scored as “very low” or “low” on effort. Then we sorted by MRR. The top feature is the highest MRR feature that’s low effort.
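If your requests live in a spreadsheet export rather than in Savio, the same filter-and-sort step is easy to reproduce with pandas. Here’s a minimal sketch, where “requests.csv” and its column names are assumptions for illustration:

```python
# Minimal sketch: filter for low-effort requests, then sort by cumulative MRR.
# "requests.csv" and its columns (feature, effort, cumulative_mrr, num_requests)
# are hypothetical; adjust to match your own export.
import pandas as pd

df = pd.read_csv("requests.csv")

low_effort = df[df["effort"].isin(["very low", "low"])]
by_value = low_effort.sort_values("cumulative_mrr", ascending=False)

print(by_value[["feature", "effort", "cumulative_mrr"]].head(10))
```

Swapping the cumulative MRR column for a request-count column gives you the popularity view described next.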

You could also filter by the number of requests if you think that’s a better metric for “value”.

Here, we’ve sorted by the number of requests for each feature. Popularity is one potential metric you can use for “value”, although it’s not always the best one.



What about alternative prioritization frameworks?

There are lots of other feature prioritization methods you could use instead of value vs. effort. Here are some of the most popular:

  • ICE scoring. Very similar to value vs. effort, but also takes into account your confidence in your scoring.
  • RICE framework. Very similar to value vs. effort, but value is broken into two metrics: reach and impact. It also factors in your confidence in your scoring (see the sketch after this list for how ICE and RICE scores are calculated).
  • Weighted scoring. Similar to the other two, but more flexible because you can include whatever criteria you like.
  • The MoSCoW method. This method categorizes features into must-haves, should-haves, could-haves, and won’t-haves. I’m not a huge fan, but it might be helpful for some.
  • The Kano method. This method categorizes product features by how they’ll impact customer experience. I love that this method uses customer surveys and data to make categorization calls.
  • User mapping. Prioritizes based on how customers use the product and what comes next in the user story.
  • The Savio prioritization model. First, keep track of what your customers are asking you, along with customer data. Then look for the features that best accomplish your specific business goals.
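For comparison, here’s a minimal sketch of how ICE and RICE scores are typically calculated. The input values are invented, and teams use different scales for each factor.

```python
# Minimal sketch: the standard ICE and RICE scoring formulas.
# Inputs are illustrative; scales for each factor vary by team.
def ice(impact: float, confidence: float, ease: float) -> float:
    return impact * confidence * ease

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    return (reach * impact * confidence) / effort

print(ice(impact=8, confidence=0.8, ease=6))                # 38.4
print(rice(reach=500, impact=2, confidence=0.8, effort=4))  # 200.0
```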

Read more about the other prioritization frameworks here.

Takeaway: Value vs. Effort isn’t the best PM framework, but it can work if you’re careful.

Value versus effort is a popular PM framework because it’s simple. It’s just a specific implementation of a trusty economic principle: cost vs. benefit.

And that’s great, as far as it goes—value and effort are important dimensions.

But in practice, there are lots of issues with this model. If you’re just looking at a list of feature requests in Excel and going through them one by one saying, “This feels like an 8 out of 10 in terms of value,” you’re taking a big risk that you’ll build the wrong thing.

In my view, some of the other models—like RICE and ICE—are better because they consider the confidence you have in your score.

Or, check out the method we use at Savio. (It’s my favorite—ha!)

Next: Head to the Product Roadmapping Guide

Last Updated: 2023-04-19

Kareem Mayan

Kareem is a co-founder at Savio. He's been prioritizing customer feedback professionally since 2001. He likes tea and tea snacks, and dislikes refraining from eating lots of tea snacks.
