8 Product Prioritization Frameworks: Explanations, Guide, and How to Avoid Mistakes

Rice: Not just a product feature prioritization framework anymore.

✨ Looking for a magic algorithm to choose the right product feature every time? ✨

You found it! 🪄

With just 387 easy payments of $199.95…

A magic feature algorithm? Wouldn’t that be nice!

Just kidding—there’s no magic wand waving in product management.

Feature prioritization is hard:

  • How can you know what features are the most valuable?
  • Once you identify them, how do you know which to ship first?
  • How can you handle changing priorities and direction?

You’re probably looking for a good way to answer those questions.

(And a way to make sure you don’t blow your dev budget on the wrong thing. And a way to avoid getting your a$$ handed to you in the next product meeting—believe me, I’ve been there 🤦‍♂️.)

Prioritizing features on your roadmap is both art and science and, unfortunately, there’s no framework that removes all the risk or makes prioritization effortless. Sorry.

Still, prioritization frameworks can give you a foundation to help you think about the decisions. And, they can help you justify your decisions and look smart in front of your colleagues (#forthewin).

In this article, we’ll cover 8 popular frameworks that I see people using (for better or for worse, as we’ll see).

*Note: This is part of Chapter 4 in our big product roadmapping series. Check out the rest of the product roadmapping basics guide for more: what roadmaps are, how to build them, and how to manage stakeholders.*

8 Popular prioritization techniques for product managers

So without any more introduction, let’s dive into your framework options and when (or whether) they’re useful.

1. Savio’s prioritization method

I’ve been a PM or product lead for over 20 years. I’ve used several other prioritization frameworks over that time, but they all seemed to be only tangentially interested in the voice of the customer.

Few—if any—of the other frameworks tie your prioritization directly to what your customers have told you they want. They aren’t customer-centric.

That seemed like a massive problem. So my business partner Ryan (the man, the myth, the legend) and I developed our own framework that centers around customer feedback—what your customers are asking you for.

How it works: Briefly, the strategy is this:

  1. Get the data. Set up a system to collect customer feedback and feature requests and enrich it with data from your source of customer truth.

  2. Specify goals. Get clear on your specific business goals for the roadmap timeline (e.g. reduce churn).

  3. Segment your requests. Create a feature shortlist by filtering your feature backlog by what relevant customers (e.g. churned customers) ask for most.

  4. Sort by secondary factors. Then, prioritize that shortlist further using other attributes like the recency of the requests, strategic alignment, priority scores, and effort scores.

  5. Calculate a dev budget. Figure out how many development hours you have to spend on your features. (Tip: break it up into “buckets” for customer requests, strategic features, and tech debt—for example, 50% customer requests, 25% strategic, and 25% tech debt).

  6. Spend your budget. Then fill up your Dev resource budget with the top requests you’ve identified until you run out. Spend them on different buckets (requests, strategic features, tech debt) in the proportion you specified.

For example: Imagine your goal is to maximize impact on revenue. You can filter and sort your requests by cumulative MRR to find the features that have the highest cumulative revenue associated with them.
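To make the mechanics concrete, here’s a minimal Python sketch of steps 3 through 6. The feature names, MRR figures, effort estimates, and the 50/25/25 split are all hypothetical—in practice, a tool like Savio does the filtering and sorting for you.

```python
# Hypothetical feature requests, enriched with customer data (step 1).
# Each request records who asked and that account's MRR.
requests = [
    {"feature": "SSO", "segment": "churned", "mrr": 1200},
    {"feature": "SSO", "segment": "active", "mrr": 800},
    {"feature": "Dark mode", "segment": "active", "mrr": 150},
    {"feature": "CSV export", "segment": "churned", "mrr": 400},
    {"feature": "CSV export", "segment": "churned", "mrr": 650},
]

# Step 3: segment — keep requests from the customers relevant to the goal
# (a churn-reduction goal means we care about churned customers).
relevant = [r for r in requests if r["segment"] == "churned"]

# Sum MRR per feature to get cumulative MRR (the "value" signal).
cumulative_mrr = {}
for r in relevant:
    cumulative_mrr[r["feature"]] = cumulative_mrr.get(r["feature"], 0) + r["mrr"]

# Step 4: sort the shortlist by cumulative MRR, highest first.
shortlist = sorted(cumulative_mrr.items(), key=lambda kv: kv[1], reverse=True)

# Steps 5-6: spend a hypothetical dev budget, bucketed 50/25/25.
total_hours = 400
budget = {"customer_requests": 0.50 * total_hours,
          "strategic": 0.25 * total_hours,
          "tech_debt": 0.25 * total_hours}

# Fill the customer-requests bucket with top features until it runs out.
estimates = {"SSO": 120, "CSV export": 60}  # hypothetical effort, in hours
roadmap, remaining = [], budget["customer_requests"]
for feature, mrr in shortlist:
    hours = estimates.get(feature, 0)
    if hours <= remaining:
        roadmap.append(feature)
        remaining -= hours

print(roadmap)  # ['SSO', 'CSV export']
```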

(Watch me do it: https://www.youtube.com/watch?v=Z0BOHwJOG8w)

Example list of features. Screenshot from Savio.

Once you have that short list, estimate the time needed for each feature. Then put those on your roadmap until you fill up the “customer requests” time bucket.

Read more: How to prioritize based on customer data like plan and revenue

Pros:

  • This method puts voice of the customer front and center
  • It uses your customer data, like revenue, strategically to help align your roadmap with your business goals
  • It’s collaborative because you use data from across the organization—customer success, support, and sales
  • It’s easy to justify to stakeholders because you know exactly who asked for the features and how much they’re worth to your company

Cons:

  • You need to have a feedback system in place to implement this. That’s good practice for every company, but it takes some time upfront to build.

Read the complete Savio method guide: How to Prioritize Feature Requests and Build a Product Your Customers Actually Want

2. Value vs. Effort Matrix

At the heart of the value vs. effort matrix framework is the objective of identifying low-hanging fruit—the features that are easy (low effort) and will have an impact (high value).

How it works:

  1. Assign each potential feature a “value” (or “impact”) score.

  2. Assign each potential feature an “effort” (or “complexity”) score.

  3. Optional: plot them on a graph with both dimensions.

  4. Prioritize the “High-value, low-effort” quick win features first. High-value, high-effort features are big bets and might be worth it. Low-value, low-effort features might also be worth it. Stay away from low-value, high-effort features.
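If it helps to see the quadrant logic spelled out, here’s a minimal sketch in Python. The features, their scores, and the midpoint of 5 on a 1–10 scale are all hypothetical.

```python
# Hypothetical features scored 1-10 on value and effort.
features = [
    ("Bulk edit", 8, 3),       # (name, value, effort)
    ("AI assistant", 9, 9),
    ("New icon set", 2, 2),
    ("Legacy importer", 3, 8),
]

def quadrant(value, effort, midpoint=5):
    """Classify a feature into one of the four matrix quadrants."""
    if value > midpoint:
        return "quick win" if effort <= midpoint else "big bet"
    return "maybe later" if effort <= midpoint else "avoid"

for name, value, effort in features:
    print(f"{name}: {quadrant(value, effort)}")
# Bulk edit: quick win
# AI assistant: big bet
# New icon set: maybe later
# Legacy importer: avoid
```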


The value vs. effort matrix looks simple, but often doesn’t work as well as we’d like.

I like the idea of prioritizing based on value and effort. Indeed, our Savio method includes consideration of value and effort—that’s totally important. But the problem is that “value” is often hard to quantify, and I’ve often seen PMs mostly guessing on that metric.

Pros:

  • Focuses on value and effort
  • Makes a fun chart

Cons:

  • It can be difficult to accurately estimate both value and effort—it’s often just an arbitrary assignment based on a Product team member’s initial assessment.
  • It’s not usually very customer-centric. The “value” metric might take customers into account—it could use “number of customers affected” as a proxy for “value” or “impact”. But that doesn’t consider which customers want the feature.
  • It’s not clear where tech debt and big strategic projects fit in, so it can be hard to get that balance right with this framework.

Unfortunately, this model’s simplicity is deceptive because it obscures a very complex task (estimating future customer value and future effort) that we tend to be quite bad at. The result is that it can lead you to pick the wrong feature.

My takeaway: Consider skipping this one—it’s often too simplistic.

If you’re going to use it, double-check your effort estimates with your Dev team and validate your value estimates with your CS and Sales teams.

Consider also using more concrete, measurable metrics. For example, in Savio, you can use MRR or opportunity revenue as a more objective measure of value.

Read the guide: The Effort vs. Value Framework for feature prioritization

Example list of features to use with the Effort vs. Value Framework. Screenshot from Savio.

3. Weighted Scoring

The weighted scoring model is a flexible prioritization framework that helps product managers and decision-makers evaluate and rank features, projects, or initiatives based on multiple criteria.

How it works: You choose relevant criteria for your prioritization decision. Then you assign a weight to each criterion to indicate its relative importance and score each feature against these criteria. The final score for each feature is calculated by multiplying each score by its corresponding weight and summing the results, which determines the overall priority.

Here's a step-by-step guide to using the weighted scoring model:

  1. Identify criteria. With your team and stakeholders, determine the criteria that are relevant and important. Common criteria include business value, user impact, cost, risk, and resource requirements.

  2. Assign weights. For each criterion, assign a weight that reflects its relative importance compared to other criteria. The sum of all weights should equal 100% or 1.

  3. Score features. Evaluate each feature against the criteria, and assign a score on a predefined scale (e.g., 1-10). The score should represent how well the feature aligns with or satisfies the criterion. Again, use your teams to gut-check the accuracy of scores.

  4. Calculate weighted scores. Multiply the score for each criterion by its corresponding weight. Then, sum these weighted scores for each feature to obtain the total weighted score.

  5. Rank features. Sort the features by their total weighted scores in descending order. This will help you prioritize the features with the highest scores, as they align best with your goals and objectives.
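Here’s that arithmetic as a short, self-contained Python sketch. The criteria, weights, and scores are invented for illustration.

```python
# Step 2: weights per criterion (they must sum to 1).
weights = {"business_value": 0.4, "user_impact": 0.3, "cost": 0.2, "risk": 0.1}

# Step 3: hypothetical scores (1-10) for two candidate features.
# Higher is always better (e.g. a lower cost earns a higher "cost" score).
scores = {
    "Feature A": {"business_value": 8, "user_impact": 6, "cost": 4, "risk": 7},
    "Feature B": {"business_value": 5, "user_impact": 9, "cost": 8, "risk": 6},
}

# Steps 4-5: weighted sum per feature, ranked in descending order.
totals = {
    feature: sum(weights[c] * s for c, s in criteria.items())
    for feature, criteria in scores.items()
}
for feature, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: {total:.1f}")
# Feature B: 6.9
# Feature A: 6.5
```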

*A weighted scoring scorecard. Source: Smartsheet*

Pros:

  • This lets you customize the criteria that are important to you for your particular goals
  • You can compare and prioritize features of different types
  • You can include a criterion for importance to your customers in the model

Cons:

  • Again, the scoring is usually pretty arbitrary, giving the appearance of objectivity when there might not really be any
  • It doesn’t have a good way to decide between customer requests, tech debt, and strategic features

My takeaway: I see this as an extension of the value vs. effort matrix because you can consider those two factors along with several others. I like that it’s flexible and you can apply it using a number of different criteria.

I would just caution you to be careful about how you’re scoring—try to do it in a way where you’re basing scoring decisions on clear customer and business data.

Implement it: Here’s your weighted decision matrix scoring guide and template


4. Kano Model

The Kano model is named after Noriaki Kano, the consultant who published the model in a 1984 paper in the Journal of the Japanese Society for Quality Control.

How it works: At a high level, the model brings together product development and customer satisfaction. It tries to classify customer preferences into five categories, roughly translated into English as:

  • Must-be quality. The “must haves” that customers expect and take for granted. For example, a remote control is a must-be feature for modern televisions.
  • One-dimensional quality. Attributes or features that please customers when they’re there and leave customers dissatisfied when they’re missing. For example, good customer service satisfies customers; bad customer service can leave them angry.
  • Attractive quality. Attributes that please customers when they’re there, but that customers won’t miss if they’re not there. For example, a confetti blast when you tick off a to-do on a checklist is great, but no one’s upset if it’s not there.
  • Indifferent quality. Attributes that are neither good nor bad and that neither please nor dissatisfy your customers. For example, your SaaS product’s data infrastructure might fall here. There might be good business reasons to upgrade it, but your customers probably won’t care one way or the other (as long as they can access their data).
  • Reverse quality. These features, when you build them, cause dissatisfaction. For example, requiring lots of information to complete a transaction can be a reverse quality feature.

Typically, PMs using this method try to prioritize and knock off the Must-be quality category early, and then go for one-dimensional quality features. The attractive quality features are good if you have some extra time for them. Indifferent quality features should have a clear business case. And reverse quality features should be identified and avoided.

Pros:

  • It’s very customer-centric, putting user experience at the center of the development process.
  • The Kano model classifies features by—get this—talking to customers. We love that.
  • Some methods for classifying attributes or features are supported by empirical evidence.
  • It acknowledges that some features have a negative impact on user experience—few of the other frameworks do that.

Cons:

  • You have to talk to customers. Many PMs want to use the Kano method without doing the customer research that it requires, but it doesn’t work like that.
  • There are many ways to actually do the classification of features in each category. I won’t get into it, but some common methods for classifying attributes or features don’t work and should be avoided (for example, the importance grid and the critical incident technique have questionable reliability and validity).

My takeaway: The model focuses on customer feedback, which is right (in my opinion). Some implementations of the model have also been validated by research… just make sure you’re picking the right one—lots of product management SaaS company content teams are promoting the ones that have questionable reliability and validity. 😬

Read the Guide: Everything you need to know about the Kano model.

5. RICE Scoring System

The RICE method is labor intensive and requires a ton of water 🌾.

Kidding, somebody stop me.

The RICE acronym stands for Reach, Impact, Confidence, and Effort. It’s basically an expanded version of the value vs. effort matrix, adding reach—another value metric, IMO—and confidence.

It’s also kind of a specific implementation of weighted scoring. (Notice that many of these prioritization frameworks use similar criteria; they just apply them slightly differently.)

How it works: First, you calculate (or estimate) the following values:

  • Reach. The number of users affected by the feature within a specific time frame. E.g. 2,000 customers per quarter.
  • Impact. The degree to which the feature is expected to positively affect users. For example, use 3 to indicate “massive impact”, 2 for “high impact”, 1 for “medium impact”, 0.5 for “low impact” and 0.25 for “minimal” impact.
  • Confidence. The level of certainty about the estimates for Reach and Impact. Use a percentage where 100% is high confidence, 80% is medium confidence, and 50% is low confidence. Below 50% is a wild guess.
  • Effort. The amount of work required to implement the feature. Estimate in person months—the amount one dev can do in a month. For example, feature A might require 2 person-months, so put 2.

To calculate the final RICE score for each potential feature, use the formula: (Reach × Impact × Confidence) / Effort.

The RICE framework formula: multiply reach, impact, and your confidence score together, then divide the product by effort.

You’ll get a number that represents impact relative to Dev resources required. Higher values represent higher impact for the effort, and would usually be prioritized.
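Here’s the formula as a quick Python sketch, with made-up reach, impact, confidence, and effort numbers for two hypothetical candidate features.

```python
def rice(reach, impact, confidence, effort):
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical candidates: reach per quarter, impact (0.25-3),
# confidence (0.5-1.0), effort in person-months.
print(rice(2000, 2, 0.8, 4))  # 800.0 — high reach, medium-high impact
print(rice(500, 3, 1.0, 2))   # 750.0 — low reach, but massive impact
```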

Pros:

  • It’s more specific about what impact and effort are than the value vs. effort matrix, so it might be easier to quantify accurately
  • The “confidence” metric helps make up for uncertainty or error in the estimates—which makes it a step up from value vs. effort.
  • This method helps you decide between new product features that are otherwise difficult to compare.

Cons:

  • “Reach” considers all customers equally, but it’s often the case that some customers matter more than others. RICE can’t really account for that.
  • “Impact” is super abstract as “the degree to which the feature will positively affect users” and is not easy to specify in practice. This ends up often being a bit of a guess.
  • The single score makes it tempting to simply build the first things on the list. But there might be good reasons to build features lower down on the list, too.

Savio shows you the cumulative MRR for each feature—the sum of MRR from each customer who has asked for that feature.

My takeaway: The RICE framework is useful because it prompts you to think about which projects will have the most impact and how much effort they’ll take. And it accounts for uncertainty. This method can be useful to identify clear winners and losers.

But I’d caution you not to use the RICE score as a hard-and-fast rule. You might still want to prioritize features lower down on the list.

Want to implement? How to use RICE—and avoid the pitfalls.

6. ICE Scoring Model

The ICE scoring technique is a lot like the RICE method—just with a little more 80s rap.

The ICE method: How Vanilla Ice prioritizes SaaS features. Just kidding.

Kidding again—last one I promise.

If ICE looks a lot like RICE, you’re not wrong: the two use similar factors. ICE stands for Impact, Confidence, and Ease.

How it works: Assign scores from 1 to 10 on the three factors:

  • Impact. An estimate of the potential effect or benefit that the idea or feature will have on the key metric or goal you are trying to achieve. Higher scores represent greater impact.
  • Confidence. This represents the level of certainty you have in the estimated impact and ease of implementation for a given idea or feature. Higher scores indicate greater confidence in your estimates.
  • Ease. An estimate of how easy the idea or feature is to implement, considering required resources, technical complexity, and amount of time needed. Higher scores indicate that the implementation is easier or requires less effort.

Once you have your scores, you just multiply them together: ICE = Impact × Confidence × Ease. Then you can sort your features by ICE score. Features with higher scores might be better candidates for prioritizing.
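In code, the whole model is a single multiplication. Here’s a minimal Python sketch with hypothetical features and scores.

```python
def ice(impact, confidence, ease):
    """ICE score: each factor on a 1-10 scale, multiplied together."""
    return impact * confidence * ease

# Hypothetical features, sorted by ICE score (highest first).
candidates = {"In-app search": ice(8, 6, 5), "Onboarding tour": ice(6, 9, 8)}
for name, score in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score)
# Onboarding tour 432
# In-app search 240
```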

Pros:

  • It’s pretty easy to calculate and gives you a quick and dirty way to approach prioritization
  • It makes sense to think about impact and ease, and the confidence estimation helps account for uncertainty
  • You could easily create fun expressions to start your roadmapping sessions, like, just off the top of my head, “Let’s get Icey!”

Cons:

  • Like the other scoring methods, I’m not sure how valid or reliable people’s impact and ease scores will be in practice. Even taking confidence into account, it’s easy to end up with distorted scores.
  • It doesn’t directly take into account your customers’ needs. They could show up in the “impact” score, depending on how you define it, but they also might not.
  • It’s not clear how this method works for different categories of features—customer requests, strategic features, and tech debt.

My takeaway: I like a good quick-and-dirty calculation, but I think you have to be careful with how you do it. If I were using this framework, I’d try to pick an objective measurement that was directly related to business goals for the “Impact” score. For example, I might use cumulative MRR for each feature because it ties your decision directly to revenue.

I’d also be careful to gut-check each score with your other teams, and to save room on your roadmap for features that might not have a huge impact on customers but are still important (like tech debt).

Read more: The complete guide to the ICE scoring model

7. The MoSCoW Method

The MoSCoW framework is similar to the Kano method in that it focuses on categorizing new features. MoSCoW just stands for Must-have, Should-have, Could-have, and Won't-have. (We’ve just given “high-priority”, “medium-priority”, and “low-priority” new names. Fancy, right?).

How it works: At a high level, MoSCoW prioritization means that you classify your list of features into priority buckets:

  • Must-have. These are features that are really important—without them, your product won’t meet the needs of your users.
  • Should-have. These are important but not necessary features.
  • Could-have. The features here are nice to have but not super necessary.
  • Won’t have. These are the features that you’ve already decided you’re not going to include this time.

After you categorize each feature, you prioritize the roadmap. You start by including all the must-haves. Then you go through and put in the should-haves. Then you might include could-haves if you have some extra time or resources for them.
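Here’s a minimal Python sketch of that fill order, assuming hypothetical features, effort estimates in days, and a fixed capacity.

```python
# Hypothetical features bucketed by MoSCoW category, with effort in days.
buckets = {
    "must":   [("User login", 10), ("Billing", 15)],
    "should": [("Audit log", 8)],
    "could":  [("Dark mode", 3)],
    # "won't have" features never make it onto the roadmap this cycle.
}

capacity = 30  # hypothetical days available this release
roadmap = []
# Fill the roadmap category by category, skipping what doesn't fit.
for category in ("must", "should", "could"):
    for feature, days in buckets[category]:
        if days <= capacity:
            roadmap.append(feature)
            capacity -= days

print(roadmap)  # ['User login', 'Billing', 'Dark mode']
```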

Pros:

  • It’s nice that the features in the “must-have” column are obvious choices.

Cons:

  • Assigning features to categories is largely arbitrary.
  • There’s no clear way to prioritize features within each category, so you’re still left with lots of uncertainty around your choices.
  • Once you’re past the MVP stage, most roadmaps don’t have clear “must-have” features (that’s why the prioritization process is hard). So in practice, you often end up having big lists in the “should-have” and “could-have” columns.
  • It doesn’t clearly consider customer needs—or at least, the consideration of what customers need is put on your teams, rather than directly tied to customer feedback.

My take: For me, this method basically just says that some features are high priority and some aren’t. That doesn’t feel very helpful, especially when you’re staring down a backlog of thousands of potential features.

If you’re going to use it, you’re probably going to want to combine it with another method. It can help give you a rough cut of the features you need to put on the roadmap and the ones you definitely won’t… but after that, you’ll probably need another framework to decide between the should-haves and could-haves.

Read the guide: Everything you need to know about the MoSCoW method

8. User Story Mapping

Story mapping is quite different from the previous approaches—it’s not a scoring model or a categorization model. Instead, it shifts focus to how your customers use your product.

How it works: You start by identifying and collecting user stories—informal, conversational descriptions of features from the perspective of your users. Then you map them (often on a whiteboard or using sticky notes):

  1. Put big stories—“activities”—at the top of the map. For example, “managing email” might be an activity for an email client product.

  2. Break those big activities down into smaller stories or “tasks”. For example, “managing email” might include “send email”, “delete email”, and “mark as unread”. You can include sub-tasks if necessary.

  3. Arrange the tasks under the relevant activities and the subtasks under those. Then arrange them all by time, so earlier activities and tasks come before later ones.

  4. Then prioritize subtasks under the tasks. Put them higher up to indicate that they’re more important or lower down to indicate they’re less important.

  5. Then walk through the map to identify any missing activities, tasks, or subtasks and revise.

That ends up giving you a skeleton of the product. You prioritize the subtasks or details for each task by choosing those highest in their column.
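One way to picture the resulting skeleton is as nested data. Here’s a minimal Python sketch using the email-client example from above; the subtasks and their priority order are hypothetical.

```python
# A story map as nested data: activities -> tasks -> subtasks,
# ordered left-to-right by time and top-to-bottom by priority.
story_map = {
    "managing email": {              # activity (top of the map)
        "send email": ["compose", "attach file", "schedule send"],
        "delete email": ["delete one", "bulk delete"],
        "mark as unread": ["single message"],
    },
}

# A first release slice takes the highest-priority subtask in each column.
first_release = {
    task: subtasks[0]
    for tasks in story_map.values()
    for task, subtasks in tasks.items()
}
print(first_release)
# {'send email': 'compose', 'delete email': 'delete one',
#  'mark as unread': 'single message'}
```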

*What user story mapping can look like. Source: Jeff Patton and Associates*

Pros:

  • It’s customer-centric because it puts your customers’ needs at the heart of your prioritization.
  • It lets you see the big picture so that you see each feature in context with that larger-scale view.
  • It feels logical—you build pieces of your product together and can add details in further releases.
  • The result is a visual document that can help remind everyone of the bigger picture.

Cons:

  • You can miss important details if you don’t work directly with customers to tell you what’s important and what will provide them value.

My take: I like it, and I think it’s especially important to consider when you’re first building a product or in the first few releases.

Want to implement? Here’s your full guide to user story mapping


Prioritization starts at the top

Remember, every one of the above frameworks requires a product vision and strategy. They’re prereqs.

Why?

Because you can’t prioritize until you know what matters to you. Each of the above frameworks requires you to understand “value” in some way. Value is inherently tied to some desired outcome. And that’s defined by your vision and strategy.

Before diving head first into a product prioritization framework, you have to be clear on:

  • The product vision: Why does your product exist? What difference will it make in the world, and for whom?
  • The product strategy: What’s the high-level approach you’re going to take to accomplish that vision?

Those pieces are critical because they underlie all the prioritization decisions you’ll make for features. Without clarity there, product management prioritization frameworks will be of little help.

Prioritization framework FAQ

Still have questions? Here are some frequently asked questions about prioritization frameworks.

What is a prioritization framework?

A prioritization framework is a structured approach or methodology used by product managers, project managers, and decision-makers to evaluate, rank, and prioritize tasks, features, projects, or initiatives based on their importance, value, or potential impact.

Frameworks help teams make objective, data-driven decisions, allocate resources efficiently, and focus on the most crucial or impactful aspects of their work.

In product management, we use prioritization frameworks to decide what features to prioritize on our product roadmap.

Who should prioritize features?

Product teams usually lead roadmapping exercises, but all customer-facing teams should be involved.

What is product roadmapping?

Product roadmapping is the process of creating a visual representation of the strategic plan and development timeline for a product. It outlines the key features, enhancements, and milestones that the product team intends to achieve over a specified period.

A product roadmap serves as a communication tool that helps align stakeholders, such as product managers, developers, designers, marketing teams, and executives, on the product's vision, priorities, and progress.

What roadmap type should I use?

It depends on who your primary audience is. Check out our roadmap type guide for advice on choosing the best type for your needs.

Up next: How to build a roadmap with your prioritized features

Last Updated: 2023-04-19

Kareem Mayan

Kareem is a co-founder at Savio. He's been prioritizing customer feedback professionally since 2001. He likes tea and tea snacks, and dislikes refraining from eating lots of tea snacks.
