What is RICE Prioritization Scoring? Explanation, Guide, Calculator, and How to Avoid the Pitfalls

*This article is part of the product roadmapping prioritization chapter of our product roadmapping guide. Check out the full Product Roadmapping 101 guide here.*

RICE Scoring (TL;DR)

  • The RICE model is a product management framework for scoring and prioritizing features.
  • RICE stands for reach, impact, confidence, and effort.
  • You calculate a feature’s RICE score by rating it on reach, impact, and effort, plus your confidence in those ratings. Then you multiply the reach, impact, and confidence scores and divide by effort.
  • Higher RICE scores indicate features that have higher reach and impact for less effort.

What is the RICE model?

The RICE model is a decision-making tool used by product managers to prioritize features, projects, or initiatives. RICE stands for Reach, Impact, Confidence, and Effort—the four factors considered when evaluating and scoring items in a backlog or list of potential projects.

What does the RICE acronym stand for?

  • Reach: This is an estimate of the number of people or users who will be affected by the feature over a specific time period (usually a month or quarter). Higher reach implies that the project will benefit more users.
  • Impact: This is an estimate of how much the project will contribute to user satisfaction, retention, or revenue. Impact is sometimes scored on the following scale: minimal (0.25), low (0.5), medium (1), high (2), or massive (3).
  • Confidence: Confidence is a measure of how certain you are about the estimates for Reach, Impact, and Effort, and is usually expressed as a percentage. For example, you could use 100% to indicate high, 80% for medium, and 50% for low confidence.
  • Effort: This is an estimation of the total amount of work required to complete the project, usually measured in person-months or person-hours. Lower effort implies that the project can be completed more quickly or with fewer resources.

To calculate the RICE score, you can use the following formula: RICE Score = (Reach * Impact * Confidence) / Effort

The RICE formula.

By comparing the RICE scores of different projects or features, teams can prioritize them based on their potential value and the resources required to implement them. This helps PMs put the most impactful and feasible projects earliest on their roadmap.
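If it helps to see the mechanics spelled out, here’s a minimal sketch in Python of the scales and formula described above. The scale mappings are the ones mentioned earlier, and the example feature is hypothetical.

```python
# Common scale values for Impact and Confidence (adapt these to your own context).
IMPACT = {"minimal": 0.25, "low": 0.5, "medium": 1, "high": 2, "massive": 3}
CONFIDENCE = {"low": 0.5, "medium": 0.8, "high": 1.0}

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE Score = (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical feature: 500 users reached per month, medium impact,
# high confidence, 2 person-months of effort.
print(rice_score(500, IMPACT["medium"], CONFIDENCE["high"], 2))  # -> 250.0
```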

RICE compared to Value vs. Effort Matrix

RICE is really just an extension of the popular Value vs. Effort Matrix, which prioritizes features based on two factors: value and effort. At their core, both frameworks evaluate the costs and benefits of each feature.

There are a few differences.

  1. They evaluate “value” differently. In the Value vs. Effort matrix “Value” is a single metric (that you choose). RICE breaks it down into two things: reach and impact. So it’s a bit more flexible.

  2. RICE accounts for your confidence. One problem with prioritization frameworks is that most people are guessing a bit when they’re assigning value and effort scores. RICE includes a confidence score, which helps account for how certain you are of your guesses.

  3. RICE gives you a final score that you can use to compare features. The Value vs. Effort Matrix just gives you a visual chart that you can use to compare features.


How to use the RICE framework—step-by-step guide

To use the RICE model effectively for prioritizing projects or features, follow these steps.

1. Identify the projects or features

Start by creating a list of potential new product features to be prioritized. This can include both new ideas and existing items in your backlog.

2. Define the factors

Ensure that your team has a clear understanding of the four RICE factors (Reach, Impact, Confidence, and Effort). You may need to adapt the definitions to suit your organization's specific context or goals.

3. Score each factor

Now, do the scoring for each dimension.

  • Estimate reach. For each project or feature, estimate the number of users who will be affected by it. You may need to set a specific time frame for this, for example, the number of users who might use the feature in a month. You can also use a value like cumulative MRR here.
  • Estimate impact. Assess the potential impact of each project or feature on user satisfaction, retention, or revenue. You can use a predefined scale (such as minimal, low, medium, high, or massive) to get your impact score. Ideally, this will be informed by what your customers are telling you about how important the feature is.
  • Estimate effort. Calculate the total amount of work required to complete each project, typically measured in person-hours or person-months. This should include all resources needed, such as development, design, and testing efforts.
  • Estimate confidence. Determine your team's confidence level in the Reach, Impact, and Effort estimates for each project. Express this as a percentage, with 100% being absolute certainty and less than 50% representing a wild guess.

4. Calculate RICE scores

Use the RICE formula to calculate the score for each project or feature: RICE Score = (Reach * Impact * Confidence) / Effort

5. Rank and prioritize

Rank the projects or features based on their RICE scores, from highest to lowest. Typically, you’ll choose the features highest on the list to prioritize and build first.
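To make steps 3 through 5 concrete, here’s a rough sketch of scoring a few hypothetical features and ranking them by RICE score (the names and numbers are invented for illustration):

```python
# Hypothetical features: (name, reach, impact, confidence, effort in person-months)
features = [
    ("Feature A", 800, 1, 0.8, 3),
    ("Feature B", 200, 2, 1.0, 1),
    ("Feature C", 1500, 0.5, 0.5, 4),
]

# Step 4: calculate each RICE score.
scored = [
    (name, (reach * impact * confidence) / effort)
    for name, reach, impact, confidence, effort in features
]

# Step 5: rank from highest to lowest.
for name, score in sorted(scored, key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.1f}")
# Feature B: 400.0
# Feature A: 213.3
# Feature C: 93.8
```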

6. Gut check with your team

Share the RICE scores and your priorities with your team and stakeholders. Discuss any discrepancies, concerns, or additional insights that may affect the prioritization. This can also help identify any potential biases or errors in the estimates.

7. Iterate and update

Regularly re-evaluate and update your RICE scores as new information becomes available or as your organization's goals and priorities change. This ensures that your team's focus remains aligned with your overall objectives and that resources are allocated effectively.

RICE scoring example

Here’s an example. Imagine you were scoring the following list of features that you had collected from your customers. Your list is:

  • Zapier integration
  • Improvements to permissions and roles
  • Streak CRM integration

*Example list of features to calculate RICE scores. Screenshot from Savio.*

Here’s how you calculate the RICE score.

First, we would define each criterion. We’ll use:

  • Cumulative MRR for each feature as our measure for “Reach” (we like this because it accounts for both the number of users and their value to us)
  • A scale from minimal (0.25) to massive (3) for “Impact”
  • A judgment out of 100% for “Confidence”
  • An estimate of the person-hours needed to build the feature for “Effort”

Next, we would score each feature on those factors. We’ll use a template we created in Google Sheets (you can download it here). We’ll say:

  • Feature 1: The Zapier integration feature has a cumulative MRR of $4,250, we expect it will have a low impact (0.5), it will require 40 dev hours, and we have medium confidence (80%) in our estimates of those factors.
  • Feature 2: The Improved permissions and roles feature has a cumulative MRR of $1,600, we expect it will have a high impact (2), it will require 60 dev hours, and we have high confidence in our estimates (100%).
  • Feature 3: We estimate that the Streak CRM integration has a cumulative MRR of $750, will have a medium impact (1), and will require 30 dev hours. But we’re not confident in those estimates, so we’ll rate confidence 50%.

*Example scoring of each feature on the four RICE factors (Reach, Impact, Confidence, and Effort). Screenshot of our RICE template.*

Next, apply the formula. Multiply Reach by Impact and Confidence, and then divide the product by Effort. We’ve built the formula into our RICE scoring template.

Here, we’ve applied the formula and calculated a RICE score for each feature.
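If you want to check the arithmetic by hand, this is what the formula gives for the three example features, assuming the confidence labels map to the percentages mentioned earlier (medium = 80%, high = 100%, low = 50%):

```python
print(4250 * 0.5 * 0.8 / 40)  # Zapier integration             -> 42.5
print(1600 * 2 * 1.0 / 60)    # Improved permissions and roles -> ~53.3
print(750 * 1 * 0.5 / 30)     # Streak CRM integration         -> 12.5
```

Your exact numbers will differ if you map the confidence labels to different percentages.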

Finally, sort by the RICE score to get an ordered list of your features by RICE score. You would normally give features at the top of your list a higher priority on your product roadmap.

Here, we’ve sorted the list by RICE score to see the highest score at the top of the list. Improved permissions and roles is the feature with the highest score, so we might give it a higher priority than the other options.

Strengths of the RICE model for prioritization

The RICE model has several strengths that make it a valuable tool for prioritizing projects or features.

1. Objectivity

The RICE model provides a structured and data-driven approach to decision-making, reducing the influence of personal biases and opinions. By using quantifiable factors, teams can make more informed choices based on objective criteria.

2. Easy to understand

The RICE model is relatively simple, using just four factors to evaluate projects. This makes it easier for team members to understand the reasoning behind prioritization decisions and communicate them to stakeholders.

3. Considers important factors

RICE prioritizes features based on their reach, their impact, and their effort or cost. That makes sense: those three are definitely biggies, and it seems reasonable to pick them out of all the possible factors you could use.

Also, I like that RICE goes beyond the Value vs. Effort Matrix by accounting for the uncertainty built into the scores.

4. Prioritization becomes clear

The other nice thing about RICE is that you end up with a list of features and each feature has a score. Then you can sort the list and you essentially have a list of your priorities. It’s very simple to use.

Weaknesses of RICE scoring

The model does have some weaknesses. They include the following.

1. Scores can easily be inaccurate

A huge problem—not with the framework itself, but with the majority of implementations—is that most PMs are just guessing the reach, impact, and effort scores.

It’s a problem for two reasons.

  1. We humans are often bad at estimating. Specifically, we tend to overestimate the value of features (reach and impact) and underestimate the effort they’ll take. Inaccurate scores have big implications: you might choose the wrong features and waste your dev budget.

  2. Guessing lowers your confidence, which lowers the RICE score. RICE scores take into account “confidence” by essentially penalizing features about which you’re not able to provide confident guesses. That makes sense, but it also kind of sucks for those features. Imagine a super valuable feature that’s low effort, but you don’t choose it because you didn’t feel confident in your estimates. Not great for your product.

What to do: My suggestion is to try to be as accurate as possible by using metrics that you can be confident in.

For example, if you keep your feature requests in Savio, you’ll know exactly how many people have asked for the feature. You’ll also know how much monthly recurring revenue (MRR) each of those customers has. So you can quickly see a cumulative MRR score for each feature.

That gives you a “reach” score metric—the number of customers that want a feature and the revenue they give you—that’s based on data, rather than estimation.

*Savio shows you the cumulative MRR for each feature: the sum of MRR from each customer who has asked for that feature.*
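If your feature requests are tied to customer records, cumulative MRR is straightforward to compute yourself. Here’s a rough sketch with made-up data (an illustration, not Savio’s API):

```python
from collections import defaultdict

# Hypothetical feature requests: (feature name, requesting customer's MRR)
requests = [
    ("Zapier integration", 250),
    ("Zapier integration", 4000),
    ("Streak CRM integration", 750),
]

# Sum MRR across every customer who asked for each feature.
cumulative_mrr = defaultdict(int)
for feature, customer_mrr in requests:
    cumulative_mrr[feature] += customer_mrr

print(dict(cumulative_mrr))
# {'Zapier integration': 4250, 'Streak CRM integration': 750}
```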

For “effort” estimates, validate your guesses with your Dev team to make sure that you’re accurate on those, too.

2. RICE isn’t necessarily customer-centric

The other big issue with RICE scoring is that it’s not necessarily customer-centric. By that, I mean that it doesn’t necessarily push you toward the features your customers are asking you for.

The way I see most people do RICE, they define “Reach” as the number of customers that would be impacted by the feature. Then “Impact” is often some sort of rough estimate on an arbitrary scale based on what PMs think the impact will be. Often those estimates are made without talking to customers.

Sometimes, they’re not even starting with an understanding of the complete set of features that customers are requesting.

If you’re not careful, you can do the entire RICE prioritization process without using any direct input from your customers.

What to do: Instead, build the voice of your customer into the RICE metrics. Start with a list of features that your customers have explicitly asked for. Consider using interviews or surveys to augment the list and solicit more new feature ideas. And then define “Reach” and “Impact” scores in such a way that they’re directly tied to your customers’ needs.

Guide: 24 Ways to Generate Feature Ideas

3. Potential for gaming

Since the RICE model relies on subjective estimations, team members may be tempted to inflate or deflate certain factors to influence the priority of their preferred projects.

This can happen unintentionally, too: you or your team may just unconsciously lean toward rating your favorite features as high impact and low effort.

What to do: Consider having several people do the ratings and combining them into an average or some other aggregate. The more heads doing the rating, the less likely any single person’s bias will sway the scores one way or another.
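As a minimal sketch, combining ratings can be as simple as averaging each factor across raters before computing the RICE score (the ratings below are hypothetical):

```python
from statistics import mean

# Impact and effort ratings for one feature from three hypothetical raters.
impact_ratings = [2, 1, 0.5]
effort_ratings = [40, 60, 50]  # person-hours

impact = mean(impact_ratings)  # ~1.17 -- softens any single optimistic rating
effort = mean(effort_ratings)  # 50
```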

4. Undervalues tech debt

The RICE model tends to prioritize projects based on their reach and their immediate potential impact on users. As a result, it may undervalue addressing technical debt: improvements to the underlying tech infrastructure, codebase, or development processes.

While addressing technical debt might not have an immediate, visible impact on users, it can significantly improve the long-term maintainability, stability, and scalability of a product. On the other hand, if you neglect tech debt, you can slow down your development process, increase the risk of bugs, and reduce your ability to innovate.

So don’t forget those pieces when you’re doing RICE.

What to do: To ensure a balanced prioritization, consider setting aside some percentage of your roadmap Dev budget for tech debt.

For example, you might decide to spend 50% of your “Dev budget” on customer requests, 25% on strategic features, and 25% on tech debt. That way, you make sure you’re not consistently putting off or devaluing any one of those “buckets” of improvements.
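As a quick sketch, here’s what that split looks like with a hypothetical 160 hours of dev capacity for a cycle:

```python
capacity_hours = 160  # hypothetical dev capacity for this cycle

budget = {
    "customer requests": 0.50 * capacity_hours,   # 80 hours
    "strategic features": 0.25 * capacity_hours,  # 40 hours
    "tech debt": 0.25 * capacity_hours,           # 40 hours
}
```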


Alternative prioritization frameworks

There are lots of other feature prioritization methods you could use instead of RICE. Here are some of the most popular:

  • Value vs. Effort matrix. A quick and dirty way to find quick wins by scoring each feature on the value it would generate and the effort it would take to build. It’s similar to RICE and shares many of its strengths and weaknesses.
  • ICE model. Scores features on impact, confidence, and ease. It’s very similar to Value vs. Effort and RICE and shares many of their strengths and weaknesses.
  • Weighted scoring. Similar to RICE, ICE, and Value vs. Effort, but more flexible because you can include whatever factors you like, not just value, effort, reach, and confidence.
  • The MoSCoW method. This method categorizes features into must-haves, should-haves, could-haves, and won’t-haves. I’m not a huge fan, but some people like it.
  • The Kano Method. The Kano method categorizes features into buckets based on how they affect user experience. It can work, but it’s quite involved and takes time to implement properly.
  • Story mapping. Prioritizes based on how customers use the product and what comes next in the story.
  • The Savio model. This is our model. Basically, you first keep track of what your customers are asking you for, along with customer data. Then you prioritize the features that best accomplish your specific business goals.

Final takeaways

So after all that—what’s up with the RICE model?

The model can be a great framework for PMs building software:

  • It helps you make better-informed decisions and optimize for features that will give you the biggest impact for the most users with the least effort.
  • It provides you with a ranked list—you can prioritize by starting at the top and working your way down.
  • It’s relatively easy to calculate and implement. You can do it quickly.

Just remember:

  • How well the framework works depends on the scoring system you use. If you’re just guessing on your scores, you could easily end up building the wrong features.
  • Nothing about RICE requires that you talk to your customers or understand them. If you want to be customer-centric, make sure you are using metrics for reach and impact that are connected to your customer feedback.
  • This model will consistently undervalue technical debt. Make room for that in your prioritization system.
  • This model looks objective, but it’s not really. Make sure you have several dispassionate raters scoring features to avoid people inflating scores for their pet features.

If you’re not sure whether it’s the right prioritization model for you, do a mock roadmapping session with our template and see if you like its mouth feel. Also, consider trying out one of the many alternatives, or have a peek at the strategy we use at Savio.


Last Updated: 2023-05-11

Kareem Mayan

Kareem is a co-founder at Savio. He's been prioritizing customer feedback professionally since 2001. He likes tea and tea snacks, and dislikes refraining from eating lots of tea snacks.
