What is the ICE Scoring Model for Feature Prioritization? Overview, Guide, and Template

An ice mountain representing the ICE scoring model by Sean Ellis

*This article is part of the product roadmapping prioritization chapter of our product roadmapping guide. Check out the full Product Roadmapping Guide here.*

Let’s talk about the ICE scoring method for feature prioritization. (I promise not to make any Vanilla Ice jokes.)

Look, lots of PMs love it. I think it’s fine—just be careful how you do it to make sure you’re thinking about your customers and not systematically biasing your product.

Here’s the full guide on exactly what you need to know.

ICE Scoring (TL;DR)

  • The ICE (Impact, Confidence, and Ease) scoring method is a prioritization framework used to evaluate potential initiatives or ideas.
  • The method asks evaluators (PMs and other team members) to score ideas based on their impact, confidence, and ease of implementation, and then combine those scores to determine a final priority score.
  • ICE guides you to focus on the costs and benefits of each feature, as well as the level of confidence you have in your scoring.
  • Even so, most PMs are essentially guessing when they assign impact and ease scores, rather than grounding them in their customers’ feedback, which can significantly skew the results.
  • To quickly implement ICE, download our template and calculator.

What is the ICE Scoring Model?

The ICE Scoring Model is a simple framework for prioritizing features and product ideas based on their impact, ease of implementation, and your confidence in your scoring of impact and ease. It was developed by Sean Ellis, the originator of growth hacking (and, appropriately, the author of Hacking Growth).

How are ICE scores calculated?

To calculate the ICE score, you first assign each feature or initiative in your product backlog a score on impact, confidence, and ease of implementation, using a scale from 0 to 10. You then multiply the scores together using the following formula: ICE Score = (Impact * Confidence * Ease). The higher the ICE score, the higher the priority for the feature or project.

To find an ICE score for a feature, score it on Impact, Confidence, and Ease, and then multiply them together.

This model is widely used in product management, marketing, and other fields to help teams make informed decisions about which features or projects to pursue.
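To make the arithmetic concrete, here’s a minimal Python sketch of the calculation. The feature scores below are hypothetical:

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """Multiply the three 0-10 scores together to get the ICE score."""
    return impact * confidence * ease

# Hypothetical feature scored 7 for impact, 5 for confidence, 6 for ease:
print(ice_score(impact=7, confidence=5, ease=6))  # 210
```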

What is “impact” and how can you score it?

Impact refers to the potential effect or benefits that a feature or project would have on the user or business. A high-impact feature would generate significant value for the user or business, such as improving retention or increasing revenue; a low-impact feature may have some value but is not essential or significant.

The way you measure impact depends on your goals and can be super subjective. That’s why this is probably the most difficult part of finding an ICE score. Here are some ways to think about impact:

  • What you and your team think the impact will be (not great)
  • The number of users impacted
  • The cumulative MRR of customers that asked for each feature
  • The new revenue each feature would be likely to generate
  • The value associated with an increase in retention that each feature would generate
  • The decreases in costs a given feature is expected to produce

The more you can use objective measures like these to estimate impact scores, the better.
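For instance, cumulative MRR is straightforward to compute if you track which customers asked for which features. Here’s a minimal sketch with hypothetical customers and MRR figures:

```python
# Hypothetical customers who requested a feature, with their monthly recurring revenue.
requests = {"Acme Corp": 499, "Globex": 1200, "Initech": 250}

cumulative_mrr = sum(requests.values())
print(f"Cumulative MRR behind this feature: ${cumulative_mrr}")  # $1949
```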

What is “confidence” and how can you score it?

Confidence refers to your level of certainty in the scores you assign for impact and ease. This piece of the formula guides you toward features and initiatives whose outcomes are more certain.

Estimating confidence level is difficult, too. I like the Confidence Meter system that Itamar Gilad offers (see image below).

*Itamar Gilad’s Confidence Meter for assigning confidence scores in the ICE framework. Source.*

In his scheme,

  • If the only piece of evidence that your feature will have an impact is that you think it will, give a confidence score of 0.01.
  • If you’re assigning confidence scores based on the opinions of your coworkers and managers, you can give a score of 0.1.
  • If you have a few customers asking for a feature, you can be a bit more confident that it will have an impact—score it 0.5.
  • If you’re assigning your impact and ease scores based on market research, you can use a confidence score between 1 and 3.
  • If you’re basing your impact estimates on rigorous longitudinal user studies, you can be more confident, and assign scores between 3 and 7.
  • Finally, if you’re assigning impact scores based on actual launch data, you have high confidence in your estimates of impact, so you can assign a confidence score up to 10.
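If it helps to keep those bands straight, here’s a rough sketch that encodes the scheme above. The category labels (and the exact band edges for launch data) are my shorthand reading of the list, not Gilad’s official terms:

```python
# Rough encoding of the confidence bands described above.
# Category labels are shorthand for illustration, not Gilad's exact terms.
CONFIDENCE_BANDS = {
    "self conviction": 0.01,      # only you think it will work
    "others' opinions": 0.1,      # coworkers and managers agree
    "customer requests": 0.5,     # a few customers have asked for it
    "market research": (1, 3),    # surveys, competitive analysis
    "user studies": (3, 7),       # rigorous longitudinal research
    "launch data": (7, 10),       # evidence from an actual release (up to 10)
}
```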

What is “ease” and how can you score it?

Ease refers to the level of effort or complexity required to implement the feature or project. A high ease score means that the feature is easy to implement and requires little effort or resources, while a low ease score means that the feature is complex or difficult to implement.

Again, this metric can be difficult to estimate. In general, it’s good practice to run your estimates by your development team to be as accurate as possible.

ICE vs. RICE—What’s the difference?

The ICE scoring framework is very similar to the RICE prioritization framework—both score features based on how much value they provide relative to the effort they take. Both also take into account your confidence in your scoring.

The main difference between the two models is that RICE includes “reach” in addition to “impact”. This creates a bit more of a distinction between how many users would use a feature and how valuable it would be. In other words, it essentially weights the score a little more towards the “benefits” that a feature would provide than ICE does.
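For comparison, here’s a quick sketch of the two formulas side by side, assuming the commonly used RICE formula of reach times impact times confidence, divided by effort:

```python
def ice(impact, confidence, ease):
    return impact * confidence * ease

def rice(reach, impact, confidence, effort):
    # Reach is usually "people affected per time period"; effort is estimated
    # in person-months, so bigger projects pull the score down.
    return (reach * impact * confidence) / effort
```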

Which is better—ICE or RICE? Neither is obviously better than the other—choose the one that makes the most sense to you.


How to use the ICE framework—a step-by-step guide and example

Here’s your complete guide to using the ICE model to score new features, product ideas, or initiatives. I’ve included examples using our template—download your copy here.

1. Identify the set of projects or features

Create a list of new product features to prioritize. These could be new product ideas that your team has had or feature requests from your customers.

In this example, we have three potential features on our list that we’ll consider building.

2. Score the features on impact, confidence, and ease

Now score each feature, using the guidance above. When possible, try to base your scores on concrete data, like the number of users that have requested a feature or the cumulative MRR tied up in a feature.

Here, we’ve scored each feature for impact, confidence, and ease. We rated each of them on a scale from 0 to 10.

3. Calculate the total ICE score

Now, use the ICE formula to calculate the final priority score for each project or feature. The formula is: ICE Score = (Impact * Confidence * Ease).

You can see the ICE score for each feature in the right-most column.

4. Run your scores by your team

One major weakness of the ICE scoring model is that it relies heavily on you having accurate scores for features. If you’re off, you could end up prioritizing the wrong things.

Your team can help you get those accurate scores.

5. Sort and prioritize

Now that you have ICE scores for each feature, rank them in descending order. That way, you can quickly find the features with the highest scores at the top of your list.

Here, we’ve sorted features by their ICE score so the features earlier in the list have higher scores.
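If your backlog lives in a script or a spreadsheet export rather than our template, the scoring and sorting steps might look like this sketch. The feature names and scores are hypothetical:

```python
# Hypothetical backlog with 0-10 scores for impact and ease, plus a confidence score.
features = [
    {"name": "Feature A", "impact": 8, "confidence": 0.5, "ease": 6},
    {"name": "Feature B", "impact": 5, "confidence": 3,   "ease": 7},
    {"name": "Feature C", "impact": 9, "confidence": 1,   "ease": 4},
]

for f in features:
    f["ice"] = f["impact"] * f["confidence"] * f["ease"]

# Rank in descending order so the highest-scoring features sit at the top.
for f in sorted(features, key=lambda f: f["ice"], reverse=True):
    print(f'{f["name"]}: {f["ice"]:g}')
# Feature B: 105
# Feature C: 36
# Feature A: 24
```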

You can now start prioritizing features on your product roadmap.

Usually, you’d start by prioritizing the features highest on the list, but it depends. There might be good reasons that you prioritize other features first—maybe they align better with your strategy, or maybe a lighthouse customer asked for it (or maybe your CEO wants it).

Prioritizing is always part art and part science. Start with the ICE scores, but then use your judgment to finalize which features actually make the cut.

Pros of the ICE Scoring Model

The ICE model has strengths that can be crucial to your approach to prioritization problems.

  • Objectivity. The ICE model gives you a scoring system to prioritize your features. That helps reduce the chance that you or your team will pick features because of egos, personal biases, or politics and instead choose those expected to have a relatively large impact for little effort.
  • Evaluates important factors. ICE prioritizes features based on impact, confidence, and ease of implementation. That makes sense to me—impact and effort are two super important factors in prioritization, and I think it also makes sense to attenuate scores for confidence.
  • Easy to understand. The ICE model is very simple to understand and use. It considers only three factors. Your team and other stakeholders will be able to quickly understand how to use and apply it, potentially saving you lots of time in prioritization debates.
  • Prioritization is clear. The end result is your features in a ranked list. There’s no question about which features should be prioritized (some other prioritization frameworks, like the Kano framework and the MoSCoW framework, don’t provide that tidy clarity).

Cons of the ICE Scoring Model

At the same time, there are some clear downsides to ICE. I do think it’s a useful framework; just be aware of the pitfalls so you can avoid them.

Many people misunderstand what ICE scores mean

First, I’ve seen lots of product managers make the mistake of believing that ICE scores mean something on their own. They don’t really—they’re just a combination of scores that you’ve assigned to features. They’re only useful to the extent that they help you compare features to each other. There’s no “good” or “bad” ICE score and no ideal target score.

Similarly, some PMs talk about ICE scores as if they can help identify winners or ideas that will work. Again, no: they help you identify the features that are highest on some ratio of impact to effort, with confidence taken into account. They can’t guarantee that a feature is “good” or that it will work.

What to do: Just remember ICE scores are made up. They’re useful in helping you compare features to each other, but they don’t have any meaning outside of that.

Scores can be inaccurate

ICE scores are only as good as your estimates for impact, confidence, and ease. But in general, we’re not that great at estimating future impact or effort. Researchers have noticed what they’ve called the planning fallacy: we tend to overestimate value and underestimate effort.

One good thing about ICE is that it tries to consider that uncertainty in the formula with the confidence rating. But even still, if you estimate any of the three factors poorly, you’ll end up with a distorted score that can greatly affect your product decisions.

What to do: Take steps to increase the accuracy of your estimates.

  • Use concrete data, when possible, like estimating impact using the number of requests for a feature or its cumulative MRR
  • Run estimates by your team members to make sure they make sense

Savio can help you use concrete metrics to assign scores for impact. For example, you can easily see the cumulative MRR for each feature (cumulative MRR is the sum of the MRR from each customer who has asked for that feature). This gives you a more objective measure of impact than, for example, simply guessing.

ICE is not necessarily customer-centric

Another problem with ICE is that it’s not necessarily customer-centric. Sure, it can be, as long as you’re using customer feedback and feature requests as part of your scoring for impact. But many PMs don’t.

What to do: Take into account your customers’ voice when assigning scores. That can mean building your feature list using customer feature requests, or using customer surveys to estimate impact.

However you do it, don’t forget to pull your customer into the ICE prioritization process somehow.

Potential for gaming

One of the benefits of ICE is that it offers a potentially unbiased method for finding the features that are likely to give you the most bang for your buck.

And it can do that—as long as your scoring is unbiased.

But you or your team members can also potentially game the system by scoring your favored features (even unintentionally) higher for impact, ease, or confidence. To the extent that your scoring is biased, the final ICE scores will be too.

What to do: Make sure the scoring is done by at least a few different people so that any one person’s bias gets diluted. Also, scores that differ greatly from each other can point you to where you need more data or discussion to score accurately.
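One lightweight way to do that, sketched here with hypothetical scorers and scores:

```python
from statistics import mean, pstdev

# Hypothetical impact scores for one feature from three independent scorers.
impact_scores = {"PM": 9, "Engineer": 3, "Designer": 7}

avg = mean(impact_scores.values())
spread = pstdev(impact_scores.values())
print(f"average impact: {avg:.1f}, spread: {spread:.1f}")

# A wide spread is a signal to gather more data or discuss before trusting the score.
if spread > 2:
    print("High disagreement -- discuss before finalizing this score.")
```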

Under-emphasizing technology debt

The ICE model tends to prioritize projects based on their potential impact on users and their ease. One consequence is that tech debt projects usually don’t end up with very high impact scores, and so they tend to be deprioritized by ICE.

But while tech debt initiatives may not have a direct or immediate impact on customers, they can have a major impact on a product's long-term maintenance, stability, and scalability. Neglecting tech debt can slow down your development process, increase the risk of bugs, and reduce your ability to innovate.

So you’ll want to get those projects in your roadmap somehow.

What to do: I like to think about my roadmap as having a development budget, and then setting aside “buckets” of time for different categories of work: customer requests, strategic features, and tech debt.

For example, you might spend 50% of your dev time on requests, 25% on strategic features, and 25% on tech debt. Then, you can prioritize features using ICE from within each bucket.

That way, every roadmap has some time set aside for tech debt, even if its ICE score wouldn’t put it at the top of the list.
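As a rough sketch of what that budget math looks like (the hours and percentages are hypothetical):

```python
# Hypothetical dev budget split into buckets; prioritize with ICE inside each bucket.
TOTAL_DEV_HOURS = 400  # e.g., one quarter of capacity for a small team

buckets = {"customer requests": 0.50, "strategic features": 0.25, "tech debt": 0.25}

for bucket, share in buckets.items():
    print(f"{bucket}: {share * TOTAL_DEV_HOURS:.0f} hours")
# customer requests: 200 hours
# strategic features: 100 hours
# tech debt: 100 hours
```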


What about alternative prioritization frameworks?

There are many other prioritization frameworks that you could choose to use instead of ICE. Here are some of the most popular ones:

  • Value x Effort matrix. A simpler version of ICE that doesn’t consider confidence. I don’t love this one—I think you’re better off using ICE.
  • RICE Scoring Model. A close cousin of ICE. What makes it different is that in addition to “impact”, you also estimate “reach”. Also, you typically estimate “effort” instead of “ease”, so the formula changes a bit.
  • Weighted scoring. Similar to ICE, RICE, and Value vs. Effort, but it’s even more flexible because it lets you include any factors you want.
  • The MoSCoW method. This method categorizes features into must-haves, should-haves, could-haves, and won’t-haves. I don't find it super useful, but some people like it.
  • The Kano Method. This framework categorizes features into buckets based on how they affect the user experience. I love that it’s customer-centric, but it’s more work to do properly.
  • Story mapping framework. This method prioritizes based on how customers use the product. It’s quite different from the other frameworks and is a good method to be familiar with.
  • The Savio method. This is the strategy we use. Basically, you track what your customers are asking for and match those requests with their other data (like MRR). Then you can prioritize the features that best meet your specific business goals.

Final takeaways

The ICE model is a simple, easy-to-use prioritization technique that can help improve product decision-making.

It’s useful because it can give product teams a short list of features that are likely to have a fairly large impact relative to how much effort they take.

Just make sure you’re scoring in a way that’s unbiased and relatively accurate. And try to build the voice of your customers into the scoring so that you end up with a product your customers actually want.

*Want to implement ICE? Try out our (free) calculator and template.*

Still not sure?

Fair enough—picking a roadmap prioritization framework can be complicated. Take a look at the other models out there, or learn more about how we do it at Savio.

Up next: The 8 Most Common Prioritization Frameworks—and How to Choose One

(Thanks to the following articles for helping me better understand ICE and giving more context about its strengths, weaknesses, and how to do it well.)

Last Updated: 2023-04-30

Kareem Mayan

Kareem is a co-founder at Savio. He's been prioritizing customer feedback professionally since 2001. He likes tea and tea snacks, and dislikes refraining from eating lots of tea snacks.
