What is the ICE Scoring Model for Feature Prioritization? Overview, Guide, and Template
Let’s talk about the ICE scoring method for feature prioritization. (I promise not to make any Vanilla Ice jokes.)
Look, lots of PMs love it. I think it’s fine—just be careful how you do it to make sure you’re thinking about your customers and not systematically biasing your product.
Here’s the full guide on exactly what you need to know.
ICE Scoring (TL;DR)
The ICE (Impact, Confidence, and Ease) scoring method is a prioritization framework used to evaluate potential initiatives or ideas.
The method asks evaluators (PMs and other team members) to score ideas based on their impact, confidence, and ease of implementation, and then combine those scores to determine a final priority score.
ICE guides you to focus on the costs and benefits of each feature, as well as the level of confidence you have in your scoring.
Even so, most PMs are essentially guessing when they assign impact and ease scores, rather than grounding them in their customers’ feedback, which can significantly skew the results.
To quickly implement ICE, download our template and calculator.
What is the ICE Scoring Model?
The ICE Scoring Model is a simple framework for prioritizing features and product ideas based on their impact, ease of implementation, and your confidence in your scoring of impact and ease. It was developed by Sean Ellis, the originator of growth hacking (and, appropriately, the author of Hacking Growth).
How are ICE scores calculated?
To calculate the ICE score, you first assign each feature or initiative in your product backlog a score on impact, confidence, and ease of implementation, using a scale from 0 to 10. You then multiply the scores together using the following formula: ICE Score = (Impact * Confidence * Ease). The higher the ICE score, the higher the priority for the feature or project.
To find an ICE score for a feature, score it on Impact, Confidence, and Ease, and then multiply them together.
This model is widely used in product management, marketing, and other fields to help teams make informed decisions about which features or projects to pursue.
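The formula above is simple enough to sketch in a few lines of Python. This is a minimal illustration, not part of the official template; the scores are made up.

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE Score = Impact * Confidence * Ease."""
    return impact * confidence * ease

# Example: a feature scored 8 on impact, 5 on confidence, 6 on ease.
print(ice_score(8, 5, 6))  # 240
```

A feature scored 8/5/6 ends up with an ICE score of 240; a feature scored 9/1/9 ends up with only 81, because low confidence drags the whole product down.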
What is “impact” and how can you score it?
Impact refers to the potential effect or benefits that a feature or project would have on the user or business. A high-impact feature would generate significant value for the user or business, such as improving retention or increasing revenue; a low-impact feature may have some value but is not essential or significant.
The way you measure impact depends on your goals and can be super subjective. That’s why this is probably the most difficult part of finding an ICE score. Here are some ways to think about impact:
What you and your team think the impact will be (not great)
The number of users impacted
The cumulative MRR of customers that asked for each feature
The new revenue each feature would be likely to generate
The value associated with an increase in retention that each feature would generate
The decreases in costs a given feature is expected to produce
The more you can ground your impact scores in objective measures like these, the better.
What is “confidence” and how can you score it?
Confidence refers to your level of certainty in the scores you assign for impact and ease. This piece of the formula steers you toward features and initiatives whose outcomes are more certain.
Estimating confidence level is difficult, too. I like the Confidence Meter system that Itamar Gilad offers (see image below).
In his scheme,
If the only piece of evidence that your feature will have an impact is that you think it will, give a confidence score of 0.01.
If you’re assigning confidence scores based on the opinions of your coworkers and managers, you can give a score of 0.1.
If you have a few customers asking for a feature, you can be a bit more confident that it will have an impact—score it 0.5.
If you’re basing your impact and ease scores on market research, you can use a confidence score between 1 and 3.
If you’re basing your impact estimates on rigorous longitudinal user studies, you can be more confident, and assign scores between 3 and 7.
Finally, if you’re assigning impact scores based on actual launch data, you have high confidence in your estimates of impact, so you can assign a confidence score up to 10.
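One way to keep these tiers consistent across your backlog is to encode them as a lookup table. The tier names and the representative scores within each band below are my own shorthand for the scheme described above, not Gilad’s official labels.

```python
# Representative confidence scores for each evidence tier
# (hypothetical labels; bands follow the scheme described above).
CONFIDENCE_BY_EVIDENCE = {
    "self conviction": 0.01,   # "I think it will work"
    "team opinions": 0.1,      # coworkers and managers agree
    "customer requests": 0.5,  # a few customers asked for it
    "market research": 2.0,    # band: 1-3
    "user studies": 5.0,       # band: 3-7 (rigorous longitudinal studies)
    "launch data": 10.0,       # actual post-launch data, up to 10
}

print(CONFIDENCE_BY_EVIDENCE["customer requests"])  # 0.5
```

Picking one number per tier up front keeps different scorers from quietly using different scales.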
What is “ease” and how can you score it?
Ease refers to the level of effort or complexity required to implement the feature or project. A high ease score means that the feature is easy to implement and requires little effort or resources, while a low ease score means that the feature is complex or difficult to implement.
Again, this metric can be difficult to estimate. In general, it’s good practice to run your estimates by your development team to be as accurate as possible.
ICE vs. RICE—What’s the difference?
The ICE scoring framework is very similar to the RICE prioritization framework—both score features based on how much value they provide relative to the effort they take. Both also take into account your confidence in your scoring.
The main difference between the two models is that RICE includes “reach” in addition to “impact”. This creates a bit more of a distinction between how many users would use a feature and how valuable it would be. In other words, it essentially weights the score a little more towards the “benefits” that a feature would provide than ICE does.
Which is better—ICE or RICE? Neither is obviously better than the other—choose the one that makes the most sense to you.
How to use the ICE framework—a step-by-step guide and example
Here’s your complete guide to using the ICE model to score new features, product ideas, or initiatives. I’ve included examples using our template—download your copy here.
1. Identify the set of projects or features
Create a list of new product features to prioritize. These could be new product ideas that your team has had or feature requests from your customers.
In this example, we have three potential features on our list that we’ll consider building.
2. Score the features on impact, confidence, and ease
Now score each feature, using the guidance above. When possible, try to base your scores on concrete data, like the number of users that have requested a feature or the cumulative MRR tied up in a feature.
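Once every feature has its three scores, ranking the backlog is just a multiply-and-sort. Here is a small sketch with three hypothetical features and made-up scores (the feature names and numbers are illustrative, not from the template):

```python
# Hypothetical backlog; scores are illustrative.
features = [
    {"name": "Dark mode",  "impact": 4, "confidence": 0.5, "ease": 8},
    {"name": "SSO login",  "impact": 9, "confidence": 2,   "ease": 3},
    {"name": "CSV export", "impact": 6, "confidence": 5,   "ease": 7},
]

# Compute each ICE score, then rank highest first.
for f in features:
    f["ice"] = f["impact"] * f["confidence"] * f["ease"]

ranked = sorted(features, key=lambda f: f["ice"], reverse=True)
for f in ranked:
    print(f"{f['name']}: {f['ice']}")
```

Note how "CSV export" tops the list despite a lower impact score than "SSO login": its confidence and ease scores carry it, which is exactly the trade-off ICE is designed to surface.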