What is the Weighted Scoring Model for Feature Prioritization? Overview, Guide, and Template

The weighted decision matrix is a flexible product management prioritization framework. Here’s how to use it (well).

Weighted scoring (also called the weighted decision matrix) is a flexible technique product managers can use to prioritize features for their roadmap. You define the criteria, set weights based on importance, and then calculate an overall score for each feature.

Done well, it’s like the “build your own sundae” of prioritization frameworks—you choose the toppings (criteria) you want. Yummy.

But done badly, it can be like one of the novelty drinks from Day Drinking with Seth Meyers… you could end up with something you didn’t want.

Weighted scoring done badly is the product management equivalent of mixing together a bunch of different wines.

So here’s everything you need to know about weighted scoring—what it is and how to avoid messing it up.

TL;DR

  • Weighted scoring is a method for prioritizing software features where you score each feature using criteria and weights that you choose.
  • Its biggest advantage is that you can include any criteria that matter to you and weight some criteria more heavily than others.
  • But if you don’t choose your criteria well, you could end up with a scoring system that leads you to pick the wrong features.
  • Download this free weighted decision matrix template to get started in Google Sheets.

What is weighted scoring for feature prioritization?

Weighted scoring is a technique used in product and project management to prioritize features and initiatives based on a set of criteria you choose. It gives your team a system for making informed decisions about which features to put on your roadmap.

Why use it?

Weighted scoring is a way for you to systematically evaluate your set of features against criteria you choose. It helps you identify the best feature based on those criteria.

That helps your decision-making. You get a ranked list of features so you can spend your dev resources building features that will—ideally—actually improve your product.

Weighted scoring vs. RICE vs. ICE

Weighted scoring is similar to other prioritization frameworks that depend on scoring like RICE and ICE. All three of these models work by assigning scores to features for some set of criteria and then calculating a total score. That score forms the basis of your prioritization.

The difference is just in what criteria the models use.

  • The RICE framework uses three criteria (reach, impact, and effort) plus a score for your confidence in your scoring.
  • The ICE framework uses two criteria (impact and ease) and then accounts for your confidence in your scoring.
  • The weighted scoring model is more flexible because you can use any criteria you want. You can build in reach, impact, effort, and confidence if you like, but you can also use any other criteria that strike your fancy. (The sketch below shows how the three scoring formulas compare.)
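For the curious, here’s a rough sketch of how the three kinds of scores are typically computed. The RICE and ICE formulas shown are the commonly cited versions (exact definitions vary by team), and the numbers are made up purely for illustration:

```python
# Common formulas for the three scoring approaches (variants exist).
# All values below are made up for illustration.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted scoring: sum of (criterion score x criterion weight)."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE, as commonly defined: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE, as commonly defined: Impact x Confidence x Ease."""
    return impact * confidence * ease

print(weighted_score({"value": 8, "ease": 6}, {"value": 0.6, "ease": 0.4}))  # 7.2
print(rice_score(reach=500, impact=2, confidence=0.8, effort=3))             # ~266.7
print(ice_score(impact=7, confidence=6, ease=8))                             # 336
```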

How to create a weighted scoring model—step-by-step guide

Implementing weighted scoring is pretty straightforward.

  1. Identify your criteria. First, your team needs to establish which criteria matter for evaluating features. These criteria may include factors like business value, user impact, implementation effort, strategic alignment, or risk.

  2. Assign weights. Next, assign each criterion a numerical value for weight based on its relative importance. The weights should add up to 100% (or 1). If you’re not sure, assign the same weights across criteria.

  3. Score features. Get together the list of features from your backlog to prioritize. For each feature, assign a score to every criterion based on how well it meets the criterion. Scores should be assigned using a numerical scale (e.g., 1-5 or 1-10).

  4. Calculate the weighted average score. Multiply the score of each criterion by its corresponding weight, then sum the results to get the weighted average score for each feature. (The code sketch below shows the arithmetic.)

  5. Rank features. Sort the features based on their weighted scores. You would normally prioritize the highest-scoring features—those at the top of your list.

  6. Review and adjust. You can periodically review and adjust the criteria, weights, and scores from one roadmapping session to the next.

Tip: Remember that prioritization is always part art and part science. This method is useful as a first cut, but you might not necessarily prioritize the features with the highest scores. There might be some good reason to prioritize features with lower scores.
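If you’d rather script these steps than run them in a spreadsheet, here’s a minimal sketch in Python. The criteria, weights, and feature scores below are placeholders; swap in your own:

```python
# Minimal weighted-scoring sketch (steps 2-5). All values are placeholders.

weights = {            # step 2: weights should add up to 1
    "value": 0.4,
    "ease": 0.25,
    "risk": 0.1,
    "alignment": 0.25,
}
assert abs(sum(weights.values()) - 1) < 1e-9, "weights must add up to 1"

features = {           # step 3: score each feature on every criterion (1-10)
    "Feature A": {"value": 7, "ease": 5, "risk": 6, "alignment": 8},
    "Feature B": {"value": 4, "ease": 9, "risk": 7, "alignment": 6},
    "Feature C": {"value": 9, "ease": 3, "risk": 5, "alignment": 7},
}

def weighted_score(scores: dict[str, float]) -> float:
    # step 4: multiply each criterion score by its weight, then sum
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

# step 5: rank features from highest to lowest weighted score
for name, scores in sorted(features.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```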


Weighted decision matrix example (and template)

Let’s work through an example. Imagine we were trying to prioritize our list of features (let’s say there are only three, for simplicity):

  • Zapier integration
  • Improvements to permissions and roles
  • Streak CRM integration

First, get your list of feature requests. If you centralize your feature requests in Savio, this part is already done for you.

Note: We’ll be using our weighted scoring calculator template for this example. Download it here.

1. Identify criteria

Let's start by defining the four criteria we might use for this example:

  1. Cumulative monthly recurring revenue (MRR). This will be the sum of the MRR for each customer that asked for the feature.

  2. Implementation ease. We’ll score each feature on “ease”. Higher scores mean the feature is easier to build. (If you use “effort” instead, invert the scale so that features requiring more effort get lower scores.)

  3. Lack of risk. We’ll assign a score out of 10 to represent some measure of risk that the feature won’t meet our expectations for some reason. Higher scores will mean less risk.

  4. Strategic alignment. We’ll assign a score out of 10 to represent how well the feature aligns with our product strategy and vision. Higher scores mean better alignment.

We’re using Value (MRR), Ease, Risk, and Alignment as criteria here. It’s just an example—use the criteria you think are best.

2. Assign weights

Now we assign weights based on each criterion’s importance. In general, I like to assign equal weights, but let’s imagine for the sake of this example that MRR is more important than the other criteria and risk is less important. We might assign the following weights:

  1. Cumulative MRR: 40%

  2. Implementation ease: 25%

  3. Lack of risk: 10%

  4. Strategic alignment: 25%

Note: weights should always add up to 100% (or 1).

Here we’ve assigned weights to each score. I assigned different weights in this example to show how that works, but unless you’re sure about weights, it’s often best to keep them all equal.
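If it’s easier to think in terms of relative importance than percentages, you can rate each criterion on any scale you like and normalize the ratings so they sum to 1. A quick sketch (the importance ratings here are made up to reproduce the weights above):

```python
# Turn rough importance ratings into weights that sum to 1.
importance = {
    "mrr": 4,          # most important
    "ease": 2.5,
    "risk": 1,         # least important
    "alignment": 2.5,
}
total = sum(importance.values())
weights = {criterion: rating / total for criterion, rating in importance.items()}
print(weights)  # {'mrr': 0.4, 'ease': 0.25, 'risk': 0.1, 'alignment': 0.25}
```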

3. Score features

Now we’ll score each feature out of 10.

Note: for MRR, we’ll give the feature with the highest MRR a score of 10. Each remaining feature’s MRR gets a score equal to its fraction of that highest MRR, multiplied by 10. That puts MRR on the same 0-10 scale as the other scores (the snippet after this list shows the math).

  • Zapier integration: it has the highest cumulative MRR of the three ($4,250), so it gets a 10 on cumulative MRR. We’ll score it a 6 for implementation ease, a 7 for risk, and a 7 for strategic alignment.
  • Improvements to permissions and roles: We’ll give it a 3.8 for MRR (its MRR of $1,600 is 38% of the Zapier integration’s $4,250). We’ll give it a 9 on implementation ease, a 5 for risk, and a 9 for strategic alignment.
  • Streak CRM integration: We’ll give it a 1.8 for MRR (its MRR of $750 is 18% of the Zapier integration’s $4,250). We’ll score it a 4 on implementation ease, a 4 for risk, and a 5 for strategic alignment.
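In code, that normalization looks something like this (using the MRR figures from this example):

```python
# Put raw MRR on the same 0-10 scale as the other criteria:
# the highest MRR gets a 10, the rest are scaled proportionally.
mrr = {
    "Zapier integration": 4250,
    "Improvements to permissions and roles": 1600,
    "Streak CRM integration": 750,
}
top = max(mrr.values())
mrr_scores = {feature: round(value / top * 10, 1) for feature, value in mrr.items()}
print(mrr_scores)
# {'Zapier integration': 10.0, 'Improvements to permissions and roles': 3.8,
#  'Streak CRM integration': 1.8}
```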

Here we’ve assigned scores out of 10 for each feature on the four criteria.

4. Calculate the weighted scores

Now, we'll multiply the scores by the respective weights and sum the results. You’ll end up with the weighted scores for each feature.

  • Zapier integration: (10 * 0.4) + (6 * 0.25) + (7 * 0.1) + (7 * 0.25) = 4 + 1.5 + 0.7 + 1.75 = 7.95
  • Improvements to permissions and roles: (3.8 * 0.4) + (9 * 0.25) + (5 * 0.1) + (9 * 0.25) = 1.52 + 2.25 + 0.5 + 2.25 = 6.52
  • Streak CRM integration: (1.8 * 0.4) + (4 * 0.25) + (4 * 0.1) + (5 * 0.25) = 0.72 + 1 + 0.4 + 1.25 = 3.37

Here we’ve calculated the final priority scores for each feature.
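If you’d rather check the arithmetic in code than by hand, this small snippet reproduces the three weighted scores:

```python
# Reproduce the example's weighted scores (same weights and scores as above).
weights = {"mrr": 0.4, "ease": 0.25, "risk": 0.1, "alignment": 0.25}

scores = {
    "Zapier integration":                    {"mrr": 10,  "ease": 6, "risk": 7, "alignment": 7},
    "Improvements to permissions and roles": {"mrr": 3.8, "ease": 9, "risk": 5, "alignment": 9},
    "Streak CRM integration":                {"mrr": 1.8, "ease": 4, "risk": 4, "alignment": 5},
}

for feature, feature_scores in scores.items():
    total = sum(feature_scores[criterion] * weight for criterion, weight in weights.items())
    print(f"{feature}: {total:.2f}")
# Zapier integration: 7.95
# Improvements to permissions and roles: 6.52
# Streak CRM integration: 3.37
```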

5. Rank the features

These features, from highest to lowest final score, are:

  • Zapier integration (7.95)
  • Improvements to permissions and roles (6.52)
  • Streak CRM integration (3.37)

Finally, we sorted the rows by priority score so that the highest scores are at the top.

In this example, the Zapier integration has the highest score and so you might choose to make it the first priority for your product roadmap.

Pros of weighted scoring

Some of the benefits of weighted scoring include:

  1. Objective prioritization. By using a set of predefined criteria and weights, weighted scoring can reduce subjectivity and personal biases among decision-makers, leading to more objective and data-driven decisions.

  2. Clear communication. The method provides a transparent and easily understandable approach to a product decision-making process. It’s easy to communicate the rationale behind decisions to stakeholders and team members.

  3. Adaptability. The weighted scoring method is flexible and can be easily adjusted to accommodate changes in project requirements, market conditions, or other factors that may impact priorities. It can accommodate any criteria you like, as long as you can quantify them.

  4. Stakeholder alignment. Weighted scoring can encourage collaboration among teams and stakeholders because it requires you to define criteria and assign weights. Selecting criteria as a group helps create a shared understanding of what matters for new product features.

  5. Easy comparison. You can easily compare the resulting weighted scores across features to understand their relative importance and make quick prioritization decisions. You can even compare between different kinds of improvements, like customer requests, strategic features, and technical debt.

Cons of weighted scoring

That said, there are some potential pitfalls of the method. Here’s what they are and how you can address them.

You can choose the wrong criteria

This is the big problem. The criteria you use will dictate which features you end up choosing.

If you decide that you care most about the name of the feature—which one sounds the coolest—you might end up ranking features that have cool names, not necessarily those that will make the best product.

Obviously, you’re probably not going to do that. But the point remains: weighted scoring doesn’t give you any guidance on which criteria to use for prioritization. That gives you lots of freedom, but the risk is that you pick criteria that lead you to systematically choose features that aren’t useful or needed.

What to do: Make sure you choose criteria that really matter for building good products. Solid criteria include how much value a feature provides, how much it costs to build, its alignment with your product strategy, and so on.

Scoring can be done badly

Even if you have the right criteria, scoring features against them well can be tricky.

Most product teams score by pulling a number that feels right out of their heads: “How much impact do I think this will have? Let’s say… 7 out of 10.”

That’s not great. The more arbitrary your scoring, the worse your results will be.

What to do: Instead, base your criteria on concrete data you already have.

In the example above, we used cumulative MRR as the metric for value because we don’t have to guess about it: it’s a solid number based on the customer feedback and feature requests we receive.

Try to use that solid data when you can. If you do have to estimate scores, make sure you gut-check your scores with knowledgeable team members. For example, if you’re estimating effort to build a feature, make sure you double-check those estimates with the product development team.

It’s not always customer-centric

Here’s where I admit that I have certain strongly held opinions about prioritization.

One of those opinions is that if you want any chance of making a good product, you have to be considering your customers’ needs. And not in a superficial way—in a genuine “I-systematically-collect-customer-feedback-and-apply-it” kind of way.

(I know—very controversial.)

Weighted scoring doesn’t necessarily accommodate customer needs. You need to make sure you’re building your customer’s voice into the framework. Without doing that, you risk building a product that doesn’t meet your users’ needs or have market fit.

What to do: Tie at least one of the criteria in your framework to your customers.

For example, you could calculate the cumulative MRR of each feature by adding up the MRR of every customer that has asked for that feature. Then, you could put MRR in your scoring formula. That would ensure that features are scored higher when they are requested by the customers that make up a greater proportion of your revenue.
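Here’s a rough sketch of that aggregation. The request records, customer names, and MRR figures are invented for illustration; in practice the data would come from wherever you track feature requests:

```python
# Sum the MRR of every customer who asked for each feature.
from collections import defaultdict

requests = [  # made-up feature request records
    {"customer": "Acme", "mrr": 900, "feature": "Zapier integration"},
    {"customer": "Globex", "mrr": 350, "feature": "Zapier integration"},
    {"customer": "Initech", "mrr": 500, "feature": "Streak CRM integration"},
    {"customer": "Acme", "mrr": 900, "feature": "Streak CRM integration"},
]

cumulative_mrr = defaultdict(float)
seen = set()
for request in requests:
    # count each customer's MRR once per feature, even if they ask repeatedly
    key = (request["customer"], request["feature"])
    if key not in seen:
        seen.add(key)
        cumulative_mrr[request["feature"]] += request["mrr"]

print(dict(cumulative_mrr))
# {'Zapier integration': 1250.0, 'Streak CRM integration': 1400.0}
```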

Read more: Why collect and use customer feedback


The weighting is non-trivial

Another challenge is choosing the weights for each category. If you assign too much weight to a specific criterion, you could overemphasize it and de-emphasize other important factors.

There’s actually a bigger technical discussion here about how to determine weights in a measurement tool, but I’m going to leave that to academics. Suffice it to say that there’s some disagreement about whether (and when) weighting is useful.

What to do: In general, I would lean towards using equal weights across the criteria you choose. So, if you have 5 criteria in your weighted scorecard, I’d weight them all at 0.2 so that, together, they add up to 1.

The exception would be if you know that one criterion is much more important than the others. But unless you have a very good reason, I’d stick with equal weighting.

What about alternative prioritization frameworks?

There are lots of other feature prioritization methods you could choose to use instead of the weighted scoring matrix. Here are some other popular options:

  • Value vs. Effort matrix. A quick-and-dirty cost-benefit analysis for finding low-hanging fruit. It scores each feature on its value and the effort it would take to build.
  • RICE scoring framework. Very similar to value vs. effort, but value is broken into two metrics: reach and impact. It also factors in your confidence in your scoring.
  • ICE scoring. Very similar to value vs. effort, but also takes into account your confidence in your scoring.
  • The MoSCoW method. This method categorizes features into must-haves, should-haves, could-haves, and won’t-haves. I’m not a huge fan, but it might be helpful for some.
  • The Kano method. This method categorizes product features by how they’ll impact customer experience. I love that this method uses customer surveys and data to make categorization calls.
  • User story mapping. Prioritizes based on how customers use the product and what comes next in the story.
  • The Savio method. First, keep track of what your customers are asking you, along with customer data. Then look for the features that best accomplish your specific business goals.

Read more: How to prioritize your feature requests

Takeaway: The weighted scoring framework is flexible, but make sure you pick solid criteria

All in all, weighted scoring is a useful method to have in your back pocket. It’s a way of taking a bunch of criteria that you know are useful to you when you’re prioritizing, and applying them to a big list of features.

Done well, it can:

  • Give you a flexible set of custom criteria for evaluating your features
  • Generate a ranked list of your features by the criteria you choose
  • Provide a transparent way to justify product decisions that you have made.

Just remember:

  • The criteria you choose make or break the method—make sure you’re choosing criteria that really matter.
  • Also, make sure that at least some of your criteria are connected to your customer feedback—what your users say they want.
  • The weights you assign and your scoring also are critical. If you’re not sure, just use equal weights across criteria.

Get the template: Weighted decision matrix template and calculator

If you’re not sure whether it’s the right prioritization model for you, do a mock roadmapping session and try it out. If you’re not feeling it, consider the method we use at Savio (we’re PM veterans, and this is the strategy we’ve developed after 20+ years of experimenting).


Last Updated: 2023-04-24

Kareem Mayan

Kareem is a co-founder at Savio. He's been prioritizing customer feedback professionally since 2001. He likes tea and tea snacks, and dislikes refraining from eating lots of tea snacks.
