How to Overcome The Flaws in RICE to Prioritize Your Product Roadmap
In this post, we will help you overcome the major flaws in the RICE prioritization model. It was originally written for DoWhatWorks clients, but we are sharing it publicly to help more people win more often.
One of our clients needed to boost the conversion rates on their sales page quickly. They had tons of great ideas. The problem was that they had a limited number of tests they could run and they needed to determine which ideas to actually try first.
If you are part of a growth or product team, you probably have more ideas than you can execute, too. You aren’t alone. The major streaming brands, SaaS companies, large banks, and health and ecommerce companies we work with face similar problems.
So how do you prioritize what to bet on?
We recommend using the RICE framework with some important modifications that address the parts of the approach that are deeply flawed.
In this post, we will dive into the framework and share approaches to overcome the flaws to help you win more often.
What is RICE?
The RICE framework was popularized by Intercom and gained traction because it helps product and growth teams rank order potential initiatives.
The framework helps decision-makers determine which products, features, and growth initiatives to put on their roadmaps. You score each potential initiative on four factors: Reach, Impact, Confidence, and Effort. Then, you plug those scores into a formula to get an overall score for each potential feature. Finally, you rank your initiatives by score: the highest-scoring items get prioritized first, and lower-scoring items come later.
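For reference, the formula Intercom popularized multiplies Reach, Impact, and Confidence and divides the result by Effort. Here is a minimal sketch of that calculation; the `rice_score` helper, the input scales, and the numbers are illustrative assumptions, not a prescription for how to scale each input.

```python
def rice_score(reach, impact, confidence, effort):
    """Composite RICE score: the higher the score, the sooner you prioritize it.

    reach      -- qualified people affected per time period (e.g., per quarter)
    impact     -- estimated impact per person on a relative scale
                  (e.g., 3 = massive, 2 = high, 1 = medium, 0.5 = low)
    confidence -- how sure you are, as a fraction between 0 and 1
    effort     -- estimated investment (e.g., person-months)
    """
    return (reach * impact * confidence) / effort

# Hypothetical initiative: redesign the pricing page.
print(rice_score(reach=8_000, impact=2, confidence=0.8, effort=3))  # ~4266.7
```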
How to use RICE and overcome its flaws
Let’s look at each of the four pillars of RICE.
Reach
As the saying goes, "You should fish where the fish are." Similarly, you should optimize where your customers are.
Reach is measured by the number of qualified visitors who see your optimization. It helps you focus on optimizations that will impact as many customers or potential customers as possible in a given timeframe.
If you are doing all the hard work to improve an experience, make sure you are focusing on the experiences with the highest reach. These typically include your homepage, product, pricing, signup and landing pages.
Why focus on these pages first?
To illustrate the impact, let’s say you are debating between optimizing two experiences: Page A and Page B, where Page B reaches five times as many qualified visitors.
The same lift on Page B will deliver 5X more of the desired outcome. All other things being equal, it’s a no-brainer to start with Page B.
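To make the math concrete, here is a hypothetical comparison; the visitor counts and the lift below are made up for illustration.

```python
# Hypothetical monthly traffic for the two pages; Page B has 5X the qualified reach.
page_a_visitors = 10_000
page_b_visitors = 50_000
lift = 0.02  # the same 2-percentage-point conversion lift on both pages

print(page_a_visitors * lift)  # 200 additional conversions per month
print(page_b_visitors * lift)  # 1,000 additional conversions per month -- 5X the outcome
```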
Potential pitfalls with Reach and how to overcome them
While calculating reach seems self-evident, you want to avoid the following traps:
Trap 1: High-volume pages that attract people with weak intent
It can be tempting to prioritize and optimize pages with millions of visitors, even if those visitors have low intent. Many content pages lure people in who may not be qualified. You do not want to write them off since they may become qualified, but you should be careful in setting timing expectations. If you are looking for an immediate boost in trials or revenue, give less weight to pages where visitors have low intent.
Trap 2: Optimizing irrelevant parts of high-intent pages
The more you focus on areas central to your potential customer's journey, the more your impact will be amplified. Focus on the core aspects directly related to the primary questions people want to solve.
Trap 3: Ignoring strategically important pages with low volume (like ‘signup’)
Some low-volume experiences are pivotal to overall conversion rates. For example, 100% of your new customers go through the signup experience. Even though it has less traffic than your home page, a lift here will boost the entire funnel. Do not just weigh sheer volume; also factor in the percentage of new customers who need to go through the experience for you to accomplish your aim.
Trap 4: Ignoring potential volume
Marketing campaigns live and die based on the effectiveness of landing pages. Small improvements in conversion rates on a landing page can enable a marketing team to achieve cost-per-customer targets and ramp up the spending/traffic they drive to the page. If you aim to unlock volume by achieving certain conversion rates, factor future volume into the calculation.
If you can avoid these traps, factoring in Reach will pay off handsomely.
Impact
Impact refers to the degree to which your bet might move the outcome you are looking to improve. You might measure impact in terms of new customers, revenue growth, activity, trials, customer satisfaction, or cost savings.
Potential pitfalls with Impact and how to overcome them
Impact is a fundamentally flawed consideration. Most impact estimates are just guesses based on no data. You only get a sense of the impact after you have tried the optimization.
If you knew the impact, you would not need to run a test.
The main exception is when you already have results from a test you ran with a subset of your customers; then you will have a good sense of the potential impact you will see when you roll it out to 100%. If you have no data, your estimates will be works of fiction.
Don’t get bogged down here.
Just plug in a range of potential impact and ask, “Would I move forward if I got towards the top of the range?”
For example, plug in outcomes like a 33% lift, a 20% lift, a 10% lift, a 5% lift, and a 1% lift, and look at what each would deliver.
You don’t know what the lift will be, and there is no guarantee that you will achieve any of these lifts. But if you would not be happy at the high end of the range, then you likely want to avoid this initiative: it lacks reach or is not big enough to be meaningful for you.
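As a minimal sketch of that pressure test, assume a hypothetical page with 20,000 qualified monthly visitors and a 4% baseline conversion rate (both numbers are made up):

```python
# Hypothetical page: 20,000 qualified visitors/month converting at a 4% baseline.
visitors = 20_000
baseline_conversions = visitors * 0.04  # 800 conversions/month

# Pressure-test a range of relative lifts instead of predicting a single number.
for lift in (0.33, 0.20, 0.10, 0.05, 0.01):
    extra = baseline_conversions * lift
    print(f"{lift:.0%} lift -> ~{extra:.0f} extra conversions/month")

# If even the top of the range would not make you happy, skip the initiative.
```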
Do not get stuck here or spend too much time trying to predict the outcome; it will only reveal itself once the test completes. Just pressure-test the initiative to make sure you are not wasting your time and move on.
Confidence
Confidence seeks to gauge the odds that this change will work as expected. A high confidence score indicates you expect a high likelihood of driving your desired outcome, while a low score indicates greater uncertainty.
Theoretically, you would use past experience, qualitative feedback from customers, and market signals to score your confidence in your assumptions.
Potential pitfalls with Confidence and how to overcome them
People often overestimate the likelihood that something will work. In fact, 80% of tests people run (according to Optimizely) do not positively move the needle. Unfortunately, most people have almost no data to inform their confidence score (hence the 4/5 fail rate).
Fortunately, you can use other people's tests to generate confidence scores.
Here’s how you can do it:
- If you use DoWhatWorks, find similar tests in your DoWhatWorks dashboard.
- Treat the outcome of each experiment as a vote for or against an idea. The more often an idea wins, the more likely it will resonate.
- If something wins often, that signals an area of opportunity. Dive in here.
- If something loses almost every time, it’s a signal to stay away from it as it will likely hurt results.
- If you don’t get a definitive winner (i.e., it wins 10/20 times), the variable may be a toss-up that drives no change on its own. Here you want to dive deeper: see what the winning variants have in common, see what you can learn from the losers, and look for patterns to refine your approach.
For example, let’s say you are thinking of adding a QR code on your website to increase the number of app downloads.
Filter your DoWhatWorks Dashboard for all the A/B tests that focused on QR codes, where one version had a QR code and the other didn’t.
Use the pull-downs to filter for the ideas you are considering. You may also want to search by the number of variables to find the experiments that look most like what you want to run.
In this example, Robinhood added a QR code in the hero prompting potential customers to download the app. They ran this test for two weeks and it lost. So count it as a vote against adding the QR code.
Now look at other similar tests.
GoDaddy ran a similar experiment, but they called out the QR code more prominently. It also didn't win. So that offers another vote against.
Now rinse and repeat.
Venmo tried something similar, but the main call to action was replaced with a QR code in the hero. It also lost. So you now have three strong votes against using the QR code.
Repeat the process across multiple tests. You might find that something loses in 10/11 cases. That would signal that you should have low confidence it will work for you, and you might want to spend your time on something more promising.
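Here is a minimal sketch of that vote-counting approach; the test records below are hypothetical stand-ins, not real DoWhatWorks data.

```python
# Hypothetical win/loss records for ideas you are considering (True = the change won).
similar_tests = {
    "qr_code_in_hero": [False, False, False],      # e.g., the three losses described above
    "another_idea":    [True, True, False, True],  # a change that tends to win elsewhere
}

for idea, outcomes in similar_tests.items():
    wins, total = sum(outcomes), len(outcomes)
    print(f"{idea}: won {wins}/{total} similar tests ({wins / total:.0%} win rate)")

# A low win rate (say 1/11) signals low confidence; a high one signals an opportunity.
```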
When we do advanced analysis for clients, we layer in additional weights and factors like industry relevance and the degree to which a variable can be isolated. We also apply advanced statistical algorithms to get probabilistic BetScores. But even your own analysis will save you months of trial and error on things that are unlikely to work.
Fight the urge to just copy what’s popular or what’s appearing on a competitor’s website. Instead, use this data to focus on what has actually been tested and shown to work.
Effort
Effort refers to your best guess at how long the change will take to launch and how much it will cost. It represents the “I” in “ROI,” so it is wise to capture the full investment required to launch the change. You can estimate effort in T-shirt sizes (S, M, L, XL), in days (half a day, 2-3 days, 1 week, 1 month, etc.), in number of teams, or in dollar terms.
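Because the composite score divides by Effort, whichever unit you choose eventually needs to sit on one numeric scale. Here is a minimal sketch assuming a hypothetical mapping from T-shirt sizes to person-weeks; your own mapping will differ.

```python
# Hypothetical mapping from T-shirt sizes to person-weeks of effort.
TSHIRT_TO_PERSON_WEEKS = {"S": 1, "M": 3, "L": 8, "XL": 16}

def effort_in_person_weeks(size: str) -> int:
    """Translate a T-shirt estimate into the numeric effort the score needs."""
    return TSHIRT_TO_PERSON_WEEKS[size]

print(effort_in_person_weeks("M"))  # 3
```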
Potential pitfalls with Effort and how to overcome them
Effort estimations are often inaccurate, especially for creative optimizations. Creative experiments involve many unknowns, making it difficult to estimate how long they will take to complete.
It’s a good rule of thumb to ask yourself, "Would I move forward if this took two times longer than expected to launch?" Then ask, "Would I move forward if it took four times longer?" If the answer is no, you need to pressure-test your effort assumptions.
Double your effort estimate. Now double it again. Would you still move forward?
Doing something with a small expected return can be tempting, especially if it is also low effort. But suppose you’ve underestimated the effort, or the lift comes in lower than you expected. In that case, you will have a pretty terrible ROI.
To estimate effort, collaborate with stakeholders in the development process. Consider stepping your way into the launch with a series of smaller releases. That will help you find hidden gotchas faster. It will also give you a signal from customers or potential customers that they will actually respond as you hope before you build the entire feature.
Putting it all together
Once you score the dimensions, create a composite score for each initiative and rank the initiatives by that score. The resulting list will give you guidance on what will deliver the most bang for your buck.
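As a minimal sketch of that ranking step (the initiatives and their inputs below are hypothetical):

```python
# Hypothetical backlog: (name, reach, impact, confidence, effort).
backlog = [
    ("Simplify signup form",      6_000, 2,   0.8, 2),
    ("Rewrite pricing page copy", 8_000, 1,   0.5, 1),
    ("Add QR code to hero",      20_000, 0.5, 0.2, 1),
]

def composite(item):
    _, reach, impact, confidence, effort = item
    return (reach * impact * confidence) / effort  # (R * I * C) / E

for name, *_ in sorted(backlog, key=composite, reverse=True):
    print(name)
# Highest-scoring initiatives rise to the top of the roadmap.
```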
At the end of the day, you are looking to do the combination of things that will give you the most results for the lowest possible investment (effort).
Not everything will work. But if you are deliberate in assessing opportunities, use real data, and avoid the traps above, you can dramatically increase your odds of success.
Image credit: https://unsplash.com/photos/ETRPjvb0KM0