In my last post, we discussed measuring tester engagement as a part of test health. Today, we are going to discuss measuring product success. With Centercode, the easiest way to gauge your product’s performance during the test is to review the Delta Success score, the second of our key performance indicators (KPIs) for Delta. Success captures how well your product would fare, according to your testers, if you released it today. Success is designed to be considered alongside other industry measures of performance (e.g. star rating, NPS) and, because of the way it’s measured, provides a more accurate representation of your product’s quality.
Measuring Project Health With Delta Success
Whenever we talk about analyzing the results of a test, it’s important to first understand the confidence level of our data. One of the most common issues companies run into when testing products without using Centercode is poor tester engagement. Companies often find themselves making critical business decisions based on the star ratings or NPS from 5-10 actively engaged testers out of a much larger tester population. Neither of these traditional success metrics takes the sample size into account, and both can grossly misrepresent the true findings of a research effort. Centercode’s Success KPI is directly related to our Health KPI, which effectively serves as a measure of confidence in the dataset. A Success score will start out as N/A until the Health score is high enough for Centercode to be confident in the assessment. Once Health reaches a D-, the platform will provide an assessment of Success, initially displayed as a wide range. As Delta Health increases over time, we’re able to be more confident in the overall dataset, and the Delta Success range will shrink. Once Health has reached an A+ rating, the Success score will change from a range to a single number.
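To make that relationship concrete, here’s a minimal Python sketch of how a health grade could gate whether success is shown as N/A, a range, or a single number. The grade thresholds, band widths, and function names are illustrative assumptions, not Centercode’s actual implementation.

```python
# Hypothetical sketch: a health grade gates how the success score is displayed.
# Thresholds and band widths are made up for illustration only.

GRADE_ORDER = ["F", "D-", "D", "D+", "C-", "C", "C+",
               "B-", "B", "B+", "A-", "A", "A+"]

def success_display(health_grade: str, success_estimate: float) -> str:
    """Return N/A, a range, or a single number depending on health."""
    if health_grade not in GRADE_ORDER or GRADE_ORDER.index(health_grade) < GRADE_ORDER.index("D-"):
        return "N/A"                      # not enough engagement to trust the data
    if health_grade == "A+":
        return f"{success_estimate:.0f}"  # full confidence: single number
    # The band narrows as the grade climbs from D- toward A+.
    steps_from_top = GRADE_ORDER.index("A+") - GRADE_ORDER.index(health_grade)
    half_width = 2 * steps_from_top       # e.g. D- -> ±22, B -> ±8
    low = max(0, success_estimate - half_width)
    high = min(100, success_estimate + half_width)
    return f"{low:.0f}-{high:.0f}"

print(success_display("D-", 70))  # wide range
print(success_display("B", 70))   # narrower range
print(success_display("A+", 70))  # single number
```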
Measuring Project Success
Another common problem we see in traditional ways of measuring product performance is the inadvertent influence that our regular use of these metrics has on our perception of them. Take star ratings, for example: anyone who’s spent time shopping online knows that 1-5 isn’t a strictly linear scale and that a 3-star product is nowhere near the middle of the road in terms of quality. People constantly reevaluate their definition of “good enough” when using star ratings based on a number of factors that may not even be decided when you begin testing your product, like how much your product costs. Additionally, metrics that ask whether you’d recommend the product to a friend or family member, like NPS, begin to fall apart when testing niche products or early prototypes, since there’s no way to know whether someone’s friends or family fall within your target market.
Much of the Centercode platform has been designed around understanding participant sentiment and guiding testers toward submitting feedback that reflects that sentiment. Our Success metric then compares the impact of Issues, Ideas, and Praise to determine how well a Feature, Phase, or Product is doing. A Feature with an overwhelming amount of Praise will receive a very high score, whereas a Product or Feature dominated by Issues will receive a low score. Because the Centercode Success metric uses the Impact score of each piece of feedback rather than a simple count, you’re able to control the weighting for any Feature, allowing your most important Features to have a larger influence on the calculation of the Success score.
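As an illustration only, here’s a small Python sketch of how impact-weighted feedback could roll up into a 0-100 success-style score. The feedback types, signs, and weighting scheme are assumptions for the sake of the example, not Centercode’s published formula.

```python
# Hypothetical sketch of an impact-weighted success calculation.
# The scale, signs, and weighting scheme are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Feedback:
    kind: str      # "issue", "idea", or "praise"
    impact: float  # impact score assigned to this piece of feedback
    feature: str

# Praise pushes the score up, Issues pull it down, Ideas sit in between.
KIND_SIGN = {"praise": 1.0, "idea": 0.25, "issue": -1.0}

def feature_success(feedback: list[Feedback], feature_weights: dict[str, float]) -> float:
    """Roll impact-weighted feedback up into a 0-100 success-style score."""
    weighted_sum = 0.0
    weighted_total = 0.0
    for fb in feedback:
        w = feature_weights.get(fb.feature, 1.0) * fb.impact
        weighted_sum += w * KIND_SIGN[fb.kind]
        weighted_total += w
    if weighted_total == 0:
        return 50.0  # no signal yet
    # Map the -1..1 sentiment balance onto a 0-100 scale.
    return 50.0 * (1.0 + weighted_sum / weighted_total)

feedback = [
    Feedback("praise", 3.0, "battery"),
    Feedback("issue", 5.0, "setup"),
    Feedback("idea", 2.0, "battery"),
]
weights = {"setup": 2.0, "battery": 1.0}  # setup matters most for this product
print(round(feature_success(feedback, weights), 1))
```

Because a single high-impact Issue on a heavily weighted Feature moves the score more than a pile of minor Praise elsewhere, this kind of rollup tells you where to focus, not just how you’re doing overall.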
Leveraging Centercode Success for Project Health Reporting
Using Centercode Success can help you avoid one particular area where star ratings and NPS fail to reflect reality: when pricing information about the product in question is explicitly withheld. Budget products will almost always perform worse than premium products during testing if customers aren’t told about the pricing differences. Often we’ll see a premium product outperform a budget product during testing, then fall behind once publicly released, making it seem like testing is providing unreliable results. With Success, however, you can observe product quality independent of these influences. It’s probably okay if your budget product has a lower Success score than your premium product. You should be setting specific goals for each product when you start testing (e.g. our budget product should hit 62 Success and our premium model should reach 84). Once you’ve set a target for your product, you can enter it on the Centercode Product Success dashboard and our system will tell you what you need to do to reach it. After building a dataset from multiple projects, Success can provide a consistent and universal benchmark for assessing quality across all of your products, regardless of price.
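For a quick sense of what tracking those per-product targets looks like, here’s a tiny sketch; the product names, targets, and observed scores are invented for illustration.

```python
# Hypothetical per-product Success targets, independent of price tier.
# Product names, targets, and observed scores are made up for illustration.

targets = {"budget_cam": 62, "premium_cam": 84}
observed = {"budget_cam": 58, "premium_cam": 86}

for product, target in targets.items():
    gap = target - observed[product]
    status = "on track" if gap <= 0 else f"{gap} points below target"
    print(f"{product}: Success {observed[product]} vs. target {target} -> {status}")
```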
Incorporating Success into your testing process will require some adjustment. Once you have it established as a primary KPI, it will not only provide you with more objective data, but also make it easier to know where to focus your attention. We’ve highlighted a number of areas where a Success score is a more appropriate project health indicator than some traditional KPIs, but you can and should track data for multiple KPIs. To help with the transition, consider designing a plan for moving from star ratings or NPS to Success as your primary measure. Finding your organization’s target for “Good Enough” when it comes to Success can take a while, and it’s one of the biggest challenges in our new connected world. But once you’re able to target and measure product performance via Success, you’ll be on your way to better, more refined products.
Check out Rob's write-ups on the remaining Delta KPIs: