This week Apple announced that you can now invite up to 10,000 beta testers through TestFlight to beta test your apps. Apple says this will help developers get more feedback on their apps before launch. While this may sound like great news (the more, the merrier, right?), there are actually a lot of reasons why you shouldn’t be beta testing your products with that many people.
You Can’t Handle That Much Feedback
Beta testing is all about collecting relevant feedback from targeted customers to improve your product before launch. That means assembling a carefully calibrated group of testers who reflect your target market and can give detailed feedback on the product experience.
If you try to collect detailed feedback from 10,000 testers (and they actually provide it), you’ll be sifting through bug reports until next year. You’ll also probably be reading the same feedback over and over as testers submit duplicate reports and sentiments. We’ve found that the ideal tester team size is about 100-200 testers. With a team that size, you can be confident you’ll catch most issues, and you’ll be able to quantify trends within your market. Once you surpass that number, it’s unlikely that you’ll get much value out of the additional testers, though your team will spend hundreds of hours supporting them.
It’s Too Risky
Putting your unreleased product into the hands of strangers is risky. Any one of those testers could leak details about your product, or tarnish your company’s reputation by publicly complaining about your upcoming release. The more people that have your app during beta, the greater the risk that you incur.
This is why we encourage you to keep your beta team to a manageable size. A smaller team makes it easier to ensure your testers have signed and understood your nondisclosure agreements, and to communicate the importance of maintaining confidentiality. It will also make it easier to track down the source if a leak does occur.
Your Product Will Likely Break Before That Point
Many companies stress test their infrastructure as part of the beta phase, and will open their doors to thousands of testers to see if they can support the load. In our experience, however, companies far overestimate how many testers they need for a stress test. Often they’ll organize thousands of testers, only to have their app break at 500 testers, leaving thousands of frustrated fans unable to access the test.
Instead, we recommend starting small, with a few hundred testers, and working your way up to a larger stress test. This will help prevent frustration and likely garner valuable feedback along the way that you can use to improve your product before larger groups begin to use it.
More Testers Won’t Make Up for Poor Processes
Many companies are tempted to throw large numbers of testers at a product precisely because they’re concerned about low participation rates. If your historical participation rate is 5-10%, then you may calculate that you need thousands of extra testers to get the feedback you need. But adding all of those extra testers is only going to make your team’s job harder. They’ll need to spend time finding and supporting those testers, and you won’t know how to interpret the silence from the vast majority that don’t provide feedback. Were those testers turned off by the product? Did the app break for them? Without feedback, you’ll have no idea.
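To put that arithmetic in concrete terms, here’s a minimal Swift sketch of how a historical participation rate translates into the size of the pool you’d have to recruit and support. The 150-response target and the testersNeeded helper are ours, purely for illustration:

```swift
/// How many testers you must recruit to collect a target number of
/// feedback responses, given an expected participation rate (0...1).
func testersNeeded(targetResponses: Int, participationRate: Double) -> Int {
    precondition(participationRate > 0 && participationRate <= 1)
    return Int((Double(targetResponses) / participationRate).rounded(.up))
}

// At a historical 5-10% participation rate, even a modest goal of
// 150 detailed responses forces you to recruit a huge pool:
print(testersNeeded(targetResponses: 150, participationRate: 0.05))  // 3000 testers
print(testersNeeded(targetResponses: 150, participationRate: 0.10))  // 1500 testers
```

The size of that pool is exactly the problem: every one of those testers has to be recruited, onboarded, and supported, whether or not they ever say a word.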
Also, because you’re only receiving feedback from a small percentage of your tester team, you won’t be able to draw conclusions about different segments of your target market: the testers who do respond may not be balanced in a way that reveals those trends. For example, if only “Super Users” are providing feedback, but 60% of your market falls into your “Casual User” persona, then you have no idea how most of your market is reacting to your product.
Instead, it’s better to build a smaller group of carefully vetted testers who reflect the breakdown of your target market, and then focus your team on getting feedback from as many of them as possible. We consistently see over 90% participation on our tests, and we share many of our best practices that you can use to increase your participation rates.
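As a rough sketch of what that looks like in practice, here’s how a vetted team sized to mirror the market translates into expected responses per persona at a high participation rate. The 60/40 persona split, the 150-person team, and the planTeam helper are illustrative assumptions, not prescriptions:

```swift
/// Split a tester team across market personas so its makeup mirrors the
/// market, then estimate the responses you can expect from each persona.
/// The persona shares and participation rate below are illustrative.
struct Persona {
    let name: String
    let marketShare: Double  // fraction of the target market (0...1)
}

func planTeam(size: Int, personas: [Persona], participationRate: Double) {
    for persona in personas {
        let testers = Int((Double(size) * persona.marketShare).rounded())
        let responses = Int((Double(testers) * participationRate).rounded())
        print("\(persona.name): \(testers) testers, ~\(responses) responses expected")
    }
}

// A vetted 150-person team mirroring a market that is 60% casual users,
// run at the ~90% participation a well-supported beta can reach:
planTeam(
    size: 150,
    personas: [
        Persona(name: "Casual User", marketShare: 0.60),
        Persona(name: "Super User", marketShare: 0.40)
    ],
    participationRate: 0.90
)
// Casual User: 90 testers, ~81 responses expected
// Super User: 60 testers, ~54 responses expected
```

A team planned this way gives you a known number of responses from each segment, so you can attribute trends to the right part of your market instead of guessing who the silent majority was.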
When You Do Need Thousands of Testers
There are a few scenarios where you might need a massive amount of testers. One example is if you’re looking to run a field test to collect natural usage data on your app. You can use this data to see how people use your product over time, or to improve your machine learning algorithms before launch.
Or perhaps you’re looking to run a public beta to create buzz about your new game and you just want to get as many people as possible using the product and sharing their excitement with their friends.
In these scenarios, the focus isn’t on feedback, but rather on exposure and usage. And in both of these cases, you’ll have already run more focused alpha and beta tests before reaching this stage. Those earlier tests helped you collect feedback that you were able to use to improve the product. Using the resulting release candidate version of your product for the public beta or field test will likely mean you don’t need TestFlight’s beta seats anymore.
In conclusion, if your primary goal is raw usage data, then 10,000 testers using your app might make sense. But if you want valuable, actionable feedback that gives you clear direction for improving your product, throwing thousands of testers at it won’t get you there.
Instead, you need a group of carefully selected, fully supported testers that will provide you with detailed feedback about your product. This will ensure you have the relevant feedback you need to make data-based recommendations to improve your product and launch with confidence.
Want to find out how many of those carefully selected testers you need? Let our free Test Size Calculator run the numbers for you. Try it out now.