What are issues in delta testing?
In delta testing, issues are a type of feedback representing quality, interoperability, and real-world performance problems (often bugs) that have degraded a tester's experience with the product. Along with ideas and praise, it is one of the three key feedback types.
Testers submit issues as they use your product in their personal environments over a period of time. Unlike ideas, which offer suggestions for improvement, issues should only be submitted when the product is not working as designed. In fact, collecting issues and ideas as separate feedback types is one of the first upgrades a product manager should make in a struggling user testing program; it makes a major difference in the intake, organization, and actionability of product feedback.
Form elements when collecting issues
Here are a few common elements of an Issue form:
- Summary: a single line summary of the issue
- Feature: the impacted experience or area of the product
- Steps to reproduce: a step-by-step account of what the tester did before encountering the issue to aid QA or admins who are attempting to reproduce it
- Technographics: a profile of the device on which the tester experienced the problem
- Attachments: any associated files (e.g., screenshots, logs, videos)
- Blocking status: whether or not the issue is preventing the tester from further testing
- Severity: a label indicating how severely the tester perceives the issue to have impacted their experience (critical, major, minor, or trivial)
- Status: where the issue is in the workflow or decision process
- Comments: an area where admins and other testers can collaborate and provide additional details on the submitted issue
- Votes: a place where testers can indicate they are also experiencing the issue
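The fields above could be modeled as a simple record. Here is a minimal sketch in Python; the field names, types, and the `Severity` values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"
    MAJOR = "major"
    MINOR = "minor"
    TRIVIAL = "trivial"

@dataclass
class Issue:
    summary: str                        # single-line summary
    feature: str                        # impacted experience or product area
    steps_to_reproduce: list[str]       # what the tester did before the failure
    technographics: dict[str, str]      # device/OS profile
    attachments: list[str] = field(default_factory=list)  # screenshots, logs, videos
    blocking: bool = False              # is further testing prevented?
    severity: Severity = Severity.MINOR
    status: str = "new"                 # workflow or decision state
    comments: list[str] = field(default_factory=list)     # collaboration thread
    votes: int = 0                      # other testers hitting the same issue

# Hypothetical submission for illustration
issue = Issue(
    summary="App crashes on login",
    feature="Authentication",
    steps_to_reproduce=["Open app", "Enter credentials", "Tap Sign In"],
    technographics={"os": "Android 14", "device": "Pixel 8"},
    blocking=True,
    severity=Severity.CRITICAL,
)
```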
Prioritizing issues with Impact Scoring
Most tests will surface more issues than your team can fix, along with low-value issues that aren't worth fixing. You need a good system for prioritizing issues so you know which ones to get in front of your development team. Impact Score is a great metric for this: it helps you decide at a glance which pieces of feedback are most important.
To calculate the Impact Score of an issue, you multiply its prevalence or popularity (counts of submissions, votes, comments, etc.) by the importance of the relevant feature (a numerical weight you assign). This gives you a simple value that you can use to sort, compare, and filter all your feedback.
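That calculation can be sketched in a few lines of Python. Here, prevalence is counted as submissions plus votes plus comments, and the feature weights are values you would assign yourself; all names and numbers are illustrative assumptions:

```python
# Feature importance weights you assign (illustrative values).
feature_weight = {"Authentication": 5, "Reporting": 3, "Themes": 1}

def impact_score(submissions: int, votes: int, comments: int, feature: str) -> int:
    """Impact Score = prevalence x feature importance."""
    prevalence = submissions + votes + comments
    return prevalence * feature_weight[feature]

# Hypothetical feedback items
issues = [
    {"summary": "Crash on login", "feature": "Authentication",
     "submissions": 4, "votes": 10, "comments": 3},
    {"summary": "Dark theme glitch", "feature": "Themes",
     "submissions": 2, "votes": 1, "comments": 0},
]

# Sort feedback by Impact Score, highest first.
ranked = sorted(
    issues,
    key=lambda i: impact_score(i["submissions"], i["votes"], i["comments"], i["feature"]),
    reverse=True,
)
# Crash on login: (4 + 10 + 3) * 5 = 85; theme glitch: (2 + 1 + 0) * 1 = 3
```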
As you can see, making feedback collaborative for testers provides strong signals for product managers when it's time to prioritize. Without options like voting and commenting, every tester would have to submit a full report, so fewer people may end up indicating that they've encountered a given issue.
What issues tell you about your product or feature
Issues reveal defects, bugs, or failures that testers have experienced with your product. Obviously, when testers encounter friction because your product isn't performing as designed, it negatively impacts the user experience. But analyzing the volume and impact of issues more closely provides valuable insight into the overall product experience.
High-Risk Features
By looking at the count, severity, and/or impact of issues encountered in each feature, you get a holistic view of which features have the highest risk of negatively impacting the customer experience. This is extremely helpful for prioritizing fixes in the product backlog.
Tuned features or low coverage?
What can a feature with few to no issues tell you? Either the feature is performing optimally or there isn't enough test coverage on it. If you see that testers have completed activities but submitted very few issues, odds are good that the feature is performing well. On the other hand, if you see some issue submissions but low activity completion, it could be a sign that the feature needs more attention.
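That heuristic can be sketched as a simple rule. The function name, inputs, and thresholds below are assumptions for illustration, not fixed cutoffs:

```python
def read_feature_signal(completed_activities: int, issue_count: int) -> str:
    """Rough read on what a feature's issue volume means, given test coverage.

    Thresholds are illustrative; tune them to your program's activity volume.
    """
    if completed_activities >= 20 and issue_count <= 2:
        return "likely performing well"    # high coverage, few issues
    if completed_activities < 5 and issue_count > 0:
        return "needs more attention"      # issues found despite little activity
    return "inconclusive"

print(read_feature_signal(30, 1))   # high activity, almost no issues
print(read_feature_signal(3, 4))    # issues submitted with low coverage
```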
Supporting your support team
Analyzing the quantity and the severity of issues within a specific feature can tell you which areas of your product could benefit from workarounds, knowledge base articles, or training.
Overall stability
By comparing the overall impact of issues to the impact of ideas and praise, you get a telling overview of the entire product experience. For example, if testers are submitting more issues than praise and ideas, it's an indication that the product is unstable and needs improvement before it can wow your target market at release.
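One minimal way to sketch that comparison, assuming you simply weigh issue volume against the combined volume of ideas and praise (the counts and the function itself are hypothetical):

```python
def stability_read(issue_count: int, idea_count: int, praise_count: int) -> str:
    """Compare issue volume against ideas + praise as a rough stability signal."""
    positive = idea_count + praise_count
    if issue_count > positive:
        return "unstable: issues outweigh ideas and praise"
    return "stable: positive feedback outweighs issues"

print(stability_read(40, 15, 10))   # more issues than ideas + praise
print(stability_read(5, 15, 10))    # positive feedback dominates
```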