# "Was this article helpful?" Product Survey Question
Measure content effectiveness in real time and identify which articles resonate with readers to optimize your knowledge base and improve user self-service success.
**Question type:** Yes/No binary choice

**Primary metric:** CSAT (Customer Satisfaction Score)
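For a binary question, CSAT is simply the share of positive responses, expressed as a percentage. The helper below is a minimal sketch of that calculation; the `ArticleRating` shape and field names are illustrative assumptions, not part of any particular survey SDK.

```typescript
// Hypothetical response shape for a yes/no "Was this article helpful?" prompt.
interface ArticleRating {
  articleId: string;
  helpful: boolean; // true = "Yes", false = "No"
}

// CSAT for a binary question: positive responses / total responses, as a percentage.
function csat(ratings: ArticleRating[]): number {
  if (ratings.length === 0) return 0;
  const positive = ratings.filter((r) => r.helpful).length;
  return (positive / ratings.length) * 100;
}

// Example: 3 of 4 readers found the article helpful -> 75% CSAT.
const sample: ArticleRating[] = [
  { articleId: "kb-42", helpful: true },
  { articleId: "kb-42", helpful: true },
  { articleId: "kb-42", helpful: false },
  { articleId: "kb-42", helpful: true },
];
console.log(csat(sample)); // 75
```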
## Answer scale variations
| Style | Options |
|---|---|
| Typical choice | No / Yes |
| More emphatic | Not helpful / Very helpful |
| Direct feedback | Did not help / Helped me |
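If you render the widget yourself, the variants above are just label pairs. One way to model them, with purely illustrative names rather than a real API:

```typescript
// Illustrative label pairs for the answer-scale variants above.
type AnswerScale = { negative: string; positive: string };

const answerScales: Record<string, AnswerScale> = {
  typical: { negative: "No", positive: "Yes" },
  emphatic: { negative: "Not helpful", positive: "Very helpful" },
  direct: { negative: "Did not help", positive: "Helped me" },
};

// Pick whichever tone fits your help center's voice.
const { negative, positive } = answerScales.typical;
console.log(`${negative} / ${positive}`); // "No / Yes"
```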
## Follow-Up Questions
Following every help article is a simple question: did it actually help? The thumbs-up/down response tells you whether your content hit the mark, but the real insights come from understanding why readers found it helpful or unhelpful. These follow-ups reveal gaps in your content, validate what's working, and show you exactly how to improve your help documentation.
An open-ended follow-up, such as asking why the reader chose their rating, captures the specific reasons behind each response. Users might mention missing information, confusing explanations, outdated screenshots, or exactly what helped them solve their problem. These qualitative insights are gold for prioritizing content updates and understanding your readers' real needs.
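A common pattern is to branch on the rating and tailor the open-ended prompt to it. A minimal sketch, with hypothetical prompt wording:

```typescript
// Hypothetical branching: choose a follow-up prompt based on the binary rating.
function followUpPrompt(helpful: boolean): string {
  return helpful
    ? "Great! What helped you most?" // validate what's working
    : "Sorry about that. What were you hoping to find?"; // surface the gap
}

console.log(followUpPrompt(false));
// "Sorry about that. What were you hoping to find?"
```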
Sometimes an article is well-written but doesn't match what the reader actually needs. A follow-up that asks whether the article covered what they were looking for separates content quality issues from discoverability and coverage gaps. If people rate articles as unhelpful because they're looking for something different, you might need better search, clearer titles, or net-new content on missing topics.
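To keep quality problems separate from discoverability and coverage problems, you can tag each negative response with a reason and route it to the action it implies. A sketch with made-up categories:

```typescript
// Illustrative reason categories for a negative rating.
type UnhelpfulReason =
  | "confusing-content"   // quality: the article exists but is unclear
  | "outdated-content"    // quality: screenshots or steps are stale
  | "wrong-article"       // discoverability: reader landed in the wrong place
  | "topic-not-covered";  // coverage: the content doesn't exist yet

// Route each reason to the team action it implies.
function actionFor(reason: UnhelpfulReason): string {
  switch (reason) {
    case "confusing-content":
    case "outdated-content":
      return "Update the article";
    case "wrong-article":
      return "Improve search and titles";
    case "topic-not-covered":
      return "Write net-new content";
  }
}

console.log(actionFor("wrong-article")); // "Improve search and titles"
```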
A structured follow-up, such as a multiple-choice list of common improvements, helps you spot patterns in what's missing across your documentation. If multiple articles get requests for more screenshots or troubleshooting sections, that's a clear content-strategy signal. You can prioritize the improvements that will have the biggest impact across your entire help center.
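Spotting those patterns comes down to tallying the structured answers across all articles and ranking them. A hedged sketch, assuming responses are already tagged with improvement labels:

```typescript
// Count how often each requested improvement appears across the help center.
function tallyRequests(requests: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of requests) {
    counts.set(r, (counts.get(r) ?? 0) + 1);
  }
  return counts;
}

// Example: screenshots are the most-requested improvement, so fix those first.
const requests = [
  "more-screenshots",
  "troubleshooting-section",
  "more-screenshots",
  "video-walkthrough",
  "more-screenshots",
];
const ranked = [...tallyRequests(requests)].sort((a, b) => b[1] - a[1]);
console.log(ranked[0]); // ["more-screenshots", 3]
```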