Product Survey Question: How would you rate your overall satisfaction with our product?

Measure how users truly feel about your product experience and catch satisfaction drops before they become churn signals.

How would you rate your overall satisfaction with our product?
1 = Very dissatisfied
5 = Very satisfied

Question type

Rating scale 1-5

Primary metric

CSAT (Customer Satisfaction Score)
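CSAT is conventionally reported as the percentage of respondents who choose the top two boxes (4 or 5) on the 1-5 scale. A minimal sketch of that calculation (the function name is illustrative, not part of any particular tool):

```python
def csat_score(ratings):
    """CSAT: percentage of respondents who rated 4 or 5 on a 1-5 scale."""
    if not ratings:
        raise ValueError("cannot compute CSAT with no ratings")
    satisfied = sum(1 for r in ratings if r >= 4)  # top-two-box responses
    return round(100 * satisfied / len(ratings), 1)

print(csat_score([5, 4, 3, 5, 2, 4, 1, 5]))  # 5 of 8 rated 4+ -> 62.5
```

The same ratings also support a mean score, but the top-two-box percentage is the standard CSAT definition because it separates clearly satisfied customers from everyone else.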

Answer scale variations

Comparison table

Style: Options (1 to 5)
Typical choice: Very dissatisfied / Dissatisfied / Neutral / Satisfied / Very satisfied
More emphatic: Not satisfied at all / Not satisfied / Neutral / Satisfied / Completely satisfied
Quality-focused: Poor / Below average / Average / Good / Excellent
Experience-based: Terrible experience / Disappointing / Acceptable / Positive / Outstanding experience
Value-focused: Falls short / Below expectations / Meets expectations / Exceeds expectations / Far exceeds expectations

Follow-Up Questions

A CSAT score alone leaves you wondering why customers feel the way they do. These follow-up questions help you understand the specific drivers behind satisfaction ratings and identify concrete improvement opportunities.

This open-ended follow-up captures the context behind each rating. You'll learn what delights your promoters and what frustrates your detractors, giving you specific direction for product improvements rather than just a number.

Understanding what customers prioritize helps you interpret their satisfaction scores more accurately. A low rating from someone who values support means something different than a low rating from someone focused on features.

This question surfaces your biggest improvement opportunity directly from users. Even satisfied customers often have that one thing they wish worked differently, and these insights typically align closely with what would move the satisfaction needle.

When to Use This Question

SaaS Products: Ask immediately after a user completes their first significant workflow (e.g., sending their first campaign or generating their first report), via an in-app modal that doesn't interrupt their success moment. You're capturing satisfaction at the peak of initial value realization, when users are most likely to give authentic feedback about whether your product delivers on its promise.

E-commerce: Deploy 24-48 hours after delivery confirmation through an email with a single-click rating interface that links directly to the product page. This timing catches customers after they've actually used the product but before the purchase experience fades from memory, and the friction-free response mechanism dramatically increases completion rates while the product impression is still fresh.

Mobile Apps: Trigger after 7 days of active use or after completing 3 core actions, whichever comes first, using a native in-app prompt during a natural pause in activity. This ensures users have enough experience to form meaningful opinions while avoiding survey fatigue, and placing the prompt during downtime (such as while content loads) feels less intrusive than interrupting active engagement.

Web Apps: Present within 30 days of account creation, but only to users who have logged in at least 5 times, using a dismissible, non-blocking slide-in panel from the bottom right. Frequent return visits indicate genuine engagement worth measuring, and the unobtrusive placement respects power users while still capturing feedback from those building regular usage habits.

Digital Products: For courses or content products, send 14 days after purchase via email with the rating embedded directly in the message, so no click-through is required. This window allows time for meaningful consumption while maintaining recency, and eliminating the extra click to a landing page can double response rates by removing the primary friction point when customers are evaluating whether their investment delivered the expected value.
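Timing rules like the ones above are straightforward to encode as eligibility checks. A minimal sketch of the mobile-app rule ("7 days of active use or 3 core actions, whichever comes first"), with hypothetical field names:

```python
from datetime import datetime, timedelta

def should_prompt_mobile(first_active: datetime, core_actions: int,
                         now: datetime, already_surveyed: bool) -> bool:
    """Mobile-app rule: prompt after 7 days of active use or after
    3 core actions, whichever comes first. Field names are illustrative."""
    if already_surveyed:
        return False  # never re-prompt in the same survey cycle
    return (now - first_active >= timedelta(days=7)) or core_actions >= 3
```

For example, a user who started 9 days ago qualifies on tenure alone, while a 2-day-old user qualifies once their third core action completes. The other channels (e-commerce, web apps, digital products) follow the same pattern with different thresholds.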
