# HumanSurvey use cases

Concrete workflows where an AI agent designs a survey, a group of humans responds, and the agent reads structured results. All three long-form walkthroughs are linked below.

Canonical: https://www.humansurvey.co/use-cases

---

## Community / brand manager — post-AMA, drop, and campaign feedback

Run a Friday AMA or a campaign drop. Tell Claude "send attendees a feedback form — rating, topics, what to change." On Tuesday, ask "how did it land?" and get a synthesis grounded in the actual response JSON.

Full walkthrough: https://www.humansurvey.co/use-cases/community-feedback

Markdown: https://www.humansurvey.co/use-cases/community-feedback.md
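The "synthesis grounded in actual response JSON" step boils down to aggregating structured answers. A minimal sketch of that kind of aggregation, using placeholder data and an assumed response shape (the real HumanSurvey export schema may differ):

```python
import json
from collections import Counter
from statistics import mean

# Hypothetical response export -- placeholder data, assumed schema.
raw = """[
  {"rating": 5, "topics": ["roadmap", "tokenomics"]},
  {"rating": 4, "topics": ["roadmap"]},
  {"rating": 3, "topics": ["support"]}
]"""

responses = json.loads(raw)
avg = mean(r["rating"] for r in responses)                          # average AMA rating
topics = Counter(t for r in responses for t in r["topics"])         # topic demand counts
print(f"avg rating {avg:.1f}; top topics {topics.most_common(2)}")
```

The agent's job is the same shape: parse the responses, compute the aggregates, then write the "how did it land?" narrative on top of them.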

## Indie maker / PM — post-launch feedback from early users and waitlist

A week after shipping, have your agent collect structured feedback from your first 200 sign-ups: NPS, positioning validation, biggest paper cut, pricing willingness. The synthesis feeds straight into roadmap decisions.

Full walkthrough: https://www.humansurvey.co/use-cases/product-launch

Markdown: https://www.humansurvey.co/use-cases/product-launch.md
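NPS itself is a fixed formula: the percentage of promoters (scores 9–10) minus the percentage of detractors (0–6) on a 0–10 scale. A minimal sketch with placeholder scores standing in for the 200 sign-ups:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Placeholder data: 80 promoters, 70 passives, 50 detractors out of 200.
scores = [10] * 80 + [8] * 70 + [4] * 50
print(nps(scores))  # (80 - 50) / 200 -> 15
```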

## Event organizer — post-event feedback for conferences, meetups, webinars

Rate sessions in a matrix question, capture open-text what-to-change answers, and run per-track demand analysis. Your agent writes the public retro and drafts per-speaker feedback emails grounded in real numbers.

Full walkthrough: https://www.humansurvey.co/use-cases/events

Markdown: https://www.humansurvey.co/use-cases/events.md
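The per-track analysis is a group-and-aggregate over the matrix ratings. A minimal sketch with placeholder data and an assumed response shape (one rating per track/session pair; the real export format may differ):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical flattened matrix-question responses -- placeholder data.
responses = [
    {"track": "AI", "session": "Agents 101", "rating": 5},
    {"track": "AI", "session": "Eval Deep Dive", "rating": 4},
    {"track": "Infra", "session": "Scaling Postgres", "rating": 3},
]

# Group ratings by track, then average per track.
by_track = defaultdict(list)
for r in responses:
    by_track[r["track"]].append(r["rating"])

per_track = {track: mean(ratings) for track, ratings in by_track.items()}
print(per_track)  # {'AI': 4.5, 'Infra': 3}
```

The same grouping keyed on `session` instead of `track` yields the per-speaker numbers for the feedback emails.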
