Good UX research starts with a clear plan. Without one, you risk collecting data that does not answer your questions, wasting participants' time, and delivering findings that no one acts on.
## Defining Research Objectives
Every research project begins with a question. But vague questions produce vague answers. Transform broad curiosity into specific, answerable objectives:
| Vague | Specific |
|---|---|
| "Is our app good?" | "Can new users complete the checkout flow without assistance?" |
| "What do users want?" | "What barriers prevent free users from upgrading to a paid plan?" |
| "Is the redesign better?" | "Does the new navigation reduce time-to-task for the top 5 user actions?" |
Write your objectives using this template: "We want to learn [what] about [who] in order to [decision we will make]."
Example: "We want to learn what causes cart abandonment among first-time mobile shoppers in order to decide which checkout improvements to prioritize in Q2."
The "in order to" clause is critical. If there is no decision that depends on the research, the research may not be worth doing.
## Choosing Research Methods
Research methods fall into two major categories:
### Qualitative Methods (Why)
These explore motivations, mental models, and pain points. Small sample sizes (5-12 participants) are typically sufficient:
- User interviews — one-on-one conversations to understand behavior and context
- Usability testing — watching users complete tasks to find friction points
- Diary studies — participants log experiences over days or weeks
- Contextual inquiry — observing users in their natural environment
- Card sorting — understanding how users organize and categorize information
### Quantitative Methods (What and How Much)
These measure behavior at scale. Larger sample sizes (50-1000+) are needed for statistical significance:
- Surveys — structured questionnaires for large audiences
- A/B testing — comparing two versions to measure which performs better
- Analytics review — examining usage data, funnels, and drop-off rates
- Tree testing — measuring whether users can find items in your information architecture
- First-click testing — measuring where users click first on a page
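An analytics review of a funnel usually comes down to computing step-to-step conversion and drop-off rates. A minimal sketch, with made-up step names and counts:

```python
# Hypothetical funnel counts exported from an analytics tool:
# the number of users who reached each step.
funnel = [
    ("viewed_product", 10_000),
    ("added_to_cart", 3_200),
    ("started_checkout", 1_800),
    ("completed_purchase", 900),
]

# Conversion from each step to the next; drop-off is the remainder.
for (step, count), (_, prev_count) in zip(funnel[1:], funnel):
    conversion = count / prev_count
    print(f"{step}: {conversion:.0%} converted, {1 - conversion:.0%} dropped off")
```

Large drop-offs between adjacent steps point to where qualitative follow-up, such as usability testing, is most likely to pay off.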
### When to Use What
| Research Goal | Method |
|---|---|
| Understand user needs early in a project | Interviews, contextual inquiry |
| Validate a design concept | Usability testing, first-click testing |
| Measure a live feature's performance | Analytics, A/B testing, surveys |
| Improve information architecture | Card sorting, tree testing |
| Track experience over time | Diary studies, NPS surveys |
Most projects benefit from combining methods. Start qualitative to discover problems, then use quantitative to measure their scope.
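The NPS surveys mentioned in the table reduce to a simple calculation: the percentage of promoters minus the percentage of detractors. A small sketch (the batch of scores is made up):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    computed over 0-10 "how likely are you to recommend us?" ratings."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Made-up batch of responses: 4 promoters, 3 passives, 3 detractors.
print(nps([10, 9, 9, 10, 8, 8, 7, 6, 5, 3]))  # -> 10
```

Tracking the same metric over repeated survey waves is what makes it useful for spotting trends rather than as a one-off number.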
## Recruiting Participants
Your research is only as good as your participants. Recruit people who match your actual users, not just whoever is convenient:
### Define Your Criteria
Be specific about who you need:
- Demographics — age, location, language
- Behavior — frequency of use, features used, platform (mobile vs desktop)
- Experience level — new users, power users, churned users
- Segment — free vs paid, industry, company size
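Once criteria like these are written down, screening a candidate pool is just filtering. A sketch with hypothetical field names and thresholds:

```python
# Hypothetical candidate records from a sign-up form or user database.
candidates = [
    {"name": "A", "plan": "free", "sessions_last_30d": 12, "platform": "mobile"},
    {"name": "B", "plan": "paid", "sessions_last_30d": 2,  "platform": "desktop"},
    {"name": "C", "plan": "free", "sessions_last_30d": 25, "platform": "mobile"},
]

# Example criteria: active free mobile users (the segment we want to interview).
matches = [
    c for c in candidates
    if c["plan"] == "free"
    and c["sessions_last_30d"] >= 8
    and c["platform"] == "mobile"
]
print([c["name"] for c in matches])  # -> ['A', 'C']
```

In practice this logic lives in a screener survey or a database query, but the point is the same: make each criterion explicit and checkable before you invite anyone.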
### Recruitment Channels
- Your own user base — email existing users who match criteria. Offer incentives.
- Recruitment agencies — UserTesting, Respondent, or local agencies handle screening
- Social media — post in relevant communities (be transparent about the purpose)
- In-product prompts — intercept users at the right moment with a participation invite
### Sample Size Guidelines
| Method | Recommended Participants |
|---|---|
| Usability testing | 5-8 per user segment |
| Interviews | 8-12 total |
| Card sorting | 15-30 |
| Surveys | 100+ for meaningful quantitative data |
| A/B tests | Depends on effect size (use a sample size calculator) |
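For the A/B-test row, the required sample size per variant follows from the baseline conversion rate, the minimum detectable effect (MDE), the significance level, and the desired power. A rough normal-approximation sketch (the 5% baseline and 1-point MDE are illustrative; dedicated calculators may differ slightly):

```python
import math
from statistics import NormalDist

def samples_per_variant(baseline, mde, alpha=0.05, power=0.80):
    """Approximate n per variant for a two-sided, two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Detecting a lift from 5% to 6% conversion at 95% confidence, 80% power:
print(samples_per_variant(0.05, 0.01))  # roughly 8,000+ users per variant
```

Note how quickly the number grows as the MDE shrinks: halving the detectable effect roughly quadruples the sample size, which is why small expected lifts need long-running tests.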
### Incentives
Always compensate participants for their time. Common incentives:
- Gift cards ($50-100 for a 60-minute session)
- Product credits or extended trial periods
- Charitable donations in their name
- Cash via payment services
Under-compensating leads to no-shows and low-effort participation. Budget for incentives from the start.
## Writing a Research Plan
Document your plan before you start. A research plan keeps everyone aligned and serves as a reference throughout the project:
- Background — what prompted this research? What do we already know?
- Objectives — what specific questions will this research answer?
- Method — which research method(s) will you use and why?
- Participants — who are they, how many, and how will you recruit them?
- Timeline — when will recruiting, sessions, analysis, and reporting happen?
- Logistics — remote or in-person? What tools? Who observes?
- Discussion guide or task list — the specific questions or tasks for sessions
- Deliverables — what will the output look like? Report, presentation, video clips?
Share the plan with stakeholders before starting. Their input helps ensure the research answers the right questions and that they trust the results.
## Common Planning Mistakes
- Asking leading questions — "Don't you think this design is confusing?" biases the answer. Ask neutral questions instead.
- Testing with the wrong people — internal team members are not representative users.
- Scope creep — trying to answer too many questions dilutes everything. Limit objectives to 3-5 per study.
- Skipping the pilot — always run one pilot session to test your script, tasks, and technology before the real sessions.