Does the AI interview automatically diverge from my pre-set questions, or does it just ask them as written? How is that different from sending respondents a survey?
Question Type
Product Q&A
User's Real Concern
"I've prepared 10 questions. Will AI interview just go through my questions once and that's it? What's the difference from a survey?"
Core Anxiety: Worried that AI interviews are just "advanced surveys" without real follow-up and deep exploration.
Core Answer
AI Interview ≠ Automated Survey. AI interviews involve "intelligent divergence" and "deep follow-up" based on your questions.
Core Differences Among the Three Methods
| Dimension | Survey | AI Interview | Human Interview |
|---|---|---|---|
| Follow-up Depth | ❌ No follow-up | ✅ Multi-round follow-ups (configurable depth) | ✅ Deep follow-ups (time constrained) |
| Question Divergence | ❌ Fixed questions | ✅ Intelligent divergence based on responses | ✅ Impromptu divergence |
| Response Depth | Multiple choice or short answer | 3000+ word deep insights | Depends on interviewer's skill |
| Execution Efficiency | 1 hour collection | 3-4 hours (5-10 parallel) | 2-3 weeks (sequential) |
| Data Quality | ★★☆☆☆ (Shallow) | ★★★★☆ (Deep) | ★★★★★ (Deepest) |
3 Core Capabilities of AI Interviews
Capability 1: Intelligent "Why" Follow-ups (5 Whys Technique)
Your Initial Question:
"Would you buy this ¥18 sparkling coffee?"
Traditional Survey:
- Would buy
- Wouldn't buy
- Maybe
- End
AI Interview Process:
Q1: "Would you buy this ¥18 sparkling coffee?"
- A1: "Probably not."
Q2 (Follow-up): "Why not?"
- A2: "Seems a bit expensive."
Q3 (Deep dive): "Is ¥18 expensive for you? What's your usual coffee budget?"
- A3: "I usually buy ¥12 Americanos."
Q4 (Comparison): "Would you buy it at ¥12?"
- A4: "Still no, mainly because sparkling + coffee sounds weird."
Q5 (True motivation): "Weird? Are you worried about the taste?"
- A5: "Yeah, and during the afternoon, I'm not in the mood to take risks. Coffee is for survival, not experimentation."
Q6 (Scenario exploration): "When would you be willing to try new flavors?"
- A6: "Morning or weekends, when I can afford the risk of disappointment."
Key Finding:
- ❌ Surface understanding: Price is too high (¥18 → ¥12)
- ✅ True motivation: In afternoon survival scenarios, users are unwilling to take the risk of disappointment
With a survey, you would only get "probably not" and never uncover the true motivation.
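For a concrete picture of how such a follow-up loop could work, here is a minimal sketch in Python. It is a hypothetical illustration only: ask_llm() stands in for whatever language model drives the interview, and the prompts and function names are assumptions, not the product's actual implementation.

```python
# Minimal sketch of a "5 Whys"-style follow-up loop (hypothetical illustration).
# ask_llm() stands in for any LLM call; the prompts are assumptions, not product code.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your preferred LLM client here")

def five_whys(initial_question: str, get_user_answer, max_rounds: int = 6) -> list:
    """Keep probing for the underlying motivation until it is clear or max_rounds is hit."""
    transcript = []
    question = initial_question
    for _ in range(max_rounds):
        answer = get_user_answer(question)  # show the question to the respondent, collect the reply
        transcript.append({"q": question, "a": answer})

        # Ask the model whether the underlying motivation is already clear.
        verdict = ask_llm(
            "Interview so far:\n"
            + "\n".join(f"Q: {t['q']}\nA: {t['a']}" for t in transcript)
            + "\nIs the respondent's underlying motivation clear yet? Answer YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            break

        # Otherwise generate the next "why"-style probe, grounded in the last answer.
        question = ask_llm(
            "Write one short follow-up question that digs into WHY the respondent "
            f'answered "{answer}". Stay on the original topic: {initial_question}'
        )
    return transcript
```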
Capability 2: Intelligent Divergence Based on Responses
Your Initial Question:
"What do you think about this fitness app's AI personal trainer feature?"
Traditional Survey:
- Very interested
- Interested
- Neutral
- Not interested
- End
AI Interview Process:
Q1: "What do you think about the AI personal trainer feature?"
- A1: "Sounds good, but I'm worried the AI won't understand my physical condition."
Q2 (Diverge: Concerns): "What aspects worry you?"
- A2: "I have knee problems, worried the AI's recommended movements will worsen my injury."
Q3 (Diverge: Human experience): "Have you worked with a human trainer before? How did they handle your knee issue?"
- A3: "Human trainers adjust movements based on my condition, avoiding high knee-pressure exercises."
Q4 (Diverge: Expectations): "If AI could do this, would you trust it?"
- A4: "If the AI could let me input injury information and automatically adjust the training plan, I'd trust it."
Q5 (Diverge: Use scenarios): "When would you use an AI personal trainer?"
- A5: "When training at home, human trainers are too expensive, but I need supervision and correction."
Key Findings:
- Users don't reject AI; they worry it "won't understand my physical condition"
- Solution: Let users enter injury information so the AI adjusts the plan automatically
- Use scenario: Home training, as a complement to human trainers
With a survey, you would only get "neutral" and never discover the real needs or solutions.
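One way to picture this kind of divergence: draft candidate follow-up questions along several angles (concerns, past experience, expectations, scenarios) and keep the one most relevant to the research goal. The sketch below illustrates that idea under the same assumptions as before: a generic ask_llm helper and illustrative prompts, not the product's real logic.

```python
# Minimal sketch of response-driven divergence (hypothetical illustration).
# ANGLES, the prompts, and ask_llm are assumptions, not the product's real logic.

ANGLES = ["concerns", "past experience with alternatives", "expectations", "usage scenarios"]

def diverge(research_goal: str, last_answer: str, ask_llm) -> str:
    """Draft one follow-up question per angle, then keep the one most relevant to the goal."""
    candidates = [
        ask_llm(
            f"Research goal: {research_goal}\n"
            f'The user just said: "{last_answer}"\n'
            f"Write one follow-up question exploring the user's {angle}."
        )
        for angle in ANGLES
    ]
    return ask_llm(
        "Which of these follow-up questions best serves the research goal "
        f'"{research_goal}"? Reply with the chosen question text only:\n'
        + "\n".join(f"- {c}" for c in candidates)
    )
```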
Capability 3: Identify Vague Responses, Proactively Clarify
Your Initial Question:
"What areas of our product need improvement?"
Traditional Survey:
- Open-ended: User fills in "It's okay, nothing special"
- End
AI Interview Process:
Q1: "What areas of our product need improvement?"
- A1: "It's okay, nothing special."
Q2 (Identify vagueness): "By 'okay,' do you mean you're basically satisfied, or are there things you're unhappy with that just don't stop you from using it?"
- A2: "Well... some features aren't very smooth."
Q3 (Ask for specifics): "Which features aren't smooth?"
- A3: "The note feature is too complex, took me forever to find the save button."
Q4 (Deep dive pain points): "Was it complex the first time, or still complex after several uses?"
- A4: "Used it several times, every time I have to think about how to operate."
Q5 (Comparison): "What note tools do you usually use? What do they do well?"
- A5: "I use the system's native memo app, open and write, saves automatically."
Key Findings:
- The note feature's real pain point: complex operations that don't match user habits
- User expectation: Simple and direct, auto-save like memo apps
- Optimization direction: Simplify process, auto-save
With a survey, you would only get "it's okay" and never discover the real pain points.
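The same pattern can be sketched for vague answers: classify the reply first, and only move on once it contains something concrete. Again, this is a hypothetical illustration; the labels, prompts, and ask_llm helper are assumptions, not the product's actual code.

```python
# Minimal sketch of vague-answer handling (hypothetical illustration).
from typing import Optional

def clarify_if_vague(question: str, answer: str, ask_llm) -> Optional[str]:
    """Return a clarifying question if the answer is vague, or None if it is specific enough."""
    label = ask_llm(
        f"Question: {question}\nAnswer: {answer}\n"
        "Is this answer SPECIFIC (names a feature, number, or concrete situation) "
        "or VAGUE (e.g. 'it's okay', 'nothing special')? Reply with one word."
    )
    if label.strip().upper().startswith("VAGUE"):
        return ask_llm(
            f'The respondent answered "{answer}" to "{question}". '
            "Write one gentle clarifying question that asks for a concrete example."
        )
    return None  # specific enough; move on to the next planned question
```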
AI Interview vs Survey: Complete Comparison
Scenario Comparison: New Product Pricing Test
Research Goal: Test pricing acceptance for ¥18 sparkling coffee
Survey Approach
Question Design:
1. Would you buy ¥18 sparkling coffee?
   - Yes
   - No
   - Maybe
2. What do you think is a reasonable price?
   - ¥10-15
   - ¥15-20
   - ¥20+
Results:
- 30% would buy, 50% maybe, 20% wouldn't
- 60% think reasonable price is ¥15-20
Problems:
- ❌ You don't know why the 30% would buy
- ❌ You don't know why the 20% wouldn't
- ❌ You don't know how to optimize the product or its pricing
AI Interview Approach
Initial Question:
"Would you buy ¥18 sparkling coffee?"
Interview 5 users with deep follow-up:
Linda (Price-sensitive):
- Won't buy → because expensive → usually buys ¥12 Americano → would buy at ¥12? → still no → why? → afternoon survival doesn't allow risk
- Key insight: Adjust to morning experimentation scenario
Emma (Health-anxious):
- Maybe → 0 sugar is attractive → but worried about sweeteners → need to check ingredients → sparkling irritates stomach → health anxiety needs safety
- Key insight: Label "gentle formula, non-irritating"
Chloe (Social experimenter):
- Would buy → visually appealing, great for photos → would share on social media → social currency value
- Key insight: Design attractive packaging
Results:
- ✅ Discovered 3 user types with different motivations
- ✅ Found optimization directions: usage scenario, safety reassurance, visual appeal
- ✅ Actionable improvement plans
AI Interview Execution Process
Step 1: You Prepare Initial Questions (3-5)
Example:
- What's your first impression of this product?
- In what scenarios would you use it?
- Can you accept the price of ¥X?
- Compared to competitors, what are its advantages?
- Any improvement suggestions?
Step 2: AI Intelligently Follows Up Based on Responses
You don't need to:
- ❌ Pre-design all follow-up questions
- ❌ Predict what users will say
- ❌ Manually control the interview process
AI automatically:
- ✅ Identifies vague responses, follows up for clarification
- ✅ Diverges new questions based on responses
- ✅ Uses "5 Whys" to dig into deep motivations
- ✅ Employs techniques like comparison, hypotheticals, scenario exploration
Step 3: AI Generates Deep Insight Report
Output Content:
- Each user's interview summary (3000+ words)
- Notable dialogue excerpts as supporting evidence
- Key findings and user personas
- Actionable improvement suggestions
Common Questions
Q1: Can I control the depth of AI interviews?
Yes. You can set:
- Interview rounds: 5 rounds (quick) vs 10 rounds (deep)
- Follow-up strategy: Focus on core questions vs comprehensive exploration
- End conditions: Stop when goals are met
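If it helps to picture these settings as a configuration object, a hypothetical example might look like the sketch below; the field names are illustrative only, not the product's real settings schema.

```python
# Hypothetical interview-depth settings (field names are illustrative, not the real schema).
interview_settings = {
    "max_rounds": 10,                # 5 for a quick pass, 10 for a deep dive
    "followup_strategy": "focused",  # "focused" = stick to core questions; "exploratory" = broad divergence
    "end_condition": "goals_met",    # stop early once the research goals have been answered
}
```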
Q2: Will AI completely deviate from my questions?
No. AI interview divergence is "bounded divergence":
- ✅ Always centered around your core questions
- ✅ Reasonably extends based on user responses
- ✅ Won't wander into irrelevant topics
Example:
- If you ask about pricing, the AI won't ask "what music do you like?"
- It will ask something relevant instead, such as "What's your usual coffee budget?"
Q3: Is there a big gap between AI interview and human interview follow-up capabilities?
Not much; in some respects AI is actually stronger:
| Dimension | AI Interview | Human Interview |
|---|---|---|
| Follow-up Consistency | ★★★★★ (Stable each time) | ★★★☆☆ (Affected by state) |
| Follow-up Depth | ★★★★☆ (5-10 rounds) | ★★★★★ (Unlimited follow-ups) |
| Divergence Ability | ★★★★☆ (Data-based) | ★★★★★ (Impromptu inspiration) |
| Emotional Resonance | ★★★☆☆ | ★★★★★ |
| Efficiency | ★★★★★ (5-10 parallel) | ★★☆☆☆ (Sequential) |
Conclusion:
- Exploratory research, quick validation → AI follow-up is sufficient
- Deep emotional insights, high-stakes decisions → Human follow-up is better
Q4: When are surveys more suitable than AI interviews?
Surveys are suitable for:
- Large-scale data collection (100+ people)
- Simple multiple choice (no need to dig deep)
- Statistical analysis focus
AI interviews are suitable for:
- Exploring "why"
- Deep insights from 5-10 people
- Discovering new problems and directions
Final Takeaway
"AI interviews aren't automated surveys, they're intelligent follow-ups and deep divergence. You provide direction, AI digs for depth."
Remember:
- ✅ AI Interview = Your questions + AI follow-ups + Intelligent divergence
- ✅ Survey = Fixed questions, shallow answers
- ✅ Human Interview = Deepest follow-ups, but low efficiency
Related Feature: Interview vs Discussion
Document Version: v2.1