The architectural evolution of a consumer research platform
We're building atypica, an AI-driven consumer research platform.
The goal is simple: let AI independently conduct user research, from observing social media, to simulating interviews, to generating insight reports.
Along the way we encountered specific problems and tried various approaches. This article documents that journey and the architectural framework we distilled from it: GEA (Generative Enterprise Architecture).
Users often say: "I want to understand young people's coffee preferences."
But that's not specific enough:
Traditional approach: Multi-turn dialogue to clarify requirements → Problem: You have to re-ask every time; nothing is reusable.
Our approach:
Instead of treating it as "requirement clarification," we treat it as "intent construction": assembling an executable research intent directly from user input, team history, and existing Personas.
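As a rough illustration of intent construction, the idea is to merge the user's raw question with team memory and the Persona library into one structured, executable object. The schema and function names below are hypothetical, not atypica's actual API:

```python
from dataclasses import dataclass

@dataclass
class ResearchIntent:
    # Hypothetical schema for an executable research intent
    target: str
    scenario: str
    dimensions: list[str]
    methods: list[str]
    deliverables: list[str]

def construct_intent(user_input: str, team_memory: dict,
                     personas: list[str]) -> ResearchIntent:
    """Assemble an executable intent instead of asking clarifying questions.

    Gaps in the user's request are filled from team history and the
    existing Persona library, not from a multi-turn dialogue.
    """
    target = team_memory.get("default_audience") or (
        personas[0] if personas else "unspecified")
    return ResearchIntent(
        target=target,
        scenario=user_input,
        dimensions=team_memory.get("focus_dimensions", []),
        methods=["social media observation", "simulated interviews"],
        deliverables=["user segmentation", "preference map"],
    )

intent = construct_intent(
    "I want to understand young people's coffee preferences",
    team_memory={"default_audience": "18-28 year olds in tier-1 cities",
                 "focus_dimensions": ["brand preference", "price sensitivity"]},
    personas=["Gen Z urban consumer"],
)
```

The point of the sketch is reuse: the same team memory and Personas fill in the same gaps next time, so nothing has to be re-asked.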
A single research session produces massive amounts of context:
This context has various characteristics:
Traditional approach: RAG retrieval → Problem: Retrieval is just the first step; continuous curation is still needed.
Our approach:
Treat context as a system to manage, similar to the mindset of DAM (Digital Asset Management): make the right assets available at the right time.
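A minimal sketch of that DAM mindset, assuming a simple in-memory store (class and field names are illustrative): assets are tagged when added, periodically curated so stale material is retired, and selected by relevance rather than blindly retrieved.

```python
import time

class ContextStore:
    """Manage context like digital assets: add, tag, curate, select."""

    def __init__(self):
        self.assets = []  # each: {"content", "tags", "added_at", "stale"}

    def add(self, content, tags):
        self.assets.append({"content": content, "tags": set(tags),
                            "added_at": time.time(), "stale": False})

    def curate(self, max_age_seconds):
        # Continuous curation: retire aging assets instead of letting
        # retrieval surface outdated material.
        now = time.time()
        for asset in self.assets:
            if now - asset["added_at"] > max_age_seconds:
                asset["stale"] = True

    def select(self, tags):
        # "Right assets at the right time": fresh assets matching the task.
        wanted = set(tags)
        return [a["content"] for a in self.assets
                if not a["stale"] and a["tags"] & wanted]

store = ContextStore()
store.add("tea beverage research report", ["beverage", "report"])
store.add("outdated campaign notes", ["campaign"])
store.assets[1]["added_at"] -= 7200   # simulate a two-hour-old asset
store.curate(max_age_seconds=3600)    # retire anything older than an hour
```

Retrieval alone would have returned the stale notes too; the curation pass is what keeps the working context clean.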
Research is not a linear process:
This requires an agent that continuously makes judgments, rather than following a preset workflow.
Traditional approach: Multi-Agent with each agent having a fixed role β Problem: Who plays the role of "continuous judgment"?
Our approach:
Split the work between two agents: a Reasoning Agent and an Execute Agent.
The Reasoning Agent is responsible for preparing context, deciding next steps, and adjusting direction.
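The division of labor can be sketched as a loop: the Reasoning Agent inspects accumulated findings and issues one instruction at a time, and the Execute Agent only carries instructions out. This is a toy illustration of the control flow, not atypica's implementation:

```python
def reasoning_agent(state):
    """Decide the next step from accumulated findings; never executes."""
    if not state["observations"]:
        return {"action": "observe", "skill": "scoutTask"}
    if state.get("contradiction") and not state["interviews"]:
        # A contradiction in the data redirects the plan toward interviews.
        return {"action": "interview", "skill": "interview"}
    return {"action": "report", "skill": "reportGen"}

def execute_agent(instruction, state):
    """Carry out one prepared instruction; results flow back to reasoning."""
    if instruction["action"] == "observe":
        state["observations"].append("posts collected")
        state["contradiction"] = True  # e.g. stated vs. actual behavior
    elif instruction["action"] == "interview":
        state["interviews"].append("in-depth interview done")
    return state

state = {"observations": [], "interviews": []}
for _ in range(5):
    step = reasoning_agent(state)
    if step["action"] == "report":
        break
    state = execute_agent(step, state)
```

Because judgment lives only in `reasoning_agent`, the execution side stays universal: swapping in a different Skill changes what gets done, not who decides.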
Every research session accumulates experience:
If this experience isn't captured, you start from scratch next time.
Traditional approach: Documentation or tool calls → Problem: Documentation isn't structured enough; tool calls aren't flexible enough.
Our approach:
Codify experience as Skills: capability modules that can be dynamically loaded.
Let's walk through a typical research task: "I want to understand young people's coffee preferences."
Step 1: Intent Construction
Assemble intent from Memory (team's visual focus), Assets (tea beverage research).
Step 2: Reasoning Planning
Path: Observe → Interview → Report. Preparation: load the scoutTask Skill, prepare the social media MCP.
Step 3: Execute Observation
Observe Xiaohongshu/Douyin using the scoutTask methodology; collect 120+ posts. Discovery: "They say they value cost-effectiveness, but pay a premium for aesthetics."
Step 4: Reasoning Adjustment
Contradiction detected → load the interview Skill → verify through in-depth interviews.
Step 5: Execute Interviews
Insight: Gen Z "cost-effectiveness" = function + aesthetics + social value
Step 6: Generate Report
Load reportGen Skill, output segmentation, insights, and recommendations.
Step 7: Knowledge Capture
Memory learns, Assets are enriched, Skills are optimized. Next time is more efficient.
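Step 7 can be sketched as a simple write-back, where a finished session enriches memory, assets, and skill notes so the next run starts ahead. The knowledge-base structure here is hypothetical:

```python
def capture_knowledge(knowledge_base, session):
    """Write a finished session back into the shared knowledge base."""
    knowledge_base["memory"].append(session["insight"])         # Memory learns
    knowledge_base["assets"].extend(session["collected_data"])  # Assets enriched
    for skill, note in session["skill_notes"].items():          # Skills optimized
        knowledge_base["skills"].setdefault(skill, []).append(note)
    return knowledge_base

kb = {"memory": [], "assets": [], "skills": {}}
session = {
    "insight": "Gen Z cost-effectiveness = function + aesthetics + social value",
    "collected_data": ["coffee study posts"],
    "skill_notes": {"interview": "probe stated-vs-actual contradictions early"},
}
kb = capture_knowledge(kb, session)
```

Without this step the contradiction found in Step 4 would have to be rediscovered from scratch in the next study.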
The system doesn't make you fill out forms or go through multi-turn clarification dialogues. It directly constructs an executable research intent from your question, team history, and existing Persona library:
The Reasoning Agent plans the execution path and prepares context:
The Execute Agent works based on the prepared context:
The Reasoning Agent adjusts strategy based on execution results:
Throughout the process, context is continuously filtered, refined, and reorganized.
After research is complete, new assets enter the DAM system:
Next time a similar study is needed, the system is smarter.
To address these problems, our architecture gradually formed around four core components:
Left: External Infrastructure
Center: Core Process
Right: Context System (DAM)
Reasoning and Execute continuously interact with the Context System: retrieving memories, accessing data, loading methods.
Purpose: Transform vague input into executable intent
How it works:
Output: A clear intent containing research target, methods, and deliverables
Purpose: Manage various context assets
Two dimensions:
Core capabilities:
Purpose: Continuous reasoning and decision-making
Specific responsibilities:
What it doesn't do: It never executes tasks directly (that's the Execute Agent's job).
Execute Agent:
Skills (atypica's specific Skills):
Full content is loaded only when needed, avoiding context bloat.
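That progressive-disclosure pattern can be sketched as a registry that keeps only one-line metadata resident, loads a skill's full body on demand, and unloads it afterwards. The skill names follow the document's own examples; the registry itself is a hypothetical sketch:

```python
class SkillRegistry:
    """Keep skill metadata resident; load full bodies only on demand."""

    def __init__(self, metadata, loader):
        self.metadata = metadata  # always in context: name -> one-line summary
        self.loader = loader      # callable: name -> full skill body
        self.loaded = {}          # full bodies currently in context

    def load(self, name):
        # Runtime: pull the full methodology into context when needed.
        self.loaded[name] = self.loader(name)
        return self.loaded[name]

    def unload(self, name):
        # After completion: reorganize context, drop the full body.
        self.loaded.pop(name, None)

registry = SkillRegistry(
    metadata={"scoutTask": "social media observation",
              "interview": "structured Q&A",
              "reportGen": "report builder"},
    loader=lambda name: f"## {name} Skill\n(full methodology text)",
)
body = registry.load("scoutTask")  # only now does the full text enter context
registry.unload("scoutTask")       # context shrinks again after the step
```

Only the metadata dict occupies tokens between steps; the full methodology text exists in context just for the step that uses it.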
The idea of "Universal Agent + Skills Library" comes from Anthropic's thinking in 2025: rather than building multiple specialized agents, use a single universal agent paired with composable Skills. We align with this direction.
In atypica's practice, we combine this with the dual-agent architecture and apply Skills specifically to consumer research scenarios.
GEA doesn't replace RAG or Multi-Agent; it's a practical approach for specific scenarios.
The Context System leverages RAG's retrieval capabilities.
But it adds continuous curation and asset management: not just retrieval, but also filtering noise, establishing associations, and timely updates.
There are also multiple capability units (Skills).
But the dual-agent approach separates reasoning from execution, with Skills dynamically loaded as context: not a fixed set of specialized agents, but composable capability modules.
There are still questions we're exploring:
Some human confirmation is still needed. Can it become fully automatic in the future?
When to keep? When to discard? How to balance quality and quantity?
Too fine-grained means high management cost; too coarse-grained means not flexible enough.
We've only validated it in consumer research. Other judgment-heavy work may require adjustments.
GEA is not a general-purpose architecture β it's a domain-native architecture for specific scenarios.
Where GEA fits: vague starting points, uncertain processes, judgment at the core.
Where it doesn't: fixed processes, deterministic requirements, execution-focused work.
GEA is an architecture designed for "work that can't be written as an SOP." If your work can be described as a clear process, a traditional workflow engine is probably a better fit.
Example interview exchange (Step 5):
Q: "What do you value most?" → "Cost-effectiveness"
Q: "But the 38 yuan coffee you posted..." → "The cup was just too good-looking"
Q: "So good-looking is also cost-effective?" → "I can post it on social media"

Example constructed intent:
Target: 18-28 year olds in tier-1 cities
Scenario: Daily coffee consumption decisions
Dimensions: Brand preference, price sensitivity, social factors
Methods: Social media observation + simulated interviews
Deliverables: User segmentation + preference map

GEA Architecture (overview):
External Infrastructure: LLM; MCP connectors (social media, reports, CRM, APIs)
Core Process: Intent Layer (needs + context) → Reasoning ⇄ Execute → Outcome
Context System (DAM): Memory (team memory); Assets (enterprise data: financials, product); Skills (methodologies: research, interview)

Multi-Agent Architecture (Traditional) vs. Dual-Agent Architecture (GEA / atypica):
Traditional: separate Scout, Interview, Report, and Strategy Agents. Issues: fragmented context, high coordination cost, hard to reuse.
GEA / atypica: a Reasoning Agent (inference/planning) prepares context and instructs a universal Execute Agent, which draws on a Skills library (scoutTask, interview, reportGen, strategy). Benefits: unified context management, a clear reasoning path, composable skills.

Skills Progressive Disclosure:
Load time (metadata only): the Skills Library (1000+ skills) exposes just summaries (scout.md: social observation; interview.md: structured Q&A; reportGen.md: report builder; ...) at minimal token usage.
Runtime (load on demand): when Reasoning decides "need social observation," scout.md is loaded in full: observe social media behavior, collect 5 samples, identify patterns, trigger reasoningThinking; scripts/scout.py.
After completion: context is reorganized and the skill is unloaded.

© 2025 atypica.AI - Pioneering Generative Enterprise Architecture