MasteryMade · Experience PRD
Replace the static MasteryOS sidebar with an agent that generates the interface dynamically per user per session. The agent knows the expert's methodology (8-module extraction), knows the user's context (onboarding + history), and produces UI elements based on what that user needs. The channel is just delivery.
User Request (any channel)
→ Channel Adapter (Web/Telegram/WhatsApp/Voice → standard format)
→ Context Engine (WHO is this user, WHERE in their journey)
→ Expert Agent Core (Module 9 retrieval → what to surface)
→ Experience Renderer (agent output → channel-specific UI)
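The adapter step above can be sketched as a small normalizer. This is a minimal sketch under assumptions: the `InboundMessage` field names and the Telegram payload mapping are illustrative, not defined in this PRD.

```python
from dataclasses import dataclass, field

# Hypothetical "standard format" every channel adapter emits.
@dataclass
class InboundMessage:
    user_id: str
    expert_id: str
    channel: str        # "web" | "telegram" | "whatsapp" | "voice"
    text: str
    metadata: dict = field(default_factory=dict)

def adapt_telegram(update: dict, expert_id: str) -> InboundMessage:
    """Map a Telegram-style update payload into the standard format."""
    msg = update["message"]
    return InboundMessage(
        user_id=str(msg["from"]["id"]),
        expert_id=expert_id,
        channel="telegram",
        text=msg.get("text", ""),
        metadata={"chat_id": msg["chat"]["id"]},
    )
```

Each channel gets its own adapter with the same return type, so the Context Engine downstream never sees channel-specific payloads.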
```json
{
  "user_id": "UUID",
  "expert_id": "UUID",
  "state": "new | exploring | learning | applying | advanced",
  "onboarding_complete": false,
  "modules_exposed_to": [1, 2],
  "frameworks_used": ["Framework A"],
  "current_problem": "struggling with work-life alignment",
  "preferences": {
    "communication_style": "direct",
    "depth": "frameworks_with_examples",
    "time": "busy"
  }
}
```
- new → exploring: completes onboarding (3-5 discovery prompts).
- exploring → learning: engages with first framework.
- learning → applying: reports trying a framework in real life.
- applying → advanced: has used 3+ frameworks, returns with specific scenarios.
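The journey transitions can be sketched as a lookup table plus one guarded case. The event names (`onboarding_complete`, `engaged_framework`, etc.) are assumptions introduced here for illustration; the PRD only names the states.

```python
# Journey state machine sketch; event names are hypothetical labels
# for the triggers described in the PRD.
TRANSITIONS = {
    ("new", "onboarding_complete"): "exploring",
    ("exploring", "engaged_framework"): "learning",
    ("learning", "reported_real_use"): "applying",
}

def next_state(state: str, event: str, frameworks_used: int = 0) -> str:
    # applying → advanced is gated on 3+ frameworks used.
    if state == "applying" and event == "returned_with_scenario" and frameworks_used >= 3:
        return "advanced"
    # Unknown (state, event) pairs keep the user where they are.
    return TRANSITIONS.get((state, event), state)
```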
Agent asks 3-5 questions from Module 7 (pattern recognition):

1. "What brought you here?" → maps to diagnostic patterns.
2. "Biggest challenge with [domain]?" → identifies pain.
3. "Tried solving before?" → gauges experience.
4. "What would success look like?" → establishes goal.
5. "How much time?" → sets pace.

Based on the answers, the agent selects the initial framework pathway from the Module 5 scaffold order.
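The answer-to-pathway step could look like the sketch below. The heuristics (keyword checks for prior experience and time pressure) are placeholders for real Module 7 pattern matching, and the answer keys are invented for illustration.

```python
def select_pathway(answers: dict, scaffold_order: list) -> dict:
    """Pick an entry point into the Module 5 scaffold order.

    `answers` keys mirror the five onboarding questions; the keyword
    heuristics here stand in for actual diagnostic pattern matching.
    """
    experienced = "yes" in answers.get("tried_before", "").lower()
    start = 1 if experienced else 0   # skip the intro framework for experienced users
    pace = "condensed" if "busy" in answers.get("time", "").lower() else "standard"
    return {"pathway": scaffold_order[start:], "pace": pace}
```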
Loads meta-index (~500 tokens from PRD 4). Based on user state + query, follows retrieval pointers to load specific modules:
```python
def generate_response(user_state, query, meta_index):
    # 1) Classify intent, 2) match the meta-index retrieval pointers,
    # 3) load only the modules those pointers name, 4) call the model
    # with a structured-output contract.
    intent = classify_intent(query)
    retrieval = meta_index.retrieval_pointers.match(intent, user_state)
    loaded_context = fetch_expert_chunks(retrieval.modules_to_load)
    response = claude_api(
        system=build_expert_prompt(meta_index, user_state),
        context=loaded_context,
        message=query,
        output={"response_text": "", "ui_directives": [], "follow_ups": [], "state_updates": {}},
    )
    return response
```
| Type | When | Content |
|---|---|---|
| framework_card | User needs to learn a concept | Title, summary, steps, visual |
| action_prompt | User should do something NOW | Specific action, time estimate, win condition |
| progress_indicator | Working through progression | Current step, what's next |
| resource_link | Supplementary material available | Link to NowPage playbook, video |
| reflection_question | User needs to think first | Self-assessment question |
| diagnostic_result | After discovery questions | Assessment, recommended path |
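A directive from the table above might be shaped like the example below. The payload fields and the validation helper are assumptions; the PRD defines the directive types but not their exact JSON contract.

```python
# Hypothetical framework_card payload matching the table's "Content" column.
FRAMEWORK_CARD = {
    "type": "framework_card",
    "title": "Framework A",
    "summary": "One-paragraph overview of the concept.",
    "steps": ["Step 1", "Step 2", "Step 3"],
    "visual": "https://example.com/diagram.png",   # placeholder URL
}

KNOWN_TYPES = {"framework_card", "action_prompt", "progress_indicator",
               "resource_link", "reflection_question", "diagnostic_result"}

def validate_directive(directive: dict) -> bool:
    """Reject directives whose type the renderer does not know."""
    return directive.get("type") in KNOWN_TYPES
```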
Web (NowPage/betaap.io): Framework cards as expandable panels. Action prompts as highlighted callouts with checkboxes. Follow-ups as clickable buttons. Full rich HTML.
Telegram: Formatted messages with inline keyboard for "Show steps"/"Tell me more". Action prompts with ✅ complete button. Messages <300 words with "expand" button.
WhatsApp: Simpler formatting. Numbered lists for options: "Reply 1 for Framework A, 2 for B." Images sent separately.
Voice: Response spoken in expert's style (Module 2 informs TTS prompt). UI directives → verbal prompts: "I have a framework that might help. Walk you through it?" Follow-ups as voice menu.
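The Experience Renderer dispatch for one directive across the four channels might look like this sketch. All formatting choices (HTML elements, keyboard labels, the spoken phrasing) are illustrative renderings of the channel notes above, not a fixed spec.

```python
def render_framework_card(card: dict, channel: str):
    """Render one framework_card directive for a given channel."""
    if channel == "web":
        # Expandable panel, per the NowPage notes.
        steps = "".join(f"<li>{s}</li>" for s in card["steps"])
        return (f"<details><summary>{card['title']}</summary>"
                f"<p>{card['summary']}</p><ol>{steps}</ol></details>")
    if channel == "telegram":
        # Short message plus inline keyboard.
        return {"text": f"*{card['title']}*\n{card['summary']}",
                "inline_keyboard": [["Show steps", "Tell me more"]]}
    if channel == "whatsapp":
        # Plain numbered list, simpler formatting.
        lines = [card["title"]] + [f"{i + 1}. {s}" for i, s in enumerate(card["steps"])]
        return "\n".join(lines)
    # voice: convert the card into a spoken offer.
    return f"I have a framework called {card['title']} that might help. Walk you through it?"
```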
NowPage pages as self-executing AI assistants. Each expert's page includes hc-metadata (service identity, capabilities), hc-instructions (identity, voice, boundaries), hc-context-public (meta-index URL). The page content IS the knowledge source. Embedded agent reads visible text + JSON metadata. No separate API for basic interactions.
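One way the embedded agent could read those blocks is sketched below. The `hc-*` block names come from this PRD, but the embedding convention assumed here (JSON inside `<script type="application/json">` tags keyed by `id`) is an assumption, not a documented format.

```python
import json
import re

def read_hc_blocks(page_html: str) -> dict:
    """Extract hc-* JSON blocks from a NowPage's HTML.

    Assumes each block is embedded as
    <script type="application/json" id="hc-...">{...}</script>.
    """
    blocks = {}
    for name in ("hc-metadata", "hc-instructions", "hc-context-public"):
        m = re.search(
            rf'<script type="application/json" id="{name}">(.*?)</script>',
            page_html, re.S,
        )
        if m:
            blocks[name] = json.loads(m.group(1))
    return blocks
```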
1. Samuel's public extraction (Gate 2) = initial knowledge base.
2. Build meta-index from Modules 1-5.
3. Create HC page at betaap.io.
4. Implement onboarding from Module 7.
5. Web channel only for PoC.
6. Measure: engagement rate, framework completion rate, return rate.
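The three PoC metrics can be given working definitions like the sketch below. The session-record fields and the per-user denominators are assumptions; the PRD names the metrics without defining them.

```python
def poc_metrics(sessions: list) -> dict:
    """Compute the three PoC metrics over per-session records.

    Assumed record fields: user_id, messages (count), framework_completed.
    All rates use unique users as the denominator.
    """
    users = {s["user_id"] for s in sessions}
    engaged = {s["user_id"] for s in sessions if s.get("messages", 0) > 1}
    completed = {s["user_id"] for s in sessions if s.get("framework_completed")}
    returning = {u for u in users
                 if sum(1 for s in sessions if s["user_id"] == u) > 1}
    n = len(users) or 1
    return {
        "engagement_rate": len(engaged) / n,
        "framework_completion_rate": len(completed) / n,
        "return_rate": len(returning) / n,
    }
```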
MASTERYMADE — PRD 7 of 12
Dominia Facta. Build what compounds.