MasteryMade · Intelligence PRD
Two lenses share the same data but produce different outputs. Competitor Intelligence: what others are doing and where the gaps are. Marketing Rubric: what works, what doesn't, and what will work next. Both consume Gate 4 (competitors) plus Gates 2/3 (the expert's own content, for comparison). Neither produces clone training data; they inform marketing and positioning only.
{
  "entity_id": "UUID of expert",
  "competitor_ids": ["UUID array — Gate 4"],
  "analysis_depth": "quick | standard | deep"
}
{
  "positioning_map": {
    "axes": { "x": "price_point", "y": "depth_of_engagement" },
    "placements": [{ "entity": "", "x": 7, "y": 4, "positioning_statement": "" }],
    "white_space": ["underserved positioning areas"]
  },
  "gap_analysis": {
    "topics_no_one_covers": [],
    "formats_no_one_uses": [],
    "audiences_no_one_targets": [],
    "objections_no_one_addresses": []
  },
  "ad_hook_taxonomy": {
    "hook_categories": [{ "category": "pain_agitation", "examples": [], "frequency": 0.65, "engagement": "high" }],
    "winning_patterns": [],
    "saturated_patterns": [],
    "untested_patterns": []
  },
  "content_cadence": {
    "per_competitor": [{ "name": "", "posts_per_week": {}, "primary_format": "", "engagement_rate": 0.0 }]
  },
  "offer_comparison": [{ "competitor": "", "offers": [{ "name": "", "price": "", "format": "" }], "value_ladder": "" }],
  "strategic_recommendations": {
    "positioning_opportunity": "",
    "content_gap": "",
    "offer_differentiation": "",
    "hooks_to_test": []
  }
}
Categories: pain_agitation, curiosity_gap, social_proof, authority, before_after, contrarian, question_lead, story_open. Each ad classified by hook type + CTA type (click, sign up, buy, book, download) + creative format (video, image, carousel). Timeline analysis tracks launch date, run duration, seasonal patterns.
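The frequency field in hook_categories can be derived from a batch of classified ads. A minimal sketch, assuming each classified ad is a dict with hook/cta/format keys (the per-ad record shape is not fixed by this PRD):

```python
from collections import Counter

def hook_taxonomy(ads):
    """Roll classified ads up into hook_categories entries.

    Each ad is a dict with 'hook', 'cta', and 'format' keys (assumed shape);
    examples would be filled in by the upstream classifier.
    """
    counts = Counter(ad["hook"] for ad in ads)
    total = len(ads)
    return [
        {"category": cat, "examples": [], "frequency": round(n / total, 2)}
        for cat, n in counts.most_common()
    ]
```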
CREATE TABLE content_scores (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  content_id UUID NOT NULL REFERENCES content(id),
  entity_id UUID REFERENCES entities(id),
  hook_score FLOAT, hook_analysis TEXT,
  body_score FLOAT, body_analysis TEXT,
  cta_score FLOAT, cta_analysis TEXT,
  format_score FLOAT, format_analysis TEXT,
  overall_score FLOAT,
  actual_engagement JSONB,
  predicted_vs_actual FLOAT,
  rubric_version INT NOT NULL DEFAULT 1,
  scored_at TIMESTAMPTZ DEFAULT now()
);
Hook: 30% (disproportionately important: if they don't stop scrolling, nothing else matters). Body: 25% (value density, story structure, framework usage, proof). CTA: 25% (clarity, motivation alignment, friction, urgency). Format: 20% (platform optimization, production quality, length, trend alignment).
Hook quality signals: pattern interrupt, curiosity gap, relevance signal (self-selects the audience), emotional trigger, specificity.
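The weighted blend above, as a sketch (each dimension score on a 0-10 scale is an assumption; the FLOAT columns do not fix a range):

```python
# Weights from the rubric: hook 30%, body 25%, CTA 25%, format 20%.
RUBRIC_WEIGHTS = {"hook": 0.30, "body": 0.25, "cta": 0.25, "format": 0.20}

def overall_score(dimension_scores):
    """Weighted blend of the four dimension scores into overall_score."""
    return round(
        sum(RUBRIC_WEIGHTS[dim] * dimension_scores[dim] for dim in RUBRIC_WEIGHTS),
        2,
    )
```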
Batch score all content for entity + competitors. Top 20% = winners, bottom 20% = losers. Extract differentiating patterns. Output: specific, actionable differences.
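The 20% split can be sketched as follows (tie handling and the minimum-bucket rule are assumptions):

```python
def split_winners_losers(scored_content):
    """Split batch-scored content into top 20% (winners) and bottom 20% (losers)
    by overall_score."""
    ranked = sorted(scored_content, key=lambda c: c["overall_score"], reverse=True)
    k = max(1, len(ranked) // 5)  # 20% of the batch, at least one per bucket
    return ranked[:k], ranked[-k:]
```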
Performance data re-ingests (PRD 8 feedback) → compare predicted score vs actual engagement → if systematic bias → adjust weights → increment rubric_version → re-score recent content → log to registry_changelog.
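The feedback gate can be sketched as follows (the bias threshold and the adjust hook are illustrative assumptions; the PRD only specifies the detect → adjust → version-bump sequence):

```python
def recalibrate(rows, weights, rubric_version, adjust, bias_threshold=0.5):
    """Detect systematic bias between predicted and actual scores, then bump
    rubric_version. `bias_threshold` and `adjust` are assumed, not PRD spec."""
    bias = sum(r["predicted"] - r["actual"] for r in rows) / len(rows)
    if abs(bias) < bias_threshold:
        return weights, rubric_version        # within tolerance: no change
    new_weights = adjust(weights, bias)       # domain-specific weight update
    return new_weights, rubric_version + 1    # new version triggers re-score
```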
Built from Module 7 (pattern recognition) + entity metadata:
{
  "avatar_name": "Stressed Professional Sarah",
  "demographics": { "age_range": "35-45", "income": "$80K-150K", "role": "mid-management" },
  "psychographics": { "pain_points": [], "desires": [], "beliefs": [], "objections": [] },
  "media_consumption": { "platforms": [], "content_preferences": [], "scroll_behavior": "" }
}
Process: generate 3-5 ICP avatars per expert. For each content piece, Claude role-plays each avatar reading it. Evaluate: stop scrolling? Read in full? Click the CTA? Share? Aggregate the verdicts into a synthetic engagement prediction. Compare to the rubric score. When actual performance arrives, validate both against reality.
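The aggregation step can be sketched as follows (equal weighting across avatars is an assumption; the PRD leaves aggregation unspecified):

```python
def aggregate_avatar_reactions(reactions):
    """Average per-avatar role-play verdicts into one synthetic engagement
    prediction. Each reaction is a dict of booleans from one avatar pass."""
    n = len(reactions)
    return {
        signal: sum(1 for r in reactions if r[signal]) / n
        for signal in ("stop_scrolling", "read_full", "click_cta", "share")
    }
```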
Patterns discovered for one expert inform all others. Group content scores by hook type across ALL experts/niches. If variance < 0.2 → tag as universal (works everywhere). If high variance → tag as niche-specific. Same for format, CTA, body patterns. Output: cross-portfolio insight report stored as Gate 1 content.
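The universal vs niche-specific tagging can be sketched as follows (population variance over per-niche mean scores is an assumed reading of "variance < 0.2"):

```python
from statistics import mean, pvariance

def tag_pattern(scores_by_niche, var_threshold=0.2):
    """Tag a hook/format/CTA pattern as universal or niche-specific by the
    variance of its mean score across niches (0.2 threshold from the PRD)."""
    niche_means = [mean(scores) for scores in scores_by_niche.values()]
    return "universal" if pvariance(niche_means) < var_threshold else "niche_specific"
```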
MASTERYMADE — PRD 5 of 12
Dominia Facta. Build what compounds.