MasteryMade · Execution Playbook

Paint by Numbers: The Asymmetric Execution Sequence

How 5 scattered workstreams collapse into one sequential domino chain. Each step builds the machine while serving the customer.

The 5 Workstreams Are One Pipeline

These look like separate projects. They're not. They're layers of the same system viewed from different angles.

| # | Workstream | What it actually is | Where it lives today |
|---|------------|---------------------|----------------------|
| 1 | Jason's modular services plan | THE MAP — architecture spec for everything | plan.jasondmacdonald.com (16 pages, 12 PRDs) |
| 2 | RSS / intelligence dashboard | THE RADAR — signals filtered against goals | jasondmacdonald.com (briefs #47-51+, Neural Registry) |
| 3 | Universal ingest + lenses | THE ENGINE — how data flows and gets processed | Designed in PRD 3 + PRD 2. Not built yet. |
| 4 | MasteryOS extraction + clones | THE FACTORY — how experts get extracted and deployed | Skills in /mnt/skills/user/ (expert-*). Platform at betaapp.io. PRD 4 spec. |
| 5 | Samuel's Align360 | THE FIRST UNIT TEST — proves the factory works | E:\align360\ (local). align360.asapai.net. design.align360.io. betaapp.io. Claude Code CLI sessions. |

The reconciliation problem isn't about cross-linking pages. It's about execution order. The pages are documentation artifacts — they're useful but they're not the work. The work is: extract Samuel → prove the extraction pipeline → package as skill → deploy on betaapp.io → use the skill on next expert. Everything else (the map, the radar, the engine design) exists to guide and improve that loop.

The Domino

One action that makes everything else easier or unnecessary:

Run PRD 4 Module 1 extraction on Samuel's existing material.

This single step produces all three outputs The Rule demands: an artifact for Samuel (the Module 1 rubric), a reusable extraction skill, and a live test of PRD 4's architecture.

Paint by Numbers — The Execution Sequence

Each step builds the machine AND serves Samuel. No step exists purely for architecture. If it doesn't produce a usable artifact, it's not on this list.

Step 1: Ship Samuel's betaapp.io (this week)

PARALLEL — Will + Sumit

Wiring for Impact assessment working. Phase 0 tools responding. 5 alpha users can access it. This doesn't wait for the extraction pipeline — it uses the existing System Prompt v6.1 and knowledge files. Good enough for Wave 1.

Win condition: Samuel logs in and completes Wiring for Impact assessment. 5 users invited.

Touches: betaapp.io · PRD 11 (SILO 2 ops)

Step 2: Module 1 extraction — FLC Wisdom Framework as structured rubric

JASON — Forge or Claude session

Source material: 3 extraction transcripts + governance doc + System Prompt v6.1. Extract the FLC Wisdom Framework (5 layers + Tri-Filter + Clarity Path) as Module 1 structured JSON per PRD 4 Section 4.4. Validate: does the rubric match how Samuel actually thinks in the transcripts?

Win condition: JSON rubric that passes human review. Rubric identifies at least 3 named frameworks, 5+ decision logic patterns, clear priority hierarchy.

Touches: E:\align360\ source files · PRD 4 · Align360 reconciliation

What this creates: The Module 1 extractor skill — reusable on any expert. Store the extraction in Supabase expert_extractions table (or local first, migrate later).
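The real schema lives in PRD 4 Section 4.4; as a rough sketch, the Module 1 rubric and its win-condition check might look like this (field names, pattern placeholders, and the check itself are illustrative assumptions):

```python
# Hypothetical shape of a Module 1 extraction. The authoritative schema
# is PRD 4 Section 4.4; field names here are assumptions.
module1_rubric = {
    "expert_id": "samuel-align360",
    "frameworks": [  # named frameworks pulled from the source material
        {"name": "FLC Wisdom Framework"},
        {"name": "Tri-Filter"},
        {"name": "Clarity Path"},
    ],
    "decision_logic": [  # placeholder pattern IDs, not real extractions
        "pattern-1", "pattern-2", "pattern-3", "pattern-4", "pattern-5",
    ],
    "priority_hierarchy": ["clarity", "alignment", "action"],
}

def passes_win_condition(rubric: dict) -> bool:
    """Mirror the Step 2 win condition: at least 3 named frameworks,
    5+ decision logic patterns, and a non-empty priority hierarchy."""
    return (
        len(rubric.get("frameworks", [])) >= 3
        and len(rubric.get("decision_logic", [])) >= 5
        and bool(rubric.get("priority_hierarchy"))
    )
```

A machine-checkable gate like this doesn't replace the human review, but it catches structurally incomplete extractions before a reviewer sees them.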

Step 3: Modules 2-8 — Decompose System Prompt v6.1

JASON — sequential, each validates against Module 1

The System Prompt v6.1 already contains most of this in monolithic form. This step DECOMPOSES it into modular JSON per module. Not re-extracting — transforming format.

Win condition: All 8 modules as validated JSON. Forward pass coherence check passes. Each module traces back to source material.

What this creates: The decomposition skill — takes ANY monolithic expert system prompt and decomposes it into 8 modules. This is the scale multiplier.

Step 4: Module 9 — Retrieval patterns from 14 background systems

JASON

Samuel's 14 background systems (Pathfinder, RhythmOS, Epistemic Drift Detection, etc.) ARE retrieval patterns. Convert to Module 9 routing rules: "When user asks about X → load Module Y, framework Z."

Win condition: Routing rules JSON. Tested against 10 sample queries, it routes to the correct framework at least 8/10 times.

What this creates: The meta-index (PRD 4 Section 4.9) — lazy-load ~500 tokens instead of loading all 36 stacks into context.

Step 5: Three-pass validation

JASON — uses expert-clone-scorer + expert-test-extractor skills

Forward pass (modules cohere). Backward pass (traces to source). Ground truth pass (clone output matches Samuel's actual responses from transcripts).

Win condition: All 3 passes pass. If ground truth shows gaps, iterate Modules 2-8.
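The three passes could be wired as a single gate; the pass bodies and the 0.8 ground-truth threshold below are stand-in assumptions, not values from the PRDs or the scorer skills:

```python
# Sketch of the three-pass validation gate. Each pass returns
# (passed, name); the real checks live in the expert-clone-scorer
# and expert-test-extractor skills.
def forward_pass(modules):
    """Do the modules cohere with each other? (stub check)"""
    return all(m.get("coherent", False) for m in modules), "coherence"

def backward_pass(modules):
    """Does every module trace back to source material? (stub check)"""
    return all(m.get("source_refs") for m in modules), "traceability"

def ground_truth_pass(clone_answers, samuel_answers, threshold=0.8):
    """Does clone output match Samuel's transcript responses often
    enough? The 0.8 threshold is an assumed value."""
    matches = sum(a == b for a, b in zip(clone_answers, samuel_answers))
    return matches / len(samuel_answers) >= threshold, "ground truth"

def validate(modules, clone_answers, samuel_answers):
    """Run all three passes; any failure means iterate Modules 2-8."""
    checks = [forward_pass(modules), backward_pass(modules),
              ground_truth_pass(clone_answers, samuel_answers)]
    failed = [name for ok, name in checks if not ok]
    return not failed, failed
```

Returning the list of failed passes, rather than a bare boolean, tells you which direction to iterate: coherence and traceability failures point at Modules 2-8, ground-truth failures at the source transcripts.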

Step 6: Deploy validated extraction to betaapp.io

JASON + SUMIT

Replace monolithic System Prompt v6.1 with modular extraction. Meta-index drives retrieval. Clone quality improves because it loads relevant modules instead of everything. This is the upgrade from Wave 1 → Wave 2 quality.

Win condition: Samuel notices improvement. Clone responses more accurate, more "him."

Step 7: Package extraction as reusable skill

JASON / FORGE

The Module 1-9 extraction steps become an automated skill. Input: entity_id + ingested content. Output: structured extraction per module + meta-index. Includes MANUAL.md + HC page.

Win condition: Run on a second expert's PUBLIC content (The WAY or Brain Muka). Produces reasonable Module 1 rubric without manual intervention.

What this creates: The Expert Factory — the core IP of MasteryMade. Every future expert goes through this skill.
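The skill's contract (entity_id + ingested content in, per-module extraction + meta-index out) might be sketched like this; the function name, field names, and stub bodies are all hypothetical:

```python
# Hypothetical interface for the packaged Expert Factory skill.
# Real module contents would come from the Step 2-4 extraction
# pipeline; these stubs only show the input/output shape.
def run_expert_factory(entity_id: str, content: list[str]) -> dict:
    """Input: entity_id + ingested content.
    Output: structured extraction per module + a compact meta-index."""
    modules = {
        f"module-{i}": {"source_count": len(content)}  # stub payload
        for i in range(1, 10)  # Modules 1-9
    }
    meta_index = {  # the ~500-token lazy-load index from Step 4
        "entity_id": entity_id,
        "modules": sorted(modules),
    }
    return {"modules": modules, "meta_index": meta_index}
```

Keeping the contract this narrow is what makes the Step 7 win condition testable: point the same call at a second expert's public content and inspect the Module 1 rubric it emits.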

Step 8: Competitor intel on Samuel's niche

AGENT SWARM / FORGE

Identify 5 competitors. Ingest public content. Run competitor lens. Produce positioning map + gap analysis. Give to Samuel as go-giver artifact.

Win condition: Competitor report that Samuel finds valuable. Proves the competitor analysis pipeline. Feeds content strategy.

Step 9: Content machine activated for Align360

AGENT SWARM

Use Module 2 (voice) + Module 4 (frameworks) + competitor gaps to generate content briefs. Apply Mastery Labs matrix. Generate first month's content calendar.

Win condition: 10 content pieces generated in Samuel's voice. Samuel approves at least 7/10 without edits.

Step 10: Three-wave launch

JASON + SAMUEL

Wave 1 (Step 1) is already running. Wave 2: 50-100 invite-only users with the validated extraction. Wave 3: public launch with the content machine running.

Where Everything Lives — The Cross-Reference Map

| Artifact | Location | Role | Links to |
|----------|----------|------|----------|
| Architecture index | plan.jasondmacdonald.com/index | Site map — start here | All PRDs, all reconciliation docs |
| Master Registry | plan.jasondmacdonald.com/master-registry | Root architecture doc | All 16 sections, all PRDs |
| 12 PRDs | plan.jasondmacdonald.com/prd-* | Universal system specs | ↑ Master Registry · ↔ each other |
| Align360 reconciliation | plan.jasondmacdonald.com/align360-reconciliation | Samuel-specific gap analysis | ↑ Master Registry · ↓ boarding pack, command center, betaapp.io |
| This playbook | plan.jasondmacdonald.com/execution-playbook | Sequenced execution plan | ↑ Master Registry · → all PRDs · → all Align360 artifacts |
| Boarding Pack | align360.asapai.net/boarding-pack | Extraction process workspace | ↑ plan reconciliation · → betaapp.io |
| Command Center | design.align360.io/command-center | Project dashboard | ↑ plan reconciliation · → boarding pack · → betaapp.io |
| betaapp.io | align360.betaapp.io | Live product (user-facing) | ↑ plan reconciliation |
| Local files | E:\align360\ (Claude Code CLI) | Source material + system prompt | → Steps 2-5 of this playbook |
| Daily Briefs | jasondmacdonald.com/brief-* | Intelligence radar | → knowledge-registry · tracks execution progress |
| Knowledge Registry | jasondmacdonald.com/knowledge-registry | Concept graph across all briefs | Updated after each brief · nodes track architecture concepts |
| Extraction skills | /mnt/skills/user/expert-* | Reusable pipeline skills | → PRD 4 · → PRD 12 (MANUAL.md needed) |

The Rule

Every step in the sequence produces THREE things:

  1. An artifact for Samuel — the customer gets value
  2. A reusable skill — the next expert is cheaper and faster
  3. A validation of the architecture — the PRD either works or gets corrected

If a step only produces one of these three, it's the wrong step. If it produces all three, it's the domino.

What NOT to Do

  1. Don't polish documentation pages instead of running the extraction. The pages are artifacts, not the work.
  2. Don't run a step that produces only one of the three outputs in The Rule.
  3. Don't hold Wave 1 for the extraction pipeline. Step 1 ships on System Prompt v6.1 as-is.

Monday Morning: What to Do First

  1. Ping Will: "Ship betaapp.io for Samuel this week. Wave 1, 5 users. Wiring for Impact + Phase 0 tools."
  2. Open a new Claude session (or Forge): Load E:\align360\Align360_System_Prompt_v6.1.md + extraction transcripts. Run Module 1: "Extract the FLC Wisdom Framework as a structured JSON rubric per this spec: [paste PRD 4 Section 4.4 Module 1 output structure]."
  3. Validate the rubric: Does it match how Samuel actually thinks in the transcripts? If yes, proceed to Modules 2-8. If no, iterate.
  4. Everything else follows from there.

MASTERYMADE · Execution Playbook · March 25, 2026

Dominia Facta. Build what compounds.