Course Building Agent

An orchestrated multi-agent pipeline that designs evidence-based curricula using Claude Code. A central orchestrator coordinates five specialist sub-agents through a sequential pipeline, enforcing quality gates between phases and presenting results to the user at key checkpoints.

Built using the FRAME methodology and grounded in 12 instructional design frameworks with peer-reviewed evidence ratings.

How It Works

User Brief
    │
    ▼
┌─────────────────────────────────────────────────────┐
│              ORCHESTRATOR (CLAUDE.md)                │
│  Collects brief → Delegates → Gates → Delivers      │
└──────────┬──────────────────────────────────────────┘
           │
           ├─→ Phase 1: Intake & Research
           │   Brief validation, domain + audience research
           │
           ├─→ Phase 2: Persona Builder
           │   3–5 pedagogical learner personas
           │       🛑 USER CHECKPOINT — confirm personas
           │
           ├─→ Phase 3: Curriculum Designer
           │   Objectives, strategies, modules, activities, assessments, slide specs
           │       🔵 USER CHECKPOINT (recommended) — review curriculum structure
           │
           ├─→ Phase 4: Persona Tester
           │   Walk-throughs, friction map, differentiation responses
           │
           └─→ Phase 5: Delivery Packager
               Design rationale, curriculum package, pre-flight checklist

The user provides a project brief (topic, audience, format, duration, delivery profile). The orchestrator validates the brief, checks for constraint conflicts, and then delegates to each sub-agent in sequence. Between every phase, a quality gate verifies the output before the next sub-agent begins. If a gate fails, the pipeline stops and asks the user how to proceed — it never silently papers over problems.

There are two user checkpoints: a mandatory checkpoint after Phase 2 (personas), because everything downstream depends on accurate personas, and a recommended (waivable) checkpoint after Phase 3 (curriculum design), where the user can review objectives, module structure, and slide specs before persona testing begins.
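
The phase, gate, and checkpoint flow can be sketched in a few lines of Python. This is an illustrative sketch only; the real orchestration lives in CLAUDE.md, and the phase names and callable signatures here are assumptions.

```python
# Illustrative sketch of the orchestrator loop (not the real CLAUDE.md logic).
PHASES = [
    ("intake-research", False),      # (name, user checkpoint after?)
    ("persona-builder", True),       # mandatory checkpoint
    ("curriculum-designer", True),   # recommended (waivable) checkpoint
    ("persona-tester", False),
    ("delivery-packager", False),
]

def run_pipeline(run_phase, gate, ask_user):
    """Run each phase in sequence, enforcing a quality gate between phases.

    run_phase(name) -> output, gate(name, output) -> bool, and
    ask_user(prompt) -> bool are supplied by the host environment.
    """
    outputs = {}
    for name, checkpoint in PHASES:
        output = run_phase(name)
        if not gate(name, output):
            # Hard Stop Protocol: never silently continue past a failed gate.
            raise RuntimeError(f"Quality gate failed after {name}; ask the user how to proceed")
        if checkpoint and not ask_user(f"Confirm output of {name}?"):
            raise RuntimeError(f"User declined checkpoint after {name}")
        outputs[name] = output
    return outputs
```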

Directory Structure

course-building-agent/
│
├── CLAUDE.md                    ← Orchestrator instructions (Claude Code reads this)
├── ARCHITECTURE.md              ← How it all fits together
├── README.md                    ← This file
│
├── skills/                      ← Sub-agent skill files (one per phase)
│   ├── 01-intake-research/
│   │   └── SKILL.md             ← Phase 1: domain + audience research
│   ├── 02-persona-builder/
│   │   └── SKILL.md             ← Phase 2: pedagogical persona generation
│   ├── 03-curriculum-designer/
│   │   └── SKILL.md             ← Phase 3: objectives, modules, activities
│   ├── 04-persona-tester/
│   │   └── SKILL.md             ← Phase 4: walk-throughs, friction map
│   └── 05-delivery-packager/
│       └── SKILL.md             ← Phase 5: rationale, package, checklist
│
├── references/                  ← Shared knowledge base (all sub-agents access)
│   ├── rules.md                 ← 16 rules — the pipeline's operating system
│   ├── evidence-base.md         ← Full citations and evidence strength ratings
│   └── uploads/                 ← Upload expert reference materials and user-supplied documents here
│
├── scripts/                     ← Helper scripts
│   └── council-research.py      ← Sends research queries to LLM Council
│
├── examples/                    ← Example inputs for testing
│   └── example-brief.md         ← 5 sample briefs (workshop, e-learning, Zoom, academic, micro-learning)
│
└── outputs/                     ← Generated at runtime (not committed)
    ├── brief.md
    ├── external-research/       ← Optional Perplexity/NotebookLM outputs
    ├── phase1-research/
    ├── phase2-personas/
    ├── phase3-design/
    ├── phase4-testing/
    └── phase5-delivery/

Quick Start

Option 1: Full Agent (Recommended)

Open this directory as your project in Claude Code. The CLAUDE.md file is read automatically as the project instructions. Then give it a brief:

> Design a 2-day workshop on data visualisation for 25 mid-career marketers

The orchestrator handles delegation, gates, and checkpoints. It will ask you for any missing brief fields before starting.

Option 2: Individual Sub-Agents

Run a specific phase manually:

# Run just the research phase
claude --skill skills/01-intake-research/SKILL.md

# Run just persona building (after research is done)
claude --skill skills/02-persona-builder/SKILL.md

Option 3: As a Skill in Another Project

Copy the skills/ and references/ directories into your own project's .claude/skills/ directory. Reference them from your own CLAUDE.md.

The Pipeline in Detail

Pre-Run: Workspace Hygiene

Before every new course generation, the orchestrator checks whether outputs/ contains files from a previous run. Stale artefacts cause cross-contamination — sub-agents inherit workspace context and may ingest files from prior courses. The orchestrator asks you to delete or archive old outputs before proceeding.

Step 0: Collect the Brief

The orchestrator collects the following before any work begins:

| Input | Required? | Example |
|---|---|---|
| Topic | Yes | "Data visualisation for marketers" |
| Target audience | Yes | "25 mid-career analysts, mixed technical ability" |
| Delivery format | Yes | "2-day face-to-face workshop" |
| Duration | Yes | "14 hours (2 × 7-hour days)" |
| Delivery profile | Yes | Mode, facilitation model, session structure, platform, interaction model |
| Constraints | Optional | "Corporate IT restrictions", "No admin access" |
| Business objective | Optional | "Increase self-service analytics capability" |
| Expert reference materials | Recommended | Upload to references/uploads/ — practitioner guides, slide decks, tool documentation |
| Recency window | Optional | "Last 6 months", "Last 2 years" — how far back to research recent developments |

If any required field is missing or too vague, the orchestrator stops and asks for clarification. It does not guess.
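
The required-field check can be sketched as follows; the field keys and the vagueness heuristic are assumptions for illustration.

```python
REQUIRED_FIELDS = ["topic", "target_audience", "delivery_format", "duration", "delivery_profile"]

def missing_brief_fields(brief):
    """Return required fields that are absent or too vague to proceed on.
    The orchestrator stops and asks for these rather than guessing."""
    missing = []
    for field in REQUIRED_FIELDS:
        value = (brief.get(field) or "").strip()
        if len(value) < 3:  # crude vagueness heuristic for the sketch
            missing.append(field)
    return missing
```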

Expert reference materials and user-supplied documents (audience data, existing content, prior analysis) should be uploaded to references/uploads/ before starting the pipeline. This is the single designated location for all user-provided inputs that Sub-Agent 1 consumes during the research phase. The orchestrator will verify the folder's contents and explicitly list each file when delegating to Sub-Agent 1.

Step 0b: Constraint Compatibility Check

Before designing anything, the orchestrator tests all constraints for mutual compatibility: format–activity conflicts, audience–complexity mismatches, duration–scope mismatches, facilitation–format mismatches, and technology–interaction mismatches. Conflicts are presented to the user with options to adjust, prioritise, or design a documented compromise.
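
A sketch of what such pairwise checks might look like; the conflict rules and brief keys below are invented for illustration, and the real checks are defined by the orchestrator.

```python
# Hypothetical conflict rules; the real pipeline's checks live in CLAUDE.md.
def constraint_conflicts(brief):
    """Return human-readable descriptions of incompatible constraint pairs."""
    conflicts = []
    fmt = brief.get("delivery_format", "")
    if "self-paced" in fmt and "live group discussion" in brief.get("activities", ""):
        conflicts.append("format-activity: self-paced format cannot host live group discussion")
    hours = brief.get("duration_hours", 0)
    modules = brief.get("planned_modules", 0)
    if modules and hours and hours / modules < 0.5:
        conflicts.append("duration-scope: under 30 minutes per planned module")
    return conflicts
```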

Step 0c: External Research Enrichment (Optional)

Before the pipeline starts, the orchestrator presents four external-research options:

  • (a) Manual external research via Perplexity Pro and NotebookLM Pro (25–35 min of user effort)
  • (b) Skip external research and rely on built-in research only
  • (c) Provide expert reference materials only
  • (d) Automated LLM Council research — the 5 research queries go to a local LLM Council that provides web research, 5-model deliberation, peer review, and chairman synthesis, fully automated in 15–25 minutes; requires the LLM Council backend running at localhost:8001 (see ~/Code/personal/llm-council)
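
Option (d) amounts to POSTing each query to the local backend. The sketch below is a guess at the client shape; the endpoint path and payload are assumptions, and the actual client is scripts/council-research.py.

```python
import json
import urllib.request

COUNCIL_URL = "http://localhost:8001/research"  # assumed endpoint path

def build_council_request(query, url=COUNCIL_URL):
    """Build the POST request for one research query (payload shape assumed)."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )

def send_council_query(query, url=COUNCIL_URL):
    """Send one query; council runs take 15-25 minutes, hence the long timeout."""
    with urllib.request.urlopen(build_council_request(query, url), timeout=1800) as resp:
        return json.loads(resp.read())
```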

If external research is chosen, the orchestrator generates ready-to-use prompt templates with placeholders filled from the brief, creates template files for each research prompt, and verifies every research file before proceeding.
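
Placeholder filling from the brief might look like this; the template text is a hypothetical example, not one of the actual research prompts.

```python
# Hypothetical template; the real prompts are generated by the orchestrator.
RESEARCH_TEMPLATE = (
    "Research current practitioner workflows in {topic} for an audience of "
    "{target_audience}, focusing on developments from the last {recency_window}."
)

def fill_template(template, brief):
    """Fill prompt placeholders from the brief, failing loudly on gaps."""
    try:
        return template.format(**brief)
    except KeyError as missing:
        raise ValueError(f"Brief is missing field {missing}; the orchestrator asks rather than guesses")
```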

Phase 1: Intake & Research

Sub-Agent 1 validates the brief and produces two research documents: a domain research summary (7 mandatory sections covering practitioner workflows, tool features, anti-patterns, regional context, best practices, misconceptions, recent developments, and prerequisites) and an audience research summary (5 mandatory sections covering roles, prior knowledge, motivations, barriers, and success indicators).

Three quality gates check structural completeness, domain depth (catching research that is structurally complete but content-shallow), and cross-contamination.

Phase 2: Persona Builder

Sub-Agent 2 generates 3–5 pedagogical learner personas, each with 6 required attributes (name and context, background, current Bloom's level, motivation, friction points, success criterion). Every persona set must include a sceptic/reluctant learner, a covert struggler, and a role outlier. Chain-of-thought reasoning appears before each persona, and four validation checks run after all personas.

After the quality gate passes, the orchestrator presents personas to the user and waits for explicit confirmation. The pipeline does not proceed on silence.
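
In pseudocode terms, the confirmation rule is strict: only an explicit affirmative advances the pipeline. The accepted phrases below are illustrative.

```python
AFFIRMATIVE = {"yes", "y", "confirm", "confirmed", "approved", "looks good"}

def checkpoint_confirmed(reply):
    """Silence (None), empty input, or ambiguity never advances the pipeline."""
    return reply is not None and reply.strip().lower() in AFFIRMATIVE
```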

Phase 3: Curriculum Designer

Sub-Agent 3 produces five files: Bloom's-tagged learning objectives, instructional strategies with framework citations and alternatives, a sequenced module structure with Merrill and Kolb audits per unit, activities/assessments with alignment verification and TPACK checks, and structured slide specifications for every presentation-style activity. A content depth self-audit scores every module on specificity, practitioner workflow mapping, and activity concreteness.

After the quality gate passes, the orchestrator presents a curriculum summary and offers the user a review checkpoint (waivable).

Phase 4: Persona Tester

Sub-Agent 4 stress-tests the curriculum by walking each persona through every module. It produces narrative walk-throughs (with bias calibration weighting negatives 2×), a friction map scoring every persona × module cell on 5 dimensions individually (6 for async/self-paced), and differentiation responses for every Amber or Red cell using one of four response types. Every finding is framed as a hypothesis.
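
A sketch of per-dimension scoring with the 2× negative weighting; the dimension names and thresholds here are assumptions (the real five dimensions are defined in the skill files).

```python
DIMENSIONS = ["cognitive_load", "relevance", "pacing", "tooling", "confidence"]  # illustrative names

def rate_dimension(score, negative_weight=2.0):
    """Rate one dimension of one persona x module cell.
    Scores run from -1 (strong friction) to +1 (works well); per R7,
    negative signals are weighted 2x against synthetic positivity bias."""
    weighted = score * negative_weight if score < 0 else score
    if weighted <= -1.0:
        return "Red"
    if weighted < 0.5:
        return "Amber"
    return "Green"

def rate_cell(scores):
    """Per R8, dimensions are reported individually, never aggregated."""
    return dict(zip(DIMENSIONS, (rate_dimension(s) for s in scores)))
```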

Phase 5: Delivery Packager

Sub-Agent 5 compiles three stakeholder-ready deliverables: a design rationale document (with evidence citations, strength ratings, and boundary conditions for every decision), a complete curriculum package, and a format-adaptive pre-flight checklist with a validation plan.

Step 6: Final Presentation

The orchestrator presents a summary of key design decisions, friction hotspots, which findings remain hypotheses versus confirmed, and recommended next steps for real-learner validation. The user decides whether to accept, revise, or iterate.

Shared Infrastructure

The 16 Rules

All sub-agents operate under 16 rules defined in references/rules.md. These are the pipeline's operating system:

| Rule | Name | Core Principle |
|---|---|---|
| R1 | Backward Design First | Outcomes → assessments → activities, never "topics first" |
| R2 | Bloom's Tagging | Every objective tagged with a Bloom's level |
| R3 | Merrill's Five Principles Audit | Every module checked against P1–P5 |
| R4 | Kolb Cycle Completeness | CE/RO/AC/AE as design checklist (not learning styles) |
| R5 | Cognitive Load Management | Prerequisites first; ≤3 new concepts without consolidation |
| R6 | Pedagogical Personas | 3–5 data-derived personas with 6 required attributes |
| R7 | Synthetic Positivity Bias | Weight negatives 2×; "all clear" is a red flag |
| R8 | 5-Dimension Friction Map | Score each persona × module on 5 dimensions; never aggregate |
| R9 | Design Rationale Documentation | Every decision cited, evidence rated, boundaries noted |
| R10 | Alignment Verification | No orphan activities, no untested objectives |
| R11 | TPACK Completeness | Every recommendation addresses Technology, Pedagogy, Content |
| R12 | Input Fidelity | Never fabricate; never assume; state gaps |
| R13 | Differentiation Response System | 4 response types with framework citations |
| R14 | Format-Aware Adaptation | Async formats need different activities |
| R15 | Constraint Conflict Detection | Flag impossible combinations before designing |
| R16 | Delivery Mode Adaptation | Format-specific evidence; do not assume face-to-face |

Quality Gates

Every transition between phases passes through a quality gate. Gates use a Hard Stop Protocol: when any check fails, the orchestrator stops immediately, states the specific failure with evidence, states what is needed to pass, and asks the user how to proceed. It does not resume until the user responds.

Additional safeguards include sub-agent input scoping (each sub-agent receives only its required files), cross-contamination scans at every gate (grepping for terms from prior courses), and a domain depth advocacy role where the orchestrator actively assesses whether research enables practitioner-level activities, not just structurally complete sections.
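
The cross-contamination scan is essentially a case-insensitive grep over each output file; a minimal sketch:

```python
import re

def contamination_hits(text, prior_course_terms):
    """Grep-style scan for terms from prior courses (case-insensitive),
    run at every quality gate over each sub-agent's output."""
    hits = {}
    for term in prior_course_terms:
        found = re.findall(re.escape(term), text, flags=re.IGNORECASE)
        if found:
            hits[term] = len(found)
    return hits
```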

Evidence Base

The pipeline draws on 12 instructional design frameworks, each rated for evidence strength (Strong, Medium, or Weak). Ratings are defined in references/evidence-base.md. Strong means peer-reviewed and replicated; Medium means widely accepted with limited large-scale testing; Weak means expert consensus or novel application. All Weak-rated elements are treated as hypotheses.
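
Expressed as data, the rating scheme is simple; the dict below paraphrases references/evidence-base.md.

```python
# Rating definitions paraphrased from references/evidence-base.md.
RATING_MEANING = {
    "Strong": "peer-reviewed and replicated",
    "Medium": "widely accepted, limited large-scale testing",
    "Weak": "expert consensus or novel application",
}

def is_hypothesis(rating):
    """Weak-rated design elements are treated as hypotheses to validate
    with real learners, not as confirmed findings."""
    return rating == "Weak"
```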

Supported Delivery Formats

The pipeline adapts to any delivery format through its delivery profile system:

  • Face-to-face workshops (single or multi-day)
  • Live online sessions (Zoom/Teams)
  • Self-paced e-learning (with or without voiceover)
  • Academic semesters (lectures + tutorials + labs)
  • Micro-learning (5–15 minute units)
  • Blended formats (mixed sync/async)

Format-specific adaptations are applied throughout — from research depth and structural vocabulary to activity types, assessment formats, and friction dimensions.

Key Design Decisions

Why sub-agents instead of one monolithic prompt? Each phase has different cognitive demands. Research needs breadth; persona building needs empathy and data synthesis; curriculum design needs structural thinking; testing needs adversarial thinking; packaging needs precision. Splitting into sub-agents gives each focused instructions and the right mental model.

Why quality gates? Without gates, errors compound silently. A flawed persona produces a flawed friction map which produces flawed differentiation. Gates catch problems at the cheapest point to fix them.

Why a user checkpoint after personas? Personas are the highest-leverage input in the pipeline. Everything from objectives to friction maps depends on them. A 5-minute review here saves hours of rework downstream.

Why 16 rules instead of general guidance? Testing showed that general guidance produces inconsistent results. Specific rules with framework citations produce auditable, repeatable designs.

Provenance

This agent was built using the FRAME methodology:

  1. Find evidence — 12 instructional design frameworks researched
  2. Role & Rules — 16 rules derived from the research, with chain-of-thought requirement
  3. Assemble — materials from a real workshop project used as test case
  4. Measure & Modify — test run with 4 personas, 6-question critique, 7 improvements
  5. Expand & Embed — product tested with 4 test cases (2 typical, 2 edge), 3 prompt edits applied
