AI Founder OS
7 modules · 43 lessons

The Curriculum

From setup to production, one module at a time.

Each module follows the same pattern: plan it in ChatGPT, build it with Claude, prove it compiles and works. The early modules move slowly on purpose. By the end you are shipping features in a single sitting.

Key idea

You do not need to finish every module before you start building. Modules 0 through 2 give you enough structure to ship real work. Complete those, then build alongside the rest as the discipline deepens.
[Diagram] Operator loop: Define goal → Draft spec → Execute → Handoff → Verify → Ship

The operator loop

Every module follows this cycle. Six steps, every time.

Module 0 · Optional

Environment Setup

What you get: Your machine runs the same stack you will deploy. No surprises later.

You produce: A repo that builds, lints, and deploys from your laptop.

What you cover
  • Install Node, Git, and VS Code
  • Create a Next.js project with strict TypeScript
  • Set up ESLint and Prettier so formatting is never a conversation
  • Configure .env.local for secrets
  • Run your first build and confirm it deploys
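Once secrets live in .env.local, it helps to fail fast when one is missing. A minimal sketch of an environment guard, assuming a Node/Next.js runtime; the requireEnv name and the example variable are illustrative, not part of the course:

```typescript
// Read a required value from the environment (.env.local in Next.js)
// and throw at startup instead of failing deep inside a request handler.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (hypothetical key name):
// const stripeKey = requireEnv("STRIPE_SECRET_KEY");
```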
Module 1

Architecture Foundation

What you get: A folder structure you can hand to any AI agent without it breaking things.

You produce: Architecture diagram and a written folder contract for your project.

What you cover
  • Decide structure before writing code
  • How the App Router maps to real files
  • Separate pages, components, lib, and API routes
  • Pick naming conventions and stick to them
  • Draw boundaries so AI edits stay where you want them
  • Write your first Continuity Packet
Module 2

Working with AI

What you get: Predictable results because you give precise instructions.

You produce: A prompt library you actually reuse, with spec and execution templates.

What you cover
  • Use ChatGPT to plan, Claude to build
  • One task per prompt, one outcome per task
  • Write specs with context, constraints, and what done looks like
  • Execution prompts: name the files, ask for diffs only
  • When to keep a conversation vs. start fresh
  • Build a prompt kit you reach for every day
  • Spot hallucinations before they reach your codebase
Module 3

Local-First Development

What you get: You build and test everything on your machine before it goes anywhere else.

You produce: A feature branch shipped end-to-end from localhost.

What you cover
  • Why local beats cloud IDEs for real shipping
  • The VS Code extensions worth installing
  • Run the full stack locally: server, database, auth
  • Hot reload so you see changes in under a second
  • Small commits with messages that explain why
  • Keep local, preview, and production identical
Module 4

Verification and Proof

What you get: You know a change works because you proved it, not because it looked right.

You produce: A proof log template and a checklist for catching regressions.

What you cover
  • Run typecheck, lint, and build before every single commit
  • Tell AI to include verification steps in its output
  • Check what broke after every change, not just what you added
  • Use TypeScript strict mode so the compiler catches what you miss
  • Know the difference between build errors and runtime errors
  • Save proof artifacts: build logs, screenshots, terminal output
  • Use governor blocks to stop AI from touching files it should not
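A proof log can be as simple as one line per commit. A sketch of what an entry might look like; the ProofEntry shape and formatProofEntry helper are illustrative, not the course's template:

```typescript
// One proof log entry: which commit, which checks ran, and whether they passed.
interface ProofEntry {
  commit: string;
  checks: { name: string; passed: boolean }[];
  notes?: string;
}

// Render an entry as a single line you could paste into a PROOF.md file.
function formatProofEntry(entry: ProofEntry): string {
  const status = entry.checks.every((c) => c.passed) ? "PASS" : "FAIL";
  const detail = entry.checks
    .map((c) => `${c.name}:${c.passed ? "ok" : "FAILED"}`)
    .join(" ");
  return `${status} ${entry.commit} ${detail}${entry.notes ? ` (${entry.notes})` : ""}`;
}
```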
Module 5

Cost Control

What you get: You spend less on AI because you stop wasting tokens on the wrong model.

You produce: A cost tracker and a workflow tuned for your actual usage.

What you cover
  • How token pricing works across GPT, Claude, and alternatives
  • Pick the cheapest model that can do the job
  • Send less context: know what to include and what to leave out
  • When to batch requests vs. work interactively
  • Cache outputs you will need again
  • Track cost per feature so you see where the money goes
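Token pricing reduces to simple arithmetic: tokens divided by a million, times the per-million rate, input and output summed. A back-of-envelope sketch; the prices you plug in are placeholders, so check each provider's current pricing page:

```typescript
// Per-million-token prices in USD. The numbers you use here are assumptions;
// real rates vary by model and change over time.
interface ModelPrice {
  inputPerMTok: number;  // USD per million input tokens
  outputPerMTok: number; // USD per million output tokens
}

// Estimated USD cost for a single request.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  price: ModelPrice
): number {
  return (
    (inputTokens / 1_000_000) * price.inputPerMTok +
    (outputTokens / 1_000_000) * price.outputPerMTok
  );
}

// Example: a 20k-token spec prompt with a 4k-token diff response at a
// hypothetical $3 input / $15 output per million tokens:
// estimateCost(20_000, 4_000, { inputPerMTok: 3, outputPerMTok: 15 })
```

Logging this per feature is what makes "cost per feature" visible instead of a monthly surprise.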
Module 6

Shipping to Production

What you get: Your app is live, monitored, and you know how to roll back if something breaks.

You produce: A production deploy with monitoring, DNS, and a rollback plan.

What you cover
  • Final checks: environment variables, secrets, DNS records
  • Deploy to Vercel with the same config you tested locally
  • Run smoke tests after every deploy
  • Set up basic monitoring for errors and uptime
  • Roll back quickly when something goes wrong
  • Keep shipping: small PRs, prove each one, repeat
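A post-deploy smoke run just hits a few routes and reports pass or fail. A minimal sketch of the reporting half; the SmokeResult shape and summarize helper are illustrative:

```typescript
// Result of probing one route on the freshly deployed site.
interface SmokeResult {
  route: string;
  ok: boolean;
}

// Collapse individual probes into a single go/no-go answer,
// listing exactly which routes failed.
function summarize(results: SmokeResult[]): { ok: boolean; failed: string[] } {
  const failed = results.filter((r) => !r.ok).map((r) => r.route);
  return { ok: failed.length === 0, failed };
}

// In practice each result would come from a fetch against the live URL, e.g.:
// const res = await fetch("https://example.com/api/health");
// results.push({ route: "/api/health", ok: res.ok });
```

If the summary fails, that is the signal to roll back before debugging.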

What good looks like

A finished module means a committed deliverable and a clean build. If you can run npm run build and point to the commit, the module is done.
[Diagram] Proof pipeline: Code change → Typecheck → Build → Git proofs → Ship

Build & verify pipeline

Every change passes through this pipeline before it ships.

Module 1 in detail

The Architecture Foundation


What you get: A folder structure you can hand to any AI agent without it breaking things.

You produce: Architecture diagram, folder contract, and your first Continuity Packet.

1. The problem nobody talks about

Without a clear structure, AI-generated code drifts with every prompt. Files multiply, names collide, and each new feature breaks the last one. The issue is not the AI. It is that nobody told it where things go.

2. The idea

Architecture is a contract you write before any code exists. Every file has one job, every folder has a clear boundary. When you make the contract explicit, AI stops guessing and edits stay where you expect them.

3. How it works in practice

Plan first. Define the folder contract, route map, and file-ownership rules in ChatGPT before touching code.

Build next. Hand Claude the spec with explicit file targets. It scaffolds the structure in minimal diffs.

Check last. Run typecheck and build. Confirm every route resolves, every import is valid, and the proof log is clean.

4. File Structure That Prevents Fragility

File structure

app/
  layout.tsx            # Shell: nav, footer, providers
  page.tsx              # Landing page
  dashboard/
    page.tsx            # Auth-gated dashboard
  api/                  # Server-only routes
    stripe/
    admin/
components/             # Shared UI (Nav, Footer, cards)
lib/                    # Utilities, Stripe client, helpers
public/                 # Static assets

Each layer owns one concern. Pages never import from API routes. Components never call server functions directly. This boundary is what keeps AI edits safe.
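That boundary rule can be checked mechanically. A minimal sketch of the check, assuming the folder layout above; the violatesBoundary name and path conventions are illustrative:

```typescript
// The rule from the folder contract: code under app/api/ is server-only,
// so nothing outside app/api/ may import from it.
function violatesBoundary(importerPath: string, importedPath: string): boolean {
  const importerIsApi = importerPath.startsWith("app/api/");
  const importedIsApi = importedPath.startsWith("app/api/");
  // Only API-route code may import other API-route code.
  return importedIsApi && !importerIsApi;
}
```

In a real project the same contract can be enforced in CI with a lint rule such as `import/no-restricted-paths` from eslint-plugin-import, so the check runs on every build rather than in anyone's head.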

5. Common Failure Patterns

  • Letting AI create folders on the fly. You end up with duplicates.
  • No naming convention. Files collide when two agents work at once.
  • Mixing server and client code. Builds break silently at deploy.
  • Skipping the architecture spec. Every prompt starts from zero.

6. Exercise

Write a Continuity Packet for your project. Document the route map, folder contract, and which files belong where. Then use a single Claude prompt to scaffold the structure. Run the build. If it passes, commit and move on.

Module 2 in detail

Working with AI


What you get: Predictable results because you give precise instructions.

You produce: A prompt library you actually reuse, with spec and execution templates.

1. Why most people get bad results

They paste vague requests and hope for the best. The output looks plausible but drifts from what they wanted. Wrong files get edited, scope creeps with every response, and there is no way to tell if the result is correct. The model is not the problem. The input is.

2. The approach

Keep each AI task small: one input, one outcome, one check. Separate planning from building and use different models for each. ChatGPT writes the spec. Claude writes the code. You verify the proof.

3. Plan, build, check

Plan in ChatGPT. Describe what you want, include context and constraints, and define what done looks like. The output is a brief, not code.

Build in Claude. Paste the spec with explicit file targets and a governor block. Claude produces minimal diffs against exactly the files you listed.

Check yourself. Run typecheck, build, and a quick manual test. If something fails, feed the exact error back to Claude with the same governor block. Do not broaden scope.

4. Prompt Skeleton

A reusable template you can paste into any execution session. Fill the blanks, keep the guardrails.

Prompt skeleton

TASK: [one-sentence description]

FILES TO MODIFY:
- [path/to/file-1.tsx]
- [path/to/file-2.ts]

DO NOT TOUCH:
- [list protected files]

REQUIREMENTS:
1. [concrete requirement]
2. [concrete requirement]
3. [concrete requirement]

VERIFICATION:
- npm run build passes
- git show --stat HEAD shows only intended files

OUTPUT: minimal diff, TypeScript clean.
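One way to make the skeleton part of a reusable prompt kit is to generate it from structured fields, so no guardrail is ever forgotten. A sketch under that assumption; buildPrompt and its field names are mine, not part of the course kit:

```typescript
// Structured fields for one execution prompt.
interface PromptSpec {
  task: string;
  filesToModify: string[];
  doNotTouch: string[];
  requirements: string[];
}

// Fill the skeleton so every execution prompt carries the same guardrails.
function buildPrompt(spec: PromptSpec): string {
  return [
    `TASK: ${spec.task}`,
    "",
    "FILES TO MODIFY:",
    ...spec.filesToModify.map((f) => `- ${f}`),
    "",
    "DO NOT TOUCH:",
    ...spec.doNotTouch.map((f) => `- ${f}`),
    "",
    "REQUIREMENTS:",
    ...spec.requirements.map((r, i) => `${i + 1}. ${r}`),
    "",
    "VERIFICATION:",
    "- npm run build passes",
    "- git show --stat HEAD shows only intended files",
    "",
    "OUTPUT: minimal diff, TypeScript clean.",
  ].join("\n");
}
```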

5. Common Failure Patterns

  • No file targets. The AI edits whatever it wants.
  • Planning and coding in the same prompt. Output drifts.
  • No verification step. Regressions compound quietly.
  • Adding requirements after execution starts. The contract breaks.
  • Ignoring hallucinations. The code references things that do not exist.

6. Exercise

Pick one feature from your project. Write a spec in ChatGPT with context, file targets, and what success looks like. Paste it into Claude using the skeleton above. Run typecheck and build. If both pass, commit. If not, feed the exact error back with the same file list until it is clean. That is one full cycle.