Pandai

Curriculum

Twelve weeks. Three phases. Free.

Vibe coding → engineering depth → track specialization. Each module pairs curated external YouTube videos for the fundamentals with a Pandai SHIP project for the practice. Same spine in Pandai Open (free) and the in-person cohort.

How the curriculum is structured

Curated externals. Pandai-built layer on top.

We don’t reinvent fundamentals — we curate the best external explanations and stack the Pandai-specific work on top: SHIP projects, Indonesian context, AI-Native code review, and the operator toolkit you actually use day-to-day.

01

External fundamentals

Each module starts with curated YouTube videos for the underlying concepts — Karpathy, the Anthropic team, others who already explained this stuff well. We don’t reinvent what the field already taught.

02

Pandai layer

SHIP project. Indonesian context. Code review against AI-Native discipline. This is where the Pandai work happens.

03

Operator toolkit

Slides. Timelines. Spreadsheets. Images. Admin docs. All built with AI as you go — because engineers ship more than code.

Pre-week (W0)

M0 — Set up your engineer brain

Async qualifier before Week 1. Python or TypeScript fluency. Git basics. Docker basics. API basics. Cursor + Claude. If you can’t clear M0, the rest of the curriculum will be painful; we’d rather you spend a few extra weeks here than skip it.

Three phases

AI Vibe Coding → Engineering Depth → Track Specialization.

The 12 weeks split into three phases. Each phase is a coherent block; you can dip in mid-program if you have prior context, but the intended path is straight through.

01 · Weeks 1–4

AI Vibe Coding

Ship something useful with Claude. By Friday of Week 4 you’ve built an agent that does real work — not a toy. The on-ramp for every track.

  1. Week 1 · M1

    Claude Code as your dev environment

    SHIP: A real-annoyance CLI built and shipped from a fresh terminal. Roommate can install it.

  2. Week 2 · M2

    LLM fundamentals — RLM first

    SHIP: Token + cost calculator across Claude / Gemini / OpenAI for an Indonesian-language workload. Long-context vs chunked variants compared.

  3. Week 3 · M3

    Agents and MCP

    SHIP: A 3-tool agent built end-to-end from off-the-shelf MCPs. (Writing your own MCP comes in W4.)

  4. Week 4 · M4

    Practical agent shipping

    SHIP: Pick one and ship it: video (Remotion + Claude), slides (Sheets/Slides MCP), Excel reconciliation, Indonesian forms (KTP / NPWP / tax), or a website-registration agent.
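To make the M2 SHIP concrete, here is a minimal sketch of a token + cost calculator. All prices, the characters-per-token ratio, and the workload are placeholder assumptions for illustration, not current vendor pricing; a real version would use each provider's tokenizer and published rates.

```python
# Hypothetical sketch of the M2 SHIP: token + cost calculator.
# Prices and the chars-per-token ratio are assumed placeholder values.

PRICE_PER_MTOK = {            # (input, output) USD per million tokens -- assumed
    "claude": (3.00, 15.00),
    "gemini": (1.25, 10.00),
    "openai": (2.50, 10.00),
}

# Indonesian text tokenizes differently than English; this ratio is a stand-in
# you would replace with a measurement from each vendor's real tokenizer.
CHARS_PER_TOKEN_ID = 3.5

def estimate_tokens(text: str) -> int:
    """Rough token estimate from character count (swap in a real tokenizer)."""
    return max(1, round(len(text) / CHARS_PER_TOKEN_ID))

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    in_price, out_price = PRICE_PER_MTOK[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

doc = "Laporan keuangan bulanan " * 2000   # mock long Indonesian-language workload
tokens = estimate_tokens(doc)

# Long-context variant: one call over the whole document.
long_context = cost_usd("claude", tokens, 1_000)

# Chunked variant: ten calls, each re-sending a shared prompt prefix + one chunk.
prefix_tokens = 500
chunked = 10 * cost_usd("claude", prefix_tokens + tokens // 10, 100)

print(f"{tokens} tokens: long-context ${long_context:.4f} vs chunked ${chunked:.4f}")
```

The interesting comparison is exactly the one the module asks for: chunking re-pays the prompt prefix and per-call output on every chunk, so it is not automatically cheaper than one long-context call.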

02 · Weeks 5–7

Engineering Depth

From toy to product. From solo coder to AI orchestrator. Auth, schema, eval harness, observability, deploy — with the Bella architecture as the through-line. Then five days of track exposure before you pick.

  1. Week 5 · M5

    Bella case study

    SHIP: A re-implemented Bella-style auth + scheduler slice with your own eval harness and one observability dashboard.

  2. Week 6 · M6

    Become the CEO of your codebase

    SHIP: A feature shipped end-to-end using the Think → Plan → Build → Review → Ship → Reflect loop, with Claude Code subagents playing roles you used to do alone. Observability dashboard included.

  3. Week 7 · M7

    Pick your track

    SHIP: A 1-page track-rank memo. Pick your track Friday based on five 1-day deep dives.
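The "eval harness" that anchors W5 (and returns in W8) can be sketched in a few lines. Everything here is illustrative: `run_agent` is a stand-in for whatever slice you built, and the cases and checks are hypothetical, not Pandai's.

```python
# Minimal eval-harness sketch. `run_agent`, the cases, and the checks are
# placeholders -- substitute your real agent call and your real pass criteria.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    prompt: str
    check: Callable[[str], bool]   # deterministic pass/fail on the agent's output

def run_agent(prompt: str) -> str:
    # Placeholder: call your real agent / API here.
    return "slot booked for 2025-01-10 10:00"

def run_suite(cases: list[EvalCase]) -> dict:
    results = {c.name: c.check(run_agent(c.prompt)) for c in cases}
    return {"passed": sum(results.values()), "total": len(cases), "results": results}

cases = [
    EvalCase("books_a_slot", "Book me the earliest slot", lambda out: "booked" in out),
    EvalCase("includes_time", "Book me the earliest slot", lambda out: ":" in out),
]

report = run_suite(cases)
print(f"{report['passed']}/{report['total']} passed")
```

The design point is the deterministic `check` function: a harness you can run after every change is what turns "it seems to work" into a number you can watch on a dashboard.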

03 · Weeks 8–12

Track Specialization

Pick your specialty. Five tracks — AI Engineer, Software, Data, Quality, Tech Ops. The AI Engineer track ends in verifiable closed-loop systems (autoresearch), not fine-tuning. Every track ships a capstone.

  1. Week 8 · M8

    Track depth begins — evals before everything

    SHIP: AI Engineer: an eval harness over your W4 agent, with metrics that matter for your use case. Other tracks: first deep-dive project in your specialty.

  2. Week 9 · M9

    Build your first closed-loop system

    SHIP: AI Engineer: an LLM-wiki — a system that researches questions, grades its own answers via your ontology, and updates what it knows. Cohort: Gate 2 placements begin for the next 40%.

  3. Week 10 · M10

    RAG, finally — for what RLM can’t carry

    SHIP: AI Engineer: a RAG slice (chunking, pruning, GraphRAG over your ontology) motivated by a cost or freshness ceiling you measured. Others: integration + observability project on your prior SHIP.

  4. Week 11 · M11

    Self-improving agents in production

SHIP: Real-client capstone build. AI Engineer: a verifiable closed-loop — the agent runs, grades its own output, and is measurably better Friday than Monday. MLOps essentials wired in: serving, prompt versioning, token analytics, human-in-the-loop.

  5. Week 12 · M12

    Demo Day · Gate 3

    SHIP: Capstone demoed live to the cohort. AI Engineer: self-improving autoresearch — the system gets better while the audience watches. Plus a final pass on go-to-market thinking. Top performers join Metatech as FDEs.
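The closed-loop shape behind the W9–W12 AI Engineer SHIPs (attempt → grade → research → update) can be sketched abstractly. Everything here is a hypothetical placeholder: `ask_llm`, `grade`, and `research` stand in for a real model call, a verifier grounded in your ontology, and a real retrieval step.

```python
# Hypothetical closed-loop sketch: attempt -> grade -> research -> update.
# All three helpers are placeholders, not a real LLM/verifier/retrieval stack.

knowledge: dict[str, str] = {}   # the "wiki": what the system has learned so far

def ask_llm(question: str, context: dict[str, str]) -> str:
    # Placeholder for a real model call that uses the accumulated knowledge.
    return context.get(question, "unknown")

def grade(question: str, answer: str) -> float:
    # Placeholder verifier. A real one checks the answer against your ontology
    # or an executable test -- never the model's own opinion of itself.
    return 0.0 if answer == "unknown" else 1.0

def research(question: str) -> str:
    # Placeholder retrieval step (web search, docs, tools).
    return f"researched answer for: {question}"

def closed_loop(question: str) -> tuple[float, float]:
    """Returns (score before learning, score after learning)."""
    before = grade(question, ask_llm(question, knowledge))
    after = before
    if before < 1.0:                      # failed: research and update the wiki
        knowledge[question] = research(question)
        after = grade(question, ask_llm(question, knowledge))
    return before, after

first = closed_loop("What is MCP?")    # cold start: fails, researches, learns
second = closed_loop("What is MCP?")   # warm: answered from the updated wiki
```

The "measurably better Friday than Monday" claim falls out of this structure: because the grade comes from an external verifier and failures write back into the knowledge store, repeated runs over the same question set produce a score curve you can show improving.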

Week by week

The full schedule.

One module per week. SHIP project at the end of each. Public GitHub repo for everything you build.

  1. W0 (pre-week) · M0

    Set up your engineer brain

    SHIP: Python/TS, Git, Docker, API basics, Claude Code installed. AI woven into daily life.

  2. W1 · M1

    Claude Code as your dev environment

    SHIP: Real-annoyance CLI (using AI → bending AI)

  3. W2 · M2

    LLM fundamentals — RLM first

    SHIP: Token + cost calculator (RLM vs chunked)

  4. W3 · M3

    Agents and MCP

    SHIP: 3-tool agent from off-the-shelf MCPs (automation → agentic-AI arc)

  5. W4 · M4

    Practical agent shipping

    SHIP: Video / slides / Excel / Indonesian forms / website registration

  6. W5 · M5

    Bella case study

    SHIP: Auth + scheduler slice + eval harness + 100/40 memory rule applied

  7. W6 · M6

    Become the CEO of your codebase

    SHIP: Feature shipped via Think → Plan → Build → Review → Ship → Reflect loop

  8. W7 · M7

    Pick your track

    SHIP: Track-rank memo after five deep dives

  9. W8 · M8

    Track depth — evals before everything

    SHIP: AI Eng: eval harness over W4 agent. Others: first deep-dive project

  10. W9 · M9

    Build your first closed-loop system

    SHIP: AI Eng: LLM-wiki (self-grading researcher). Cohort: Gate 2 placements

  11. W10 · M10

    RAG, finally — for what RLM can’t carry

    SHIP: AI Eng: RAG slice + GraphRAG over your ontology. Others: integration project

  12. W11 · M11

    Self-improving agents in production

    SHIP: Real-client build. AI Eng: closed-loop, measurably better by Friday

  13. W12 · M12

    Demo Day · Gate 3

    SHIP: Capstone demoed live. AI Eng: self-improving autoresearch + GTM thinking

Two ways through

Take it free, async. Or take it in-person, with placement.

Same modules either way. The free version is yours to take at your pace. The cohort version adds the operational simulator (The Yard), the real-client capstone, and the placement pipeline.