Pandai · 02 / 12

Under the hood. What's behind the prompt.

You used AI for a week. Now look at what's actually on the other side of the prompt.

The framing · ~50 sec

The deep dive · ~30 min

Now go in.

The full lecture from Listiarso Wastuargo. What an LLM, an agent, and agentic AI actually are — with the working definitions you'll use for the rest of the program.

The vocabulary

Words you'll use the rest of the program.

  • LLM · the model itself — a token predictor
  • RLM · an LLM + a reasoning loop around it
  • agent · an RLM with tools
  • agentic AI · agents that run autonomously
  • RAG · retrieval-augmented generation
  • MCP · a protocol for giving agents new tools

The contrarian truth

RLM > RAG. Until it isn't.

Long context windows beat retrieval. A 200,000-token window holds roughly a 500-page book. For most use cases: just paste it in. We'll come back to RAG later — only when long context can't carry it anymore.
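A quick sanity check before reaching for RAG: estimate whether your document even exceeds the window. A minimal sketch, assuming the common ~4-characters-per-token heuristic for English text (Indonesian may tokenize less efficiently, so treat it as a rough lower bound); the 200,000 window size is illustrative:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Heuristic token count: character length / average chars per token."""
    return int(len(text) / chars_per_token)

def fits_in_context(text: str, window: int = 200_000) -> bool:
    """True if the estimated token count fits inside the context window."""
    return estimate_tokens(text) <= window

book = "kata " * 100_000           # ~500,000 characters of dummy text
print(estimate_tokens(book))       # ~125,000 tokens
print(fits_in_context(book))       # fits — just paste it in
```

For a real workload, swap the heuristic for the provider's own tokenizer or token-counting endpoint; the heuristic is only for back-of-envelope decisions.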

Your turn

Build a token + cost calculator.

Compare Claude, Gemini, and OpenAI for an Indonesian-language workload. Build a long-context variant and a chunked (RAG-style) variant — measure the cost gap between them.

(When app.pandai.academy ships, this section becomes a SHIP-submission form: GitHub URL, Loom URL, notes. Instructor reviews and leaves feedback.)