Agents / Standard term
Review loop
A structured cycle where an agent runs, a person inspects the output, corrections feed back into the agent's instructions, and the next run improves.
Every agent produces imperfect output, especially early on. A review loop makes that imperfection useful by capturing what went wrong, why, and what instruction change would prevent it next time. For example, after a writing agent drafts a report, the reviewer notes that it buried the recommendation in paragraph four. That correction becomes a permanent instruction: lead with the recommendation. Over successive cycles, the agent's instructions accumulate the reviewer's judgment and the error rate drops. The loop works only when corrections are specific enough to become rules and durable enough to persist across sessions.
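The cycle above can be sketched in a few lines. This is a minimal illustration, not a real implementation: `run_agent` is a hypothetical stand-in for an LLM call, and the reviewer's corrections are supplied as plain strings rather than gathered interactively.

```python
def run_agent(instructions: str, task: str) -> str:
    # Hypothetical agent: in a real system this would call a model,
    # passing the accumulated instructions as part of the prompt.
    n_rules = len(instructions.splitlines())
    return f"[draft of {task!r} written under {n_rules} standing rules]"

def review_loop(task: str, corrections: list[str], instructions: str = "") -> str:
    """One pass per correction: run the agent, review, fold the
    reviewer's fix into the instructions so the next run improves."""
    for correction in corrections:
        output = run_agent(instructions, task)  # reviewer inspects this draft
        # A specific, durable correction becomes a permanent rule.
        instructions += correction + "\n"
    return instructions

rules = review_loop(
    "quarterly report",
    ["Lead with the recommendation.", "Cite the data source for every figure."],
)
print(rules)
```

The key design point is that the loop's output is the instruction file itself: each cycle leaves the agent's standing instructions one rule richer, which is what makes the improvement persist across sessions.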
Builder example
Teams that skip the review loop get stuck in a pattern: the agent makes the same mistakes, the person fixes them manually every time, and the automation never improves. The review loop is the mechanism that converts human judgment into lasting agent capability. Without it, you are paying for AI-generated first drafts and doing the real work yourself.
Common confusion: A review loop is more structured than ad-hoc quality checks. It includes a defined review cadence, a format for capturing corrections, and a process for updating the agent's instructions or context files. Glancing at the output and thinking "looks fine" is not a review loop.
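One way to make the "format for capturing corrections" concrete is a structured record with the three fields named earlier: what went wrong, why, and the instruction change. The `Correction` type and `apply_corrections` helper below are illustrative names, not part of any framework.

```python
from dataclasses import dataclass

@dataclass
class Correction:
    what_went_wrong: str  # the observed defect in the agent's output
    why: str              # the reviewer's reasoning, kept for context
    new_rule: str         # the durable instruction that prevents a repeat

def apply_corrections(
    instructions: list[str], corrections: list[Correction]
) -> list[str]:
    # Append only rule-shaped corrections; skip duplicates so the
    # instruction file does not bloat across review cycles.
    for c in corrections:
        if c.new_rule not in instructions:
            instructions.append(c.new_rule)
    return instructions

rules = apply_corrections(
    ["Lead with the recommendation."],
    [
        Correction(
            what_went_wrong="Recommendation buried in paragraph four",
            why="Readers skim; the decision must come first",
            new_rule="Lead with the recommendation.",
        ),
    ],
)
```

Because the rule already exists, nothing is re-appended here; the dedupe check is what keeps repeated reviews from turning the instruction file into noise.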