Vibe work / Standard term
Human-in-the-loop
A system design where a person must approve, edit, or reject an AI action before it takes effect. The human checkpoint is a hard gate: the action does not proceed without explicit sign-off.
HITL places a mandatory human review at a decision point in an AI workflow. An AI might draft a customer email, but the system blocks sending until a person reads and approves it. A coding agent might propose a database migration, but the change cannot execute until a developer confirms. The design pattern comes from older human factors and autonomous systems research, where safety-critical processes (aviation, nuclear operations, surgical robotics) required human confirmation at high-stakes steps. AI adoption brings the same principle to everyday software and business workflows.
Builder example
Builders should add HITL checkpoints wherever a wrong output can damage trust, privacy, money, safety, or reputation. The design cost is real: each checkpoint adds latency and requires a clear review interface showing the reviewer what the AI proposes and why. Placing checkpoints at the right steps, neither too few nor too many, is one of the most consequential design decisions in any AI product.
An agent drafts a reply to a client using meeting notes and CRM history.
Let the agent draft and attach evidence. Require the human to approve before sending.
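The example above can be sketched as code. This is a minimal illustration, not a production design: the names (`ApprovalGate`, `PendingAction`, `propose`, `approve`, `execute`) are hypothetical, and a real system would persist pending actions and record who approved what. The key property is that `execute` refuses to run anything a human has not explicitly approved.

```python
from dataclasses import dataclass, field

@dataclass
class PendingAction:
    """An AI-proposed action held behind the human checkpoint."""
    description: str          # what the agent wants to do, shown to the reviewer
    payload: dict             # the drafted content plus supporting evidence
    approved: bool = False    # flipped only by an explicit human sign-off

class ApprovalGate:
    """Hard gate: no action executes without explicit human approval."""

    def __init__(self):
        self.pending = []

    def propose(self, description, payload):
        """Agent submits a draft; it waits in the queue for review."""
        action = PendingAction(description, payload)
        self.pending.append(action)
        return action

    def approve(self, action):
        """Human reviewer signs off after reading the draft and evidence."""
        action.approved = True

    def execute(self, action, send_fn):
        """Blocks unapproved actions instead of silently letting them through."""
        if not action.approved:
            raise PermissionError("Blocked: action requires human approval")
        return send_fn(action.payload)

# Usage: the agent drafts, the send is blocked until a person approves.
gate = ApprovalGate()
draft = gate.propose(
    "Reply to client re: Q3 renewal",
    {"to": "client@example.com", "body": "Drafted reply...",
     "evidence": ["meeting notes 2024-06-12", "CRM history"]},
)
# gate.execute(draft, send_email) here would raise PermissionError.
gate.approve(draft)
# Only now can the email actually go out.
```

The design choice that makes this human-in-the-loop rather than post-hoc monitoring is that the block is enforced in `execute`, not left to convention: there is no code path from draft to send that skips the approval flag.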
Common confusion: HITL means the action is blocked until a human approves. A vague promise that "someone can review outputs later" is not human-in-the-loop; it is post-hoc monitoring, which catches problems after they have already taken effect.