Agents / Standard term
Human-on-the-loop
An oversight pattern where the AI agent acts on its own while a person monitors its work and can intervene when something goes wrong.
Where human-in-the-loop means every action waits for explicit approval, human-on-the-loop gives the agent freedom to act while a person watches from the side, reviewing logs, alerts, or summaries and stepping in only when needed. An agent that processes routine expense reports might run on its own all day while a finance manager reviews a daily summary and flags anything unusual. The person is a supervisor who can intervene, not a gatekeeper who approves every step.
Builder example
This pattern captures the speed benefit of automation for lower-risk tasks where requiring approval at every step would eliminate the time savings. An email-drafting agent that queues messages for batch review once a day is far more useful than one that interrupts you for every message. The key design decision: what triggers a human alert? Set thresholds on cost, sensitivity, confidence, or novelty so the supervisor sees what matters.
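A minimal sketch of that alert-threshold decision. All names here (`Action`, `OnTheLoopSupervisor`, the cost and confidence fields, the `execute` stand-in) are hypothetical, not from any particular framework; the point is that every action runs immediately, and thresholds only decide what lands in the supervisor's review feed:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """One agent action about to execute (illustrative shape)."""
    description: str
    cost: float          # dollars the action will spend
    confidence: float    # agent's self-reported confidence, 0.0-1.0

def execute(action: Action) -> str:
    # Stand-in for the real side effect (filing a report, sending a draft).
    return f"done: {action.description}"

@dataclass
class OnTheLoopSupervisor:
    """Lets every action run, but flags threshold breaches for human review."""
    max_cost: float = 50.0
    min_confidence: float = 0.8
    review_queue: list = field(default_factory=list)

    def dispatch(self, action: Action) -> str:
        # The action executes either way -- that is the point of
        # on-the-loop oversight -- but unusual ones are queued for review.
        if action.cost > self.max_cost or action.confidence < self.min_confidence:
            self.review_queue.append(action)
        return execute(action)

sup = OnTheLoopSupervisor()
sup.dispatch(Action("reimburse team lunch", cost=42.0, confidence=0.95))
sup.dispatch(Action("reimburse conference travel", cost=1800.0, confidence=0.90))
print([a.description for a in sup.review_queue])
```

Only the travel expense crosses the cost threshold, so it is the one item waiting in the daily review feed; the routine lunch reimbursement ran without ever bothering the supervisor.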
Common confusion: Monitoring and approval are different levels of oversight. If a mistake would be expensive or irreversible (sending money, deleting data, contacting a customer), human-in-the-loop approval is safer. Reserve on-the-loop for actions you can review after the fact and undo if needed.