Reasoning / Standard term
Verifier
Any check that tells you whether an AI output is acceptable: a unit test, a schema validator, a scoring rubric, or a second model that reviews the first one's work. Verifiers make AI automation safe enough to trust.
When a model generates code, a verifier might be a test suite that runs the code and confirms it produces correct output. When a model writes a medical summary, a verifier might be a checklist confirming every claim maps back to a source document. Verifiers range from deterministic (does this JSON match the schema?) to probabilistic (does a second model rate this response as accurate?). The strongest AI pipelines pair a fast generator with a reliable verifier, because catching errors after generation is almost always cheaper than preventing them during generation.
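The deterministic end of that range can be sketched in a few lines. This is a minimal illustration, not a production validator; the field names and the simple name-to-type schema format are assumptions made for the example (a real pipeline might use a full JSON Schema validator instead):

```python
import json

def verify_json(raw: str, required: dict) -> list[str]:
    """Deterministic verifier: check that a model's raw output is valid JSON
    containing every required field with the expected type.

    `required` maps field name -> expected Python type (a simplified schema
    format assumed for this sketch). Returns a list of errors; empty = pass.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    errors = []
    for key, expected_type in required.items():
        if key not in data:
            errors.append(f"missing field: {key}")
        elif not isinstance(data[key], expected_type):
            errors.append(f"wrong type for {key}: expected {expected_type.__name__}")
    return errors

# Hypothetical structured-extraction output being checked:
schema = {"name": str, "age": int}
print(verify_json('{"name": "Ada", "age": 36}', schema))  # []
print(verify_json('{"name": "Ada"}', schema))             # ['missing field: age']
```

Because the check is deterministic, a pass or fail means the same thing every time it runs, which is exactly what makes this end of the range cheap to trust.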
Builder example
Building better verifiers is the highest-impact investment in any AI pipeline. A mediocre model paired with a strong verifier (unit tests, schema checks, citation lookups) consistently outperforms a powerful model with no verification. Before upgrading your model or adding complexity, ask whether you can build a cheap, reliable check for the output you care about.
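One way to sketch that generator-plus-verifier pairing is a retry loop in which a cheap check gates every candidate output. Everything here is illustrative: the `summary` field, the stand-in generator, and the retry budget are assumptions for the example, not a prescribed design:

```python
import json

def verify(raw: str) -> bool:
    """Cheap, reliable check: output must be valid JSON with a 'summary' key.
    (The field name is a hypothetical requirement for this sketch.)"""
    try:
        return "summary" in json.loads(raw)
    except json.JSONDecodeError:
        return False

def generate_with_verification(generate, max_attempts: int = 3):
    """Sample from the generator until the verifier accepts, up to a budget.
    Returns the first accepted output, or None to signal escalation
    (e.g. to a human reviewer or a stronger model)."""
    for _ in range(max_attempts):
        candidate = generate()
        if verify(candidate):
            return candidate
    return None

# Stand-in generator that fails once, then succeeds (hypothetical):
outputs = iter(["not json", '{"summary": "ok"}'])
result = generate_with_verification(lambda: next(outputs))
print(result)  # {"summary": "ok"}
```

The design point is that the generator can stay mediocre and cheap: the verifier, not the model, is what guarantees that whatever leaves the loop is acceptable.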
Common confusion: Some tasks are easy to generate and hard to verify (creative writing, open-ended strategy), while others are hard to generate and easy to verify (code, math, data extraction). AI risk is highest where verification is hardest, because you cannot reliably catch failures.