Reasoning / Research term
Reflection / self-critique
Asking a model to review its own output, find problems, and produce a revised version. Like giving someone a chance to proofread their own work before submitting it.
After the model generates an initial answer, you prompt it to critique that answer: what is wrong, what is missing, what could be clearer? Then you ask it to produce a corrected version. If a model drafts a legal summary, you might follow up with "Check this summary against the original document and flag any claims that are unsupported." The technique works best when the critique prompt asks different questions than the generation prompt, forcing the model to examine its work from a new angle.
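The loop described above can be sketched as three calls: generate, critique with different questions, then revise. This is a minimal sketch, not a real API; `call_model` stands in for any LLM completion function, and the stub below exists only so the loop runs end to end.

```python
# Minimal reflection / self-critique loop.
# `call_model` is a placeholder for any LLM completion function
# (hypothetical -- swap in your actual client).
def reflect(call_model, task_prompt):
    draft = call_model(task_prompt)
    # Critique with *different* questions than the generation prompt,
    # forcing the model to examine its work from a new angle.
    critique = call_model(
        "Review the answer below. What is wrong, what is missing, "
        "and what could be clearer?\n\nAnswer:\n" + draft
    )
    revised = call_model(
        "Revise the answer to address the critique.\n\n"
        "Answer:\n" + draft + "\n\nCritique:\n" + critique
    )
    return draft, critique, revised

# Stub model so the example runs without an API; replace with a real call.
def stub_model(prompt):
    if prompt.startswith("Review"):
        return "The summary omits the contract's effective date."
    if prompt.startswith("Revise"):
        return "Revised summary, now including the effective date."
    return "Draft summary of the contract."

draft, critique, revised = reflect(stub_model, "Summarize the contract.")
```

In practice the critique prompt is where the leverage is: asking it to check claims against the source document, as in the legal-summary example, catches more than a generic "find problems" request.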
Builder example
Self-critique is a cheap first-pass quality check, but it has a ceiling. A model that made an error often has the same blind spot when reviewing that error. For a product recommendation engine, self-critique might catch formatting issues and obvious contradictions, but it will rarely catch subtle factual errors or biased reasoning. Treat it as one layer in a review stack.
Common confusion: Models can confidently defend their own mistakes. Self-critique is biased toward agreement with the original output, especially when the model is uncertain, which is exactly when you need critique most.