Glossary definition


Grounding

Tying a model's answer to specific, checkable sources (retrieved documents, database records, or tool outputs) so the reader can verify each claim.

Without grounding, an AI answer is a confident guess drawn from training patterns. A grounded answer traces every claim to a specific source: "Your return window is 30 days, per the policy document dated March 2026." This transforms the experience from "the AI says so" into "the AI says so because this source says so," giving the reader something concrete to verify. Grounding matters most when the answer involves facts that change over time: prices, policies, or regulations.
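One way to picture this: a grounded answer is a list of claims, each carrying a pointer to its evidence. Below is a minimal sketch in Python; the field names (`document_id`, `section`, `quote`) are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class SourceRef:
    """A checkable pointer to where a claim came from."""
    document_id: str   # e.g. "returns-policy-2026-03"
    section: str       # which part of the document supports the claim
    quote: str         # the exact passage, so a reader can verify it

@dataclass
class GroundedClaim:
    text: str          # the claim as stated in the answer
    source: SourceRef  # the evidence behind it

answer = [
    GroundedClaim(
        text="Your return window is 30 days.",
        source=SourceRef(
            document_id="returns-policy-2026-03",
            section="2.1 Return windows",
            quote="Items may be returned within 30 days of delivery.",
        ),
    ),
]

# Render each claim with its citation attached.
for claim in answer:
    print(f"{claim.text} (per {claim.source.document_id}, section {claim.source.section})")
```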

Builder example

If your AI assistant answers HR questions, grounding means each answer cites the specific policy document it drew from. Without this, employees cannot tell whether the assistant is quoting real policy or generating something plausible. When a grounded answer turns out to be wrong, you can trace the error to a specific source and fix it. When an ungrounded answer is wrong, you are debugging a black box.

Failure mode: You ask about your refund policy. The model gives a plausible-sounding answer based on what refund policies usually look like, not your actual policy.

The fix: Retrieve the actual policy document, cite the specific section, and make it easy for the user to verify by linking to the source.
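A minimal sketch of that pipeline, assuming hypothetical `retrieve` (your search index) and `generate` (your LLM client) callables; neither is a real library API, so swap in whatever you actually use.

```python
def answer_with_grounding(question, retrieve, generate):
    """Answer from retrieved policy text, with numbered citations.

    `retrieve` and `generate` are hypothetical stand-ins for your
    search index and model client.
    """
    # Fetch candidate passages, e.g. [{"id": ..., "url": ..., "text": ...}]
    passages = retrieve(question, top_k=3)

    # Number each passage so the model can cite it inline.
    context = "\n\n".join(
        f"[{i}] ({p['id']}) {p['text']}" for i, p in enumerate(passages, 1)
    )
    prompt = (
        "Answer using ONLY the numbered passages below. "
        "Cite the passage number after each claim, e.g. [2]. "
        "If the passages do not answer the question, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )

    # Return the sources alongside the answer so the UI can link to them.
    return {
        "answer": generate(prompt),
        "sources": [{"id": p["id"], "url": p["url"]} for p in passages],
    }
```

The instruction to admit when the passages do not contain the answer is doing real work here: without it, the model falls back to exactly the plausible-sounding guess described above.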

Common confusion: Retrieving documents and grounding answers are two separate steps. A system can retrieve the right document and still generate an answer that ignores or contradicts it. Grounding requires verifying that the output actually reflects the retrieved evidence.
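A sketch of that verification step, reusing the `GroundedClaim` shape from the earlier example. The `entails` judge is a hypothetical stand-in for an NLI model or an LLM-based check.

```python
def verify_grounding(claims, retrieved_docs, entails):
    """Flag claims whose cited evidence does not actually support them.

    claims:         list of GroundedClaim (see earlier sketch)
    retrieved_docs: dict mapping document_id -> full document text
    entails:        hypothetical callable (evidence, claim_text) -> bool
    """
    problems = []
    for claim in claims:
        doc = retrieved_docs.get(claim.source.document_id)
        if doc is None:
            problems.append((claim.text, "cites a document that was never retrieved"))
        elif claim.source.quote not in doc:
            problems.append((claim.text, "quoted passage not found in the source"))
        elif not entails(claim.source.quote, claim.text):
            problems.append((claim.text, "evidence does not support the claim"))
    return problems  # an empty list means every claim checked out
```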