
Reasoning / Standard term

Chain-of-thought (CoT)

Getting a model to work through intermediate steps before giving its final answer, the same way you might show your work on a math problem. Consistently improves accuracy on multi-step tasks.

Ask a model to solve "what is 17 times 24" directly and it might guess wrong. Prompt it to multiply step by step and it performs far more reliably. Chain-of-thought (CoT) works because each intermediate step becomes context for the next one, reducing the chance the model loses track of where it is in a complex problem. Modern reasoning models like Claude with extended thinking and OpenAI's o-series do this automatically, sometimes showing a summary of their work and sometimes hiding it entirely.
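The difference is easiest to see in the prompt itself. Below is a minimal sketch of wrapping a task in a chain-of-thought prompt versus asking for the answer directly; the exact phrasing ("step by step", the `Answer:` marker) is illustrative, not a fixed convention.

```python
def direct_prompt(question: str) -> str:
    """Ask for the answer with no intermediate reasoning."""
    return f"{question}\nAnswer with just the result."

def cot_prompt(question: str) -> str:
    """Ask the model to show intermediate steps before the final answer."""
    return (
        f"{question}\n"
        "Work through the problem step by step, then state the final "
        "answer on its own line, prefixed with 'Answer:'."
    )

print(cot_prompt("What is 17 times 24?"))
```

Either string would then be sent to the model; with a reasoning model that chains its own steps automatically, the extra instruction is often unnecessary.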

Builder example

For straightforward tasks like classification or simple extraction, CoT adds latency and cost with little benefit. For multi-step math, logic, planning, or code generation, it can be the difference between a usable system and a broken one. Know which category your task falls into before deciding whether to request step-by-step reasoning.
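That decision can be made explicit in code. The sketch below routes tasks to step-by-step reasoning only when their category benefits from it; the category names and the default are assumptions for illustration, not a standard taxonomy.

```python
# Task types that typically benefit from chain-of-thought (assumed labels).
COT_TASKS = {"math", "logic", "planning", "code_generation"}
# Task types where CoT mostly adds latency and cost (assumed labels).
FAST_TASKS = {"classification", "extraction"}

def wants_cot(task_type: str) -> bool:
    """Decide whether to request step-by-step reasoning for a task."""
    if task_type in COT_TASKS:
        return True
    if task_type in FAST_TASKS:
        return False
    # Unknown task types default to reasoning, trading cost for reliability.
    return True
```

In practice the mapping would come from measuring your own tasks, not a hardcoded set.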

You ask for a customer onboarding guide and the model jumps straight into prose. Instead, have it outline the customer's starting point, blocker, example, and success check before drafting.
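An outline-first request like this is just structured chain-of-thought. A minimal sketch of the prompt, with the four outline fields taken from the example above and the wording otherwise assumed:

```python
# The fields the model should outline before drafting (from the example).
OUTLINE_FIELDS = ["starting point", "blocker", "example", "success check"]

def outline_first_prompt(task: str) -> str:
    """Build a prompt that forces an outline before the draft."""
    fields = "\n".join(f"- {field}" for field in OUTLINE_FIELDS)
    return (
        f"Task: {task}\n"
        "Before drafting, outline the customer's:\n"
        f"{fields}\n"
        "Then write the full guide following that outline."
    )

print(outline_first_prompt("Write a customer onboarding guide."))
```

The outline plays the same role as intermediate steps in a math problem: each section of the draft now has explicit context to build on.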

Common confusion: The steps a model shows you may look logical yet still contain errors or post-hoc rationalization. Visible reasoning helps with debugging, but it does not prove the answer is correct.