Glossary definition

Reasoning / Research term

Self-consistency

Run the same question through the model several times, then pick the answer that comes up most often. A majority-vote approach to improving reliability on tasks with clear correct answers.

Picture asking five different people to independently solve the same math problem. If four get 42 and one gets 37, you can be fairly confident the answer is 42. Self-consistency applies this logic to AI: sample multiple reasoning paths from the model (using temperature or other randomness), and take the answer that appears most frequently. Correct reasoning tends to converge on the same answer, while errors scatter in different directions.
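The sample-and-vote loop can be sketched in a few lines. This is a minimal illustration, not a production implementation: `noisy_solver` is a hypothetical stand-in for a temperature > 0 model call that reasons correctly most of the time and scatters when it errs.

```python
import random
from collections import Counter

def self_consistency(sample_fn, n_samples=5):
    """Sample several independent answers and return the majority vote
    along with the fraction of samples that agreed with it."""
    answers = [sample_fn() for _ in range(n_samples)]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / n_samples

# Hypothetical stand-in for a stochastic model call: correct (42)
# 80% of the time, otherwise a scattered wrong answer.
def noisy_solver():
    return 42 if random.random() < 0.8 else random.choice([37, 41, 44])

random.seed(0)
answer, agreement = self_consistency(noisy_solver, n_samples=5)
```

In practice `sample_fn` would wrap an API call and an answer-extraction step (e.g. parsing the final number out of a chain-of-thought response), since votes are counted over final answers, not over the reasoning text.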

Builder example

Self-consistency is one of the simplest ways to boost accuracy on math, logic, and classification tasks with a definite right answer. If your model answers correctly 75% of the time on a single try, running five independent samples and majority-voting pushes expected accuracy to roughly 90%. The cost scales linearly (five samples cost five times as much), so it works best for high-value decisions where the extra spend is justified.
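The accuracy claim above follows from the binomial distribution, under the simplifying assumption that samples err independently and a strict majority is required:

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that the correct answer wins a strict majority of n
    independent samples, each correct with probability p. Use odd n so
    ties are impossible."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

acc = majority_vote_accuracy(0.75, 5)  # ≈ 0.896
```

This is a conservative estimate: if wrong answers scatter rather than repeat, the correct answer can win by plurality with fewer than a strict majority of votes, so real gains can be slightly larger. Conversely, correlated errors (see "Common confusion" below) shrink the gain.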

Suppose the model estimates a customer's usage-based bill three different ways and two of the three answers agree.

Treat that agreement as a confidence signal, then verify the arithmetic or show the customer the check.
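One way to wire that up is a simple agreement threshold: accept the majority answer when enough samples concur, and route low-agreement cases to verification. The threshold value and the routing labels here are illustrative choices, not part of the technique itself.

```python
from collections import Counter

def vote_with_confidence(estimates, threshold=0.6):
    """Majority-vote over repeated estimates; flag the result for
    verification when agreement falls below the threshold."""
    top, votes = Counter(estimates).most_common(1)[0]
    agreement = votes / len(estimates)
    return top, "accept" if agreement >= threshold else "verify"

# Two of three bill estimates agree -> agreement ≈ 0.67, accepted.
result = vote_with_confidence([128.40, 128.40, 131.10])
```

For dollar amounts you would typically round or bucket the estimates before counting votes, since near-identical floats (128.40 vs. 128.41) would otherwise split the vote.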

Common confusion: Majority voting fails when the model has a systematic bias. If all five samples share the same flawed assumption, they converge on the same wrong answer with high confidence.