Failures / Research term
Capability elicitation
Finding the right prompt, tool setup, or configuration to unlock abilities a model already has but does not show by default.
A model may be fully capable of a task yet fail completely under a naive prompt. Elicitation is the systematic process of discovering what works: providing worked examples, giving tool access, breaking the task into steps, or adjusting temperature and other settings. Early large language models seemed unable to do multi-step math until researchers discovered that adding "let's think step by step" to the prompt unlocked the ability. The capability was always there; it needed the right trigger.
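The sweep described above can be sketched as a small loop over prompt templates. Everything here is hypothetical: `call_model` is a toy stand-in for a real model API, and the templates are illustrative, not a recommended set.

```python
# Minimal sketch of an elicitation sweep: try several prompt templates
# on the same question and record which ones surface the capability.
# `call_model` is a toy stand-in, not a real API; it "succeeds" only
# when the prompt triggers step-by-step reasoning.

def call_model(prompt: str) -> str:
    if "step by step" in prompt.lower():
        return "7 * 8 = 56, then 56 + 12 = 68. Answer: 68"
    return "Answer: 80"  # naive prompting yields a wrong guess

def extract_answer(response: str) -> str:
    # Take whatever follows the final "Answer:" marker.
    return response.rsplit("Answer:", 1)[-1].strip()

TEMPLATES = [
    "{question}",                             # naive prompt
    "Q: {question}\nA:",                      # Q/A framing
    "{question}\nLet's think step by step.",  # chain-of-thought trigger
]

def elicit(question: str, expected: str) -> dict:
    """Return, for each template, whether it produced the expected answer."""
    results = {}
    for template in TEMPLATES:
        prompt = template.format(question=question)
        results[template] = extract_answer(call_model(prompt)) == expected
    return results

results = elicit("What is 7 * 8 + 12?", "68")
```

In a real sweep the same idea extends to few-shot examples, tool access, and sampling settings; the point is that the model is held fixed while only the elicitation strategy varies.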
Builder example
If you test a model with one prompt style and conclude it cannot do the job, you may be abandoning a capable model prematurely. Teams that invest in elicitation routinely discover that models can handle tasks they initially dismissed, saving the cost of switching providers or building workarounds.
Common confusion: Elicitation is not the same as teaching the model something new. It uncovers capabilities that already exist in the model's weights. Light fine-tuning can be part of the process, but the core goal is always surfacing what the model already knows how to do, not adding abilities it lacks.