Context / Research term
Meta-prompting
Giving an AI instructions about how to interpret and follow instructions, so it handles new tasks more reliably without per-task hand-holding.
Most prompting tells a model what to do: summarize this, rewrite that, answer this question. Meta-prompting tells the model how to approach work in general. You might instruct it to always ask a clarifying question before starting, to flag assumptions, to prefer short answers unless asked for detail, or to cite its sources. These instructions shape behavior across many tasks rather than steering a single response. A well-tuned meta-prompt turns a generic assistant into one that matches your working style.
Builder example
Every time you correct an AI's approach rather than its output, you are doing meta-prompting by hand. The correction cost adds up. Writing these instructions once in a system prompt or context file means the model arrives at your preferred behavior without repeated steering, and new team members inherit the same working norms automatically.
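A minimal sketch of what "writing these instructions once" can look like in practice, assuming the common chat-message convention of role/content pairs; the meta-prompt wording and function name are illustrative, not from the source.

```python
# Meta-prompt: instructions about HOW to work, written once and
# reused across every task instead of repeated per request.
META_PROMPT = (
    "Before starting any task, ask one clarifying question if the "
    "request is ambiguous. Flag your assumptions explicitly. Prefer "
    "short answers unless more detail is requested. Cite sources "
    "for factual claims."
)

def build_messages(user_request: str) -> list[dict]:
    """Prepend the same meta-prompt to any task-specific request."""
    return [
        {"role": "system", "content": META_PROMPT},
        {"role": "user", "content": user_request},
    ]

# The working norms travel with every request, whatever the task:
messages = build_messages("Summarize this report.")
```

Because the meta-prompt lives in one place, updating it changes the assistant's behavior everywhere at once, which is also how teammates end up inheriting the same norms.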
Common confusion: Meta-prompting is easy to confuse with prompt engineering, but they operate at different levels. Prompt engineering shapes one request. Meta-prompting shapes the model's stance across all requests: how it handles ambiguity, how much it explains, when it pushes back, and what it assumes about the user.
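The two levels can be made concrete side by side; a sketch under the same role/content message convention as above, with illustrative wording.

```python
# Prompt engineering: crafted per request, steers exactly one response.
task_prompt = {
    "role": "user",
    "content": "Summarize this memo in three bullet points, neutral tone.",
}

# Meta-prompting: set once, shapes the model's stance on every
# request that follows (ambiguity handling, verbosity, assumptions).
meta_prompt = {
    "role": "system",
    "content": (
        "If a request is ambiguous, ask one clarifying question "
        "before answering. State any assumptions you make."
    ),
}

# The meta-prompt persists across the conversation; task prompts come and go.
conversation = [meta_prompt, task_prompt]
```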