

Fine-tuning

Additional training on a smaller, focused dataset that adapts a pretrained model to a specific task, domain, or style, like specialization training after a general education.

A pretrained model already understands language and general knowledge. Fine-tuning trains it further on examples specific to your needs: medical records, legal documents, a particular writing tone, or a structured output format like JSON. Fine-tuning on thousands of customer support conversations, for example, teaches the model your company's voice, product terminology, and resolution patterns. The model absorbs these specialized examples into its core behavior, so you no longer need elaborate prompts to get the right output.
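To make the customer-support example concrete, hosted fine-tuning services typically accept training data as JSON Lines, one conversation per line. The sketch below assumes an OpenAI-style chat-message format; the field names, the system prompt, and the sample conversations are illustrative assumptions, not a universal standard.

```python
import json

# Hypothetical support conversations; in practice these come from your ticket system.
conversations = [
    ("How do I reset my password?",
     "Head to Settings > Security and click 'Reset password'. A link arrives by email."),
    ("Can I export my data?",
     "Yes! Settings > Account > Export generates a ZIP of everything you own."),
]

def to_training_example(question, answer):
    """Wrap one Q/A pair in an OpenAI-style chat-message training record (assumed format)."""
    return {
        "messages": [
            {"role": "system", "content": "You are Acme's friendly support assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

# JSON Lines: one serialized training example per line.
jsonl = "\n".join(json.dumps(to_training_example(q, a)) for q, a in conversations)
print(jsonl.splitlines()[0][:80])
```

After enough such examples, the model reproduces the assistant's tone and resolution patterns without the system prompt spelling them out each time.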

Builder example

Fine-tuning is worth pursuing when you need consistent behavioral patterns that prompting alone cannot reliably produce: a specific output format every time, domain-specific vocabulary used correctly, or a distinctive writing style at scale. It also reduces per-request cost by encoding instructions into the model's weights instead of passing them in every prompt. Before committing, though, try good prompts or retrieval-augmented generation (RAG): many needs that feel like fine-tuning problems can be solved more cheaply without any training.
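The per-request cost point can be sketched with back-of-the-envelope arithmetic. The price and token counts below are placeholder assumptions, not any provider's real rates; the point is that a long instruction prompt is paid for on every request, while fine-tuned behavior is paid for once at training time.

```python
# Placeholder numbers for illustration only; substitute your provider's real rates.
PRICE_PER_1K_INPUT_TOKENS = 0.002   # assumed dollars per 1,000 input tokens
INSTRUCTION_TOKENS = 1_500          # assumed length of the prompt fine-tuning would replace
REQUESTS_PER_MONTH = 500_000

# Cost of resending the same instructions with every single request.
monthly_instruction_cost = (
    INSTRUCTION_TOKENS / 1000
) * PRICE_PER_1K_INPUT_TOKENS * REQUESTS_PER_MONTH
print(f"${monthly_instruction_cost:,.2f}/month just for repeated instructions")
# → $1,500.00/month just for repeated instructions
```

Compare that recurring figure against the one-time training cost to decide whether fine-tuning pays for itself at your volume.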

Common confusion: Fine-tuning does not inject private knowledge into a model the way loading records into a database does. The model learns patterns from your data, not a lookup table, so it may generalize away or hallucinate facts that appeared infrequently in the training set. When you need verbatim recall of specific facts, retrieval is the better tool.
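A quick way to see the distinction: a retrieval store returns stored facts verbatim or not at all, whereas a fine-tuned model reconstructs answers from learned patterns. The toy lookup below stands in for the retrieval side; the keyword matching and the facts themselves are illustrative, not a real RAG pipeline.

```python
# Toy fact store: exact facts in, exact facts out. No generalization, no hallucination.
facts = {
    "refund window": "Refunds are accepted within 30 days of purchase.",
    "support hours": "Support is staffed 9am-6pm ET, Monday through Friday.",
}

def retrieve(query):
    """Return the stored fact whose key appears in the query, or admit ignorance."""
    for key, fact in facts.items():
        if key in query.lower():
            return fact
    return None  # a database-style miss, not a plausible-sounding guess

print(retrieve("What is your refund window?"))
# → Refunds are accepted within 30 days of purchase.
print(retrieve("Do you ship to Mars?"))
# → None
```

A fine-tuned model asked the second question would produce fluent text either way; the retrieval layer is what guarantees the answer is grounded in a stored fact.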