Context / Research term
Context rot
The gradual decline in a model's reliability as its input gets longer or more cluttered, even when the input is well within the token limit.
As you keep adding documents, instructions, and conversation history to a model's input, its answers start to degrade. Research shows models are especially likely to miss information buried in the middle of long inputs, a pattern called the "lost-in-the-middle" effect. Imagine handing someone a 200-page briefing packet and asking a question whose answer is on page 97: they will often miss it. Context rot names this predictable decline.
Builder example
A million-token context window does not mean you can paste a million tokens and get good results. Build a meeting-notes assistant that loads an entire quarter of transcripts into every call, and the model will start missing key decisions buried deep in the text. Shorter, well-organized inputs consistently outperform longer dumps.
You drop fifty contracts into a conversation and ask about a renewal clause. The model misses it because the relevant section is buried in the middle of a massive input.
Retrieve only the relevant contract and clause, put it near the start of the prompt, and ask the model to cite the specific passage.
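A minimal sketch of that fix, using a crude keyword-overlap scorer in place of a real retrieval system. All names here (`score_chunk`, `retrieve`, `build_prompt`, the sample contracts) are illustrative, not from any particular library; in practice you would swap the scorer for embedding search.

```python
def score_chunk(chunk: str, query: str) -> int:
    """Crude relevance score: count query words that appear in the chunk."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for word in query.lower().split() if word in chunk_words)

def retrieve(chunks: list[str], query: str, top_k: int = 1) -> list[str]:
    """Return only the top_k most relevant chunks, best first."""
    return sorted(chunks, key=lambda c: score_chunk(c, query), reverse=True)[:top_k]

def build_prompt(chunks: list[str], question: str) -> str:
    """Front-load the retrieved passage and ask for a citation."""
    context = "\n\n".join(chunks)
    return (
        f"Relevant contract excerpt:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the excerpt above, and quote the exact clause you relied on."
    )

# Hypothetical contract snippets standing in for fifty full documents.
contracts = [
    "Section 2: Payment is due within 30 days of invoice.",
    "Section 9: This agreement renews automatically unless either party "
    "gives written notice 60 days before the term ends.",
    "Section 14: Disputes are governed by the laws of the State of Delaware.",
]

question = "What is the renewal notice period?"
top = retrieve(contracts, question)
prompt = build_prompt(top, question)
```

The point is not the toy scorer: it is that the model receives one short, relevant excerpt at the top of the prompt instead of fifty contracts, and the citation instruction makes it easy to verify the answer against the source.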
Common confusion: The model can technically "see" every token in its window. The problem is attention: it struggles to locate and prioritize the right information when surrounded by large volumes of text.