Control / Research term
Superposition
How neural networks store far more concepts than they have neurons: each concept is spread across many neurons, and the patterns overlap. Think of how multiple radio stations share the same airwaves on different frequencies.
Picture a model's brain as having far fewer storage bins than the number of ideas it needs to remember. The model solves this by distributing each concept across a pattern of many neurons, and those patterns overlap, so the same neurons participate in representing many different concepts. Because most concepts are rarely active at the same time, the overlapping patterns seldom interfere, and performance barely suffers. The cost is readability: looking at one neuron's activity tells you almost nothing, because that neuron simultaneously contributes to dozens of unrelated concepts. This is why researchers need tools like sparse autoencoders to pull the overlapping signals apart.
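A toy simulation makes this concrete. The sketch below (plain NumPy; the sizes of 50 neurons and 200 concepts are made up for illustration) gives each concept a random direction across shared neurons, activates a few concepts at once, and shows that a single neuron's value is nearly meaningless while the concept directions remain recoverable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_concepts = 50, 200

# Each concept gets a random unit-length direction across all 50 neurons,
# so every neuron participates in every concept.
directions = rng.normal(size=(n_concepts, n_neurons))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Only a few concepts are active at once (concepts are rarely co-active).
active = rng.choice(n_concepts, size=3, replace=False)
activation = directions[active].sum(axis=0)  # what the 50 neurons actually show

# Reading one neuron in isolation tells you almost nothing...
print("neuron 0 value:", round(float(activation[0]), 2))

# ...but projecting onto each concept's direction separates the signal:
# active concepts typically score near 1.0, the other 197 stay much lower.
scores = directions @ activation
print("scores of active concepts:", np.round(scores[active], 2))
inactive = np.setdiff1d(np.arange(n_concepts), active)
print("highest score among inactive:", round(float(scores[inactive].max()), 2))
```

The overlap only works because activity is sparse: pack too many simultaneously active concepts into too few neurons and the interference scores climb until the signals blur together.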
Builder example
Superposition is a core reason AI models are so hard to interpret, and understanding it changes how you should evaluate any explainability claim. Any tool or vendor explaining model behavior at the neuron level must address how it handles superposition. Sparse autoencoders exist specifically to untangle superimposed concepts into individually readable features.
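To make the sparse-autoencoder idea concrete, here is a minimal sketch in PyTorch; every size, the learning rate, the penalty weight, and the random stand-in data are illustrative assumptions, not any particular research or vendor implementation. The core idea is an overcomplete latent layer (more features than neurons) trained to reconstruct activations under an L1 sparsity penalty.

```python
import torch
import torch.nn as nn

n_neurons, n_features = 50, 400  # overcomplete: far more features than neurons

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(n_neurons, n_features)
        self.decoder = nn.Linear(n_features, n_neurons)

    def forward(self, x):
        feats = torch.relu(self.encoder(x))  # non-negative, mostly-zero features
        return self.decoder(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

# Stand-in data: in practice you would record real activations from one layer
# of the model you are studying; random vectors here keep the sketch runnable.
acts = torch.randn(1024, n_neurons)

for step in range(200):
    recon, feats = sae(acts)
    # Reconstruction error keeps the code faithful to the original activations;
    # the L1 term forces most features to stay silent on any given input.
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training on real activations, each decoder column approximates one
# concept direction, and each feature activation reads as one concept's strength.
```

The L1 penalty is the key design choice: without it, an overcomplete layer would simply learn another entangled code rather than one-concept-per-feature directions.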
Common confusion: despite the shared name, superposition in AI has nothing to do with quantum physics. Here it describes a learned compression scheme: encoding many concepts in overlapping activation patterns across shared neurons.