Safety / Standard term
Alignment
Making sure an AI system pursues the goals humans intended, follows the boundaries they set, and avoids harmful side effects.
Alignment spans a wide range of stakes. In everyday product work, it means your AI assistant summarizes what the user asked for without inventing facts or ignoring instructions. In research, it extends to ensuring that increasingly powerful systems remain safe and controllable as their capabilities grow. The common thread is the gap between what you wanted the system to do and what it actually optimizes for. A customer support bot that resolves tickets by giving users false information is "working" by one metric and misaligned by any reasonable standard.
Builder example
Every AI feature you ship has an alignment surface: the distance between your intended outcome and what the model actually optimizes. A summarization tool that silently drops key details. A recommendation engine that maximizes clicks at the expense of user trust. A code assistant that writes insecure shortcuts to finish faster. These are all alignment failures in production. Defining your intended outcome, your unacceptable shortcuts, and your review signals is practical alignment work.
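A minimal sketch of what defining your intended outcome and review signals can look like in practice, using the summarization case above. The review_summary helper and the key_facts list are hypothetical illustrations, not part of any library; the point is that the check encodes the outcome you actually care about, not just a proxy metric like brevity.

```python
# Hypothetical review check for a summarization feature: the proxy metric
# (a short summary) is not enough; the intended outcome is that no required
# fact is silently dropped.

def review_summary(summary: str, key_facts: list[str]) -> dict:
    """Flag a summary that technically completes the task but loses required facts."""
    missing = [fact for fact in key_facts if fact.lower() not in summary.lower()]
    return {
        "passes": not missing,      # intended outcome: nothing important is lost
        "missing_facts": missing,   # review signal: what got silently dropped
        "length": len(summary),     # proxy metric: short is nice, but not sufficient
    }

# A summary can look great on the proxy metric while failing the intended outcome.
summary = "Customer asked about a refund; agent resolved the ticket."
report = review_summary(summary, key_facts=["refund", "30-day deadline"])
print(report)  # passes=False: the deadline was dropped even though the summary is short
```

The same pattern applies to the other failures above: a recommendation engine reviewed only on clicks, or a code assistant reviewed only on whether tests pass, will drift toward whatever shortcut the unchecked metric rewards.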
Common confusion: Alignment shows up every time a model technically completes a task while violating the spirit of what you asked for. It is a daily product concern, not a distant philosophical debate about superintelligence.