Attacks / Standard term
Excessive agency
Giving an AI model more permissions, tools, or autonomy than it needs for its job, which multiplies the damage when anything goes wrong.
When a model has access to capabilities it does not need, every mistake or attack becomes more dangerous. Consider a meeting-notes summarizer that also has permission to send emails, delete calendar events, and access the company directory. If a prompt injection tricks the model, or if the model simply misinterprets a request, it can now do real damage across systems it never needed to touch. The harm from any AI failure scales directly with the power the AI was granted.
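The scenario above can be sketched as a per-agent tool allowlist, so that a prompt injection or a misread request cannot reach systems the agent was never meant to touch. This is a minimal illustration; all names here (`Tool`, `CATALOG`, `AGENT_TOOLS`, `dispatch`) are hypothetical, not from any particular framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Tool:
    name: str
    destructive: bool          # can this tool change or delete data?
    run: Callable[[str], str]

# The full tool catalog the platform offers (stubbed actions).
CATALOG = {
    "read_doc":     Tool("read_doc", False, lambda arg: f"contents of {arg}"),
    "send_email":   Tool("send_email", True, lambda arg: f"sent: {arg}"),
    "delete_event": Tool("delete_event", True, lambda arg: f"deleted: {arg}"),
}

# Least privilege: the notes summarizer is granted read_doc and nothing else,
# so a hijacked summarizer cannot send email or delete calendar events.
AGENT_TOOLS = {
    "notes_summarizer": {"read_doc"},
    "calendar_admin":   {"read_doc", "delete_event"},
}

def dispatch(agent: str, tool_name: str, arg: str) -> str:
    """Refuse any tool call outside the agent's granted set."""
    if tool_name not in AGENT_TOOLS.get(agent, set()):
        raise PermissionError(f"{agent} is not granted {tool_name}")
    return CATALOG[tool_name].run(arg)
```

With this gate in place, `dispatch("notes_summarizer", "send_email", ...)` raises `PermissionError` even if the model was tricked into requesting it.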
Builder example
This is one of the most preventable AI risks. A customer-facing chatbot with database write access, an internal assistant with admin credentials, a code reviewer with deployment permissions: each turns a small error into a serious incident. The same least-privilege principle that applies to employee accounts applies to AI agents.
The user needs summaries from a Drive folder, but the connector also grants edit and delete permissions.
Use read-only scopes, separate write actions, and require human approval for destructive operations.
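A minimal sketch of those three mitigations: a read-only connector scope, destructive actions split out into their own path, and a human-approval gate in front of them. The Google Drive read-only OAuth scope string is real; everything else (`DESTRUCTIVE_ACTIONS`, `execute`, the approval callback) is illustrative.

```python
from typing import Callable

# 1. Read-only scope: the connector can list and read files, nothing else.
DRIVE_READONLY = "https://www.googleapis.com/auth/drive.readonly"

# 2. Write/destructive actions are named explicitly, not bundled with reads.
DESTRUCTIVE_ACTIONS = {"delete_file", "overwrite_file"}

def execute(action: str, target: str,
            approve: Callable[[str, str], bool]) -> str:
    """Run an action; destructive ones require an explicit human yes."""
    # 3. Human approval gate: a destructive action without sign-off is blocked.
    if action in DESTRUCTIVE_ACTIONS and not approve(action, target):
        return f"blocked: {action} on {target} not approved"
    return f"ok: {action} on {target}"

# Safe default: a reviewer callback that rejects everything.
deny_all: Callable[[str, str], bool] = lambda action, target: False
```

Reads succeed unconditionally, while `execute("delete_file", "report.docx", deny_all)` is blocked until a human approves, which keeps the worst-case damage of a hijacked summarizer at "read something" rather than "delete everything".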
Common confusion: Excessive agency is a design flaw, not an attack. By itself it causes no harm, but it dramatically amplifies the impact of every other vulnerability on this list, from prompt injection to data exfiltration.