Slop / Research term
Workslop
AI-generated workplace content that looks polished and complete, then falls apart when someone tries to act on it. The sender appears productive; the receiver inherits hidden cleanup work.
Workslop arrives as a memo, project plan, slide deck, or analysis that looks finished at a glance. The formatting is clean, the sections are neatly labeled, and the language is confident. The problem surfaces when a colleague tries to act on it: the action items are vague, the numbers do not trace back to real sources, the recommendations contradict each other on closer reading, and follow-up questions reveal that the author never verified the substance. The net effect is negative productivity, because the receiver now spends time untangling a polished-looking document that created more work than it saved.
Builder example
Builders of workplace AI tools should measure value from the receiver's perspective, not the sender's. A tool that helps someone generate a beautiful brief in five minutes has failed if it costs the recipient thirty minutes of cleanup, clarification, and fact-checking. The real test of a workplace AI feature is whether the artifact actually moves the next person's work forward.
A teammate sends an AI-written competitive analysis with executive-summary formatting, no sources, no tradeoffs, and no recommendation.
Require the output to include source links, assumptions, options, and the question it is meant to answer. If it only looks finished, it is workslop.
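The checklist above can be sketched as a simple gate that a tool could run before an artifact is sent. This is a minimal illustration, not a prescribed implementation: the section names, the function name, and the link heuristic are all assumptions for the sketch.

```python
import re

# Hypothetical required sections, matching the checklist in the text:
# sources, assumptions, options, and the question being answered.
REQUIRED_SECTIONS = ("sources", "assumptions", "options", "question")


def missing_sections(text: str) -> list[str]:
    """Return the required sections absent from an AI-generated brief.

    A non-empty result suggests the artifact only *looks* finished.
    """
    lowered = text.lower()
    missing = [s for s in REQUIRED_SECTIONS if s not in lowered]
    # A "Sources" heading with no actual links is still unverifiable.
    if "sources" not in missing and not re.search(r"https?://", text):
        missing.append("sources (heading present, no links)")
    return missing
```

A sender-side tool might refuse to mark a brief as "ready" until `missing_sections` returns an empty list; a receiver-side tool might surface the list as questions to send back to the author.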
Common confusion: The early research on workslop is survey-based and carries methodological caveats, yet the workplace pattern it describes is widely recognized. The term names something most knowledge workers have already experienced: receiving AI-assisted work that looked complete and turned out to be hollow.