October 30, 2025 • 5 min read
AI Performance Monitoring: Support or Surveillance?

CX Analyst & Thought Leader

Employee skepticism of AI is nothing new, but what employees dislike about AI is changing.
In the past, employees’ chief concern about AI was the immediate and long-term job loss they feared it would cause. Conversations about “automating away human empathy” filled break rooms and boardrooms, and customers grew increasingly worried that they wouldn’t be able to reach a real person when they desperately needed one.
Now, many employees see workplace AI adoption the way they see workplace social media accounts: a necessary evil they wish they could quit but can’t, because using it is both a job requirement and a performance monitoring tool.
To put it bluntly: many employees see AI-powered performance monitoring as little more than corporate surveillance masquerading as “quality assurance.”
The Cost Of Automated Employee Surveillance? More Work.
Employee “AI fatigue” is especially palpable in contact centers, where traditional agent performance metrics like CSAT (customer satisfaction), FCR (first-contact resolution), call transfer rate, and average handle time still reign supreme.
At times, it can feel to agents like AI is taking its revenge, punishing them with low performance scores simply because their human minds grasp, far better than any algorithm, the nuance and spontaneity that human-to-human customer conversations require.
As executives increasingly panic about the “lack of ROI” from their AI investments, employees feel like the algorithm is being used to shift the blame onto them, all in the hope of preventing contract cancellations and ugly divorces between businesses and their technology partners.
Then there’s the fact that the agent-facing AI tools that promised to ease agent workload have actually added to it. 77% of employees say AI has increased their workload, and nearly 50% admit they still can’t figure out how to achieve the high levels of productivity that AI promised (and that their supervisors expect).[*]
That’s the quiet reality of AI in the modern workplace. What began as an assistive tool now feels more like surveillance to many employees. And what started as employee skepticism of AI has morphed into full-blown resentment of it.
The Algorithmic Gaze: Worse Performance, Unhappier Agents
AI-powered monitoring promised fairness and objectivity. Instead, it’s producing unhappier, less creative, and more resistant employees.
Research from Cornell suggests that employees find AI performance monitoring tools far more controlling and intrusive than monitoring done by a human supervisor. This loss of autonomy makes agents feel dehumanized and more robotic in their interactions with customers. One participant in the Cornell research said the monitoring tools led them to “try to become like a mannequin during tests now.”[*]
That line captures the core dysfunction. Employees start to perform for the algorithm, not the customer. They constrain their behavior to avoid misinterpretation: they speak differently, they over-correct their tone, they follow AI Agent Assist suggestions even though they know there’s a more efficient and effective way to do things.
It’s no wonder, then, that agent burnout rates are soaring. 71% of full-time employees now describe themselves as burned out, a figure tied to AI-driven productivity pressures. Many say the expectation to stay “continuously active” has become the new form of compliance: constant activity is valued over quality.
The study also found that AI-driven performance monitoring makes employees more likely to engage in “resistance behaviors,” including:
- Gaming the AI monitoring tools (16% of employees admit to keeping unnecessary apps open, while 15% schedule fake emails)
- Deliberately sabotaging a company’s AI efforts (31% of agents admit to doing just that out of frustration or distrust in AI)
- Quitting en masse (one in six employees say they’ve considered quitting because of invasive AI monitoring)
Measuring Without Understanding
AI-driven coaching tools claim to offer “meaningful” or “actionable” feedback. But what happens when an AI-powered coach misreads an agent’s silence as disinterest, or their tone of voice as irritation?
For a growing number of employees, the line between feedback and control has disappeared.
AI performance tools thrive on the illusion of precision. The dashboards look objective. The numbers feel neutral. But much of what’s being measured doesn’t actually matter.
Emotion-detection systems misclassify voices. Productivity trackers penalize breaks. Algorithms confuse activity with effectiveness. The metrics look clear, but the meaning behind them isn’t.
When every action is logged and rated, the idea of meaningful work starts to collapse. Employees begin to disengage. They stop volunteering ideas, stop mentoring others, and stop caring about anything that doesn’t move a metric. The result is a new kind of CX mediocrity: people chasing the right signals instead of doing the right things.
Trust Can’t Be AI-Optimized
AI promised to remove the drudgery from work. Instead, it’s rebranded control as optimization.
The technology isn’t the problem; the obsession with constant measurement and surveillance is. A workplace built on constant observation produces compliance while killing creativity.
Maybe the real sign of intelligence in a modern workplace isn’t how much it tracks, but how much it still lets people breathe.