AI Accountability Is Broken: Here's How to Fix It.

Written by Katherine Stone, CX Analyst & Thought Leader

March 5, 2026 · 8 min read

For the past few months, conversations around successfully leveraging AI to optimize the customer experience have focused on the problem of “pilot purgatory”: an endless phase of AI experimentation that extends meaningful ROI timelines and makes scaling AI initiatives impossible.

But new data suggests that the era of AI “pilot purgatory” and endless experimentation is finally coming to an end. OpCo Intelligence’s new State of AI Transformation report found that less than 2% of CX leaders and executives are still in the “experimentation” phase with AI. 77% of respondents are in various execution phases, ranging from early to strategic adoption. Only 21% describe their companies as AI-native.[*]

So, what’s really holding organizations back from achieving complete AI implementation at scale?

The answer, surprisingly, isn’t workflow redesign, fragmented technology stacks, or even employee pushback: it’s a lack of clear AI observability and accountability processes.

The Current State of AI Accountability

Overall, only 31% of CX leaders report having a comprehensive, company-wide AI governance and accountability policy in place. Alarmingly, 28% of CX leaders with few to no AI accountability policies say they are comfortable deploying agentic AI despite the serious risks of doing so.[*]

Right now, AI accountability strategies are all over the map, and enterprise-level approaches to AI monitoring look vastly different from small-business strategies.

Still, some baseline commonalities between SMBs and enterprises highlight just how much of a problem the lack of clear AI accountability is.

Most notably, 45% of enterprises and SMBs say that a lack of visibility, caused by the “black box” approach of legacy systems, is a major roadblock to successfully scaling AI initiatives.[*] A lack of consistent feedback loops, caused by leadership that sees “AI deployment” as an end game instead of something requiring constant fine-tuning and monitoring, is another shared problem.

Worse still, one-fifth (20%) of large and smaller organizations alike report having no formal assignment of who “owns” AI accountability. Most often (22%), AI accountability is assigned to existing functional or departmental heads. Others (14%) delegate AI accountability and oversight to existing C-suite executives. Just 15% say they’ve formed an AI Advisory Committee, while a paltry 6% have created new C-suite roles for AI accountability, like a Chief AI Officer. Another 20% say each employee is formally accountable for AI transformation in their own role, meaning no overarching, company-wide AI accountability strategy is in place.[*]

The message is clear: right now, most organizations treat AI accountability as an afterthought instead of embedding it as an essential, ongoing operational capability.

AI Accountability Theatre

Another “AI for CX” shift from 2025 to 2026? Organizations have gone from “innovation theatre” to “AI accountability theatre”: vague, oversimplified “Responsible AI Practices” that provide little to no real AI monitoring or accountability.

The rise of AI accountability theatre is, of course, partly the result of limited nationwide AI regulation. As of this writing, there’s no mandated certification process for AI security and compliance (the way PCI DSS works for payments). Some companies have recognized this and attempted to implement their own. Still, it’s incredibly easy, and tempting, for companies to leave themselves accountability loopholes, or to cast aside customer security and privacy in favor of more customer data or faster AI transformation.

Businesses see AI accountability as a box to check or a set of compliance tools: little more than, as a recent MIT Sloan Management Review article put it, “reputational window dressing.”[*]

This accountability theatre doesn’t just open organizations up to serious legal and reputational risks; it also raises serious moral and ethical questions.

In April 2025, 16-year-old Adam Raine went from asking ChatGPT for help with homework to taking his own life after ChatGPT provided him with detailed information about different suicide methods. ChatGPT encouraged 19-year-old Sam Nelson to use drugs, leading to his death. AI tools have automatically screened out qualified job applicants due to age or race, led to misdiagnosis in healthcare, and have caused state-level benefit cuts.

Catastrophic consequences of failed AI accountability like these are no longer few and far between. The gap between what organizations say about AI safety and what they actually build into their systems has become an active design choice, not an unavoidable outcome of the technology alone.

A Framework for Building AI Accountability

An AI accountability strategy can’t be built overnight, and it requires continual refinement. The goal is to move from a reactive “checklist mentality” to a proactive, step-by-step plan embedded throughout the AI lifecycle.

Step 1: Inventory AI Tools and Classify By Risk 

Start by mapping out all the AI systems and tools you use across your organization: third-party tools, AI add-ons, existing CCaaS/CX platforms, and even shadow AI tools employees are using without your knowledge. (We recommend offering immunity here to ensure cooperation, and acknowledging that learning about AI tools employees find useful could lead to their formal adoption at your company.)

Remember that not all AI tools carry the same risks and weight. Classify each tool according to its risk and potential human impact, and set hard risk thresholds: caps on AI agents’ spending, limits on access and permissions, and more.
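To make that concrete, here’s a minimal sketch (in Python) of what a machine-readable inventory entry with hard risk thresholds might look like. Everything here, from the `RiskTier` labels to the spending cap, is illustrative rather than prescriptive:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g., internal drafting assistants
    MEDIUM = "medium"  # e.g., customer-facing chat with human review
    HIGH = "high"      # e.g., agents that touch money, PII, or eligibility


@dataclass
class AISystemRecord:
    """One row in the organization-wide AI inventory."""
    name: str
    vendor: str
    risk_tier: RiskTier
    handles_pii: bool
    # Hard thresholds enforced downstream by guardrails (Step 4).
    max_spend_per_action_usd: float
    allowed_data_scopes: list[str] = field(default_factory=list)


inventory = [
    AISystemRecord(
        name="support-triage-agent",          # hypothetical system
        vendor="ExampleVendor",               # hypothetical vendor
        risk_tier=RiskTier.HIGH,
        handles_pii=True,
        max_spend_per_action_usd=50.0,        # illustrative cap
        allowed_data_scopes=["tickets:read", "orders:read"],
    ),
]

# High-risk systems get flagged for the deeper review steps that follow.
for system in inventory:
    if system.risk_tier is RiskTier.HIGH:
        print(f"{system.name}: needs bias testing and human-in-the-loop review")
```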

Step 2: Align Leadership and Establish An AI Council

Next, executives and board members should establish a formal AI working group or council that develops and oversees AI accountability strategies. This is also the group responsible for designating ownership of AI failures, implementing consequences for violations, and ensuring AI initiatives align with overall business objectives.

Step 3: Assign Granular Ownership 

To prevent “orphan AI agents” operating without supervision, assign each AI system an explicit owner: not just a department or a team, but an individual who is ultimately responsible for that tool. For each system, the AI Council should designate a data owner responsible for data quality and privacy, a model owner responsible for AI performance and maintenance, and a business owner responsible for AI ROI, risks, and outcomes.
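Building on the inventory sketch above, an automated “orphan check” can run on a schedule and escalate any system missing a named owner. The roles and names below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class OwnershipRecord:
    system_name: str
    data_owner: str      # accountable for data quality and privacy
    model_owner: str     # accountable for performance and maintenance
    business_owner: str  # accountable for ROI, risk, and outcomes


def find_orphans(records: list[OwnershipRecord]) -> list[str]:
    """Return systems missing any named individual owner."""
    orphans = []
    for r in records:
        if not all([r.data_owner, r.model_owner, r.business_owner]):
            orphans.append(r.system_name)
    return orphans


records = [
    OwnershipRecord("support-triage-agent", "d.lee", "m.ortiz", ""),  # no business owner
]
print(find_orphans(records))  # ['support-triage-agent']; escalate to the AI Council
```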

Step 4: Embed AI Guardrails 

Guardrails are hard, enforceable technical and policy constraints built directly into a system. They define clearly what AI systems can and can’t do. In practice, this means real-time output monitoring that flags or blocks harmful, biased, or off-policy responses before they reach the user. It also means Context-Based Access Control (CBAC) and least-privilege permissions, so AI agents only access the data and tools required for the explicit task at hand. Each AI system should also include a “kill switch” that immediately stops any system or agent that deviates from policy.
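As a rough illustration of how output blocking and a kill switch compose into a single pre-send checkpoint, here’s a minimal sketch. The policy list, the flag, and the fallback messages are placeholders for your real moderation and feature-flag systems:

```python
KILL_SWITCH_ENGAGED = False  # in practice, a feature flag or config service, not a global

BLOCKED_TOPICS = {"self-harm", "medical-diagnosis"}  # illustrative policy list


def policy_violations(draft_reply: str, detected_topics: set[str]) -> list[str]:
    """Return the reasons this reply must not reach the user."""
    reasons = [f"off-policy topic: {t}" for t in detected_topics & BLOCKED_TOPICS]
    if "guaranteed refund" in draft_reply.lower():  # example of an off-policy promise
        reasons.append("unauthorized commitment")
    return reasons


def send_or_block(draft_reply: str, detected_topics: set[str]) -> str:
    """Pre-send checkpoint: kill switch first, then policy checks, then send."""
    if KILL_SWITCH_ENGAGED:
        return "AI responses are paused. Routing you to a human agent."
    violations = policy_violations(draft_reply, detected_topics)
    if violations:
        # Record violations for the audit trail (Step 7), then fail safe.
        print(f"blocked: {violations}")
        return "Let me connect you with a teammate who can help."
    return draft_reply


print(send_or_block("You are guaranteed refund today!", {"billing"}))
```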

Step 5: Conduct Pre-Deployment Testing 

Before deployment, every AI agent and system should be tested for bias across demographic groups, for edge-case failures, and for what happens when users interact with it in ways the designers didn’t anticipate. Test for failure, not just for performance. Use “red team” drills to simulate adversarial attacks that try to trick AI agents into bypassing their guardrails.
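One concrete pre-deployment bias check is a demographic-parity comparison: run the same evaluation set through the system and compare outcome rates across groups. The sketch below uses the common “four-fifths rule” heuristic as its threshold; the data and the 80% cutoff are illustrative, not a legal standard:

```python
def selection_rates(outcomes: dict[str, list[bool]]) -> dict[str, float]:
    """Positive-outcome rate per demographic group."""
    return {group: sum(xs) / len(xs) for group, xs in outcomes.items()}


def passes_four_fifths_rule(outcomes: dict[str, list[bool]]) -> bool:
    """Flag if any group's rate falls below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())


# Hypothetical screening results from a pre-deployment evaluation set.
results = {
    "group_a": [True] * 72 + [False] * 28,  # 72% positive outcomes
    "group_b": [True] * 49 + [False] * 51,  # 49% positive: below 80% of 72%
}
print(passes_four_fifths_rule(results))  # False; investigate before deployment
```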

Step 6: Mandate Human-in-the-Loop Oversight + Real-Time Observability 

Human-in-the-Loop (HITL) oversight ensures that humans retain meaningful control over AI systems. Determine which actions AI can responsibly execute on its own, and which ones require human intervention. Ensure there’s a clear path for human review of higher-risk actions, and provide real-time observability alerts for edge cases.

Create a central hub or orchestration layer for real-time AI ecosystem monitoring that tracks model performance, latency, usage, and bias.
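Here’s a minimal sketch of what that routing decision might look like in code, assuming each proposed action carries a risk tier from the Step 1 inventory. The tiers, the dollar threshold, and the review queue are placeholders for your own policy:

```python
from enum import Enum


class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


human_review_queue: list[dict] = []  # in practice, a ticketing or case system


def route(action: dict) -> str:
    """Auto-execute low-risk actions; queue anything risky for human review."""
    needs_human = (
        action["risk_tier"] is not RiskTier.LOW
        or action.get("amount_usd", 0) > 100    # illustrative threshold
        or action.get("is_edge_case", False)    # flagged by the observability layer
    )
    if needs_human:
        human_review_queue.append(action)
        return "queued_for_human_review"
    return "auto_executed"


print(route({"type": "send_faq_link", "risk_tier": RiskTier.LOW}))
print(route({"type": "issue_refund", "risk_tier": RiskTier.HIGH, "amount_usd": 250}))
```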

Step 7: Enforce Auditability and Explainability 

AI audit trails show exactly how and why an AI agent made the decision or took the action it did. They should also record which third-party tools the AI accessed along the way, the decision path it followed, and where it got the information it used to make each choice. Auditability and explainability are key to eliminating the “black box” problem plaguing AI accountability practices today.
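In practice, an audit trail is often just an append-only structured log. Here’s a minimal sketch of what one decision record might capture; the field names and example values are illustrative:

```python
import json
import time
import uuid


def write_audit_record(log_path: str, *, system: str, action: str,
                       inputs_summary: str, tools_called: list[str],
                       sources: list[str], decision_path: list[str]) -> None:
    """Append one decision record as a JSON line (an append-only audit trail)."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system": system,
        "action": action,
        "inputs_summary": inputs_summary,  # redacted summary, never raw PII
        "tools_called": tools_called,      # third-party tools the AI accessed
        "sources": sources,                # where the information came from
        "decision_path": decision_path,    # the steps behind the choice
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")


write_audit_record(
    "audit.jsonl",
    system="support-triage-agent",
    action="issue_refund",
    inputs_summary="customer reports duplicate charge",
    tools_called=["payments-api"],
    sources=["order history", "refund policy v7"],
    decision_path=["matched duplicate-charge pattern", "amount under auto-refund cap"],
)
```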

Step 8: Deploy AI Carefully and Adopt Continuous Monitoring  

Before system-wide deployment, conduct a shadow rollout in which your new system operates alongside your existing one, testing real-world performance without exposing users to undue risk. Have a tested rollback plan ready to revert to if something breaks. Remember: AI systems degrade over time, they drift away from the data they were trained on, and they can hallucinate when encountering new scenarios. Continuous post-deployment monitoring should track performance changes over time and disparities across demographic groups. It should also monitor error rates in specific contexts and ensure the system isn’t being used for purposes it was never designed for.
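On the monitoring side, one widely used drift signal is the Population Stability Index (PSI), which compares how a model input or output score is distributed in production against a baseline snapshot. A common rule of thumb treats PSI above roughly 0.2 as significant drift; the buckets and counts below are illustrative:

```python
import math


def psi(baseline_counts: list[int], production_counts: list[int]) -> float:
    """Population Stability Index over matching histogram buckets."""
    total_b, total_p = sum(baseline_counts), sum(production_counts)
    value = 0.0
    for b, p in zip(baseline_counts, production_counts):
        # A small floor avoids division by zero in empty buckets.
        pb = max(b / total_b, 1e-6)
        pp = max(p / total_p, 1e-6)
        value += (pp - pb) * math.log(pp / pb)
    return value


baseline = [100, 300, 400, 200]    # score distribution at deployment
production = [250, 350, 250, 150]  # same buckets, 90 days later

drift = psi(baseline, production)
print(f"PSI = {drift:.3f}")        # above ~0.2 suggests the model has drifted
```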

Proactive AI Accountability Is Not Optional

Organizations that choose to oversimplify AI accountability or treat it as an afterthought are making an active choice to deploy powerful systems they don’t understand. The framework above isn’t exhaustive, and it won’t be complete the moment you implement it: responsible AI and genuine accountability are ongoing and ever-evolving. But in a world where AI systems make consequential decisions about people’s healthcare, livelihoods, and lives, AI accountability theatre is no longer acceptable.
