Trust Falls and Agentic Calls: Healthcare’s Next Leap.
- Rai Basharat

- Mar 23
- 11 min read
What Healthcare Leaders Told Us about AI Trust at HIMSS 2026

I opened the room with a simple ask: “Raise your hand if you would trust an AI agent to schedule your own mother’s surgery.” Most hands went up. No surprise there. Then I followed with, “Now keep it raised if you’d trust it to approve her prior auth.”
Here is what I did not expect: the hands mostly stayed up. A few people never raised their hand in the first place, which was honest and worth noting. But the room did not give me the dramatic drop I had anticipated. These healthcare leaders were more comfortable with autonomous AI than I assumed, and that told me something important: the industry has moved faster in its thinking than many of us advisors have given it credit for.
This was our focus group at HIMSS 2026 in Las Vegas: “Trust Falls and Agentic Calls: Healthcare’s Next Leap.” Fifty-five minutes with health system leaders, clinical informaticists, payer-side operators, and health IT executives. We talked about where agents belong, where they don’t, who is liable when they get it wrong, and what patients actually expect. We also ran a live survey of 21 participants. What came back was honest, complicated, and occasionally contradictory, which is to say it sounded like real people thinking through a hard problem.
(Survey snapshot: asked, in one or two words, for the absolute key to mainstream adoption of agentic AI in healthcare, 5 of 21 respondents, or 24 percent, answered “Trust.”)
And the timing could not be more relevant. Healthcare AI is no longer experimental. According to Menlo Ventures, 22 percent of healthcare organizations have now implemented domain-specific AI tools, a tenfold increase over 2023. Health systems are leading adoption at 27 percent, followed by outpatient providers at 18 percent and payers at 14 percent. The money is moving too: ambient clinical documentation alone generated $600 million in revenue in 2025, up 2.4 times year over year, and coding and billing automation added another $450 million. The agentic AI in healthcare market is projected to grow from $1.8 billion in 2026 to nearly $20 billion by 2034. This is not a pilot conversation anymore. This is an industry that is spending real money and discovering, in real time, that the governance has not kept up.
The Room Is Optimistic. Cautiously.
Fifty-seven percent of our survey respondents described themselves as “cautiously optimistic but needing more proof.” Thirty-eight percent said they were “excited and ready for implementation.” One person, five percent, was highly skeptical. Nobody said they were completely opposed. That distribution tracks with what I hear in client engagements every week: people want this to work—they just don’t want to be the ones it fails on first.
And the proof they want is operational, not theoretical. They are not waiting for another white paper. They want to see a prior auth agent actually reduce denials at a health system that looks like theirs. They want to see a call center triage bot handle 10,000 calls a week without a compliance incident. Consider the math behind that urgency: an AMA survey from 2025 found that clinicians complete roughly 39 prior authorizations per week and spend about 13 hours on the process, with most reporting that it contributes directly to burnout. McKinsey estimates that AI-enabled revenue cycle management could deliver a 30 to 60 percent reduction in cost to collect. Health systems collectively spend more than $140 billion annually on revenue cycle operations, and the CAQH Index pegs the savings opportunity from automating routine transactions like eligibility, claims, and prior auth at $20 billion. The ROI case is not theoretical. It is staring at the ceiling of every CFO’s office.
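If you want to pressure-test that claim against your own operation, the back-of-envelope version fits in a few lines. A rough sketch in Python using the AMA and McKinsey figures cited above; the clinician count and loaded hourly cost are placeholders to swap for your own numbers, not data from our survey.

```python
# Back-of-envelope prior-auth math using the figures cited above.
# Clinician count and hourly cost are illustrative placeholders.
CLINICIANS = 500            # hypothetical mid-size health system
HOURS_PER_WEEK = 13         # AMA 2025: hours spent on prior auth per week
LOADED_HOURLY_COST = 110    # assumed blended rate, USD
WEEKS_PER_YEAR = 48

annual_hours = CLINICIANS * HOURS_PER_WEEK * WEEKS_PER_YEAR
annual_cost = annual_hours * LOADED_HOURLY_COST

# Apply McKinsey's 30-60% cost-reduction range naively to this one workflow.
low, high = annual_cost * 0.30, annual_cost * 0.60

print(f"Prior-auth hours burned per year: {annual_hours:,}")
print(f"Estimated annual cost: ${annual_cost:,.0f}")
print(f"Potential savings range: ${low:,.0f} to ${high:,.0f}")
```

Run it with your own headcount and rates and the number stops being abstract, which is exactly the point.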
Hallucinations Keep Everyone Up at Night
When we asked about the single biggest barrier to trust, 57 percent said hallucinations and clinical inaccuracy. That number did not surprise me. What did surprise me was how far ahead it was. Loss of the human-to-human empathy connection came in at 19 percent. Data privacy and security at 14 percent. Hidden algorithmic bias at 10 percent.
I expected privacy to rank higher, honestly. HIPAA has been the dominant conversation in healthcare IT for two decades. But this group told us something different: they are less worried about data leaking out and more worried about bad reasoning going in. A hallucinated care pathway, a confidently wrong claim code, an agent that auto-approves something it should have flagged. That kind of failure is hard to catch because it looks like competence. And it is not just a theoretical risk. Payers are now deploying AI systems that can review and deny claims in seconds, processing denials at a scale and speed that manual provider workflows cannot match. The percentage of providers reporting denial rates above 10 percent surged from 30 percent in 2022 to 41 percent in 2025. When both sides are running agents, the accuracy question becomes an arms race.
We also asked an open-ended question: “In one or two words, what is the absolute key to mainstream adoption of agentic AI in healthcare?” The answers clustered hard. Accuracy. Trust. Transparency. Repeatedly. One respondent wrote, “Cybersecurity, i.e., trust. Are we meeting FDA, NIST/FedRAMP, or IEEE UL 2933 standards?” Another wrote simply, “Oversight guardrails.” A third said, “Fail-safe options for human in the loop with simple communication.” These are people who think in systems, not slogans.
71 Percent Would Look Under the Hood
Here is the finding I keep coming back to. We asked, “If an AI agent recommends a care pathway that contradicts your initial clinical judgment, what is your most likely response?” Seventy-one percent said they would dive into the AI’s logic and citations to see what they might have missed. Nineteen percent would consult a human colleague for a tie-breaker. Only 10 percent would reject the AI’s suggestion outright.
Think about that for a second. Seven out of ten clinicians and clinical leaders in this room said their first instinct, when an AI disagrees with them, is to check whether the AI might be right. That is not the response of people who fear the technology. That is the response of professionals trained to follow evidence wherever it leads. But it puts enormous pressure on explainability. If most of your clinical workforce is going to open the hood when the AI challenges them, the engine underneath had better make sense.
Which brings us to what explainability actually means to these people. Forty-eight percent said the most important feature is a plain-English summary of the system’s decision-making steps. Twenty-four percent wanted links to peer-reviewed literature supporting the AI’s action. Another twenty-four percent wanted a confidence score displayed in a clear UI. Five percent prioritized an instant “escalate to human” button. Nobody wants a black box. They want reasoning they can read, evaluate, and explain to the patient standing in front of them. That aligns with the Joint Commission and CHAI’s Responsible Use of AI in Healthcare guidance, released in September 2025, which calls on health systems to build formal governance structures with mechanisms for disclosing AI use and educating both staff and patients. The Joint Commission is developing a voluntary AI certification program for its network of 22,000 accredited healthcare organizations. The industry is formalizing what our focus group already knew intuitively: explainability is not a nice-to-have. It is a clinical requirement.
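To make those four preferences concrete, here is one way an agent’s output could be structured so the explanation travels with every recommendation. This is a hypothetical schema sketched for illustration, not any vendor’s actual API; the policy ID, guideline URL, and review threshold are invented.

```python
# Hypothetical decision record carrying the four explainability features
# the group ranked. Field names and example values are illustrative.
from dataclasses import dataclass


@dataclass
class AgentDecision:
    recommendation: str              # what the agent proposes
    reasoning_steps: list[str]       # plain-English summary (the 48%)
    citations: list[str]             # peer-reviewed support (24%)
    confidence: float                # 0.0-1.0, shown in the UI (24%)
    escalate_to_human: bool = False  # one-click handoff (5%)

    def needs_review(self, threshold: float = 0.85) -> bool:
        """Route flagged or low-confidence decisions to a human."""
        return self.escalate_to_human or self.confidence < threshold


decision = AgentDecision(
    recommendation="Approve prior auth: MRI, lumbar spine",
    reasoning_steps=[
        "Conservative therapy documented for 6+ weeks",
        "No red-flag symptoms in intake notes",
        "Payer policy LCD-1234 criteria met",  # hypothetical policy ID
    ],
    citations=["https://example.org/guideline"],  # placeholder reference
    confidence=0.78,
)
print(decision.needs_review())  # True: below threshold, a human reads it
```

Nothing about this is sophisticated, and that is the point: the reasoning a clinician opens the hood to find should be a first-class field, not a log file someone has to request.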
Patients Are Already Ahead of Us
Sixty-seven percent of respondents agreed that within five years, patients will prefer the speed of an autonomous AI for minor urgent care over waiting for a human. One-third disagreed. But even the skeptics acknowledged the pressure is real. Patients have already made this choice in banking, tax prep, and grocery delivery. Healthcare will not stay the exception forever.
Seventy-one percent of our respondents said patients must be told whenever AI handles their logistics. Full transparency, every time. The remaining 29 percent preferred a conditional approach: disclose only when the AI impacts clinical care directly. Nobody chose not to disclose at all. Zero. That is worth sitting with for a moment.
But disclosure alone is not enough. A national CHAI survey of 1,456 patients, conducted by NORC at the University of Chicago, found that 93 percent of patients reported at least one concern about the use of AI in healthcare and 51 percent said AI actually makes them trust healthcare less. However, more than 80 percent said that trust would increase if clear accountability measures were in place. The data is telling us that transparency without accountability feels performative. Patients do not just want to be told AI is involved. They want to know who is responsible when it gets something wrong.
What I found even more interesting was the generational tension underneath these numbers. In the room discussion, several providers admitted they feel less compelled to explain AI’s role to older patients who don’t ask about it. Meanwhile, Gen Z patients are walking into appointments having already consulted three AI tools, compared treatment options on a symptom checker, and read a Reddit thread about their diagnosis. They are not passively receiving care. They are researching before the provider enters the room. One participant put it bluntly: “I have to be more prepared now because the patient already is.” The training norms inside organizations have to catch up with this. Staff need to know how to explain AI’s role, how to override a recommendation when their judgment says otherwise, and how to document the handoff between human and machine. Most organizations have not built this into onboarding or continuing education yet.
Nobody Knows Who Is Liable, and That Is a Problem
We asked who should bear primary legal liability if an autonomous clinical AI makes a harmful error. Sixty-seven percent said we need an entirely new model of shared liability. Fourteen percent pointed to the software vendor. Fourteen percent said the attending physician. Five percent said the health system. The current frameworks were built for a world where a human being made every clinical decision. When an AI agent auto-codes a charge and triggers an audit, or initiates a prior auth that delays care, the liability question gets murky in a way nobody has resolved yet.
The industry is basically saying: we know this is coming, we know the rules don’t fit, and we need new ones before something goes wrong. Until those rules exist, every agentic deployment carries legal risk that governance has to address explicitly. Oklahoma’s HB1915 proposed a comprehensive framework requiring governance bodies for AI oversight and performance evaluations tied to patient outcomes. Similar bills are expected across multiple states in 2026. Meanwhile, the Joint Commission’s voluntary AI certification will likely become a de facto standard. The regulatory patchwork is forming fast, and organizations without a governance framework are going to find themselves scrambling.
This is also why billing models are shifting. The smarter vendors are moving toward outcome-based pricing. If an agent improves your clean claim rate by 15 percent, the vendor earns based on that result. If it doesn’t deliver, the cost adjusts. That kind of pricing is not just commercially appealing. It is a governance mechanism. It forces vendors to own the performance of their agents instead of shipping a product and collecting a license fee. When your vendor’s revenue depends on the agent working correctly, you have a fundamentally different accountability relationship than when they just sell you seats.
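To see why outcome pricing doubles as governance, sketch the contract logic itself. A hypothetical formulation, with every number invented for illustration: the vendor earns a share of the value its agent actually unlocks, and the fee floors at zero when it underdelivers.

```python
# Hypothetical outcome-based fee: vendor is paid a share of the value
# created by measured clean-claim-rate improvement. Numbers illustrative.
def vendor_fee(baseline_rate: float, measured_rate: float,
               annual_claim_value: float, share: float = 0.20) -> float:
    """Rates are fractions (0.75 means 75%). No improvement, no fee."""
    improvement = max(0.0, measured_rate - baseline_rate)
    return share * improvement * annual_claim_value


# A 15-point improvement on $100M in annual claims, vendor keeps 20%.
print(vendor_fee(0.75, 0.90, 100_000_000))  # 3000000.0
```

The `max(0.0, ...)` is the governance clause: the vendor’s downside is tied to the agent’s performance, not to your procurement cycle.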
Where Agents Work and Where They Don’t (Yet)
The focus group was clear: start with admin, not clinical. Scheduling, patient access call centers, claims status, revenue cycle workflows. These are high-volume, repeatable processes with measurable outcomes and clear escalation paths. One participant’s patient access center handles 10,000 calls a week, and an AI agent could triage 60 percent of them. Others are already piloting transfer-of-care coordination agents that save hours of nursing time daily.
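What “clear escalation paths” means can be written down in a few lines of routing logic. A minimal sketch, assuming a call-triage agent that auto-handles only high-confidence, low-risk intents and hands everything else to staff; the intent names and threshold are made up for illustration.

```python
# Minimal triage-routing sketch. Intents and threshold are illustrative.
LOW_RISK_INTENTS = {"scheduling", "claims_status", "directions", "refill_status"}
CONFIDENCE_THRESHOLD = 0.90


def route_call(intent: str, confidence: float) -> str:
    """Autonomy only where the intent is low-risk AND confidence is high."""
    if intent in LOW_RISK_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return "agent"   # autonomous handling, logged for audit
    return "human"       # anything clinical or uncertain escalates


for intent, conf in [("scheduling", 0.97), ("chest_pain", 0.99),
                     ("claims_status", 0.62)]:
    print(f"{intent} ({conf:.2f}) -> {route_call(intent, conf)}")
# scheduling -> agent; chest_pain -> human even at 0.99 confidence,
# because risk, not confidence, is the first gate; claims_status -> human
```

Note the order of the checks: a chest-pain call never reaches the agent no matter how confident the model is. That is the distinction the room kept drawing between admin and clinical.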
The vendor ecosystem is moving in the same direction. Waystar announced in January 2026 that it is building an end-to-end autonomous revenue cycle using an agentic network. Epic now connects more than 1,000 hospitals and 22,000 clinics to TEFCA via Epic Nexus. Oracle’s Health Clinical AI Agent reported a roughly 30 percent reduction in daily documentation time across more than 30 specialties. UiPath launched agentic AI for prior authorization and claim denial management at ViVE 2026. Startups like VoiceCare AI are piloting agents at Mayo Clinic that make outbound calls to payers for benefit verification, sitting on hold for up to two and a half hours so staff don’t have to. McKinsey reports that in 2025, more than 30 percent of providers prioritized AI implementation for seven specific revenue cycle use cases, up from four or five in 2023 and 2024. The market is not waiting around.
Clinical decision-making is a different conversation, and the room was unanimous about it. A licensed operator has to stay in the loop for anything touching diagnosis, treatment planning, or medication management. That is not a knock on the technology. It is a recognition that governance maturity has to catch up before autonomy expands. We asked the room, “If an agent improved your clean claim rate by 15 percent, but you couldn’t fully explain how it coded a charge, would your compliance officer sign off?” The silence was its own answer.
Shadow AI Is Already Here
While leadership debates governance in the boardroom, the frontline has already decided. A Wolters Kluwer survey of 518 healthcare workers in December 2025 found that 40 percent had encountered unauthorized AI tools in their workplace and nearly 20 percent admitted to using them. Half said they did it for a faster workflow. A third said their organization lacked approved tools with the functionality they needed. One in ten said they had used an unauthorized AI tool for a direct patient care use case.
This is not a failure of people. It is a signal of unmet need. And the risk is real: IBM’s 2025 Cost of a Data Breach report ranked healthcare as the costliest industry for breaches for the fourteenth consecutive year, with the average breach costing $7.4 million. Twenty percent of surveyed organizations suffered a breach due to shadow AI specifically. The path forward is not to ban personal AI use. It is to provide governed alternatives that are fast enough and good enough that people stop reaching for the unauthorized ones.
When It Goes Wrong, How You Respond Matters More Than How You Prevented It
We asked what the best response is after a trust-breaking event. Forty-three percent said a transparent post-mortem shared with all staff. Twenty-nine percent wanted to retrain the model with staff input. Nineteen percent said mandate a human in the loop for that workflow permanently. Only 10 percent said shut the system down.
I was struck by how measured these responses were. These are not people who would panic at the first error. They understand that AI systems will get things wrong, the same way human systems do. What matters is whether the organization responds with transparency or silence. The 43 percent who want post-mortems are describing a culture that learns from failure. The 29 percent who want staff involved in retraining are saying something I hear constantly in my work: we don’t trust a model we can’t shape.
The Real Barrier Is Not the Technology
I asked the room, “If I asked you to show me the documented process for how a claim moves from submission to collection, could you?” Most people laughed, which was the answer. You cannot automate a workflow that does not exist on paper. An agent cannot follow a care pathway nobody has mapped. The number one barrier to agentic AI in healthcare is not model accuracy or regulatory ambiguity. It is undocumented processes. And with 70 percent of healthcare leaders reporting early to mid-stage AI maturity, according to a recent industry assessment, the gap between ambition and readiness is still wide.
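One way to run that test on yourself is to write the workflow down as data before automating any step of it. A toy sketch with invented states and transitions; the point is that an agent can only follow a path somebody has actually mapped.

```python
# Toy claim workflow as explicit states and allowed transitions.
# States are invented for illustration, not a billing standard.
CLAIM_WORKFLOW = {
    "drafted":     ["scrubbed"],
    "scrubbed":    ["submitted", "drafted"],   # back to draft on edit errors
    "submitted":   ["accepted", "rejected"],
    "rejected":    ["appealed", "written_off"],
    "appealed":    ["accepted", "written_off"],
    "accepted":    ["paid"],
    "paid":        [],
    "written_off": [],
}


def advance(state: str, next_state: str) -> str:
    """Refuse any transition nobody has documented."""
    if next_state not in CLAIM_WORKFLOW.get(state, []):
        raise ValueError(f"No documented path from {state!r} to {next_state!r}")
    return next_state


state = advance("submitted", "rejected")  # fine: documented transition
try:
    advance(state, "paid")                # nobody mapped rejected -> paid
except ValueError as err:
    print(err)
```

If you cannot fill in that dictionary for your own revenue cycle, the agent conversation is premature. That was the laughter in the room.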
I also asked whether they were building an AI strategy or buying an AI platform, because those are two very different things and only one survives a vendor pivot. Your EHR vendor is going to ship AI features whether you are ready or not. Epic, Oracle, and a growing roster of startups are embedding agents into workflows right now. The organizations that come out ahead will not be the ones running the most agents. They will be the ones that can tell you exactly what each agent does, who is accountable when it fails, and what happens next.
One participant said something at the end that stuck with me. I had asked everyone to name one thing they would do differently about AI governance in the next 90 days. She said, “I’m going to stop treating governance like a project and start treating it like a practice.” I don’t think I can say it better than that.
Ready to build an AI governance practice, not just a plan?
Talk to CNXN Helix: cnxnhelix.com



