
The Battle of the Bots: What EHR Decision Engines Mean for Wound Care in 2026


The Shift Nobody Saw Coming


For decades, the electronic health record lived up to its name. It was a record. A place to document what happened, store what was ordered, and generate what got billed. Clinicians clicked through menus, typed their notes, and moved on.


The difference? The record keeper documents the past. The decision engine predicts the future, using real-time data to flag risks like sepsis or stalled wound healing before they escalate.


Today’s EHR is being re-architected into an advanced Clinical Decision Support (CDS) engine. Instead of passively storing data, modern systems actively analyze it. They track whether patients are adhering to treatment plans. They surface risks before symptoms appear. They generate clinical documentation by listening to the visit itself.


The EHR is no longer just where care is recorded. It’s where care is guided.


[Illustration: sleek clinical AI bots on the left square off against payer bots stamped "DENIED" on the right, a warm-lit rural hospital standing between them and claims paperwork littering the ground.]

But here’s the plot twist most health systems missed: payers built decision engines too.


While providers invest in AI to streamline documentation and catch compliance gaps early, insurers have deployed their own algorithmic systems at scale. These are not clinical support tools. They are denial engines, designed to adjudicate claims in bulk, flagging thousands for rejection in seconds.


The result is an AI arms race. Decision engine versus decision engine. And wound care and HBOT programs are caught squarely in the middle.


In an era when the ONC’s HTI-1 transparency rules are pulling back the curtain on algorithms, wound care leaders can no longer treat understanding both sides of this arms race as optional. It is the survival strategy for 2026.


The good news is that the regulatory environment is finally catching up. New federal guidance, state-level human-in-the-loop laws, and mounting legal pressure are forcing transparency into systems that operated as black boxes for years. Programs that understand these shifts, and prepare for them, will thrive. Those that don’t risk being outpaced by algorithms they never learned to see.


What "Decision Engine" Actually Means


From Clicking Menus to Monitoring Plans


The traditional EHR was transaction-oriented. Click to place an order. Click to complete a note. Click to bill. The system recorded what clinicians did, but it never asked a more important question: was the plan working?


Decision engines invert that logic. They are plan-centric, built to continuously monitor whether a patient is progressing toward a defined clinical goal and to alert the care team when something drifts off course.


The most visible example is the ambient scribe. Instead of spending twenty minutes typing after every visit, clinicians speak naturally while AI listens, interprets, and drafts documentation in the background. This isn’t a marginal convenience. Nearly 70% of physicians report EHR-related stress and burnout, much of it driven by documentation burden. Tools that cut charting time by 40% or more aren’t just efficiency upgrades. They’re retention strategies.


Beyond documentation, predictive alerting is where decision engines earn their name. Rather than generic allergy pop-ups that everyone clicks past, these systems surface patient-specific risk. They flag elevated likelihood of sepsis, acute kidney injury, or wound deterioration, sometimes hours before overt symptoms appear.


That’s no longer record-keeping. That’s decision support.


Implementation Barriers: Dirty Data, Black Box AI, and Alert Fatigue


On paper, the promise is transformative. In practice, adoption has been uneven.


Only about 30% of healthcare organizations report successfully integrating AI-driven workflows into daily operations. Most are still navigating cost overruns, scope creep, and frontline resistance.

Three technical barriers consistently slow progress:


Dirty data. Legacy EHRs are filled with incomplete fields, inconsistent formats, and years of documentation workarounds. Decision engines trained on messy data produce unreliable recommendations. There is no algorithmic shortcut around data hygiene.


The black-box trust gap. Many models generate recommendations without explaining their reasoning. Clinicians are understandably hesitant to follow guidance they can’t interrogate, especially when liability still attaches to their signature.


Alert fatigue 2.0. When decision engines fire too often, clinicians tune them out. The same burnout problem reappears, just in algorithmic form. More alerts don’t improve care. Better-targeted alerts do.


None of this means decision engines aren’t worth pursuing. It means success hinges less on the algorithm itself and more on implementation discipline: workflow integration, clean data, and trust built with the clinical team.


The Regulatory Landscape Just Shifted


If decision engines are going to influence clinical care, regulators want to understand how they work. In 2026, they’re finally forcing the issue.


After years of ambiguity, federal and state authorities are drawing clearer lines around what AI systems can recommend, how transparent they must be, and where human judgment remains non-negotiable. 


FDA's January 2026 Update: Single-Recommendation Freedom


The FDA’s updated guidance on Clinical Decision Support (CDS) represents a meaningful shift in how AI tools are classified and regulated.


The headline change is single-recommendation freedom.


Historically, regulators pushed AI tools to present clinicians with multiple options rather than a single recommendation, preserving human choice by design. Under the January 2026 guidance, if only one treatment path is clinically appropriate based on established standards of care, an AI system may recommend it directly without being classified as a regulated medical device.


There’s a critical condition: the clinician must be able to independently verify the reasoning.


That requirement exposes the “time-critical” trap. If a system must act instantly, such as triggering a stroke response or emergency intervention, it remains a regulated medical device. The logic is straightforward: when there is no time for human verification, the AI is the decision maker. Furthermore, recommendations must be grounded in “well-understood and accepted sources”, including peer-reviewed literature, clinical guidelines, and established standards of care. Proprietary black-box logic is officially a regulatory liability.


The Transparency Mandate: HTI-1 and FAVES


The Office of the National Coordinator for Health IT (ONC) added its own enforcement layer through the HTI-1 rule.


As of early 2025, EHR vendors are required to disclose how predictive algorithms function. Clinicians are entitled to a baseline level of transparency, including training data sources, performance characteristics, and known limitations.


This framework is captured in the FAVES standard:


Fairness

Appropriateness

Validity

Effectiveness

Safety


If a decision engine cannot demonstrate these attributes clearly and consistently, regulators are signaling that it should not be influencing clinical care. Silent algorithms are no longer acceptable infrastructure.


State-Level “Human-in-the-Loop” Laws


While federal agencies refine guidance, states are moving faster and drawing harder boundaries.


Illinois and Georgia now prohibit insurers and providers from making coverage or treatment decisions based solely on AI output. A human decision-maker with authority to override the algorithm must be involved. Texas and Illinois have also enacted informed-consent requirements, mandating that patients be informed, in some cases explicitly, when AI is used to support medical decision-making.


The message across jurisdictions is consistent: Algorithms may support clinical judgment, but they cannot replace it.


The Battle of the Bots: Payers Have Decision Engines Too


Provider-side decision engines are built to help. They flag clinical risk, streamline documentation, and surface compliance gaps before they become problems.


Payer-side decision engines are built for something else entirely: Industrialized Friction.


Inside the Denial Engine


While hospitals invest in AI to support care delivery, insurers have deployed algorithmic systems optimized for a single objective: cost containment at scale.


These are not clinical decision support tools. They are denial engines, designed to adjudicate claims in bulk by scanning for statistical patterns that trigger automatic rejection.


The scale is staggering:


Cigna’s PXDX system reportedly allowed medical directors to "review" and deny hundreds of claims in seconds without opening a patient file.


UnitedHealthcare’s nH Predict algorithm established predetermined lengths of stay, triggering automatic denials once a "predicted" date was hit—regardless of actual patient progress.


The Impact: AI-driven systems can generate rejection rates up to 16 times higher than traditional human review.


This isn’t precision medicine. It’s industrialized friction.


The Appeal Bet: Why 90% of Denials Fail


Here’s the part payers rarely emphasize: most of these denials don’t survive scrutiny.


Insurers are betting on provider fatigue. While AI flags roughly 50% of claims, only about 1% are ever formally appealed. Payers know that the administrative burden of fighting a denial is often more expensive than the claim itself.


But when appeals do move forward, the outcome is revealing.


Roughly 90% of AI-driven denials that reach an administrative law judge are overturned.


That’s not a marginal failure rate.


The system isn't optimized for accuracy; it's optimized for deterrence. A denial works not because it’s right, but because it’s exhausting to fight.


The 2026 Lawsuit Tracker: AI on Trial


The courts are beginning to respond to "black box" adjudications.


Three of the nation’s largest insurers now face class-action litigation tied directly to algorithmic claims processing:


UnitedHealth Group (nH Predict): Accused of using predictive models to prematurely terminate Medicare Advantage coverage. Court filings cite error rates approaching 90% when denials were appealed.


Cigna (PXDX): Allegations that medical directors spent an average of 1.2 seconds reviewing each flagged claim, effectively rubber-stamping batch denials affecting more than 300,000 coverage requests.


Humana (nH Predict via NaviHealth): Similar claims that algorithmic targets were used to deny post-acute and rehabilitative care to vulnerable beneficiaries.


These cases are ongoing. But the pattern is already reshaping compliance expectations across the industry.


SHS Insight: At Shared Health Services, we’ve long trained wound care teams to build documentation that tells a complete clinical story. Not just to satisfy charting requirements, but to produce records that withstand scrutiny, whether the reviewer is human or algorithmic.


In the age of denial engines, that discipline isn’t optional.


It’s armor.


What This Means for Wound Care and HBOT


Every service line feels the pressure of algorithmic review, but wound care and hyperbaric oxygen therapy feel it first.


These programs are in the crosshairs because they involve extended treatment courses, advanced therapies with narrow coverage criteria, and rigid documentation requirements that denial engines are trained to interrogate.


In other words, they generate exactly the data patterns payer algorithms are built to scrutinize.


Three Auto-Denial Triggers Wound Care Teams Must Know


Payer algorithms don’t read charts; they scan for patterns. Certain documentation gaps trigger automatic rejection before a human reviewer ever sees the claim.


1. The Conservative Care Gap: Most Local Coverage Determinations (LCDs) require 30 days of standard wound care before advanced therapies are approved. If your documentation lacks a clear "Start" and "End" date for failed therapies, the algorithm flags it as "criteria not met."


2. Wagner Grade and Wound Area Mismatch: Coverage for Cellular and/or Tissue-Based Products (CTPs) is frequently tied to wound severity and surface area. When measurements recorded in the chart don’t align with the units billed, algorithms flag claims for “excessive quantity” or “medical necessity not established.”


Inconsistent measurement technique across visits creates the appearance of billing irregularities, even when treatment decisions are clinically appropriate.


The algorithm isn’t judging care quality. It’s detecting inconsistency.


3. Frequency Cap Violations: Many payers limit CTP applications to a set number per wound episode, often eight. Once that threshold is crossed, subsequent claims auto-deny regardless of healing trajectory.


The algorithm doesn’t know the wound is 80% closed and needs two final applications to complete healing. It only knows the cap has been exceeded.


Context never enters the calculation.
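
None of this pattern-matching is exotic. The Python sketch below shows the kind of rule logic the three triggers describe; the field names, the 30-day window, and the eight-application cap are illustrative assumptions standing in for whatever the governing LCD or payer policy actually specifies, not a reconstruction of any insurer’s real system.

from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative thresholds only; real values come from the governing LCD and payer policy.
CONSERVATIVE_CARE_DAYS = 30
CTP_CAP_PER_EPISODE = 8

@dataclass
class ClaimSnapshot:
    conservative_start: Optional[date]   # documented start of standard wound care
    conservative_end: Optional[date]     # documented date conservative care failed
    documented_area_cm2: float           # wound area recorded in the chart
    billed_units_cm2: float              # CTP units on the claim
    applications_this_episode: int       # CTP applications billed for this wound episode

def auto_denial_flags(claim: ClaimSnapshot) -> list:
    """Return the triggers a rules-based payer review would raise for this claim."""
    flags = []
    # Trigger 1: conservative care gap (missing or short 30-day window)
    if claim.conservative_start is None or claim.conservative_end is None:
        flags.append("criteria not met: conservative care start/end dates missing")
    elif (claim.conservative_end - claim.conservative_start).days < CONSERVATIVE_CARE_DAYS:
        flags.append("criteria not met: conservative care window shorter than 30 days")
    # Trigger 2: wound measurements vs. units billed mismatch
    if claim.billed_units_cm2 > claim.documented_area_cm2:
        flags.append("excessive quantity: billed units exceed documented wound area")
    # Trigger 3: frequency cap exceeded, regardless of healing trajectory
    if claim.applications_this_episode > CTP_CAP_PER_EPISODE:
        flags.append("frequency cap exceeded for this wound episode")
    return flags

Notice what never appears in that function: any question about why the care was delivered.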


The Upside: Decision Engines That Work FOR You


This is where provider-side decision engines change the equation.


When implemented correctly, clinical decision support systems don’t fight payer algorithms. They preempt them.


A well-implemented clinical decision support system can:


  • Track conservative care windows automatically, flagging missing milestones before claims are submitted

  • Standardize wound measurement protocols, reducing variation that triggers algorithmic scrutiny

  • Alert teams before frequency caps are reached, allowing clinical and billing staff to coordinate coverage strategy

  • Surface missing documentation elements while the patient is still in treatment, not after the claim is denied


The goal isn’t to game the system. It’s to ensure the care already being delivered is documented in a way denial engines can’t misinterpret.
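
As a rough sketch of how those checkpoints can live inside the daily workflow rather than after it, the Python example below raises alerts while the patient is still in treatment. The two-application warning threshold, the field names, and the alert wording are assumptions for illustration, not a description of any particular EHR’s CDS module.

from dataclasses import dataclass

CTP_CAP_PER_EPISODE = 8     # illustrative cap; confirm against the applicable payer policy
WARN_WHEN_REMAINING = 2     # alert the team when this many applications remain

@dataclass
class EpisodeStatus:
    applications_used: int
    conservative_end_documented: bool
    measurements_consistent: bool

def in_treatment_alerts(status: EpisodeStatus) -> list:
    """Alerts surfaced to clinical and billing staff before any claim is submitted."""
    alerts = []
    remaining = CTP_CAP_PER_EPISODE - status.applications_used
    if remaining <= 0:
        alerts.append("Episode cap reached or exceeded; further applications will auto-deny.")
    elif remaining <= WARN_WHEN_REMAINING:
        alerts.append(f"Only {remaining} CTP application(s) remain before the episode cap; coordinate coverage strategy now.")
    if not status.conservative_end_documented:
        alerts.append("Conservative care end date is not yet documented; the claim will flag on submission.")
    if not status.measurements_consistent:
        alerts.append("Wound measurements vary across visits; re-verify technique before billing.")
    return alerts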


The ROI Reality Check: Turning Data into Dollars


Early adopters of AI-assisted compliance tools aren't just saving time; they are protecting their bottom line.


  • Documentation time: Reduced from roughly 20 minutes per patient to under 5 minutes (up to an 85% improvement)

  • Avoidable denials: 10-20% reduction in claims requiring rework

  • Administrative efficiency: 23% average workflow improvement

  • Pressure injury prevention: One health system reported reducing annual pressure injury costs from $3.6M to $700K through predictive alerting

  • HBOT completion rates: Programs using adherence tracking see healing rates climb from 60% to 75%+ when patients complete prescribed courses


These aren't aspirational projections. They're operational results from early adopters who invested in implementation discipline, not just software licenses.


SHS Insight: At SHS, we help wound care teams build documentation workflows that satisfy both clinical standards and algorithmic review criteria. Not by adding complexity, but by embedding compliance checkpoints into the care process.


The best defense against a denial engine isn't a bigger appeals department.


It's a chart that never triggers the flag in the first place.


The Arms Race: Fighting Fire with Fire


The market response to denial engines was predictable: build defense bots.


A new generation of provider-side AI now scans denial letters, cross-references clinical documentation, and drafts appeal responses in minutes rather than hours. These Autonomous Appeal Systems work backward from the rejection, identifying missing elements, mislabeled fields, or coding mismatches that triggered the denial in the first place.


Some go further. "Agentic AI" systems perform predictive denial management by pre-scrubbing claims before submission. These tools flag documentation gaps, LCD compliance failures, and coding inconsistencies before a payer algorithm ever sees the file. The objective is straightforward: fix the problem before it becomes a denial.
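
Stripped of the buzzwords, a pre-scrub is mostly plumbing. The sketch below assumes a hypothetical set of check functions, such as the auto_denial_flags example earlier, and simply runs each one over every pending claim, returning a worklist of gaps to fix before anything reaches a payer algorithm.

from typing import Callable, Iterable

# A "check" takes a claim record and returns a list of human-readable gaps (empty = clean).
Check = Callable[[object], list]

def pre_submission_worklist(claims: Iterable, checks: list[Check]) -> dict:
    """Run every check over every pending claim; return claim_id -> outstanding gaps."""
    worklist = {}
    for claim in claims:
        gaps = []
        for check in checks:
            gaps.extend(check(claim))
        if gaps:
            # Only claims with outstanding gaps need human attention before submission.
            worklist[getattr(claim, "claim_id", repr(claim))] = gaps
    return worklist

Whatever tool runs that loop, the checks themselves encode the same rules the payer is expected to apply, which is why the documentation habits behind them matter more than the software.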


It's a logical evolution, though it's not a complete solution.


What AI Can't Replace: The Clinical Narrative


Algorithms excel at pattern detection. They can identify missing dates, inconsistent measurements, and exceeded thresholds with speed and consistency.


What they cannot do is construct the clinical narrative that makes complex cases defensible.

A denial engine does not know why a wound required twelve skin substitute applications instead of eight. It only knows the cap was exceeded. A successful appeal doesn’t merely cite the rule. It explains the patient’s clinical course, the response to therapy, and why continued treatment was medically necessary.


That explanation comes from clinical judgment. It comes from understanding payer logic well enough to anticipate objections before they appear. It comes from documentation habits built prospectively, not retrofitted after a rejection arrives.


Automation can support the process, but it cannot substitute for the storytelling found in a well-built chart.


SHS Insight: SHS has operated on a human-in-the-loop philosophy for more than twenty-five years, long before regulators formalized the concept. We equip wound care teams with the training, documentation discipline, and compliance frameworks that allow advanced technology to work as intended.


The most sophisticated algorithm in the world can’t rescue a chart that was never built to tell a complete clinical story.


Preparing Your Wound Care Program for the Battle of the Bots


The battle of the bots isn't coming; it's here. Programs that wait for “clarity” will find themselves reacting to denials instead of preventing them.


Preparation doesn't require a million-dollar software license—it requires the operational discipline to make your data "bot-proof."


Four Steps to Algorithm-Resistant Documentation


1. Audit your conservative care documentation: Review your last twenty Cellular and/or Tissue-Based Product (CTP) or HBOT claims. Can you find clear "Start" and "End" dates for the 30-day window? If a human can't find them in 30 seconds, a payer algorithm has already denied the claim (a minimal audit sketch follows this list).


2. Standardize wound measurement protocols: Variation is a denial trigger. Whether you use 3D imaging or manual rulers, consistency beats perfection, because consistent technique is what avoids algorithmic flags.


3. Track frequency caps proactively: Don't discover you've exceeded an episode limit after the denial arrives. Embed tracking into your workflow and use your EHR’s Clinical Decision Support (CDS) so clinical and billing teams can coordinate before thresholds are crossed.


4. Document the "Why," not just the "What": A chart that lists interventions without explaining clinical reasoning is a chart built for denial. Every note should be able to demonstrate the same qualities (Fairness, Appropriateness, Validity, Effectiveness, and Safety, the FAVES criteria) that HTI-1 now demands of the algorithms reviewing it.
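
Step 1 can start smaller than it sounds. The sketch below, using the same illustrative field names as the earlier examples, sorts a batch of recent claims into "documented" and "at risk" buckets based on whether a clear 30-day conservative care window can be found; the twenty-claim sample size and the data layout are assumptions, not a prescribed audit format.

def conservative_care_audit(claims: list, required_days: int = 30) -> dict:
    """Each claim is a dict with 'id', 'conservative_start', and 'conservative_end' keys.
    Start and end values are expected to be datetime.date objects pulled from the chart."""
    result = {"documented": [], "at_risk": []}
    for claim in claims:
        start = claim.get("conservative_start")
        end = claim.get("conservative_end")
        if start and end and (end - start).days >= required_days:
            result["documented"].append(claim["id"])
        else:
            result["at_risk"].append(claim["id"])
    return result

# Hypothetical usage against the last twenty CTP or HBOT claims pulled from billing:
# summary = conservative_care_audit(last_twenty_claims)
# print(f"{len(summary['at_risk'])} of {len(last_twenty_claims)} would flag on automated review.")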


SHS Insight: Building algorithm-resistant documentation isn't a one-time project — it's an operational discipline. SHS partners with wound care teams to embed compliance checkpoints into daily workflows so audit readiness becomes habit, not crisis response.


Final Word: The Chart Is the Battlefield for Wound Care Compliance


The AI arms race in healthcare isn't theoretical; it's operational. Decision engines are already shaping what gets documented, what gets submitted, and what gets paid.


Payer algorithms will continue to evolve. Provider tools will respond. Regulations like HTI-1 will tighten. Through all of it, one constant remains: the clinical story captured in the chart is the foundation everything else depends on.


Programs that invest in documentation discipline today won't just survive the "Battle of the Bots"—they'll be positioned to thrive regardless of which algorithms come next.


The technology will change, but the fundamentals of medical necessity won't.


Ready to "Bot-Proof" your wound care or HBOT program?


Don't wait for the next denial. Let’s build your clinical armor today.


Call (800) 474-0202 or email sales@sharedhealthservices.com
