AI in Wound Care & HBOT: Job Protections, Trust, and the Future of the Field
- mdavis107
The Real Stakes: Jobs, Compliance, and Patient Trust
Across every industry, frontline workers, office assistants, corporate staff, and skilled tradespeople are asking the same uneasy question: Will AI take my job? From manufacturing floors to executive suites, fears about automation and job security dominate headlines. Healthcare is no exception. As the AI revolution accelerates, clinicians are worried not only about their livelihoods but also about the safety and quality of patient care when roles are eliminated and algorithms step in.
That tension came into focus recently as hundreds of Kaiser Permanente employees rallied for contract protections against AI¹. Their message was clear: frontline staff want a voice in how technology is deployed. The concern isn’t isolated: researchers warn that poorly governed AI in medicine can amplify risks of bias, error, and compliance failures². Globally, frameworks are emerging that push health systems to adopt transparent, ethical standards³ — echoed in Kaiser’s own “7 Principles of Responsible AI in Healthcare”⁴ and broader policy work shaping the U.S. healthcare landscape⁵. For wound care and hyperbaric oxygen therapy (HBOT) programs, the question is even sharper: how do we adopt AI responsibly without eroding trust?
Kaiser Permanente, for its part, framed the issue differently. “We recognize that AI and new technologies influence how work is done, so we are committed to supporting our workforce as they learn and adopt more effective ways of working to deliver on our mission. AI automates tasks, not jobs, supporting employees and care providers by giving them more time to focus on caring for members and patients,” a spokesperson told Becker’s¹.
This isn’t just Kaiser’s story. Across industries, from journalism to Hollywood to manufacturing, AI is sparking the same labor debate: Is it a supportive tool or an untested job eraser? Healthcare is simply the latest, and perhaps highest-stakes, front line.

The Wound Care Perspective
The Kaiser rally is a reminder of what happens when technology enters the workplace without trust. Even if your wound care or HBOT team isn’t protesting outside the hospital, the same questions will surface quietly inside the clinic: Will this tool support me — or replace me?
What frontline staff see as erosion of their role, hospital leaders may frame as efficiency. That gap in perception is where resistance, burnout, or turnover can start. And when the public hears “AI protest,” the message isn’t about productivity — it’s about patient safety.
For wound care programs, the stakes are clear: AI adoption can’t just be about efficiency. It has to equip staff, protect compliance, and preserve the sense that their work still matters.
Where AI Touches Wound Care Today
From automated wound imaging and documentation tools to predictive analytics and revenue cycle automation, AI is already shaping the wound care space. These technologies promise new levels of efficiency — but they also intersect with the daily workflows of nurses, HBOT technologists, and program directors. The result? A sense of excitement, tempered by caution.
A Glimpse Into What’s Coming
The AI conversation is no longer theoretical. Ambience Healthcare recently announced Chart Chat, an Epic-integrated AI “copilot” that lets clinicians query information directly within the electronic health record (EHR)⁶. By drawing on patient history and BMJ Best Practice, backed by advanced fine-tuning, it can surface answers in seconds that once required minutes of searching.
For wound care and HBOT teams, that kind of functionality could mean:
- Rapid access to qualifying diagnosis details for HBOT.
- Faster identification of relevant comorbidities or contraindications.
- Easier cross-checking of clinical documentation with payer policies.
The upside is obvious. But so are the risks: without governance, an EHR-embedded tool that’s wrong even once can create far more disruption than it prevents. A single inaccurate note can ripple into payer denials, compliance flags, or audit exposure.
For senior leaders, those risks are front of mind. But for frontline nurses, HBOT techs, and other allied health staff, documentation can feel like a routine checklist rather than a narrative that defends care. If an AI-generated note misses a key qualifier or frames the story poorly, it’s not just a clerical error — it’s a compliance liability.
The Scrutiny Gap
Scrutiny is nothing new in wound care and HBOT. Clinicians are used to chart reviews, payer audits, and the constant pressure of defending documentation. Adding AI assistants into the compliance equation could complicate things further, as humans and algorithms are judged on very different terms. Workers frame AI as “untested and unregulated,” while health systems counter that it’s “just a tool to support clinicians.” That clash in perception creates a new layer of risk for programs trying to stay compliant and defensible.
How Errors Are Handled with Clinicians
An allied healthcare professional who consistently documents appropriately and provides excellent patient care still works under constant oversight: chart reviews, compliance checks, and the occasional corrective note or Targeted Probe and Educate (TPE) review. A wound care nurse or hyperbaric technologist who falls short in their duties risks retraining, closer supervision, or, in some cases, termination and replacement. But in every case, a human can explain their reasoning, adjust workflows, and be retrained to improve processes.
The Algorithmic Margin for Error
An AI that executes its assigned tasks consistently but makes a single error in documentation could trigger payer denials, expose the program to audit risk, or invite regulatory scrutiny. Unlike a human, an algorithm cannot explain why it made a choice, nor can it be “retrained” in real time in a treatment room. One flaw can multiply silently, affecting every patient record it touches.
This is the paradox: a human mistake can be explained, corrected, and turned into a teaching moment. An AI mistake lives on the record, multiplying until it becomes a compliance crisis.
Protecting Roles, Preserving Trust
The real issue isn’t whether AI will enter wound care — it already has. The issue is how it will be governed. If tools are rolled out without clear guardrails, staff risk being reduced to button-pushers instead of valued clinical contributors. That shift erodes both trust and program stability.
And the stakes go beyond friction between clinicians and leadership. Patients often choose where they receive care because of human connection — the nurse who lingers to check on them in the treatment room, the hyperbaric technologist who remembers their family, the physician who explains a complex treatment in plain, accessible language. If patients begin to feel like they’re receiving care from a screen instead of a team, confidence in the entire program can fade.
With the right approach, AI doesn’t replace people — it equips them. A well-designed framework can:
- Strengthen documentation so records are audit-ready.
- Reinforce compliance by aligning notes with payer expectations.
- Support reimbursement defense by surfacing the right qualifiers at the right time.
- Free up clinicians to spend more time with patients, not less.
Trust takes root when staff see AI as a tool in their toolbox rather than a silent critic — and when patients see that technology clarifies their story of care instead of clouding it.
SHS Perspective
At Shared Health Services, we believe AI should be a support tool — never a substitute for patient care. Our role is to help hospitals and physician practices evaluate technologies responsibly, with compliance, staff voice, and patient experience at the center. In wound care and HBOT, people carry the mission. Technology should amplify their expertise, strengthen documentation, and defend reimbursement — but it can never replace human judgment or connection.
References
1. Bruce G. Kaiser Permanente workers rally for AI job protections. Becker’s Hospital Review. Published August 19, 2025. Accessed August 21, 2025. https://www.beckershospitalreview.com/healthcare-information-technology/ai/hundreds-of-kaiser-permanente-workers-rally-for-ai-job-protections/
2. Badal K, Lee CM, Esserman LJ. Guiding principles for the responsible development of artificial intelligence tools for healthcare. Communications Medicine. 2023;3(1). Accessed August 21, 2025. doi:10.1038/s43856-023-00279-9
3. Moosmayer K. The 4 principles of responsible AI in medicine. World Economic Forum. Published January 16, 2025. Accessed August 21, 2025. https://www.weforum.org/stories/2025/01/the-4-principles-of-responsible-ai-in-medicine/
4. Yang D. AI in Health Care: 7 Principles of Responsible Use. Kaiser Permanente. Published August 19, 2024. Accessed August 21, 2025. https://about.kaiserpermanente.org/news/ai-in-health-care-7-principles-of-responsible-use
5. Lee NT. Health and AI: Advancing responsible and ethical AI for all communities. Brookings. Published March 3, 2025. Accessed August 21, 2025. https://www.brookings.edu/articles/health-and-ai-advancing-responsible-and-ethical-ai-for-all-communities/
6. Bruce G. Ambience Healthcare unveils Epic-integrated AI copilot. Becker’s Hospital Review. Published August 19, 2025. Accessed August 21, 2025. https://www.beckershospitalreview.com/healthcare-information-technology/ai/ambience-healthcare-unveils-epic-integrated-ai-copilot/