
Hospitals have thrown their doors open to a new kind of technology that can think, act and make changes faster than any human. And with healthcare generating more than 30% of the world’s data, the industry has, for better or worse, become the ultimate proving ground for AI, from all the good it can support to all the threats it amplifies.
AI is being woven into diagnostics, scheduling, documentation and the overarching integration and interoperability plans that make healthcare better. And it’s happening faster than security and privacy safeguards can keep up.
The same technology that drives precision, personalization and efficiency in care also multiplies the entry points available to attackers—and according to a recent HIMSS survey, more than 72% of health leaders report high levels of concern about data privacy risks. Keeping this new healthcare environment safe means rethinking how we interact with and trust AI.
Power with consequences
According to HIMSS, more than 60% of healthcare professionals value AI for its ability to recognize health patterns and identify potential diagnoses. And more than 86% of respondents have already implemented AI in some way. Agentic AI, in particular, is making its way across the industry without pause.
Before AI, there was one tool for every task in healthcare. AI platforms do not fit this paradigm: a single AI platform often serves a multitude of use cases and interacts with multiple systems.
The problem is that agentic AI multiplies a healthcare system’s exposure to attacks. Since a single model can touch dozens of systems and move data between them, it creates new vulnerabilities systemwide that security teams can’t always see until something goes wrong. That’s because most users don’t fully understand what agentic AI can do or what permissions it can inherit and take with it through the system.
Because agentic AI can make decisions and carry out tasks, these systems can act on behalf of users to perform complex operations and connect tools across health networks automatically. Anything a person can access, the AI can access—unless guardrails are put in place to prevent it—and it can do it faster than humans without fatigue or hesitation.
CISOs must treat agentic AI not like a tool or resource, but like any other user or operator. That means managed access, recorded actions and required encryption and multifactor authentication before it touches or manages health information or major system components. Without these guardrails in place, a single AI agent could pose far more risk than a compromised user.
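To make that concrete, here is a minimal sketch, in Python, of what treating an AI agent as a first-class principal could look like. Everything here (the AgentPrincipal class, the scope names, the audit log) is an illustrative assumption, not any specific product’s API:

```python
# Minimal sketch: an AI agent as a principal with scoped grants,
# mandatory MFA and encryption checks, and an audit trail for every
# attempted action. All names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPrincipal:
    agent_id: str
    granted_scopes: set[str]          # least privilege: explicit grants only
    mfa_verified: bool = False        # proven per session, never assumed
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, scope: str, channel_encrypted: bool) -> bool:
        """Gate every action the agent attempts, exactly as for a human user."""
        allowed = (
            scope in self.granted_scopes
            and self.mfa_verified
            and channel_encrypted     # no health data over unencrypted channels
        )
        # Record the decision either way, so security teams can see
        # what the agent tried to do, not just what it succeeded at.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id,
            "scope": scope,
            "allowed": allowed,
        })
        return allowed

# Usage: an agent with a narrow grant is denied a scope it never received.
agent = AgentPrincipal("scheduling-agent-01", {"scheduling:write"}, mfa_verified=True)
assert agent.authorize("scheduling:write", channel_encrypted=True)
assert not agent.authorize("ehr:read", channel_encrypted=True)  # denied and logged
```

The point is the shape of the control: explicit grants, per-action checks and a record of every attempt, successful or not.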
Amplification of threats
For every new AI-based tool that promises to modernize healthcare, CISOs have to assume that attackers are adopting the same tools and building on them. AI is already powering phishing, ransomware and data-theft campaigns that are faster and more difficult to detect.
Cyber attackers now use AI to combine stolen health data with insurance and personal information to craft urgent, personalized messages that lack the telltale signs of phishing and social engineering campaigns, such as spelling and grammatical errors. Industrywide, health records remain the most valuable asset on the dark web because they contain enough information to link clinical and financial details to a person’s identity. Today, a patient could receive a message offering a “cure” based on their very real diagnosis; that’s how readily data can be turned against the very people it was meant to help.
Ransomware operations are also becoming more sophisticated, with AI helping cybercriminals map networks, locate valuable data and trigger attacks automatically. In fact, a recent MIT Sloan survey found that up to 80% of new ransomware attacks are enhanced or run by AI. And a single systemwide vulnerability can be exploited in mere minutes.
For CISOs, the reality is clear: prevention alone isn’t enough. Resilience depends on active monitoring that supports early detection and containment, and on immutable backups that allow critical operations to be rebuilt without negotiation.
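As one illustration of the immutable-backup piece, here is a minimal sketch using Amazon S3 Object Lock, a widely available write-once, read-many mechanism; the bucket and key names are hypothetical, and the bucket would need Object Lock enabled when it was created:

```python
# Minimal sketch of an immutable ("WORM") backup write. Bucket, key and
# region here are illustrative assumptions, not recommendations.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # region is illustrative

def write_immutable_backup(bucket: str, key: str, payload: bytes, retain_days: int = 90):
    """Store a backup object that cannot be altered or deleted until the
    retention date passes, even by administrators (COMPLIANCE mode)."""
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=payload,
        ObjectLockMode="COMPLIANCE",  # no one can shorten or remove the lock
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=retain_days),
    )

# Usage (hypothetical bucket): ransomware that compromises credentials
# still cannot encrypt or delete these copies before the lock expires.
# write_immutable_backup("hospital-backups-locked", "ehr/2025-01-01.tar.gz", b"...")
```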
‘Zero trust’ to build trust
Hospital systems and security frameworks weren’t built for autonomous systems that have the same privileges as humans. And here, where so much is at stake, zero trust should be the default.
This means governance has to extend beyond firewalls and compliance checklists to manage how AI operates and thinks, what data it can access and who is accountable when something goes wrong. Every request for access must be verified continuously, and environments must be segmented to prevent one error or breach from sweeping through the entire system. Every vendor and system must be vetted thoroughly before deployment, with explicit clarity on data ownership, prompts, outputs and overrides. All of this takes “free” tools out of the running in healthcare, as they typically come with the highest cost of all: exchanging data and control for free software.
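A minimal sketch of that continuous-verification posture, with hypothetical segment names and request fields, might look like this:

```python
# Minimal zero-trust sketch: every request is evaluated on its own, with
# no trust carried over from earlier requests, and movement across
# segments is denied by default. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    principal: str        # human user or AI agent; both verified the same way
    source_segment: str
    target_segment: str
    credential_valid: bool
    device_attested: bool

# Explicit allow-list of segment pairs; anything absent is denied,
# so one breached segment cannot sweep through the entire system.
ALLOWED_PATHS = {("clinical-apps", "ehr-core"), ("scheduling", "clinical-apps")}

def evaluate(req: Request) -> bool:
    """Verify continuously: re-check credentials, device posture and
    segmentation on every single request, never just at login."""
    return (
        req.credential_valid
        and req.device_attested
        and (req.source_segment, req.target_segment) in ALLOWED_PATHS
    )

# An AI agent in the scheduling segment cannot reach billing directly,
# even with valid credentials, because that path was never granted.
req = Request("scheduling-agent-01", "scheduling", "billing", True, True)
assert evaluate(req) is False
```

The design choice that matters is the allow-list: any path not explicitly granted is denied, so a compromised credential in one segment does not become a skeleton key for the rest.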
Real defense isn’t just about having controls; it’s about those controls doing the job well. And those organizations that treat AI security as part of patient safety will be the ones that maintain trust where others lose it.
Automation meets fortification
Every improvement in AI changes the foundation of healthcare and healthcare security. The same technology that is improving care and patient lives is rewriting the rules of technology defense, and it’s doing so almost daily. Cybercriminals are moving faster than ever, using AI to automate and perfect their attacks. We’re now faced with a reality where it’s no longer a question of whether we can prevent every breach, but whether we have the tools, resources and protocols in place to get back to work.
Healthcare CISOs have the opportunity now to build defenses that withstand what’s coming, and that defense requires three key things:
- Auditing AI exposure to identify every system, dataset and workflow connected to AI (a minimal inventory sketch follows this list).
- Rebuilding and strengthening governance to define ownership, access and accountability for all AI operations before any of them is granted access to anything.
- Reinforcing and building upon defenses to ensure that zero-trust segmentation, strong authentication and immutable backups are at the heart of the plan.
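As a starting point for the first item, a minimal exposure-audit sketch might look like the following; the registry format and field names are assumptions, since real inventories would be drawn from configuration databases, API gateways and vendor contracts:

```python
# Minimal sketch of auditing AI exposure: walk a (hypothetical)
# integration registry and surface every system an AI component can
# reach, with PHI-touching connections flagged first.
integrations = [
    {"system": "ehr-core",   "connects_to": "documentation-agent", "uses_ai": True,  "touches_phi": True},
    {"system": "scheduling", "connects_to": "triage-bot",          "uses_ai": True,  "touches_phi": False},
    {"system": "billing",    "connects_to": "claims-export",       "uses_ai": False, "touches_phi": True},
]

def ai_exposure_report(entries: list[dict]) -> list[dict]:
    """Return only AI-connected entries, highest risk (PHI access) first,
    so governance and zero-trust controls can be applied before anything
    else is granted access."""
    exposed = [e for e in entries if e["uses_ai"]]
    return sorted(exposed, key=lambda e: e["touches_phi"], reverse=True)

for entry in ai_exposure_report(integrations):
    flag = "PHI" if entry["touches_phi"] else "no PHI"
    print(f"{entry['system']} -> {entry['connects_to']} ({flag})")
```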
It’s resilience, not speed, that marks progress. And in this age of agentic AI—and whatever comes next—resilience will depend not on how well hospitals use AI but on how well CISOs understand, contain and govern the power AI is given.
About Ben Scharfe
Ben Scharfe is the EVP for AI at Altera Digital Health. A seasoned leader in the healthcare technology sector, Ben has dedicated more than ten years to fostering growth and innovation at Harris and Altera. His expertise spans financial leadership, client support and executive management, contributing to Harris’ robust expansion, including the successful acquisition and integration of numerous entities. Currently leading AI initiatives, Ben champions the adoption of AI and innovation with the purpose of enhancing patient-provider interactions and patient outcomes.

