
As hyperbolic words go, transformation ranks near the top of the list. Yet when something is truly transformative, it is undeniable. And that is exactly what we have been witnessing with the use of artificial intelligence (AI) in the healthcare industry: a true digital revolution.
With the AI healthcare market valued at $26.69 billion in 2024 and projected to reach $613.81 billion by 2034, this transformation is not only reducing operational friction in healthcare organizations but, more importantly, improving both patient outcomes and staff workflow.
This exciting transformation, though, comes at a cost: increased cybersecurity vulnerabilities, risks that too many healthcare professionals are not yet prepared to handle.
How AI Diagnostics and CDS Tools Become Targets
Before AI, traditional healthcare cybersecurity prioritized protecting patient data, whether electronic health records (EHRs), imaging files, or billing information. AI-based systems, however, do not merely store data; they interpret it to inform decisions about patient care. That raises the stakes for what a healthcare organization stands to lose once exposed, as the following emerging cyber threats to health systems illustrate:
- Model Manipulation: In an adversarial attack, an attacker makes small but targeted changes to input data so that the model misclassifies it; for example, a malignant tumor is read as benign, with catastrophic consequences (see the sketch after this list).
- Data Poisoning: Attackers who gain access to the training data used to build an AI model can corrupt it, leading to harmful or unsafe medical recommendations.
- Model Theft and Reverse Engineering: Attackers can steal AI models or reverse-engineer them to expose their weaknesses, then build malicious versions or replicate the originals.
- Fake Inputs and Deepfakes: Injecting fabricated patient information, manipulated medical records, or doctored imaging results into clinical systems leads to misdiagnosis and inappropriate treatment.
- Operational Disruptions: Medical institutions use AI systems to make operational decisions such as ICU triage. Disabling or corrupting these systems puts patients at risk and causes critical delays throughout entire hospitals.
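To make the first of these threats concrete, here is a minimal, illustrative sketch of an adversarial perturbation against a toy classifier. Everything in it is fabricated for illustration: the logistic-regression "tumor classifier," its weights, and its inputs are hypothetical stand-ins for a real diagnostic model.

```python
import numpy as np

# Hypothetical toy "tumor classifier": logistic regression over
# 1,000 image-derived features. All numbers are fabricated.
n = 1000
w = np.full(n, 0.1)        # small weight on each feature
w[::2] *= -1               # alternate the signs
b = 0.0

def p_malignant(x):
    """Probability the model assigns to 'malignant'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the model scores as confidently malignant (logit = +3.0).
x = 0.5 + 0.03 * np.sign(w)
print(f"original:  P(malignant) = {p_malignant(x):.3f}")   # ~0.953

# FGSM-style attack: shift every feature by at most 0.05 in the
# direction that lowers the malignant logit. For logistic regression
# the gradient of the logit with respect to the input is simply w,
# so the attack subtracts epsilon * sign(w).
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)
print(f"perturbed: P(malignant) = {p_malignant(x_adv):.3f}")  # ~0.119
```

A per-feature change of 0.05, which would be imperceptible in a high-dimensional medical image, flips a confident malignant call to benign. Real attacks on deep networks work the same way, only with gradients computed through the full model.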
Why the Risk is Unique in Healthcare
A mistake in healthcare can mean the difference between life and death. A wrong diagnosis produced by a corrupted AI tool is therefore more than a financial liability; it is an immediate threat to patient safety. Furthermore, while recognizing a cyberattack can take time, a compromised AI tool can be fatal almost instantly if clinicians act on faulty information when treating their patients. Unfortunately, securing AI systems in this industry is extremely hard due to legacy infrastructure and limited resources, not to mention the complex vendor ecosystem.
What Healthcare Leaders Must Do Now
It is critical that leaders in the industry consider this threat carefully and prepare a defense strategy accordingly. Data is not the only asset that requires strong protection; AI models, their training processes, and the entire surrounding ecosystem need protecting as well.
Here are key steps to consider:
1. Conduct comprehensive AI risk assessments
Conduct thorough security evaluations before implementing any AI-based diagnostic or Clinical Decision Support (CDS) tool, assessing both how it functions normally and how it behaves under attack, and develop a suitable response plan for each scenario.
2. Implement AI-specific cybersecurity controls
Adopt cybersecurity practices designed specifically for AI systems: monitor for adversarial attacks, validate model outputs, and secure the procedures used to update algorithms. A minimal sketch of output validation follows below.
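As one illustration of what output validation can look like in practice, the sketch below flags predictions whose confidence drifts sharply from recent history or whose inputs fail basic plausibility checks. The `OutputMonitor` class, its thresholds, and its window size are hypothetical choices for this sketch, not recommendations from any standard.

```python
from collections import deque
import statistics

class OutputMonitor:
    """Minimal sketch: flag anomalous predictions for human review."""

    def __init__(self, window=500, z_alert=3.0):
        self.scores = deque(maxlen=window)   # rolling confidence history
        self.z_alert = z_alert               # alert threshold (std devs)

    def check(self, confidence, input_ok):
        """Return a list of alerts for one prediction."""
        alerts = []
        if not input_ok:
            alerts.append("input failed plausibility checks")
        if len(self.scores) >= 30:           # wait for a usable baseline
            mean = statistics.fmean(self.scores)
            stdev = statistics.pstdev(self.scores) or 1e-9
            z = abs(confidence - mean) / stdev
            if z > self.z_alert:
                alerts.append(f"confidence {confidence:.2f} is {z:.1f} "
                              "std devs from recent history")
        self.scores.append(confidence)
        return alerts

# Usage: anomalous outputs are held for clinician review,
# never acted on automatically.
monitor = OutputMonitor()
alerts = monitor.check(confidence=0.99, input_ok=True)
if alerts:
    print("hold for clinician review:", alerts)
```

The point is architectural: suspicious outputs are routed to a human reviewer rather than feeding directly into clinical decisions.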
3. Secure the supply chain
Most AI solutions are developed and maintained by third-party vendors, so require those vendors to provide detailed information about how they secure their models, training data, and update procedures. Research by the Ponemon Institute has found that third-party vulnerabilities account for 59% of healthcare breaches. Healthcare organizations must therefore ensure that contract language enforces explicit cybersecurity measures for AI technologies.
4. Train clinical and IT staff on AI risks
Both clinical personnel and IT staff need thorough training on the security weaknesses particular to AI systems, including how to detect irregularities in AI outputs that may indicate cyber manipulation.
5. Advocate for standards and collaboration
Standardized regulations and procedures for AI security are critical. The industry must also collaborate, sharing both common and novel vulnerabilities discovered in AI systems so that others can evaluate their own. The Health Sector Coordinating Council and the HHS 405(d) program provide essential foundations, yet additional measures are necessary.
The Future of AI in Healthcare Depends on Trust
AI is the key to unlocking revolutionary diagnostic performance, more efficient care delivery, and better patient outcomes overall. But if cybersecurity vulnerabilities undermine this progress, clinicians and patients will lose trust in these tools, stalling the adoption of new technology. In the worst-case scenario, it is patients who suffer the damage.
Security must be an integral part of every stage of AI development and deployment; it is a clinical imperative. Healthcare leaders need to protect AI-based diagnostics and clinical decision support tools with the same operational rigor they apply to their other critical systems.
The future of healthcare innovation depends on trust as its foundation, and we cannot earn and preserve that trust without AI systems that are both secure and effective.