    Emerging Cyber Threats to AI-Based Diagnostics and Clinical Decision Support Tools

By Healthradar | June 28, 2025 | No Comments | 5 Mins Read
    Ed Gaudet, CEO and Founder of Censinet

As hyperbolic words go, transformation ranks near the top of the list. Yet when something is truly transformative, it is undeniable. That is exactly what we have been witnessing with the use of artificial intelligence (AI) in the healthcare industry: a genuine digital revolution.

With the AI healthcare market valued at $26.69 billion in 2024 and projected to reach $613.81 billion by 2034, this transformation is not only reducing operational friction in healthcare organizations but, more importantly, improving both patient outcomes and staff workflow performance.

This transformation, though, comes at a cost: increased cybersecurity vulnerabilities, risks that too many healthcare professionals are not yet prepared to handle.

    How AI Diagnostics and CDS Tools Become Targets

Before AI, traditional healthcare cybersecurity prioritized protecting patient data, whether electronic health records (EHRs), imaging files, or billing information. AI-based systems, however, not only store data but also interpret it for patient-related decisions, so the stakes have changed. What a healthcare organization stands to lose once exposed has grown accordingly, as the following examples of emerging cyber threats to health systems show:

    • Model Manipulation: In an adversarial attack, actors make small but targeted changes to input data, causing the model to misinterpret it; for example, a malignant tumor is mistaken for benign, with catastrophic consequences.
    • Data Poisoning: Attackers who gain access to the training data used for AI model development can corrupt it, leading to harmful or unsafe medical recommendations.
    • Model Theft and Reverse Engineering: Attackers can obtain AI models through theft or reverse engineering to extract a model's weaknesses, then either build malicious versions or replicate the original.
    • Fake Inputs and Deepfakes: Injecting fabricated patient information, manipulated medical records, or doctored imaging results into systems leads to misdiagnoses and wrong treatments.
    • Operational Disruptions: Medical institutions use AI systems for operational decisions such as ICU triage. Disabling or corrupting these systems puts patients at risk and causes critical delays throughout entire hospitals.
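The first threat above, adversarial input manipulation, can be illustrated with a toy sketch. The linear model, random weights, and perturbation budget below are purely hypothetical stand-ins for a real diagnostic model; the point is only that a per-feature change of ±0.05, invisible in a clinical workflow, can be enough to flip a classifier's decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    # Probability that input x belongs to the positive class.
    return sigmoid(np.dot(w, x))

def fgsm_perturb(w, x, y_true, eps):
    # Fast-gradient-sign perturbation: shift every feature by +/- eps
    # in the direction that increases the model's loss on the true label.
    grad_x = (predict(w, x) - y_true) * w  # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=20)              # hypothetical model weights
x = 0.1 * w / np.linalg.norm(w)      # a clean input the model scores positive
x_adv = fgsm_perturb(w, x, y_true=1.0, eps=0.05)

print(predict(w, x) > 0.5)           # clean input: classified positive
print(predict(w, x_adv) > 0.5)       # perturbed input: the decision flips
```

A real attack targets deep imaging models rather than a 20-weight logistic unit, but the mechanism, small input changes aligned with the loss gradient, is the same.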

    Why the Risk is Unique in Healthcare

A mistake in healthcare can easily mean the difference between life and death. A wrong diagnosis caused by a corrupted AI tool is therefore more than a financial liability; it is an immediate threat to patient safety. Moreover, while recognizing a cyberattack can take time, a compromised AI tool can be fatal almost instantly if clinicians act on faulty information when treating their patients. Unfortunately, securing an AI system in this industry is extremely hard due to legacy infrastructure, limited resources, and a complex vendor ecosystem.

    What Healthcare Leaders Must Do Now

It is critical that industry leaders take this threat seriously and prepare a defense strategy accordingly. Data is not the only asset that requires heavy protection; AI models, training processes, and the entire surrounding ecosystem need protecting as well.

    Here are key steps to consider:

    1. Conduct comprehensive AI risk assessments
    Conduct thorough security evaluations before implementing any AI-based diagnostic or Clinical Decision Support (CDS) tools, so you understand both normal functionality and behavior under attack, and can prepare a suitable response plan for each scenario.

    2. Implement AI-specific cybersecurity controls
    Adopt cybersecurity practices designed for AI systems: monitor for adversarial attacks, validate model outputs, and ensure secure algorithm update procedures.
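As one deliberately minimal example of output validation, a guardrail function can sanity-check a classifier's probability vector before it reaches a clinician-facing display. The specific checks and the 0.7 confidence threshold below are illustrative assumptions, not an industry standard.

```python
import numpy as np

def validate_output(probs, min_confidence=0.7):
    # Basic sanity checks on a classifier's output vector before it is
    # shown to a clinician. Returns (ok, reason).
    probs = np.asarray(probs, dtype=float)
    if np.any(~np.isfinite(probs)) or np.any(probs < 0) or np.any(probs > 1):
        return False, "values outside [0, 1]"
    if not np.isclose(probs.sum(), 1.0, atol=1e-6):
        return False, "probabilities do not sum to 1"
    if probs.max() < min_confidence:
        return False, "low confidence - route to human review"
    return True, "ok"

print(validate_output([0.05, 0.92, 0.03]))  # well-formed, confident output
print(validate_output([0.4, 0.35, 0.25]))   # flagged: low confidence
print(validate_output([1.2, -0.2, 0.0]))    # flagged: malformed output
```

Checks like these will not catch a well-crafted adversarial input on their own, but they do catch corrupted or manipulated outputs and force ambiguous cases back to a human.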

    3. Secure the supply chain
    Most AI solutions are developed and maintained by third-party vendors, and research by the Ponemon Institute has found that third-party vulnerabilities account for 59% of healthcare breaches. Require vendors to provide detailed information about how they secure their models, training data, and update procedures, and ensure contract language enforces explicit cybersecurity measures for AI technologies.
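One low-effort supply-chain control worth asking vendors for is a cryptographic digest for every model artifact they ship, so the receiving organization can verify a file before loading it. The sketch below assumes the expected SHA-256 value arrives out-of-band (for instance, in the contract or release notes); the file contents are a stand-in for real model weights.

```python
import hashlib
import hmac
import tempfile

def sha256_of_file(path, chunk_size=65536):
    # Stream the file so arbitrarily large model weights fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_sha256):
    # Constant-time comparison against the vendor-published digest.
    return hmac.compare_digest(sha256_of_file(path), expected_sha256)

# Demo with a stand-in "model" file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"vendor model weights v1.0")
    model_path = f.name

expected = sha256_of_file(model_path)        # value the vendor would publish
print(verify_artifact(model_path, expected)) # untampered file verifies
print(verify_artifact(model_path, "0" * 64)) # mismatched digest is rejected
```

A hash check only proves the file matches what the vendor published; it says nothing about whether the model itself was trained on poisoned data, which is why it complements rather than replaces the contractual disclosures above.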

    4. Train clinical and IT staff on AI risks
    Both clinical personnel and IT staff need thorough training on the particular security weaknesses of AI systems, including how to detect irregularities in AI outputs that may indicate cyber manipulation.

    5. Advocate for standards and collaboration
    Standard regulations and procedures for AI security are critical. The industry must also collaborate, sharing both common and unique vulnerabilities in its AI systems so that others can evaluate their own. The Health Sector Coordinating Council and the HHS 405(d) program provide essential foundations, yet additional measures are necessary.

    The Future of AI in Healthcare Depends on Trust

AI is the key to unlocking revolutionary diagnostic performance, efficient care delivery, and better patient outcomes overall. However, if this progress is undermined by cybersecurity vulnerabilities, clinicians and patients will lose trust in these tools, stalling the adoption of new technology. In the worst case, it is patients who suffer the damage.

Security must become an integral part of every stage of AI development and deployment. It is a clinical imperative. Healthcare leaders need to protect AI-based diagnostics and clinical decision support tools with the same rigor they apply to their other critical systems.

The future of healthcare innovation depends on trust as its foundation. Without secure and effective AI systems, we cannot earn and preserve that trust.


