    It’s not easy to get depression-detecting AI through the FDA

By Healthradar · April 2, 2026 · 6 min read


    For the past seven years, the California-based startup Kintsugi has been developing AI designed to detect signs of depression and anxiety from a person’s speech. But after failing to secure FDA clearance in time, the company is shutting down and releasing most of its technology as open-source. Some elements may even find a second life beyond healthcare, like detecting deepfake audio.

    Mental health assessments still largely rely on patient questionnaires and clinical interviews, rather than the lab tests or scans common in physical medicine. Instead of focusing on what someone is saying, Kintsugi’s software analyzes how it is being said. The idea isn’t new — speech patterns like pauses, sentence structure, or speed are known indicators of various mental health issues — but Kintsugi says its AI can pick up subtle shifts that may be less obvious to human observers, though it has not publicly detailed exactly which features drive its models’ predictions. In peer-reviewed research, the company reported results broadly in line with established self-report screening tools for depression using short speech samples.
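Kintsugi has not disclosed which acoustic features drive its models, but the kinds of signals mentioned above (pausing, speed, energy dynamics) can be roughly approximated from raw audio. Here is a minimal illustrative sketch in plain NumPy; the function name, frame size, and silence threshold are hypothetical choices, not Kintsugi's:

```python
import numpy as np

def prosodic_features(signal, sample_rate, frame_ms=25, silence_db=-40):
    """Compute two simple prosodic indicators from a mono audio signal:
    the fraction of frames that are silent (a rough proxy for pausing)
    and the energy variability across the voiced frames."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    # Express each frame's energy in decibels relative to the loudest frame.
    db = 20 * np.log10(np.maximum(rms, 1e-10) / max(rms.max(), 1e-10))
    silent = db < silence_db
    pause_fraction = silent.mean()
    voiced = rms[~silent]
    energy_variability = voiced.std() / voiced.mean() if len(voiced) else 0.0
    return pause_fraction, energy_variability
```

On a real recording these two numbers would only be the start of a feature vector; speech-analysis systems typically add pitch contours, spectral statistics, and timing measures on top.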

    The company pitched the technology as a complement — or potential alternative — to self-reported screening tools.

    The company pitched the technology as a complement — or potential alternative — to self-reported screening tools like the Patient Health Questionnaire-9, or PHQ-9, a staple of primary care and psychiatry. These tools are supposed to be used alongside formal clinical assessment, and although they are widely validated, screening rates can be low, they depend on patients accurately describing symptoms, and they may not capture the full set of symptoms associated with mental health disorders. Kintsugi argued its voice-based model could provide a more objective signal, expand screening to more patients, and be deployed at scale across health systems, insurers, and employer programs. Doing so, however, would require FDA clearance.
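For reference, the PHQ-9 itself is mechanically simple: nine self-report items, each scored 0 ("not at all") to 3 ("nearly every day"), summed into a 0-27 total that maps to standard severity bands. A scoring sketch:

```python
def phq9_severity(answers):
    """Score the PHQ-9 depression questionnaire: nine items rated 0-3,
    summed to a 0-27 total. Standard severity bands: 0-4 minimal,
    5-9 mild, 10-14 moderate, 15-19 moderately severe, 20-27 severe."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 needs nine answers, each scored 0-3")
    total = sum(answers)
    for cutoff, label in [(4, "minimal"), (9, "mild"), (14, "moderate"),
                          (19, "moderately severe"), (27, "severe")]:
        if total <= cutoff:
            return total, label
```

The simplicity is the point of Kintsugi's pitch: the questionnaire is cheap and validated, but it depends entirely on a patient's self-description, which is the gap a voice signal was meant to fill.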

Kintsugi had been seeking FDA clearance through the agency’s “De Novo” pathway, a route meant for novel, low-risk medical devices without an existing equivalent on the market. While intended to streamline approval for new kinds of products, the process can still require years of data collection and regulatory review. Kintsugi’s founder and CEO Grace Chang told The Verge that much of that time was spent teaching the regulator about AI. The framework also fits AI poorly; much of it is designed with more traditional devices in mind — think hip implants, surgical tools, pacemakers — whose design remains largely fixed once approved. For AI systems, that can mean locking a model that would otherwise continue to be optimized and updated over time.

The FDA’s framework fits AI poorly; much of it is designed with more traditional devices in mind.

    Despite the Trump administration’s hard push to cut red tape and get AI products into the real world as soon as possible, Chang said regulatory experts tell her that “there’s nothing that helps them do that except loud yelling from the top.” The approval process was further slowed by federal government shutdowns. The startup ran out of funding waiting for its final submission.

    Efforts to raise additional funds faltered as the company’s runway shortened. Rather than accept “predatory” short-term offers to meet payroll — Chang said one proposal offered around $50,000 a week in exchange for $1 million in equity — the team decided to open-source most of its technology so others might continue the work. Investors were not happy.

    Open-sourcing a mental health screening model also raises concerns about misuse. Tools designed to flag signs of depression or anxiety could, in theory, be deployed outside clinical settings, such as by employers or insurers, without the safeguards typically required in healthcare. Obviously that shouldn’t happen, but once released publicly there is little to prevent the technology from being used in ways its creators did not intend.

    There are other complications, too. Nicholas Cummins, a senior lecturer in speech analysis and responsible AI in health at King’s College London, told The Verge that open-source releases often lack the detailed “paper trail” regulators expect, including a clear record of how a model was trained, validated, and tested for safety. Without that, he said, bringing a product built on the technology through FDA approval could prove difficult.

    Open-sourcing a mental health screening model also raises concerns about misuse.

More likely, Cummins suggested, companies would treat the model as a starting point and layer their own data and validation processes on top. Even then, he cautioned, voice-based systems remain imperfect and carry a “reasonable” risk of errors, especially for conditions like depression, which manifest differently across individuals, languages, and cultural contexts and depend heavily on the diversity and structure of the speech data used in training.

    Chang did not dismiss concerns about potential misuse, but said “it’s less of a concern in practice than it might appear in theory.” The organizations with the greatest incentives to abuse the technology, she argued, are also those that “face the highest barriers to actually deploying it.” In Chang’s view, “the more realistic risk is underuse, not misuse.”

While Kintsugi’s mental health screening technology has been open-sourced, Chang said not all of the company’s technology has been released publicly, partly out of security concerns. Chief among the withheld pieces is technology that can detect synthetic or manipulated voices.

Chang said the capability emerged when the team experimented with AI-generated speech to strengthen its mental health models. The synthetic audio lacked the vocal signals the model was trained to recognize, revealing that the model could also be used to distinguish between human and AI-generated voices. It is a growing challenge given the proliferation of AI slop and fraudulent deepfakes, and one that has yet to be reliably solved. It is also a potentially lucrative opportunity, and, thankfully for Kintsugi, an area that is not subject to FDA oversight.
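Kintsugi has not described how its detector works beyond this anecdote, but the underlying idea, that synthetic speech can lack the prosodic variation natural speech exhibits, can be caricatured in a few lines. Everything below is an illustrative heuristic, not the company's method:

```python
def looks_synthetic(pause_fraction, pitch_variability,
                    min_pause=0.05, min_variability=0.10):
    """Illustrative heuristic only: flag a recording for human review if it
    shows almost no pausing AND almost no pitch movement, two signals that
    natural conversational speech normally exhibits. The thresholds here
    are invented for the example, not derived from any real system."""
    return pause_fraction < min_pause and pitch_variability < min_variability
```

A real detector would be a trained classifier over many such features with calibrated error rates; the sketch only shows how features built for a screening model could be repurposed for authenticity checks.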

    Chang declined to speculate on her next move or whether Kintsugi’s security-focused technology might resurface, but she said she wishes someone else would build on the company’s work and carry it through the final stages of the FDA process. But without broader changes, Kintsugi’s shutdown is unlikely to be the last example of startup timelines clashing with medical regulation, and Chang said she hopes that reality doesn’t deter other founders from trying.

Robert Hart