
Every HR and talent acquisition (TA) professional has experienced a bad hire. You carefully review and assess a candidate’s skills, experience, and credentials. You evaluate them through a series of interviews with multiple stakeholders. You’re all but certain they’ll be a perfect fit for the role. And then, three weeks into their tenure, you realize you’ve made a huge mistake.
The occasional bad hire is all but inevitable. But now, HR and TA professionals have something much more insidious to worry about: hiring someone who is not who they say they are, or who isn’t even real. With the arrival of generative AI and deepfakes, the world of applicant fraud has entered a whole new era. Bad actors are using AI and deepfakes to fabricate synthetic identities or impersonate others in order to gain access to sensitive data, systems, and more. And, given the highly sensitive nature of its data and operations, the healthcare industry is ripe for exploitation.
From Occasional Fibs to Outright Fabrication: Applicant Fraud Has Evolved
Applicant fraud is nothing new. Job applicants have been inflating credentials, falsifying educational backgrounds, and even faking references for decades. But with the advent of generative AI and deepfaked media, the phenomenon has shifted from piecemeal manipulation to wholesale fabrication.
Today, you can find job applicants who aren’t merely deceptive, but entirely synthetic, whose resume, cover letter, LinkedIn profile, and even physical likeness have been generated from thin air using AI.
Imagine your organization is hiring for a new IT administrator role. It’s a fully remote position, and the candidate will be tasked with monitoring databases, troubleshooting software, and other similar work. You post the job listing on your website and wait. Amid the dozens or hundreds of genuine applicants, one candidate stands out.
This candidate’s resume, cover letter, and LinkedIn profile all line up perfectly with what you’re looking for: extensive, highly relevant experience and all the necessary skills and credentials. It seems almost too good to be true, but you go ahead and schedule a Zoom interview to feel them out. On the call, you’re greeted by a friendly face who answers your interview questions thoughtfully and eloquently, with poise and professionalism. They demonstrate expertise and a wealth of knowledge. There is a slight lag in the audio, but you don’t think much of it.
You hire the candidate, mail them their work device, and within weeks of their start date, your security team begins to notice odd behavior: sensitive data being accessed and exfiltrated, malicious software being installed, lateral movement through the network. Soon after the security team takes action, you discover that nothing about that candidate was real. Behind the mask was a malicious actor using sophisticated technology to get hired and gain access to your sensitive systems and data.
From GenAI to Deepfakes: The Anatomy of AI-Powered Applicant Fraud
The above scenario might sound like a Black Mirror episode, but it’s much more realistic (and much easier to pull off) than you might think. In fact, something very similar happened last year, when the cybersecurity company KnowBe4 was duped into hiring a North Korean operative. If a large, sophisticated firm like KnowBe4 can fall victim to these attacks, then it’s safe to say that anyone is at risk.
That’s because the deck is very much stacked against organizations today. With a few widely available AI tools, a little bit of time, and some audacity, almost anyone could pull off a scam of this sort. And, unfortunately, that means this trend isn’t going away anytime soon. In fact, in a 2025 report, Gartner forecast that, by 2028, 1 in 4 candidate profiles will be fake.
So, how do they do it? Well, it begins with your very own job posting. With the job description, hiring details, and readily available information about your organization, the fake applicant can prompt a generative AI tool to create the perfect cover letter and resume for the job. For the identity, they will also use generative AI along with deepfaked images to build out an online presence on sites like LinkedIn, either impersonating a real individual or fabricating an entirely synthetic one.
They can list fake references, spoof phone numbers, and use real-time deepfake audio tools to alter their voice or impersonate others. Tools to do this are already cheap, plentiful, and widely available. Many of the latest audio deepfake tools need just a few seconds of reference audio to clone a voice, and many also offer countless ready-made synthetic voices.
And for the interview stage, deepfake video tools can superimpose a synthetic likeness, or impersonate a real person’s, in real time. This very technique was put to devastating use in 2024, when a fraudster used deepfake technology to impersonate a company’s CFO on a video conference call. Looking and sounding just like the real CFO, the attacker convinced a more junior finance employee to wire $25 million to an account under his control.
What You Can Do to Detect and Defend Against AI-Powered Applicant Fraud
As these types of scams become more common and more sophisticated, HR and TA professionals will have to adopt new tools and strategies to defend against them. Thankfully, there are clear steps that every organization can take today to help reduce their risk of falling victim to these attacks.
First and foremost, hospital recruiters and TA specialists should be wary of “overly polished” or “mirror” application materials. Resumes and cover letters that seem too good to be true, or that parrot back too much of the information and verbiage directly from the job posting, should raise suspicion. That said, this isn’t always a sign of a fake or fraudulent applicant. A recent global survey from Indeed found that over 70% of job applicants now use generative AI tools during their job hunt, so signs of AI use alone are far from sufficient evidence of fraud.
Instead, organizations should be on the lookout for overuse. And even then, it should be treated as a red flag, not definitive evidence of a scammer. What should really raise the alarm are signals like IP addresses that don’t match the candidate’s stated location, the use of virtual phone numbers, and persistent VPN use. No single signal here proves a candidate is fraudulent, but taken together they should give you pause.
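For IT or security teams supporting the recruiting process, this kind of signal-stacking can be roughed out in a few lines of code. The sketch below is purely illustrative: it assumes your team already has a way to resolve IP geolocation and to detect VPN or virtual (VoIP) phone numbers through a provider of their choice, and the field names and threshold shown here are hypothetical.

```python
# Illustrative sketch only: combine weak fraud signals into a "review" flag.
# Assumes IP geolocation and VPN/VoIP lookups are supplied by your own provider.
from dataclasses import dataclass

@dataclass
class ApplicantSignals:
    stated_country: str   # where the candidate says they are located
    ip_country: str       # country resolved from the application/interview IP
    uses_vpn: bool        # persistent VPN or proxy use detected
    virtual_phone: bool   # phone number traces back to a VoIP/virtual provider

def needs_manual_review(signals: ApplicantSignals, threshold: int = 2) -> bool:
    """No single signal proves fraud; several together warrant a closer look."""
    score = 0
    if signals.ip_country.strip().lower() != signals.stated_country.strip().lower():
        score += 1
    if signals.uses_vpn:
        score += 1
    if signals.virtual_phone:
        score += 1
    return score >= threshold

# Example: stated location doesn't match the IP, plus persistent VPN use
candidate = ApplicantSignals("United States", "Poland", uses_vpn=True, virtual_phone=False)
print(needs_manual_review(candidate))  # True -> escalate to a human reviewer
```

The point is the design, not the code: weigh several weak signals together and escalate to a human reviewer, rather than rejecting a candidate over any one of them.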
Perhaps the best defense, however, comes from engaging with the candidate live. An in-person interview is the most powerful option, but that isn’t always possible, especially for remote IT hires and contractors.
But fear not: there are tricks you can use to detect deepfakes during video calls:
- Go Off Script – Asking the applicant unusual or unexpected questions can be a good way to assess their authenticity. For example, we often ask candidates to turn around and touch whatever is hanging on the wall behind them.
- Look for Lag – If you notice persistent disconnects between the applicant’s audio and video (lag), or if their mouth isn’t tracking with the words they’re saying, stop and ask if they can close applications or move to improve their connection.
- Wave Hello – Movement can be a good tell for video deepfakes. Ask the candidate to wave their hand in front of their face and look for glitches or irregularities in their video.
Strange as these interview tactics may seem, there’s a good chance they’ll soon become par for the course. Changes in the threat landscape require changes in thinking. And with the stakes this high, a little awkward conversation is well worth it for your organization’s security.
Don’t Let Your Organization Get Left Behind
As these types of technologies continue to evolve and advance, healthcare organizations will have to learn how to fight fire with fire. While the guidance above can go a long way toward protecting your healthcare organization, AI and deepfakes continue to grow more powerful and more convincing by the day.
And with the healthcare industry consistently ranked as a “favorite target” of threat actors, it will become increasingly important for healthcare organizations to adopt AI-enabled tools and technologies of their own—ones aimed at detecting and preventing these types of scams long before you ever come face-to-face with these “fake” applicants.
About Julia Frament
Julia Frament is the Global Head of HR for AI-powered email security organization IRONSCALES. She focuses on aligning HR strategies with business goals while keeping the company’s people at the heart of everything it does. She leads a global team of HR Generalists, HR Business Partners, and the Head of Global Talent, working together to empower IRONSCALES’ workforce and drive meaningful growth.

