Don’t believe everything you see on the internet. This is one of the fundamental rules of using the World Wide Web, and it dates back to the earliest days of dial-up. In the ’90s, chain emails spread hoaxes like wildfire, but those of us who weren’t entirely gullible knew to question anything we received with “FW: FW: FW:” in the subject line. The rise of chat rooms and social media made it even easier to spread misinformation to thousands of strangers.
Around the same time, photo editing software was becoming vastly more powerful and accessible, so we began to see the spread of Photoshopped hoax images across the internet. Much like chain letters had morphed into chain emails, this type of hoax was nothing new, but the advancement of tech made it far easier to create and disseminate. Photo manipulation techniques that used to require hours of labor in a photographer’s darkroom could now be accomplished in seconds.
Today, misinformation has reached another new frontier: artificial intelligence. Publicly accessible AI tools are being used to automate the creation of so-called deepfakes, a term based on the “deep learning” neural network technology that’s harnessed to create them. Now, instead of manually blending images together in Photoshop, we can let AI do the hard work for us. And it’s not only useful for still images — deepfake technology can also process each frame of a video to swap a subject’s face (for example: youtu.be/CDMVaQOvtxU). AI can also be used to closely mimic a human voice based on audio samples and read back any text the user inputs (youtu.be/ddqosIpR2Mk).
Deepfakes aren’t just a hypothetical threat — they’re already being used to manipulate, confuse, or outright deceive viewers. In 2020, the parents of Joaquin Oliver, a 17-year-old who died during the Parkland school shooting, used deepfake technology to recreate their dead son’s likeness and create a video where he urged young Americans to vote for more aggressive gun control. More recently, deepfakes of both Ukrainian President Zelensky and Russian President Putin appeared in an attempt to encourage the opposing side’s troops to surrender or retreat; the latter clip was aired on Russian TV in what the Kremlin decried as a “hack.”
Cybercriminals are also using deepfake technology to persuade unsuspecting businesses and individuals to transfer money or give up sensitive info. And in the most twisted cases, AI is being used to generate deepfake pornography of people — even children — who are totally unaware of the disturbing and humiliating way their likenesses are being altered. A 2022 report by the U.S. Department of Homeland Security stated, “Non-consensual pornography emerged as the catalyst for proliferating deepfake content, and still represents a majority of AI-enabled synthetic content in the wild … The ramifications of deepfake pornography have only begun to be seen.”
It’s becoming increasingly difficult to distinguish real photos, video, and audio from deepfakes. What would you do if you suspected you may be the target of a manipulative deepfake attack by cybercriminals? How can you verify information you receive before falling prey to these high-tech social engineering tactics? We asked cybersecurity professional W. Dean Freeman and international risk management expert David Roy to weigh in on this complex attack.
Targeted by cybercriminals
Cloudy; high 47 degrees F, low 38 degrees F
You’re the head IT guy at a family-owned business near Seattle. Most of the time, your job duties consist of basic tech support and PC troubleshooting for the company’s 23 employees. Your employer deals with various vendors overseas, so you’re used to getting messages at odd hours written in not-so-good English. Aside from a few obvious Nigerian prince scam emails and run-of-the-mill malware, the company hasn’t experienced any substantial cybersecurity threats in the past, but you’ve always tried to maintain good security protocols regardless.
On a Tuesday afternoon, you get a panicked phone call from Susan, the owner of the company. She says her brother Dan, who is currently on a business trip to visit suppliers throughout Eastern Europe, sent her a video message a few minutes ago. In the video, he explained that one of the company’s key suppliers is owed a substantial amount of money and is demanding immediate payment. Supposedly, if they don’t receive the payment within the next few hours, they’ll switch to an exclusive partnership with your biggest competitor. Susan knows this would be catastrophic and might even cause the company to go out of business.
She tells you she immediately tried to call Dan back and called the supplier, but neither one is picking up — cell phone coverage isn’t the best in that country, and it’s outside normal business hours. She’s considering sending money to the indicated account, since she’s absolutely certain that the person in the video was her brother. It sounded and looked just like him, and he appeared to have knowledge about the business and its suppliers.
However, she wants to know if you have any ways to verify the video first. What methods or tools can you use to check the legitimacy of the video message and its sender? If you determine it’s a deepfake, what other steps should you take to protect the business (and its owners) from similar cyberattacks in the future?
Alright, so I’m either facing the impending financial destruction of my employer, or an extremely sophisticated threat actor, and I need to figure out which one it is fast. Two out of the three outcomes here have me looking for a new job soon if I don’t stay on top of this. Luckily, I’m pretty good at what I do, if I do say so myself, and I’ve seen similar threats before. Just because there might be a new technology at play here doesn’t really change most of the fundamentals.
First, let’s think through the possible situations:
- The request is genuine, and a key supplier really is demanding urgent payment.
- The video is genuine, but Dan made it under duress (for example, a kidnapping or extortion situation).
- The video is a deepfake, and we’re dealing with a sophisticated fraud attempt.
Each of these potential situations is going to have its own set of tells and its own incident response playbook, as well as some specific countermeasures. They also share some preparatory mitigations in common.
So, whenever employees, especially key staff or executives, travel, the risk of data breaches resulting in intellectual property theft goes way up, and depending on the region of travel and the industry you’re in, the threat varies. Knowing this, I’ve coached Dan and others on travel security best practices.
To varying degrees, I’ve also tried to lock down services like email, so that users have to be on the local network or VPN to send and receive company mail. I’m also leveraging defense-in-depth techniques, using proven technologies, for endpoint and network security as well as access to cloud-based applications. While nothing is foolproof, the cost of attacking the network is much higher than it was when I first started this job.
I’ve also prepared myself to identify and counter new and emerging threats, through self-study and formal classes. I stay on top of the state of the art in my craft so that I continue to be an asset, but also understand that it’s an arms race, and the momentum is generally with the attackers.
Above: Nothing about this image is real. It was created using the free AI image generation tool PlaygroundAI.com in less than 30 seconds. The site also allows users to upload images to give the AI “inspiration.”
Now that I’m on the phone with Susan, we have a potential catastrophe, and one where there isn’t necessarily a specific playbook yet. Luckily, it’s a small company with little red tape, but given the circumstances, it could have proved disastrous if she hadn’t called me and had just sent the money. It’s time to think fast, but think thoroughly.
To make sure I have my bases covered, I’m going to run three parallel investigations: a financial review, traditional incident response, and a close look at the video itself to see if I can tell whether it’s genuine.
The financial investigation will go as follows:
- Check our accounting records to see whether this supplier is actually owed money, and whether the amount matches.
- Verify the destination bank and account number against the payment details we have on file for that supplier.
- Compare the urgency and payment terms of the request against our normal history with this supplier.
If the request points to a financial entity that’s known to us and the account is associated with the supplier, that dramatically lowers the risk. It still doesn’t mean the request is 100 percent genuine, though.
The traditional forensic investigation will focus both on the email itself, specifically checking the header information to see where it truly originated from, as well as correlating that activity with logs from the company mail server and VPN. If it looks like Dan actually sent the email from his laptop, through the company server, then again, this reduces the chance that it’s false. If it comes from his personal email, that’d get my hackles up. If the mail headers are clearly forged, then we have multiple potential issues, and a deepfake is certainly a possibility.
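To make the header review concrete, here’s a minimal Python sketch of the kind of checks described above. The raw message, hops, names, and domains are all hypothetical; a real investigation would also examine SPF/DKIM authentication results and correlate the Received chain with mail server and VPN logs.

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical raw message; the hops, names, and domains are made up to
# illustrate the review, not taken from a real incident.
RAW = """\
Received: from mail-sor-f41.google.com (mail-sor-f41.google.com [209.85.220.41])
Received: by mx.example-corp.com with SMTP id abc123
Return-Path: <dan.attacker@gmail.com>
From: "Dan" <dan@example-corp.com>
To: susan@example-corp.com
Subject: URGENT: supplier payment

Please wire the funds today.
"""

def review_headers(raw):
    """Pull out the fields an investigator would check first."""
    msg = message_from_string(raw)
    from_addr = parseaddr(msg.get("From", ""))[1]
    return_path = parseaddr(msg.get("Return-Path", ""))[1]
    return {
        "from": from_addr,
        "return_path": return_path,
        "received_hops": msg.get_all("Received", []),
        # A From/Return-Path domain mismatch is a classic spoofing red flag.
        "domain_mismatch": from_addr.rsplit("@", 1)[-1]
                           != return_path.rsplit("@", 1)[-1],
    }

report = review_headers(RAW)
print(report["domain_mismatch"])  # True
```

Here, the visible From address claims a corporate domain while the Return-Path points at a Gmail account, which is exactly the kind of discrepancy that should get your hackles up.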
So, the bank is in the same general region as the supplier, but it isn’t a bank we’ve done business through before. We can’t know who the account belongs to at this time, so that’s a dead end, but it’s definitely suspicious. The email didn’t come through Dan’s corporate account; it came from his personal Gmail account. Dan has apparently used Gmail to send Susan a few messages recently, so she wouldn’t have thought that was unusual. At this point, I go ahead and disable Dan’s corporate accounts and make sure he doesn’t have any access to company data.
Given these facts, it’s going to be important to review the video. The two major possibilities at this point are that the video is a deepfake, or that it’s genuine and was made under duress. The fact that Susan is Dan’s sister and is convinced the video is of Dan means it may very well be a duress situation.
I have Susan pull a bunch of photos of Dan from social media and her cell phone and send them to me so I can bring them up on one monitor while watching the video on another. Reviewing the video critically, I look for the following telltale signs that the video is a deepfake:
- Eyes that don’t blink, or blink at an unnatural rate
- Eyes that don’t track together, making the subject appear cross-eyed
- Waxy or unnaturally smooth skin texture
- Audio that’s out of sync with lip movement
- Blurring or artifacts around the ears, hairline, and hands
- Speech cadence, grammar, or word choices that don’t sound like Dan
While lighting and other technical factors could produce the appearance of waxy skin or out-of-sync audio in a genuine video, the biometric factors are going to be the major giveaway. This is because of how deepfake videos are generally produced.
Feeding the generative AI with still images to produce the likeness tends to result in deepfake videos where the eyes don’t blink at all, which is generally unnatural for people. Additionally, the computer has a hard time lining up both eyes toward the same focal point when trying to adjust for movement, so if Dan appears cross-eyed in the video but is known not to be in real life, that’d be a good indicator as well.
Most deepfakes are made by applying an extruded face image onto a live actor. The angles are often hard for the computer (and the human driving the production) to match, so the AI won’t overlay convincingly on top of the ears, hands, and other features. Because of this, I pay very close attention to any differences between Dan’s ears in the video and what I can see in the known photos of him I have available.
Because Dan often represents the company at public events, there’s ample opportunity for fraudsters to collect voice samples to synthesize his voice. But since any potential attackers are likely neither native English speakers nor Americans from our region, there may well be differences in speech cadence, grammar, and word selection that indicate the voice is “reading” a script and isn’t actually Dan talking.
After identifying the potential indicators, I also review the video’s metadata to see if I can gain any insight into when and where it was recorded and on what device. Any mismatch between where Dan is and what type of phone he carries would be a strong indicator that the video is fake.
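As an illustration of where that metadata lives, here’s a naive Python sketch that pulls the creation timestamp out of an MP4 file’s “mvhd” box. The byte string below is a fabricated fragment for demonstration only; in practice you’d run a purpose-built tool such as exiftool or ffprobe against the actual file, and keep in mind that metadata can be stripped or forged.

```python
import struct
from datetime import datetime, timezone

# Seconds between the MP4 epoch (1904-01-01) and the Unix epoch (1970-01-01).
MP4_EPOCH_OFFSET = 2082844800

def mp4_creation_time(data):
    """Naive scan for an MP4 'mvhd' box and decode its creation timestamp.

    Real investigations should use a purpose-built parser; this only
    illustrates where the timestamp lives inside the file."""
    idx = data.find(b"mvhd")
    if idx == -1:
        return None
    version = data[idx + 4]  # 1-byte version follows the 4-byte box type
    if version != 0:
        return None  # version 1 uses 64-bit timestamps; omitted in this sketch
    # After version come 3 flag bytes, then a 32-bit big-endian creation time.
    secs = struct.unpack(">I", data[idx + 8 : idx + 12])[0]
    return datetime.fromtimestamp(secs - MP4_EPOCH_OFFSET, tz=timezone.utc)

# Fabricated 'mvhd' fragment: box size, type, version/flags, creation time.
fake_mvhd = b"\x00\x00\x00\x6cmvhd\x00\x00\x00\x00" + struct.pack(">I", 3786825600)
print(mp4_creation_time(fake_mvhd))  # 2023-12-31 00:00:00+00:00
```

If the decoded timestamp or device fields contradict the sender’s known itinerary, that’s one more data point against the video’s authenticity.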
Above: Using voice samples collected from social media and other public sources, AI can recreate any human voice and use it to read a given script.
We have enough evidence at this point to know the request is fake, and that Dan’s personal Gmail has been compromised. I’ve already disabled Dan’s accounts, but I’ll need to conduct a more thorough DFIR (Digital Forensics and Incident Response) investigation into the network to see if any corporate data may have been compromised.
I convince Susan not to make the payment, but she’s still very shaken up. We’re almost certain the video is fake due to biometric mismatches that she didn’t catch at first — watching on her phone in a panic — which means he’s probably not kidnapped, but he’s still out of pocket and his status is unknown. Due to tensions in the region, we’re still worried.
At this point, we have multiple priorities, one being getting ahold of Dan. I have Susan call the U.S. Embassy in the country where Dan is and report that he may be the victim of a crime and ask to be put in touch with the relevant authorities. Additionally, we’ll still continue to try contacting Dan directly, and through the hotel, as well as any business associates at the vendor.
To help tie up loose ends on the cyber side, I’d likely reach out to industry contacts at relevant companies, as well as contacts made through the FBI’s InfraGard program. I may still have to fill out the paperwork, but friends and associates can help get the answer faster.
Deepfakes are a major issue, particularly as an information warfare weapon, and have societal-level impact. As a tool for cybercrime, they’re basically just a particularly nasty tool in the phisherman’s toolkit. Like all cyber and information weapons, there’s a red team/blue team arms race between generative AI and the detection of its output. Luckily, AI still doesn’t beat actual intelligence, so long as you properly apply it.
In my opinion, defenders, whether professional incident response staff, or the average person who may be subject to an AI-fueled crime attempt, are best served by approaching the issue with strong critical thinking skills and a healthy dose of skepticism (the same mental tools that’ll help you ferret out “fake news” or determine if you’re being targeted for a “grandkid” phone scam).
Following defined DFIR protocols will help give you additional context within which to evaluate the media, in addition to looking for the “tells.”
Of course, an ounce of prevention is worth a pound of cure. So, what are some things that could’ve prevented this scenario from unfolding the way it did?
First of all, just like protecting against facial recognition, limiting the amount of data about you (photos, video, voice) that can be used to generate fakes is important.
Second, limit the crossover between personal and business IT systems. Your name probably isn’t Hillary, so eventually it’ll catch up to you.
Third, establish protocols with your organization or family for how requests like money transfers would be made, such as having key phrases or “safe words” that need to be present to authenticate the request. Treat any request that deviates from protocol or isn’t authenticated as illegitimate until proven otherwise.
Lastly, if you work for an organization that has the budget, seeking out tools designed for identifying deepfakes and starting to train models based on high-profile members of your organization, such as C-level executives, is worth exploring. The sooner you have those systems and models in place before an incident, the more useful they’ll be if there is one.
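The protocol-based authentication of requests described above can also be formalized in software. The sketch below uses a hypothetical shared secret (agreed on face to face) to tag legitimate money-transfer requests; the amounts and account string are made up, and note the trade-off the experts raise elsewhere, namely that any electronically stored secret can itself be stolen in a breach.

```python
import hashlib
import hmac

# Hypothetical shared secret agreed on in person. Storing it on a computer
# means accepting the breach risk discussed in the article.
SHARED_SECRET = b"correct horse battery staple"

def tag_request(amount, account, secret=SHARED_SECRET):
    """Produce an authentication tag the requester must include."""
    msg = f"{amount}|{account}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(amount, account, tag, secret=SHARED_SECRET):
    """Treat any request whose tag doesn't verify as illegitimate."""
    return hmac.compare_digest(tag_request(amount, account, secret), tag)

good_tag = tag_request("25000.00", "IBAN-XX-1234")
print(verify_request("25000.00", "IBAN-XX-1234", good_tag))  # True
print(verify_request("99999.00", "IBAN-XX-1234", good_tag))  # False
```

Because the tag covers the amount and destination account, an attacker who intercepts one legitimate request can’t reuse its tag to authorize a different transfer.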
In this scenario, the business owner is between a rock and a hard place, as deepfakes are becoming commonplace as a method of social engineering. However, there are still prudent steps to prepare for this type of situation. First, any person doing business internationally should be aware of the inherent risks this presents. Regardless of your business vertical, it’s important to understand how to operate in high-risk places, particularly developing nations and regions where bribery and corruption are rampant.
You may face scenarios like this via subcontractors and supply chain partners, so it’s prudent to perform due diligence on your downstream operations, monitor the financial status of your partners, and maintain awareness of the geopolitical risks in each region where you operate, especially those around personnel and information security.
Educating international travelers about information security best practices is critical to the success of operational security. These practices are a good start: ensuring that data-blocking phone chargers (sometimes called “USB condoms”) are being used in order to prevent data theft, swapping mobile devices (such as cell phones and laptops) for those that don’t contain proprietary company data before leaving the U.S., and performing general security awareness training with all staff annually.
But even with all of the cool tech in the world, you can’t remediate or patch a human — they’re an organization’s largest information security vulnerability. For this, you can only drill, train, and reinforce the importance of identifying social engineering, phishing, data mining threats, physical threats to obtain sensitive information, and the most difficult, resisting bribes (of all kinds).
To enhance operational security, additional levels of personnel validation should also be in place. Code words for team members or projects should be used for identity verification. However, these should be codes that are not stored electronically in the case of a data breach; instead, choose codes that will be easy to remember for the end users even in stressful situations. Other unique personnel identifiers such as authentication tools can also be used (such as one-time keys and codes from encryption devices).
All of the aforementioned methods are low cost and can be easily executed by an organization of any size. Of course, there are much more robust methods for businesses that do a higher volume of international travel, including utilizing satellite communications for conferences, private international transport, and coordinating with local U.S. intelligence resources in host countries ahead of critical meetings (for organizations doing work on behalf of the U.S. government). Like most things, your risk mitigation capability is commensurate with the amount of cash you want to spend.
Above: Senior citizens are often the target of cyberattacks, since they tend to be less tech-savvy than younger people. It’s a good idea to discuss common forms of phishing and social engineering with your older family members and colleagues.
The most intriguing part of the deepfake and AI craze might just be how hard it is to pin down. Being able to tell fact from fiction quickly enough to make an important judgment is a challenge, and until that problem is solved, the risk will only grow. Disproving an AI-generated deepfake by validating identity (especially during an information breach) may turn out to be impossible when a threat actor controls the information systems environment.
In this situation, identity validation is key, assuming that method isn’t itself compromised as part of the communication breach. For organizations, and more commonly individuals, without access to software that can break down a video’s metadata or file structure, there are some easy methods that can at least begin to reveal whether a video, photo, or other communication is faked.
Most importantly, one can start with simple geography. In the scenario, Susan is expecting a communication from Eastern Europe; if the deepfake video contains clues that the subject is in a place completely removed from the expected region, that’s an easy step in the process of elimination. Follow-up communications (if successful) can aid in understanding the origin. If this isn’t possible or helpful, human and emotional intelligence can be used, as long as the person evaluating the suspected deepfake is familiar with the person in question.
Voice cues such as stuttering, tone, inflection, accent, and cadence of speech can be used along with physical cues such as blinking, general eye/pupil movement, breathing, and how facial movement aligns with voice tone and emotion. These items, combined with any other communications received (texts, emails, voicemails, etc.), can be evaluated as a whole to determine whether you’re dealing with a malicious actor or a colleague who’s trying to relay important information over spotty cell and data coverage.
In a perfect world, the organization should train all personnel on security awareness so they can identify malicious actors, but also so they can differentiate themselves from those criminals when communicating with coworkers across the world. In many cases, cultures, translation tools, and the phrases/words we use can appear “non-standard” to international colleagues, and in turn, look somewhat suspicious. But if everyone is on the same page, and ensures clear communication methods are in place, this reduces the risk of misinterpretation.
As terrifying as it might seem (and unreal, despite Liam Neeson movies), business travelers do get abducted overseas. Understanding the reality of this, and how to prepare is enough to fill an entire handbook, but there are a few ways to prepare. First, avoid places where this happens — business travelers (especially Americans) working for large multinationals are the most commonly kidnapped and most valuable. This commonly occurs in places like Iraq, Colombia, Mexico, Yemen, and various parts of Northern Africa.
That said, knowing where your personnel are at all times helps. Many international cellular services now offer satellite mobile device tracking for cell phones, laptops, and geotags for international travelers. This isn’t only intended for safety, but also for purposes of compliance with U.S. export regulations — a nice complement to both safety and operational security. This can help pinpoint if your colleagues are out of place, or exactly where they should be, when they should be there.
If you suspect your colleague has been kidnapped, most importantly, avoid contacting local police (odds are, they might be in on it). Contact your embassy, your insurance company (more on that later), and any additional resources that might assist in evidence collection or ransom extraction. With that in mind, having insurance helps. Rescue, kidnapping, and extortion insurance can carry millions of dollars in coverage — enough to make nasty threat actors hand over your colleague. In conjunction, an organization of any size these days should also have cyber insurance that covers data breach, ransomware, data theft, and eraserware events.
Make sure to preserve any information (deepfake or not) that has been provided by your suspected kidnapped colleague, and be ready to provide details that could assist in locating them. In most cases, U.S. federal agencies (mostly the FBI) and the State Department will have much more sophisticated tech to determine the validity of the information you’ve received. Once you’ve done all of this, it’s worth giving a heads-up to other folks in your company traveling OCONUS to halt any additional travel and return home ASAP.
Above: Deepfake technology can learn a face from existing photos and videos, then superimpose it onto a live actor’s body. It can also generate new faces from scratch based on common facial features and parameters.
All things considered, this would be a pretty difficult scenario for an organization of any size. However, implementing and executing basic principles of personnel and operational security, paired with a process-driven international travel safety approach can go a long way. Effective communication methods with planned touchpoints, code words/obfuscation of information, and general information security best practices can be the difference between a deepfake compromising a business and resulting in an expensive wire transfer to an unsavory character, or a normal chaotic day at the office.
It’s important to remember that these attacks hit close to home as well. Deepfake phone calls that threaten harm to a family member unless a ransom is paid, or that impersonate debt collectors exploiting a financial strain uncovered through a stolen identity, are becoming more commonplace. These events are draining ordinary people dry through simple scams a teenager can pull off with limited technology. To combat this, implementing “home-based” security awareness for your household members is good practice. In most cases, these targeted attacks focus on the elderly or individuals with a history of financial hardship, since they’re vulnerable and easily exploited targets for threat actors.
Considering all of the factors at play, the human element emerges as the most important. Awareness, intelligence, and critical decision making are paramount in being able to identify any sort of deepfake and justifying an appropriate response. With this exploit increasing in volume every day, preparedness and a proactive approach mean everything.
In September 2022, multinational cybersecurity firm Trend Micro released a report that stated, “The growing appearance of deepfake attacks is significantly reshaping the threat landscape for organizations, financial institutions, celebrities, political figures, and even ordinary people.” It continued, “The security implications of deepfake technology and attacks that employ it are real and damaging.
As we have demonstrated, it is not only organizations and C-level executives that are potential victims of these attacks but also ordinary individuals. Given the wide availability of the necessary tools and services, these techniques are accessible to less technically sophisticated attackers and groups, meaning that malicious actions could be executed at scale.” We’d recommend anyone interested in this topic read the full report — search for “How Underground Groups Use Stolen Identities and Deepfakes” on TrendMicro.com.
The report concludes with several recommendations for users concerned about deepfake attacks. Individuals should use multi-factor authentication for online accounts, set up logins based on biometric patterns that are less exposed to the public (e.g. irises and fingerprints) rather than simple facial recognition, and limit exposure of high-quality personal images and videos on social media.
Remember that every selfie, video, or audio clip you post can be fed into AI deepfake tools. The less data the bad guys have access to, the less accurate their fakes will be. For businesses, Trend Micro recommends authenticating each user/employee by three basic factors: something the user has (like a physical key), something the user knows (like a password), and something the user is (biometrics). Each factor should be chosen wisely based on analysis of the criminal threat landscape.
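To ground the “something the user has” factor, here’s a minimal sketch of the TOTP algorithm (RFC 6238) that hardware tokens and authenticator apps implement. The key below is the RFC’s published test vector, not a real secret; any production deployment should use a vetted library rather than hand-rolled crypto code.

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(key, timestamp=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed to a 30-second time window."""
    if timestamp is None:
        timestamp = time.time()
    return hotp(key, int(timestamp) // step, digits)

# RFC 6238's published SHA-1 test vector: this key at T=59s yields 94287082.
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and depends on a secret the attacker doesn’t hold, a stolen password or cloned voice alone isn’t enough to pass verification.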
W. Dean Freeman
W. Dean Freeman, CISSP-ISSEP, CSSLP, C|OSINT, is a veteran of the cybersecurity industry with over 15 years of professional experience ranging from threat intelligence to systems security engineering, and over 25 years of hobbyist experience in related domains. He’s a regular contributor to RECOIL OFFGRID magazine, and his writings on preparedness and self-defense topics have appeared, or will appear, in other publications as well. He lives with his family in central Texas.
David Roy (a pseudonym, as required by his line of work) is a global risk management and information security executive at a multinational technology firm and specializes in critical infrastructure security. He has worked in this space for well over a decade and holds multiple industry certifications in information security, with an educational background in geopolitics and risk management. He has worked in both the private and public sector during his career, and has extensive experience working across North America, Europe, the Middle East, and Southeast Asia.