Since they first appeared online in 2017, deepfakes have been showing up with increasing frequency.
While many of the examples you see on YouTube and TikTok are uncanny, entertaining manipulations of face-swapped celebrities dubbed with each other’s voices, the eerie technology behind the phenomenon poses serious cyber security risks for SMBs.
In one 2019 instance, an energy company in the United Kingdom was scammed out of $243,000 when an employee believed he was sending the money to one of the company’s Hungarian suppliers.
As it turned out, the account was fraudulent, and the scam became one of the first known instances of an AI-generated voice being used to swindle a business. Forrester predicts that these scams will only increase in the next few years.
More recently, the FBI issued an advisory warning about deepfakes being used in sophisticated social engineering exploits.
So, what are deepfakes?
Deepfakes are manipulated photos, videos or audio recordings of a person created through machine learning techniques, and the hyper-realistic results are unnerving. With a deepfake, a person can make it seem as though the subject of the media said or did something they never actually said or did.
The end products are created by running several pieces of source material—like previous photos, videos, or audio recordings of the intended subject—through a deep-learning system, which studies the content from several angles. From there, two competing neural networks, together known as a generative adversarial network (GAN), take over: one generates the fake while the other detects its flaws, and the back-and-forth continues to refine the product to a more believable level.
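For readers curious about the underlying mechanics, here is a minimal, hypothetical sketch of that generator-versus-detector loop in Python (using PyTorch). The network sizes and the "real" data are toy placeholders for illustration only, not any actual deepfake tool.

```python
# Toy sketch of a GAN training loop: a generator tries to fool a discriminator,
# and the discriminator tries to catch the fakes. Purely illustrative.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # illustrative sizes; real deepfakes work on images/audio

# Generator: turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA_DIM)      # stand-in for real source material of the target
    fake = generator(torch.randn(32, LATENT_DIM))

    # 1) Discriminator learns to tell real from fake ("detects flaws").
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Generator learns to fool the discriminator ("refines the product").
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key point is the back-and-forth: every time the detector gets better at spotting flaws, the generator is pushed to produce a more believable fake.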
Why should I be concerned?
Because anyone using this technology can create illegitimate media of anyone else, companies are at risk from a range of social engineering and market manipulation tactics.
Upper management and executives should be especially wary of the sophistication of deepfakes, as they are the most likely targets of these fabricated products in a business setting.
Below, we briefly discuss what each type of deepfake can be used for against your business, and how cyber criminals pair them with other cyber security threats, like phishing emails, to create more elaborate attacks.
Video
Video deepfakes are the most common examples proliferating on social media, and in the coming years, they will likely become an issue for businesses as well. With the ability to produce videos of anyone, cyber criminals could fabricate public service announcements from a CEO or other executive, or make it appear that an executive has confessed to crimes or scandals they never committed. These videos have the potential to damage business reputations and stock prices.
Video deepfakes could also be used to create false Skype calls. These add another layer of fraud: a criminal could present a video of an executive asking an employee to complete a money transfer or reveal private data. And who wouldn’t answer a request made in a video call from their “supervisor?”
The technology is not yet perfect, so there are still a few ways to distinguish a real video from a fake one, such as unnatural mouth movements, confusing or misplaced shadows and a lack of blinking. But the tech is steadily improving, and it’s already becoming difficult to tell what is real.
Audio
Audio deepfakes are the higher-priority concern at present, as they are more difficult to detect. An audio deepfake is an AI-created, falsified recording of a person’s voice. In most cases, they are near-perfect matches of the real thing, and that is cause for alarm.
In the case of the UK company mentioned above, the employee who sent the money to the Hungarian account had received a phone call from his CEO, or so he thought. The voice on the other end of the line sounded exactly like the CEO, right down to the cadence, tonality, and slight German accent.
In fact, some of the AI systems available can create a convincing clone of someone’s voice after listening to as little as four seconds of source audio.
Combination Attacks
Often, these deepfakes are paired with other forms of communication, such as an email or text message that looks as if it came from the subject of the audio or video recording. So, someone will first receive a phone call. A few hours later, he or she might also receive an email or text reinforcing the request. Both would appear to be legitimate requests from an executive or another superior.
The dual nature of the scam makes it harder to identify as such, especially where financial assets are concerned. By making a request, say for a wire transfer, through more than one channel, the attacker lowers an employee’s guard: the follow-up contact appears to be verification of the initial message, which makes the scam more likely to succeed.
The number of deepfakes, whether used in isolation as audio or video or as part of a combination attack, is only expected to grow in the coming years. The technology is becoming more sophisticated every day, which means it is also becoming more difficult to tell reality from fiction.
The United States and Chinese governments have acted in response to the rise of deepfakes, with the U.S. requiring mandatory annual reports on the technology. China, meanwhile, has moved to treat certain deepfakes as criminal offenses. Even Facebook has banned deepfake content from its platform in an effort to stop the spread of falsehoods and disinformation.
Be proactive. Be prepared.
Since these attacks are currently few and far between, cybersecurity professionals have a window of opportunity to develop defenses against them.
However, it is equally important for businesses to be proactive about these dangers. Just as with phishing, ransomware and data breaches, employees need to stay up to date on the latest threats and trends in the technology landscape, including the dangers of deepfakes.
We recommend your employees complete ongoing cybersecurity training, especially as criminals continue to implement more sophisticated strategies.
Businesses also need structured verification procedures for the release of data or money. Staff should never complete a wire transfer or release private information to someone after just a phone call or a single email. Follow-up procedures and policies should be standardized to make sure that requests are legitimate—and to avoid schemes like the CEO voice spoof mentioned earlier.
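As a purely illustrative example, here is a minimal sketch, in Python, of what a dual-channel release rule might look like. The channel names, dollar threshold and function names are hypothetical assumptions, not a prescribed policy.

```python
# Hypothetical dual-channel verification rule for payment or data-release requests.
# Channel names and the threshold below are illustrative assumptions only.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD_USD = 10_000  # illustrative limit for "large" requests
REQUIRED_CHANNELS = {"callback_to_known_number", "signed_email_or_ticket"}

@dataclass
class PaymentRequest:
    requester: str
    amount_usd: float
    channels_verified: set = field(default_factory=set)

def can_release_funds(req: PaymentRequest) -> bool:
    """Never release on a single contact; large amounts need every required channel."""
    if len(req.channels_verified) < 2:
        return False
    if req.amount_usd >= APPROVAL_THRESHOLD_USD:
        return REQUIRED_CHANNELS <= req.channels_verified
    return True

# Example: one convincing phone call is never enough on its own.
request = PaymentRequest(requester="ceo@example.com", amount_usd=243_000,
                         channels_verified={"callback_to_known_number"})
print(can_release_funds(request))  # False until a second, independent channel confirms
```

The specifics will differ for every organization; the point is that a written rule, rather than an individual’s judgment in the moment, decides when a single phone call or email is enough.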
If you would like to learn more about cyber security training or implementing IT policies within your organization, contact CoreTech today.