Social engineering attacks have posed a challenge to cyber security for years. No matter how strong your digital security is, authorized human users can always be manipulated to open the door to a clever cyber attacker.
Social engineering typically involves tricking an authorized user into taking an action that enables online attackers to bypass physical or digital security.
One common trick is to make victims anxious so that they become careless. Attackers may pretend to be the victim's bank, sending an urgent message that their life savings are at risk along with a link to change their password. Of course, the link leads to a fake bank website where the victim inadvertently reveals their real password, which the attackers then use to steal funds.
But today we find ourselves facing a new technology that may completely change the playing field for social engineering attacks: synthetic media.
What are synthetic media?
Synthetic media are video, audio, images, virtual objects, or text produced or assisted by artificial intelligence (AI). This includes deepfake video and audio, AI-generated art created from text prompts, and AI-generated digital content in virtual reality (VR) and augmented reality (AR) environments. It also includes AI-generated text, which can enable a non-native speaker to write like a fluent native speaker.
Deepfakes are generated using a machine learning approach called the Generative Adversarial Network (GAN). This method pits two neural networks against each other: one, the generator, tries to simulate data based on a large sample of real data (images, video, audio, etc.), while the other, the discriminator, judges the quality of that fake data. Each learns from the other, so the generator eventually produces convincing fakes. There is little doubt that this technology will improve rapidly even as it becomes less expensive.
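The adversarial loop described above can be caricatured in a few lines. This is a hypothetical toy sketch, not a real GAN: the "real data" is a one-dimensional Gaussian, the "discriminator" is just a running estimate of what real samples look like, and the "generator" is a single number it nudges toward whatever the discriminator currently accepts. The two-player structure is the point, not the math.

```python
import random

random.seed(0)

REAL_MEAN = 5.0          # the distribution of "real" samples
gen_mean = 0.0           # the generator starts far from reality
disc_estimate = 0.0      # the discriminator's running notion of "real"

def real_sample():
    # draw a genuine sample (stand-in for a real image or voice clip)
    return random.gauss(REAL_MEAN, 0.1)

def fake_sample():
    # draw a sample from the generator's current distribution
    return random.gauss(gen_mean, 0.1)

for _ in range(2000):
    # Discriminator step: refine its estimate of what real data looks like.
    disc_estimate += 0.05 * (real_sample() - disc_estimate)
    # Generator step: move its output toward whatever currently
    # passes the discriminator's test.
    gen_mean += 0.05 * (disc_estimate - fake_sample())

print(round(gen_mean, 1))  # the generator ends up mimicking the real data
```

After a few thousand rounds the generator's output is statistically indistinguishable from the real samples, which is exactly why GAN-produced fakes are so convincing at scale.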
Text-to-image AI art is more complicated. Simply put, during training the AI takes an image and repeatedly adds noise to it until it becomes pure noise, then learns to reverse the process. At generation time, a text prompt steers that noise-removal process, because the model has been trained on large numbers of images paired with descriptive words. The prompt can thus influence the direction of the denoising according to subject, style, detail, and other factors.
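That prompt-steered denoising can be sketched in one dimension. This is a deliberately crude, hypothetical analogy rather than a real diffusion model: "images" are single numbers, the prompt vocabulary and its target values are invented for illustration, and each denoising step simply nudges pure noise toward the value associated with the prompt.

```python
import random

random.seed(1)

# Invented stand-ins for concepts the model learned during training;
# in a real system these would be learned from image/caption pairs.
PROMPT_TARGETS = {"cat": 2.0, "sunset": 8.0}

def generate(prompt, steps=200):
    """Start from pure noise and iteratively denoise toward the prompt."""
    x = random.gauss(0.0, 3.0)        # pure noise, the reverse-process start
    target = PROMPT_TARGETS[prompt]   # the text input steers the denoising
    for _ in range(steps):
        # Each step removes a little "noise", pulling the sample toward
        # whatever the prompt is associated with, plus a small residual.
        x += 0.1 * (target - x) + random.gauss(0.0, 0.05)
    return x

print(round(generate("sunset")))  # ends up near the prompt's learned target
```

Different prompts start from the same noise process but end in very different places, which is the sense in which text "points" the denoiser at a region of the model's learned image space.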
Many such tools are available to the public, and each specializes in a different area. Very soon, people may legitimately choose to generate pictures of themselves rather than be photographed. Some startups are already using online tools to make all of their employees look as though they were shot in the same studio with the same lighting and photographer, when in reality they fed a few random snapshots of each employee into the AI and let the software generate a consistent visual output.
Synthetic media already threaten security
Last year, a criminal gang stole $35 million by using deepfake audio to trick an employee of a company in the UAE into believing that a director needed the money to acquire another company on behalf of the organization.
It was not the first attack of its kind. In 2019, the managing director of a German company's UK subsidiary got a call from his CEO asking him to transfer €220,000, or so he thought. The caller was in fact a scammer using deepfake audio to impersonate the CEO.
And it's not just audio. According to the FBI, some malicious actors have used real-time deepfake video in fraudulent attempts to get hired. They used consumer deepfake tools to conduct remote interviews while impersonating qualified candidates. We can assume that these were mostly social engineering attacks, because the applicants largely targeted IT and cyber security jobs, which would have given them privileged access.
So far, real-time deepfake video scams have been mostly or wholly unsuccessful; today's consumer deepfakes aren't good enough yet. But they soon will be.
The future of social engineering based on synthetic media
In her book Deepfakes: The Coming Infocalypse, writer Nina Schick estimates that as much as 90% of all online content may be synthetic media within four years. We once relied on images and video for verification; the synthetic media boom will upend all of that.
The availability of online tools to create AI-generated images will facilitate identity theft and social engineering.
Real-time deepfake video technology will enable people to appear in video calls as someone else. This may provide a disguised way to trick users into malicious actions.
Here's one example. Using the AI art site Draw Anyone, I demonstrated the ability to combine the faces of two people, ending up with a single image that resembles both of them at the same time. This lets a cyber attacker create an ID card whose photo shows a face the victim knows, then pose in person with a fake ID that looks like both the identity thief and the person being impersonated.
There is no doubt that AI media-creation tools will pervade future virtual reality and augmented reality environments. Meta, formerly Facebook, has introduced an AI-powered synthetic media engine called Make-A-Video. Like the new generation of AI art engines, Make-A-Video uses text prompts to create videos for use in virtual environments.
How to protect against synthetic media
As with all defenses against social engineering attacks, education and awareness are key to reducing the threats posed by synthetic media. New training approaches will be crucial; we must discard our basic assumptions. That voice on the phone that sounds like the CEO may not be the CEO. That Zoom caller who appears to be a known, qualified candidate may not be either.
In short, the media—audio, video, images, and written words—are no longer reliable forms of authentication.
Organizations should research and explore emerging tools from companies like Deeptrace and Truepic that can detect synthetic video. HR departments must now embrace AI fraud detection when evaluating resumes and job candidates. Above all, embrace a default posture of distrust toward everything.
We are entering a new era in which synthetic media can fool even the most astute among us. We can no longer trust our ears and eyes. In this new world, we must make our people vigilant and skeptical, and equip them with the tools that will help us fight the coming scourge of synthetic media social engineering attacks.