Deepfakes are images or videos, often paired with cloned audio, created using a form of artificial intelligence (AI) to make it appear as though an individual did or said something that, in fact, never took place.
According to a 2019 Pew Research Center report, 63% of Americans said made-up or altered videos and images create confusion about the facts of current issues and events.
With the 2024 presidential election on the horizon and U.S. Senate Majority Leader Chuck Schumer hosting tech leaders and experts at an AI forum on September 13, 2023, the state of AI and its role in deepfakes is consistently making headlines.
Scott Hermann, CEO of IDIQ, says deepfakes usually target individuals in the public eye to discredit them, spread misinformation about them, or otherwise harm their character or credibility. “As advancements in technology continue to reshape our digital landscape, the rise of deepfake crime has become a significant concern,” he said.
“These can be especially dangerous when used for political motivation, as this technology can make it seem like a political figure has said or done almost anything,” said Hermann. “Media chaos can ensue when the files are shared with outlets looking to discredit or support a specific party or candidate. As a more lasting effect of a situation like this, citizens can distrust information they find about a political figure, good or bad, and create a zero-trust situation.”
“Misinformation has been running rampant of late, and not only from the use of AI and deepfakes. In addition to the concept of fake news, we are dealing with different tools scammers can use with AI technology,” said Hermann. “It can be difficult to tell the difference between what is fact or opinion, or in the case of AI-generated content, what is real or fake.”
Hermann says that while AI technology is moving and evolving quickly, tools needed to identify AI content, good or bad, are also keeping pace. “Google SynthID is a great example of this as a technology that can identify AI-generated content without disrupting the content itself, a sort of quiet verification that can be easily fact-checked,” he said.
Paul Kan, AI Business consultant and CEO of Nothing Kills Dreams, said that, unfortunately, the use of tools to perform fraud, spread misinformation, create false impersonations and publish malicious content has been around for a long time.
“With AI, it is now a more sophisticated mechanism that is being used to carry out these intentions in a much more realistic manner,” he said.
“The term deepfake comes from combining the words ‘deep learning’ – AI algorithms that teach themselves to improve using large data sets such as images and video – and ‘fake,’ describing the convincing video and voice hoaxes it generates,” said Kan. “While deepfakes frequently alter original material by substituting one person for another, they can also produce brand new content that falsely shows someone acting or speaking in ways they never did.”
Kan says that access to AI, which is rapidly improving by the day, has led to dangers around false information, especially regarding political propaganda and its effect on elections.
“From a consumer standpoint, there have been both beneficial and detrimental use cases,” said Kan. “On the one hand, there are legitimate uses of deepfakes such as for entertainment or customer support purposes, but on the other hand, there are serious threats when it comes to deepfake pornography and extortion.”
Kan cites headlines from 2017, when revenge porn and explicit content circulated featuring actors whose faces had been swapped in. “There was also a clip that went viral in 2023 of a case where perpetrators were using deepfake software to extort and threaten a woman using a realistic imitation of her daughter’s voice,” said Kan.
AI to curb deepfakes
Hermann says deepfake videos are created using two competing deep learning algorithms – one that generates the content, a generator, and another that tries to tell generated content from real content, a discriminator. “These are run against each other to create a generative adversarial network, which produces the outcome of a deepfake,” he said.
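The adversarial setup Hermann describes can be illustrated with a deliberately tiny sketch: a 1-D “generator” and “discriminator,” each just a couple of parameters, trained against each other with hand-derived gradients. This is an assumption-laden toy (real deepfake GANs use deep neural networks on images), but the push-and-pull is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Toy setup: "real" data are 1-D samples from N(4, 1).
# Generator g(z) = a*z + b starts far from the real distribution.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) starts uninformative.
w, c = 0.0, 0.0
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    fake = a * rng.normal(0.0, 1.0, batch) + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real samples from generated ones.
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator step: ascend log D(fake), i.e. learn to fool
    # the discriminator by drifting toward the real distribution.
    z = rng.normal(0.0, 1.0, batch)
    s_f = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

# After training, the generator's mean (b) has been pulled toward
# the real mean of 4 - the "outcome" is content the discriminator
# can no longer easily separate from the real thing.
```

In a real GAN both players are deep networks and the data are pixels, but the alternating update shown here is the core of the technique.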
“With spotting a deepfake, the devil is in the details,” said Hermann. “When verifying whether a video is legitimate or created, closely inspect the movements of the individual’s face, especially their eyes, as they may not be blinking naturally, and lips, as they may not be syncing correctly with the actual words in the audio.”
Hermann says it is essential to compare the video in question with a verified video, looking for tics, mannerisms, unnatural movements and other behavioral signs usually exhibited by that individual that may be missing, or present when they shouldn’t be.
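The blink cue Hermann mentions can be checked programmatically. A common heuristic is the eye aspect ratio (EAR), which drops sharply when the eye closes; a long stretch of frames with no blinks is one weak deepfake signal. The sketch below assumes you already have the six standard per-eye landmark points from a face-landmark library (not shown) and uses a hypothetical threshold of 0.21.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points in the usual ordering
    (outer corner, two top points, inner corner, two bottom points).
    Ratio of vertical openings to horizontal width; low = closed."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.21):
    """Count closed-eye episodes in a per-frame EAR series."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Synthetic per-frame EAR series: open eyes, one 3-frame blink, open eyes.
ears = [0.30] * 100 + [0.15] * 3 + [0.30] * 100
```

An adult typically blinks roughly 15 to 20 times a minute, so a minute of footage with zero detected blinks would merit the closer inspection Hermann recommends. This is a heuristic, not proof: modern deepfakes often blink convincingly.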
“AI software companies are generating this new technology to further innovation and creativity, and part of this is also to ensure that users feel safe with this technology,” said Kan. “Being a part of the solution would be beneficial to the developers of this technology, and I do believe they also have a responsibility to prevent malicious deepfakes, protect original content and trusted sources.”
But Kan says he doesn’t believe there is a silver bullet solution to preventing the malicious use and spread of deepfakes. However, a multifaceted approach – software to detect deepfake technology, education, awareness, regulation, and policies – is an excellent place to start.
“Organizations and businesses are already working on software that can better detect and block deepfakes. Fighting software with software is certainly one way to go,” said Kan. “Although platforms like Twitter, YouTube and Facebook are already using blockchain technology to authenticate the source of content and ban manipulated media, I believe this should be implemented and enforced in all social and public platforms.”
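The source-authentication idea Kan describes boils down to recording a cryptographic fingerprint of a piece of media when a trusted source publishes it, then checking circulating copies against that fingerprint. In the minimal sketch below, a plain dictionary stands in for the append-only ledger (on a real platform this would be the blockchain or a signed registry); the source name and content bytes are hypothetical.

```python
import hashlib

# Stand-in for an append-only ledger: digest -> registered source.
ledger = {}

def register(content, source):
    """A trusted publisher records the SHA-256 fingerprint of its media."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[digest] = source
    return digest

def verify(content):
    """Look up a circulating copy; returns the source, or None if the
    bytes don't match anything a trusted publisher registered."""
    return ledger.get(hashlib.sha256(content).hexdigest())

# Hypothetical example: the original clip verifies, a manipulated
# copy (even a one-byte change flips the hash) does not.
original = b"press briefing video bytes"
register(original, "official-press-office")
tampered = b"press briefing video bytes (face swapped)"
```

The design choice worth noting is that hashing detects *any* alteration but cannot say *what* changed, which is why platforms pair provenance checks like this with the detection software Kan mentions.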
Regarding education and awareness, Kan says that, like with any new technology, it will require more media and technology-savvy consumers.
“Educating the public on the threats that deepfakes pose as well as steps that they can take to protect themselves is important and necessary to lower the risks of falling victim to deepfake content,” said Kan. “I believe that consumers should be more cautious about the content they post online, and as we live in the age of social media, it is important to be educated on how AI now only needs a few pictures to generate a viable and convincing deepfake.”
“Best practices for consumers could include questioning the accuracy of the content they consume,” added Kan. “Fact-checking and verifying the source of content will have a large impact in protecting consumers against falling victim to deepfakes.”
“We joke a lot about funny concepts like our phones listening to us,” said Hermann. “We know social media platform managers can use misinformation flags or even implement bans for harmful information sharing, so we would be better off acknowledging and encouraging these capabilities and using them to our advantage, especially since so many deepfakes and other AI-generated content are shared and spread through social media channels.”
One of Hermann’s ideas is to create roles that help protect the integrity of videos. “A social platform disinformation liaison can be extremely helpful in determining the source, channels traveled and intention of certain content to ensure the integrity of videos and other content shared on these platforms,” said Hermann. “An AI expert liaison can be very helpful to not only verify content but to help in creating legislation and guardrails around AI use, specifically in political and public news and information contexts.”
Is regulation the answer?
Kan believes there is a chance of regulation. “Although the road there will take some time to figure out as the government tries to wrap its head around this technology,” he said.
“One of the things to consider is it depends on the jurisdiction of the regulation. For example, China is the first country to regulate deep synthesis technologies, requiring all deepfake content to be marked as modified,” said Kan. “The challenge we face in a Western society is that it is not as easy to implement in a democratic state vs. a surveillance state.”
Kan says the answer might need to target more specific use cases, such as pornography or election candidates. “Ten states have already passed legislation so far prohibiting certain uses of deepfakes like in political candidates and pornography,” said Kan. “While current tort laws can cover certain issues with deepfakes like non-consensual pornography, it is important to update the current definitions and wording to include the use of AI-generated content.”
A Washington Post article in 2021 said that 72% of Americans polled don’t trust Facebook to handle their data responsibly. A Pew Research Center study in 2021 showed that 62% of Americans believe their online and offline activities are being tracked and monitored by companies and the government with some regularity. And 81% of the public said the potential risks they face because of data collection by companies outweigh the benefits.
When it comes to tech billionaires deciding how data will be used, Kan says that it’s never comfortable to know they are determining how his data is being used and how he is being regulated.
“However, I’m also cognizant that, unfortunately, it has been that way for a long time,” said Kan. “My viewpoint has always been to focus on what we can control, and our most powerful tool collectively is our voice; the more we use it, the more we can advocate for what we need and want.”
Kan says that we need to advocate for transparency, privacy and consent.
“It’s important for us to demand transparency on how our data is being used, where it is being exposed and shared, as well as having the ability to opt in and out,” said Kan. “As we’ve seen many instances, a collective unified voice allows us to move the needle and have a say in how we are regulated in the age of AI.”