From politics to popular culture, people have long found ways to spread lies, distort the truth and manipulate audiences. But never before have we lived in a time when the information we believe is coming from a certain person may not be coming from that person at all, but from an artificially intelligent simulation of them: a deepfake.
Deepfakes are audio recordings or videos that look and sound just like the real thing. They use artificial intelligence software to create moving images of anyone, and the tools can be downloaded and used by anyone who knows what they are doing. They work by training a system that learns how a person moves and talks, and that mirrors their specific facial expressions.
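The face-swap approach behind many deepfakes pairs one shared encoder, which learns pose and expression, with a separate decoder per identity; at swap time a frame of person A is encoded, then decoded with person B's decoder, rendering B's face making A's expression. A minimal numpy sketch of that structure (layer sizes and random weights are purely illustrative, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

IMG = 16 * 16   # a tiny "image" flattened to a vector (illustrative size)
LATENT = 32     # shared latent space both identities are encoded into

# One shared encoder captures identity-independent structure (pose, expression);
# each identity gets its own decoder that renders that structure as their face.
W_enc = rng.normal(scale=0.1, size=(IMG, LATENT))
W_dec = {"A": rng.normal(scale=0.1, size=(LATENT, IMG)),
         "B": rng.normal(scale=0.1, size=(LATENT, IMG))}

def encode(face):
    # Compress a face into a pose/expression code shared by both identities.
    return np.tanh(face @ W_enc)

def decode(latent, identity):
    # Render the shared code back into an image as the chosen identity.
    return np.tanh(latent @ W_dec[identity])

def swap(face, target_identity):
    """Encode a source face, then decode it as the target identity."""
    return decode(encode(face), target_identity)

# A frame of person A, re-rendered through person B's decoder:
frame_a = rng.normal(size=IMG)
fake_b = swap(frame_a, "B")
print(fake_b.shape)  # (256,)
```

In a real system the encoder and both decoders are deep networks trained on many frames of each person, which is why footage of the target is all an attacker needs.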
Deepfakes and social media
These simulations are becoming more widespread in all of our surroundings, but it is on social media that they are most prevalent.
For example, it was only a few days ago that an eerie video of Facebook founder Mark Zuckerberg appeared on Instagram, saying: “Imagine this for one second: one man with total control of billions of people’s stolen data. All their secrets, their lives, their futures. I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.”
The video, of course, turned out to be fake, but whoever created it was trying to send a message to the world, and used deepfake software expertly to do it.
But as well as being used to mock celebrities, there is a fear that runs deep when it comes to using AI to replicate real humans. In response to the Zuckerberg video, a meeting was called in the House of Representatives following criticism that the video had not been taken down soon enough. Experts speaking at the meeting are now warning that deepfake videos even have the potential to lead to “violent outbreaks”.
There are now real concerns that, if not dealt with, deepfakes may have an impact on politics. Clint Watts, a researcher at the Foreign Policy Research Institute, spoke of the rise of deepfakes having the impact of propaganda when they fall into the wrong hands.
He said: “Those countries with the most advanced AI capabilities and unlimited access to large data troves will gain enormous advantages in information warfare. The circulation of deepfakes may incite physical mobilisations under false pretences, initiating public safety crises and sparking the outbreak of violence.”
He further spoke of how fake and misleading messages spread across WhatsApp have already caused problems leading up to the elections in India.
Watts continued his point by saying: “The spread of deepfake capabilities will only increase the frequency and intensity of these violent outbreaks.”
Consent, security and the law are also major factors that need to be examined. Professor Danielle Citron of the University of Maryland has already warned of the disruption this kind of AI is causing to politics.
She said: “Platforms should have a default law since we can’t automatically filter and block deepfakes yet.”
Whether or not a person was involved in making a deepfake, they can still contribute to its spread by doing something as simple as liking or commenting on the video on social media. The constant flood of information then makes such content hard to take down, and adds to the spread of any misinformation the video contains.
What can be done about the negative side of deepfakes?
One solution suggested during the House of Representatives hearing was to fine social media accounts that share deepfake videos.
David Doermann, a professor at the Artificial Intelligence Institute at the University at Buffalo, believes that solving the issue comes down to individuals being able to flag malicious content, and to the tech giants behind the platforms automatically seeking out and removing fakes, since current laws shield social media sites from accountability for published content.
On the subject, Doermann said: “Even if we don’t take down videos, we need to provide warning labels. We need to continue to put pressure on social media platforms to realise the way their platforms are being used.”
But before we all start building bunkers in the fear that humans can never be trusted again, remember that, for now, deepfake technology does not allow a completely flawless copy to be made, and that tools to detect deepfakes are already being developed.