We all recognize them: Lincoln standing for a portrait, Lenin addressing Russian soldiers, a woman crouched beside a student's body at Kent State University in 1970. But did you know those canonized moments in time were altered?
Photo manipulation has been around almost as long as the technology itself, a practice once reserved for a few particularly skilled individuals around the world. But that is all changing with the use of machine learning in video production, technology better known as deepfakes. Now, almost anyone can download the computer code and programs needed to create digitally altered videos of whoever, wherever or whatever they want. That is, if you know where to look.
“I’ve been thinking about these problems in my lab for about 20 years now,” said Hany Farid, an expert on digital authentication who teaches at UC Berkeley. “We have more and more examples of manipulated media being used to disrupt democracies, sow civil unrest, revenge porn, spectacular amounts of disinformation that lead to health crises, and so on and so forth.”
NBC Bay Area’s Investigative Unit is looking behind the screen to understand what deepfake technology is and the threats it poses by diving into one of the more prominent concerns: election interference. Along the way, it became apparent these fake videos have inflicted untold damage on targeted victims – who are predominantly women – and have the potential to do much more.
In our first of three digital extras exploring deepfake technology, we look at the surprisingly basic system it uses to fabricate faces of people who look real but never actually existed. But from those simple beginnings have come new variations on the technology that can produce content of anyone, anywhere, saying or doing anything.
“Whether that is state sponsored, whether that is the campaigns themselves doing it, whether that is trolls, whether that is just a bunch of teenagers in Macedonia trying to just make a buck, we are seeing the injection of fake information being used to disrupt elections,” Farid said. “And I think that we still have not, probably, experienced the worst of that.”
Deepfake videos of political figures are already out there, but have largely been made for educational purposes: Nixon announcing a disaster on Apollo 11, President Barack Obama (voiced by director Jordan Peele) making questionable statements about the Trump administration, the U.K.’s prime minister endorsing his Labour Party rival Jeremy Corbyn.
The videos are funny, but Farid poses a possibility: What happens when a deepfake video is released in the days leading up to a closely divided election, handing the presidency to one candidate before the broader public realizes it has been duped?
“That’s going to be the ball game,” said Farid, who is working on deepfake detection methods and likens the process to an “arms race” between the researchers and producers. The rapid evolution and spread of this technology took many by surprise – a danger for some, an opportunity for others.
That scenario is why Assemblymember Marc Berman introduced a bill in early 2019 making it illegal for anyone to produce or distribute altered media of candidates within 60 days of an election with the intent to deceive the public. The bill, signed by Gov. Gavin Newsom earlier this year, also provides legal mechanisms that could stop the spread of altered political media. The law is one of the first of its kind in the country – something Berman, who also chairs the Assembly’s elections committee, said he is proud of.
“I do have a lot of concern, especially after what we saw in the 2016 elections and the disinformation campaign waged by bad actors,” Berman said. “There are a lot of folks out there who are going to be interested in trying to trick voters and influence elections here in California and the U.S.”
A federal version of Berman’s bill is circulating in Washington, D.C., but concerns about the limitations of both pieces of legislation have already been raised.
Experts in digital authentication, artificial intelligence and digital rights say imposing a timeframe like 60 days isn’t much of a deterrent given the reach and influence of social media. It’s an industry, they say, that needs more transparency and accountability than what is established by federal communications law.
First Amendment experts raise a different issue: the law gives politicians the ability to censor media they disagree with by claiming it’s fake. That concern, raised by groups like the ACLU, is why the 60-day timeframe was added, said Berman. He added that he and his staff worked closely with constitutional law scholars to finalize the bill, one of two he introduced last year to address deepfake technology.
The other, an amendment to the state’s digital privacy laws, provides clearer legal avenues for victims of deepfake, nonconsensual pornography to stop the spread of the content and bring lawsuits against those responsible. Gov. Newsom signed both bills this year.
Digital and women’s rights advocates say more states need to follow California’s lead by offering similar avenues and protections for deepfake victims. The production of deepfake adult content isn’t a novelty – it’s the technology’s origin and is disproportionately impacting women and underrepresented groups.
In the second digital extra on deepfake videos, Investigative Reporter Candice Nguyen explains where and how the term got its start. Along the way, she discusses who is most often subjected to the weaponization of this technology and what is being done to stop it.
It all started with a Reddit user, some publicly available machine-learning code and the faces of famous female celebrities mapped into pornographic videos. From a small corner of the web, that Reddit user, who called himself Deepfake, set in motion a technology that could threaten social stability, economies and democracies around the world.
Those issues are highly concerning for Farid, but there’s one thing worrying him more: plausible deniability and the loss of a shared fact system. The problem is best posed with a question:
“What happens when we live in a world where any image, any video, any audio recording can be fake?” he asks. “Well then, nothing is really real anymore.”
Third time's a charm. If you want to hear more from Hany Farid about the evolution of deepfake technology, the dangers it poses and what he thinks we can do about it, you're in luck. Check out our extended Q&A with him below: