Thursday, November 14, 2019

Can Democracy Survive Deep Fakes?

The Perpetual Cat & Mouse Game

First Amendment vs Deep Fakes. Imagine if lying – at least lying about things that matter – were illegal. In some cases, it already is. Aside from that perpetual “you cannot yell fire in a crowded theater,” there are criminal and civil sanctions for fraud, failure to disclose on solicitations for funding, intentional mislabeling, lying on a credit application, etc. In some cases, you have to prove actual damages; in others, specific statutory violations. Courts are loath to impose prior restraints, and if the volume of truly illegal mendacity is so pervasive, so overwhelming, that courts either cannot handle the caseload or each wrong is too small to merit legal action, the practical reality is that even material lies may not be penalized in any way.

From seduction and used car sales to testimonials and conspiracy theories, the United States runs on lies. And while the practices above carry at least some legal risk, there are literally no (or very limited) legal or practical remedies against politicians who either state what they want as if they can deliver it… or simply make things up, denigrate their opponents and lie, lie, lie. The old “how can you tell if a politician is lying…” joke.

But what is not so funny is the impact of social media as a political influencer and a new technological reality that threatens democracy to its core: the ability to create highly credible audio and video productions using the voice and face of a political opponent saying and doing things they never said or did.

Evan Halper, writing for the November 5th Los Angeles Times, explains: “Election officials and social media firms already flummoxed by hackers, trolls and bots are bracing for a potentially more potent weapon of disinformation as the 2020 election approaches — doctored videos, known as ‘deep fakes,’ that can be nearly impossible to detect as inauthentic.

“In tech company board rooms, university labs and Pentagon briefings, technologists on the front lines of cybersecurity have sounded alarms over the threat, which they say has increased markedly as the technology to make convincing fakes has become increasingly available.

“On Tuesday [11/5], leaders in artificial intelligence [unveiled] a tool to push back — it includes scanning software that UC Berkeley has been developing in partnership with the U.S. military, which the industry will start providing to journalists and political operatives. The goal is to give the media and campaigns a chance to screen possible fake videos before they could throw an election into chaos.

“The software is among the first significant efforts to arm reporters and campaigns with tools to combat deep fakes. It faces formidable hurdles — both technical and political — and the developers say there’s no time to waste.

“‘We have to get serious about this,’ said Hany Farid, a computer science professor at UC Berkeley working with a San Francisco nonprofit called the AI Foundation to confront the threat of deep fakes… ‘Given what we have already seen with interference, it does not take a stretch of imagination to see how easy it would be,’ he added. ‘There is real power in video imagery.’

“The worry that has gripped artificial intelligence innovators is of a fake video surfacing days before a major election that could throw a race into turmoil. Perhaps it would be grainy footage purporting to show President Trump plotting to enrich himself off the presidency or Joe Biden hatching a deal with industry lobbyists or Sen. Elizabeth Warren mocking Native Americans.

“The concern goes far beyond the small community of scientists… ‘Not even six months ago this was something available only to people with some level of sophistication,’ said Lindsay Gorman, a fellow at the Alliance for Securing Democracy, a bipartisan think tank. Now the software to make convincing fakes is ‘available to almost everyone,’ she said.

“‘The deep-fakes problem is expanding. There is no reason to think they won’t be used in this election.’… Facebook has launched its own initiative to speed up development of technology to spot doctored videos, and it is grappling over whether to remove or label deep-fake propaganda when it emerges. Google has also been working with academics to generate troves of audio and video — real and fake — that can be used in the fight.”

Software exists that analyzes a speaker’s characteristic patterns of speech and movement and can assess whether a video is sufficiently congruous with those patterns to be authentic, but as artificial intelligence fine-tunes those videos, will that analytical tool still work? California has a new law, taking effect on January 1st, that makes it illegal to pass off the people depicted in these deep fakes as real, but can the statute even survive a First Amendment challenge? And even the least sophisticated manipulation, like slowing down a recording, can make a speaker appear inebriated… as in the recent viral video of House Speaker Nancy Pelosi. But then there is the money factor.
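To make the idea concrete, here is a minimal sketch, in Python with numpy, of the kind of consistency check such software performs. Everything in it is illustrative: the landmark arrays stand in for the output of some face-tracking step, and the simple z-score test stands in for the far more sophisticated statistical models real forensic tools use. Even the crude frame-doubling used below to simulate a “slowed” clip shifts the motion statistics enough to be flagged:

import numpy as np

def motion_profile(landmarks):
    """Mean and spread of frame-to-frame facial-landmark movement.

    landmarks: array of shape (frames, points, 2) holding the (x, y)
    positions of tracked facial points across a clip.
    """
    deltas = np.diff(landmarks, axis=0)                    # movement between frames
    speeds = np.linalg.norm(deltas, axis=2).mean(axis=1)   # avg point speed per frame
    return speeds.mean(), speeds.std()

def consistent_with_reference(reference, candidate, tolerance=3.0):
    """Flag clips whose motion statistics fall far outside the reference profile."""
    ref_mean, ref_std = motion_profile(reference)
    cand_mean, _ = motion_profile(candidate)
    z_score = abs(cand_mean - ref_mean) / max(ref_std, 1e-9)
    return z_score < tolerance

# Synthetic demonstration: doubling every frame halves apparent motion,
# mimicking a slowed-down recording, and the check catches it.
rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(300, 68, 2)).cumsum(axis=0)
slowed = np.repeat(genuine, 2, axis=0)

print(consistent_with_reference(genuine, genuine))   # True
print(consistent_with_reference(genuine, slowed))    # False

The catch, of course, is that this is exactly the signal a generative model can learn to imitate… hence the perpetual cat-and-mouse game.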

The driving force behind the monetization of social media is obviously “traffic.” But traffic is often driven by anger and outrage: “Digital platforms try to engage users with their services for as long and as intensively as possible. This lets them sell ads and gather personal data, which then generate more value. It turns out that lies generate outrage and fear and these emotions generate engagement. So as long as a platform’s financial returns align with outrage, it is optimized for information rubbish. It’s difficult to stop the dissemination of bad information, consistent with free speech values. But what we can do is check the dominance of platforms that profit from misinformation and empower users to defend against it.

“Political advertisers — like pretty much all advertisers — have to buy from Facebook. The ads they run are not like broadcast TV and radio ads. Rather, they can be micro-targeted to very small segments of the public, often those most susceptible to conspiracy theories or fearmongering. These platforms take advantage of what Jonathan Albright has called ‘data-driven “psyops”’ that can ‘tailor people’s opinions, emotional reactions, and create “viral” sharing.’” Rutgers University Professor Ellen P. Goodman and Karen Kornbluh, director of the Digital Innovation and Democracy Institute at the German Marshall Fund, writing for the November 10th Los Angeles Times. In short, fake news and conspiracy theories are good for business. Do social media conglomerates even want truth, even want to filter out fakes? Well over half of all Americans get some or all of their “news” from social media.
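That alignment of revenue with outrage can be captured in a few lines. The following toy ranking, again in Python, is purely hypothetical (the posts, the “outrage” scores and the engagement formula are invented for illustration, not drawn from any real platform), but it shows why nothing in an engagement-maximizing objective penalizes misinformation:

posts = [
    {"title": "City budget report released", "outrage": 0.1, "accuracy": 0.95},
    {"title": "Candidate X caught in scandal?!", "outrage": 0.9, "accuracy": 0.20},
    {"title": "Fact-check: that viral clip was doctored", "outrage": 0.3, "accuracy": 0.90},
]

def predicted_engagement(post):
    # Assumed relationship for this sketch: engagement rises with
    # outrage and is indifferent to accuracy.
    return 0.2 + 0.8 * post["outrage"]

# Rank the feed the way an engagement-maximizing platform would.
for post in sorted(posts, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post['title']}")

The least accurate, most outrage-laden post ranks first… and as long as ad impressions scale with engagement, that is where the money is.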

So, exactly who gets to be the gatekeeper? Facebook? Google? A governmental agency staffed with political appointees? What if such a government agency is granted that power… and an incumbent Congress refuses to fund it? Whom do you trust? Is a gatekeeper even viable? Given the proliferation of conspiracy theories against various political factions, does credibility expand to “acceptable as if true” when backed by such seemingly solid “evidence”? How does a voter reach a reasoned opinion when the basis of that opinion rests entirely on deep fakes?

I’m Peter Dekom, and this technology, malevolently applied, just might be a challenge that makes true democracy impossible… and brings up some particularly horrible thoughts about the alternatives.
