We know that through the application of self-learning artificial intelligence, social media messaging can be tailored automatically to cater to the biases and preferences of just about any recipient. Harvesting key words, tracking the websites people visit, gathering personal and demographic information from any number of sites, some perhaps hacked, and linking it all back to a specific individual allows the construction of a rather shocking database on that individual.
If there is a predisposition to conspiracy theories, an inherent racial or ethnic bias, a medical issue, or a known stance on political issues including party affiliation, all that information can be used to construct targeted messaging intended to produce a particular result: outrage and anger against the political foe of the purveyor of this information; a feeling of political impotence that might keep that opponent from even casting a ballot; or a sympathetic eye and ear willing to spread the underlying disinformation to others under their own signature, adding their personal credibility to a false statement.
It is only going to get worse, even if “bot-driven” messaging is eventually required to be disclosed or banned altogether (raising First Amendment issues). Perpetrators simply ignore such restrictions, knowing that they can always assert the First Amendment against any attempt to curtail their manipulative and usually fake messaging. Despite near-universal awareness that not everything communicated over the internet is true, it is alarming to watch how many gullible people remain convinced that it is, especially older users who were not raised with the skepticism that younger users have.
But if you think that’s bad, artificial intelligence can take random recordings of an individual’s voice and use those intonations and that accent to create believable soundalike statements that are complete fabrications, literally putting words into someone else’s mouth. It gets worse still: today’s sophisticated AI programs can take images and video recordings of real people and, with that information, create very credible audio-visual footage showing that individual, “facing the camera” with accurate lip and mouth movement, uttering words that they never said. Video can even be slowed down to make a speaker appear inebriated (Nancy Pelosi found that out the hard way). Yeah, well, you say, that requires a pretty sophisticated computer, major file-server storage capacity and state-of-the-art AI. You mean like the Russians and the Chinese have?
Or perhaps just an app that anyone can use? Like the currently popular Foto Face Swap app that works with most smartphone cameras. The publisher tells us: “Foto
Face Swap lets you interchange faces in any picture. Finally, an easy way to
swap faces in any picture. Just select the image where you want to switch the
faces, and the picture with the face you want to insert. FotoFaceSwap guides
you throughout the process. Enlarge, reduce or rotate the faces. Modify the
colors to fit the background. And add any text to your composition. Then save,
email or print your new picture. Have fun with your friends, enemies and even
with celebrities.” Oh, and this is just one of several such ubiquitous smartphone apps, like the one that works a little too well: Zao, a Chinese app that is particularly adept at making “deepfakes.”
As the Los
Angeles Times tells us (September 3rd), “Zao’s smooth and quick integration of faces
into videos and internet memes is what makes it stand out… Chinese face-swap
app Zao rocketed to the top of app store charts over the weekend, but user
delight at the prospect of becoming instant superstars quickly turned sour as
privacy implications began to sink in.
“Launched recently, Zao is currently topping
the free download chart on China’s iOS store. Its popularity has also pushed
another face-swap app, Yanji, to fifth place on the list… Users of the app
upload a photo of themselves to drop their likeness into popular scenes from
hundreds of movies or TV shows. It’s a chance to be the star and swap places in
a matter of moments with the likes of Marilyn Monroe, Leonardo DiCaprio or Jim
Parsons as Sheldon Cooper on ‘The Big Bang Theory.’
“The photo uploads have proved problematic,
however. Users can provide an existing photo or, following on-screen prompts,
create a series of photos in which they blink their eyes and open their mouths
to help create a more realistic ‘deepfake.’” So problematic that consumer watchdogs and privacy advocates, and more than a few regular users, forced Zao to modify its terms of usage, which in turn decimated the value of the app.
“An earlier version of Zao’s user agreement
stated that the app had ‘free, irrevocable, permanent, transferable, and
relicense-able’ rights to all this user-generated content… Zao has since
updated its terms — the app now says it won’t use head shots or mini videos
uploaded by users for purposes other than to improve the app or things
preapproved by users. If users delete the content they upload, the app will
erase it from its servers as well.
“But the reaction has not been quick enough.
Zao has been deluged by a wave of negative reviews. Its App Store rating now
stands at 1.9 stars out of five after more than 4,000 reviews. Many users
complained about the privacy issue…. ‘We understand the concern about privacy.
We’ve received the feedback, and will fix the issues that we didn’t take into
consideration, which will need a bit of time,’ a statement posted to Zao’s
account on social media platform Weibo said.
“On Monday [9/2], the China E-Commerce
Research Center urged authorities to look into the matter… The app ‘violates
certain laws and standards set by the nation and the industry,’ the research
house said in a statement, citing Wang Zheng of the Taihang Law Firm.” LA Times. So what? That such a ubiquitous app, readily available today from several sources, can create such believable fake videos is the headline. The technology is only getting more robust; it will only get more realistic… not to mention that truly sophisticated techies can already easily exceed anything that Zao can do.
The First Amendment was never designed to take
such technology into consideration. But that essential element in our Bill of
Rights also defines the essence of democracy. The proliferation of simple fake
news was and is pretty nasty, but the next generations of “fakeness” could
easily undermine the entire fabric of an open and free society.
I’m Peter Dekom, and you wonder if democracy can survive the available fake-news-generating technology… or, if we deploy filtration and editing functions, whether those writing the filtering software would effectively become the autocrats running what used to be democracies.