“These [data privacy] issues are of urgent societal importance… The data market, which lies at the core of the internet economy, will not function efficiently absent policy interventions. Yet, our rules, regulations, and understanding of the market haven’t evolved much over the last 20 years. Our work provides a conceptual framework to help policymakers think through these issues, which are extremely complex and will require a range of interventions.”
Dirk Bergemann, Yale Professor of Economics
You’d think, with the tsunami of global privacy legislation flowing everywhere – from the European Union’s GDPR (General Data Protection Regulation) to US state statutes like the California Consumer Privacy Act – that we would be protected from all manner of online privacy intrusions, whether from hackers (we know that’s not happening) or from large, mainstream corporations. Ask anyone Millennial-aged or younger, and they will tell you that the notion of privacy left the building years ago.
Data by itself is a strange beast, and too much of it becomes unmanageable. Enter artificial intelligence (AI), which can, without continuous human command and control, spot and analyze patterns within groups and even within individual behavior. Think of when you send an email or text to a friend or make a social media post, and suddenly ads pop up on your screen, along with emails and texts suggesting purchases that relate to what you thought were private postings and communications.
Here’s a big reveal: the big commercial enterprises don’t even need your personal data anymore. Those AI programs are happy to find the overall groupings you might fall into based on little things, like Amazon or eBay purchases. And those groupings are so detailed, and people within those groups behave so predictably, that there may be no further need to analyze what you do individually. Gotcha!
Remember, there is a lucrative trade in data. In addition to the revenues generated in consumer transactions, these behemoths also generate massive amounts of metadata from just about everything you do. That metadata has economic value in and of itself; it is commoditized, bought and sold. But good metadata can get very pricey, so these companies seem to have found shortcuts under which buying data from constant individual monitoring is no longer a justifiable expense.
These trends are so fascinating that they
are the subject of many academic studies. Writing in YaleNews on October 15th, Mike Cummings explores a recent approach from some high-profile Yale
economists: “Shopping online. Searching the
internet. Posting to social media. These ordinary activities allow Amazon,
Google, Facebook and other digital behemoths to amass unprecedented amounts of
people’s data while offering them little, if any, compensation in return.
“In a new paper, ‘The
Economics of Social Data,’ Yale
economists offer insight into why big tech companies can gobble up so much
personal information at such a small price. It turns out that any one person’s
data aren’t worth that much to Google et al. because the companies already know
so much about our behavior from the data of people who resemble us, the
researchers assert.
“By collecting data from every email,
purchase, post, or search on their platforms, the tech giants have amassed
valuable information about entire groups of people who share similar
characteristics, the researchers explained. The social nature of this data
allows the companies to make statistical inferences that predict a wide range
of behaviors, such as where people travel, how they live, and the kinds of
products they prefer, according to the study.
“‘When a shopper is looking for a book on Amazon, the company already knows a lot about their preferences based on data from people who share similar characteristics,’ said Dirk
Bergemann, the Douglass and Marion Campbell
Professor of Economics in the Faculty of Arts and Sciences and a co-author of
the study. ‘The additional information that the shopper provides companies is
not that valuable, therefore they don’t need to compensate for the additional
data. This creates an imbalance that favors the companies over individual
consumers.’
“Bergemann and his co-authors — MIT economist
Alessandro Bonatti and Tan
Gan, a graduate student in Yale’s
economics department — show that the data’s social nature generates ‘social
externalities’ that give the big digital companies an advantage in obtaining
data and potentially threaten people’s privacy. ‘Externalities’ are side
effects of economic activity that can produce positive or negative outcomes.
For example, bees kept for honey might pollinate nearby crops, which is a
positive externality. Increased air pollution is a negative externality caused
by people driving to work.
“Whenever someone posts on Facebook it can
produce a social externality, Bergemann said. A post about, say, the purchase
of a fashionable new product conveys information that Facebook can use to predict
the likelihood that people in the same or similar social networks will buy that
product. While consumers might derive some value from this externality through
the company’s targeted advertising, Facebook reaps a far greater benefit from
it in increased sales of digital advertising. Moreover, few regulations are in
place to prevent firms from trading on this social data in ways that harm
consumers, he said.
“Regulators have tried to level the playing
field between consumers and big tech companies by attempting to secure
consumers’ control over their information. This, they hope, will lead to people
being compensated for the data they choose to share. But this approach is
insufficient to protect privacy, the researchers argue, because, due to the
externalities of social data, people relinquish control of their information to
big-tech firms for little to no compensation…
“In addition to giving people control of their
data, policymakers should seek to make stored data easily transferrable and
removable so that consumers can regain control of their data, said Bergemann.
They also should attempt to increase competition among tech firms for people’s
information, he noted.
“‘The big digital firms have near monopolies on the
personal data,’ he said. ‘They profit from gathering massive amounts of data
while consumers gain very little from handing over their information. People
would experience greater benefits if the companies were in direct competition
for the personal data.’”

The notion of “individuality” seems to fall by the wayside. Once you are in a group, your behavior is assumed to be lemming-like in its predictability. The digital universe is flying at warp speed, trailed by an old-world, piston-powered, propeller-driven legislative airplane that lags years behind.
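Here is a minimal back-of-the-envelope sketch (mine, not the model in “The Economics of Social Data”) of the intuition above, written in Python with numpy. Once a platform has observed thousands of people who resemble a new shopper, its group-level estimate of that shopper’s tastes is already quite good, and the shopper’s own data only nudges the prediction. Every parameter here (group_mean, sigma_user, sigma_obs, the peer counts) is a made-up number for illustration.

import numpy as np

# Illustrative toy only: preferences are a group mean plus individual noise,
# and the platform sees noisy signals (purchases, clicks) of those preferences.
rng = np.random.default_rng(0)
group_mean = 2.0      # shared taste within one demographic "group" (assumed)
sigma_user = 0.5      # how much individuals deviate from their group (assumed)
sigma_obs = 1.0       # noise in each observed signal (assumed)

# Peers the platform already tracks pin down the group mean very precisely.
n_peers, obs_per_peer = 10_000, 20
peer_prefs = group_mean + sigma_user * rng.standard_normal(n_peers)
peer_obs = peer_prefs[:, None] + sigma_obs * rng.standard_normal((n_peers, obs_per_peer))
group_estimate = peer_obs.mean()

# A new shopper arrives with only a handful of their own signals.
true_pref = group_mean + sigma_user * rng.standard_normal()
own_obs = true_pref + sigma_obs * rng.standard_normal(5)

# Prediction from group data alone vs. a standard shrinkage blend of the
# group estimate with the shopper's own data.
k = len(own_obs)
w = (k / sigma_obs**2) / (k / sigma_obs**2 + 1 / sigma_user**2)
pred_group_only = group_estimate
pred_with_own = (1 - w) * group_estimate + w * own_obs.mean()

print(f"error using group data only:         {abs(pred_group_only - true_pref):.3f}")
print(f"error adding the shopper's own data: {abs(pred_with_own - true_pref):.3f}")

The gap between those two errors is the marginal value of the shopper’s own data in this toy setup; the smaller an individual’s deviation from their group, the narrower that gap, which is the economists’ point about why platforms need not pay much for it.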
“A ballot initiative in California (Proposition 24) gives consumers opt-in and opt-out rights for their online activities.
“If
this proposal is meant to give people more privacy and more rights over how
their data are used, why is it opposed by the American Civil Liberties Union
and the Consumer Federation of California? And if it’s meant to stop online
businesses from making money by exploiting personal data, why aren’t internet
companies lining up to try to kill it?
“A quick read of the measure itself proves
impossible. Proposition 24 clocks in at 52 pages of dense technical language
concerning the intricacies of online data collection, as intelligible to a
layperson as the user manual of an aircraft carrier… In broad strokes, the 2019
consumer privacy law gave Californians the right to know what data companies
collect on them, the right to get the data deleted and the right to tell
companies not to package and sell the data to other companies…
“[But here’s the reason why Internet
companies might like this proposed law:] Some groups that initially backed the
California Consumer Privacy Act wanted the new measure to come back stronger
than the original attempt: If you’re going to put a new law in front of voters,
at a time when there’s a public awareness of the need for privacy, why not make
it opt-in, rather than opt-out, and include a private right of action to let
the people of California do their own enforcement?
“The ACLU of Northern California, which has
spearheaded a number of privacy initiatives, came out strongly against
Proposition 24. ‘We believe that there should be an opt-in framework for
collection and use of people’s personal information,’ said Jacob Snow, an
attorney with the group, ‘and we believe there should be strong enforceable
rights backed up by a private right of action, and we have fought for them in
the past.’… Adam Schwartz, staff attorney with the digital rights group
Electronic Frontier Foundation, said that he sees Proposition 24 as ‘a mixed
bag of partial steps backward and partial steps forward, and a lot of missed
opportunities.’” Los Angeles Times, October 16th.
Not quite enough – but perhaps enough to slow the train toward more comprehensive, genuine privacy rights to a crawl. And believe me, this is only the beginning. Those Web-driven mega-corporations make a lot of campaign contributions and pay a lot of lobbyists. Better than a sharp stick in the eye?
I’m Peter Dekom, and the more you know about this “invasive species of digital intrusion,” the better equipped you are to vote for political representatives who reflect your privacy concerns.