The United States just might be brought to its knees by the “Big Lie” and by the reactions of unpatriotic “patriots” wrapping themselves in a vile pseudo-constitutional mantle of crass manipulation, horrific distortion, threatened and actual violent vigilantism, and simpering red-state legislation catering to this lowest common denominator. Foreign powers dance through our social media, planting the seeds of destruction within our democratic institutions; conspiracy theorists draw large swaths of emotionally vulnerable Americans into inane suppositions and explanations; many of us face a miserable disease and possible death-by-disinformation; and the believers of these unfortunate untruths now form some of the largest political constituencies we have ever witnessed. Racism soars, violence against minorities finds “justification” in mendacity and bigotry, and our already horrifically polarized American body politic just might fracture into a genuine civil war. Were democracy and our First Amendment designed to withstand this onslaught? What can be done to revive America and allow it to move on?
Writing for FastCompany.com on April 8th, Avi Tuschman, an author, Stanford StartX entrepreneur, and pioneer in the commercialization of Psychometric Artificial Intelligence, puts this all into a simple perspective: “What has gone wrong? Google, YouTube, and Facebook are the world’s top three websites outside of China. Their status is not an accident: They organize the world’s information exceptionally well according to popularity-driven algorithms. Despite these platforms’ benefits, popularity has an uncomfortable relationship with factuality. Systems that optimize for viral content quickly spread unreliable information. Over a quarter of the most viewed English-language coronavirus YouTube videos contain misinformation. On Twitter, MIT scholars have calculated that fake news travels six times faster than true stories.” Thus, a system that magnifies and disseminates content based on popularity statistically favors disinformation!
Tuschman continues: “The most dystopian feature of our time is not that we face formidable challenges; in a different era we might have had enough shared beliefs to navigate a bloodless presidential transition, vaccine hesitancy, racial tension, and even climate change. Today, however, our body politic suffers from an informational infection that hinders our ability to adequately respond to these serious threats.
“Enough misinformation has been injected into social media channels to keep our society divided, distrustful, and hamstrung. AI-powered recommendations of user-generated content have exacerbated political polarization. And foreign governments have expertly manipulated these algorithms to interfere in the 2016 and 2020 U.S. elections. Russia-sponsored disinformation on YouTube has continued to generate billions of views beyond election years and fomented conspiracy movements the FBI has deemed a domestic terror threat.
“Social media is also the primary vector of the COVID-19 ‘infodemic,’ which contributes to the 60 percent of deaths conservatively determined to be avoidable. The head of the WHO has warned that ‘fake news spreads faster and more easily than this virus, and is just as dangerous.’”
The biggest legitimizing accelerant of this malevolent trend has been Donald Trump and his elected GOP enablers, willing to sacrifice democracy to extend their otherwise fading power for at least a few more years… if the country survives their efforts. Their entire approach is to label truth as a threat and to pose as the fixers of a problem they themselves created. If there weren’t so many followers, and if our political system were not skewed to favor rural over urban voters by 1.8 to 1 (remember that California, with 39 million people, has the same two US Senators as does Wyoming, with about 600 thousand people), we would be “just going through a phase that will end soon.” The new populist GOP is counting on continuing that “phase” for as long as it can. Can truth negate this tsunami of disinformation?
Tuschman: “How can we improve our information systems to save lives? As NYC Media Lab’s Steve Rosenbaum has pointed out, neither the tech platforms nor governments can be trusted fully to regulate the internet, ‘So it’s like we want this magical entity that isn’t the government, that isn’t Facebook or YouTube or Twitter.’… Rosenbaum is absolutely correct: solving the misinformation crisis requires a ‘magical’ third entity that lacks any incentive to manipulate information for economic or political ends. However, it is the tech companies that could build a sufficiently fast and scalable system for distinguishing facts from falsehoods. Such a solution is not only theoretical—its key components are already well developed and proven.
“Among the world’s top websites there is one exceptional case that has not evolved to sort content by popularity. The fifth-largest website outside of China organizes the world’s information according to reliably documented facts. It ranks higher than Amazon, Netflix, and Instagram. This website is Wikipedia… But how accurate is it, really? In 2005, a blind study in Nature concluded that Wikipedia had no more serious errors than the Encyclopædia Britannica. In 2007, a German journal replicated these results with respect to Bertelsmann Enzyklopädie and Encarta. By 2013, Wikipedia had become the most-viewed medical resource in the world, with 155,000 articles written in over 255 languages, and 4.88 billion page views that year. Between 50% and 70% of physicians and over 90% of medical students use Wikipedia as a source for health information.
“Today, Wikipedia is cited in federal court documents and is relied upon by Apple’s Siri and Amazon’s Alexa. Google draws heavily from Wikipedia, providing excerpts for their search engine’s popular Knowledge Panel. Wikipedia’s handling of COVID-19 was described in The Washington Post as ‘a ray of hope in a sea of pollution.’…
“Here’s how it could work: A tiny percentage of social media content contains viral misinformation deleterious to public health. Tech companies should start by implementing policies to make such content eligible for open-source fact-checking. The platforms could use several mechanisms to pass suspect content to a distributed review process. Here, fact-checking users would utilize the same open-source software and mechanisms that have successfully evolved on Wikipedia to adjudicate verifiability. The ‘visible process’ of fact-checking would occur on a MediaWiki, ideally governed by a multi-stakeholder organization. The facts themselves—the ‘ground truth’—would be English-language Wikipedia text, from articles that meet minimum authorship and editorship thresholds.
“A massive third-party workforce—social media users—is already available to power this solution. Wikipedia demonstrates that millions of volunteers will check facts without monetary compensation. In fact, research shows that people are wired to punish moral transgressions in exchange for only the resulting dopamine stimulation to the brain. Indeed, altruistic punishment already constitutes a significant proportion of social media activity today. Tech platforms need only harness this instinct to clean up harmful misinformation.
“Further research on fact-checker behavior could help clarify the required scale of a user-powered content moderation mechanism. Some Wikipedia editors might not want to work for the ‘benefit’ of a for-profit company. However, social media fact-checkers will likely come from a far larger pool of people. There are precedents for crowd-sourced work contributing to large tech companies. For example, Local Guides enrich Google Maps with a significant amount of information; this motivation loop works because they are not intrinsically motivated to work for Google, but to help friends and family.
“When recruiting fact-checkers, social media companies should convey two important points: (1) Fact-checking benefits the community by reducing misinformation; and (2) harmful content will be deranked and demonetized, reducing the profitability of bad content for both third-party creators and the tech platform… How quickly we adapt the most successful fact-checking technology to our popularity-maximizing social news platforms has immense ramifications.”
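The review flow Tuschman describes — viral content flagged for eligibility, routed to volunteer fact-checkers, then deranked or cleared by consensus — can be sketched as a minimal queue-and-quorum loop. This is purely illustrative: every name, threshold, and data structure below is a hypothetical assumption of mine, not part of Tuschman’s actual proposal.

```python
# Hypothetical sketch of the proposed pipeline: flag viral content,
# collect verdicts from distributed volunteer reviewers (who would check
# claims against Wikipedia-derived "ground truth"), then adjudicate.
# Thresholds and names are illustrative assumptions only.

from dataclasses import dataclass, field

VIRALITY_THRESHOLD = 10_000   # views before a post is eligible for review
CONSENSUS_QUORUM = 3          # independent verdicts required to adjudicate

@dataclass
class Post:
    text: str
    views: int
    # True = claim supported by the ground-truth corpus, False = contradicted
    verdicts: list = field(default_factory=list)

def eligible_for_review(post: Post) -> bool:
    """Only the tiny viral fraction of content enters the review queue."""
    return post.views >= VIRALITY_THRESHOLD

def record_verdict(post: Post, supported: bool) -> None:
    """One volunteer reviewer's finding, appended to the post's record."""
    post.verdicts.append(supported)

def adjudicate(post: Post) -> str:
    """After a quorum of reviewers weighs in, clear or derank the post."""
    if len(post.verdicts) < CONSENSUS_QUORUM:
        return "pending"
    supported = sum(post.verdicts)
    return "cleared" if supported > len(post.verdicts) / 2 else "deranked"

# Usage: a viral claim gathers three reviewer verdicts and is deranked.
claim = Post(text="example viral claim", views=50_000)
if eligible_for_review(claim):
    for verdict in (False, False, True):
        record_verdict(claim, verdict)
print(adjudicate(claim))  # -> deranked
```

The key design point this sketch mirrors is that moderation effort scales with virality, not with total volume: non-viral posts never enter the queue, so the volunteer workforce only ever sees the small slice of content doing outsized harm.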
But won’t all these lies and conspiracy theories simply die out in time, particularly since none of their predictions has been proven, sustained or come to pass? Perhaps, or perhaps time will bring only partial success; experience has shown a remarkable resilience in such beliefs, which seem to adapt to their own obvious contradictions. And it just might be a race against time. Will the country survive the distortions quickly enough, if at all?
I’m Peter Dekom, and in an era of incomprehensibly rapid change laced with existential threats, lies, mythology and conspiracy theories have become accepted palliatives to so many people that they just might not be reversible.