Tuesday, January 12, 2021

Should We Section 8 Section 230?

While traditional media bear responsibility for everything disseminated on their old-world platforms, albeit with a little more leeway than the general public under the First Amendment (“free press” and “free speech”), social media websites seem to enjoy a virtually blank check to avoid responsibility for what appears on their pages. But then, traditional media are not vehicles for literally billions of pieces of user-generated content, and they were already well-established businesses that did not seem to need a congressional boost.

Traditional media face their own limits against knowingly and maliciously defaming or invading privacy under US Supreme Court decisions: New York Times Co. v. Sullivan [1964 – defamation] and Time, Inc. v. Hill [1967 – privacy]. But Facebook and Twitter enjoy a sweeping exemption under Section 230 of the Communications Decency Act of 1996, granted primarily because of the mass of third-party content they did not originate. The litany of user-generated content has included malicious lies pushing conspiracy theories as fact, pornography (even involving children), copyright infringement and gory violence that would make a normal stomach churn.

The sheer volume of content made policing these social media sites a humongous task. Under pressure from Congress, major social media sites began moving beyond basic “take-down notices” for certain obvious offenders (particularly explicit sex, ultra-violence and copyright infringement) to address toxic lies that were helping to spread coronavirus infections and encouraging violence from extremists. Some toxic posters were denied access to the sites entirely. For high-profile but consistent violators, most notably the President of the United States, postings were often accompanied by a formal notice flagging the inaccuracy.

Politically, both parties are unhappy about the Section 230 exemption and the seemingly unbridled power of social media sites, particularly Facebook, Twitter and Google. Their CEOs were repeatedly called to congressional hearings, where they were excoriated by representatives of both political parties. Democrats felt there was inadequate protection against the toxicity of conspiracy theorists; Republicans railed that conservative voices were effectively being stifled by these social media behemoths.

The President vetoed the $741 billion defense bill in material part because it did not contain a provision repealing Section 230, a repeal he believed was necessary after the platforms had begun to question and restrict his postings. Even Joe Biden seems to favor the repeal of Section 230. The issue is not simply whether to repeal the Section but whether some sort of replacement is needed. A flat repeal could generate chaos and a tsunami of unintended consequences. There is no perfect solution, but as artificial intelligence increases in sophistication and deployment, there are paths to making this “wild, wild west” vastly more acceptable, creating recourse for those injured by the malignancy of postings from hidden sources.

Writing for FastCompany.com on January 6th, Shuman Ghosemajumder – Global Head of Artificial Intelligence at F5, previously CTO of Shape Security and Global Head of Product for Trust and Safety at Google – explains what is and what might be: “So is it possible to abolish Section 230? Would that be a good idea? Doing so would certainly have immediate consequences, since from a purely technical standpoint, it’s not really feasible for social media platforms to operate in their present manner without some form of Section 230 protection. Platforms cannot do a perfect job of policing user-generated content because of the sheer volume of content there is to analyze: YouTube alone gets more than 500 hours of new videos uploaded every minute.

“The major platforms use a combination of automated tools and human teams to analyze uploads and posts, and flag and mediate millions of pieces of problematic content every day. But these systems and processes cannot just linearly scale up. You can see extremely large-scale copyright violation detection and takedowns, for example, but it’s also easy to find pirated full-length movies that have stayed up on platforms for months or years.

“There is a huge difference between these systems being pretty good and being perfect—or even just good enough for platforms to take broad legal responsibility for all content. It’s not a question of tuning algorithms and adding people. Tech companies need different technology and approaches.

“But there are ways to improve Section 230 that could make many parties happier… One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud.

“Only a few platforms currently use the best available technology to police their content, for a variety of reasons. But even holding platforms accountable to common minimum standards would advance industry practices. There is language in Section 230 right now relating to the obligation to restrict obscene content, which only requires companies to act ‘in good faith.’ Such language could be strengthened along these lines.

“Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. The idea that social media platforms profit from content which should not be allowable in the first place is one of the things most parties find objectionable, and this would address that concern to some extent. It would be similar in spirit to the greater scrutiny which is already applied to advertiser-submitted content on each of these networks. (In general, ads are not displayed unless they go through content review processes which have been carefully tuned to block any ads that violate the network’s policies.)” 
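
To make those proposals concrete – automated screening backed by human review, and ads shown only next to content a platform has actually vetted – here is a minimal sketch in Python. It is emphatically not any platform’s real system; the classifier stand-in, the thresholds and every name here are hypothetical.

    # Illustrative only: automated screening plus human review, with ads
    # gated on content that has passed review. All names and thresholds
    # are hypothetical; the "classifier" is a trivial stand-in.
    from dataclasses import dataclass
    from typing import List

    BLOCK_THRESHOLD = 0.95   # hypothetical: near-certain violation, auto-remove
    REVIEW_THRESHOLD = 0.60  # hypothetical: uncertain, route to human reviewers

    @dataclass
    class Post:
        text: str
        status: str = "pending"    # pending -> blocked | needs_review | approved
        monetizable: bool = False  # ads run only after content passes review

    def violation_score(post: Post) -> float:
        """Stand-in for an ML classifier estimating the odds of a policy
        violation; a real system would combine many models and signals."""
        banned = ["example-banned-phrase"]  # hypothetical signal
        return 0.99 if any(b in post.text for b in banned) else 0.10

    def moderate(post: Post, review_queue: List[Post]) -> None:
        score = violation_score(post)
        if score >= BLOCK_THRESHOLD:
            post.status = "blocked"       # automated takedown at scale
        elif score >= REVIEW_THRESHOLD:
            post.status = "needs_review"  # humans resolve the gray area
            review_queue.append(post)
        else:
            post.status = "approved"
            post.monetizable = True       # vetted content may carry ads

    queue: List[Post] = []
    post = Post("an ordinary user post")
    moderate(post, queue)
    print(post.status, post.monetizable)  # approved True

The two thresholds capture the gray area Ghosemajumder describes: automation handles the near-certain cases at a volume humans never could, humans resolve what the models cannot, and only content that clears review becomes eligible to carry advertising.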

It would seem that there is sufficient middle ground where Republicans and Democrats can agree, if they would simply rise above their vituperative rhetoric. It is high time to bring a now-outdated law from 1996 – passed well before the explosion of social media and the serious deployment of artificial intelligence – into the 21st century.

I’m Peter Dekom, and common sense – if that notion still exists – should lead Congress to a viable, if imperfect, curb on the dissemination of lies and personal attacks that truly destabilize our entire system of government.
