Friday, December 30, 2022

Does TikTok Really Equal Teenage Candy Laced with Razor Blades?

The feds – including the Senate Homeland Security and Governmental Affairs Committee – are investigating TikTok as a national security threat: a direct user data pipeline to the app’s owner, ByteDance, in the People’s Republic of China, and hence to the Chinese government itself. The Center for Countering Digital Hate (CCDH) is drilling down on how the Website has become a search engine for vulnerable teens, especially young girls struggling with their body image and feeling the brunt of peer denigration and judgment. TikTok offers de facto lessons teaching ordinary teen and tween girls (theoretically, the minimum user age is 13) that they simply don’t measure up to a variety of bizarre standards… and what they can do about it. These range from “how-to” lessons on anorexia and self-punishment by self-inflicted cuts… to suicide. Various states have taken to banning the service itself, but enforcement is all but impossible at that level.

“Deadly by Design” is a 48-page CCDH report (released December 15th) that outlines TikTok’s extensive reach and the unmistakable toxic dangers of this Website, a reach that has helped push ByteDance’s current valuation to a staggering $300 billion. Clint Rainey, writing for FastCompany.com on December 15th, summarizes the highlights of that report:

“Today, two-thirds of American teenagers are on TikTok, for an average of 80 minutes per day. CCDH says standard accounts were exposed to more pro-eating disorder or self-harm content every 3.5 minutes—that is, 23 daily exposures for the average TikToker. Vulnerable accounts were shown 12 times more harmful content, however. While standard accounts saw a total of six pro-suicide videos (or 1.5 apiece, spread over 30 minutes), the vulnerable accounts were bombarded with another every 97 seconds…

“Every 39 seconds, TikTok served the harmful content to the ‘standard’ fake accounts created as controls [in an online test by the CCDH]. But the group also created ‘vulnerable’ fake accounts—ones that indicated a desire to lose weight. CCDH says this group was exposed to harmful content every 27 seconds… [S]peaking to press, CCDH chief executive Imran Ahmed called TikTok’s recommendations ‘the social media equivalent of razor blades in candy—a beautiful package, but absolutely lethal content presented within minutes to users.’…

“CCDH’s definition of ‘harmful content’—versus pro-ED or pro-self-harm—could admittedly make the group some enemies. It lumped educational and recovery material in with negative content. Its researchers say that in many cases they couldn’t determine the intent of videos, but the group’s bigger argument is that even positive content can cause distress, and there’s no way to predict this effect on individuals: That is why trigger warnings were invented.

“Like all of the major social platforms, TikTok—which last year hit one billion monthly active users, the largest proportion of whom haven’t reached drinking age—has enacted policies for years that are supposed to eliminate harmful content. User guidelines ban pro-ED and pro-self-harm content, a policy enforced by a tag-team of AI and human moderators. Nevertheless, activists still accuse TikTok of doing too little to halt its proliferation. Recently, the company has stepped up efforts in response. Searches for glaring hashtags (#eatingdisorder, #anorexia, #proana, #thinspo) now redirect to a National Eating Disorder Association helpline. Teenagers reportedly merit ‘higher default standards for user privacy and safety,’ and policies have focused on them; they aren’t supposed to see ads for fasting or weight-loss cures anymore, for instance.” Good luck with that. Reality is obviously quite different.

Web-based platforms that disseminate third-party content are surrounded by legal pitfalls, restrictions and quicksand. Section 230 of the federal Communications Decency Act is at the center of the quagmire. It creates a “safe harbor” shielding purportedly neutral platforms from liability for such third-party content, while still allowing them to filter content for copyright infringements, criminal activity, incitement to violence, clearly dangerous or hateful postings and harms to individual users, particularly children.

While both the GOP and the Democrats want to amend Section 230, they approach it from polar opposite positions. Republicans want to protect children, but otherwise they favor Elon Musk’s “say almost anything” approach to Twitter, in which even obvious dis- or misinformation is permitted, including claims that vaccines are dangerous (even the wildly successful mRNA inoculations), QAnon theories, the “Big Steal” and replacement theories: positions that stir up the base and justify right-wing militias taking action. Democrats want these platforms to be held responsible for monitoring and removing dangerous dis- and misinformation.

While Europe, where many of these Websites are equally popular, is clamping down on these abusive uses of social media, sites in the United States hide behind the First Amendment, unwilling to accept that the harm they are causing very much mirrors an unjustified cry of “fire” in a crowded theater. As cited in Congressional testimony, controversy and catering to those most likely to be addicted to toxic messaging create Web traffic, which makes a site that much more valuable to advertisers trying to reach hordes of targeted viewers. Money.

I’m Peter Dekom, and it seems to be a rule that wherever there is a place to hide under US law, money always trumps what is right.
