TikTok Lawsuit May Forever Change Social Media

A lawsuit recently brought against TikTok may end up altering the legal landscape for social media platforms operating in the U.S.

The lawsuit has its origins in the tragic death of a 10-year-old girl who lost her life while engaging in a trendy but extremely dangerous activity on TikTok.

In 2021, young Nylah Anderson was exposed to a viral meme in her TikTok feed. The video that appeared was called “The Blackout Challenge.”

Social media platforms are loaded with supposedly cool game-like challenges, many of which are relatively harmless. But this particular challenge was anything but low risk.

Devastatingly for Nylah and her family, the challenge urged participants to choke themselves until they lost consciousness. Nylah attempted it and tragically died in the process.

Her family filed a lawsuit against TikTok, but the trial court threw out the case, based on the traditional statutory protections enjoyed by social media platforms.

However, a federal appellate court came to a different conclusion. The court held that the lawsuit could go forward because of the manner in which TikTok used its technology, finding that the platform’s algorithm may have promoted the harmful content that led to a fatal outcome for the young girl.

The court’s decision stated the following: “While no one person at TikTok curates content for anyone’s feed, it is fair to call the algorithm the arbiter, and the algorithm is programmed by TikTok…”

Social media platforms such as TikTok, Facebook, Instagram, and X (formerly Twitter) have been protected by a 25-year-old law passed by Congress, which was intended to shield platforms that came into being during the internet’s infancy.

The early days of the internet featured platforms such as AOL, CompuServe, and Prodigy, which functioned as conduits that passively provided access to content, rather than actively influencing what would appear in users’ accounts.

Consequently, as part of the Communications Decency Act of 1996, protections were set up in order to shield these passive online services from liability for content that was posted by third parties.

For these early gateways to the web, revenue arrived in the form of subscription fees.

Today’s platforms have a completely different revenue model. Advertising and the sharing of user data are the primary sources of income.

The aim of modern social media companies is to acquire, and perhaps more importantly, to retain their users.

The complex and sophisticated algorithm is the tool that enables a company to keep those users consistently engaged.

TikTok’s “For You” page, Facebook’s feed, Instagram’s recommendations, and X’s “For You” page are controlled by algorithms that learn what an individual likes to view and, based on that knowledge of the person’s interests, pull content from other users into the individual’s feed.

In essence, not only do modern social media platforms provide access to content, but they curate what users see via pre-programmed algorithms.
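To make the distinction concrete, here is a deliberately simplified toy sketch (not TikTok’s actual system, whose algorithm is proprietary) of engagement-based curation: the platform builds a profile from what a user has already watched, then ranks candidate posts by how well they match that profile.

```python
# Toy illustration of algorithmic curation (hypothetical, not any
# platform's real code): infer interests from watch history, then
# rank candidate posts by topic overlap with those interests.

from collections import Counter

def build_profile(watch_history):
    """Count how often each topic appears in what the user watched."""
    return Counter(topic for post in watch_history for topic in post["topics"])

def curate_feed(candidates, profile, k=3):
    """Return the k candidate posts that best match the user's profile."""
    def score(post):
        return sum(profile[t] for t in post["topics"])
    return sorted(candidates, key=score, reverse=True)[:k]

history = [{"topics": ["dance"]}, {"topics": ["dance", "challenge"]}]
candidates = [
    {"id": 1, "topics": ["cooking"]},
    {"id": 2, "topics": ["challenge", "dance"]},
    {"id": 3, "topics": ["news"]},
]
profile = build_profile(history)
feed = curate_feed(candidates, profile, k=2)
# The "challenge"/"dance" post ranks first because it matches the profile.
```

The key point for the legal question is that the ranking is a choice made by the platform’s code, not by the users who posted the content.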

The TikTok lawsuit could have significant implications for all of the major modern social media companies, since they all use algorithms to curate content.

If Nylah’s family prevails in its lawsuit, the resulting precedent could mean an effective end to the legal protections under which social media concerns have been operating.

TikTok, Facebook, Instagram, X, and other platforms would then face a significant shift from the protections they have enjoyed under Section 230 of the Communications Decency Act.

In order to avoid future liability, modern social media platforms would be legally required to redesign their algorithms in such a way as to prevent the delivery of harmful content.

It may well be that the loss of a precious life will spell the beginning of the end of the outdated legal protections that social media platforms have been enjoying at the expense of the innocent.

The Digital Threat to Free Expression


Recently, in a series of unprecedented moves on the part of four major social media platforms, free expression was deliberately brought to a halt.

That the thwarting of free expression took place on the same day across all four platforms adds to the alarming nature of the action by the digital powers that be.

Alex Jones’s InfoWars content was banished from Facebook, Apple, YouTube, and Spotify. The move appears to have been a coordinated effort.

The removal of the content was evidently motivated by a desire to rid the platforms of supposed hate speech. However, the same platforms continue to display pages that have far more incendiary and/or offensive content than InfoWars posted.

Provocateur Jones’s site was a convenient target for tech companies to begin their purge of content that they subjectively deem undesirable.

However, tech giants have laid down a track record that indicates they cannot be trusted to maintain a fair venue for the marketplace of ideas.

Approximately 70 percent of the people within our country now obtain their news from Google and Facebook. Additionally, the major tech concerns have a virtual stranglehold on the manner in which billions of people around the globe communicate.

Truth be told, there has never been a more massive concentration of media power than that which is squarely in the hands of Google, Facebook, Apple, Twitter, and a smattering of other internet companies.

As digital companies go about the business of justifying censorship, many observers are looking to regulation for solutions.

Restraints on speech imposed by private companies are not prohibited by the First Amendment, and companies have no legal obligation to provide freedom of speech to their users. While internet companies were once fierce advocates of free expression, this is unfortunately no longer the case.

Being larger than many governments of countries throughout the world, the tech giants act in a quasi-governmental manner when they eliminate or limit speech within their internet province.

Some have proposed turning the big tech giants into public utilities. Others have urged breaking up the companies through the use of antitrust law, a logical idea considering that the major tech firms have essentially become monopolies with no significant competition, e.g., Google’s dominance of the internet video market and Facebook’s rule over the social media sector.

British Prime Minister Theresa May recently suggested that social media platforms be treated like news organizations, which would render them responsible for content appearing on their platforms.

Rep. Steve King has recommended revisiting the law that shields internet companies from being treated as the publisher of content users post, thus restoring legal responsibility for defamatory and other tortious or criminal content that is published. The Iowa congressman is referring to a statutory provision that made the current internet social media landscape possible: Section 230 of the Communications Decency Act.

Publishers of content are typically liable for the material they disseminate, even when the content originates from individual unpaid contributors, such as a “letter to the editor.”

In 1996, when the web as we know it was still in its infancy, Congress passed the Communications Decency Act. An amendment to the original bill, Section 230, stated, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The statute protected Internet providers from being deemed news organizations and gave legal immunity to the tech companies, ostensibly to foster industry growth and freedom of speech.

The U.S. Supreme Court stripped away much of the bill in 1997, but Section 230 was left unscathed.

Later precedents interpreted Section 230 broadly so that digital platform companies could grow exponentially, without serious concern for illegal speech placed on their platforms. And grow they did, to become the gargantuan companies that they are today, complete with secret algorithms that render selected users invisible. Without the provision, the young companies would not have been economically viable at the start.

The law also prevents liability in the event “objectionable” material is removed. If the companies do choose to eliminate offensive user-created content, their immunity is not forfeited.

These massive companies are essentially being treated by the law as if they were still mere startups. Although many in the tech community see Section 230 as sacrosanct, i.e., not to be touched, the provision was modified by a bipartisan coalition in Congress earlier this year. President Trump signed legislation amending Section 230 in April 2018, denying some legal immunity to internet platforms in order to fight sex trafficking.

More carve-outs of the statute, or the threat of such, will get the attention of the tech giants and perhaps motivate them to return to the free and open platforms they once wanted to be.