Supreme Court Sidesteps Ruling on Scope of Internet Liability Shield

The Supreme Court said on Thursday that it would not rule on a question of great importance to the tech industry: whether YouTube could invoke a federal law that shields internet platforms from legal responsibility for what their users post, in a case brought by the family of a woman killed in a terrorist attack.

The court instead decided, in a companion case, that a different law, one allowing suits for “knowingly providing substantial assistance” to terrorists, generally did not apply to tech platforms in the first place, meaning that there was no need to decide whether the liability shield applied.

The court’s unanimous decision in the second case, Twitter v. Taamneh, No. 21-1496, effectively resolved both cases and allowed the justices to duck difficult questions about the scope of the 1996 law, Section 230 of the Communications Decency Act.

In a brief, unsigned opinion in the case concerning YouTube, Gonzalez v. Google, No. 21-1333, the court said it would not “address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief.” The court instead returned the case to the appeals court “to consider plaintiffs’ complaint in light of our decision in Twitter.”

The Twitter case concerned Nawras Alassaf, who was killed in a terrorist attack at a nightclub in Istanbul in 2017 for which the Islamic State claimed responsibility. His family sued Twitter and other tech companies, saying they had allowed ISIS to use their platforms to recruit and train terrorists.

Justice Clarence Thomas, writing for the court, said the “plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.”

That decision allowed the justices to avoid ruling on the scope of Section 230, a 1996 law intended to nurture what was then a nascent creation called the internet.

Section 230 was a reaction to a 1995 decision holding an online message board liable for what a user had posted because the service had engaged in some content moderation. The provision says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 230 helped enable the rise of huge social networks like Facebook and Twitter by ensuring that the sites did not assume legal liability with every new tweet, status update and comment. Limiting the sweep of the law could expose the platforms to lawsuits claiming they had steered people to posts and videos that promoted extremism, urged violence, harmed reputations and caused emotional distress.

The ruling comes as developments in cutting-edge artificial intelligence products raise profound questions about whether laws can keep up with rapidly changing technology.

The case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a restaurant in Paris during terrorist attacks there in November 2015, which also targeted the Bataclan concert hall. The family’s lawyers argued that YouTube, a subsidiary of Google, had used algorithms to push Islamic State videos to interested viewers.

A growing bipartisan group of lawmakers, academics and activists has become skeptical of Section 230, saying it has shielded giant tech companies from consequences for disinformation, discrimination and violent content across their platforms.

In recent years, they have advanced a new argument: that the platforms forfeit their protections when their algorithms recommend content, target ads or introduce new connections to their users. These recommendation engines are pervasive, powering features like YouTube’s autoplay function and Instagram’s suggestions of accounts to follow. Judges have mostly rejected this reasoning.

Members of Congress have also called for changes to the law. But political realities have largely stopped those proposals from gaining traction. Republicans, angered by tech companies that remove posts by conservative politicians and publishers, want the platforms to take down less content. Democrats want the platforms to remove more, like false information about Covid-19.