Supreme Court overturns ruling holding platforms responsible for users' criminal activity
The Supreme Court has maintained that internet platforms such as social media services are not responsible for users' content and actions, even when those actions result in criminal conduct or death.
US Supreme Court
On Thursday, the Supreme Court ruled that Twitter could not be held responsible for aiding and abetting an ISIS-executed terrorist attack, upholding immunity granted by Section 230.
The original lawsuit was filed in California by the family of Nawras Alassaf, who was among 39 people killed in a January 2017 ISIS attack in Istanbul. It was brought under the Antiterrorism Act, which allows U.S. nationals to sue anyone who "aids and abets, by knowingly providing substantial assistance," an act of terrorism.
The plaintiffs' logic was that since Twitter and others knew ISIS used their services, the tech companies did not work hard enough to remove that content from view. Twitter countered that general knowledge of terrorist activity on its platform is not the same as knowledge of "specific accounts that substantially assisted" in the attack, which it could have acted on directly.
Section 230 refers to the part of the 1996 Communications Decency Act that immunizes websites and services against liability for content generated by their users, assuming a "good faith" effort is made to moderate illegal content.
In short, platforms like Facebook, YouTube, and Twitter cannot be considered the publisher of that content if it is posted there by someone else.
One criticism of Section 230 is that some parties believe it is too protective of service providers. In addition, the law is deliberately broad, leading to accusations that it is overused.
Part of the dispute is over what counts as objectionable content that can be removed. When it comes to political content, removal can be read as political commentary or, worse, censorship.
While Section 230 remains the status quo, growing bipartisan support for changes suggests it may not stay that way.
A full repeal of Section 230 is unlikely, and tweaking is more plausible. However, while support for reform may be bipartisan, each party wants different and incompatible changes.
Read on AppleInsider
Comments
Had the Court ruled the other way, the case would likely have gone back to the district court where Section 230 might have become an issue. As it was, the district court didn't reach the Section 230 issue because it didn't need to. And when the Ninth Circuit reversed the district court, it didn't address the Section 230 issue because the district court hadn't done so. Without the petition to the Supreme Court (and cert grant), the district court - after being reversed on the JASTA claim - would likely have addressed the Section 230 issue. Then its decision on that might have been appealed to the Ninth Circuit before possibly being appealed to the Supreme Court.
At any rate, this Supreme Court decision tells us nothing about how Section 230 might protect Twitter and others in similar situations.
The individual posting is the responsible party, full stop.
There are two distinct protections provided for (or clarified) by Section 230 which often get conflated. First, there's an unqualified protection against being treated as the publisher of information provided by others. That's 47 USC §230(c)(1):
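"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."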
So Twitter, e.g., isn't legally liable for defamatory postings made by others. It isn't responsible for others' speech just because it provides resources they might use to propagate such speech. That subsection is also, btw, what protects you and me when we simply quote someone else's speech. The point is, in general I'm responsible for my own speech (to include comments I might make regarding others' speech) but not for the speech of others. In that way Section 230(c)(1) provides protections for everyone using the internet - ISPs, so-called platforms, users - and without it (or common law to substantially the same effect) the internet as we know it couldn't exist.
Then there's another protection provided by 47 USC §230(c)(2):
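"No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected..."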
That subsection provides protection against civil liability for, e.g., taking down content provided by others. So Twitter, e.g., can censor speech which it finds "otherwise objectionable" so long as it acts in good faith in doing so. Generally speaking, someone can't (successfully) sue Twitter for taking down their (or others') content.
The key point here though is that the protections provided by those respective subsections aren't linked. If someone acts in bad faith in censoring some content, they might be liable - if there's a statutory or common law basis for such liability - for that censoring. Bob, e.g., might be able to (successfully) sue Twitter for its bad faith action in taking down his Tweet. But that bad faith doesn't then make Twitter liable for anything posted by others which it leaves up. It isn't treated as the publisher or speaker of such content. Full Stop. That remains true regardless of its good or bad faith efforts to censor other content.
EDIT: To be clear, I'm only talking about Section 230 here. It provides protections against civil liability. By its own terms it doesn't block enforcement of federal criminal laws. To the extent anyone on the internet violates federal criminal laws, they can be held accountable for doing so. But that's a separate matter from the protections provided by Section 230.