Supreme Court overturns ruling holding platforms responsible for users' criminal activity

Posted:
in General Discussion
The Supreme Court has maintained that internet platforms such as social media are not responsible for users' content and actions, even if they result in criminal conduct or death.

US Supreme Court


On Thursday, the Supreme Court ruled that Twitter could not be held responsible for aiding and abetting an ISIS-executed terrorist attack, upholding immunity granted by Section 230.

The original lawsuit was filed by the family of Nawras Alassaf, who was among 39 people killed in a January 2017 ISIS attack in Istanbul. Filed in California, the suit invoked the Antiterrorism Act, which allows U.S. nationals to sue anyone who "aids and abets, by knowingly providing substantial assistance" to terrorism.

The plaintiffs' logic was that, since Twitter and others knew ISIS used their services, the tech companies didn't work hard enough to remove the content from view. Twitter argued that knowing terrorism exists on a platform isn't the same as knowing about "specific accounts that substantially assisted" in the attack, which it could have acted upon directly.

Section 230 refers to the part of the 1996 Communications Decency Act that immunizes websites and services against liability for content generated by their users, assuming a "good faith" effort is made to moderate illegal content.

In short, platforms like Facebook, YouTube, and Twitter cannot be considered the publisher of that content if it is posted there by someone else.

One objection to Section 230 is that some parties believe it is too protective of service providers. In addition, the law is intentionally broad, leading to accusations that it is being overused.

Part of that is defining what content is objectionable enough to be removable. When it comes to political content, its removal could be seen as political commentary or, worse, censorship.

While Section 230 remains the status quo, growing bipartisan support for changes suggests this may not always be the case.

A full repeal of Section 230 is unlikely, and tweaking is more plausible. However, while support for changes may be bipartisan, each party wants different and incompatible changes.


Comments

  • Reply 1 of 9
    This is a good ruling. It’s impossible for social media platforms to police all content. Particularly when it comes to “objectionable” content.
  • Reply 2 of 9
chasm Posts: 3,306 member
    The people who wanted to abolish or rewrite Section 230 were so zealous in their fever to prevent “suppression” of certain views that they did not seem to understand that if Section 230 were substantially altered, sites would simply eliminate any opportunity for users to comment at all, for fear of litigation.

    And those sites that exist primarily to skirt the line of promoting actions considered illegal would simply be shut down.

    What would THAT do for underrepresented or currently-unpopular viewpoints, I wonder?
  • Reply 3 of 9
carnegie Posts: 1,078 member
    I'm headed out the door so I won't get lost in the details of this decision or too far into the procedural stance of the case as it reached the Supreme Court. But I did want to point out that Section 230 had nothing to do with this decision. The Court found that the plaintiffs hadn't sufficiently made out a claim under the Justice Against Sponsors of Terrorism Act, which was the basis for what remained of the suit.

Had the Court ruled the other way, the case would likely have gone back to the district court where Section 230 might have become an issue. As it was, the district court didn't reach the Section 230 issue because it didn't need to. And when the Ninth Circuit reversed the district court, it didn't address the Section 230 issue because the district court hadn't done so. Without the petition to the Supreme Court (and cert grant), the district court - after being reversed on the JASTA claim - would likely have addressed the Section 230 issue. Then its decision on that might have been appealed to the Ninth Circuit before possibly being appealed to the Supreme Court.

    At any rate, this Supreme Court decision tells us nothing about how Section 230 might protect Twitter and others in similar situations.
  • Reply 4 of 9
lowededwookie Posts: 1,143 member
    I think it’s reasonable to preclude social platforms from guilt of terrorist activities. They’re not posting these messages. 

    Where would it stop? Hold vehicle dealers responsible because someone bought a car then mowed people over on the footpath?

    The individual posting is the responsible party full stop.
  • Reply 5 of 9
mattinoz Posts: 2,322 member
    chasm said:
    The people who wanted to abolish or rewrite Section 230 were so zealous in their fever to prevent “suppression” of certain views that they did not seem to understand that if Section 230 were substantially altered, sites would simply eliminate any opportunity for users to comment at all, for fear of litigation.

    And those sites that exist primarily to skirt the line of promoting actions considered illegal would simply be shut down.

    What would THAT do for underrepresented or currently-unpopular viewpoints, I wonder?
Yes, but if platforms go the other way and are completely hands-off with moderation, it will have the same effect. Customers* will walk away from all the noise and the bots.
  • Reply 6 of 9
    Well, score one, if incomplete, for the good guys.
  • Reply 7 of 9
chasm Posts: 3,306 member
    mattinoz said:
Yes, but if platforms go the other way and are completely hands-off with moderation, it will have the same effect. Customers* will walk away from all the noise and the bots.
    Yes, but as mentioned Section 230 only indemnifies publishers IF they make a “good faith” effort to censor comments that either call for or cause illegal acts. If sites don’t do this, they can expect to be held liable for the consequences of that negligence.

    Unless they’re rich enough, of course. :)
  • Reply 8 of 9
carnegie Posts: 1,078 member
    chasm said:
    mattinoz said:
Yes, but if platforms go the other way and are completely hands-off with moderation, it will have the same effect. Customers* will walk away from all the noise and the bots.
    Yes, but as mentioned Section 230 only indemnifies publishers IF they make a “good faith” effort to censor comments that either call for or cause illegal acts. If sites don’t do this, they can expect to be held liable for the consequences of that negligence.

    Unless they’re rich enough, of course. :)
    That's not really how Section 230 works. I think a lot of people have been misled on this point in part because, as with so many other things, there's been a lot of inaccurate reporting on how Section 230 works.

    There are two distinct protections provided for (or clarified) by Section 230 which often get conflated. First, there's an unqualified protection against being treated as the publisher of information provided by others. That's 47 USC §230(c)(1):

    No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.


So Twitter, e.g., isn't legally liable for defamatory postings made by others. It isn't responsible for others' speech just because it provides resources they might use to propagate such speech. That subsection is also, btw, what protects you and me when we simply quote someone else's speech. The point is, in general I'm responsible for my own speech (to include comments I might make regarding others' speech) but not for the speech of others. In that way Section 230(c)(1) provides protections for everyone using the internet - ISPs, so-called platforms, users - and without it (or common law to substantially the same effect) the internet as we know it couldn't exist.


    Then there's another protection provided by 47 USC §230(c)(2):

    No provider or user of an interactive computer service shall be held liable on account of—

    (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

    (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).


    That subsection provides protection against civil liability for, e.g., taking down content provided by others. So Twitter, e.g., can censor speech which it finds "otherwise objectionable" so long as it acts in good faith in doing so. Generally speaking, someone can't (successfully) sue Twitter for taking down their (or others') content.

    The key point here though is that the protections provided by those respective subsections aren't linked. If someone acts in bad faith in censoring some content, they might be liable - if there's a statutory or common law basis for such liability - for that censoring. Bob, e.g., might be able to (successfully) sue Twitter for its bad faith action in taking down his Tweet. But that bad faith doesn't then make Twitter liable for anything posted by others which it leaves up. It isn't treated as the publisher or speaker of such content. Full Stop. That remains true regardless of its good or bad faith efforts to censor other content. 



    EDIT: To be clear, I'm only talking about Section 230 here. It provides protections against civil liability. By its own terms it doesn't block enforcement of federal criminal laws. To the extent anyone on the internet violates federal criminal laws, they can be held accountable for doing so. But that's a separate matter from the protections provided by Section 230.

edited May 2023
  • Reply 9 of 9
mattinoz Posts: 2,322 member
    chasm said:
    mattinoz said:
Yes, but if platforms go the other way and are completely hands-off with moderation, it will have the same effect. Customers* will walk away from all the noise and the bots.
    Yes, but as mentioned Section 230 only indemnifies publishers IF they make a “good faith” effort to censor comments that either call for or cause illegal acts. If sites don’t do this, they can expect to be held liable for the consequences of that negligence.

    Unless they’re rich enough, of course. :)
So given that it's basically a daily occurrence for FB and Twitter to run sponsored posts for clear and illegal scams, why has the hammer never fallen on them?