'Safe Tech Act' could strip Section 230 user content protections from websites

Posted in General Discussion, edited February 8
A trio of Democratic senators has unveiled a new bill, dubbed the SAFE TECH Act, that would fundamentally change Section 230 and could make internet companies with any monetization at all liable for user-generated content.

Credit: WikiCommons


The proposed legislation, introduced by senators Mark Warner, Mazie Hirono, and Amy Klobuchar, makes several key changes to Section 230 of the Communications Decency Act, which currently shields internet companies from liability for user-posted content.

One of those changes is an alteration of the core language of Section 230 that could strip protections in situations where payments are involved -- whether the provider or user has accepted payment to make the speech available, or has funded the creation of the speech in any way, including through ads served alongside the content. In a tweet, Sen. Warner said that online ads are "a key vector for all manner of frauds and scams."

It could affect paid services and premium hosting platforms like Substack or Patreon, too. Under the current language of the proposal, companies could be held liable for harmful content if any ads from any provider are displayed, not just abusive ones.

The new legislation also opens the door for internet companies to be held liable in situations of cyberstalking, targeted harassment, and discrimination.

Specifically, the Safe Tech Act would create a carve-out that would allow individuals to file lawsuits against companies if the material they host can cause "irreparable harm." It would also allow lawsuits for human rights abuses overseas.

The legislation also would not impair the enforcement of civil rights, antitrust, stalking, harassment, or intimidation laws.

"The SAFE TECH Act doesn't interfere with free speech - it's about allowing these platforms to finally be held accountable for harmful, often criminal behavior enabled by their platforms to which they have turned a blind eye for too long," said Sen. Warner in a tweet.

There are already some criticisms of the legislation, including from Democratic Sen. Ron Wyden -- one of the original authors of Section 230.

"Unfortunately, as written, it would devastate every part of the open internet, and cause massive collateral damage to online speech," Sen. Wyden told TechCrunch. "Creating liability for all commercial relationships would cause web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech."

Section 230 has been a target of criticism from both Democrats and Republicans in recent years. Although the two parties have different goals in mind, lawmakers on both sides of the aisle have signaled that they would like to see the protections reformed.

Comments

  • Reply 1 of 26
    22july2013 Posts: 2,102 member
    I recall an AI admin saying that they would probably abolish these forums if 230 was revoked. Can one of them chime in here to say whether that's still true under this change?
  • Reply 2 of 26
    Mike Wuerthele Posts: 6,105 administrator
    I recall an AI admin saying that they would probably abolish these forums if 230 was revoked. Can one of them chime in here to say whether that's still true under this change?
    It is still true under this proposal. Wyden is right, Warner is wrong.

    It isn't cost-effective to pre-moderate comments. There is no way to make it cost-effective on what is already not a profitable aspect of operations.
    edited February 8
  • Reply 3 of 26
    JonG Posts: 6 unconfirmed, member
    This edit to 230 is likely unconstitutional, as it means the government would be passing a law that requires companies to limit free speech. Prior restraint of free speech is exactly what the 1st Amendment protects against. 230 as written now furthers the 1st Amendment, and any rewrite like this will likely run afoul of SCOTUS pretty quickly (whether or not it is conservative), as the justices have usually been unanimous in interpreting 1st Amendment protections very widely.
  • Reply 4 of 26
    sdw2001 Posts: 17,421 member
    JonG said:
    This edit to 230 is likely unconstitutional, as it means the government would be passing a law that requires companies to limit free speech. Prior restraint of free speech is exactly what the 1st Amendment protects against. 230 as written now furthers the 1st Amendment, and any rewrite like this will likely run afoul of SCOTUS pretty quickly (whether or not it is conservative), as the justices have usually been unanimous in interpreting 1st Amendment protections very widely.
    I’m skeptical that it’s unconstitutional.  The government is really removing liability protection civilly, not requiring private companies to limit speech.  I’m not saying I agree with the proposal, because I absolutely don’t.  As a general rule, anything Hirono is for is a bad idea.  As for SCOTUS, you seem to be implying that “liberal” justices are more likely to support free speech.  While that may have been true in a classical sense, today’s political alignment is precisely the opposite. It is the “conservative” justices who err more on the side of freedom of expression. You are correct, though, that the court in general seems to favor freedom of expression, especially as compared to several lower courts.
  • Reply 5 of 26
    It's an interesting way to attack free speech. While the government itself wouldn't be punishing someone for what they said, which would be a direct violation of free speech, it would be punishing the people who created the platform that people are speaking from. They're basically telling these site owners: either you shut them up, or we're coming for you. And where does the line of enforcement stop? Can AppleInsider be punished because a user went full-on HAM calling for violence on another site and also implied it here without directly saying it? Guilty by association.
  • Reply 6 of 26
    Why not just pass a law that makes the commenters pay 70% of the damages and the forum companies pay 30%?

    In fact, I think the anonymity of the internet is kind of a bad thing. So much hateful and damaging behavior has been committed because of it. They should pass a law that makes people use their real identity, or at least only one identity for all internet activities. It’s not like the NSA doesn’t know who we are and what we did last night.
  • Reply 7 of 26
    viclauyyc said:
    Why not just pass a law that makes the commenters pay 70% of the damages and the forum companies pay 30%?

    In fact, I think the anonymity of the internet is kind of a bad thing. So much hateful and damaging behavior has been committed because of it. They should pass a law that makes people use their real identity, or at least only one identity for all internet activities. It’s not like the NSA doesn’t know who we are and what we did last night.
    What you're asking for would be a violation of free speech, because you would be punishing the individual who made the comment. The other issue is: who determines the appropriateness of the comment? There are easy things, like direct calls for violence, but things like hate speech are judgment calls and very subjective. Just take what we're seeing now: you can look all across Twitter and Facebook and see hateful speech towards white people (with suggestions of violence) going unpunished and even at times celebrated, but say something like you disagree with kneeling during the anthem, or question the looting that took place during protests, and you're attacked as a racist. No one really wants to support the idea of applying equitable standards towards hate speech. I don't doubt that even now someone will read this comment and think I'm just a bitter Caucasian, which is absolutely hysterical....
  • Reply 8 of 26
    I support free speech* but the internet has proved that there need to be some constraints. Paedophilia, terrorism, misogyny and crazy levels of bullying, to name but a few, cannot be left unchecked and unchallenged.

    There is always a problem of where to draw a line. We have, at least in the UK, laws against media (TV, newspapers and others) publishing the gross offences (those that we pretty much all agree are wrong). There are time-tested mechanisms for handling borderline cases by taking publishers to court and letting judges decide.

    I fail to see why we don't test social media in the same way: I simply don't believe that the likes of Google and Facebook can extract the last iota of marketable profiling from a user's post yet fail to detect non-borderline unacceptable content (and, in fact, Google and others have actually done admirable, and successful, work against child pornography). Such capabilities can be made available to all forum hosts.

    If a website collects and disseminates information, how is that different from a news site? What distinguishes them from publishers? If it looks like a duck and quacks like a duck...

    *Free speech should admit that you can discuss any subject. It's when it becomes damaging that it's a problem - hard to describe but I think most grown-ups can recognise that when they see it.
  • Reply 9 of 26
    If I were AI or a similar website, I’d probably take my non-revenue-generating comment feature down.  Why take the risk?
  • Reply 10 of 26
    maestro64 Posts: 4,909 member
    Just another example: you think you have a problem now? Wait until you see the solution the government comes up with.
  • Reply 11 of 26
    MplsP Posts: 2,939 member
    maestro64 said:
    Just another example: you think you have a problem now? Wait until you see the solution the government comes up with.
    The truly ironic part of it all is that some of the people pushing the hardest for 'reform' of Section 230, because of perceived 'infringement of freedom of speech,' are the very people that Twitter, Facebook, etc. would be canceling first.
  • Reply 12 of 26
    mcdave Posts: 1,602 member
    Why should monetisation be a factor? Surely commercials don’t come into this & isn’t there already legislation against false advertising irrespective of the media used?

    Although maybe AI could offload the comments section to a platform that works well.
  • Reply 13 of 26
    chasm Posts: 2,340 member
    I recall an AI admin saying that they would probably abolish these forums if 230 was revoked. Can one of them chime in here to say whether that's still true under this change?
    It is still true under this proposal. Wyden is right, Warner is wrong.

    It isn't cost-effective to pre-moderate comments. There is no way to make it cost-effective on what is already not a profitable aspect of operations.
    And you can assume the same about every website-connected forum, not just this one. Small-to-medium sites like AI and any other specialty sites (or even mainstream sites with forums) you care to name are not making anywhere near enough money to sink more money into additional moderation for forums.

    Also: be VERY VERY careful what you wish for, "conservatives" ...  study after study shows that rightward-leaning folks tend to be the ones spreading a lot of misinformation and Russian (and other types of) propaganda ... Parler and Gab (et al) would be under the same requirements to moderate, and would likely fold in due course ...
  • Reply 14 of 26
    Good. Do it. Now. 

    Out of the chaos will come a new - hopefully more chastened - order. They’ll all survive, and we’ll be better off. 
  • Reply 15 of 26
    bluefire1 Posts: 1,105 member
    I recall an AI admin saying that they would probably abolish these forums if 230 was revoked. Can one of them chime in here to say whether that's still true under this change?
    It is still true under this proposal. Wyden is right, Warner is wrong.

    It isn't cost-effective to pre-moderate comments. There is no way to make it cost-effective on what is already not a profitable aspect of operations.
    You already pre-moderate comments. Anytime there’s even a hint of controversy in any article, we can be sure “Discussion Not Found” will be listed under Comments.
  • Reply 16 of 26
    I recall an AI admin saying that they would probably abolish these forums if 230 was revoked. Can one of them chime in here to say whether that's still true under this change?
    It is still true under this proposal. Wyden is right, Warner is wrong.

    It isn't cost-effective to pre-moderate comments. There is no way to make it cost-effective on what is already not a profitable aspect of operations.
    Exactly right, it's unmanageable. Each private company or person should be allowed to manage their property as they see fit. The Constitution only prohibits "government" censorship, not "private" censorship.
  • Reply 17 of 26
    I recall an AI admin saying that they would probably abolish these forums if 230 was revoked. Can one of them chime in here to say whether that's still true under this change?
    It is still true under this proposal. Wyden is right, Warner is wrong.

    It isn't cost-effective to pre-moderate comments. There is no way to make it cost-effective on what is already not a profitable aspect of operations.
    It would be sad to lose these columns. We can't see, of course, how much work you do behind the scenes but the appearance is that this is generally a sensible set of people who argue their differences without very often descending to simple abuse or worse. The odd person gets banned and that is presumably the result of your patience becoming exhausted.

    However, I would hope that any final legislation was proportionate and accepted that moderation could not be perfect. This would be consistent with judging the amount of harm likely: the bigger the audience, the bigger the potential harm and hence the higher the standards to be applied. However, the bigger audience also tends to go with the bigger means to manage (eg Facebook probably has more ability to moderate than AI).

    The difference between a thread full of vitriol and threats and one with a single intemperate comment is pretty clear. If so, the odd inappropriate post slipping past would not be cause for legal action if it was removed once highlighted. With automated aid, I would hope that your workload would not change too much. But perhaps you are already removing a huge number of unacceptable posts that we, thankfully, never see.
  • Reply 18 of 26
    Mike Wuerthele Posts: 6,105 administrator
    bluefire1 said:
    I recall an AI admin saying that they would probably abolish these forums if 230 was revoked. Can one of them chime in here to say whether that's still true under this change?
    It is still true under this proposal. Wyden is right, Warner is wrong.

    It isn't cost-effective to pre-moderate comments. There is no way to make it cost-effective on what is already not a profitable aspect of operations.
    You already pre-moderate comments. Anytime there’s even a hint of controversy in any article, we can be sure “Discussion Not Found” will be listed under Comments.
    That's not pre-moderation of user-generated content. That's us making a decision that you guys won't follow the forum guidelines, which exist for a reason, so we decide to start with it shut to maintain cost effectiveness of available labor.

    In the last year, less than 0.1% of all AI news started with comments shut from the jump. If you count tips, this increases to 0.4%, because of the prevalence of inane "this is basic, why did you post this?" comments, which don't help the folks who are actually looking to learn how to do things.

    This number has decreased as folks of all political persuasions have been banned. In regards to that ban total, the banned population consists of 51% commenters with a demonstrable left-of-center bent, and 49% with a right-of-center one.

    Nearly everything starts open, as I have demonstrated. If it becomes cost-ineffective to moderate because of forum-goer behavior, we shut it down -- and this can happen very, very quickly. If this reform is passed, then we simply won't have a comment section, as we simply don't have the time, manpower, or money to pre-moderate every single piece of content that is posted. There's a big difference between the two. And just because you didn't see it open doesn't mean it didn't start open.
    edited February 9
  • Reply 19 of 26
    Mike Wuerthele Posts: 6,105 administrator

    mcdave said:
    Why should monetisation be a factor? Surely commercials don’t come into this & isn’t there already legislation against false advertising irrespective of the media used?

    Although maybe AI could offload the comments section to a platform that works well.
    It isn't about "false advertising." As the bill is written, ANY advertising, on the forum or the host itself, counts as monetization as it pertains to being subject to 230 protections.
  • Reply 20 of 26
    Mike Wuerthele Posts: 6,105 administrator
    command_f said:
    I recall an AI admin saying that they would probably abolish these forums if 230 was revoked. Can one of them chime in here to say whether that's still true under this change?
    It is still true under this proposal. Wyden is right, Warner is wrong.

    It isn't cost-effective to pre-moderate comments. There is no way to make it cost-effective on what is already not a profitable aspect of operations.
    It would be sad to lose these columns. We can't see, of course, how much work you do behind the scenes but the appearance is that this is generally a sensible set of people who argue their differences without very often descending to simple abuse or worse. The odd person gets banned and that is presumably the result of your patience becoming exhausted.

    However, I would hope that any final legislation was proportionate and accepted that moderation could not be perfect. This would be consistent with judging the amount of harm likely: the bigger the audience, the bigger the potential harm and hence the higher the standards to be applied. However, the bigger audience also tends to go with the bigger means to manage (eg Facebook probably has more ability to moderate than AI).

    The difference between a thread full of vitriol and threats and one with a single intemperate comment is pretty clear. If so, the odd inappropriate post slipping past would not be cause for legal action if it was removed once highlighted. With automated aid, I would hope that your workload would not change too much. But perhaps you are already removing a huge number of unacceptable posts that we, thankfully, never see.
    Moderating these forums is an incredible amount of work, because everything generates a significant amount of unacceptable posts, and so far, automation is useless.

    On the aggregate, we know who you are, and you aren't children. Simple abuse and threats are way too common for what is, on average, a significantly older population than, say, Reddit's.