Justice Department unveils legislation to alter Section 230 protections

Posted in General Discussion, edited September 2020
The U.S. Department of Justice has unveiled new draft legislation that would reform Section 230, a key protection for technology companies from users posting illicit content.

US Attorney General William Barr


In June, the DOJ announced a set of reforms that would hold technology companies more legally responsible for the content that their users posted. That followed President Donald Trump signing an executive order that sought to weaken social media platform protections.

The proposed legislation on Wednesday would narrow the criteria that online platforms would need to meet to earn Section 230 liability protections. One of the main reforms is carving out immunity in cases such as child sexual abuse. It would need to be passed by Congress.

It also includes a "Bad Samaritan" section that could deny immunity to platforms that don't take action on content that violates federal law, or fail to report illegal material. That's similar to a variant of the EARN IT act that passed in the Judiciary Committee in July.

The proposal also states that nothing in the statute should prevent enforcement under separate laws, such as antitrust regulations.

Section 230 of the Communications Decency Act protects online platforms from being held liable for the content that users post. It also protects platforms when they moderate and remove harmful content in good faith.

Those protections allowed technology platforms and the internet to flourish in their nascent years, but have since come under scrutiny. Trump signed the executive order in May, for example, after Twitter fact checked one of his tweets.

That scrutiny has bled into other areas of technology oversight. At what was supposed to be an antitrust hearing in July, many members of the U.S. Senate Judiciary Committee criticized companies like Facebook for alleged "censorship" of political views.

The Justice Department, specifically, has been looking into Section 230 reforms for the better part of a year. Attorney General William Barr said in December 2019 that the department was "thinking critically" about Section 230, and later invited experts to debate the law in February.

Comments

  • Reply 1 of 28
    carnegie Posts: 1,016 member
    Here are the DOJ’s proposed changes.

    I think the changes the Administration is trying to make are bad. But I don’t think they’ve worded the legislation such that it actually does make those changes.
  • Reply 2 of 28
    gatorguy Posts: 23,510 member
    Seems as tho what the current administration apparently wants would lead to MORE censorship and not less. Companies will err on the side of caution and simply remove anything that might have the appearance of being untrue as they'd now be responsible for what the public posts on their platforms? Geesh, that sounds totally unmanageable and the only bulletproof solution would be blocking all comments. 
  • Reply 3 of 28
    docno42 Posts: 3,733 member
    Sigh - politicians get it wrong, yet again. 

    I don't think it's unreasonable to allow platforms to moderate content.  Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).

    What's unreasonable is the completely opaque and inconsistent way they moderate content, often with guidelines that are non-existent or unpublished.  

    Sunlight is the best disinfectant.  I think if they want to keep their 230 safe harbor, companies should be required:

    1)  Publish publicly their specific rules.  No more "we reserve the right for any reason".  You want to hide behind any reason?  No problem - no more 230 for you.
    2)  Publish publicly all moderation, citing specifically how the rules from #1 were used for the moderated outcome.

    It won't be perfect, but it should lead to a LOT more consistency.  Twitter is rife with left wing violence, but if any other group posts violent content it's removed immediately.  It should all be removed, or none of it removed.  

    As others pointed out, these proposed changes are just going to encourage companies to err on the side of caution even more, which isn't good 
  • Reply 4 of 28
    carnegie Posts: 1,016 member
    Now that I've read more from the DOJ, I don't think it intended to do what I previously thought it intended to do - i.e., remove the immunity which providers have for content they allow if they choose to censor some material in objectionable ways.

    The wording of the proposed changes wouldn't have that effect, and it seems that wasn't the intended effect. These changes aren't as substantial - or as bad - as I had originally thought. Even if this proposal were enacted, providers (and users, for that matter) would still enjoy unconditioned immunity for (i.e. not be treated as publishers of) the speech of others which they left up.


    EDIT: I should be clear, the immunity provided by Section 230(c)(1) wouldn't be unconditional. But it wouldn't be conditioned on the provider not censoring certain material. The proposed changes actually place some meaningful conditions on Section 230(c)(1) immunity.
    edited September 2020
  • Reply 5 of 28
    It's sad that both Biden and Trump want to do away with 230. Both have been very vocal about it, especially Biden. I wish they would leave it alone. The one positive is that with Trump in office it probably will never pass. 
  • Reply 6 of 28
    Mike Wuerthele Posts: 6,568 administrator
    gatorguy said:
    Seems as tho what the current administration apparently wants would lead to MORE censorship and not less. Companies will err on the side of caution and simply remove anything that might have the appearance of being untrue as they'd now be responsible for what the public posts on their platforms? Geesh, that sounds totally unmanageable and the only bulletproof solution would be blocking all comments. 
    This is our (and our lawyer's) interpretation as well.
  • Reply 7 of 28
    All this is because the POTUS wants to punish internet companies for removing his posts that intentionally spread misinformation. That’s it. Poor place to craft policy from...
  • Reply 8 of 28
    docno42 said:
    Sigh - politicians get it wrong, yet again. 

    I don't think it's unreasonable to allow platforms to moderate content.  Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).

    What's unreasonable is the completely opaque and inconsistent way they moderate content, often with guidelines that are non-existent or unpublished.  

    Sunlight is the best disinfectant.  I think if they want to keep their 230 safe harbor, companies should be required:

    1)  Publish publicly their specific rules.  No more "we reserve the right for any reason".  You want to hide behind any reason?  No problem - no more 230 for you.
    2)  Publish publicly all moderation, citing specifically how the rules from #1 were used for the moderated outcome.

    It won't be perfect, but it should lead to a LOT more consistency.  Twitter is rife with left wing violence, but if any other group posts violent content it's removed immediately.  It should all be removed, or none of it removed.  
    Nonsense. You don’t like that private platform owners are exerting their own freedom of expression by removing content that violates their private platform rules. It’s their billboard, they get to decide what goes on it. They don’t have to publish material that breaks their rules, intentionally spreads misinformation, advocates violence, etc. It’s in their interest not to do so. The right-winger Qanon-believer who shot up a pizza parlor thankfully didn’t kill anyone, but just as easily could have. It’s in the private platform’s interest not to assist such lunatics and their plans. 

    Twitter is not rife with left wing violence. It's rife with video of protests, the vast majority of which are peaceful. It also documents police violence and government violations of the freedom to peaceably assemble. Equating videos of racist right-winger violence against people with videos of property damage is a fallacy. 
    edited September 2020
  • Reply 9 of 28
    carnegie Posts: 1,016 member
    gatorguy said:
    Seems as tho what the current administration apparently wants would lead to MORE censorship and not less. Companies will err on the side of caution and simply remove anything that might have the appearance of being untrue as they'd now be responsible for what the public posts on their platforms? Geesh, that sounds totally unmanageable and the only bulletproof solution would be blocking all comments. 
    These changes are problematic in a number of ways. And, yeah, one effect might be to cause more censorship. But the changes don't go as far as I'd originally thought and they don't make providers responsible for the speech of others, even if they censor some speech.

    One thing these changes do is require a notification system which providers must use to take down material that, e.g., has been determined by a court to be defamatory. But the changes don't mean that providers need to take down material preemptively to be on the safe side, so to speak, so that they aren't liable for defamation posted by others. Providers still wouldn't be liable for the speech of others except in limited contexts, to include if such speech has been determined to be defamatory and the provider was given proper notice of that and they still didn't remove it.
  • Reply 10 of 28
    This is an attempt to silence the First Amendment and allow only those that `govern' to speak for the masses. Sorry, but these attempts are laughable and a waste of time per usual by Republicans. November is already baked in and the GOP is all that will suffer.
  • Reply 11 of 28
    elijahg Posts: 2,669 member
    Does this also mean a hosting company hosting a customer site with misinformation would be obliged to shut it down? If someone ran a webserver at home with some flat-earther style crap on, their ISP would be obliged to shut it down? Ultimately would this mean AWS would be obliged to shut down Twitter's access to their servers if someone posted some lies? That's how I read this, and if that's correct, that's completely absurd.
    edited September 2020
  • Reply 12 of 28
    sflocal Posts: 6,020 member
    docno42 said:
    I don't think it's unreasonable to allow platforms to moderate content.  Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).

    The problem I see is people - really dumb, easily-manipulated people - too eager to believe anything they read off the Internet, who then run with it, spreading it like wildfire.

    I'm getting to the point where things need to be vetted.  As long as it's based on facts, truth, real science, then yes, don't censor anything.  But when any nut-job (including #45) is allowed to post nonsense as fact, then I'm beginning to think that platforms like Facebook and Twitter need to start taking responsibility.  It's hard, and I have zero idea how anyone, or anything, can pull that off. 
  • Reply 13 of 28
    gatorguy said:
    Seems as tho what the current administration apparently wants would lead to MORE censorship and not less. Companies will err on the side of caution and simply remove anything that might have the appearance of being untrue as they'd now be responsible for what the public posts on their platforms? Geesh, that sounds totally unmanageable and the only bulletproof solution would be blocking all comments. 
    Many web sites in fact have blocked off their comments.
  • Reply 14 of 28
    Beats Posts: 3,073 member
    sflocal said:
    docno42 said:
    I don't think it's unreasonable to allow platforms to moderate content.  Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).

    The problem I see is people - really dumb, easily-manipulated people - too eager to believe anything they read off the Internet, who then run with it, spreading it like wildfire.

    I'm getting to the point where things need to be vetted.  As long as it's based on facts, truth, real science, then yes, don't censor anything.  But when any nut-job (including #45) is allowed to post nonsense as fact, then I'm beginning to think that platforms like Facebook and Twitter need to start taking responsibility.  It's hard, and I have zero idea how anyone, or anything, can pull that off. 

    I see this a TON with iKnockoff morons. There's also selective facts like "Apple's screens break lol!"
    edited September 2020
  • Reply 15 of 28
    dewme Posts: 4,647 member
    docno42 said:
    Sigh - politicians get it wrong, yet again. 

    I don't think it's unreasonable to allow platforms to moderate content.  Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).

    What's unreasonable is the completely opaque and inconsistent way they moderate content, often with guidelines that are non-existent or unpublished.  

    Sunlight is the best disinfectant.  I think if they want to keep their 230 safe harbor, companies should be required:

    1)  Publish publicly their specific rules.  No more "we reserve the right for any reason".  You want to hide behind any reason?  No problem - no more 230 for you.
    2)  Publish publicly all moderation, citing specifically how the rules from #1 were used for the moderated outcome.

    It won't be perfect, but it should lead to a LOT more consistency.  Twitter is rife with left wing violence, but if any other group posts violent content it's removed immediately.  It should all be removed, or none of it removed.  
    Nonsense. You don’t like that private platform owners are exerting their own freedom of expression by removing content that violates their private platform rules. It’s their billboard, they get to decide what goes on it. They don’t have to publish material that breaks their rules, intentionally spreads misinformation, advocates violence, etc. It’s in their interest not to do so. The right-winger Qanon-believer who shot up a pizza parlor thankfully didn’t kill anyone, but just as easily could have. It’s in the private platform’s interest not to assist such lunatics and their plans. 

    Twitter is not rife with left wing violence. It's rife with video of protests, the vast majority of which are peaceful. It also documents police violence and government violations of the freedom to peaceably assemble. Equating videos of racist right-winger violence against people with videos of property damage is a fallacy. 
    The private platform part is what makes this so challenging from a regulation perspective. When you boil Facebook, Twitter, et al down to their core they really are no more than brokers that provide a way to connect their only real customers, advertisers, with a monetizeable asset - subscriber eyeballs. From a business perspective the only thing Facebook or Twitter care about is selling access to its subscriber's eyeballs to its advertiser customers. Everything else is just noise and overhead. As long as the number of eyeballs stays steady or keeps increasing, life is good for Facebook and Twitter.

    Sure, to keep all those eyeballs showing up to consume advertising, Facebook and Twitter have to provide something in return, that's the overhead cost for Facebook and Twitter. If subscribers start getting nasty towards one another or if the advertisers start crafting their ads to attract a specific audience or promote self interests beyond simply making more money for themselves, negative side effects start to take place. The problem is that the fundamental business model, i.e., connecting advertisers to eyeballs, has no inherent control mechanism to avoid negative side effects, like social disruption. Facebook and Twitter can continue to profit while the negative side effects being emitted from the platform run amok, so something must be done. In control system theory the solution to unconstrained responses is always the same: a negative feedback loop. 

    The only question is who gets to apply the negative feedback. In my mind the beneficiaries of the business model should be the ones responsible for keeping their system under control, just like the operator of a nuclear power plant is responsible for keeping the plant operating safely while deriving a financial profit from operating it. Government regulation in the Facebook and Twitter case would really be directed at mitigating the negative side effects, not controlling the system itself.

    Bottom line: I would place full responsibility on the owners of the system (the social platform business) - in other words, require them to moderate their content, with strict penalties for failing to do so or for letting negative side effects leak out onto the general public, like a radiation cloud. They should not be allowed to take the position that they don't care if their system is abused or causes public suffering. They are getting off way too easy.

    edited September 2020
  • Reply 16 of 28
    I suppose this is one reason so many have adopted Disqus to replace their commenting sections (that is, for sites still allowing comments).
  • Reply 17 of 28
    gatorguy said:
    Seems as tho what the current administration apparently wants would lead to MORE censorship and not less. Companies will err on the side of caution and simply remove anything that might have the appearance of being untrue as they'd now be responsible for what the public posts on their platforms? Geesh, that sounds totally unmanageable and the only bulletproof solution would be blocking all comments. 
    This is our (and our lawyer's) interpretation as well.
    This was the obvious intent, since the tech companies decide to remove certain content and not other content.

    Now they will get a law that makes them liable for everyone's stupidity. It couldn't happen to a nicer group of companies. They first sold everyone on the idea that people could share whatever they liked, while the companies got to say it wasn't their fault. Then the "I'm Offended and Boycott" crowd showed up, and these companies changed their position, allowing certain kinds of insults and shutting down others.
  • Reply 18 of 28
    chasm Posts: 2,615 member
    Mostly very good, very smart comments on this topic, so thanks all for that. Good discussion.

    I agree with Mike that there would be a chilling effect if this were to pass, and there really needs to be a different level of protection for ISPs versus platforms (i.e. Facebook et al). There's a fair chance a court would stop any actual enforcement of this proposal anyway, because this legislation is rooted in, and originates from, a temper tantrum that ironically would also be "censored" (fact-checked) if these rules were in effect -- indeed, under this proposal a certain prolific tweeter would be in more trouble, not less!

    As is, the proposal is at least in part untenable, and indeed (and most ironically) the fringe extremes of both right and left would see more censorship -- and I'll leave it as an exercise for the reader to go check surveys of which side is (currently) the most guilty of spreading misinformation and false claims. If certain congresspeople think they're being "censored" (they are not) now, just wait until these proposals are actually enforced. Let's just call this "the law of unintended consequences."

    Taking a look at Section 230 as it stands today ... I can't find anything wrong with it other than a lack of and/or selective enforcement, or insufficient penalties/deterrents for platforms that are making almost no discernible effort to moderate or block illegal or known-false postings. I would like to think that Barr's proposals are just a sop to show the misinformer-in-chief that he's "doing something" with very little actual tangible effect, but I don't trust that guy, or indeed the agency (agencies, plural, really), as far as I can throw them. While Barr's proposal is not entirely without merit, this is yet another case of bureaucrats creating more regulations where actual, careful enforcement of the existing regulations would have done the job without politicizing the matter, IMO.
  • Reply 19 of 28
    dewme said:
    docno42 said:
    Sigh - politicians get it wrong, yet again. 

    I don't think it's unreasonable to allow platforms to moderate content.  Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).

    What's unreasonable is the completely opaque and inconsistent way they moderate content, often with guidelines that are non-existent or unpublished.  

    Sunlight is the best disinfectant.  I think if they want to keep their 230 safe harbor, companies should be required:

    1)  Publish publicly their specific rules.  No more "we reserve the right for any reason".  You want to hide behind any reason?  No problem - no more 230 for you.
    2)  Publish publicly all moderation, citing specifically how the rules from #1 were used for the moderated outcome.

    It won't be perfect, but it should lead to a LOT more consistency.  Twitter is rife with left wing violence, but if any other group posts violent content it's removed immediately.  It should all be removed, or none of it removed.  
    Nonsense. You don’t like that private platform owners are exerting their own freedom of expression by removing content that violates their private platform rules. It’s their billboard, they get to decide what goes on it. They don’t have to publish material that breaks their rules, intentionally spreads misinformation, advocates violence, etc. It’s in their interest not to do so. The right-winger Qanon-believer who shot up a pizza parlor thankfully didn’t kill anyone, but just as easily could have. It’s in the private platform’s interest not to assist such lunatics and their plans. 

    Twitter is not rife with left wing violence. It's rife with video of protests, the vast majority of which are peaceful. It also documents police violence and government violations of the freedom to peaceably assemble. Equating videos of racist right-winger violence against people with videos of property damage is a fallacy. 
    The private platform part is what makes this so challenging from a regulation perspective. When you boil Facebook, Twitter, et al down to their core they really are no more than brokers that provide a way to connect their only real customers, advertisers, with a monetizeable asset - subscriber eyeballs. From a business perspective the only thing Facebook or Twitter care about is selling access to its subscriber's eyeballs to its advertiser customers. Everything else is just noise and overhead. As long as the number of eyeballs stays steady or keeps increasing, life is good for Facebook and Twitter.

    Sure, to keep all those eyeballs showing up to consume advertising, Facebook and Twitter have to provide something in return, that's the overhead cost for Facebook and Twitter. If subscribers start getting nasty towards one another or if the advertisers start crafting their ads to attract a specific audience or promote self interests beyond simply making more money for themselves, negative side effects start to take place. The problem is that the fundamental business model, i.e., connecting advertisers to eyeballs, has no inherent control mechanism to avoid negative side effects, like social disruption. Facebook and Twitter can continue to profit while the negative side effects being emitted from the platform run amok, so something must be done. In control system theory the solution to unconstrained responses is always the same: a negative feedback loop. 

    The only question is who gets to apply the negative feedback. In my mind the beneficiaries of the business model should be the ones responsible for keeping their system under control, just like the operator of a nuclear power plant is responsible for keeping the plant operating safely while deriving a financial profit from operating it. Government regulation in the Facebook and Twitter case would really be directed at mitigating the negative side effects, not controlling the system itself.

    Bottom line: I would place full responsibility on the owners of the system (the social platform business) - in other words, require them to moderate their content, with strict penalties for failing to do so or for letting negative side effects leak out onto the general public, like a radiation cloud. They should not be allowed to take the position that they don't care if their system is abused or causes public suffering. They are getting off way too easy.

    First, all the platforms started out allowing any content (and I am not talking about illegal content - I think everyone agrees that should not be available) and everything was fine and good.

    Then a group of people decided that what they were reading - or worse, what others were reading - was not acceptable to them, and they did not want others to read or see it. They decided to pressure other companies to stop spending money with the tech companies, some of which money worked its way to the content generators. Before we knew it, we had content being removed, shadow banned, or demonetized. Now we have a group of people (not accountable to anyone) deciding what content is acceptable and who is allowed to make money from their content.

    The companies have a couple of choices here: go back to the original model where all content is welcome, or come up with a set of standards and hold all content to those standards - so the loudest voices do not get to decide - or become liable for everyone's content. They tried playing the middle ground, responding only to the outrage crowds while claiming they are not responsible for people's content, but in reality they were acting as editors, deciding what people could read and see.

    I am sorry, but in the US individuals do not get to decide what others can say and do, or who is allowed to make money from what they say and produce. In this country people vote with their own patronage; they do not get to force others to see things their way. 
  • Reply 20 of 28
    Mike Wuerthele Posts: 6,568 administrator
    maestro64 said:
    dewme said:
    docno42 said:
    Sigh - politicians get it wrong, yet again. 

    I don't think it's unreasonable to allow platforms to moderate content.  Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).

    What's unreasonable is the completely opaque and inconsistent way they moderate content, often with guidelines that are non-existent or unpublished.  

    Sunlight is the best disinfectant.  I think if they want to keep their 230 safe harbor, companies should be required:

    1)  Publish publicly their specific rules.  No more "we reserve the right for any reason".  You want to hide behind any reason?  No problem - no more 230 for you.
    2)  Publish publicly all moderation, citing specifically how the rules from #1 were used for the moderated outcome.

    It won't be perfect, but it should lead to a LOT more consistency.  Twitter is rife with left wing violence, but if any other group posts violent content it's removed immediately.  It should all be removed, or none of it removed.  
    Nonsense. You don’t like that private platform owners are exerting their own freedom of expression by removing content that violates their private platform rules. It’s their billboard, they get to decide what goes on it. They don’t have to publish material that breaks their rules, intentionally spreads misinformation, advocates violence, etc. It’s in their interest not to do so. The right-winger Qanon-believer who shot up a pizza parlor thankfully didn’t kill anyone, but just as easily could have. It’s in the private platform’s interest not to assist such lunatics and their plans. 

    Twitter is not rife with left wing violence. It's rife with video of protests, the vast majority of which are peaceful. It also documents police violence and government violations of the freedom to peaceably assemble. Equating videos of racist right-winger violence against people with videos of property damage is a fallacy. 
    The private platform part is what makes this so challenging from a regulation perspective. When you boil Facebook, Twitter, et al down to their core they really are no more than brokers that provide a way to connect their only real customers, advertisers, with a monetizeable asset - subscriber eyeballs. From a business perspective the only thing Facebook or Twitter care about is selling access to its subscriber's eyeballs to its advertiser customers. Everything else is just noise and overhead. As long as the number of eyeballs stays steady or keeps increasing, life is good for Facebook and Twitter.

    Sure, to keep all those eyeballs showing up to consume advertising, Facebook and Twitter have to provide something in return, that's the overhead cost for Facebook and Twitter. If subscribers start getting nasty towards one another or if the advertisers start crafting their ads to attract a specific audience or promote self interests beyond simply making more money for themselves, negative side effects start to take place. The problem is that the fundamental business model, i.e., connecting advertisers to eyeballs, has no inherent control mechanism to avoid negative side effects, like social disruption. Facebook and Twitter can continue to profit while the negative side effects being emitted from the platform run amok, so something must be done. In control system theory the solution to unconstrained responses is always the same: a negative feedback loop. 

    The only question is who gets to apply the negative feedback. In my mind the beneficiaries of the business model should be the ones responsible for keeping their system under control, just like the operator of a nuclear power plant is responsible for keeping the plant operating safely while deriving a financial profit from operating it. Government regulation in the Facebook and Twitter case would really be directed at mitigating the negative side effects, not controlling the system itself.

    Bottom line: I would place full responsibility on the owners of the system (the social platform business) - in other words, require them to moderate their content, with strict penalties for failing to do so or for letting negative side effects leak out onto the general public, like a radiation cloud. They should not be allowed to take the position that they don't care if their system is abused or causes public suffering. They are getting off way too easy.

    First, all the platforms started out allowing any content (and I am not talking about illegal content - I think everyone agrees that should not be available) and everything was fine and good.

    Then a group of people decided that what they were reading - or worse, what others were reading - was not acceptable to them, and they did not want others to read or see it. They decided to pressure other companies to stop spending money with the tech companies, some of which money worked its way to the content generators. Before we knew it, we had content being removed, shadow banned, or demonetized. Now we have a group of people (not accountable to anyone) deciding what content is acceptable and who is allowed to make money from their content.

    The companies have a couple of choices here: go back to the original model where all content is welcome, or come up with a set of standards and hold all content to those standards - so the loudest voices do not get to decide - or become liable for everyone's content. They tried playing the middle ground, responding only to the outrage crowds while claiming they are not responsible for people's content, but in reality they were acting as editors, deciding what people could read and see.

    I am sorry, but in the US individuals do not get to decide what others can say and do, or who is allowed to make money from what they say and produce. In this country people vote with their own patronage; they do not get to force others to see things their way. 
    Facebook and Twitter are businesses, not public works or goods. Even as 230 stands now, and even if this modification ever comes to pass, they can't be forced to unilaterally allow any content to be published on the platform, and they cannot be forced to allow everybody to make money on the platform. And "acting editor," as you put it, is exactly what 230 legally requires!

    They may "demonetize" or "shadow ban" as they see fit.
    edited September 2020