Justice Department unveils legislation to alter Section 230 protections
The U.S. Department of Justice has unveiled new draft legislation that would reform Section 230, a key legal shield that protects technology companies from liability for illicit content that their users post.

US Attorney General William Barr
In June, the DOJ announced a set of reforms that would hold technology companies more legally responsible for the content that their users posted. That followed President Donald Trump signing an executive order that sought to weaken social media platform protections.
The proposed legislation, released Wednesday, would narrow the criteria that online platforms must meet to earn Section 230 liability protections. One of the main reforms carves out exceptions to immunity in cases involving content such as child sexual abuse material. The legislation would need to be passed by Congress.
It also includes a "Bad Samaritan" section that could deny immunity to platforms that don't take action on content that violates federal law, or fail to report illegal material. That's similar to a variant of the EARN IT act that passed in the Judiciary Committee in July.
The proposal also states that nothing in the statute should prevent enforcement under separate laws, such as antitrust regulations.
Section 230 of the Communications Decency Act protects online platforms from being held liable for the content that their users post. It also protects them when they moderate and remove content they consider harmful or objectionable in good faith.
Those protections allowed technology platforms and the internet to flourish in their nascent years, but they have since come under scrutiny. Trump signed the executive order in May, for example, after Twitter fact-checked one of his tweets.
That scrutiny has bled into other areas of technology oversight. At what was supposed to be an antitrust hearing in July, many members of the House Judiciary Committee criticized companies like Facebook for alleged "censorship" of political views.
The Justice Department, specifically, has been looking into Section 230 reforms for the better part of a year. Attorney General William Barr said in December 2019 that the department was "thinking critically" about Section 230, and later invited experts to debate the law in February.

Comments
I think the changes the Administration is trying to make are bad. But I don’t think they’ve worded the legislation such that it actually does make those changes.
I don't think it's unreasonable to allow platforms to moderate content. Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).
What's unreasonable is the completely opaque and inconsistent way they moderate content, often with guidelines that are non-existent or unpublished.
Sunlight is the best disinfectant. I think if they want to keep their 230 safe harbor, companies should be required to:
1) Publish their specific rules publicly. No more "we reserve the right for any reason". You want to hide behind "any reason"? No problem - no more 230 for you.
2) Publish all moderation decisions publicly, citing specifically how the rules from #1 produced the moderated outcome.
It won't be perfect, but it should lead to a LOT more consistency. Twitter is rife with left-wing violence, but if any other group posts violent content it's removed immediately. Either it should all be removed, or none of it.
As others pointed out, these proposed changes are just going to encourage companies to err on the side of caution more, which isn't good.
The wording of the proposed changes wouldn't have that effect, and it seems that wasn't the intended effect. These changes aren't as substantial - or as bad - as I had originally thought. Even if this proposal were enacted, providers (and users, for that matter) would still enjoy unconditioned immunity for (i.e. not be treated as publishers of) the speech of others which they left up.
EDIT: I should be clear, the immunity provided by Section 230(c)(1) wouldn't be unconditional. But it wouldn't be conditioned on the provider not censoring certain material. The proposed changes actually place some meaningful conditions on Section 230(c)(1) immunity.
Twitter is not rife with left-wing violence. It's rife with video of protests, the vast majority of which are peaceful. It also documents police violence and government violations of the freedom to peaceably assemble. Equating videos of racist right-wing violence against people to videos of property damage is a fallacy.
One thing these changes do is require a notification system which providers must use to take down material that, e.g., has been determined by a court to be defamatory. But the changes don't mean that providers need to take down material preemptively to be on the safe side, so to speak, so that they aren't liable for defamation posted by others. Providers still wouldn't be liable for the speech of others except in limited contexts, including when such speech has been determined to be defamatory, the provider was given proper notice of that, and they still didn't remove it.
I see this a TON with iKnockoff morons. There are also selective facts like "Apple's screens break lol!"
Sure, to keep all those eyeballs showing up to consume advertising, Facebook and Twitter have to provide something in return, that's the overhead cost for Facebook and Twitter. If subscribers start getting nasty towards one another or if the advertisers start crafting their ads to attract a specific audience or promote self interests beyond simply making more money for themselves, negative side effects start to take place. The problem is that the fundamental business model, i.e., connecting advertisers to eyeballs, has no inherent control mechanism to avoid negative side effects, like social disruption. Facebook and Twitter can continue to profit while the negative side effects being emitted from the platform run amok, so something must be done. In control system theory the solution to unconstrained responses is always the same: a negative feedback loop.
The only question is who gets to apply the negative feedback. In my mind the beneficiaries of the business model should be the ones responsible for keeping their system under control, just like the operator of a nuclear power plant is responsible for keeping their plant operating safely while deriving a financial profit from operating the plant. Government regulation in the Facebook and Twitter case would really be directed at mitigating the negative side effects, not controlling the system itself.
Bottom line: I would place full responsibility on the owners of the system (the social platform business); in other words, require them to moderate their content, with strict penalties for not doing so or for letting negative side effects leak out onto the general public, like a radiation cloud. They should not be allowed to take the position that they don't care if their system is abused or causes public suffering. They are getting off way too easy.
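To make the control-theory analogy concrete, here's a minimal sketch of a negative feedback loop; the numbers and names are made up purely for illustration and have nothing to do with the actual proposal or any platform's systems.

```python
# Toy illustration of the "negative feedback loop" idea from control theory:
# an unconstrained system keeps amplifying itself, while a feedback term
# proportional to its error pulls it back toward a target. All values here
# are arbitrary and exist only to show the shape of the idea.

def simulate(steps=50, target=0.0, gain=0.0):
    state = 1.0
    for _ in range(steps):
        drift = 0.2 * state                    # the system amplifies itself each step
        correction = gain * (target - state)   # negative feedback toward the target
        state += drift + correction
    return state

if __name__ == "__main__":
    print("no feedback:  ", round(simulate(gain=0.0), 2))  # runs away unchecked
    print("with feedback:", round(simulate(gain=0.5), 2))  # settles near the target
```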
Now they will get a law that makes them liable for everyone's stupidity. It could not happen to a nicer group of companies. They first sold everyone on the idea that people could share whatever they liked, and the companies got to say it was the users' fault, not theirs. Then the "I'm Offended and Boycott" crowd showed up, and these companies changed their position, allowing certain kinds of insults and shutting down others.
Then a group of people decided that what they were reading, or worse, what others were reading, was not acceptable to them and that they did not want others to read or see it. They decided to pressure other companies to stop spending money with the tech companies, money some of which worked its way to the content creators. Before we knew it we had content being removed, shadow-banned, or demonetized. Now we have a group of people (accountable to no one) deciding what content is acceptable and not acceptable, and who is allowed to make money from their content.
The companies have a couple of choices here: go back to the original model where all content is welcome, or come up with a set of standards and hold all content to those standards, so that the loudest voices do not get to decide; otherwise they become liable for everyone's content. They tried playing the middle ground, responding only to the outrage crowds while claiming they were not responsible for people's content, but in reality they were acting as editors and deciding what people could read and see.
I am sorry, but in the US individuals do not get to decide what others can say and do, or who is allowed to make money from what they say and produce. In this country people vote with their own patronage; they do not get to force others to see things their way.
They may "demonetize" or "shadow ban" as they see fit.