'Safe Tech Act' could strip Section 230 user content protections from websites
A trio of Democratic senators has unveiled a new bill dubbed the Safe Tech Act that would fundamentally change Section 230 and could make internet companies with any monetization at all liable for user-generated content.

Credit: WikiCommons
The proposed legislation, introduced by senators Mark Warner, Mazie Hirono, and Amy Klobuchar, makes several key changes to Section 230 of the Communications Decency Act, which currently shields internet companies from liability for user-posted content.
One of those changes is an alteration of the core language of Section 230 that could strip protections in situations where payments are involved -- whether the provider or user has accepted payment to make the speech available, or has funded the creation of the speech in any way, including through ads served alongside the content. In a tweet, Sen. Warner said that online ads are "a key vector for all manner of frauds and scams."
It could affect paid services and premium hosting platforms like Substack or Patreon, too. Under the current language of the proposal, companies could be held liable for harmful content whenever any ads from any provider are displayed, not just when the ads themselves are abusive.
The new legislation also opens the door for internet companies to be held liable in situations of cyberstalking, targeted harassment, and discrimination.
Specifically, the Safe Tech Act would create a carve-out that would allow individuals to file lawsuits against companies if the material they host can cause "irreparable harm." It would also allow lawsuits for human rights abuses overseas.
The legislation also won't impair the enforcement of civil rights, antitrust, stalking, harassment, or intimidation laws.
"The SAFE TECH Act doesn't interfere with free speech - it's about allowing these platforms to finally be held accountable for harmful, often criminal behavior enabled by their platforms to which they have turned a blind eye for too long," said Sen. Warner in a tweet.
The legislation has already drawn criticism, including from Democratic Sen. Ron Wyden -- one of the original authors of Section 230.
"Unfortunately, as written, it would devastate every part of the open internet, and cause massive collateral damage to online speech," Sen. Wyden told TechCrunch. "Creating liability for all commercial relationships would cause web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech."
Section 230 has been a target of criticism from both Democrats and Republicans in recent years. Although the two parties have different goals in mind, lawmakers on both sides have signaled that they would like to see the protections reformed.

Comments
It isn't cost-effective to pre-moderate comments, and there is no way to make it cost-effective when comments are already not a profitable aspect of operations.
There is always a problem of where to draw a line. We have, at least in the UK, laws against media (TV, newspapers and others) publishing the gross offences (those that we pretty much all agree are wrong). There are time-tested mechanisms for handling borderline cases by taking publishers to court and letting judges decide.
I fail to see why we don't test social media in the same way: I simply don't believe that the likes of Google and Facebook can extract the last iota of marketable profiling from a user's post yet fail to detect non-borderline unacceptable content (and, in fact, Google and others have actually done admirable, and successful, work against child pornography). Such capabilities can be made available to all forum hosts.
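To make that concrete, here is a minimal, purely illustrative sketch of the kind of automated pre-screen a small forum host could run before a human ever looks at a comment. Everything in it is a hypothetical stand-in: the blocklist terms and function names are invented for illustration, and real platforms use trained classifiers rather than word lists.

```python
# Purely illustrative: a naive pre-screen that holds clearly unacceptable
# comments for human review. The BLOCKLIST entries are hypothetical
# placeholders; real platforms rely on trained classifiers, not word lists.

BLOCKLIST = {"example-slur", "example-threat"}  # hypothetical terms

def flag_for_review(comment: str) -> bool:
    """Return True if the comment should be held for a human moderator."""
    text = comment.lower()
    return any(term in text for term in BLOCKLIST)

if __name__ == "__main__":
    incoming = ["Great write-up, thanks!", "example-slur, get lost"]
    held = [c for c in incoming if flag_for_review(c)]
    print(held)  # only the abusive comment is queued for review
```

Even something this crude would catch the non-borderline cases; the point is that the companies with the most sophisticated profiling tools could do far better, and could share that capability.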
If a website collects and disseminates information, how is that different from a news site? What distinguishes them from publishers? If it looks like a duck and quacks like a duck...
Free speech should mean that you can discuss any subject. It's when it becomes damaging that it's a problem - hard to describe, but I think most grown-ups can recognise it when they see it.
Then again, maybe AI could offload the comments section to a platform that handles this well.
However, I would hope that any final legislation would be proportionate and accept that moderation cannot be perfect. This would be consistent with judging the likely amount of harm: the bigger the audience, the bigger the potential harm and hence the higher the standards to be applied. That said, a bigger audience also tends to come with bigger means to manage it (e.g. Facebook probably has more ability to moderate than AI).
The difference between a thread full of vitriol and threats and one with a single intemperate comment is pretty clear. On that basis, the odd inappropriate post slipping past would not be cause for legal action if it were removed once flagged. With automated aid, I would hope that your workload would not change too much. But perhaps you are already removing a huge number of unacceptable posts that we, thankfully, never see.
In the last year, less than 0.1% of all AI news articles started with comments shut from the jump. If you count tips, this increases to 0.4%, because of the prevalence of inane "this is basic, why did you post this?" comments, which don't help the folks who are actually looking for how to do things.
This number has decreased as folks of all political persuasions have been banned. As for that ban total, 51% of banned commenters have a demonstrably left-of-center bent, and 49% a right-of-center one.
Nearly everything starts open, as I have demonstrated. If it becomes cost-ineffective to moderate because of forum-goer behavior, we shut it down -- and this can happen very, very quickly. If this reform is passed, then we simply won't have a comments section, as we don't have the time, manpower, or money to pre-moderate every single comment that is posted. There's a big difference between the two. And just because you didn't see it open doesn't mean it didn't start open.
It isn't about "false advertising." As the bill is written, ANY advertising, on the forum or on the host itself, counts as monetization where Section 230 protections are concerned.
In the aggregate, we know who you are, and you aren't children. Simple abuse and threats are way too common for what is, on average, a significantly older population than, say, Reddit's.