carnegie
About
- Username: carnegie
- Joined
- Visits: 213
- Last Active
- Roles: member
- Points: 3,613
- Badges: 1
- Posts: 1,085
Reactions
-
Biden administration: Apple & Broadcom should quit stalling and pay Caltech $1.1 billion i...
9secondkox2 said: What the heck is up with this administration??? This is a PRIVATE SECTOR DISPUTE! Let the justice system play out. It's not like the Biden family hasn't been doing much stalling of their own. There is an order and process to things. The Biden admin needs to remember that this is America and not some communist dictatorship.
That said, I think the SG is wrong and the Federal Circuit's decision in this case should be reviewed. That decision changed how things previously worked and could have broad implications for invalidity defenses in patent infringement suits. And, although there's considerable nuance to the issue, I think the Federal Circuit's decision is inconsistent with a plain reading of the law in question. Apple and Broadcom, and other similarly situated parties, shouldn't be estopped (in infringement cases) from making invalidity arguments which they didn't raise in IPR petitions, even if their failure to raise such arguments at that stage was intentional. That's not what the law requires. It only bars the making of such arguments (in infringement cases) if they were raised (and rejected) or reasonably could have been raised in actual inter partes reviews rather than petitions asking for IPRs.
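To put my reading in rough pseudo-logic form (this is just my own illustrative sketch, not language from the statute or from any of the decisions; the names are made up):

```python
# Illustrative sketch of my reading of the estoppel provision; not the
# Federal Circuit's test and not statutory text. All names are invented.

def estopped_in_infringement_suit(raised_in_actual_ipr: bool,
                                  reasonably_could_have_raised_in_actual_ipr: bool,
                                  merely_omitted_from_ipr_petition: bool) -> bool:
    """Is a party barred from making a given invalidity argument?"""
    # What matters is what was (or reasonably could have been) raised during
    # the inter partes review that was actually instituted...
    if raised_in_actual_ipr or reasonably_could_have_raised_in_actual_ipr:
        return True
    # ...not what the party chose to leave out of its petition asking for an
    # IPR. That omission, even if intentional, shouldn't trigger estoppel.
    _ = merely_omitted_from_ipr_petition  # deliberately plays no role here
    return False
```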
The Federal Circuit had it right before and, despite this recent Federal Circuit panel's claim, nothing in the Supreme Court's SAS Institute decision required it to change course on its interpretation of the law in question.
-
WB's rebranded Max app launch has some issues on Apple TV
-
TikTok users take legal action against Montana over controversial ban
bestkeptsecret said: I am curious about how a state-wide ban is implemented/enforced. Does the ban mean no one in Montana can use TikTok and that it should not be installed on their phones, or does it mean that no content can be uploaded onto TikTok from Montana?
What about visitors? Do they need to delete TikTok before entering the state?
That said, a bit about how the law purports to work: TikTok users wouldn't be violating the law. TikTok itself and mobile app stores would be the ones violating the law. It would be a violation of the law every time someone (1) accessed TikTok within (the territorial jurisdiction of) Montana, (2) was offered the ability to access TikTok within (the territorial jurisdiction of) Montana, or (3) was offered the ability to download TikTok within (the territorial jurisdiction of) Montana. Every time one of those things happened, TikTok or the mobile app store would be subject to a $10,000 fine. Apple, e.g., would have to somehow prevent TikTok from being available to download in Montana. It might also need to somehow block TikTok's ability to work in Montana.
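Just to make the mechanics concrete, here's a toy illustration (entirely my own, with made-up event counts; the only number taken from the law as described above is the $10,000-per-violation figure):

```python
# Toy illustration of the fine structure as I read it; the event counts below
# are invented for the example. Users themselves wouldn't incur liability.
FINE_PER_VIOLATION = 10_000  # dollars per qualifying event

# Each entry: (liable entity, description, number of qualifying events in Montana)
events = [
    ("TikTok", "user accessed TikTok in-state", 3),
    ("App store", "offered TikTok for download in-state", 2),
]

for entity, description, count in events:
    total = count * FINE_PER_VIOLATION
    print(f"{entity}: {count} x ${FINE_PER_VIOLATION:,} ({description}) -> ${total:,}")
```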
I don't think they were much concerned with whether the law was at all feasible - i.e., whether it would be practical for covered entities to comply with it - because, again, even those who enacted the law (assuming they have measurable brain function) don't expect that it will ever go into effect. It's blatantly unconstitutional for multiple reasons.
-
Supreme Court overturns ruling holding platforms responsible for users' criminal activity
chasm said: mattinoz said: Yes, but if platforms go the other way and are completely hands-off with moderation, it will have the same effect. Customers* will walk away from all the noise and the bots. Unless they're rich enough, of course.
There are two distinct protections provided for (or clarified) by Section 230 which often get conflated. First, there's an unqualified protection against being treated as the publisher of information provided by others. That's 47 USC §230(c)(1):
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
So Twitter, e.g., isn't legally liable for defamatory postings made by others. It isn't responsible for others' speech just because it provides resources they might use to propagate such speech. That subsection is also, btw, what protects you and me when we simply quote someone else's speech. The point is, in general I'm responsible for my own speech (to include comments I might make regarding others' speech) but not for the speech of others. In that way Section 230(c)(1) provides protections for everyone using the internet - ISPs, so-called platforms, users - and without it (or common law to substantially the same effect) the internet as we know it couldn't exist.
Then there's another protection provided by 47 USC §230(c)(2):
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
That subsection provides protection against civil liability for, e.g., taking down content provided by others. So Twitter, e.g., can censor speech which it finds "otherwise objectionable" so long as it acts in good faith in doing so. Generally speaking, someone can't (successfully) sue Twitter for taking down their (or others') content.
The key point here though is that the protections provided by those respective subsections aren't linked. If someone acts in bad faith in censoring some content, they might be liable - if there's a statutory or common law basis for such liability - for that censoring. Bob, e.g., might be able to (successfully) sue Twitter for its bad faith action in taking down his Tweet. But that bad faith doesn't then make Twitter liable for anything posted by others which it leaves up. It isn't treated as the publisher or speaker of such content. Full Stop. That remains true regardless of its good or bad faith efforts to censor other content.
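If it helps, here's a rough way to picture why the two shields operate independently (purely my own illustration; the function names and logic are mine, not the statute's, and it isn't a model of any court's test):

```python
# Rough illustration of the point above, nothing more.

def treated_as_publisher_of_others_content(moderated_in_good_faith: bool) -> bool:
    # Section 230(c)(1): the provider isn't treated as the publisher or speaker
    # of information provided by someone else. Its good or bad faith in
    # moderating other content doesn't enter into this question at all.
    return False

def liable_for_taking_content_down(acted_in_good_faith: bool,
                                   independent_legal_basis_for_claim: bool) -> bool:
    # Section 230(c)(2)(A): good-faith removal of objectionable material is
    # protected. Losing that protection through bad faith still leaves the
    # plaintiff needing some independent statutory or common-law basis to sue.
    if acted_in_good_faith:
        return False
    return independent_legal_basis_for_claim

# Answering the second question against the provider doesn't change the answer
# to the first: a bad-faith takedown doesn't make it the publisher of
# everything it left up.
```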
EDIT: To be clear, I'm only talking about Section 230 here. It provides protections against civil liability. By its own terms it doesn't block enforcement of federal criminal laws. To the extent anyone on the internet violates federal criminal laws, they can be held accountable for doing so. But that's a separate matter from the protections provided by Section 230.
-
Supreme Court overturns ruling holding platforms responsible for users' criminal activity
I'm headed out the door so I won't get lost in the details of this decision or too far into the procedural posture of the case as it reached the Supreme Court. But I did want to point out that Section 230 had nothing to do with this decision. The Court found that the plaintiffs hadn't sufficiently made out a claim under the Justice Against Sponsors of Terrorism Act, which was the basis for what remained of the suit.
Had the Court ruled the other way, the case would likely have gone back to the district court where Section 230 might have become an issue. As it was, the district court didn't reach the Section 230 issue because it didn't need to. And when the Ninth Circuit reversed the district court, it didn't address the Section 230 issue because the district court hadn't done so. Without the petition to the Supreme Court (and cert grant), the district court - after being reversed on the JASTA claim - would likely have addressed the Section 230 issue. Then its decision on that might have been appealed to the Ninth Circuit before possibly being appealed to the Supreme Court.
At any rate, this Supreme Court decision tells us nothing about how Section 230 might protect Twitter and others in similar situations.