Marvin
About
- Username: Marvin
- Joined
- Visits: 126
- Last Active
- Roles: moderator
- Points: 6,833
- Badges: 2
- Posts: 15,522
Reactions
Meta CEO mocks Apple for 'sitting on' iPhone 20 years later despite doing the same with Fa...
Stabitha_Christie said: Snopes is not a Meta fact checker. Bringing them up is a complete red herring as they are in no way relevant to the conversation.
https://www.seattletimes.com/business/meta-moves-to-end-fact-checking-program/
"After enormous public pressure, Zuckerberg turned to outside organizations like The Associated Press, ABC News and the fact-checking site Snopes, along with other global organizations vetted by the International Fact-Checking Network, to comb over potentially false or misleading posts on Facebook and Instagram and rule whether they needed to be annotated or removed."
The inclusion of Snopes was just to highlight the vagueness in some of the kinds of facts being checked; it wasn't a critique of Snopes, which is a well-regarded fact-checker. Most fact-checkers are; they just tend to have a left-center bias, which affects how middle-ground issues are handled.

I can't say Meta is trying to increase hurtful content, but clearing the way for people to claim women are property and LGBTQ+ people are all mentally ill does increase the amount of harmful content and misinformation. It also doesn't make the platforms less oppressive. It opens the door for harassment.

There are issues that are not so widely agreed on. One Zuckerberg touched on is blasphemy, where people in Pakistan wanted him sentenced to death for allowing images of their religion on the platform. The issue with censorship is where it's appropriate to draw the line. Just saying it opens the door for harassment covers a lot of things. Should they not allow people to say someone is fat or ugly, or that they aren't good at their job? One of the top comments on a YouTube video about censorship was that people often mistake hate speech for speech they hate, and if censorship is going to work reliably, it needs a consistent standard.
As you alluded to earlier, people all start setting limits on free speech from common ground but fairly quickly draw the line at things they are personally sensitive about. Things one person is sensitive to are less sensitive for someone else. That doesn't mean they agree with it; they just don't think it reaches a level of offensiveness that requires removal.

Elon Musk has said many things that are outright lies; it is mind-bending that anyone would defer to him. He also regularly suppresses speech on X that he disagrees with: not speech that is factually incorrect, but speech he simply doesn't like. He has in no way lived up to his claim to be a free speech absolutist, rather the opposite.
The BBC interview shows his mindset and intent. He hasn't lived up to that stated intent, and I don't think anyone can run a large-scale platform using only the law as a baseline and then tacking on subjective rules. I think he is overly optimistic about how humanity operates at scale.
It's easy to lose sight of the scale of these operations: they are getting hundreds of millions of posts per day. They don't have millions of employees, so they rely on software to automate the process.
That's why I think the solution lies in a better convention for conversation. They have to set up a social contract that rewards constructive input instead of destructive input. This can be seen on forum sites: the comments there are usually way above the standard you get on social media because there's a different convention for conversation. Neutering political conversation helps, of course; politics more often divides people than finds common ground.

It's important to clarify the word 'fact' here because it's being taken at face value to mean something that is known to be true. Fact-checkers should really be called something else, because what they are checking are not yet considered to be facts. The accounts they find to be false are not facts.

No one decides what is true or isn't. Facts are self-evident.
Facts are immutable; community notes are under the control of the platform and its users.
Zuckerberg clarified that their existing policies surrounding facts will remain in place. If something is factually, indisputably terrorist activity, trafficking, etc., it won't be left to community notes; it will be removed.
Things that are left to notes are unverifiable accounts. If both sides in a war zone spread reports of war crimes, fact-checkers don't always have a way of verifying those reports, and their bias will tend to fall toward whichever side of the war they hope will be successful. In the war in Ukraine, Western media is more likely to believe accounts from Ukraine than from Russia, and justifiably so, but this justified bias doesn't make an account factual.
The point about injecting bleach is valid; that caused real harm, and I'm sure those kinds of statements will fall under the normal guidelines that remain in place. This isn't something that's vague or debatable.

First of all, thank you for composing such a well-formed and articulate post. It's always a pleasure to read a person's opinion when it is presented so clearly. Whether I agree or disagree with what you're saying isn't the point. You've provided food for thought and added another perspective to consider along with all of the other perspectives stated here.
One response I have to your post concerns using AI to moderate social media content. I'm not so sure that would work, because we'd all have to agree on which AI to use, and there would always be claims that the AI is itself biased. In the end, I doubt that humans will ever defer to AI in policing their personal behavior, which includes a lack of self-discipline, empathy, and self-moderation.
There will be other ways to accomplish the same goal of encouraging constructive conversation, but at large scale there needs to be a consistent, automated solution. These sites already use AI in their tools, but it clearly isn't being used to encourage constructive input. They could even offer monetary rewards for the most constructive users.
Meta CEO mocks Apple for 'sitting on' iPhone 20 years later despite doing the same with Fa...
auxio said:
leighr said: The end of so called “fact checkers” is a welcome relief for true free speech. When one person, or group, has the power to decide what is ‘true’ or not, we are all in trouble. See exhibit one: China, or even worse, North Korea. We all need to fight against this sort of abuse of money and power, and while I am not a huge Facebook fan, I’m glad that they are following X’s lead in allowing free speech.
There is a reality out there which has undeniable facts about how things work, despite what all the crooks in the world seeking money/power are trying to convince you of for their own personal gain.
Facebook's original intent was to fact-check clearly factual things (Zuckerberg mentioned flat earth as an example) and only those things, but it evolved into political bias where they felt they were deciding between opinions. Some of the bias was in which things they chose to fact-check. This makes up a lot of the bias in the media that people don't pay attention to: selective reporting. One news company will bury or simply not report a story that doesn't fit its general views or audience, and another company will do the opposite with the same story.
Fact-checkers sometimes leave articles labeled 'inconclusive' if something is, say, 60% likely to be true. What is true or not isn't always binary; it can be a probability until more information is available, or subject to interpretation, and things that are considered indisputable are 99%+ true. A lot of things, especially in politics, have a lower probability of being true because they rely on hearsay, on who witnessed the event, and on whether they are credible. Deciding whether someone is a credible witness is a biased process.
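To put that thresholding idea in concrete terms, here's a minimal sketch. The 0.99/0.01 cutoffs and the probability estimate itself are hypothetical illustrations, not any fact-checker's actual methodology:

```python
# Hypothetical sketch: a claim only earns a hard "true"/"false" label when
# the estimated probability is near-certain; everything in between stays
# "inconclusive". The cutoffs are made up for illustration.

def verdict(p_true: float) -> str:
    """Map an estimated probability that a claim is true to a label."""
    if p_true >= 0.99:
        return "true"
    if p_true <= 0.01:
        return "false"
    return "inconclusive"

print(verdict(0.60))   # -> "inconclusive": likely, but not a checkable fact yet
print(verdict(0.995))  # -> "true": effectively indisputable
```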
Here's an example where a fact-checker is trying to figure out whether a spoken word contained an apostrophe, which changes its meaning. Both interpretations are plausible, but not everyone will see them as equally plausible:
https://www.snopes.com/news/2024/10/30/biden-trump-supporters-garbage/
This is the kind of vague political nonsense that is a waste of time to get involved with. Politics is an argument that never ends, and at the scale Facebook operates, it is a near-impossible task to fact-check everything in a consistent and reliable way. They will get a reasonable proportion of the checks correct, but those are rarely the problem because they are obvious to most people too; it's the vague ones that slip through.
There are highly respected officials saying on record that UFOs with non-human creatures exist, and mainstream news outlets report it. Should this be reported and spread as truthful without them providing evidence, or suppressed as misinformation until they provide evidence?
https://www.newsweek.com/ufos-exist-what-experts-told-congress-1985865
It's not enough for outlets to absolve themselves of responsibility by saying they aren't reporting that what is said is true, only that those people said it (which is objectively true), because they could do the same for a quack doctor who says drinking pineapple juice every day prevents cancer. That's the 'we're just the messenger' defense.
The addition of memes/satire complicates things. If someone spreads misinformation as text, it can be labelled as such and suppressed, but wrapped in a meme it can be permitted to spread because it's 'just a joke'.
Having a central committee oversee what's true or not has the benefit of being authoritative and methodical, but it also has a limited perspective, especially on international issues. While using the public for fact-checking can allow crazies to hijack the process, at a large enough scale this doesn't happen because the extremists are usually in the minority. The community notes on X.com have been very reliable and unbiased, and that's what Facebook plans to use. It will probably catch misinformation, misleading content, and AI content much faster than before, because people who have a vested interest in opposing it will try harder to get it corrected than a team whose job is to clean up after it becomes a liability.
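As a rough illustration of why scale helps, here's a simplified sketch of a bridging-style criterion, where a note is only shown when raters across viewpoints find it helpful, so no single faction can push a note through. The viewpoint grouping and the 0.5 threshold are my own assumptions for illustration; the real Community Notes algorithm is more involved and, as I understand it, models rater viewpoints with matrix factorization rather than explicit groups:

```python
# Simplified sketch of a bridging criterion: require every viewpoint group's
# average helpfulness rating to clear a threshold before a note is shown.
# Grouping and threshold are assumptions for illustration only.

from statistics import mean

def note_is_shown(ratings: list[tuple[str, int]], threshold: float = 0.5) -> bool:
    """ratings: (rater_viewpoint, helpful 0/1) pairs."""
    groups: dict[str, list[int]] = {}
    for viewpoint, helpful in ratings:
        groups.setdefault(viewpoint, []).append(helpful)
    return all(mean(votes) >= threshold for votes in groups.values())

ratings = [("left", 1), ("left", 1), ("right", 1), ("right", 0), ("right", 1)]
print(note_is_shown(ratings))  # True: both sides mostly rate the note helpful
```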
The platform operators aren't trying to increase hateful content or misinformation; they are trying to make their platforms the least oppressive platforms for discussion. Elon Musk's interview with the BBC shows how each side views the problem:
https://www.youtube.com/watch?v=bRkcLYbvApU
Musk says at 18:00 that they want to limit speech only to what is limited by law: the people of a country agree on what speech should be suppressed by law, and that's what the platform abides by. This is a reasonable stance, but it obviously leaves platforms wide open for abuse at large scale.
I think the focus on factual accuracy isn't the best approach. Misinformation and hateful rhetoric spread much less in person than online, and the reason for this, I suspect, is civility. In the real world, there's a social contract where people exchange information politely and with positive intent. It's the opposite in the digital world because the social contract is different: online, centrists are nobodies, people with opposing views are enemies, and people with more extreme views on your own side are allies who reward each other with affirmation. This breeds ever more extreme perspectives because it rewards them. Unfortunately, the more time people spend online, the more this makes its way into the real world.
Focusing on promoting and rewarding civil conversation would create more wholesome platforms where participants don't want to share hateful or misleading content because they won't be rewarded for it. This can be aided by AI: when someone posts a comment with expletives, incendiary content, or some kind of attack, it can be detected, and the platform can tell the user to write the comment more politely or it will be suppressed and count negatively against their digital persona. That persona can be tagged as a hateful, misleading persona, and the worst personas and content can be suppressed by the platform, even retroactively. Reward people who are polite; tackle behavior rather than ideas, how people express themselves rather than what they express, and the platforms will improve.
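Here's a sketch of what that pre-post check could look like, assuming a toy word list in place of a trained toxicity model and a made-up persona score scale:

```python
# Minimal sketch of the pre-post civility check described above. The word
# list, threshold, and persona bookkeeping are all hypothetical; a real
# system would use a trained toxicity classifier instead.

INCENDIARY = {"idiot", "moron", "trash"}  # toy stand-in for a real classifier

def toxicity_score(comment: str) -> float:
    words = comment.lower().split()
    return sum(w.strip(".,!?") in INCENDIARY for w in words) / max(len(words), 1)

def screen_comment(comment: str, persona_score: float) -> tuple[bool, float, str]:
    """Return (allowed, updated persona score, message to the user)."""
    if toxicity_score(comment) > 0.1:
        return False, persona_score - 1.0, "Please rephrase this more politely."
    return True, persona_score + 0.1, "Posted."

allowed, score, msg = screen_comment("You are an idiot.", persona_score=0.0)
print(allowed, score, msg)  # False -1.0 Please rephrase this more politely.
```

The exact flow (detect, ask for a rewrite, adjust the persona score) is the point; the detection itself would be swapped for a proper model.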
An example AI summary of an online user would say something like: this user frequently posts aggressive messages, is politically left/right/center, shares offensive/misleading memes, promotes extremist users, etc. Put the summary right where they can see it and embarrass them into being a better person online.
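A hypothetical rendering of that summary, built from counters a platform might already track (the field names and wording here are my own assumptions):

```python
# Hypothetical persona summary rendered from tracked per-user counters.

def persona_summary(stats: dict) -> str:
    return (
        f"This user has posted {stats['aggressive_posts']} aggressive messages, "
        f"leans politically {stats['lean']}, has shared "
        f"{stats['misleading_shares']} misleading memes, and promotes "
        f"{stats['extremist_promotions']} accounts flagged as extremist."
    )

print(persona_summary({
    "aggressive_posts": 42,
    "lean": "center",
    "misleading_shares": 7,
    "extremist_promotions": 3,
}))
```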