Marvin

About
- Username: Marvin
- Joined
- Visits: 130
- Last Active
- Roles: moderator
- Points: 7,004
- Badges: 2
- Posts: 15,583
Reactions
-
Meta CEO mocks Apple for 'sitting on' iPhone 20 years later despite doing the same with Fa...
Stabitha_Christie said: Snopes is not a Meta fact checker. Bringing them up is a complete red herring as they are in no way relevant to the conversation.
https://www.seattletimes.com/business/meta-moves-to-end-fact-checking-program/
"After enormous public pressure, Zuckerberg turned to outside organizations like The Associated Press, ABC News and the fact-checking site Snopes, along with other global organizations vetted by the International Fact-Checking Network, to comb over potentially false or misleading posts on Facebook and Instagram and rule whether they needed to be annotated or removed."
The inclusion of Snopes was just to highlight the vagueness in some kinds of facts being checked; it wasn't a critique of Snopes, which is a well-regarded fact-checker. Most fact-checkers are, they just tend to have a left-center bias, which affects how middle-ground issues are handled.

I can't say Meta is trying to increase hurtful content, but clearing the way for people to claim women are property and LGBTQ+ people are all mentally ill does increase the amount of harmful content and misinformation. It also doesn't make the platforms less oppressive. It opens the door for harassment.

There are issues that are not so heavily agreed on. One Zuckerberg touched on is blasphemy, where people in Pakistan wanted him sentenced to death for allowing images of their religion on the platform. The issue with censorship is where it's appropriate to draw the line. Just saying it opens the door for harassment covers a lot of things. Should they not allow people to say someone is fat or ugly, or that they aren't good at their job? One of the top comments on a YouTube video about censorship was that people often mistake hate speech for speech they hate, and if censorship is going to work reliably, it needs a consistent standard.
As you alluded to earlier, people all start setting limits on free speech from common ground but fairly quickly draw the line at things they are personally sensitive about. What one person is more sensitive to, someone else is less sensitive to. That doesn't mean they agree with it, they just don't think it reaches a level of offensiveness that requires removal.

Elon Musk has said many things that are outright lies; it is mind-bending that anyone would defer to him. He also regularly suppresses speech on X that he disagrees with. Not speech that is factually incorrect, but speech he simply doesn't like. He has in no way lived up to his claims to be a free speech absolutist, rather the opposite.
The BBC interview shows his mindset and intent. He hasn't lived up to that stated intent, and I don't think anyone can run a large-scale platform using only the law as a base guideline and then tacking on subjective rules. I think he is overly optimistic about how humanity operates at scale.
It's easy to lose sight of the scale of these operations: they are getting hundreds of millions of posts per day. They don't have millions of employees, so they rely on software to automate the process.
That's why I think the solution lies in a better convention for conversation. They have to set up a social contract that rewards constructive input instead of destructive input. This can be seen on forum sites: the comments there are usually way above the standard you get on social media because there's a different convention for conversation. Neutering political conversation helps, of course. Politics more often divides people than finds common ground.

It's important to clarify the word 'fact' here, because it's being taken at face value to mean something that is known to be true. Fact-checkers should really be called something else, because what they are checking are not yet considered facts. The accounts they find to be false are not facts. No one decides what is true or isn't; facts are self-evident.
Facts are immutable. Community notes are under the control of the platform and its users.
Zuckerberg clarified that their existing policies surrounding facts will remain in place. If something is factually/indisputably terrorist activity, trafficking etc, this won't be left to community notes and will be removed.
Things that are left to notes are unverifiable accounts. If both sides in a war zone try to spread accounts of war crimes, there isn't always a way to verify those accounts, and people's bias tends to fall on whichever side of the war they hope will be successful. In the war in Ukraine, Western media is more likely to believe accounts from Ukraine than from Russia, and justifiably so, but this justified bias doesn't make an account factual.
The point about injecting bleach is valid; it caused real harm, and I'm sure those kinds of statements will fall under the normal guidelines that remain in place. This isn't something that's vague or debatable.

First of all, thank you for composing such a well-formed and articulate post. It's always a pleasure to read a person's opinion when it is presented so clearly. Whether I agree or disagree with what you're saying isn't the point. You've provided food for thought and added another perspective to consider along with all of the other perspectives stated here.
One response I have to your post concerns using AI to moderate social media content. I'm not so sure that would work, because we'd all have to agree on which AI to use, and there would always be claims that the AI is itself biased. In the end, I doubt that humans will ever defer to AI in policing their personal behavior, which includes a lack of self-discipline, empathy, and self-moderation.
There will be other ways to accomplish the same goal of encouraging constructive conversation, but on a large scale there needs to be a consistent, automated solution. These sites already use AI in their tools, but it clearly isn't being used to encourage constructive input. They could even offer monetary rewards for the most constructive users. -
Thinner, smarter, more connected: What to expect from a 2025 Apple TV
oberpongo said: Make it an Apple TV Pro, including support for 8K and with an M4 to rival any PlayStation 5 or Xbox in terms of gaming.
The Apple TV is $129 with 64GB of storage, an A15 (iPhone 13) chip, and 4GB RAM.
The Mac mini M4 is $599 with 16GB RAM and a 256GB SSD. That's about half the performance of an Xbox/PS5, which sell for around $450.
They'd have to use an iPhone chip to get the price down, and they'd need 8GB RAM, as some games use over 5GB.
It might be easier to let people dock their iPhone to a TV, as there are tens of millions of them in use already; this video shows it works well (7:00).
Then it's more like a Nintendo Switch that docks to a TV.
They might be able to do an A18 Pro, 8GB RAM, and 64GB storage for $299, but people would then be weighing it against an Xbox Series S at $300.
In terms of unit volume, getting people to dock iPhones to their TV with a first-party dock and a supported controller (Xbox One, PS5) would go further. People could even dedicate an old iPhone to the TV dock. -
Meta CEO mocks Apple for 'sitting on' iPhone 20 years later despite doing the same with Fa...
auxio said: leighr said: The end of so-called "fact checkers" is a welcome relief for true free speech. When one person, or group, has the power to decide what is 'true' or not, we are all in trouble. See exhibit one: China, or even worse, North Korea. We all need to fight against this sort of abuse of money and power, and while I am not a huge Facebook fan, I'm glad that they are following X's lead in allowing free speech.
There is a reality out there which has undeniable facts about how things work, despite what all the crooks in the world seeking money/power are trying to convince you of for their own personal gain.
Facebook's original intent was to fact-check clearly factual things (Zuckerberg mentioned flat earth as an example) and only those things, but it evolved into political bias where they felt they were deciding between opinions. Some of the bias was in which things they chose to fact-check. This makes up a lot of the bias in the media that people don't pay attention to: selective reporting. One news company will bury or simply not report a story that doesn't fit its general views or audience, and another company will do the opposite with the same story.
Fact-checkers sometimes leave articles labeled 'inconclusive' if something is, say, 60% likely to be true. What is true or not isn't always binary; it can be a probability until more information is available, or subject to interpretation, and things considered indisputable are 99%+ true. A lot of claims, especially in politics, have a lower probability of being true because they rely on hearsay, on who witnessed the event and whether they are credible. Deciding whether someone is a credible witness is a biased process.
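To make the graded-verdict idea concrete, here is a toy sketch of mapping an estimated probability-of-truth to a label instead of a binary true/false. The cutoffs and label names are illustrative assumptions, not any real fact-checker's rubric:

```python
# Toy illustration: mapping an estimated probability-of-truth
# to a graded verdict instead of a binary true/false label.
# The cutoffs below are made-up examples, not a real rubric.
def verdict(p_true: float) -> str:
    if p_true >= 0.99:
        return "true"          # treated as indisputable
    if p_true >= 0.80:
        return "likely true"
    if p_true >= 0.40:
        return "inconclusive"  # e.g. the ~60% cases mentioned above
    if p_true >= 0.01:
        return "likely false"
    return "false"

print(verdict(0.6))    # inconclusive
print(verdict(0.995))  # true
```

The point of the sketch is that most political claims land in the middle bands, where any choice of cutoff is itself a judgment call.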
Here's an example where a fact-checker is trying to figure out whether a spoken word had an apostrophe and therefore a different meaning. Both interpretations are plausible, but not everyone will see each interpretation as equally plausible:
https://www.snopes.com/news/2024/10/30/biden-trump-supporters-garbage/
This is the kind of vague political nonsense that is a waste of time getting involved with. Politics is an argument that never ends and at the scale Facebook operates at, it is a near impossible task to fact-check everything in a consistent and reliable way. They will get a reasonable proportion of the checks correct but those are rarely the problem because they are obvious to most people too, it's the vague ones that slip through.
There are highly respected officials saying on record that UFOs with non-human creatures exist, and mainstream news outlets report it. Should this be reported and spread as truthful without them providing evidence, or suppressed as misinformation until they provide evidence?
https://www.newsweek.com/ufos-exist-what-experts-told-congress-1985865
It's not enough to absolve themselves of responsibility by saying they aren't reporting that what is being said is true, only that those people said it (which is objectively true), because they could do the same for a quack doctor who says drinking pineapple juice every day prevents cancer. The 'we're just the messenger' defense.
The addition of memes/satire complicates things. If someone is spreading misinformation in text, it can be labelled as such and suppressed but it can be wrapped into a meme and permitted to spread because it's just a joke.
Having a central committee overseeing what constitutes truth has the benefit of being authoritative and methodical, but it also has a limited perspective, especially on international issues. While using the public for fact-checking can allow crazies to hijack the process, at a large enough scale this doesn't happen because extremists are usually in the minority. The community notes on X.com have been very reliable and unbiased, and that's what Facebook plans to use. It will probably catch misinformation, misleading content, and AI content much faster than before, because people with a vested interest in opposing it will try harder to get it corrected than a team whose job is to clean up after it becomes a liability.
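A key reason community notes resist hijacking is that a note is only surfaced when raters who normally disagree both find it helpful. Here is a heavily simplified sketch of that cross-viewpoint idea; the cluster labels and the 70% threshold are assumptions for illustration, not the real ranking system, which infers rater viewpoints from rating history rather than taking them as labels:

```python
# Simplified sketch of the cross-viewpoint idea behind community notes:
# a note is surfaced only if raters from different viewpoint clusters
# rate it helpful, not just one side. Cluster labels and the threshold
# are illustrative assumptions, not the real algorithm.
def note_is_shown(ratings: list[tuple[str, bool]], min_frac: float = 0.7) -> bool:
    """ratings: (rater_cluster, rated_helpful) pairs."""
    by_cluster: dict[str, list[bool]] = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    if len(by_cluster) < 2:
        return False  # require agreement across at least two clusters
    return all(sum(v) / len(v) >= min_frac for v in by_cluster.values())

# Helpful to both clusters -> shown
print(note_is_shown([("left", True), ("left", True), ("right", True)]))   # True
# Helpful to only one cluster -> not shown
print(note_is_shown([("left", True), ("left", True), ("right", False)]))  # False
```

Even this toy version shows why a partisan brigade can't force a note through: ratings from one cluster alone never clear the bar.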
The platform operators aren't trying to increase hateful content or misinformation, they are trying to make their platforms the least oppressive platforms for discussion. Elon Musk's interview with the BBC shows how each side views the problem:
https://www.youtube.com/watch?v=bRkcLYbvApU
Musk says at 18:00 that they want to limit speech only to what is limited by law. The people of a country agree on what speech should be suppressed by law, and that's what the platform abides by. This is a reasonable stance but obviously leaves platforms wide open for abuse on a large scale.
I think the focus on factual accuracy isn't the best approach. Misinformation and hateful rhetoric spread much less in person than online, and I suspect the reason is civility. In the real world, there's a social contract where people exchange information politely and with positive intent. The opposite holds in the digital world because the social contract is different. Online, centrists are nobodies, people with opposing views are enemies, and people with more extreme views on a supporting side are allies who reward each other with affirmation. This breeds ever more extreme perspectives because it rewards them. Unfortunately, the more time people spend online, the more this makes its way into the real world.
Focusing on promoting and rewarding civil conversation would create more wholesome platforms where participants don't want to share hateful or misleading content because they won't be rewarded for it. AI can help here: when someone posts a comment with expletives, incendiary content, or some kind of attack, it can be detected, and the platform can tell the user to write the comment more politely or it will be suppressed and count negatively against their digital persona. The worst personas and content can be tagged as hateful or misleading and suppressed by the platform, and this can be applied retroactively. Reward people who are polite, tackle behavior rather than ideas, how people express themselves rather than what they express, and the platforms will improve.
An example AI summary of an online user would say something like: this user frequently posts aggressive messages, is politically left/right/center, shares offensive/misleading memes, promotes extremist users, etc., and the summary would go right where they can see it. Embarrass them into being a better person online. -
How to switch modes in macOS Sequoia's Calculator app
Ofer said: jeffharris said: Is there a way to just make the thing BIGGER? On my 27” 4K screen it looks like a postage stamp.
Luckily I can move it to my MacBook Pro’s screen, but that doesn’t solve the problem of Apple’s inability to include a method to scale UI elements for higher-resolution screens.
The reason they don't scale individual elements is that it results in a mess like what can happen in Windows (3:36).
The laptops use scaling too. The 16" MacBook Pro has a native resolution of 3456 x 2234. The highest scaled resolution is 2056 x 1329, which is about 60% of native.
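That ~60% figure can be checked directly from the two widths:

```python
# Check the scaled-resolution percentage for the 16" MacBook Pro
# figures quoted above (3456 x 2234 native, 2056 x 1329 scaled).
native = (3456, 2234)
scaled = (2056, 1329)
pct = scaled[0] / native[0] * 100
print(f"{pct:.1f}% of native width")  # 59.5% of native width
```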
If the default scaling options don't look good, there are more available by holding the Option key: Option-click one of the display scaling options and it will turn into a list. Toggle 'Show all resolutions' and pick one that looks good.
On a 27" display, 2560 x 1440 is a usable resolution. On a 32", somewhere around 70-80% scaled would probably be ok (around 1600p). -
iPhone 17 rumored to return to curved sides with new material blending
SnickersMagoo said: What the heck does it matter when you use a case? Edges curved or smoother???? Really??? Boy you guys need something important to think about....
They generally round the edges on the flat-sided iPhones so they feel OK to hold, but I think the iPhone X form factor was nearly perfect and newer models don't feel as comfortable to hold. Even the iPhone 11 was bulkier and heavier.