Marvin

About

Username: Marvin
Visits: 131
Roles: moderator
Points: 7,007
Badges: 2
Posts: 15,585
  • Meta CEO mocks Apple for 'sitting on' iPhone 20 years later despite doing the same with Fa...

    auxio said:
    leighr said:
    The end of so called “fact checkers” is a welcome relief for true free speech. When one person, or group, has the power to decide what is ‘true’ or not, we are all in trouble. See exhibit one: China, or even worse, North Korea. We all need to fight against this sort of abuse of money and power, and while I am not a huge Facebook fan, I’m glad that they are following X’s lead in allowing free speech. 
    Can you imagine what would happen if all the engineers who check bridge designs, buildings, engines in cars, etc thought this way?

    There is a reality out there which has undeniable facts about how things work, despite what all the crooks in the world seeking money/power are trying to convince you of for their own personal gain.
    That's not what it's about, though, and this was covered in the interview. Zuckerberg said the government had been pressuring them to suppress information, with officials screaming obscenities down the phone to get truthful information taken down, and that they were even trying to get memes and satire removed.

    Facebook's original intent was to fact-check clearly factual things (Zuckerberg mentioned flat earth as an example) and only those things, but it evolved into political bias where they felt they were deciding between opinions. Some of the bias was in which things they chose to fact-check. This is also a large part of the bias in the media that people don't pay attention to: selective reporting. One news company will bury a story, or not report it at all, because it doesn't fit its general views or audience, while another company will do the opposite with the same story.

    Fact-checkers sometimes leave articles labeled 'inconclusive' if something is, say, 60% likely to be true. What is true or not isn't always binary; it can be a probability until more information is available, or subject to interpretation, and only things that are 99%+ likely are considered indisputable. A lot of claims, especially in politics, have a lower probability of being true because they rely on hearsay, on who witnessed the event, and on whether those witnesses are credible. Deciding whether someone is a credible witness is itself a biased process.

    Here's an example where a fact-checker is trying to determine whether a spoken word contained an apostrophe, which would change its meaning. Both interpretations are plausible, but not everyone will see them as equally plausible:

    https://www.snopes.com/news/2024/10/30/biden-trump-supporters-garbage/

    This is the kind of vague political nonsense that is a waste of time to get involved with. Politics is an argument that never ends, and at the scale Facebook operates, it is a near-impossible task to fact-check everything in a consistent and reliable way. They will get a reasonable proportion of the checks correct, but those are rarely the problem because they are obvious to most people too; it's the vague ones that slip through.

    There are highly respected officials saying on record that UFOs with non-human creatures exist, and mainstream news outlets report it. Should this be reported and spread as truthful without them providing evidence, or suppressed as misinformation until evidence is provided?

    https://www.newsweek.com/ufos-exist-what-experts-told-congress-1985865

    It's not enough to absolve themselves of responsibility by saying they aren't reporting that what is being said is true, only that those people said it (which is objectively true), because they could do the same for a quack doctor who says drinking pineapple juice every day prevents cancer. That's the 'we're just the messenger' defense.

    The addition of memes and satire complicates things. If someone spreads misinformation in text, it can be labeled as such and suppressed, but it can also be wrapped into a meme and permitted to spread because it's just a joke.

    Having a central committee decide what is true has the benefit of being authoritative and methodical, but it also has a limited perspective, especially on international issues. While using the public for fact-checking can allow crazies to hijack the process, at a large enough scale this doesn't happen because extremists are usually in the minority. The community notes on X.com have been very reliable and unbiased, and that's what Facebook plans to use. It will probably catch misinformation, misleading content, and AI content much faster than before, because people with a vested interest in opposing such content will try harder to get it corrected than a team whose job is to clean up after it becomes a liability.

    The platform operators aren't trying to increase hateful content or misinformation, they are trying to make their platforms the least oppressive platforms for discussion. Elon Musk's interview with the BBC shows how each side views the problem:

    https://www.youtube.com/watch?v=bRkcLYbvApU

    Musk says at 18:00 that they want to limit speech only to the extent the law limits it. The people of a country agree on what speech should be suppressed by law, and that's what the platform abides by. This is a reasonable stance, but it obviously leaves platforms wide open for abuse on a large scale.

    I think the focus on factual accuracy isn't the best approach. Misinformation and hateful rhetoric spread much less in person than online, and I suspect the reason is civility. In the real world, there's a social contract where people exchange information politely and with positive intent. The digital world is the opposite because the social contract is different. Online, centrists are nobodies, people with opposing views are enemies, and people with more extreme views on a supporting side are allies who reward each other with affirmation. This breeds ever more extreme perspectives because it rewards them. Unfortunately, as people spend more time online, this behavior is making its way into the real world.

    Focusing on promoting and rewarding civil conversation would create more wholesome platforms where participants don't want to share hateful or misleading content because they won't be rewarded for it. AI can aid this: when someone posts a comment containing expletives, incendiary content, or some kind of attack, it can be detected, and the platform can tell the user to rewrite the comment more politely or have it suppressed and counted against their digital persona. The persona is then tagged as hateful or misleading, and the worst personas and content can be suppressed by the platform, even retroactively. Reward politeness, tackle behavior rather than ideas (how people express themselves rather than what they express), and the platforms will improve.

    An example AI summary of an online user would say something like: this user frequently posts aggressive messages, is politically left/right/center, shares offensive or misleading memes, promotes extremist users, etc., and it would be placed right where they can see it. Embarrass them into being a better person online.
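    The detect-prompt-and-tally loop described above can be sketched in a few lines. This is a hedged illustration only: the phrase list and scoring stand in for a real toxicity classifier, and the persona dictionary is a hypothetical stand-in for a stored user profile.

```python
# Illustrative sketch: score a comment for incendiary language before it
# posts, ask the author to rewrite it, and tally results per persona.
# The phrase list is a placeholder, not a real classifier.

INCENDIARY = {"idiot", "moron", "liar", "shut up"}

def civility_score(comment: str) -> float:
    """Fraction of flagged phrases present (0.0 means no flags)."""
    text = comment.lower()
    return sum(p in text for p in INCENDIARY) / len(INCENDIARY)

def review_comment(comment: str, persona: dict) -> str:
    """Accept civil comments; flag others and count against the persona."""
    if civility_score(comment) == 0.0:
        persona["civil"] = persona.get("civil", 0) + 1
        return "accepted"
    persona["flagged"] = persona.get("flagged", 0) + 1
    return "please rewrite more politely"
```

    A persona with a high flagged count could then feed the kind of AI-generated user summary described above.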
  • How to switch modes in macOS Sequoia's Calculator app

    Ofer said:
    Is there a way to just make the thing BIGGER. 
    On my 27” 4K screen it looks like a postage stamp.

    Luckily I can move it to my MacBook Pro’s screen, but that doesn’t solve the problem of Apple’s inability to include a method to scale UI elements for higher resolution screens.
    Agreed! How do they not have this scaling capability built in? I end up using lower, non-native resolutions on my external display because of this.
    The scaled resolutions are different from simply running at a lower resolution: the frame buffer is rendered at a high resolution and everything is scaled down to fit the display.

    The reason they don't scale individual elements is that it results in a mess like what can happen in Windows. [embedded video; example at 3:36]

    The laptops are using scaling too. The 16" MacBook Pro has a native resolution of 3456 x 2234; the highest scaled resolution, 2056 x 1329, is about 60% of native.

    If the default scaling options don't look good, there are more available via the Option key: Option-click one of the display scaling options and it will turn into a list, then toggle 'Show all resolutions' and pick one that looks good.

    On a 27" display, 2560 x 1440 is a usable resolution. On a 32", somewhere around 70-80% scaled would probably be ok (around 1600p).
  • iPhone 17 rumored to return to curved sides with new material blending

    What the heck does it matter when you use a case?    Edges curved or smoother????    Really???    Boy you guys need something important to think about....
    Curved edges allow a case to sit flush with or below the screen, which makes it easier to swipe the display from the sides. [embedded images]

    They generally round the edges on the flat-sided iPhones so they feel OK to hold, but I think the iPhone X form factor was nearly perfect and newer models don't feel as comfortable. Even the iPhone 11 was bulkier and heavier.
  • Apple Intelligence summaries are still screwing up headlines

    chasm said:
    Hire a person or three to write unbiased, accurate summaries based on them actually reading the stories = better solution for Apple (we expect this type of crap from Google)

    Someday AI will be able to do accurate summaries. But if today isn't that day, don't use AI for a job it clearly can't yet do.
    The volume of info is likely too high for humans to summarize every news service in every country every single day, and they can't read personal messages for privacy reasons. There could be a separate AI that checks the summaries for errors by comparing each summary against the main text for mismatches. At the very least, sensitive summaries that mention things like people dying should be flagged for human review. When a personal message gets flagged for sensitive content, or its summary doesn't make sense, the display can fall back to cropped text.

    A news summary like "Brazilian tennis player Rafael Nadal comes out as gay." would get flagged as sensitive and raised for human review. Until the review is done, it can show an important sentence from the article cropped to fit the space.
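    A minimal sketch of that fallback, assuming a simple keyword screen (the sensitive-term list and the texts are illustrative; a real system would use a classifier, not substring matching):

```python
# Flag summaries touching sensitive topics and fall back to a cropped
# sentence from the source article until a human review completes.

SENSITIVE = ("dies", "death", "killed", "comes out", "arrested")

def safe_display(summary: str, article: str, max_len: int = 80) -> str:
    """Return the summary, or a cropped article sentence if flagged."""
    if any(term in summary.lower() for term in SENSITIVE):
        # Pending human review: show the article's opening sentence, cropped.
        return article.split(". ")[0][:max_len]
    return summary

summary = "Brazilian tennis player Rafael Nadal comes out as gay."
article = "Rafael Nadal spoke with reporters after the match. The interview covered..."
print(safe_display(summary, article))  # falls back to the cropped opening sentence
```

    The same routine could serve personal-message summaries, where the fallback is simply the cropped message text itself.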
  • Banned macOS & Linux players of 'Marvel Rivals' have been given a reprieve

    hurting players simply because they want to play your game? who cares what OS it's on.
    It's due to these being competitive multiplayer games. Cheating ruins the game as it affects the rankings. [embedded video]

    They use anti-cheat software that tries to make sure no other software is modifying the game to allow things like auto-aim or shooting through walls. When the game is run through compatibility software, either the anti-cheat isn't running, or the software is being modified at a lower level in order to run, which looks like it's using cheats.

    This is why a lot of these games don't get native ports: on some systems it's harder to run anti-cheat code at such a low level (kernel level). There should be a way to tackle this at the OS level for all games, though, the same way video DRM is handled.

    The OS can ensure that the memory used by an app isn't modified by another app at runtime, possibly using encrypted memory containers. To ensure the binary isn't modified on disk, a hash of the files can be verified. In the rare case these methods are bypassed, the player's online activity can be reviewed. Verifying software integrity shouldn't need a custom kernel-level driver; the OS provider should be able to guarantee it.
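    The on-disk half of that check is straightforward to sketch: hash each game file and compare against a manifest of expected digests. This is illustrative only; the manifest format is hypothetical, and a real scheme would require the manifest itself to be signed by the distributor.

```python
# Sketch of on-disk integrity verification: SHA-256 each file and
# compare against a (hypothetical) manifest of expected digests.
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 of a file, read in chunks to handle large game assets."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def tampered_files(manifest: dict) -> list:
    """Paths whose current digest no longer matches the manifest."""
    return [path for path, expected in manifest.items()
            if file_digest(path) != expected]
```

    Runtime memory protection is the harder half; hashing only catches modification of the binaries at rest.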

    Maybe a sandboxed area of the OS that behaves like a mobile operating system would be an option, with protected games running inside the sandbox and an inaccessible filesystem.