Meta CEO mocks Apple for 'sitting on' iPhone 20 years later despite doing the same with Fa...

124 Comments

  • Reply 61 of 88
    Marvin Posts: 15,554 moderator

    Snopes is not a Meta fact checker. Bringing them up is a complete red herring as they are in no way relevant to the conversation.
    According to the news, Meta used multiple 3rd party fact-checking services:

    https://www.seattletimes.com/business/meta-moves-to-end-fact-checking-program/

    "After enormous public pressure, Zuckerberg turned to outside organizations like The Associated Press, ABC News and the fact-checking site Snopes, along with other global organizations vetted by the International Fact-Checking Network, to comb over potentially false or misleading posts on Facebook and Instagram and rule whether they needed to be annotated or removed."

    The inclusion of Snopes was just to highlight the vagueness in some kinds of facts that are being checked; it wasn't a critique of Snopes, which is a well-regarded fact-checker. Most fact-checkers are; they just tend to have a left-center bias, which affects how middle-ground issues are handled.
    I can't say Meta is trying to increase hurtful content but clearing the way for people to claim women are property and LGBTQ+ people are all mentally ill does increase the amount of harmful content and misinformation. It also doesn't make the platforms less oppressive. It opens the door for harassment. 
    There are issues that are not so heavily agreed on. One Zuckerberg touched on is blasphemy, where people in Pakistan wanted him sentenced to death for allowing images of their religion on the platform. The issue with censorship is where it's appropriate to draw the line. Just saying it opens the door for harassment covers a lot of things. Should they not allow people to say someone is fat or ugly, or that they aren't good at their job? One of the top comments on a YouTube video about censorship was that people often mistake hate speech for speech they hate, and if censorship is going to work reliably, it needs a consistent standard.

    As you alluded to earlier, people all start setting limits on free speech from a common ground but fairly quickly draw the line at things they are personally sensitive about. Things one person is more sensitive to are less sensitive for someone else. That doesn't mean they agree with it; they just don't think it reaches a level of offensiveness that requires removal.
    Elon Musk has said many things that are outright lies; it is mind-bending that anyone would defer to him. Also, he regularly suppresses speech on X that he disagrees with. Not that it is factually incorrect, but he simply doesn't like it. He has in no way lived up to his claims to be a free speech absolutist, rather the opposite.
    He definitely doesn't live up to being a free speech absolutist and this was apparent before he took over because it's inevitable that people do this. People all have this same flaw. Inwardly people see themselves as the arbiters of truth because people have no reason to lie to themselves but they only have their own perspective.

    The BBC interview shows his mindset and intent; he hasn't lived up to this stated intent, and I don't think anyone can run a large-scale platform using only the law as a base guideline and then tacking on subjective rules. I think he is overly optimistic about how humanity operates at scale.

    It's easy to lose sight of the scale of these operations, they are getting hundreds of millions of posts per day. They don't have millions of employees so they rely on software to automate the process.

    That's why I think the solution lies in a better convention for conversation. They have to set up a social contract that rewards constructive input instead of destructive input. This can be seen on forum sites: the comments on forum sites are usually way above the standard you get on social media because there's a different convention for conversation. Neutering political conversation helps, of course. Politics more often divides people than helps them find common ground.
    No one decides what is true or isn't. Facts are self-evident.
    Facts are immutable. Community notes are under the control of the platform and its users.
    It's important to clarify the word 'fact' here because it's being taken at face value to mean something that is known to be true. Fact-checkers should really be called something else because what they are checking are not yet considered to be facts. The accounts they find to be false are not facts.

    Zuckerberg clarified that their existing policies surrounding facts will remain in place. If something is factually/indisputably terrorist activity, trafficking etc, this won't be left to community notes and will be removed.

    Things that are left to notes are unverifiable accounts. If both sides in a war zone try to spread accounts of war crimes, there isn't always a way of verifying these accounts, and bias tends to fall toward whichever side of the war one hopes will be successful. In the war in Ukraine, Western media is more likely to believe accounts from Ukraine than from Russia, and justifiably so, but this justified bias doesn't make an account factual.

    The point about injecting bleach is valid and caused real harm, and I'm sure these kinds of statements will fall under their normal guidelines that will remain in place. This isn't something that's vague or debatable.
    dewme said:

    First of all, thank you for composing such a well formed and articulate post. It’s always a pleasure to read a person’s opinion when it is presented so clearly. Whether or not I agree or disagree with what you’re saying isn’t the point. You’ve provided food for thought and added another perspective to consider along with all of the other perspectives stated here.

    One response I have to your post concerns using AI to moderate social media content. I'm not so sure that would work, because we'd all have to agree on which AI to use, and there would always be claims that the AI is itself biased. In the end I doubt that humans will ever defer to AI in policing their personal behavior, which includes a lack of self-discipline, empathy, and self-moderation.
    Thanks for the kind reply. For the AI, this would be some kind of machine learning. Generative AI has reliability issues but machine learning is good at consistent labelling. The models used could be biased but there can be open-source models. These would be primarily trained to detect civility in conversation but can also detect bias in different topics. As long as the same models are used on each platform, they will have consistent output. They can be used to test mainstream media sites for bias. With open source models, individuals won't be able to corrupt them for their own gain.

    There will be other ways to accomplish the same goal of encouraging constructive conversation but on a large scale, there needs to be a consistent, automated solution. These sites will use AI in their tools already but they clearly aren't being used to encourage constructive input. They can even have monetary rewards for the most constructive users.
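The "consistent labelling" idea above can be illustrated with a toy sketch. A trivial deterministic keyword scorer stands in for a trained open-source civility model here; the word list, function name, and labels are all invented for illustration, not anything these platforms actually use.

```python
# Toy illustration of consistent labelling: a deterministic classifier
# always returns the same label for the same input, so every platform
# running the same model gets the same output.

# Hypothetical hostile-word list; a real ML model would learn such signals.
HOSTILE_WORDS = {"idiot", "ugly", "stupid", "liar"}

def civility_label(comment: str) -> str:
    """Return 'hostile' or 'civil' for a comment, deterministically."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return "hostile" if words & HOSTILE_WORDS else "civil"

# Same input, same label, on any platform running this model:
print(civility_label("You are an idiot"))      # hostile
print(civility_label("I see it differently"))  # civil
```

A real system would replace the word list with a learned model, but the key property Marvin describes survives: identical models give identical output everywhere, so results can be audited and compared across platforms.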
    muthuk_vanalingam, dewme, blastdoor
     2Likes 1Dislike 0Informatives
  • Reply 62 of 88
    mac_dog Posts: 1,098 member
    leighr said:
    The end of so called “fact checkers” is a welcome relief for true free speech. When one person, or group, has the power to decide what is ‘true’ or not, we are all in trouble. See exhibit one: China, or even worse, North Korea. We all need to fight against this sort of abuse of money and power, and while I am not a huge Facebook fan, I’m glad that they are following X’s lead in allowing free speech. 
    One group? Did you include government in that statement?
     0Likes 0Dislikes 0Informatives
  • Reply 63 of 88
    Marvin said:

    Snopes is not a Meta fact checker. Bringing them up is a complete red herring as they are in no way relevant to the conversation.
    According to the news, Meta used multiple 3rd party fact-checking services:

    https://www.seattletimes.com/business/meta-moves-to-end-fact-checking-program/

    "After enormous public pressure, Zuckerberg turned to outside organizations like The Associated Press, ABC News and the fact-checking site Snopes, along with other global organizations vetted by the International Fact-Checking Network, to comb over potentially false or misleading posts on Facebook and Instagram and rule whether they needed to be annotated or removed."

    The inclusion of Snopes was just to highlight the vagueness in some kinds of facts that are being checked; it wasn't a critique of Snopes, which is a well-regarded fact-checker. Most fact-checkers are; they just tend to have a left-center bias, which affects how middle-ground issues are handled.
    I can't say Meta is trying to increase hurtful content but clearing the way for people to claim women are property and LGBTQ+ people are all mentally ill does increase the amount of harmful content and misinformation. It also doesn't make the platforms less oppressive. It opens the door for harassment. 
    There are issues that are not so heavily agreed on. One Zuckerberg touched on is blasphemy, where people in Pakistan wanted him sentenced to death for allowing images of their religion on the platform. The issue with censorship is where it's appropriate to draw the line. Just saying it opens the door for harassment covers a lot of things. Should they not allow people to say someone is fat or ugly, or that they aren't good at their job? One of the top comments on a YouTube video about censorship was that people often mistake hate speech for speech they hate, and if censorship is going to work reliably, it needs a consistent standard.

    As you alluded to earlier, people all start setting limits on free speech from a common ground but fairly quickly draw the line at things they are personally sensitive about. Things one person is more sensitive to are less sensitive for someone else. That doesn't mean they agree with it; they just don't think it reaches a level of offensiveness that requires removal.
    Elon Musk has said many things that are outright lies; it is mind-bending that anyone would defer to him. Also, he regularly suppresses speech on X that he disagrees with. Not that it is factually incorrect, but he simply doesn't like it. He has in no way lived up to his claims to be a free speech absolutist, rather the opposite.
    He definitely doesn't live up to being a free speech absolutist and this was apparent before he took over because it's inevitable that people do this. People all have this same flaw. Inwardly people see themselves as the arbiters of truth because people have no reason to lie to themselves but they only have their own perspective.

    The BBC interview shows his mindset and intent; he hasn't lived up to this stated intent, and I don't think anyone can run a large-scale platform using only the law as a base guideline and then tacking on subjective rules. I think he is overly optimistic about how humanity operates at scale.

    It's easy to lose sight of the scale of these operations, they are getting hundreds of millions of posts per day. They don't have millions of employees so they rely on software to automate the process.

    That's why I think the solution lies in a better convention for conversation. They have to set up a social contract that rewards constructive input instead of destructive input. This can be seen on forum sites: the comments on forum sites are usually way above the standard you get on social media because there's a different convention for conversation. Neutering political conversation helps, of course. Politics more often divides people than helps them find common ground.
    For point 1: Meta didn't employ those companies to fact-check for them; they used them as sources. That is an important distinction. And as I pointed out, there was a lack of transparency on Meta's part. Yes, some things are vague, some things are not, and some things don't really need to be fact-checked because their impact is insignificant (someone used flat-earth folks as an example; who cares what they spout off about? There is no real harm in their beliefs). But to say that because some things are vague we shouldn't do anything at all seems like an extreme approach. There is room for nuance, or I guess in Zuckerberg's world, there just isn't.

    For point 2: Correct, but my two examples are real examples of changes that Meta made. You are making a false equivalence here. Attacking a minority group is in no way the same as telling an individual that they are bad at their job or ugly (body shaming for size is probably a little closer). The latter are personal insults, and while they should be left out of discourse, they aren't harassing a community. Anyway, the problem isn't simply content creators making content saying women are property and LGBTQ+ people are mentally ill. I can deal with that, as I don't have to engage with the content. It's that the policy change means women and LGBTQ+ creators can be targeted with these types of attacks in comments, in direct messages, or that their content can be stitched to incorporate these types of attacks. I'm a queer person who owns a small business and, up until this week, used Instagram for promotional purposes. The attacks were already obnoxious enough, but now I can't try to mitigate them by reporting them; these things are now fair game. I can either lock down my account so people can't comment on or share my content (which kinda defeats the purpose of using it for promotion), or I can accept that I am now open to even more harassment. What is particularly messed up is that these folks can send me direct messages attacking me for my sexuality, and if I do something like post a screenshot of it, I'll have my account suspended for promoting harassment against them. Seriously, that has happened. At the time, Meta wanted creators to report those types of comments rather than deal with them ourselves. Now what? Meta has given them the okay to do this, and I just have to shut up and take it. I just opted to delete my account because my time will be better spent doing other things than dealing with attacks. And, yes, there are groups of people out there that just look for LGBTQ+ accounts to spam with vitriol.

    For point 3, I think you missed my point, which was about using Elon Musk as a source for anything. His relationship with the truth is so poor that you can't accept anything he says as accurate.

    I can understand where you are coming from when you say that better conventions for conversations are important, but I think it is a bit naive. I also think it means you are probably lucky enough not to be in a group that these conversations are about. I can respectfully disagree with people unless the disagreement is about my humanity and right to exist. The latter is what Zuckerberg has now decided is a reasonable discussion for people to have. It's his business and his decision; I think it's the wrong decision.

    Lastly, thanks for the civil conversation on the subject. If this were the tone of all conversations around politics I would be 100 percent with you on just needing to continue to have better convention for conversations. I'm going to not respond anymore on this thread because I feel like I have probably chimed in more than enough. If you do respond I will happily read it and will consider anything you say. Take care. 


    edited January 12
    muthuk_vanalingam, roundaboutnow, mike1
     3Likes 0Dislikes 0Informatives
  • Reply 64 of 88
    Sitting on the iPhone? He'd better think about the Metaverse that he boldly announced a couple of years ago...
    If Apple had a similar product intro, financial analysts would immediately deem Apple as "doomed". 
    The abolition of fact checking is really scary.
    “Repeat a lie often enough and it becomes the truth.”
    Who used to say that?
    Joseph Goebbels, chief propagandist of the Third Reich.
    qwerty52, sconosciuto
     2Likes 0Dislikes 0Informatives
  • Reply 65 of 88
    Incredibly bad headline. Facebook has not "done the same" as Apple for the last 20 years. That concedes the argument to Zuck in an idiotic way.

    Because Apple has not simply "sat on" the iPhone. Since then, they rolled out the iPad, which defined an entire new product category, continues to define it, and is so far superior to all the competition that there is no real competition.

    And then they rolled out the Apple Watch, which also defines its category and has no real competition.

    And it's the same thing with AirPods and AirPods Pro.

    And it's the same thing with Apple Silicon.
     0Likes 0Dislikes 0Informatives
  • Reply 66 of 88
    mattinoz Posts: 2,613 member
    Incredibly bad headline. Facebook has not "done the same" as Apple for the last 20 years. That concedes the argument to Zuck in an idiotic way.

    Because Apple has not simply "sat on" the iPhone. Since then, they rolled out the iPad, which defined an entire new product category, continues to define it, and is so far superior to all the competition that there is no real competition.

    And then they rolled out the Apple Watch, which also defines its category and has no real competition.

    And it's the same thing with AirPods and AirPods Pro.

    And it's the same thing with Apple Silicon.
    Not to mention the iPhone created the market opportunity, via the App Store, for most of the startups Meta has acquired for any of the fake growth they think they have created.

    I sense a new wave of creators as they fill the voids X and Meta are leaving in their quest to achieve “world domination.”
     0Likes 0Dislikes 0Informatives
  • Reply 67 of 88
    opinion Posts: 122 member
    The world would have been a better place without Meta and other social media.
     0Likes 0Dislikes 0Informatives
  • Reply 68 of 88
    80025 Posts: 184 member
    It's a sad, sad state of affairs when an individual, company, business, and/or industry takes to denigrating another individual, company, business, and/or industry in a misguided effort to build itself up.
    qwerty52, sconosciuto, mike1
     3Likes 0Dislikes 0Informatives
  • Reply 69 of 88
    leighr said:
    The end of so called “fact checkers” is a welcome relief for true free speech. When one person, or group, has the power to decide what is ‘true’ or not, we are all in trouble. See exhibit one: China, or even worse, North Korea. We all need to fight against this sort of abuse of money and power, and while I am not a huge Facebook fan, I’m glad that they are following X’s lead in allowing free speech. 
    Exactly. Especially when the so-called fact checkers have the facts wrong.
    A fact is a fact because it is THE FACT: something that really has happened!
    A fact can't be wrong or right!
    A fact is the reality, the truth.
    And the fact-checker is there not to say whether the fact is wrong or right, but to check whether it has really happened or not.
    So, if you are telling the truth, why should you be afraid of fact-checking?

    sconosciuto
     1Like 0Dislikes 0Informatives
  • Reply 70 of 88
    Wow, this is a lot of conflicting opinions. I hope y'all got it off your chests. And a little side note: Wes' comment, “I'm assuming this means that when a teacher graded your paper and marked an answer incorrect, it was a violation of your free speech?”, was 🔥.
    sconosciuto, qwerty52
     2Likes 0Dislikes 0Informatives
  • Reply 71 of 88
    He's a man child plus he's not comfortable in a world with legs.
    sconosciuto
     1Like 0Dislikes 0Informatives
  • Reply 72 of 88
    quadra 610 Posts: 6,759 member
    I appreciate the competitive banter; it's going to happen, but the exaggerations and downplaying shouldn't be *too* obvious. "Apple's innovations are frozen in time"? Apple Silicon, for example, was released only a few years ago.
     0Likes 0Dislikes 0Informatives
  • Reply 73 of 88
    leighr said:

    I’m glad that they are following X’s lead in allowing free speech.
    tmay, JohnDinEU, killroy
     3Likes 0Dislikes 0Informatives
  • Reply 74 of 88
    bulk001 Posts: 828 member
    Just because one does not like the source of the criticism, or because it can apply to them too, does not mean it should be dismissed out of hand. Keep your friends close and your enemies closer. Under Cook the stock price has been up like a rocket, but new innovation that has brought new shareholder value has been questionable. Working on stuff like Titan while missing the AI movement would seem to me to be as big a miss as mobile was to Microsoft, when you look at the front end (say, ChatGPT) and the back end (say, Nvidia-style chips, and then options like their Digits processing unit for $3,000!). If, as Cook claims, Apple has been working on AI for many years (was it the past seven?), then that team needs to be fired. Now, AR headsets are innovation, and they could pay off the way the Newton may have laid the basis for the iPhone, but what if that time, money, and human genius had been put to work on something like OpenAI and chip development for AI? Instead we mostly get a better camera, a marginally faster chip, and some emojis that are presented as amazing and groundbreaking.
     0Likes 0Dislikes 0Informatives
  • Reply 75 of 88
    wood1208 Posts: 2,944 member
    Meta has destroyed so many lives and young generation all over the world that someone needs to hang Mark high!!!
    edited January 13
     0Likes 0Dislikes 0Informatives
  • Reply 76 of 88
    dewme Posts: 6,005 member
    Marvin said:

    Thanks for the kind reply. For the AI, this would be some kind of machine learning. Generative AI has reliability issues but machine learning is good at consistent labelling. The models used could be biased but there can be open-source models. These would be primarily trained to detect civility in conversation but can also detect bias in different topics. As long as the same models are used on each platform, they will have consistent output. They can be used to test mainstream media sites for bias. With open source models, individuals won't be able to corrupt them for their own gain.

    There will be other ways to accomplish the same goal of encouraging constructive conversation but on a large scale, there needs to be a consistent, automated solution. These sites will use AI in their tools already but they clearly aren't being used to encourage constructive input. They can even have monetary rewards for the most constructive users.

    You're welcome. Your response motivated me to dig a little deeper into some of the ways AI can reduce the biases that can result from learning data sources that are biased to begin with, for whatever reasons. The de-biasing techniques seem very reasonable but also involve domain experts and data-source validation involving humans. This is no surprise. When we start to look at AI processing of realtime data sources, I think there will still need to be a form of human-in-the-loop. Even with the promising future of machine learning, the human-in-the-loop will to a great extent be none other than ourselves, both individually and collectively, via upvoting and downvoting in addition to the scoring done by the AI processing. I'm okay with that, and now that AppleInsider has reinstituted downvoting, we can start to get a small taste of how the approach may work. We will see how we can handle this on our own, unaided by AI, ML, etc. What works here may not scale to the level needed on Meta's and Musk's platforms.
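The human-in-the-loop blend described above (an automated score plus crowd upvotes and downvotes) can be sketched in a few lines. The weighting scheme, function name, and 0-to-1 score ranges are invented for illustration, not anything these platforms actually use.

```python
# Hypothetical blend of a model's civility score with the crowd's vote ratio.
# model_score is assumed to be in [0, 1]; weights are made up for this sketch.

def combined_score(model_score: float, upvotes: int, downvotes: int,
                   model_weight: float = 0.7) -> float:
    """Blend an automated score with the crowd's up/down vote ratio."""
    total = upvotes + downvotes
    crowd = upvotes / total if total else 0.5  # no votes yet: stay neutral
    return model_weight * model_score + (1 - model_weight) * crowd
```

With these made-up weights, a post the model rates perfectly civil but that nobody has voted on scores 0.85, while heavy downvoting can pull it down; the humans remain part of the loop rather than deferring entirely to the AI.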
     0Likes 0Dislikes 0Informatives
  • Reply 77 of 88
    leighr said:
    The end of so called “fact checkers” is a welcome relief for true free speech. When one person, or group, has the power to decide what is ‘true’ or not, we are all in trouble. See exhibit one: China, or even worse, North Korea. We all need to fight against this sort of abuse of money and power, and while I am not a huge Facebook fan, I’m glad that they are following X’s lead in allowing free speech. 
    I'm assuming this means that when a teacher graded your paper and marked an answer incorrect, it was a violation of your free speech? No one has the power to decide what is true or not. Reality is the only truth. But by making fact checking crowdsourced, you remove truth from fact, and I guess that's why they say we're in a post-truth society. People only want to hear things they agree with, and everything else is deemed a lie, "fake news."

    It is tragic how bad things have become. People mistrusting professionals, believing conspiracies, hating everything different from themselves.

    It is tragic. I mourn the death of knowledge, of fact, of reality, of compassion. The only thing left is greed and hate in the people that follow him. It is pitiful.
    Two things here:
    1.  Being a 'professional' doesn't make one immune from scrutiny, as professionals are often wrong.  These people have very real motivations to say what they say, versus what you think you're hearing.  E.g., walking into a doctor's office and they're pushing the flu shot, regardless of your personal risk.  What they don't disclose is that if they get X% of people a flu shot, they get a financial bonus.  If the internet had existed in the 1940s, saying "smoking causes cancer" would have been flagged as misinformation by "professionals".

    2.  These "professional fact checkers" were taking down legitimate information.  In February 2021, I was among the first to get vaccinated.  A friend, who is a nurse, worked at a hospital administering vaccines to seniors, and once they opened a pack they had to use it within a certain time, so when there were no-shows she'd call friends and family so as not to waste them.  In any case, my entire body broke out in welts (chronic spontaneous urticaria).  When I posted online trying to find out whether anyone else was experiencing this, multiple platforms took my posts down and flagged the suggestion that the vaccine could cause it as "misinformation", despite my doctor telling me she saw it with the Moderna shot (but not Pfizer or J&J).  To this day, I still get the welts if I don't take an antihistamine.  It started as daily, but now I'm good if I take one every 2-3 days.  Since then, it has been recognized as a possible side effect, but it was just as much a fact then as it is now and definitely was not misinformation.


    edited January 13
  • Reply 78 of 88
    wesley_hilliard Posts: 467, member, administrator, moderator, editor
    CMABooty said:
    leighr said:
    The end of so called “fact checkers” is a welcome relief for true free speech. When one person, or group, has the power to decide what is ‘true’ or not, we are all in trouble. See exhibit one: China, or even worse, North Korea. We all need to fight against this sort of abuse of money and power, and while I am not a huge Facebook fan, I’m glad that they are following X’s lead in allowing free speech. 
    I'm assuming this means that when a teacher graded your paper and marked an answer incorrect, it was a violation of your free speech? No one has the power to decide what is true or not. Reality is the only truth. But by making fact checking crowdsourced, you remove truth from fact, and I guess that's why they say we're in a post truth society. People only want to hear things they agree with, and everything else is deemed a lie, "fake news."

    It is tragic how bad things have become. People mistrusting professionals, believing conspiracies, hating everything different from themselves.

    It is tragic. I mourn the death of knowledge, of fact, of reality, of compassion. The only thing left is greed and hate in the people that follow him. It is pitiful.
    Two things here:
    1.  Being a 'professional' doesn't make one immune from scrutiny, as professionals are often wrong.  These people have very real motivations to say what they say, versus what you think you're hearing.  E.g., walking into a doctor's office and they're pushing the flu shot, regardless of your personal risk.  What they don't disclose is that if they get X% of people a flu shot, they get a financial bonus.  If the internet had existed in the 1940s, saying "smoking causes cancer" would have been flagged as misinformation by "professionals".

    2.  These "professional fact checkers" were taking down legitimate information.  In February 2021, I was among the first to get vaccinated.  A friend, who is a nurse, worked at a hospital administering vaccines to seniors, and once they opened a pack they had to use it within a certain time, so when there were no-shows she'd call friends and family so as not to waste them.  In any case, my entire body broke out in welts (chronic spontaneous urticaria).  When I posted online trying to find out whether anyone else was experiencing this, multiple platforms took my posts down and flagged the suggestion that the vaccine could cause it as "misinformation", despite my doctor telling me she saw it with the Moderna shot (but not Pfizer or J&J).  To this day, I still get the welts if I don't take an antihistamine.  It started as daily, but now I'm good if I take one every 2-3 days.  Since then, it has been recognized as a possible side effect, but it was just as much a fact then as it is now and definitely was not misinformation.


    It's certainly complicated, especially since science isn't set in stone and is in constant flux. However, it is still important to trust science and its institutions because they are not a monolith. If something is currently scientific fact, it had to get there through mountains of testing and peer review to even be considered something more than a simple theory. And many scientific principles are still theories, which means they could one day be proven wrong when the right information is presented.

    However, that doesn't mean that random people on the internet should be allowed to spread misinformation based on hunches or things they heard from a guy on social media. Doctors can disagree on the best way to handle a medical issue, sure, but random users on an internet forum have no ability to discuss whether or not a vaccine is safe or useful. People can "do their own research" but it will never be enough to counter a proper education. Yes, professionals can be wrong, but that doesn't mean a random person online is right.

    Fact checking exists to ensure that information is spread with context and nuance, and it removes outright misinformation from the algorithms -- at least when it is working correctly. It doesn't stop users from having discussions or debates, but it does stop people from making grand proclamations without any evidence. And how a platform chooses to handle fact checking is up to them. Meta may have had fact checkers, but that didn't stop the flood of misinformation on the platform entirely. It slowed it, for sure, but there was so much that some always got through. Now there's no effort to stop it.

    To your points:

    1. I'm not sure if you're referring to the rumors about Blue Cross Blue Shield, but they are not true. Private doctors who aren't providing medications or services through federal programs might try to arrange some kind of kickback, but there are a lot of laws preventing kickbacks, so it isn't the norm. With vaccines, it just doesn't happen. Also, yes, it was normal for vaccine administrators to contact anyone and everyone to use up their stock of vaccines before it went bad.

    The incentive to get as many vaccines into as many people as possible is to save lives. Wasting vaccines caused unnecessary sickness and death during the pandemic.

    2. As I said, science changes. I'm not sure how you phrased your posts or where you posted, but it was likely moderators, not fact checkers, that took down your post for fear of contributing to misinformation. For example, people claiming vaccines made them magnetic or more sick caused some to not get vaccinated at all.

    How a platform chooses to handle fact checking is up to them. That said, I'm sorry to hear you had this reaction; it seems to be a known effect, now under investigation, that a minuscule portion of the population has reported. I hope they find a way to undo that effect in the future, and at least it seems it's not causing long-term harm.


    Still, none of this means we shouldn't have fact checking on the internet or that we shouldn't trust institutions. That's the fun part: we can question them and seek more information, but we should also trust when people who know what they are talking about tell us something important. Like getting a vaccine, evacuating from a natural disaster, or that abortion is healthcare that prevents tragic maternal deaths.

    Misinformation got some guy elected president twice! It's a powerful tool and needs to be defeated or else we'll continue to devolve into a hateful, angry, ignorant society. (more than we are)
  • Reply 79 of 88
    Somewhat hilariously, I just posted a comment that "will appear after it's been moderated".
  • Reply 80 of 88
    mattinoz Posts: 2,613, member
    bulk001 said:
    Just because one does not like the source of the criticism, or because it can apply to them too, does not mean it should be dismissed out of hand. Keep your friends close and your enemies closer. Under Cook the stock price has been up like a rocket, but new innovation that has brought new shareholder value has been questionable. Working on stuff like Titan while missing the AI movement in favor of AR would seem to me to be as big a miss as mobile was to Microsoft, when you look at the front end (say ChatGPT) and the back end (say Nvidia-style chips, and options like their Digits processing unit for $3,000!). If, as Cook claims, Apple has been working on AI for many years (was it the past 7?), then that team needs to be fired. Now, AR headsets are innovation, and they could pay off the way the Newton may have laid the basis for the iPhone, but what if that time, money, and human genius had been put to work on something like OpenAI and chip development for AI? Instead we mostly get a better camera, a marginally faster chip, and some emojis that are presented as amazing and groundbreaking. 
    AI has yet to show broad customer support. Just look at what Microsoft did to pricing this month to trick people into paying for AI instead of trying to promote it as an upgrade. 

    There is no market acceptance of AI as yet and I wonder if Apple is actually behind given their approach should create a broader revenue base for developers and Apple. No tricks just exposure of opportunities.