Big Tech firms pledge to control AI, but Apple is not joining them

Posted in General Discussion, edited July 2023

A consortium formed by OpenAI, Google, Microsoft, and artificial intelligence safety firm Anthropic says the companies will establish best practices for the AI industry, although so far it will be doing so without Apple.

Logos of the four firms in the new consortium: OpenAI, Anthropic, Microsoft and Google

Just as Apple was noticeably absent from a separate AI safety initiative announced by the White House, its failure to join this new consortium raises questions. None of the companies involved have commented on whether Apple was even invited to either initiative, but both projects aim to be industry-wide.

"Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum," announced Google in a blog post, "a new industry body focused on ensuring safe and responsible development of frontier AI models."

"The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem," it continued, "such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards."

The consortium defines a frontier AI system as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," Brad Smith, vice-chair and president of Microsoft, said in the announcement. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."

Overall, the new project aims to establish best practices for controlling AI. But AI expert Emily Bender of the University of Washington has told Ars Technica the new consortium is really intended as "an attempt to avoid regulation; to assert the ability to self-regulate."

Bender is "very skeptical" of AI firms self-regulating, and says that regulation should come from governments "representing the people to constrain what these corporations can do."

Back in 2016, Google and Microsoft were also founding members of a similar organization called The Partnership on AI. Notably, Apple was a member -- and still is today.


Comments

  • Reply 1 of 18
    fumi Posts: 23 member
    Meaningless. All these companies will pursue AI and the untold riches it will bring with little or no accountability. 
  • Reply 2 of 18
    robin huber Posts: 3,964 member
    Apple brands itself as responsible and privacy-forward. Perhaps they don’t feel the need to participate in fig-leaf events such as this, believing that they’re going to do the right thing because that’s what they do? Also, I think Apple may end up being a client of AI rather than a purveyor, much like they get screens from Samsung. So it’s up to the vendors to assure the safety of their wares.
    edited July 2023
  • Reply 3 of 18
    melgross Posts: 33,510 member
    robin huber said:
    Apple brands itself as responsible and privacy-forward. Perhaps they don’t feel the need to participate in fig-leaf events such as this, believing that they’re going to do the right thing because that’s what they do? Also, I think Apple may end up being a client of AI rather than a purveyor, much like they get screens from Samsung. So it’s up to the vendors to assure the safety of their wares.
    Apple is apparently working on their own. There have been several reports on that with some detail, including here. So I don’t think they will be using someone else’s version.
  • Reply 4 of 18
    melgross Posts: 33,510 member
    From what I read about this organization, it seems to be mostly "regulating" specifics of how this software will work, rather than the more theoretical concepts of how to determine when it becomes dangerous, and how to prevent that. I can see Apple not being interested in that view, at least at this time.
  • Reply 5 of 18
    Or maybe Apple knows how advanced their technology is. Why would you sit down at the poker table when you invented a new roulette machine?
  • Reply 6 of 18
    22july2013 Posts: 3,573 member
    Has anyone on earth ever said, "I buy from [Microsoft|Google] because of their views on privacy and security"? 

    Although I believe that AI is one of the biggest inventions in human history, I'm not sure what Apple should do about it. How can Apple build AI servers that provide the same privacy and security that its stellar reputation currently promises? Will Apple design it so that the AI runs on the user's own computer (as they do with HomeKit Secure Video, which raises similar privacy issues)? How much storage and how much computation power would a user need to get a modicum of AI running on their home computer? If I needed 10 TB of storage and 10 CPUs to get AI working on my computer, I would probably buy that. Privacy is that valuable to me. Apple could sell a lot more hardware (at least to me) if they put AI on a home computer that I could purchase. Does anyone at Apple want to sell a lot more hardware to me?
  • Reply 7 of 18
    ravnorodom Posts: 697 member
    No need to join them. Apple already has Siri as a foundation to work from. Just beef it up to Siri 2.0 with AI implementation.
  • Reply 8 of 18
    ravnorodom said:
    No need to join them. Apple already has Siri as a foundation to work from. Just beef it up to Siri 2.0 with AI implementation.
    Siri 2.0, the future of Artificial Irritating! 😂 
  • Reply 9 of 18
    danox Posts: 2,875 member
    robin huber said:
    Apple brands itself as responsible and privacy-forward. Perhaps they don’t feel the need to participate in fig-leaf events such as this, believing that they’re going to do the right thing because that’s what they do? Also, I think Apple may end up being a client of AI rather than a purveyor, much like they get screens from Samsung. So it’s up to the vendors to assure the safety of their wares.

    Apple's path is different because they have an in-house OS across several devices and real in-house hardware design and engineering, and they will never be on the same path as Google, Meta, Amazon, or even Microsoft. I don't see where Apple gets anything out of participating in the big lie.

    Google is a client of Apple, insofar as Google writes out a check for billions of dollars per year for a default search position within iOS and macOS, and Apple, being historically a vertical computer company, probably doesn't need help from Google or Microsoft.
  • Reply 10 of 18
    dewme Posts: 5,376 member
    Actions speak louder than pledges. 

    What are the tangible deliverables, other than groupthink proclamations, that this consortium will actually deliver?

    I'm placing my money on Apple to deliver something real in a real product or collection of products while the consortium compiles reams of meeting minutes, PowerPoints, and white papers.
  • Reply 11 of 18
    ctt_zh Posts: 67 member
    danox said:
    robin huber said:
    Apple brands itself as responsible and privacy-forward. Perhaps they don’t feel the need to participate in fig-leaf events such as this, believing that they’re going to do the right thing because that’s what they do? Also, I think Apple may end up being a client of AI rather than a purveyor, much like they get screens from Samsung. So it’s up to the vendors to assure the safety of their wares.

    Apple's path is different because they have an in-house OS across several devices and real in-house hardware design and engineering, and they will never be on the same path as Google, Meta, Amazon, or even Microsoft. I don't see where Apple gets anything out of participating in the big lie.

    Google is a client of Apple, insofar as Google writes out a check for billions of dollars per year for a default search position within iOS and macOS, and Apple, being historically a vertical computer company, probably doesn't need help from Google or Microsoft.
    Apple being a vertical company hasn't really given it an advantage with AI / ML.

    For AI training, using "best of breed" components is most important: cloud service, Google Cloud or Microsoft Azure; operating system, Linux; GPU, Nvidia. Remember, Apple's Ajax is based on Google's JAX machine learning framework and runs on the Google Cloud.

    Even for device deployment, Apple controlling its own hardware / software hasn't given it an advantage over, for example, the Google Pixel line in terms of ML features.

    Perhaps in the coming years we'll see drastic improvements for one architecture over another. Apple's implementation of its models in hardware may indeed be better than Qualcomm's implementation of Llama 2 on device, or Google's model deployment to its Tensor chips... we'll see but it's far from guaranteed.   
  • Reply 12 of 18
    ctt_zh said:
    danox said:
    robin huber said:
    Apple brands itself as responsible and privacy-forward. Perhaps they don’t feel the need to participate in fig-leaf events such as this, believing that they’re going to do the right thing because that’s what they do? Also, I think Apple may end up being a client of AI rather than a purveyor, much like they get screens from Samsung. So it’s up to the vendors to assure the safety of their wares.

    Apple's path is different because they have an in-house OS across several devices and real in-house hardware design and engineering, and they will never be on the same path as Google, Meta, Amazon, or even Microsoft. I don't see where Apple gets anything out of participating in the big lie.

    Google is a client of Apple, insofar as Google writes out a check for billions of dollars per year for a default search position within iOS and macOS, and Apple, being historically a vertical computer company, probably doesn't need help from Google or Microsoft.
    Apple being a vertical company hasn't really given it an advantage with AI / ML.

    For AI training, using "best of breed" components is most important: cloud service, Google Cloud or Microsoft Azure; operating system, Linux; GPU, Nvidia. Remember, Apple's Ajax is based on Google's JAX machine learning framework and runs on the Google Cloud.

    Even for device deployment, Apple controlling its own hardware / software hasn't given it an advantage over, for example, the Google Pixel line in terms of ML features.

    Perhaps in the coming years we'll see drastic improvements for one architecture over another. Apple's implementation of its models in hardware may indeed be better than Qualcomm's implementation of Llama 2 on device, or Google's model deployment to its Tensor chips... we'll see but it's far from guaranteed.   
    I disagree. Apple has moved more of its ML/AI features on-device than Google/Android has. That is due to the A-series processors and Apple's vertical integration. From a user perspective it's not really noticeable, but from a privacy and security standpoint it's important.
  • Reply 13 of 18
    robin huber Posts: 3,964 member
    After seeing Oppenheimer, we should have no illusions about how adept mankind is at letting the genie out of the bottle. We click our tongues at how risky it is, but can’t stop ourselves from prying out the stopper. We’re doomed by our very natures.
    edited July 2023
  • Reply 14 of 18
    citpeks Posts: 246 member
    dewme said:
    Actions speak louder than pledges. 

    What are the tangible deliverables, other than groupthink proclamations, that this consortium will actually deliver?

    I'm placing my money on Apple to deliver something real in a real product or collection of products while the consortium compiles reams of meeting minutes, PowerPoints, and white papers.

    Precisely.  Industry associations and trade groups serve to propagate PR, astroturf the public, lobby the government, and project a good image, while never forgetting that their true purpose is to advance the goals of their members.

    This is a dog and pony show to give the public the impression that these companies will try to act responsibly, and keep the regulators off their back, not that Congress will do anything substantive anyway.

    Every time some company gets caught doing something sketchy, or stupid, the boilerplate PR says "We value blah blah blah, and are committed to blah blah blah, and so on."

    Never admit fault, never outline how they're going to address the issue, or mitigate another occurrence.  Get out of jail free, because nobody will hold their feet to the fire (including the public), and move on, business as usual.  How many data breaches has T-Mobile suffered in their history?  They've had two this year alone, to go with those in the past.  SOS, DD.

    Apple has its own interests, of course, and is no different in that respect.  But one thing it has never felt the need to do is participate in something that it feels doesn't serve it or its users in a meaningful way.  This is just another example.

    Actions, not words.  The FBI knows first hand about this in its dealings with Apple.  The tech industry has come out against the UK's proposal to ban encryption.  If it does indeed pass, then the proof will be in the pudding.  Care to place bets on who follows through with their threats to leave the market?  Even Apple isn't a sure bet in that regard, but at least its odds are better than a Facebook, etc.
  • Reply 15 of 18
    22july2013 Posts: 3,573 member
    robin huber said:
    After seeing Oppenheimer, we should have no illusions about how adept mankind is at letting the genie out of the bottle. We click our tongues at how risky it is, but can’t stop ourselves from prying out the stopper. We’re doomed by our very natures.
    True, but do you think Russia would have chosen not to build the bomb if America hadn't? Or do you think Russia would have built the bomb but been more prudent in its use than America has been?
  • Reply 16 of 18
    "...Safe and responsible development of frontier AI models..."
    Indeed!
  • Reply 17 of 18
    dewme Posts: 5,376 member
    citpeks said:
    dewme said:
    Actions speak louder than pledges. 

    What are the tangible deliverables, other than groupthink proclamations, that this consortium will actually deliver?

    I'm placing my money on Apple to deliver something real in a real product or collection of products while the consortium compiles reams of meeting minutes, PowerPoints, and white papers.

    Precisely.  Industry associations and trade groups serve to propagate PR, astroturf the public, lobby the government, and project a good image, while never forgetting that their true purpose is to advance the goals of their members.

    This is a dog and pony show to give the public the impression that these companies will try to act responsibly, and keep the regulators off their back, not that Congress will do anything substantive anyway.

    Every time some company gets caught doing something sketchy, or stupid, the boilerplate PR says "We value blah blah blah, and are committed to blah blah blah, and so on."

    Never admit fault, never outline how they're going to address the issue, or mitigate another occurrence.  Get out of jail free, because nobody will hold their feet to the fire (including the public), and move on, business as usual.  How many data breaches has T-Mobile suffered in their history?  They've had two this year alone, to go with those in the past.  SOS, DD.

    Apple has its own interests, of course, and is no different in that respect.  But one thing it has never felt the need to do is participate in something that it feels doesn't serve it or its users in a meaningful way.  This is just another example.

    Actions, not words.  The FBI knows first hand about this in its dealings with Apple.  The tech industry has come out against the UK's proposal to ban encryption.  If it does indeed pass, then the proof will be in the pudding.  Care to place bets on who follows through with their threats to leave the market?  Even Apple isn't a sure bet in that regard, but at least its odds are better than a Facebook, etc.
    Another thing ... there is currently a lot of animosity between the political bureaucracy and tech industry. I think it is in everyone's best interest to turn down the heat and let cooler heads prevail. This is a pattern that Apple has followed under Tim Cook much more than under Steve Jobs and I think it is a better, or at least more effective, working model for where we are today. When Steve was changing the world few people wanted to get in his way because everyone was reaping the benefits of the tidal change. It was all unquestionably good, well, except for some of Apple's shortsighted or tunnel vision competitors.

    Obviously, times have changed. Today at least some of the political bureaucrats and control freaks view Apple and other tech companies as gaining too much mindshare and having too much influence over the population at large. I have no doubt that social media and the immediacy that products like the iPhone and networked apps provide in the realm of social media has affected the course of history that is unfolding in real time before our eyes. Some of it is good, some of it is bad, and some is very bad. Those who believed that they controlled the narrative in the past feel like they've lost at least some of that control to social media and those who have learned to utilize it to their benefit. The arrival of AI and some of the misperceptions around its potential influence scares the living crap out of those who already feel diminished by the negative artifacts represented in existing technical platforms and social interaction capabilities that are somewhat controlled by human intelligence.

    My advice is to follow the de-escalation tactics employed by Benjamin Franklin, which were to always engage with your detractors and find out where their concerns really lie before trying to reach a mutually beneficial outcome from positions of equal footing. Rather than putting together an industry- or big-tech-sponsored consortium around managing AI, engage with those in government and neutral third parties like academia who are also working toward solving these same problems. I'm talking specifically about teaming up with the National Institute of Standards and Technology (NIST), which has been heavily focused on this same area. Why not bring in academic powerhouses like CMU, Stanford, Caltech, MIT, etc., as well? Invite more perspectives and apply more brain power toward coming up with a solution. But most of all, try to move away from the corrosive us-vs-them mentality. We all have a common enemy: uncontrollable, destructive, and weaponized AI that undermines truth and trust.

    Yeah, there's always the fear that when you invite more people in you're simply going to end up with a much larger non-functional committee that doesn't deliver any tangible outcomes. But at the very least it would alleviate the concern that these tech companies are simply conspiring to do an end run around those who end up becoming the regulators or "AI Cops." The worst possible outcome would be for "AI Cops" to be empowered without having participated in the creation of the solution. We don't want uninformed and paranoid enforcers enacting controls and restraints that are ineffective, unenforceable, benefit-crippling, or a self-induced impediment to progress and global competitiveness.

    There is imo a lot of value to be harvested from the proper, reasonable, ethical, and constructive application of AI. We cannot afford to screw it up where it is good over an irrational fear of where it could be bad. Doing things in isolation and without broad support and buy-in from all concerned stakeholders on all sides is not a viable solution. Sometimes there are no shortcuts.
