Big Tech firms pledge to control AI, but Apple is not joining them
A consortium formed by OpenAI, Google, Microsoft, and artificial intelligence safety firm Anthropic says the companies will establish best practices for the AI industry, although so far it will be doing so without Apple.
Logos of the four firms in the new consortium: OpenAI, Anthropic, Microsoft and Google
Just as Apple was noticeably absent from a separate AI safety initiative announced by the White House, so its failure to join this new consortium raises questions. None of the companies involved have commented on whether Apple was even invited to either initiative, but both projects aim to be industry-wide.
"Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum," announced Google in a blog post, "a new industry body focused on ensuring safe and responsible development of frontier AI models."
"The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem," it continued, "such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards."
The definition of a frontier AI system, says the consortium, is "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."
"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," Brad Smith, vice-chair and president of Microsoft, said in the announcement.. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."
Overall, the new project aims to establish best practices for controlling AI. But AI expert Emily Bender of the University of Washington has told Ars Technica that the new consortium is really intended as "an attempt to avoid regulation; to assert the ability to self-regulate."
Bender is "very skeptical" of AI firms self-regulating, and says that regulation should come from governments "representing the people to constrain what these corporations can do."
Back in 2016, Google and Microsoft were also founding members of a similar organization called The Partnership on AI. Notably, Apple was a member -- and still is today.
Comments
Although I believe that AI is one of the biggest inventions in human history, I'm not sure what Apple should do about it. How can Apple build AI servers that deliver the privacy and security its stellar reputation is built on? Will Apple design it so that the AI runs on the user's own computer (as they do with HomeKit Secure Video, which raises similar privacy issues)? How much storage and how much computational power would a user need to get a modicum of AI running on their home computer? If I needed 10 TB of storage and 10 CPUs to get AI working on my computer, I would probably buy that. Privacy is that valuable to me. Apple could sell a lot more hardware (at least to me) if they put AI on a home computer that I could purchase. Does anyone at Apple want to sell a lot more hardware to me?
Indeed!
Obviously, times have changed. Today at least some of the political bureaucrats and control freaks view Apple and other tech companies as gaining too much mindshare and having too much influence over the population at large. I have no doubt that social media, and the immediacy that products like the iPhone and networked apps bring to it, have affected the course of history unfolding in real time before our eyes. Some of it is good, some of it is bad, and some is very bad. Those who believed they controlled the narrative in the past feel like they've lost at least some of that control to social media and to those who have learned to use it to their benefit. The arrival of AI, and some of the misperceptions around its potential influence, scares the living crap out of those who already feel diminished by the negative artifacts of existing technical platforms and social interaction capabilities that are at least somewhat controlled by human intelligence.
My advice is to follow the de-escalation tactic employed by Benjamin Franklin: always engage with your detractors and find out where their concerns really lie before trying to reach a mutually beneficial outcome from positions of equal footing. Rather than putting together an industry-sponsored (or Big Tech-sponsored) consortium around managing AI, engage with those in government and with neutral third parties like academia who are also working to solve these same problems. I'm talking specifically about teaming up with the National Institute of Standards and Technology (NIST), which has been heavily focused on this exact area. Why not bring in academic powerhouses like CMU, Stanford, Caltech, MIT, etc., as well? Invite more perspectives and apply more brainpower toward coming up with a solution. But most of all, try to move away from the corrosive us-vs-them mentality. We all have a common enemy: uncontrollable, destructive, and weaponized AI that undermines truth and trust.
Yeah, there's always the fear that when you invite more people in you'll simply end up with a much larger, non-functional committee that doesn't deliver any tangible outcomes. But at the very least it would alleviate the concern that these tech companies are simply conspiring to do an end run around those who end up becoming the regulators or "AI Cops." The worst possible outcome would be for "AI Cops" to be empowered without having participated in the creation of the solution. We don't want uninformed and paranoid enforcers enacting controls and restraints that are ineffective, unenforceable, benefit-crippling, or a self-induced impediment to progress and global competitiveness.
There is imo a lot of value to be harvested from the proper, reasonable, ethical, and constructive application of AI. We cannot afford to screw up what it does well out of an irrational fear of what it could do badly. Doing things in isolation, without broad support and buy-in from all concerned stakeholders on all sides, is not a viable solution. Sometimes there are no shortcuts.