Apple Intelligence will adhere to new and vague federal artificial intelligence safeguards...

Posted in iOS

Apple, among other big tech companies, has agreed to a voluntary and toothless initiative that asks for fairness in the development of artificial intelligence models, and for monitoring of potential security and privacy issues as that development continues.

[Image: Craig Federighi in front of a large screen reading "Apple Intelligence"]
Future expansions to Apple Intelligence may involve more AI partners, paid subscriptions



In an announcement on Friday morning, the White House said that Apple has agreed to adhere to a set of voluntary artificial intelligence safeguards crafted by the Biden administration. Other big tech firms have already signed on to the effort, including Alphabet, Amazon, Meta, Microsoft, and OpenAI.

As they stand, the guidelines don't draw hard lines in the sand against specific behaviors. The executive order that Apple has agreed to in principle has six key tenets:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government

  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy

  • Protect against the risks of using AI to engineer dangerous biological materials

  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content

  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software

  • Order the development of a National Security Memorandum that directs further actions on AI and security



Under the executive order, companies are asked to share the results of compliance testing with each other and with the federal government. The order also calls for a voluntary security risk assessment, the results of which are likewise meant to be shared widely.

So far, there are no penalties for non-compliance, nor is there any kind of enforcement framework. To be eligible for federal purchase, AI systems must be tested before they are submitted in response to any request for proposals.

There is no monitoring of compliance with the terms of the order, and it's not clear whether any legally enforceable guardrails will be applied. It's also not clear what might happen to the agreement under a future administration.

There have been bipartisan efforts to regulate AI development, but there has been little movement or discussion beyond initial lip service. A resumption of that debate isn't expected anytime soon, and certainly not before the November 2024 elections.

The October executive order that preceded Apple's commitment was itself issued shortly before a multinational effort in November 2023 to lay out safe frameworks for AI development. It's not yet clear how the two efforts dovetail, or if they do at all.

The White House is expected to hold a further briefing on the matter on July 26.



Read on AppleInsider

Comments

  • Reply 1 of 1
blastdoor Posts: 3,516 member
    I seem to recall an Apple executive saying that it’s far better for government to state their goals and then leave it to industry to figure out how to achieve those goals than it is for government to keep the goals a mystery and micromanage firms.

    This sounds consistent with that view, and a refreshing contrast with the European Commission.