Apple is pouring money into Siri improvements with generative AI
Apple has boosted its budget for artificial intelligence development, with an emphasis on conversational chatbot features for Siri -- reportedly spending millions of dollars a day on research and development.

Generative AI could be used to bolster Siri's abilities
In May, reports emerged that Apple had been recruiting more engineers to work on generative AI projects. While the company didn't make an official forward-looking statement, CEO Tim Cook said that generative AI is "very interesting."
But the story of Apple's foray into generative AI starts much earlier than May. Four years ago, Apple's head of AI, John Giannandrea, formed a team to work on large language models (LLMs), the technology underlying generative AI chatbots like ChatGPT.
Apple's conversational AI team, known as the Foundational Models team, is led by Ruoming Pang, who previously worked at Google for 15 years. The team has a significant budget and reportedly spends millions of dollars a day training advanced LLMs. Despite having only 16 members, its advancements rival those of OpenAI, which spent over $100 million to train a comparable LLM.
According to The Information, at least two other teams at Apple are working on language and image models. One group focuses on Visual Intelligence, generating images, videos, and 3D scenes, while another works on multimodal AI, which can handle text, images, and videos.
Apple is currently planning to integrate LLMs into Siri, its voice assistant. This would let users automate complex tasks using natural language, similar to Google's efforts to improve its own assistant. Apple believes its advanced language model, Ajax GPT, outperforms OpenAI's GPT-3.5.
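To make that idea concrete, here is a minimal, hypothetical sketch of how an assistant could ask an LLM to turn a spoken request into a structured action it can execute. The action schema, `build_prompt`, and the `run_llm` stub are illustrative assumptions, not anything Apple has described.

```python
import json

# Hypothetical sketch: turning a natural-language request into a structured,
# executable automation. The prompt format, action schema, and run_llm stub
# are assumptions for illustration only.

ACTION_SCHEMA = (
    "Respond only with JSON of the form "
    '{"action": str, "parameters": {str: str}}.'
)

def build_prompt(user_request: str) -> str:
    """Combine the schema instructions with the user's spoken request."""
    return f"{ACTION_SCHEMA}\nUser request: {user_request}"

def run_llm(prompt: str) -> str:
    """Stand-in for an on-device language model call (mocked here)."""
    return ('{"action": "create_reminder", '
            '"parameters": {"title": "Call mom", "time": "18:00"}}')

def handle_request(user_request: str) -> dict:
    """Parse the model's JSON reply into an action the assistant can dispatch."""
    reply = run_llm(build_prompt(user_request))
    return json.loads(reply)

if __name__ == "__main__":
    print(handle_request("Remind me to call mom at 6 pm"))
```

The appeal of this pattern is that the model only has to produce a small, well-defined piece of structured output, which the assistant can validate before acting on it.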
Still, incorporating LLMs into Apple products has its challenges. Unlike some competitors that take a cloud-based approach, Apple prefers running software on-device for better privacy and performance. However, Apple's LLMs, including Ajax GPT, are large and complex enough that fitting them onto an iPhone is difficult.
There are precedents for shrinking large models, such as Google's PaLM 2, which comes in several sizes, including one small enough for on-device and offline use. While Apple's plans remain unclear, the company could opt for smaller LLMs for privacy reasons.
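As a general illustration of how such shrinking can work (not Apple's or Google's actual pipeline), post-training quantization stores weights at lower precision. The sketch below shows why int8 storage cuts a weight matrix's memory roughly fourfold compared with float32.

```python
import numpy as np

# Generic illustration of post-training weight quantization, one common way
# to shrink a large model for on-device use. Not any vendor's actual method;
# it just shows the memory saving from storing weights as int8.

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 with a per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights at inference time."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(weights)

print(f"float32 size: {weights.nbytes / 1e6:.1f} MB")  # ~67 MB
print(f"int8 size:    {q.nbytes / 1e6:.1f} MB")         # ~17 MB
print(f"max error:    {np.abs(dequantize(q, scale) - weights).max():.4f}")
```

Real deployments usually combine several such techniques (quantization, distillation into smaller architectures, pruning), but the trade-off is the same: less memory and faster inference in exchange for some loss of accuracy.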
In May, internal documents and anonymous sources revealed details of Apple's internal ban on ChatGPT-like tools and its plans for an LLM of its own.
Comments
At this point, it's pointless to guess how data will be routed. That said, Apple's commitment to user privacy is very, Very, VERY strong. iCloud, iMessage, etc. My best guess is that Apple's production service will run on their own servers in their own datacenters. Remember that datacenters aren't free. You need to put hardware in them, get people to run them, etc.
Over time, Apple has made a concerted effort to wean themselves from Google's grasp. It didn't happen overnight either.
In any case, Apple will never disclose the development process to the public.
But that doesn't mean the data is being "routed through Google" - it's a private cloud with built-in encryption; the cloud vendor does not have access to the encryption keys and cannot access the plaintext version of the data.
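The general idea that comment describes can be sketched with client-side encryption, where the key never leaves the device, so whoever stores the blob cannot read it. This is only an illustration using the Python cryptography package, not iCloud's actual protocol or key hierarchy.

```python
from cryptography.fernet import Fernet

# Simplified illustration of client-side encryption: the device holds the key,
# so a cloud provider storing the blob never sees plaintext. A sketch of the
# general idea only, not iCloud's actual design.

device_key = Fernet.generate_key()        # stays on the user's device
cipher = Fernet(device_key)

plaintext = b"Remind me to call mom at 6 pm"
blob = cipher.encrypt(plaintext)          # only this opaque blob is uploaded

# The cloud vendor stores `blob` but cannot decrypt it without `device_key`.
print(cipher.decrypt(blob) == plaintext)  # True, but only with the key
```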
JAX is an open source framework; I would guess that "Apple JAX" involves some extension code that is customised based on observations and analysis of the existing data set, and perhaps knowledge of the devices on which (portions of) the code will be run in the future.
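For readers unfamiliar with the framework being referenced, here is a minimal example of plain open source JAX; any Apple-specific extension of it is, as noted above, only a guess.

```python
import jax
import jax.numpy as jnp

# Minimal JAX example for context: JAX traces ordinary NumPy-style Python and
# JIT-compiles it via XLA for whatever accelerator backend is available.

@jax.jit
def attention_scores(q, k):
    """Scaled dot-product scores, the core operation inside transformer LLMs."""
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)

q = jnp.ones((8, 64))
k = jnp.ones((8, 64))
print(attention_scores(q, k).shape)  # (8, 8), compiled for the local backend
print(jax.devices())                 # shows which hardware XLA is targeting
```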
That's where the companies who specialise in these things come in, but it comes at a price.
Apple and Google are not the enemies some forum members would like to believe they are.