Apple bans internal use of ChatGPT-like tech over fear of leaks, according to leaked documents
Internal documents and anonymous sources leak details of Apple's internal ban on ChatGPT-like technology and the plans for its own Large Language Model.

Apple employees restricted from using ChatGPT
Large Language Models have exploded in popularity in recent months, so it isn't a surprise to learn that Apple could be working on a version of its own. However, these tools are unpredictable and can leak user data, so employees have been blocked from using competing tools.
A report from The Wall Street Journal details an internal Apple document restricting Apple employees from using ChatGPT, Bard, or similar Large Language Models (LLMs). Anonymous sources also shared that Apple is working on its own version of the technology, though no other details were revealed.
Other companies, including Amazon and Verizon, have also restricted internal use of LLM tools. These tools rely on access to massive data stores and are continually trained on user interactions.
While users can turn off chat logging in products like ChatGPT, the software can still break and leak data, leading companies like Apple to distrust the technology for fear of exposing proprietary information.
If you're not an Apple employee using a work iPhone, the official ChatGPT app just launched on the App Store. Watch out, though: there are many imitation apps trying to scam users.
Apple and the "AI race"
Apple works on a lot of secret projects that never see the light of day. The existence of internal development of LLM technology at Apple isn't surprising, as it is a natural next step from its existing machine learning systems.
Apple could bring LLMs to Siri
The buzzword "Artificial Intelligence" tends to be thrown around rather loosely when discussing LLMs, GPT, and generative AI. It's been said so many times, like during Google I/O, that it has mostly lost all meaning.
Neural networking and machine learning are nothing new to Apple, despite pundit coverage suggesting Apple is somehow behind in the "AI" race. Siri's launch in 2011, followed by advances in computational photography on iPhone, shows Apple has always been involved in intelligent computing via machine learning. LLMs are an evolution of that same technology set, just on a much larger scale.
Some expect Apple to reveal some form of LLM during WWDC in June, but that may be too soon. The technology has only just begun to accelerate, and Apple tends to wait until its internal version matures before releasing such things.
Apple's fear of data leaks by LLMs isn't misplaced. However, as Thursday's reporting itself demonstrates, the data-leak problem remains a human one: the ban came to light through a leaked internal document and anonymous sources.
Comments
LOLITA read the Wall Street Journal every day and answered questions on it. It could answer questions in Chinese as easily as in English, without any translation. My role was to optimise the 'dialogue' module.
The difference between those efforts and the latest LLMs (and other buzzwords) is that LOLITA attempted to model a deep, semantic understanding of the material, while current AI only uses big, now-abundant data to imitate a human response.
As ever in the UK, research funding ran out and no attempt was made to exploit the project commercially. In the US, I feel it would have been a little different, and the world might have been different too.
Neural-net-based AI only replicates all the mistakes and failings of the average human. AI can achieve so much more.
But it was never intended to empower people. If it were, you could easily discover all the lies politicians throw at humanity, but the current model is controllable and can avoid that.
“Regurgitative AI” is more accurate than “generative AI”.
I'm using the Bing chat feature almost every day now. It's useful, but it's more like the next step in search engines than the precursor to Mr. Data.
Calling these tools "highly advanced cut-and-paste" is a gross oversimplification of what they do. In fact, it may not even be an oversimplification, but rather just flat-out wrong. When you are typing, are you performing "highly advanced cut-and-paste"?
Apple bans internal use of ChatGPT-like tech over fear of leaks, according to leaked documents
I see the irony in that title. Apple isn't using ChatGPT and they still have leaks.
It seems like AI is being demonized by people who don’t really understand what it is.
Pasting in code or part of a business or marketing plan, or even asking certain questions, can leak information, especially when that information is sent to external servers owned by someone else.
Kind of surprised that more companies haven't figured this out.
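To make that concrete, here is a minimal sketch of what pasting a snippet into a chatbot actually looks like on the wire, assuming the public OpenAI chat completions endpoint; the code snippet and API key below are hypothetical placeholders. Whatever you paste travels verbatim in the request body to a server you don't control.

    # Minimal sketch: anything pasted into the prompt leaves the device
    # verbatim in this request body, bound for a third-party server.
    # Assumes the public OpenAI chat completions API; the snippet and
    # key are hypothetical placeholders.
    import requests

    PROPRIETARY_SNIPPET = '''
    def unreleased_feature(user):
        # internal-only logic an employer would not want shared
        return user.entitlements & FEATURE_FLAG_X
    '''

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "user",
                 "content": "Why is this slow?\n" + PROPRIETARY_SNIPPET},
            ],
        },
        timeout=30,
    )
    print(response.json()["choices"][0]["message"]["content"])

Once the request is sent, retention and training policies are entirely in the provider's hands, which is exactly the exposure that internal bans like Apple's are meant to contain.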
AI is just a marketing term that's used to try and avoid lawsuits.
As have you, since birth.