Apple bans internal use of ChatGPT-like tech over fear of leaks, according to leaked documents

Posted in General Discussion, edited May 2023
Internal documents and anonymous sources have leaked details of Apple's internal ban on ChatGPT-like technology, as well as the company's plans for its own Large Language Model.

Apple employees restricted from using ChatGPT


Large Language Models have exploded in popularity in recent months, so it isn't a surprise to learn Apple could be working on a version for itself. However, these tools are unpredictable and tend to leak user data, so employees have been blocked from using competing tools.

A report from The Wall Street Journal details an internal Apple document restricting Apple employees from using ChatGPT, Bard, or similar Large Language Models (LLMs). Anonymous sources also shared that Apple is working on its own version of the technology, though no other details were revealed.

Other companies, such as Amazon and Verizon, have also restricted the use of LLM tools internally. These tools rely on access to massive data stores and constantly train based on user behavior.

While users can turn off chat logging for products like ChatGPT, the software can still break and leak data, leading companies like Apple to distrust the technology for fear of exposing proprietary information.

If you're not an Apple employee using a work iPhone, the ChatGPT app just launched on the App Store. Watch out, though: there are many imitation apps trying to scam users.

Apple and the "AI race"

Apple works on a lot of secret projects that never see the light of day. The existence of internal development of LLM technology at Apple isn't surprising, as it is a natural next step from its existing machine learning systems.

Apple could bring LLMs to Siri


The buzzword "Artificial Intelligence" tends to be thrown around rather loosely when discussing LLMs, GPT, and generative AI. It has been repeated so many times, such as during Google I/O, that it has largely lost its meaning.

Neural networking and machine learning are nothing new to Apple, despite pundit coverage suggesting Apple is somehow behind in the "AI" race. Siri's launch in 2011, followed by advances in computational photography on iPhone, shows Apple has always been involved in intelligent computing via machine learning. LLMs are an evolution of that same technology set, just on a much larger scale.

Some expect Apple to reveal some form of LLM at WWDC in June, but that may be too soon. The technology has only just begun to accelerate, and Apple tends to wait until its internal work matures before releasing it.

Apple's fear of data leaks via LLMs isn't misplaced. However, as evidenced by Thursday's reporting, the data-leak problem still seems to come down to the human element.


Read on AppleInsider

Comments

  • Reply 1 of 25
    timmillea Posts: 247 member
    I worked on LOLITA (Large-scale, Object-based, Linguistic Interactor, Translator, and Analyser) at the University of Durham (UK) in the 1990s. Half my undergraduate education was in AI, back then. I worked in artificial intelligence before anyone I knew knew what it meant. I think they still don't. "Simulating successful human, external behaviour" - like composing a great symphony, writing and directing a film, and so on. Not cheating in politics or killing people. 

    LOLITA read the Wall Street Journal every day and answered questions on it. It could answer questions in Chinese as easily as English, without any translation. My role was to optimise the 'dialogue' module.

    The difference between those efforts and the latest LLMs, and other buzzwords, is that LOLITA attempted to model a deep, semantic understanding of the material while current AI only uses big and now-abundant data to imitate a human response.

    As ever in the UK, research funding ran out and no attempt was made to exploit the project commercially. In the US, I feel it would have been a little different, and the world might have been different.

    Neural net based AI only replicates all the mistakes and failings of the average human. AI can achieve so much more. 

    edited May 2023 · frantisek, Alex_V, gregoriusm, dewme, appleinsideruser, Scot1, elijahg, byronl, Alex1N, beowulfschmidt
  • Reply 2 of 25
    frantisek Posts: 760 member
    timmillea said:
    I worked on LOLITA (Large-scale, Object-based, Linguistic Interactor, Translator, and Analyser) at the University of Durham (UK) in the 1990s. Half my undergraduate education was in AI, back then. I worked in artificial intelligence before anyone I knew knew what it meant. I think they still don't. "Simulating successful human, external behaviour" - like composing a great symphony, writing and directing a film, and so on. Not cheating in politics or killing people. 

    LOLITA read the Wall Street Journal every day and answered questions on it. It could answer questions in Chinese as easily as English, without any translation. My role was to optimise the 'dialogue' module.

    The difference between those efforts and the latest LLMs, and other buzzwords, is that LOLITA attempted to model a deep, semantic understanding of the material while current AI only uses big and now-abundant data to imitate a human response.

    As ever in the UK, research funding ran out and no attempt was made to exploit the project commercially. In the US, I feel it would have been a little different, and the world might have been different.

    Neural net based AI only replicates all the mistakes and failings of the average human. AI can achieve so much more. 


    But it was never intended to empower people. If it were, you could easily discover all those lies politicians throw at humanity, but the current model is controllable and can avoid that.
  • Reply 3 of 25
    blastdoor Posts: 3,443 member
    “Algorithmic Insights” would be a more accurate description of “AI”.

    “Regurgitative AI” would be more accurate than “generative AI”.

    I'm using the Bing chat feature almost every day now. It’s useful, but it’s more like the next step in search engines than the precursor to Mr. Data. 
    Alex_V, watto_cobra
  • Reply 4 of 25
    mr. h Posts: 4,870 member
    "despite pundit coverage suggesting Apple is somehow behind in the "AI" race" is clearly written to suggest that you don't agree that Apple is behind in the AI race. Well, I'd love some of what you're drinking, because from my perspective, Siri is a woeful piece of garbage whilst I have had several helpful interactions with ChatGPT 4, Bard, and Bing. I also have several pieces of Art on my wall that were generated by DALL·E 2.
    williamlondon
  • Reply 5 of 25
    blastdoor Posts: 3,443 member
    mr. h said:
    "despite pundit coverage suggesting Apple is somehow behind in the "AI" race" is clearly written to suggest that you don't agree that Apple is behind in the AI race. Well, I'd love some of what you're drinking, because from my perspective, Siri is a woeful piece of garbage whilst I have had several helpful interactions with ChatGPT 4, Bard, and Bing. I also have several pieces of Art on my wall that were generated by DALL·E 2.
    I think the argument would be that there’s more to “AI” than LLMs, and that Apple is doing OK in those other things. 

    I’m not sure I buy that argument, but I think that would have to be the argument.

    watto_cobra, Alex1N
  • Reply 6 of 25
    These are the same companies that are fine with AI scraping the internet for other people's IP. 
    danox, watto_cobra, williamlondon, Alex_V, Alex1N
  • Reply 7 of 25
    blastdoor said:
    “Algorithmic Insights” would be a more accurate description of “AI”.

    “Regurgitative AI” would be more accurate than “generative AI”.

    I'm using the Bing chat feature almost every day now. It’s useful, but it’s more like the next step in search engines than the precursor to Mr. Data. 
    Highly advanced cut-and-paste. 
    watto_cobra, williamlondon, Alex_V, Alex1N
  • Reply 8 of 25
    mr. h Posts: 4,870 member
    These are the same companies that are fine with AI scraping the internet for other people's IP. 
    It's not really that much different to how humans learn.

    Highly advanced cut-and-paste.
    Is a gross oversimplification of what these tools do. In fact, it may not even be an oversimplification, but rather just flat-out wrong. When you are typing, are you performing "highly advanced cut-and-paste"?
    edited May 2023 · williamlondon, Alex1N
  • Reply 9 of 25
    22july2013 Posts: 3,652 member

    Apple bans internal use of ChatGPT-like tech over fear of leaks, according to leaked documents


    I see the irony in that title. Apple isn't using ChatGPT and they still have leaks.
    mazda 3s, Alex1N
  • Reply 10 of 25
    danox Posts: 3,128 member

    Apple bans internal use of ChatGPT-like tech over fear of leaks, according to leaked documents


    I see the irony in that title. Apple isn't using ChatGPT and they still have leaks.
    Could that be because the weakest link is always the humans in the chain? As usual with all leaks.
    watto_cobra, Alex1N
  • Reply 11 of 25
    hexclock Posts: 1,291 member
    Banned for fear of leaks, says a leaker. You can’t make this stuff up. 
    watto_cobra, mazda 3s, williamlondon, Alex1N
  • Reply 12 of 25
    bala1234 Posts: 147 member
    I don't know why this is made out to be such a big deal. It should be a standard policy issued by most big companies. The risk is not just leaking their own copyrighted code and other information but also the reverse risk of accidental use of copyrighted code generated by the LLMs.
    dewme, watto_cobra, Scot1, Alex1N
  • Reply 13 of 25
    dewme Posts: 5,560 member
    timmillea said:
    I worked on LOLITA (Large-scale, Object-based, Linguistic Interactor, Translator, and Analyser) at the University of Durham (UK) in the 1990s. Half my undergraduate education was in AI, back then. I worked in artificial intelligence before anyone I knew knew what it meant. I think they still don't. "Simulating successful human, external behaviour" - like composing a great symphony, writing and directing a film, and so on. Not cheating in politics or killing people. 

    LOLITA read the Wall Street Journal every day and answered questions on it. It could answer questions in Chinese as easily as English, without any translation. My role was to optimise the 'dialogue' module.

    The difference between those efforts and the latest LLMs, and other buzzwords, is that LOLITA attempted to model a deep, semantic understanding of the material while current AI only uses big and now-abundant data to imitate a human response.

    As ever in the UK, research funding ran out and no attempt was made to exploit the project commercially. In the US, I feel it would have been a little different, and the world might have been different.

    Neural net based AI only replicates all the mistakes and failings of the average human. AI can achieve so much more. 

    Very interesting indeed. As someone who understands AI, does it make sense to you why Apple considers ChatGPT-like implementations to be an exceptional threat due to their apparent (but possibly misunderstood) relationship to "AI"?  

    To me the threats associated with ChatGPT-like implementations are no different from those of any other potential point of egress from a protected perimeter, all of which fall under the concerns of information security (InfoSec) best practices. The threat associated with an employee "conversing" with an external entity using ChatGPT is really no different from an employee conversing with an old college buddy or neighbor about what he/she is working on at work. The threat isn't really due to the AI but to the existence of a new egress point that is compromised by the unintentional leaker.

    It seems like AI is being demonized by people who don’t really understand what it is.
    edited May 2023 · appleinsideruser, Alex1N
  • Reply 14 of 25
    hmlongco Posts: 553 member
    dewme said:
    It seems like AI is being demonized by people who don’t really understand what it is.
    One doesn't need to know the definition to understand that using it involves giving it access to information that could be logged, stored, and analyzed.

    Pasting in code or part of a business or marketing plan, or even asking certain questions, can leak information, especially when said information is sent to external servers owned by someone else.

    Kind of surprised that more companies haven't figured this out.
    danox, watto_cobra, Scot1, Alex1N
  • Reply 15 of 25
    dewme Posts: 5,560 member
    hmlongco said:
    dewme said:
    It seems like AI is being demonized by people who don’t really understand what it is.
    One doesn't need to know the definition to understand that using it involves giving it access to information that could be logged, stored, and analyzed.

    Pasting in code or part of a business or marketing plan, or even asking certain questions, can leak information, especially when said information is sent to external servers owned by someone else.

    Kind of surprised that more companies haven't figured this out.

    Pasting code, disclosing non-public parts of business plans, or asking certain (leading) questions outside of the trust/secure boundaries are all InfoSec leaks that have existed since the very first human had something to hide from the second human. Most companies and government agencies, including the military, have figured this out and have policies in place to prevent such leaks. Far too few companies other than the military and government contractors actually implement real processes and put systems in place to follow through on their information security policies. Even where systems are in place, as we've seen on several occasions and specifically with the recent disclosure of secret documents, these systems still fail. The root cause of these failures has usually been humans, some intentional and some due to incompetence. So far I don't know of a scenario where AI was the root cause of the failure. 

    I'm not disputing the dangers of AI; I'm simply asking someone who has some experience with AI to weigh in on the relationship between InfoSec failures and AI. If we arbitrarily attribute all of the bad stuff that can happen to AI, we're probably overlooking some of the root causes of failures, one of which is the really sloppy job many organizations do when it comes to information security. Additionally, we're probably overlooking some of the potential ways that AI may be able to help us prevent some of these root-cause failures, which is why we don't want to demonize it without fully understanding it.

    I'm assuming you know that companies, especially publicly held companies, already disclose a wealth of information to the public including financial disclosures, quarterly and annual results, forecasts, patent disclosures, executive summaries, white papers, etc. All of this information is out there and consumable by good guys and bad guys looking to form a profile of what the company is up to, even more so when aggregated with related data, say from suppliers and partners. We see it here on AppleInsider, and we've seen the damage that occurs when someone on the inside leaks information that should not have been leaked to the outside.   

    One of my takes on AI dangers, at least from what I understand, is similar to the dangers or flaws attributed to automation or the internet. When you automate a process you can increase the production rate and capacity exponentially. But what if your process is flawed? You end up with a massive quantity of defective things, very quickly. Similarly, the internet and especially social media allow communication on a massive scale. But what if the information conveyed on the global communication network is flawed? You end up with disinformation on a massive scale, which is potentially deadly. If we had demonized automation or the internet, we'd be in very bad shape today. 

    I suspect that AI will follow a similar pattern, being neither inherently bad nor inherently good, but also being a massive amplifier for carrying out the human intentions behind it, whether good or bad. This is probably why a lot of folks want to put a pause on it to better understand the pros and cons and put safeguards in place to ward off the bad things. The question I have is whether the applications we are seeing right now, like ChatGPT, are really indicators of the damage potential of AI as a whole, or just ways to amplify existing problems and deficiencies, like extremely sloppy and grossly naive information security within organizations, which should have been corrected well before ChatGPT arrived on the scene. 
    mr. h, Alex1N
  • Reply 16 of 25
    mr. h said:
    These are the same companies that are fine with AI scraping the internet for other people's IP. 
    It's not really that much different to how humans learn.
    AI programs don't learn. They just leverage enormous databases in a way that can be marketed to the average person as "learning". The example of that would be the Go-playing AI that was touted as too advanced for a human opponent to beat, and then was beaten over and over and over with a strategy that wasn't in its database. It couldn't recognize what was being done even though a reasonably proficient human opponent would. 

    AI is just a marketing term that's used to try and avoid lawsuits.

    edited May 2023 williamlondon
  • Reply 17 of 25
    mr. h said:
    Highly advanced cut-and-paste.
    Is a gross oversimplification of what these tools do. In fact, it may not even be an oversimplification, but rather just flat-out wrong. When you are typing, are you performing "highly advanced cut-and-paste"?
    I don't need access to a gigantic database of previously written text in order to type or write. AI programs do. Turn off the database access and they're useless. 
    edited May 2023 · danox, williamlondon
  • Reply 18 of 25
    DarkMouze Posts: 37 member
    mr. h said:
    "despite pundit coverage suggesting Apple is somehow behind in the "AI" race" is clearly written to suggest that you don't agree that Apple is behind in the AI race. Well, I'd love some of what you're drinking, because from my perspective, Siri is a woeful piece of garbage whilst I have had several helpful interactions with ChatGPT 4, Bard, and Bing. I also have several pieces of Art on my wall that were generated by DALL·E 2.
    I suspect that what the author is referring to is the research that Apple is currently doing for future implementations and products. Comparing Siri, a relatively mature, old-school product, to other companies’ current cutting-edge releases isn’t really the point.
    williamlondon
  • Reply 19 of 25
    mr. h Posts: 4,870 member
    mr. h said:
    Highly advanced cut-and-paste.
    Is a gross oversimplification of what these tools do. In fact, it may not even be an oversimplification, but rather just flat-out wrong. When you are typing, are you performing "highly advanced cut-and-paste"?
    I don't need access to a gigantic database of previously written text in order to type or write. AI programs do. Turn off the database access and they're useless. 
    No, that is not what an LLM is. The model was trained on a large database, but it does not access that database when generating responses.
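    To make that concrete, here is a minimal sketch (my own illustration, assuming Python with the open-source Hugging Face transformers library and the small "gpt2" model; none of this is specific to ChatGPT): once the weight files have been downloaded, generation runs entirely from those frozen parameters.

        # Minimal sketch: an LLM generates from stored weights, not a database lookup.
        # Assumes: pip install transformers torch; "gpt2" is an illustrative choice.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")     # fetches weight files once
        model = AutoModelForCausalLM.from_pretrained("gpt2")  # ~500 MB of parameters

        inputs = tokenizer("Large language models are", return_tensors="pt")
        # Generation is repeated next-token prediction from the frozen weights;
        # nothing below queries the corpus the model was trained on.
        outputs = model.generate(**inputs, max_new_tokens=20)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    After the first download you can go offline and it will still generate text, which is the point: the training data itself is never looked up.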
    hmlongco, williamlondon, elijahg, chutzpah
  • Reply 20 of 25
    hmlongco Posts: 553 member
    I don't need access to a gigantic database of previously written text in order to type or write. AI programs do. Turn off the database access and they're useless. 
    As pointed out, they've been "trained" on a large corpus of information.

    As have you, since birth.
    gatorguy, mr. h, williamlondon, elijahg