Apple bans internal use of ChatGPT-like tech over fear of leaks, according to leaked document...
I see the irony in that title. Apple isn't using ChatGPT and they still have leaks.
I wouldn't call that ironic. Now if the title said "Apple has quashed all leaks, according to leaked document," then that would be ironic. Removing a potential source of leaks doesn't mean Apple can curtail all possible leaks; the company has to be run by humans who need access to certain aspects of the business to do their jobs. So I don't think it's ironic that Apple wants to stop leaks while knowing that some will still occur. Even now we see far fewer full-fledged product leaks than we used to, despite Apple's increased popularity and growth.
I don't need access to a gigantic database of previously written text in order to type or write. AI programs do. Turn off the database access and they're useless.
As pointed out, they've been "trained" on a large corpus of information.
As have you, since birth.
Baloney. The "training" IS the database, i.e., exact duplicates of millions of written documents or millions of pixel-based images. That's what the "AI" constantly accesses. The idea that computers are going to retain knowledge without exact copies of the material being stored somewhere is ludicrous.
Human memory doesn't work the same way. You don't have word for word copies of every written piece of information you've seen in your lifetime available to pull up and rearrange in order to "write" something. Like I said, the reason these companies try to draw parallels to human intelligence and memory is to hide the way the systems really work and try to avoid lawsuits.
mr. h said: No, that is not what an LLM is. The model was trained on a large database, but it does not access that database when generating responses.
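To make that concrete, here's a toy sketch in Python (my own illustration, nowhere near a real LLM): the "training" step builds a little table of statistics from some text, the text is then deleted, and generation runs from the learned table alone.

# Toy stand-in for mr. h's point: training produces parameters (here,
# a table of character-transition counts) and generation uses only
# those parameters, not the original text.
import random

corpus = "the cat sat on the mat. the dog sat on the log."

# "Training": count which character tends to follow which.
counts = {}
for a, b in zip(corpus, corpus[1:]):
    counts.setdefault(a, []).append(b)

del corpus  # the training text is gone; only the learned counts remain

# "Generation": sample from the learned table alone.
ch = "t"
out = [ch]
for _ in range(40):
    ch = random.choice(counts.get(ch, [" "]))
    out.append(ch)
print("".join(out))

A real LLM does the same thing at vastly greater scale: the corpus shapes billions of numeric weights during training, and only those weights ship with the model.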
Computers have to access databases. That's how computer memory works. You don't load your video game once and then erase it from memory because the computer is going to somehow "remember" how to run the game. The game has to be installed on a drive.
Look at the article: companies like Apple are worried that using 3rd party AI = their valuable IP being stored in a 3rd party database for the AI to access. Adobe has already admitted that their plan to use AI to generate artwork is entirely based on the AI constantly accessing their stock image library AND that the artists whose images are being used need to be paid for it.
No. That is wrong. That is not how generative AI works. It's how a lot of people think it works, leading to the kinds of fallacious claims that you are making. Generative AI works in a much, much closer way to how your brain works than you either understand or are willing to admit.
An LLM's training and learning really is similar (not identical) to how you learnt your native language as a baby. No-one could "explain" to you what language is - you were just exposed to a lot of it, and your brain had to form new neural pathways to try to figure it out. Your brain detected that it was hearing the same "kinds" of sounds repeatedly, and that reinforced certain pathways in your brain. That forms the basic scaffold that enabled your brain to then recognise whole words, and from there, start to figure out how to use those words yourself.
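Here's an equally crude sketch of that reinforcement idea (a hypothetical toy: one numeric weight standing in for billions of connections): repeated exposure to examples nudges the weight toward the underlying pattern, and once training ends the examples themselves don't need to be kept.

# Toy "reinforced pathways": repeated exposure to examples adjusts a
# weight via gradient descent; the examples are not stored in the result.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x

w = 0.0    # one "synapse"; real models have billions
lr = 0.05  # how strongly each exposure reinforces the pathway

for _ in range(200):  # repeated exposure, like hearing the same sounds again and again
    for x, y in examples:
        pred = w * x
        grad = 2 * (pred - y) * x  # gradient of the squared error
        w -= lr * grad             # nudge the pathway

print(f"learned weight: {w:.3f}")  # ends up near 2.0; the examples can now be discarded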