Apple's generative AI may be the only one that was trained legally & ethically


Comments

  • Reply 21 of 31
    eriamjh Posts: 1,681 member
    I guess I’ll go out on a limb and ask what part of using copyrighted works to train anything was illegal against their terms of service?

    What law protects the works from being read?  

    People spout words like being “used without permission” but where is it identified that this use requires permission? 

    I saw an artist’s work.  Do I need their permission to use it as input to my own thoughts or to a computer program?  Do I need to credit someone for their work to be used as input?  

    Let’s bring this concept full circle.  How does copyrighting anything disallow its use for training, inspiration, application, or analysis by anyone else?  

    Some guy who did stuff once said “Good artists create.  Great artists steal.”  What’s being stolen?   Did I just steal that quote?  
    Was it copyrighted or is it general knowledge of a thing because it was said and written down?  Do I have to credit the person who said it?  Do I owe him money?   Was he a jerk to people?  Is he even alive?
    Do I owe his heirs something now?

    I find the arguments to be word salad and gobbledygook.  The word “used” is vague when it comes to the application of an artist’s work.  I can’t pay them for thinking about their work.  Why should I pay them because I fed a JPG of it into a computer to create something else from it?

    Explain what’s immoral or wrong about any of this, and justify it by defining how machine learning differs from any other “use” of copyrighted work.  Define “use”.

    I welcome counterpoints that justify that position logically and legally.  Perhaps some sides are trying to create a way of being compensated for something they never intended to happen.  Sounds like a money grab.  

    “Stop not paying me for something I never thought I could demand money for!”
  • Reply 22 of 31
    AppleZulu Posts: 2,075 member
    eriamjh said:
    I guess I’ll go out on a limb and ask what part of using copyrighted works to train anything was illegal against their terms of service? […]
    You’re under the mistaken impression that there isn’t law and precedent governing these things already. “Use” is already well defined in the law. 

    Humans learn from the work of other humans. You may read, listen to or view the creative work of other humans, often even without purchasing or owning a copy. If you have an eidetic memory, it’s possible you could even retain an exact copy of other people’s creative work in your brain. You may not, however, legally take copies of others’ work and store them on shelves, in closets, or on hard drives without getting permission, paying for it, or otherwise legally obtaining it. Anything outside of your skull is not an allowable storage space for unpaid or unlicensed copies of others’ intellectual property. Still, if nobody knows you have that stuff, you might get away with it. 

    It’s when you start producing what is ostensibly your own output that laws and rules particularly come into play. There are “fair use” allowances for quoting others’ work exactly. It’s when you start doing so extensively while claiming it’s your own work product that it becomes a particular problem. Students and young artists will go through phases of showing their influences until, through mistake and intent, they find their own voice and create original work. They’ve learned from their influences, but are now doing their own thing. 

    If you’re not good, or if you’re just unethical, you just keep using others’ work and call it your own. Others will see you as derivative, unoriginal or as a hack. Legally you will be seen as a plagiarist, copyright violator and thief. If you seek or obtain income from it, it’ll really get some attention. 

    If you’re taken to court, it will be determined if you were exposed to a prior work, and if so, whether or not your subsequent work in question is sufficiently altered to be original. If not, you lose. 

    John Lennon wrote many completely original songs, but he had to pay Chuck Berry when he quoted “You Can’t Catch Me” too directly in “Come Together.”

    Generative AI starts with intake of vast quantities of prior work. Thus far, its output is a derivative collage and pastiche. The algorithms act as a digital blender, but are incapable of originality. 

    Ask a good writer to write about green eggs and ham in the style of Dr. Seuss, and she will probably decline, but if she wants the challenge, she might write about that subject, but mimic the style of “One Fish Two Fish Red Fish Blue Fish,” just to try to be clever. 

    Put the same request to generative AI (or a human hack) and you’ll get an almost verbatim regurgitation of the original work, because the AI is not a creative process. It is storage, blender, regurgitation, and the above instruction short circuits the blender. 

    Rap artists often use samples of others’ work as the basis for their own. The rap they perform on top of the samples may be brilliantly original, but they must still get permission and pay for the use of the sampled work. Generative AI does the sampling bit, but not the original part. 

    Because generative AI isn’t really learning from prior work and then creating original output of its own, permission must be sought and paid for before it generates the collages it derives from others’ work. And because generative AI is not a sentient being, its owners and producers are the ones on the hook for storing and using other people’s work. 
  • Reply 23 of 31
    Xed Posts: 2,686 member
    Legal & ethical? It’s so alarming. A “legal” LLM from the CCP washes 1.4 billion brains; a “balanced” LGBTQ ethics would reckon a trans woman a woman; and whose god is “they” (because you cannot use he or she anymore)? “What is a man and what is a woman” becomes an Apple-controlled answer. The best and only way is to give iPhone users the choice of what to install. 

    Besides, “upgrade” means replacement, and LLM is being shifted from “Large Language Model” to “Language Learning Model” in Apple’s terminology.  Can we trust Apple?
    Well that was a lot of nothing.
  • Reply 24 of 31
    Just preparing us for a Siri-level stupid and useless GPT from Apple. And Apple, being rich, is trying to raise the cost for everyone else doing generative AI. 
  • Reply 25 of 31
    Legal & ethical? It’s so alarming. […]
    I totally agree with your point. But Apple is one of the companies that is not that woke, so I still trust them (unlike Meta or Google).
  • Reply 26 of 31
    From what I can read in the article, all the licensing is pretty one-sided, biasing what it learns. This may be another piece of propaganda programming.
  • Reply 27 of 31
    Xed said:
    Looks like excuses have already started.
    Ethics are an excuse for what exactly? Being responsible?
    No. For the failure it is going to be. 
  • Reply 28 of 31
    Xed said:
    This site should be called applerumors.com or something similar. 

    Here, we get a lot of news containing “may”, “might”, etc. 

    Apple is searching for an excuse here. 
    An excuse for what exactly?

    An excuse for pushing the AI agenda. Judging by the current consensus on AI art, Apple's effort might end up in vain even if what it claims is true. People will still associate AI with plagiarism and art theft, and will readily boycott Apple's AI despite the facts. 


  • Reply 29 of 31
    Xed Posts: 2,686 member
    Xed said:
    Looks like excuses have already started.
    Ethics are an excuse for what exactly? Being responsible?
    No. For the failure it is going to be. 
    Let it fail before you make such claims. The iPhone was a failure to many people even after it launched. Would you claim it's a failure now?

    Xed said:
    This site should be called applerumors.com or something similar. 

    Here, we get a lot of news containing “may”, “might”, etc. 

    Apple is searching for an excuse here. 
    An excuse for what exactly?

    An excuse for pushing the AI agenda. Judging by the current consensus on AI art, Apple's effort might end up in vain even if what it claims is true. People will still associate AI with plagiarism and art theft, and will readily boycott Apple's AI despite the facts. 
    Now there's an AI agenda? JFC! Next you'll tell us that we've always been at war with East AI.
  • Reply 30 of 31
    jdw Posts: 1,383 member
    Apple News is "quality" but how many of us even here on AppleInsider read it daily?  I certainly don't.  On mobile, I tend to use Flipboard and find it more than adequate, although I tend to get a good chunk of my Apple news here on AppleInsider.

    As to copyright and ethics, few end users will give heed to that in the end.  Most people just want the best AI possible.  And when the AI wars have been waged, 10 years hence, I doubt most consumers will look back on the early days of AI and scream, "Yeah, but Apple was the most ethical back then."  This is not to promote illegal or unethical activity, mind you.  It's just reality.  

    Apple needs to have some serious edge in its AI as compared to the competition, and that has nothing to do with whether it paid everyone appropriately, dotted every i and crossed every t.  And in light of how bad Siri is, I myself am going to be seriously disappointed if Apple releases something that isn't even as good as ChatGPT-3, which isn't all that great.  I basically want an AI that does what I tell it to do or gives me the results I need no less than 80% of the time.  I don't want to hear ChatGPT-style excuses like "I was only trained on data up through XX/XX" used as a cop-out for being unable to properly answer a question.  I would also like the ability to have my own trainable AI, and I don't mind sharing that with the greater AI out on the net.  In other words, an AI that Apple trains alongside every single user of that same AI.  Imagine how much faster it could improve in that case.
  • Reply 31 of 31
    Marvin Posts: 15,388 moderator
    jdw said:
    Apple News is "quality" but how many of us even here on AppleInsider read it daily?  I certainly don't.  On mobile, I tend to use Flipboard and find it more than adequate, although I tend to get a good chunk of my Apple news here on AppleInsider.
    I use Apple News every day in addition to other apps. According to Apple, it had over 125 million monthly users as of 2020, likely more now:

    https://techcrunch.com/2020/04/30/apple-news-hits-125-million-monthly-active-users

    Flipboard said they have a similar number, but they are on multiple platforms, so probably half or less of what Apple has on iOS:

    https://techcrunch.com/2022/12/13/magazine-app-flipboard-adds-support-for-original-content-and-conversation-with-new-notes-feature/
    jdw said:
    I don't want to hear ChatGPT style excuses like "I was only trained on data up through XX/XX" and uses that as a cop-out for being unable to properly answer a question.  I would also like the ability to have my own trainable AI, and I don't mind sharing that to the greater AI out on the net.  In other words, an AI that Apple trains along side every single user of that same AI.  Imagine how much faster it could improve in that case.
    That's not a cop-out; it takes a lot of processing power to train an AI, and they can't do it in real time. They might be able to do incremental training using LoRA, shipped like weekly or monthly patch updates with larger updates spread out over a few months, and web search combined with the model could also work:

    https://www.microsoft.com/en-us/research/publication/lora-low-rank-adaptation-of-large-language-models/
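    The LoRA idea mentioned above can be sketched in a few lines of numpy. This is a toy illustration, not Apple's or anyone's actual implementation: the matrix sizes, the rank `r`, and the `forward` helper are all made-up for the example. The point from the linked paper is that the big pretrained weight matrix stays frozen, and only a small low-rank correction `B @ A` is trained, which is cheap enough to ship as a small patch update.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    d_out, d_in, r = 64, 64, 4          # illustrative sizes; real models are far larger
    W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight

    # LoRA trains only a low-rank update B @ A on top of the frozen W.
    # B starts at zero, so before any training the adapted model
    # behaves exactly like the base model.
    A = rng.normal(size=(r, d_in)) * 0.01
    B = np.zeros((d_out, r))

    def forward(x):
        # base output plus the low-rank adaptation term
        return W @ x + B @ (A @ x)

    x = rng.normal(size=d_in)
    assert np.allclose(W @ x, forward(x))  # adapter is a no-op until trained

    # The adapter is far cheaper than full fine-tuning of W:
    lora_params = A.size + B.size   # r * (d_in + d_out) = 512
    full_params = W.size            # d_out * d_in = 4096
    print(lora_params, full_params)
    ```

    Only `A` and `B` would get gradient updates during an incremental training pass, which is why periodic small "patch" updates are plausible where retraining the full model in real time is not.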

    Some of the online AI services use input from users to train them, but they learned the hard way that this can go very wrong (it only took 24 hours):

    https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/