Court says AI training on copyrighted material is legal

Posted in General Discussion

A ruling from a U.S. District Court has effectively given AI companies permission to train artificial intelligence models on copyrighted works, in a decision that's extremely problematic for creative industries.

Anthropic logo on top of coding and court imagery



Content creators and artists have been suffering for years, with AI companies scraping their sites and scanning books to train large language models (LLMs) without permission. That data is used for generative AI and other machine learning tasks, then monetized by the scraping company with no compensation for the original host or author.

Following a ruling issued on Tuesday by the U.S. District Court for the Northern District of California, companies are effectively being given free rein to train on just about any published media they want to harvest.

The ruling is based on a lawsuit from Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson against Anthropic dating back to 2024. The suit accused the company of using pirated material to train its Claude AI models.

This included Anthropic creating digital copies of printed books for AI model training.

The ruling from Judge William Alsup -- a judge very familiar to readers of AppleInsider -- finds in favor of each side in various ways. However, the weight of the ruling certainly sides with Anthropic and AI scrapers in this instance.

Under the ruling, Judge Alsup says that the copies used to train specific LLMs were justifiable as fair use.

"The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup commented.

Converting physical copies from a print library into a digital library was also deemed fair use, as was using that digitized content to train LLMs.

Alsup compared the authors' complaint to making the same argument against an effort to train schoolchildren how to write well. It's not clear how that applies, given that artificial intelligence models are not considered "schoolchildren" in any legal sense.

In that argument, Alsup ruled that the Copyright Act is intended to advance original works of authorship, not to "protect authors against competition."

Where the authors saw a small amount of success was in the usage of pirated works. Creating a library of pirated digital books, even if they are not used for the training of a model, does not constitute fair use.

That remains the case even if Anthropic later bought a legitimate copy of a book it had pirated in the first place.

On the piracy claims, the court will hold a trial to determine damages against Anthropic.

In May, it was reported that Apple was working with Anthropic to integrate the Claude Sonnet model into a new AI-powered version of Xcode, to help reshape developer workflows.

Bad news for content makers



The ruling is terrible for artists, musicians, and writers. Other professions whose livelihoods could be endangered by machine learning models will have issues too -- including, perhaps, judges who say they once took a coding class and therefore know what they're talking about with tech.

AI models take advantage of the hard work and life experiences of media creators, and pass it off as their own. At the same time, the ruling leaves content producers with few options to combat the phenomenon.

As it stands, the ruling will clearly be cited as precedent in other lawsuits in the AI space, especially those dealing with producers of original works that are pillaged for training purposes.

Over the years, AI companies have been attacked for grabbing any data they could to feed their LLMs, including content scraped from the Internet without permission.

This is a problem that manifests in quite a few ways. The most obvious is in generative AI, as models can be trained to create images in specific styles, which devalues the work of actual artists.

An example of a fightback is a lawsuit from Disney and Universal against Midjourney, which surfaced in early June. The company behind the AI image generator is accused of mass copyright infringement for training its models on images of the most recognizable characters from the studios.

The studios unite in calling Midjourney "a bottomless pit of plagiarism," built on the unauthorized use of protected material.

When you have two major media companies that are usually bitter rivals uniting for a single cause, you know it's a serious issue.

It's also a growing issue for websites and publishers, like AppleInsider. Instead of using a search tool and viewing websites for information, a user can simply ask an AI model for a customized summary, without ever needing to visit the site the information was sourced from in the first place.

And that information is often wrong, or combined with data from other sources in ways that pollute the original meaning of the content. For instance, we've seen our how-to tips plagiarized, with sections reproduced verbatim and mashed up out of order with material from other sites, producing a procedure that doesn't work.

The question of how to compensate publishers for lost revenue has still not been answered in a meaningful way. Some companies have been trying to stay on the more ethical side of things, with Apple among them.

Apple has offered news publishers millions to license content for training its generative AI. It has also paid for licenses from Shutterstock, which helped it develop the visual engines used for Apple Intelligence features.

Major publishers have also taken to blocking AI services from accessing their archives, doing so via robots.txt. However, this only stops ethical scrapers, not everyone. And scraping an entire site consumes server power and bandwidth -- which is not free for the site that's getting scraped.
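For publishers wondering what that looks like in practice, here is a minimal robots.txt sketch. The user agents listed (GPTBot, ClaudeBot, Google-Extended, and CCBot) are crawler names publicly documented by OpenAI, Anthropic, Google, and Common Crawl, but the exact set to block is an assumption about which services a site wants to exclude -- and, as noted, compliance with these directives is entirely voluntary.

# robots.txt -- illustrative opt-out for known AI crawlers
# (list is not exhaustive; honoring it is up to the crawler)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /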

The ruling also follows an increase in efforts from major tech companies to lobby for a decade-long block on U.S. states introducing AI regulation.

Meanwhile, in the EU, there have been attempts to sign tech companies up to an AI Pact to develop AI in safe ways. Apple is apparently not involved in either effort.





Comments

  • Reply 1 of 49
    timpetus Posts: 73 member
    I seem to remember someone offering a program that "poisons" image data in a way that is undetectable by the human eye, but makes it not only useless for AI training but actually harmful to any AI model trained using the photo. This is a great way to (a) protect your work without relying on nearly impossible detection and legal enforcement measures and (b) accelerate the inevitable destruction of so-called AI image generation. I believe no matter what, we will get to a state of GIGO with "AI" soon, where the verifiably human-generated pool of training data will continually shrink in comparison to the massive deluge of AI-generated data, some of which will be unidentifiable as such at time of selection.
  • Reply 2 of 49
    Xed Posts: 3,232 member
    timpetus said:
    I seem to remember someone offering a program that "poisons" image data in a way that is undetectable by the human eye, but makes it not only useless for AI training but actually harmful to any AI model trained using the photo. This is a great way to (a) protect your work without relying on nearly impossible detection and legal enforcement measures and (b) accelerate the inevitable destruction of so-called AI image generation. I believe no matter what, we will get to a state of GIGO with "AI" soon, where the verifiably human-generated pool of training data will continually shrink in comparison to the massive deluge of AI-generated data, some of which will be unidentifiable as such at time of selection.
    I'm not saying that shouldn't be done, but it will end up in a back-and-forth war where AI gets trained to detect the issue, not unlike how the fight to detect spam or bots is a never-ending battle. It's just an inevitable element of bad actors in technology.
  • Reply 3 of 49
    22july2013 Posts: 3,838 member
    If everyone who writes a comment on this page will send a fee to Dr Seuss for learning from his books to read and speak, then I will pay attention to their views if they oppose AI learning from published sources. But if you aren't willing to pay everyone that you learn from, for every word that comes out of your mouth, then I don't see why AI should have to pay either. Next, are we going to charge aliens for learning English by reading the radio waves that are being sent into deep space?
  • Reply 4 of 49
    DAalseth Posts: 3,297 member
    There is a reason that two years ago I pulled all of my art, stories, essays, books, all of it off of the web. 
  • Reply 5 of 49
    If everyone who writes a comment on this page will send a fee to Dr Seuss for learning from his books to read and speak, then I will pay attention to their views if they oppose AI learning from published sources. But if you aren't willing to pay everyone that you learn from, for every word that comes out of your mouth, then I don't see why AI should have to pay either. Next, are we going to charge aliens for learning English by reading the radio waves that are being sent into deep space?
    In addition to the fee that was paid when the books were purchased?

    Somehow you’ve convinced yourself that AI is sentient and learning from influences. It’s not. It has a database of pirated data that it uses to essentially copy/paste responses from.

    If I wrote a book called “Blue Eggs and Spam” and charged people for it, you better believe I’d be sued. It shouldn’t be any different when AI companies do it. 
    edited June 24
  • Reply 6 of 49
    danox Posts: 3,841 member
    If everyone who writes a comment on this page will send a fee to Dr Seuss for learning from his books to read and speak, then I will pay attention to their views if they oppose AI learning from published sources. But if you aren't willing to pay everyone that you learn from, for every word that comes out of your mouth, then I don't see why AI should have to pay either. Next, are we going to charge aliens for learning English by reading the radio waves that are being sent into deep space?
    The difference is, you are a live human being capable of learning and reasoning; AI at this point in history is a mirage, a facsimile….

    Another thing that is wrong is that these same AI model companies want the government to give them a moat around AI, so that only they can benefit at the top of the pyramid. It is no surprise that these judges and government representatives are guardians of the upper 1%'s business interests time after time. One would like to believe that, for once, societal unselfishness and the greater good would rise to the top. Oh well….
    edited June 24
  • Reply 7 of 49
    sunman42 Posts: 344 member
    If everyone who writes a comment on this page will send a fee to Dr Seuss for learning from his books to read and speak, then I will pay attention to their views if they oppose AI learning from published sources. But if you aren't willing to pay everyone that you learn from, for every word that comes out of your mouth, then I don't see why AI should have to pay either. Next, are we going to charge aliens for learning English by reading the radio waves that are being sent into deep space?
    That’s the point, though: libraries, schools, or whatever grownup bought that book that helped you learn to read paid Dr. Seuss’s publisher, who paid him and his agent. And school teachers, under the fair use clause, could make copies of excerpts of those books to help in reading lessons. Where does any AI outfit other than Apple pay for anything the engines are scraping?

    But you’re on the right track on one thing: we should definitely start beaming Dr. Seuss books to the stars.
  • Reply 8 of 49
    AppleZulu Posts: 2,504 member
    If everyone who writes a comment on this page will send a fee to Dr Seuss for learning from his books to read and speak, then I will pay attention to their views if they oppose AI learning from published sources. But if you aren't willing to pay everyone that you learn from, for every word that comes out of your mouth, then I don't see why AI should have to pay either. Next, are we going to charge aliens for learning English by reading the radio waves that are being sent into deep space?
    Very few people have eidetic memory. Lots have read and learned from Dr. Seuss, but few, having read it once, could recite Green Eggs and Ham verbatim from memory. I once queried one of the popular LLM AI programs to write a story about green eggs and ham in the style of Dr. Seuss. The AI program then regurgitated the original, almost verbatim. 

    These programs consist of all this accumulated information scraped from wherever it can be scraped, combined with sufficient computational power to brute force a most-probable sequence of words in response to a submitted query. There is no reasoning or thinking or even learning in the human sense involved. 

    Had I submitted that green eggs and ham query to human writers, many would simply tell me Dr. Seuss had already written that. Some more creative people might think about it and do a mash-up, rewriting, say, Horton Hears a Who, but changing the story to be about green eggs and ham. Someone else might actually write an entirely original story about green eggs and ham, using a fresh helping of nonsense along with Seuss's characteristic rhyme and meter conventions. 

    The LLM AI, however, doesn't think at all, but rather spits out collages made from other people's work. A middle school or high school student has absorbed a tiny fraction of the amount of information indexed by an LLM program, has received a tiny fraction of its programming (e.g. classroom instruction), and will apply a tiny fraction of the computational power used by AI to produce a written paper in response to an instruction or assignment. And yet an average or better student will, without committing plagiarism, produce a better written, more accurate, less hallucinatory paper than AI will. 

    AI does not learn; it scrapes and indexes. AI does not think or create; it regurgitates. 
  • Reply 9 of 49
    gatorguy Posts: 24,755 member
    sunman42 said:
    If everyone who writes a comment on this page will send a fee to Dr Seuss for learning from his books to read and speak, then I will pay attention to their views if they oppose AI learning from published sources. But if you aren't willing to pay everyone that you learn from, for every word that comes out of your mouth, then I don't see why AI should have to pay either. Next, are we going to charge aliens for learning English by reading the radio waves that are being sent into deep space?
    That’s the point, though: libraries, schools, or whatever grownup bought that book that helped you learn to read paid Dr. Seuss’s publisher, who paid him and his agent. And school teachers, under the fair use clause, could make copies of excerpts of those books to help in reading lessons. Where does any AI outfit other than Apple pay for anything the engines are scraping?

    But you’re on the right track on one thing: we should definitely start beaming Dr. Seuss books to the stars.
    Google pays for some training data, signing licensing deals, as does OpenAI.
  • Reply 10 of 49
    9secondkox2 Posts: 3,590 member
    Horrible decision. Hopefully it reaches the Supreme Court and we get a sane precedent. 

    Plagiarism is plagiarism no matter if it’s human or AI. 
    edited June 24
  • Reply 11 of 49
    mfryd Posts: 267 member
    It's a complicated topic.

    There are good points on both sides of the training question.  On one hand, AI programs are being trained based on the hard work of previous human artists.  The AI companies are profiting, but the original artists get nothing. 

    On the other hand, the AI is not doing anything new.  It's common for individuals to study the work of others, and use that study to inform their work.  When interviewed, great directors often discuss how they have studied the works of great directors to learn their techniques and style.  The AI programs are simply really good at this.

    My understanding, is that an art student can study the works of a current artist, and produce new works in that style.   I don't believe an artist's style is protectable by copyright.  What an artist can't do, is to produce work that is essentially a copy of an existing copyrighted work, or that contains copyrighted elements (including copyrighted characters).  An artist also has to be careful that work done in someone else's style is not represented as being that artist's work.  If I were to write a book in the style of Dr. Seuss, I would need to make it very clear that the book was *not* a work by Dr. Seuss. 

    Copyright allows control over making copies of a creative work.  It does not allow control over works that were "inspired" by a copyrighted piece.

    An issue with current AI, is that it doesn't understand the limitations of copyright law, and can sometimes produce results that would typically be considered copyright infringement.  

    It's going to take a while to sort out what rights various parties should have.   There is more than one reasonable way to resolve the legal issues.  It will be interesting to see how Congress and the courts resolve these issues.

    Disclaimer: I am not an attorney, and this is not legal advice.  It is merely my imperfect understanding of some of the issues.
  • Reply 12 of 49
    danox Posts: 3,841 member
    mfryd said:
    It's a complicated topic.

    There are good points on both sides of the training question.  On one hand, AI programs are being trained based on the hard work of previous human artists.  The AI companies are profiting, but the original artists get nothing. 

    On the other hand, the AI is not doing anything new.  It's common for individuals to study the work of others, and use that study to inform their work.  When interviewed, great directors often discuss how they have studied the works of great directors to learn their techniques and style.  The AI programs are simply really good at this.

    My understanding, is that an art student can study the works of a current artist, and produce new works in that style.   I don't believe an artist's style is protectable by copyright.  What an artist can't do, is to produce work that is essentially a copy of an existing copyrighted work, or that contains copyrighted elements (including copyrighted characters).  An artist also has to be careful that work done in someone else's style is not represented as being that artist's work.  If I were to write a book in the style of Dr. Seuss, I would need to make it very clear that the book was *not* a work by Dr. Seuss. 

    Copyright allows control over making copies of a creative work.  It does not allow control over works that were "inspired" by a copyrighted piece.

    An issue with current AI, is that it doesn't understand the limitations of copyright law, and can sometimes produce results that would typically be considered copyright infringement.  

    It's going to take a while to sort out what rights various parties should have.   There is more than one reasonable way to resolve the legal issues.  It will be interesting to see how Congress and the courts resolve these issues.

    Disclaimer: I am not an attorney, and this is not legal advice.  It is merely my imperfect understanding of some of the issues.

    AI can’t think and it can’t reason, and because of that it knows no limitations today. One day it will, but that day is decades away. That doesn’t mean anyone should get to scrape all of the copyrighted material since 1920 at their leisure, yet the protected class gets to do so.
  • Reply 13 of 49
    Considering the computing power necessary for and available to these LLMs, how hard would it be for a system to quickly compare the results of a query to the original documents before presenting them to a user? Would that help prevent duplication?
  • Reply 14 of 49
    mfryd Posts: 267 member
    danox said:
    mfryd said:
    It's a complicated topic.

    There are good points on both sides of the training question.  On one hand, AI programs are being trained based on the hard work of previous human artists.  The AI companies are profiting, but the original artists get nothing. 

    On the other hand, the AI is not doing anything new.  It's common for individuals to study the work of others, and use that study to inform their work.  When interviewed, great directors often discuss how they have studied the works of great directors to learn their techniques and style.  The AI programs are simply really good at this.

    My understanding, is that an art student can study the works of a current artist, and produce new works in that style.   I don't believe an artist's style is protectable by copyright.  What an artist can't do, is to produce work that is essentially a copy of an existing copyrighted work, or that contains copyrighted elements (including copyrighted characters).  An artist also has to be careful that work done in someone else's style is not represented as being that artist's work.  If I were to write a book in the style of Dr. Seuss, I would need to make it very clear that the book was *not* a work by Dr. Seuss. 

    Copyright allows control over making copies of a creative work.  It does not allow control over works that were "inspired" by a copyrighted piece.

    An issue with current AI, is that it doesn't understand the limitations of copyright law, and can sometimes produce results that would typically be considered copyright infringement.  

    It's going to take a while to sort out what rights various parties should have.   There is more than one reasonable way to resolve the legal issues.  It will be interesting to see how Congress and the courts resolve these issues.

    Disclaimer: I am not an attorney, and this is not legal advice.  It is merely my imperfect understanding of some of the issues.

    AI can’t think and it can’t reason and because of that it knows no limitations today, however one day it will, but that day is decades away, but that does not mean you should get to scrape all of the copyrighted material since 1920 at your leisure but the protected class gets to do so.
    People are allowed to scrape as much copyrighted material as they like.  Machines are simply better at it.

    This is a common challenge with new technology.  In the past, certain activities were limited by the technology of the time.  Therefore, certain activities could not rise to the level where they were a common issue.  As technology improves, so do various abilities.

    For instance, 50 years ago we didn't really need laws governing the ability for private companies to track people.  If they wanted to track someone, they hired a private investigator, and he would follow the person of interest.  If you wanted to track 50 people, you would need 50 private investigators.  The available technology limited the collection of tracking data.   If a company wanted to track someone, and sell that information, they could.  It just wasn't a common thing.

    Today, the three major cellular companies maintain a real time database of where just about every adult is currently located.  They have to.  They need to know where you are so when someone calls you the signal only needs to go to the cell tower closest to you.  That data is extremely valuable.  Knowing where you are, and where you have been, makes it possible to make some very good guesses about your likes and dislikes.  That makes it possible to target you with ads, that are designed to appeal to your personal preferences, or feed off your personal fears.

    Once it becomes trivial to track people, we need to think about whether and how to regulate tracking.

    In the past, it wasn't possible to read a large percentage of what gets published. It was even less possible to memorize every passage of every book you have ever read. Now that computers are doing this, it's important that we consider whether we need new regulations, and what they should be.
  • Reply 15 of 49
    22july2013 Posts: 3,838 member
    If everyone who writes a comment on this page will send a fee to Dr Seuss for learning from his books to read and speak, then I will pay attention to their views if they oppose AI learning from published sources. But if you aren't willing to pay everyone that you learn from, for every word that comes out of your mouth, then I don't see why AI should have to pay either. Next, are we going to charge aliens for learning English by reading the radio waves that are being sent into deep space?
    Somehow you’ve convinced yourself that AI is sentient and learning from influences. It’s not. It has a database of pirated data that it uses to essentially copy/paste responses from.
    By no means do I think AI is sentient. Stop putting words in my mouth. Also, I do not believe that sentience absolves an entity from paying fees for using data that it has absorbed from other beings, as you seem to be implying there. And I totally disagree with your explanation of AI. It is not a "copy" of data. In fact, every single time I use AI to get some data, it ALWAYS goes to the internet to look things up, because it hasn't "memorized the internet" as simpletons think it has.

    Up until now, the courts have been the entity that decides whether a "work" that has been "used for profit" has "infringed" on someone else's work. That's a perfectly valid system for going forward. AI doesn't change anything here. If anyone uses AI to write a plagiarized work, then the persons who benefit from that plagiarism should be suable. But we shouldn't stop AI from creating fair use derivatives of other people's work, just as you shouldn't be sued for writing a song that sounds vaguely similar to an ABBA song. If you can take advantage of "fair use," then so can other people who use AI for the same thing. After all, half the videos on YouTube take advantage of fair use laws by using someone else's video or audio.
  • Reply 16 of 49
    mfryd Posts: 267 member
    If everyone who writes a comment on this page will send a fee to Dr Seuss for learning from his books to read and speak, then I will pay attention to their views if they oppose AI learning from published sources. But if you aren't willing to pay everyone that you learn from, for every word that comes out of your mouth, then I don't see why AI should have to pay either. Next, are we going to charge aliens for learning English by reading the radio waves that are being sent into deep space?
    Somehow you’ve convinced yourself that AI is sentient and learning from influences. It’s not. It has a database of pirated data that it uses to essentially copy/paste responses from.
    By no means do I think AI is sentient. Stop putting words in my mouth. Also, I do not believe that sentience absolves an entity from paying fees for using data that it has absorbed from other beings, as you seem to be implying there. And I totally disagree with your explanation of AI. It is not a "copy" of data. In fact, every single time I use AI to get some data, it ALWAYS goes to the internet to look things up, because it hasn't "memorized the internet" as simpletons think it has.

    Up until now, the courts have been the entity that decides whether a "work" that has been "used for profit" has "infringed" on someone else's work. That's a perfectly valid system for going forward. AI doesn't change anything here. If anyone uses AI to write a plagiarized work, then the persons who benefit from that plagiarization should be suable. But we shouldn't stop AI from creating fair use derivatives of other people's work, just as you shouldn't be sued for writing a song that sounds vaguely similar to an ABBA song. If you can take advantage of "fair use", then so can other people who use AI for the same thing. After all, half the videos on Youtube are taking advantage of fair use laws, by using someone else's video or audio.
    There are limits.

    It is certainly possible to write a book about a young wizard without violating J. K. Rowling's copyrights.  However, if your wizard has far too many elements in common with Harry Potter, you may very well be violating her copyright.

    If you ask AI to write a story about an orphan who finds out he is a wizard, though, and the AI's story has him going to Hogwarts, playing Quidditch, having a poor redheaded friend from a large family, etc., then it may very well be a copyright infringement.

    This brings up the question as to what AI should do if the prompt is "write a new short story about Harry Potter that takes place during his first year at Hogwarts."   Such a story would likely violate J. K. Rowling's copyrights, as the characters in the Harry Potter stories are copyrighted intellectual property.
  • Reply 17 of 49
    Meh. Seems emotional and sentimental. If you are placing your content on the web, you are practically posting it on the street for general view with absurd hopes of pennies trickling in on some desperate fancy rather than through proper business channels with an effective strategy of legally protecting and promoting yourself - childish. Most people who do such art that they may avoid other types of structured paid work - what do they expect when they treat their skill set as a hobby - likely not wanting to work for others on a structured gig - if that's even around much? What's even the issue here - not getting a piece of the trifling leavings of scrapers and edu-content pedlars? pedantic. Art needs to stop being a vague creation-vocation of the rando people and grow up. Successful society is based on complex businesses and legal structures requiring serious people acting seriously. Creativity is a real skill and needs focused training and  a hierarchy of knowledgeable people to propagate it through society. Sorry, but I have little symp for the dilettantes and dabblers hoping to otherwise avoid the soulless cubicle, construction site, and assembly line.
  • Reply 18 of 49
    mfryd Posts: 267 member
    Meh. Seems emotional and sentimental. If you are placing your content on the web, you are practically posting it on the street for general view with absurd hopes of pennies trickling in on some desperate fancy rather than through proper business channels with an effective strategy of legally protecting and promoting yourself - childish. Most people who do such art that they may avoid other types of structured paid work - what do they expect when they treat their skill set as a hobby - likely not wanting to work for others on a structured gig - if that's even around much? What's even the issue here - not getting a piece of the trifling leavings of scrapers and edu-content pedlars? pedantic. Art needs to stop being a vague creation-vocation of the rando people and grow up. Successful society is based on complex businesses and legal structures requiring serious people acting seriously. Creativity is a real skill and needs focused training and  a hierarchy of knowledgeable people to propagate it through society. Sorry, but I have little symp for the dilettantes and dabblers hoping to otherwise avoid the soulless cubicle, construction site, and assembly line.
    It's not that simple.  The AI companies are scraping material that isn't on the web.  They are scanning and scraping printed books.  They are scraping copyrighted movies.   

    They are scraping the copyrighted works of artists who earn their living licensing their work.
  • Reply 19 of 49
    This will hold until Disney gets involved.
  • Reply 20 of 49
    longfang Posts: 548 member
    mfryd said:
    Meh. Seems emotional and sentimental. If you are placing your content on the web, you are practically posting it on the street for general view with absurd hopes of pennies trickling in on some desperate fancy rather than through proper business channels with an effective strategy of legally protecting and promoting yourself - childish. Most people who do such art that they may avoid other types of structured paid work - what do they expect when they treat their skill set as a hobby - likely not wanting to work for others on a structured gig - if that's even around much? What's even the issue here - not getting a piece of the trifling leavings of scrapers and edu-content pedlars? pedantic. Art needs to stop being a vague creation-vocation of the rando people and grow up. Successful society is based on complex businesses and legal structures requiring serious people acting seriously. Creativity is a real skill and needs focused training and  a hierarchy of knowledgeable people to propagate it through society. Sorry, but I have little symp for the dilettantes and dabblers hoping to otherwise avoid the soulless cubicle, construction site, and assembly line.
    It's not that simple.  The AI companies are scraping material that isn't on the web.  They are scanning and scraping printed books.  They are scraping copyrighted movies.   

    They are scraping the copyrighted works of artists who earn their living licensing their work.
    Would it be okay then if the scraping were done via the AI’s “eyes,” aka a camera reading a physical book?