AI training isn’t only for mega-corporations. We can already train our own open source models, so we shouldn’t applaud someone trying to erode our rights and let people put up barriers that will keep out all but the ultra-wealthy. We need to be careful not to weaken fair use and hand corporations a monopoly on a public technology by making it prohibitively expensive for regular people to keep developing our own models. Mega-corporations already have their own datasets, and the money to buy more. They can also make users sign predatory ToS granting them exclusive access to user data, effectively selling our own data back to us. Regular people, who could have had access to a corporate-independent tool for creativity, education, entertainment, and social mobility, would instead be left worse off, with fewer rights than where they started.
It’s not exactly the same situation, but here’s an article by Kit Walsh, a senior staff attorney at the EFF, explaining how image generators work within the law. The two aren’t identical, but you can see how the same ideas would apply. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.
Here are some excerpts:
First, copyright law doesn’t prevent you from making factual observations about a work or copying the facts embodied in a work (this is called the “idea/expression distinction”). Rather, copyright forbids you from copying the work’s creative expression in a way that could substitute for the original, and from making “derivative works” when those works copy too much creative expression from the original.
Second, even if a person makes a copy or a derivative work, the use is not infringing if it is a “fair use.” Whether a use is fair depends on a number of factors, including the purpose of the use, the nature of the original work, how much is used, and potential harm to the market for the original work.
And:
…When an act potentially implicates copyright but is a necessary step in enabling noninfringing uses, it frequently qualifies as a fair use itself. After all, the right to make a noninfringing use of a work is only meaningful if you are also permitted to perform the steps that lead up to that use. Thus, as both an intermediate use and an analytical use, scraping is not likely to violate copyright law.
If the students are using the works for purposes such as analyzing, critiquing, or illustrating a point, and not merely reproducing them, they have a strong case for fair use. That’s essentially what these models are: original analysis of their training data in comparison with one another. This use is more likely to be considered transformative, meaning that it adds something new or different to the original work rather than merely copying it. If you need it said another way, here’s a link to a video about this sort of thing.
When an act potentially implicates copyright but is a necessary step in enabling noninfringing uses, it frequently qualifies as a fair use itself.
Yeah, I think they’ve got a chance. You also definitely don’t need to pay to use books. You can just receive them for free from someone. That’s why college course book publishers put out all those revisions and bundle in software, to stop people from sharing.
I haven’t seen anyone who has been able to reproduce complete works from an LLM. OpenAI also actively stops people from even trying to reproduce anything that resembles copyrighted material, signaling that their commercial purpose isn’t to substitute for the plaintiff’s works. Filing suit doesn’t make their claims true; you should hold off on hasty judgments.
First of all, fair use is not as simple or clear-cut a concept as you make it out to be, and it can’t be applied uniformly to all cases. It’s flexible and context-dependent, resting on careful analysis of four factors: the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect of the use upon the potential market. No one factor is more important than the others, and it is possible to have a fair use defense even if you do not meet all the criteria of fair use.
Generative models create new and original works based on their weights, such as poems, stories, code, essays, songs, images, video, celebrity parodies, and more. These works may have their own artistic merit and value, and may be considered transformative uses that add new expression or meaning to the original works. Allowing people to generate text they would otherwise pay writers to create, where that text neither substitutes for nor reproduces the original, is likely fair use. Courts don’t seem likely to block people from cheaply producing non-infringing text just ’cause someone would rather get paid to write it instead.
I think you’re being too narrow and rigid with your interpretation of fair use, and I don’t think you understand the doctrine that well.
You should know that the statistical models don’t contain copies of their training data. During training, the data is used just to give a bump to the numbers (the weights) in the model. This is all in service of getting LLMs to generate coherent text that is original and doesn’t occur in their training sets. It’s also very hard if not impossible to get them to quote back copyrighted source material to you verbatim. If they’re going with the copying angle, this is going to be an uphill battle for them.
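Here’s a toy sketch of what “giving a bump to the numbers” means, just as my own illustration (a one-weight model trained by gradient descent, nothing like a real LLM’s scale): each training example only nudges a number toward the overall trend, and the finished model holds that number, not a copy of the dataset.

```python
# Toy illustration: training adjusts a weight via small gradient
# "bumps"; the trained model is just a number summarizing the trend,
# not a stored copy of the training examples.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0    # the entire "model": a single weight
lr = 0.05  # learning rate: how big each bump is

for _ in range(500):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # each pass only nudges the number slightly

print(round(w, 2))  # ends up near 2.0, the trend -- the data itself is gone
```

Scale that idea up to billions of weights and you get an LLM: the training text bumps the weights, but what’s retained is statistical structure, which is why verbatim regurgitation is the exception rather than the rule.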