Recent Rulings in AI Copyright Lawsuits Shed Some Light, but Leave Many Questions

Age of Disruption

The last few months have seen a flurry of activity in cases involving artificial intelligence (AI), including some of the first major rulings involving generative AI.

Andersen et al. v. Stability AI Ltd.

As we have previously discussed, this case arose in January 2023, when a collective of artists filed a class action lawsuit involving three AI-powered image generation tools that produce images in response to text inputs: Stable Diffusion (developed by Stability AI), Midjourney (developed by Midjourney), and DreamUp (developed by DeviantArt). The plaintiffs asserted that the models powering these tools were trained using copyrighted images scraped from the internet (including their copyrighted works) without consent. The defendants filed motions to dismiss, and the U.S. District Court for the Northern District of California recently issued a ruling on these motions. The court dismissed most of the plaintiffs' claims, with only one plaintiff's direct copyright infringement claim against Stability AI surviving. The court granted leave to amend the complaint on most counts, and the plaintiffs have since filed an amended complaint.

Note that at the motion to dismiss stage, the plaintiffs are merely required to allege enough facts to state a plausible claim, and courts are required to accept such allegations as true and draw all reasonable inferences in favor of the plaintiff. The truth of the allegations—and the availability of defenses such as fair use—are not resolved at the motion to dismiss stage.

Copyright Claims

The plaintiffs brought claims of direct copyright infringement, vicarious copyright infringement, and removal of copyright management information. As an initial matter, the court dismissed with prejudice the direct and vicarious copyright claims of two of the named plaintiffs because their allegedly infringed works were not registered with the U.S. Copyright Office (which is a requirement for initiating an infringement action in court). Only the direct and vicarious copyright claims of the remaining named plaintiff, Sarah Andersen, survived this initial inquiry because she registered some of her works, and her claims were limited to those registered works.

Although the defendants argued that Andersen did not identify with specificity which of her registered works she alleged were used to train Stable Diffusion, the court allowed Andersen's claims to proceed, finding that her searches on "haveibeentrained.com," a tool that allows searching of the LAION-5B and LAION-400M image datasets, plausibly supported her allegation that her works were included in those datasets, which are alleged to have been used to train Stable Diffusion. However, as discussed below, the court ultimately dismissed all but one of her copyright claims on other grounds (with leave to amend):

  • Direct infringement claim against Stability AI. The one claim that was not dismissed was Andersen's direct copyright infringement claim against Stability AI based on the copies made during training, as the court found that she sufficiently alleged Stability AI used her copyrighted works to train Stable Diffusion. While the defendants disputed the truth of those allegations, they did not dispute that the allegations were sufficient to survive a motion to dismiss.
  • Direct infringement claims against DeviantArt and Midjourney. In contrast, the court dismissed Andersen's direct copyright infringement claims against DeviantArt and Midjourney, with leave to amend, holding that the plaintiffs failed to allege sufficient facts regarding such defendants' use of the plaintiffs' copyrighted works for training purposes. The plaintiffs also alleged direct infringement based on (1) the distribution of Stable Diffusion (which they claimed contained compressed copies of the training data) and (2) the defendants' AI tools and their output being infringing derivative works of the training data. However, the court concluded that the plaintiffs had not adequately pled plausible facts in support of those claims.

The court agreed with the defendants that the claims regarding compressed copies seemed to contradict the plaintiffs' own explanation of how the diffusion model works and said that Andersen would need to define exactly what these alleged "compressed copies" were and provide plausible facts to support her theory. The order also asked Andersen to clarify how DeviantArt and Midjourney could be liable for direct copyright infringement when the plaintiffs alleged they merely provided access to Stable Diffusion as a library. The court indicated it was unclear whether these defendants could be liable for direct infringement if Stable Diffusion contains only algorithms and instructions that can be applied to the creation of images (that include only a few elements of copyrighted materials used to train the model). However, the court did not entirely rule out the possibility that the plaintiffs could sufficiently plead such a claim and noted that there might be stronger inferences about how, and how much of, Andersen's protected content remains in Stable Diffusion if the plaintiffs could plausibly plead that the defendants' AI products allow users to create new works by expressly referencing Andersen's works by name.

The court did reject as implausible Andersen's assertion that all images generated through the use of Stable Diffusion constitute infringing derivative works of every image used to train Stable Diffusion. The court's ruling appears to indicate that the amended complaint will need to show that images generated through Stable Diffusion are substantially similar to Andersen's works.

  • Vicarious infringement. The plaintiffs also alleged that the defendants were vicariously liable for infringing derivative works created by third parties' use of the defendants' products. The court dismissed the vicarious infringement claim against Stability AI because Andersen failed to sufficiently allege that Stability AI had the ability to control the infringing actions of third-party users and that Stability AI benefited financially from those actions. The court also dismissed the vicarious infringement claims against DeviantArt and Midjourney because the plaintiffs did not allege claims of direct infringement against those defendants, noting that vicarious infringement necessarily requires an underlying act of direct infringement.
  • Removal of copyright management information. The plaintiffs also alleged that the defendants violated the Digital Millennium Copyright Act (DMCA) by intentionally removing or altering copyright management information (CMI) from their works and by distributing materials knowing that the CMI had been removed or altered without the authorization of the copyright owner. These claims also require that the defendants took such actions knowing, or having reasonable grounds to know, that they would induce, enable, facilitate, or conceal an infringement of copyright. The court noted that at the pleading stage, the claimant must plead facts plausibly showing that the alleged infringer had this required mental state, and it dismissed the DMCA claim, with leave to amend, holding that the plaintiffs' claims were wholly conclusory and did not provide the necessary details to support their allegations (including which specific CMI was allegedly removed and which of the defendants they contend removed it).

Other Claims

The court also dismissed, with leave to amend, the noncopyright claims brought against the defendants:

  • Right of publicity. The plaintiffs alleged in their complaint that the defendants violated their right of publicity both by using the plaintiffs' names to promote the products and by allowing users to use the tool to generate work "in the style of" their names (which they claim are uniquely associated with their art and distinctive artistic style). However, the court noted that the plaintiffs retreated from the claims about violating their "artistic identities" and shifted focus to claims based on the defendants' alleged use of their names in advertising their products. The court dismissed these claims, with leave to amend, finding that the plaintiffs did not provide specific facts to plausibly allege that defendants used the plaintiffs' names to advertise, sell, or solicit purchase of the defendants' products (or how use of the plaintiffs' names in text inputs would produce images that would harm the goodwill associated with their names). The court noted again that the plaintiffs' admission that no generated image was likely a "close match" for the plaintiffs' works seemed to contradict the plaintiffs' claims.
  • Unfair competition. The plaintiffs also alleged a number of unfair competition claims, including that users of the defendants' products were deceived into believing that one of the named plaintiffs was either the origin of, or approved or sponsored, images generated by the defendants' products, causing injury to the plaintiffs' goodwill. Additionally, the plaintiffs alleged unfair competition in the defendants' misappropriation and copying of the plaintiffs' art for commercial gain without permission or attribution in a manner likely to deceive the public. However, the court dismissed these claims, with leave to amend. First, the court noted that, to the extent such claims are based on copyright violations, they are preempted. Then, it found that, to the extent they are based on Lanham Act violations, the plaintiffs did not adequately allege how such use caused likelihood of confusion or deception regarding the origin of the generated outputs.
  • Breach of contract. With respect to defendant DeviantArt, the plaintiffs alleged breach of contract based on DeviantArt's violation of its own terms of service (TOS) and privacy policy, to which plaintiff McKernan and unspecified "others" are alleged to have agreed. Among other things, the plaintiffs alleged that unspecified provisions in the TOS prohibited using content from DeviantArt for commercial purposes and that DeviantArt breached these terms by allowing Stability AI to use content from DeviantArt for commercial purposes. The court dismissed this claim, again due to a lack of specificity in the pleadings, noting that the plaintiffs did not allege that Stability AI is bound by the DeviantArt TOS or that the plaintiffs are third-party beneficiaries of the TOS such that they can sue to enforce such terms between DeviantArt and a third party. The plaintiffs were granted leave to amend but were told that they must "identify the exact provisions in the [TOS] they contend DeviantArt breached and facts in support of breach of each identified provision."

Amended Complaint

On November 29, 2023, the plaintiffs filed an amended complaint that bears little resemblance to the original. It adds seven new individual plaintiffs and a new defendant, Runway AI (another generative AI tool company), and reframes many of the claims. The plaintiffs deleted the vicarious copyright infringement claim and replaced it with a claim for inducement of copyright infringement. They also deleted their right of publicity, unfair competition, and declaratory relief claims and added two Lanham Act claims: false endorsement and vicarious trade-dress infringement. The amended complaint provides additional detail about the plaintiffs' claims that Stable Diffusion contains "compressed copies" or "protected expression," including quotes from academic papers and from Stability AI personnel that the plaintiffs allege support their claim that the models contain protected expression.

Kadrey v. Meta Platforms (now consolidated with Chabon v. Meta Platforms)

Shortly after the Andersen motion to dismiss ruling, another court in the Northern District of California ruled on a motion to dismiss in a class-action case filed in July 2023 involving Meta's LLaMA large language models (which the plaintiffs allege were trained on their books). Meta moved to dismiss all claims except the one alleging copyright infringement based on unauthorized copying of the plaintiffs' books for training purposes, and the court granted the motion, with leave to amend. At the same time, the court issued its decision on a motion to dismiss in another class-action case, Chabon v. Meta Platforms, simply stating that Meta's joint motion to dismiss was granted for the reasons given in Kadrey v. Meta Platforms Inc. Shortly thereafter, the judge consolidated the Chabon and Kadrey cases, and the plaintiffs have since filed an amended complaint in the consolidated case.

Copyright Claims

In this case, the plaintiffs' theory regarding the model being a derivative work is slightly different from that in the Andersen case (although the case was filed by the same lawyers). Rather than argue that the LLaMA models contain compressed copies of the training data, the plaintiffs claim that these models are infringing derivative works because the models cannot function without the "expressive information" extracted from the plaintiffs' books. In granting the motion to dismiss, the court rejected this argument as "nonsensical" and stated that "there is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs' books."

The court also rejected the plaintiffs' theory that "every output of the model is an infringing derivative work" and that when third parties use the model, "every output … constitutes an act of vicarious infringement," noting that the complaint offers no allegations that the output is infringing (i.e., that it recasts, transforms, or adapts the plaintiffs' books). As in the Andersen case, the court rejected the notion that, because the plaintiffs' books were copied as part of the training process, the plaintiffs need not allege any similarity between LLaMA outputs and their books to maintain a derivative infringement claim. The court stated that to prevail on a theory that outputs constitute derivative infringement, the plaintiffs would need to allege (and ultimately prove) that the outputs "incorporate in some form a portion of" the plaintiffs' books and concluded that the plaintiffs would need to prove substantial similarity. Because no such facts were alleged, the court granted the motion to dismiss these claims. As in the Andersen decision, the court granted the plaintiffs leave to amend their complaint.

The court also dismissed the plaintiffs' claims under DMCA §1202(b) because no facts were alleged to support claims that the LLaMA models ever distributed the plaintiffs' books (much less that they did so "without their CMI"), and their claims under DMCA §1202(a)(1) because the plaintiffs did not plausibly allege that the LLaMA models are infringing derivative works.

Other Claims

The court dismissed the unfair competition claim, finding that, to the extent it is based on the surviving claim for direct copyright infringement, it is preempted, and that, to the extent it is based on allegations of fraud or unfairness separate from the surviving copyright claim, the plaintiffs had not come close to alleging such fraud or unfairness. The court also dismissed the remaining claims for unjust enrichment and negligence as preempted.

Amended Complaint

On December 12, 2023, the plaintiffs filed an amended complaint. The amended complaint contains only a single claim, which is for direct copyright infringement based on copies made during the training of the models.

Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc.

While recent copyright cases related to generative AI have attracted a great deal of attention, these are not the first copyright cases involving the unlicensed use of materials to train AI models. Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc., which was filed in the U.S. District Court for the District of Delaware in May 2020, centers on allegations that copyrighted headnotes from Thomson's Westlaw legal research database were used as training data for an AI legal research tool that was developed by ROSS Intelligence.

In September 2023, the court ruled that most issues in the case could not be resolved on summary judgment because "many of the critical facts in [the] case remain genuinely disputed." The court did, however, grant the plaintiffs' motion for summary judgment on the question of whether ROSS engaged in at least some actual copying of materials from Westlaw.

Among the key facts the court identified as genuinely disputed, and as relevant to fair use, are the aims and outcomes of the training process. The court did not find that intermediate copying made in the training process is always transformative for purposes of evaluating the first fair use factor, but rather held that whether the intermediate copying is transformative depends on the precise nature of the use. The court said that if, as ROSS contended, the AI tool only studied the language patterns in the headnotes to learn how to produce judicial opinion quotes, then it would be transformative intermediate copying (following certain intermediate copying cases cited by ROSS). But if, as Thomson Reuters alleges, ROSS used the untransformed text of the headnotes to get its AI to "replicate and reproduce the creative drafting done by Westlaw's lawyer-editors," then those intermediate copying cases would not apply. As a result, the court found that whether this use was transformative is a material question of fact for the jury to decide. Such disputes will be central, as the court noted that "the first fair use factor comes down to the jury's finding of transformativeness."

Factual disputes also remain for trial regarding the remaining fair use factors. For the second factor (the nature of the copyrighted work), the court found that disputes about how original the copied Westlaw material is and "whether it is in fact protected, and how far that protection extends" must go to a jury. For the third factor (the amount and substantiality of the use), the court said a jury must resolve disputes related to whether "the scale of copying (if any) [engaged in by ROSS Intelligence] was practically necessary and furthered its transformative goals." And for the fourth factor (the effect on the potential market for the copyrighted work), a jury must ultimately decide "hotly debated questions" related to market harm, such as whether "it is in the public benefit to allow AI to be trained with copyrighted material." The case is expected to go to trial in August 2024.

Takeaway

Because the initial rulings in the generative AI cases are very early-stage rulings, and the courts granted broad leave to amend, they leave many crucial questions unanswered. However, these cases do suggest that claims that the models themselves, and the output such models generate, are infringing works may face an uphill battle unless it can be shown that the models or outputs either contain portions of the copyrighted content or are substantially similar to it. The Thomson Reuters case suggests that it may not be easy to resolve AI-related cases that involve fair use on summary judgment, at least not if the record indicates disagreement on how the tools at issue work, and not until judicial consensus develops on how the fair use factors should be applied to generative AI.

Follow us on social media @PerkinsCoieLLP, and if you have any questions or comments, contact us here. We invite you to learn more about our Digital Media & Entertainment, Gaming & Sports industry group and check out our podcast: Innovation Unlocked: The Future of Entertainment.
