
The FTC Can’t Stop Talking about AI

Perkins on Privacy


As the hype about generative artificial intelligence (AI) has grown, the Federal Trade Commission (FTC) has made clear that it intends to be at the forefront of federal agencies working to ensure the responsible use of AI. In just the last few months, it has repeatedly warned about the risks of AI technology. In many respects, this is not new terrain for the FTC. Since at least 2016, the FTC has cautioned about the potential for machine learning and AI algorithms to lead to discrimination against protected classes. The FTC recently reiterated those concerns and cautioned businesses about the use of generative AI to spread manipulative, fraudulent, or unsubstantiated content.

Discrimination

The Chair of the FTC, along with officials from the Consumer Financial Protection Bureau (CFPB), the Civil Rights Division of the Department of Justice (DOJ), and the Equal Employment Opportunity Commission (EEOC), issued a joint statement committing to vigorous use of their respective authorities to protect against bias and unlawful discrimination arising from AI. The agencies explain that AI relies on vast amounts of data to find patterns or correlations, which it then applies to new data to perform tasks or make recommendations and predictions. They express concern that, in doing so, AI-based systems may result in unlawful discrimination.

The agencies highlight three potential sources of such discrimination:

  1. Datasets. Unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors that may correlate with protected classes.
  2. Opaque models. Lack of transparency about how AI models work, making it difficult to know whether an automated system is fair.
  3. How AI systems are used. Lack of understanding or consideration by developers as to how the AI systems they design will be used.

The statement notes that the FTC issued a 2022 report outlining significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, as well as a 2021 blog post on AI equity issues.

Exaggerated Claims and Lack of Substantiation About AI Capabilities

Beyond concerns about discrimination, in a series of blog posts ("Keep your AI claims in check," "Chatbots, deepfakes, and voice clones: AI deception for sale," and "The Luring Test: AI and the engineering of consumer trust"), the FTC has cautioned businesses about several consumer protection risks arising from the use of AI. First, the FTC has reminded businesses marketing AI capabilities to apply the same "bread and butter" advertising principles that the FTC regularly demands, such as the following:

  • Don't exaggerate what an AI product can do. The FTC reminds marketers that AI-based products and services have to work as advertised, and claims must be backed by adequate proof. The FTC suggests that it will be scrutinizing claims in this arena, noting that "for FTC enforcement purposes—false or unsubstantiated claims about a product's efficacy are our bread and butter."
  • Don't make baseless claims that your product is AI-enabled. According to the FTC, businesses should not think they will "get away" with claims that their product is "AI-enabled" or "AI-powered" if they merely used an AI tool in the development process.

Fraud and Facilitation of Consumer Harm

Second, the FTC has sounded the alarm about the use of generative AI or synthetic media to create or spread fake and fraudulent content on a mass scale. It urges businesses to take the following steps:

  • Consider the potential for harm. The FTC advises businesses that develop or offer synthetic media or generative AI products to consider, at the design stage and afterward, the reasonably foreseeable ways their creations could be misused for fraud or cause other harm. The FTC suggests that if there is a potential for such harm, businesses should consider whether they should make or sell the AI technology in question.
  • Deploy risk mitigation measures. The FTC urges businesses that decide to go ahead with an AI product to "take all reasonable precautions" before releasing the technology. According to the FTC, merely warning customers about misuse or telling them to make disclosures is insufficient. Instead, companies should build "durable, built-in deterrence measures."
  • Don't over-rely on post-release detection. According to the FTC, even if it is possible to detect AI-generated content containing fraudulent material, "the fraudsters will often have moved on by the time someone detects their fake content." In addition, according to the FTC, "the burden shouldn't be on consumers, anyway, to figure out if a generative AI tool is being used to scam them."
  • Don't use AI-generated content to mislead. The FTC warns advertisers against using AI-generated content in a way that misleads consumers, noting that "[c]elebrity deepfakes are already common and have been popping up in ads."

Manipulation and the Risk of Unearned Consumer Trust

Third, the FTC cautions that AI can be used to exploit and manipulate consumers, who may trust seemingly neutral machines more than other humans. It highlights three risks:

  • Dark patterns. The FTC expresses concern that AI chatbots will be used by unsavory actors to influence people's beliefs, emotions, and behavior, that is, to engage in what the FTC and other regulators ominously refer to as "dark patterns." The FTC emphasizes that businesses designing generative AI tools for "novel" uses, such as customizing ads to specific people or groups, should avoid manipulative design practices. The FTC expresses particular concern that consumers will be unfairly or deceptively steered into making "harmful decisions in areas such as finance, health, education, housing, and employment."
  • Undisclosed ads. The FTC also expresses concern that generative AI features may contain undisclosed ads and advises (consistent with its native advertising guidance) that (1) it should be clear when content is an ad; (2) search results or any generative AI output should clearly distinguish between paid and organic content; and (3) it should be clear if an AI product's response steers consumers to a particular website, service provider, or product because of a commercial relationship.
  • Undisclosed communications with machines. According to the FTC, businesses should ensure that consumers know when they are communicating with a machine rather than a person (which, in some cases, may be required under California law).

The FTC describes its staff as "focusing intensely" on AI, including generative AI. In light of the recent flurry of FTC public statements, we have no reason to doubt that the FTC is actively searching for enforcement targets to illustrate the points it has discussed in its public pronouncements. For companies looking to stay out of the FTC's crosshairs, the above guidance is worth considering.
