One Rocky Sleigh Ride: Antitrust and AI Pricing
Artificial intelligence (AI) can provide invaluable assistance to the retail industry.
With just a few taps on a keyboard, it can generate helpful product descriptions for online storefronts. For the small business on a budget, AI-generated artwork can streamline the graphic design process. Need some extra hands? AI can help tackle hiring. And when it’s time to do it all again next quarter, AI can analyze customer patterns, trends, and preferences to guide investment and cost cuts.
But AI pricing requires special care. Specifically, use of AI to assist with setting prices could put businesses on the U.S. Department of Justice (DOJ) and Federal Trade Commission (FTC) naughty list. This Update provides takeaways and emerging developments from recent litigation in this space.
Humans Agreeing With Algorithmic Assistance
United States v. David Topkins, No. CR 15-00201 WHO (N.D. Cal. 2015)
David Topkins was the director of a company that sold posters through an online marketplace. Topkins not only communicated directly with his competitors to fix prices but also agreed with them to use complementary algorithms to set prices in conformity with the scheme. The DOJ indicted Topkins, and he pleaded guilty to a felony.
Topkins’ agreement with his competitors falls under the traditional application of § 1 of the Sherman Act, which requires an “agreement” as a precondition to antitrust scrutiny. Agreements among competitors to fix prices are illegal even when the competitors enlist an AI tool to carry them out.
Shared Algorithms That Use Sensitive Data
Gibson et al. v. Cendyn Group, LLC, et al., No. 2:23-cv-00140 (D. Nev.), and United States et al. v. RealPage, Inc., No. 1:24-cv-00710 (M.D.N.C.)
When competitors input their data into a common AI tool that then generates suggested prices, they may create the risk of a “hub and spoke” conspiracy, in which a central AI tool (the “hub”) purportedly coordinates an illegal horizontal agreement among the various competitors that use it (the “spokes”).
In Gibson, a private plaintiff alleged that hotel operators used a software platform, “Rainmaker,” to fix prices for Las Vegas hotel rooms. The U.S. District Court for the District of Nevada dismissed the case because the defendants could override the price recommendations, but the case is currently on appeal, and the DOJ has argued for revival in an amicus filing. The DOJ contends that the district court erred in rejecting both vertical and horizontal restraint-of-trade theories simply because the software did not require the hotel defendants to accept its price recommendations. In the DOJ’s view, under either theory, the fact that the algorithm supplies starting-point prices, rather than final, agreed-upon prices, does not defeat antitrust liability as a matter of law (i.e., at the pleading stage).
Another ongoing case is RealPage. There, the DOJ alleges that a property management software company’s algorithms use competing landlords’ sensitive data to provide price “recommendations” that are effectively mandatory.
Industry Reports
United States v. Agri Stats, Inc., No. 0:23-cv-03009 (D. Minn.)
In Agri Stats, the DOJ alleges that industry reports from a data compiler help chicken, pork, and turkey processors exchange sensitive pricing and production information. Although the data compiler purports to anonymize the data it collects, the DOJ claims that the meat processors “deanonymize” the reports to identify specific competitors and their operations. Once deanonymized, the reports allegedly allow the processors to closely monitor a specific competitor’s output, cost, and price metrics.
From this emerging enforcement activity and case law, retailers should be mindful of the antitrust risks stemming from the use of AI tools and make the following inquiries:
- What external data is being used to train the AI tool or generate pricing recommendations?
  - Antitrust risk may be heightened where the data is:
    - Sourced from competitors.
    - Competitively sensitive.
    - Newer (closer to “real-time”).
    - Granular.
- How is your company’s internal data being used to train the AI tool or generate pricing recommendations?
  - There is more antitrust risk if your company’s data is used to provide pricing recommendations to others, especially to competitors.
- How is data disseminated by the AI tool?
  - Any external data dissemination should be aggregated and anonymized. Ensure the number of data inputs is large enough to prevent disaggregation or deanonymization.
- Does the AI tool suggest or set prices? Is the tool customizable?
  - There may be more antitrust risk if the AI tool sets prices directly, particularly where it provides limited or no opportunity for human oversight and modification.
  - There may be enhanced risk if the tool cannot be customized (e.g., by modifying settings within the AI tool, or the algorithm itself, to better reflect the unique aspects of your business’s operations and thereby avoid identical AI recommendations across your industry).
Based on these themes, companies can create processes for exploring AI pricing tools while limiting risk.
First, do not “set it and forget it.” When using an AI tool to assist with pricing, a human should always retain the ultimate authority to price independently. Without an agreement, “conscious parallelism” is not unlawful, so the more indicia of independent decision-making in your pricing practices, the better.
Second, do not make it difficult for a human to override an AI recommendation. The de facto delegation of pricing authority to AI tools has been a key allegation in several antitrust complaints in this area.
Third, periodically monitor the recommendations that an AI pricing tool generates and compare them against your company’s goals, objectives, and strategy. This practice is fundamental to the responsible, legally compliant use of AI in decision-making, and it shows that your company is not delegating its judgment to algorithms.