A 2026 Guide to Reassessing Algorithmic Antitrust Risk
The U.S. Department of Justice’s recent proposed settlement with a revenue management software provider offers a guide for companies reassessing the antitrust risk associated with using algorithmic pricing vendors.
Risk assessments are highly fact-specific and vary by industry and market structure, so companies should engage experienced antitrust counsel to evaluate their use of algorithmic tools and the associated vendor practices.
Background: Information Sharing, Algorithms, and Evolving Guidance
Most algorithmic pricing matters arise under Section 1 of the Sherman Act, which prohibits agreements that restrain trade. Traditional price-fixing is per se illegal. Other conduct—such as sharing competitively sensitive information directly with competitors or indirectly through third parties like survey firms or pricing software—has generally been evaluated under the rule of reason. However, there remains limited formal DOJ and FTC guidance delineating permissible versus impermissible algorithmic pricing and information sharing practices.
The closest the agencies have come to formal guidance was the September 1993 Health Care Policy Statement,[1] which provided an antitrust “safety zone” allowing hospitals to exchange price and cost information if certain requirements were satisfied. The guidance was issued as part of a then-White House priority to make healthcare more available and affordable, amid concern that healthcare providers had delayed cooperative cost-cutting arrangements because of uncertainty about antitrust restrictions. The “safety zone” requirements, listed below, were essentially designed to prevent competitors from reverse engineering each other’s competitively sensitive price or wage information shared through surveys:
- The survey is managed by a third party.
- The information collected for the survey is more than three months old.
- Data from at least five hospitals is aggregated, and no single hospital’s data represents more than 25% of any reported statistic.
- The data is sufficiently aggregated that individual sources cannot be identified.
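For illustration only, the three numeric thresholds above can be expressed as a simple screening check. Everything in this sketch is hypothetical: the `Submission` type, its field names, and the 90-day approximation of “three months” are our own assumptions, the fourth (qualitative) aggregation requirement is not captured, and passing such a check was never a substitute for counsel’s review even before the safety zone was withdrawn.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Submission:
    provider: str       # participating hospital (identity held only by the third party)
    collected_on: date  # when the price or cost data was gathered
    weight: float       # share of the aggregated statistic attributable to this provider

def within_safety_zone(submissions: list[Submission],
                       managed_by_third_party: bool,
                       as_of: date) -> bool:
    """Screen a survey against the numeric thresholds of the now-withdrawn safety zone."""
    # The survey must be managed by a third party.
    if not managed_by_third_party:
        return False
    # All data must be more than three months old (approximated here as 90 days).
    if any(as_of - s.collected_on <= timedelta(days=90) for s in submissions):
        return False
    # Data from at least five hospitals must be aggregated...
    if len({s.provider for s in submissions}) < 5:
        return False
    # ...and no single provider may account for more than 25% of any statistic.
    if any(s.weight > 0.25 for s in submissions):
        return False
    return True
```

Note that the fourth requirement, sufficient aggregation so that sources cannot be identified, is inherently qualitative and is not represented here.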
Approximately 30 years after the “safety zone” was created, DOJ revoked it. DOJ’s press release withdrawing the guidelines called them “outdated” and noted they were now “overly permissive on certain subjects, such as information sharing.” An accompanying speech by a high-ranking Antitrust Division official, Doha Mekki, cited the application of the guidelines outside of healthcare and the use of AI and machine learning on data as two reasons for the withdrawal. Reading between the lines, AI and machine learning were making old data more valuable and making it easier, given a sufficiently large dataset, to re-identify previously anonymized data.
DOJ’s Proposed Settlement and What It Signals
Until DOJ’s recent—and first—proposed settlement agreement with a revenue management software provider to resolve algorithmic pricing allegations, the agencies had provided little guidance on how companies could use algorithmic pricing software without violating the antitrust laws. As part of the proposed settlement, the software provider agreed to a number of restrictions, including the following[2]:
- Cease having its software use competitors’ nonpublic, competitively sensitive information to determine price recommendations
- Cease using active lease data for purposes of training the models underlying the software, limiting model training to historic or backward-looking nonpublic data that has been aged for at least 12 months
- Not use models that determine geographic effects at a level narrower than the state level (which is broader than the markets alleged in the complaint)
- Remove or redesign features that limit price decreases or align pricing between competing users of the software
Although industry-specific, these restrictions illuminate broader risk themes for companies utilizing algorithmic pricing. Conduct that presents the highest antitrust risk includes direct or indirect sharing of nonpublic competitive information among rivals; pooling of nonpublic data to train or operate models; using nonpublic data to generate real-time or near-real-time recommendations; surfacing granular trend insights from nonpublic inputs; and implementing product features that harden price floors or otherwise stabilize market outcomes. Companies should scrutinize whether their vendors engage in any of these practices.
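As a rough sketch of how one of these terms, the 12-month aging of nonpublic training data, might be operationalized in a data pipeline: the function below is an illustrative assumption of ours, not drawn from the settlement text, and the month-counting convention is one of several reasonable choices.

```python
from datetime import date

def eligible_for_training(record_date: date, today: date, min_age_months: int = 12) -> bool:
    """Return True only if a nonpublic record is at least `min_age_months` old.

    Illustrative sketch of a data-aging gate; not legal advice and not the
    settlement's own compliance mechanism.
    """
    # Count whole calendar months between the record date and today.
    months_old = (today.year - record_date.year) * 12 + (today.month - record_date.month)
    if months_old > min_age_months:
        return True
    if months_old < min_age_months:
        return False
    # Exactly min_age_months apart: compare day-of-month to decide.
    return today.day >= record_date.day
```

A pipeline using a filter like this would exclude active or recent lease records from model training while admitting sufficiently aged historical data.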
Beyond Price: Other Competitive Variables at Risk
Antitrust risks are not limited to pricing algorithms. As algorithms and AI continue to evolve, so do the potential antitrust risks associated with using them. When rivals are also using the same software vendors that employ algorithms or AI, companies should be aware of and evaluate the risks associated with using any software that determines output quantities or rates, allocates customers or geographies, recommends wages, or determines bid amounts.
Practical Takeaways
In light of the DOJ’s proposed settlement and the agencies’ skepticism toward legacy “safety zones,” companies should proactively reassess their algorithmic software vendor risk:
- Conduct privileged antitrust reviews of algorithmic tools. Map what data is ingested, how it is used, what models are trained on, the recency and sensitivity of data inputs, and how outputs are used in pricing and other competitive decisions.
- Tighten vendor diligence and contracting. Specify permissible data sources; restrict or prohibit use of nonpublic competitor data; require aging of nonpublic training data; limit geographic granularity; prohibit features that constrain price decreases or align pricing; and secure audit and monitoring rights.
- Monitor algorithmic behavior and outcomes. Track whether outputs appear to converge with rivals’ prices or other competitive variables, especially in markets with high transparency, concentrated structures, or shared vendor ecosystems. Validate that safeguards operate as designed and recalibrate as market conditions and models evolve.
The bottom line: formal enforcement guidance is sparse, the environment is shifting, and algorithms can magnify traditional antitrust risks. A careful, recurring review of algorithmic software vendors, data pipelines, and vendor practices—with advice from experienced antitrust counsel—remains the most effective way to mitigate exposure while preserving the benefits of modern technology.
Endnotes
[1] See also DOJ and FTC Policy Statement (Aug. 1996).
[2] See DOJ Press Release (Nov. 24, 2025).