Practical Protections to Guard Against AI-Powered Algorithm Risk

Bloomberg Law's Professional Perspectives give authors space to provide context about an area of law or take an in-depth look at a topic that could benefit their practice.

The Bottom Line

  • Pricing algorithms are double-edged swords that help businesses enhance competition, but also expose them to legal risk from private plaintiffs and law enforcement.
  • Businesses can’t rely on the fact that their algorithms use only public information as a shield against legal action.
  • The threat of lawsuits shouldn’t keep businesses from deploying pricing algorithms, as some basic safeguards can minimize their risk.

Savvy businesses price goods and services using a variety of algorithms that can improve pricing strategies and, in many instances, enhance competition in the marketplace. However, with the introduction of artificial intelligence technology, algorithmic pricing software is becoming more powerful, more complex, and less transparent.

Pricing algorithms are computer programs that analyze large amounts of competitive information, both nonpublic and publicly available, to develop pricing recommendations. These programs often leverage AI to enable businesses to price products or services rapidly, dynamically, and sometimes automatically. Businesses may also use these programs to determine at what price to purchase goods and services.

While this technology is flourishing, private plaintiffs and law enforcement agencies are threatening businesses that use algorithm-based pricing software, relying on antitrust laws to challenge the legality of such tools. Given the legal risks, organizations and their legal departments should understand how employees are using pricing algorithms in sales and purchasing, evaluate antitrust risk, and adopt sound information governance policies and practices that mitigate such risk.

Using Nonpublic Information

Pricing algorithms have come under scrutiny by regulators as well as the plaintiffs’ bar. In July 2024, the Department of Justice, Federal Trade Commission, UK Competition and Markets Authority, and the European Commission issued a joint statement warning of “the risk that algorithms can allow competitors to share competitively sensitive information, fix prices, or collude on other terms or business strategies in violation of our competition laws.”

In August 2024, the US sued RealPage, a vendor of pricing algorithm software for landlords, for violating Sections 1 and 2 of the Sherman Act following the filing of multiple federal antitrust class actions by private plaintiffs. Regulators and the plaintiffs’ bar claim that pricing algorithms rely on nonpublic competitor information to recommend pricing.

Regulators and plaintiffs further allege that businesses violate antitrust law by training algorithms on nonpublic competitor information and then using the algorithms to justify inflated prices.

Although some courts have dismissed cases for failure to adequately allege the algorithm used nonpublic, competitively sensitive information to generate pricing, other courts have denied motions to dismiss where the use of nonpublic information was adequately alleged.

In In re RealPage, Inc., Rental Software Antitrust Litig., the court denied a motion to dismiss a case involving rental pricing for multifamily apartments because the plaintiff adequately alleged that the “software inputs a melting pot of confidential competitor information through its algorithm and spits out price recommendations based on that private competitor data.”

In Duffy v. Yardi Systems, Inc., the US District Court for the Western District of Washington also denied a motion to dismiss where the provider of software that makes pricing recommendations “advertised its revenue management software to lessors as a means of increasing rates” and explained that its software would only work as advertised if “each lessor client divulges its confidential and commercially sensitive pricing, inventory, and market data” for use by the software.

The US filed a statement of interest in In re MultiPlan Health Ins. Provider Litig., arguing that:

  • By “sharing information through an algorithm,” a “provider can create the same anticompetitive effects as a direct exchange between competitors.”
  • “An algorithm provider’s ‘pitch’ could constitute an invitation for collective action among competitors—by indicating to users that the same pitch was made to their competitors and insinuating that using the algorithm could help them avoid competition—and subsequent joint use of the algorithm could demonstrate acceptance of that invitation.”
  • “Any formula used to fix benchmark, component, recommended, or ‘starting point’ prices” can constitute a violation “even if end prices ultimately vary.”
  • Such a conspiracy can exist not just among sellers but also “among purchasers.”

Recent state and local legislation has similarly focused on regulating the use of nonpublic information. San Francisco and Philadelphia have both passed ordinances prohibiting the use of certain pricing software that relies on nonpublic competitor data.

At the federal level, the Senate introduced the Preventing Algorithmic Collusion Act (PACA), which would create, under certain circumstances, a presumption of an anticompetitive agreement when a person uses or distributes a pricing algorithm that uses, incorporates, or was trained with nonpublic competitor data. A user of the software—but not a developer or distributor—may rebut the presumption by demonstrating by clear and convincing evidence that it couldn’t have reasonably known the pricing algorithm used nonpublic competitor data.

Importantly, clear and convincing evidence is a demanding burden of proof, and if enacted, the statute would make this defense difficult to invoke. While PACA’s fate is uncertain, such a statute would subject pricing tools built on algorithms that rely on nonpublic data to heightened scrutiny.

Using Public Data

Even if an algorithm uses publicly available information, its use might be subject to antitrust and price-fixing scrutiny.

The statement of interest in In re MultiPlan Health Ins. Provider Litig. describes how algorithms could be used in a hub-and-spoke conspiracy even without the sharing of nonpublic information. And it argues the “Sherman Act can cover concerted action by competitors on any ‘formula underlying price policies.’”

Modes of Analysis

Showing that parties who “collectively enjoy monopoly power” engaged in a concerted action by algorithm isn’t the end of the analysis. The question remains whether their action unreasonably restrains trade.

A restraint of trade may be unreasonable in one of two ways. First, it may be unreasonable per se without inquiry into its competitive effects. The classic per se unreasonable restraint is an agreement between competitors to fix prices.

Second, a restraint may be unreasonable under the “rule of reason,” which is a fact-specific assessment of the challenged conduct’s “effect on competition.” The “goal” in applying the rule of reason is to “distinguish between restraints with anticompetitive effects that are harmful to the consumer and restraints stimulating competition that are in the consumer’s best interest.”

The courts “weigh[] all of the circumstances” in a fact-specific inquiry. The question is “whether the defendants’ agreements harmed competition and whether any procompetitive benefits associated with their restraints could be achieved by ‘substantially less restrictive alternative’ means.” This inquiry generally proceeds in a three-step burden-shifting framework:

  • The plaintiff has the initial burden to prove that the challenged restraint has a substantial anticompetitive effect.
  • Should the plaintiff carry that burden, the burden then shifts to the defendant to show a procompetitive rationale or basis for the restraint.
  • If the defendant can make that showing, the burden shifts back to the plaintiff to demonstrate that the procompetitive efficiencies could be reasonably achieved through less anticompetitive means.

An assessment of risk associated with the use of pricing algorithms must consider both standards. Circumstances that might give rise to allegations of an agreement to fix prices must be investigated and analyzed. Pro- and anticompetitive effects should be evaluated.

Minimizing Risk

The explosion of AI is not only accelerating the adoption of pricing algorithms but also increasing the antitrust risk of employing them. These algorithms can also rapidly proliferate data and create complex information governance issues if organizations don’t take proactive steps to manage that data and document their processes and information dissemination.

To mitigate risk, legal departments should determine how an organization is using algorithms in connection with pricing, whether acting as a seller or a purchaser, and whether those uses could implicate antitrust laws. Legal departments should also be consulted as new uses of AI pricing algorithms evolve, and be aware of how these changes affect updates to the organization’s information governance strategy.

Technology that combines and uses nonpublic pricing information may come under particular scrutiny and give rise to litigation risk. A health check will involve identifying any such technology in use and evaluating the likelihood of passing muster under antitrust laws. More specifically, legal departments should consider whether users collectively enjoy monopoly power—a threshold question in any antitrust assessment.

Legal departments should also consider how the algorithm was initially represented, sold, and purchased.

For commercial software, counsel should examine marketing materials and related communications. For proprietary software, lawyers should consider the software engineers’ input and documentation on the software’s development and operation.

A motion to dismiss was denied in Duffy v. Yardi Systems, Inc., because the software provider “advertised its revenue management software to lessors as a means of increasing rates” and explained that its software would work “only if each lessor client divulges its confidential and commercially sensitive pricing, inventory, and market data” for use by the software. These allegations supported both concerted action in a hub-and-spoke arrangement and the unequivocal sharing of confidential information between potential competitors.

Even algorithms that use only publicly available information should be considered under the rule of reason framework to understand whether the adoption of an algorithm could be seen as a hub-and-spoke conspiracy to fix prices. Undertaking this analysis will allow an organization to assess its risk of litigation—the more pricing decisions that are made independently and unilaterally, the lower the risk. The analysis applies to both starting and final pricing.

In addition to assessing the potential for a finding of concerted action, it is also important to analyze whether the use of the algorithm unreasonably restrains trade. To what extent are there procompetitive benefits? Can it be shown that the algorithm lowers pricing for consumers? Does it increase output? Does it have other procompetitive benefits? To what extent could those benefits be achieved by substantially less restrictive alternative means?

Legal departments may also wish to conduct routine audits on the use and business case for pricing algorithms to understand how, when, and where algorithms could be creating unexplored information governance issues for the company.

Risk can be better managed with formal written policies for the use of pricing algorithms and for the retention of data concerning their use. Such policies can raise awareness and prevent the unchecked use of algorithms without a full analysis of the risks and benefits, as well as restrict use to appropriate personnel.

Policy documentation also underscores the good faith intent of an organization’s use of algorithms if ever needed in litigation or other proceedings. Policies can also ensure the retention of data and documents that tend to support and defend the use of pricing algorithms if challenged by regulators or private litigants.

Finally, engaging outside counsel can minimize the risk that any internal investigation and legal analysis of the use of pricing algorithms will be discoverable. The dual-purpose role that in-house attorneys sometimes play as legal counselors and business advisers may complicate a successful invocation of attorney-client privilege.

This difficulty is exacerbated by the uncertainty of what legal standard should be applied to the inquiry, such as the “primary purpose” test or the “significant legal purpose” test. Engaging outside counsel creates a buffer that can minimize the risk of blurred lines regarding attorney-client privilege.

Outlook

Algorithmic pricing software is growing in use, but it has also become a focus of private plaintiffs and law enforcement under antitrust laws. Organizations and their legal counsel will benefit from proactive information governance and related analyses to reduce the risk of allegations that pricing algorithms are a restraint of trade or anticompetitive.

Careful investigation before an inquiry is made or a lawsuit is filed—and thinking through information governance to eliminate unnecessary discovery targets—is critical to achieving that goal.

The views expressed in this article are those of the authors and do not necessarily represent the views of their law firm or any of its clients.