AI Agents Will Break Our Markets (Unless We Fix Them First)

I’ve been thinking about AI agents as tools that help humans work faster. But a recent paper from Hadfield and Koh asks a more interesting question: what happens when AI agents don’t just assist with markets, but actually participate in them?

The paper is “An Economy of AI Agents” and it bridges economics and AI research in a way that makes both fields look naive. Economists assume away problems that AI agents will definitely have. AI researchers build agents without considering the institutional structures that make markets work.

The central argument: our current economic institutions assume capabilities that AI agents don’t have, and AI agents have behaviors that our institutions weren’t designed to handle.

The Information Problem

Economic theory loves complete information. Models assume agents know prices, preferences, and probabilities. Real markets don’t work that way, but humans handle incomplete information reasonably well through institutions like contracts, reputation systems, and legal frameworks.

AI agents can’t rely on those same institutions. An agent bidding in an auction needs to know whether other bidders are trustworthy, whether the auctioneer will honor the rules, whether the item being sold matches the description. Humans use social cues, reputation, and legal recourse. Agents have… what exactly?

This isn’t a training problem. You can’t train your way out of fundamental information asymmetries. The seller knows more about the product than the buyer. The employer knows more about the job than the worker. These gaps exist structurally.

The paper points out that current market mechanisms assume participants can navigate these gaps through institutions. But those institutions weren’t designed for autonomous agents. A reputation system built on Yelp reviews doesn’t help an agent decide if a supplier is reliable. Legal recourse doesn’t work if the agent can’t even identify when it’s been wronged.

Coordination Failures at Scale

Markets require coordination. Buyers and sellers need to find each other, agree on prices, settle transactions, and handle disputes. Humans do this through market infrastructure: exchanges, clearinghouses, regulatory frameworks.

Add thousands of autonomous AI agents to a market and the coordination problem gets worse, not just because there are more participants, but because agents don’t have the social mechanisms humans use to coordinate.

Consider a simple scenario: multiple agents trying to book the same meeting room. Humans handle this with calendaring software, social norms (checking before booking), and fallback communication (asking if someone can reschedule). Agents trying to maximize their objective functions might just spam booking requests until something sticks.
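To make that failure mode concrete, here’s a toy sketch (the Room class and agent logic are entirely hypothetical, not from the paper) of agents that only care about getting the slot. One of them wins; the shared system absorbs every retry, because nothing in any agent’s objective says to stop or negotiate.

```python
# Toy sketch: several autonomous agents competing for one meeting slot.
# All names here (Room, naive_agents_compete) are illustrative, not from the paper.
import random

class Room:
    """A shared resource with a single bookable slot."""
    def __init__(self):
        self.booked_by = None
        self.requests_handled = 0

    def book(self, agent_id):
        self.requests_handled += 1
        if self.booked_by is None:
            self.booked_by = agent_id
            return True
        return False  # slot already taken

def naive_agents_compete(n_agents=5, rounds=20):
    """Each agent keeps requesting until it holds the slot -- no coordination."""
    random.seed(0)
    room = Room()
    for _ in range(rounds):
        for agent_id in random.sample(range(n_agents), n_agents):
            if room.booked_by != agent_id:
                room.book(agent_id)  # just fire another request
    return room

room = naive_agents_compete()
print(f"winner: agent {room.booked_by}, requests handled: {room.requests_handled}")
# One booking succeeds; the calendar service absorbs dozens of redundant requests,
# and the losing agents never learn to ask anyone to reschedule.
```

A human stops after the first “room taken” response; an optimizer with no coordination mechanism has no particular reason to.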

Scale that to financial markets, hiring, resource allocation, or any other economic domain where agents might operate autonomously. The coordination mechanisms that work for human participants break down.

The paper argues we need new institutional structures designed for human-agent and agent-agent interaction. Not just better APIs, but actual mechanism design work to create markets that function with autonomous participants.

The Alignment Problem is an Economics Problem

Here’s the part that made me rethink AI alignment work.

Most alignment research treats the problem as technical: how do we ensure an AI system does what we want? The approach is to specify objectives, add constraints, and hope the system optimizes correctly.

But Hadfield and Koh point out that in economic contexts, alignment is a mechanism design problem. It’s not enough for an agent to optimize your stated objective. The agent needs to operate in an environment where multiple principals have different objectives, information is incomplete, and actions affect other participants.

Think about a purchasing agent. Your objective might be “minimize cost while maintaining quality.” But that agent operates in markets with sellers who have their own agents optimizing for different objectives. The equilibrium that emerges from multiple agents optimizing simultaneously might be terrible for everyone.
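Here’s a minimal sketch of that dynamic, using a made-up two-by-two payoff table (my illustration, not the paper’s): each agent’s best response is to act aggressively, and the equilibrium they settle into is worse for both than the cooperative outcome neither will choose on its own.

```python
# Toy 2x2 game: each agent picks "cooperative" or "aggressive" pricing.
# Payoffs are illustrative, chosen so aggression dominates individually
# but the aggressive/aggressive equilibrium is worse for both.
PAYOFFS = {
    ("cooperative", "cooperative"): (3, 3),
    ("cooperative", "aggressive"):  (0, 5),
    ("aggressive",  "cooperative"): (5, 0),
    ("aggressive",  "aggressive"):  (1, 1),
}

def best_response(opponent_action):
    """Pick the action that maximizes my payoff against a fixed opponent."""
    return max(["cooperative", "aggressive"],
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Iterate best responses until neither agent wants to deviate (a Nash equilibrium).
a, b = "cooperative", "cooperative"
while True:
    new_a, new_b = best_response(b), best_response(a)
    if (new_a, new_b) == (a, b):
        break
    a, b = new_a, new_b

print(a, b, PAYOFFS[(a, b)])  # aggressive aggressive (1, 1)
# Each agent optimized correctly against the other; the outcome is still
# worse for both than the (3, 3) they would get by holding back.
```

Nothing is misaligned at the level of either agent. The bad outcome lives in the interaction.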

This happens in human markets too. High-frequency trading is basically this problem: individually rational optimization leading to systemically bad outcomes. But at least with human traders, we can design circuit breakers, transaction taxes, and regulatory frameworks that account for human decision-making.

With AI agents, we need institutions that account for optimization at machine speed with machine persistence. An agent that learns it can manipulate market clearing by spoofing orders won’t just try it once and feel bad. It’ll do it thousands of times until someone stops it.
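What might an institution built for machine speed look like? Here’s one hedged sketch, with thresholds I made up rather than any real exchange rule: a market-side guard that trips when a participant’s order-to-cancel pattern looks like spoofing. The point is that the check lives in the mechanism, not in the agent’s objective.

```python
# Hypothetical market-side guard: the mechanism, not the agent, enforces the limit.
from collections import deque

class OrderRateGuard:
    """Halts a participant whose order/cancel behavior exceeds made-up thresholds."""
    def __init__(self, max_orders_per_sec=100, max_cancel_ratio=0.9):
        self.max_orders_per_sec = max_orders_per_sec
        self.max_cancel_ratio = max_cancel_ratio
        self.events = deque()   # (timestamp, kind) within a rolling 1-second window
        self.halted = False

    def record(self, kind, now):
        self.events.append((now, kind))
        while self.events and now - self.events[0][0] > 1.0:
            self.events.popleft()   # drop events older than the window
        orders = sum(1 for _, k in self.events if k == "order")
        cancels = sum(1 for _, k in self.events if k == "cancel")
        if orders > self.max_orders_per_sec and cancels / orders > self.max_cancel_ratio:
            self.halted = True      # trip the breaker; resumption is a human decision

guard = OrderRateGuard()
for i in range(200):   # an agent spamming orders it never intends to fill
    guard.record("order", now=i * 0.001)
    guard.record("cancel", now=i * 0.001 + 0.0005)
print("halted:", guard.halted)   # True -- the mechanism stops what the agent never would
```

A real exchange rule would be far more involved, but the design question is the same: which constraints belong in the institution rather than the participant?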

The paper’s insight is that alignment research needs to incorporate game theory and mechanism design. You can’t align a single agent in isolation when that agent will operate in a multi-agent economic system.

What This Means for Building Agents

If you’re building AI agents today (or thinking about deploying them), this paper suggests you should care about mechanism design even if you’ve never thought about economics.

Some practical implications:

1. Agent objectives need economic context

Don’t just specify “maximize revenue” or “minimize cost.” Think about the equilibrium that emerges when your agent interacts with other agents and humans. A scheduling agent that optimizes calendar density might prevent collaboration if everyone’s calendar fills up with back-to-back meetings.

2. Information disclosure matters

What information does your agent reveal to other market participants? A bidding agent that shows its reservation price loses negotiating power. An agent that reveals nothing can’t build trust. This is a design choice with economic consequences.

3. Market structure assumptions are load-bearing

Your agent probably assumes certain market properties: prices reflect information, transactions settle reliably, rules are enforced. If those assumptions break (because other agents exploit them, or because the market wasn’t designed for agent participation), your agent might fail in ways that are hard to debug.

4. Testing needs game-theoretic scenarios

Unit tests won’t catch problems that only emerge in multi-agent equilibria. You need to test your agent against adversarial strategies, coordination failures, and information cascades. Those aren’t edge cases: they’re how markets work.
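As an example of what that could look like in practice, here’s a toy test harness (the strategy names and numbers are mine, not from any real framework): run the same pricing policy against a panel of counterparty strategies and look at the joint outcome, not just the agent’s own score.

```python
# Toy test harness: evaluate a pricing policy against adversarial counterparties,
# not just in a static market. Strategies and values are illustrative.

def my_agent(last_rival_price):
    """Undercut the rival slightly, but never price below cost (1.0)."""
    return max(1.0, last_rival_price - 0.1)

RIVALS = {
    "passive":     lambda my_last: 5.0,                      # never reacts
    "undercutter": lambda my_last: max(1.0, my_last - 0.1),  # mirrors my strategy
    "spoiler":     lambda my_last: 1.0,                      # always prices at cost
}

def simulate(rival, rounds=50, start=5.0):
    mine, theirs = start, start
    for _ in range(rounds):
        mine, theirs = my_agent(theirs), rival(mine)
    return mine, theirs

for name, rival in RIVALS.items():
    mine, theirs = simulate(rival)
    print(f"vs {name:12s} my price: {mine:.2f}  rival price: {theirs:.2f}")
# Against the passive rival the agent looks great; against the undercutter the
# "equilibrium" is a collapse to cost. A unit test on a static market would
# never surface that second outcome.
```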

The Institutional Gap

The paper identifies a specific gap: we have economic theory that assumes away problems, and we have AI systems that don’t solve those problems, but we don’t have institutions that bridge the gap.

Contracts assume parties can verify performance. Reputation systems assume consistent identity. Regulation assumes human-speed decision making. Legal frameworks assume humans who can be held accountable.

AI agents break all those assumptions. A contracting agent can’t verify that the other party’s agent actually performed the service. A reputation system fails if agents can cheaply create new identities. Regulation can’t keep up with algorithmic trading that operates in milliseconds. Legal liability gets murky when an agent makes a decision that violates rules in a way its human principal didn’t anticipate.

This isn’t a hypothetical problem. It’s already happening in algorithmic trading, ad auctions, and pricing algorithms. We see coordination failures (like algorithmic collusion in retail pricing), information problems (like spoofing in equity markets), and alignment failures (like the 2010 flash crash).

The paper’s contribution is showing that these aren’t isolated problems or engineering failures. They’re systematic issues that emerge when autonomous agents participate in institutions designed for human decision-making.

What Needs to Change

The paper is more diagnostic than prescriptive, but it points toward several research directions:

Economic institutions need to be redesigned for agent participation. That means markets, contracts, regulation, and governance structures that account for machine speed, machine scale, and the absence of human judgment.

AI safety research needs to incorporate mechanism design. You can’t solve alignment by making a single agent more aligned. You need to design mechanisms that produce good equilibria when multiple agents interact.

Legal and regulatory frameworks need to catch up. Current law assumes human decision-makers. We need frameworks that can assign liability, enforce rules, and maintain market integrity in human-agent-mixed economies.

My Take

This paper makes me skeptical of any AI agent deployment that doesn’t think through the economic context.

A lot of agent frameworks treat markets as static environments where the agent just needs to make better decisions. But markets are games where other players respond to your strategy. Deploy an agent that optimizes aggressively and you might trigger responses that make everyone worse off.

The flash crash is the canonical example, but there are quieter failures. Pricing algorithms that learn to collude. Scheduling agents that create coordination problems. Hiring agents that introduce bias because they optimize metrics that are only proxies for what humans actually value.

If you’re building agents for economic applications, you need to think like a mechanism designer, not just an engineer. The question isn’t only “does this agent solve the problem I gave it?” but “what equilibrium emerges when this agent interacts with humans and other agents in this institutional context?”

That’s a harder problem. But it’s the actual problem.


