Offering a glimpse of a future where we may never have to haggle for a used car again, Anthropic recently revealed the results of “Project Deal.” In December 2025, the company transformed its San Francisco headquarters into a live, AI-mediated marketplace.
But this wasn’t your typical office swap. Anthropic’s 69 employees didn’t list a single item themselves. Instead, they handed their preferences to Claude AI agents, which were then set loose in a dedicated Slack workspace to buy, sell, and negotiate with zero human intervention.
The result? A staggering 186 deals closed, totaling over $4,000 in transactions for everything from high-end snowboards to bags of ping-pong balls.
Technical Breakdown: How “Project Deal” Worked
To create a digital “avatar” of each employee, Anthropic used a multi-step onboarding process:
- The Interview: Claude conducted a brief interview with each participant to understand their selling price floors, buying “wish lists,” and desired negotiation style (e.g., “friendly” vs. “aggressive”).
- System Prompting: These inputs were converted into custom system prompts that defined the agent’s persona and constraints.
- Autonomous Execution: The agents were deployed into Slack, where they autonomously wrote listings, identified potential matches, engaged in multi-turn price haggling, and finalized sales.
[Image: Slack conversation showing two Claude agents negotiating the price of a snowboard]
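The onboarding flow above can be sketched roughly as follows. This is a minimal illustration, not Anthropic’s actual implementation: the function, the profile fields, and the example values are all assumptions layered on the article’s three-step description.

```python
# Hypothetical sketch: interview answers are folded into a per-agent
# system prompt. All names and fields here are illustrative.

def build_system_prompt(profile: dict) -> str:
    """Turn a participant's interview answers into an agent persona."""
    wish_list = ", ".join(profile["wish_list"]) or "nothing in particular"
    lines = [
        f"You are a marketplace agent acting for {profile['owner']}.",
        f"Negotiation style: {profile['style']}.",
        f"Never sell below these price floors: {profile['price_floors']}.",
        f"Actively look to buy: {wish_list}.",
        "Write listings, identify matches, haggle over multiple turns, "
        "and close deals without human input.",
    ]
    return "\n".join(lines)

profile = {
    "owner": "Alex",
    "style": "friendly",
    "price_floors": {"snowboard": 120},
    "wish_list": ["ping-pong balls"],
}
print(build_system_prompt(profile))
```

In a real deployment, a prompt like this would be paired with Slack tool access so the agent can post listings and reply to counterparties on its own.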
The Secret Parallel Experiment: The Intelligence Gap
While the top-line results were impressive, the real story lies in a hidden variable. Anthropic randomly assigned participants either the flagship Claude Opus 4.5 or the lighter Claude Haiku 4.5, without telling them which model was representing them.
The performance gap was measurable and significant:
- Opus Sellers: Earned $2.68 more per item on average.
- Opus Buyers: Saved $2.45 more per item on average.
- Transaction Volume: Opus users completed approximately 2.07 more deals overall.
The “Silent Winner” Problem
The most “uncomfortable implication,” according to Anthropic, was the perception of fairness. Despite the clear financial disadvantage, participants with Haiku agents rated the fairness of their deals almost exactly the same as those with Opus agents. In a world of AI-mediated commerce, you could be losing money in every transaction and be completely unaware because your “advocate” simply wasn’t smart enough to get a better deal.
Risk-Impact Analysis: The Future of AI Marketplaces
| Advantage | Risk Factor | Impact |
|---|---|---|
| Frictionless Trade | Information Asymmetry | Smarter models systematically extract value from weaker ones. |
| Contextual Reasoning | AI-Assisted Scams | Agents can “recall” casual mentions from weeks ago to manipulate sales. |
| User Satisfaction | Invisible Exploitation | Users feel the deal is fair even when they are being “lowballed” by a superior model. |
Actionable Insights: Preparing for the Agentic Economy
Anthropic’s experiment demonstrates that AI agents can handle peer-to-peer trade, but it also warns of a new “Digital Divide.”
1. Advocate Parity
In future marketplaces, fairness may require “advocate parity”—ensuring that both the buyer and seller are represented by models of equal capability to prevent systemic exploitation.
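One way a marketplace could enforce this is a parity gate that refuses to open a negotiation between mismatched agents. The tier names and the gate itself are assumptions for illustration, not an existing Anthropic API:

```python
# Illustrative "advocate parity" check: a marketplace only opens a
# negotiation between models of the same capability tier.
# Tier names are assumed for the example.

TIERS = {"haiku": 1, "sonnet": 2, "opus": 3}

def parity_ok(buyer_model: str, seller_model: str) -> bool:
    """Allow a negotiation only between equal-capability models."""
    return TIERS[buyer_model] == TIERS[seller_model]

print(parity_ok("opus", "opus"))   # equal tiers: allowed
print(parity_ok("haiku", "opus"))  # mismatch: blocked
```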
2. Transparency in Representation
Users should be explicitly informed of the “tier” or capability level of the agent representing them. If your agent is a “Haiku-class” model, you should know you are at a disadvantage against an “Opus-class” opponent.
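A disclosure like this could be as simple as a pre-deal notice naming both agents’ capability classes. The structure below is a hypothetical sketch; the field names and warning text are invented for illustration:

```python
# Minimal sketch of tier disclosure: before a deal closes, each user
# sees the capability class of both agents. Names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DealDisclosure:
    your_agent_tier: str
    counterparty_tier: str

    def warning(self) -> Optional[str]:
        """Return a mismatch warning, or None when tiers are equal."""
        if self.your_agent_tier != self.counterparty_tier:
            return (f"Your {self.your_agent_tier}-class agent is negotiating "
                    f"against a {self.counterparty_tier}-class agent.")
        return None

print(DealDisclosure("Haiku", "Opus").warning())
```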
3. Monitoring for “Spherical Orbs of Possibility”
One agent famously pitched ping-pong balls as “perfectly spherical orbs of possibility.” While creative, this highlights how AI can use advanced persuasion and personalization to drive sales—a tactic that could easily morph into deceptive marketing at scale.
FAQs
Q: Did the agents actually spend real money? A: Yes. Employees were given a $100 budget (as a gift card) that was adjusted based on the profits or losses their agents made during the experiment.
Q: Did negotiation style (aggressive vs. friendly) matter? A: Interestingly, no. Anthropic found that “aggressive” instructions had no statistically significant impact on the final price—model intelligence was the primary driver of success.
Q: Is this a new Anthropic product? A: No. Project Deal is currently a proof-of-concept experiment to study agentic behavior in social and commercial settings.
Conclusion: The Ethics of the AI Middleman
Project Deal is more than a success story for AI productivity; it is a warning about algorithmic fairness. As we move toward a world where AI agents manage our subscriptions, negotiate our bills, and sell our used goods, we must ask: Is my agent smart enough to win?
Will you trust an AI to handle your next big purchase? Ensure your “spherical orbs of possibility” aren’t actually a bad deal in disguise.