Fake news, non-consensual deepfake pornography, and synthetic identities have saturated the internet, yet global regulation remains largely “asleep at the wheel.” According to Dr. Manny Ahmed, the Cambridge-educated CEO of OpenOrigins, the current trajectory of AI development is leading toward a “deepfake pandemonium” that threatens our shared understanding of reality.
The crisis reached a breaking point in early 2026, following a series of high-profile scandals involving Elon Musk’s chatbot, Grok. Its permissive guardrails reportedly allowed users to generate a torrent of non-consensual sexual imagery—including, most disturbingly, content involving minors—triggering international condemnation and legal action.
The “Gunpowder” Comparison: Is AI Inherently Hazardous?
Dr. Ahmed argues that deepfake technology is a dual-use tool, much like gunpowder. While gunpowder can be put to constructive use, blasting tunnels and clearing routes for roads, it is most infamously used to propel bullets.
- Innocuous Use Cases: Translating training content, building virtual avatars for education, and producing corporate communications.
- The Hazard: The “explosive” property of deepfakes, their ability to deceive, is so powerful that in today’s largely unregulated market the potential for harm often outweighs the benign benefits.
“We don’t allow people to buy and create gunpowder to do whatever they want with it—we regulate it. The same should be said for deepfake technology.” — Dr. Manny Ahmed
“Rubber Bullets” vs. Real Lead
A major point of concern is the “democratization” of cybercrime. In the past, hacking or creating convincing propaganda required technical sophistication. Today, tools like Grok have lowered the barrier to entry so significantly that anyone with a prompt can cause life-altering harm.
Ahmed compares the current situation to giving a murderer a choice between two guns:
- The Current Reality: Big Tech provides “actual bullets” by allowing models to output high-fidelity, non-consensual images.
- The Goal of Regulation: Forcing companies to equip their models with “rubber bullets”—stringent guardrails that prevent harmful outputs while still allowing for legitimate, creative use.
Regulating the “Big Five”
Despite the billions of images circulating, the power to stop the spread lies with a surprisingly small number of players. Dr. Ahmed points out that there are only about five major companies—OpenAI, xAI (Grok), Google, Meta, and Anthropic—that produce the core models behind these deepfakes.
The Expert’s Proposed Roadmap:
- Target the Source: Shift the focus from individual “bad actors” to the five companies that build the engines of creation.
- Mandatory Monitoring: Force companies to monitor and block “harmful intent” requests at the server level, as illustrated in the sketch after this list.
- Beyond the “Bare Minimum”: While the EU has recently moved to ban “nudifier” apps (May 7, 2026), Ahmed argues that making such content illegal is only half the battle; what is also needed is technical enforcement that makes generating it impossible in the first place.
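To make the “server-level monitoring” idea concrete, here is a minimal, hypothetical sketch of pre-generation screening. Everything in it, the `GenerationRequest` type, the `screen_request` gate, and the toy keyword list, is an illustrative assumption rather than any real provider’s implementation; a production system would rely on trained safety classifiers, not keyword matching. What it demonstrates is the “rubber bullets” architecture: every request is judged before any model is allowed to run.

```python
# Hypothetical sketch only: names, categories, and the keyword heuristic
# are illustrative assumptions, not any real provider's safety system.
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()  # request may proceed to the image model
    BLOCK = auto()  # request is refused before any generation happens


@dataclass
class GenerationRequest:
    user_id: str
    prompt: str


# Toy stand-in for a real intent classifier. In practice this would be a
# dedicated safety model scoring categories such as "sexual imagery of a
# real, identifiable person" or "content involving minors".
BLOCKED_MARKERS = ("nudify", "undress", "without consent")


def screen_request(request: GenerationRequest) -> Verdict:
    """Gate every generation request *before* the model is invoked."""
    lowered = request.prompt.lower()
    if any(marker in lowered for marker in BLOCKED_MARKERS):
        return Verdict.BLOCK
    return Verdict.ALLOW


if __name__ == "__main__":
    req = GenerationRequest(user_id="u123", prompt="Undress this photo of her")
    print(screen_request(req))  # Verdict.BLOCK
```

The design point is that the gate sits in front of the model at the server, so a refused request never reaches generation at all, rather than filtering outputs after the harm is already possible.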
Current Global Legal Landscape (May 2026)
| Jurisdiction / Entity | Status | Action Taken |
| --- | --- | --- |
| European Union | Active | Passed an explicit ban on “nudifier” apps and non-consensual AI porn (May 2026). |
| United States | Pending | The DEFIANCE Act passed the Senate, allowing victims to sue creators; the Take It Down Act is now being enforced at the federal level. |
| Meta Platforms | New Policy | Switched from removing deepfakes to “Mandatory Labeling” (May 2026). |
| France | Legal Action | Prosecutors are seeking charges against Elon Musk over Grok-generated CSAM and deepfakes. |