The Identity War: Can an Iris Scan Really Differentiate Us from AI?

The identity crisis of the internet is no longer a futuristic theory; it is a present-day reality. As AI models become indistinguishable from humans in text, voice, and even video, the race to prove personhood has led us to the iris-scanning Orb by World (formerly Worldcoin).

While the goal of distinguishing humans from bots is critical, the method currently being deployed raises significant questions about privacy, security, and the future of biometric data.

The Problem with Centralised Biometrics

The fundamental concern with the World project isn't the goal, but the architecture. Creating a global database of iris scans—the most unique biometric marker we have—creates a massive honeypot for hackers and state actors.

Storing this data in a US-based ecosystem is particularly concerning for those accustomed to the protections of the EU’s GDPR. Privacy laws in the United States remain fragmented and significantly more permissive than the strict "privacy by design" standards found in Europe. If a biometric database of this scale is compromised, you cannot "reset" your iris like you can a password. It is a permanent breach of identity.

Can Biometrics Survive the AI Hacking Era?

The second, perhaps more alarming, issue is the rapid advancement of AI hacking capabilities. Recent reports about highly capable AI agents, such as Claude Mythos and other experimental models, suggest that AI is already capable of navigating and breaching complex systems faster than human cybersecurity teams can respond.

If AI can now simulate complex human reasoning and hack sophisticated infrastructure, how long before it can generate a convincing iris? We have already seen AI bypass liveness checks and facial recognition. Biometrics were once considered the gold standard of security because they were tied to a physical body, but as AI bridges the gap between digital and physical simulation, even our eyes may no longer be proof enough of our humanity.

The Path Forward

Proving personhood is essential for a functioning digital economy, but the solution likely shouldn't be a privately owned, centralised biometric database. Instead, the focus should shift toward:

  • No Single Point of Failure: We shouldn't rely on a single biometric marker or a single company’s database.

  • Privacy over Convenience: The lax US privacy laws are a valid concern. We should be pushing for decentralised identity (DID) models where the user, not the "Orb," holds the keys.

  • Hardware is only Half the Battle: As AI agents become more autonomous (like Mythos), our identity checks must include behavioral and social signals, not just a snapshot of a pupil.
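The "user holds the keys" idea behind decentralised identity can be made concrete with a challenge–response protocol: the user proves knowledge of a private key without ever transmitting it, so there is no central secret to steal. Below is a minimal sketch using the Schnorr identification protocol with deliberately tiny toy parameters (real systems use elliptic curves or 2048-bit-plus groups); all function names here are illustrative, not part of any DID standard.

```python
import secrets

# Toy parameters -- NOT secure, for illustration only.
P = 2039   # safe prime: P = 2*Q + 1
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup (a square mod P)

def keygen():
    """User creates a key pair; only the public key is ever published."""
    x = secrets.randbelow(Q - 1) + 1   # private key, never leaves the user
    y = pow(G, x, P)                   # public key, e.g. in a DID document
    return x, y

def commit():
    """User's first move: commit to a fresh random nonce."""
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    return k, r

def respond(x, k, c):
    """User answers the verifier's random challenge c."""
    return (k + c * x) % Q

def verify(y, r, c, s):
    """Verifier checks the response using the public key alone:
    g^s == r * y^c (mod P) holds iff s was built from the real x."""
    return pow(G, s, P) == (r * pow(y, c, P)) % P

# One authentication round.
x, y = keygen()
k, r = commit()
c = secrets.randbelow(Q)   # verifier picks the challenge
s = respond(x, k, c)
assert verify(y, r, c, s)  # identity proven; the secret was never sent
```

The point of the design is that a breach of the verifier's database leaks only public keys, which are useless for impersonation — unlike a breach of a centralised iris database, which is permanent.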

We are at a crossroads. We must solve the AI identity problem, but we must be careful not to build a surveillance infrastructure that is more dangerous than the bots it's trying to stop.
