The Unknowable Line: Why Defining Superintelligence is the Hardest Problem in AI

Over 800 public figures, including "AI Godfathers" and tech leaders, recently signed an open letter calling for a ban on the development of Artificial Superintelligence (ASI) until clear safety conditions are met. This debate isn't about today's AI tools; it's about the future—specifically, a machine intelligence that vastly exceeds the cognitive abilities of the smartest human in every domain.

This prospect is genuinely thrilling. As a company that thrives on AI efficiency, we're excited by what ASI could mean for humanity: solving climate change, curing diseases, and achieving prosperity that's currently unimaginable.

But excitement must be tempered by a profound question: How do we know when to stop?

Who’s going to get to superintelligence first?

Defining the Super-Human Line

The hard part isn't developing the technology; it's drawing the line. What exactly is a "superintelligent" AI?

It’s far more than simply knowing the answers faster than a human. ASI is defined by capabilities that surpass us in all cognitive domains, including:

  • Self-Improvement: The capacity to recursively improve its own code and architecture, leading to an intelligence explosion.

  • Creative & Strategic Thinking: Generating novel scientific breakthroughs, devising complex, long-term global strategies, and solving problems that would take humans centuries to approach.

  • Emotional & Social Understanding: The ability to model human emotions, predict societal consequences, and navigate nuanced social dynamics at a superhuman level.

The problem for businesses is that there is no universally agreed-upon, measurable benchmark for when Artificial General Intelligence (AGI) becomes ASI. AGI is generally defined as human-level performance across all cognitive tasks; ASI is the leap beyond. And because each increase in capability carries enormous economic value, businesses are incentivised to push for the next, more profitable level.

The Existential Clock is Ticking

The race to ASI is happening incredibly fast. Sam Altman, the CEO of OpenAI, has famously stated that we could be just four to five years away from systems that qualify as superintelligent.

Given that timeline, the window we have to settle on a "stopping point" is terrifyingly short. How can we implement effective control mechanisms when the intelligence we are trying to control is rapidly outstripping its creators?

The solution isn't to simply hope companies pause development. The letter's plea, while well-intentioned, is unlikely to succeed in a race driven by market pressure and national competition.

Governance over Guesswork

At BOND Digital, we support the ethical imperative for safety, but we believe the focus must shift from a vague "ban" to clear, verifiable, and enforceable governance over the resources required for development.

  1. Regulate Compute: The only known choke point is compute: the enormous, expensive, and finite amount of hardware required to train these massive models. Regulation could focus on monitoring, licensing, and potentially capping the compute used for the most powerful training runs (a worked sketch of such a threshold check follows this list).

  2. Focus on Alignment: Businesses must prioritise alignment, ensuring that an AI's goals are fundamentally consistent with human values at every stage of development, not just as an afterthought.
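
What makes compute such a practical choke point is that a training run's total compute can be estimated before it happens, from model size and dataset size alone. The snippet below is a minimal, illustrative sketch only: it assumes the widely used C ≈ 6ND back-of-envelope approximation for dense-model training compute, and a hypothetical licensing threshold of 10^25 FLOP (the figure the EU AI Act uses to presume "systemic risk"). The function names and the example numbers are ours, not drawn from any statute or regulator.

```python
# Illustrative sketch of a compute-based licensing check.
# Assumptions (not from the article): the standard C ~= 6 * N * D
# approximation for dense-transformer training compute, and a
# hypothetical 1e25 FLOP trigger, echoing the EU AI Act's
# systemic-risk threshold.

TRAINING_RUN_THRESHOLD_FLOP = 1e25  # hypothetical licensing trigger


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Back-of-envelope estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens


def requires_licence(parameters: float, training_tokens: float) -> bool:
    """True if the planned run would cross the (hypothetical) regulatory cap."""
    return estimated_training_flop(parameters, training_tokens) >= TRAINING_RUN_THRESHOLD_FLOP


if __name__ == "__main__":
    # Example: a 70B-parameter model trained on 15T tokens
    # -> 6 * 7e10 * 1.5e13 = 6.3e24 FLOP, just under the threshold.
    flop = estimated_training_flop(7e10, 1.5e13)
    print(f"Estimated compute: {flop:.2e} FLOP")
    print("Licence required:", requires_licence(7e10, 1.5e13))
```

The point of the sketch is that the check is cheap and auditable: a regulator doesn't need to understand a model's weights, only its declared scale, which is exactly what makes compute governance more verifiable than a vague ban.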

We are excited by the potential of ASI, but we are absolutely committed to the principle that human curiosity should not come at the expense of human safety. The hard work of definition and governance must happen now.
