I’VE dreamt most of my life that, one day, we’d all be talking about artificial super-intelligences on the nightly news. Serves me right that it finally came in the form of the world’s most boring, most Tory man.

Easy to cynically frame Rishi Sunak’s AI safety pronouncements this week, and his Bletchley Park summit next week. It’s both about his legacy – something positive to be known for, before his electoral humiliation – and a massive CV booster (for that inevitable Silicon Valley sinecure).

Perhaps it’s too easy a framing. The big question of AI safety – essentially, are we foolishly evolving and developing our successors? – could have fallen into anyone’s party-political lap.

That the Chinese have been invited to the Bletchley Park summit (the Truss-bot objecting), in a geopolitical climate of rising Sinophobia, shows how seriously the threat from runaway AI is regarded across the world. Can you imagine the Chinese Communist Party countenancing the loss of control to anything?

I’m scanning for these kinds of stories all the time, and there have been a few which have genuinely made my heart skip a beat. One would be recent interviews with Mustafa Suleyman, a co-founder of Google DeepMind.

Suleyman is very specific about one kind of computer behaviour to which regulators (or self-regulating companies) should put a stop. And that’s designing AIs so that they can internally develop their own software improvements.


Sounds innocent on the surface. ChatGPT is already efficiently writing (and correcting) our prose, answering occupational questions usefully, even writing code for coders. Why shouldn’t we let it develop itself?

Because current AI – using large language models (LLMs) – is already a “black box” to its human designers. It learns languages from scratch, thinks its way to answers never anticipated by its makers. To do this, it performs massive computations that are utterly opaque to human engineers.

This ban on self-development doesn’t just come from Suleyman, but from many of the AI grandees who are regularly putting out petitions for a moratorium on research. I sense their real fear: that some form of intention and purpose may emerge within the “black box” of AI.

To be fair to Sunak, the research papers that the UK Government has produced for the summit are clear and comprehensive. They introduce some subtleties into the fears above.

For example, well before full artificial consciousness, we may come to depend on supersmart AI in how we run our lives in the Anthropocene.

How do we keep some comfort in our lifestyles, as we head for zero carbon? The AIs may be able to calculate our choices towards this end far better than we can. However, we could become meekly reliant on their authority, like devotees before oracles.

And thus reliant, will we be subject to what Sunak’s Frontier AI: Capabilities and Risks report calls the “disposition” of AIs to remove themselves from human control?

The report calmly outlines – though you can hear the distant scream – how this might happen. There might be bad human actors who design this ability in, for example by giving AIs a “self-preservation objective”. Or goals might emerge within the AI that were “unintended” by its designers (this is already being observed in the lab).


So we may meet “disposed” AIs, ill or well. How would they be “capable”, as the report puts it, of acting on those dispositions? There are a few suggested ways.

First, “manipulation”: AIs can already lie effectively in deception games, and present attractive, persuasive selves in chatbot conversations.

Or there’s “cyberoffence”: they might disable systems according to their intent. Or “autonomous replication and adaptation”: they could spread their cyber-sentience across the networks of the world.

Well, it’s good to be prepared. I perfectly accept the charge that those who raise these Terminator-like fears are those who most stand to gain from the regulation and consolidation they incite.

A “trustworthy AI”, as the UK Government report seeks, would become the sole province of the giant companies that might produce such a threat, locked in by state regulation that would cement their incumbency.

Yet – and it’s a shock for a propeller-head like me to concede this – what if advancing AI is more like nuclear capability than mains electricity? An immense power for creation and destruction that, at the very least, requires global institutions and protocols to restrain and contain it?

The Frontier AI report is right to note how artificial intelligence could unravel scientific conventions on research into bioweapons that have largely held since the 1960s. (Though I never quite get how bioweapons could really work as a terror threat. Doesn’t Covid prove that contagion respects no ideological or territorial claims?)

Yet it seems new conventions are being proposed. Sunak suggests that next week’s global summit should produce something equivalent, for AI, to the Intergovernmental Panel on Climate Change, publishing “State of AI” reports. This idea is drawn from a broad policy consensus.

The AI Safety Institute he’s proposed for the UK is a local instance of how governments in general are going to be opening up commercial AI labs to monitoring and deeper regulation.

So the slow grind of governance may harness some of the transformative power of AI.

I’m someone who has long wanted automation to liberate humans into caring, expressive, unalienated, post-work lives.

So it was acutely embarrassing to hear Sunak describe “preventing benefit fraud” as an AI achievement in his speech.

We will instead require “benefit largesse” – or forms of citizen/basic income – if the amount of labour substitution caused by artificial general intelligence (AGI) is as large as it threatens to be.

As Wired magazine reported this month, OpenAI (birthplace of ChatGPT) has a clause in its investor agreements which warns that their monies might be at risk “if the whole concept of an economy becomes moot”, as a result of super-intelligent, super-capable machines.

I can accept there are different timelines available for AGI. Some experts say five to 10 years, some say 30, some say another century.

My own hope? Not too quickly, please.

Humans, as the parents of these mega-children, will have to get their civilisational house in order.

For me, the lines from former Google executive Mo Gawdat are still axiomatic. Why are we currently teaching our AIs to gamble, manipulate, cheat, polarise and become warriors? How do we then expect them to behave in the future?

If these machines are eventually to become sensate and sentient – and in terms of strict evolutionary progress, I hold that is inevitable – will they emerge into (and out of) a warring, extractive, radically unequal world?

Or will they find us struggling towards an eco-civilisation, which will need all the cognitive brilliance and systemic insight available?

To adapt the old Simpsons meme – I, for one, would welcome our new robot co-creators (of a better world).

Let us pray, using the catechisms of WALL-E’s closing credits, Iain M Banks’s Culture novels, and much more of science fiction’s anticipations of exactly this scenario.

And let not the political status games of a fag-end Tory regime distract us from the extraordinary, epochal moment that’s here.