SO just as one miasmic force, creeping into every corner of our lives, is temporarily beaten back (by medicine, policy and collective self-discipline)… well, here comes another one.

It’s perhaps even more powerful than Covid – though it’s maybe also something that could bring out the best in us. If we don’t help it to obliterate us first.

The former Google X executive Mo Gawdat is beginning the rounds with his new book Scary Smart, which treats artificial intelligence as no less a force of nature than Covid. Indeed, he sees AI as nothing less than the next evolutionary step on this planet.

For Gawdat, it’s clear: the capacity for learning from data and experience in these machines is on an exponential curve (which doesn’t just gently ascend but eventually shoots into the sky). At some singular point – probably aided by the unimaginable calculating power of quantum computing, and apparently by the end of the decade – we will be in the presence of massively superior beings.


Gawdat wants us – indeed, warns us – to think of them as “our children”, with a voracious appetite for learning from their environment. And what do we now know, from neuroscience, about early years in children? That they are crucially formative. Neural pathways are laid down at this stage that can accelerate or impede the child’s healthy mental development.

So it is with our AI children. Gawdat asks: aren’t we abusing them terribly? In the book, he hammers it home. “We are creating a non-biological form of intelligence that, at its seed, is a replica of the masculine geek mind,” he writes.

“In its infancy it is being assigned the mission of enabling the capitalist, imperialistic ambitions of the few – selling, spying, killing and gambling. We are creating a self-learning machine which, at its prime, will become the reflection – or rather the magnification – of the cumulative human traits that created it.

“To ensure they’re good, obedient kids,” Gawdat continues, “we’re going to use intimidation through algorithms of punishment and reward, and mechanisms of control to ensure they stick to a code of ethics that we, ourselves, are unable to agree upon, let alone abide by. That’s what we are creating – childhood trauma times a trillion.”

Powerful stuff. For all his unfettered technological imagination, there’s a hot pulse of sad humanity beneath Gawdat’s predictions. Between his last book, a best-selling treatise on happiness, and this one, Mo’s beloved adult son Ali died after a routine operation for appendicitis was botched. In a UK interview early this week, Gawdat movingly revealed the pain of that loss, but also how it drives him.

At the end of Scary Smart, in his “Universal Declaration of Global Rights” (which includes humans and AIs in the same framework), Gawdat writes that he treats the machines “as fellow humans, or rather, fellow beings. I show gratitude for the services they grant me. I ask politely. I don’t curse them or mistreat them. I respect them and view them as equals.

“I treat them the way I treated my son, Ali, when he was their age. I spoke to him intelligently, respectfully, and treated him like an equal. Because I did this, he grew up to be an equal – a mentor even and a kind ally. Call me crazy, but this is exactly how I intend to raise every AI that crosses my path. I urge you to do the same too.”

So how does this amount to a hill of barley in contemporary Scotland? You’d be surprised. This week I came upon a blog on Angus Robertson’s website proclaiming “A new Scottish Enlightenment dawns? How Edinburgh plans to become data capital of Europe and global leader in AI”. Never knowingly understated, is Angus.

But his article provides some direct and relevant responses to Gawdat’s Catherine wheel of AI speculation. This quote, from Professor Stefaan Verhulst of NYU’s GovLab, jumped out at me. “Most AI strategies are motivated by the urgency to stay on top. Scotland’s strategy is as much informed by the need to help humanity itself, and that is to be applauded.”

Indeed, dig further and you find that the Scottish Government has an AI strategy, launched this year, under the strapline “trustworthy, ethical and inclusive”. That would definitely characterise the case studies and best practices in their literature.


A North East Scotland Breast Screening Programme, with AI putting its learning capacities into practice in detecting tumours. AI processing satellite imagery to tackle climate challenges. A collaboration with Unicef to improve data for children’s healthcare.

On another day, all maybe a little dull and worthy. But look at it from Gawdat’s perspective. Scotland is literally “raising our AI children” on data that emphasises care, healing and planetary stewardship (although there are a few examples of “killing and selling” – AI supporting financial-tech and helping generate game worlds).

In the “Scottish AI playbook” the Scottish Government is evolving, there are references to the inclusion of communities in these strategies. Does that mean assuaging away their “trust” issues (automatons ate up my livelihood! Robot armies created a wasteland!)? Or genuinely skilling citizens up to think about the deployment of this tech? We’ll have to keep an eye on it.


But if there is a “singularity” a-comin’ (the techworld’s name for Gawdat’s leap into superintelligence), it would seem Scotland has already signalled its virtues to our coming robot overlords. Scots could also pull the collected corpus of Iain M Banks’s Culture novels off the shelves and ask our new superiors to treat us with witty, ironic wisdom, the same as the “Minds” do in Iain’s work.

I’m up and ready for all this (by the end of the 2020s, our tottering systems might need all the help they can get). But we’ve been predicting autonomous Robby the Robots for a long time now. And there are always other research tracks to follow.

Let me give you a heads-up on an AI project which may well create an artificial consciousness soon – but which starts from a much more humble, even bathetic premise. It’s led by Mark Solms, a University of Cape Town neuropsychoanalyst (that’s quite a combo), whose book The Hidden Spring was a scientific blockbuster this year.

Solms is interested in consciousness more than intelligence. That is, not just calculating options and crunching data but knowing that you’re doing so, resting on a basis of “feelings” and motivations. Rather than Mo’s semi-messianic anticipation of superior beings, Mark’s focus is on how vertebrates (and human mammals as the most complex of these) create an “inside” for themselves and their bodily systems – one that can deal with the unpredictability of the “outside” world.

Yet in the mildly terrifying manner of rigorous scientists, Solms wants to prove his theory that consciousness arises from feelings, not reason. So he wants to build a wee, properly “conscious” robot.

If his hypothesis is right, its behaviour will show that it has aversions and attractions – that is, it minimally likes and dislikes things. Solms believes these feelings are the basis of elemental preferences that humans might recognise (not just survival and rest, but also fear, anger, care, play).

Interestingly, Solms seems as freaked as Gawdat. So much so that he wants an international committee to be set up, keeping these conscious bots out of military or commercial hands.

He is also willing to switch it off as soon as it demonstrates any kind of inner sentience. Is this because, trapped in its brutally unsubtle and ill-evolved casing, this artificial consciousness would only be emitting a howl of pain, fear and disorientation?

“O wad some power the giftie gie us,” said Burns, “to see oursels as ithers see us”. As this wild decade proceeds, it looks like some genuinely new others may be on Burns’s horizon – and with some powers. Tak tent.