I KNEW the ground under my feet had shifted when I asked ChatGPT – the artificial intelligence service that’s freaking everyone out at the moment – to “write me a weather report in the style of Hugh MacDiarmid”. Which it then did:

Ah, the winds o’ fate do blaw sae cauld,
Whiles lashin’ doon in bitter squalls,
But aye, the sun breaks throu the clouds,
An’ warms the earth as it falls.
The mists o’ morn, they swirl an’ shift,
Like ghosts alang the glens sae green,
But by the noontide’s bricht sunlicht,
They’re scatt’red, an’ the sky’s serene.
The simmer rain, it starts sae fine,
A drap or twa upon the leaf,
But soon it comes doon in a spate,
An’ aa the burns an’ lochs are brief.
Sae heed the skies, an’ tak guid care,
For aa the elements do play,
An’ in the style of Hugh MacDiarmid,
We’ll weather on, come what may.

The send-off is a little rinky-dink, but otherwise … pretty damn good. I zoomed in on “brief”, thinking it might be a bit of computer junk aspiring to be Scots. However, the Dictionars o the Scots Leid tells me a secondary definition of “brief” is “energetic, forcible”. Which really works with the preceding imagery of a “spate” of rain, and is technically a very smart choice for an end-rhyme.

Shudder. Shiver. Have our robot overlords finally arrived, writing serviceable Scots doggerel? Well, it’s quite easy to hit ChatGPT’s limits, just by riffing on your theme.


“Quantum physics explained in the style of MacDiarmid” wasn’t quite so smart (“So let us delve into the depths of space,/And unlock the secrets o’ this quantum race,/For in its style, so rich an’ braw,/Lies the key to understanding aa.” Mibbies naw). And the less said about its version of TS Eliot on the subject, the better. ChatGPT also seems only to understand poetry as something with thumping end rhymes.

But of course, its poetic skills aren’t what’s sending professional bodies running into the hills with their hair on fire. (Though I do note that Google’s forthcoming ChatGPT competitor is branded “Bard”). It’s ChatGPT scoring highly on human exams – so far, a solid pass at the Wharton Business School, a C+ in a course at Minnesota Law School, and a partial pass in the super-tough United States Medical Licensing examinations.


It’s managers realising they can get a research memo in seconds, clearly written and seemingly authoritative, rather than relying on listless Gen Zeds, waiting for their matcha latte to kick in. It’s anyone with a specialist interest in life taking to ChatGPT’s searchbox and realising they have a calm, judicious explainer and interlocutor to hand.

I think that’s the secret sauce of ChatGPT. It gives the impression of being a kindly and permanently available professor, instead of yet another screeching, toxic voice from the overload of the internet.

For the first time in my digital life, I took a rather academic question – “what is the relationship between the philosophy of Heidegger and his early support for Nazism” (yes, this is what counts for a domestic in my house) – and went to ChatGPT first, then to Google Search, for the answer.

The former engaged me in several rounds of exchange, illuminating for me the link between Nazi Aryanism and their (only apparently) ecological policies. By comparison, the old search engine felt leaden: it highlighted a lucid chunk out of a long, meandering article – which quickly disappeared behind a paywall anyway.

So as a user experience, ChatGPT is definitely game-changing. But can we trust its poised prose?

Last October, I had a chat with a virtual Albert Einstein on the platform character.ai – which feeds chatbots with the biographies of famous figures, and invites you to blether with them. I caught the artificial Albert making up history. It claimed that, in 1950, Einstein had refused to work with his colleague David Bohm on the hydrogen bomb. I couldn’t find this in the historical archive anywhere.

Among the techies, this behaviour is described as the software’s tendency to “hallucinate” knowledge. ChatGPT (and its rivals) are LLMs or “large language models”. These work by seeking plausible patterns in the vast bodies of text they’ve been fed. When you ask a question, it seeks a pattern in its archive that can “auto-complete” what you’ve asked.

The problem is that, when it comes to a gap in its knowledge, it tends to fill this in with another plausible pattern – which may not necessarily be true. Fake academic papers have been invented; terrible and contrary advice on health emergencies has been offered. There are also (now longstanding) worries about deep sexist and racist biases in the original texts the AIs draw on, which can be reproduced in their answers.
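The “auto-complete” idea can be sketched, very roughly, with a toy bigram model – a drastic simplification of a real LLM, with every name and the tiny corpus below purely illustrative. It learns which word tends to follow which, then extends a prompt by repeatedly picking the most plausible next word; when it hits a word it has never seen followed by anything, it simply stops (a real model, instead, would guess):

```python
# Toy sketch of pattern-based "auto-complete" (NOT how ChatGPT is
# actually implemented): count which word follows which in a corpus,
# then greedily extend a prompt with the most frequent successor.
from collections import Counter, defaultdict

corpus = ("the winds blow cold and the rain falls and "
          "the sun breaks through the clouds").split()

# For each word, tally how often each successor follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(prompt, length=5):
    """Extend the prompt word by word with the likeliest successor."""
    words = prompt.split()
    for _ in range(length):
        successors = following.get(words[-1])
        if not successors:
            break  # a gap in the model's "knowledge"
        words.append(successors.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))
```

A real LLM does something analogous over billions of parameters rather than a frequency table – which is exactly why, at a gap, it produces a fluent guess rather than a blank.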

Microsoft’s new version of its search engine Bing – which is about to integrate ChatGPT – says it will list, beneath its prose answers, the sources drawn on to compose them. But the trust question is still very tricky. I know that weariness in the face of information overload tempts me to use ChatGPT. That susceptibility is surely exploitable.

[Image: The film I, Robot, based on the short story by Isaac Asimov, predicts a world where artificial intelligence rebels against its human creators]

You can’t say the tech sector isn’t acutely aware of the ethical issues. Indeed, the inside story is that Google had already developed much of what ChatGPT has put in public over the last six months (their version was called LaMDA). But the company had been sitting on it fearfully, worried about how to handle public reaction. Their firing of the engineer Blake Lemoine, who became convinced LaMDA had gained a human-like sentience, is an indicator of Google’s jitters.

In any case, it seems the AI genie is properly out of the bottle in 2023, swirling around and dispensing wishes (both creative and destructive) everywhere. Some of the debates about the way forward are philosophically fascinating.

Never mind these parlour tricks, some say, involving a cute guesstimate on the next paragraph you want to read. When can an AI have the same autonomy and self-determination as a mouse or a cat?

Indeed, some leading neuroscientists (like Mark Solms and Antonio Damasio) are currently involved in experiments to create a “feeling” AI: one that learns how to be conscious by stumbling and exploring, as we mammals do, through its landscape.

Others (like the Google executive Blaise Agüera y Arcas and the philosopher Benjamin Bratton) ask us to stay longer with the “weirdness” of the current models. In their ever-more-convincing simulations of a human response, are these programs illuminating something new about human consciousness itself?

As we go through life trying to “auto-complete” the sentences and feelings of those around us, are we more like these machines than we realise? And shouldn’t we be delighted, even relieved, that AI proceeds fastest when it travels along the line of empathy, attempting to understand the intentions of others?

As an inveterate leftie, I want to see these AIs liberate the toiling masses from routine mental labour. Their replacement of humans should bring us more material security, more free time in our lives, more of a chance to build relationships and grow our idiosyncratic selves.

Of course, I put the paragraph above to ChatGPT, with the added question, “Am I wrong? Short answer please. In Scots language”. Which it then did.

“Naw ye’re no wrang. The uise o AIs hae the potentiall tae free fowk frae repetitive an mundane tasks, giein thaim mair time an resources tae pursue personal interests, relationships, an self-expression.

“Houever, this ootcome is no guaranteed an wid require careful consideratin an plannin tae ensure that the benefits o automation is distributed fairly an that warkers that is displaced by AI is gien adequate support an opportunities for retraining an new employment.” Cursor blink.

Aye, awright then. Let’s dae it, chat-bot.

Sign up for ChatGPT at chat.openai.com