REGULATION from artificial intelligence (AI) developers “cannot be the only step” taken to make sure the technology is safe in Scotland, according to an expert.

Four of the major forces in AI development – Google, Microsoft, OpenAI and Anthropic – launched the Frontier Model Forum on Wednesday, an industry body intended to oversee the safe development of the technology.

Membership of the body will require organisations to be working on the most advanced AI technology, defined as "frontier" models, with the goal of mitigating the dangers they might pose.

Dr Anil Fernando, a professor of video coding and communications at Strathclyde University whose work involves researching AI and machine learning, said we need to take a "holistic approach" to regulating the technology in Scotland.

He said: “The Frontier Model Forum can be taken as a positive step forward, but it cannot be taken as the only step forward – its effectiveness will depend on how well it complements government regulations and how seriously the member companies commit to its objectives.

“A holistic approach to AI regulation that combines industry collaboration, government involvement and public engagement will be essential for achieving comprehensive AI safety.”

He said that left without proper regulation, AI “can be highly dangerous”.

He continued: “AI systems have the potential to be biased in decision making, invade privacy, or cause harm to individuals and society.

“The big tech companies are driven by market competition and profit. Hence, self-regulation cannot be taken as a replacement for government action. It could lead to overlooking potential risks to gain a competitive advantage.”

The founding companies of the forum have stated that collaborating with governments and policymakers is one of their four main aims, as well as advancing the safety of AI research, identifying best practices and helping to develop applications to fight humanity’s greatest challenges – such as climate change and cancers.

Speaking on the role of the Scottish Government in regulating AI, Innovation Minister Richard Lochhead said: “The regulation of AI is reserved to the UK Government, which has set out a non-statutory regulatory approach, in contrast to the EU’s much more ambitious EU AI Act.”

The EU AI Act is one of the most substantial attempts to regulate AI, establishing four different risk classifications for the technology, with the most severe being “unacceptable risk”. This would include, for example, models which can manipulate people or those which rate members of society based on their behaviour or socio-economic status.

AI systems falling into this category would be banned outright by the legislation.

The UK Government released its A Pro-Innovation Approach To AI Regulation white paper on March 29 this year, establishing its short- and long-term strategy around AI.

It has committed to working with industry and regulators, establishing partnerships and setting out a regulatory framework, but has not committed to any legislative proposals.

Lochhead added: “We recognise the transformational potential of AI, as well as opportunities and risks it could bring to the Scottish economy and society. The Scottish Government is working with the Scottish AI Alliance to take targeted actions, within the limits of devolved powers, to make Scotland a leader in the development and use of trustworthy, ethical and inclusive AI.

“We have also published a Digital Economy Skills Action Plan, to ensure our workforce has the skills to deliver economic prosperity for all of Scotland.”

The Scottish AI Alliance, a partnership between AI and data science hub The Data Lab and the Scottish Government, was launched in 2021 and “tasked with the delivery of Scotland’s AI strategy”, according to its website.

On Tuesday, the group launched a Communities Call Out, encouraging Scottish community groups, networks, charities and other organisations to voice their thoughts and concerns regarding AI in Scotland – with the intent to discover and consider how it might impact individuals’ lives.

It comes after the White House held talks last week with these top AI organisations, as well as other technology companies, and secured commitments to introduce a number of safeguards into their technology.

These included introducing watermarks on AI-generated content to make it clearly identifiable as “fake”, prioritising research on the risks of AI and sharing information with the US Government, among other commitments.

Fernando said that watermarking AI-generated content was a “step in the right direction”, but that it alone was not sufficient to address the multitude of risks posed by AI, such as “data privacy protection, intellectual property rights and adherence to ethical guidelines and societal norms”.

He pointed to six key areas where action should be taken at the governmental level to minimise the growing risks of AI technology:

• Establish independent advisory bodies to monitor the development of AI technologies, assess their impact on society and provide guidance on responsible AI practices.

• Create industry standards and certification schemes to ensure AI systems meet safety, fairness and reliability criteria.

• Reinforce data protection laws to safeguard individual rights.

• Allocate funds for research and development focused on addressing the issues and risks associated with AI technology.

• Promote public awareness and education on AI to support better understanding and informed decision-making in relation to the technology.

• Pursue international collaboration to establish common ground for addressing the cross-border challenges of AI.

A UK Government spokesperson said: “As set out in our AI regulation white paper, our approach to regulation is proportionate and adaptable, allowing us to manage the risks posed by AI whilst harnessing the enormous benefits. 

“Our approach relies on collaboration between Government, regulators, and business.

"Additionally, the AI taskforce has also been equipped with an initial investment of £100 million to manage the safe development and deployment of AI. 

“Further to that, the UK will host the first major global summit on AI safety this autumn."