Artificial intelligence (AI) technologies can present ethical challenges for businesses involved in developing them, and new measures have been set out to minimise the risk of ethical lapses.

In a briefing paper, the Institute of Business Ethics (IBE) said the real issue lay in the use businesses made of AI, which should never undermine human ethical values.

Under the moniker ARTIFICIAL (Accuracy, Respect of privacy, Transparency, Interpretability, Fairness, Integrity, Control, Impact, Accountability and Learning), the IBE has worked with various organisations and technology experts to draw up a framework.

IBE director Philippa Foster Back described it as “a starter for 10 to get the discussion going”.

She said: “The topic of AI – its applications and ethical implications for business – is broad and requires a complex multi-stakeholder approach to be tackled. The IBE ARTIFICIAL Framework will provide an ethical foundation for these discussions going forward.

“The IBE suggests all organisations should consider their commitment to these values in the development of AI technologies, and in considering their future impact.”

She said it was essential that companies knew the risks, impacts and side effects new technologies might have on businesses and stakeholders, because people can get so caught up in the technology that they overlook safeguards that have to be put in place.

“The people who make the decisions and are responsible in the business sense for the outcomes of how that AI is going to be used might not have a deep enough understanding of what’s actually deep in the works,” she said. “But at the end of the day they will be responsible, and if there’s a failure their reputations or the company’s reputation will be affected.”

She said the IBE Framework set out the aspects people should consider, along with how companies should approach them.

“We need to get a much wider level of discussion happening and particularly within companies. It’s too easy for technology to get ahead of the thinking.”

Professor Leslie Smith from the University of Stirling, a research theme co-leader with the Scottish Informatics and Computer Science Alliance, highlighted one of the issues raised.

He said: “If you are a small company and you outsource the design of some system to make decisions for you to some software or AI company and they produce it and you deploy it, and then it does things you find unethical, who’s to blame? Is it you, the company that built it, or was it the specification you gave? There are difficult ethical questions there.”

Smith said firms such as Amazon, Spotify and Google all used systems that drew on your past choices to influence new ones: “They’re quite careful in the ethical sense. They don’t want to say to you ‘OK, you’ve been looking at this book or that book, maybe you’d like this’ – a large volume of pornography, for example, because that might be unethical or inappropriate.”

He added that there were big challenges ahead: “There are lots of issues ... and it’s good to see an organisation like the IBE starting the discussion.”