THE figures are startling. Almost a quarter of a million women didn’t receive letters inviting them for breast screening. Nine years in which no-one noticed something had gone wrong. Hundreds of lives potentially cut short due to cancer going undetected.

It didn’t happen here. Public Health England uses a different IT system from the NHS in Scotland, and the only Scottish women affected by this massive systems failure are those who were living in England when they should have had their final screening, aged between 68 and 71.

But this error – the magnitude of it, the duration of it, and the devastating impact it will have had on so many families – should serve as a wake-up call for anyone working with technology, and in particular any field in which algorithms are replacing agency.

English Health Secretary Jeremy Hunt has blamed a “computer algorithm failure” for the screening scandal, adding that “for many years oversight of our screening programme has not been good enough”. But what exactly does “computer algorithm failure” mean? Computer programs are only as reliable and trustworthy as the people who write the code. If a computer says no, it’s because a human being has told it to say no when certain conditions are met.
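To make that concrete, here is a deliberately simple Python sketch, not the NHS system’s actual code and with an invented cut-off age, of the kind of human-written rule that decides whether an invitation letter is sent. A single misplaced comparison at the boundary is all it takes for women to be skipped silently.

```python
# Purely illustrative: not the real screening system's logic.
# One human-written condition decides whether a letter goes out.

FINAL_INVITATION_AGE = 70  # hypothetical cut-off, for illustration only

def should_invite(age: int) -> bool:
    """Return True if a woman of this age is due a screening invitation."""
    # If the programmer writes "<" where "<=" was intended, everyone at the
    # boundary age is silently skipped: the computer only says "no" because
    # a person told it to say no when this condition is met.
    return age <= FINAL_INVITATION_AGE

print(should_invite(70))  # True with "<=", but False with the buggy "<"
```

Nothing about such a rule announces itself as broken; it simply runs, year after year, until someone thinks to check its output against reality.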

Clearly, the situation in England is a case of an algorithm going wrong. But we should also be seriously concerned about some of the times when algorithms go “right”; when the consequences are not life-or-death but there is still potential for huge harm at both an individual and a societal level. With these tools now used in recruitment, medical diagnostics and even criminal sentencing, the only way to avoid them altogether is to move to a cabin in the woods.

One of the biggest obstacles to addressing concerns about algorithms is the lack of understanding of what they actually are and how they work. When Westminster’s Science and Technology Committee held an inquiry into the use of algorithms in public and business decision-making last year, Google very helpfully provided some explanations. “In the broadest sense, an algorithm is simply a set of instructions,” the company said in its evidence submission. “Cooking recipes, instructions on how to play a game, and walking directions are all everyday illustrations of what could be called algorithms.”

So far, so benign. Who doesn’t enjoy home cooking, or playing games, or going for a nice walk? But a key difference between these instructions and those powering search engines is that we can follow the steps ourselves. There’s no “black box” of mysterious code to prevent us from knowing whether our spaghetti Bolognese contains beef or horse. But Google says we don’t need to worry our pretty little heads about what goes on behind the wizard’s curtain, pointing out that “many technologies in society operate as ‘black boxes’ to the user – microwaves, automobiles, and lights – and are largely trusted and relied upon without the need for the intricacies of these devices to be understood by the user”.

Not everyone agrees that Google’s fiercely guarded algorithms are as innocuous as the wiring of a microwave. In her book Algorithms of Oppression, Safiya Umoja Noble blasts apart the notion that searching for information online is a politically and morally neutral activity, and argues that search engines reinforce racism and sexism.

In the wonderfully titled Weapons of Math Destruction, data scientist Cathy O’Neil provides an insider’s perspective on how algorithms threaten democracy. The title refers to mathematical models created by fallible human beings – including those with the best of intentions – that end up causing harm. Working in the field of “big data” in the aftermath of the global financial crisis, O’Neil began to question the contribution of her beloved mathematics to the world’s problems. “Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists,” she writes. “Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed ... while making the rich richer.”

It’s clear Google will strongly resist any efforts by governments to wrench open its black boxes – after all, its algorithms are the secret recipe on which its whole business is based. But where algorithms are used in the public sector, transparency must come as standard and – crucially – those relying on them must have a thorough understanding of how they work. This means being alive to unintended consequences, such as a recruitment algorithm favouring white, middle-class men simply because that demographic has dominated the sector in the past.
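For readers who want to see that mechanism laid bare, here is a deliberately crude Python sketch; the dataset and the “school” feature are invented for illustration. A scorer built only from historical hiring decisions reproduces the historical skew without anyone ever writing a prejudiced rule.

```python
# Illustrative only: a toy "recruitment" scorer trained on past hiring decisions.
# No rule about gender, race or class is written down, yet the model ends up
# rewarding whatever features happen to correlate with previous hires.

past_applicants = [
    {"school": "private", "hired": True},
    {"school": "private", "hired": True},
    {"school": "private", "hired": True},
    {"school": "state", "hired": False},
    {"school": "state", "hired": False},
]

# "Training" step: the historical hire rate for each background.
hire_rate = {}
for school in {a["school"] for a in past_applicants}:
    group = [a for a in past_applicants if a["school"] == school]
    hire_rate[school] = sum(a["hired"] for a in group) / len(group)

def score(applicant: dict) -> float:
    """Rank a new applicant by how closely they resemble past successful hires."""
    return hire_rate.get(applicant["school"], 0.0)

print(score({"school": "private"}))  # 1.0: favoured
print(score({"school": "state"}))    # 0.0: penalised purely by history
```

The past becomes the model, and the model quietly becomes policy.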

The advance of technology brings many benefits, including cost savings that allow greater investment in human-to-human roles that can never be fully automated, such as those of nurses, carers and teachers. But it does not mean we can simply switch off our brains. Computers are not sexist, racist or otherwise prejudiced, but the flawed people who program them may well be, even if they don’t realise it.

Ultimately, a computer doesn’t care whether a woman is invited for a breast screening or not, whereas we do. So we must make sure we are controlling the machines, rather than the other way round.