A DEAD child. A devastated family. A shell-shocked community. A poisonous post. A Twitter storm. A brace of outraged headlines.

If it seems too jaw-droppingly awful to be true, that’s kind of the point – jaw-droppingly awful is the new bar that must be met to ensure wall-to-wall media coverage, to guarantee the distinction of “trending”, and to grab a share of the attention that’s being focused on an unthinkable crime.

With a tug of the puppet strings, outrage erupts. Behaviour that would put anyone else at risk of disciplinary action at work can now be the basis of an entire career, and every one of us who has ever uttered a name that sounds like Hatie Plopkins – even if it’s accompanied by a string of expletives – is partly responsible.

Each time it seems as though this media-created monster has crossed the line, she goes further still. Does she even mean what she says any more? Did she ever? And does that matter when the genie is out of the bottle, the lies and slurs are out there, and there are plenty of frothing, witless right-wingers reading them who will nod along without stopping to check the facts?

The solution might seem simple: don’t feed the trolls. Don’t retweet, don’t quote, don’t dignify any stream of invective or package of fake news with any sort of response (and perhaps don’t write a newspaper column about any of it either).

But does ignoring trolls actually have the desired effect?

The answer rather depends on whether we can agree on the desired effect, or indeed what constitutes a troll. It’s assumed that online trouble-makers thrive on the attention their provocative postings generate – indeed, according to many definitions that is the entire point of trolling – and it therefore seems logical to starve them of the oxygen they crave.

In 2016, psychologists from the Federation University Australia published research into the personality traits associated with trolling behaviour. Perhaps unsurprisingly, they found a correlation with antisocial traits such as psychopathy and sadism. But they found a much stronger association with what’s termed “negative social potency”. Professor Evita March wrote: “What really influences trolling behaviour is the social pleasure derived from knowing that others are annoyed by it. The more negative social impact the troll has, the more their behaviour is reinforced.”

Her advice, therefore, is straightforward: ignore them and they’ll stop.

But if someone with a large, ready-made audience is posting false, harmful and downright dangerous content, don’t we have a duty to do something, rather than nothing?

Police Scotland have not responded to specific examples of innuendo about Alesha MacPhail’s murder, but they have made the pointed comment that “social media speculation relating to members of the community is both misleading and inaccurate”. Argyll and Bute MSP Mike Russell condemned the spreading of an “awful, divisive, hate-filled lie which is very painful to those already suffering”.

I should perhaps explain, for those who missed it, what a woman whose name sounds like Weighty Slopkins actually wrote. The difficulty is that to do so would play into her hands. To detail the many ways in which her claims are false and her insinuations nonsensical would take up the rest of this column, so instead I’d invite you to check out the forensic debunking on investigative journalism platform The Ferret.

Does debunking constitute “feeding”, even though it exposes claims as false? It certainly demonstrates that someone was annoyed enough to devote time and energy to setting the record straight.

Professor Susan Benesch, founder of the Dangerous Speech Project, isn’t convinced by the “don’t feed the trolls” mantra, citing a lack of evidence that it works, a lack of attention to the influence of social norms on views and behaviour, and a failure to recognise the significant impact of hate speech on those who see or hear it. She argues that not everyone who posts bile online is an irredeemable troll, and that sometimes the deployment of “counter-speech” can inspire retractions and apologies.

She points to evidence from the world of online gaming – a notoriously hate-filled environment – that suggests even the smallest of measures, such as the insertion of messages promoting civil behaviour or even changing font colours, can significantly reduce “toxic” speech among players. Significantly, research by the creators of the hit game League of Legends found that in most cases it took only one sanction to nip peer-reported hateful behaviour in the bud.

While there are some lost causes who will never change their views or behaviour – especially when there are financial disincentives to doing so – Benesch believes most people are in the “malleable middle” and tend to shift their behaviours in response to what they perceive to be community norms. Shift the norms and you can change people’s views. Lead by example, and there’s a chance you can help make the world a slightly less hateful place.