
Aim small, miss all: why the Online Safety Bill isn’t hitting the target



The Bill’s remit must be narrowed if it is to have a profound, long-lasting impact. Otherwise it risks helping no one.

Culture Secretary Nadine Dorries pledged to make the UK “the safest place in the world to go online” when she spoke at the Renew Conference in London last month, advocating that regulation of the digital world should mimic that of traditional media, such as television.

No one can deny that hate and toxicity have found a comfortable home online. Ofcom research previously uncovered that almost half of adult Internet users in the UK have first-hand experience of online harm.

The government’s most recent solution to this comes in the form of the Online Safety Bill. The first draft – released in May 2021 – was designed to be a statement of intent. Unfortunately, it barely registered as a whisper.

Too many voices have tried to help shape the Bill. Some harbour concerns that freedom of speech will be stifled; others are firm believers in additional refining. The result of these divided opinions is a Bill that misses the mark entirely.

Criticism has come from all sides, with even other parliamentary bodies taking aim. Comments from the Digital, Culture, Media and Sport (DCMS) Committee were especially damning, highlighting a lack of robustness.

The Bill’s remit must be narrowed if it is to have a profound, long-lasting impact. Objectives must be clearly stated and set in stone. Otherwise, despite good intentions, it risks helping no one.

Filling in the (loop)holes

Almost one in five children in the UK have experienced cyberbullying. The number of deepfake pornography videos appearing online doubles every six months.

Harmful content is being published consistently without consequence, and ‘keyboard warriors’ – hiding anonymously behind their screens – are the primary culprits.

Imagine a world where car manufacturers decided the speed limit – or soft drinks manufacturers had the ability to control the sugar tax. This is where we are now in expecting social media sites to self-regulate.

Profits cannot be pitted against safety. Social platforms must shoulder greater responsibility. This is where the Online Safety Bill comes in: giving Ofcom additional powers to impose fines and hold those who fail to enforce duty of care accountable.

However, this alone is not enough to keep consumers safe. Another major loophole that must be addressed is what constitutes ‘illegal’. For too long, the term ‘legal but harmful’ has provided platforms publishing user-generated content with a Get Out Of Jail Free card.

If alterations aren’t made, ‘breadcrumbing’ – a tactic employed by abusers looking to share child abuse-related content – will continue without any sanctions. What’s more, the social giants will not face any repercussions, as they are allowed to turn a blind eye courtesy of the ‘legal but harmful’ label.

Having the Online Safety Bill’s goals explicitly stated can help to put an end to this trend. Providing a clear outline of what falls under the umbrella of ‘harmful content’ – and the punishments accompanying these crimes – will offer much-needed clarity, as well as act as a deterrent.

Algorithms – help or hindrance?

The Online Safety Bill is a work in progress – it doesn’t fail in every department.

The ‘safety by design’ principles – a code of practice being enforced by Ofcom – force social platforms to revise how content is shared and spread. This helps stop vulnerable users from falling down a rabbit hole full of harmful content.

Algorithms are instrumental in this regard. Consumers are crying out for improved digital experiences – 42% acknowledge frustration due to a lack of personalised content – and algorithms can help achieve this. If they are to be leveraged effectively, a balance must be struck.

Often used to identify and eliminate harmful content, algorithms have to be considerate of the subtleties of human language. Context is incredibly important – it can be the difference between a joke and a hateful comment. Should algorithms be too strict, a side effect could be the suppression of positive posts.

A successful Online Safety Bill would work to fuel positive content. Driving greater engagement – from a metrics perspective – while stamping out harmful content is vital to preventing impressionable consumers from being subjected to torrid online experiences.

The winding road to change

Dorries – as Culture Secretary – needs her finger on the pulse to diagnose the problems plaguing the digital landscape.

The Online Safety Bill is a step in the right direction – but it is far from the finished article. Campaigners need to prioritise keeping the number of voices attempting to influence the Bill’s structure to a minimum, lest it fall short of expectations.

If the loopholes that have been identified in the Bill’s first draft are allowed to remain, social platforms will continue to reel off excuses as to why they fail to address harmful content. The strategy of appeasement has to be left behind if the status quo is going to be shifted.

Nat Poulter is co-CEO of social-first publisher Jungle Creations
