Everyone, everywhere: On addressing the dangers of the internet
Regulations are a part of everyday life. From food safety in the supermarket to road safety on the highway, fit-for-purpose regulations help keep us safe and improve our well-being. Yet there is one critical part of everyday life that has perhaps “flown under the radar” when it comes to oversight and regulation: the internet.
Chaired by Marilyn Little (Deputy Chief Executive – Regulation and Policy, Department of Internal Affairs), the June Leaders Integrity Forum focused on the “why and how” of addressing dangers on the internet. Marilyn pointed out that combating the rising tide of internet dangers – including harassment and bullying, radicalisation, extremism, and misinformation – is an increasingly pressing task.
However, while there are speed limits on the roads of the “offline world”, the “online world” is more of a free-for-all. We were taught from a young age how to cross the road, but it is far less clear how well young people are being taught to navigate the internet safely. Marilyn observed that policymakers are largely working with a patchwork of laws in this area, many of them created before the advent of the internet and the World Wide Web.
Brent Carey (CEO, Netsafe) began his presentation by reminding us of the sheer scale of the challenge we face in combating online dangers. Artificial intelligence, fake news, hoaxes, and scams are on the rise, and “deepfakes” have become more sophisticated and prevalent. The rise of social media is blurring the boundary between our public and private lives: what was once private information can now easily leak into the public domain.
With the advent of bots and “fake news”, the blurring of public and private life, and terrorists using the internet to spread extremist beliefs, Brent observed that many hold a doom-and-gloom view of the internet’s future. The growth of misinformation and radicalisation can seriously erode the foundations of democracy, which rests on openness, transparency, and trust.
There are no easy solutions. Part of the challenge, Brent argued, is balancing the freedom of online self-expression with the need to collectively address the internet’s potential harms. Part of the answer lies in self-regulation and building an awareness of how to keep ourselves safe. Educating our communities on internet safety is critical, and on this front Brent pointed to Netsafe’s safety campaigns, community engagement, and suite of educational materials.
Ultimately, the proliferation of the internet means its challenges cannot be tackled by a single individual or country. New Zealand’s small size and limited reach mean we must work with other jurisdictions and major industry players to make a significant impact.
Rupert Ablett-Hampson (Acting Chief Censor, Classification Office[1]) outlined the delicate balancing act required in his role, which is to determine what should – or should not – be considered objectionable content. The principles of independence, transparency, and fairness are core to balanced censorship.
Rupert’s presentation focused on the most extreme dangers of the internet – namely, radicalisation and the spread of terrorism. Work to combat extremist online content accelerated after the March 2019 Christchurch mosque attacks. Initiatives included the Christchurch Call and a global effort by social media companies to better monitor and moderate extremist content. Recent terrorist attacks by radicalised extremists, and the issue of online misinformation more broadly, have renewed debates on the “why and how” of censorship.
However, radicalisation cannot be stopped by censorship and the removal of online material alone. As Rupert noted, one extremist event tends to inspire others, even years later. For example, the terrorist who committed the March 2019 Christchurch mosque attacks referenced the Norwegian terrorist responsible for the 2011 attacks in Norway.
Radicalisation, explained Rupert, is often the result of gradual exposure rather than a sudden change of heart. In what is known as the “Rabbit Hole Effect”, many radicalised individuals are first exposed to extremist content on mainstream social media platforms such as Facebook and Twitter. As an unintended consequence, the platforms’ recommendation algorithms then feed misinformation to these individuals in a self-reinforcing spiral. Gradually, they descend into the “Dark Net” and anonymous internet groups that have become hubs of hate speech and misinformation.
Given the enormous role social media plays in spreading misinformation, much more could be done to combat misinformation and extremist content. For example, according to the Centre for Countering Digital Hate, a British advocacy group, 89 per cent of posts containing anti-Muslim hatred and Islamophobic content reported to major social media companies (including Facebook, Instagram, TikTok, Twitter, and YouTube) were not acted on.[2]
Ultimately, said Rupert, censorship alone cannot solve this challenge. Beyond the practical difficulty of moderating the massive volume of information posted online daily, there are legal issues in walking the line between censorship and free speech. Moreover, such an approach places accountability solely on government to regulate, monitor, and censor harmful content.
In place of censorship – or any single “silver bullet” for combating misinformation and harms on the internet – Rupert argued that an integrated, strategic approach is required, one in which social media companies, communities, and government work together. There should be a particular focus on engaging with our rangatahi (young people), who, given their age and early exposure to the internet, may be especially receptive to extremist and harmful material online.
As I reflected on the speakers’ insights, perhaps the most important theme to emerge was the critical need for collaboration in addressing the dangers of the internet. In the words of the Christchurch Call, there is a “need for action and enhanced co-operation among the wide range of actors with influence over this issue, including governments, civil society and online service providers, such as social media companies, to eliminate terrorist and violent extremist content online”.[3] Like other Forum attendees, I take comfort in the impressive work our two speakers and their organisations have done in this area, and will continue to do in the months and years to come.
[1] Rupert Ablett-Hampson was Acting Chief Censor at the time of the event. He is now Deputy Chief Censor, with Caroline Flora as Chief Censor.
[2] https://counterhate.com/research/anti-muslim-hate/
[3] https://www.christchurchcall.com/about/christchurch-call-text/