By Kevin Veale
The “Christchurch Call” summit has made real progress, with tech companies and world leaders signing an agreement to eliminate terrorist and violent extremist content online. The question now is how we collectively follow up on its promise.
The summit in Paris began with the statement that the white supremacist terrorist attack in Christchurch two months ago was “unprecedented”. But one benefit of holding this conversation so prominently is that it draws attention to the fact that this was not the first time social media platforms have been implicated in terrorism.
It was merely the first time a terrorist attack in a Western country was broadcast via the internet. Facebook played a significant role in the genocide of Rohingya Muslims in Myanmar, as covered in the Frontline documentary “The Facebook Dilemma”, and a study has demonstrated a link between Facebook use and violence against refugees in Germany.
Better-than-expected outcome
I hope attention now turns to the fact that social media platforms profit both from indifference to harassment and from harassment itself. Dealing with these problems falls squarely within the realm of corporate responsibility, yet the companies have done nothing in the past to remedy their contributions to harassment campaigns.
Online communities whose primary purpose is to terrorise the people they target have existed for many years, and social media companies have ignored them. Anita Sarkeesian was targeted by a harassment campaign in 2012 after drawing attention to the problems with how women are represented in videogames. She chronicled the abuse she received on Twitter in just one week during 2015 (content warning: it includes threats of murder and rape). Twitter did nothing.
When the summit began, I hoped that pressure from governments and the threat of regulation would prompt some movement from social media companies, but I wasn’t optimistic. I expected them to claim that algorithmic solutions would magically fix everything without human oversight, even though such systems can be, and are, gamed by bad actors.
I also thought the discussion might turn to removing anonymity from social media services or the internet, despite the evidence that many people are comfortable abusing others under their own names. Mainly, I expected some general, positive-sounding statements from tech companies about how seriously they were taking the summit, without many concrete details about their plans.
I’m pleased to be wrong. The discussion has already raised specific and vital elements. The New Zealand Herald reports that:
… tech companies have pledged to review their business models and take action to stop users being funnelled into extremist online rabbit holes that could lead to radicalisation. That includes sharing the effects of their commercially sensitive algorithms to develop effective ways to redirect users away from dark, single narratives.
Algorithms for profit
The underlying business model of social media platforms has been part of the problem with abuse and harassment on their services. A great deal of evidence suggests that algorithms designed in pursuit of profit are also fuelling radicalisation towards white supremacy. Rebecca Lewis highlights that YouTube’s business model is fundamental to the ways the platform pushes people towards more extreme content.
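Lewis’s findings describe a feedback loop rather than a single decision, and a deliberately toy sketch can make that loop concrete. The Python below is my illustration, not YouTube’s system, which is proprietary: the catalogue, the “extremity” scores, and the assumption that engagement peaks with content slightly beyond a viewer’s current tastes are all invented for the example. Under those assumptions, a recommender that optimises engagement alone ratchets its suggestions toward ever more extreme content.

```python
import random

# Toy illustration only: nothing here reflects any real platform's
# recommender. The catalogue, "extremity" scores, and engagement
# model are invented assumptions for this sketch.

# Hypothetical catalogue: items spaced along an extremity scale, 0.0-1.0.
CATALOGUE = [round(i * 0.05, 2) for i in range(21)]

def predicted_engagement(user_level: float, item: float) -> float:
    """Assumed model: users engage most with content slightly MORE
    extreme than what they already watch (the 'rabbit hole' premise)."""
    return -abs(item - (user_level + 0.1)) + random.uniform(-0.01, 0.01)

def recommend(user_level: float) -> float:
    """Rank purely by predicted engagement; there is no competing
    objective (safety, diversity) to pull recommendations back."""
    return max(CATALOGUE, key=lambda item: predicted_engagement(user_level, item))

user_level = 0.0  # a viewer who starts with entirely mainstream tastes
for step in range(8):
    pick = recommend(user_level)
    user_level = pick  # watching the recommendation shifts the baseline
    print(f"step {step}: recommended extremity {pick:.2f}")
# The recommended extremity ratchets upward (0.10, 0.20, ... 0.80)
# because engagement is the only quantity being optimised.
```

The point of the sketch is not the numbers but the structure: with a single profit-aligned objective and no countervailing one, drift toward the extreme is a property of the design, not a glitch.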
I never expected the discussions to get so specific that tech companies would explicitly put their business models on the table. That is promising, but the issue will be what happens next. Super Fund chief executive Matt Whineray has said that an international group of 55 investment funds, worth a combined US$3.3 trillion, will put their financial muscle behind following up these initiatives and ensuring accountability. My question is how solutions and progress are going to be defined.
Social media companies have committed to greater public transparency about how they set community standards, particularly around how people who upload terrorist content will be handled. But this commitment in the Christchurch Call agreement doesn’t carry through to discussions of algorithms and business models.
Are social media companies going to make their recommendation algorithms open source and allow scrutiny of their behaviour? That seems very unlikely, given how fundamental those algorithms are to each company’s business model; they are likely to be treated as vital corporate property. Without that kind of openness, it’s not clear how the investor group will judge whether any progress towards accountability is being made.
While the Christchurch Call has made concrete progress, it is important that we collectively keep up the pressure and ensure this rare opportunity for important systemic change doesn’t fall by the wayside. That means pursuing transparent accountability through whatever means we can, and not losing sight of fundamental problems like the underlying business model of social media companies.
One example of a specific step would be more widespread adoption of best ethical practice for covering extremist content in the news. There is evidence that not naming the perpetrator makes a difference, and the guidelines New Zealand media adopted for the coverage of the trial are another step in the right direction. A recent article from authors investigating the impact of digital media on democracy in New Zealand also points out concrete steps.
The Christchurch Call has made excellent progress as a first step to change, but we need to take this opportunity to push for systemic change in what has been a serious, long-term problem.
Kevin Veale, Lecturer in Media Studies, Massey University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Bonus video added by Informed Comment:
CBC: “‘Christchurch Call’ brings governments and tech giants together to curb extremism”