Social media now plays a fundamental role in all our lives, and I think most of us can appreciate the importance of a forum that facilitates freedom of expression. Despite this, my parents, and I’m sure some of yours, dramatically insist that social media is a detriment to all those captured by it. Undoubtedly they are wrong, but there is an element of truth to their complaint. With the rise of social media, the rules of engagement have changed, and a multitude of new issues have arisen and continue to arise. It is these issues, and online (but currently legal) harms, that I am engaging with today, and I will discuss what the future of social media looks like.
Offensive content online is something to be tackled, but how do we do it? What is offensive? When do we tackle such content - after it is posted or before? Our approach, almost universally, has been a notice-and-take-down system. When users see offensive content they can flag it, and if the social media company in question deems it offensive, it is removed. This system has some problems.
First of all, I question the decision-maker. I am not saying that Facebook knows nothing about what is or is not offensive, but are we comfortable with a private company controlling the flow of information and opinions? You might think I am overstating the issue, but consider this. It is easy where the content in question is obviously offensive and harmful - discriminatory speech, explicit images and abuse (which may fall short of being criminal content) being prime examples. But what about harder cases?
Think about posts expressing controversial and suspect political views. Could and should such information be the target of censorship? Even if you think it should, are social media companies the right people to do it? In fact, this issue has been identified by the UK’s White Paper on Online Harms, which was a direct response to accusations of political bias levelled at Twitter and Facebook. Censorship of this kind poses a huge threat to democratic societies and to freedom of expression more generally.
These problems exist outside of political speech too. Some people’s stories about weight loss may be a source of body-image issues, leading to significant psychological harm. Is this content now going to be controlled because it is capable of causing, and admittedly does cause, harm? Again, the issue here is not whether such psychological harm is a problem (it obviously is); the issue is whether social media companies are the right people to solve it. With the threat of too much censorship comes the risk of stifling freedom of expression.
Though the traditional approach has been to encourage notice and take-down, the UK’s recent proposals recommend that social media companies owe their users a duty of care to prevent harm. This is designed, to some extent, to stop offensive content in its tracks before it is posted, moving away from purely reactive approaches to harmful content. For illegal content, such as indecent images of children, this shift is welcome. For content that is legal but still harmful, the system exhibits many of the dangers discussed above. We still seem to be placing power in the hands of companies like Facebook to regulate content, despite them lacking democratic legitimacy and, to some extent, the competence to deal with such issues.
Users, legislators and regulators must tread carefully. Social media has opened up millions of opportunities, and many people’s livelihoods depend on it, but we must also face the plethora of problems it has thrown at us. We want to get rid of illegal and uncontroversially harmful content, but I am concerned about where we draw the line. Perhaps a few decades ago, content discussing homosexuality would have been targeted as offensive. Though there have been improvements as the decades have rolled by, a threat to freedom of expression persists. Any move in this area must be calculated and must come to grips with the threat of too much censorship.