
Social Media and Hate Speech

Rab Nawaz

Social media platforms, including Facebook, Twitter, YouTube and others, have become a significant part of our individual and collective lives. But with this has also come the prevalence of hate speech. It is undeniable that the on-screen has an impact on the off-screen, and vice versa. What was previously unachievable by dozens of people in weeks is now within the reach of a single person within hours. Of all the pitfalls of social media, the murkiest is its potential to spread hate speech and consequently affect the lives of millions. The fault line lies in the ill-defined nature and boundaries of both social media and hate speech.

Content on social media differs from traditional media in many respects, including quality, reach, frequency, accessibility, usability, immediacy and permanence. But the most important aspect distinguishing social media from other media is the freedom users enjoy in generating and consuming content. Unlike traditional media, there is no intermediary, gate-keeping or editorial filter. The absence of such filters, coupled with instant, global outreach, gives users an unrestricted, discretionary means of spreading messages of their choice.

Mass media today rests on the premise of free speech. While in some cases hate speech is quite obvious, a thin line separates free speech from hate speech. Hate speech, as defined by the Council of Europe, means any communication that denigrates a particular person or group on the basis of race, color, ethnicity, gender, disability, sexual orientation, nationality, religion, or other characteristic. It can take the form of speech, gesture, conduct, writing, or display, and usually marks incitement, violence or prejudice against an individual or a group. As a result, it generates stigmas, stereotypes, prejudices and discriminatory practices against those who are constructed as being different. Most countries have laws against hate speech, though the vocabulary used varies to some degree.

As the role of social media is construed largely in terms of expanding the boundaries of free speech through greater personal freedom and access to information, the problem of hate speech on social media becomes complicated. It is worth asking what standards and filters, if any, social media platforms use to balance greater access to information against the spread of hate speech. To get a general sense of the problem at hand, we can look at the three most common social media platforms: Facebook, Twitter and YouTube.

On Facebook, the primary editorial filter is the set of rules and restrictions laid out in Facebook's Statement of Rights and Responsibilities. Every user accepts these terms by virtue of signing up for an account on the site.

But ensuring that Facebook's community of more than 900 million users abides by the company's user policies is a task that requires hundreds of employees. In reality, there are not enough people to check the content and ensure the implementation of these policies.

With regard to hate speech, the Statement of Rights and Responsibilities states: "Facebook does not permit hate speech. While we encourage you to challenge ideas, institutions, events and practices, it is a serious violation to attack a person based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition." Facebook seldom takes action on its own; mostly it is users who report accounts, pages, comments or other content they deem to be in violation of Facebook policy. After receiving a report, Facebook reviews the content and either removes it from the site if it is found abusive, or sends an email to the person who reported it stating that, in Facebook's view, the content in question does not constitute hate speech. The editors who review these reports are based in various locations, including California, Austin, Dublin and Hyderabad, and they field user reports of inappropriate posts around the clock. Sometimes content on Facebook violates not just the company's policies but also the law of the land. Whenever something is found to be against the law but not against Facebook policies, Facebook is at liberty to share it with the relevant law enforcement agencies. Replying to a critical question about allowing hate speech on Facebook, the company's official statement read: "We take our Statement of Rights and Responsibilities very seriously and react quickly to remove reported content that violates our policies. In general, attempts at humor, even disgusting and distasteful ones, do not violate our policies. When real threats or statements of hate are made, however, we will remove them. We encourage people to report anything they feel violates our policies using the report links located throughout the site."

Twitter, on the other hand, seems to place far more emphasis on free speech. Unlike other social media networks, including Facebook and Tumblr, Twitter's terms of service do not cover hate speech. However, according to a report in the Financial Times, Twitter is wrestling with how to minimize hateful comments while preserving free speech. Dick Costolo, Twitter's chief executive, said he was frustrated by the challenge of tackling "horrifying" abuse while maintaining the company's mantra that "tweets must flow". Twitter, self-described as "the free speech wing of the free speech party", has largely resisted any restrictions on content by either governments or citizen groups.

YouTube is another very popular social media platform. Hundreds of thousands of videos are uploaded to YouTube every day. Since it is not possible to pre-screen this much content, YouTube follows a community policing system that involves users in helping to enforce YouTube's standards. Millions of users report potential violations of the Community Guidelines by selecting the "Flag" link while watching videos.

YouTube's Community Guidelines prohibit hate speech and the promotion of violence against others, and even the depiction of 'gratuitous violence'. In practice, however, YouTube is capricious and arbitrary about which content it will take down on the basis of hate speech.

The content on social media, including hate speech, reflects ground realities. Consider that, according to a Pew Research survey, around 50 per cent of Pakistanis consider Shias to be non-Muslims. It is therefore not surprising that a great deal of anti-Shia sentiment is found on social media. Searches on YouTube return dozens of videos of banned sectarian and terrorist organizations which provide incitement to violence, weapons training, speeches by jihadi leadership, and general material intended to radicalize potential recruits. YouTube also hosts a video of a speech by a sectarian terrorist leader named Haq Nawaz Jhangvi giving a so-called fatwa proclaiming that anyone who doubts the apostasy of a Shia is also an apostate. Jhangvi's followers have killed innocent Shias all over Pakistan. Facebook and Twitter are also rife with hatred and incitement to violence against minority sects such as Shias and Ahmadis. Not only sects but also individuals with a critical stance against the dominant religious clergy and the powerful establishment are targeted. They are not only called traitors and apostates but are also issued explicit threats against their lives. Facebook pages such as My Ideology is Islam and my Identity is Pakistan, Jaago Pakistan, Enemies of Pakistan and others, with hundreds of thousands of likes, indulge in unstoppable campaigns of vilification and incitement to violence against other nations, ethnicities, sects, creeds, and individuals. Similarly, Twitter hosts active handles of a great number of banned outfits. With staggering numbers of followers, they have been known to engage in active hate campaigns from time to time.

This is the domestic context, but hate speech on social media exists on a global scale too. Its most common forms are gender stereotyping, sexual abuse, and the promotion of violence. A page titled Good Girl Gina, with millions of followers, is a campaign for degrading women and inciting violence against them. Similarly, another page titled 12-year-old Slut vilifies female teenagers and calls for aggression and sexual violence against them. The glorification of violence is another online trend. On July 20, 2012, a man named James Holmes walked into a packed midnight screening of The Dark Knight Rises in the United States and opened fire, killing 12 people and injuring 59 others. Bizarrely, various pages celebrating and honoring Holmes popped up on Facebook.

It is difficult to control who posts what on social media. Social media sites generally do have terms of use governing what can and cannot be posted. YouTube and Facebook say that they review postings that are flagged or reported as inappropriate to determine whether they violate their terms of use. But it is extremely difficult to monitor postings from millions of users, particularly if the language used is not one of the international languages.

Moreover, the diverse social and legal contexts which determine the parameters of free and hate speech are not universal; what is called hate speech in one country can be free speech in another. The controversial YouTube video Innocence of Muslims is a good case in point. According to Google, the company which owns YouTube, the Innocence of Muslims video does not violate YouTube's terms of service regarding hate speech because it is focused on the Muslim religion and not on the people who practice it. Rachel Whetstone, senior VP for Communications and Public Policy at Google, said: "One type of content, while legal everywhere, may be almost universally unacceptable in one region, yet viewed as perfectly fine in another."

In Pakistan's context it becomes much more difficult because of the social setting. Religious content enjoys such blanket immunity and unquestioned acceptance in Pakistan that people find it too sensitive to delineate whether a given piece of content is merely religious or is hate speech preached in the name of religion.

Content containing hate speech will always exist, and social media has made it more interactive, which increases the chances of direct conflict. There is thus a need to review hate speech legislation. Nevertheless, the law may not always be a panacea for hate, nor is government censorship. In fact, it is very hard to create legal prohibitions or prescriptions against the free flow of information on social media. There is a need to deal with hate speech on social media in other, more creative ways. The best antidote to hate speech is more speech. Public awareness of hate speech on social media can do a lot to sensitize users, Internet companies and governments. There is also a need to popularize and circulate reports and materials related to hate speech on the Internet.

Social media has taken our lives from a private to a more public sphere. Its influence on our lives grows day by day, and it has changed the way we communicate and interact with each other. It is therefore crucial to embed this new communication paradigm with tolerance, so that greater access to information does not come at the expense of spreading hate speech.
