The Internet has been a major driver in connecting people across the globe for many years. As the cost of producing content drops, it becomes easier and easier for the average person to share their views with an incredibly large audience. However, with this increased connectivity comes a dark side: should we really allow certain types of material to spread across the globe in this relatively uninhibited and unregulated space?
ISIS, in particular, has seen great success using social media platforms like Twitter and Facebook to recruit new members and incite violence. Despite the efforts of these social media sites to remove pro-terrorist content, and despite the seemingly obvious answer to ISIS propaganda (“just delete their accounts”), it is becoming more and more difficult to control the firehose of content being posted online every day.
ISIS is an extreme example of Internet misuse. The vast majority of people (myself included) have no problem with the deletion of terrorist content online. But outside of terrorism and explicit incitement to violence, who decides what content is discriminatory, hateful, or offensive? And who decides what content is inflammatory enough to warrant removal?
Countries like China have enlisted the help of companies like Yahoo! and Google to block content from their platforms that the government deems inappropriate. Obviously this would never hold up in the United States, but China offers no meaningful freedom-of-speech protections, and there is very little recourse under Chinese law for companies and individuals that are censored.
While I wholeheartedly disagree with the Chinese government’s oppression of its own people, I think it’s dangerous to place corporations in a position where they can pick and choose which laws of a country they want to obey. On the other hand, I don’t think it’s ethical for companies to assist or be complicit in a government’s oppressive behavior. I agree with Lee Rowland’s sentiment that:
“If these companies do whatever they’re capable of doing to publicize that their content is being screened, monitored, and sometimes censored by governments, I think there’s a really good argument that maintaining a social-media presence is inherently a liberalizing force.”
While I don’t believe it is unethical for a company to cease operations in a country with oppressive censorship laws, I believe the best course of action is to comply with government restrictions while making it explicitly clear to users, through transparency reports and warning messages on webpages, that their content is being monitored by government agents. This way, a company can comply with a country’s laws without being complicit in the silent oppression of its people. By maintaining its presence in a country and using that presence to show users when content is blocked and how often their information is monitored, a company can encourage citizens to take action in their own countries. Indeed, these transparency reports have also proven valuable in educating United States users about government surveillance of their own content.
The other major censors of online content are the companies hosting the content themselves. Facebook, Twitter, YouTube, and most other content providers have their own rules and regulations about what can and cannot be posted on their services. Common provisions include prohibitions on violent, hateful, or illegal content. While it is completely within a company’s legal rights to decide what it hosts on its own servers, I believe it is highly unethical for a company to delete content merely because that content conflicts with the company’s own interests and beliefs. If companies want to hold the moral high ground in debates over issues like government censorship and surveillance, they must ensure that they aren’t doing the very same things themselves.

The bias in a company’s decision should be toward maintaining content unless it very specifically incites violence or is illegal. In the vast majority of cases, it should be left to users to determine what content they see on the platform. If users post overtly sexist Breitbart articles and the company takes them down immediately, it denies other users of the service the chance to debate these complex and important issues, and in many cases to debunk pervasive and harmful stereotypes. The echo chamber this creates is not only dangerous and divisive, but forms a feedback loop that actually prevents users from escaping the “filter bubble.” As offensive content is removed and the filter bubble restored, users become more and more convinced of their own opinions (right or wrong) and continue to produce more inflammatory content, which is again removed, and the cycle restarts.
The Internet should not be something that divides people, but rather something that connects them in ways that were not possible 20 years ago. Censorship in all forms, excepting extreme circumstances like terrorism, necessarily divides people, and users should remain vigilant against the dangers of both government- and corporate-sponsored censorship.