Internet Censorship

The Internet has been a major driver in connecting people across the globe for many years. As the cost of producing content drops, it becomes easier and easier for the average person to share their views with an incredibly large audience. However, with this increased connectivity comes a dark side: should we really allow certain types of material to spread across the globe in this relatively uninhibited and unregulated space?

ISIS, in particular, has seen great success using social media platforms like Twitter and Facebook to recruit new members and incite violence. Despite these sites’ efforts to remove pro-terrorist content, and despite the seemingly obvious answer to ISIS propaganda (“just delete their accounts”), it is becoming more and more difficult to control the firehose of content posted online every day.

ISIS is an extreme example of Internet misuse. The vast majority of people (myself included) have no problem with the deletion of terrorist content online. But (outside of terrorism/explicitly inciting violence) who decides what content is discriminatory, hateful or offensive? And who decides what content is inflammatory enough to warrant removal?

Countries like China have enlisted the help of companies like Yahoo! and Google to block content from their platforms that the government deems inappropriate. Obviously this would never hold up in the United States, but China offers no meaningful freedom-of-speech protections, and there is very little recourse under Chinese law for companies and individuals that are censored.

While I wholeheartedly disagree with the Chinese government’s oppression of its own people, I think it’s dangerous to place corporations in a position where they can pick and choose which laws of a country they want to obey. On the other hand, I don’t think it’s ethical for companies to assist or be complicit in a government’s oppressive behavior. I agree with Lee Rowland’s sentiment that:

“If these companies do whatever they’re capable of doing to publicize that their content is being screened, monitored, and sometimes censored by governments, I think there’s a really good argument that maintaining a social-media presence is inherently a liberalizing force.”

While I don’t believe it is unethical for a company to cease operations in a country with oppressive censorship laws, I believe the best course of action is to comply with government restrictions while making it explicitly clear to users (through transparency reports and warning messages on webpages) that their content is being monitored by government agents. This way, a company can comply with the laws of a country without being complicit in the silent oppression of its people. By maintaining its position within a country and using that position to show users when content is blocked and how often their information is monitored, a company encourages citizens to take action in their own countries. Indeed, these transparency reports have also proven very valuable in educating United States users about government surveillance of their content.

The other major censors of online content are the companies hosting the content themselves. Facebook, Twitter, YouTube, and most other content providers have their own rules about what can and cannot be posted on their services. Common provisions include prohibitions on violent, hateful, or illegal content. While it is completely within a company’s legal rights to decide what it hosts on its own servers, I believe it is highly unethical for a company to delete content merely because that content conflicts with its own interests and beliefs. If companies want the moral high ground in debates over issues like government censorship and surveillance, they must ensure that they aren’t doing the very same things. A company’s bias should be toward maintaining content unless it specifically incites violence or is illegal. In the vast majority of cases, it should be left to users to determine what content they see on the platform.

If users post overtly sexist Breitbart articles and the company takes them down immediately, it denies other users of the service the chance to debate these complex and important issues, and in many cases to debunk pervasive and harmful stereotypes. The echo chamber this creates is not only extremely dangerous and divisive, but sets up a feedback loop that actually prevents users from escaping the “filter bubble.” As offensive content is removed and the filter bubble restored, users become more and more convinced of their own opinions (right or wrong) and produce more and more inflammatory content, which is again removed, and the cycle restarts.

The Internet should not be something that divides people, but something that connects them in ways that were not possible 20 years ago. Censorship in all forms (excepting extreme circumstances like terrorism) necessarily divides people, and users should be vigilant about the dangers of both government- and corporate-sponsored censorship.

Corporate Conscience, Muslim Registry

The idea that “corporations are people” has been around for a long time. The specific terminology and reach of court decisions have varied greatly, but the debate remains the same: are corporations entitled to rights generally intended to protect individual citizens? While the specific phrasing “corporations == people” is a gross oversimplification of the issue, the idea of corporate personhood has some merits.

Obviously, corporations are not the same as people. As Kent Greenfield writes:

“Of course corporations are not genuine human beings. They should not automatically receive all the constitutional rights that you and I can claim. Corporations cannot vote or serve on juries, for example; it does not make any sense to think of corporations asserting those rights, both because of the nature of the right and the nature of the corporate entity.”

However, Greenfield goes on to point out that corporate personhood is not intended to bestow every right afforded to ordinary citizens; it is merely a way to create a legal entity distinct from any person within it, in order to protect shareholders, owners, and employees, both from certain forms of government intervention and from each other. While it may not seem that huge corporations require protection from government intervention, it is important to remember that smaller entities (churches and religious groups, small businesses, civil rights groups, etc.) are often considered corporations under U.S. law. In order to protect interest groups that may be at odds with the party in power (for example, the NAACP during the civil rights protests of the 1960s), it is necessary to give these associations some of the rights of ordinary citizens, primarily to protect the people within the corporation, not necessarily the corporation itself. Similarly, corporate personhood allows corporations themselves to be brought to trial should they break the law.

I disagree with Greenfield when he claims that corporations cannot have a conscience. Obviously they don’t have an individual conscience like people do, yet they still are faced with ethical dilemmas. A corporate conscience is a collective one, shared between its employees and customers. Effective corporations cannot make decisions based solely on profit, as Greenfield claims. Take a look at the news coverage surrounding Uber in the past few months. Uber made a series of questionable ethical decisions (ignoring claims of sexual harassment, failing to comply with numerous regulations, etc.), and is now paying the price: customers are leaving en masse (see #DeleteUber), and a number of high-ranking executives and normal employees are abandoning their jobs after finding the company’s values no longer aligned with their own.

On the other hand, when President Trump proposed the idea of a “Muslim Registry” on the campaign trail, companies and employees alike pledged that they would not be complicit in utilizing corporate power to help the government track Muslims in the U.S. This is a perfect example of collective corporate conscience at work: when employees and customers band together to prevent unethical corporate behavior. Even though a corporation is an inanimate entity and is by itself incapable of ethical reasoning, the people involved in a corporation give it the ability to make ethical (or unethical) decisions. Making unethical decisions as a corporation is long-term unsustainable: employees will leave and customers will boycott until the company can no longer do business effectively. With a strong vocal customer and employee base, corporations will be less likely to engage in unethical business practices, even without strong government oversight.

The protections given to corporations under the Constitution and “corporate personhood” are very important for a number of reasons, including making it possible for companies to stand up to perceived or actual government coercion (like the Muslim Registry) without fearing retribution. Furthermore, just because corporations are given certain protections that an ordinary citizen would have does not mean that they cannot be held accountable: indeed, they should be held accountable every day by their customers and employees. The fact that corporations are considered “people” in this way means they can also be held legally accountable, through the same Constitutionally governed civil and criminal process that ordinary citizens have a right to.

Internet of (Insecure) Things

The Internet of Things is an ever-growing network of items we use every day, connecting mundane objects like thermostats and lightbulbs (maybe even your car) into a huge glob of data that purports to leverage these connections to make everything in your life “smart.” As IoT devices become increasingly mainstream, the challenge of securing the multitude of devices becomes simultaneously much more difficult and much more important.

One huge challenge in securing IoT devices is that each installed device greatly increases the number of potential access points into a network. In a typical home, there are usually only a few internet-connected devices: mainly laptops, PCs, and phones. But throw in a smart refrigerator, several smart lightbulbs, a smart thermostat, and a smart power meter, and the attack surface grows very quickly. This is further complicated by the fact that many users don’t follow good security practices, like changing default passwords and staying on top of software updates. Sure, you might update your computer every month or so, but who thinks about updating the software that runs their kitchen lights? As these devices become mainstream, it is no longer reasonable to assume the typical user has the technical experience to take adequate security measures. These devices need to be designed and shipped with security as a top priority: not only making good security available, but making it the default, easily accessible to a non-technical user.
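To make the default-password risk concrete, here is a minimal, purely illustrative sketch of the kind of check an auditor (or, in the wild, a botnet like Mirai) might run against a home network. The device inventory and credential list are invented for illustration; real attacks probe devices over the network rather than reading a local dictionary.

```python
# Hypothetical audit: flag IoT devices still using factory-default credentials.
# Device names and credential pairs below are invented for illustration only.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
}

def audit_devices(devices):
    """Return the names of devices whose credentials match a known default."""
    return [
        name
        for name, (user, password) in devices.items()
        if (user, password) in DEFAULT_CREDENTIALS
    ]

if __name__ == "__main__":
    home_network = {
        "thermostat": ("admin", "admin"),       # never changed -> flagged
        "refrigerator": ("root", "12345"),      # never changed -> flagged
        "lightbulb-1": ("owner", "s3cure-pw"),  # user set a real password
    }
    for device in audit_devices(home_network):
        print(f"WARNING: {device} is using a factory-default password")
```

The point of the sketch is how cheap this check is for an attacker: a short, static list of factory defaults is enough to sweep an entire network, which is exactly why secure-by-default credentials matter.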

Another challenge in securing IoT devices is that there are simply a lot of reasons a malicious actor might want to compromise one. Perhaps the most obvious is a direct, targeted attack on the user or their data. Attacks like this have been demonstrated by security researchers on internet-connected cars, where hackers were able to compromise both electrical and mechanical systems of the car. Thankfully, car hacking is not quite so easy, as this Scientific American article points out:

Here’s the simple truth. No hacker has ever taken remote control of a stranger’s car. Not once. It’s extraordinarily difficult to do. It takes teams working full-time to find a way to do it.

There are certainly other (less grandiose) reasons a hacker might want to execute a targeted attack (disabling security systems, using IoT devices to compromise the whole network, etc.), but there’s also a more subtle threat: surveillance. The Harvard Law Bulletin puts it this way:

…the Internet of Things is raising “new and difficult questions about privacy over the long term,” according to Zittrain, yet is mushrooming with almost no legislative or regulatory oversight. We are, he says, “hurtling toward a world in which a truly staggering amount of data will be only a warrant or a subpoena away, and in many jurisdictions, even that gap need not be traversed.”

The scope of surveillance isn’t limited to governments. Criminals sniffing IoT traffic from a poorly secured smart house could easily tell whether the house was occupied, or even determine the daily habits of its occupants.
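As a toy illustration of how little an eavesdropper needs: suppose they observe only the hour of day at which a home’s devices generate traffic bursts, without decrypting anything. The timestamps and threshold below are invented, but simply counting events per hour already sketches an occupancy schedule.

```python
# Toy illustration: guess likely-occupied hours from the *timing* of
# (hypothetical) IoT traffic alone, with no access to packet contents.
from collections import Counter

def occupied_hours(event_hours, min_events=3):
    """Hours with at least `min_events` device events are guessed 'occupied'."""
    counts = Counter(event_hours)
    return sorted(h for h, n in counts.items() if n >= min_events)

if __name__ == "__main__":
    # Hour-of-day of each observed traffic burst (lights, thermostat, etc.).
    observed = [7, 7, 7, 8, 12, 18, 18, 18, 19, 19, 19, 23]
    print(occupied_hours(observed))  # the morning and evening routine stand out
```

Real traffic-analysis attacks are more sophisticated, but the principle is the same: metadata alone leaks behavior, which is why even encrypted IoT traffic deserves scrutiny.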

Furthermore, the purpose of IoT security isn’t to protect just the user of the device. Last year’s DDoS attack on Dyn showed the power of IoT botnets, allowing attackers to shut down some of the world’s largest websites almost on demand. Now that we have essentially unmonitored internet-connected devices in every room of the house, your home or business could easily become a staging point for an attack on another target, completely without your knowledge. We therefore need to build a sort of “herd immunity” by ensuring that security is baked into the design process of IoT devices, since even a few compromised devices on a network can give an attacker access to scores of others.

While many are quick to propose government regulation, I think a much better solution would be to create independent review boards (described in this Ars Technica article), similar to Consumer Reports or the NHTSA safety ratings for cars. Government regulation can only go so far: while mandating certain security processes might help secure existing devices, the slow-moving legal process could stifle innovation as security practices, priorities, and threat models change. Furthermore, it would be difficult to hold corporations accountable for secure design until a catastrophe actually occurred. An independent review board, on the other hand, where security researchers and penetration testers examine each device’s security on a case-by-case basis, would be far better at keeping pace with the ever-changing world of computer security, and would help keep us all safer.