Disrupting the CS degree

This has been in many ways a transitional year for the Notre Dame CSE department. With the elimination of Fundamentals of Computing 2 and the shift of Data Structures from junior to sophomore year, core CS concepts are being taught earlier, mainly to align with industry demands. However, the department has also experienced some growing pains (or “scalability problems,” if you prefer buzzwords), since enrollment has more than doubled in the past two years. Meanwhile, there is a growing contingent of people in industry claiming that a college degree is wholly unnecessary to work in tech in 2017. At the heart of the issue is the disconnect between a fast-moving, ever-changing industry and the mostly static structure of academic institutions. How can a degree program carrying years of baggage from academic traditions and norms hope to compete with an industry that changes by the month?

ACM, the Association for Computing Machinery, and ABET, the organization that accredits collegiate engineering programs, both publish their own guidelines (ACM, ABET) for designing an engineering curriculum. While the guidelines arguably have some merit, there is a more fundamental problem than the exact text of the recommendations. It’s the difference between Waterfall and Agile: academic institutions have to be more concerned with making sure their CS department aligns with the exact accreditation requirements than with keeping their CS program on the “bleeding edge” of the industry. When departments constantly operate at the pace of the accreditation board, waiting for it to issue changes to the curriculum on a 2-3 year cycle, they lose some of the ability to pilot new and innovative changes to the program. Notre Dame, to its credit, does a very good job of aligning to the ABET criteria, but we still ended up in a situation this year where we realized our curriculum had begun to lag behind the industry and our Peer Institutions™, and suddenly had to make drastic changes to keep up. Wouldn’t it be better to make a number of small, iterative changes every year instead of a massive overhaul every few years?

Having TA’d for the new intro course sequence (Fundamentals -> Data Structures) in this year of change, I naturally started thinking about the curriculum and wondering what my “ideal” CS program would look like if we could start from scratch. After thinking about this for much of the year, I came up with some (very) rough design goals (numbered below) and specific changes (bulleted) that I think would benefit our CS program.

  1. Strong engineering background. Students should be familiar with engineering process, tools and lingo. Students should be able to communicate highly technical ideas to both technical and non-technical audiences, in both writing and speech.
    • Teach LaTeX in EG101 for technical writing. Focus more on presentation skills and technical/research writing.
  2. Streamline the curriculum and remove redundancy. Teach critical topics earlier.
    • Move Fundamentals 1 to spring semester of freshman year
      • Teach in Python
      • Focus on high-level concepts of programming, plus basic OOP (see the short sketch after this list)
    • Combine the old Fundamentals 2 with Systems Programming
      • Fall semester sophomore year
      • C/C++
      • Unix methodology
      • Shell scripting
    • Combine Computer Architecture and Logic Design
      • von Neumann concepts from Logic Design
      • Teach x86 rather than MIPS
      • How compilers work
    • Data Structures
      • Spring of sophomore year
      • Dive deep into OOP
      • Abstract and concrete data structures
      • Basic algorithms
    • Algorithms
      • Move to fall of junior year
      • Require for both CPEG and CSE
  3. Maximum flexibility in specialization. Not everyone wants to be a software engineer. Including a lot of space for CSE electives allows students to choose (with the help of their adviser) a sequence of electives that suits their career goals.
    • Since the “core” computer science curriculum is mostly finished by the middle of junior year, many CSE elective spots open up.
    • Expand selection of concentrations available, encourage students to work with advisers to build their own concentration/elective sequence to achieve their career goals
    • Allow students to explore different options for career paths
  4. Maintain Notre Dame’s “holistic education.” The curriculum should be designed to take advantage of the value of liberal arts classes to a technical education. Many liberal arts classes are directly relevant in a technical context (philosophical logic, computer ethics, etc.), but they also allow the student to explore creative outlets that complement the technical coursework. In some ways, computer science is like art, requiring a great deal of creativity and the ability to find meaning in abstractions.
    • Encourage students to think about ethical implications of research in specific elective classes they take
    • Encourage students to use liberal arts classes to complement their technical education
  5. Focus on building a portfolio rather than a resume. The student should have a set of robust, long-term and well-documented projects that serve not only to build a professional reputation, but also to encourage students to think creatively, both in identifying problems and designing solutions. “A resume says nothing of a programmer’s ability. Every computer science major should build a portfolio.” (source)
    • Add a yearlong senior design project for both CS and CPEG. Students work with faculty advisers to take a project from a problem statement, to an idea, to development, documentation and codebase maintenance. Encourage students with complementary skillsets (from their concentration paths) to work together
    • Encourage elective classes to include projects that can be added to a student’s portfolio. Such projects should be cleanly written and well documented–something that can be shown to an employer.
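
To make the intent of the Python-first Fundamentals course concrete, here is a rough sketch (my own illustration, not actual course material) of the level of “high-level concepts plus basic OOP” I have in mind: a small class with state and methods, and no manual memory management to worry about.

```python
# A toy example of the "basic OOP" level a Python-first Fundamentals
# course might build up to: a class with state and methods, plus a
# short driver program. Names here are purely illustrative.

class Student:
    def __init__(self, name):
        self.name = name
        self.grades = []                     # list of (course, score) pairs

    def add_grade(self, course, score):
        self.grades.append((course, score))

    def average(self):
        if not self.grades:
            return 0.0
        return sum(score for _, score in self.grades) / len(self.grades)


if __name__ == "__main__":
    s = Student("Knute")
    s.add_grade("Fundamentals of Computing", 94)
    s.add_grade("Data Structures", 88)
    print(f"{s.name}: average {s.average():.1f}")
```

The point is that students can focus on abstraction and program design first, and defer pointer- and memory-level details to the combined Systems Programming course in C/C++.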

I realize I’ve digressed somewhat from the original point of the blog post, and I just want to take a moment to say that despite all my criticism, I do really like the Notre Dame CS curriculum. It isn’t perfect, but it works, and it works quite well. I’m only temporarily invoking my graduating senior/old man complaining rights (“I could do that better!”) because I really enjoyed my time here and I want our department to continue to be successful.

Pictured: Nick Aiello writing this blog post, 2017

Despite what Silicon Valley says, I think there are still benefits to an undergrad CS education. Many of the arguments to the contrary seem to hinge on a faulty equivalence between programming and computer science. Computer Science != Programming. Programming is certainly an important part of computer science, but it’s only part of the picture. You can certainly learn to be a good programmer without a college education. Programming is just a tool used to express your ideas. An artist might express an idea in paint, photograph, or pencil sketch (just as a programmer might use Java or C++), but the idea itself is independent of the medium. A bachelor’s degree program provides a four-year span of time to hone that deeper skill: not only expressing an idea in code, but creating and refining the idea in the first place. It’s also a risk-free environment to explore career options without the overhead of switching jobs, and to explore new and innovative project ideas that might be too risky for industry. I don’t regret choosing to get a college degree in CS, even if I could’ve possibly gotten the same job from a coding bootcamp, because “getting the job” isn’t really the important part: it’s about the experience.

Intellectual Property and FOSS

The copyright clause of the U.S. Constitution states that Congress shall have the power:

“To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” – U.S. Constitution, Article I, Section 8, Clause 8

On its surface, this seems pretty simple: if you create something, you should be protected in your right to distribute, sell or monetize it in whatever way you see fit. This clause both protects and encourages creators, while discouraging those who would fraudulently claim the work as their own.

If inventions were created in a vacuum, this might be sufficient. However, this is not a realistic picture of what “innovation” is. Especially in technology, every innovation is linked to many others in a myriad of hard-to-define ways. If we interpreted every patent strictly, certain inventions would be difficult to use in the creation of others, forcing new inventions to “reinvent the wheel,” so to speak. For this reason, patents generally have to be quite limited in scope, and we have “fair use” laws intended to allow for the creation of derivative works. In software this is especially important, as a program might link against any number of other libraries, all of which could have different license restrictions.

FOSS (free and open-source software) developers aim to combat this problem by providing a baseline set of libraries that can be used for a variety of purposes with few or no strings attached. At the forefront of this movement is the GNU Project, started by Richard Stallman and backed by his Free Software Foundation, which commonly uses the tagline “free as in speech, not as in beer.” The implication of this statement is that “free software” should not only be monetarily free but should also allow the user freedom of expression regarding the source code.

While I have a lot of respect for the GNU Project and the FSF, I do take issue with their GPL “copyleft” license. Essentially, any project using libraries licensed under the GPL must also be open-sourced under the GPL. While it is certainly understandable to want to keep open source code open source, I think it degrades the “freeness” of the software, since it essentially can’t be used by developers who, due to circumstances beyond their control (corporate, regulatory, etc.), don’t control the license of their proprietary code. I prefer a license like the BSD license, which is non-copyleft but is still compatible with the GPL and affords users many of the same freedoms. Although the open source community deals with a lot of problems regarding third parties taking advantage of the freedom afforded by permissive licensing, this is a necessary evil in order to keep the software community “free.”

The open-source community does have several benefits that proprietary software lacks. Namely, FOSS is often (counterintuitively) more secure than its proprietary equivalent. In information security, Kerckhoffs’s principle is a common axiom touting the benefits of open-sourcing software. Essentially, the principle states that a cryptosystem should remain secure even if everything about it, except the key, is made public. It’s important to note that while FOSS is necessarily more public, it is not always inherently better from a Kerckhoffs’s-principle standpoint; rather, it has the potential to be much more secure by being reviewed by a large number of eyes. Whether a large number of eyes actually sees the open source code is often unpredictable, as we saw from the Heartbleed bug a few years ago.
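
As a small, concrete illustration of Kerckhoffs’s principle (my own sketch, using the open-source `cryptography` package rather than anything tied to a specific FOSS project): the Fernet recipe below is fully published and its implementation is public, yet messages stay confidential as long as the key does.

```python
# A minimal sketch of Kerckhoffs's principle using the open-source
# `cryptography` package (pip install cryptography). Everything about
# the Fernet algorithm is public; only the key is secret.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # the ONLY secret in the system
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the dome at noon")
print(cipher.decrypt(token))         # b'meet at the dome at noon'

# An attacker who knows the algorithm completely but lacks the key
# cannot recover the plaintext.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("wrong key: decryption fails")
```

The “many eyes” argument is about exactly this kind of design: since nothing but the key needs to stay secret, the rest of the system can be published and audited.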

Autonomous Vehicles

It seems like everyone’s getting into the self-driving car business these days. While the advent of autonomous vehicles (AVs) could be extremely beneficial for all of us, it is important to address the numerous ethical issues posed by AVs.

AVs represent a huge turning point for AI. While chess- or Go-playing AIs are impressive, autonomous driving AIs must operate in a largely unstructured environment. Furthermore, unlike game-playing AIs, AVs have direct real-world consequences. Beyond taking the frustration out of your morning commute, AVs would make our roads much safer. Human drivers often simply don’t have the reaction time necessary to avoid every road hazard, especially when travelling at high speed, in bad weather or in New Jersey. This is the part of the task of driving where the machine will always win: pure reaction speed. If a deer jumps out in front of your car at night, your car might be able to save your life by slamming on the brakes before you even process what’s happening.

However, there is still one area where the AI driver struggles to keep up with the human driver: moral reasoning. This is the primary argument against AVs: what happens when your self-driving car is barreling towards a group of pedestrians crossing the street, and it can only avoid them by driving off the road, possibly injuring or killing the passengers of the car? A human driver may or may not have sufficient time to make an ethical judgment in this scenario, but an AI doesn’t have that excuse: it will almost certainly have time to make the decision. Utilitarianism suggests that the car should minimize the total harm done, but would anyone be willing to get into an AV that might opt to kill them? A study in Science magazine finds:

Still, study three presents the first hint of a social dilemma. On a scale of 1 to 100, respondents were asked to indicate how likely they would be to buy an AV programmed to minimize casualties (which would, in these circumstances, sacrifice them and their co-rider family member), as well as how likely they would be to buy an AV programmed to prioritize protecting its passengers, even if it meant killing 10 or 20 pedestrians. Although the reported likelihood of buying an AV was low even for the self-protective option (median = 50), respondents indicated a significantly lower likelihood (P < 0.001) of buying the AV when they imagined the situation in which they and their family member would be sacrificed for the greater good (median = 19). In other words, even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves.

Furthermore, who should be held responsible when an AV makes a mistake, like the recent Tesla Autopilot crash? Ordinarily, we’d find the driver at fault, but what happens when that driver is an AI? If we revoke the AI’s “driver’s license,” so to speak, suddenly hundreds of cars not involved in that collision are affected.

While all these concerns are valid, they are not cause enough to cease development of AVs, for a number of reasons. First, I believe that safe AVs are completely within our reach. We have been building other “safety-critical” software for years, controlling the navigation systems of airplanes, the power grids of large cities and medical machines in hospitals. All of these systems could potentially cause harm if they fail, but we have learned new development processes and risk-mitigation strategies to provide a reasonably strong guarantee of safety. Once AV technology has reached a stable state, it is very likely that the failure rate of AI drivers will be much lower than the “failure rate” of human drivers. So while AI drivers might make mistakes, humans make orders of magnitude more.

Second, while the “trolley problem” and its variants are compelling ethical thought experiments, real-world examples of this problem involving AVs will be incredibly rare. We humans have been dealing with this problem since cars were invented: it’s the reason we have lower speed limits in heavily congested areas and areas frequented by pedestrians. If a car is travelling at a speed appropriate for the traffic (human and automobile) on the road (for example, 25 mph in an area with a lot of pedestrians), it is unlikely that the AV’s reaction time will be so slow that it cannot stop (or slow to a non-fatal speed) in time, even if the pedestrian jumps out right in front of the car. In the case that the car must swerve off the road, an AV is much more likely than a human to be able to do so without injuring the passengers, since it can maneuver to take maximum advantage of the car’s physical safety features and avoid the effects of human error (rolling the car, hitting an avoidable obstacle on the side of the road).

Finally, we must consider the worst case: despite all the precautions, someone is still injured or killed in the crash of an AV. Who takes the blame? In this case, it is impossible to make a broad generalization. The AV manufacturer, the human “driver”/passenger, or the driver of the other car/pedestrians are all potentially at fault. For example, in the Tesla Autopilot crash, the driver was found to have been watching a movie while driving, even though Tesla explicitly states that their Autopilot mode is not ready to be used without human supervision. While I don’t believe that Tesla can wash their hands of all crashes simply by saying “oh well, we told you it was a beta feature,” it was clear the driver had no interest in monitoring the operation of the vehicle, and Tesla cannot be held fully liable for his inattentiveness. Accidents like this must be addressed on a case-by-case basis (as they are currently) to determine fault. While it is unlikely that an AV will never be at fault in a crash, the point remains that the overall accident rate will likely decrease greatly with more AVs on the road.

Although I’ve spent most of this post defending AVs, I am still unsure that I would ever buy one. The reason is more personal than logical: I enjoy driving. I would absolutely buy a car with enhanced safety features (automatic emergency braking, etc), but I’m not ready to give up the act of driving just yet. Driving, especially on long car trips, is relaxing to me. I can sit back, listen to music and podcasts and just generally unwind, with the perfect excuse to have a few minutes of freedom from emails and messages: “Sorry, I was driving.”

Wikileaks Podcast

This week I recorded a podcast with Alanna McEachen and Nathan Kowaleski regarding Wikileaks and Julian Assange. The podcast can be found on SoundCloud here. Below are some of my thoughts on the podcast and Wikileaks in general.

I have mixed feelings about the disclosure of a number of CIA hacking techniques in the Vault 7 leak. First and foremost, I think it’s great that Wikileaks is continuing to shed light on the operations of secret government organizations. Bringing to light a number of zero-day vulnerabilities in common technology will make us all safer by allowing device and software manufacturers to patch the bugs before they are discovered by parties more openly hostile to the interests of the average citizen. However, I think Vault 7 was less explosive than many of the other Wikileaks dumps. The earlier dumps exposed severe, often criminal, wrongdoing by governmental and corporate bodies, but these dumps really fail to expose any wrongdoing. Is anyone surprised that an intelligence agency keeps spy tools around? Wikileaks didn’t publish any evidence that the tools were being used against American citizens, or that the government’s hacking ability exceeds that of any given team of security researchers. In this way, I think the Vault 7 leak was overblown. I definitely want Wikileaks to continue its mission of bringing transparency to government organizations that need it, but by exaggerating the impact of the leaks, the organization is degrading its own credibility at a time when it could sorely use some.

Second, I’m not convinced that Assange is as trustworthy as he’d have the world believe. While the stated goals of Wikileaks may be noble, I think they’d benefit from less involvement from Assange. He has done little to convince me that he is any more trustworthy than the organizations he publishes leaks on.

Finally, I think whistleblowing can be an ethical decision. It can also be an unethical one. It depends on the motivations of the person doing the whistleblowing and on whether the necessary care is taken to prevent collateral damage from the leaks. I think whistleblowing out of spite is unethical. Whistleblowing should be a tool for transparency and accountability, not merely a way to dig up “dirt” on a political opponent and further an agenda. Whistleblowers should also be careful to limit collateral damage. For example, Wikileaks generally redacts names from its leaks to protect the so-called “little guys” in the documents from retribution. Whistleblowing is about holding people accountable for wrongdoing, but only if they were actually responsible for that wrongdoing. If an innocent informant gets killed because a document is poorly redacted, I would argue that it would be better for the document not to have been leaked at all. However, when conducted responsibly, I think whistleblowing plays a valuable role in the democratic process. Keeping citizens informed of the government activities their tax dollars pay for is a crucial part of democracy–citizens who are able to see firsthand the atrocities of war will (hopefully) be less likely to support future wars.

Artificial Intelligence

Artificial Intelligence, at its core, is about estimating functional relationships. It may not be possible to directly observe the relationship between a set of inputs X and the corresponding output Y, but given a reasonable amount of data and a variety of sophisticated statistical models, a computer can estimate the functional transformation from X to Y to a degree of accuracy comparable to or surpassing human thought. As Kris Hammond says in his introductory treatment of AI, AI is all about “enabling computers to do the same sorts of things humans do.” Humans estimate these relationships too: we use basic sensory inputs and a variety of experience to synthesize conclusions about various phenomena and take action (outputs) based upon those conclusions. In this way, human intelligence is very similar to machine intelligence–we reason based on relationships inferred from data. However, the human mind is much better at extracting relevant conclusions from a breadth of data, whereas machine intelligence is much better at finding subtle and nuanced patterns in a limited dataset. That is, statistical methods are much better at solving the “narrow AI” problem, whereas the human brain is capable of much more general reasoning.
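
As a minimal sketch of what “estimating the functional transformation from X to Y” looks like in practice (my own toy example using scikit-learn, not something from Hammond’s article), the model below never sees the true relationship–only noisy samples–and recovers an approximation of it from data alone.

```python
# A toy illustration of AI as function estimation: the model only sees
# noisy (X, Y) samples and learns an approximation of the hidden mapping.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def true_f(x):
    # The hidden "true" relationship, never shown to the model directly.
    return 3.0 * x + 2.0

X = rng.uniform(0, 10, size=(200, 1))
Y = true_f(X[:, 0]) + rng.normal(0.0, 1.0, size=200)   # noisy observations

model = LinearRegression().fit(X, Y)
print(f"estimated: y = {model.coef_[0]:.2f} * x + {model.intercept_:.2f}")
# Should print coefficients close to the hidden y = 3x + 2.
```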

One important example of this is human emotion. While “feelings” are in many ways poorly understood, they enable a human brain to group experiences by how those experiences made us feel, rather than by any specific combination of sensory input we were receiving at the time. As yet, we don’t have a way of teaching a machine to “feel” or to cluster its input data in this way. Current attempts are limited to techniques like “sentiment analysis,” where a machine attempts to learn a relationship between a certain set of data and a generalized human emotion (happy, sad, angry, etc.). This may allow a machine (in a limited domain) to guess how a human might feel about a particular article in a newspaper, but machines, as yet, cannot develop their own concept of “emotion.” Human emotion is incredibly nuanced and can’t easily be boiled down into generalized terms like “happy” or “sad” in a way that would allow a machine to cluster experiences meaningfully by their predicted emotional content. This severely cripples a machine’s ability to truly understand the data on which it operates.
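
For reference, here is roughly what “sentiment analysis” amounts to in practice–a toy sketch with invented example sentences and a scikit-learn bag-of-words model. Note how the emotion labels are a small, fixed set imposed by the humans labeling the data; the model never develops any concept of feeling on its own.

```python
# A toy sentiment-analysis sketch: map text onto a tiny, human-defined
# set of emotion labels using bag-of-words features. The example
# sentences are invented purely for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "what a wonderful, sunny day",
    "I love this new album",
    "this traffic is miserable",
    "I am so tired and upset",
]
labels = ["happy", "happy", "sad", "sad"]    # coarse, human-chosen categories

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["what a lovely sunny morning"]))   # most likely ['happy']
```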

Alan Turing famously proposed a test for AI systems, essentially evaluating the ability of an AI to fool a human into thinking that it is in fact human, through a question-and-answer style conversation. While Turing-style tests may evaluate the ability of an AI to perform one specific task (pretending to be human in conversation), I think they are less valuable for other forms of AI. In the vast majority of circumstances, we don’t want AIs to be like humans. Humans make mistakes, frequently. The most useful AIs don’t simply mimic human behavior–they surpass it. No one wants a self-driving AI that only sometimes drives safely. The high degree of accuracy achievable in most AI tasks necessarily distances AI from humans, reducing the usefulness of the Turing test in evaluating many incarnations of “narrow AI.”

While it is valid, especially given machines’ lack of capacity for empathy noted above, to be worried about increasing AI control over our lives, it is important to remember that we’re still at a point where humans are in the loop. Going forward, I believe that the role of AI will be to assist human beings and fill in the gaps created by our natural limitations, rather than to replace people, if for no other reason than that people are naturally skeptical of surrendering a great deal of control over their own lives. We humans have learned how to handle this problem of control, to varying degrees of efficacy–for example, to solve the problem of corrupt humans controlling the lives of others, we created democracy: government with checks and balances so that no one person has unilateral control over the lives of others. Arguments about the effectiveness of any particular system of government are beyond the scope of this blog post, but the point stands: so long as we take care to keep humans in the loop, the robot apocalypse won’t be happening anytime soon.

Internet Censorship

The Internet has been a major driver in connecting people across the globe for many years. As the cost of producing content drops, it becomes easier and easier for the average person to share their views with an incredibly large audience. However, with this increased connectivity comes a dark side: should we really allow certain types of material to spread across the globe in this relatively uninhibited and unregulated space?

ISIS, in particular, has seen great success using social media like Twitter and Facebook to recruit new members and incite violence. Despite the efforts of the social media sites in question to remove pro-terrorist content, and despite the seemingly obvious answer to ISIS propaganda (“just delete their accounts”), it is becoming more and more difficult to control the firehose of content being posted online every day.

ISIS is an extreme example of Internet misuse. The vast majority of people (myself included) have no problem with the deletion of terrorist content online. But (outside of terrorism/explicitly inciting violence) who decides what content is discriminatory, hateful or offensive? And who decides what content is inflammatory enough to warrant removal?

Countries like China have enlisted the help of companies like Yahoo! and Google to block content from their platforms that the government deems inappropriate. Obviously this would never hold up in the United States, but China has no freedom of speech laws, and there is very little recourse under Chinese law for companies and individuals that are censored.

While I wholeheartedly disagree with the Chinese government’s oppression of its own people, I think it’s dangerous to place corporations in a position where they can pick and choose which laws of a country they want to obey. On the other hand, I don’t think it’s ethical for companies to assist or be complicit in a government’s oppressive behavior. I agree with Lee Rowland’s sentiment that:

“If these companies do whatever they’re capable of doing to publicize that their content is being screened, monitored, and sometimes censored by governments, I think there’s a really good argument that maintaining a social-media presence is inherently a liberalizing force,” Rowland said.

While I don’t believe it to be unethical for a company to decide to cease operations in a country with oppressive censorship laws, I believe the best course of action is to comply with government restrictions but make it explicitly clear to the user (through transparency reports and warning messages on webpages) that their content is being monitored by government agents. This way, a company is able to comply with the laws of a country without being complicit in the silent oppression of its people. Maintaining a presence within a country, and using it to educate users about when content is being blocked and how frequently their information is being monitored, helps encourage citizens to take action in their own countries. Indeed, these transparency reports have also proven very valuable in educating United States users about government surveillance of their content.

The other major censors of online content are the companies hosting the content themselves. Facebook, Twitter, YouTube, and most other content providers have their own rules and regulations about what can and cannot be posted on their services. Common provisions include prohibitions on violent, hateful or illegal content. While it is completely within a company’s legal rights to decide what it hosts on its own servers, I believe it to be highly unethical for a company to delete content merely because it doesn’t align with the company’s own interests and beliefs. If companies want to have the moral high ground in debates over issues like government censorship and surveillance, they must ensure that they aren’t doing the very same things. The bias in a company’s decision should be towards maintaining content unless it is very specifically inciting violence, illegal, etc. In the vast majority of cases, it should be left to the user to determine what content they see on the platform. If users want to post overtly sexist Breitbart articles, and the company takes them down immediately, it denies other users of the service the chance to debate these complex and important issues, and in many cases to debunk pervasive and harmful stereotypes. The echo chamber effect this creates is not only extremely dangerous and divisive, but also creates a feedback loop that actually prevents users from escaping the “filter bubble.” As offensive content is removed and the filter bubble restored, users become more and more convinced of their own opinion (right or wrong) and continue to produce more and more inflammatory content, which is again removed, and the cycle restarts.

The Internet should not be something that divides people, but rather connects them in ways that were not possible 20 years ago. Censorship in all forms (excepting extreme circumstances like terrorism, etc) necessarily divides people, and users should be vigilant of the dangers of both government and corporate sponsored censorship.

Corporate Conscience, Muslim Registry

The idea that “corporations are people” has been around for a long time. The specific terminology and reach of court decisions have varied greatly, but the debate remains the same: are corporations entitled to rights generally intended to protect individual citizens? While the specific phrasing “corporations == people” is a gross oversimplification of the issue, the idea of corporate personhood has some merits.

Obviously, corporations are not the same as people. As Kent Greenfield writes:

“Of course corporations are not genuine human beings. They should not automatically receive all the constitutional rights that you and I can claim. Corporations cannot vote or serve on juries, for example; it does not make any sense to think of corporations asserting those rights, both because of the nature of the right and the nature of the corporate entity.”

However, Greenfield goes on to point out that corporate personhood is not intended to bestow every single right afforded to ordinary citizens–it is merely a way to create a legal entity distinct from any person within, in order to protect shareholders, owners and employees, both from certain forms of government intervention and from each other. While it may not seem like huge corporations require protection from government intervention, it is important to also consider that smaller entities (churches/religious groups, small businesses, civil rights groups, etc.) are often considered corporations under U.S. law. In order to protect interest groups that may be at odds with the party in power (for example, the NAACP during the civil rights protests in the 1960s), it is necessary to give these associations some of the rights of ordinary citizens, primarily to protect the people within the corporation, not necessarily the corporation itself. Similarly, the idea of corporate personhood also allows corporations to be brought to trial should they break the law.

I disagree with Greenfield when he claims that corporations cannot have a conscience. Obviously they don’t have an individual conscience like people do, yet they are still faced with ethical dilemmas. A corporate conscience is a collective one, shared between its employees and customers. Effective corporations cannot make decisions based solely on profit, as Greenfield claims they do. Take a look at the news coverage surrounding Uber in the past few months. Uber made a series of questionable ethical decisions (ignoring claims of sexual harassment, failing to comply with numerous regulations, etc.), and is now paying the price: customers are leaving en masse (see #DeleteUber), and a number of high-ranking executives and ordinary employees are abandoning their jobs after finding the company’s values no longer aligned with their own.

On the other hand, when President Trump proposed the idea of a “Muslim Registry” on the campaign trail, companies and employees alike pledged that they would not be complicit in utilizing corporate power to help the government track Muslims in the U.S. This is a perfect example of collective corporate conscience at work: employees and customers banding together to prevent unethical corporate behavior. Even though a corporation is an inanimate entity and is by itself incapable of ethical reasoning, the people involved in a corporation give it the ability to make ethical (or unethical) decisions. Making unethical decisions as a corporation is unsustainable in the long term: employees will leave and customers will boycott until the company can no longer do business effectively. With a strong, vocal customer and employee base, corporations will be less likely to engage in unethical business practices, even without strong government oversight.

The protections given to corporations under the Constitution and “corporate personhood” are very important for a number of reasons, including making it possible for companies to stand up to perceived or actual government coercion (like the Muslim Registry) without fearing retribution. Furthermore, just because corporations are given certain protections that an ordinary citizen would have does not mean that they cannot be held accountable: indeed, they should be held accountable every day by their customers and employees. The fact that corporations are considered “people” in this way means they can also be held legally accountable, through the same Constitutionally governed civil and criminal process that ordinary citizens have a right to.

Internet of (Insecure) Things

The Internet of Things is an ever-growing network of items we use every day, connecting mundane objects like thermostats and lightbulbs (maybe even your car) into a huge glob of data that purports to leverage these connections to make everything in your life “smart.” As IoT devices become increasingly mainstream, the challenge of securing the multitude of devices becomes simultaneously much more difficult and much more important.

One huge challenge in securing IoT devices is that each installed device greatly increases the number of potential access points into a network. For example, in a typical home, there are usually only a few internet-connected devices–mainly laptops, PCs and phones. However, when you throw in a smart refrigerator, several smart lightbulbs, a smart thermostat, a smart power meter, etc., the size of the attack surface grows very quickly. This is further complicated by the fact that many users don’t follow good security practices, like changing default passwords and staying on top of software updates. Sure, you might update your computer every month or so, but who thinks about updating the software that runs their kitchen lights? As these devices become mainstream, it is no longer reasonable to assume the typical user has the technical experience required to take adequate security measures. These devices need to be designed and shipped with security as a top priority: not only making good security available, but making it the default and easily accessible to a non-technical user.

Another challenge in securing IoT devices is that there are simply a lot of reasons why a malicious actor might want to compromise one. The most obvious perhaps would be a direct, targeted attack on the user or their data. Attacks like this have been demonstrated by security researchers on internet-connected cars, where hackers were able to compromise both electrical and mechanical systems of the car. Thankfully, car hacking is not quite so easy, as this Scientific American article points out:

Here’s the simple truth. No hacker has ever taken remote control of a stranger’s car. Not once. It’s extraordinarily difficult to do. It takes teams working full-time to find a way to do it.

There are certainly more (less grandiose) reasons a hacker might want to execute a targeted attack (disabling security systems, using IoT devices to compromise the whole network, etc.), but there’s also a more subtle threat: surveillance. The Harvard Law Bulletin says this:

…the Internet of Things is raising “new and difficult questions about privacy over the long term,” according to Zittrain, yet is mushrooming with almost no legislative or regulatory oversight. We are, he says, “hurtling toward a world in which a truly staggering amount of data will be only a warrant or a subpoena away, and in many jurisdictions, even that gap need not be traversed.”

The scope of surveillance isn’t limited to government.  Criminals sniffing IoT traffic coming from a poorly secured smart house would easily be able to tell whether or not the house was occupied, or even determine the daily habits of the occupants.

Furthermore, the purpose of IoT security isn’t just to protect the user of the device. Last year’s DDoS attack on Dyn showed the power of IoT botnets, allowing attackers to shut down some of the world’s largest websites almost on demand. Now that we have essentially unmonitored internet-connected devices in every room of the house, your home or business could easily become a staging point for an attack on another target, completely without your knowledge. As such, we need to build up a sort of “herd immunity” by ensuring that security is baked into the design process of IoT devices, since even a few compromised devices on a network can give access to scores of others.

While many are quick to propose government regulation, I think a much better solution would be to create independent review boards (described in this Ars Technica article), similar to Consumer Reports or the NHTSA safety rating for cars.  Government regulation can only go so far–while mandating certain security processes might help to secure existing devices, the slow-moving legal process might slow innovation in the future as security processes, priorities and threat models change. Furthermore, it would be difficult to hold corporations accountable for secure design until a catastrophe actually occurred. An independent review board on the other hand, where security researchers and penetration testers could examine the security protocols of each device on a case-by-case basis, would be much more effective and capable of keeping pace with the ever-changing world of computer security, and help keep us all safer.

Going Dark, Apple vs. FBI and the future of privacy rights

On July 8th, 2015, FBI Director James Comey testified in front of the Senate Judiciary Committee regarding perceived threats to the ability of law enforcement to monitor and collect data on encrypted communications in the course of executing legally obtained search warrants. Less than one year later, the FBI was embroiled in a heated fight with Apple after ordering the company to deliberately introduce a backdoor into iOS software in order to crack the iPhone of the perpetrator of the San Bernardino terrorism attack.

With theoretically unbreakable device and data encryption becoming more and more widely available, it’s a safe bet that we haven’t seen the last of the privacy-safety debate. While I understand the importance of the ability of law enforcement to execute valid search warrants, I think the Apple lawsuit constituted a dangerous overreach of the power of the FBI.

Disregarding the privacy aspect of the argument for a moment: while the FBI does have the power to compel companies like Apple to comply with warrants requesting user data stored on their servers, they do not and should not have the power to compel a company to act as the FBI’s personal contractor, ordering them to build from scratch a feature that directly undermines the value of their products. One response to the order, quoting Apple CEO Tim Cook, put it this way:

“The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe,” he declared. A federal judge is effectively ordering these unnamed people to write code that would indisputably harm their company and that would, in their view, harm America. They are being conscripted to take actions that they believe to be immoral, that would breach the trust of millions, and that could harm countless innocents. They have been ordered to do intellectual labor that violates their consciences.

The FBI is clearly not operating within its given powers. It is not ordering Apple to comply with a warrant, nor to provide data to assist with the investigation; it is ordering Apple to dedicate a significant amount of time and resources to undermining the work of their engineers and the quality of their product. Such an order is unprecedented. When law enforcement needs to gain access to a locked safe, it would be ridiculous for them to approach the safe makers and order them not only to open the safe, but to make sure all safes they manufactured in the future had a secret second combination known only to law enforcement (and, of course, to mandate that people only use these compromised safes).

Apple even offered to help the FBI access the data from the iPhone through other means (syncing the phone to iCloud through the terrorist’s home Wi-Fi) without compromising the security of the iPhone, and completely within the FBI’s current power. Yet the FBI ignored Apple’s instructions and reset the iCloud password on the account, rendering that attack vector useless. If the FBI were merely concerned about this particular iPhone, they would have accepted Apple’s help and legally retrieved the data. However, this was clearly about more: they wanted a backdoor not just to this particular iPhone, but to every iPhone.

Perhaps this doesn’t scare you. After all, this is the American government we’re talking about. “I have nothing to hide, so I have nothing to fear,” you might say. If the government wants a backdoor to legally execute search warrants, why can’t they have it?

First, there is no such thing as a private backdoor. Companies pour billions of dollars every year into product security, and yet it seems like new security breaches are occurring every week. If we have such a hard time securing our systems right now, how the hell are we going to secure a system against criminal hackers when it has a gaping security hole by design? Would you feel comfortable trusting your iPhone with your credit card information when you know that it has a security vulnerability that can be exploited by anyone with sufficient knowledge? At the very best, this group of people with “sufficient knowledge” is limited to law enforcement and Apple employees. However, as we’ve seen from the Snowden leaks, the leaking of NSA hacking tools last year, the myriad of email leaks this year, etc., the government just isn’t that good at protecting data. It only takes one bad actor to leak the secret to accessing every single iPhone in circulation, and once that door is open, it’s going to be very difficult to close. Do you trust every single person employed by the government? All 21.9 million of them? Every single person from the local police department to the FBI and NSA to the president himself? Even if you trust the government as a whole, it only takes one malicious actor abusing their access to compromise your personal privacy or, worse, leaking access to the backdoor.

Second, and perhaps more importantly, rights don’t just go away when you don’t feel like exercising them. Snowden once said in an interview that “arguing that you don’t care about privacy because you have nothing to hide is like arguing that you don’t care about free speech because you have nothing to say.” I would take this one step further: everyone has something to hide. If you really, honestly, truly believe that you have nothing to hide, post your iPhone passcode on Facebook. Put your credit card number on a bumper sticker. Hand out printed copies of your text message history on the street corner. Obviously I’m exaggerating, but the point is that “something to hide” isn’t limited to “something criminal to hide.” Everyone has information that could potentially be used against them, either by the government, criminals, or even ordinary citizens, and you have a right to protect that information.

Compromising your right to privacy can even help compromise your other rights, like the right to free speech. For example, an investigation by the FCC is currently underway to determine whether law enforcement officials illegally used “stingrays” to capture cell phone calls from protesters at the Dakota Access Pipeline last year. Even if the investigation comes to nothing, numerous reports surfaced of law enforcement using social media to track DAPL protesters.

Finally, let’s expand our view to the rest of the world. Thankfully, as United States citizens we have rights that are protected by the government and are (ideally) not violated on a regular basis. However, consider the greater worldwide community. What happens if we take secure encryption away from a reporter in the Middle East covering the atrocities committed by ISIS? Do we have to now give access to the (hypothetical) Apple backdoor to every government who demands it, even those who commit horrendous human rights violations, and who will unashamedly use it to spy on their own citizens?

Privacy is not an easy subject to talk about. Issues like Apple vs. FBI are laden with emotional arguments (from both sides) designed to scare people without providing any real substance. While it’s often tempting to compromise personal privacy for promises of security and public safety, all this accomplishes is making the systems we rely on and trust on a daily basis even less secure.

Hidden Figures

This week I finally got to see the incredible film Hidden Figures. For those of you who have been living under a rock, this movie tells the inspiring story of three black female NASA engineers who made significant and lasting contributions to the U.S. space program in the 1960s. Nathan K, Alanna and I recorded a podcast where we talk about some of our initial reactions to the movie, which can be found on SoundCloud here.

In it, we discuss some of the obstacles faced by the main characters Mary, Katherine and Dorothy as they struggle against an oppressive culture in segregated Virginia, and how they apply to the issues that still face minorities in engineering today.

The title of the film “Hidden Figures” describes the main characters well: they are undervalued, work behind the scenes for little recognition and often have credit for their work stolen by their colleagues. However, the “hidden” theme goes deeper than this. For example, in one scene later on in the movie, the following exchange takes place between Dorothy and Vivian (her boss), in the recently desegregated women’s restroom.

Vivian: I want you to know I really don’t have anything against you people.

Dorothy: I honestly believe that you believe that.

This simple exchange brings to light something else hidden in the movie: the idea of unconscious bias. I most recently heard this term in Aimee Lucido’s blog post in response to the recent blog post by Susan Fowler, a former engineer at Uber. Unconscious bias is a more subtle form of discrimination than the outright racism shown by many of the characters in the movie. Vivian, although at times she may truly believe she means well, still treats her black colleagues very differently than her white colleagues. This is shown perhaps most prominently when she publicly rejects and humiliates Mary for applying to the NASA engineering program without certain required coursework. Instead of having a one-on-one meeting with Mary and discussing her options for meeting these requirements (or, God forbid, showing a little support for her colleague), she calls her out in front of everyone. I would argue that Vivian’s unconscious bias plays a significant role in this scene–even though she believes that she is doing the right thing, because she’s following NASA’s rules and regulations to the letter, she enforces them in a way that devalues Mary as a person and an engineer.

This is certainly an issue that still exists today, and I would absolutely recommend reading the coverage of the Uber sexual harassment scandal as it unfolds, including the two fantastic blog posts linked above. Before you can truly empower the hidden figures of today, it is crucial that you first identify your own hidden biases.