Going Dark: Apple vs. the FBI and the future of privacy rights

On July 8th, 2015, FBI Director James Comey testified before the Senate Judiciary Committee about perceived threats to law enforcement’s ability to monitor and collect data on encrypted communications while executing legally obtained search warrants. Less than a year later, the FBI was embroiled in a heated fight with Apple after obtaining a court order compelling the company to deliberately introduce a backdoor into iOS in order to crack the iPhone of one of the perpetrators of the San Bernardino terrorist attack.

With theoretically unbreakable device and data encryption becoming ever more widely available, it’s a safe bet that we haven’t seen the last of the privacy-versus-safety debate. While I understand how important it is for law enforcement to be able to execute valid search warrants, I think the Apple case constituted a dangerous overreach of the FBI’s power.

Setting aside the privacy aspect of the argument for a moment: while the FBI does have the power to compel companies like Apple to comply with warrants for user data stored on their servers, it does not and should not have the power to conscript a company as its personal contractor, ordering it to build from scratch a feature that directly undermines the value of its products. As one commentary on the case put it, quoting Apple CEO Tim Cook:

“‘The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe,’ he declared. A federal judge is effectively ordering these unnamed people to write code that would indisputably harm their company and that would, in their view, harm America. They are being conscripted to take actions that they believe to be immoral, that would breach the trust of millions, and that could harm countless innocents. They have been ordered to do intellectual labor that violates their consciences.”

The FBI is clearly not operating within its given power. It is not ordering Apple to comply with a warrant, nor to provide data to assist with an investigation; it is ordering Apple to dedicate significant time and resources to undermining the work of its own engineers and the quality of its own product. Such an order is unprecedented. When law enforcement needs to get into a locked safe, it would be ridiculous for them to order the safe maker not only to open it, but to build a secret second combination, known only to law enforcement, into every safe it manufactures in the future (and, of course, to mandate that people only use these compromised safes).

Apple even offered to help the FBI access the data on the iPhone through other means (syncing the phone to iCloud over the terrorist’s home Wi-Fi), without compromising the security of the iPhone and completely within the FBI’s existing power. Yet the FBI ignored Apple’s instructions and reset the iCloud password on the account, rendering that attack vector useless. If the FBI were merely concerned about this particular iPhone, it would have accepted Apple’s help and legally retrieved the data. Clearly, this was about more: the FBI wanted a backdoor not just to this particular iPhone, but to every iPhone.

Perhaps this doesn’t scare you. After all, this is the American government we’re talking about. “I have nothing to hide, so I have nothing to fear,” you might say. If the government wants a backdoor to legally execute search warrants, why can’t they have it?

First, there is no such thing as a private backdoor. Companies pour billions of dollars every year into product security, and yet new breaches seem to occur every week. If we have such a hard time securing our systems right now, how the hell are we going to secure a system against criminal hackers when it has a gaping security hole by design? Would you feel comfortable trusting your iPhone with your credit card information, knowing that it has a vulnerability exploitable by anyone with sufficient knowledge? At the very best, the group of people with “sufficient knowledge” is limited to law enforcement and Apple employees. However, as we’ve seen from the Snowden leaks, the leaking of NSA hacking tools last year, the myriad email leaks this year, and so on, the government just isn’t that good at protecting data. It only takes one bad actor to leak the secret to accessing every single iPhone in circulation, and once that door is open, it will be very difficult to close. Do you trust every single person employed by the government? All 21.9 million of them? Everyone from the local police department to the FBI and NSA to the president himself? Even if you trust the government as a whole, it takes only one malicious actor abusing their access to compromise your personal privacy, or, worse, to leak the backdoor itself.
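
To make the “one bad actor” point concrete, here’s a toy sketch in Python of the kind of key-escrow design a mandated backdoor implies. Everything here is hypothetical (the names, the escrow database, the use of the cryptography package’s Fernet recipe); this is not Apple’s actual architecture, just an illustration of how a single master key concentrates risk:

```python
# Toy key-escrow sketch -- NOT a real design, just an illustration of why a
# "lawful access" master key concentrates risk. Requires: pip install cryptography
from cryptography.fernet import Fernet

# The single master key the hypothetical backdoor design depends on.
master_key = Fernet.generate_key()
master = Fernet(master_key)

# Escrow database: device_id -> device key, wrapped under the master key.
escrow_db = {}

def provision_device(device_id: str) -> Fernet:
    """Create a per-device key and escrow a copy under the master key."""
    device_key = Fernet.generate_key()
    escrow_db[device_id] = master.encrypt(device_key)
    return Fernet(device_key)

# Two users encrypt private data under their own device keys.
alice = provision_device("alice-phone")
bob = provision_device("bob-phone")
secrets = {
    "alice-phone": alice.encrypt(b"credit card: 1234 5678 ..."),
    "bob-phone": bob.encrypt(b"entire text message history"),
}

# Now suppose ONE insider leaks ONE value: the master key.
leaked = Fernet(master_key)

# That single leak unwraps every escrowed device key, and with it every device.
for device_id, wrapped_key in escrow_db.items():
    device_key = leaked.decrypt(wrapped_key)
    plaintext = Fernet(device_key).decrypt(secrets[device_id])
    print(f"{device_id} compromised: {plaintext!r}")
```

The failure mode is structural, not procedural: however carefully the master key is guarded, the security of every device now reduces to the secrecy of one value, and one leak of that value compromises every iPhone in circulation.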

Second, and perhaps more importantly, rights don’t just go away when you don’t feel like exercising them. Snowden once said in an interview that “arguing that you don’t care about privacy because you have nothing to hide is like arguing that you don’t care about free speech because you have nothing to say.” I would take this one step further: everyone has something to hide. If you really, honestly, truly believe that you have nothing to hide, post your iPhone passcode on Facebook. Put your credit card number on a bumper sticker. Hand out printed copies of your text message history on the street corner. Obviously I’m exaggerating, but the point is that “something to hide” isn’t limited to “something criminal to hide.” Everyone has information that could be used against them, whether by the government, by criminals, or by ordinary citizens, and you have a right to protect that information.

Compromising your right to privacy can help erode your other rights as well, like the right to free speech. For example, the FCC is currently investigating whether law enforcement officials illegally used “stingrays” (cell-site simulators) to capture cell phone calls from protesters at the Dakota Access Pipeline last year. Even if that investigation comes to nothing, numerous reports have surfaced of law enforcement using social media to track DAPL protesters.

Finally, let’s expand our view to the rest of the world. Thankfully, as United States citizens we have rights that are protected by the government and (ideally) not violated on a regular basis. But consider the wider worldwide community. What happens if we take secure encryption away from a reporter in the Middle East covering the atrocities committed by ISIS? Do we then have to give access to the (hypothetical) Apple backdoor to every government that demands it, even those that commit horrendous human rights violations and would unashamedly use it to spy on their own citizens?

Privacy is not an easy subject to talk about. Cases like Apple vs. FBI are laden with emotional arguments (from both sides) designed to scare people without providing any real substance. While it’s often tempting to trade personal privacy for promises of security and public safety, all that trade accomplishes is to make the systems we rely on and trust every day less secure.

Hidden Figures

This week I finally got to see the incredible film Hidden Figures. For those of you who have been living under a rock, the movie tells the inspiring story of three black women, mathematicians and engineers at NASA, who made significant and lasting contributions to the U.S. space program in the 1960s. Nathan K, Alanna and I recorded a podcast discussing some of our initial reactions to the movie, which can be found on SoundCloud here.

In it, we discuss some of the obstacles the main characters Mary, Katherine and Dorothy face as they struggle against an oppressive culture in segregated Virginia, and how those same obstacles still confront minorities in engineering today.

The title “Hidden Figures” describes the main characters well: they are undervalued, work behind the scenes for little recognition, and often have credit for their work stolen by their colleagues. But the “hidden” theme goes deeper than that. In one scene late in the movie, the following exchange takes place between Dorothy and Vivian (her supervisor) in the recently desegregated women’s restroom.

Vivian: I want you to know I really don’t have anything against you people.

Dorothy: I honestly believe that you believe that.

This simple exchange brings to light something else hidden in the movie: the idea of unconscious bias. I most recently encountered the term in Aimee Lucido’s blog post responding to the recent post by Susan Fowler, a former engineer at Uber. Unconscious bias is a subtler form of discrimination than the outright racism shown by many characters in the movie. Vivian, although at times she may truly believe she means well, still treats her black colleagues very differently than her white colleagues. This shows perhaps most prominently when she publicly rejects and humiliates Mary over her application to NASA’s engineering program, on the grounds that Mary lacks certain required coursework. Instead of meeting with Mary one-on-one to discuss her options for meeting those requirements (or, god forbid, showing a little support for a colleague), she calls her out in front of everyone. I would argue that Vivian’s unconscious bias plays a significant role in this scene–even though she believes she is doing the right thing, because she’s following NASA’s rules and regulations to the letter, she enforces them in a way that devalues Mary as a person and an engineer.

This is certainly an issue that still exists today, and I would absolutely recommend reading the coverage of the Uber sexual harassment scandal as it unfolds, including the two fantastic blog posts linked above. Before you can truly empower the hidden figures of today, it is crucial that you first identify your own hidden biases.

Blowing the whistle

A little over a month ago, then-President Obama made the controversial decision to commute Chelsea Manning’s sentence from 35 years to just over 7. The decision was significant not only because it cut short the longest sentence ever imposed on a whistleblower (government or otherwise), but also because it seemed to mark a change in the Obama administration’s hardline stance on whistleblowers like Manning and Snowden.

The Manning leaks, which contained several hundred thousand sensitive military documents and diplomatic cables, dwarfed previous intelligence breaches in size. In the years since Manning’s incarceration, “leaks” have become an all-too-familiar part of the American vocabulary. The increasing frequency of leaks leading up to and following the election of President Trump late last year has raised a host of ethical dilemmas, many of which lack clear answers, since leaks vary a great deal in size, scope, level of secrecy and intent.

The chief argument against leaking (and a reason cited for Manning’s unprecedented sentence) is that indiscriminate leaking endangers the lives of U.S. agents (military personnel, citizens, informants, etc.) around the world. Classified-information laws are certainly defensible in this regard: U.S. soldiers and informants take incredible risks every day to keep our country safe. However, it would be foolish to think that misconduct doesn’t exist, or that breaches of trust between American citizens and the government never happen. The principle of government accountability goes back to the very essence of the Constitution, and the military, as sworn defenders of that Constitution, should be held accountable too.

Freedom of information is an important part of this. Classifying documents that could jeopardize the lives of U.S. agents is not the problem. The problem is the wrongful classification and cover-up of documents that could jeopardize the careers of those involved in wrongdoing. One of the most explosive pieces from the Manning leaks, dubbed “Collateral Murder” by WikiLeaks, depicts a helicopter attack that killed several unarmed men, including two Reuters journalists and at least one other Iraqi civilian. This is a horrific example of information classified merely to protect those responsible for wrongful killings.

At some point, silence becomes complicity. American citizens have a right to know what their government is involved in, both at home and abroad. Knowledge of government actions is fundamental to the operation of our democracy, since covering up information destroys the public’s ability to cast an informed vote. Our government’s accountability to its citizens is what separates our country from the likes of North Korea, Russia, and China, which censor information indiscriminately to protect those in power.

It’s important to note, however, that the mere act of leaking information is not intrinsically good (or bad). Releasing information makes one neither a patriot nor a traitor; the intent behind the act is what matters most. “Ethical” whistleblowing necessarily entails weighing all outcomes of the leak. Releasing the video referenced above was unlikely to endanger the lives of U.S. soldiers on the ground. On the other hand, releasing classified communications containing the identities of U.S. contacts on foreign soil poses a very real threat to the lives of those the U.S. has a duty to protect. The sheer size of the Manning leaks poses hard questions in this regard: how could she possibly have single-handedly reviewed all of that information to ensure it posed no threat? Indeed, while Manning testified at her trial that she did not intend to endanger American soldiers abroad, her chat logs and the method she used to obtain the leaked information (mass downloads) suggest that she never reviewed it thoroughly enough to back up that claim.

I think many of the Manning leaks should have been protected under whistleblower protection laws. She certainly should not have been charged with “aiding the enemy,” since that charge requires malicious intent to directly harm American interests, and she should not have been sentenced to 35 years in prison. But I don’t think she was completely innocent either, for the reasons stated above (indiscriminate, irresponsible leaking), and she should have stood trial on a lesser charge. Thankfully, no evidence ever surfaced suggesting any loss of life due to the Manning leaks.

At the end of the day, freedom of information is an incredibly important part of our country’s heritage. A functioning democracy requires a well-informed populace. Responsible whistleblowers realize this. They don’t leak information out of malice; they do so because they believe in democracy. They believe so strongly in democracy that they will risk their own well-being to expose those who would do it harm. After all, isn’t that the very definition of patriotism?

Diversity

I interned for two summers in a row at a large, well-known engineering corporation, on two midsize teams whose engineers covered a wide range of experience levels: from me, not even graduated yet, all the way up to senior engineers with 25+ years of tenure at that company alone. And oh yeah, between both teams, exactly one of them was a woman. Every single person was white.

Discrimination is something I’m very, very lucky never to have had to deal with personally. I’m an awkward white male in his early 20’s–I’m one set of oversized glasses away from fulfilling nearly every 1980’s “hacker” stereotype you could imagine. So why do I think discrimination is such a problem? In short, because I love the tech industry. I’ve known I wanted to work in computer science since high school. I always thought of tech as the great equalizer, an industry built not around self, but around actually making people’s lives better. And from the beginning, my excitement for it is something I’ve wanted to share with other people. I don’t want to think that someone might be excluded, because of their gender or race, from the excitement that comes the first time you run “javac” and the compiler doesn’t spit back mountains of arcane text. Tech is an experience worth sharing.

At Notre Dame, we’re fortunate to have a smaller gender imbalance in CSE than most of the tech industry (based on my very informal guesswork, averaging about 65-35 in most of my classes). I never realized how drastic the gender gap was in the larger industry until I left for my internship. I guess I had always assumed tech was a progressive field, and that somehow the ‘meritocracy’ would iron out whatever diversity issues arose. The reality, however, is a lot different.

Study after study has documented tech’s abysmal record at bringing in female and minority talent, even at supposedly meritocratic and progressive companies like Google and Facebook. Clearly there is a huge disconnect between reality and the ideal the tech industry prides itself on.

But is it the tech industry’s fault? Maybe girls just can’t cut it in STEM courses because they can’t (to quote one Breitbart article) “cope well in competitive environments, so even if they get onto the courses, they often drop out when one day a book like this [original link broken] lands on their desk, or when their grades start slipping.” I’m going to turn to my own empirical data. This semester and last, I’ve served as a TA for the intro-to-CS sequence at Notre Dame (Fundamentals of Computing/Data Structures), which has grown this year to over 160 students and given me the opportunity to work with a huge number of them. Between weekly assignments and labs, I’ve seen a great deal of code from a great many people, and I have found no evidence whatsoever that gender is in any way related to code quality. I know “code quality” isn’t really a quantifiable metric, and my study can’t be considered scientifically rigorous, but since the guy who wrote the article has never worked a day in the tech industry, I’m still claiming there’s some value in my conclusion.

I mentioned above that tech is meant to be shared. The problem is that there’s a small group of people within tech who, consciously or unconsciously, don’t want to share it, or choose not to. For one, it’s the people who had experience with CS long before they came to college and feel the need to publicly show off their “advanced” knowledge in class, creating a hostile environment in which students with no prior experience start to lose hope of ever “catching up” to their classmates. This isn’t an indictment of students with prior experience, simply a recognition that they have a responsibility to use their knowledge to help their fellow students, not to drive a wedge between them. Showing off moves the field toward inaccessibility for newcomers–exactly what the tech industry claims to want to avoid. Harvey Mudd has had success combating this effect by splitting its intro-CS classes by level of prior experience. The college noticed that the students most commonly affected by this dynamic were women, who perceived the largest culture gap between themselves and the still-majority-male tech world, and took steps to make the classes more welcoming to them. This isn’t (to again quote the Breitbart article) treating women as “special flowers” in need of “protection” or “special treatment”; it’s treating everyone with the same amount of respect and dignity.

My point is this: if you, like me, have been gifted with a love of technology, you have a responsibility to use that love to improve lives. Share your passion with others in a way that allows them to participate and discover their own passion. Don’t take something incredibly empowering like tech and keep it to yourself. We’re all in this together, and when we finally realize that, we can get back to doing what we do best: making the world a better place.

Burnout notice

This year my classmates and I were fortunate enough to have a month-long winter break. I did all the usual winter break things: slept 10 hours a day in a bed a normal height off the ground, watched 4 seasons of Bones on Netflix in an amount of time that almost required a time-turner, ate way too many Christmas cookies, and finally indulged some hobbies that I’d been putting off all semester: running, playing guitar and wandering around the woods taking pictures.

Driving back to school after a month of blissful relaxation, I realized something: even though this was the longest winter break I’d had in years, it was also the first time I really felt like I needed the whole 30 days. Most years I’d be ready to go back a week or so before classes actually started, but this one was different.

In the fall semester of 2016 I was enrolled in 5 CS/EE classes, spending 10-12 hours a week rehearsing and performing with the Marching Band, tutoring and grading as a TA for another 12 hours per week, working in the band library, and interviewing here and there with different companies. I know I’m not alone: plenty of people were at least as busy as I was (a recent study using fitness-tracker data found that Notre Dame students are some of the most sleep-deprived in the country). So why was this semester particularly bad? Shouldn’t I be used to this by now?

My experience with occupational burnout was far from unique. In the tech industry, burnout is becoming more and more common, not just at resource-strapped startups but at the tech giants as well. The lavish benefits and free services at large companies often give the impression that working there is all free massages and bike meetings, but behind it all lurks a question worth considering: are these perks provided only so that employees will work more hours? A friend of mine interviewed at a large software company in the Bay Area and noticed that many of the employees in one division kept toothbrushes in the bathroom. Unless everyone in that division was unusually dedicated to their dental hygiene, that should raise a few red flags for anyone interviewing there.

Burnout happens when we neglect the things in our lives that are important to us. This can include friends, family, hobbies, and even our own health. At the end of last semester, I was exhausted all the time. I overslept constantly. My diet suffered, and I hadn’t exercised in weeks; I gained almost 10 pounds in two months. I sustained myself on coffee. I was moody and generally unpleasant to be around. By the time I went home, my desire to write code, build things and pursue my craft had been completely drained.

The second rule of the stock market (after buy low, sell high) is to diversify your portfolio, a term which here means investing in a set of companies, funds, and so on that represents a broad spectrum of industries and business models. It’s easy to see why: pouring all your resources into one industry exposes you to a great deal of risk should that industry hit a rough patch.
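
A toy simulation makes the point (the numbers are made up, and I’m assuming the ten holdings move independently, which real industries don’t quite do): give every holding the same average return and the same volatility, and the diversified portfolio still suffers far fewer losing years.

```python
# Toy diversification simulation -- hypothetical returns, purely illustrative.
import random

random.seed(42)
TRIALS = 10_000

def yearly_return():
    # Every holding: 8% average return, but noisy (30% standard deviation).
    return random.gauss(0.08, 0.30)

concentrated_down_years = 0
diversified_down_years = 0
for _ in range(TRIALS):
    # Portfolio A: everything in a single holding.
    if yearly_return() < 0:
        concentrated_down_years += 1
    # Portfolio B: split equally across ten independent holdings.
    if sum(yearly_return() for _ in range(10)) / 10 < 0:
        diversified_down_years += 1

print(f"down years, concentrated: {concentrated_down_years / TRIALS:.1%}")  # roughly 40%
print(f"down years, diversified:  {diversified_down_years / TRIALS:.1%}")  # roughly 20%
```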

The same principle applies to preventing burnout. You have a limited amount of emotional “capital” to invest in the things you care about: work, hobbies and friends, among others. Investing in these activities can in turn generate more emotional capital for you to reinvest elsewhere. This system mostly works very well: you put time and energy into a rewarding, enjoyable activity and get back fulfillment and happiness. But just as there are no risk-free investments, there are no activities that always turn an emotional profit. Sometimes work, hobbies, and yes, even friends and family can be draining. If you’re one of the lucky ones who keeps a well-diversified portfolio, this downturn is nothing to be afraid of: if one investment (work) isn’t paying off, you can rely on others (friends and family) to keep you afloat. Many people, however (myself included), struggle to keep this balance, and repeated downturns in the emotional “market” burn them out. Last semester I was heavily invested in only two things: work and band. Lacking another outlet, I grew to dread doing the things I enjoyed, like coding and playing horn.

Putting aside the (stretched) investing analogy, I’m trying to learn from my experience last semester. I’ve scheduled time to work out every morning. Every weekend, I try to do something for myself, even if it’s just blowing off homework for a few hours to practice guitar or wander campus taking pictures. When I’m staring down project deadlines and exams, it’s easy to forget how much more productive I am when I’m happy, and that “wasting” an hour to step away and do something else I enjoy often pays for itself many times over when I get back. Marissa Mayer says that burnout is just veiled resentment, and that people “‘can work arbitrarily hard for an arbitrary amount of time,’ but they will become resentful if work makes them miss things that are really important to them.” I disagree that burnout is only resentment (I don’t resent my school in the slightest for my stressful semester), but missing the things that matter to you certainly factors heavily into burnout.

“Doing what you love” doesn’t have to mean doing only one thing that you love, and doing that one thing so much you forget you love it. Being happy (and productive) has to come from balancing all of the things you love, not just the one someone decided was worth paying you for.

ND-CSE Code of Ethics

Alanna McEachen, Nathan Kowaleski and I were tasked with creating a ‘Code of Ethics’ for the Notre Dame CSE department for our Computer Ethics class. The full document is available here. Our code of ethics aims to make concrete the values and goals of the Computer Science/Engineering department by laying out commitments that we, as undergrads, should make to ourselves and to our peers, loosely based on the ACM Code of Ethics and Professional Conduct. Below are some of my reflections on our document.

The ND-CSE Code of Ethics is broken down into four general sections, with a number of tenets in each. Each tenet is reproduced below (longer explanations are available at the link above).

“As a Notre Dame Computer Scientist/Engineer, I will…”

  1. General
    • Take pride in my work
    • Take responsibility for mistakes
    • Deliver on my commitments
    • Adhere to the spirit and letter of all relevant regulations and policies
  2. Collaboration
    • Work with, never against, my colleagues
    • Share credit appropriately
    • Mentor, and accept mentorship
  3. Social Responsibility
    • Code for the greater good
    • Respect privacy and confidentiality
  4. Educational Mission
    • Keep an open mind
    • Actively seek out opportunities to learn
    • Strive to make projects approachable to non-engineers

The General section outlines the baseline responsibilities our students have regarding their work. These tenets were created to mirror similar commitments we will have to make in our professional lives after graduating from the University. Quality of workmanship and personal responsibility are crucial, no matter where our careers take us.

The Collaboration section addresses the fact that engineering is a fundamentally human-oriented activity.  The work that we do is inherently focused on improving lives via technical problem solving.  This starts with the engineering team: a team that cannot even work with each other cannot possibly work for the greater good, since their efforts will be dominated by the desire for individual power and recognition.  We work with each other because we acknowledge that we are all working toward the same goal.

The Social Responsibility section is a natural follow-up to the collaboration section. We move from working together within a team to working together with the general public. When we create as engineers, we accept that we have a responsibility to make every effort to ensure that our creations really do improve lives in the real world, not just on paper.

Finally, the Educational Mission section addresses the fact that we are a University and that our ultimate goal is the pursuit of knowledge, both for ourselves and for others. My favorite tenet in this section is “strive to make projects approachable to non-engineers.” Often the users of the projects we work on are not themselves engineers. While it is not crucial for users to understand every nuance of the objects they use daily, we recognize that lack of technological knowledge has caused, and will continue to cause, real and serious problems. For example, most people have no idea how accessible the information they post on social media is, and how, without constant vigilance, their privacy can erode away completely. It is our responsibility to share our knowledge freely, not keep it to ourselves.

Overall, I think we’ve created a very strong document. One weakness I recognize, however, is that the Social Responsibility section references “Catholic Social Teaching,” which is appropriate for Notre Dame given the University’s Catholic foundation, but will not carry the same weight outside the University or among members who aren’t Catholic. A more general code of ethics shouldn’t contain references to religion; however, the underlying idea of “protecting the weakest and most vulnerable members of society” is certainly applicable and important beyond the context of Catholic Social Teaching.

Finally, I want to address a more personal question: is a code of ethics really all that useful? I would argue yes, with some reservations. Codes of ethics are an excellent way for an organization to state its beliefs concretely and succinctly. However, there is a difference between a code of ethics and a code of law. A code of ethics would be quite ineffective if used to enforce compliance: it is by nature idealistic, dealing with the broad ideas and goals an organization strives toward. A code of law, by contrast, is quite specific about what individual members of an organization can or cannot do. While codes of law and codes of ethics often try to mirror each other as closely as possible, both are necessary to curb “unethical” behavior in an organization. A code of ethics is idealistic but unenforceable, providing a goal for all to work toward, whereas a code of law is pessimistic but unambiguous, providing an equal standard by which all are held accountable.