Data scientists have access to your sensitive data. That's driving more schools to teach ethics
BY Dawn Rzeznikiewicz
February 22, 2022, 5:51 PM

What obligation do data scientists have to be good stewards of the data they collect and analyze? Questions like this have been asked in data science ethics courses for years, yet ethics remains a hot topic for the technology industry. That's because ethical considerations are often still secondary, as companies instead prioritize technological developments that increase the volume of data collected, analyzed, and stored.
“Frankly, there’s really no reason for the industry to act differently,” says Julia Stoyanovich, an associate professor at New York University’s Tandon School of Engineering and an IEEE member, adding that the newest generation of engineers is much more aware of the social issues and implications of emerging technologies. “There is a big, big push from these engineers to start changing the practice.”
But the status quo is starting to evolve, as schools incorporate ethics as a more fundamental component in the training of future technologists and data scientists. Some top-ranked programs offer ethics courses as part of their core curricula. The University of Illinois Urbana-Champaign, which landed the No. 1 spot in Fortune’s ranking of the best online data science programs, offers a course called Ethics and Policy for Data Science. Meanwhile, Duke University offers a course called Data Science Ethics and New York University’s Center for Data Science has one called Responsible Data Science.
In recent years, heightened scrutiny of the technology industry has driven a major shift in how ethics is taught within data science programs. Traditionally, any ethics material in technology curricula was taught by ethicists, sociologists, and anthropologists, and engineers have not always found the content relevant, says Stoyanovich. “Their insight is extremely valuable. But for an engineer, such as myself, it’s difficult to relate to that type of narrative.”
Understanding value tradeoffs at stake with new tech
Stanford University is one school that recognized the need for change. The university has been teaching ethics as part of its computer science curriculum for decades, but recently reevaluated who is teaching ethics to technologists. The Engineering School is in its fourth year of offering the course Ethics, Public Policy, and Technological Change, which is jointly taught by a philosopher, social scientist, computer scientist, and political scientist.
“The goal was to help our technical students understand the value tradeoffs that are at stake as they design new technologies, and how you deal with some of the consequences of technology,” explains Jeremy Weinstein, the course’s political science professor. For the past four years, the class has consistently enrolled 250 to 350 students each semester it’s offered, and an evening version is now available to working professionals.
“We’re at a moment where we’re beginning to think about how the system errors of Big Tech need to be addressed,” says Weinstein.
In addition to its dedicated course, Stanford University has introduced Embedded EthiCS to create ethics-based curriculum modules with a goal to embed them into the university’s undergraduate computer science courses. “Ethics is not a course. It’s a practice,” says Weinstein. “This initiative helps students to develop a set of muscles to engage in these [ethical] debates.”
Still, plenty of schools and data-focused programs have a lot of catching up to do when it comes to incorporating ethics into their coursework.
The challenge of teaching ethics to technologists
Both teaching and developing that ethics “muscle” can be difficult. As a starting point, you need to find professors who are willing to teach social science and technology. “This is kind of a new intersection, and there’s not a whole lot of academics yet in that space,” explains Nita Farahany, a professor of philosophy and law at Duke University School of Law and the founding director of Duke Science & Society.
The professors of these ethics courses need to bridge how a computational scientist thinks and how an ethicist thinks. Ideally, a professor teaching ethics would be fluent in all the relevant dimensions—technical, legal, political, and social—and that combination is somewhat of a unicorn in today’s academic world.
In addition, technology students who are required to take an ethics course are often grappling with this type of learning for the first time. “There has been a view of ethics as a checkbox, rather than a serious area of inquiry,” says Farahany of the engineering field.
Technology students are accustomed to looking through the black-and-white lens of right and wrong answers, Farahany explains. “They don’t have any context for understanding why it matters, why it’s important, or how it’s relevant to their future career. And on top of that, they’re used to fields that are very data driven, very analytic.”
What we stand to lose if ethics is ignored
There’s a noticeable gap between the academics working to embed ethics into technology education and what the tech industry at large is doing to address its ethical shortcomings.
Stoyanovich says the reason the ethos “Build it first and ask for forgiveness later” is still prevalent is simple. “It’s an easier way to conduct business,” she explains. “Incentives are simply not there for the industry to act in ways that would embed ethics and responsibility into the design process.”
Some of the issues the established college courses are aiming to address include personal privacy, cybersecurity, A.I. bias, and data collection and use. In a class project at Stanford, for example, students were asked to evaluate and make recommendations as to how CCTV cameras should be installed on campus, as well as the extent to which facial recognition technology should be used.
At a time when Big Tech companies are plagued with ethical issues—Facebook and free speech, YouTube and censorship, adtech platforms and privacy—the stakes are high for the industry to adapt. Stoyanovich says one danger of unregulated or irresponsible design development and use of A.I. is that the profession might be discredited as consumers lose trust, and ultimately the field will lose funding. “If we overpromise and underdeliver repeatedly, it’s not just that there are harms at the moment, but also the entire field will be set back.”
Whose responsibility is it to consider ethics?
Academics play a crucial role in the integration of ethics into technology; schools are teaching the next generation of builders who will eventually enter the workforce. But computer science or programming professionals are just one piece of the responsibility puzzle. “It is not just the job of engineers to warn one another about how their products could be used,” says Weinstein.
Rather, there’s a growing awareness that ethical technology needs to play a big role in corporate responsibility. Stoyanovich suggests that the responsibility be shared even further—among multiple teams at Big Tech companies (programmers and technologists, analysts, quality assurance professionals, and customer service) as well as policymakers and the general public.
“We need to create a robust, distributed accountability regime where everybody is responsible,” says Stoyanovich.
Colleges and universities are starting to work together on this crucial issue. This year, the National Humanities Center is hosting its first Responsible Artificial Intelligence Curriculum Design Project, and 15 of the nation’s top universities and colleges will partner with the center to develop undergraduate courses that address ethical questions about the role of artificial intelligence.
Universities with data science programs seem to unanimously agree on the importance of ethics playing a significant role in education. As Weinstein notes: “We’re pushing back against the notion that technology is somehow neutral or objective.”