On Teaching Data Ethics

How important is data ethics, what issues need to be addressed, which themes should be explored, and how can the subject be taught effectively?

Data ethics involves the study and adoption of data practices, algorithms, and applications that respect fundamental individual rights and societal values.[i] The primacy of data in modern economies becomes more apparent each day. Success not only in science but in business and society depends on understanding both what data exist and what they represent. It is little wonder that universities around the world now propose specializations in data science, machine learning, and artificial intelligence. Yet confining data science to the realm of specialists is both short-sighted and potentially perilous, for public and private organizations alike are increasingly relying on analytics to monitor and evaluate almost every aspect of our daily lives.

Is data ethics limited to concerns about e-mail scams, the abusive use of micro-targeting, and the immorality of troll farms? Cambridge academics have monetized their research on psychometric data to predict and influence behavioral preferences.[ii] Facebook has deliberately modified the sentiment of seven hundred thousand of its users’ “home feeds” without their consent.[iii] Amazon has continued to aggressively market its facial-recognition tool Rekognition in spite of concerns over privacy and bias.[iv] Courts are using algorithms at each stage of the legal process to profile convicts according to “risk” scores that vary with skin color.[v] Employers are recruiting with algorithms that inherently favor certain socioeconomic groups.[vi] Such applications of data science cannot be dismissed as simply “business as usual”, for they produce ethical consequences that condition the future of both business and society.

What types of problems are we trying to solve in applying data science to automate processes, interpret sensory data, master conceptual relationships, or influence environmental dynamics? Artificial intelligence (AI) can be distinguished from machine learning (ML) by comparing their objectives, methods, and applications. By its very nature, machine learning has historically focused on producing new knowledge, whereas AI aims to replace human intelligence. Machine learning uses algorithms to improve supervised, unsupervised, or reinforcement learning, while AI leverages algorithms to replicate human behavior. Data scientists deploy machine learning to better understand patterns in the data; they hope that AI will provide the answer to complex problems. If the objective of ML is to improve our ability to make better decisions, that of AI is to provide the optimal solution. The ethical implications of data science depend upon each organization’s objectives, data practices, and applications.
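
A minimal sketch, not drawn from the article, may help make this distinction concrete. The dataset (scikit-learn’s toy Iris data) and the 0.9 confidence threshold are illustrative assumptions: the fitted model stands for the ML framing (finding patterns in data), while wrapping it in an automated decision rule stands for the AI framing (replacing a human judgment).

```python
# Illustrative sketch: machine learning discovers patterns in labelled data;
# an "AI" deployment wraps that model in a rule that automates a decision.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ML framing: fit a model to better understand patterns in the data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Pattern-finding accuracy:", model.score(X_test, y_test))

# AI framing: the same model embedded in an automated decision rule.
def automated_decision(sample):
    # Hypothetical policy: act only when the model is confident enough.
    confidence = model.predict_proba([sample]).max()
    return "act automatically" if confidence > 0.9 else "defer to a human"

print(automated_decision(X_test[0]))
```

The ethical questions raised above sit less in the fitted model than in the decision rule: who chooses the threshold, who is affected when the system acts, and who answers for the outcome.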

Is data ethics a bigger subject than artificial intelligence itself? If the scope of artificial intelligence is difficult to gauge, its societal impact extends far beyond trying to “do something useful with the growing morass of data” at our disposal.[vii] Are data just technology? Can AI be reduced to a form of experimental logic designed to cure modern life’s ills? Because artificial intelligence reflects the visions, biases, and logic of human decision making, we need to consider to what extent AI can be isolated from the larger economic and social challenges it has been designed to address. Emerging issues such as personal privacy, public engagement with data, the pertinent metrics for evaluating human progress, and the relationship between data and governance suggest that data condition how we see and evaluate the world around us. If data are of little value until they are used to incite decisive action, data ethics needs to focus less on data and algorithms than on their impact on the bounded rationality that defines human decision-making. In sum, as the proponents of open science suggest, there isn’t a binary opposition between data and action, only interactions between interventions and contexts.[viii]

Which subjects need to be addressed in a curriculum on data ethics? As the initiatives in Europe, Brazil, India, Singapore, and California illustrate, the issues surrounding personally identifiable information and explicit consent, as well as the rights to access, to rectify, and to be forgotten, all need to be explored. Implicit bias should also be high on the list, with modules exploring how attitudes and preconceptions influence our understanding of data, cognition, logic, and ethics.[ix] The managerial issues around digital transformation can be analyzed, including the extent to which managers and organizations need to take ownership of their data practices and be held responsible for them. Technology’s impact on reasoning should also be discussed, for our reliance on data has subtly modified the traditional definitions of “freedom of choice”, “privacy”, “truthfulness”, and “trust”.[x] Finally, the compatibility between AI and innovation can be examined: our reliance on scientism belittles other forms of human intelligence, including emotional (interpersonal), linguistic (word smart), intrapersonal (self-knowledge), and spiritual (existential).

How and where, then, should data ethics be taught? As a baseline, Rob Reich suggests that all those who are trained to become technologists should have an ethical and social framework for thinking about the implications of their work.[xi] Yet, as the parliamentary hearings on AI in France and Germany demonstrate, students preparing for careers in public policy and other fields would also profit from a better understanding of the societal impacts of data science. Rather than proposing a checklist of “rights and wrongs”, modules on data ethics would be well inspired to focus on the ethical consequences of data-driven problem-solving. In the absence of a universal list of rights and wrongs, Shannon Vallor argues that students need to develop “practical wisdom” to navigate the ethical challenges posed by successive generations of technology.[xii] If data ethics may never be fully captured in a single course, it can be better explored in a framework applied to academic study and research as a whole.

Lee Schlenker, originally published in Marketing&Innovation

  • This article provides the framework of our contribution to the CDEFI conference on “Ethique et numérique”, June 6th in Toulouse.

Lee Schlenker is a Professor of Business Analytics and Community Management, and a Principal in the Business Analytics Institute http://baieurope.com. His LinkedIn profile can be viewed at www.linkedin.com/in/leeschlenker. You can follow the BAI on Twitter at https://twitter.com/DSign4Analytics


[i] Floridi, L. and Taddeo, M. (2016), What is Data Ethics?

[ii] Cadwalladr, C. (2018), How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool

[iii] Meyer, R. (2014), Everything We Know About Facebook's Secret Mood Manipulation Experiment

[iv] Kellon, L. (2019), Amazon heads off facial recognition rebellion

[v] Hao, K. (2019), AI is sending people to jail—and getting it wrong

[vi] Schulte, J. (2019), AI-assisted recruitment is biased

[vii] Coiffait, L. (2018), Universities’ evolving role in the ethics of data and artificial intelligence

[viii] LERU (2018), Open Science and its role in universities

[ix] Schlenker, L. (2019), The Ethics of Data Science

[x] Ericson, L. (2018), It’s Time for Data Ethics Conversations at your Dinner Table

[xi] Wykstra, D. (2019), Fixing Tech’s Ethics Problem Starts in the Classroom

[xii] Vallor, S. (2016), Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford University Press
