Advanced Topics: Ethics in AI and Data Science

Objective

To understand contemporary ethical issues in the use of AI and data science, and to provide techniques for analysing and addressing them

Learning outcomes

– To know the main ethical concerns involved in designing and using AI and data-centred systems
– To understand the (algorithmic) solutions proposed for building ethical safeguards into AI and data analytics algorithms
– To analyse the consequences of ethical decisions in AI and autonomous decision-making systems
– To analyse and implement these decisions in practice

Structure

1. Ethics for Subsymbolic AI

1.1. Privacy

– Privacy Concerns
– Differential Privacy
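
As a first taste of this topic, the Laplace mechanism from Dwork et al. (2006, on the reading list below) achieves differential privacy by adding noise calibrated to a query's sensitivity. A minimal sketch in Python; the function and variable names are illustrative, not from any assigned text:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value perturbed with Laplace noise of scale sensitivity/epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one person's
# record changes the true count by at most 1.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; the calibration of noise to sensitivity is the subject of the Dwork, McSherry, Nissim, and Smith paper listed under Further reading.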

1.2. Fairness and bias

– Concepts and definitions of fairness
– Impossibility results, Pareto optimal solutions
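
As a concrete example of the fairness definitions covered here, demographic parity asks that the rate of positive predictions be equal across protected groups. A minimal, illustrative sketch (the names are our own, not from any assigned paper):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: group 0 receives positives 3/4 of the time, group 1 only 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # prints 0.5
```

The impossibility results discussed in this unit show that metrics like this one cannot, in general, all be driven to zero simultaneously.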

1.3. Explainability

– Causal approaches to explainability

2. Ethics for Symbolic AI

2.1. Understanding and reasoning about human behaviour

– Game theoretic approaches
– User-centred design

Assessment

For all of the assessment modules below, students form groups of two. The assessment criteria are judged individually where possible; hence, members of the same group may receive different final marks.

1. Presentations 30%

Each group gives four presentations on four different topics, spread throughout the term. Each group member leads two of the presentations, but both members contribute to every presentation and should be able to answer all questions. Each presentation should cover at least one key paper on the respective subject matter. Each presentation accounts for 7.5% of the total mark.

Judgment criteria (2.5% each):

Content

Was the presentation well organised?
Was it given at the right level of detail (informative but without being unnecessarily complex)? Was it clear that the student thought about the topics and contributed to the final conclusions?

Delivery

Did the student speak fluently and clearly, without excessive recourse to notes? Did they use the right level of technical language (sufficiently technical but without an excess of acronyms)? Did they respond well to questions?

Support

Did the student make good use of available technologies (e.g., for slides or tool demos)?
Were the slides well designed?
Did the slides support the logical flow of the presentation?
Was the presentation prepared to a professional standard?

2. Term papers 50%

Each group writes four term papers on the topics of their presentations. The term papers should reflect not only the content covered in the group's own presentation but also that of all other presentations, and hence will cover multiple papers. Each term paper should be around 2500 words (a guideline rather than a hard rule). Each term paper accounts for 12.5% of the total mark.

Judgment criteria:

Logical Structure (2.5%):

Structuring the paper into sections and the sections into paragraphs with a logical flow of information. Proper use of itemised or enumerated lists, figures, and tables. Including an abstract, and proper introduction and conclusions. 

Presentation (2.5%):

Using proper headings for sections. Figures and tables should have appropriate captions and should be referred to in the text.
Typeset properly in LaTeX; you could use an online LaTeX environment such as Overleaf and one of its homework templates. No spelling or grammar mistakes.

Content (5%):

In-depth treatment of the chosen topic: identifying the paper's main message, defining the problem concretely, providing clear definitions motivated by illustrative examples, and identifying and exemplifying concrete results. Critical appraisal of the results.

References (2.5%):

Using scientific references (textbooks, papers appearing in peer-reviewed journals and conferences; avoid referring to Wikipedia; do not use too many references to popular websites). Using a consistent and complete citation and bibliographic style (e.g., any one of the following: Harvard, Chicago, IEEE)

3. Mini-project  20%

There will be a mini-project on fairness and bias, in which you redesign a classifier to counter bias. The detailed project description will be released in the first week of October. The final deadline and the presentations are on Saturday 27th and Monday 29th of November, respectively.
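
One standard technique for countering bias when retraining a classifier is reweighing (Kamiran and Calders): weight each training instance so that the protected attribute and the label become statistically independent in the weighted data. The sketch below is an illustration of the idea under our own naming, not the required project solution:

```python
import numpy as np

def reweighing_weights(y, group):
    """Instance weights making the protected attribute independent of the label.

    Each (group, label) cell receives weight P(group) * P(label) / P(group, label).
    Assumes every (group, label) combination occurs in the data.
    """
    y, group = np.asarray(y), np.asarray(group)
    w = np.zeros(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            expected = (group == g).mean() * (y == label).mean()
            observed = mask.mean()
            if observed > 0:
                w[mask] = expected / observed
    return w
```

With scikit-learn, for example, these weights could be passed when fitting, as in `clf.fit(X, y, sample_weight=reweighing_weights(y, group))`; the weighted positive rate then comes out equal across groups.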

Resources

Books:

Main book: Michael Kearns and Aaron Roth. The Ethical Algorithm. Oxford University Press. 2020.
Other books:
● Mark Coeckelbergh, AI Ethics, MIT Press, 2020.
● Cynthia Dwork and Aaron Roth. The Algorithmic Foundations of Differential Privacy. NOW Publishers, 2014. https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf
● David Edmonds. Would You Kill the Fat Man? Princeton. 2014.
● Shannon Vallor. Technology and the Virtues. Oxford University Press, 2016.

Similar courses:

Philosophically-oriented:
– https://ethics-of-ai.mooc.fi/
– https://cft.vanderbilt.edu/university-courses/university-course-the-ethics-of-artificial-intelligence-ai/
– https://www.unibo.it/en/teaching/course-unit-catalogue/course-unit/2020/446601
– https://web.stanford.edu/class/cs122/

Practically-oriented:
– https://www.coursera.org/specializations/ethics-in-ai

Further reading / tools (for your term paper and project)

Differential privacy

– C. Dwork. Differential privacy. In Proceedings of the International Colloquium on Automata, Languages and Programming (ICALP) (2), pages 1–12. 2006.

– C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference '06, pages 265–284. 2006.

– C. Dwork and M. Naor. On the difficulties of disclosure prevention in statistical databases or the case for differential privacy. Journal of Privacy and Confidentiality, 2010.

– I. Mironov. On significance of the least significant bits for differential privacy. In T. Yu, G. Danezis, and V. D. Gligor, editors, Association for Computing Machinery Conference on Computer and Communications Security, pages 650–661. Association for Computing Machinery, 2012.

– Anindya De. Lower bounds in differential privacy. In Theory of Cryptography, pages 321–338, 2012.

Fairness / bias

– Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases

– Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

– Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

– Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Rich Zemel. Fairness Through Awareness.

Explainability

– Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng. Building high-level features using large scale unsupervised learning.

Algorithmic Game Theory

– Michael Kearns, Mallesh M. Pai, Aaron Roth, Jonathan Ullman. Mechanism Design in Large Games: Incentives and Privacy.

– D. Gale and L. S. Shapley. College Admissions and the Stability of Marriage.

– Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Generative Adversarial Networks.

User-centred design tools:

– https://www.irit.fr/recherches/ICS/documentation/
– http://www.pvsioweb.org/
– http://ivy.di.uminho.pt/

Other resources:

– Living with AI Podcasts
– Verifiability YouTube Channel
– Moral Machine