Explainable Artificial Intelligence (XAI)
(Special Topics in Data Science: Explainable AI)

Instructor: Dr. Wen-Syan Li, Professor (wensyanli@snu.ac.kr, Office: 942-412)

Goals

Explainable AI (XAI) is artificial intelligence (AI) in which the results of a machine learning solution can be interpreted or explained to humans, so that the inference process of the trained model can be understood. It contrasts with the purely data-driven, black-box approach to machine learning, in which sometimes even the designers cannot explain why a trained model arrives at a particular decision.

Content

In this course, we aim to cover the following topics:


● The concepts of XAI
● Definitions of interpretation and explainability
● Interpretation and explainability for machine learning and deep learning
● The tradeoff between interpretability and performance in machine learning/deep learning
● Explainable models and post-hoc explainability
● Local explainability, global explainability, and counterfactual explainability
● Model interpretation and explainability for images, video, audio, music, and natural language processing
● Activation maximization and visualization of the parameters and filters of a machine learning model, such as patterns, features, and styles
● How to evaluate interpretations
● Explaining the predictions of a model
● Discovering potential vulnerabilities to adversarial attacks on machine learning
● Human-centric AI: how to improve the interaction between machine learning and humans
● How human feedback and knowledge bases can be incorporated into machine learning, and vice versa
● Applications and case studies
● Legal, ethical, and fairness aspects of XAI
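As a small taste of the post-hoc explainability topic above, the following is a minimal sketch of permutation feature importance: shuffle one feature at a time and measure how much the model's accuracy drops. All names, data, and the stand-in model here are illustrative, not material from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends only on feature 0.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Stand-in for a trained black-box model.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10):
    """Post-hoc explanation: importance of feature j is the average
    accuracy drop when column j is randomly shuffled."""
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(model_predict, X, y)
# Feature 0 should dominate; features 1 and 2 should be near zero.
```

Because it only needs predictions, this technique treats the model as a black box, which is exactly the setting where post-hoc explanations are needed.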


By the end of this class, students will have learned the main concepts, methodologies, and tools of explainable artificial intelligence (XAI) and will be able to apply them to real-world machine learning problems. Students will develop the critical thinking needed to analyze a task that requires both performance and explainability, and will gain experience applying the data science process end-to-end, both individually and as a team member.

Textbook

There is no single required textbook for this course; the lectures will be based on multiple textbooks, various articles, and web documents, as well as real scenarios from external companies. Among the numerous textbooks available, the following are recommended.


1. Pattern Recognition and Machine Learning (Information Science and Statistics) by Christopher M. Bishop, ISBN-13: 978-0387310732. Online material and a downloadable PDF are available at https://www.microsoft.com/en-us/research/people/cmbishop/prml-book/


2. Machine Learning by Tom M. Mitchell, ISBN-13: 978-1259096952. Online material is available at http://www.cs.cmu.edu/~tom/10701_sp11/lectures.shtml


3. Deep Learning (Adaptive Computation and Machine Learning series) by Ian Goodfellow, Yoshua Bengio, Aaron Courville, ISBN-13: 978-0262035613. Online material is available at https://www.deeplearningbook.org/

4. Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series), second edition, by Richard S. Sutton and Andrew G. Barto, ISBN-13: 978-0262039246. Online material is available at https://mitpress.mit.edu/books/reinforcement-learning-second-edition (reinforcement-learning-focused book)

5. Andrew Ng's online Machine Learning course, available at https://www.youtube.com/watch?v=PPLop4L2eGk&list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN

Grading Policy

This course is offered for 3 credits. The grade is broken down as follows.

● Attendance: 5% (with a video camera on, or permission from the instructor)
● Class participation (students are especially expected to participate actively in productive discussion during paper presentations and group projects): 5%
● Paper presentation assignments (papers, company products, videos, tools, short tutorials, etc.): 30%
   o Grading for paper presentations: content (from the assigned paper and related work): 50%; clarity of presentation: 30%; ability to answer questions: 20%
● Two programming-/tools-based assignments: 20%
● Final project (students should come up with their own scenarios and data sets, generally available on Kaggle or data science community sites; it includes a proposal presentation and a result presentation; scores will be calibrated based on individual contribution): 40%
● Grading for the project proposal & project presentations (out of the 40% of the overall score):
   o Proposal (topic search, organization, planning, presentation): 15%
   o Content of the project (motivation, merits, approaches to solving the problem, i.e., methodology, technical depth, experiments, and a summary of related and future work): 70%
   o Clarity of the final project presentation and ability to answer questions: 10%
   o Final report or extended presentation materials, as well as project source code for verification: 15%

Prerequisite

Familiarity with the general concepts of machine learning and deep learning, as well as with ML/DL tools and languages, is needed for the programming-based assignments and projects.

Note

Lecture plan: A week-by-week list of course topics will be provided in the second week of class, and adjustments may apply. Some lab or assignment sessions may be given by the TAs.


Attendance policy: The University requires attendance to be recorded. However, we realize that absence from class is sometimes unavoidable due to medical or personal reasons. It is the responsibility of the student to report the reason for his/her absence to the instructor and provide documentation if requested. The instructor will decide whether the student's absence should be excused. Each student is accountable for all work missed because of absence.


Language policy: This course will be taught in English. All lectures, as well as exams and assignments, will be given in English. Students will answer exam questions and complete assignments in English.