Data science interacts with these questions in two ways:
(i) as the technology driving the very systems responsible for certain social impacts, posing new questions about what it means for such systems to accord with ethical norms and the law; and
(ii) as a set of powerful tools for analyzing existing systems (even those that don’t themselves depend on ML), e.g., for auditing existing systems for various biases.
This tutorial will tackle both angles on the interaction between technology and society vis-à-vis concerns over fairness and bias. Our presentation will cover a wide range of disciplinary perspectives, with the first part focusing on the social impacts of technology and formulations of fairness and bias defined via protected characteristics, and the second part taking a deep dive into peer review to explore other forms of bias, such as those arising from subjectivity, miscalibration, and fraud.
SYLLABUS
The tutorial comprises two parts, which will be covered independently of each other.
Outline | Topic | Time | Presenter | Material
---|---|---|---|---
Part 1 | Fairness in sociotechnical systems | 8.30-10am | Zachary Lipton | Slides
Part 2 | Peer review | 10-10.15am, [break], 10.45am-12.30pm | Nihar B. Shah |