AAAI 2020 Tutorial:
Fairness and Bias in Peer Review and other Sociotechnical Intelligent Systems

Nihar B. Shah and Zachary Lipton

Date: Saturday, February 8, 2020
Time: 8:30 am – 12:30 pm
Venue: Sutton Center
Questions of fairness and bias abound in all socially consequential decision-making. Whether we are designing protocols for the peer review of research papers, setting hiring policies, or framing research questions in genetics, any decision with the potential to allocate benefits or confer harms raises concerns about *who* gains or loses, concerns that may fail to surface in naively chosen performance measures.

Data science interacts with these questions in two ways:
(i) as the technology driving the very systems responsible for certain social impacts, posing new questions about what it means for such systems to accord with ethical norms and the law; and
(ii) as a set of powerful tools for analyzing existing systems (even those that don’t themselves depend on ML), e.g., for auditing them for various biases (a minimal sketch of such an audit appears below).
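
As a small, self-contained illustration of this second angle (a sketch, not drawn from the tutorial materials), the snippet below computes a demographic parity gap, i.e., the largest difference in positive-decision rates across groups. The decisions and group labels are hypothetical audit data.

```python
# Illustrative sketch only: auditing a decision system for demographic parity.
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups."""
    rates = [np.mean(decisions[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical audit data: binary decisions and a protected attribute.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups    = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```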

This tutorial will tackle both angles on the interaction between technology and society vis-à-vis concerns over fairness and bias. Our presentation will cover a wide range of disciplinary perspectives, with the first part focusing on the social impacts of technology and on formulations of fairness and bias defined via protected characteristics, and the second part taking a deep dive into peer review to explore other forms of bias, such as those arising from subjectivity, miscalibration, and fraud.


SYLLABUS

The tutorial comprises two parts, which will be covered independently of each other.

Outline:

Part 1: Fairness in sociotechnical systems
  Time: 8:30 – 10:00 am
  Presenter: Zachary Lipton
  Material: Slides

Part 2: Peer review
  • Biases
  • Noise
  • Miscalibration
  • Dishonest behavior
  • Subjectivity
  • Norms and policies
  Time: 10:00 – 10:15 am, [break], 10:45 am – 12:30 pm
  Presenter: Nihar B. Shah