Why do humans have such trouble trusting algorithmic decision-making? The predictive strength of decision-making algorithms has led
to their growing application in society, for example, in autonomous
driving, online behavioral advertising, digital health, court decisions on
recidivism, and credit scoring. There are even plans to deploy predictive algorithms to replace human judges at the Olympics in 2022. The reason is simple: even rudimentary algorithmic models consistently outperform humans on various prediction tasks.
However, research indicates that humans are reluctant to trust
automated decision-making models. For example, almost 80% of Americans say they would not want to travel in an autonomous car because they do not trust the technology. The same phenomenon holds for simpler
applications of algorithmic models. The aim of this seminar is to
explore the key factors that underlie human trust and distrust in
algorithmic decision-making. Students will engage with a range of
literature on human-machine interaction, deceptive and
trust-enhancing interfaces, policy measures to foster algorithmic trust, and psychological dispositions toward trust in automated
decision-making. Each student will comprehensively review a paper to
understand how it potentially informs academia, industry, or policymaking. Overall, this seminar addresses an emerging scientific field, and students are encouraged to examine the implications of learning algorithms and novel data analytics methods for human trust from a variety of perspectives. To complete the seminar
successfully, students are required to prepare a presentation and
hand in an 8- to 10-page report.