Wrapping up and going beyond the basics
Readings due before class on Thursday, December 13, 2018
Required
This looks like a lot, but most of these are quite short.
Keep in mind throughout all these readings that an “algorithm” in these contexts is typically some fancy type of regression model. The outcome variable is something binary like “safe babysitter/unsafe babysitter,” “gave up seat in past/didn’t give up seat in past,” or “violated probation in past/didn’t violate probation in past,” and the explanatory variables are hundreds of pieces of data that might predict those outcomes (social media history, flight history, race, etc.).
Data scientists build a (sometimes proprietary and complex) model based on existing data, plug in values for any given new person, multiply that person’s values by the coefficients in the model, and get a final score in the end for how likely someone is to be a safe babysitter or how likely someone is to return to jail.
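That scoring step can be sketched in a few lines of code. This is a toy illustration, not any real company’s model: the coefficient values and feature names below are invented, and real systems use hundreds of features rather than two.

```python
import math

# Hypothetical coefficients from a fitted logistic regression model.
# A person's log-odds of the outcome is the intercept plus the sum of
# each coefficient multiplied by that person's value for the feature.
coefficients = {
    "intercept": -1.2,
    "prior_violations": 0.8,   # invented feature: past probation violations
    "negative_posts": 0.05,    # invented feature: flagged social media posts
}

def risk_score(person):
    """Plug a new person's values into the model and return a probability."""
    log_odds = coefficients["intercept"]
    for feature, coef in coefficients.items():
        if feature != "intercept":
            log_odds += coef * person.get(feature, 0)
    # The logistic function converts log-odds into a 0-to-1 probability.
    return 1 / (1 + math.exp(-log_odds))

# Score a hypothetical new person.
new_person = {"prior_violations": 1, "negative_posts": 4}
print(round(risk_score(new_person), 3))  # → 0.45
```

The final number is the “score” the readings keep referring to: a predicted probability that gets thresholded into a label like “unsafe” or “high risk,” with all the ethical weight that implies.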
- 12.1–12.2 in ModernDive (Chester Ismay and Albert Y. Kim, ModernDive: An Introduction to Statistical and Data Sciences via R, 2018, https://moderndive.com/)
- DJ Patil, “A Code of Ethics for Data Science”
- Mike Loukides, Hilary Mason, and DJ Patil, Ethics and Data Science (this concise booklet is the result of DJ Patil’s call for ethics in the previous post)
- “AI in 2018: A Year in Review”
- “How Big Data Is ‘Automating Inequality’”
- “In ‘Algorithms of Oppression,’ Safiya Noble finds old stereotypes persist in new media”
- 99% Invisible, “The Age of the Algorithm”: Note that this is a podcast, a 20ish-minute audio story. Listen to it. The other materials on that page are helpful and supplementary (very few podcasts provide this much extra information), but you don’t need to go through them all.
- “Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude.”
- “Companies are on the hook if their hiring algorithms are biased”
- “Courts use algorithms to help determine sentencing, but random people get the same results”