Decision making is increasingly being performed by intelligent algorithms in areas ranging from search engine rankings to public policy. Algorithmic decision making includes applications as consequential as flagging someone as a potential terrorist, as in the United States' no-fly list, and deciding how police officers will be allocated, as in predictive policing.
These systems are getting smarter as we develop better algorithms, and more expansive as they integrate more data. Government agencies and corporations are working out how best to convert the vast quantities of data collected on their citizens and customers into meaningful inferences and decisions through data mining and predictive systems.
However, many of these systems rely on algorithms whose operation is closed to the public, constituting a new form of secrecy maintained by powerful entities. Whether intentional or not, the impact of some of these systems can be profound.
This talk will cover some of the emerging issues with the widespread use of these systems in terms of transparency and fairness. We need to have some mechanism for verifying how these systems operate. Are these algorithms discriminatory? Are they fair with respect to protected groups? What role can auditing and reverse engineering play? I'll discuss these questions, the current status of this field, and some paths forward.
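As a concrete illustration of the kind of audit mentioned above, one widely used fairness check is the "four-fifths rule" for disparate impact: a protected group's rate of favorable decisions should be at least 80% of the most-favored group's rate. The sketch below is hypothetical (the talk does not prescribe this metric, and the group names and audit data are invented); it only shows how such a check might look in practice.

```python
# Hypothetical sketch: auditing an opaque system's logged decisions
# for disparate impact using the "four-fifths rule".

def selection_rate(decisions):
    """Fraction of favorable (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(groups):
    """groups: dict mapping group name -> list of 0/1 decisions.
    Returns the ratio of the lowest to the highest selection rate."""
    rates = [selection_rate(d) for d in groups.values()]
    return min(rates) / max(rates)

# Invented example data: decisions split by demographic group.
audit = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # selection rate 0.375
}

ratio = disparate_impact_ratio(audit)
print(round(ratio, 2))   # 0.5
print(ratio >= 0.8)      # False -> evidence of disparate impact
```

An external auditor with only black-box access could gather such decision logs by probing the system with varied inputs, which is one reason reverse engineering comes up alongside auditing.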