If We Want AI to be Interpretable, We Need to Measure Interpretability


Speaker: Jordan Boyd-Graber (University of Maryland)

Date and Time: Wednesday, November 9 at 3:30pm

Place: 2405 or Zoom

Abstract:

AI tools are ubiquitous, but most users treat them as black boxes: handy tools that suggest purchases, flag spam, or autocomplete text. While researchers have presented explanations for making AI less of a black box, a lack of metrics makes it hard to optimize explicitly for interpretability. Thus, I propose two interpretability metrics suitable for unsupervised and supervised AI methods. For unsupervised topic models, I discuss our proposed "intruder" interpretability metric, how it contradicts the previous standard evaluation metric for topic models (perplexity), and its uptake in the community over the last decade. For supervised question answering, I show how human-computer cooperation can be measured and directly optimized with a multi-armed bandit approach that learns what kinds of explanations help specific users. I will then briefly discuss how similar setups can help users navigate information-rich domains like fact checking, translation, and web search.
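To give a flavor of the "intruder" metric mentioned above, here is a minimal Python sketch of a word-intrusion check: show annotators a topic's top words plus one word drawn from a different topic, and score the topic by how often annotators spot the odd word out. The function and variable names are illustrative, not from the talk or its papers.

```python
import random

rng = random.Random(0)

def build_intruder_task(topic_top_words, other_topic_words):
    """Assemble one word-intrusion item: a topic's top words plus one
    'intruder' taken from a different topic (illustrative helper)."""
    intruder = rng.choice([w for w in other_topic_words if w not in topic_top_words])
    shown = topic_top_words + [intruder]
    rng.shuffle(shown)
    return shown, intruder

def intruder_precision(annotator_choices, intruder):
    """Share of annotators who spot the intruder: a high value suggests the
    topic's top words cohere well enough for the odd word to stand out."""
    return sum(choice == intruder for choice in annotator_choices) / len(annotator_choices)

# Toy example: a coherent "sports" topic with a finance intruder.
shown, intruder = build_intruder_task(
    ["goal", "coach", "season", "league", "score"],
    ["stock", "dividend", "portfolio"],
)
print(shown)
print(intruder_precision([intruder, intruder, "coach"], intruder))  # 2 of 3 correct -> ~0.67
```

For the question-answering part, the abstract describes choosing explanations with a multi-armed bandit. The sketch below uses a generic epsilon-greedy bandit whose reward is whether the human-computer team answered correctly; the explanation kinds and the bandit strategy are assumptions for illustration, not the specific algorithm from the talk.

```python
import random

def choose_explanation(kinds, counts, rewards, epsilon=0.1, rng=random):
    """Epsilon-greedy choice among explanation kinds (generic bandit sketch)."""
    if rng.random() < epsilon:
        return rng.choice(kinds)  # explore a random explanation style
    # Exploit: pick the style with the best observed team accuracy so far.
    return max(kinds, key=lambda k: rewards[k] / counts[k] if counts[k] else float("inf"))

def record_outcome(counts, rewards, kind, team_answered_correctly):
    """Reward is whether the human-computer team got the question right."""
    counts[kind] += 1
    rewards[kind] += float(team_answered_correctly)

kinds = ["highlight_evidence", "show_retrieved_examples", "no_explanation"]
counts = {k: 0 for k in kinds}
rewards = {k: 0.0 for k in kinds}
kind = choose_explanation(kinds, counts, rewards)
record_outcome(counts, rewards, kind, team_answered_correctly=True)
```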

Bio:

Jordan Boyd-Graber is an associate professor in the University of Maryland's Computer Science Department, iSchool, UMIACS, and Language Science Center. He works on how humans can interact with AI tools, starting with topic models, then translation, negotiation, and most recently question answering. He and his students have won "best of" awards at NIPS (2009, 2015), NAACL (2016), and CoNLL (2015). Jordan also won the British Computer Society's 2015 Karen Spärck Jones Award and a 2017 NSF CAREER award.

Zoom Link: https://illinois.zoom.us/j/89179351119?pwd=VWlxWWN6WnVRZndnSnlPT3Q0TUhzZz09

Meeting ID: 891 1793 1119

Password: csillinois