Teaching Machines through Human Explanations


Speaker: Xiang Ren (USC)

Date and Time: Friday, February 26, 1pm CT

Abstract:

Humans can often learn how to detect new entities and categorize documents from just a handful of examples, yet existing natural language processing (NLP) systems typically require tens of thousands of examples to reach similar accuracy. While deep learning methods have achieved impressive results on academic datasets, these data-hungry models are expensive to build and hard to maintain. In this talk, I will present an explanation-based learning framework that makes building and maintaining NLP models more label-efficient and reliable, and less reliant on machine learning expertise. I will introduce neural rule grounding, a technique that generalizes a single labeling rule so it can annotate many examples for augmenting model training. Next, I will discuss how to extend the soft grounding framework to handle natural language explanations that are unstructured and compositional, and present its applications to text classification and its extension to question answering. Lastly, I will touch on a knowledge-aware graph network for incorporating commonsense knowledge into the language modeling process.
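To give a flavor of the rule-grounding idea before the talk, here is a minimal, self-contained sketch (not Dr. Ren's actual implementation). It contrasts hard grounding, where a labeling rule fires only on an exact match, with soft grounding, where the rule also fires on sufficiently similar sentences. The rule, the SIM_THRESHOLD hyperparameter, the toy corpus, and the bag-of-words similarity are all invented for illustration; the method presented in the talk learns a neural matcher instead.

```python
# Sketch of hard vs. soft rule grounding for weak labeling.
# Assumptions: the rule, threshold, corpus, and bag-of-words encoder
# are illustrative stand-ins for a learned neural matching module.

from collections import Counter
import math

def bow_vector(text):
    """Toy bag-of-words embedding; a real system would use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# One labeling rule: pattern text -> label (hypothetical example).
RULE = ("was born in", "BIRTHPLACE")
SIM_THRESHOLD = 0.35  # assumed hyperparameter

unlabeled = [
    "Obama was born in Hawaii",           # exact match: hard grounding fires
    "Lincoln was born near Hodgenville",  # no exact match; soft grounding still fires
    "The meeting starts at noon",         # unrelated: neither fires
]

pattern_vec = bow_vector(RULE[0])
for sentence in unlabeled:
    # Hard grounding: label only on an exact substring match.
    hard = RULE[0] in sentence.lower()
    # Soft grounding: label when the sentence is similar enough to the rule.
    soft = cosine(pattern_vec, bow_vector(sentence)) >= SIM_THRESHOLD
    if hard or soft:
        print(f"{sentence!r} -> {RULE[1]} (hard={hard}, soft={soft})")
```

In this toy run, the second sentence is missed by exact matching but still gets labeled via soft matching, which is the sense in which one rule is generalized to annotate many more training examples.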

Bio:

Xiang Ren is an assistant professor in the USC Computer Science Department, a research team leader at USC ISI, and the PI of the Intelligence and Knowledge Discovery (INK) Lab at USC. Previously, he was a research scholar with the Stanford NLP Group, and he received his Ph.D. in Computer Science from the University of Illinois Urbana-Champaign. Dr. Ren works on knowledge acquisition and reasoning in natural language processing, with a focus on developing human-centered and label-efficient computational methods for building trustworthy NLP systems. He has received a Best Paper runner-up award at The Web Conference, the ACM SIGKDD Doctoral Dissertation Award, and several research awards from Google, Amazon, JP Morgan, Sony, and Snapchat, and he was named to Forbes' 30 Under 30 Asia list in 2019.