Situations described in natural language are richer than what humans explicitly communicate. For example, the sentence “She pumped her fist” connotes many potential auspicious causes. For machines to understand natural language, they must be able to make commonsense inferences about explicitly stated information and recognize how these inferences enrich the situational context that language leaves underspecified. In this talk, I will present work on designing systems that use knowledge graphs as structural scaffolds for commonsense reasoning in QA systems. First, I will show how neural knowledge models that represent knowledge implicitly can be used to dynamically generate on-demand knowledge graphs for interpretable reasoning in zero-shot QA. Then, I will introduce new models for interfacing between language and knowledge representations to enable expressive commonsense reasoning. Finally, I will conclude with a discussion of the tradeoff between interpretability and expressivity when designing neuro-symbolic interfaces for knowledge representation and reasoning.
Antoine Bosselut is an assistant professor in the School of Computer and Communication Sciences at EPFL, where he leads the EPFL NLP group. His group conducts research on natural language processing (NLP) systems that can model, represent, and reason about human and world knowledge.