Explainability in question answering allows researchers to check that a model is making the right decision for the right reason. Datasets of explanations are useful both for training and for evaluating models along these lines. However, in common sense question answering, annotators hold differing opinions about how much detail an explanation should contain, which leads to low inter-annotator agreement. In this talk I will discuss a new annotation procedure that aims to identify only the key facts used when choosing between answers. Rather than asking annotators to select these facts directly, we ask them to write explanations for a counterfactual situation and analyse how these differ from explanations of the real situation.
Guy Aglionby is a final-year PhD student at the University of Cambridge Computer Lab, supervised by Prof Simone Teufel, and a member of Homerton College. His PhD research focuses on developing interpretable models for common sense multi-hop reasoning.