Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution

Abstract

We present a setup for training, evaluating, and interpreting neural language models that uses artificial, language-like data. The data is generated using a massive probabilistic grammar (based on state-split PCFGs) that is itself derived from a large natural language corpus, but that also gives us complete control over the generative process. We describe and release both the grammar and the corpus, and test the naturalness of our generated data. This approach allows us to define closed-form expressions to efficiently compute exact lower bounds on obtainable perplexity, for both causal and masked language modelling. Our results show striking differences between neural language modelling architectures and training objectives in how closely they approach this lower bound. Our approach also allows us to directly compare learned representations to the symbolic rules in the underlying source, and we experiment with various techniques for interpreting model behaviour and learning dynamics. With access to the underlying true source, we find striking differences in learning dynamics between different classes of words.
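
The lower bound rests on a standard fact: in expectation, no model can assign higher likelihood to the data than the true distribution itself, so scoring a corpus under the true conditionals yields the best obtainable perplexity. With a PCFG as the source, the conditional P*(w_t | w_<t) can be computed in closed form. The sketch below (not the released code) illustrates the idea for the causal case; the oracle function `true_next_token_dist` and its interface are hypothetical stand-ins for the grammar-based computation.

```python
import math

def true_next_token_dist(prefix):
    """Hypothetical oracle: returns {token: P*(token | prefix)},
    which a PCFG admits in closed form (e.g. via inside probabilities)."""
    raise NotImplementedError  # supplied by the grammar in the actual setup

def perplexity_lower_bound(corpus):
    """Score the corpus under the true distribution P* itself; the resulting
    perplexity is a lower bound on what any causal LM can obtain in expectation."""
    total_nll, n_tokens = 0.0, 0
    for sentence in corpus:                      # each sentence: list of tokens
        for t, token in enumerate(sentence):
            p = true_next_token_dist(sentence[:t])[token]
            total_nll += -math.log(p)            # negative log-likelihood of the true model
            n_tokens += 1
    return math.exp(total_nll / n_tokens)        # perplexity = exp(mean NLL)
```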

Date
Nov 22, 2023, 11:00 AM – 12:00 PM
Location
OAT S13

Bio

Jaap Jumelet is a PhD candidate in the group of Jelle Zuidema at the ILLC, University of Amsterdam. His PhD topic lies at the intersection of explainable AI and natural language processing. He is interested in uncovering the linguistic capacities of current NLP models, and in developing new techniques that make it possible to gain these insights in a robust and faithful way.