From Cognitive Modeling to Typological Universals: Investigations with (Large) Language Models

Abstract

Natural languages exhibit so-called typological universals. Can recent advances in neural language modeling and cognitive modeling shed light on the source of these universals? I will present my recent investigations of this question, including work appearing at NAACL 2024 and ACL 2024. Specifically, next-word probabilities computed by particular (not necessarily large) language models explain well the human cognitive load reflected in reading behavior, and such cognitively motivated models can also separate possible languages from counterfactual, impossible ones based on their estimated processing costs. These results bridge (i) human reading behavior (cognitive bias), (ii) next-word predictability, and (iii) language universals, suggesting that attested natural languages are shaped to facilitate next-word prediction under cognitively plausible biases.
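
For readers unfamiliar with this line of work, the link between next-word probability and cognitive load is usually operationalized as surprisal, -log p(word | context): less predictable words incur higher processing cost. The sketch below is purely illustrative, assuming the Hugging Face transformers library and GPT-2 (not necessarily the models or setup used in the talk); it shows how per-token surprisal, the quantity typically regressed against human reading times, can be computed from an autoregressive language model.

    # Illustrative sketch: token-level surprisal from an autoregressive LM.
    # Assumes the Hugging Face `transformers` library and GPT-2; the talk's
    # experiments may use different models, corpora, and tokenization.
    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def surprisals(text: str):
        """Return (token, surprisal in bits) for each token after the first."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits          # (1, seq_len, vocab_size)
        log_probs = torch.log_softmax(logits, dim=-1)
        out = []
        for i in range(1, ids.size(1)):
            # log p(token_i | tokens_<i), read from the previous position
            lp = log_probs[0, i - 1, ids[0, i]].item()
            out.append((tokenizer.decode(ids[0, i].item()), -lp / math.log(2)))
        return out

    for tok, s in surprisals("The horse raced past the barn fell."):
        print(f"{tok!r}: {s:.2f} bits")

In reading-time studies, values like these are aggregated per word and entered into a regression against eye-tracking or self-paced-reading measures to test how well the model's predictability estimates track human processing effort.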

Date
May 28, 2024 10:30 AM — 12:00 PM
Location
OAT S17

Bio

Tatsuki Kuribayashi obtained a Ph.D. in information science from Tohoku University, Japan (adviser: Prof. Kentaro Inui) and began a postdoctoral position at MBZUAI, UAE, in 2023 (adviser: Prof. Timothy Baldwin). His research focuses on the intersection of natural language processing (NLP) and the cognitive science of language, along with contributions to work on the interpretability of NLP models, automatic writing assistance, and neuro-symbolic reasoning with NLP models. He is an organizer of the Cognitive Modeling and Computational Linguistics Workshop (CMCL 2024). His co-authored work has been recognized with Best Paper Awards at AACL-SRW 2022 and ACL-SRW 2023, and with a spotlight at ICLR 2024.