Losing bits and finding meaning: Efficient compression shapes meaning in language

Abstract

Our world is extremely complex, and yet we are able to exchange our thoughts and beliefs about it using a relatively small number of words. What computational principles can explain this extraordinary ability? In this talk, I argue that in order to communicate and reason about meaning while operating under limited resources, both humans and machines must efficiently compress their representations of the world. In support of this claim, I present a series of studies showing that: (i) languages evolve under pressure to efficiently compress meanings into words; (ii) the same principle can give rise to human-like semantic representations in artificial neural networks trained for vision; and (iii) efficient compression may also explain how meaning is constructed in real time, as interlocutors reason pragmatically about each other’s intentions and beliefs. Taken together, these results suggest that efficient compression underlies how humans communicate and reason about meaning, and may guide the development of artificial agents that can naturally communicate and collaborate with humans.
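
The abstract leaves the formal notion of "efficient compression" implicit. A standard way to make it precise in this information-theoretic tradition is the Information Bottleneck (IB) principle; what follows is a sketch in my own notation, under the assumption that this is the intended framework, not a formula quoted from the talk. Given speaker meanings $M$, words $W$, and the world states $U$ that a listener must recover, a lexicon defined by an encoder $q(w \mid m)$ is efficient if it minimizes

    \mathcal{F}_\beta[q(w \mid m)] \;=\; I_q(M; W) \;-\; \beta\, I_q(W; U), \qquad \beta \ge 1,

trading off the complexity of the lexicon, $I_q(M;W)$ (the bits spent encoding meanings into words), against the accuracy it supports, $I_q(W;U)$ (the information words retain about the world). On this reading, "losing bits" is the pressure toward low complexity, and "finding meaning" is the pressure toward high accuracy.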

Date
Jul 4, 2022, 2:00–3:00 PM
Location
CAB H52

Bio

Noga Zaslavsky is a postdoc at MIT. Her research aims to understand language, learning, and reasoning from first principles, building on ideas and methods from machine learning and information theory.