Thinking like Transformers

Abstract

Transformers, the purely attention-based neural network architecture, have emerged as a powerful tool in sequence processing. But how does a transformer think? When we discuss the computational power of RNNs, or consider a problem they have solved, it is easy to think in terms of automata and their variants (such as counter machines and pushdown automata). But when it comes to transformers, no such intuitive model is available. In this talk I will present RASP (Restricted Access Sequence Processing), a programming language which we hope will serve the same purpose for transformers that finite state machines do for RNNs. In particular, we identify the base computations of a transformer and abstract them into a small number of primitives, which compose into a small programming language. We will go through some example programs in the language, and discuss how a given RASP program relates to the transformer architecture.
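
To give a flavor of the primitives, below is a minimal Python sketch of two RASP-style operations, select and aggregate, used to reverse a sequence. The primitive names follow the RASP paper, where select abstracts an attention pattern and aggregate abstracts the attention-weighted average; this Python emulation is illustrative only, not RASP's actual implementation or necessarily the form presented in the talk.

```python
# Illustrative Python emulation of two RASP-style primitives.
# `select` and `aggregate` are the names used in the RASP paper;
# this rendering is a hypothetical sketch, not RASP itself.

def select(keys, queries, predicate):
    """Build a boolean selection matrix, abstracting attention:
    entry [q][k] is True when predicate(keys[k], queries[q]) holds."""
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selection, values):
    """Average the selected values per row, abstracting the
    attention-weighted sum. A single selected value is returned as-is."""
    out = []
    for row in selection:
        chosen = [v for v, sel in zip(values, row) if sel]
        if len(chosen) == 1:
            out.append(chosen[0])
        elif chosen:
            out.append(sum(chosen) / len(chosen))
        else:
            out.append(None)
    return out

def reverse(tokens):
    """Reverse a sequence: each position attends to its mirror position."""
    n = len(tokens)
    indices = list(range(n))
    opposite = [n - 1 - i for i in indices]  # mirror position for each index
    flip = select(indices, opposite, lambda k, q: k == q)
    return aggregate(flip, tokens)

print(reverse(list("hello")))  # ['o', 'l', 'l', 'e', 'h']
```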

Date
Mar 1, 2022 5:00 PM — 6:00 PM
Location
CAB D78

Bio

Gail is a PhD student at the Technion, working with Eran Yahav and Yoav Goldberg. Her research focuses on understanding sequential neural networks (such as RNNs and transformers) through the lens of formal language theory.