Anej Svete
Latest
Gumbel Counterfactual Generation from Language Models
Information Locality as an Inductive Bias for Neural Language Models
Training Neural Networks as Recognizers of Formal Languages
Unique Hard Attention: A Tale of Two Sides
On Efficiently Representing Regular Languages as RNNs
Transformers Can Represent n-gram Language Models
Lower Bounds on the Expressivity of Recurrent Neural Language Models
On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning
What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages
On the Representational Capacity of Recurrent Neural Language Models
Recurrent Neural Language Models as Probabilistic Finite-state Automata
Algorithms for Weighted Finite-State Automata with Failure Arcs