Clara Meister
Latest
Language Model Quality Correlates with Psychometric Predictive Power in Multiple Languages
Revisiting the Optimality of Word Lengths
A Formal Perspective on Byte-Pair Encoding
A Measure-theoretic Characterization of Tight Language Models
Locally Typical Sampling
Naturalistic Causal Probing for Morpho-Syntax
On the Effect of Anticipation on Reading Times
On the Efficacy of Sampling Adapters
Testing the Predictions of Surprisal Theory in 11 Languages
Tokenization and the Noiseless Channel
On the Usefulness of Embeddings, Clusters and Strings for Text Generation Evaluation
A Cross-Linguistic Pressure for Uniform Information Density in Word Order
Mutual Information Alleviates Hallucinations in Abstractive Summarization
Analyzing Wrap-Up Effects through an Information-Theoretic Lens
Estimating the Entropy of Linguistic Distributions
Cluster-based Evaluation of Automatically Generated Text
On Decoding Strategies for Neural Text Generators
A Plug-and-Play Method for Controlled Text Generation
A surprisal–duration trade-off across and within the world's languages
Conditional Poisson Stochastic Beam Search
Conditional Poisson Stochastic Beams
Keyword2Text: A Plug-and-Play Method for Controlled Text Generation
On Homophony and Rényi Entropy
Phone-level Uniform Information Density across and within Languages
Revisiting the Uniform Information Density Hypothesis
A cognitive regularizer for language modeling
Determinantal Beam Search
Is Sparse Attention more Interpretable?
Language Model Evaluation Beyond Perplexity
Searching for Search Errors in Neural Morphological Inflection
If Beam Search is the Answer, What was the Question?
Generalized Entropy Regularization or: There's Nothing Special about Label Smoothing
Best-First Beam Search
SIGMORPHON 2020 Task 0 System Description: ETH Zürich Team