Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, text simplification, and style transfer. These tasks share a common trait – they exhibit a large amount of textual overlap between the source and target texts. Text-editing models take advantage of this observation and learn to generate the output by predicting edit operations applied to the source sequence. In contrast, seq2seq models generate the output word by word from scratch, which makes them slow at inference time. Text-editing models offer several benefits over seq2seq models, including faster inference, higher sample efficiency, and better control and interpretability of the outputs. This talk provides an introduction to text-editing models and a closer look at two models developed on our team: LaserTagger and EdiT5. We also discuss applications of text-editing models and the challenges, such as hallucination and bias mitigation, often faced when productionizing text-generation models.
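The edit-operation idea can be illustrated with a minimal sketch. The tag format below is hypothetical, loosely inspired by LaserTagger's KEEP/DELETE tags with optional inserted phrases; it is not the actual implementation:

```python
# Illustrative sketch (hypothetical tag format, not the LaserTagger code):
# each source token receives an edit tag such as KEEP, DELETE, or
# OP|phrase, where "phrase" (with words joined by "_") is inserted
# before the kept or deleted token.

def apply_edits(source_tokens, tags):
    """Reconstruct the target text by applying per-token edit tags."""
    output = []
    for token, tag in zip(source_tokens, tags):
        op, _, added = tag.partition("|")
        if added:                     # insert the attached phrase first
            output.extend(added.split("_"))
        if op == "KEEP":              # keep the source token
            output.append(token)
        # op == "DELETE": drop the source token
    return " ".join(output)

# Grammatical error correction example:
source = ["He", "go", "to", "school", "."]
tags = ["KEEP", "DELETE|goes", "KEEP", "KEEP", "KEEP"]
print(apply_edits(source, tags))  # -> He goes to school .
```

Because most tokens are simply kept, the model only has to predict a small number of non-trivial edits, which is what makes this formulation sample-efficient and fast compared to generating every token from scratch.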
Eric Malmi is a Senior Research Scientist at Google, Zürich. His research focuses on developing Natural Language Generation (NLG) methods for Google Assistant. He received his PhD (2018) in Computer Science from Aalto University, Finland. During his studies, Eric did internships at Google, Qatar Computing Research Institute, Idiap Research Institute, and CERN.