Recent work on large language models relies on the intuition that most natural language processing tasks can be described via natural language instructions, and that models trained on these instructions show strong zero-shot performance on several standard datasets. However, these models, even though impressive, still perform poorly on a wide range of tasks outside of their respective training and evaluation sets. To address this limitation, we argue that a model should be able to keep extending its knowledge and abilities without forgetting previous skills. In spite of the limited success of Continual Learning, we show that fine-tuned language models can be continual learners. We empirically investigate the reason for this success and conclude that Continual Learning emerges from self-supervised pre-training. Our resulting model, Continual-T0 (CT0), is able to learn 8 new diverse language generation tasks while still maintaining good performance on previous tasks, spanning a total of 70 datasets. Finally, we show that CT0 is able to combine instructions in ways it was never trained for, demonstrating some level of instruction compositionality.
Tuhin Chakrabarty is a PhD student in Computer Science at Columbia University. Within the department, he is part of the Natural Language Processing group, where he is advised by Smaranda Muresan. His research is supported by the Columbia Center of Artificial Intelligence & Technology (CAIT) & Amazon Science Ph.D. Fellowship. His research interests are broadly in Natural Language Processing and Machine Learning, with a special focus on Language Generation. His overarching research question centers on how we can control large language models to understand, interpret, or generate creative text. Recently, he has also been working on Continual Learning of large language models. https://tuhinjubcse.github.io/