Recipes for Building an Open-Domain Chatbot

Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric Michael Smith, Y-Lan Boureau, Jason Weston

Published 2020 in the Conference of the European Chapter of the Association for Computational Linguistics (EACL)

ABSTRACT

Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we highlight other ingredients. Good conversation requires blended skills: providing engaging talking points, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models outperform existing approaches in multi-turn dialogue on engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
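The released models can be exercised with a few lines of code. The sketch below is an illustration, not the paper's own tooling (which is released through ParlAI): the checkpoint names are the Hugging Face conversions of the 400M-distilled and 2.7B models, an assumption beyond the paper itself. The decoding settings mirror the generation recipe the paper reports as best, namely beam search with a minimum-length constraint on the reply.

```python
# Minimal sketch: querying a released BlenderBot checkpoint via Hugging Face
# transformers. Checkpoint names are assumed conversions, not from the paper.
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"  # the 2.7B model is "facebook/blenderbot-3B"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

utterance = "My dog just learned to fetch. Any tips for teaching new tricks?"
inputs = tokenizer([utterance], return_tensors="pt")

# Beam search with a minimum reply length, as in the paper's best recipe.
reply_ids = model.generate(**inputs, num_beams=10, min_length=20, max_length=128)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```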
