An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation
Abstract
Generating semantically coherent responses is still a major challenge in dialogue generation. Unlike conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, and it demands an understanding of utterance-level semantic dependency, i.e., the relation between the overall meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model generates responses with higher coherence and fluency than baseline models.
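The architecture described above can be sketched at a high level: two auto-encoders compress input and response utterances into fixed-size latent vectors, and a mapping module connects the input-side latent space to the response-side one. The sketch below is an illustrative assumption, not the authors' implementation; the linear/tanh layers, dimensions, and function names are all hypothetical stand-ins for the paper's components.

```python
import numpy as np

rng = np.random.default_rng(0)

class AutoEncoder:
    """Toy linear auto-encoder: utterance vector -> latent -> reconstruction.
    (Illustrative stand-in for the paper's sequence auto-encoders.)"""
    def __init__(self, dim, latent):
        self.W_enc = rng.normal(scale=0.1, size=(latent, dim))
        self.W_dec = rng.normal(scale=0.1, size=(dim, latent))

    def encode(self, x):
        # Utterance-level semantic representation
        return np.tanh(self.W_enc @ x)

    def decode(self, z):
        # Reconstruction of the utterance vector from the latent code
        return self.W_dec @ z

class MappingModule:
    """Maps the input-side latent code into the response-side latent space."""
    def __init__(self, latent):
        self.W = rng.normal(scale=0.1, size=(latent, latent))

    def __call__(self, z_src):
        return np.tanh(self.W @ z_src)

def respond(src_utt, src_ae, tgt_ae, mapper):
    """Forward pass: encode the input, map its latent code, decode a response."""
    z_src = src_ae.encode(src_utt)   # input auto-encoder
    z_tgt = mapper(z_src)            # mapping module
    return tgt_ae.decode(z_tgt)      # response auto-encoder's decoder

# Toy usage with made-up dimensions
dim, latent = 16, 4
src_ae, tgt_ae = AutoEncoder(dim, latent), AutoEncoder(dim, latent)
mapper = MappingModule(latent)
response_vec = respond(rng.normal(size=dim), src_ae, tgt_ae, mapper)
print(response_vec.shape)  # (16,)
```

In this reading, the two auto-encoders are trained to reconstruct their own side (inputs or responses), while the mapping module is trained to align the two utterance-level representations, which is what lets the model capture the input-response dependency as a relation between whole-utterance meanings.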
BibTeX
@inproceedings{Luo2018AEM,
author = {Luo, Liangchen and Xu, Jingjing and Lin, Junyang and Zeng, Qi and Sun, Xu},
title = {An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
month = {November},
year = {2018},
address = {Brussels, Belgium},
pages = {702--707},
publisher = {Association for Computational Linguistics}
}