Adversarial Learning for Neural Dialogue Generation
2017, pp. 2157–2169
Top 1% of 2017 papers
Abstract
In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem in which we jointly train two systems: a generative model that produces response sequences, and a discriminator (analogous to the human evaluator in the Turing test) that distinguishes human-generated dialogues from machine-generated ones. The outputs of the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues.
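The training loop the abstract describes can be sketched in miniature: a generator samples an utterance, the discriminator scores how human-like it is, and that score (minus a baseline) drives a REINFORCE update on the generator. The sketch below is a deliberately toy, single-token version with hypothetical names (`discriminator` is a fixed oracle standing in for a trained classifier; the paper itself uses seq2seq generators and a learned discriminator):

```python
# Toy sketch of adversarial RL for generation: discriminator output as reward,
# REINFORCE (policy gradient) update on the generator. All names are
# illustrative; this is not the paper's actual model.
import math
import random

random.seed(0)

VOCAB = ["hello", "hi", "asdf", "zzzz"]
HUMAN_LIKE = {"hello", "hi"}  # stand-in for the human-generated data

# Generator: a softmax distribution over one-token "responses".
logits = {w: 0.0 for w in VOCAB}

def softmax(ls):
    m = max(ls.values())
    exps = {w: math.exp(v - m) for w, v in ls.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

def sample(probs):
    r, acc = random.random(), 0.0
    for w, p in probs.items():
        acc += p
        if r < acc:
            return w
    return w  # numerical edge case: return last token

def discriminator(utterance):
    # Returns P(human); a fixed oracle here, a trained classifier in the paper.
    return 0.9 if utterance in HUMAN_LIKE else 0.1

LR = 0.5
BASELINE = 0.5  # simple constant baseline to reduce gradient variance
for step in range(200):
    probs = softmax(logits)
    u = sample(probs)
    reward = discriminator(u)  # discriminator score used as the RL reward
    # REINFORCE: grad of log p(u) w.r.t. logits, scaled by the advantage.
    for w in VOCAB:
        grad = (1.0 if w == u else 0.0) - probs[w]
        logits[w] += LR * (reward - BASELINE) * grad

final = softmax(logits)
best = max(final, key=final.get)
```

After training, the generator's probability mass concentrates on the "human-like" tokens, illustrating how the discriminator's reward pushes generation toward the human distribution.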
Related Papers
- Discriminator-Quality Evaluation GAN (2022), 4 citations
- Adversarial inferencing for generating dynamic adversary behavior (2003), 14 citations
- Forensic science evidence in non-adversary criminal justice systems (2018), 1 citation
- Bridging Adversarial Samples and Adversarial Networks (2019)
- A Simple Yet Efficient Method for Adversarial Word-Substitute Attack (2022)