
MaskGAN: Better Text Generation via Filling in the ______


Metadata

  • Authors: William Fedus, Ian Goodfellow and Andrew M. Dai
  • Organization: Google Brain
  • Conference: ICLR 2018
  • Paper: https://arxiv.org/pdf/1801.07736.pdf
  • Code: https://github.com/tensorflow/models/tree/master/research/maskgan

howardyclo commented on May 28, 2018

Summary

Neural text generation models are typically autoregressive, trained by maximizing likelihood (equivalently, minimizing perplexity) and evaluated with validation perplexity. The authors argue that this training and evaluation regime can produce poor sample quality: during generation, the model is forced to condition on prefixes it never conditioned on at training time, leading to unpredictable dynamics in the RNN's hidden state. They therefore propose to improve sample quality with an actor-critic conditional GAN that fills in masked tokens conditioned on the surrounding context.
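To make that train/test mismatch concrete, here is a minimal, hypothetical PyTorch sketch (my own illustration, not the paper's TensorFlow implementation): a toy GRU language model is trained with teacher forcing on gold prefixes, but at generation time its hidden state evolves from the model's own samples, i.e. from prefixes it never saw during training.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, hidden, T = 50, 32, 10

emb = nn.Embedding(vocab, hidden)
rnn = nn.GRUCell(hidden, hidden)
head = nn.Linear(hidden, vocab)

gold = torch.randint(0, vocab, (T,))       # one toy training sentence

# Training (teacher forcing / MLE): every step conditions on the GOLD prefix.
h, x, nll = torch.zeros(1, hidden), gold[0], 0.0
for t in range(1, T):
    h = rnn(emb(x).unsqueeze(0), h)
    nll = nll - torch.log_softmax(head(h), -1)[0, gold[t]]
    x = gold[t]                             # next input comes from the data

# Generation: every step conditions on the model's OWN previous sample,
# so the hidden state visits prefixes never seen at training time
# (the mismatch the authors blame for poor sample quality).
h, x = torch.zeros(1, hidden), gold[0]
sample = [x.item()]
for t in range(1, T):
    h = rnn(emb(x).unsqueeze(0), h)
    x = torch.distributions.Categorical(logits=head(h)).sample()[0]
    sample.append(x.item())
```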

Related Work

  • Professor Forcing: makes the hidden-state dynamics predictable by matching them between teacher-forced and free-running modes.
  • Scheduled Sampling: randomly conditions on the model's own sampled words at training time (a minimal sketch follows this list).
  • Both approaches improve sample quality only indirectly; neither optimizes a cost function that explicitly encourages high sample quality, which this paper does.
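For comparison, here is a hypothetical sketch of the scheduled-sampling idea under the same toy setup as above (`p_gold` is an assumed constant here; the original paper anneals it during training): with some probability the model is fed its own sample instead of the gold token, so training-time prefixes look more like generation-time ones.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, hidden, T = 50, 32, 10
emb = nn.Embedding(vocab, hidden)
rnn = nn.GRUCell(hidden, hidden)
head = nn.Linear(hidden, vocab)

gold = torch.randint(0, vocab, (T,))
p_gold = 0.75      # assumed constant here; annealed in the original paper

h, x, nll = torch.zeros(1, hidden), gold[0], 0.0
for t in range(1, T):
    h = rnn(emb(x).unsqueeze(0), h)
    logits = head(h)
    nll = nll - torch.log_softmax(logits, -1)[0, gold[t]]
    # Scheduled sampling: sometimes feed back the model's own sample so the
    # conditioning distribution at training time resembles generation time.
    if torch.rand(()).item() < p_gold:
        x = gold[t]
    else:
        x = torch.distributions.Categorical(logits=logits).sample()[0]
```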

howardyclo commented on May 30, 2018