# Plug and Play Language Models: A Simple Approach to Controlled Text Generation (@ ICLR 2020)

### Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, Rosanne Liu

This paper proposes a simple approach for controllable text generation that doesn't require re-training or fine-tuning the base language model. Given the desired attribute $a$ of the generated text, one must first train a model $p(a|x)$ that computes the probability that a certain generated prefix $x$ will have attribute $a$ (a concrete example could be sentiment; $p(a|x)$ would amount to sentiment classification). This model can usually be simple, as discrimination (telling whether a sentence is positive or negative) is easier than generation (modeling the distribution of positive sentences). Given this model, one can sample from $p(x|a)$ by using Bayes and the vanilla language model $p(x)$, since $p(x|a) \propto p(x, a) = p(x)p(a|x)$. A cool, simple idea.
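A minimal sketch of the Bayes-rule reweighting described above, using a toy vocabulary with made-up probabilities (the functions, token scores, and vocabulary here are all illustrative assumptions, not the paper's actual implementation, which steers the LM's hidden activations with gradients from $p(a|x)$):

```python
# Toy base language model p(token | prefix): a fixed next-token
# distribution over a tiny vocabulary (illustrative only).
def lm_next_token_probs(prefix):
    return {"great": 0.2, "terrible": 0.2, "movie": 0.6}

# Toy attribute model p(a = positive | x): probability that the
# extended prefix carries the desired attribute (here, sentiment).
def attribute_prob(prefix_plus_token):
    scores = {"great": 0.9, "terrible": 0.05, "movie": 0.5}
    return scores[prefix_plus_token[-1]]

def conditional_next_token_probs(prefix):
    # Bayes: p(token | prefix, a) ∝ p(token | prefix) * p(a | prefix + token)
    unnorm = {t: p * attribute_prob(prefix + [t])
              for t, p in lm_next_token_probs(prefix).items()}
    z = sum(unnorm.values())  # normalize, since we only know p up to a constant
    return {t: v / z for t, v in unnorm.items()}

probs = conditional_next_token_probs(["the"])
```

With these toy numbers, "great" gets boosted and "terrible" suppressed relative to the base LM, which is exactly the reweighting effect the factorization $p(x)p(a|x)$ describes.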