# Zero-shot Learning via Simultaneous Generating and Learning (@ NeurIPS 2019)

### Hyeonwoo Yu, Beomhee Lee

Their approach is based on using a generative model (a Variational Auto-Encoder) to optimize the likelihood of samples from seen classes given their class embeddings. Concretely, they use a VAE to model $p(x|z; \theta)$: the probability of a data point $x$ given its class embedding $z$ and the model parameters $\theta$. They then optimize this likelihood with an Expectation-Maximization-style procedure: in the E-like step, points for unseen classes are sampled from the current model, while points for seen classes are drawn directly from the training set; in the M-like step, the parameters are updated to maximize the likelihood of all of these points. At test time, plugging in the class embeddings of unseen classes yields a generative model for them as well. Moreover, the test-time likelihoods can be used for classification directly, instead of needing to train a classifier on top of generated samples.
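The alternating scheme can be sketched on a toy problem. This is only an illustration of the idea, not the paper's method: instead of a VAE, the generative model here is a simple conditional Gaussian $x \sim \mathcal{N}(Wz, \sigma^2 I)$ whose mean is linear in the class embedding $z$, so the M-step reduces to least squares. The class names, embeddings, and dimensions are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): class embeddings z in R^2,
# data x in R^3, generator x ~ N(W z, sigma^2 I) with parameters W.
emb = {"cat": np.array([1.0, 0.0]),   # seen class
       "dog": np.array([0.0, 1.0]),   # seen class
       "fox": np.array([0.7, 0.7])}   # unseen class: no training samples
W_true = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, -1.0]])
sigma = 0.1

# Training data exists only for the seen classes.
seen = ["cat", "dog"]
X = {c: W_true @ emb[c] + sigma * rng.standard_normal((50, 3)) for c in seen}

W = np.zeros((3, 2))  # model parameters (theta)
for _ in range(20):
    # E-like step: real samples for seen classes,
    # model-generated samples for the unseen class.
    Z, Xs = [], []
    for c, z in emb.items():
        pts = X[c] if c in seen else W @ z + sigma * rng.standard_normal((50, 3))
        Z.append(np.tile(z, (len(pts), 1)))
        Xs.append(pts)
    Z, Xs = np.vstack(Z), np.vstack(Xs)
    # M-like step: maximize the Gaussian likelihood over all points,
    # which for this model is a least-squares fit of W.
    W = np.linalg.lstsq(Z, Xs, rcond=None)[0].T

def log_lik(x, c):
    """Log-likelihood of x under class c (up to a constant)."""
    r = x - W @ emb[c]
    return -0.5 * r @ r / sigma**2

# Likelihood-based classification at test time: no extra classifier,
# just argmax over class embeddings, including the unseen one.
x_test = W_true @ emb["fox"]
pred = max(emb, key=lambda c: log_lik(x_test, c))
print(pred)  # the unseen class wins on likelihood
```

Because the seen-class embeddings span the embedding space, the M-step recovers the generator well, and a point drawn from the unseen class is then assigned to it purely by comparing likelihoods, mirroring the paper's classifier-free test-time procedure.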