# Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations (@ ICML 2019)

### Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

This paper proves that unsupervised learning of disentangled representations is "impossible" without inductive biases. This is a somewhat expected result (as the authors admit) when you consider *all* the generative models that could have given rise to the observed data: for any algorithm that recovers the axes of variation of the latent space, there is an infinite number of equivalent generative models that could have generated the same data, in which the recovered axes are completely "entangled" (a change along one recovered axis always causes changes along the others).
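A minimal numerical sketch (not from the paper's code, just an illustration of the argument) is the classic rotation example: an isotropic Gaussian prior is invariant under rotation, so composing a generator with a rotated latent space yields exactly the same data distribution, yet every rotated coordinate now mixes the original factors.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((100_000, 2))  # "disentangled" latent samples

# Rotate the latent space by 45 degrees.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z_rot = z @ R.T  # "entangled" latents: each axis mixes both originals

# Both latent distributions are statistically indistinguishable
# (zero mean, identity covariance), so no algorithm that only sees
# the resulting data can tell the two generative models apart.
print(np.allclose(np.cov(z.T), np.eye(2), atol=0.02))      # True
print(np.allclose(np.cov(z_rot.T), np.eye(2), atol=0.02))  # True

# Yet moving along one rotated axis changes *both* original factors:
# the rotation matrix has nonzero off-diagonal entries.
print(np.all(np.abs(R) > 0.1))  # True
```

The same argument extends to any measure-preserving bijection of the latent space, which is why the paper concludes that identifiability requires inductive biases or supervision rather than more data.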