# Deep Learning for Symbolic Mathematics (@ ICLR 2020)

### Guillaume Lample, François Charton

As for the actual results, this mainly shows that deep learning models are powerful density estimators, even for quite complicated distributions. It does not follow, however, that the model replaces the computer algebra systems the authors compare against. A later AAAI paper showed that their models are not as robust as the headline results might suggest; I'll comment on that later and link it from here. This is part of a more general theme with pure deep learning models: they often fail in strange ways once you leave their training distribution. You can make the training distribution very large (e.g., by generating random expressions, as the authors do), but even then there is no guarantee that the model generalizes nicely beyond it. And for combinatorial domains like expression trees, no "large" finite dataset can cover the variation one can produce even within quite modest depths.
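To see how quickly the space of expression trees outgrows any finite dataset, here is a back-of-the-envelope count (my own sketch, not from the paper) for a toy grammar with two leaf symbols, two unary operators, and two binary operators:

```python
# Toy grammar sizes (hypothetical, for illustration only):
# e.g., leaves {x, c}, unary ops {sin, exp}, binary ops {+, *}.
LEAVES, UNARY, BINARY = 2, 2, 2

def count_trees(max_depth: int) -> int:
    """Number of distinct expression trees of depth at most max_depth."""
    if max_depth == 0:
        return LEAVES  # a bare leaf
    smaller = count_trees(max_depth - 1)
    # A tree is a leaf, a unary op over one subtree,
    # or a binary op over two subtrees.
    return LEAVES + UNARY * smaller + BINARY * smaller ** 2

for d in range(5):
    print(d, count_trees(d))
# Depths 0..3 give 2, 14, 422, 357014 trees;
# by depth 4 the count is already in the hundreds of billions.
```

Even with this tiny vocabulary (the paper's operator set is larger), the count roughly squares with each added level of depth, so sampling random expressions can only ever cover a vanishing fraction of the space.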