Another interesting paper on neural program synthesis. As in previous work from Kevin Ellis, they deal with the domain of graphics programs (programs that output an image), but now also with string-manipulation programs. Unlike their NeurIPS paper from the year before, during generation they run the partial program, get the resulting image, and feed it back into the next prediction. They also train a value network to predict how likely the current partial program, given its output, is to eventually lead to a correct program. This likelihood is used to drive sequential Monte Carlo (SMC) enumeration. SMC makes for nice pictures.
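The write-execute-assess loop driven by SMC can be sketched roughly as follows. Everything here is an illustrative stand-in, not the paper's actual models: a toy "policy" that proposes the next token, a toy "executor", and a toy "value network" that scores partial outputs against the spec.

```python
import random

def propose(partial):
    # stand-in policy network: append a random instruction token
    return partial + [random.choice("abcd")]

def execute(partial):
    # stand-in executor: the "output" of a partial program is just its tokens
    return "".join(partial)

def value(output, spec):
    # stand-in value network: score by length of the matching prefix,
    # i.e. "how promising does this partial output look?"
    n = sum(1 for a, b in zip(output, spec) if a == b)
    return (n + 1) / (len(spec) + 1)

def smc_synthesize(spec, n_particles=50, n_steps=4):
    """One SMC pass: propose (write), run (execute), weight by value (assess), resample."""
    particles = [[] for _ in range(n_particles)]
    for _ in range(n_steps):
        particles = [propose(p) for p in particles]             # write
        weights = [value(execute(p), spec) for p in particles]  # execute + assess
        # resample: low-value lineages die out, promising ones multiply
        particles = random.choices(particles, weights=weights, k=n_particles)
    return max(particles, key=lambda p: value(execute(p), spec))

best = smc_synthesize("abca")
```

The resampling step is where the value network earns its keep: a particle whose executed output scores poorly is unlikely to be drawn again, so its whole lineage fades from the population.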
Unlike Xinyun's ICLR paper and the authors' own prior work, though, their graphics language this time is purely sequential, i.e. it has no loops or conditionals. They also need to tweak RobustFill's DSL for string manipulation so that it has a well-defined output at every step; otherwise their approach wouldn't directly apply. The idea might need some tweaks to handle control-flow constructs and partial programs well, although in principle there are simple things to try.
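The "well-defined output at every step" requirement is easy to see with a toy sequential string DSL: each instruction maps a string state to a string state, so any program prefix can be executed and handed to a value network. The operator names below are illustrative, not RobustFill's actual DSL.

```python
def run(program, inp):
    """Execute a list of (op, arg) instructions; return every intermediate state."""
    state, trace = inp, []
    for op, arg in program:
        if op == "append":
            state = state + arg
        elif op == "upper":
            state = state.upper()
        elif op == "drop_prefix":
            state = state[arg:]
        trace.append(state)  # a concrete, assessable output after each step
    return trace

trace = run([("drop_prefix", 6), ("upper", None), ("append", "!")], "hello world")
# trace == ["world", "WORLD", "WORLD!"]
```

With loops or conditionals this property breaks down: a half-written loop body has no obvious semantics, which is presumably why the graphics language here stays sequential.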
But in any case, again a nice use of execution-guided synthesis. SMC plus a value network seems like a nice addition: before running a program you might be confident it will work, but after seeing its output you realize it's actually not going to work at all. If the value network tells you this, that program's lineage will eventually be discarded by SMC resampling. Quite a nicely assembled toolset.