This paper introduces the use of an interpreter during the process of synthesis. It's a very simple yet powerful idea, and later work was already building on it less than a year later. Put simply, a problem with neural program synthesis is that it usually happens in syntax space only. A seq2seq model will embed the specification somehow and then predict the program, token by token. But this means the embedding never actually gets to represent what the program does, just what it looks like. Yet these are programs, and programs have well-defined semantics - we can run them. So the idea is that, instead of just predicting the program from input-output examples, you can have the synthesizer predict it from input, output + _current state_. This is the key idea. Once the synthesizer predicts the next instruction, you can run it on the current state in your interpreter, see how it modifies the state, and feed that back into the synthesizer's input.
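The loop described above can be sketched in a few lines. This is a toy illustration, not the paper's actual architecture: `model` is a hypothetical stand-in for the neural synthesizer, and the instruction set is a made-up two-operation language over integers.

```python
def run_instruction(state, instr):
    """Tiny interpreter: each instruction maps a state (an int here) to a new state."""
    ops = {"inc": lambda s: s + 1, "double": lambda s: s * 2}
    return ops[instr](state)

def synthesize(inputs, outputs, model, max_len=10):
    """Execution-guided synthesis: predict one instruction at a time,
    run it, and feed the resulting states back to the model."""
    states = list(inputs)  # start from the example inputs
    program = []
    for _ in range(max_len):
        if states == outputs:  # all examples satisfied
            return program
        # The model conditions on input, output, AND the current state --
        # this execution feedback is the key idea.
        instr = model(inputs, outputs, states)
        program.append(instr)
        states = [run_instruction(s, instr) for s in states]
    return None  # gave up within the length budget
```

With a trivial "model" that always emits `"inc"`, `synthesize([1], [3], model)` finds `["inc", "inc"]` after watching the state move from 1 to 2 to 3.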
The second idea in the paper is quite traditional ML: using an ensemble of synthesizers instead of just one. Ensembles have been known to help for ages, so unsurprisingly, having the synthesizers vote, or letting each find its own solution and taking the simplest one, makes things better.
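The "take the simplest solution" variant is easy to sketch. Here each synthesizer is assumed to be a callable returning either a program (a list of instructions) or `None` on failure; the tie-breaking rule (shortest program) is one reasonable notion of "simplest", not necessarily the paper's exact criterion.

```python
def ensemble_synthesize(inputs, outputs, synthesizers):
    """Run every synthesizer in the ensemble and keep the shortest valid program."""
    candidates = []
    for syn in synthesizers:
        program = syn(inputs, outputs)
        if program is not None:
            candidates.append(program)
    # Simplicity bias: among all programs that satisfy the spec, shortest wins.
    return min(candidates, key=len) if candidates else None
```

Because the members can fail independently, the ensemble succeeds whenever at least one member does, which is where most of the gain comes from.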
So yes, people at home, don't be scared to run your programs!