This paper describes Holophrasm, most likely the first automated theorem prover for a higher-order formal system (Metamath) that uses no hand-crafted features and is based end-to-end on deep learning. While its absolute performance was not astonishing, Holophrasm is remarkable for showing that the approach is viable in practice. The absolute results could also be improved simply by replacing the 2016 neural architectures (e.g. the GRUs) with more modern and larger alternatives (Transformers). Most of the relevant ideas were already there, for example the adaptation of MCTS to a setting with an infinite action space and two distinct types of nodes ("proposition" and "theorem application" nodes). Many of the more recent papers on automated theorem proving in Lean using language models are, in fact, not far from Holophrasm in their basic ideas.
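To make the two-node-type MCTS concrete, here is a minimal sketch of the tree structure and selection step. All names are hypothetical illustrations, not Holophrasm's actual code: the `propose_actions` stub stands in for the learned networks (in Holophrasm, a relevance network ranks theorems and a generative network produces substitutions), which is how the infinite action space is truncated to a finite candidate set at each proposition node.

```python
import math

class PropositionNode:
    """A goal to prove; its children are candidate theorem applications."""
    def __init__(self, goal):
        self.goal = goal
        self.children = []   # list of TheoremApplicationNode
        self.visits = 0
        self.value = 0.0

class TheoremApplicationNode:
    """One way to apply a theorem; its children are the resulting subgoals."""
    def __init__(self, action, prior):
        self.action = action
        self.prior = prior   # policy probability from the learned networks
        self.children = []   # list of PropositionNode (subgoals)
        self.visits = 0
        self.value = 0.0

def propose_actions(goal, k=3):
    # Stand-in for the policy: enumerate a finite set of candidate
    # theorem applications with priors, instead of the infinite space.
    return [(f"apply_thm_{i}_to_{goal}", 1.0 / k) for i in range(k)]

def select_child(node, c_puct=1.0):
    # PUCT-style selection over the theorem-application children,
    # balancing estimated value against the prior and visit counts.
    def score(ch):
        q = ch.value / ch.visits if ch.visits else 0.0
        u = c_puct * ch.prior * math.sqrt(node.visits + 1) / (1 + ch.visits)
        return q + u
    return max(node.children, key=score)
```

A proof succeeds when every subgoal below some theorem-application child is closed, which is why the two node types alternate down the tree (an AND/OR structure: a proposition needs only one working application, but an application needs all of its subgoals).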