Gabriel Poesia

An Explanation of In-context Learning as Implicit Bayesian Inference (@ ICLR 2022)

Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma

Link

This paper offers an explanation of in-context learning in language models: the claim is that models learn to implicitly perform Bayesian inference, inferring a latent concept from the prompt and then predicting under it.
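Concretely, the mechanism they have in mind is that next-token prediction marginalizes over the latent concept, so the prompt acts as evidence for which concept to predict under. A rough rendering of that idea (my notation, not a statement lifted verbatim from the paper):

$$
p(y \mid \text{prompt}) = \int_{\Theta} p(y \mid \text{prompt}, \theta)\, p(\theta \mid \text{prompt})\, d\theta
$$

As the prompt accumulates examples drawn from a concept $\theta^*$, the posterior $p(\theta \mid \text{prompt})$ concentrates on $\theta^*$, so the prediction approaches $p(y \mid \text{prompt}, \theta^*)$: the model "locates" the concept rather than doing any explicit parameter update.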

One observation that makes in-context learning intriguing is that language models are trained on sequences that qualitatively differ from the prompts we use at test-time. In particular, sequences of raw input-output examples are quite unusual for the Web, but LMs can complete them with correct (or at least on-topic) outputs nonetheless.

The paper's setup is the following:

- Pretraining documents are generated by first sampling a latent concept $\theta$ from a prior over a family $\Theta$, and then sampling the document's tokens from a Hidden Markov Model whose dynamics are determined by $\theta$.
- Test-time prompts come from a different distribution: independent input-output examples are drawn from a single prompt concept $\theta^*$ and concatenated with delimiters, which is exactly the mismatch described above.

Their analysis essentially shows that one of two things happens:

- If the prompt concept is distinguishable from the other concepts in the family (a condition stated in terms of KL divergence), the in-context predictor converges to the Bayes-optimal predictor as the number of examples grows, which is where the optimality in $0-1$ loss comes from.
- Even when distinguishability fails, the expected $0-1$ error still decreases as each individual example gets longer, since longer examples carry more signal about the latent concept.
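To make the posterior-concentration mechanism concrete, here is a toy simulation of the distinguishable case. It is my own simplification, not code from the paper: concepts are plain conditional distributions $p(y \mid x)$ rather than HMMs, but the behavior it illustrates is the one the theory relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "concepts": each is a conditional distribution p(y | x) over 3 outputs for 4 inputs.
# (Illustrative stand-in for the paper's HMM-defined concepts.)
n_concepts, n_inputs, n_outputs = 5, 4, 3
concepts = rng.dirichlet(np.ones(n_outputs), size=(n_concepts, n_inputs))
prior = np.full(n_concepts, 1.0 / n_concepts)

true_concept = 2  # the latent concept that generates the prompt

def sample_example(theta):
    x = rng.integers(n_inputs)
    y = rng.choice(n_outputs, p=concepts[theta, x])
    return x, y

# Accumulate in-context examples and track the posterior over concepts.
log_post = np.log(prior)
for n_examples in range(1, 41):
    x, y = sample_example(true_concept)
    log_post += np.log(concepts[:, x, y])      # Bayes update with one more example
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    if n_examples % 10 == 0:
        print(f"{n_examples:3d} examples: p(true concept | prompt) = {post[true_concept]:.3f}")

# The posterior concentrates on the prompt's concept, so the marginal prediction
# sum_theta p(y | x, theta) p(theta | prompt) approaches the Bayes-optimal predictor
# under the true concept -- the mechanism the paper attributes to in-context learning.
```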

They then construct a synthetic dataset in which concepts and documents follow the HMM-induced distribution from their theory, and show that both Transformers and LSTMs can perform in-context learning on it, and that increasing model scale improves in-context performance even when the training loss stays the same. This last bit is an interesting observation even in this small-scale setting.
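For intuition, here is a rough sketch of that generative process. This is my own toy reconstruction, not the paper's released dataset (which they call GINC): the vocabulary size, number of hidden states, example lengths, and delimiter handling below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

VOCAB, STATES, N_CONCEPTS = 20, 6, 4   # placeholder sizes, not the paper's actual settings
DELIM = VOCAB                          # reserve an extra token id as the example delimiter

def random_concept():
    """A 'concept' is an HMM: a transition matrix over hidden states plus an emission matrix."""
    trans = rng.dirichlet(np.ones(STATES), size=STATES)
    emit = rng.dirichlet(np.ones(VOCAB), size=STATES)
    return trans, emit

concepts = [random_concept() for _ in range(N_CONCEPTS)]

def sample_sequence(concept, length):
    """Roll the HMM for `length` steps, emitting one token per hidden state."""
    trans, emit = concept
    state = rng.integers(STATES)
    tokens = []
    for _ in range(length):
        tokens.append(rng.choice(VOCAB, p=emit[state]))
        state = rng.choice(STATES, p=trans[state])
    return tokens

# Pretraining document: pick a concept from the prior, then sample one long sequence from it.
doc_concept = concepts[rng.integers(N_CONCEPTS)]
document = sample_sequence(doc_concept, length=100)

# Prompt: independent short examples from a single concept, joined by delimiters --
# a distribution the model never sees verbatim during pretraining.
prompt_concept = concepts[0]
prompt = []
for _ in range(5):
    prompt.extend(sample_sequence(prompt_concept, length=8))
    prompt.append(DELIM)

print(len(document), len(prompt))
```

The point of the sketch is just the shape of the mismatch: pretraining sees long, coherent HMM rollouts, while the prompt is a concatenation of short, independent rollouts from a single concept.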

One thing I missed from the paper's argument was the link between Transformers trained via maximum likelihood and their Bayesian predictor. As far as I followed, the argument is that the Bayesian predictor performs in-context learning and is optimal in $0-1$ loss. But showing that the Bayesian predictor is optimal is not conceptually the same as showing that any model reaching $0-1$ loss optimality behaves like the Bayesian predictor, so it is not obvious that a trained Transformer actually implements it. Even so, I do in general buy their overall take on what in-context learning even is.