Gabriel Poesia

ReAct: Synergizing Reasoning and Acting in Language Models (@ ICLR 2023)

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao


This paper introduced a simple idea that started quite a productive line of work on "language agents", which reason and act in natural language. This has proven to be a flexible architecture that people have been trying on all sorts of domains (e.g., in theorem proving).

By definition, an agent acts, i.e., it chooses actions to perform. But determining which actions most likely lead to the goal is non-trivial, and humans often reason in natural language (either verbally or just in their heads) to do so. This process of interleaving reasoning in natural language with choosing actions to perform can be straightforwardly implemented by prompting modern LLMs. This is essentially what language agents do. It works for reasons similar to why chain-of-thought is often better than direct prediction (i.e., why "thinking step-by-step" helps), and why using tools helps (relying on external knowledge instead of risking hallucinations). I also like the fine-tuning results, which are in line with what we see in some of our work with guide tools and STaR. In short, it's a simple, cool, flexible idea that has found lots of potential applications.
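The interleaving of reasoning and acting is easy to see in code. Below is a minimal sketch of a ReAct-style loop, not the paper's implementation: the model call and the Search tool are stubs (a real agent would prompt an LLM with few-shot Thought/Action/Observation traces and call real tools like a Wikipedia API), and all names here are hypothetical.

```python
# Minimal sketch of a ReAct-style agent loop (hypothetical names, stubbed model).
# The model alternates: emit a Thought + Action, receive an Observation, repeat.

def fake_llm(prompt: str) -> str:
    # Stand-in for a language model. A real agent would send the prompt,
    # including the trace so far, to an actual LLM.
    if "Observation:" not in prompt:
        return "Thought: I should look this up.\nAction: Search[ReAct]"
    return "Thought: I found it.\nAction: Finish[reasoning + acting]"

def search(query: str) -> str:
    # Stub tool; a real agent would query Wikipedia, a calculator, etc.
    return f"ReAct interleaves reasoning traces with tool use ({query})."

def react_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(prompt)              # model emits Thought + Action
        prompt += step + "\n"
        action = step.split("Action: ")[-1]
        if action.startswith("Finish["):     # terminal action: return answer
            return action[len("Finish["):-1]
        if action.startswith("Search["):     # tool action: run it, feed result back
            obs = search(action[len("Search["):-1])
            prompt += f"Observation: {obs}\n"
    return "no answer"

print(react_loop("What does ReAct combine?"))
```

The key design point is that the observation is appended to the prompt, so the next round of reasoning conditions on what the tool actually returned rather than on what the model hallucinates.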