# Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (@ arXiv 2022)

### Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi

The prompt for each task certainly plays a huge role in performance. This is clearly an "existential" result: when shown just the right prompt examples, ones that decompose similar tasks in useful ways, the paper shows that current large language models can successfully decompose new (and harder versions of) tasks. But deciding which decomposition is useful for which task is still the prompt designer's job. For example, I would guess that, if asked to propose a decomposition of the task "take a sequence and concatenate the last letter of each of its words", the model would hardly output the decomposition that the authors propose (taking each word, extracting its last letter, and concatenating it to the output so far). How to generalize to the meta-task of decomposing new tasks is an important open problem that these results suggest should be addressed.
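To make the decomposition concrete, here is a minimal Python sketch of the last-letter-concatenation task solved the way the paper's decomposition describes: process one word at a time, extract its last letter, and append it to the output built so far. The function name and example words are my own illustration, not from the paper's prompts.

```python
def last_letter_concat(words):
    """Solve the last-letter concatenation task via the proposed
    decomposition: for each word, extract its last letter and
    concatenate it to the output accumulated so far."""
    output = ""
    for word in words:
        last = word[-1]   # subproblem: last letter of this word
        output += last    # combine with the partial answer so far
    return output

print(last_letter_concat(["think", "machine", "learning"]))  # -> "keg"
```

Each loop iteration corresponds to one of the easy subproblems the model is prompted to solve in sequence, with the running `output` playing the role of the answers to previously solved subproblems.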