Research papers about Chain of Thought (CoT)
Chain-of-thought
A summary and classification of papers related to CoT.
Original work on CoT:
- First release time: 28 Jan 2022
- Title: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- Conference: NeurIPS 2022
Content
- Related Surveys
- CoT Enhancement
- Two main types of CoTs
- Why does CoT work? - Analysis of CoT
- CoT Evaluation
- Other types of CoTs
Related Surveys
| First release time | Title | Conference |
|---|---|---|
| 27 Sep 2023 | A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future | arXiv |
| 15 Feb 2023 | Augmented Language Models: a Survey | arXiv |
| 4 Jan 2023 | Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes | arXiv |
| 20 Dec 2022 | Towards Reasoning in Large Language Models: A Survey | ACL 2023 (Findings) |
| 19 Dec 2022 | Reasoning with Language Model Prompting: A Survey | ACL 2023 |
CoT Enhancement
| First release time | Title | What Changes? |
|---|---|---|
| 21 Mar 2022 | Self-Consistency Improves Chain of Thought Reasoning in Language Models | replaces naive greedy decoding with self-consistency: sample several reasoning paths and take the majority-vote answer |
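The mechanism is simple enough to sketch in a few lines: sample several reasoning chains at non-zero temperature and keep the most frequent final answer instead of trusting a single greedy decode. In the sketch below, `sample_chain` and `extract_answer` are hypothetical helpers standing in for a sampled LLM completion and an answer parser; they are not part of any particular API.

```python
import random
from collections import Counter

random.seed(0)  # reproducible demo output

def sample_chain(question: str) -> str:
    """Hypothetical stand-in for one sampled (temperature > 0) CoT completion.
    A real implementation would call an LLM; here we return canned outputs."""
    return random.choice([
        "There are 3 + 4 = 7 apples. The answer is 7.",
        "3 apples plus 4 apples gives 7. The answer is 7.",
        "3 * 4 = 12. The answer is 12.",   # an occasional faulty chain
    ])

def extract_answer(chain: str) -> str:
    # Take whatever follows the final "The answer is" marker.
    return chain.rsplit("The answer is", 1)[-1].strip(" .")

def self_consistency(question: str, n_samples: int = 10) -> str:
    # Sample several independent reasoning paths, then majority-vote the answers.
    answers = [extract_answer(sample_chain(question)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("If I have 3 apples and buy 4 more, how many do I have?"))
```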
Two main types of CoTs
Zero-shot:
| First release time | Title | Conference |
|---|---|---|
| 6 May 2023 | Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models | ACL 2023 |
| 3 Nov 2022 | Large Language Models Are Human-Level Prompt Engineers | ICLR 2023 |
| 24 May 2022 | Large Language Models are Zero-Shot Reasoners | NeurIPS 2022 |
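The zero-shot recipe from "Large Language Models are Zero-Shot Reasoners" takes two LLM calls: append the trigger phrase "Let's think step by step." to elicit a rationale, then append an answer-extraction cue to read off the result. A minimal sketch, where `complete` is a hypothetical placeholder for an LLM call (canned outputs keep it runnable):

```python
def complete(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call."""
    if "Therefore, the answer" in prompt:
        return " 9."
    return "Roger starts with 5 balls and buys 2 cans of 2, so 5 + 2 * 2 = 9."

def zero_shot_cot(question: str) -> str:
    # Stage 1: reasoning extraction with the zero-shot trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    rationale = complete(reasoning_prompt)
    # Stage 2: answer extraction, conditioning on the generated rationale.
    answer_prompt = f"{reasoning_prompt} {rationale}\nTherefore, the answer is"
    return complete(answer_prompt).strip(" .")

print(zero_shot_cot("Roger has 5 tennis balls. He buys 2 cans with 2 balls each. "
                    "How many balls does he have now?"))
```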
Few-shot: five sub-groups
- Related to code generation
- Auto CoT
- Iterative prompt CoT
- Involve question decomposition
- Mix
Related to code generation
| First release time | Title | Conference |
|---|---|---|
| 22 Nov 2022 | Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks | arXiv |
| 18 Nov 2022 | PAL: Program-aided Language Models | ICML 2023 |
| 13 Oct 2022 | Language models of code are few-shot commonsense learners | EMNLP 2022 |
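These papers share one move: the model writes executable code rather than free-form arithmetic, and an interpreter computes the final answer, which removes arithmetic slips from the chain. A rough program-of-thought-style sketch, with `generate_program` as a hypothetical stand-in for the model's code output:

```python
def generate_program(question: str) -> str:
    """Hypothetical stand-in for an LLM asked to answer with Python code
    that stores its result in a variable named `answer`."""
    return (
        "balls_initial = 5\n"
        "cans = 2\n"
        "balls_per_can = 2\n"
        "answer = balls_initial + cans * balls_per_can\n"
    )

def program_of_thought(question: str):
    # The model reasons in code; the host interpreter does the computation.
    namespace: dict = {}
    exec(generate_program(question), namespace)  # run trusted/sandboxed code only
    return namespace["answer"]

print(program_of_thought("Roger has 5 tennis balls. He buys 2 cans with 2 balls each. "
                         "How many balls does he have now?"))
```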
Involve question decomposition
| First release time | Title | Conference |
|---|---|---|
| 8 Dec 2022 | Successive Prompting for Decomposing Complex Questions | EMNLP 2022 |
| 7 Oct 2022 | Measuring and Narrowing the Compositionality Gap in Language Models | EMNLP 2023 (Findings) |
| 5 Oct 2022 | Decomposed Prompting: A Modular Approach for Solving Complex Tasks | ICLR 2023 |
| 21 May 2022 | Least-to-Most Prompting Enables Complex Reasoning in Large Language Models | ICLR 2023 |
| 19 May 2022 | Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning | ICLR 2023 |
| 15 May 2022 | SeqZero: Few-shot Compositional Semantic Parsing with Sequential Prompts and Zero-shot Models | NAACL 2022 (Findings) |
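These decomposition methods share a common loop: ask the model for subquestions first, then answer them in order while feeding earlier answers into later prompts. A minimal least-to-most-style sketch, where `complete` is again a hypothetical LLM call with canned outputs:

```python
def complete(prompt: str) -> str:
    """Hypothetical LLM call; canned outputs keep the sketch runnable."""
    if prompt.startswith("Decompose"):
        return "How many balls are in the 2 cans?; How many balls in total?"
    if "How many balls are in the 2 cans?" in prompt and "total" not in prompt:
        return "2 cans * 2 balls = 4 balls."
    return "5 + 4 = 9 balls."

def least_to_most(question: str) -> str:
    # Stage 1: decompose the problem into simpler subquestions.
    subqs = complete(f"Decompose into subquestions: {question}").split("; ")
    # Stage 2: answer each subquestion, carrying earlier Q/A pairs as context.
    context = f"Question: {question}\n"
    answer = ""
    for sq in subqs:
        answer = complete(f"{context}Subquestion: {sq}\nAnswer:")
        context += f"Subquestion: {sq}\nAnswer: {answer}\n"
    return answer  # the answer to the final (hardest) subquestion

print(least_to_most("Roger has 5 tennis balls. He buys 2 cans with 2 balls each. "
                    "How many balls does he have now?"))
```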
Auto CoT
| First release time | Title | Conference |
|---|---|---|
| 24 Feb 2023 | Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data | arXiv |
| 7 Oct 2022 | Automatic Chain of Thought Prompting in Large Language Models | ICLR 2023 |
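Auto-CoT removes the manual writing of demonstrations: cluster an unlabeled question pool, pick one representative question per cluster, and let zero-shot CoT ("Let's think step by step.") generate its rationale; the resulting question-rationale pairs become the few-shot demonstrations. The rough sketch below substitutes TF-IDF plus k-means for the sentence embeddings used in the paper, and `zero_shot_cot` is a hypothetical helper standing in for that zero-shot call:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def zero_shot_cot(question: str) -> str:
    """Hypothetical helper: would prompt an LLM with 'Let's think step by step.'"""
    return f"Let's think step by step. <model-generated rationale for: {question}>"

def build_auto_cot_demos(questions: list[str], n_clusters: int = 2) -> str:
    # 1. Cluster the unlabeled question pool so demos cover diverse problem types.
    vecs = TfidfVectorizer().fit_transform(questions)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vecs)
    # 2. Pick one representative question per cluster (here simply the first seen).
    reps = {}
    for q, label in zip(questions, labels):
        reps.setdefault(label, q)
    # 3. Generate each representative's rationale with zero-shot CoT; the
    #    resulting Q/A pairs become the few-shot demonstrations.
    return "\n\n".join(f"Q: {q}\nA: {zero_shot_cot(q)}" for q in reps.values())

pool = [
    "If I have 3 apples and eat 1, how many are left?",
    "A train travels 60 km in 1.5 hours. What is its speed?",
    "I buy 4 pens at 2 dollars each. What do I pay?",
    "A car drives 100 km at 50 km/h. How long does it take?",
]
print(build_auto_cot_demos(pool))
```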
Why does CoT work? - Analysis of CoT
| First release time | Title | Conference |
|---|---|---|
| 20 Dec 2022 | Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters | ACL 2023 |
| 25 Nov 2022 | Complementary Explanations for Effective In-Context Learning | ACL 2023 (Findings) |
| 3 Oct 2022 | Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought | ICLR 2023 |
| 16 Sep 2022 | Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango | arXiv (Google Research) |
CoT Evaluation
Other types of CoTs
| First release time | Name | Title | Conference |
|---|---|---|---|
| 28 May 2023 | Tab-CoT | Tab-CoT: Zero-shot Tabular Chain of Thought | ACL 2023 (Findings) |
| 17 May 2023 | Tree of Thoughts | Tree of Thoughts: Deliberate Problem Solving with Large Language Models | arXiv |
| 9 May 2023 | Memory of Thoughts | MoT: Pre-thinking and Recalling Enable ChatGPT to Self-Improve with Memory-of-Thoughts | arXiv |
| 22 Nov 2022 | Program of Thoughts | Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks | arXiv |
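Several of these variants change the structure over which reasoning happens rather than the prompt wording. Tree of Thoughts, for instance, has the model propose several candidate next "thoughts" per step, scores each partial state, and expands only the most promising ones, a breadth-first search rather than a single left-to-right chain. A minimal sketch, with `propose_thoughts` and `score_state` as hypothetical stand-ins for the two LLM calls:

```python
def propose_thoughts(state: str, k: int = 3) -> list[str]:
    """Hypothetical LLM call: propose k candidate next reasoning steps."""
    return [f"{state} -> step{i}" for i in range(k)]

def score_state(state: str) -> float:
    """Hypothetical LLM call: rate how promising a partial solution looks."""
    return -len(state)  # placeholder heuristic so the sketch runs

def tree_of_thoughts(question: str, depth: int = 3, beam: int = 2) -> str:
    # Breadth-first search over partial reasoning states, keeping the best
    # `beam` candidates at each level instead of committing to a single chain.
    frontier = [question]
    for _ in range(depth):
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        frontier = sorted(candidates, key=score_state, reverse=True)[:beam]
    return frontier[0]  # highest-scoring reasoning path found

print(tree_of_thoughts("24 game: make 24 from 4, 9, 10, 13"))
```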