
Consider introducing the ToT approach in LangChain 🐱🐱

Open rickerliang opened this issue 1 year ago • 19 comments

Feature request

https://arxiv.org/pdf/2305.10601.pdf

Motivation

Language models are increasingly being deployed for general problem-solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, there is a new framework for language model inference, “Tree of Thoughts” (ToT), which generalizes over the popular “Chain of Thought” approach to prompting language models and enables exploration over coherent units of text (“thoughts”) that serve as intermediate steps toward problem-solving. ToT allows LMs to perform deliberate decision-making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices.
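For intuition, here is a minimal sketch of the search loop the paper describes. Note this is illustrative only: `generate_thoughts` and `evaluate_thought` are hypothetical stand-ins for LLM calls, not LangChain or paper APIs.

```python
# Minimal sketch of the ToT search loop (illustrative only, not a LangChain API).
# `generate_thoughts` and `evaluate_thought` are hypothetical stand-ins for two
# LLM calls: one proposes next "thoughts", the other scores partial paths.

def tree_of_thoughts(problem, generate_thoughts, evaluate_thought,
                     max_depth=3, breadth=5, keep=2):
    frontier = [""]  # each entry is a partial chain of thoughts
    for _ in range(max_depth):
        candidates = []
        for partial in frontier:
            for thought in generate_thoughts(problem, partial, n=breadth):
                path = partial + "\n" + thought
                score = evaluate_thought(problem, path)  # e.g. a value in 0..1
                candidates.append((score, path))
        # Deliberate decision-making: keep only the most promising paths;
        # abandoned branches are effectively backtracked out of the search.
        candidates.sort(key=lambda sp: sp[0], reverse=True)
        frontier = [path for _, path in candidates[:keep]]
    return frontier[0]
```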

Your contribution

Maybe, not sure 😥

rickerliang avatar May 19 '23 10:05 rickerliang

Seems to me like one of the most, if not THE most, powerful methods of querying an LLM so far. The results from the paper are just staggering. I definitely vote for this to get implemented. 👍

JanHomann avatar May 21 '23 12:05 JanHomann

Also, this paper:

https://arxiv.org/pdf/2305.08291.pdf

whatworksdigital avatar May 21 '23 12:05 whatworksdigital

If this gets implemented here, one idea for improving this concept even further would be the introduction of a hierarchy of trees. The first level would contain a tree of rough, big steps; each leaf of that tree would then be composed of smaller steps (a sub-tree). It's kind of a recursive idea: trees within trees within trees, down to the granularity that a single LLM can handle.

So you would first decompose the problem into larger, more general steps / trees (level one), then take each of those general steps and decompose them further into more specific, detailed steps (level two), and so on, until you reach a level of granularity that the language model can handle effectively.

With this approach, maybe even larger, more complex problems could be solved.
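A rough sketch of how that recursion might look, with `is_atomic`, `decompose_into_steps`, and `solve_with_tot` as hypothetical helpers standing in for LLM-backed calls:

```python
# Rough sketch of the "trees within trees" idea. `is_atomic`,
# `decompose_into_steps`, and `solve_with_tot` are hypothetical helpers.

def hierarchical_tot(problem, solve_with_tot, decompose_into_steps, is_atomic,
                     level=1, max_levels=3):
    # Stop recursing once the piece is small enough for a single ToT run.
    if level >= max_levels or is_atomic(problem):
        return solve_with_tot(problem)
    # Level one: a tree of rough, big steps (an LLM proposes them).
    big_steps = decompose_into_steps(problem)
    # Each big step becomes the root of its own sub-tree (level two, and so on).
    return [hierarchical_tot(step, solve_with_tot, decompose_into_steps,
                             is_atomic, level + 1, max_levels)
            for step in big_steps]
```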

JanHomann avatar May 21 '23 20:05 JanHomann

> If this gets implemented here, one idea for improving this concept even further would be the introduction of a hierarchy of trees. The first level would contain a tree of rough, big steps; each leaf of that tree would then be composed of smaller steps (a sub-tree). It's kind of a recursive idea: trees within trees within trees, down to the granularity that a single LLM can handle.
>
> So you would first decompose the problem into larger, more general steps / trees (level one), then take each of those general steps and decompose them further into more specific, detailed steps (level two), and so on, until you reach a level of granularity that the language model can handle effectively.
>
> With this approach, maybe even larger, more complex problems could be solved.

I've been trying to build a dynamic tree structure that includes checking for prompt injection and 'sanitizing' user prompts when and where possible. There's no exact agent schema in LangChain that does this out of the box, but I'm using a combination of the authoritarian dialogue and planner chains to try to accomplish it.

ether8unny avatar May 22 '23 04:05 ether8unny

I started working on the ToT described in the second paper mentioned here by @whatworksdigital. I'm not ready to open a pull request, but you're welcome to take a look and tell me if I'm on the right track.

https://github.com/vadimgu/langchain/blob/ToT/langchain/chains/tot/base.py

vadimgu avatar May 22 '23 04:05 vadimgu

I'm having trouble finding a test-case scenario for testing and demo. Can someone advise me on this?

vadimgu avatar May 23 '23 04:05 vadimgu

> I'm having trouble finding a test-case scenario for testing and demo. Can someone advise me on this?

Hi @vadimgu, first of all, really looking forward to seeing your ToT implementation when you get it going. Thank you.

Secondly, and I don't know if this helps, or if I'm just stating the obvious, but the paper mentioned testing it on Sudoku puzzles but didn't provide much detail.

whatworksdigital avatar May 23 '23 12:05 whatworksdigital

I was looking into ToT, but I really don't think this would be the silver bullet we're after. Suppose you want your agent/chain to act as a general assistant that can conversationally recognize what it has to do and use the tools provided; I believe using ToT won't remove much of the detailed context you need to give it in order to explain what thought it needs to have. I'm now thinking about constructing something like the Thought/Action/Action Input pattern, but somehow as a state machine of possible actions, where each action has detailed tools and possible next states, and leaving the LLM to figure out which paths it needs to take across the branches of the state-machine tree to accomplish one task or another. This takes away a bit of its creativity, but by specifying a stricter tree of paths it can make fewer bad improvisations. But I'm still thinking about it.
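As a concrete illustration of that state-machine idea (all names below are hypothetical, not a LangChain schema): each state declares its tools and allowed next states, so the LLM chooses along a stricter tree of paths instead of improvising freely.

```python
# Hypothetical sketch of the state-machine idea: each state declares its
# tools and the states the LLM may transition to next.

STATES = {
    "triage":    {"tools": [],             "next": ["search", "calculate"]},
    "search":    {"tools": ["web_search"], "next": ["summarize"]},
    "calculate": {"tools": ["calculator"], "next": ["summarize"]},
    "summarize": {"tools": [],             "next": []},  # terminal state
}

def run_state_machine(task, llm_choose, llm_finalize, state="triage"):
    """`llm_choose` and `llm_finalize` are hypothetical LLM-backed callables."""
    transcript = []
    while STATES[state]["next"]:
        node = STATES[state]
        # The LLM picks a tool call and the next state, but only from the
        # options this state allows.
        step = llm_choose(task, tools=node["tools"], options=node["next"])
        transcript.append(step)
        state = step["next_state"]
    return llm_finalize(task, transcript)  # the terminal state yields the answer
```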

emonidi avatar May 23 '23 22:05 emonidi

> > I'm having trouble finding a test-case scenario for testing and demo. Can someone advise me on this?
>
> Hi @vadimgu, first of all, really looking forward to seeing your ToT implementation when you get it going. Thank you.
>
> Secondly, and I don't know if this helps, or if I'm just stating the obvious, but the paper mentioned testing it on Sudoku puzzles but didn't provide much detail.

Maybe I am also stating the obvious, but both papers have an associated GitHub page:


Large Language Model Guided Tree-of-Thought
Jieyi Long
https://arxiv.org/abs/2305.08291
https://github.com/jieyilong/tree-of-thought-puzzle-solver

Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan
https://arxiv.org/abs/2305.10601
https://github.com/ysymyth/tree-of-thought-llm

JanHomann avatar May 23 '23 22:05 JanHomann

I was finally able to use a 4x4 Sudoku, just like in the paper.

With a rule-based checker, the ToT software can be viewed as a hybrid system that allows explicitly encoding prior knowledge (e.g. the Sudoku rules) into a neural-network-powered system.

A Sudoku puzzle is well-suited to the ToT method. However, I don't think ToT is a one-size-fits-all solution for all types of problems.
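For reference, the rule-based part of such a hybrid checker for a 4x4 board could look roughly like this. This is a sketch, not the code from the actual branch:

```python
# Sketch of a rule-based checker for a 4x4 Sudoku (not the code in the branch).
# It encodes the Sudoku rules explicitly, so invalid candidate boards can be
# rejected deterministically instead of by the LLM.

def is_valid_4x4(board):
    """board: 4x4 list of lists with values 1-4, or None for empty cells."""
    def no_duplicates(group):
        filled = [v for v in group if v is not None]
        return len(filled) == len(set(filled))

    rows = board
    cols = [[board[r][c] for r in range(4)] for c in range(4)]
    boxes = [[board[r][c] for r in (br, br + 1) for c in (bc, bc + 1)]
             for br in (0, 2) for bc in (0, 2)]
    return all(no_duplicates(g) for g in rows + cols + boxes)
```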

vadimgu avatar May 24 '23 04:05 vadimgu

Here is another repository that implements Tree of Thought(s).

https://github.com/kyegomez/tree-of-thoughts

JanHomann avatar May 24 '23 12:05 JanHomann

> Here is another repository that implements Tree of Thought(s).
>
> https://github.com/kyegomez/tree-of-thoughts

Thanks. I started implementing this in LangchainJS, but I'm still not quite familiar with the architecture. Will share if I succeed. My thoughts so far: a Thought Generator Executor with its own prompt and parser, a Thought Evaluation Executor with its own prompt and parser, and an Orchestrating Executor.
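For illustration, those three components might be shaped roughly like this. Python pseudostructure with hypothetical names; the actual work described above is in LangchainJS:

```python
# Rough outline of the three-executor design (hypothetical names; the actual
# implementation is being done in LangchainJS, this is just for illustration).

class ThoughtGeneratorExecutor:
    """Owns a generation prompt and parser; proposes candidate next thoughts."""
    def __init__(self, llm, prompt, parser):
        self.llm, self.prompt, self.parser = llm, prompt, parser

    def generate(self, problem, path, n=5):
        text = self.llm(self.prompt.format(problem=problem, path=path, n=n))
        return self.parser(text)  # -> list of thought strings

class ThoughtEvaluationExecutor:
    """Owns an evaluation prompt and parser; scores a candidate path."""
    def __init__(self, llm, prompt, parser):
        self.llm, self.prompt, self.parser = llm, prompt, parser

    def evaluate(self, problem, path):
        text = self.llm(self.prompt.format(problem=problem, path=path))
        return float(self.parser(text))  # -> score, e.g. in [0, 1]

class OrchestratingExecutor:
    """Drives the search: generate, evaluate, prune, and backtrack."""
    def __init__(self, generator, evaluator):
        self.generator, self.evaluator = generator, evaluator
```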

emonidi avatar May 24 '23 13:05 emonidi

> > Here is another repository that implements Tree of Thought(s). https://github.com/kyegomez/tree-of-thoughts
>
> Thanks. I started implementing this in LangchainJS, but I'm still not quite familiar with the architecture. Will share if I succeed. My thoughts so far: a Thought Generator Executor with its own prompt and parser, a Thought Evaluation Executor with its own prompt and parser, and an Orchestrating Executor.

The backtracking itself probably should be done by a simple classic algorithm.

JanHomann avatar May 26 '23 10:05 JanHomann

> The backtracking itself probably should be done by a simple classic algorithm.

I was actually thinking of changing my code to generate the whole tree first, internally in the Executor, then apply an algorithm to the already generated data, and also expose the whole generated tree publicly in case the dev would like to apply their own algorithm.

emonidi avatar May 26 '23 10:05 emonidi

> I was actually thinking of changing my code to generate the whole tree first, internally in the Executor, then apply an algorithm to the already generated data, and also expose the whole generated tree publicly in case the dev would like to apply their own algorithm.

That might lead to too many branches. Not sure. If you look at the tree plot in the paper, they seem to abandon leaves that are not likely to lead to results, so there seems to be some pruning at every stage. It's a search algorithm driven BY two LLMs: one generates the leaves and the other evaluates them. Breadth-first or depth-first, those are standard computer science algorithms. Tree search, except that the leaves are generated and ranked on the fly.
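Put together, that description maps onto an ordinary depth-first search in which a generator LLM replaces the successor function and an evaluator LLM replaces the heuristic. A hedged sketch, with `generate_thoughts`, `evaluate_thought`, and `is_solution` as hypothetical helper names:

```python
# Sketch of the pruned depth-first variant being described: one LLM proposes
# leaves, another ranks them, and weak branches are abandoned on the fly.

def dfs(problem, generate_thoughts, evaluate_thought, is_solution,
        path="", depth=0, max_depth=4, threshold=0.5):
    if is_solution(problem, path):
        return path
    if depth == max_depth:
        return None
    for thought in generate_thoughts(problem, path):
        score = evaluate_thought(problem, path + "\n" + thought)
        if score > threshold:  # prune leaves unlikely to lead to results
            result = dfs(problem, generate_thoughts, evaluate_thought,
                         is_solution, path + "\n" + thought,
                         depth + 1, max_depth, threshold)
            if result is not None:
                return result
    return None  # backtrack: no branch above the threshold panned out
```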

JanHomann avatar May 26 '23 12:05 JanHomann

> That might lead to too many branches. Not sure. If you look at the tree plot in the paper, they seem to abandon leaves that are not likely to lead to results, so there seems to be some pruning at every stage. It's a search algorithm driven BY two LLMs: one generates the leaves and the other evaluates them. Breadth-first or depth-first, those are standard computer science algorithms. Tree search, except that the leaves are generated and ranked on the fly.

Pruning is already there: `if (evaluation > totInput.threshold) { await dfs(totInput.sizeLimit, _currentStep + 1, thoughts.text[i]) }`. You give it a threshold value and call the next step of branching only if the evaluation is greater than the given threshold. Take a look here: https://gist.github.com/emonidi/b257014de26497db506850df9998b2bb#file-tot-ts-L159

emonidi avatar May 26 '23 14:05 emonidi

> I was finally able to use a 4x4 Sudoku, just like in the paper.
>
> With a rule-based checker, the ToT software can be viewed as a hybrid system that allows explicitly encoding prior knowledge (e.g. the Sudoku rules) into a neural-network-powered system.
>
> A Sudoku puzzle is well-suited to the ToT method. However, I don't think ToT is a one-size-fits-all solution for all types of problems.

@vadimgu, can you provide an example where you used your ToT approach, please? I'm quite keen to test it and also to contribute to this feature.

Nevermind, found the notebook. Thanks!

pdutoit2011 avatar Jun 02 '23 11:06 pdutoit2011

Any update info? Thanks!

LangDaoAI avatar Jun 13 '23 09:06 LangDaoAI

I'm continuing to work on ToT. I had to add the concept of a thought-generation strategy to overcome the issue of the model generating the same output over and over. These strategies ensure that the language model generates diverse, non-repeating thoughts, which is crucial for problem-solving tasks that require exploration.

  1. Sample thoughts from a Chain-of-Thought (CoT) prompt. This strategy works better when the thought space is rich, such as when each thought is a paragraph. Independent and identically distributed samples lead to diversity, which helps to avoid repetition.

  2. Propose thoughts using a "propose prompt". This strategy works better when the thought space is more constrained, such as when each thought is just a word or a line. Proposing different thoughts in the same prompt completion helps to avoid duplication.

I'm testing it all and will be updating the PR in a few days.
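For illustration, the two strategies above might be sketched like this. Hypothetical function names, not the code in the PR; `llm` is any callable that completes a prompt:

```python
# Hedged sketch of the two thought-generation strategies (hypothetical names).

def sample_thoughts(llm, cot_prompt, n=5):
    # Strategy 1: draw n i.i.d. samples from a Chain-of-Thought prompt.
    # Works best when thoughts are rich (e.g. paragraphs): independent
    # samples are naturally diverse, which avoids repetition.
    return [llm(cot_prompt) for _ in range(n)]

def propose_thoughts(llm, propose_prompt, n=5):
    # Strategy 2: ask for n *different* thoughts in one completion.
    # Works best when thoughts are short (a word or a line): the model sees
    # its earlier proposals in the same completion and avoids duplicates.
    completion = llm(propose_prompt.format(n=n))
    return [line.strip() for line in completion.splitlines() if line.strip()]
```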

vadimgu avatar Jun 15 '23 01:06 vadimgu

@vadimgu I checked your repo. It's nice! Are you blocked? Would you like some help?

fabricehong avatar Jun 23 '23 13:06 fabricehong

@fabricehong Thanks for reaching out and offering your help. If you have the time, I would be grateful if you could review my pull request. Additionally, if there are any specific areas or aspects you would like to focus on, please let me know.

I find ToT (or my implementation of it) very difficult to use, and I struggle to find an interesting use case.

vadimgu avatar Jun 25 '23 21:06 vadimgu

I forked your repo and will try to adapt it to a personal use case. After that, I'll be able to give a more valuable review. I sent you an email in case we need to discuss.

Cheers

fabricehong avatar Jun 25 '23 21:06 fabricehong

Landed in #5167!

baskaryan avatar Aug 17 '23 20:08 baskaryan