Adjusted GPT4All llm to streaming API and added support for GPT4All_J
Fix for these issues: https://github.com/hwchase17/langchain/issues/4126 https://github.com/hwchase17/langchain/issues/3839#issuecomment-1534258559
You are awesome!! Looking forward to using this <3
@vowelparrot What are the next steps in the merging process? Do I just wait for approvals, or do I need to do something?
I'm backlogged :) If you could share some code examples (with their outputs) and the model URLs you were using that work for these use cases, that would help.
@vowelparrot Sure thing! I should have done it earlier :)
Currently we have support for both versions of gpt4all: gptj and llama.
To run it using the gptj version, first download the example model:
! wget https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin
Then specify its path and run the generation:
from langchain import LLMChain, PromptTemplate
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import GPT4All

# The original snippet assumed `prompt` was already defined; here is one:
prompt = PromptTemplate(template="Question: {question}\n\nAnswer: Let's think step by step.", input_variables=["question"])

local_path = "./ggml-gpt4all-j-v1.3-groovy.bin"
callbacks = [StreamingStdOutCallbackHandler()]  # streams tokens to stdout as they are generated
llm = GPT4All(model=local_path, backend='gptj', callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
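With the streaming callback in place, the answer prints to stdout token by token as it is generated, rather than all at once when the call returns.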
For the llama version, you can download the model from here:
! wget https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin
And run it like this:
# Imports and `prompt` are the same as in the gptj example above.
local_path = "./ggml-gpt4all-l13b-snoozy.bin"
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=local_path, backend='llama', callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
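Since this PR wires GPT4All into the callback-based streaming API, you can also capture the streamed tokens with a custom handler instead of printing them. Here is a minimal sketch, assuming langchain's BaseCallbackHandler interface; TokenCollector is a hypothetical name for illustration:

from langchain.callbacks.base import BaseCallbackHandler

class TokenCollector(BaseCallbackHandler):
    """Collects streamed tokens instead of writing them to stdout."""

    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token, **kwargs):
        # Called once per token as GPT4All streams the generation.
        self.tokens.append(token)

collector = TokenCollector()
llm = GPT4All(model=local_path, backend='gptj', callbacks=[collector], verbose=True)
# After a run, collector.tokens holds the generation token by token.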
Works locally for me. Thanks!