gpt4free
Request for documentation on integrating Langchain with GPT4Free
Hello GPT4Free maintainers,
I've been exploring the GPT4Free project and would like to use it in conjunction with Langchain (python.langchain.com). I'm particularly interested in using Langchain to run queries against GPT4All in the context of a single documentary knowledge source, as mentioned in blog.ouseful.info. However, I haven't found any explicit documentation or examples on how to integrate Langchain with GPT4Free.
I would appreciate it if you could provide some guidance or documentation on how to achieve this integration. In particular, I'm looking for information on:
- How to make API requests to GPT4Free using Langchain.
- How to process the generated text from GPT4Free using Langchain (e.g., translation, summarization, etc.).
Any assistance or pointers in the right direction would be greatly appreciated. Thank you for your time and your work on this project!
Same Question 👍🏽
Amazing!
waiting for this!
Hello World
```python
from typing import Optional, List

from langchain.llms.base import LLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import gpt4free
from gpt4free import Provider


class EducationalLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        return gpt4free.Completion.create(Provider.You, prompt=prompt)


llm = EducationalLLM()

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}? Just tell one and only the name",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```
Not respecting the stop words will cause the LLM to hallucinate tool usage and responses. Here is an example of how to truncate the response from gpt4free yourself, since the `create` method has no stop-word support:
```python
class EducationalLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        out = gpt4free.Completion.create(Provider.You, prompt=prompt)
        if stop:
            # Truncate at the earliest stop sequence found in the output.
            stop_indexes = (out.find(s) for s in stop if s in out)
            min_stop = min(stop_indexes, default=-1)
            if min_stop > -1:
                out = out[:min_stop]
        return out
```
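The truncation logic can be exercised on its own, without any gpt4free call. A minimal sketch (the helper name and the sample agent-style completion are made up for illustration):

```python
from typing import List, Optional


def truncate_at_stop(out: str, stop: Optional[List[str]] = None) -> str:
    """Cut the text at the earliest occurrence of any stop sequence."""
    if stop:
        stop_indexes = (out.find(s) for s in stop if s in out)
        min_stop = min(stop_indexes, default=-1)
        if min_stop > -1:
            out = out[:min_stop]
    return out


# An agent-style completion that ran past its stop word:
text = "Thought: I should search.\nAction: search\nObservation: fabricated result"
print(truncate_at_stop(text, stop=["\nObservation:"]))
# Everything from "\nObservation:" onward is dropped, so the agent
# framework sees only the Thought/Action it expects.
```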
Can we use agents with this?
Yes 👍 you can use everything from langchain!
Updated code:

```python
from typing import Optional, List

from langchain.llms.base import LLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import g4f


class EducationalLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        out = g4f.ChatCompletion.create(
            model=g4f.models.gpt_4,
            messages=[{"role": "user", "content": prompt}],
        )
        if stop:
            # Truncate at the earliest stop sequence found in the output.
            stop_indexes = (out.find(s) for s in stop if s in out)
            min_stop = min(stop_indexes, default=-1)
            if min_stop > -1:
                out = out[:min_stop]
        return out


llm = EducationalLLM()

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}? Just tell one and only the name",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```
I created a project that exposes g4f through the OpenAI API; it may help. https://github.com/tgscan-dev/g4f-api
> I created a project that exposes g4f through the OpenAI API; it may help. https://github.com/tgscan-dev/g4f-api

That is awesome! Does that mean we can now integrate gpt4free into AutoGPT, GPT-Pilot, agent frameworks, etc. by pointing them at the local address in place of the OpenAI endpoint? Or will there be issues where only simple text responses are available, making it unusable for AutoGPT? @xtekky I read your comments saying something similar about AutoGPT.
@xtekky https://github.com/xtekky/gpt4free/issues/258#issuecomment-1528761387
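For anyone wanting to try this: with an OpenAI-compatible proxy like g4f-api, client tools only need their base URL swapped, because the request body is the standard chat-completions payload. A minimal sketch that builds such a request (the localhost host and port are assumptions for wherever the proxy is actually running, and the network call itself is left commented out):

```python
import json
import urllib.request

# Assumed local endpoint for an OpenAI-compatible proxy such as g4f-api;
# host and port are placeholders for wherever the server is running.
BASE_URL = "http://127.0.0.1:8000/v1"

# Standard OpenAI chat-completions request body.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer unused",  # a real key is not needed by the proxy
    },
)
# urllib.request.urlopen(req)  # uncomment once the proxy is running
print(req.full_url)
```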
> Yes 👍 you can use everything from langchain!
Hey @toukoum, I was trying to use langchain agents with g4f, but it's not working.
For example, OpenAI tools don't work with g4f results.
How can I use `bind_tools` from langchain with g4f? Please help.
Updated:

```python
from typing import Optional, List

from langchain.llms.base import LLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from g4f.client import Client


class EducationalLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, model: str = "gpt-4o") -> str:
        client = Client()
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        out = response.choices[0].message.content
        if stop:
            # Truncate at the earliest stop sequence found in the output.
            stop_indexes = (out.find(s) for s in stop if s in out)
            min_stop = min(stop_indexes, default=-1)
            if min_stop > -1:
                out = out[:min_stop]
        return out


llm = EducationalLLM()

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}? Just tell one and only the name",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```