
GPT4All chat error with async calls

PiotrPmr opened this issue 2 years ago · 10 comments

Hi, I believe this issue is related to this one: #1372

I'm using the GPT4All integration and get the following error when running ConversationalRetrievalChain with AsyncCallbackManager: ERROR:root:Async generation not implemented for this LLM. Switching to CallbackManager does not fix anything.

The issue is model-agnostic: I have tried both ggml-gpt4all-j-v1.3-groovy.bin and ggml-mpt-7b-base.bin. The LangChain version I'm using is 0.0.179. Any ideas how this could be solved, or should we just wait for a new release that fixes it?

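For reference, here is a minimal sketch of the kind of setup that triggers it (the model path and retriever are placeholders, not my exact code):

import asyncio

from langchain.chains import ConversationalRetrievalChain
from langchain.llms import GPT4All

async def main():
    # Placeholder path; any local GPT4All-compatible model reproduces the error.
    llm = GPT4All(model="./models/ggml-gpt4all-j-v1.3-groovy.bin")
    # `retriever` stands in for whatever vector store retriever you already have.
    chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever)
    # Raises: NotImplementedError: Async generation not implemented for this LLM.
    await chain.acall({"question": "What is LangChain?", "chat_history": []})

asyncio.run(main())
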
Suggestion:

Release a fix, similar to the one in #1372

PiotrPmr avatar May 24 '23 19:05 PiotrPmr

@PiotrPmr Hi, has any solution been found for this issue? I'm getting the NotImplementedError: Async generation not implemented for this LLM error on LangChain version 0.0.186 with a GPT4All model.

poojatambe avatar May 31 '23 10:05 poojatambe

I'm having the same issue with another LlamaCpp LLM as well as a HuggingFaceHub LLM. I'm using LLMChain. Hoping someone can fix this!

ncfx avatar Jun 10 '23 16:06 ncfx

having the same issue, +1

khaledadrani avatar Jun 15 '23 08:06 khaledadrani

I managed to make a fix and will be opening a PR soon.

khaledadrani avatar Jun 16 '23 10:06 khaledadrani

@khaledadrani eagerly waiting for it.

suhailmalik07 avatar Jun 19 '23 07:06 suhailmalik07

@khaledadrani If you could describe your solution before making the PR, that would be helpful. Thanks.

kesavazt avatar Jun 19 '23 08:06 kesavazt

I am currently working through the requirements for my PR to be ready for review (formatting, linting, testing). I will finish them ASAP; this is my first contribution ever to an open source project :)

What I did was implement _acall with async/await support. Should I add just one async test for this, and is that enough for it to be accepted? https://github.com/khaledadrani/langchain/blob/32a041b8a2a5a8a6db36592b501e4ce9d54c219b/tests/unit_tests/llms/fake_llm.py

Edit: I also need to add a test here: https://github.com/khaledadrani/langchain/blob/32a041b8a2a5a8a6db36592b501e4ce9d54c219b/tests/integration_tests/llms/test_gpt4all.py

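To make the plan concrete, this is roughly the shape of the async unit test I have in mind (just a sketch; the fake LLM's exact fields in the repo may differ):

import pytest

from tests.unit_tests.llms.fake_llm import FakeLLM


@pytest.mark.asyncio
async def test_fake_llm_async_generation() -> None:
    """The async path should return the same canned answer as the sync path."""
    llm = FakeLLM(queries={"prompt": "canned answer"})
    result = await llm.agenerate(["prompt"])
    assert result.generations[0][0].text == "canned answer"
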
khaledadrani avatar Jun 19 '23 14:06 khaledadrani

Hello again. I read in the contribution guide (https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md) that it is possible to add a Jupyter notebook example; however, I am unable to find any notebooks in the repository. Can someone tell me where I should put the example notebook? Thanks!

khaledadrani avatar Jul 10 '23 09:07 khaledadrani

Surprised there isn't more community attention on this issue given how popular GPT4All is; it would be great to see a fix merged. Thanks for the effort, @khaledadrani!

chrisedington avatar Jul 11 '23 12:07 chrisedington

I think someone already made an implementation but did not report it in this issue. Can anyone confirm that it works? (I noticed this while rebasing my fork: I found an almost identical implementation.)

khaledadrani avatar Jul 11 '23 13:07 khaledadrani

A workaround for using ConversationalRetrievalChain with LlamaCpp is to implement the _acall function, as in the class below. This is not tested extensively.


from langchain.llms import LlamaCpp
from typing import Any, AsyncGenerator, Dict, List, Optional
from langchain.callbacks.manager import AsyncCallbackManagerForLLMRun

class LlamaCppAsync(LlamaCpp):
    async def _acall(
            self,
            prompt: str,
            stop: Optional[List[str]] = None,
            run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
            **kwargs: Any,
    ) -> str:
        """Asynchronous Call the Llama model and return the output.

        Args:
            prompt: The prompt to use for generation.
            stop: A list of strings to stop generation when encountered.

        Returns:
            The generated text.

        Example:
            .. code-block:: python

                from langchain.llms import LlamaCpp
                llm = LlamaCpp(model_path="/path/to/local/llama/model.bin")
                llm("This is a prompt.")
        """
        if self.streaming:
            # If streaming is enabled, we use the stream_async
            # method that yields tokens as they are generated
            # and return the combined string from the first choice's text:
            combined_text_output = ""
            stream = self.stream_async(prompt=prompt, stop=stop, run_manager=run_manager)

            async for token in stream:
                combined_text_output += token["choices"][0]["text"]
            return combined_text_output
        else:
            params = self._get_parameters(stop)
            params = {**params, **kwargs}
            result = self.client(prompt=prompt, **params)
            return result["choices"][0]["text"]

    async def stream_async(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    ) -> AsyncGenerator[Dict, None]:
        """Yields result objects as they are generated in real time.

        BETA: this is a beta feature while we figure out the right abstraction.
        Once that happens, this interface could change.

        It also calls the callback manager's on_llm_new_token event with
        similar parameters to the OpenAI LLM class method of the same name.

        Args:
            prompt: The prompts to pass into the model.
            stop: Optional list of stop words to use when generating.

        Returns:
            A generator representing the stream of tokens being generated.

        Yields:
            Dictionary-like objects containing a string token and metadata.
            See the llama-cpp-python docs and below for more.

        Example:
            .. code-block:: python

                from langchain.llms import LlamaCpp
                llm = LlamaCpp(
                    model_path="/path/to/local/model.bin",
                    temperature = 0.5
                )
                for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'",
                        stop=["'","\n"]):
                    result = chunk["choices"][0]
                    print(result["text"], end='', flush=True)

        """
        params = self._get_parameters(stop)
        result = self.client(prompt=prompt, stream=True, **params)
        for chunk in result:
            token = chunk["choices"][0]["text"]
            log_probs = chunk["choices"][0].get("logprobs", None)
            if run_manager:
                await run_manager.on_llm_new_token(
                    token=token, verbose=self.verbose, log_probs=log_probs
                )
            yield chunk

Then change


    question_gen_llm = LlamaCpp(
        model_path=LLM_MODEL_PATH,
        n_ctx=2048,
        streaming=True,
        callback_manager=question_manager,
        verbose=True,
    )

To


    question_gen_llm = LlamaCppAsync(
        model_path=LLM_MODEL_PATH,
        n_ctx=2048,
        streaming=True,
        callback_manager=question_manager,
        verbose=True,
    )

diegovazquez avatar Jul 17 '23 17:07 diegovazquez

have the same issue. Any updates on this?

VladPrytula avatar Jul 19 '23 09:07 VladPrytula

@VladPrytula is it not fixed for GPT4ALL? Was I mistaken in my previous comment?

khaledadrani avatar Jul 22 '23 11:07 khaledadrani

It is not fixed. I have added async support manually to the class, and it kind of works, but I don't like the result: it is effectively in compliance with the async interface, but not async per se.

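To illustrate what I mean (a rough sketch only; the class name is made up): even if you push the blocking call onto a worker thread so the event loop stays responsive, the generation underneath is still synchronous, and you lose token-level async callbacks.

import asyncio
from functools import partial
from typing import Any, List, Optional

from langchain.callbacks.manager import AsyncCallbackManagerForLLMRun
from langchain.llms import GPT4All


class ThreadedGPT4All(GPT4All):
    """Hypothetical wrapper that satisfies the async interface via a thread."""

    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        loop = asyncio.get_running_loop()
        # Offload the synchronous _call so the event loop is not blocked.
        # Extra kwargs and the run manager are intentionally dropped here.
        return await loop.run_in_executor(None, partial(self._call, prompt, stop=stop))
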
VladPrytula avatar Jul 22 '23 11:07 VladPrytula

For GPT4All, you can use this class in your projects:


from langchain.llms import GPT4All
from functools import partial
from typing import Any, List
from langchain.callbacks.manager import AsyncCallbackManagerForLLMRun
from langchain.llms.utils import enforce_stop_tokens

class AGPT4All(GPT4All):
    async def _acall(
        self,
        prompt: str,
        stop: List[str] | None = None,
        run_manager: AsyncCallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        text_callback = None
        if run_manager:
            text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose)
        text = ""
        params = {**self._default_params(), **kwargs}
        for token in self.client.generate(prompt, streaming=True, **params):
            if text_callback:
                await text_callback(token)
            text += token
        if stop is not None:
            text = enforce_stop_tokens(text, stop)
        return text

Mabenan avatar Jul 22 '23 13:07 Mabenan

@Mabenan I kind of got this working, but I am not sure I am using AsyncCallbackManagerForLLMRun correctly. Do you have an example of how to instantiate it properly to use AGPT4All?

auxon avatar Aug 04 '23 20:08 auxon

@auxon I use it in the following way:


import asyncio
from datetime import datetime

from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

# modelpath, properties, threads, and data["prompt"] come from the surrounding
# application; this snippet runs inside an async generator (note the yield).
history = ConversationBufferMemory(ai_prefix="### Assistant", human_prefix="### Human")
template = """
            {history}
            ### Human: {input}
            ### Assistant:"""
prompt = PromptTemplate(template=template, input_variables=["history", "input"])
streaminCallback = AsyncIteratorCallbackHandler()
llmObj = AGPT4All(model=modelpath, verbose=False, allow_download=True,
                  temp=properties["temp"],
                  top_k=properties["top_k"],
                  top_p=properties["top_p"],
                  repeat_penalty=properties["repeat_penalty"],
                  repeat_last_n=properties["repeat_last_n"],
                  n_predict=properties["n_predict"],
                  n_batch=properties["n_batch"],
                  callbacks=[streaminCallback],
                  n_threads=threads,
                  streaming=True)
history.load_memory_variables({})
chain = ConversationChain(prompt=prompt, llm=llmObj, memory=history)
# Kick off the chain in the background; tokens arrive via the callback's iterator.
asyncio.create_task(chain.apredict(input=data["prompt"]))
start = datetime.now()
tokenCount = 0
compResp = ""
async for respEntry in streaminCallback.aiter():
    now = datetime.now()
    diff = now - start
    tokenCount += 1
    print("Tokens per Second: " + str(tokenCount / diff.total_seconds()))
    compResp = compResp + respEntry
    yield respEntry

Mabenan avatar Aug 04 '23 21:08 Mabenan

@Mabenan Thanks!

auxon avatar Aug 04 '23 21:08 auxon

Hi, when I try to 'arun' the chain below using a SageMaker endpoint, I receive the following error:

chain = LLMChain(llm=SagemakerEndpoint(endpoint_name=llm_ENDPOINT, region_name=REGION_NAME, content_handler=content_handler), prompt=prompt)

NotImplementedError: Async generation not implemented for this LLM.

Are async calls available for SagemakerEndpoint? If not, is there a workaround?

Thanks in advance.

varshasathya avatar Aug 22 '23 09:08 varshasathya

Hi, @PiotrPmr! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you are experiencing an error with async calls in the GPT4All chat integration. It seems that other users have also reported the same issue and are waiting for a fix. User @khaledadrani has mentioned that they have made a fix and will be making a pull request soon. Additionally, user @Mabenan has provided a workaround for using GPT4All with async calls.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain community! Let us know if you have any further questions or concerns.

dosubot[bot] avatar Nov 21 '23 16:11 dosubot[bot]

Hello, it has been a long time. Was this fixed or not? Otherwise, I will be returning to it ASAP.

khaledadrani avatar Nov 28 '23 12:11 khaledadrani

@baskaryan Could you please help @PiotrPmr with the issue they mentioned? They are still experiencing an error with async calls in the GPT4All chat integration and would appreciate your assistance. Thank you!

dosubot[bot] avatar Nov 28 '23 12:11 dosubot[bot]

As promised, here is the fix for this issue: #14495. It obviously needs reviewing.

khaledadrani avatar Dec 09 '23 18:12 khaledadrani

Hello, are there any updates on this? I see that #14495 is still open. Thank you!

charlod avatar Mar 01 '24 08:03 charlod

@charlod Is the current implementation of ainvoke, or acall (which is going to be deprecated), not working for you?

model_response = await qa.ainvoke(
    input={"query": "what is Python?"}
)

khaledadrani avatar Mar 01 '24 09:03 khaledadrani

I've just tested with the current ainvoke implementation, and it works. Thanks again!

charlod avatar Mar 07 '24 16:03 charlod

I've just tested with the current ainvoke implementation, and it works. Thanks again!

closing!

baskaryan avatar Mar 29 '24 23:03 baskaryan