
ERROR:root:'OpenAIEmbeddings' object has no attribute 'max_retries'

Open · ankh2054 opened this issue 2 years ago • 3 comments

version: 0.0.106

OpenAI no longer seems to support max_retries.

https://platform.openai.com/docs/api-reference/completions/create?lang=python

ankh2054 avatar Mar 10 '23 20:03 ankh2054

What is the full stack trace? What command are you running to generate this?

hwchase17 avatar Mar 10 '23 20:03 hwchase17

Works in version 0.0.100, but not in 0.0.106.

I am using https://github.com/hwchase17/chat-langchain and sending an input question to /chat.

```
WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13f00ca50>: Failed to establish a new connection: [Errno 61] Connection refused'))
  File "", line 1, in
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/spawn.py", line 120, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/spawn.py", line 133, in _main
    return self._bootstrap(parent_sentinel)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/uvicorn/server.py", line 60, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 640, in run_until_complete
    self.run_forever()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
    self._run_once()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
    handle._run()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 238, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/fastapi/applications.py", line 271, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/starlette/applications.py", line 118, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/starlette/middleware/errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/starlette/routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/starlette/routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/starlette/routing.py", line 82, in app
    await func(session)
  File "/Users/charlesholtzkampf/sentnl/chatai/lib/python3.11/site-packages/fastapi/routing.py", line 289, in app
    await dependant.call(**values)
  File "/Users/charlesholtzkampf/sentnl/chatai/main.py", line 47, in websocket_endpoint
    qa_chain = get_chain(vectorstore, question_handler, stream_handler)
  File "/Users/charlesholtzkampf/sentnl/chatai/query_data.py", line 88, in get_chain
    traceback.print_stack()
INFO: connection open
ERROR:root:'OpenAIEmbeddings' object has no attribute 'max_retries'
```

ankh2054 avatar Mar 10 '23 21:03 ankh2054

Works up to 0.0.105, but not in 0.0.106

```python
# Imports reconstructed from the chat-langchain repo's query_data.py
# (EOS_DOC_PROMPT is a user-defined document prompt, not part of langchain):
import traceback

from langchain.callbacks.base import AsyncCallbackManager
from langchain.callbacks.tracers import LangChainTracer
from langchain.chains import ChatVectorDBChain, LLMChain
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.vectorstores.base import VectorStore


def get_chain(
    vectorstore: VectorStore, question_handler, stream_handler, tracing: bool = True
) -> ChatVectorDBChain:
    """Create a ChatVectorDBChain for question/answering."""
    # Construct a ChatVectorDBChain with a streaming llm for combine docs
    # and a separate, non-streaming llm for question generation
    manager = AsyncCallbackManager([])
    question_manager = AsyncCallbackManager([question_handler])
    stream_manager = AsyncCallbackManager([stream_handler])
    if tracing:
        tracer = LangChainTracer()
        tracer.load_default_session()
        manager.add_handler(tracer)
        question_manager.add_handler(tracer)
        stream_manager.add_handler(tracer)

    question_gen_llm = OpenAI(
        temperature=0,
        verbose=True,
        callback_manager=question_manager,
    )
    streaming_llm = OpenAI(
        streaming=True,
        callback_manager=stream_manager,
        verbose=True,
        temperature=0,
    )

    question_generator = LLMChain(
        llm=question_gen_llm, prompt=CONDENSE_QUESTION_PROMPT, callback_manager=manager
    )

    doc_chain = load_qa_chain(
        streaming_llm, chain_type="stuff", prompt=QA_PROMPT, document_prompt=EOS_DOC_PROMPT,
        callback_manager=manager
    )

    qa = ChatVectorDBChain(
        vectorstore=vectorstore,
        combine_docs_chain=doc_chain,
        question_generator=question_generator,
        callback_manager=manager,
    )
    traceback.print_stack()
    return qa
    ```

ankh2054 avatar Mar 11 '23 09:03 ankh2054

Hello, I have the same error on version: 0.0.106, not sure why.

0.0.105 is working fine.

TomDarmon avatar Mar 11 '23 23:03 TomDarmon

https://github.com/hwchase17/langchain/blob/c844d1fd4667f3748b712550221f2139755110a2/langchain/embeddings/openai.py

ankh2054 avatar Mar 12 '23 09:03 ankh2054

I think I know why, will update soon. user error I think.

ankh2054 avatar Mar 12 '23 10:03 ankh2054

❓Problem: Loading a vectorstore that was created with an older version of the openai.py embeddings module, whose OpenAIEmbeddings class did not yet contain the max_retries attribute. https://github.com/hwchase17/langchain/blob/383c67c1b259ddd0faada1469abdfa7b04cfe481/langchain/embeddings/openai.py

💡Solution: Rebuilt my vectorstore and all was good.
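The failure mode can be reproduced with plain pickle, independent of langchain: unpickling restores an instance's saved `__dict__` without re-running `__init__`, so an object serialized before its class gained a field comes back without that field. A minimal sketch (the `Embeddings` class here is a stand-in, not the real `OpenAIEmbeddings`):

```python
import pickle

# "Old" class definition, before the library added max_retries.
class Embeddings:
    def __init__(self):
        self.model = "text-embedding-ada-002"

blob = pickle.dumps(Embeddings())  # vectorstore.pkl was written at this point

# Simulate upgrading the library: the class gains a new attribute.
class Embeddings:
    def __init__(self):
        self.model = "text-embedding-ada-002"
        self.max_retries = 6

old = pickle.loads(blob)            # __init__ is NOT re-run on unpickle
print(hasattr(old, "max_retries"))  # -> False: the old instance lacks the new field
```

Accessing `old.max_retries` then raises exactly the AttributeError seen in the log, which is why re-embedding with the current class (rebuilding the vectorstore) fixes it.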

ankh2054 avatar Mar 12 '23 10:03 ankh2054

> ❓Problem: Loading a vectorstore that was created with an older version of the openai.py embeddings module, whose OpenAIEmbeddings class did not yet contain the max_retries attribute. https://github.com/hwchase17/langchain/blob/383c67c1b259ddd0faada1469abdfa7b04cfe481/langchain/embeddings/openai.py
>
> 💡Solution: Rebuilt my vectorstore and all was good.

I have the same problem. Could you share how to rebuild the vectorstore? If so, I would really appreciate it! 😄

codepydog avatar Mar 13 '23 06:03 codepydog

It depends on what you're loading. Are you using a specific repo example?

ankh2054 avatar Mar 13 '23 07:03 ankh2054

Had the same issue; it's an incompatibility with old vectorstores. I created a new document loader in Colab, tried that one, and it worked. I was storing files in a bucket on S3 and was having issues. Switching to the new loader helped.
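If re-ingesting the documents is expensive, an unofficial stopgap is to backfill the missing attribute on the unpickled object before using it. `backfill_defaults` below is a hypothetical helper, and `6` is assumed to be the library's default for `max_retries` at the time; note that pydantic models such as `OpenAIEmbeddings` may reject a plain `setattr`, so this is a sketch of the idea rather than a drop-in fix, and rebuilding the store remains the cleaner solution:

```python
def backfill_defaults(obj, defaults):
    """Set attributes that are missing from an object unpickled
    against an older class definition (hypothetical helper)."""
    for name, value in defaults.items():
        if not hasattr(obj, name):
            setattr(obj, name, value)
    return obj


class Stale:  # stand-in for an embeddings object from an old pickle
    pass

emb = backfill_defaults(Stale(), {"max_retries": 6})
print(emb.max_retries)  # -> 6
```

Existing attributes are left untouched, so running the helper on an already-current object is a no-op.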

ryaneggz avatar Apr 14 '23 05:04 ryaneggz