
Indexing should recover from errors more gracefully

Open · VivaLaPanda opened this issue 2 years ago · 2 comments

I've been trying to build a tree index, but I keep hitting OpenAI rate limits. The problem is that a single rate-limit error forces me to restart the entire index build from scratch, which is time-consuming and expensive.

If a rate limit is hit, the index build should retry the failed call, or at the very least save some intermediate state so indexing can be resumed later.
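
To illustrate the retry half of this, here's a minimal sketch of the kind of wrapper the library could apply internally. This assumes the pre-1.0 openai client (where rate limits raise openai.error.RateLimitError); predict_with_retry is a hypothetical helper, not existing gpt_index API:

# Sketch only: a hypothetical retry wrapper around the per-chunk LLM call,
# assuming the pre-1.0 openai client, which raises openai.error.RateLimitError.
import time

import openai


def predict_with_retry(llm_predictor, prompt, max_retries=5, base_delay=2.0, **prompt_args):
    """Retry the expensive LLM call with exponential backoff instead of dying."""
    for attempt in range(max_retries):
        try:
            # In gpt_index this is the call that blows up mid-build
            # (see the traceback below).
            return llm_predictor.predict(prompt, **prompt_args)
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, give up for real
            time.sleep(base_delay * 2 ** attempt)  # back off: 2s, 4s, 8s, ...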

VivaLaPanda avatar Jan 11 '23 19:01 VivaLaPanda

Log output of me running into this:

>>> index = GPTTreeIndex(documents, prompt_helper=prompt_helper)
> Building index from nodes: 502 chunks
0/5029
10/5029
20/5029
30/5029
40/5029
50/5029
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/indices/tree/base.py", line 65, in __init__
    super().__init__(
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/indices/base.py", line 86, in __init__
    self._index_struct = self.build_index_from_documents(
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/utils.py", line 113, in wrapped_llm_predict
    f_return_val = f(_self, *args, **kwargs)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/indices/base.py", line 206, in build_index_from_documents
    return self._build_index_from_documents(documents, verbose=verbose)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/indices/tree/base.py", line 96, in _build_index_from_documents
    index_graph = index_builder.build_from_text(
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/indices/common/tree/base.py", line 75, in build_from_text
    root_nodes = self.build_index_from_nodes(
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/indices/common/tree/base.py", line 103, in build_index_from_nodes
    new_summary, _ = self._llm_predictor.predict(
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/langchain_helpers/chain_wrapper.py", line 96, in predict
    llm_prediction = self._predict(prompt, **prompt_args)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/gpt_index/langchain_helpers/chain_wrapper.py", line 82, in _predict
    llm_prediction = llm_chain.predict(**full_prompt_args)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/chains/llm.py", line 103, in predict
    return self(kwargs)[self.output_key]
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/chains/base.py", line 146, in __call__
    raise e
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__
    outputs = self._call(inputs)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/chains/llm.py", line 87, in _call
    return self.apply([inputs])[0]
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/chains/llm.py", line 78, in apply
    response = self.generate(input_list)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/chains/llm.py", line 73, in generate
    response = self.llm.generate(prompts, stop=stop)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/llms/base.py", line 81, in generate
    raise e
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/llms/base.py", line 77, in generate
    output = self._generate(prompts, stop=stop)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/langchain/llms/openai.py", line 158, in _generate
    response = self.client.create(prompt=_prompts, **params)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/openai/api_requestor.py", line 599, in _interpret_response
    self._interpret_response_line(
  File "/Users/bitsmith/.pyenv/versions/3.9.11/lib/python3.9/site-packages/openai/api_requestor.py", line 655, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.
>>> 
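
To be concrete about the "intermediate state" idea: even something as crude as the following would mean a crash at chunk 50 doesn't throw away the 50 summaries already paid for. This is purely hypothetical, nothing like it exists in gpt_index today; summarize() stands in for the per-chunk LLM call:

# Hypothetical checkpointing sketch -- not real gpt_index API. The idea is
# just to persist each completed summary so a rerun can skip paid-for work.
import json
import os

CHECKPOINT_PATH = "index_checkpoint.json"


def summarize_chunks_with_checkpoints(chunks, summarize):
    """summarize(chunk) stands in for the per-chunk LLM call during the build."""
    done = {}
    if os.path.exists(CHECKPOINT_PATH):
        with open(CHECKPOINT_PATH) as f:
            done = json.load(f)  # resume from an earlier, interrupted run
    for i, chunk in enumerate(chunks):
        key = str(i)
        if key in done:
            continue  # already summarized before the crash
        done[key] = summarize(chunk)  # the expensive, rate-limited call
        with open(CHECKPOINT_PATH, "w") as f:
            json.dump(done, f)  # checkpoint after every chunk
    return done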

VivaLaPanda avatar Jan 11 '23 19:01 VivaLaPanda

this is a great idea

jerryjliu avatar Jan 12 '23 17:01 jerryjliu

I had to wrap my queries in exception-handling code to catch these API outages earlier this week. It would be nice if this were handled within the framework, since the openai error types aren't exposed to my client code.
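
Roughly what I mean (assuming index is an already-built GPTTreeIndex, and the pre-1.0 openai client; note I have to import openai directly just to catch its error type):

# What client code currently has to do, since gpt_index doesn't re-export
# or translate the openai error types.
import openai

# index: an already-built GPTTreeIndex (assumed to exist in scope)
try:
    response = index.query("summarize the documents")
except openai.error.RateLimitError as err:
    # Transient API overload -- it's up to my app to decide whether to retry.
    print(f"OpenAI is overloaded, try again later: {err}")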

dahifi avatar Jan 12 '23 20:01 dahifi

should be handled with #215!

jerryjliu avatar Jan 12 '23 20:01 jerryjliu