Logan

685 comments by Logan

This should be working now, assuming you set an explicit/consistent doc_id for each document. More details here:
- https://gpt-index.readthedocs.io/en/latest/how_to/index/document_management.html
- https://github.com/jerryjliu/llama_index/tree/main/docs/examples/discover_llamaindex/document_management/
- https://www.youtube.com/watch?v=j6dJcODLd_c
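One hedged way to get an explicit, consistent doc_id (assuming your documents come from files): derive the id from the file path, so re-ingesting the same file always produces the same doc_id and the index updates instead of duplicating. This is a minimal stdlib sketch; `stable_doc_id` is a hypothetical helper, not part of llama_index.

```python
import hashlib

def stable_doc_id(path: str) -> str:
    """Same path in, same doc_id out, across runs and machines."""
    return hashlib.sha256(path.encode("utf-8")).hexdigest()[:16]

# Pass the result as the doc_id when constructing each Document before
# inserting it into the index, so a refresh updates rather than duplicates.
doc_id = stable_doc_id("docs/getting_started.md")
```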

I don't have access to Azure to test this. But I know the Azure team is working on updating this integration, hopefully soon :)

Good suggestion! As a quick note, I will point out that the response object always has the source nodes used to create the response (`print(response.source_nodes)`) -- each source node has...
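A rough sketch of what inspecting those source nodes looks like. Note this does not import llama_index: `SourceNode` and `Response` below are hypothetical stand-ins shaped like the real objects, just to show the kind of metadata each node carries.

```python
from dataclasses import dataclass

@dataclass
class SourceNode:
    # Stand-in for a llama_index source node: origin, chunk text, similarity.
    doc_id: str
    text: str
    score: float

@dataclass
class Response:
    # Stand-in for a query response: answer text plus the nodes it was built from.
    response: str
    source_nodes: list

resp = Response(
    response="The answer is 42.",
    source_nodes=[SourceNode("guide.md", "...the answer is 42...", 0.87)],
)

# Each source node carries the chunk text, where it came from, and its score.
for node in resp.source_nodes:
    print(f"{node.doc_id} (score={node.score:.2f}): {node.text}")
```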

Set a larger request timeout; I usually do something like `request_timeout=3000.0`.

Can you give more details? Is it writing the same SQL query? Is it returning a response? It could just be variation when you have different SQL data or different...

Seems like maybe a difference in installed dependencies? That's my best guess.
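If it helps, here is a quick stdlib-only sketch for snapshotting installed package versions on each machine; diffing the two snapshots usually surfaces the mismatched dependency. `installed_versions` is a hypothetical helper, not from any library discussed here.

```python
from importlib import metadata

def installed_versions() -> dict:
    """Map installed package names to versions, for diffing across machines."""
    versions = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name:  # skip broken/partial installs with no recorded name
            versions[name.lower()] = dist.version
    return versions

snapshot = installed_versions()
# Save this on both machines (e.g. json.dump) and diff the two dicts.
```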

Sounds good! I know ChatGPT has proven to be a little... obtuse to work with lol

Yea, I second this. Is there no way to use LangChain with a model that is loaded locally?
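For reference, LangChain does support this via a custom LLM wrapper around a locally loaded model. Below is a shape-only sketch that does not import langchain: `LocalModelLLM` is a hypothetical stand-in mimicking the interface the real base class expects (a `_call` method mapping prompt to text), with any local generate function plugged in.

```python
class LocalModelLLM:
    """Stand-in showing the interface a locally loaded model would fill."""

    def __init__(self, generate_fn):
        # generate_fn: any callable mapping a prompt string to completion text,
        # e.g. a locally loaded transformers pipeline (assumption, not shown).
        self._generate = generate_fn

    @property
    def _llm_type(self) -> str:
        return "local-model"

    def _call(self, prompt: str, stop=None) -> str:
        text = self._generate(prompt)
        # Honour stop sequences by truncating at the first occurrence.
        if stop:
            for token in stop:
                idx = text.find(token)
                if idx != -1:
                    text = text[:idx]
        return text

# Stand-in "model" for demonstration: just echoes the prompt.
llm = LocalModelLLM(lambda p: f"echo: {p}")
result = llm._call("hello", stop=["\n"])
```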