jlonge4
I think additionally returning the raw response here -> https://github.com/run-llama/llama_index/blob/e2396a7c6951973527339df62ce0f4a8ad17723b/llama-index-core/llama_index/core/base/llms/types.py#L90 would allow us to get -> 'usage': {'input_tokens': 32, 'output_tokens': 383} from the Bedrock response, correct?
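For context, a minimal sketch of what that would enable, assuming the response object exposed a `raw` dict shaped like the Bedrock payload above (the `raw` variable here stands in for a hypothetical `chat_response.raw`):

```python
# Hypothetical: if ChatResponse carried the raw Bedrock payload,
# token usage could be pulled straight off of it.
raw = {"usage": {"input_tokens": 32, "output_tokens": 383}}  # e.g. chat_response.raw

# .get() keeps this safe for providers that don't report usage.
usage = raw.get("usage", {})
prompt_tokens = usage.get("input_tokens")
completion_tokens = usage.get("output_tokens")

print(prompt_tokens, completion_tokens)  # 32 383
```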
@freikim Not sure if this issue I raised will help, but for some reason I had a different version of sqlalchemy that was causing me the same pain. https://github.com/deepset-ai/haystack/issues/6457
@hansblafoo Ohhh yeah that's a bit worse 😅 thanks for the clarification
@SuperCowboyDinosaur hmmm...is it still happening to you?
@SuperCowboyDinosaur Check out v3 and follow the new update instructions in the readme. You won't be disappointed.
@odevroed Thank you for your kind words, I'm glad you are enjoying it! Also, that is my fault; I need to push a fix that deletes the existing indexes...
@odevroed That is also on my list haha, the plan is to include a drop-down to allow selection of whichever model, dynamically swapping the prompt as well to fit...
@mountainrocky Thanks a lot, I'm glad you like it! I have been wanting to try running inference using Mojo for speed, but that's quite a dependency. As far as...
> This is an awesome project. I pulled the code and got it up and running quickly.
>
> Do you have any idea how to improve the query...
@brinrbc So you'll want to replace "where your PDFs are" with the absolute path to your documents. For example: "C:/Users/You/Bruce_Bruce_2018_Practical Statistics for Data Scientists.pdf"
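A quick sketch of what I mean, using `pathlib` (the filename here is just my example from above; swap in your own absolute path):

```python
# Point the loader at your own documents by replacing the placeholder
# with an absolute path to the file (or folder) you want indexed.
from pathlib import Path

pdf_path = Path(
    "C:/Users/You/Bruce_Bruce_2018_Practical Statistics for Data Scientists.pdf"
)

# Sanity checks before handing the path to the ingestion step.
print(pdf_path.suffix)  # .pdf
print(pdf_path.name)
```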