arnavsinghvi11

Results: 254 comments by arnavsinghvi11

Thanks @harrysalmon ! Does this require any more tests for `openai>=0.28.1`?

Hi @FarisHijazi , left some comments here. I mainly want to clarify whether this PR supports _all_ LLM integrations with LlamaIndex as specified, or is limited to OpenAI only? Seems like...

Hi @aaronbriel , the optimized_program currently includes few-shot examples from only 8 of the classifiers because the BootstrapFewShotWithRandomSearch configuration is set with `"max_bootstrapped_demos": 8, "max_labeled_demos": 8`. To get unique...
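For reference, a minimal sketch of the settings mentioned above (the config keys come from the comment; the wider values are just an illustration of raising the caps so more classifiers can contribute demos):

```python
# Sketch of the optimizer demo caps discussed above. The two keys cap how
# many few-shot examples the optimizer may select, which is why only 8
# classifiers end up contributing demos.
config = {
    "max_bootstrapped_demos": 8,  # demos generated by bootstrapping the program
    "max_labeled_demos": 8,       # demos taken directly from the labeled trainset
}

# Raising the caps lets the random search select more examples (values here
# are arbitrary, for illustration only):
wider_config = {**config, "max_bootstrapped_demos": 16, "max_labeled_demos": 16}

print(wider_config["max_bootstrapped_demos"])  # 16
```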

Hey @kmeehl , do you get this error for all your datapoints, or does it only happen on some? I believe most multimodal LLMs are not well adapted for structured...

Hi @nguyenhoan1988 , thanks for the PR! Could this instead be supported the way we handle image & audio in inspect_history, rather than keeping a separate `verbose` flag? Also, we...

Hi @satyaloka93 , is this LiteLLM [documentation](https://docs.litellm.ai/docs/providers/openai#set-ssl_verifyfalse) useful? Seems like an external LiteLLM configuration you can set, after which you can call dspy.LM the same way you otherwise would.

Hey @satyaloka93 , [dspy.LM works by prepending 'openai/'](https://dspy.ai/learn/programming/language_models/?h=openai%2F) to your model name. Can you check whether your httpx_client configuration follows the documentation above and is also compliant with the...
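To illustrate the prefix convention (a sketch only, not the actual LiteLLM routing code): the part of the model string before the first `/` names the provider route, and the remainder is passed through as the model name:

```python
def split_provider(model: str, default: str = "openai") -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model_name).

    Illustrative helper only -- the real routing lives inside LiteLLM.
    """
    if "/" in model:
        provider, name = model.split("/", 1)
        return provider, name
    # No explicit prefix: fall back to an assumed default provider.
    return default, model

print(split_provider("openai/gpt-4o-mini"))  # ('openai', 'gpt-4o-mini')
```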

Hi @jimixxperez , this approach makes sense but may need some deeper testing. Feel free to open a PR with your current version and we can iterate on it!

Thanks for opening this PR @hawktang . Just curious, where is the LiteLLMVectorizer being used? It seems to me it is just using the OpenAI embeddings, but I wanted to double-check...

Hi @Jasonsey , I believe you need to add the `hosted_vllm` prefix to your model name or pass `vllm` as a provider arg. Feel free to reference the LiteLLM...
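As a sketch of the suggestion above (the helper is hypothetical and the model name is a placeholder; see the LiteLLM provider docs for the authoritative routing rules), prepending the `hosted_vllm/` prefix when it is missing:

```python
def with_hosted_vllm_prefix(model: str) -> str:
    """Prepend the 'hosted_vllm/' provider prefix if it is absent.

    Hypothetical convenience helper; the prefix itself is LiteLLM's
    convention for self-hosted vLLM endpoints.
    """
    prefix = "hosted_vllm/"
    return model if model.startswith(prefix) else prefix + model

print(with_hosted_vllm_prefix("my-vllm-model"))  # hosted_vllm/my-vllm-model
```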