Prasad Chalasani

131 comments by Prasad Chalasani

Ok, I've set up a working example using `ollama` to run your RAG example with `mistral:7b-instruct-v0.2-q4_K_M`; see this example script in the `langroid-examples` repo: https://github.com/langroid/langroid-examples/blob/main/examples/docqa/rag-local-simple.py This works for me on an...
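For reference, the core of that setup looks roughly like the sketch below. This is a minimal, simplified version of the linked script, assuming `ollama` is already serving `mistral:7b-instruct-v0.2-q4_K_M` locally; the document path is a placeholder you would point at your own files.

```python
# Minimal sketch of a local-LLM RAG setup with ollama.
# See rag-local-simple.py in langroid-examples for the full, tested version.
import langroid as lr
import langroid.language_models as lm
from langroid.agent.special.doc_chat_agent import DocChatAgent, DocChatAgentConfig

llm_config = lm.OpenAIGPTConfig(
    chat_model="ollama/mistral:7b-instruct-v0.2-q4_K_M",  # "ollama/<model>" prefix
    chat_context_length=8000,  # adjust to the model's actual context window
)

agent = DocChatAgent(
    DocChatAgentConfig(
        llm=llm_config,
        doc_paths=["path/to/your/docs"],  # placeholder: your own documents
    )
)

task = lr.Task(agent)
task.run()
```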

> what's the `farm-haystack` dependency?

It's for parsing PDF docs, but I just made a new release that eliminates the `haystack` and `transformers` dependencies (which also eliminates `torch` as a dependency),...

> Huge amount of text, in this case, should be handled by the tool/function itself.

I only listed one possible reason (i.e. result size) why we'd want the user to...

> Maybe worth wrapping it (optionally) so it is added automatically to the tool response.

This could be one way, e.g. via a config flag. We're thinking about this or...

> and this is what i get for sample code at top

Yes, I don't expect this to improve the script startup time. That is still an issue due to...

Forgot to mention one other thing. When you set up a task with `interactive=True`, it will wait for user input after each valid response from a non-human entity (i.e....
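To illustrate the effect of the flag, here is a rough sketch assuming a standard `ChatAgent`/`Task` setup (the LLM configuration is assumed to come from defaults or environment variables):

```python
# Sketch of the interactive flag's effect on a Task loop.
import langroid as lr

agent = lr.ChatAgent(lr.ChatAgentConfig(name="Assistant"))

# interactive=True: the task waits for user input after each valid
# response from a non-human entity (LLM, agent, tool handler).
task = lr.Task(agent, interactive=True)

# interactive=False: the task runs without pausing for the user,
# stopping only when its termination conditions are met.
# task = lr.Task(agent, interactive=False)

task.run("Summarize the key ideas of RAG in two sentences.")
```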

> use the `local/http://localhost:8000/v1`

Note that the syntax is `local/localhost:8000/v1`, i.e. you shouldn't include the `http://`.

That's puzzling. Looking at the [vllm docs](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#using-openai-chat-api-with-vllm), it should launch an OpenAI-compatible endpoint at `http://localhost:8000/v1`, and langroid should then work with the setting `OpenAIGPTConfig(chat_model="local/localhost:8000/v1")`. The only thing Langroid expects is an...
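Concretely, the Langroid side of that would look something like the sketch below, assuming a vLLM server is already running at `localhost:8000` and exposing an OpenAI-compatible `/v1` endpoint:

```python
# Sketch: pointing Langroid at a locally served OpenAI-compatible endpoint
# (e.g. vLLM at localhost:8000). Note: no "http://" in the chat_model value.
import langroid.language_models as lm

llm_config = lm.OpenAIGPTConfig(
    chat_model="local/localhost:8000/v1",  # "local/<host>:<port>/v1"
    chat_context_length=4096,  # set to whatever the served model supports
)

llm = lm.OpenAIGPT(llm_config)
response = llm.chat("Say hello in one short sentence.")
print(response.message)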

@frankyuan This is expected. You have to have some way to signal that the task is done, in order to exit the loop. That could be done either by explicit signals from...
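As a rough sketch of the idea (the exact termination options may vary by Langroid version), a non-interactive task can be ended either by bounding the loop structurally or by having the LLM emit an explicit done signal:

```python
# Sketch of two ways a task's loop can terminate (assumed mechanisms;
# check the Task docs for your Langroid version).
import langroid as lr

agent = lr.ChatAgent(
    lr.ChatAgentConfig(
        name="Solver",
        # Option 1: instruct the LLM to emit an explicit done signal
        # (e.g. the "DONE" convention) when it has finished.
        system_message="Answer the user's question, then say DONE.",
    )
)

# Option 2: bound the loop structurally, e.g. restrict it to a single round.
task = lr.Task(agent, interactive=False, single_round=True)
task.run("What is 17 * 23?")
```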

Can you post your exact `OpenAIGPTConfig` setting here? I haven't tested with vLLM, but this may be helpful: https://docs.litellm.ai/docs/providers/vllm

You might try setting `chat_model="litellm/vllm/[model-name]"`, though if your model is actually...
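As a concrete (untested) sketch of that suggestion, with `[model-name]` as a placeholder for whatever model the vLLM server is actually serving:

```python
# Untested sketch: routing to a vLLM-served model via litellm,
# following https://docs.litellm.ai/docs/providers/vllm
import langroid.language_models as lm

llm_config = lm.OpenAIGPTConfig(
    chat_model="litellm/vllm/[model-name]",  # replace [model-name] with your model
)
```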