Victor Dibia
Thanks for the detailed description and ideas! All very good! There is currently a PR that is slightly related, focused on enabling persona-based exploration (#11). The idea is...
Great. Let us discuss your findings here so far. On my end, I have been trying local Hugging Face models. For example, I have found this [hermes 13b](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-13b) to have decent performance...
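For reference, here is a rough sketch of how I load a model like that locally through llmx's Hugging Face provider. The provider name, config fields, and generation call are written from memory, so treat them as an assumption rather than the exact API:

```python
from llmx import llm, TextGenerationConfig

# Assumed usage of llmx's local Hugging Face provider; exact arguments may differ.
text_gen = llm(
    provider="hf",
    model="uukuguy/speechless-llama2-hermes-orca-platypus-13b",
    device_map="auto",  # spread the weights across available GPUs
)

config = TextGenerationConfig(temperature=0.2)
response = text_gen.generate(
    messages=[{"role": "user", "content": "Describe this dataset in one sentence."}],
    config=config,
)
print(response.text[0]["content"])
```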
Hi, I have not tested with the Mixtral model series. I'd suggest attempting to use [vllm](https://github.com/vllm-project/vllm) to set up an OpenAI-compatible server and then connect to that using the...
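As a concrete sketch of what I mean: once a vLLM OpenAI-compatible server is running (by default vLLM serves it on port 8000 under the `/v1` path), you can point the standard OpenAI Python client at it. The host, port, and model name below are placeholders for your own setup:

```python
from openai import OpenAI

# Point the standard OpenAI client at a locally running vLLM server.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default OpenAI-compatible endpoint
    api_key="EMPTY",  # vLLM does not require a real key by default
)

completion = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # the model the server was launched with
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```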
@Varun-varun, this has been updated. Can you verify whether you still have issues when you run:

```bash
pip install -U lida llmx openai
```
This has been addressed by updating llmx (which lida uses) to use the previous version of the openai library. Please let me know if a fresh install of lida (`pip install -U lida`)...
Hi, thanks for noting this. I just saw there was a bug where the latest version of llmx (the library used to make connections to OpenAI) was not installed. I have...
Hi @rmitra34, thanks for trying this out. At the moment, the `visualize` method returns a `spec` field when Altair is the selected library. `spec` is a dictionary representation of the...
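For context, here is a minimal sketch of how that looks end to end; the method names follow lida's documented API, but treat the exact fields on the returned chart object as approximate:

```python
from lida import Manager, llm

lida = Manager(text_gen=llm("openai"))  # assumes OPENAI_API_KEY is set
summary = lida.summarize("data/cars.csv")
goals = lida.goals(summary, n=1)

# With library="altair", each returned chart carries a `spec` field:
# a dictionary (Vega-Lite style) representation of the visualization.
charts = lida.visualize(summary=summary, goal=goals[0], library="altair")
spec = charts[0].spec
print(spec)
```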
Hi @HydraBucket, it looks like you may be using the wrong base URL ... can you change it from `http://0.0.0.0:4000` to `http://0.0.0.0:4000/v1`? Note the **v1** at the end. To...
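To illustrate the difference, here is a minimal sketch using the OpenAI Python client directly, with a placeholder model name and key:

```python
from openai import OpenAI

# Note the trailing /v1: the client appends routes such as /chat/completions
# to the base URL, so without /v1 the requests hit the wrong path.
client = OpenAI(base_url="http://0.0.0.0:4000/v1", api_key="placeholder-key")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # whatever model your local proxy serves
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```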
Hi @vanetreg, thanks for raising this. Do you know if the HF inference API is an OpenAI-compatible endpoint? AutoGen (and autogenstudio) standardize on LLM endpoints that are oai...
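If it does turn out to be OpenAI compatible, a config along these lines should work. The keys follow AutoGen's `config_list` convention, but the URL, model name, and key below are placeholders:

```python
import autogen

# Placeholder values: point base_url at any OpenAI-compatible endpoint.
config_list = [
    {
        "model": "my-hosted-model",
        "base_url": "https://example-endpoint/v1",
        "api_key": "placeholder-key",
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
```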
It's a good idea! There are a few custom parameters that will need to be supported. I believe this is related to #1364, which is already on the roadmap.