Victor Dibia
This is an excellent suggestion. I'll explore an implementation.
Thanks for the feedback. There is a chance I should be flushing memory after each generation; I will look into this. Conda should not be a factor.
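For reference, the flush-after-generation idea would look roughly like the sketch below. This is a minimal sketch assuming a PyTorch-backed model on GPU; `generate` is a hypothetical stand-in for the actual generation call.

```python
import gc

import torch


def flush_memory():
    """Release memory between generations (sketch, assumes PyTorch/CUDA).

    gc.collect() drops dangling Python references; empty_cache() returns
    cached CUDA blocks to the driver so other allocations can reuse them.
    """
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()


# Hypothetical usage: flush after each generation call.
# for prompt in prompts:
#     result = generate(prompt)  # `generate` is a placeholder
#     flush_memory()
```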
Hi @preyasgarg, thanks for the note. Can you provide more information or context? How much improvement might this penalty provide, and what is the expected impact on generated images? Also, a PR would...
Hi @nuaabuaa07, can you describe the use case you intend to support? E.g., what is the input and expected output? Currently, LIDA is optimized for tabular data, but I...
Thanks, Aiden. I am leaning more towards supporting **_discovery_** of data as opposed to hosting data (we can probably assume the user is able to do this already). I updated...
Are you able to run `pip install lida` without any errors? I typically suggest using a fresh Python environment, e.g. via conda.
The Mixtral models have not been tested with lida/llmx. I would recommend the following (see the sketch after this list):

- Load the model using a tool like [vllm](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#openai-compatible-server), which supports Mixtral.
- vllm provides an...
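A minimal sketch of that flow, querying a locally served Mixtral model through vllm's OpenAI-compatible endpoint. The launch command, base URL, model id, and placeholder key are assumptions taken from the vllm quickstart, not values tested with lida/llmx:

```python
# Assumes the server was started with something like:
#   python -m vllm.entrypoints.openai.api_server \
#       --model mistralai/Mixtral-8x7B-Instruct-v0.1
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default vllm port (assumption)
    api_key="EMPTY",  # vllm does not check the key by default
)

response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Summarize this dataset schema."}],
)
print(response.choices[0].message.content)
```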
@trojrobert, any chance you want to open a PR for this?
Hi, this is a known issue that tends to arise with smaller models; it is also mentioned in #27. We are currently running some experiments to see which small model provides decent performance...
What quantized models are you interested in using? In general, as long as you can spin up an OpenAI-compliant webserver endpoint from your model, you can integrate it into...
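Roughly, the integration would look like the sketch below. This assumes llmx's OpenAI provider accepts an `api_base` argument (it exposes one for Azure-style endpoints); the URL, key, model name, and `data.csv` path are placeholders, not tested values:

```python
# Sketch: point lida at any OpenAI-compliant endpoint, e.g. one
# serving your quantized model locally.
from lida import Manager, TextGenerationConfig, llm

text_gen = llm(
    provider="openai",
    api_base="http://localhost:8000/v1",  # your local server (placeholder)
    api_key="EMPTY",                      # placeholder key
)

lida = Manager(text_gen=text_gen)
config = TextGenerationConfig(model="your-quantized-model", use_cache=True)
summary = lida.summarize("data.csv", textgen_config=config)
```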