
Evaluation of HF Models with LIDA


What

Local models (e.g., LLaMA-based models available via HuggingFace in the 7B or 13B size classes) offer multiple benefits (e.g., they can be finetuned/adapted and run locally). While LIDA has mostly been tested with OpenAI models, more work is needed to test workflows and performance for HF models.

Work Items

  • Test a set of local HF models, either directly with llmx or through LIDA, systematically document bugs, and suggest fixes via PRs (see the smoke-test sketch below)
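For example, a model can be smoke-tested directly with llmx before wiring it into LIDA. A minimal sketch using llmx's llm/generate interface (the prompt and config values here are arbitrary examples):

from llmx import llm, TextGenerationConfig

# load a local HuggingFace model (example checkpoint discussed in this thread)
text_gen = llm(
    provider="hf",
    model="uukuguy/speechless-llama2-hermes-orca-platypus-13b",
    device_map="auto",
)

# send a single chat-style message and print the first completion
config = TextGenerationConfig(temperature=0, max_tokens=256)
response = text_gen.generate(
    messages=[{"role": "user", "content": "List three columns you might expect in a cars dataset."}],
    config=config,
)
print(response.text[0].content)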

victordibia avatar Sep 18 '23 16:09 victordibia

I have started working on this

trojrobert avatar Sep 29 '23 07:09 trojrobert

Great. Let us discuss your findings here so far.

On my end, I have been trying out local HuggingFace models.

For example, I have found that this Hermes 13B model has decent performance in terms of goal generation but limited success with visualization generation.

I'll share updates.

In the meantime, here is how I am testing local models with LIDA. I have updated the README.


Using LIDA with Locally Hosted LLMs (HuggingFace)

LIDA uses the llmx library as its interface for text generation. llmx supports multiple local model backends, including HuggingFace models. You can use HuggingFace models directly (assuming you have a GPU) or connect to an OpenAI-compatible local model endpoint, e.g., one served with the excellent vLLM library.

Using HuggingFace Models Directly

from lida import Manager, llm
text_gen = llm(provider="hf", model="uukuguy/speechless-llama2-hermes-orca-platypus-13b", device_map="auto")
lida = Manager(text_gen=text_gen)
# now you can call lida methods as above, e.g.
summary = lida.summarize("data/cars.csv") # ....
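From there, the rest of the pipeline works the same as with OpenAI models. A minimal sketch of the follow-on calls (the goal count and goal index are arbitrary examples):

goals = lida.goals(summary, n=2)  # generate visualization goals from the summary
charts = lida.visualize(summary=summary, goal=goals[0])  # generate chart code for the first goal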

Using an OpenAI-Compatible Endpoint (e.g., a vLLM Server)

from lida import Manager, TextGenerationConfig, llm

model_name = "uukuguy/speechless-llama2-hermes-orca-platypus-13b"
model_details = [{'name': model_name, 'max_tokens': 2596, 'model': {'provider': 'openai', 'parameters': {'model': model_name}}}]

# assuming your vllm endpoint is running on localhost:8000
text_gen = llm(provider="openai", api_base="http://localhost:8000/v1", api_key="EMPTY", models=model_details)
lida = Manager(text_gen = text_gen)
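You can then call the manager as usual, passing a TextGenerationConfig so requests are routed to the registered local model. A minimal sketch (the sampling values are arbitrary):

textgen_config = TextGenerationConfig(n=1, temperature=0.5, model=model_name, use_cache=True)
summary = lida.summarize("data/cars.csv", textgen_config=textgen_config)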

victordibia avatar Sep 29 '23 16:09 victordibia

I was thinking LangChain would be useful here. Interesting to see what you are doing with llmx.

trojrobert avatar Sep 30 '23 02:09 trojrobert

Have you considered using https://mistral.ai/news/announcing-mistral-7b/ ?

zirubak avatar Oct 10 '23 03:10 zirubak

@victordibia I think the code below in the llmx library causes this error (lines 47 and 48). You are adding "provider" and "models" to kwargs while also passing them as arguments.

    kwargs["provider"] = kwargs["provider"] if "provider" in kwargs else provider
    kwargs["models"] = kwargs["models"] if "models" in kwargs else models

(screenshot of the error traceback)

trojrobert avatar Oct 23 '23 21:10 trojrobert

@victordibia I got an error while downloading other models from Hugging Face.

This is the code: (screenshot)

This is the error: (screenshot)

Solution

Add a new argument, "offload_folder", to the self.model call in the llmx package: (screenshot)
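For reference, here is a minimal sketch of that change, assuming llmx loads the model with transformers' from_pretrained (offload_folder is a standard transformers/accelerate argument that lets weights spill to disk when they do not fit in GPU/CPU memory):

from transformers import AutoModelForCausalLM

# without offload_folder, from_pretrained raises a ValueError when
# device_map="auto" decides some modules must be offloaded to disk
model = AutoModelForCausalLM.from_pretrained(
    "uukuguy/speechless-llama2-hermes-orca-platypus-13b",  # example checkpoint
    device_map="auto",
    offload_folder="offload",  # any writable directory
)  # inside llmx this is what gets assigned to self.model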

trojrobert avatar Nov 16 '23 15:11 trojrobert

Hello, I am having some trouble loading "ehartford/dolphin-2.5-mixtral-8x7b". Has anyone tried this? Any help or steps you can provide would be much appreciated.

vegansquirrel avatar Dec 20 '23 18:12 vegansquirrel

Hi,

I have not tested with the Mixtral model series. I'd suggest using vLLM to set up an OpenAI-compatible server and then connecting to it using the OpenAI approach above. Mixtral is supported in vLLM.
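For reference, a sketch of that setup for the model above, following the vLLM example earlier in the thread (the max_tokens value and port are assumptions):

from lida import Manager, llm

# assumes a vLLM OpenAI-compatible server is already running, e.g. started with:
#   python -m vllm.entrypoints.openai.api_server --model ehartford/dolphin-2.5-mixtral-8x7b
model_name = "ehartford/dolphin-2.5-mixtral-8x7b"
model_details = [{"name": model_name, "max_tokens": 2048,
                  "model": {"provider": "openai", "parameters": {"model": model_name}}}]
text_gen = llm(provider="openai", api_base="http://localhost:8000/v1",
               api_key="EMPTY", models=model_details)
lida = Manager(text_gen=text_gen)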

Let me know how it goes.

victordibia avatar Dec 20 '23 19:12 victordibia

Have you considered using https://mistral.ai/news/announcing-mistral-7b/ ?

This works for summary generation but not for graph generation. Did you find anything else that works better?

kishoretvk avatar Mar 19 '24 22:03 kishoretvk