
0.1.x (Use tools instead of functions)

Open jekalmin opened this issue 1 year ago • 14 comments

  • Merged PR(https://github.com/jekalmin/extended_openai_conversation/pull/25) into 0.1.x for releasing 0.1.0-beta1
  • Need to merge into main branch when API is stable
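
For context, the "functions vs. tools" distinction comes from OpenAI's chat completions API, where the legacy `functions` / `function_call` parameters were superseded by `tools` / `tool_choice`. Below is a minimal sketch of the difference in the request and response fields; the model name and the `get_weather` spec are only placeholders, not this integration's actual code.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative placeholder: a single function described as a JSON schema.
spec = {
    "name": "get_weather",
    "description": "Return the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Legacy style: the deprecated `functions` / `function_call` parameters.
legacy_resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    functions=[spec],
    function_call="auto",
)

# Newer style: the same spec wrapped in a `tools` entry of type "function".
tools_resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=[{"type": "function", "function": spec}],
    tool_choice="auto",
)

# The response shape differs too: the legacy API fills message.function_call,
# while the tools API returns a list of calls in message.tool_calls.
print(legacy_resp.choices[0].message.function_call)
print(tools_resp.choices[0].message.tool_calls)
```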

jekalmin avatar Nov 19 '23 01:11 jekalmin

@rkistner Maybe I will add a Use Tools option in the next release (probably 1.0.0), so that I don't have to maintain multiple code paths.

jekalmin avatar Jan 01 '24 10:01 jekalmin

Hi! If using tools instead of functions, does the prompt need to be changed, or do the functions need to be removed? Thanks!

I'm currently running the beta and would like to test this. Thanks!

Anto79-ops avatar Jan 19 '24 22:01 Anto79-ops

Thanks for your interest!

I released this in 1.0.2-beta1. You don't have to change the prompt or functions.

Please try it and give feedback.

jekalmin avatar Jan 20 '24 01:01 jekalmin

Thank you!

I wanted to share that I've been experimenting with Beta1 using LocalAI.

With LocalAI, I have the Mixtral 8x7B (v2.7) Q6 GGUF model set up, which is supposedly one of the best models out right now.

I pointed your integration to this model and then toggled the "use tools" button and pressed submit.

I was pleasantly surprised that, by doing the above, it was able to read sensor information from my Home Assistant quite well with no errors.

[screenshots attached: Untitled.jpg, Untitled2.jpg]

However, when I asked it to turn on a light, it seemed to go through the actions and acknowledged that the light was on, but in fact it didn't actually turn on.

[screenshot attached: Untitled3.jpg]

So I think using tools is moving in the right direction, but some tweaks might be required for it to actually execute service calls. Do you know if that's something that can be done in the prompt?
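
As an editorial aside: for the light to actually switch, the model has to come back with a structured function/tool call rather than just describing the action in its text reply. Below is a hedged sketch of roughly what the arguments to this integration's execute_services function would look like; the argument names follow the integration's default function spec as I recall it and may differ in your configuration, and the entity_id is a placeholder.

```python
# Rough sketch of the structured arguments execute_services expects back from
# the model. If the model only *describes* this call in prose instead of
# returning it as a function/tool call, the integration has nothing to run,
# so the light stays off even though the reply sounds confident.
expected_call = {
    "name": "execute_services",
    "arguments": {
        "list": [
            {
                "domain": "light",        # Home Assistant domain
                "service": "turn_on",     # service within that domain
                "service_data": {"entity_id": "light.living_room"},  # placeholder entity
            }
        ]
    },
}
```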

Thank you!

Anto79-ops avatar Jan 20 '24 17:01 Anto79-ops

I also used Assist to help troubleshoot. It seems to pick the correct entity and the correct service, so I'm not sure why it's not working when it's hitting the correct service and entity:

[screenshots attached: Screenshot_20240120_133622, Screenshot_20240120_133631, Screenshot_20240120_133638 (Home Assistant)]

Anto79-ops avatar Jan 20 '24 20:01 Anto79-ops

As always, thanks @Anto79-ops for your cooperation. Is there a log showing how the LLM called the function?

jekalmin avatar Jan 24 '24 01:01 jekalmin

I can check if there's a way to look at LocalAI's logs... while I send the command to turn off the light. Is that something that would be useful, or do you need the Home Assistant integration logs?

Anto79-ops avatar Jan 24 '24 01:01 Anto79-ops

I just wanted to know if the function call shows up in the message history log. Let me try this soon.

jekalmin avatar Jan 24 '24 04:01 jekalmin

Fantastic. You'll be very surprised how well it works. Here is the model I'm using (Q6_K version):

https://huggingface.co/TheBloke/dolphin-2.7-mixtral-8x7b-GGUF

It's not a small model, so it may take 20 to 40 seconds to reply if you don't have a decent CPU/GPU.

Let me know how it goes!

Anto79-ops avatar Jan 24 '24 05:01 Anto79-ops

LocalAI does not support function calling right now; you need to instruct your model to generate function calls and parse the output yourself.

This integration relies on the response from the OpenAI API having the is_function_call value set, and LocalAI models are not trained to do this. I am investigating integration with https://github.com/MeetKai/functionary, which, combined with their special vLLM server, seems promising in its responses - but it's weak at general responses, so you really need multiple models... tricky tricky.
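
A rough illustration of the fallback ex10ded describes: if the local backend can't emit native function calls, you prompt the model to answer tool requests with a small JSON object and parse it out of the text yourself. This is only a sketch under that assumption, not part of this integration.

```python
import json
import re


def extract_function_call(text: str):
    """Pull a JSON function-call object out of free-form model output.

    Assumes the prompt told the model to answer tool requests with an object
    like {"name": ..., "arguments": {...}}. Returns None when no such object
    is found, so the text can be treated as an ordinary reply.
    """
    # Greedily grab from the first '{' to the last '}' - good enough when the
    # output contains a single JSON object.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    return call if "name" in call and "arguments" in call else None


# Example: output from a local model that was prompted to emit JSON calls.
reply = 'Sure, turning it on now. {"name": "execute_services", "arguments": {"list": []}}'
print(extract_function_call(reply))
```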

ex10ded avatar Feb 04 '24 12:02 ex10ded

@ex10ded Thanks for your comments. Have you been able to get the functionary v2 GGUF model to work with LocalAI? It seems to require a special chat template if you use GGUF (not vLLM):

https://github.com/mudler/LocalAI/discussions/1641

Anto79-ops avatar Feb 04 '24 17:02 Anto79-ops


No, I hit the same issue as you - the template does not seem to work when using anything other than their special vLLM server (not even standard vLLM) - and they seem to do a lot of pre-processing of the tools sent even before applying a template.

ex10ded avatar Feb 05 '24 09:02 ex10ded

I'm on 1.0.3, and for some reason it works with Use Tools off, but when I turn it on it seems to REALLY want to use execute_services for getting states... This is with the default model and prompt. Did anyone else notice this difference? It should behave the same, right?

wicol avatar Apr 25 '24 10:04 wicol

I got LocalAI working with this addon, but the LLM seems to start looping on function calls.

Has anyone encountered this?

maxi1134 avatar Jun 15 '24 16:06 maxi1134