llama-cpp-agent

Crash when setting top_k, top_p, or repeat_penalty

Open woheller69 opened this issue 1 year ago • 15 comments

I updated my GUI to your new 0.2.2 version. It now works as long as I do not set top_p, top_k, or repeat_penalty.

Setting any of them gives, e.g.:

    llama_cpp.llama_sample_top_p(
    ctypes.ArgumentError: argument 3: TypeError: wrong type

with these settings:

    self.provider = LlamaCppPythonProvider(self.main_model)
    self.settings = self.provider.get_provider_default_settings()
    self.settings.max_tokens = 2000
    self.settings.temperature = 0.65
    self.settings.top_k=40,
    self.settings.top_p=0.4,
    self.settings.repeat_penalty=1.18,
    self.settings.stream=True,

woheller69 avatar May 13 '24 15:05 woheller69

And where do I set repeat_last_n=64 now?

woheller69 avatar May 13 '24 15:05 woheller69

Or do I have to set these parameters when defining the model now?

    Llama(
        model_path = self.model_path,
        n_gpu_layers = 0,
        f16_kv = True,
        top_k = 40,
        top_p = 0.4,
        repeat_penalty = 1.18,
        ...
    )

woheller69 avatar May 13 '24 15:05 woheller69

@woheller69 You have to remove the trailing commas from the lines assigning top_p, top_k, repeat_penalty, and stream.
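
The reason: a trailing comma turns the right-hand side into a one-element tuple, so the provider ends up passing a tuple where the C binding expects a number, and the ctypes type check fails. A minimal illustration:

    top_p = 0.4,        # trailing comma: top_p is now the tuple (0.4,)
    print(type(top_p))  # <class 'tuple'>

    top_p = 0.4         # no comma: a plain float
    print(type(top_p))  # <class 'float'>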

You set repeat_last_n=64 at the initialization of the Llama class.

But there is still a problem with generation in llama-cpp-python: it inserts strange symbols into the text. I will look into this.

Maximilian-Winter avatar May 13 '24 16:05 Maximilian-Winter

Thanks, stupid error on my side :-)

So I moved repeat_last_n to Llama(...) and removed the commas.

        self.main_model = Llama(
            model_path = self.model_path,
            n_gpu_layers = 0,
            f16_kv = True,
            repeat_last_n = 64,
            use_mmap = True,
            use_mlock = False,
            embedding = False,
            n_threads = self.threads,
            n_batch = 128,
            n_ctx = self.context,
            offload_kqv = True,
            last_n_tokens_size = 1024,
            verbose = True,
            seed = -1,
        )
        self.provider = LlamaCppPythonProvider(self.main_model)
        self.settings = self.provider.get_provider_default_settings()
        self.settings.max_tokens = 2000
        self.settings.temperature = 0.65
        self.settings.top_k = 40
        self.settings.top_p = 0.4
        self.settings.repeat_penalty = 1.18
        self.settings.stream = True

save_messages is not available anymore. Is there a replacement?

woheller69 avatar May 13 '24 16:05 woheller69

@woheller69 The agent now uses a chat history class that implements the handling of messages. The BasicChatHistory class has a message store that handles storing the messages. You can access it by calling agent.chat_history.message_store.save_to_json with a filename, and load it again with load_from_json.
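
For example (minimal sketch, assuming an existing agent; the filename is arbitrary):

    # persist the current conversation, then restore it later
    agent.chat_history.message_store.save_to_json("chat_history.json")
    agent.chat_history.message_store.load_from_json("chat_history.json")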

I have to add that to the documentation. Thank you for pointing it out!

Maximilian-Winter avatar May 13 '24 16:05 Maximilian-Winter

@woheller69 I think llama-cpp-python is broken; the following code generates garbage at the beginning of generation or crashes the script:

    from llama_cpp import Llama

    llama_model = Llama(r"C:\AI\Agents\gguf-models\mistral-7b-instruct-v0.2.Q6_K.gguf", n_batch=1024, n_threads=10, n_ctx=8192)

    for t in llama_model.create_completion("[INST] Hello! [/INST]", stream=True):
        print(t["choices"][0]["text"], end="")

Maximilian-Winter avatar May 13 '24 16:05 Maximilian-Winter

Can you close this if you have no further questions?

Maximilian-Winter avatar May 13 '24 16:05 Maximilian-Winter

I have tried several models and do not get garbage. llama-cpp-python 0.2.74, updated yesterday.

woheller69 avatar May 13 '24 17:05 woheller69

Trying to save messages using

    self.llama_cpp_agent.chat_history.message_store.save_to_json("msg.txt")

gives

    TypeError: Object of type Roles is not JSON serializable
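
I guess the Roles enum is handed straight to json.dumps. For reference, a default handler along these lines would serialize it (minimal sketch; this Roles is just a stand-in for the library's enum):

    import json
    from enum import Enum

    class Roles(Enum):  # stand-in for llama-cpp-agent's Roles enum
        user = "user"
        assistant = "assistant"

    def enum_default(obj):
        # json.dumps calls this for objects it cannot serialize natively;
        # emit the enum member's value instead of the member itself
        if isinstance(obj, Enum):
            return obj.value
        raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

    print(json.dumps({"role": Roles.assistant, "content": "hi"}, default=enum_default))
    # {"role": "assistant", "content": "hi"}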

woheller69 avatar May 13 '24 17:05 woheller69

Sorry, I will update the package later today. Will inform you here!

Maximilian-Winter avatar May 13 '24 17:05 Maximilian-Winter

@woheller69 Thank you for the information on llama-cpp-python.

Maximilian-Winter avatar May 13 '24 17:05 Maximilian-Winter

@woheller69 Fixed everything and published a new version; let me know if it works for you. I also added a function to get the message store of the chat history. You can save and load like this:

    agent.chat_history.get_message_store().save_to_json("test.json")
    agent.chat_history.get_message_store().load_from_json("test.json")

Maximilian-Winter avatar May 13 '24 21:05 Maximilian-Winter

Saving messages now works, but I find that adding a message no longer works. When interrupting inference manually (see #47), I add the partial message to the history with

    self.llama_cpp_agent.add_message(self.model_reply, "assistant")

This worked with the "old" version; now it has no effect.

When saving the messages after add_message, the added message is not there.

woheller69 avatar May 14 '24 04:05 woheller69

I found I can add it with

    self.llama_cpp_agent.chat_history.get_message_store().add_assistant_message(self.model_reply)

But will it then be used in the follow-up conversation?

woheller69 avatar May 14 '24 04:05 woheller69

Another thing: prompt_suffix works nicely, but it is not stored as part of the assistant's message. I think it should be.

E.g. using "Sure thing!" as prompt_suffix eliminates refusals from Llama 3 :-) But when the conversation is saved, "Sure thing!" is missing.
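
A possible workaround on my side, sketched below (assuming get_chat_response accepts prompt_suffix the way I use it; user_input is a placeholder):

    suffix = "Sure thing!"
    reply = self.llama_cpp_agent.get_chat_response(user_input, prompt_suffix=suffix)
    # store the suffix together with the reply so the saved conversation is complete
    self.llama_cpp_agent.chat_history.get_message_store().add_assistant_message(suffix + reply)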

woheller69 avatar May 14 '24 07:05 woheller69