gemma_pytorch
Output with higher max_length is repetition of base text
When generating text with a specified value of max_length, the generated text keeps repeating several times until the output spans the full max_length. The following code is an example of this:
import keras_nlp

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
single_prompt_result = gemma_lm.generate("Keras is a", max_length=4096)
print(single_prompt_result)
As you can observe, the sentence keeps repeating to span the max_length, while it should ideally stop once it has written the base text.
The code was run on Kaggle with the "gemma_2b_en" model on a P100 GPU. To recreate the issue you can run the given code.
Could you please try the instruction-tuned model instead? It should give you better results.
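For reference, switching to the instruction-tuned variant is a one-line change. This is a minimal sketch assuming the preset name "gemma_instruct_2b_en"; check the keras_nlp preset list for the exact names available to you:

import keras_nlp

# Instruction-tuned variant; tuned to follow prompts and stop on its own.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_2b_en")
print(gemma_lm.generate("Keras is a", max_length=4096))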
Thanks! With the instruction-tuned model the output is perfect.
Btw, is there any reason why the gemma_2b_en model produced repetitive output instead of stopping?
It's kind of expected that the pre-trained models only try to complete text. One thing you could try is tuning the sampling parameters to see if you can get a bit more diversity in the output.
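As an example, here is a minimal sketch of swapping the sampler in keras_nlp; the specific values (k=50, temperature=0.9) are illustrative assumptions, not recommendations:

import keras_nlp

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
# Replace the default sampler with top-k sampling to add randomness,
# which can help break out of repetitive loops.
gemma_lm.compile(
    sampler=keras_nlp.samplers.TopKSampler(k=50, temperature=0.9),
)
print(gemma_lm.generate("Keras is a", max_length=128))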
I am just happy to be a part of this chat
Yeah, it's expected to complete the text, but it still shouldn't repeat itself, right? For example, other text generation models might produce an output that ends mid-sentence depending on the max_length, but they don't produce repeating outputs.
I've noticed the 2b model repeating itself as well, although I found it does it when the context of my prompt would be hard even for a human to figure out.
These repetitions are expected with PT (pre-trained) models. It would be better to fine-tune them or use the IT (instruction-tuned) models.
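If you do want to fine-tune, here is a minimal sketch using keras_nlp's LoRA support; the training data is a hypothetical placeholder, and the rank and learning rate are illustrative assumptions:

import keras
import keras_nlp

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
# Enable LoRA so only small adapter weights are trained.
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
# `train_texts` is a hypothetical list of formatted training strings.
train_texts = ["Instruction: ... Response: ..."]
gemma_lm.fit(train_texts, epochs=1, batch_size=1)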