
[Question]: High PPL of ReLU-LLAMA-7B on wikitext2 for language modeling tasks

Open · llCurious opened this issue 11 months ago · 2 comments

Prerequisites

Before submitting your question, please ensure the following:

  • [x] I am running the latest version of PowerInfer. Development is rapid, and as of now, there are no tagged versions.
  • [x] I have carefully read and followed the instructions in the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).

Question Details

I use the sparsified LLaMA model from SparseLLM on Hugging Face, ReluLLaMA-7B. I compute perplexity (PPL) with a max_seq_length of 512 on the wikitext2 dataset (also from Hugging Face). However, the PPL reaches 16003, while the original dense Llama-2 model, Llama-2-7b-hf, has a PPL of 54.

There seems to be a huge PPL degradation from the ReLU activation. Do you have any insight into this phenomenon?
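
(For reference, a standard wikitext2 perplexity evaluation with transformers looks roughly like the sketch below. The model id, dataset config, dtype, and non-overlapping 512-token chunking are assumptions for illustration, not the exact script used in this issue.)

```python
# Minimal perplexity sketch: non-overlapping 512-token windows over wikitext2.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SparseLLM/ReluLLaMA-7B"  # or "meta-llama/Llama-2-7b-hf" for the dense baseline
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Concatenate the test split into one long token stream.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
input_ids = tokenizer(text, return_tensors="pt").input_ids

max_seq_length = 512
nlls = []
for i in range(0, input_ids.size(1) - max_seq_length, max_seq_length):
    chunk = input_ids[:, i : i + max_seq_length].to(model.device)
    with torch.no_grad():
        # Passing labels=input makes the model return the mean next-token NLL.
        nlls.append(model(chunk, labels=chunk).loss.float())

ppl = torch.exp(torch.stack(nlls).mean())
print(f"wikitext2 PPL @ {max_seq_length}: {ppl.item():.2f}")
```

Non-overlapping windows inflate PPL slightly compared to a sliding-window evaluation, but not by orders of magnitude, so the gap reported above would not come from chunking alone.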

Additional Context

All packages are at their latest versions. All models and datasets are from Hugging Face.

llCurious avatar Mar 11 '24 06:03 llCurious

Hi @hodlen, do you have any ideas?

llCurious avatar Mar 11 '24 08:03 llCurious

Sorry for the late reply. That is unexpected, since we have tested its perplexity under both transformers/torch and PowerInfer. Could you provide a minimal reproducible example so we can investigate further?

hodlen avatar Apr 06 '24 14:04 hodlen