[MiniLLM] Questions about performance without LM pipeline
In your README, I noticed that you state the LM corpus (e.g., OpenWebText for GPT-2 and RoBERTa for LLaMA) is not required by your method. Would you consider sharing the performance of MiniLLM trained without the LM corpus, similar to the main table in your paper? As follows:
Thanks in advance for your patient and valuable response!