Fine-Tuning the Model

Open cv277 opened this issue 2 years ago • 9 comments

I want to fine-tune ProGen2-small on my own dataset. See this Google Colab notebook for an annotated version of the code and the error: https://colab.research.google.com/drive/1_R0xgf6Kw0K88PYF7-ZOCIh9WRSmXN8C?usp=sharing

First I load the model like this:

import torch
from tokenizers import Tokenizer
from progen.progen2.models.progen.modeling_progen import ProGenForCausalLM

# assumed device setup (defined elsewhere in the notebook)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

model = ProGenForCausalLM.from_pretrained(
    '/content/drive/MyDrive/progen2-small',
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(device)

I am using the Hugging Face Trainer to fine-tune the model with DataCollatorForLanguageModeling. I load the tokenizer like this:

def create_tokenizer_custom(file):
    with open(file, 'r') as f:
        return Tokenizer.from_str(f.read())

tokenizer = create_tokenizer_custom(file='/content/progen/progen2/tokenizer.json')

And then convert it to a PreTrainedTokenizerFast as suggested by: https://github.com/huggingface/tokenizers/issues/325

from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

tokenizer.save("my-tokenizer.json")
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="my-tokenizer.json")
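
For reference, my Trainer setup looks roughly like this (a minimal sketch; the dataset variable, output path, and training arguments below are placeholders rather than my exact values):

from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

# The collator needs a pad token; this assumes the ProGen2 tokenizer's <|pad|> token (id 0).
fast_tokenizer.pad_token = "<|pad|>"

# mlm=False makes the collator build causal-LM (next-token) labels from the input ids.
data_collator = DataCollatorForLanguageModeling(tokenizer=fast_tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="progen2-small-finetuned",  # placeholder
    per_device_train_batch_size=4,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,  # assumed: a pre-tokenized dataset of protein sequences
    data_collator=data_collator,
)
trainer.train()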

During fine-tuning, the training loss becomes 0.0000. After training, I attempt to produce new samples:

with torch.no_grad():
  input_ids = torch.tensor(fast_tokenizer.encode("1GRGL")).view([1, -1]).to(device)
  tokens_batch = model.generate(input_ids, do_sample=True, temperature=0.7, max_length=50, top_p=10, num_return_sequences=1, pad_token_id=0)
  as_lists = lambda batch: [batch[i, ...].detach().cpu().numpy().tolist() for i in range(batch.shape[0])]
  print(tokenizer.decode_batch(as_lists(tokens_batch))[0])

However, I get this error: RuntimeError: probability tensor contains either inf, nan or element < 0. Please see the Google Colab notebook above for the entire code.

cv277 avatar Feb 10 '23 05:02 cv277

Shouldn't top_p=10 be < 1?
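
For example (a sketch of the corrected call, reusing model and input_ids from the snippet above; top_p is a nucleus-sampling probability mass, so it must lie in (0, 1]):

tokens_batch = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.7,
    max_length=50,
    top_p=0.95,  # was top_p=10; nucleus sampling expects a value in (0, 1]
    num_return_sequences=1,
    pad_token_id=0,
)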

Seaxingzhou avatar Feb 10 '23 07:02 Seaxingzhou

Shouldn't top_p=10 be < 1?

Unfortunately I still get the error after setting top_p to a value less than one. Thank you though!

cv277 avatar Feb 11 '23 04:02 cv277

I am getting a warning and an error, which are as follows:

Warning: You're using a PreTrainedTokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.

Error: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'.

tanuj2212 avatar Mar 22 '23 18:03 tanuj2212

@cv277 were you able to resolve the issue?

Geraldene avatar Apr 24 '23 15:04 Geraldene

To fix this, you should use torch_dtype=torch.float32 instead.
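
For instance (a sketch assuming the same local checkpoint path as above; fp32 also avoids the half-precision LayerNorm kernel that is missing on CPU):

model = ProGenForCausalLM.from_pretrained(
    '/content/drive/MyDrive/progen2-small',
    torch_dtype=torch.float32,  # fp32 instead of fp16
    low_cpu_mem_usage=True,
).to(device)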

msparsa avatar May 04 '23 17:05 msparsa

I would like to know what the dataset format looks like. Could you provide an example?

TC2022lxf avatar Jun 21 '23 06:06 TC2022lxf

I've switched to torch_dtype=torch.float32 but am still getting this issue for progen-base and the larger models (though not for progen-small) when I call:

model = ProGenForCausalLM.from_pretrained('/content/drive/MyDrive/progen2-small', torch_dtype=torch.float32, low_cpu_mem_usage=True).to(device)

Has anyone experienced similar issues or is there somewhere else I need to change the dtype?
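
One check (a sketch, assuming the model is loaded as above) is to print the parameter dtypes and, if anything is still half precision, cast the whole model to fp32:

print({p.dtype for p in model.parameters()})  # should contain only torch.float32
model = model.float()  # force every parameter to fp32 if torch.float16 still shows up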

oliverfleetwood avatar Nov 09 '23 19:11 oliverfleetwood

@oliverfleetwood That works for me; I tried loading the progen2-large model and it loads fine. What error are you encountering?

Geraldene avatar Nov 12 '23 09:11 Geraldene

At first I ran only on CPU. After upgrading CUDA and reinstalling torch, I was able to run the larger models on a GPU with the same setup. I still get the same error when I try to run the larger models (i.e., all except progen-small) on CPU.

oliverfleetwood avatar Nov 17 '23 09:11 oliverfleetwood