SEED
Hyperparameter for training SEED Tokenizer
Hi! Thank you for the wonderful work.
Could you provide detailed information on training the SEED Tokenizer? I cannot find the hyperparameters for training it in your paper.
I also have another question. The paper divides SEED Tokenizer training into two stages. Does that mean the Q-Former is pre-trained in stage 1, and then the Q-Former, codebook, decoder, and MLP are trained together in stage 2?
Thank you.