How to train model with the HME100K dataset?
Hi @LBH1024,
Can you provide the vocab file used for HME100K dataset?
Besides, how do you split the dataset since there is no validation set?
Thanks.
Hi, the vocab file can be obtained by counting all the symbol classes that appear in the HME100K dataset. More details on how to split the dataset can be found in the paper.
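As a minimal sketch, counting symbol classes could look like the following. The label-file layout here (tab-separated image name and space-separated symbols per line) is a hypothetical assumption; adjust it to the actual HME100K label format.

```python
from collections import Counter

def build_vocab(label_file, out_file):
    """Count every symbol appearing in the labels and write one per line.

    Assumes each line is `image_name<TAB>space-separated symbols`
    (hypothetical format -- adapt to the real label layout).
    """
    counter = Counter()
    with open(label_file, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) != 2:
                continue
            counter.update(parts[1].split())  # one token per symbol class
    with open(out_file, "w", encoding="utf-8") as f:
        for sym, _ in counter.most_common():
            f.write(sym + "\n")
```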
Hi @LBH1024,
Based on the details described for the HME100K dataset in the paper, I cannot reproduce the results. Can you provide more details about the optimizer configuration used for the HME100K dataset?
Thanks!
Train with the following parameters:
epochs: 90
batch_size: 8
workers: 0
train_parts: 3
valid_parts: 1
valid_start: 0
save_start: 0
optimizer: Adadelta
lr: 1
lr_decay: cosine
eps: 1e-6
weight_decay: 2e-5
beta: 0.9
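As a sketch, these settings map onto PyTorch roughly as follows, assuming the repo uses `torch.optim.Adadelta` and that `beta: 0.9` corresponds to Adadelta's `rho` parameter (both are assumptions, not confirmed by the authors):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in; replace with the actual model.
model = nn.Linear(10, 10)

# Mirrors the config: Adadelta with lr=1, eps=1e-6, weight_decay=2e-5;
# `beta: 0.9` is assumed to be Adadelta's rho.
optimizer = torch.optim.Adadelta(
    model.parameters(), lr=1.0, rho=0.9, eps=1e-6, weight_decay=2e-5
)

# Cosine learning-rate decay over the 90 training epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=90)

for epoch in range(90):
    # ... train over batches of size 8 here ...
    scheduler.step()
```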
Hi @LBH1024 . There are a few Chinese characters in the symbol class of HME100K, do I need to ignore these characters when building the vocab? Thanks!
These Chinese characters need to be recognized, so you shouldn't ignore them.
Thanks
Hi @LBH1024, could you please provide the code or a more detailed explanation regarding the pre-processing steps you used on the HME100K dataset for training/inference?
- I notice you mention here that the HME100K images can be resized to a height of 120 while keeping the image aspect ratio. Why 120? Do you pad the lower right corner of the smaller images to match the maximum width in the batch as you mention here after resizing?
- Do you keep the HME100K images as RGB or do you convert to grayscale or any other format such as bitmap to be consistent with the CROHME dataset?
- Do you normalize the images in any way?
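For reference, my current attempt at the resize-and-pad steps looks like the sketch below. It assumes grayscale arrays, zero padding in the bottom-right corner, and a crude nearest-neighbour resize (a real pipeline would use cv2/PIL interpolation); please correct me if any of this differs from your pipeline.

```python
import numpy as np

TARGET_H = 120  # the fixed height mentioned for HME100K images

def resize_to_height(img, target_h=TARGET_H):
    """Nearest-neighbour resize to a fixed height, keeping aspect ratio.

    `img` is an H x W grayscale array.
    """
    h, w = img.shape
    new_w = max(1, round(w * target_h / h))
    rows = (np.arange(target_h) * h / target_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[rows][:, cols]

def pad_batch(images, pad_value=0):
    """Pad each image at the bottom-right to the maximum width in the batch."""
    max_w = max(im.shape[1] for im in images)
    out = np.full((len(images), TARGET_H, max_w), pad_value,
                  dtype=images[0].dtype)
    for i, im in enumerate(images):
        out[i, :im.shape[0], :im.shape[1]] = im
    return out
```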
Thanks.
Hi, how long does it take to train using the HME100K dataset? Can I refer to your training parameters?