fewshot-font-generation
training set data
I am using LF-Font for training and have the following questions:
- Are all the Unicode code points in the example file data/chn/train_chars.json used for training? The number of characters seems very large; don't we have fewer samples?
- In this line, https://github.com/clovaai/fewshot-font-generation/blob/c445f66c5c18c5002241e1fec65aaaa1042a2f63/LF/phase1_trainer.py#L61, `ref_imgs = batch["ref_imgs"].cuda()` is called. Why is there a `ref_imgs` field in the batch?
Hi,
- Yes, we use all the characters in data/chn/train_chars.json for training. As mentioned in our paper, our method uses a large number of characters during the training phase.
- We use a custom `collate_fn` for the dataloader so that each batch is a dictionary. See here.
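To illustrate the idea, here is a minimal sketch (not the repository's actual dataset code) of how a custom `collate_fn` can produce a dictionary-formatted batch with a `ref_imgs` key; `ToyFontDataset` and `dict_collate_fn` are hypothetical names used only for this example.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ToyFontDataset(Dataset):
    """Hypothetical dataset: each sample is a dict with a reference image tensor."""
    def __init__(self, n=4):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        # 1-channel 32x32 dummy "reference image" plus an integer label.
        return {"ref_imgs": torch.zeros(1, 32, 32), "label": idx}

def dict_collate_fn(samples):
    # Stack each field across the list of per-sample dicts, so the batch
    # itself is a dict; this is why batch["ref_imgs"] works in the trainer.
    return {
        "ref_imgs": torch.stack([s["ref_imgs"] for s in samples]),
        "label": torch.tensor([s["label"] for s in samples]),
    }

loader = DataLoader(ToyFontDataset(), batch_size=2, collate_fn=dict_collate_fn)
batch = next(iter(loader))
print(batch["ref_imgs"].shape)  # torch.Size([2, 1, 32, 32])
```

With the default `collate_fn`, PyTorch would also collate a list of dicts into a dict of stacked tensors; a custom function mainly gives explicit control over which fields are stacked and how.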
Sorry for the late reply.