VQ-Font
Trying to Generate Korean Characters
I am trying to generate Korean characters using your model. For Chinese everything works fine, but when I try to generate Korean characters, I keep getting the following error:
File "/media/hdd1/Irfan/VQ-Font-korean/datasets/dataset_transformer.py", line 96, in getitem
content_imgs = torch.stack([self.env_get(self.env, self.content_font, uni, self.transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/hdd1/Irfan/VQ-Font-korean/datasets/dataset_transformer.py", line 96, in
When I tried to debug, it appeared that no matter which font I chose, the content images were not being loaded. Other images, e.g. the style images, load fine.
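A quick way to inspect what actually landed in the LMDB is to dump its keys; the path and the way keys encode the font name below are assumptions about my setup, not something from the repo, so adjust them to whatever build_meta4train.py actually writes:

import lmdb

# Placeholder path: point this at the .lmdb directory created under --saving_dir.
LMDB_PATH = "../results/korean_dataset/lmdb"
# Assumed content-font name as it appears in the keys; adjust to your setup.
CONTENT_FONT = "content_all"

env = lmdb.open(LMDB_PATH, readonly=True, lock=False)
with env.begin() as txn:
    # Collect every key so we can see whether content-font entries were written at all.
    keys = [k.decode("utf-8", errors="replace") for k, _ in txn.cursor()]
env.close()

content_keys = [k for k in keys if CONTENT_FONT in k]
print(f"total keys: {len(keys)}, keys mentioning the content font: {len(content_keys)}")
print(content_keys[:10])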
The following is the output of build_trainset.sh:
build_meta4train_lmdb done! all_style_fonts: 7 train_style_fonts: 5 val_style_fonts: 2 seen_unicodes: 2000 unseen_unicodes: 350
Any idea what I am doing wrong?
Sorry for the late reply. It seems that some images are missing from your datasets; the Korean .ttf files you collected may not all contain the (2000+350) characters. Can I see the contents of your build_trainset.sh?
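You can check glyph coverage quickly with fontTools; this is only a rough sketch, and the path and character list are placeholders:

from fontTools.ttLib import TTFont

ttf_path = "your_korean_font.ttf"   # placeholder: one of the collected Korean fonts
chars = ["가", "나", "다"]           # in practice, the full (2000+350) character list

font = TTFont(ttf_path)
cmap = font.getBestCmap()           # code point -> glyph name from the best unicode cmap subtable
missing = [c for c in chars if ord(c) not in cmap]
print(f"{len(missing)} of {len(chars)} characters have no glyph in {ttf_path}")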
The TTF files contain all of the (2000+350) characters, as I am training my model on the same dataset. I have verified again: the generated folders contain all the required images.
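A quick way to double-check this is to count the rendered images per font directory; the layout and the .png extension below are assumptions about my setup:

import os

# Assumed layout: one sub-directory per font, one rendered .png per character.
content_dir = "../datasets/images/content_all"
for root, dirs, files in os.walk(content_dir):
    imgs = [f for f in files if f.lower().endswith(".png")]
    if imgs:
        print(root, len(imgs))      # expect 2000 + 350 = 2350 images for the content font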
The following are the contents of build_trainset.sh:
python3 build_meta4train.py \
    --saving_dir ../results/korean_dataset/ \
    --content_font ../datasets/images/content_all \
    --train_font_dir ../datasets/images/train_all \
    --val_font_dir ../datasets/images/val \
    --seen_unis_file ../meta/train_unis.json \
    --unseen_unis_file ../meta/val_unis.json
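For cross-checking the meta files against the counts in the build output above, a minimal sketch, assuming both JSON files are flat lists of the characters/unicodes to render:

import json

with open("../meta/train_unis.json") as f:
    seen = json.load(f)
with open("../meta/val_unis.json") as f:
    unseen = json.load(f)

print(len(seen), len(unseen))   # should match seen_unicodes: 2000 / unseen_unicodes: 350
print(seen[:5], unseen[:5])     # spot-check that these are the expected Korean characters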