Christian Bartz

Results 316 comments of Christian Bartz

@rezha130 I think your problem is [this](https://github.com/Bartzi/see/blob/master/chainer/text_recognition_demo.py#L45) line: you should replace `52` with `72`. Your char_map is different from the one I have been using. This problem could be fixed in...
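To see where the `52`/`72` mismatch comes from, here is a minimal sketch, assuming the char map is a JSON object mapping class-index strings to Unicode code points (the path and exact contents of your char map will differ; the toy map below is hypothetical):

```python
import json

# Toy stand-in for the JSON char map (assumption: it maps class-index
# strings to Unicode code points, e.g. {"0": 9250, "1": 97, ...}).
char_map_json = '{"0": 9250, "1": 97, "2": 98, "3": 99}'
char_map = json.loads(char_map_json)

# The constant hard-coded in text_recognition_demo.py must equal the number
# of entries in *your* char map; with a different alphabet, predicted class
# indices are decoded against the wrong table.
num_classes = len(char_map)
print(num_classes)  # 4 for this toy map; 72 instead of 52 in the issue above
```

Printing `len(char_map)` for your own file tells you which number belongs in the script.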

Did you check these two lines and adjust them to your case?

- [https://github.com/Bartzi/see/blob/master/chainer/text_recognition_demo.py#L142](https://github.com/Bartzi/see/blob/master/chainer/text_recognition_demo.py#L142)
- [https://github.com/Bartzi/see/blob/master/chainer/text_recognition_demo.py#L144](https://github.com/Bartzi/see/blob/master/chainer/text_recognition_demo.py#L144)

Your ground truth file is not necessary for using the demo script, but it looks okay to me. Your problem is that you are using a script that is designed for printing...

If you use `train_text_recognition`, you can use a word-based ground truth file... oops, yes, that is a little different from the other scripts... I'm sorry about that...

> I remove [0] and get this error:
>
> Traceback (most recent call last):
>   File "text_recognition_demo.py", line 181, in
>     word = "".join(map(lambda x: chr(char_map[str(x)]), word))
>   File "text_recognition_ktp.py", line 181, in
>     word =...
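The failing line from the traceback above can be reproduced in isolation. A minimal sketch, using a hypothetical toy char map (your real one is larger):

```python
# Minimal reproduction of the decode step from text_recognition_demo.py
# line 181: each predicted class index is looked up in the char map and
# converted to a character.
char_map = {"0": 9250, "1": 104, "2": 105}  # toy map, not the real one

word_indices = [1, 2]
word = "".join(map(lambda x: chr(char_map[str(x)]), word_indices))
print(word)  # hi

# An index missing from the map raises KeyError -- a typical symptom of a
# char map that does not match the model the predictions came from.
try:
    "".join(map(lambda x: chr(char_map[str(x)]), [1, 2, 99]))
except KeyError as e:
    print("missing class index:", e)
```

If you hit a `KeyError` here, the char map and the trained model disagree about the alphabet.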

The first thing I see is that the predicted bboxes don't look good at all. They should change position after a while; see the text recognition video from [this](https://bartzi.de/documents/attachment/download?hash_value=35314d0f836cc38d8bb64a46663a06e2_7) file. Furthermore,...

Okay:

1. `TextRecFileDataset` is not used for training FSNS data.
2. Here are the first two lines of the text recognition gt_file:
   ```
   23 1
   /data/text_recognition/samples/9999/9999026_]kinkiness_-5_DonegalOne-Regular.jpeg ]kinkiness
   ```
3. here...
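A minimal sketch of reading a ground truth file in the layout quoted above, assuming the first line holds two metadata numbers and every following line is an image path and a label separated by whitespace (this layout is inferred from the quoted example, not from the dataset code):

```python
# Inline stand-in for the gt_file contents quoted above.
gt_text = """23 1
/data/text_recognition/samples/9999/9999026_]kinkiness_-5_DonegalOne-Regular.jpeg ]kinkiness
"""

lines = gt_text.strip().splitlines()
metadata = lines[0].split()  # e.g. ['23', '1']

# The label is the last whitespace-separated token; rsplit keeps paths
# containing ']' or '-' intact as long as they contain no spaces.
samples = [line.rsplit(maxsplit=1) for line in lines[1:]]

print(metadata)
for path, label in samples:
    print(path, "->", label)
```

Parsing your own file this way is a quick check that every line splits into exactly one path and one label.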

Hmm, hard to say without the stack trace. But it basically says that some arrays being concatenated do not have the correct shape. Could be because...
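The error class described above can be illustrated with NumPy directly (the CHW layout and sizes below are assumptions for illustration, not the values from the issue):

```python
import numpy as np

# Concatenating arrays whose non-batch dimensions disagree fails -- which
# is what happens when differently sized images end up in the same batch.
a = np.zeros((1, 3, 150, 600))  # one image, assumed CHW layout
b = np.zeros((1, 3, 150, 600))
print(np.concatenate([a, b]).shape)  # (2, 3, 150, 600)

c = np.zeros((1, 3, 200, 500))  # a differently sized image
try:
    np.concatenate([a, c])
except ValueError as e:
    print("concatenation failed:", e)
```

Checking the shapes of the arrays right before the failing concatenation usually pinpoints the offending sample.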

> Input images size is not fixed in train data set.

That does not work, because the network is not fully convolutional and because it is not possible to create...

@rezha130 Before you resized your images to `600x150`, did you check that they have the same semantics as the images of the FSNS dataset? **This is important!!** Forget about the...
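If the semantics do match, bringing an image to the fixed input size can be sketched with Pillow as follows (the `600x150` size is from the comment above; the width-by-height ordering and the helper name are assumptions):

```python
from PIL import Image

def resize_to_fsns(image: Image.Image, size=(600, 150)) -> Image.Image:
    """Resize to the fixed network input size (width, height assumed).

    Note: this only fixes the shape -- it does not make the content
    FSNS-like, which is the real requirement discussed above.
    """
    return image.resize(size, Image.BILINEAR)

img = Image.new("RGB", (800, 400))
print(resize_to_fsns(img).size)  # (600, 150)
```

Distorting the aspect ratio this way may or may not be acceptable for your data; padding before resizing is a common alternative.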