Nils Wenzlitschke

Results: 6 comments by Nils Wenzlitschke

- We have already tried `bert-base-german-cased` instead of `distilbert-base-uncased`, with the same results.
- The label_dictionary looks fine, consisting of the relations we have annotated, but printing one sentence from `corpus.train[0]`...

The sentences are now printed properly! However, the results are still the same as in the training log.

Hey @dobbersc , unfortunately adjusting the entity_pair_filters to all possible combinations did not improve the situation either; the loss remains at 0 from the very beginning, as before....
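For reference, "all possible combinations" of entity pair filters can be generated from the set of entity types with a Cartesian product. This is a minimal sketch: the label set below is an assumption (the thread does not list the actual annotated types), and the resulting set would be passed wherever the model expects its entity pair filters.

```python
# Hypothetical sketch: enumerate every (head, tail) entity-type pair.
# The entity label set is assumed, not taken from the thread.
from itertools import product

entity_labels = ["PER", "LOC", "ORG"]  # assumed labels for illustration
entity_pair_filters = set(product(entity_labels, repeat=2))

print(sorted(entity_pair_filters))  # 9 directed pairs for 3 labels
```

With `repeat=2` the product includes both directions (e.g. `("PER", "LOC")` and `("LOC", "PER")`) as well as same-type pairs, which is what "all possible combinations" amounts to here.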

OK, update: now that I have renamed all NER tags in our CoNLL file, for example from PER to I-PER and from LOC to I-LOC, I no longer directly get a loss...
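The renaming described above (bare tags to the BIO "I-" form) can be scripted. This is a hedged sketch under assumed file conventions: whitespace-separated columns with the NER tag last, blank lines separating sentences, and `O` plus already-prefixed `B-`/`I-` tags left untouched.

```python
# Sketch: convert bare NER tags (e.g. "PER") to BIO-style "I-" tags.
# Assumed CoNLL layout: token ... tag, tag in the last column.
def to_bio(line: str) -> str:
    parts = line.split()
    if len(parts) < 2:
        return line  # blank separator line between sentences, keep as-is
    tag = parts[-1]
    if tag != "O" and not tag.startswith(("B-", "I-")):
        parts[-1] = "I-" + tag
    return " ".join(parts)

sample = ["Anna PER", "wohnt O", "in O", "Berlin LOC"]
print([to_bio(line) for line in sample])
```

Applied line by line to the file, this turns `Anna PER` into `Anna I-PER` while leaving `O` tags and blank lines unchanged.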

I am facing the same problem. Has anyone found a solution?

Could the error be in the CTCLabelConverter, since I use Attn instead of CTC for prediction and accordingly use the AttnLabelConverter from the deep-text-recognition-benchmark repository in my training? I looked...
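The CTC and attention converters build incompatible index spaces, which is why mixing them at prediction time can break decoding. This toy sketch is not the repository's actual classes, only a simplified illustration of the difference: a CTC-style converter reserves index 0 for the blank token, while an attention-style converter frames the sequence with [GO]/[s] tokens instead.

```python
# Simplified toy converters (NOT the deep-text-recognition-benchmark code),
# illustrating why CTC-style and Attn-style indices are incompatible.
class ToyCTCConverter:
    """CTC-style: index 0 is reserved for the blank token."""
    def __init__(self, charset: str):
        self.chars = ["[blank]"] + list(charset)
        self.index = {c: i for i, c in enumerate(self.chars)}

    def encode(self, text: str) -> list[int]:
        return [self.index[c] for c in text]


class ToyAttnConverter:
    """Attention-style: [GO] starts and [s] ends each target sequence."""
    def __init__(self, charset: str):
        self.chars = ["[GO]", "[s]"] + list(charset)
        self.index = {c: i for i, c in enumerate(self.chars)}

    def encode(self, text: str) -> list[int]:
        return [self.index["[GO]"]] + [self.index[c] for c in text] + [self.index["[s]"]]


print(ToyCTCConverter("ab").encode("ab"))   # → [1, 2]
print(ToyAttnConverter("ab").encode("ab"))  # → [0, 2, 3, 1]
```

Because the same character maps to different indices under each scheme, decoding Attn-trained outputs with a CTC converter (or vice versa) yields garbage, so the converter must match the prediction head used in training.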