Jonas Dauscher
> Would be nice if you could add some examples for fine-tuning, for example with any pretrained BERT as decoder!? :) Do we also have a chance to export...
> Hello, I have the same problem with handwritten text using `google/vit-base-patch16-224-in21k` as encoder and `indobenchmark/indobert-base-p1` as decoder > > All the results didn't predict well and gave some extra...
> Hi, > > Yes, that's possible. What you can do is initialize the weights of the encoder with those of ViT, and the weights of the decoder with those...
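For anyone looking for the concrete code, here is a minimal sketch of that initialization, assuming the `VisionEncoderDecoderModel` API from Transformers; the checkpoint names are just the ones mentioned in this thread:

```python
from transformers import VisionEncoderDecoderModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

# encoder weights come from ViT, decoder weights from BERT;
# the cross-attention layers are randomly initialized, so the model
# still needs fine-tuning before it predicts anything useful
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "dbmdz/bert-base-german-cased"
)

# required for generation and loss computation
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```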
> > `tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")` > > Yes, you need to use that in order to prepare the labels for the model (as the labels are the `input_ids` of the...
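As a sketch of what that label preparation can look like (the example sentence and `max_length` here are arbitrary choices, not from the thread):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")

# the labels are simply the input_ids of the target text
labels = tokenizer(
    "ein handgeschriebener Satz",   # hypothetical target transcription
    padding="max_length",
    max_length=64,                  # arbitrary length for this sketch
).input_ids

# pad tokens should not contribute to the loss, so replace them with -100
labels = [tok if tok != tokenizer.pad_token_id else -100 for tok in labels]
```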
@gvlokesh, yes, the training improved quite well. Which training data are you planning to use? I struggled a lot with finding / generating a German handwritten dataset. If you have...
I guess one simple method to reduce memory usage is to reduce the batch size... But if you have any ideas on how to predict the memory usage before training, please...
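One sketch of that idea, assuming `Seq2SeqTrainingArguments` from Transformers: lower the per-device batch size and compensate with gradient accumulation, so the effective batch size stays the same while peak memory goes down (the numbers below are illustrative, not tuned):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./checkpoints",
    per_device_train_batch_size=2,   # smaller per-step batch -> less memory
    gradient_accumulation_steps=8,   # 2 * 8 = effective batch size of 16
    fp16=True,                       # half precision further cuts activation memory
    predict_with_generate=True,      # needed for generation-based eval metrics
)
```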
> Can you please check to see why my entered moves are incorrect and what may be good moves? Thanks! > > ``` > | 12 | 13 | 14...