daeing

Results: 8 comments by daeing

> The input image can be any size, but it has to be resized to 224x224 before feeding it to the pre-trained model, since the network has a fixed input...
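The fixed-input requirement above can be sketched with a minimal, dependency-free nearest-neighbour resize. The 224x224 target comes from the comment; the function itself is illustrative, not the repo's actual preprocessing (real pipelines would use a library resizer with interpolation):

```python
def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbour resize of a 2D pixel grid to a fixed size."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

# Any input size is accepted; the output is always 224x224.
small = [[(r + c) % 256 for c in range(100)] for r in range(32)]
resized = resize_nearest(small)
print(len(resized), len(resized[0]))  # 224 224
```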

> There will be distortion when the image is resized to 224x224. Please note that the dataset images are not all 32x100. They are of various sizes and orientations. So,...

> > > There will be distortion when the image is resized to 224x224. Please note that the dataset images are not all 32x100. They are of various sizes and...

> 1. If you train without LightSeq and fine-tune using LightSeq, use pre-trained parameters to initialize the LightSeq layers.
> 2. If you train with LightSeq and fine-tune using LightSeq,...
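LightSeq's real export scripts map checkpoint tensors into its fused layer layout; format-agnostically, "use pre-trained parameters to initialize the LightSeq layers" boils down to a name-mapped copy with shape checks. The parameter names and helper below are hypothetical, and tensors are modelled as plain lists for the sketch:

```python
def init_from_pretrained(new_state, pretrained_state, name_map):
    """Copy pretrained tensors into a new model's state dict.

    name_map: {new_param_name: pretrained_param_name}. Parameters are
    modelled as flat lists here; real code would use framework tensors
    and check full shapes, not just lengths.
    """
    for new_name, old_name in name_map.items():
        old = pretrained_state[old_name]
        if len(old) != len(new_state[new_name]):
            raise ValueError(f"shape mismatch for {new_name}")
        new_state[new_name] = list(old)  # copy, don't alias
    return new_state

# Hypothetical names, for illustration only.
pretrained = {"decoder.weight": [0.1, 0.2, 0.3]}
fresh = {"ls_decoder.weight": [0.0, 0.0, 0.0]}
init_from_pretrained(fresh, pretrained, {"ls_decoder.weight": "decoder.weight"})
print(fresh["ls_decoder.weight"])  # [0.1, 0.2, 0.3]
```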

> @daeing You can check the huggingface Bert example. Sorry, I have a pretrained PyTorch transformer-decoder layer. I want to use LightSeq to accelerate inference speed, but I don't know...

> You can check the inference example and modify the export script for your pretrained model. OK, many thanks.

> 8.6 OK, many thanks. Can the 3090 Ti use FasterTransformer?
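The "8.6" above is the CUDA compute capability of the RTX 3090 Ti (Ampere, sm_86). Whether a given FasterTransformer build runs on it depends on which architectures the build was compiled for; the set below is an assumption for illustration only (a real build's list comes from its CMake SM flags, not from this sketch):

```python
# Assumed compiled-architecture list, for illustration only.
COMPILED_SM = {70, 75, 80, 86}

def gpu_supported(major, minor, compiled_sm=COMPILED_SM):
    """True if the device's compute capability matches a compiled arch."""
    return major * 10 + minor in compiled_sm

print(gpu_supported(8, 6))  # RTX 3090 Ti reports capability 8.6
```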

> Please give more details. Which model are you training? Which training data are you using? What is the batch size? Did you change the learning rate?
>
> Anyway,...