Ryusaeba

Results 19 comments of Ryusaeba

Hi @eric612, thanks for the quick response. How do you work around this when using the multi-scale trick with YOLOv2?

@liangshuang1993 Hi, I got the same result. Have you found any solution so far?

Hi @yonghenglh6 Can we use your cpp/hpp/cu files to load [the MobileNet you pasted](https://github.com/shicai/MobileNet-Caffe) as pretrained weights for finetuning? I ask because when we update conv...

I checked the page http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html and saw the following statement: "If we provide the weights argument to the caffe train command, the pretrained weights will be loaded into our model,...
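The rule quoted above is that pretrained parameters are copied into the new net for every layer whose name matches, while renamed layers keep their fresh initialization. A minimal sketch of that matching behavior, using plain dictionaries rather than real Caffe objects (all names here are hypothetical):

```python
# Sketch of Caffe's finetuning rule: when `-weights` is passed,
# parameters are copied for every layer whose name matches the
# pretrained net; layers with new names keep their fresh init.
def load_pretrained(new_params, pretrained_params):
    loaded, skipped = [], []
    for name in new_params:
        if name in pretrained_params:
            new_params[name] = pretrained_params[name]  # copy pretrained weights
            loaded.append(name)
        else:
            skipped.append(name)  # e.g. a renamed final classifier layer
    return loaded, skipped

# Hypothetical example: "fc8" was renamed to "fc8_flickr",
# so only "conv1" is loaded and the classifier is re-learned.
pretrained = {"conv1": [0.1], "fc8": [0.9]}
new_model = {"conv1": [0.0], "fc8_flickr": [0.0]}
loaded, skipped = load_pretrained(new_model, pretrained)
```

This is why finetuning recipes rename the last layer: it is the only one that gets a fresh random initialization.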

@yonghenglh6 Thanks! All checks pass when I run check.py. Then I applied DepthwiseConvolution to the https://github.com/shicai/MobileNet-Caffe inference path; the TOP-1 accuracy is the same, but I get slight...
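For reference, a depthwise convolution applies one filter per input channel with no cross-channel mixing (equivalent to a grouped convolution with groups equal to the channel count), which is why it can replace MobileNet's grouped layers without changing the result. A minimal NumPy sketch with hypothetical shapes (stride 1, no padding):

```python
import numpy as np

def depthwise_conv2d(x, w):
    """Depthwise conv: x is (C, H, W), w is (C, kH, kW).
    Each channel is convolved with its own single filter;
    channels never mix, unlike a standard convolution."""
    C, H, W = x.shape
    _, kH, kW = w.shape
    out = np.zeros((C, H - kH + 1, W - kW + 1))
    for c in range(C):  # each channel handled independently
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = np.sum(x[c, i:i + kH, j:j + kW] * w[c])
    return out

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
w = np.ones((2, 3, 3))
y = depthwise_conv2d(x, w)  # shape (2, 2, 2)
```

Because the arithmetic is identical to a grouped convolution with one channel per group, only the summation order differs between implementations, which can produce tiny floating-point differences without affecting TOP-1 accuracy.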

Hi @moliqian , I thought the ImageData layer was for classification tasks. How do you use ImageDataLayer as SSD's input? Also, do you implement data augmentation inside the layer?...

@agemagician, did you resolve this issue? If so, could you share the details with me?

@yzxyzh You mentioned you are using 8 x A100 40G, but [the README.md](https://github.com/lm-sys/FastChat#fine-tuning-vicuna-7b-with-local-gpus) says you can use the following command to train Vicuna-7B with 4 x A100...

@stas00 Below is the OVERFLOW message I got. After the first 6 steps executed, we can see `[INFO] [logging.py:96:log_dist] [Rank 0] step=10, skipped=6`. Does this mean it is harmless...

Thanks for the explanation. I just realized it is related to "automatic loss scaling with mixed precision". Regarding the loss question, you are right.
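Those skipped steps are the expected behavior of dynamic loss scaling: when fp16 gradients overflow, the optimizer step is dropped and the loss scale is halved; after a run of clean steps the scale is raised again. A minimal sketch of that logic (the constants are illustrative, not DeepSpeed's exact defaults):

```python
class DynamicLossScaler:
    """Sketch of automatic loss scaling for mixed precision.
    On overflow: skip the optimizer step and halve the scale.
    After `window` consecutive good steps: double the scale.
    (Constants are illustrative, not DeepSpeed's real defaults.)"""

    def __init__(self, init_scale=2.0 ** 16, factor=2.0, window=1000):
        self.scale = init_scale
        self.factor = factor
        self.window = window
        self.good_steps = 0
        self.skipped = 0

    def update(self, found_overflow):
        if found_overflow:
            self.scale /= self.factor  # back off; this step is skipped
            self.good_steps = 0
            self.skipped += 1
            return False               # do not apply the optimizer step
        self.good_steps += 1
        if self.good_steps >= self.window:
            self.scale *= self.factor  # try a larger scale again
            self.good_steps = 0
        return True

# Early training often overflows a few times while the scale settles,
# which matches a log line like `step=10, skipped=6`:
scaler = DynamicLossScaler()
for overflow in [True] * 6 + [False] * 4:  # 6 overflows, then stable
    scaler.update(overflow)
```

Once the scale drops low enough that gradients fit in fp16 range, the overflow messages stop, so a handful of skipped steps at the start of training is harmless.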