keras-yolo4

What is the loss after training

lanyufei opened this issue 4 years ago · 5 comments

I trained on my own dataset, and the loss seemed high and converged slowly.

lanyufei · Apr 30 '20 13:04

If you are training from scratch without loading pretrained weights, please unfreeze the earlier layers.
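A minimal sketch of what "unfreeze the earlier layers" means in Keras terms. The helper below is an illustration, not code from this repo; it works with any Keras-style model object exposing a `.layers` list whose elements have a boolean `.trainable` attribute.

```python
def unfreeze_all(model):
    """Mark every layer of a Keras-style model trainable.

    Returns the number of layers that were previously frozen.
    Call this before `model.compile(...)` when training from scratch,
    since `trainable` changes only take effect after recompiling.
    """
    changed = 0
    for layer in model.layers:
        if not layer.trainable:
            layer.trainable = True
            changed += 1
    return changed
```

With a real model you would call `unfreeze_all(model)` and then recompile before `fit`.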

Ma-Dan · May 01 '20 09:05

I did load the pretrained weights. The final loss at early stopping was around 30, while on YOLOv3 the loss after training was around 17. Is this normal?

lanyufei · May 01 '20 10:05

I ran into the same problem when training: the loss is very high, even higher than the loss I got with YOLOv3, and it decreases very slowly. What could be the reason?

Augenstern-yzh · May 05 '20 06:05

> I trained with my own data set, and loss felt high and converged slowly

@lanyufei did you have to use a very small batch size when training on your own dataset? I had to use 4 even though I am using an RTX 2080 Ti Founders Edition GPU with 11 GB of VRAM. Doing the same on qqwweee's implementation, which this repo borrows from, I could use 16 when training from scratch. Any idea what's using up all the memory?

robisen1 · May 26 '20 04:05

> @lanyufei did you have to use a very small batch size when training on your own dataset? I had to use 4 even though I am using an RTX 2080 Ti Founders Edition GPU with 11 GB of VRAM. Doing the same on qqwweee's implementation, which this repo borrows from, I could use 16 when training from scratch. Any idea what's using up all the memory?

qqwweee's implementation is YOLOv3, not v4. YOLOv4 is a much bigger network and requires a lot more VRAM to run. In the GitHub repository of the original YOLO (darknet) there is a section that tells you what changes to make, according to your GPU memory, to train it faster.
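For reference, the usual memory-saving knobs in a darknet `.cfg` file are increasing `subdivisions` (smaller mini-batches per forward pass) and lowering the input resolution. A sketch of the relevant `[net]` section lines (exact values depend on your GPU; these are illustrative, not a recommendation):

```ini
[net]
batch=64
# Raise subdivisions (e.g. 16 -> 32 or 64) to cut per-pass VRAM usage;
# the effective batch size of 64 is unchanged, just split into more chunks.
subdivisions=32
# Lowering the network input size (must be a multiple of 32) also
# reduces memory, at some cost in accuracy on small objects.
width=416
height=416
```

In the Keras port there is no `subdivisions` equivalent, which is why reducing the batch size itself (as above, down to 4) is the main lever.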

iliask97 · Jul 19 '20 22:07