locating-objects-without-bboxes

Training on the Mall dataset

hustcc19860606 opened this issue on Jul 10 '20 · 6 comments

I'm trying to replicate your training process using the “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” checkpoint. I find that there is still training space after loading the weights, and the loss on the validation set is very large. I'm confused about this; can you give me some suggestions?

[screenshot: QQ图片20200710151642]

hustcc19860606 · Jul 10 '20

What is "training space"?

javiribera · Jul 15 '20

The “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” file contains one set of trained network weights. When I resume training from it as a checkpoint, I find that run_avg can still be reduced by further training. Is this expected? Moreover, avg_val_loss on the validation set is much larger than run_avg on the training set; is that expected? In my opinion, when resuming from well-trained network weights, run_avg should have no room left to decrease. Can you give me some suggestions? @javiribera
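For reference, a minimal sketch of loading the checkpoint and measuring the validation loss before resuming training, which separates "the checkpoint loaded correctly" from "the training loss keeps decreasing". This is not the repository's actual CLI: the checkpoint key, `build_model()`, `val_loader`, and `loss_fn` are hypothetical placeholders.

```python
# Minimal sketch, not the repository's actual training script.
# Assumptions: the .ckpt is a torch.save() dict with a "model" state_dict;
# build_model(), val_loader, and loss_fn stand in for the real network,
# validation data loader, and loss function.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = build_model().to(device)  # hypothetical constructor
ckpt = torch.load("mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt", map_location=device)
model.load_state_dict(ckpt["model"])  # the key name is an assumption

# Evaluate on the validation split *before* resuming training. If this value is
# far above the training run_avg, that gap is the train/validation gap, not a
# sign that the checkpoint failed to load.
model.eval()
total_loss, n_batches = 0.0, 0
with torch.no_grad():
    for images, targets in val_loader:
        outputs = model(images.to(device))
        total_loss += loss_fn(outputs, targets).item()
        n_batches += 1
print(f"avg_val_loss = {total_loss / n_batches:.4f}")
```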

hustcc19860606 · Jul 20 '20

> The “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” file contains one set of trained network weights. When I resume training from it as a checkpoint, I find that run_avg can still be reduced by further training. Is this expected? Moreover, avg_val_loss on the validation set is much larger than run_avg on the training set; is that expected?

Yes, this is called overfitting.

> In my opinion, when resuming from well-trained network weights, run_avg should have no room left to decrease. Can you give me some suggestions? @javiribera

I disagree. You can easily drive the training loss to 0 just by training a huge model for a very long time. What matters is the validation error. You are free to find another approach that achieves a lower training and/or validation loss.

I don't see any of this as a problem of the method, the training or the code.
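To illustrate the point that the validation error, not the training loss, should decide when to stop, here is a generic early-stopping / best-checkpoint pattern. It is only a sketch: `train_one_epoch()`, `validate()`, `model`, the loaders, and the optimizer are hypothetical placeholders, not functions from this repository.

```python
# Generic early-stopping sketch: keep the weights with the lowest validation
# loss and stop once validation stops improving, no matter how low the
# training loss keeps going. train_one_epoch() and validate() are placeholders.
import copy

max_epochs, patience = 10_000, 50
best_val, best_state, bad_epochs = float("inf"), None, 0

for epoch in range(max_epochs):
    train_loss = train_one_epoch(model, train_loader, optimizer)  # run_avg
    val_loss = validate(model, val_loader)                        # avg_val_loss

    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        best_state = copy.deepcopy(model.state_dict())
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # training loss may still be falling here,
            break                   # but the model has started to overfit

if best_state is not None:
    model.load_state_dict(best_state)
```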

javiribera · Aug 10 '20

Thanks for your reply. I need more suggestions to repeat the training process. I see that you used 9749 epochs to obtain the “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” weights, and that the lowest_mahd recorded in that checkpoint is 4.32. Using your weights, we get judge.mahd = 91.57 on the validation set (seq_001601.jpg to seq_001800.jpg) and judge.mahd = 7.67 on the training set (seq_000001.jpg to seq_001600.jpg). Is that right? We have trained for about 2000 epochs, which took several days on an NVIDIA P40, and only the result shown below was achieved. Do we need more training epochs? Can you give me some suggestions? @javiribera

[screenshot: seq_001819]
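For reference, the split described above (seq_000001.jpg to seq_001600.jpg for training, seq_001601.jpg to seq_001800.jpg for validation) can be reproduced along these lines; the directory path is a hypothetical placeholder and not the repository's actual layout.

```python
# Sketch of the Mall train/validation split described above. The directory
# "mall_dataset/frames" is a placeholder; adjust it to wherever the frames live.
from pathlib import Path

frames = sorted(Path("mall_dataset/frames").glob("seq_*.jpg"))

def frame_id(path: Path) -> int:
    return int(path.stem.split("_")[1])  # "seq_001601" -> 1601

train_frames = [p for p in frames if frame_id(p) <= 1600]
val_frames = [p for p in frames if 1601 <= frame_id(p) <= 1800]

print(len(train_frames), "training frames,", len(val_frames), "validation frames")
```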

hustcc19860606 · Aug 10 '20

Hi~ I am now looking for the Mall dataset used in this paper. As mentioned in another issue, access to the Mall dataset website linked in this repository is forbidden. Do you still have the Mall dataset you trained and experimented with? If so, could you share it?

csm-kr · Jan 28 '21

> Thanks for your reply. I need more suggestions to repeat the training process. I see that you used 9749 epochs to obtain the “mall,lambdaa=1,BS=32,Adam,LR1e-4.ckpt” weights, and that the lowest_mahd recorded in that checkpoint is 4.32. Using your weights, we get judge.mahd = 91.57 on the validation set (seq_001601.jpg to seq_001800.jpg) and judge.mahd = 7.67 on the training set (seq_000001.jpg to seq_001600.jpg). Is that right? We have trained for about 2000 epochs, which took several days on an NVIDIA P40, and only the result shown below was achieved. Do we need more training epochs? Can you give me some suggestions? @javiribera

Did you solve your problem? How many epochs did you train in the end?

Thanks!

Frank-Dz · Mar 17 '21