
[not an issue] Is there any tutorial for training an MCGSM?

Open • anilrgukt opened this issue Jun 29 '16 • 4 comments

Also, a general question: when using code/experiments/train.py, what is the loss that it prints? Is it the average log-likelihood? If so, the log-likelihood should increase during training, am I right?

For me, the log-likelihood score continually decreases as the epochs progress.

thanks, Anil

anilrgukt commented Jun 29 '16 10:06

Hi Anil,

The training script prints the negative log-likelihood, so if it decreases, that's good.

I don't have a tutorial for training an MCGSM, only this example: https://github.com/lucastheis/cmt#python-example

Lucas

lucastheis commented Jun 29 '16 15:06
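For anyone looking for a starting point, below is a minimal training sketch loosely following the linked cmt README example. It uses random arrays as stand-in data, and the keyword names (num_components, num_scales, num_features, the parameters dict) as well as the WhiteningPreconditioner usage are assumptions that should be checked against the README.

```python
# Minimal MCGSM training sketch, loosely following the cmt README example.
# The keyword names and the preconditioner calls below are assumptions about
# the cmt API and should be checked against the README before use.
from numpy.random import randn

from cmt.models import MCGSM
from cmt.transforms import WhiteningPreconditioner

# Stand-in data: 12-dimensional "neighborhood" inputs, 1-dimensional pixel outputs.
inputs = randn(12, 10000)
outputs = randn(1, 10000)

# Whiten inputs and outputs before fitting (as in the README example).
pre = WhiteningPreconditioner(inputs, outputs)

# Conditional model predicting a pixel from its causal neighborhood.
model = MCGSM(
    dim_in=inputs.shape[0],
    dim_out=outputs.shape[0],
    num_components=8,   # assumed keyword names; see the README for the real ones
    num_scales=6,
    num_features=40)

model.initialize(*pre(inputs, outputs))
model.train(*pre(inputs, outputs), parameters={'max_iter': 1000})

# Average log-likelihood in nats per sample; training should make this increase,
# i.e. the printed negative log-likelihood should decrease.
loglik = model.loglikelihood(*pre(inputs, outputs)) + pre.logjacobian(inputs, outputs)
print(loglik.mean())
```

With real image data the inputs would be causal pixel neighborhoods rather than random arrays; the README shows the full pipeline.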

Dear Lucas,

While using experiment/train.py with validation, loss_valid comes out negative, whereas the training loss and the loss calculated on the test data using experiment/evaluate.py are both positive. Is it normal to get negative values for loss_valid even though it is calculated as the negative log-likelihood?

Thanks and regards, Akshat

adaveiitm commented Jul 02 '16 13:07

How different are the numbers?

lucastheis commented Jul 05 '16 23:07

We are working with the BSDS300 dataset with a batch size of 64 and 6 iterations per mini-batch. During training, the negative log-likelihood starts at around 1.95 and eventually decreases to around 0.95, as shown in the following figure. However, the validation loss is 1.57 after initialization, and then we get -3.448, -3.508, -3.508, -3.513, -3.514 in the subsequent epochs. The score evaluated on the test data using evaluate.py is also positive (around 3.07).

[figure: training loss decreasing from about 1.95 to about 0.95 over epochs]

adaveiitm commented Jul 08 '16 05:07
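No resolution is recorded in this thread, but when comparing such numbers it is worth checking the sign and unit convention of each script (log-likelihood vs. negative log-likelihood, nats vs. bits per pixel): for continuous data the model density can exceed 1, so a negative log-likelihood can legitimately be negative. The snippet below is purely illustrative arithmetic and is not taken from ride's train.py or evaluate.py.

```python
# Purely illustrative arithmetic (not taken from ride's train.py or evaluate.py):
# the same underlying likelihood can appear with different signs and magnitudes
# depending on whether a script reports the log-likelihood or its negative, and
# whether it uses nats or bits per pixel.
from math import log

avg_loglik_nats = 3.5                        # hypothetical average log-likelihood per pixel [nats]

neg_loglik_nats = -avg_loglik_nats           # reported as a "loss": -3.5
avg_loglik_bits = avg_loglik_nats / log(2)   # same quantity in bits per pixel: about 5.05
neg_loglik_bits = -avg_loglik_bits           # about -5.05

print(neg_loglik_nats, avg_loglik_bits, neg_loglik_bits)
```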