about the training implementation

Open chenypic opened this issue 6 years ago • 17 comments

Great work. Thanks for your code. Do you have a plan to publish the training implementation? I really want to follow your work.

chenypic avatar May 16 '18 13:05 chenypic

ConvCRFs can be trained using PyTorch. Training is straightforward and can be done like any other neural network: iterate over the training data, apply a softmax cross-entropy loss and use the PyTorch autograd package to backprop.

I strongly recommend that you implement your own pipeline. Having a good understanding of your training process is quite crucial in deep learning.

I am considering making my pipeline public, however the code is currently quite messy, undocumented and will not work out of the box. I think implementing your own pipeline by following some of the PyTorch tutorials is much more rewarding and easier than trying to make mine work.
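
For concreteness, here is a minimal sketch of such a loop; the one-layer model and random tensors are dummy stand-ins for a real network and DataLoader, not code from this repo:

```python
import torch
import torch.nn as nn

# Dummy stand-ins: any module producing per-class logits works the same way.
model = nn.Conv2d(3, 2, kernel_size=1)        # 3-channel input, 2 classes
criterion = nn.CrossEntropyLoss()             # softmax cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(4, 3, 64, 64)            # fake batch instead of a DataLoader
labels = torch.randint(0, 2, (4, 64, 64))     # per-pixel class indices

for step in range(10):
    optimizer.zero_grad()
    logits = model(images)                    # (N, C, H, W) raw scores
    loss = criterion(logits, labels)          # softmax + NLL in one op
    loss.backward()                           # autograd handles the backprop
    optimizer.step()
```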

Edit: I deleted part of my earlier response to increase my overall niceness. You can find the full response in the changelog.

MarvinTeichmann avatar May 16 '18 14:05 MarvinTeichmann

Thanks for your detailed response. I appreciate it, and I agree with you. I will implement my own pipeline according to your paper and my task.

chenypic avatar May 16 '18 15:05 chenypic

Hi Marvin, I wrote a script to train the ConvCRF using NLL loss. I treat the airplane image as a two-class segmentation problem. At the beginning the training went well and the segmentation was improving, but if I kept training it would not converge: it reached the minimum loss value, then the loss started to increase and the segmentation became worse. In the end the result looked like the noisy unary. Could you give me some suggestions on what the problem could be? Thank you very much!

Hai

SHMCU avatar Aug 27 '18 03:08 SHMCU

Hi Hai,

may I ask why you used the NLL loss rather than the cross-entropy loss for training?

Thanks

prio1988 avatar Aug 28 '18 20:08 prio1988

Hi prio1988,

I think NLL loss is effectively multiclass cross-entropy, right? It should also work when I set the model to just two classes, that is background and foreground. Right?

hsu-z2 avatar Aug 28 '18 20:08 hsu-z2

NLL loss assumes that you have already applied a LogSoftmax layer on top of your network. The multiclass cross-entropy loss is torch.nn.CrossEntropyLoss; I think you should probably use the latter. I am also still wondering why a log-softmax is applied to the unaries rather than just a softmax.
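
To illustrate the relationship, a self-contained check (not code from this repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 2, 8, 8)              # (N, C, H, W) raw unaries
target = torch.randint(0, 2, (4, 8, 8))       # per-pixel class indices

# NLLLoss expects log-probabilities, i.e. a log-softmax on top:
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)

# CrossEntropyLoss fuses the log-softmax and NLL steps:
ce = nn.CrossEntropyLoss()(logits, target)

print(torch.allclose(nll, ce))                # True
```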

prio1988 avatar Aug 28 '18 21:08 prio1988

Oh, thank you for the very good suggestion! I will dig into the question of LogSoftmax + NLL versus softmax + CrossEntropyLoss. I read somewhere that log-softmax is numerically more stable than softmax.
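
A quick illustration of that stability point:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[1000.0, 0.0]])             # extreme logits

# log(softmax(x)) underflows: softmax gives [1, 0], and log(0) = -inf.
print(torch.log(F.softmax(x, dim=1)))         # tensor([[0., -inf]])

# log_softmax computes the same quantity without the underflow:
print(F.log_softmax(x, dim=1))                # tensor([[0., -1000.]])
```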

SHMCU avatar Aug 28 '18 21:08 SHMCU

If you use CrossEntropyLoss you can also skip the softmax; it is applied internally by the loss.

prio1988 avatar Aug 28 '18 21:08 prio1988

OK, that would be much better then, since the implementation of CrossEntropyLoss already handles the numerical stability issues. Thank you!

SHMCU avatar Aug 28 '18 21:08 SHMCU

I tried training it, but I get the following error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

HqWei avatar Jan 16 '19 13:01 HqWei

Has anyone else tried training it?

HqWei avatar Jan 16 '19 13:01 HqWei

I have the same problem. Have you solved it?
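
For anyone hitting this: one common cause (not necessarily yours) is that the loss is computed without a graph attached, e.g. inside a torch.no_grad() block or after a .detach(). A minimal reproduction and fix, with dummy stand-in model and data:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 2, kernel_size=1)
x = torch.randn(1, 3, 8, 8)
y = torch.randint(0, 2, (1, 8, 8))
criterion = nn.CrossEntropyLoss()

# Broken: no graph is recorded under no_grad(), so there is nothing
# to differentiate and backward() raises the RuntimeError above.
with torch.no_grad():
    loss = criterion(model(x), y)
# loss.backward()  # RuntimeError: element 0 of tensors does not require grad

# Working: leave grad tracking on (the default) and avoid
# .detach()/.data between the forward pass and the loss.
loss = criterion(model(x), y)
loss.backward()
```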

qiqihaer avatar Mar 26 '19 07:03 qiqihaer

@HqWei @qiqihaer Could you share a portion of your code for training convCRF?

pvthuy avatar Jun 02 '20 09:06 pvthuy

There is a paper called PAC-CRF; you may find a trainable ConvCRF implementation there.

SHMCU avatar Jun 02 '20 19:06 SHMCU

@SHMCU It's very helpful. Thank you very much!

pvthuy avatar Jun 03 '20 01:06 pvthuy

Hi, did you solve the in-place operation problem? Should we set the CRF iteration steps to 1 to avoid this error? I tried it on PAC-CRF and the same problem occurred.
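
Not sure about your exact setup, but the usual fix for this class of error is to make the mean-field update out-of-place rather than reducing the iterations to 1. A toy illustration:

```python
import torch

w = torch.randn(3, requires_grad=True)
q = torch.softmax(w, dim=0)

for _ in range(5):
    # q += 0.1 * q          # in-place: softmax's output is needed for its
    #                       # backward pass, so this breaks autograd
    q = q + 0.1 * q          # out-of-place keeps the graph intact

q.sum().backward()           # works for any number of iterations
```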

GITSHOHOKU avatar Nov 22 '21 04:11 GITSHOHOKU

Hi, I have a question about the training step with this wonderful CRF implementation. Should we set the CRF iteration steps to 1 during training, and set it larger than 1 for inference?
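
For what it's worth, gradients flow through every unrolled mean-field step, so more than one iteration should be trainable. Here is a toy stand-in (the num_iter knob is hypothetical, not this repo's config name):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMeanField(nn.Module):
    """Toy stand-in for a ConvCRF-style module with unrolled iterations."""
    def __init__(self, num_iter=5):
        super().__init__()
        self.num_iter = num_iter
        self.message = nn.Conv2d(2, 2, kernel_size=3, padding=1)

    def forward(self, unary):
        q = F.softmax(unary, dim=1)
        for _ in range(self.num_iter):
            q = F.softmax(unary + self.message(q), dim=1)  # out-of-place update
        return q

crf = ToyMeanField(num_iter=5)                 # same value can be used at test time
unary = torch.randn(1, 2, 16, 16, requires_grad=True)
crf(unary).sum().backward()                    # gradients reach all iterations
```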

GITSHOHOKU avatar Nov 22 '21 04:11 GITSHOHOKU