
Confused about the data folder?

Open mmliu202210 opened this issue 1 year ago • 3 comments

Hello, I would like to ask about the labels in the data folder. The paper states that only 20 labels are needed, so why does the code require 150 labels to be provided? Also, what should go in the skeleton folder? Looking forward to hearing from you!

mmliu202210 avatar Jun 20 '24 10:06 mmliu202210

  1. As you can see, we only use the first `labeled_bs` labels of each batch to compute the supervised loss during training (see the sketch after this list): https://github.com/Allenem/SSL4DSA/blob/7d529fd14e765cec4e12f2eedc3391c3acc09583/code/train_semisupervised_CNN_Transformer_PLCL.py#L328-L346
  2. The skeleton folder was designed for a different task, coronary artery centerline extraction. You can ignore it.
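
For context, here is a minimal sketch of that idea. The name `labeled_bs` follows the linked training script, but the shapes, class count, and loss term are illustrative assumptions, not the exact code from the repo:

```python
import torch
import torch.nn.functional as F

# Hypothetical batch: 4 images, of which only the first `labeled_bs` have ground-truth masks
batch_size, labeled_bs, num_classes = 4, 2, 2
outputs = torch.randn(batch_size, num_classes, 256, 256, requires_grad=True)  # model predictions
label_batch = torch.randint(0, num_classes, (batch_size, 256, 256))           # ground-truth masks

# The supervised loss is computed only on the labeled slice of the batch
loss_ce = F.cross_entropy(outputs[:labeled_bs], label_batch[:labeled_bs])

# The remaining (unlabeled) samples only enter the unsupervised/consistency terms
print(loss_ce.item())
```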

I hope this answers your question.

Allenem avatar Jun 20 '24 11:06 Allenem

It seems that index 1 is out of bounds along dimension 0, since the error "index 1 is out of bounds for axis 0 with size 1" occurs during loss.backward() and the exact source cannot be identified from the message alone. I suggest you check the shapes of the input images, the input labels, each loss, and the images and labels used to compute each loss. You could print their shapes one by one, and at the same time look for the statement that uses index 1 on axis 0.
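
For example, a small shape-printing helper like the one below could help. The variable names here are placeholders mirroring the training loop, not the repo's actual code; the "axis 0" wording in the error suggests the failing index comes from a NumPy array:

```python
import numpy as np
import torch

def debug_shapes(**arrays):
    """Print the shape of each named tensor/array to help locate indexing errors."""
    for name, a in arrays.items():
        print(f'{name}: {tuple(a.shape)}')

# Hypothetical variables mirroring the training loop
volume_batch = torch.randn(1, 1, 256, 256)          # note: axis 0 has size 1
label_batch = torch.randint(0, 2, (1, 256, 256))
debug_shapes(volume_batch=volume_batch, label_batch=label_batch)

# The reported message matches NumPy-style out-of-bounds indexing, e.g.:
mask = label_batch.numpy()
try:
    _ = mask[1]          # axis 0 only has size 1, so index 1 is invalid
except IndexError as e:
    print(e)             # index 1 is out of bounds for axis 0 with size 1
```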

Allenem avatar Jun 21 '24 05:06 Allenem

Thank you very much for your answer! Have you encountered this problem? How do I fix it?

Traceback (most recent call last):
  File "train_semisupervised_CNN_Transformer_PLCL.py", line 559, in
    loss.backward()  # backpropagation, compute gradients
  File "/mnt/99247d91-0f6b-7e41-b405-f664d2eed5ef/students/lm/anaconda3/envs/SSL4DSA/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/mnt/99247d91-0f6b-7e41-b405-f664d2eed5ef/students/lm/anaconda3/envs/SSL4DSA/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
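
For what it's worth, this tiny standalone snippet (not from the repo, just a guess at the cause) raises the same RuntimeError by calling backward() twice on one graph, so perhaps my training loop is backpropagating through the same graph more than once:

```python
import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()

loss.backward()   # first backward pass frees the saved intermediate values
loss.backward()   # RuntimeError: Trying to backward through the graph a second time ...
```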

mmliu202210 avatar Jun 21 '24 08:06 mmliu202210