mrtoct-tensorflow

Running train_pixtopix.py

drcdr opened this issue 6 years ago · 1 comment

I have been trying to get started with mrtoct. After some slight modifications, I have had initial success with the data download, extraction, conversion, and coregistration, as well as with running train_unet.py as described in readme.md. After about 40 hours I got the following (I don't know if this is good or not; I haven't run prediction yet): loss = 0.003564108, step = 200100 (71.873 sec)

However, now I'm not sure how to run train_pixtopix.py. What is the maskings_path, and how is the data for it generated? Can you please provide basic instructions for this?

drcdr avatar Aug 15 '19 15:08 drcdr

> I have been trying to get started with mrtoct. After some slight modifications, I have had initial success with the data download, extraction, conversion, and coregistration, as well as with running train_unet.py as described in readme.md. After about 40 hours I got the following (I don't know if this is good or not; I haven't run prediction yet): loss = 0.003564108, step = 200100 (71.873 sec)

I don't remember the training loss; however, you can check the evaluation notebooks in the repository or the arXiv article. I believe both show results for training and testing data with different metrics.

But also note that the loss itself may not be very meaningful depending on what you are trying to achieve.

> However, now I'm not sure how to run train_pixtopix.py. What is the maskings_path, and how is the data for it generated? Can you please provide basic instructions for this?

The idea behind masking: Segment the brain matter (e.g. with SPM) and create a binary mask (1 -> brain tissue, 0 -> otherwise). Then use the mask to fine-tune the network on soft tissue. If I remember correctly, we used element-wise multiplication to discard any deviations outside the brain tissue.
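For example, something along these lines could be used to build such a mask from SPM's tissue probability maps (the file names and the 0.5 threshold here are just placeholders to illustrate the idea, not what we actually used):

```python
# Sketch: build a binary brain mask from SPM tissue probability maps
# (c1 = grey matter, c2 = white matter). Paths and threshold are
# placeholders.
import nibabel as nib
import numpy as np

gm = nib.load('c1subject.nii')  # grey-matter probability map from SPM
wm = nib.load('c2subject.nii')  # white-matter probability map from SPM

prob = gm.get_fdata() + wm.get_fdata()
mask = (prob > 0.5).astype(np.uint8)  # 1 -> brain tissue, 0 -> otherwise

nib.save(nib.Nifti1Image(mask, gm.affine), 'mask_subject.nii')
```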

Why did we do this? The CT contrast is very low within the brain tissue but high close to the skull. Therefore, a network using MSE (L2) or MAE (L1) as cost function will primarily focus on the high-contrast regions near the skull. By fine-tuning with the mask we hoped to increase detail in the brain tissue. Unfortunately, the results were not really promising.
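To make the fine-tuning idea concrete, a mask-weighted L1 loss could look roughly like this (only a sketch, not the exact loss in train_pixtopix.py; `mask` is the binary brain mask broadcast to the image shape):

```python
import tensorflow as tf

def masked_l1_loss(target_ct, generated_ct, mask):
    # Element-wise multiplication zeroes out errors outside the brain,
    # so the optimizer only sees deviations on soft tissue.
    masked_error = mask * tf.abs(target_ct - generated_ct)
    # Normalize by the number of voxels inside the mask so the loss
    # scale does not depend on the mask size.
    return tf.reduce_sum(masked_error) / (tf.reduce_sum(mask) + 1e-8)
```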

To cut a long story short: You probably want to remove the maskings-related code in train_pixtopix.py.

bodokaiser avatar Aug 15 '19 15:08 bodokaiser