QuickNATv2
Apply on a different dataset
The segmentation looks great, but on my machine it took 120 seconds after I changed
[Predictions, SegTime] = SegmentVol(Data,70); % Change the 70 based on your GPU RAM size
to
[Predictions, SegTime] = SegmentVol(Data,10); % Change the 70 based on your GPU RAM size
The problem is that when we do resampling we either change the spacing or the size. In your sample case the spacing was 1mm and the size 256. I have an image with 1.3mm spacing and size 130; I don't think it is possible to change this image to 1mm and 256 while keeping its actual physical dimensions.
Sorry if this is a very beginner question, but could you please provide an example/tutorial on how to train the network on a different dataset?
If I got it right, your data is 256x256x130 (1mm x 1mm x 1.3mm). In that case, only use the predictions from the coronal network (if your 130 slices of 256x256 correspond to coronal slices). The view aggregation improves the performance a bit, but the coronal predictions alone are also good. You need to modify the SegmentVol function.
I will prepare an example for the training code. You need to run:
[net, info] = QuickNAT_Train(imdb, [], inpt);
where inpt.expDir = 'Exp01'; % Any Folder Name where you want the model to be saved and
imdb contains the data.
I wrote a short description of how to make an imdb dataset at: https://github.com/abhi4ssj/ReLayNet
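For illustration, here is a minimal sketch of how the pieces could fit together, assuming the MatConvNet-style imdb layout described in the ReLayNet readme (imdb.images.data / labels / set); the exact field names QuickNAT_Train expects may differ, and the variable names are placeholders:

```matlab
% Minimal sketch (assumed MatConvNet-style imdb layout; trainSlices, trainLabels,
% valSlices, valLabels, nTrain and nVal are placeholders for your own data).
imdb = struct();
imdb.images.data   = single(cat(4, trainSlices, valSlices));  % H x W x 1 x N intensity-normalised coronal slices
imdb.images.labels = single(cat(4, trainLabels, valLabels));  % H x W x 1 x N label maps
imdb.images.set    = [ones(1, nTrain), 2*ones(1, nVal)];      % 1 = training, 2 = validation

inpt = struct();
inpt.expDir = 'Exp01';   % any folder name where the model should be saved

[net, info] = QuickNAT_Train(imdb, [], inpt);
```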
Thanks for the info. I tried the coronal idea, but I keep getting this error:
WARNING: error reading MR params
Index exceeds matrix dimensions.
Error in MRIread (line 100)
tr = mr_parms(1);
Error in RunFile (line 17)
DataVol = MRIread(FileName);
I will check the training link.
It seems the problem is with the reader 'MRIread'. You can try any other reader and pass the volume directly as Data (matrix of size 256x256x130) in [Predictions, SegTime] = SegmentVol(Data,10);
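For example, a minimal sketch using MATLAB's built-in NIfTI reader (niftiread, available since R2017b); the filename is a placeholder:

```matlab
% Read the volume with niftiread instead of MRIread and pass it straight to SegmentVol.
Data = single(niftiread('subject.nii.gz'));    % e.g. a 256x256x130 volume; placeholder filename
[Predictions, SegTime] = SegmentVol(Data, 10); % 10 slices per GPU pass
```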
I tried a different reader. With size 256,256,130 and spacing 1,1,1.3 I got an index range error, but it works with 256,256,256 and 1,1,1.3; the result does not look good, though.
This is the modified code:
```matlab
function [Predictions, SegTime] = SegmentVolCor(DataVol,NumFrames)
% The segmentation is done 2D slice-wise. Speed depends on the number of frames
% you can push ('NumFrames'), which is limited by GPU memory. Please try
% different values to optimize this for your GPU. On a Titan Xp (12 GB), 70
% slices were pushed, giving a segmentation time of 20 secs.
addpath('../QuickNAT_Networks')
warning('off', 'all');
% Load the Trained Models
load('../TrainedModels/CoronalNet.mat'); % CoronalNet
fnet = dagnn.DagNN.loadobj(net);
% Prepare the data for deployment in QuickNAT
sz = size(DataVol);
DataSelect = single(reshape(mat2gray(DataVol(:,:,:)),[sz(1), sz(2), 1, sz(3)]));
PredictionsFinal_Cor = [];
% ---- start of segmentation
tic
% 70 slices in one pass restricted by GPU space
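% NOTE: this loop assumes 256 slices along the 4th dimension of DataSelect; for a
% 130-slice volume it will index past the end unless the hard-coded 256s are replaced with sz(3).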
for j = 1:NumFrames:256
    if (j > (256 - NumFrames + 1))
        k = 256;
    else
        k = j + NumFrames - 1;
    end
    fnet.mode = 'test'; fnet.move('gpu');
    % fnet.conserveMemory = 0;
    fnet.eval({'input', gpuArray(DataSelect(:,:,:,j:k))});
    reconstruction = fnet.vars(fnet.getVarIndex('prob')).value;
    reconstruction = gather(reconstruction);
    Predictions1 = squeeze(reconstruction);
    PredictionsFinal_Cor = cat(4, PredictionsFinal_Cor, Predictions1);
    fnet.move('cpu');
end
% Coronal-only predictions (the 0.4 weight from the multi-view aggregation is
% kept, but it does not affect the argmax below)
PredictionsFinal = ( 0.4*PredictionsFinal_Cor );
% Arg Max Stage for dense labelling
[~, Predictions] = max(PredictionsFinal,[],3);
Predictions = squeeze(Predictions);
SegTime = toc;
%---- end of Segmentation
```
BTW, I have to recompile the library every time I restart MATLAB. Is this normal?
Are you overlaying the MRI and segmentation properly? The coronal-axis predictions look ok-ish to me, just misaligned (might be a problem with the header files). If possible, drop me an e-mail with a sample volume and I will try to solve it. I haven't tried with 1.3mm thick slices, as such data was not included in the training. Still, it should perform decently.
I think when you write the result you use default header values, not those of the input image. In my case:
input: size 256,256,256, spacing 1,1,0.66, origin -86.6,133.9,116.7
result: size 256,256,256, spacing 1,1,1, origin 128,-128,128
Here are two samples: the original, and the one with the result described above.
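If the misalignment comes only from the header, one possible fix (a sketch, not part of the QuickNAT code) is to copy the input header when saving the result, e.g. with MATLAB's niftiinfo/niftiwrite; the filenames are placeholders:

```matlab
% Write the segmentation with the spacing/origin of the input volume.
hdr = niftiinfo('input.nii');                                   % header of the original image
niftiwrite(cast(Predictions, hdr.Datatype), 'result.nii', hdr); % reuse input header for the result
```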