cudnn.torch
cudnn.convert() in combination with nngraph
I have an issue with cudnn.convert() in combination with nngraph. The following example produces an error indicating that a cudnn module is still present in the graph after calling cudnn.convert():
```lua
require('nn')
require('nngraph')
require('cunn')
require('cudnn')

local backend = cudnn
local input = nn.Identity()()
local output = backend.SpatialConvolution(1, 1, 3, 3)(input)
local net = nn.gModule({input}, {output})

-- forward on the GPU works fine
local x = torch.zeros(1, 1, 200, 300):cuda()
net = net:cuda()
net:forward(x)

-- convert back to nn and cast to float; the second forward fails
net = cudnn.convert(net, nn):float()
net:forward(x:float())
```
cudnn.convert is not supposed to work with nngraph now, although it should not be difficult to implement.
Added an error message now via https://github.com/soumith/cudnn.torch/commit/07b0b2d9989c7f68dad2b9d43eb0813315c7b032
Are you sure that it is still not supported by nngraph? I have been using cudnn.convert together with nngraph for months without any issues; all modules have been converted just fine. The example above runs perfectly fine as well (I had to comment the gModule detection out): executing `net:get(2)` correctly yields `nn.SpatialConvolution(1 -> 1, 3x3)`.
@CodeRect it works in some (most?) cases, but there are some situations where I think it breaks some functionality, such as `type()`. I didn't have time to track it down precisely; I'll need to create a small test case to illustrate the problem, but I won't have time to do that soon.
I just installed cuDNN and the latest versions for torch7, nn and nngraph. I am unable to convert an nngraph network to cudnn. I get the following error:
Warning: cudnn.convert does not work with nngraph yet. Ignoring nn.gModule
It's not an error, it's a warning
Cc: @fmassa do you have time to refine this so that it is not completely disabled for nngraph? You said there are some cases where nngraph conversion doesn't work right?
The easiest way would be to print a warning and still proceed with the conversion. I just removed the check and continued to use it as usual. I have been using nngraph quite extensively without any issues so far. I am not familiar with the internals though, which is why I so far refrained from participating further in the discussion.
@soumith I'll have a look at that tomorrow. I don't have access anymore to the models that were failing, but I'll try to reproduce the failure case I had.
@Amir-Arsalan Are you sure that you are using the most recent version of cudnn.torch? I see no reason why CUDA 8.0 should change anything.
@CodeRect Actually I just realized that my code was not using cuDNN due to an 'if' statement I had. My apologies.
Are there any estimates as to when nngraph might be supported by cudnn.convert?
@fmassa long overdue?
@soumith sorry, I got stuck with my PhD writing, but I'm submitting a final version to the reviewers this week, so I will have time to look into fixing this nngraph+cudnn problem this weekend.
I know pytorch is probably consuming most people's bandwidth, but just bumping this to ask whether nngraph x cuDNN support is done?
@suryabhupa hey, sorry about that. I got busy with other things and haven't had time to look into it yet. Not sure when I'll find some time, and I no longer have the model that was causing the trouble in the `:cuda()`/`:float()` conversion I mentioned before, which required me to manually convert some internals from nngraph.
@szagoruyko You say 'cudnn.convert is not supposed to work with nngraph now, although it should not be difficult to implement.' I have run into the same issue that @griegler described. I want to convert my GPU model to CPU, and my model uses the nngraph module. Could you give me some advice on how to implement this? Thanks a lot.
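Not an official answer, but based on the discussion above, one workaround others have used is to bypass the gModule check by converting each of the graph's inner modules individually (an nn.gModule is a Container, so its nodes' modules are reachable through its `modules` table), then casting to float. A minimal, untested sketch; `toCPU` is a hypothetical helper name, not part of cudnn.torch:

```lua
require('nn')
require('nngraph')
require('cunn')
require('cudnn')

-- Hypothetical helper: convert a cudnn/cunn gModule to a CPU (nn/float) one.
-- cudnn.convert refuses nn.gModule as a whole, but the individual modules
-- inside the graph are ordinary nn modules, so we convert them one by one.
local function toCPU(gmod)
  for i, m in ipairs(gmod.modules) do
    gmod.modules[i] = cudnn.convert(m, nn)
  end
  return gmod:float()
end
```

Assuming the example from the top of this thread, `net = toCPU(net)` followed by `net:forward(input:float())` would then be the CPU path. This may hit the same `type()` edge cases @fmassa mentioned, so it is worth checking the converted model's outputs against the GPU version.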
You should move to PyTorch; Torch is no longer supported.