mshadow
`<gpu>` and `<cpu>` generate totally different results
Help.
I improved `/guide/neuralnet/convnet.cu` and added my own function. When I use the `-cpu` parameter, things go well: the error rate is declining. However, `-gpu` generates totally different results: the error just stays at about 0.9 (it doesn't change at all)...
It may seem there's a bug in mshadow's GPU implementation, but I don't know. I didn't add my own GPU code; I just use mshadow's (templated on `<xpu>`).
For the CPU I used a BLAS lib, and my CUDA version is 7.0.
Please help.
@tqchen @antinucleon
Besides, when I change the configuration of `stride`, `ksize` and `pad`, the outputs are also not the same. Some configurations work well on both CPU and GPU, but some only on CPU.
Is there any chance that GPU and CPU behave differently on `=` and `Copy`?
I use a lot of `=` between tensors and expressions, following the original convnet version. I also use `Copy`, in the cases where `=` can't work (like `tensor[i] = tensor2.Slice(a, a+1)` or `tensor[i] = tensor2[i]`).
What's more, I used expressions like `tensor.Slice(i, i+1) = some_expression`. Is this valid, or must I turn it into `Copy(tensor.Slice(i, i+1), exp, stream_)`?
Please help. @tqchen @antinucleon
I set every tensor's stream, and in GPU mode no error is reported. Is there a bug, or some other reason why GPU and CPU perform differently?
Besides, I used a `std::vector` to store tensors, like `vector[i] = tensor`.
I'll be really grateful if someone has an idea to help me.
Could you post your code as a gist so I can have a glance?
Note that there are special semantics on `=` in mshadow: when you assign one tensor to another, it is a pointer copy instead of an assignment. So if you intended it to be an assignment, use

```
Copy(dst, src);
```

or

```
dst = F<op::identity>(src);
```

or

```
dst = 1.0f * src;
```
This code base has been donated to the Apache MXNet project per #373, and this repo is deprecated. Future development and issue tracking should continue in Apache MXNet.