
make neo compatible with ORC since shallowCopy has already been removed for ORC

Open ringabout opened this issue 1 year ago • 4 comments

ref https://github.com/nim-lang/Nim/pull/19972 ref https://github.com/nim-lang/Nim/pull/20070
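A minimal sketch of what replacing `shallowCopy` entails (the type names `VectorData`/`Vector` here are hypothetical, not neo's actual layout): under ORC, `seq` assignment performs a full copy, so shared storage has to come from an explicit `ref` indirection instead.

```nim
# Hypothetical sketch, not neo's real types: with ORC, `shallowCopy` is gone
# and `seq` assignment copies, so reference semantics need a `ref` wrapper.
type
  VectorData[A] = ref object   # shared-storage wrapper (assumption)
    data: seq[A]
  Vector[A] = object
    storage: VectorData[A]     # assignment copies only the ref, not the data

proc share[A](v: Vector[A]): Vector[A] =
  ## Sharing is a plain assignment: the underlying buffer is not duplicated.
  result = v

when isMainModule:
  var v = Vector[float](storage: VectorData[float](data: @[1.0, 2.0, 3.0]))
  let w = share(v)
  v.storage.data[0] = 42.0
  doAssert w.storage.data[0] == 42.0  # both views observe the mutation
```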

ringabout avatar Jul 29 '22 09:07 ringabout

Hi @ringabout , thank you for your contribution. Unfortunately, this fails tests, namely the suite "trivial operations should share storage". In other words, it seems that we are performing a copy when it would not be necessary: transpositions, reshaping and so on. Is there a way to make this work with ORC?
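To illustrate what that test suite expects (a sketch with a made-up `Matrix` type, not neo's API): a "trivial" operation like transposition should only produce new shape metadata while pointing at the same underlying buffer.

```nim
# Sketch of "trivial operations share storage": a transpose swaps dimension
# metadata but must reuse the same data buffer. Types here are hypothetical.
type
  Matrix = object
    rows, cols: int
    data: ref seq[float]   # shared buffer (assumed layout)

proc t(m: Matrix): Matrix =
  ## Transposed view: new shape metadata, same underlying storage.
  Matrix(rows: m.cols, cols: m.rows, data: m.data)

when isMainModule:
  let m = Matrix(rows: 2, cols: 3, data: new(seq[float]))
  m.data[] = @[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
  let mt = m.t
  m.data[][0] = 99.0
  doAssert mt.data[][0] == 99.0  # no copy was made
  doAssert mt.data == m.data     # same ref, storage is shared
```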

andreaferretti avatar Aug 04 '22 11:08 andreaferretti

I haven't checked the actual impact in Arraymancer yet, but https://github.com/mratsim/Arraymancer/pull/562 apparently didn't fail tests, in particular the one involving the GRU layer, which relies on shallow copies: https://github.com/mratsim/Arraymancer/blob/a7c5476/src/arraymancer/nn_primitives/nnp_gru.nim#L77-L79.

If this is true, you can use the {.shallow.} pragma, though currently the ORC CI seems to have other issues (cc @Vindaar): https://github.com/mratsim/Arraymancer/pull/563

https://github.com/mratsim/Arraymancer/blob/fecd375/src/arraymancer/laser/tensor/datatypes.nim#L31-L45
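A hedged sketch of the {.shallow.} suggestion (the `DenseTensor` type is invented for illustration, not Arraymancer's definition): with the classic refc GC, the pragma makes assignments copy only the top-level fields, so the `seq` buffers end up shared. Note that ARC/ORC ignore this pragma, which is why it cannot be a complete fix for ORC compatibility.

```nim
# Hypothetical type for illustration. Under the refc GC, {.shallow.} makes
# `b = a` a shallow copy (the seq buffers are shared); under ARC/ORC the
# pragma has no effect and assignment performs a deep copy.
type
  DenseTensor[T] {.shallow.} = object
    shape: seq[int]
    data: seq[T]

when isMainModule:
  var a = DenseTensor[float](shape: @[3], data: @[1.0, 2.0, 3.0])
  var b = a   # shallow with refc; a full copy with ARC/ORC
  doAssert b.shape == @[3]
  doAssert b.data == @[1.0, 2.0, 3.0]
```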

mratsim avatar Aug 05 '22 10:08 mratsim

> I haven't checked the actual impact in Arraymancer yet, but mratsim/Arraymancer#562 apparently didn't fail tests, in particular the one involving the GRU layer, which relies on shallow copies: https://github.com/mratsim/Arraymancer/blob/a7c5476/src/arraymancer/nn_primitives/nnp_gru.nim#L77-L79.
>
> If this is true, you can use the {.shallow.} pragma, though currently the ORC CI seems to have other issues (cc @Vindaar): mratsim/Arraymancer#563
>
> https://github.com/mratsim/Arraymancer/blob/fecd375/src/arraymancer/laser/tensor/datatypes.nim#L31-L45

I could be mistaken, but isn't the important difference here that in that GRU example the code only uses tensors that match our KnownSupportsCopyMem (as they are float) and thus fall under the manual memory allocation branch? For those I wouldn't expect to see any different behavior.

Here in neo it seems more comparable to our fallback 'backend' using seq storage, in which case I also haven't checked if the new code actually keeps reference semantics in those cases.
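The distinction described above can be checked with the stdlib: Nim's `std/typetraits` provides `supportsCopyMem`, the kind of query behind a check like `KnownSupportsCopyMem`. Trivial element types such as `float` qualify for the raw-memory branch; types containing GC-tracked memory fall back to `seq` storage (a sketch, assuming this is the relevant criterion).

```nim
# `supportsCopyMem(T)` is true when T can be safely duplicated with copyMem,
# i.e. it contains no GC-tracked memory. This is the property that selects
# the manual-allocation branch versus the seq fallback described above.
import std/typetraits

type Wrapper = object
  s: string   # contains GC'd memory, so not copyMem-safe

doAssert supportsCopyMem(float)        # raw-memory branch applies
doAssert not supportsCopyMem(Wrapper)  # would use the seq fallback
```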

Vindaar avatar Aug 05 '22 11:08 Vindaar

> I could be mistaken, but isn't the important difference here that in that GRU example the code only uses tensors that match our KnownSupportsCopyMem (as they are float) and thus fall under the manual memory allocation branch? For those I wouldn't expect to see any different behavior.

Right I forgot, when I wrote GRU we still had a seq and I had to explicitly ensure that {.shallow.} worked as expected.

mratsim avatar Aug 05 '22 17:08 mratsim