
PyTorch and TensorFlow backends pass data through the CPU.

Open jonasteuwen opened this issue 4 years ago • 4 comments

Current implementations of the forward and backward projector wrappers in ODL pass data through NumPy arrays. Are there any plans to hook into ASTRA's GPU context and connect it directly?

jonasteuwen avatar May 01 '20 09:05 jonasteuwen

This has been discussed on several occasions, especially for algorithms that want to make use of automatic differentiation in PyTorch. A closely related issue is #731 (see also #739) and the still-open pull request #1546.

ozanoktem avatar May 01 '20 10:05 ozanoktem

The main issue is that we don't have CUDA support in ODL directly, so any ODL operator will make the data go through CPU memory. Apart from the issues and PRs @ozanoktem referred to, there are two more: #1231 and #1401 (basically the same though). But my current stance is that all the tedious work to make ODL element wrappers behave nicely is a waste of time (read: I won't do it), since I want to get away from that concept and use arrays directly, see #1475.

So yes, there are plans, but at least for me the order is #1475, then #1401. If anyone else would like to give it a try independently, go ahead. But it's not trivial.

In the short run, @jonasteuwen, you could hack something yourself by pulling out the Operator-specific parts of OperatorFunction and implementing the calls to ASTRA directly (a rough sketch of that idea follows below).

kohr-h avatar May 01 '20 13:05 kohr-h
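
A minimal sketch of the kind of hack suggested above, assuming hypothetical helpers `astra_forward_projection` and `astra_back_projection` that wrap ASTRA's GPU projectors and operate on NumPy arrays (these names are placeholders, not ASTRA's actual API). It mirrors the role of `odl.contrib.torch.OperatorFunction` but skips the ODL element wrapping; note that data still crosses CPU memory via the NumPy round trip:

```python
import numpy as np
import torch


# Hypothetical placeholders: assumed to call ASTRA's GPU forward/back
# projection on NumPy arrays. They are NOT part of ASTRA's public API.
def astra_forward_projection(volume: np.ndarray) -> np.ndarray:
    raise NotImplementedError("wrap your ASTRA forward projector here")


def astra_back_projection(sinogram: np.ndarray) -> np.ndarray:
    raise NotImplementedError("wrap your ASTRA back-projector here")


class RayTransformFunction(torch.autograd.Function):
    """Autograd wrapper that calls ASTRA directly instead of going through
    OperatorFunction. Data still passes through CPU memory here (the
    .cpu().numpy() round trip); a true GPU-to-GPU path would hand over
    device pointers instead."""

    @staticmethod
    def forward(ctx, x):
        y = astra_forward_projection(x.detach().cpu().numpy())
        return torch.as_tensor(y, dtype=x.dtype, device=x.device)

    @staticmethod
    def backward(ctx, grad_output):
        # For a linear forward projection, the adjoint (back-projection)
        # gives the gradient with respect to the input volume.
        g = astra_back_projection(grad_output.detach().cpu().numpy())
        return torch.as_tensor(g, dtype=grad_output.dtype,
                               device=grad_output.device)


# Usage: ray_transform = RayTransformFunction.apply
```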

I guess #1401 would likely bring it much closer, as there seems to be a possible back and forth between CuPy and PyTorch.

jonasteuwen avatar May 01 '20 13:05 jonasteuwen

Definitely, you can just hand over device memory pointers.

kohr-h avatar May 01 '20 17:05 kohr-h
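
For context, a minimal sketch (not from the thread) of the kind of device-memory handover mentioned above, using the DLPack protocol to exchange a buffer between CuPy and PyTorch without a CPU round trip; the array contents are just placeholders:

```python
import cupy as cp
import torch
import torch.utils.dlpack

# Allocate an array on the GPU with CuPy (stand-in for an ASTRA/ODL result).
cp_array = cp.arange(12, dtype=cp.float32).reshape(3, 4)

# CuPy -> PyTorch: zero-copy, the tensor shares the same device memory.
torch_tensor = torch.utils.dlpack.from_dlpack(cp_array.toDlpack())

# PyTorch -> CuPy: again no host round trip, only the pointer is handed over.
cp_view = cp.fromDlpack(torch.utils.dlpack.to_dlpack(torch_tensor))

# All three objects reference the same device buffer, so an in-place
# update on the PyTorch side is visible from CuPy as well.
torch_tensor += 1
assert float(cp_view[0, 0]) == 1.0
```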