Chaithya G R
I think that is fixed in tensorflow-nufft. We still have minor issues in graph mode, which I'm actively fixing, but they're tracked there.
This will be a very useful feature when we want to reuse variables or objects in parallel computation.
I think you need to define your custom call function as:

```python
def call(self, inputs):
    return output
```

If you use `__call__`, you are essentially overloading the core python...
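For context, a minimal sketch of what that looks like in a `tf.keras.layers.Layer` subclass (the `Scale` layer here is a made-up example, not code from the thread):

```python
import tensorflow as tf

class Scale(tf.keras.layers.Layer):
    """Hypothetical layer that multiplies its input by a learned scalar."""

    def build(self, input_shape):
        self.w = self.add_weight(name="w", shape=(), initializer="ones")

    # Override call(), not __call__(): Keras's __call__ wraps call() with
    # bookkeeping such as build(), masking, and name scoping.
    def call(self, inputs):
        return inputs * self.w

layer = Scale()
out = layer(tf.ones([2, 3]))  # goes through __call__, which dispatches to call()
```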
> Do you use CUDA or OpenCL

In this case it is OpenCL. However, I think I have similar memory issues in CUDA.

> how exactly do you check the...
Let me give it a try with plain PyOpenCL and get back. I do agree that this issue I am seeing is very weird. Perhaps I too will try to...
> Did you try doing import gc; gc.collect()?

Nope, I did not. I was assuming this would just collect unused memory attached to RAM; would it help with memory on...
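(A minimal sketch of why it can help, assuming PyOpenCL: a `Buffer`'s device allocation is released when its Python object is destroyed, so `gc.collect()` can free GPU memory whose last Python reference is gone.)

```python
import gc
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
host = np.zeros(2**20, dtype=np.float32)  # ~4 MB of host data

# Device allocation tied to the lifetime of the Buffer object.
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                hostbuf=host)

del buf       # drop the last Python reference
gc.collect()  # force collection so the buffer is destroyed promptly,
              # releasing the device memory instead of waiting on the GC
```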
Also, a side note: when executing eagerly, `parallel_iterations = 1`, which makes debugging the code extremely slow (batch_size times slower). This is bad in case we want to reach...
No, if I set it higher I get a warning saying that I can't run it in parallel in eager mode. The parallel iterations are used particularly while the graph is...
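For reference, a minimal sketch of the behavior using `tf.map_fn` (assuming that is the relevant op; the thread does not name the exact function):

```python
import tensorflow as tf

def per_sample(x):
    return tf.reduce_sum(x * x)

batch = tf.random.normal([8, 128])

# Eager execution: iterations run sequentially; requesting
# parallel_iterations > 1 here only produces the warning mentioned above.
eager_out = tf.map_fn(per_sample, batch)

# Graph mode: parallel_iterations bounds how many loop iterations
# may run concurrently.
@tf.function
def graph_map(b):
    return tf.map_fn(per_sample, b, parallel_iterations=8)

graph_out = graph_map(batch)
```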
This support is only added for cufinufft at the moment; support for finufft is coming soon.
I have rebased all my changes onto the latest version. I will still need the cufinufft_spread and interp functions to be exposed. @blackwer, can you please comment on this and...