Kane Scipioni
As I understand it, `.forward()` only supports feeding a single tensor that is already assembled. I can concatenate the tensors from replay memory beforehand, but is that not the responsibility...
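For context, a rough sketch of what that up-front concatenation might look like, assuming the per-sample states from replay memory are already held as `dlib::resizable_tensor` objects (the helper name is illustrative, not part of dlib):

```cpp
#include <dlib/dnn.h>
#include <vector>
#include <cstring>

// Illustrative only: pack N single-sample tensors from replay memory into one
// batch tensor so a single, already-assembled tensor can be handed to forward().
dlib::resizable_tensor make_batch(const std::vector<dlib::resizable_tensor>& samples)
{
    DLIB_CASSERT(!samples.empty());
    const auto& first = samples[0];

    dlib::resizable_tensor batch;
    batch.set_size(samples.size(), first.k(), first.nr(), first.nc());

    const size_t sample_floats = first.k() * first.nr() * first.nc();
    float* dst = batch.host();
    for (const auto& s : samples)
    {
        std::memcpy(dst, s.host(), sample_floats * sizeof(float));
        dst += sample_floats;
    }
    return batch;
}
```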
Awesome. I added a unit test. Let me know what you think.
Of course. Btw, dlib is great!
Hey @davisking! Do you have any feedback, or did you get a chance to test this code?
Thanks! I also had difficulty finding the time to fix this. I defined variadic macros because `IF_DLIB_USE_CUDA(expression)` implies a single argument and the preprocessor splits on all the commas in...
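For anyone following along, a minimal sketch of the idea (the macro name matches the one discussed, but the body here is only illustrative):

```cpp
// A single-argument macro breaks on expressions containing commas, because the
// preprocessor treats each top-level comma as an argument separator:
//
//   IF_DLIB_USE_CUDA(foo<int, float>(x));   // error: macro passed 2 arguments
//
// Declaring the macro as variadic and forwarding __VA_ARGS__ keeps the commas intact:
#ifdef DLIB_USE_CUDA
#define IF_DLIB_USE_CUDA(...) __VA_ARGS__
#else
#define IF_DLIB_USE_CUDA(...)
#endif
```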
> Since the resizable_tensor objects are all still calling cudaMalloc and allocating GPU memory.

Oh, good catch. I didn't check the behavior with `CUDA_VISIBLE_DEVICES=`. It seems like this can be...
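A quick way to check what the runtime actually sees when `CUDA_VISIBLE_DEVICES=` is left empty (a standalone sketch, not dlib code): with no visible devices it should report an error and a count of zero, so any unconditional `cudaMalloc` would fail in that environment.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    // With CUDA_VISIBLE_DEVICES= (empty), the runtime should report no usable
    // devices; a cudaMalloc made unconditionally would fail here.
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    std::printf("cudaGetDeviceCount -> %s, count = %d\n",
                cudaGetErrorString(err), count);
    return 0;
}
```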
Also, do you still want to add the `do { } while(0)` wrapper to the macro?
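For reference, the reason for the `do { } while(0)` wrapper: it makes a multi-statement macro behave like a single statement, so it composes safely with `if`/`else`. A generic sketch with placeholder names:

```cpp
// Without the wrapper, the two statements break apart under an if without braces:
//
//   #define CHECK(x) log(x); abort_if_bad(x)
//   if (failed) CHECK(x);        // only log(x) is guarded by the if
//
// Wrapping the body in do { } while(0) turns the macro into a single statement
// that also requires the trailing semicolon at the call site:
#define CHECK(x) do { log(x); abort_if_bad(x); } while (0)
```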
I think this is working now. Let me know what you think about this behavior. A tensor allocated as host only (i.e. `use_cuda() == false`) must remain host only unless...
@davisking What do you think about these changes to the CUDA memory allocation? I suppose the obvious alternative is to defer CUDA allocation until a call to `device()` is made,...
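To illustrate the alternative being floated (deferring the device allocation until `device()` is first called), here is a minimal standalone sketch of the lazy-allocation idea, not the actual dlib tensor implementation:

```cpp
#include <cuda_runtime.h>
#include <cstddef>
#include <vector>

// Minimal lazy-allocation idea: keep the data on the host and only call
// cudaMalloc the first time device memory is actually requested.
class lazy_device_buffer
{
public:
    explicit lazy_device_buffer(size_t n) : host_(n, 0.0f) {}

    ~lazy_device_buffer() { if (dev_) cudaFree(dev_); }

    float* host() { return host_.data(); }

    // Device allocation (and upload) happens on first use, so a host-only
    // workflow never touches the GPU at all.
    float* device()
    {
        if (!dev_)
        {
            cudaMalloc(reinterpret_cast<void**>(&dev_), host_.size() * sizeof(float));
            cudaMemcpy(dev_, host_.data(), host_.size() * sizeof(float),
                       cudaMemcpyHostToDevice);
        }
        return dev_;
    }

private:
    std::vector<float> host_;
    float* dev_ = nullptr;
};
```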