
More examples

Open fxia22 opened this issue 8 years ago • 9 comments

Hi pytorch team,

I am looking to port https://github.com/qassemoquab/stnbhwd to pytorch with ffi. Do you know if it is possible? Is the mechanism of writing extensions for torch and pytorch similar, or in other words, can I reuse some of the code from that repo? Thanks.

fxia22 avatar Feb 17 '17 02:02 fxia22

Yes, it should be quite easy to reuse it. You'd only need to copy over the C files and change the functions to accept tensors as arguments instead of parsing them out of the Lua state. Then just use the package example and that should be it.

apaszke avatar Feb 17 '17 14:02 apaszke
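For anyone following along, the change described above looks roughly like this. A minimal, self-contained sketch: the Tensor struct here is a hypothetical stand-in for THFloatTensor, just enough to illustrate the calling convention (the real code would use the TH tensor accessors instead).

```c
#include <stdio.h>

/* Hypothetical stand-in for THFloatTensor, for illustration only. */
typedef struct { float *data; long size; } Tensor;

/* pytorch-ffi style: tensors arrive directly as C arguments instead
 * of being parsed out of the Lua state; return 1 on success, 0 on a
 * size mismatch. */
int my_lib_add_forward(Tensor *input1, Tensor *input2, Tensor *output)
{
  if (input1->size != input2->size)
    return 0;
  for (long i = 0; i < input1->size; i++)
    output->data[i] = input1->data[i] + input2->data[i];
  return 1;
}
```

The body of the computation is unchanged from the Lua version; only the signature and the error-signalling convention differ.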

Awesome, thank you! I will let you know how things go.

fxia22 avatar Feb 17 '17 17:02 fxia22

@apaszke I am trying to get the data from a CudaTensor. I changed the example library to the following, but it gives me a seg fault:

int my_lib_add_forward_cuda(THCudaTensor *input1, THCudaTensor *input2,
               THCudaTensor *output)
{
  if (!THCudaTensor_isSameSizeAs(state, input1, input2))
    return 0;
  float * input_data = THCudaTensor_data(state, input1);
  printf("data %f\n", input_data[0]);
  THCudaTensor_resizeAs(state, output, input1);
  THCudaTensor_cadd(state, output, input1, 1.0, input2);
  return 1;
}

I need similar operations for spatial transformer network to work with CUDA (cpu version already works). Can you share with me how to do this extraction? Thanks in advance.

fxia22 avatar Feb 17 '17 22:02 fxia22

I guess my question is about how to reuse CUDA code. When I attempted to do so, it tells me threadIdx is not defined.

fxia22 avatar Feb 18 '17 00:02 fxia22

you cannot printf data through a CUDA device pointer from host code; dereferencing it (input_data[0]) will segfault. Maybe you can lightly read the CUDA programming guide: docs.nvidia.com/cuda/cuda-c-programming-guide

soumith avatar Feb 18 '17 03:02 soumith
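For reference, the usual fix is to copy the value back to the host before printing it. A sketch, assuming input1 lives on the GPU as in the snippet above:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

/* device_ptr points into GPU memory, so host code must copy the
 * value back with cudaMemcpy before it can be dereferenced. */
void print_first_element(const float *device_ptr)
{
  float host_value;
  cudaMemcpy(&host_value, device_ptr, sizeof(float),
             cudaMemcpyDeviceToHost);
  printf("data %f\n", host_value);
}
```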

Can't you just copy the code from the original repo? You shouldn't need to change any code that computes the function, only change the argument parsing.

apaszke avatar Feb 18 '17 17:02 apaszke

Thanks for your reply.

@apaszke Yes, I finished the CPU version porting and it was quite intuitive. And I read the CUDA programming guide. But how can I build a .cu extension with extension-ffi? I am able to use some torch CUDA functions like THCudaTensor_cadd, but how can I write my own CUDA functions?

For example, when I try to write my own add function, it gives me this error:

/home/fei/Development/extension-ffi/script/src/my_lib_cuda.c: In function ‘VecAdd’:
/home/fei/Development/extension-ffi/script/src/my_lib_cuda.c:9:64: error: ‘threadIdx’ undeclared (first use in this function)
 __global__ void VecAdd(float* A, float* B, float* C) { int i = threadIdx.x; C[i] = A[i] + B[i];  }
                                                                ^

fxia22 avatar Feb 19 '17 04:02 fxia22

@fxia22 torch.utils.ffi doesn't appear to have any knowledge of nvcc or .cu. I think you need to build your cuda sources separately (see the example Makefiles that come with the CUDA SDK) and then add the built object(s) to 'extra_objects' through kwargs when creating an extension.

See: https://docs.python.org/3/distutils/apiref.html#distutils.core.Extension

mattmacy avatar Feb 28 '17 23:02 mattmacy
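Putting that together, the .cu side might look like the sketch below (file and function names are hypothetical): the kernel plus an extern "C" launcher that the plain-C ffi code can call. The file would be compiled separately with something like nvcc -c -Xcompiler -fPIC, and the resulting object file passed to create_extension via extra_objects.

```cuda
/* my_lib_cuda_kernels.cu (hypothetical name) -- compiled by nvcc,
 * not by the ffi builder's C compiler, so __global__ and threadIdx
 * are understood here. */
__global__ void VecAdd(const float *A, const float *B, float *C)
{
  int i = threadIdx.x;
  C[i] = A[i] + B[i];
}

/* extern "C" launcher: the only symbol the plain-C extension code
 * needs to see; it hides all CUDA syntax behind a C-linkage call. */
extern "C" void VecAdd_launch(const float *A, const float *B,
                              float *C, int n)
{
  VecAdd<<<1, n>>>(A, B, C);
}
```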

@mattmacy Thanks, I will give it a shot!

fxia22 avatar Mar 01 '17 20:03 fxia22