
Improving Csr::strategy_type

Open · thoasm opened this issue 5 years ago · 2 comments

Currently, all strategies for CSR are tailored for CUDA, which is fine for the most part. However, the names are also CUDA-specific (see Csr::cusparse), which in my opinion should change since we want to support multiple platforms. I would prefer a more neutral name like Csr::sparse_library (maybe there is even an OpenMP sparse library we could use for it). Additionally, automatical only works on CUDA devices and even requires a CudaExecutor. I would prefer a solution that can also adapt to certain OpenMP properties (which we should have as soon as we have a more sophisticated SpMV there).

Additionally, I am not sure why we use an std::shared_ptr for these strategies. Currently, we always have to call std::make_shared to generate a strategy, which is neither intuitive nor necessary, since a strategy object stores very little (at most an std::string and an int64_t). Copying the object would likely even be faster than allocating it on the heap, although performance should not really matter here; intuitiveness is the more important point for me. We could also encapsulate the strategies in a class named spmv_strategy, so it is clear that Csr::spmv_strategy::automatical is an SpMV strategy.
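As a purely illustrative sketch of the difference (the nested spmv_strategy scope and a value-accepting create overload are assumptions, not existing API, which is why that line is commented out; the exact create overloads also differ between Ginkgo versions):

```cpp
#include <memory>

#include <ginkgo/ginkgo.hpp>

int main()
{
    using csr = gko::matrix::Csr<double, gko::int32>;
    auto exec = gko::ReferenceExecutor::create();

    // Today: the strategy has to be heap-allocated and passed as a shared_ptr.
    auto a = csr::create(exec, std::make_shared<csr::classical>());

    // Hypothetical alternative: strategies are cheap value types grouped under
    // a nested spmv_strategy scope, so no make_shared call is needed and the
    // qualified name already says what the strategy is for.
    // auto b = csr::create(exec, csr::spmv_strategy::automatical{});
}
```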

In summary, I think the following changes should be introduced:

  • Change the names of the strategies to more neutral ones, e.g. cusparse -> sparse_library
  • Make automatical actually automatic and dependent on the executor (CUDA vs. OpenMP vs. Reference), without requiring a CudaExecutor
  • Change the strategy type from std::shared_ptr to a plain object, since the most one of these objects contains is an int64_t and an std::string
  • Put all strategy classes in a separate class spmv_strategy (or similar), so it is used as Csr::spmv_strategy::automatical, which is more descriptive (see the sketch after this list)
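A minimal sketch of what such a nested grouping could look like; all names and members below are illustrative assumptions, not a concrete interface proposal:

```cpp
#include <cstdint>
#include <string>

// Illustrative only: each strategy is a small value type (a name plus at most
// one 64-bit parameter), grouped in a single scope so the qualified name
// documents its purpose, e.g. spmv_strategy::automatical.
struct spmv_strategy {
    struct classical {
        std::string name{"classical"};
    };
    struct sparse_library {  // neutral replacement for the cusparse name
        std::string name{"sparse_library"};
    };
    struct load_balance {
        std::string name{"load_balance"};
        std::int64_t nwarps{};  // filled in from the executor, not by the user
    };
    struct automatical {
        std::string name{"automatical"};
        std::int64_t nnz_limit{};  // threshold picked per executor backend
    };
};
```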

Additionally, some functionality/performance changes can also be incorporated into the strategies:

  • Split the generate/prepare step required by some strategies (cusparse, hipsparse) and, if possible, move it into make_srow so that the prepared data persists across many apply calls, which optimizes repeated applies (a rough sketch follows this list). See discussion here.
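A rough sketch of the idea, with all names hypothetical: the prepared data is built once when the matrix data becomes known and is then reused by every apply.

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: a CSR-like matrix that builds its strategy data ("srow"
// here, but the same would hold for a cusparse/hipsparse analysis object) once
// and keeps it as a member, so repeated apply calls skip the preparation step.
class csr_like {
public:
    void read(std::vector<std::int64_t> row_ptrs)
    {
        row_ptrs_ = std::move(row_ptrs);
        make_srow();  // prepare once, as soon as the sparsity pattern is known
    }

    void apply(/* const vec& b, vec& x */) const
    {
        // uses srow_ directly; no per-apply generate/prepare step
    }

private:
    void make_srow()
    {
        // stand-in for the real load-balancing computation over row_ptrs_
        srow_.assign(64, 0);
    }

    std::vector<std::int64_t> row_ptrs_;
    std::vector<std::int64_t> srow_;
};
```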

thoasm commented on Jun 27, 2019

The automatical and load_balance strategies compute the srow for the GPU kernel, but the matrix may be located in host memory.

The strategy creates a CUDA handle to query the GPU parameters (code). benchmark/spmv does not pass the executor to the strategy when reading the matrix data (code).

As a consequence, reading the matrix in benchmark/spmv still uses device 0 even when we set device_id=1. I think calling make_srow only when the matrix is created on the GPU should be okay.

yhmtsai commented on Aug 13, 2019

I think it would be better if we had a kernel for make_srow and called the kernel that matches the executor where the matrix is stored. Just using make_srow for CUDA is the wrong approach in my opinion, since we want to cover all platforms Ginkgo supports. Currently, the strategy is limited to NVIDIA GPUs only, while we might want specialized OpenMP kernels, or support for AMD GPUs, which we will also support in the near future.
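A hedged sketch of that direction, loosely following Ginkgo's usual operation-dispatch pattern; the build_srow kernel, its per-backend implementations, and the srow member access are assumptions and do not exist in the library as written:

```cpp
// Sketch only: register one operation with reference, OpenMP, and CUDA (and
// later HIP) kernels behind it, and let the matrix's own executor pick the
// implementation that matches where the data actually lives.
namespace csr {
GKO_REGISTER_OPERATION(build_srow, csr::build_srow);
}  // namespace csr

template <typename ValueType, typename IndexType>
void Csr<ValueType, IndexType>::make_srow()
{
    // No CudaExecutor requirement: whatever executor the matrix was created on
    // dispatches to the matching backend kernel.
    this->get_executor()->run(csr::make_build_srow(this));
}
```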

thoasm commented on Aug 13, 2019