
[FR]: Use OpenCL instead of proprietary alternatives (CUDA, Metal)

Open RafaelLinux opened this issue 4 years ago • 74 comments

I just reported previously that it's impossible to render with Meshroom, probably because, despite my having an NVIDIA GPU, NVIDIA does not provide any CUDA package for openSUSE 15.1. I use Blender, GIMP and other tools, all of which use OpenCL. Meshroom is developed for Linux and Windows, and OpenCL is continuously updated for both platforms. OpenCL performance is only slightly below the proprietary NVIDIA or AMD APIs, so why not let Meshroom use the OpenCL GPGPU API? Even Intel GPU users could use Meshroom if it were built on the OpenCL framework.

Please, could you consider this suggestion?

Thank you

RafaelLinux avatar Aug 16 '19 11:08 RafaelLinux

Read https://github.com/alicevision/AliceVision/issues/439. Here is some background on why CUDA is used in many applications: https://www.quora.com/Why-cant-a-deep-learning-framework-like-TensorFlow-support-all-GPUs-like-a-game-does-Many-games-in-the-market-support-almost-all-GPUs-from-AMD-and-Nvidia-Even-older-GPUs-are-supported-Why-cant-these-frameworks

natowi avatar Aug 16 '19 11:08 natowi

I read the thread. Some comments are from 2018, when OpenCL 2.2 didn't yet exist, and much has changed since then. CUDA is used in many applications, but so is OpenCL. Darktable, which I use regularly, is also on that list.

Anyway, @fabiencastan wrote:

Currently, we have neither the interest nor the resources to do another implementation of the CUDA code to another GPU framework.

That's a pity, because a lot of users cannot try Meshroom, even though it's a great piece of software. I'm currently on the PC with the Intel GPU, so there is no way to use Meshroom, and I have tried alternatives like Metashape, which doesn't necessarily require an NVIDIA GPU.

RafaelLinux avatar Aug 16 '19 13:08 RafaelLinux

That @fabiencastan does not have the time to port a — for him — working implementation does not mean that others cannot implement it in their own time. A big question here is: would you implement it in OpenCL, or in something different? Some good pointers on the wiki about viable alternatives could help people who want to start on this task.

skinkie avatar Aug 20 '19 19:08 skinkie

Hi skinkie, I don't have sufficient skills to code in C/C++. I'd give it a try if it were Python, PHP or even JS. My point is that "fewer users able to run an application = less interest in the application = less feedback", and eventually the great idea becomes a lost effort. It's true that it's easier to work with the CUDA API, but many users in this forum have posted information about how to migrate, or simplify the change, to OpenCL. That could be a good starting point. That's only my opinion, of course.

RafaelLinux avatar Aug 20 '19 20:08 RafaelLinux

@RafaelLinux As a user you can use Meshroom without CUDA; the only part of the application that is 'hidden' is the DepthMap stage, and even that allows a preview without CUDA. As a developer, Meshroom is Python + QML, a low barrier to entry for making an impact. The first stage where CUDA acceleration is used is feature extraction. You could just try to get this to work: https://github.com/pierrepaleo/sift_pyocl

Personally, my focus for Meshroom is introducing some heuristics for matching images, and supervised learning as opposed to the current brute-force approach. Not that I am a photogrammetry specialist, but I can surely try to work on this open source project.

skinkie avatar Aug 20 '19 21:08 skinkie
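A minimal sketch of the sift_pyocl route suggested above (sift_pyocl later moved into the silx package as silx.image.sift; the `extract_keypoints` wrapper and its fallback are my own invention, and this assumes silx and pyopencl are installed with a working OpenCL device):

```python
# Hedged sketch: OpenCL-based SIFT keypoint extraction via sift_pyocl,
# which now ships as silx.image.sift. Only SiftPlan/keypoints come from
# silx; the wrapper function and graceful fallback are illustrative.
import numpy as np

try:
    from silx.image import sift  # pip install silx pyopencl
    HAVE_SILX = True
except ImportError:
    HAVE_SILX = False


def extract_keypoints(image):
    """Run GPU SIFT through OpenCL if possible, else return None."""
    if not HAVE_SILX:
        return None
    try:
        # devicetype="GPU" asks pyopencl for any GPU (NVIDIA, AMD, Intel);
        # "CPU" also works, which is the whole point of the OpenCL request.
        plan = sift.SiftPlan(template=image, devicetype="GPU")
        return plan.keypoints(image)
    except Exception:
        return None  # no usable OpenCL device in this environment


if __name__ == "__main__":
    img = (np.random.rand(128, 128) * 255).astype(np.uint8)
    print(extract_keypoints(img))
```

Note this only covers the feature-extraction node; the DepthMap stage discussed below is a separate, larger CUDA code base.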

Maybe I'm using Meshroom incorrectly, because if I only get as far as DepthMap, I only see a point cloud, so I can't see the resulting model.

RafaelLinux avatar Aug 21 '19 00:08 RafaelLinux

https://github.com/alicevision/meshroom/wiki/Draft-Meshing

skinkie avatar Aug 21 '19 07:08 skinkie

Thank you, that's a good workaround. I'll try it. Anyway, remember that users don't mind how long it takes; quality is the priority. So please, don't forget this feature request ;)

RafaelLinux avatar Aug 21 '19 09:08 RafaelLinux

One could also use AMD's hipify to convert CUDA code to HIP, which can be built to run on either NVIDIA or AMD cards (with very nice performance; I currently use it for TensorFlow, and it works like a charm!)

aviallon avatar Aug 25 '19 23:08 aviallon

@aviallon The last time I checked (2018), HIP did not support some CUDA functions (https://github.com/alicevision/AliceVision/issues/439#issuecomment-417422887) and there was no full support for Windows and amdgpu on Linux (https://github.com/alicevision/AliceVision/issues/439#issuecomment-417635336).

You are welcome to try again using hipify.

natowi avatar Aug 25 '19 23:08 natowi

for reference https://github.com/cpc/hipcl

arpu avatar Sep 17 '19 22:09 arpu

for reference https://github.com/cpc/hipcl

This is interesting, has anyone tried it?

pppppppp783 avatar Sep 25 '19 17:09 pppppppp783

https://www.computer.org/publications/tech-news/from-cuda-to-opencl-execution/

pppppppp783 avatar Sep 25 '19 17:09 pppppppp783

NVIDIA does not provide any CUDA package for openSUSE 15.1.

This is simply a packaging issue, since Arch has CUDA despite not being on the list here.

Have you already reported that issue to both the openSUSE packagers and the NVIDIA CUDA team?

And you can probably repackage either the openSUSE 15.0 package or the Arch package, which uses an independent source, as you can see in the link.

ShalokShalom avatar Sep 27 '19 08:09 ShalokShalom

@ShalokShalom the problem with CUDA remains that older hardware simply does not work with newer CUDA versions. This causes problems between the NVIDIA drivers and CUDA, where one is effectively searching for the 'ideal pair' of the two. I would be very interested to see whether OpenCL could bridge this gap, even by letting the user choose the execution pipeline.

skinkie avatar Sep 27 '19 08:09 skinkie

And how is it with HIP? Does NVIDIA hardware run on it as well?

I'm considering using a GeForce GT 610 for CUDA; can you tell me how to choose the suitable CUDA version?

Thanks a lot

ShalokShalom avatar Sep 27 '19 09:09 ShalokShalom

@ShalokShalom

And how is it with HIP? Does NVIDIA hardware run on it as well?

"HIP allows developers to convert CUDA code to portable C++. The same source code can be compiled to run on NVIDIA or AMD GPUs"

I'm considering using a GeForce GT 610 for CUDA; can you tell me how to choose the suitable CUDA version?

On Windows, install the latest version; on Linux this might depend on your distro. The GT 610 supports CUDA compute capability 2.1, and Meshroom requires 2.0+.

natowi avatar Sep 27 '19 09:09 natowi

I am on Linux; what decides which version is optimal? I am on KaOS, which is a rolling distribution.

So, does HIP smooth over the version differences between CUDA and the different NVIDIA hardware?

Could or should we replace CUDA entirely with it, or is the overhead too big?

ShalokShalom avatar Sep 27 '19 10:09 ShalokShalom

@ShalokShalom With HIP we can compile two versions of Meshroom: one for CUDA and one for AMD GPUs. For CUDA users nothing changes. (https://kaosx.us/docs/nvidia/ But you won't get far with a 1 GB GT 610.)

natowi avatar Sep 27 '19 10:09 natowi

We have to wait for HIP to support cudaMemcpy2DFromArray. Then we can add AMD support for AliceVision/Meshroom and try HIPCL.

natowi avatar Sep 27 '19 10:09 natowi

@natowi But you won't get far with a 1 GB GT 610.

If Meshroom allowed parallel computation for nodes where both CPU and GPU could, for example, do feature extraction, any additional computing resource could help. It depends on how much overhead the GPU adds compared to a (faster) decent CPU, but I still see potential for independent computation tasks.

skinkie avatar Sep 27 '19 12:09 skinkie

Looks like HIP now supports cudaMemcpy2DFromArray. Any progress on this?

arpu avatar Nov 16 '19 01:11 arpu

@skinkie see https://github.com/alicevision/meshroom/issues/175

@arpu Yes, all CUDA functions are now supported by HIP, and I was able to convert the code to HIP using the conversion tool (read here for details). The only thing left is to write a new CMake file that includes HIP and supports both CUDA and AMD compilation on the different platforms. Here is the Meshroom PopSift plugin I used for testing. At the moment I don't have the time to figure out how to rewrite the CMake file, but I think @ShalokShalom wanted to look into this. You are welcome to do so as well.

natowi avatar Nov 16 '19 10:11 natowi

One question is very critical, I think: will we ship two versions?

Linux distributions do their packaging themselves, and we could benefit enormously from finding someone willing to maintain AliceVision for their user base, since that could result in new developers and funding.

Two versions, one for CUDA and one for HIP, is something they will never do.

ShalokShalom avatar Nov 19 '19 07:11 ShalokShalom

@ShalokShalom From the HIP code we can compile both CUDA and AMD versions. Similar to the target platform/OS parameter in CMake, CUDA or AMD can be defined, so depending on the compiler parameters we can define the versions (OS + CUDA/AMD). Once we can compile all supported platforms from our hipified code, we can create a PR to use HIP instead of CUDA code by default in the official repo.

natowi avatar Nov 19 '19 08:11 natowi
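A configure-time backend switch like the one described above could look roughly like this in CMake (all option, file, and target names here are hypothetical illustrations, not the actual AliceVision build files):

```cmake
# Hypothetical sketch only: pick the GPU backend when configuring the build.
set(AV_GPU_BACKEND "CUDA" CACHE STRING "GPU backend: CUDA, HIP or NONE")

if(AV_GPU_BACKEND STREQUAL "CUDA")
  enable_language(CUDA)
  target_sources(aliceVision_depthMap PRIVATE depthMap.cu)
elseif(AV_GPU_BACKEND STREQUAL "HIP")
  find_package(hip REQUIRED)  # shipped with a ROCm install
  target_sources(aliceVision_depthMap PRIVATE depthMap.hip.cpp)
  target_link_libraries(aliceVision_depthMap PRIVATE hip::device)
endif()
```

With a switch like this, each distribution or CI job would still produce one package per backend, which is the packaging concern raised above.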

Any idea how long that approximately takes? I feel like a child just before Christmas eve :D

PickUpYaAmmo avatar Nov 25 '19 07:11 PickUpYaAmmo

@PickUpYaAmmo I will take another look at this over the winter holidays.

natowi avatar Nov 25 '19 08:11 natowi

Yay, I'm excited for this! I've been following these threads for a while now, and I'm excited this is finally happening! Thank you guys so much! Any idea of a "guesstimate" for when we may see the first release?

BootySmack avatar Dec 15 '19 19:12 BootySmack

Isn't ROCm an AMD alternative to CUDA? Someone else mentioned using HIP to convert CUDA for TensorFlow work, but AMD has been supporting TensorFlow with ROCm, albeit ROCm support is still a bit rough at the moment and perhaps not as accessible as OpenCL.

There are also third-party libraries like ArrayFire, which are a bit more limited in functionality AFAIK, but abstract OpenCL/CUDA under a single API and create JIT-compiled kernels. I'm not sure how appropriate it is for this project, but it's meant to do a pretty good job at compute workloads and optimizing them, and it's written in C++ like this project, so it may be preferable to maintaining/developing OpenCL/CUDA code directly.

polarathene avatar Dec 23 '19 05:12 polarathene

It might need some refactoring, but shouldn't it be possible to simply support both code paths and use some logic to decide which one to use based on some condition (availability of hardware, or configuration of some kind)? This would not require a second package for different hardware.

Keridos avatar Apr 18 '20 13:04 Keridos
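The runtime-dispatch idea in the last comment can be sketched as follows (all backend names and probe functions here are hypothetical placeholders; a real probe would call into the CUDA runtime or enumerate OpenCL platforms):

```python
# Hedged sketch of runtime backend selection: probe the available
# hardware once, then dispatch work to the first usable code path.
# Probes are stubbed out; real ones would query CUDA/OpenCL.

def cuda_available():
    """Placeholder probe; a real one would check cudaGetDeviceCount."""
    return False


def opencl_available():
    """Placeholder probe; a real one would enumerate pyopencl platforms."""
    return False


# Ordered by preference; the CPU path always works as a fallback.
BACKENDS = [
    ("cuda", cuda_available, lambda img: "depth map via CUDA"),
    ("opencl", opencl_available, lambda img: "depth map via OpenCL"),
    ("cpu", lambda: True, lambda img: "depth map via CPU fallback"),
]


def select_backend():
    """Return (name, run) for the first backend whose probe succeeds."""
    for name, probe, run in BACKENDS:
        if probe():
            return name, run
    raise RuntimeError("no backend available")


if __name__ == "__main__":
    name, run = select_backend()
    print(name, "->", run(None))  # prints "cpu -> depth map via CPU fallback"
```

A single build shipping both code paths like this would sidestep the two-package problem, at the cost of compiling and bundling every backend.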