                        Adapted find_dependencies.cmake to build with CUDA >= 12.0 with dynamic libraries as well
Type
- [x] Bug fix (non-breaking change which fixes an issue): Fixes #
- [ ] New feature (non-breaking change which adds functionality). Resolves #
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) Resolves #
Motivation and Context
Currently, when building with CUDA support, the build searches explicitly for static dependencies; in the absence of the static lib binaries, Open3D fails to build. I think this is important since some CUDA installations may lack the required static dependencies.
For instance, the official NVIDIA DeepStream 7.0 Docker images don't include the static libraries, while the framework includes multiple new features oriented toward LiDAR data processing, camera calibration, and 3D tracking, so being able to develop custom GStreamer plugins that take advantage of Open3D would be very helpful.
Checklist:
- [x] I have run `python util/check_style.py --apply` to apply Open3D code style to my code.
- [ ] This PR changes Open3D behavior or adds new functionality.
- [ ] Both C++ (Doxygen) and Python (Sphinx / Google style) documentation is updated accordingly.
- [ ] I have added or updated C++ and / or Python unit tests OR included test results (e.g. screenshots or numbers) here.
 
- [x] I will follow up and update the code if CI fails.
- [x] For fork PRs, I have selected Allow edits from maintainers.
Description
When importing CUDA dependencies for cuBLAS with CUDA >= 12.0, the build now first checks whether dynamic libraries are available and, if so, builds as usual; otherwise, it falls back to looking for static libraries (a sketch of the pattern follows the image list below). This change has been tested in both of the following NVIDIA Docker images, the DeepStream image being the one without static libraries:
- nvcr.io/nvidia/cuda:12.2.0-devel-ubuntu22.04
- nvcr.io/nvidia/deepstream:7.0-gc-triton-devel
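To illustrate the intent, here is a minimal sketch of the dynamic-first lookup, assuming the imported targets provided by `find_package(CUDAToolkit)`; the variable names are illustrative and this is not the exact diff in `find_dependencies.cmake`:

```cmake
# Minimal sketch (illustrative names, not the exact diff): prefer the shared
# cuBLAS library when present, otherwise fall back to the static archives.
find_library(CUBLAS_SHARED_LIB cublas HINTS ${CUDAToolkit_LIBRARY_DIR})
if(CUBLAS_SHARED_LIB)
    # Shared binaries found: link dynamically, as on a full CUDA install.
    set(CUBLAS_LIBRARIES CUDA::cublas CUDA::cublasLt)
else()
    # No shared binaries: fall back to the static libraries, as before.
    set(CUBLAS_LIBRARIES CUDA::cublas_static CUDA::cublasLt_static)
endif()
```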
After basic container setup, the build was performed by running:

```sh
git clone https://github.com/davconde/Open3D.git
cd Open3D
apt install -y libfmt-dev libomp-dev
SUDO=command util/install_deps_ubuntu.sh assume-yes
mkdir build && cd build
export OPEN3D_INSTALL_PATH=/usr/local
cmake -DBUILD_SHARED_LIBS=ON -DBUILD_PYTHON_MODULE=OFF -DBUILD_CUDA_MODULE=ON -DCMAKE_INSTALL_PREFIX=${OPEN3D_INSTALL_PATH} ..
make -j$(nproc)
make install
echo 'export LD_PRELOAD=/usr/local/lib/libOpen3D.so:${LD_PRELOAD}' >> ~/.bashrc
export LD_PRELOAD=/usr/local/lib/libOpen3D.so:${LD_PRELOAD}
```
And tested with the following snippet:
```cpp
#include <iostream>

#include <open3d/Open3D.h>
#include <open3d/core/CUDAUtils.h>  // CUDA utilities (core::cuda::IsAvailable)

int main() {
    using namespace open3d;

    // Print Open3D version
    std::cout << "Open3D " << OPEN3D_VERSION << std::endl;

    // Check if CUDA is available
    if (core::cuda::IsAvailable()) {
        std::cout << "CUDA is available" << std::endl;
    } else {
        std::cout << "CUDA is NOT available" << std::endl;
    }

    // Create a simple point cloud
    core::Tensor points = core::Tensor::Init<float>(
            {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}});
    t::geometry::PointCloud pcd(points);
    std::cout << "PointCloud:\n" << pcd.GetPointPositions().ToString() << std::endl;

    return 0;
}
```
This yielded consistent results in both cases.
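For reference, a minimal CMake project along the lines of the getting-started docs is enough to build the snippet; the project and target names here are illustrative:

```cmake
cmake_minimum_required(VERSION 3.20)
project(Open3DCudaCheck LANGUAGES CXX)

# Locates the Open3D installed above under CMAKE_INSTALL_PREFIX=/usr/local.
find_package(Open3D REQUIRED)

add_executable(cuda_check main.cpp)  # main.cpp holds the snippet above
target_link_libraries(cuda_check PRIVATE Open3D::Open3D)
```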
Thanks for submitting this pull request! The maintainers of this repository would appreciate if you could update the CHANGELOG.md based on your changes.
Hi @davconde, thanks for submitting this PR. Building with CUDA shared libraries is definitely useful. Some comments:
- Please add an option `BUILD_WITH_CUDA_STATIC` (default ON) to CMakeLists.
- We should not guess the build environment - if `BUILD_WITH_CUDA_STATIC` is ON and any static lib is not found, it's a FATAL_ERROR. Same if it's OFF and shared libs are not found.
- Test both the Open3D shared lib and static lib (using `BUILD_SHARED_LIBS` = ON/OFF options) and the `BUILD_UNIT_TESTS=ON` tests executable.
- Testing must be done in an empty new Docker container, not the build container. Otherwise the binary is not relocatable / portable.
- With dynamic linking, the CUDA libraries are now part of the Open3D ABI. This interface info needs to be present in the cmake and package config scripts created when we do `make package`. To test that this is set up correctly, build a test Open3D app using each of the `find_package(Open3D)` and `open3d.pc` methods and ensure that the build files look for the correct CUDA libraries. See https://www.open3d.org/docs/latest/getting_started.html#id6 for a simple test app.
> Please add an option `BUILD_WITH_CUDA_STATIC` (default ON) to CMakeLists.
The option was added as `option(BUILD_WITH_CUDA_STATIC "Build with static CUDA libraries" ON)`.
> We should not guess the build environment - if `BUILD_WITH_CUDA_STATIC` is ON and any static lib is not found, it's a FATAL_ERROR. Same if it's OFF and shared libs are not found.
Since the last change in #5833, the way CUDA libraries are imported for cuBLAS does not use `open3d_find_package_3rdparty_library`, so without the previous library search, no FATAL_ERROR was thrown even before this PR. I've added one, however, when importing NPP, whether statically or dynamically: `message(FATAL_ERROR "CUDA NPP libraries not found.")`
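For reference, a sketch of how such an option-gated lookup with a fail-fast guard can look; the target lists are illustrative, and only the `message(FATAL_ERROR ...)` text is taken from the actual change:

```cmake
# Illustrative sketch: select NPP targets based on BUILD_WITH_CUDA_STATIC
# and fail fast if the requested flavour is missing.
if(BUILD_WITH_CUDA_STATIC)
    set(NPP_LIBRARIES CUDA::nppc_static CUDA::nppicc_static CUDA::nppif_static)
else()
    set(NPP_LIBRARIES CUDA::nppc CUDA::nppicc CUDA::nppif)
endif()
foreach(npp_target IN LISTS NPP_LIBRARIES)
    if(NOT TARGET ${npp_target})
        message(FATAL_ERROR "CUDA NPP libraries not found.")
    endif()
endforeach()
```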
> Test both the Open3D shared lib and static lib (using `BUILD_SHARED_LIBS` = ON/OFF options) and the `BUILD_UNIT_TESTS=ON` tests executable.
> Testing must be done in an empty new Docker container, not the build container. Otherwise the binary is not relocatable / portable.
As requested, I've built the 4 possible combinations of toggling `BUILD_SHARED_LIBS` and `BUILD_WITH_CUDA_STATIC`, then moved the binaries into a new container and ran the unit tests (`./bin/tests`) from each generated install directory, after setting `LD_LIBRARY_PATH` accordingly for each case. Global results are as follows:
- `BUILD_SHARED_LIBS=OFF` and `BUILD_WITH_CUDA_STATIC=ON`
```
[----------] Global test environment tear-down
[==========] 1714 tests from 109 test suites ran. (265153 ms total)
[  PASSED  ] 1712 tests.
[  SKIPPED ] 2 tests, listed below:
[  SKIPPED ] RGBDImage.CreateFromNYUFormat
[  SKIPPED ] Tensor/TensorPermuteDevices.TakeOwnership/1
```
- `BUILD_SHARED_LIBS=OFF` and `BUILD_WITH_CUDA_STATIC=OFF`
```
[----------] Global test environment tear-down
[==========] 1714 tests from 109 test suites ran. (117355 ms total)
[  PASSED  ] 1712 tests.
[  SKIPPED ] 2 tests, listed below:
[  SKIPPED ] RGBDImage.CreateFromNYUFormat
[  SKIPPED ] Tensor/TensorPermuteDevices.TakeOwnership/1
```
- `BUILD_SHARED_LIBS=ON` and `BUILD_WITH_CUDA_STATIC=ON`
```
[----------] Global test environment tear-down
[==========] 1714 tests from 109 test suites ran. (120258 ms total)
[  PASSED  ] 1712 tests.
[  SKIPPED ] 2 tests, listed below:
[  SKIPPED ] RGBDImage.CreateFromNYUFormat
[  SKIPPED ] Tensor/TensorPermuteDevices.TakeOwnership/1
```
- `BUILD_SHARED_LIBS=ON` and `BUILD_WITH_CUDA_STATIC=OFF`
```
[----------] Global test environment tear-down
[==========] 1714 tests from 109 test suites ran. (119631 ms total)
[  PASSED  ] 1712 tests.
[  SKIPPED ] 2 tests, listed below:
[  SKIPPED ] RGBDImage.CreateFromNYUFormat
[  SKIPPED ] Tensor/TensorPermuteDevices.TakeOwnership/1
```
> With dynamic linking, the cuda libraries are now part of the Open3D ABI. This interface info needs to be present in the cmake and package config scripts created when we do `make package`. To test that this is set up correctly, build a test Open3D app using each of the `find_package(Open3D)` and `open3d.pc` methods and ensure that the build files look for the correct CUDA libraries. See https://www.open3d.org/docs/latest/getting_started.html#id6 for a simple test app.
As specified in the linked documentation, I've tried building sample apps by pointing the `Open3D_ROOT` CMake variable at each generated install directory. Building and execution were successful.
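As one way to sanity-check the propagated link interface from the test app's CMakeLists.txt, the exported target's properties can be printed; this is a hypothetical check, not part of the PR:

```cmake
# Honours -DOpen3D_ROOT=<install dir> passed at configure time.
find_package(Open3D REQUIRED)
# Print the link interface exported by the package config scripts; with
# BUILD_WITH_CUDA_STATIC=OFF it should mention the shared CUDA libraries
# (cublas, npp*, ...).
get_target_property(_o3d_iface Open3D::Open3D INTERFACE_LINK_LIBRARIES)
message(STATUS "Open3D link interface: ${_o3d_iface}")
```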