
How to compile for AMD GPU?

Open Whytehorse opened this issue 9 years ago • 37 comments

What options need to be selected when running ./configure in the tensorflow-opencl directory? Here's where I'm not sure what to enter in order to get TensorFlow to recognize and use the AMD GPU/APU.

~/tensorflow-opencl ~/tensorflow-opencl
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Found possible Python library paths:
  /usr/lib/python3/dist-packages
Please input the desired Python library path to use.  Default is [/usr/lib/python3/dist-packages]
Using python library path: /usr/lib/python3/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] y
OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:

Whytehorse avatar Jan 03 '17 22:01 Whytehorse

Hi @Whytehorse, do you intend to run TensorFlow with both OpenCL and CUDA support at the same time? We have not tested that yet, although I don't see any reason why it wouldn't work.

In order to start using TF with OpenCL only, you need to say no to CUDA. You will also need the ComputeCpp compiler (https://www.codeplay.com/products/computesuite/computecpp).

~/tensorflow-opencl ~/tensorflow-opencl
Please specify the location of python. [Default is /usr/bin/python]: 
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] 
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] 
No Hadoop File System support will be enabled for TensorFlow
Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]
Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] y
OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] 
No CUDA support will be enabled for TensorFlow
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: /usr/bin/clang++
Please specify which C compiler should be used as the host C compiler. [Default is ]: /usr/bin/clang
Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]: <ADD PATH TO COMPUTECPP ROOT HERE>
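For reference, the same answers can usually be pre-seeded through environment variables that ./configure reads, so the build can be scripted. This is a hedged sketch: the variable names below are the ones the configure script of that era checks, but verify them against your checkout, and COMPUTECPP_TOOLKIT_PATH must point at your actual ComputeCpp root.

```shell
# Pre-seed the answers ./configure would otherwise prompt for.
export PYTHON_BIN_PATH=/usr/bin/python
export TF_NEED_GCP=0
export TF_NEED_HDFS=0
export TF_NEED_CUDA=0
export TF_NEED_OPENCL=1
export HOST_CXX_COMPILER=/usr/bin/clang++
export HOST_C_COMPILER=/usr/bin/clang
export COMPUTECPP_TOOLKIT_PATH=/usr/local/computecpp
# ./configure   # uncomment to run with the answers above
echo "TF_NEED_OPENCL=$TF_NEED_OPENCL TF_NEED_CUDA=$TF_NEED_CUDA"
```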

What AMD GPU/APU do you use?

lukeiwanski avatar Jan 13 '17 10:01 lukeiwanski

I've got the A10-7850 (Kaveri), aka Steamroller. I already have ComputeCpp, and I already compiled successfully using ComputeCpp and OpenCL. My graphics driver shows OpenCL 1.2 is supported, but when I run the compiled binary it doesn't use the GPU at all, only the CPU.

$clinfo
  Profiling :					 No
  Platform ID:					 0x7f3b1feffbd8
  Name:						 AMD A10-7850K APU with Radeon(TM) R7 Graphics
  Vendor:					 AuthenticAMD
  Device OpenCL C version:			 OpenCL C 1.2 
  Driver version:				 2117.10 (sse2,avx,fma4)
  Profile:					 FULL_PROFILE
  Version:					 OpenCL 1.2 AMD-APP (2117.10)
  Extensions:					 cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_spir cl_khr_gl_event 

Whytehorse avatar Jan 13 '17 15:01 Whytehorse

Could you provide the output of computecpp_info ?

lukeiwanski avatar Jan 13 '17 15:01 lukeiwanski


ComputeCpp Info (CE 0.1.1)

Toolchain information:

GLIBCXX: 20150426
This version of libstdc++ is supported.

Device Info:

Discovered 1 devices matching:
  platform    : <any>
  device type : <any>

Device 0:

  Device is supported                     : NO - Unsupported vendor
  CL_DEVICE_NAME                          : Spectre
  CL_DEVICE_VENDOR                        : Advanced Micro Devices, Inc.
  CL_DRIVER_VERSION                       : 2117.10 (VM)
  CL_DEVICE_TYPE                          : CL_DEVICE_TYPE_GPU


Whytehorse avatar Jan 13 '17 15:01 Whytehorse

computecpp_info looks good. Could you provide the output of your binary too?

lukeiwanski avatar Jan 13 '17 15:01 lukeiwanski

What binary?

Whytehorse avatar Jan 13 '17 16:01 Whytehorse

The binary you mentioned earlier, the one that doesn't use your GPU:

When I run the compiled binary, it doesn't use the GPU at all, only CPU.

Does it output any information? How do you know it works only on CPU? Did you run your model with sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) ?

lukeiwanski avatar Jan 13 '17 16:01 lukeiwanski

I run this:

import tensorflow as tf
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print sess.run(c)

and I get this:

Device mapping: no known devices.
I tensorflow/core/common_runtime/direct_session.cc:255] Device mapping:

MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] MatMul: (MatMul)/job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] b: (Const)/job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] a: (Const)/job:localhost/replica:0/task:0/cpu:0
[[ 22.  28.]
 [ 49.  64.]]
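As a side note, placement logs like the one above can be checked mechanically: every placement line ends with the device the op was assigned to, so tallying the /cpu:N vs /gpu:N (or /device:SYCL:N) suffixes shows at a glance whether anything landed on the GPU. A minimal sketch, using sample lines taken from the output above:

```python
import re

def count_placements(log_lines):
    """Tally log_device_placement lines by their trailing device suffix."""
    counts = {}
    for line in log_lines:
        # Matches e.g. ".../cpu:0", ".../gpu:0", or ".../device:SYCL:0" at end of line.
        m = re.search(r'/(?:device:)?(cpu|gpu|SYCL):\d+\s*$', line)
        if m:
            dev = m.group(1)
            counts[dev] = counts.get(dev, 0) + 1
    return counts

sample = [
    "MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0",
    "b: (Const): /job:localhost/replica:0/task:0/cpu:0",
    "a: (Const): /job:localhost/replica:0/task:0/cpu:0",
]
print(count_placements(sample))  # -> {'cpu': 3}, i.e. nothing ran on the GPU
```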

Whytehorse avatar Jan 13 '17 22:01 Whytehorse

FYI, here is a working build command for my CPU:

bazel build --local_resources 2048,.5,1.0 --copt=-march=bdver3 -c opt //tensorflow/tools/pip_package:build_pip_package

Whytehorse avatar Jan 15 '17 20:01 Whytehorse

We are still working on the MatMul Op, but you can try it out: https://github.com/lukeiwanski/tensorflow-opencl/commit/d04d4d249e79db526119b8afc58dc6ab75c7b6e0.

lukeiwanski avatar Jan 16 '17 19:01 lukeiwanski

What's a good test to verify that OpenCL is working in TensorFlow and using my GPU?

Whytehorse avatar Jan 17 '17 03:01 Whytehorse

Have a look at: https://github.com/benoitsteiner/tensorflow-opencl/blob/master/tensorflow/python/kernel_tests/basic_gpu_test.py

If the following line passes for you, your GPU is working with TensorFlow:

bazel test -c opt --config=sycl --verbose_failures --test_timeout 3600 //tensorflow/python/kernel_tests:basic_gpu_test

lukeiwanski avatar Jan 17 '17 16:01 lukeiwanski

ERROR: Building with --config=sycl but TensorFlow is not configured to build with SYCL support. Please re-run ./configure and enter 'Y' at the prompt to build with SYCL support.

I didn't see SYCL support in the prompts, only ComputeCpp and OpenCL, and I specified those, so... ?

Whytehorse avatar Jan 22 '17 01:01 Whytehorse

That's a good point; perhaps we should include a better explanation of what SYCL is and how it relates to OpenCL.

Have you tried following these instructions? https://github.com/lukeiwanski/tensorflow-opencl/blob/cd5861c4defe3182122276fdbbb371f60ea5b708/tensorflow/g3doc/get_started/os_setup.md#create-the-pip-package-and-install

Your example from earlier should now work (the MatMul Op should run on the GPU via SYCL / OpenCL) with the head of this repo (once https://github.com/benoitsteiner/tensorflow-opencl/pull/33 gets in, that is :) ).

lukeiwanski avatar Jan 22 '17 19:01 lukeiwanski

I re-ran ./configure and then ran the test and I get that it passed:

//tensorflow/python/kernel_tests:basic_gpu_test                          PASSED in 49.2s

Executed 1 out of 1 test: 1 test passes.

Whytehorse avatar Jan 22 '17 22:01 Whytehorse

OK, so I tried this command:

bazel build --local_resources 2048,.5,1.0 --copt=-march=bdver3 -c opt --config=sycl //tensorflow/tools/pip_package:build_pip_package

And now I get this error:

ERROR: /home/ben/tensorflow-opencl/tensorflow/core/kernels/BUILD:2585:1: C++ compilation of rule '//tensorflow/core/kernels:pooling_ops' failed: computecpp failed: error executing command external/local_config_sycl/crosstool/computecpp -Wall -msse3 -g0 -O2 -DNDEBUG '-march=bdver3' '-std=c++11' -MD -MF ... (remaining 113 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1.
In file included from tensorflow/core/kernels/pooling_ops_3d.cc:26:
./tensorflow/core/kernels/eigen_pooling.h:354:9: error: cannot compile this builtin function yet
        pequal(p, pset1<Packet>(-Eigen::NumTraits<T>::highest()));
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./tensorflow/core/kernels/eigen_pooling.h:337:22: note: expanded from macro 'pequal'
#define pequal(a, b) _mm256_cmp_ps(a, b, _CMP_EQ_UQ)
                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/computecpp/bin/../lib/clang/3.6.0/include/avxintrin.h:421:11: note: expanded from macro '_mm256_cmp_ps'
  (__m256)__builtin_ia32_cmpps256((__v8sf)__a, (__v8sf)__b, (c)); })
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 10018.276s, Critical Path: 8544.09s

Whytehorse avatar Jan 23 '17 01:01 Whytehorse

The above error started when I ran a git pull on tensorflow-opencl. Do I need a newer bazel or python or something for the newer tensorflow? Or is this a coding error? is it ComputeCpp? Any help would be greatly appreciated.

Whytehorse avatar Jan 23 '17 23:01 Whytehorse

We are looking into this. It seems the compiler came across an intrinsic that has not been implemented yet. However, this should not have happened, since the pooling ops are not registered for the SYCL device yet.

For now, could you remove --copt=-march=bdver3?

lukeiwanski avatar Jan 23 '17 23:01 lukeiwanski

It successfully compiled with:

bazel build --local_resources 2048,.5,1.0 -c opt --config=sycl //tensorflow/tools/pip_package:build_pip_package

Thank you. Now when I run the GPU test, I get this error:

Error: [ComputeCpp:RT0106] Device not found
external/bazel_tools/tools/test/test-setup.sh: line 114: 20289 Aborted

Whytehorse avatar Jan 24 '17 02:01 Whytehorse

Could you update to the latest version of ComputeCpp (0.1.2) and show us the output of computecpp_info? https://www.codeplay.com/products/computesuite/computecpp

BTW, where did you get that driver from?

lukeiwanski avatar Jan 24 '17 11:01 lukeiwanski

I got ComputeCpp 0.1.2 and recompiled. The GPU test errors out with the following log:

exec ${PAGER:-/usr/bin/less} "$0" || exit 1
-----------------------------------------------------------------------------
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
terminate called after throwing an instance of 'cl::sycl::exception'
  what():  Error: [ComputeCpp:RT0106] Device not found
external/bazel_tools/tools/test/test-setup.sh: line 114: 13930 Aborted                 (core dumped) "${TEST_PATH}" "$@"

I got that older binary from the instructions. I began trying to compile this a few weeks ago, which is why the binary was older. It looks like the instructions need to be updated too, at https://github.com/lukeiwanski/tensorflow-opencl/blob/cd5861c4defe3182122276fdbbb371f60ea5b708/tensorflow/g3doc/get_started/os_setup.md#create-the-pip-package-and-install
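(As an aside, the SSE4.2/AVX/FMA warnings in that log only mean the build wasn't told to use those instruction sets; assuming your CPU actually supports them, a .bazelrc-style fragment like the hypothetical one below would enable them. Note that the -march=bdver3 failure earlier suggests the older ComputeCpp device compiler may choke on some of the resulting intrinsics, so treat this as optional.)

```
# Hypothetical .bazelrc fragment: opt in to the instruction sets the
# warnings mention. Only add flags your CPU actually supports.
build --copt=-msse4.2
build --copt=-mavx
build --copt=-mfma
```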

Whytehorse avatar Jan 25 '17 03:01 Whytehorse

@Whytehorse there is a pending PR https://github.com/benoitsteiner/tensorflow-opencl/pull/32 for instruction update.

Could you paste the output of computecpp_info?

lukeiwanski avatar Jan 25 '17 15:01 lukeiwanski


ComputeCpp Info (CE 0.1.2)

Toolchain information:

GLIBCXX: 20150426
This version of libstdc++ is supported.

Device Info:

Discovered 1 devices matching:
  platform    : <any>
  device type : <any>

Device 0:

  Device is supported                     : NO - Device does not support SPIR
  CL_DEVICE_NAME                          : AMD KAVERI (DRM 2.43.0 / 4.4.0-59-generic, LLVM 4.0.0)
  CL_DEVICE_VENDOR                        : AMD
  CL_DRIVER_VERSION                       : 17.1.0-devel
  CL_DEVICE_TYPE                          : CL_DEVICE_TYPE_GPU
Whytehorse avatar Jan 26 '17 21:01 Whytehorse

I tried getting the latest bleeding-edge video drivers, and it looks like they only support OpenCL 1.1.

  1. Platform
     Profile:                  FULL_PROFILE
     Version:                  OpenCL 1.1 Mesa 17.1.0-devel - padoka PPA
     Name:                     Clover
     Vendor:                   Mesa
     Extensions:               cl_khr_icd
  2. Device: AMD KAVERI (DRM 2.43.0 / 4.4.0-59-generic, LLVM 5.0.0)
     1.1 Hardware version:     OpenCL 1.1 Mesa 17.1.0-devel - padoka PPA
     1.2 Software version:     17.1.0-devel - padoka PPA
     1.3 OpenCL C version:     OpenCL C 1.1
     1.4 Parallel compute units: 8

Is there any way to get OpenCL 1.2? I can only use the amdgpu driver, because I'm on Ubuntu 16.04 and AMD has no driver for it.
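For what it's worth, computecpp_info's reported reason above is missing SPIR support, which the Mesa/Clover stack does not advertise (note its extensions list is only cl_khr_icd). You can check for it directly in any clinfo output with plain string matching; a minimal sketch, using sample lines modelled on the outputs in this thread:

```python
def supports_spir(clinfo_output):
    """Return True if any Extensions line in clinfo output advertises cl_khr_spir."""
    for line in clinfo_output.splitlines():
        if "extensions" in line.lower() and "cl_khr_spir" in line:
            return True
    return False

# Sample lines modelled on the clinfo outputs quoted in this thread.
mesa = "Extensions: cl_khr_icd"
amdapp = "Extensions: cl_khr_fp64 cl_khr_spir cl_khr_gl_event"
print(supports_spir(mesa), supports_spir(amdapp))  # -> False True
```

In practice you could pipe real output in with `supports_spir(subprocess.check_output(["clinfo"], text=True))`.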

Whytehorse avatar Jan 27 '17 00:01 Whytehorse

I am experiencing a similar issue with the RX 460. When I try sess = tf.Session(), I get the following:

>>> sess = tf.Session()
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
terminate called after throwing an instance of 'cl::sycl::exception'
  what():  Error: [ComputeCpp:RT0106] Device not found
[1]    17042 abort (core dumped)  python3

My computecpp_info

> ./computecpp_info
********************************************************************************

ComputeCpp Info (CE 0.1.2)

********************************************************************************

Toolchain information:

GLIBCXX: 20150426
This version of libstdc++ is supported.

********************************************************************************


Device Info:

Discovered 1 devices matching:
  platform    : <any>
  device type : <any>

--------------------------------------------------------------------------------
Device 0:

  Device is supported                     : NO - Device does not support SPIR
  CL_DEVICE_NAME                          : gfx803
  CL_DEVICE_VENDOR                        : Advanced Micro Devices, Inc.
  CL_DRIVER_VERSION                       : 1.1 (HSA,LC)
  CL_DEVICE_TYPE                          : CL_DEVICE_TYPE_GPU 
********************************************************************************

********************************************************************************

********************************************************************************

My clinfo

> clinfo
Number of platforms                               1
  Platform Name                                   AMD Accelerated Parallel Processing
  Platform Vendor                                 Advanced Micro Devices, Inc.
  Platform Version                                OpenCL 2.0 AMD-APP (2300.5)
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd cl_amd_event_callback cl_amd_offline_devices 
  Platform Extensions function suffix             AMD

  Platform Name                                   AMD Accelerated Parallel Processing
Number of devices                                 1
  Device Name                                     gfx803
  Device Vendor                                   Advanced Micro Devices, Inc.
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 1.2 
  Driver Version                                  1.1 (HSA,LC)
  Device OpenCL C Version                         OpenCL C 2.0 
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Max compute units                               16
  Max clock frequency                             1200MHz
  Device Partition                                (core)
    Max number of sub-devices                     16
    Supported partition types                     none specified
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x1024
  Max work group size                             256
  Preferred work group size multiple              64
  Preferred / native vector sizes                 
    char                                                 4 / 4       
    short                                                2 / 2       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 1 / 1        (n/a)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Address bits                                    64, Little-Endian
  Global memory size                              2147483648 (2GiB)
  Error Correction support                        No
  Max memory allocation                           1610612736 (1.5GiB)
  Unified memory for Host and Device              No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        16384
  Global Memory cache line                        64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             26607
    Max size for 1D images from buffer            65536 pixels
    Max 1D or 2D image array size                 2048 images
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             2048x2048x2048 pixels
    Max number of read image args                 128
    Max number of write image args                8
  Local memory type                               Local
  Local memory size                               65536 (64KiB)
  Max constant buffer size                        1610612736 (1.5GiB)
  Max number of constant args                     8
  Max size of kernel argument                     1024
  Queue properties                                
    Out-of-order execution                        No
    Profiling                                     Yes
  Prefer user sync for interop                    Yes
  Profiling timer resolution                      1ns
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Device Extensions                               cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_amd_media_ops cl_amd_media_ops2 cl_khr_subgroups cl_khr_depth_images 

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  AMD Accelerated Parallel Processing
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [AMD]
  clCreateContext(NULL, ...) [default]            Success [AMD]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx803
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx803

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.2.8
  ICD loader Profile                              OpenCL 1.2
	NOTE:	your OpenCL library declares to support OpenCL 1.2,
		but it seems to support up to OpenCL 2.1 too.

bryanlimy avatar Jan 27 '17 19:01 bryanlimy

I filed a bug report with AMD and Ubuntu, and contacted the bleeding-edge drivers PPA maintainer. If anyone else is affected by this, add your comments to the bug so it gets triaged and solved faster: https://bugs.launchpad.net/ubuntu/+source/kde-baseapps/+bug/1659706

Whytehorse avatar Jan 28 '17 01:01 Whytehorse

@Whytehorse according to the AMD Linux driver website, base feature support lists Supported APIs: OpenCL 1.2

http://support.amd.com/en-us/kb-articles/Pages/AMD-Radeon-GPU-PRO-Linux-Beta-Driver%E2%80%93Release-Notes.aspx

amorenew avatar Jan 31 '17 08:01 amorenew

@amorenew that driver doesn't support my APU and the one that does isn't available on Ubuntu 16.04.

Whytehorse avatar Jan 31 '17 10:01 Whytehorse

I'm getting the same error: ./tensorflow/core/kernels/eigen_pooling.h:354:9: error: cannot compile this builtin function yet and I'm not building with extra compile flags, just: bazel build --config opt --config=sycl //tensorflow/tools/pip_package:build_pip_package.

Is there a workaround for this? I'm on Ubuntu 16.04 with amdgpu-pro drivers. Building without OpenCL works.

maxgillett avatar Feb 11 '17 06:02 maxgillett

@maxgillett Try -c opt instead of --config opt

JasonLinMS avatar Apr 05 '17 01:04 JasonLinMS