K6000 + 2x Tesla K40c not working (Win7 x64)

When I leave it up to autodetection, cudaminer only finds the Tesla K40cs, leaving out the K6000. When I force it with -C 2,2,2, it crashes on startup.

Here's the output of cudaminer (2013-10-10):

[2013-10-19 13:27:01] Binding thread 0 to cpu 0
[2013-10-19 13:27:01] 3 miner threads started, using 'scrypt' algorithm.
[2013-10-19 13:27:01] Binding thread 1 to cpu 1
[2013-10-19 13:27:01] Binding thread 2 to cpu 2
[2013-10-19 13:27:01] Failed to get Stratum session id
[2013-10-19 13:27:01] Stratum difficulty set to 512
[2013-10-19 13:27:01] DEBUG: job_id='7e92' extranonce2=00000000 ntime=5262cefb
[2013-10-19 13:27:01] Stratum detected new block
[2013-10-19 13:27:02] GPU #2: x≥á♥▼─(v# with compute capability 19847128.19847128
[2013-10-19 13:27:02] GPU #2: interactive: 1, tex-cache: 2D, single-alloc: 1
[2013-10-19 13:27:02] GPU #0: Tesla K40c with compute capability 3.5
[2013-10-19 13:27:02] GPU #0: interactive: 0, tex-cache: 2D, single-alloc: 1
[2013-10-19 13:27:02] GPU #1: Tesla K40c with compute capability 3.5
[2013-10-19 13:27:02] GPU #1: interactive: 0, tex-cache: 2D, single-alloc: 1

Here's the output of nvidia-smi:

+------------------------------------------------------+
| NVIDIA-SMI 331.40     Driver Version: 331.40         |
|------------------------------+---------------------+---------------+
| GPU  Name           TCC/WDDM | Bus-Id       Disp.A | Vol Uncor.ECC |
| Fan  Temp  Perf Pwr:Usage/Cap|        Memory-Usage | GPU ComputeM. |
|==============================+=====================+===============|
|   0  Quadro K6000      WDDM  | 0000:01:00.0    Off |           Off |
| 28%   45C    P8   19W / 225W |   12232MB / 12287MB |  0%   Default |
+------------------------------+---------------------+---------------+
|   1  Tesla K40c         TCC  | 0000:02:00.0    Off |           Off |
| 33%   45C    P8   17W / 235W |      21MB / 12287MB |  0%   Default |
+------------------------------+---------------------+---------------+
|   2  Tesla K40c         TCC  | 0000:03:00.0    Off |           Off |
| 30%   37C    P8   17W / 235W |      21MB / 12287MB |  0%   Default |
+------------------------------+---------------------+---------------+

Any ideas what's going on here? It's odd that cudaminer finds the Teslas as GPU #0 and #1, while nvidia-smi reports them as GPU #1 and #2. Is it because the K6000 is running WDDM? I have another rig with a K6000 running Linux, and it's working just fine. Maybe the K6000 on Windows is a bit iffy? Is it perhaps a 4GB memory-barrier issue under WDDM on Windows, even though the card has 12GB? The K6000 is the viz card, so I cannot set it to TCC mode to test that theory, unfortunately.
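
Not from the thread, but to make the ordering question concrete: the CUDA runtime enumerates devices in its own order, which is not guaranteed to match nvidia-smi's PCI-bus ordering, and cudaMemGetInfo reports how much memory the driver actually exposes on each device, which would speak to the WDDM memory-limit theory. A minimal standalone sketch against the CUDA runtime API (an illustration only, not part of cudaminer):

```c
/* Sketch, not part of cudaminer: list the devices the CUDA runtime sees,
 * in the runtime's own enumeration order, and query per-device memory.
 * Build against the CUDA runtime, e.g. "nvcc enumerate.c -o enumerate". */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed\n");
        return 1;
    }

    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess)
            continue;

        size_t free_b = 0, total_b = 0;
        cudaSetDevice(i);                 /* creates a context on device i */
        cudaMemGetInfo(&free_b, &total_b);

        printf("CUDA #%d: %s, CC %d.%d, PCI %02x:%02x, TCC driver: %d, "
               "memory free/total: %lu/%lu MB\n",
               i, prop.name, prop.major, prop.minor,
               prop.pciBusID, prop.pciDeviceID, prop.tccDriver,
               (unsigned long)(free_b >> 20), (unsigned long)(total_b >> 20));
    }
    return 0;
}
```

If it helps, setting the CUDA_VISIBLE_DEVICES environment variable before launching restricts which devices the runtime enumerates, which would be one way to test the K6000 (or the two Teslas) on their own.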

Thanks for any help!

mstormo avatar Oct 19 '13 18:10 mstormo

cudaminer.exe is currently a 32-bit executable.

Someone else posted a 64-bit build in the bitcointalk cudaminer thread. Maybe try that one.
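
As an aside (not from the thread): on Win7, Task Manager marks 32-bit processes with a "*32" suffix, which is a quick way to confirm which cudaminer.exe is actually running. When building from source, checking the pointer width does the same; a minimal sketch:

```c
/* Hypothetical check, not part of cudaminer: when building from source,
 * print the pointer width to confirm a 32-bit vs 64-bit build. */
#include <stdio.h>

int main(void)
{
    printf("%u-bit build\n", (unsigned)(sizeof(void *) * 8));  /* 32 or 64 */
    return 0;
}
```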

cbuchner1 avatar Oct 21 '13 10:10 cbuchner1

The x64 version didn't change anything. Still the same output, with GPU #2 name being all garbled, then crashing.

mstormo avatar Oct 21 '13 13:10 mstormo