ccminer
Ubuntu intensity setting
Hi, I have some issues with the hashrate on a rig. In Windows 10 I get 10.3-10.5 MH/s with your miner and intensity 17. In Ubuntu it stays below 10 MH/s. The default intensity is 15.250; if I manually change it to 16 or 17 the hashrate drops significantly, and anything over 17 errors out with "out of memory". I'm wondering whether in Ubuntu I should set the intensity differently, and not from the command line with the -i option?
P.S. With default settings in Windows I get the same as in Linux, but -i 17 in Windows pushes it towards 10.3+.
Interesting thing I found: I have another rig with just 2 GPUs, and if I set -i 17 I get around 3-4% better hashrate on those 2 video cards, but on the main rig with 8 GPUs using -i actually lowers my hashrate.
It gets more interesting:
- with `-d 0,1,2,3 -i 17` I get 5150 kH/s average (2x 1070, 2x 1080 Ti)
- with `-d 4,5,6,7 -i 17` I get 5270 kH/s average (2x 1070, 2x 1080 Ti)
So in theory the total hashrate should be like in Windows, 10.3 MH/s+, but for some reason it drops significantly when all GPUs are running. Is this a normal drop, or can something be done to avoid it? Any chance to change the default intensity in the algorithm, as maybe that will give some results?
I think the CPU could be the reason why you get a slower hashrate with 8 cards. It works best when there is one CPU core for each card. You can set a different intensity for each card, for example: -i 15,15,16,17,15 and so on.
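For an 8-GPU rig the per-card form could look like the sketch below. The pool URL and wallet are placeholders, and the intensity values are only a starting point to tune per card; the command is only echoed here, not executed:

```shell
# Placeholder pool/wallet; one intensity value per GPU, in device order.
# Lower the value for any card that throws "out of memory".
CMD="./ccminer -a neoscrypt -d 0,1,2,3,4,5,6,7 -i 15,15,16,17,15,15,16,17 \
 -o stratum+tcp://pool.example.com:4233 -u WALLET -p x"
echo "$CMD"
```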
I did -i 17,17,17,17,17,17,17 and, strangely, it improves the hashrate slightly to around 10.1 MH/s. I'm just wondering if there is an easy way to change the default intensity from 15.250 to something else to see if that changes anything. Where does ccminer get its default intensity, and how can I change the default value? I am not very good with scripts etc., but if it is not too difficult it would be nice to try to modify it :)
Do you think RAM could be related? Windows has swap and Ubuntu does not by default, so maybe the issue is there...?
What algo is this? Neoscrypt? In this case the default intensities are here in the code: https://github.com/KlausT/ccminer/blob/windows/neoscrypt/neoscrypt.cu#L63
The default intensities are always a little bit low to make sure it runs on every system.
Yes, Neoscrypt. I've already tried changing those numbers, but every time I run ccminer I still get the same message about the default intensity 15.250. In fact, to be very honest, I have no idea how you get 14 here: (256 * 64 * 1); // -i 14 :)
I also tried deleting every option and just changing the default-settings number, but I still get 15.250. Do I have to run make to get the result?
The number you see there is 2 to the power of 14. When you multiply it by 2 you get 1 more intensity. As you can see, each card type can have its own default. Yes, after the code change you have to run make.
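To sanity-check that arithmetic (nothing ccminer-specific, just the powers of two):

```shell
# 256 * 64 * 1 = 16384 = 2^14, which is why that line corresponds to -i 14.
echo $((256 * 64 * 1))   # 16384
echo $((1 << 14))        # 16384
# Doubling the throughput raises the intensity by one: 32768 = 2^15 -> -i 15.
echo $((2 * 256 * 64))   # 32768
```

After editing the defaults in neoscrypt.cu, rebuild with make before running again so the new values are compiled in.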
@KlausT It's missing the 1070 Ti
? Compute capability 6.1 (which the 1070 Ti uses) is supported by the stock Makefile.
On 19 Jan 2018 at 17:41, "kaos-1" [email protected] wrote:

> @KlausT It's missing the 1070 Ti
I don't have a 1070 Ti, only 1080 Ti and 1070 cards. I could not get a good result with -i 17, but I found that with -i 17.9 and --cpu-priority=1 I am able to achieve around 10.4+, so I think it is a very good free boost :) I haven't run it for long, so I'm not too sure how stable it will be. Thank you for your help and your work keeping this miner updated ;) Not sure if it is just for me, but with ccminer 2.2.4 I only get 9.6 MH/s and with yours 10.4 MH/s, so it is almost like one extra GPU :)
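For reference, the combination described above would be launched roughly like this; the pool URL and wallet are placeholders, and the command is only echoed here, not executed:

```shell
# Fractional intensity plus lowered miner CPU priority, as described above.
CMD="./ccminer -a neoscrypt -i 17.9 --cpu-priority=1 \
 -o stratum+tcp://pool.example.com:4233 -u WALLET -p x"
echo "$CMD"
```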
Good evening, I have a server with a 650 Ti and another with a 660 Ti running 24h, and I'm thinking of using them to mine some XMR. Since I need them always on, it could be a way to recover some spare coins. The problem is I don't have the intensity option on my build (built with make from the latest one found here), so the servers are using 100% of the GPU. Is there any flag to pass to the compiler to get it working?