xmr-stak-cpu
MEMORY ALLOC FAILED: mlock failed despite everything looking okay
Hi,
Here's my start log:
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 0.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 1.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 2.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 3.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 4.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 5.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 6.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 7.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 8.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Starting single thread, affinity: 9.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Connecting to pool europe.cryptonight-hub.miningpoolhub.com:20580 ...
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : hwloc: memory pinned
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : MEMORY ALLOC FAILED: mlock failed
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : hwloc: memory pinned
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : MEMORY ALLOC FAILED: mlock failed
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : hwloc: memory pinned
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : MEMORY ALLOC FAILED: mlock failed
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : hwloc: memory pinned
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : MEMORY ALLOC FAILED: mlock failed
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : MEMORY ALLOC FAILED: mlock failed
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Connected. Logging in...
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : Difficulty changed. Now: 500054.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:16] : New block detected.
Oct 18 00:04:26 thrall xmr-stak-cpu[9463]: [2017-10-18 00:01:29] : New block detected.
So I still see issues regarding memory allocation. HOWEVER: I do have proper limits:
cat /proc/`ps aux | grep xmr-stak-cpu | grep -v grep | awk '{ print $2 }'`/limits | grep 'Max locked memory'
Returning:
Max locked memory 262144 262144 bytes
Btw, I think you should add this simple shell command to README.md to help users figure out whether everything is set up correctly.
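For the README, a slightly shorter variant could look like this (just a sketch, assuming a single running instance named exactly xmr-stak-cpu and that pgrep is available):
grep 'Max locked memory' /proc/$(pgrep -x xmr-stak-cpu)/limits
It should print the same soft/hard limit columns as above.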
So, any hints?
Btw, I'm nearly done packaging the whole app as a proper Debian/Ubuntu package. All you have to do is edit a simple config file to set up the pool address; everything else is configured automatically (using a homemade Python script to generate config.txt). Would you be interested in it?
Regards, Adam.
Hello. Same error for me. My config is at https://pastebin.com/bjqXa0pG
The command returns: Max locked memory 268435456 268435456 bytes
Hey,
Here are more logs showing that the huge pages get used when starting xmr-stak-cpu (from /proc/meminfo), but it still displays some memory-related errors.
I have recompiled the xmr-stak-cpu master branch using:
cmake . -DHWLOC_ENABLE=OFF
as suggested in some other issue. The errors are gone now and the hashrate is the same as before. So it looks fine, but I wish somebody could explain the difference between -DHWLOC_ENABLE=ON and -DHWLOC_ENABLE=OFF. What is hwloc supposed to bring? More hashrate?
The system can be analyzed much better, and dual-, quad- and octa-socket systems are supported. The performance can differ depending on the system and OS.
But what's required to enable this feature on a Linux system?
So if I understand it right, hwloc does not necessarily make a difference; it depends on the hardware and OS you use.
@eLvErDe my advice: check your hashrate as it is right now, then recompile xmr-stak-cpu with the option -DHWLOC_ENABLE=OFF and check your hashrate again. If there is no difference, you can use the newly compiled version. That's the way I did it. I also tried the dev branch, but the result was the same with and without this option in my case.
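For reference, a rough sketch of how to compare the two builds (the separate out-of-source build directories are just my own choice, not something from the README):
# Build with hwloc (the default)
mkdir build-hwloc && cd build-hwloc
cmake .. -DHWLOC_ENABLE=ON && make
cd ..
# Build without hwloc
mkdir build-nohwloc && cd build-nohwloc
cmake .. -DHWLOC_ENABLE=OFF && make
cd ..
Then run each binary for a few minutes with the same config.txt and compare the reported hashrate.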
Looking at the libhwloc dependency, I think it has absolutely no relation to my memory message. Moreover, it seems to be used by the tool to generate an optimized config.txt.
I have the same issue, despite having set hugepages to 128 on Ubuntu 16.04, which is supposed to fix this error. What else could be causing it?
Some limits (max memory size, virtual memory) can be too small. I needed to increase them to 64 GB. If this does not help, can you please post the output of 'ulimit -a'?
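As a sketch of what that looks like for the current shell (values in KB because ulimit -m / -v take kilobytes; 64 GB is the value mentioned above):
ulimit -m 67108864    # max memory size
ulimit -v 67108864    # virtual memory
ulimit -a             # verify before starting the miner
These only affect the current shell; for a permanent change they belong in limits.conf.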
Same problem for me.
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 66068
max locked memory       (kbytes, -l) 262144
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 66068
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Have same problem on 13 gtx 1060 gpu rig
You must enable large page support in your OS.
limits.conf + huge pages in sysctl.conf solved this error, thank you.
I get successful mlocks on Debian Stretch with:
$
$ cat /etc/security/limits.d/xmrminer-limits.conf
# xmr-stak
* soft memlock 262144
* hard memlock 262144
$ cat /etc/sysctl.d/xmrminer-hugepages.conf
vm.nr_hugepages=128
$
even as an unprivileged user. Try those and reboot if you're not getting this to work.
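If you would rather not reboot, the sysctl part can usually be applied on the fly (a sketch; the memlock limit from limits.d still needs a fresh login session to take effect):
$ sudo sysctl -p /etc/sysctl.d/xmrminer-hugepages.conf
$ grep HugePages_Total /proc/meminfo
$ ulimit -l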
Two mysteries I still hit that might(?) be related to others' troubles:
- under a systemd unit it's not working, even with:
LimitMEMLOCK=262144
set in the [Service].
- my hashrate is exactly the same with or without getting the error (Ryzen 1700).
I had the same problem on my OVH Dedi, running Debian 9, this fixed it for me
https://steemit.com/monero/@scotty86/enable-huge-pages-large-pages-on-debian-9
Update: I figured it out. In systemd the limit is set in bytes, while in limits.conf it is set in KB. Now I've set it in systemd as LimitMEMLOCK=256M. See my gist.
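For anyone else hitting this, the relevant part of the unit looks roughly like this (the file and service names are only examples; see the gist for the full service):
$ cat /etc/systemd/system/xmr-stak.service
[Service]
# systemd takes bytes (or a suffixed value), not KB like limits.conf
LimitMEMLOCK=256M
$ sudo systemctl daemon-reload && sudo systemctl restart xmr-stak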
Btw, it is very interesting that 2 huge pages are reported as used when the process limit was only 256k. Why would Linux allow 4 MB of pages to be reserved then?
@bill-mcgonigle did you figure it out? See my service file:
https://gist.github.com/akostadinov/55e907b1e20d4b7700fa7b88791a82ae
I verified the limit is correct for the xmr process. If I run it as root, everything is fine. I wonder if systemd is setting some additional restrictions somehow, or is confusing xmr somehow, because I see in /proc/meminfo:
HugePages_Total: 128
HugePages_Free: 126
When I kill xmr, all huge pages become free again. So it seems it does use huge pages but still shows allocation errors. What could be the cause of these other errors?
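For reference, this is how I checked it (just a sketch):
# while xmr-stak is running:
grep HugePages /proc/meminfo
# after killing it, HugePages_Free goes back to matching HugePages_Total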
You can also avoid these errors by using sudo when you run XMR-STAK
Thanks a lot @akostadinov
That's exactly the same issue in my Debian package :)
@greydaymine using sudo actually worked. Originally had this problem: https://github.com/fireice-uk/xmr-stak-cpu/issues/383#issuecomment-357317664
@greydaymine sudo worked for me as well. thank you.
@greydaymine sudo worked for me as well. Thanks!
Sudo removed the errors, but the hash rate is still terrible: running on a twin-CPU server with a total of 8 cores but only getting 100 H/s, not much better than my new iPhone.
Sudo is a BAD solution.
I agree with @eLvErDe on using sudo to fix this. If the software has a bug or some kind of security loophole and you are running this as root: Consider yourself PWN'd!
I set this up in a script that makes the ulimit change while I am still root and drops privileges before running xmr-stak.
My script kinda looks like this:
#!/bin/bash
# Raise the memlock limit while we are still root.
test "$UID" -eq 0 && ulimit -l 10240
# Change these to the user you want to run this program as.
# Recommend against your normal user. Some unprivileged user/system account.
RUNAS_USER=monero
RUNAS_GROUP=apps
# Check the current $UID and drop privileges as needed.
test "$UID" -eq "$(id -u "$RUNAS_USER")" || {
    echo "Dropping privileges..."
    exec sudo -H -u "$RUNAS_USER" -g "$RUNAS_GROUP" "$0" "$@"
}
# Navigate back to the user's home directory.
cd
# Let this command overwrite the current process image with `exec`. Optional.
exec xmr-stak -c config.txt
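Usage is just this (start-xmr.sh is only an example name; put it next to your config.txt):
chmod +x start-xmr.sh
# run as root so the ulimit line takes effect before privileges are dropped
sudo ./start-xmr.sh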
Note: I had to set the mlock limit higher in order to get past this error.
Hopefully this helps someone else.
Unrelated: After increasing the ulimit on memory locking, my next issue is getting past "MEMORY ALLOC FAILED: mmap with HUGETLB failed, attempting without it". However, I'll work through this next.
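In case it saves someone a step: one common cause of that message is simply that not enough huge pages are actually reserved, so a quick thing to check first (128 is just the value used earlier in this thread):
sudo sysctl -w vm.nr_hugepages=128
grep HugePages_Free /proc/meminfo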