llvmlite
undefined symbol: LLVMInitializeInstCombine on arm64 (ARMv8)
After installing llvmlite v0.26.0:
>>> import llvmlite
>>> llvmlite.__version__
'0.26.0'
>>> import llvmlite.binding
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sigma/llvmtest/local/lib/python2.7/site-packages/llvmlite/binding/__init__.py", line 6, in <module>
from .dylib import *
File "/home/sigma/llvmtest/local/lib/python2.7/site-packages/llvmlite/binding/dylib.py", line 4, in <module>
from . import ffi
File "/home/sigma/llvmtest/local/lib/python2.7/site-packages/llvmlite/binding/ffi.py", line 128, in <module>
raise e
OSError: /home/sigma/llvmtest/local/lib/python2.7/site-packages/llvmlite/binding/libllvmlite.so: undefined symbol: LLVMInitializeInstCombine
I realize the root cause is the missing symbol in the llvm-6.0 Ubuntu apt packages, but a workaround was apparently created for macOS: #346. The bug is also not present using the same package versions on x86. So maybe you could apply the same workaround for arm64?
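For what it's worth, the missing export is easy to confirm against the distro library (using the library path from the ldd output further down):
$ nm -D --defined-only /usr/lib/aarch64-linux-gnu/libLLVM-6.0.so.1 | grep LLVMInitializeInstCombine
$ # no output: the distro build does not export the symbol, so loading libllvmlite.so fails with the error above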
I see that on x86, libllvmlite.so has no link-time dependency on the system's LLVM library at all, while on arm64 it does:
arm64:
$ ldd llvmtest/lib/python2.7/site-packages/llvmlite/binding/libllvmlite.so
linux-vdso.so.1 => (0x0000007fab3a4000)
libLLVM-6.0.so.1 => /usr/lib/aarch64-linux-gnu/libLLVM-6.0.so.1 (0x0000007fa801c000)
...
x86:
$ ldd /usr/local/lib/python2.7/dist-packages/llvmlite/binding/libllvmlite.so
linux-vdso.so.1 => (0x00007ffd50da6000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fde00fad000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fde00da9000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fde00b8c000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fde00883000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fde0066d000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fde002a3000)
/lib64/ld-linux-x86-64.so.2 (0x00007fde03db4000)
Looks like an x86 binary wheel was posted to PyPI but not an aarch64 one. It seems that building the binary wheel requires building a local copy of libLLVM with a series of patches applied; without the patches you get the error in the title of this issue.
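For anyone else hitting this, the wheel build against a locally patched LLVM looks roughly like this (a sketch only; the install prefix is a placeholder):
$ # assumes LLVM 6.0 was rebuilt locally with llvmlite's patches and installed under /opt/llvm-6.0-patched
$ export LLVM_CONFIG=/opt/llvm-6.0-patched/bin/llvm-config
$ git clone https://github.com/numba/llvmlite && cd llvmlite && git checkout v0.26.0
$ python setup.py bdist_wheel   # leaves an aarch64 wheel in dist/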
Could you add aarch64 to the list of architectures for which you post pre-built wheels to PyPI?
I know the conda package manager can provide another path to a working llvmlite package, but as far as I can tell conda does not officially support arm64.
I built this wheel. Perhaps you could post it to PyPI?
It is unclear what the convention is for ARM wheels on PyPI, which is why we haven't done that yet. For x86, the manylinux1 standard defines the linkage (minimum glibc, etc.) that a compliant binary wheel must have. (There is no validation, though, so some projects publish wheels that violate the manylinux1 requirements.)
Also, just as a general rule, we don't want to post binaries built by others, but we are happy to build wheels on our ARM hardware once we understand what the community expects.
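For what it's worth, the auditwheel tool can at least check a wheel locally: it lists the external shared libraries a wheel links against and reports which manylinux policy, if any, it satisfies. Whether it handles aarch64 wheels at all is part of the open question here:
$ pip install auditwheel
$ auditwheel show dist/llvmlite-0.26.0-*.whl   # reports external shared-library dependencies and the manylinux tag, if any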
From a quick review of the manylinux1 standard, it looks like the only shared library in question for the llvmlite build I created (using your instructions) is libz. So I guess if the makefiles were modified to statically link libz, then llvmlite would be just as compliant with manylinux1 on ARM as it is on x86. Is that the issue? We might have to extrapolate how manylinux1 applies to ARM, but the binary packaging and dynamic linking follow the same pattern as on x86, so this seems straightforward.
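Something like the following on the libllvmlite.so link line would do it (hypothetical; <objects> and <LLVM libs> stand in for what the existing Makefile already passes):
$ # either name the static archive explicitly instead of -lz ...
$ c++ -shared -o libllvmlite.so <objects> <LLVM libs> /usr/lib/aarch64-linux-gnu/libz.a
$ # ... or keep -lz but force the linker to resolve it statically
$ c++ -shared -o libllvmlite.so <objects> <LLVM libs> -Wl,-Bstatic -lz -Wl,-Bdynamic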
Usually the biggest compatibility issue is glibc. manylinux1 specifies the version supplied by RHEL / CentOS 5 as the baseline (which should then work on later versions) and also requires everything to be compiled with the pre-GCC-5 C++ ABI. I'm not sure that is a good idea on ARM (where the Linux ecosystem is much newer), since there are likely ARM bugs in older versions of libraries that we would rather not standardize on.
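For the glibc side, one way to see what a given libllvmlite.so build actually requires is to dump the versioned symbol references it carries:
$ objdump -T path/to/libllvmlite.so | grep -o 'GLIBC_[0-9.]*' | sort -Vu
$ # the highest version listed is the minimum glibc the binary needs at runtime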
As a side note, I'm not super familiar with the Linux distributions for ARM these days. Are all the popular choices Debian-derived? (Raspbian and Ubuntu are the only two distributions I've used on ARM, but that may not be typical.) If we build on Ubuntu 16.04, is that likely to cover most AArch64 users out there?
There is CentOS for 64-bit ARM (CentOS 7), but Raspbian is the default for Pis, and they seem to be recommending Ubuntu 18 for the 64-bit Pi boards. Nvidia is shipping Jetsons with Ubuntu 16.04. I'll bet it's safe to focus on the Debian-derived distros.
Regarding the C++ ABI, I agree that 64-bit ARM is new enough that you should use the v6 C++ library.
Has there been any resolution to this? I would rather not build LLVM myself.
I tried the combination llvm-7.0 with llvmlite-0.27.0 and numba-0.42.0 and ran into https://github.com/numba/numba/issues/3666, so I tried to downgrade to llvm-6.0 with llvmlite-0.26.0 and numba-0.41.0 and ran into this one here.
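For concreteness, the downgrade is just pinning both packages (on aarch64 this builds llvmlite from source against the distro llvm-6.0, which is where the error above appears):
$ pip install llvmlite==0.26.0 numba==0.41.0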
Hardware is ARMv8, thus no conda. I also think it would be good enough to focus on Ubuntu 16.04 and upwards. It seems the next Jetson release will even come with 18.04; Xavier already uses it (https://developer.nvidia.com/embedded/jetpack), and they stated in the forum that they intend to also release packages for TX2s.