
Add ARM64 / Aarch64 support

Open · QwertyJack opened this issue 3 years ago · 3 comments

Would you consider adding ARM support, say arm64 or aarch64? I think DLTcollab/sse2neon might be helpful.
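For context, sse2neon is a single header that re-implements the SSE intrinsics on top of NEON, so an existing SSE2 code path can usually be compiled for aarch64 with a conditional include along these lines (a minimal sketch, not how simdcomp actually organizes its headers):

/* Sketch of the usual sse2neon drop-in pattern; simdcomp's own
   headers may be organized differently. */
#if defined(__aarch64__)
#include "sse2neon.h"   /* maps _mm_* SSE intrinsics to NEON */
#else
#include <emmintrin.h>  /* native SSE2 intrinsics on x86 */
#endif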

QwertyJack · Jan 14 '22 14:01

Pull request invited.

lemire · Jan 14 '22 14:01

Hi

Recently we added aarch64 support to the iresearch project and to ArangoDB, and we found some bugs when using the combination of simdcomp and sse2neon.

In the function __SIMD_fastunpack1_32 the intrinsic _mm_srli_epi32 is used. Everything works fine on the x86 architecture, but on aarch64 we hit a strange bug where the shift variable was incremented twice. After a deep investigation we found that the sse2neon implementation of this intrinsic is a macro which substitutes its second parameter twice, and there are many other places where the same situation can occur.
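That failure is the classic double-evaluation pitfall of function-like macros: when the intrinsic is a macro whose replacement text uses the shift argument twice, passing shift++ applies the side effect twice per call. A small illustration with a hypothetical macro (not the actual sse2neon source):

#include <stdio.h>

/* Hypothetical macro that, like some sse2neon shift wrappers,
   substitutes its second argument more than once. */
#define SHIFT_RIGHT(x, imm) ((void)(imm), (x) >> (imm))

int main(void) {
    unsigned x = 8u;
    unsigned shift = 0;

    /* With a real function this would shift by 0 and leave shift == 1.
       With the macro, shift++ is expanded twice: shift ends up at 2
       and the value is shifted by 1 instead of 0. */
    unsigned r = SHIFT_RIGHT(x, shift++);

    printf("r=%u shift=%u\n", r, shift); /* prints r=4 shift=2 */
    return 0;
}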

The solution is quite simple: move the increment out of the intrinsic call onto its own line.

static void __SIMD_fastunpack1_32(const  __m128i*   in, uint32_t *    _out) {
    __m128i*   out = (__m128i*)(_out);
    __m128i    InReg1 = _mm_loadu_si128(in);
    __m128i    InReg2 = InReg1;
    __m128i    OutReg1, OutReg2, OutReg3, OutReg4;
    const __m128i mask =  _mm_set1_epi32(1);

    uint32_t i, shift = 0;

    for (i = 0; i < 8; ++i) {
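        /* Keep ++shift on its own line: if _mm_srli_epi32 is a macro
           (as in sse2neon), the shift argument may be substituted twice,
           so it must not carry a side effect such as shift++. */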
        OutReg1 = _mm_and_si128(  _mm_srli_epi32(InReg1, shift) , mask);
        ++shift;
        OutReg2 = _mm_and_si128(  _mm_srli_epi32(InReg2, shift) , mask);
        ++shift;
        OutReg3 = _mm_and_si128(  _mm_srli_epi32(InReg1, shift) , mask);
        ++shift;
        OutReg4 = _mm_and_si128(  _mm_srli_epi32(InReg2, shift) , mask);
        ++shift;
        _mm_storeu_si128(out++, OutReg1);
        _mm_storeu_si128(out++, OutReg2);
        _mm_storeu_si128(out++, OutReg3);
        _mm_storeu_si128(out++, OutReg4);
    }
}

Please be careful.

alexbakharew · Mar 10 '22 11:03

> In the function __SIMD_fastunpack1_32 the intrinsic _mm_srli_epi32 is used. Everything works fine on the x86 architecture, but on aarch64 we hit a strange bug where the shift variable was incremented twice. After a deep investigation we found that the sse2neon implementation of this intrinsic is a macro which substitutes its second parameter twice, and there are many other places where the same situation can occur.

Recent SSE2NEON improves _mm_srai_epi32 to handle complex arguments.

commit 7ef68928
Author:     Developer-Ecosystem-Engineering
AuthorDate: Fri Jul 8 10:52:46 2022 -0700
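In other words, the newer header arranges for the argument to be evaluated only once, so an expression with side effects such as shift++ behaves as it would with a real function call. A rough sketch of that single-evaluation idea (illustrative only, not the sse2neon implementation):

#include <stdint.h>

/* Illustrative single-evaluation wrapper, not the sse2neon source:
   the operands are copied into locals exactly once, so an argument
   such as shift++ has its side effect applied only once, however
   many times the body then uses the value. */
#define SHIFT_RIGHT_SAFE(x, imm)                      \
    __extension__({                                   \
        uint32_t _v = (x);                            \
        unsigned _imm = (imm); /* evaluated once */   \
        (_imm > 31) ? 0u : (_v >> _imm);              \
    })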

jserv · Oct 08 '22 13:10