
Vectorization

Open slice4e opened this issue 3 years ago • 13 comments

Since we are comparing the value to a "window" of previous values during compression, I believe we may benefit from vectorizing the code - compare the value to multiple values concurrently using vector instructions.

It is possible that the compiler can auto-vectorize the window comparison loop (the best option) if the correct flags are used. Alternatively, SIMD intrinsics can be used to vectorize it explicitly.
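Roughly what I have in mind, as a sketch only (AVX2 here; WINDOW_SIZE and find_exact_match are made-up names for illustration, not the actual TSXor code):

```cpp
#include <cstdint>
#include <immintrin.h>  // AVX2 intrinsics

// Hypothetical window size; TSXor's actual constant may differ.
constexpr int WINDOW_SIZE = 128;

// Returns the index of the first window entry equal to `value`, or -1.
// Assumes `window` holds the raw 64-bit patterns of the buffered doubles.
int find_exact_match(const uint64_t* window, uint64_t value) {
    const __m256i needle = _mm256_set1_epi64x(static_cast<long long>(value));
    for (int i = 0; i + 4 <= WINDOW_SIZE; i += 4) {
        // Compare 4 consecutive 64-bit entries against the needle at once.
        __m256i chunk = _mm256_loadu_si256(
            reinterpret_cast<const __m256i*>(window + i));
        __m256i eq = _mm256_cmpeq_epi64(chunk, needle);
        int mask = _mm256_movemask_pd(_mm256_castsi256_pd(eq));
        if (mask != 0) {
            return i + __builtin_ctz(mask);  // lane of the first match
        }
    }
    return -1;
}
```

This needs -mavx2 (or an appropriate -march) to compile; an AVX-512 version would process 8 entries per iteration instead of 4.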

slice4e avatar Jun 24 '22 19:06 slice4e

Sure, thank you for the suggestion! Are you willing to submit a PR with the vectorized code?

jermp avatar Jun 25 '22 06:06 jermp

Hi - I will look into it...

slice4e avatar Jun 28 '22 16:06 slice4e

Hey @slice4e 👋 thanks for creating this issue.

As you can see from the Makefile, we enabled the -O3 flag, but this clearly does not guarantee that vectorization has been applied to every loop. I tried at least to identify what prevents the compiler from vectorizing the comparison function, and the message I got is this one:

test/../core/../lib/Window.cpp:53:9: remark: loop not vectorized: cannot identify array bounds [-Rpass-analysis=loop-vectorize]
        for (int i = 0; i < WINDOW_SIZE; i++)
        ^

AFAIR it's really hard to vectorize that part of the code, but of course, I can be wrong 😊
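For what it's worth, one common way around "cannot identify array bounds" is to give the loop a plain fixed-size array with a compile-time bound and no early exit, so the vectorizer can prove the extent of the accesses. A minimal sketch, assuming the window data currently lives in storage the vectorizer can't reason about (this is not our actual Window class, just an illustration):

```cpp
#include <cstdint>

constexpr int WINDOW_SIZE = 128;  // illustrative

struct Window {
    // A plain fixed-size member array gives the vectorizer provable bounds,
    // unlike pointer-based or container-backed storage.
    uint64_t data[WINDOW_SIZE];

    // Branch-free "does the window contain value?", written as a reduction
    // so an early exit does not block vectorization under -O3.
    bool contains(uint64_t value) const {
        int hits = 0;
        for (int i = 0; i < WINDOW_SIZE; i++) {
            hits += (data[i] == value);
        }
        return hits != 0;
    }
};
```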

Please feel free to open a PR in case you find a proper way to vectorize the code

andybbruno avatar Jul 08 '22 16:07 andybbruno

@slice4e It seems that _mm512_load_epi64 causes a segmentation fault on my Cascade Lake machine. My compiler CXXFLAGS are g++ -std=c++17 -march=cascadelake -mprefer-vector-width=512

lrg11 avatar Apr 13 '23 03:04 lrg11

@lrg11 - I believe this will typically occur if the data you are accessing with a vectorized load is unaligned. Could you try forcing a 64-byte aligned allocation of this buffer?
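Something like this, as an illustration only (WINDOW_SIZE is a made-up constant here, and it needs AVX-512 flags such as the -march=cascadelake you are already using):

```cpp
#include <cstdint>
#include <cstdlib>     // std::aligned_alloc, std::free
#include <cstring>     // std::memset
#include <immintrin.h>

constexpr int WINDOW_SIZE = 128;  // illustrative, not TSXor's actual constant

int main() {
    // Option 1: 64-byte aligned automatic storage.
    alignas(64) uint64_t window[WINDOW_SIZE] = {};

    // Option 2: 64-byte aligned heap storage. The size passed to
    // std::aligned_alloc must be a multiple of the alignment.
    uint64_t* heap_window = static_cast<uint64_t*>(
        std::aligned_alloc(64, WINDOW_SIZE * sizeof(uint64_t)));
    std::memset(heap_window, 0, WINDOW_SIZE * sizeof(uint64_t));

    // Aligned loads are safe on both buffers. For data whose alignment
    // you cannot control, use the unaligned _mm512_loadu_si512 instead.
    __m512i a = _mm512_load_epi64(window);
    __m512i b = _mm512_load_epi64(heap_window);
    (void)a;
    (void)b;

    std::free(heap_window);
    return 0;
}
```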

slice4e avatar May 02 '23 19:05 slice4e

@lrg11 - after some thought, I believe it would be prudent to first understand what fraction of the execution time we spend in the "vectorizable" code. That bounds the potential upside (Amdahl's Law), but there are also frequency implications: depending on the architecture, executing AVX-512 instructions can cause the CPU to lower its frequency, which may hurt the non-vectorized part of the code. For example, if we are optimizing a function that takes only 5% of the total execution time, even an 8x speedup on it brings no benefit if the overall execution frequency drops by 5%.
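Putting numbers on the example above (p = 5% vectorizable fraction, s = 8x local speedup):

$$
S = \frac{1}{(1-p) + p/s} = \frac{1}{0.95 + 0.05/8} \approx 1.046
$$

so the best case is roughly a 4.6% overall gain, which a ~5% frequency drop on the remaining 95% of the code can cancel out entirely.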

slice4e avatar May 02 '23 19:05 slice4e

@slice4e @lrg11 thanks both for your interest 🙏

My 2 cents: if you take a look at our paper (see here), we found that, on average, 38% of the time we end up in the XOR case (the one @slice4e tried to speed up), which is the slowest step in this algorithm. So, if there's a faster method to find the "closest" XOR value, we can significantly improve the compression speed!

[Screenshot from the paper: breakdown of time spent per compression step]

andybbruno avatar May 03 '23 06:05 andybbruno

> @lrg11 - after some thought, I believe it would be prudent to first understand what fraction of the execution time we spend in the "vectorizable" code. That bounds the potential upside (Amdahl's Law), but there are also frequency implications: depending on the architecture, executing AVX-512 instructions can cause the CPU to lower its frequency, which may hurt the non-vectorized part of the code. For example, if we are optimizing a function that takes only 5% of the total execution time, even an 8x speedup on it brings no benefit if the overall execution frequency drops by 5%.

Right, I have verified this point; vectorizing sometimes turns out to be a performance loss.

lrg11 avatar May 04 '23 00:05 lrg11

> @slice4e @lrg11 thanks both for your interest 🙏
>
> My 2 cents: if you take a look at our paper (see here), we found that, on average, 38% of the time we end up in the XOR case (the one @slice4e tried to speed up), which is the slowest step in this algorithm. So, if there's a faster method to find the "closest" XOR value, we can significantly improve the compression speed!
>
> [Screenshot from the paper: breakdown of time spent per compression step]

It could be done by using an index that stores, for each set of trailing bits, the position of the most recent value with those trailing bits. This is the insight behind an algorithm called Chimp (Ref).
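A rough sketch of the idea (illustrative names and constants, not Chimp's actual implementation): keep a small table, keyed by the low bits of each stored value, that remembers the most recent window position holding a value with those trailing bits, so finding a promising XOR partner is a single lookup instead of a scan over the window.

```cpp
#include <array>
#include <cstdint>

// Illustrative constants; Chimp128 uses its own window and index sizes.
constexpr int WINDOW_SIZE = 128;
constexpr int INDEX_BITS  = 14;                        // low bits used as the key
constexpr uint64_t INDEX_MASK = (1ULL << INDEX_BITS) - 1;

struct TrailingBitsIndex {
    std::array<uint64_t, WINDOW_SIZE> window{};         // ring buffer of raw bit patterns
    std::array<int64_t, 1u << INDEX_BITS> last_pos{};   // last absolute position per key
    int64_t count = 0;

    TrailingBitsIndex() { last_pos.fill(-1); }

    // Most recent ring-buffer slot holding a value that shares the low
    // INDEX_BITS with `value`, or -1 if no such value is still in the window.
    int find_candidate(uint64_t value) const {
        int64_t pos = last_pos[value & INDEX_MASK];
        if (pos < 0 || count - pos > WINDOW_SIZE) return -1;
        return static_cast<int>(pos % WINDOW_SIZE);
    }

    void insert(uint64_t value) {
        window[count % WINDOW_SIZE] = value;
        last_pos[value & INDEX_MASK] = count;
        ++count;
    }
};
```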

lrg11 avatar May 17 '23 07:05 lrg11

Thanks @lrg11 for the hint! Please feel free to improve our solution with the Chimp approach (just open a PR), it'll be highly appreciated ☺️

andybbruno avatar May 18 '23 06:05 andybbruno

OK. Anyway, could you provide me with the datasets used in the TSXor paper? That would speed up my development and evaluation.

lrg11 avatar May 18 '23 07:05 lrg11

I don't know if I still have the original files. Anyway, I'd rather use the datasets of this new paper so that we can validate our performance against theirs.

andybbruno avatar May 18 '23 07:05 andybbruno

OK, I'm evaluating with the datasets from the new paper! Thanks

lrg11 avatar May 18 '23 07:05 lrg11