aes-js
x % 2^n = x & (2^n - 1), x * 2^n = x << n
Shift is faster than multiplication: x * 2^n = x << n
Bitwise AND is faster than modulo: x % 2^n = x & (2^n - 1)
Maybe there is no big speed improvement, but in tight loops it makes sense to use the fastest operation available.
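The identities themselves are easy to check in a REPL (a quick sketch; both sides must be non-negative and stay below 2^31 for the bitwise forms to be safe):

```javascript
// x % 2^n === x & (2^n - 1), for non-negative x below 2^31
// x * 2^n === x << n, as long as the product stays below 2^31
const x = 12345;
const n = 5;
console.assert(x % Math.pow(2, n) === (x & (Math.pow(2, n) - 1))); // 25 === 25
console.assert(x * Math.pow(2, n) === (x << n)); // 395040 === 395040
```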
Have you done benchmarks to test the performance? I would suspect that multiplication and shifting are about the same in most JavaScript engines, but I'm curious to see. JavaScript is quite far from the metal.
For the numbers involved, this is possibly safe, but it requires extra testing to make sure the values do not exceed 4 bytes, since bitwise operators implicitly cast to signed 32-bit values.
For example:
```
> 3 * Math.pow(2, 32)
12884901888
> 3 << 32
3
> 3 * Math.pow(2, 31)
6442450944
> 3 << 31
-2147483648
> 3 * Math.pow(2, 30)
3221225472
> 3 << 30
-1073741824
> 3 << 10
3072
> 3 * Math.pow(2, 10)
3072
```
I did try to run some benchmarks, but it is almost impossible to get meaningful numbers, because we aim to measure a single processor operation surrounded by many others (the loop increment, assignments, etc.). I also have to include some extra code, otherwise I risk hitting a compiler optimization such as dead-code elimination, and I cannot be sure what other optimizations the compiler applies before my benchmark runs. With each attempt, I get close results for both operations...
Being unable to create a meaningful benchmark, I am left with the feeling that in real-world JS applications bitwise operations can be faster, as long as the resulting int32 is not used as a float. Even though the spec requires numbers in JS to behave as doubles, implementations do not actually convert them to doubles unless the context requires it. Numbers in JS can therefore be real int32 values internally, so a bitwise operation can run as a single CPU instruction, at least in theory.
In my PR there is no overflow, and the sign bit is never touched, because all operations produce results in the range [0, 2^31).
On the other hand, most crypto algorithms are originally implemented in C/C++, where int32 * int32 == int32 always, but in JS int32 * int32 == double sometimes.
Take a look at this example:
JS:
3 * Math.pow(2, 31) == 6442450944 == 0x180000000
C:
3 * ((unsigned)1 << 31) == 3 * 2147483648 == 2147483648 == 0x80000000
Notice how the overflow into bit 32 (the 0x100000000 part) is not cut off in the JS version?
If the original algorithm is in C, I would re-implement it in JS as follows:
JS like C:
(3 << 31)>>>0 == 2147483648 == 0x80000000
which basically replicates unsigned int multiplication in JS.
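A side note for porters: modern engines also provide Math.imul (ES2015), which replicates C's int32 * int32 semantics directly by truncating the product to 32 bits:

```javascript
// Math.imul truncates the product to a signed 32-bit integer,
// exactly like int32 * int32 in C.
const c = Math.imul(3, Math.pow(2, 31)); // -2147483648 (signed int32)

// Reinterpret as unsigned with >>> 0 to match C's unsigned math:
const u = Math.imul(3, Math.pow(2, 31)) >>> 0; // 2147483648 == 0x80000000

// Plain multiplication keeps the full double result instead:
const d = 3 * Math.pow(2, 31); // 6442450944 == 0x180000000
```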
Well, maybe this example is not directly relevant to my PR, but it is still a valid concern when porting C algorithms to JS.
Should this be closed? There has been no activity for more than a year, and I don't see any benchmark showing a benefit :-(