feat(osmomath): log2 approximation
Closes: #XXX
What is the purpose of the change
This is a POC of a base 2 logarithm implementation using a lookup table.
It is based on: https://stackoverflow.com/questions/4657468/fast-fixed-point-pow-log-exp-and-sqrt. Several optimizations and improvements are still possible to increase performance and accuracy.
The core idea is based on: $$\log_{2}{x} = \log_{2}{\left(2 \cdot \tfrac{x}{2}\right)} = \log_{2}{2} + \log_{2}{\tfrac{x}{2}} = 1 + \log_{2}{\tfrac{x}{2}}$$
We iteratively repeat this expansion, counting the number of iterations (y), until $$1 \leq x < 2$$
Then the number of iterations (y) equals the integer part of the exponent, while the reduced x is used to look up the mantissa's logarithm in the precomputed table. The two values are added together to get the final result.
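A rough sketch of that reduction in Go, using float64 and a placeholder mantissaLog2 helper for illustration (the actual implementation operates on osmomath's fixed-point decimals):

```go
package main

import (
	"fmt"
	"math"
)

// log2Sketch reduces x to [1, 2) by repeated halving (or doubling),
// counting iterations in y. y is then the integer part of log2(x), and
// the reduced mantissa indexes the precomputed table.
func log2Sketch(x float64, mantissaLog2 func(float64) float64) float64 {
	y := 0
	for x >= 2 { // halve until x < 2; each halving adds 1 to the exponent
		x /= 2
		y++
	}
	for x < 1 { // double until x >= 1 (handles inputs below 1)
		x *= 2
		y--
	}
	// 1 <= x < 2: the fractional part comes from the mantissa lookup.
	return float64(y) + mantissaLog2(x)
}

func main() {
	// Example: log2(6) = 2 + log2(1.5); math.Log2 stands in for the table.
	fmt.Println(log2Sketch(6, math.Log2)) // ≈ 2.585
}
```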
I'm also still exploring some other, simpler designs such as: http://www.claysturner.com/dsp/BinaryLogarithm.pdf
Additionally, a Taylor series is still under consideration; the information I have found about its use so far is conflicting.
Testing and Verifying
This change added tests
Documentation and Release Note
- Does this pull request introduce a new feature or user-facing behavior changes? no
- Is a relevant changelog entry added to the Unreleased section in CHANGELOG.md? no
- How is the feature or change documented? not applicable
Nice! So I forgot how nice the Taylor expansion of log is:

$$\ln(1+x) = x - \frac{x^{2}}{2} + \frac{x^{3}}{3} - \cdots \quad \text{for } |x| < 1$$
If our focus is to get something very high precision with minimal complexity, I think the simplest approach may be:
- Use the recursive trick + lookup table tricks to reduce to a log in the range (1, 1.0001).
- Then do the really simple Taylor expansion, whose max error term is actually quite easy to bound. (We would do enough iterations s.t. $$\left(10^{-4}\right)^{\text{num\_iterations}} < \text{max\_precision}$$ with the $$10^{-4}$$ being arbitrary.)
The way we do this reduction is a lookup table storing high-precision logs for, e.g., 1.9, 1.8, ..., 1.1, 1.01, 1.001, etc.
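A minimal Go sketch of this reduction + Taylor step, assuming a hypothetical factor table and float64 for clarity (a production version would store the logs at high precision rather than calling math.Log2):

```go
package main

import (
	"fmt"
	"math"
)

// Hypothetical reduction factors; a real table would pair each factor
// with a precomputed high-precision log2 value.
var factors = []float64{1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2, 1.1, 1.01, 1.001, 1.0001}

// log2Mantissa expects 1 <= x < 2. It divides out table factors until
// x < 1.0001, accumulating their logs, then finishes with a three-term
// Taylor expansion of ln(1+r), whose error for r < 10^-4 is ~r^4/4.
func log2Mantissa(x float64) float64 {
	res := 0.0
	for _, f := range factors {
		for x >= f {
			x /= f
			res += math.Log2(f)
		}
	}
	r := x - 1 // now 0 <= r < 1e-4
	return res + (r-r*r/2+r*r*r/3)/math.Ln2
}

func main() {
	fmt.Println(log2Mantissa(1.5)) // ≈ 0.5849625007
}
```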
The alternative is doing a lookup table at the bottom and interpolating between results (either linear interpolation from a fixed-precision lookup, or a Runge-Kutta-style method).
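A sketch of the linear-interpolation variant (names are hypothetical; assumes the same package and math import as the sketch above, with the table built via math.Log2 for brevity):

```go
// buildTable precomputes log2 over [1, 2] at step h, plus one guard entry.
func buildTable(h float64) []float64 {
	n := int(1/h) + 2
	table := make([]float64, n)
	for i := range table {
		table[i] = math.Log2(1 + float64(i)*h)
	}
	return table
}

// log2Interp linearly interpolates between neighboring table entries;
// interpolation error is O(h^2). Expects 1 <= x < 2.
func log2Interp(x float64, table []float64, h float64) float64 {
	t := (x - 1) / h
	i := int(t)
	return table[i] + (t-float64(i))*(table[i+1]-table[i])
}
```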
Benchmark at commit: d6324f4
goos: linux
goarch: amd64
pkg: github.com/osmosis-labs/osmosis/v12/osmomath
cpu: 12th Gen Intel(R) Core(TM) i7-1260P
BenchmarkLog2-16 7366 146285 ns/op 67907 B/op 1236 allocs/op
BenchmarkLog2-16 8853 141121 ns/op 67944 B/op 1236 allocs/op
BenchmarkLog2-16 7800 136633 ns/op 68118 B/op 1240 allocs/op
BenchmarkLog2-16 8341 158709 ns/op 67909 B/op 1235 allocs/op
BenchmarkLog2-16 8383 142193 ns/op 68279 B/op 1244 allocs/op
BenchmarkLog2-16 8413 142288 ns/op 67883 B/op 1235 allocs/op
BenchmarkLog2-16 8049 138310 ns/op 67704 B/op 1231 allocs/op
BenchmarkLog2-16 7173 160391 ns/op 67601 B/op 1229 allocs/op
BenchmarkLog2-16 8934 152299 ns/op 67991 B/op 1237 allocs/op
BenchmarkLog2-16 7923 160199 ns/op 67719 B/op 1232 allocs/op
PASS
ok github.com/osmosis-labs/osmosis/v12/osmomath 24.489s
According to benchmarks, reducing the argument to the range [1, 1.0001) ends up being much less performant: more than 5500 iterations for log base 2 of 3, compared to approximately 300 iterations with the current implementation.
The reason is that we have to keep looping and dividing by the upper bound to normalize the number to below 1.0001. Since the divisor is so close to 1, each division barely shrinks the number, so normalization takes many iterations and hurts performance.
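For a rough sanity check on the iteration count (an order-of-magnitude estimate, not a measured number): dividing a mantissa in [1, 2) by 1.0001 until it falls below 1.0001 takes on the order of

$$\frac{\ln 2}{\ln 1.0001} \approx \frac{0.6931}{9.9995 \times 10^{-5}} \approx 6932$$

divisions, which is consistent with the thousands of iterations observed.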
I don't have the exact benchmark in nanoseconds for the [1, 1.0001) approach, but it is slower than the current implementation even before reaching the Taylor series step.
Upon exploring the properties of log base 2, I found the current method, which requires fewer iterations: http://www.claysturner.com/dsp/BinaryLogarithm.pdf
It is implemented here and provides strong accuracy guarantees relative to the Wolfram Alpha results.
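For reference, a bare-bones float64 sketch of that bit-by-bit method (function name and types are mine for illustration; the osmomath version works on fixed-point BigDec): after normalizing x into [1, 2), squaring x doubles its log2, so each squaring step extracts one bit of the fractional part.

```go
package main

import "fmt"

// binaryLog2 computes log2(x) to fracBits fractional bits using the
// squaring method from the paper.
func binaryLog2(x float64, fracBits int) float64 {
	res := 0.0
	for x >= 2 { // integer part via normalization into [1, 2)
		x /= 2
		res++
	}
	for x < 1 {
		x *= 2
		res--
	}
	bit := 0.5
	for i := 0; i < fracBits; i++ {
		x *= x // squaring doubles the remaining log2
		if x >= 2 {
			x /= 2 // the doubled log crossed 1: emit a 1 bit
			res += bit
		}
		bit /= 2
	}
	return res
}

func main() {
	fmt.Println(binaryLog2(3, 50)) // ≈ 1.5849625007 (log2 of 3)
}
```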
The benchmark for the latest commit is added up top.