AVM: uint256 opcodes
Summary
This PR adds uint256 opcodes. The current draft contains basic arithmetic ops, but comparison and bitwise opcodes could also be added.
- Leverages https://github.com/holiman/uint256, which is used by go-ethereum and erigon
- Each opcode takes in `[]byte` and returns `[32]byte`
- An error is returned if one of the arguments is > 32 bytes
- An error is returned on underflow/overflow where applicable (see the sketch after this list)
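For illustration, here is a minimal Go sketch of how such an opcode could wrap holiman/uint256 under the rules above. The names (`opUint256Binary`, `errTooLong`, `errOverflow`) are hypothetical rather than identifiers from this PR, and the real opcodes run inside the AVM evaluator rather than as free functions:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/holiman/uint256"
)

var (
	errTooLong  = errors.New("uint256 math: argument longer than 32 bytes")
	errOverflow = errors.New("uint256 math: result overflowed 256 bits")
)

// opUint256Binary decodes two big-endian byte-string arguments, applies op,
// and always returns a fixed-width 32-byte big-endian result.
func opUint256Binary(a, b []byte, op func(z, x, y *uint256.Int) (*uint256.Int, bool)) ([32]byte, error) {
	var zero [32]byte
	if len(a) > 32 || len(b) > 32 {
		return zero, errTooLong // error if an argument is > 32 bytes
	}
	var x, y, z uint256.Int
	x.SetBytes(a)
	y.SetBytes(b)
	if _, overflow := op(&z, &x, &y); overflow {
		return zero, errOverflow // error on overflow/underflow
	}
	return z.Bytes32(), nil
}

func main() {
	sum, err := opUint256Binary([]byte{1}, []byte{2}, func(z, x, y *uint256.Int) (*uint256.Int, bool) {
		return z.AddOverflow(x, y)
	})
	fmt.Printf("%x %v\n", sum, err) // 0000...0003 <nil>
}
```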
The benchmarks:

```
BenchmarkUint256Math/add256-10     16894750    72.34 ns/op    3.000 extra/op    38 B/op    1 allocs/op
BenchmarkUint256Math/sub256-10     17190334    68.53 ns/op    3.000 extra/op    38 B/op    1 allocs/op
BenchmarkUint256Math/mul256-10     15211278    79.63 ns/op    3.000 extra/op    38 B/op    1 allocs/op
BenchmarkUint256Math/div256-10     17130069    70.08 ns/op    3.000 extra/op    38 B/op    1 allocs/op
BenchmarkUint256Math/mod256-10     17231794    69.92 ns/op    3.000 extra/op    38 B/op    1 allocs/op
BenchmarkUint256Math/sqrt256-10    21905337    55.86 ns/op    2.000 extra/op    38 B/op
```
Based on these results, I believe an opcode cost of 2 is reasonable for all of the added opcodes.
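For a sense of the raw library cost in isolation, a self-contained microbenchmark of the underlying 256-bit addition might look like the sketch below. This is not the PR's benchmark harness and won't reproduce the extra/op metric or the per-op allocation above, which presumably come from running the opcode through the evaluator:

```go
package uint256math_test

import (
	"testing"

	"github.com/holiman/uint256"
)

// BenchmarkRawAdd256 measures only the library-level 256-bit addition,
// without any AVM stack handling or byte-string conversion.
func BenchmarkRawAdd256(b *testing.B) {
	x := new(uint256.Int).SetUint64(1)
	y := new(uint256.Int).SetUint64(2)
	z := new(uint256.Int)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, overflow := z.AddOverflow(x, y); overflow {
			b.Fatal("unexpected overflow")
		}
	}
}
```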
Rationale
The byte math opcodes are rather expensive to use, and if you are using them within the context of an ARC4 method, even more opcode budget has to be spent padding values for encoding. This PR addresses both problems by making uint256 arithmetic more efficient and by always returning `[32]byte`.
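To make the encoding overhead concrete: the existing byte math opcodes return results without fixed-width padding, so an ARC4 uint256 value has to be left-padded back to 32 bytes (e.g. with `bzero` and `b|`) before it can be encoded. The hypothetical Go helper below expresses that padding step for clarity; in a contract it would be done with TEAL byte ops, so this is purely illustrative:

```go
package main

import "fmt"

// padTo32 left-pads a big-endian byte string to exactly 32 bytes, the step
// that dedicated uint256 opcodes make unnecessary by always returning [32]byte.
func padTo32(b []byte) ([32]byte, error) {
	var out [32]byte
	if len(b) > 32 {
		return out, fmt.Errorf("value is %d bytes, does not fit in a uint256", len(b))
	}
	copy(out[32-len(b):], b)
	return out, nil
}

func main() {
	padded, _ := padTo32([]byte{0x01, 0x02}) // e.g. a short b+ result
	fmt.Printf("%x\n", padded)               // 0000...000102
}
```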
Test Plan
All opcodes are tested to ensure they work as expected and return errors when expected.
TODO
- [ ] Test error when arguments are > 32 bytes
- [ ] Add bitwise operators
- [ ] Add comparison operators (`<`, `>`, etc.)
With the work on bmodexp #6140, I think we'll have a way to do custom cost functions for opcodes. That will probably allow for nicer cost functions that can charge less when using the bmath opcodes on smaller values like uint256s. I don't think they will get quite as fast as these special-purpose opcodes, but I'm loath to introduce another pile of opcodes for a single purpose.
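Purely as an illustration of that idea (not the interface from #6140), a length-dependent cost function for the existing byte math opcodes could look something like the sketch below, charging uint256-sized inputs less than full-width ones. The tier numbers are made up for the example:

```go
package main

import "fmt"

// byteMathCost is a hypothetical length-dependent cost function; the tiers
// and numbers are illustrative, not costs adopted in go-algorand.
func byteMathCost(args ...[]byte) int {
	longest := 0
	for _, a := range args {
		if len(a) > longest {
			longest = len(a)
		}
	}
	switch {
	case longest <= 32: // uint256-sized operands get the cheapest tier
		return 2
	case longest <= 64:
		return 10
	default:
		return 20
	}
}

func main() {
	small := make([]byte, 32)
	large := make([]byte, 64)
	fmt.Println(byteMathCost(small, small)) // 2
	fmt.Println(byteMathCost(small, large)) // 10
}
```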
Yeah, totally understand that. One other factor to consider here is that these opcodes offer a consistent bitwidth. When working with uint256 in a contract, the actual arithmetic is only part of the total cost to use them; there's still a lot of opcode budget being spent on overflow checking and encoding (padding/truncating values).
Before making an assessment on whether we should add these opcodes or not, I think we should see what the cost in totality is like when working with uint256 numbers under a practical use case and go from there.
I'm going to close this in favor of #6270, which has improved costs for many byte math opcodes.