aptos-core
[Bug][Compiler] Binary operations with side-effecting operands cause diverging behavior in compiler v1 vs v2
🐛 Bug
When a binary operation has operands with side effects, the evaluation semantics are unexpected in both v1 and v2, and the two compilers also diverge from each other.
To reproduce
Transactional test to reproduce:
```move
//# publish
module 0xdecaf::binop {
    public fun test1(): u64 {
        let x = 1;
        x + {x = x + 1; x} + {x = x + 1; x}
    }
    public fun test2(): u64 {
        let x = 1;
        {x = x + 1; x} + x + {x = x + 1; x}
    }
    public fun test3(): u64 {
        let x = 1;
        {x + {x = x + 1; x}} + {x = x + 1; x}
    }
    public fun test4(): u64 {
        let x = 1;
        (x + {x = x + 1; x}) + {x = x + 1; x}
    }
    public fun test5(): u64 {
        let x = 1;
        let (a, b, c) = (x, {x = x + 1; x}, {x = x + 1; x});
        a + b + c
    }
    fun inc(x: &mut u64): u64 {
        *x = *x + 1;
        *x
    }
    public fun test6(): u64 {
        let x = 1;
        x + inc(&mut x) + inc(&mut x)
    }
    struct S {
        x: u64,
        y: u64,
        z: u64,
    }
    public fun test7(): u64 {
        let x = 1;
        let S {x, y, z} = S { x, y: {x = x + 1; x}, z: {x = x + 1; x} };
        x + y + z
    }
    public fun test8(): u64 {
        let x = 1;
        let S {x, y, z} = S { x, y: inc(&mut x), z: inc(&mut x) };
        x + y + z
    }
    public fun test9(): u64 {
        let x = 1;
        let s = S { x, y: {x = x + 1; x}, z: {x = x + 1; x} };
        let S {x, y, z} = s;
        x + y + z
    }
    public fun test10(): u64 {
        let x = 1;
        let s = S { x, y: inc(&mut x), z: inc(&mut x) };
        let x;
        let y;
        let z;
        S {x, y, z} = s;
        x + y + z
    }
    public fun test11(): u64 {
        let a = 1;
        let x;
        let y;
        let z;
        (x, y, z) = (a, {a = a + 1; a}, {a = a + 1; a});
        x + y + z
    }
    public fun test12(): u64 {
        let x = 1;
        let S {y, x, z} = S { x, y: {x = x + 1; x}, z: {x = x + 1; x} };
        x + y + z
    }
}

//# run 0xdecaf::binop::test1

//# run 0xdecaf::binop::test2

//# run 0xdecaf::binop::test3

//# run 0xdecaf::binop::test4

//# run 0xdecaf::binop::test5

//# run 0xdecaf::binop::test6

//# run 0xdecaf::binop::test7

//# run 0xdecaf::binop::test8

//# run 0xdecaf::binop::test9

//# run 0xdecaf::binop::test10

//# run 0xdecaf::binop::test11

//# run 0xdecaf::binop::test12
```
Expected Behavior
When we run the above transactional test under compiler v2 vs. v1 comparison, the outputs differ across the test cases. We would expect that (1) the behavior is the same for v1 and v2, and (2) the behavior follows the typically expected semantics (strict left-to-right evaluation of operands). Neither holds right now, as the table below shows:
| test no. | v1 | v2 |
|---|---|---|
| 1 | 9 | 7 |
| 2 | 9 | 7 |
| 3 | 9 | 7 |
| 4 | 9 | 7 |
| 5 | 6 | 6 |
| 6 | 6 | 7 |
| 7 | 6 | 8 |
| 8 | 6 | 8 |
| 9 | 6 | 8 |
| 10 | 6 | 8 |
| 11 | 6 | 6 |
| 12 | 6 | 8 |
The v1 column shows the return value produced under compiler v1 for each test case, and analogously for the v2 column.
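For comparison, here is what strict left-to-right operand evaluation produces for the test1 pattern. Rust is used purely as an illustration (it guarantees left-to-right evaluation of `+` operands); the result is 6, the same value both compilers agree on for tests 5 and 11, and matches neither v1's 9 nor v2's 7:

```rust
fn main() {
    let mut x: u64 = 1;
    // Left-to-right: read x (1), then the first block (x becomes 2, yields 2),
    // then the second block (x becomes 3, yields 3), so the sum is 1 + 2 + 3.
    let r = x + { x += 1; x } + { x += 1; x };
    println!("{}", r); // prints 6
}
```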
Related to #11270, but not the same (this bug shows divergent behavior for v1 vs v2, not just unexpected behavior).
#6 for v2 is really puzzling.
I had already been pondering this simplification rule, so #1-4 for v1 are unsurprising:

```
{ e1; e2 } + { e3; e4 } --> { e1; e3; e2 + e4 }
```

But this rewrite is only valid when the hoisted expressions are side-effect-free. Clearly that check was missed in v1.
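To see why that check matters, apply the rule to test1: both increments are hoisted ahead of every read of `x`, so `x` already equals 3 before any addition happens. A Rust sketch of the transformed program (illustration only) reproduces v1's answer of 9:

```rust
fn main() {
    let mut x: u64 = 1;
    // The rewrite hoists both side effects ahead of all three reads of x,
    // so every operand sees the final value 3.
    x += 1;
    x += 1;
    let r = x + x + x;
    println!("{}", r); // prints 9
}
```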
Another transactional test that may be related to this bug:
```move
//# publish
module 0xcafe::vectors {
    use std::vector;

    fun make_vector(a: u64): vector<u64> {
        let x = vector[a,
            {a = a + 1; a},
        ];
        let y = x;
        y
    }
    fun sum(a: &vector<u64>): u64 {
        let sum = 0;
        vector::for_each_ref(a, |elt| { sum = sum + *elt });
        sum
    }
    public fun test_big_vector(): u64 {
        let v = make_vector(0);
        sum(&v)
    }
}

//# run 0xcafe::vectors::test_big_vector
```
For this test, v1 returns 1, v2 returns 2.
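In this case v1's answer of 1 is the expected one: under left-to-right evaluation of the vector literal's elements, `a` is read as 0 before the block increments it, giving the vector `[0, 1]`. A Rust analogue of `make_vector` (illustration only, since Rust guarantees left-to-right evaluation of array elements) shows this:

```rust
fn make_vector(mut a: u64) -> Vec<u64> {
    // Left-to-right: the first element reads a (0); the block then
    // increments a and yields 1, so the vector is [0, 1].
    vec![a, { a += 1; a }]
}

fn main() {
    let v = make_vector(0);
    let sum: u64 = v.iter().sum();
    println!("{}", sum); // prints 1
}
```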