[SSA] Ensure phi args are only added when defined.
Address latter part of #108.
Originally, variables that were undefined along a path were still added as `phi` arguments. This resulted in `id`-ing a non-existent variable during the `from_ssa` phase.

I was trying to force the use of reaching definitions, but this shouldn't be necessary. All we care about here is that each `phi` argument was defined along some path of predecessors beforehand. Otherwise, it is, well, undefined.
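The check can be sketched as a forward "maybe-defined" dataflow pass. This is a hypothetical sketch, not the actual `to_ssa.py` code; the block/predecessor representation is assumed:

```python
# Hypothetical sketch (not the actual to_ssa.py code): compute, for each
# block, the variables defined along *some* path from the entry. A phi
# argument for variable v coming from predecessor p should only be added
# when v is in out[p].

def maybe_defined(blocks, preds):
    """blocks: name -> list of Bril instr dicts; preds: name -> pred names."""
    defs = {
        name: {instr["dest"] for instr in instrs if "dest" in instr}
        for name, instrs in blocks.items()
    }
    out = {name: set() for name in blocks}
    changed = True
    while changed:
        changed = False
        for name in blocks:
            # Union over predecessors: we only ask whether the variable
            # was defined along *some* path, not along all of them.
            in_set = set().union(*(out[p] for p in preds[name]))
            new_out = in_set | defs[name]
            if new_out != out[name]:
                out[name] = new_out
                changed = True
    return out
```

On the example below, `a` would be in the set for `.true` but not for `.false`, so the `phi` in `.zexit` gets `__undefined` on the `.false` edge.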
Example
```
@main() {
  cond: bool = const true;
  br cond .true .false;
.true:
  a: int = const 0;
  jmp .zexit;
.false:
  # `a` is undefined along this path.
  jmp .zexit;
.zexit:
  print a;
}
```
Outputs
```
# bril2json < if-const.bril | python ../../to_ssa.py | bril2txt
@main {
.b1:
  cond.0: bool = const true;
  br cond.0 .true .false;
.true:
  a.0: int = const 0;
  jmp .zexit;
.false:
  jmp .zexit;
.zexit:
  a.1: int = phi __undefined a.0 .false .true; # Before, this was `phi a.0 a.0 .false .true`
  print a.1;
  ret;
}
```
We can then leave SSA form:
```
# bril2json < if-const.bril | python ../../to_ssa.py | python ../../from_ssa.py | bril2txt
@main {
.b1:
  cond.0: bool = const true;
  br cond.0 .true .false;
.true:
  a.0: int = const 0;
  a.1: int = id a.0;
  jmp .zexit;
.false:
  a.1: int = id __undefined; # Before, this was `id a.0`, which is nonsensical.
  jmp .zexit;
.zexit:
  print a.1;
  ret;
}
```
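The copies shown above come from the standard phi-elimination step: each `phi` becomes an `id` copy at the end of the corresponding predecessor block. A minimal sketch, assuming a simple `label -> instruction list` block map (hypothetical, not the actual `from_ssa.py`):

```python
# Hypothetical phi-elimination sketch: replace each
#   d: t = phi v1 v2 .l1 .l2;
# with a copy `d: t = id vi;` at the end of each predecessor block .li,
# placed just before that block's terminator if it has one.

TERMINATORS = ("jmp", "br", "ret")

def eliminate_phis(blocks):
    """blocks: label -> list of Bril instruction dicts."""
    for instrs in blocks.values():
        for instr in list(instrs):
            if instr.get("op") != "phi":
                continue
            for var, label in zip(instr["args"], instr["labels"]):
                copy = {"op": "id", "dest": instr["dest"],
                        "type": instr["type"], "args": [var]}
                pred = blocks[label]
                if pred and pred[-1].get("op") in TERMINATORS:
                    pred.insert(len(pred) - 1, copy)
                else:
                    pred.append(copy)
            instrs.remove(instr)
    return blocks
```

Run on the `.zexit` phi above, this would place `a.1: int = id a.0` in `.true` and `a.1: int = id __undefined` in `.false`, matching the output.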
Next Step(s)

After the submission of this PR:

- Remove instances of `var: <type> = id __undefined`. It works in most simple cases to just not append arguments that are undefined to `phi`, but that leads to undefined variables when leaving SSA form on some of the benchmarks, probably from propagating the copies.
- Tests for `from_ssa`.
Isn't your `defined_analysis()` an over-estimate of the defined variables, so it will miss some cases?

I believe the problem you're trying to solve:

```
a.1: int = phi a.0 a.0 .true .false
```

is caused by a bug in the `to_ssa.py` script. I'm going to put up a PR to fix it shortly.
I've put up PR #119. I believe that it fixes all the issues of #108.
> Isn't your `defined_analysis()` an over-estimate of the defined variables, so it will miss some cases? I believe the problem you're trying to solve:
>
> ```
> a.1: int = phi a.0 a.0 .true .false
> ```
>
> is caused by a bug in the `to_ssa.py` script. I'm going to put up a PR to fix it shortly.
I'm not saying I'm correct in any way, but can you explain what you mean by "over-estimate of defined variables"? I'm not sure I'm following. And yep! That's what I was attempting to fix.
> can you explain what you mean by "over-estimate of defined variables"?
Consider the variable `v` in the `loop` function in this example. I believe your analysis will say that `v` is defined at the entry of every block except `.entry`. But when `print` is false, `v` is never defined (and it isn't defined in the first iteration of the loop). In that case, it over-estimates the set of defined variables.
```
@func(): int {
  n: int = const 5;
  ret n;
}

@loop(infinite: bool, print: bool) {
.entry:
.loop.header:
  br infinite .loop.body .loop.end;
.loop.body:
  br print .loop.print .loop.next;
.loop.print:
  v: int = call @func;
  print v;
.loop.next:
  jmp .loop.header;
.loop.end:
}

@main() {
  infinite: bool = const false;
  print: bool = const true;
  call @loop infinite print;
}
```
I think a better name for the analysis is `maybe_defined_variables`. You can use its negation: any variable not in the set is certainly not defined. And conversely, you can use a `maybe_undefined_variables` analysis to say that anything not in that set is certainly defined.
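The duality can be made concrete: the same fixed-point computation gives "maybe defined" with a union join and "must be defined" with an intersection join. A hypothetical sketch (function arguments are ignored for simplicity, so `infinite` and `print` don't appear):

```python
# Hypothetical sketch of the may/must duality. Joining predecessor facts
# with union yields "maybe defined"; joining with intersection yields
# "must be defined". Function arguments are ignored for simplicity.

def defined_vars(blocks, preds, entry, must):
    defs = {n: {i["dest"] for i in instrs if "dest" in i}
            for n, instrs in blocks.items()}
    all_vars = set().union(*defs.values())
    # Must-analyses start from the full set ("top") so intersection can
    # only shrink it; may-analyses start from the empty set.
    out = {n: (set(all_vars) if must else set()) for n in blocks}
    out[entry] = set(defs[entry])
    join = set.intersection if must else set.union
    changed = True
    while changed:
        changed = False
        for n in blocks:
            ps = preds[n]
            in_set = join(*(out[p] for p in ps)) if ps else set()
            new = in_set | defs[n]
            if new != out[n]:
                out[n] = new
                changed = True
    return out
```

On the `@loop` example above, the union version reports `v` at the entry of `.loop.end` (the over-estimate), while the intersection version does not.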
Huh, yeah, good point. It produces the same SSA as your PR, but "defined" is a poor choice of terminology. Thanks for the insight. The way I thought of the analysis is that a variable is in the set if it was defined along some path of predecessors at some point, so `maybe_defined` is the better name.