Type inference / flow analysis does not consider non-returning functions
Acton Version
0.26.0.20250924.14.31.36
Steps to Reproduce and Observed Behavior
def bar():
    raise ValueError("error")

def foo(s) -> str:
    if s:
        return s
    else:
        bar()

actor main(env):
    print(foo(""))
    env.exit(0)
Expected Behavior
Unhandled ValueError exception when run
But it does! You're just missing a return in front of the bar() call. Without a return in both branches of the if, you actually have a function that can fall through, which gives it the return type ?str; i.e., slightly larger than the annotation!
Add the return and the code compiles fine, and ValueError is raised at run-time.
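With that change applied, the repro would look something like this (an illustrative sketch; only the else branch differs from the original):

def bar():
    raise ValueError("error")

def foo(s) -> str:
    if s:
        return s
    else:
        return bar()    # returning the result of the call means this branch
                        # no longer falls through, so foo's type stays str

actor main(env):
    print(foo(""))
    env.exit(0)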
PS
In Haskell you'd be right to expect foo to return whatever bar returns, without any explicit return syntax! But not in Acton :-)
But bar() can't return; it can only raise an exception that is unhandled by foo(). Hence there is no path where foo() can return None, only an actual str, right?
Ah, so you're saying that bar should really have been given a fully generic return type, since it can never return anything? Interesting, let me think about that...!
So this really boils down to whether we want to keep the concept of "falling through a statement" a purely syntactic matter, or whether we want it to also include the result of semantic analyses like type inference.
Currently we consider a statement to be able to fall through to its successor unless it is a return, a break, a continue, or a raise – that is, the statements that don't fall through are easily recognizable syntactic exceptions. And we use this classification to determine whether a function has an implicit return None at the end, or whether an if branch contributes bindings to the scope that follows (which is extended by the intersection of the bindings in the fall-through branches). But we do this entirely based on statement syntax; we do not, for example, analyze a loop's termination properties in order to determine whether control can ever reach its successor statement.
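As a purely hypothetical illustration of that rule (the function and variable names are made up), only the bindings made in every fall-through branch survive into the code after the if:

def classify(n: int) -> str:
    if n > 0:
        label = "positive"    # binds label; this branch falls through
    elif n < 0:
        label = "negative"    # binds label; this branch falls through
    else:
        return "zero"         # a return: this branch does not fall through
    # label is in scope here: the scope after the if is extended by the
    # intersection of the bindings in the two fall-through branches
    return label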
Nor do we currently analyze the code of a called function to find out if it's guaranteed not to return; we simply assume that every called function may return. However, one could argue that the called function's type actually reveals the necessary info, since the only way to construct a function of type [A] => (ts) -> A (where A doesn't occur in the ts) is either to loop forever or always throw an exception.
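Sketched in Acton-like code (hypothetical names, and no claim that the compiler currently assigns them a generic type), the two ways to build such a non-returning function look like this:

def always_raises(msg: str):
    # The only exit is the raise, so no call ever produces a value; a result
    # type like [A] => (str) -> A would be sound in principle.
    raise ValueError(msg)

def loops_forever():
    # Never terminates, so it never produces a value either.
    while True:
        pass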
But there's a catch: extending the falls-through classification this way would make the scoping rules depend on the result of type inference, while type inference itself is heavily dependent on the rules that govern what variables are in scope! So we'd have a circular dependency in general, that would have to be broken by some clever restriction mechanism when doing type inference for recursive functions. And note that this isn't just a question of compiler implementation complexity, as it would also imply subtle and complex rules to answer the basic question regarding which variables are accessible where.
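To see where that circularity would bite, consider a hypothetical recursive function (purely illustrative):

def loop(n: int) -> str:
    if n > 0:
        return loop(n - 1)
    else:
        loop(n)
        # Does this branch fall through (and thus trigger an implicit
        # return None)? Under the extended rule that would depend on loop's
        # inferred return type, but loop's inferred type in turn depends on
        # whether this branch falls through, which is exactly the
        # chicken-and-egg situation described above.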
So I'm leaning towards not going this route, as the potential payoff appears to be limited to the special case of abstracting over the raise keyword. Furthermore, I don't think the slight shift to abstracting over the actual exception instead looks that bad:
def bar():
    return ValueError("error")

def foo(s) -> str:
    if s:
        return s
    else:
        raise bar()
It actually makes it evident that foo doesn't fall through, so its return type must be correct as long as s is a str.
In the end it's all about finding reasonable tradeoffs!