
Generic type inference fails when return value is inside an assignment

Open rubenferreira97 opened this issue 4 months ago • 3 comments

When calling a generic function without explicit type parameters, the compiler fails to infer the correct type if the result is wrapped in an assignment expression, even if the surrounding context makes the expected type unambiguous.

T id<T>(T value) => value;

class A {
  int? value;

  int bad() {
    // Error:
    // A value of type 'int?' can't be returned from the method 'bad'
    // because it has a return type of 'int'.
    return value = id(42); // compiler infers `id<int?>(42)`
  }

  int good() {
    // Works fine when explicitly specifying type argument.
    return value = id<int>(42);
  }
}

Expected behavior: The compiler should be able to infer T = int from the surrounding return type, even though the result is assigned to an int? variable.


I would not be surprised if both examples failed, since I expect the compiler to first assign and then return:

  int bad2() {
    value = id(42); // infers id<int?>(42) correctly, because the only context type is `int?`
    return value;
  }

But the `good` example does something different. Is it possible to be smarter here and infer the correct type?
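One workaround, not mentioned in the thread but implied by the single-context behavior: introduce a local variable so the `id` call is inferred against exactly one precise context (a sketch; `workaround` is a hypothetical method name):

```dart
T id<T>(T value) => value;

class A {
  int? value;

  // Split the statement so `id` sees a single `int` context.
  int workaround() {
    final int result = id(42); // infers id<int>(42)
    value = result;            // int is assignable to int?
    return result;
  }
}

void main() {
  print(A().workaround()); // prints: 42
}
```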

rubenferreira97 commented Aug 12 '25 10:08

The issue here is that the expression's value is constrained by two contexts.

If the example were

 int? x1;
 int x2;
 x1 = x2 = id(1 as dynamic);

then using the outer context as the context type for the `id` call would make the code fail to compile.

Since the inference rules don't have a way to apply two contexts, and the DOWN function that could be used to combine them is woefully useless in all but trivial cases, inference uses one of the contexts.

That is the context of the inner assignment, which is why you see the current behavior.

Using the other context would fail in other ways.

Using both is ... nontrivial.

Maybe we could say that if one context type is a subtype of the other, use the subtype, otherwise use the inner one. It still feels a little arbitrary.
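The two-context conflict can be made concrete with explicit type arguments (a sketch using `id` as defined in the issue):

```dart
T id<T>(T value) => value;

void main() {
  int? x1;
  int x2;

  // Inference uses the inner context (`int` from x2), so this compiles:
  x1 = x2 = id(1 as dynamic); // infers id<int>(...)

  // Forcing the outer context instead would not compile:
  // x1 = x2 = id<int?>(1); // error: 'int?' can't be assigned to x2

  print([x1, x2]); // prints: [1, 1]
}
```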

lrhn commented Aug 14 '25 22:08

@lrhn Thanks a lot for the detailed explanation, that makes sense.

I see why in plain assignment chains, like your x1 = x2 = id(...) example, there isn’t really a single “right” answer: the expression is simultaneously constrained by two different contexts, and inference can’t satisfy both. In that situation it feels natural that the compiler picks one of them, even if it looks a bit arbitrary from the outside.

However, in the return case, I wonder if it would make sense for the return type context to be given higher precedence during inference. In practice, when writing code like:

int bad() {
  return value = id(42);
}

it feels intuitive that the compiler should prefer int (from the function’s declared return type) over int? (from the field assignment).

rubenferreira97 commented Sep 03 '25 12:09

If it had been:

  int value;
  int? bad() {
    return value = id(42);
  }

then "clearly" the compiler should prefer the more precise int from value. If it uses the return type, the code will fail to compile since it becomes return value = id<int?>(42); and value isn't nullable.

There is no general best choice between the contexts, other than "the most precise". If one exists.
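A sketch of the "most precise" choice: since `int` is a subtype of `int?`, picking `int` satisfies both contexts at once, which is exactly what the explicit `id<int>` in the issue's `good` method does (field name `field` is hypothetical here):

```dart
T id<T>(T value) => value;

class A {
  int? field;

  // `int <: int?`, so the more precise context `int` works for
  // both the assignment to `field` and the declared return type.
  int precise() => field = id<int>(42);
}

void main() {
  print(A().precise()); // prints: 42
}
```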

lrhn commented Sep 03 '25 16:09