Type inference with implicit arguments
In some cases, implicit arguments can be used to build a term that can be elaborated into different terms having different types (modulo =_R). It seems that, when Lambdapi tries to infer the type of such a term, it makes some unjustified steps during unification.
Consider the following toy example.
constant symbol Type : TYPE;
constant symbol ' : Type → TYPE;
notation ' prefix 10;
constant symbol Π* (A : Type) (_ : ' A → Type) : Type;
constant symbol abs (A : Type) [B : ' A → Type] (_ : Π x : ' A, ' B x) : ' Π* A B;
constant symbol Nat : Type;
constant symbol Unit (x : ' Nat) : Type;
constant symbol unit (x : ' Nat) : ' Unit x;
type abs Nat unit;
type @abs Nat Unit unit;
When inferring the type of abs Nat unit, the term gets elaborated into @abs Nat M unit, and at some point we have to solve the problem x : Nat ⊢ ' Unit x = ' M x. Both M = Unit and M = λ x, Unit x are solutions to this problem, so it should not be possible to infer a unique type for abs Nat unit. Still, Lambdapi does not complain and returns the type ' Π* Nat (λ x, Unit x). Of course, if we instead force M = Unit by writing @abs Nat Unit unit, then Lambdapi returns ' Π* Nat Unit, which is not convertible to ' Π* Nat (λ x, Unit x).
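For what it's worth, the mismatch can be observed directly: introducing a symbol that expects the eta-expanded type makes only one of the two applications typecheck. This is a sketch, assuming the declarations above; f is a hypothetical symbol added just for illustration:

```
// Hypothetical symbol expecting the eta-expanded type:
constant symbol f : ' Π* Nat (λ x, Unit x) → Type;

type f (abs Nat unit);         // accepted: the inferred type matches
// type f (@abs Nat Unit unit); // rejected: ' Π* Nat Unit is not convertible
```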
Is this the expected behavior?
https://github.com/Deducteam/lambdapi/blob/1fb1d9aa8d162279e0496d976b286fffc721f332/src/core/unif.ml#L408-L410

Unification in LP is only intended to be correct and predictable. As unification modulo rewriting is undecidable and may have many solutions, we only ask it to return a solution that is reasonable (this is subjective). It is a tool to help users. If the result is not satisfactory, the user can specify some implicit arguments by hand. Here, the code could certainly be modified to avoid this useless eta-expansion, but it would make it a little bit more complicated and less efficient. I don't know if this is worth the effort.
I don't think we should prefer one solution over another; my point is rather that I thought unification tried to preserve all solutions. If unification is only used to help users with implicits, I agree that this is not a problem, but I have the impression that the subject-reduction algorithm also uses unification to simplify the LHS constraints. Given that these simplifications do not preserve all solutions, isn't that a problem? Anyway, I have the impression that this is the only step of the unification algorithm that does not preserve all solutions, so one approach could be to simply remove it. However, I'm not familiar enough with the code to be sure that this is indeed the case.