
Rule name and priority

Gaspi opened this issue 5 years ago • 20 comments

Current state

Names

Right now a rule has a name that can be specified by the user using the syntax

{my_rule_name} [ctx] foo --> bar. 

When unspecified, the name head_symbol!line_number (for instance f!12) is chosen by default. There is no uniqueness guarantee and the name is only ever used when printing in verbose mode.

Rule priority

  • In Dedukti, whenever several rules match a redex, the rule that was defined first is always applied, i.e. even with decision trees, rules are always tried in the order of their declaration.
  • In lambdapi I believe it's also the case, but recent developments are questioning this choice.

I see two limitations here:

  • One might not need the rules to be applied in a given order but rather want them to be applied "eagerly": while inspecting a term, whenever any rule matches without requiring further inspection, immediately use that rule (instead of keeping on trying the first declared rule).
  • One might want to define a custom order of application of the rules, different from the order of declaration. Simply changing the order of declaration is not an option here since this order may be forced by type checking (one rule requiring the previous ones to type check). I see two possible cases:
    • Late addition of a "shortcut rule" supposed to speed up calculations but never used because subsumed by previously defined rules.
    • Costly (e.g. non left-linear) rule required early for type checking but that should really only be used as a last resort.

Propositions

  • Add a new optional integer field to a rule: its "priority".
  • This could be specified using a syntax similar to the rule name:
{name, 12} [x] f x --> x. (; Both specified, priority=12      ;)
{, 7}      [x] f x --> x. (; Default name  , priority=7       ;)
{name}     [x] f x --> x. (; Specified name, default priority ;)
  • The priority would then correspond to the "eagerness" to use this rule when reducing the head symbol.
  • When unspecified, it should default to something ever increasing, yet allowing a custom priority to be set ahead of, or slipped in between, consecutive rules (I'm thinking 100 + 10 * rule_number; see the sketch after this list).
  • Priorities need not be distinct. Rules with the same priority should be applied "eagerly" whenever they become the "top priority" rules during matching (i.e. when higher-priority rules have been discarded).
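
For concreteness, here is a small sketch of what this could look like with the proposed (not yet implemented) syntax, using made-up symbols f, g, h and the 100 + 10 * rule_number default:

{r1}      [x] f (g x) --> h x.  (; default priority: 100 + 10*1 = 110 ;)
{r2}      [x] f x     --> x.    (; default priority: 100 + 10*2 = 120 ;)
{r3, 115} [x] f (h x) --> g x.  (; explicit 115: slips in between r1 and r2 ;)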

In terms of performance, I believe this feature would only impact decision tree generation, in a very manageable way, and could speed up type checking.

In terms of implementation, I believe it is not too hard to adapt decision trees to take rule priority into account when selecting a column to inspect or when checking for a match.

Using Dedukti as a type checker should only be done with a confluent, terminating rewrite system, so the priority of rule application should not be a problem there.

Using Dedukti as a rewriting tool on a non-confluent system often requires some control over the rewriting strategy. This would be a first step in that direction.

Further questions

  • Should we forbid (or warn against) identical rule names?
  • Should the default name be guaranteed to be unique?

Gaspi avatar Apr 26 '19 09:04 Gaspi

@fblanqui @rlepigre

Giving names to rules will be useful for meta rewriting anyway.

francoisthire avatar Apr 26 '19 12:04 francoisthire

Hello Gaspard. You are raising interesting questions about the control of the reduction engine. But could you please first motivate this extension: in which application do you need this? Second, why do you open an issue in Dedukti and not in Lambdapi?

fblanqui avatar May 03 '19 07:05 fblanqui

The CIC encoding used for Matita or Coq might need that kind of thing. We are probably facing a blow-up triggered by a non-linear rule that is needed to type check other rules but should have a low priority.

francoisthire avatar May 03 '19 07:05 francoisthire

"might need" doesn't look enough to me... And this could be fixed by (if needed) modifying the strategy of construction of decision trees, by privileging the linear rules to the non-linear ones. See @gabrielhdt 's work.

fblanqui avatar May 03 '19 07:05 fblanqui

Well, this answers your second question. Since we don't know yet, we have to experiment, and for @Gaspi and me it is far easier to experiment on a code base that we know than on lambdapi. So maybe this issue will no longer be needed in lambdapi with Gabriel's work, but currently we have to experiment and we want to test quickly whether it is efficient or not.

If it does not work, or if we realize that we have another solution, then sure, we can drop this suggestion for now.

francoisthire avatar May 03 '19 07:05 francoisthire

I think that it will be easier to experiment with Gabriel's fork since there is only one function to modify (and no grammar change).

fblanqui avatar May 03 '19 07:05 fblanqui

And perhaps the heuristic function currently implemented by Gabriel already manages this. @gabrielhdt , could you please comment on this?

fblanqui avatar May 03 '19 07:05 fblanqui

I'm currently working on the handling of the non-linearity constraints within the heuristic. A first implementation triggered the check each time a non-linearity was encountered; I'm now trying to be able to delay them.

So eventually, what you suggest will be possible: tweaking the heuristic to postpone non-linear checks or to use them as early as possible.

gabrielhdt avatar May 03 '19 07:05 gabrielhdt

This issue arose when discussing with Gabriel about reduction strategy.

  • In (old) Dedukti, rules are always used in the order of declaration, which is a useful guarantee when one wants to control the strategy, but it isn't enough since type checking might constrain this order.
  • In lambdapi, @gabrielhdt suggested letting the decision tree generation re-order the rules to increase performance (delay non-linearity checks, etc.), which is also something the user might want when using a confluent system. My suggestion allows for both of these (1st: ever-increasing priorities, 2nd: constant priority) and more, for instance declaring the rules necessary for type checking first and then adding "shortcuts", or better orienting the rewriting in the case of overlaps (even when they are joinable, one side might be significantly faster).

Gaspi avatar May 05 '19 08:05 Gaspi

But could you please first motivate this extension: in which application do you need this? Second, why do you open an issue in Dedukti and not in Lambdapi?

  • I didn't want to duplicate this issue (no cross-repo issues on GitHub...). Besides, I wasn't sure this was relevant (yet) to lambdapi; I don't know which choices have been made there in terms of rewriting strategy.
  • I have a practical use case (below) but it's a bit involved. If anyone would like further details, maybe @francoisthire and I can show you (in PM) the complete system.
[A,B,F]
  c (cPi A (x => B x)) (x => F x) -->
  cLam (x:Code => c (B x) (F (u A x))).

[A,t] c A (u A t) --> t.

This system fails on the typing of the first rule:

Error while typing 'F[1] (Coq.u A[3] x[0])' in context:
[
  A : Code,
  B : x:Code -> Code,
  F : x:(D A[1]) -> D (B[1] (c A[2] x[0])),
  x : Code
].
---- Expected:
D (B[2] x[0])
---- Inferred:
D (B[2] (c A[3] (u A[3] x[0])))

As you can see, the rule declared second is required to type check the first. However, the second rule is non-linear and should ideally only be used as a "last resort".
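
For illustration only, here is how this system could look with the priority annotations proposed above (the syntax is not implemented, the rule names are made up, and the number 1000 merely assumes a convention where a larger value means "tried later"):

{c_pi}      [A,B,F]
  c (cPi A (x => B x)) (x => F x) -->
  cLam (x:Code => c (B x) (F (u A x))).

{c_u, 1000} [A,t] c A (u A t) --> t.
(; still declared second, as required by type checking,
   but annotated so that it is only tried as a last resort ;)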

Gaspi avatar May 05 '19 08:05 Gaspi

Why is the second rule non-linear? What are the types of c and u? It may be an interesting example for @wujuihsuan2016 .

If the second rule is required for type-checking the first one, then it should appear before!

It is a reasonable strategy to try to apply non-linear rules first. @gabrielhdt may implement this. In this example, I do not see the need for adding priorities.

fblanqui avatar May 09 '19 18:05 fblanqui

On the other hand, convertibility checks are expensive, aren't they? So we might want to avoid triggering them, if we can.

gabrielhdt avatar May 09 '19 20:05 gabrielhdt

I recently tried the sudoku, which contains the following eq rules:

eq x x --> T
eq x y --> F

which is a big problem if I don't keep the order of the rules. I have thus created a flag --ordered-rules which enforces rule order. It could be relevant, as a first feature, to declare this flag per symbol in the lp file. Fyi, here are the generated decision trees for eq, with and without the flag: [two images of the eq decision trees, not reproduced here]

gabrielhdt avatar Jun 12 '19 06:06 gabrielhdt

This rewrite system is not confluent and thus should not be accepted.

fblanqui avatar Jun 12 '19 09:06 fblanqui

I disagree, it should be accepted!

  1. As long as we postpone confluence and termination as meta-properties to be checked, the rewrite engine should accept the system. However, we don't have to specify its behavior on non-confluent systems, for example.

  2. This non-confluent system has the advantage of being fast with sharing. Say we use a unary representation of natural numbers: then eq 100 100 can reduce to true in one step with this system, while with the usual system it takes 100 steps (see the sketch after this list). Hence, for efficiency, there is an advantage to allowing such systems.

  3. I truly think that the rewrite engine and the type checker are two separate entities. In particular, to fit the theory, the rewrite engine should not rely on typing properties. Hence, such a system should be accepted by the rewrite engine. It is only when we ask Dedukti to type check a term that the latter could reject this system because it is not confluent.
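
For concreteness on point 2, here is a minimal sketch with a purely illustrative unary encoding of naturals (Nat, z, s, Bool and eq_c are made-up names; eq, T and F are as in the rules above):

Nat : Type.
z : Nat.
s : Nat -> Nat.
Bool : Type.
T : Bool.
F : Bool.

(; Usual confluent definition: eq_c n n needs about n matching steps. ;)
def eq_c : Nat -> Nat -> Bool.
[]    eq_c z     z     --> T.
[y]   eq_c z     (s y) --> F.
[x]   eq_c (s x) z     --> F.
[x,y] eq_c (s x) (s y) --> eq_c x y.

(; Non-confluent version: if the two arguments are shared,
   the first (non-left-linear) rule can fire in a single step. ;)
def eq : Nat -> Nat -> Bool.
[x]   eq x x --> T.
[x,y] eq x y --> F.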

For all these reasons, I think @gabrielhdt's proposition is great because it offers the best of both worlds: if one wants to type check, there is no need to preserve the order of the rules since confluence is assumed; if one only wants to compute, one can use a non-confluent system and get predictable behavior.

francoisthire avatar Jun 12 '19 09:06 francoisthire

LP is intended to be a proof assistant, not a rewrite engine. So, in this context, these rewrite rules are not acceptable.

  1. I don't see the point. The aim of LP is type-checking. Confluence is assumed.

  2. I agree with you but, then, the rewrite engine should be a separate program, not LP.

  3. This could be solved by using confluent conditional rules instead: eq x y -> true if x=y, eq x y -> false if x<>y. But, we are not going to do this in LP currently because this is not the priority.

fblanqui avatar Jun 12 '19 09:06 fblanqui

LP is intended to be a proof assistant, not a rewrite engine. So, in this context, these rewrite rules are not acceptable.

Hence, we have to consider that the current Dedukti and LP are two different tools with two different objectives. It won't be possible to port my tools to LP if you make that kind of assumption; in particular, the current dkmeta won't work.

I don't think it is an issue, but it is probably something we need to discuss further during a meeting for example.

This could be solved by using confluent conditional rules instead: eq x y -> true if x=y, eq x y -> false if x<>y. But, we are not going to do this in LP currently because this is not the priority.

With the last rule you propose, you will never be confluent since it is not stable by substitution. See this post by Raphaël.

francoisthire avatar Jun 12 '19 10:06 francoisthire

François, Dedukti itself has never been thought of as a rewriting engine alone. What you are talking about is not Dedukti but dkmeta, the rewriting part of Dedukti. So, yes, these two tools should be separated: a rewriting engine without type-checking on one side, and a type-checker on the other side that could use the rewriting engine as a library. And this is exactly what you did in Dedukti. What I propose is just to push this further.

Concerning Raphaël's counter-example, this is a good point. Indeed, non-convertibility tests are not a good idea. But doesn't this counter-example hold with your system too?

To come back to your point 2, what you say is not true because, to know that 100 and 100 are the same term, you need to go through all their subterms recursively, unless you use some sharing. So you will have 100 steps anyway. The point is therefore to add some sharing. @barras started to work on this last year.

fblanqui avatar Jun 13 '19 08:06 fblanqui

François, Dedukti itself has never been thought of as a rewriting engine alone. What you are talking about is not Dedukti but dkmeta, the rewriting part of Dedukti. So, yes, these two tools should be separated: a rewriting engine without type-checking on one side, and a type-checker on the other side that could use the rewriting engine as a library. And this is exactly what you did in Dedukti. What I propose is just to push this further.

Probably there is a misunderstanding here. What I am saying is that there should be a way to instrument the rewrite engine of lambdapi to allow this non-confluent system. This was Gabriel's proposition. However, I had the impression that you did not agree with it.

My point was that if the rewrite engine of lambdapi does not allow non-confluent systems, or if there is no predictable way to know which rule will be applied, then there is no chance of porting the dkmeta tool over to the lambdapi rewrite engine.

francoisthire avatar Jun 13 '19 09:06 francoisthire

I see your point. Developing a strategy language for the rewriting engine (RE) is an interesting topic in its own right. But there has already been a lot of work on this topic, so we should first review it before deciding to do anything. Meanwhile, we could indeed allow an option like --ordered-rules in the RE, but not in LP. For the moment, LP has to reject these rules anyway because they are not confluent, since we want the implementation to fit the theory. Now, one could try to prove that LP/R is equivalent to LP/R' where R' is a restriction of R by some strategy. This is an interesting question, but I don't think anybody is going to attack this problem soon.

fblanqui avatar Jun 13 '19 12:06 fblanqui