
Very slow rewrite on reflective contraction example (huge time in Infer.check / Eval.whnf)

Open NotBad4U opened this issue 3 months ago • 0 comments

Hi everyone,

I hit a performance issue in the Lambdapi kernel when using rewrite on a reflective proof of contraction for propositional disjunctions. Even on a very small example, the rewrite step becomes dramatically slower as I increase the size of a clause, and profiling suggests that most of the time is spent deep inside the conversion machinery (Infer.check → Eval.pure_eq_modulo → Eval.whnf).

What I’m trying to do

I’m implementing the logical contraction rule for propositional disjunction via proof by reflection. The code can be found here: issue.lp

Roughly:

  • Disjunctions like a ∨ b ∨ b ∨ b ∨ c ∨ c ∨ ⊥ are reified into clauses, i.e. lists of atoms p i indexing an environment of formulas.
  • A function den interprets a clause back as a disjunction over that environment.
  • A mergesort-style normalization turns a clause with duplicates into a sorted clause without duplicates.
  • A lemma contraction_correct states that interpreting a clause is preserved by this normalization, and I use:

```
compute (contraction (r ₁));
rewrite left contraction_correct;
reflexivity
```

where r is the result of reifying the original disjunction.
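For context, the setup looks roughly like this. This is a simplified Lambdapi sketch, not the actual contents of issue.lp: the names den, contraction and contraction_correct come from the description above, but the types Clause and Env and the exact statement of the lemma are only indicative.

```
// Illustrative sketch only; see issue.lp for the real definitions.
symbol Clause : TYPE;                  // clauses: lists of atom indices
symbol Env : TYPE;                     // environment of formulas
symbol den : Env → Clause → Prop;      // interpret a clause as a disjunction
symbol contraction : Clause → Clause;  // mergesort-style sort + deduplication

// Interpretation is preserved by normalization, so it can be used by rewrite:
symbol contraction_correct :
  Π (e : Env) (c : Clause), π (den e (contraction c) = den e c);
```

The rewrite left contraction_correct step is where the kernel has to check convertibility between the original disjunction and the normalized one, which is where the time goes below.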

I then experimented with larger clauses (e.g. with 10, 15, or 20 copies of b) using the commented examples example_10b, example_15b, and example_20b in test.lp. The comments in the file reflect what I observe: for the largest one, compute (contraction (r ₁)) takes on the order of 10s, but rewrite left contraction_correct alone takes about a minute on my machine, and the whole command takes around 1m24s.

Profiling

To understand where the time goes, I recompiled Lambdapi with LexiFi/landmarks and ran the kernel on two versions of the same script:

  • a “small” version (10 items in the disjunction),
  • a “large” version (20 items in the disjunction, corresponding to the problematic example_20b).

This produced two JSON traces, contraction10.json and contraction20.json.

These JSON files can be loaded in the Landmarks web viewer at: https://lexifi.github.io/landmarks/viewer.html
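For anyone who wants to reproduce the traces: this is a sketch of the standard landmarks setup from memory, not the exact commands I ran. The dune stanza and the OCAML_LANDMARKS options are the usual landmarks instrumentation mechanism; the binary path and the checked file name are illustrative.

```shell
# Enable the landmarks instrumentation backend by adding
#   (instrumentation (backend landmarks))
# to the relevant library/executable stanzas in Lambdapi's dune files, then:
dune build --instrument-with landmarks

# Run the instrumented binary; OCAML_LANDMARKS turns on automatic
# instrumentation and dumps a JSON trace usable in the web viewer.
OCAML_LANDMARKS="auto,format=json,output=contraction20.json" \
  ./_build/default/src/lambdapi.exe check test.lp
```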

Trace analysis:

From contraction10.json, for the small example:

  • Most of the time sits under Command.handle → Tactic.handle → Tactic.tac_refine / Tactic.tac_solve → Infer.check.
  • Inside Infer.check, a noticeable portion of time is spent in:
    • Eval.pure_eq_modulo and Eval.whnf,
    • LibTerm.eq_alpha,
    • plus some Term.unfold, Term.subst, coercions, etc.

  Overall the cost is non-trivial but still reasonable for this small script.

From contraction20.json, for the larger example, the profile changes dramatically:

  • Command.handle is almost entirely dominated by a handful of Tactic.handle / Tactic.tac_refine calls whose cost is now huge, but I think that is expected, since the real problem lies deeper in the subcalls.
  • Almost all of that time is spent in a single Infer.check → Eval.pure_eq_modulo → Eval.whnf chain.
  • Inside Eval.whnf, there is a massive explosion in:
    • Term.unfold (hundreds of millions of calls),
    • Term.subst and Term.msubst,
    • Term.mk_Appl / Term.mk_TRef / Term.add_args,
    • and LibTerm.eq_alpha / Eval.eq_modulo (millions of calls).

So the slowdown appears to come from conversion/equality checking during the rewrite left contraction_correct step, not from the reflective normalization function itself (compute (contraction (r ₁)) is comparatively much cheaper).

I’m happy to run more experiments or add more instrumentation if that helps narrow it down. If you want to set up the Landmarks profiling library, I can definitely help. The problem we hit here is similar to the one we had in the reflective proof for linear arithmetic. This issue is a bottleneck for several of my SMT-LIB benchmark samples.

NotBad4U · Nov 25 '25 15:11