
Size of AST does not seem to contribute to costs

Open L-as opened this issue 3 years ago • 15 comments

Summary

See https://github.com/input-output-hk/plutus/blob/master/plutus-core/plutus-core/src/PlutusCore/Evaluation/Machine/ExBudget.hs#L57. @TotallyNotChase tried using the UPLC term λredm. λctx. λx. <huge> as a minting policy (before Vasil), and the costs didn't seem to change compared to a small term.

Steps to reproduce the behavior

Try evaluating the term.

Actual Result

Size doesn't affect costs.

Expected Result

The size should affect costs according to the comments, but maybe the comments are outdated?

Describe the approach you would take to fix this

No response

System info

No response

L-as · Jun 27 '22 16:06

In particular, I tried with an "always succeeds" minting policy, where I alternated the body between constants and lambdas (making sure no extra evaluation takes place). Whether the script was (\redm -> \ctx -> <huge constant>) or (\redm -> \ctx -> \i -> <huge body>), the CPU/Steps and Mem cost reported on cardanoscan's "Contracts" tab was always the same: 800 Mem and 208511 CPU. This does indeed account for the CEK machine costs (a cost for each term, plus a startup cost) and the builtin cost model (though there are no builtin function calls in this case). There is no extra cost depending on the actual size of the AST (e.g. constants, no matter the size, yield the same cost).

This does align with my understanding of the cost model, but the linked comment seems misleading in that it suggests there is another category of costing: one that depends on the size of the AST.
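To make that concrete, here's a toy substitution-based evaluator, purely a sketch (the AST, names, and cost accounting are all made up here; this is not the real plutus-core CEK machine), that charges one flat unit per machine step. A lambda or a constant is already a value and is returned in a single step, so the size of a body that is never entered cannot show up in the charge:

```haskell
-- Toy evaluator charging one unit per step, imitating the *shape* of the
-- CEK cost model: values (lambdas, constants) are returned in one step
-- regardless of how large their subterms are.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Con Integer
  deriving Show

-- Evaluate to a value, returning the value and the steps charged.
-- (Naive substitution; fine for the closed terms used here.)
eval :: Term -> (Term, Int)
eval t = case t of
  Con _   -> (t, 1)
  Lam _ _ -> (t, 1)
  Var x   -> error ("free variable: " ++ x)
  App f a ->
    let (fv, cf) = eval f
        (av, ca) = eval a
    in case fv of
         Lam x body ->
           let (r, cr) = eval (subst x av body)
           in (r, cf + ca + cr + 1)
         _ -> error "applying a non-function"

subst :: String -> Term -> Term -> Term
subst x v t = case t of
  Var y   -> if y == x then v else t
  Lam y b -> if y == x then t else Lam y (subst x v b)
  App f a -> App (subst x v f) (subst x v a)
  Con _   -> t

main :: IO ()
main = do
  -- A "huge" body: a thousand nested redexes that are never evaluated.
  let huge  = foldr (\i b -> App (Lam "x" b) (Con i)) (Con 0) [1 .. 1000]
      small = Con 0
  -- Both policies are values after the outer lambda: one step each.
  print (snd (eval (Lam "redm" (Lam "ctx" huge))))   -- 1
  print (snd (eval (Lam "redm" (Lam "ctx" small))))  -- 1
```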

TotallyNotChase · Jun 27 '22 17:06

Yes, the comment is wrong, the size doesn't contribute to costs.

michaelpj · Jun 27 '22 18:06

Maybe it should?

L-as · Jun 27 '22 18:06

Users already pay for AST size because it contributes to transaction size.
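As a back-of-the-envelope sketch, assuming the mainnet protocol parameters minFeeA = 44 lovelace per byte and minFeeB = 155381 lovelace (both are protocol parameters and subject to change), the size-proportional part of the fee looks like this:

```haskell
-- Minimal sketch of the linear min-fee formula, assuming mainnet
-- parameters minFeeA = 44 and minFeeB = 155381 (subject to change).
minFeeLovelace :: Integer -> Integer
minFeeLovelace txSizeBytes = 44 * txSizeBytes + 155381

main :: IO ()
main = do
  print (minFeeLovelace 1000)   -- tx with a small inline script: 199381
  print (minFeeLovelace 15000)  -- tx carrying a ~15 kB script:   815381
```

So a big AST carried inline in the transaction is paid for linearly in its serialized size.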

michaelpj · Jun 28 '22 08:06

Well, not with reference scripts, and it seems like rewarding smaller scripts might be logical.

L-as · Jun 28 '22 09:06

That is a good point about reference scripts.

michaelpj · Jun 28 '22 10:06

We've considered implementing aggressive inlining for Helios when generating UPLC. If, however, the size of the reference script gets an associated cost at some point in the future, this optimization won't be worth it. @michaelpj do you think that this could indeed change?

christianschmitz · Nov 15 '22 05:11

Unclear. We might have to do it for correctness, I don't know.

michaelpj · Nov 15 '22 12:11

Maybe @dcoutts has some thoughts? Currently the incentives are such that you make scripts of almost 16 kiB, but I'm not sure how the penalty of needing more inputs above that affects it, or what is best for Cardano, i.e. what aligns fees and costs the most.

L-as · Nov 25 '22 11:11

@michaelpj

That is a good point about reference scripts.

Unclear. We might have to do it for correctness, I don't know.

So what about that? This seems like an important issue.

@JaredCorduan maybe you have some insight?

effectfully · Feb 06 '23 22:02

I'm not very familiar with how the evaluator counts the execution units, but I do know that the script context is only given a hash of each reference script, so it is not too surprising to me that the costs are blind to the size (besides the initial cost of creating the reference script). I am not aware of any related correctness issues.
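For reference, in the V2 ledger API an output's reference script is exposed only as a hash (paraphrasing PlutusLedgerApi.V2; check the source for the authoritative definitions), so a script inspecting its context has no way to observe the referenced script's size:

```haskell
import PlutusLedgerApi.V2 (ScriptHash, TxOut (..))

-- All a script can learn about a reference script attached to an output:
-- its hash, never its body or its size.
refScriptHash :: TxOut -> Maybe ScriptHash
refScriptHash = txOutReferenceScript
```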

JaredCorduan · Feb 07 '23 00:02

To be clear, here's the potential issue:

  • Create a script S that is very large but evaluates very quickly, e.g. if false then <lots of stuff> else () (see the sketch below).
  • Create N outputs locked by S.
  • Spend those outputs using a reference script for S.
  • We do O(size of S) work in e.g. deserializing the script, but the user pays O(N) for evaluation and O(N) for tx size.

I don't think this is a big problem, but it was previously the case that the user would pay O(size of S) due to tx size fees.
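Here is a sketch of that "large but cheap to run" shape (hypothetical names; a real compiler would likely dead-code-eliminate the branch, so in practice one would construct the bloated UPLC term directly):

```haskell
{-# LANGUAGE NoImplicitPrelude #-}
module BigButFast where

import PlutusTx.Prelude

-- Sketch of the script S described above: the dead branch bloats the
-- serialized (reference) script, but the evaluator never enters it, so
-- it adds nothing to the execution units charged per spend.
validator :: BuiltinData -> BuiltinData -> BuiltinData -> ()
validator _datum _redeemer _ctx =
  if False
    then deadWeight  -- paid for in deserialization work on the node,
    else ()          -- but never in the user's evaluation budget

-- Stand-in for <lots of stuff>: imagine an arbitrarily large expression.
deadWeight :: ()
deadWeight = ()
```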

michaelpj · Feb 07 '23 11:02

We do O(size of S) work in e.g. deserializing the script, but the user pays O(N) for evaluation and O(N) for tx size.

This doesn't sound very good. If I'm reading it correctly (not sure about that), it means we can do more work than what we charge for, which is nasty.

It's been a year; do you know if we did any work on this front or if there was any progress at all? Do you have an opinion on whether we should prioritize this issue?

Also CC'ing @zliu41.

effectfully · Aug 14 '23 19:08

Nothing beyond my previous comment.

michaelpj · Aug 15 '23 09:08

I still don't think this is too bad. I'm unsure if it's something we could retrofit in, either.

michaelpj · Aug 15 '23 09:08