Size of AST does not seem to contribute to costs
Summary
See https://github.com/input-output-hk/plutus/blob/master/plutus-core/plutus-core/src/PlutusCore/Evaluation/Machine/ExBudget.hs#L57
@TotallyNotChase tried using the UPLC term λredm. λctx. λx. <huge> as a minting policy (before Vasil), and the costs didn't seem to change compared to a small term.
Steps to reproduce the behavior
Try evaluating the term.
Actual Result
Size doesn't affect costs.
Expected Result
They should according to the comments, but maybe the comments are outdated?
Describe the approach you would take to fix this
No response
System info
_
In particular, I tried with an "always succeeds" minting policy, where I alternated the body between constants and lambdas (making sure no extra evaluation takes place). Whether the script was (\redm -> \ctx -> <huge constant>) or (\redm -> \ctx -> \i -> <huge body>), the CPU/Steps and Mem cost reported on Cardanoscan's "Contracts" tab was always the same: 800 Mem and 208511 CPU. This does indeed account for the CEK machine costs (a cost for each term, plus a startup cost) and the builtin cost model (though there are no builtin function calls in this case). There is no extra cost depending on the actual size of the AST (e.g. constants, no matter their size, yield the same cost).
This does align with my understanding of the cost model, but the comment linked seems misleading in that it suggests there is another category of costing - one that depends on the size of the AST.
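To make the behaviour described above concrete, here is a minimal toy sketch in plain Haskell (not the real plutus-core API; the module name, cost constants, and evaluator are all hypothetical): an evaluator that charges a startup cost plus a flat cost per machine step, which is roughly the costing scheme observed. Nothing in it inspects the size of a sub-term, so a huge unevaluated constant or lambda body adds nothing to the budget.

```haskell
{-# LANGUAGE LambdaCase #-}
-- Toy model of "charge per step, plus a startup cost": hypothetical names and
-- numbers, not the real plutus-core CEK machine or its cost model.
module ToyCosting where

data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Con Integer          -- stands in for a constant of arbitrary size
  deriving Show

startupCost, stepCost :: Integer
startupCost = 100
stepCost    = 1

-- Evaluate a term, returning the value and the total budget spent.
-- The budget only counts steps taken; it never looks at how big a term is.
eval :: Term -> (Term, Integer)
eval t0 = let (v, steps) = go t0 in (v, startupCost + steps)
  where
    go = \case
      App f a ->
        let (f', c1) = go f
        in case f' of
             Lam x body ->
               let (v, c2) = go (subst x a body) in (v, c1 + c2 + stepCost)
             _ -> (App f' a, c1 + stepCost)
      v -> (v, stepCost)  -- Var / Lam / Con are already values

    -- capture-unaware substitution; good enough for closed example terms
    subst x s = \case
      Var y   | y == x -> s
      Lam y b | y /= x -> Lam y (subst x s b)
      App f a          -> App (subst x s f) (subst x s a)
      other            -> other

-- Both of these return a budget of 105, regardless of what the body is:
--   eval (App (App (Lam "r" (Lam "c" (Con 42)))           (Con 0)) (Con 0))
--   eval (App (App (Lam "r" (Lam "c" (Lam "i" (Con 42)))) (Con 0)) (Con 0))
```

Under this toy model a two-lambda policy returning a constant and a three-lambda one cost exactly the same, mirroring the identical 800 Mem / 208511 CPU readings above.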
Yes, the comment is wrong, the size doesn't contribute to costs.
Maybe it should?
Users already pay for AST size because it contributes to transaction size.
Well, not with reference scripts, and it seems like rewarding smaller scripts might be logical.
That is a good point about reference scripts.
We've considered implementing aggressive inlining for Helios when generating UPLC; however, if the size of the reference script gets an associated cost at some point in the future, this optimization won't be worth it. @michaelpj do you think that this could indeed change?
Unclear. We might have to do it for correctness, I don't know.
Maybe @dcoutts has some thoughts? Currently the incentives are such that you make scripts of almost 16 kiB, but I'm not sure how the penalty of needing more inputs above that affects it, and what is best for Cardano, i.e. what aligns fees and costs the most.
@michaelpj
That is a good point about reference scripts.
Unclear. We might have to do it for correctness, I don't know.
So what about that? This seems like an important issue.
@JaredCorduan maybe you have some insight?
I'm not very familiar with how the evaluator counts the execution units, but I do know that the script context is only given a hash of each reference script, so it is not too surprising to me that the costs are blind to the size (besides the initial cost of creating the reference script). I am not aware of any related correctness issues.
To be clear, here's the potential issue:
- Create a script S that is very large but evaluates very quickly, e.g. if false then <lots of stuff> else ().
- Create N outputs locked by S.
- Spend those outputs using a reference script for S.
- We do O(size of S) work in e.g. deserializing the script, but the user pays O(N) for evaluation and O(N) for tx size.
I don't think this is a big problem, but it was previously the case that they would pay O(size of S) due to tx size fees.
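For reference, here is a hedged sketch in plain Haskell (not PlutusTx) of the shape of script meant above; in compiled UPLC the dead branch would be a literally huge constant or sub-term, so the serialized script is big even though evaluation short-circuits immediately:

```haskell
-- Illustrative only: the dead branch stands in for <lots of stuff>, i.e. a huge
-- constant or sub-term in the compiled script. It is never evaluated, so the
-- execution budget stays tiny, yet deserialization still processes the whole AST.
bigButCheap :: ()
bigButCheap =
  if False
    then error "<lots of stuff>"
    else ()
```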
We do O(size of S) work in e.g. deserializing the script, but the user pays O(N) for evaluation and O(N) for tx size.
This doesn't sound very good. If I'm reading it correctly (not sure about that), it means we can do more work than what we charge for, which is nasty.
It's been a year, do you know if we did any work on this front or if there was any progress at all? Do you have an opinion on whether we should prioritize this issue?
Also CC'ing @zliu41.
Nothing beyond my previous comment.
I still don't think this is too bad. I'm unsure if it's something we could retrofit in, either.