[EIP-4844] Set A Minimum Data Gasprice
The original fee market update PR https://github.com/ethereum/EIPs/pull/5707 initially included a `MIN_DATA_GASPRICE` of `10**8`. After some discussion (https://github.com/ethereum/pm/issues/647), it became clear that this is a contentious value. Thus, the fee market PR was merged with the smallest possible value of 1.
This PR proposes to set `MIN_DATA_GASPRICE` back to `10**8`, which is merely meant as a starting point for discussion about this choice. The PR will stay in draft until agreement is reached.
Hi! I'm a bot, and I wanted to automerge your PR, but couldn't because of the following issue(s):
(fail) eip-4844.md

| classification |
|---|
| updateEIP |

- eip-4844.md requires approval from one of (@vbuterin, @dankrad, @protolambda, @asn-d6, @lightclient, @inphi)
Rationale for `10**8`:

The value is not supposed to serve an active spam protection role. Instead, it is meant to only be hit in exceptional conditions, such as potentially right at the introduction of blob transactions, as well as during exceptional times of network instability (where only small / empty blocks can be created for a while). In both of these cases, not going all the way down to 1 reduces the potential duration of ramping back up (with consistently full blob blocks) to an equilibrium price from (at most) ~45min to (at most) ~15min. As network load is at its peak during that time, reducing it by 2/3 or more is potentially helpful. Given that these conditions are expected to be very rare, the benefit over a value of 1 is however minor.
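A back-of-the-envelope sketch of where the ~30-minute difference comes from, assuming the data gasprice can rise by at most ~12.5% per consistently full blob block (an EIP-1559-style cap) and 12-second slots; both parameters are illustrative assumptions, not quoted from the EIP:

```python
import math

# ASSUMPTIONS (for illustration only): max ~12.5% data gasprice increase per
# consistently full blob block, 12-second slots.
MAX_INCREASE_PER_BLOCK = 1.125
SLOT_SECONDS = 12

def ramp_minutes(start_price: float, end_price: float) -> float:
    """Minutes of consistently full blob blocks to climb from start_price to end_price."""
    blocks = math.log(end_price / start_price) / math.log(MAX_INCREASE_PER_BLOCK)
    return blocks * SLOT_SECONDS / 60

# The segment from 1 wei up to 10**8 alone takes roughly half an hour of full
# blocks; a floor of 10**8 skips exactly that segment of the ramp-up.
print(round(ramp_minutes(1, 10**8)))       # ~31 minutes
print(round(ramp_minutes(10**8, 10**10)))  # ~8 more minutes to reach e.g. 10**10
```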
The minimum of `10**8` per data gas was chosen because it results in a minimum blob price of 1e8 * 2**17 ≈ 1.3e13 wei, or 0.000013 ether. This is roughly equivalent to the cost of a minimal 21000 gas transaction at 1 gwei, or of an equally-sized 128KB calldata transaction at a gas price of 0.01 gwei. These values seem small enough to be reliably non-binding under normal conditions, while still being close enough to ensure most of the ramp-up cut-down benefit.
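A quick sanity check of that arithmetic (a sketch, not spec code; it uses `DATA_GAS_PER_BLOB = 2**17` from the EIP and assumes 16 gas per non-zero calldata byte):

```python
MIN_DATA_GASPRICE = 10**8   # proposed floor, in wei per unit of data gas
DATA_GAS_PER_BLOB = 2**17   # 131072 data gas (~128 KB) per blob
GWEI = 10**9
ETHER = 10**18

min_blob_price = MIN_DATA_GASPRICE * DATA_GAS_PER_BLOB
print(min_blob_price)          # 13107200000000 ≈ 1.3e13 wei
print(min_blob_price / ETHER)  # ≈ 0.000013 ether

# Comparison points from the paragraph above:
print(21_000 * 1 * GWEI)                    # minimal 21000 gas tx at 1 gwei ≈ 2.1e13 wei
print(int(128 * 1024 * 16 * 0.01 * GWEI))   # 128 KB of calldata at 0.01 gwei ≈ 2.1e13 wei
```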
If there is a concern with the adjustment speed, then we should fix the adjustment algorithm, not introduce hard coded limits. For example, the algorithm could "speed up" if it has been consistently changing in one direction for multiple blocks in a row. A simple implementation of this would be to track a "speed" variable somewhere that increases with every consecutive block that is moving in a particular direction.
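A minimal sketch of that "speed" idea (purely illustrative, not part of EIP-4844; the constants and function are made up): the per-block adjustment factor grows while blocks keep pushing the price in the same direction and resets as soon as the direction flips.

```python
BASE_FACTOR = 1.125   # hypothetical base per-block change, akin to EIP-1559's 12.5%
SPEED_BONUS = 0.0125  # hypothetical extra change per consecutive same-direction block

def update_price(price: float, blob_gas_used: int, target: int,
                 prev_direction: int, speed: int) -> tuple[float, int, int]:
    """One block's price update; returns (new_price, direction, speed)."""
    direction = 1 if blob_gas_used > target else -1 if blob_gas_used < target else 0
    # The speed counter grows only while blocks keep moving in the same direction.
    speed = speed + 1 if direction != 0 and direction == prev_direction else 0
    factor = BASE_FACTOR + SPEED_BONUS * speed
    if direction > 0:
        price *= factor
    elif direction < 0:
        price /= factor
    return price, direction, speed
```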
Imagine we choose 1e8 and then ETH ends up 1000x in price: now the floor is too far away and we are back to the original problem. Now imagine ETH does 1/1000x, and now the floor is too close. Arbitrary numbers like this either eventually won't provide their value or will eventually break things, and we should fix the root problem (the adjustment speed algorithm) rather than trying to guess at what the right thresholds will be.
I would argue that the min price should be `10**10` instead of `10**8` Wei/byte. This works out to $16/MB vs 16 cents/MB.
The reason for this is that the data blobs actually do impose a real cost on the network, which we are subsidizing using the currently productive part of the network (Ethereum L1). It is reasonable to exclude extremely spammy applications (like file sharing) in case there is no real Ethereum usage (rollups). I'd say in this case it is justified to be a little bit opinionated.
Even for rollups with no compression at all, this would work out as a cost of 0.4 cents per 250 byte transaction, so for any reasonable expectation of real Ethereum demand, this should still be a non-binding lower limit.
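For context, the dollar figures above line up under an assumed ETH price of roughly $1,600 (an illustrative assumption implied by the numbers; the actual rate obviously moves):

```python
ETH_USD = 1_600          # assumed exchange rate implied by the figures above
WEI_PER_ETH = 10**18

def usd_per_mb(wei_per_byte: int) -> float:
    return wei_per_byte * 1_000_000 / WEI_PER_ETH * ETH_USD

print(usd_per_mb(10**10))  # ≈ $16 / MB
print(usd_per_mb(10**8))   # ≈ $0.16 / MB (16 cents)

# Uncompressed 250-byte rollup transaction at the 10**10 floor:
print(250 * 10**10 / WEI_PER_ETH * ETH_USD)  # ≈ $0.004 (0.4 cents)
```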
> I would argue that the min price should be `10**10` instead of `10**8` Wei/byte. This works out to $16/MB vs 16 cents/MB.
If ETH is worth 1000x today, that would be $16,000/MB, and if it goes to 1/1000th in price, that is $0.0016/MB. The problem here isn't that we can't come up with a "reasonable" number today, it is that any reasonable number we come up with will likely be wrong eventually.
Either the network can handle the target throughput (I think 1MB/block at the moment?), or it cannot handle it. If it cannot handle it, then we should throttle down the throughput. I don't think we should care what people are storing on chain, and instead pick a target that ensures Ethereum clients are still operable by the target demographic we want to be able to operate a node.
> If ETH is worth 1000x today, that would be $16,000/MB, and if it goes to 1/1000th in price, that is $0.0016/MB. The problem here isn't that we can't come up with a "reasonable" number today, it is that any reasonable number we come up with will likely be wrong eventually.
Absolutely, but I think for an intermediate solution a very different reasoning applies. We don't expect 4844 in its current form to still be in place in 2 years.
> Either the network can handle the target throughput (I think 1MB/block at the moment?), or it cannot handle it.
That reasoning makes no sense to me. Even if the network can handle it (which I think we will test soon), this will increase the cost of running a full node by some amount. I currently run mine on 1 TB and for me this will probably mean pruning all the time or upgrading the disk. So I think it isn't a stupid idea to say we only want to use this if there's actually some economic value in it.
> Absolutely, but I think for an intermediate solution a very different reasoning applies. We don't expect 4844 in its current form to still be in place in 2 years.
Every Ethereum Hard Fork should be designed such that the system is indefinitely stable if that hard fork is the last hard fork ever. We should not be designing features that we know create a bad equilibrium or perverse incentives with plans to "fix it later". Also, if 4844 won't be like this in 2 years, then perhaps we shouldn't spend development time/effort on it because that is tech debt that will have to live forever.
That being said, I thought 4844 was designed specifically so that future sharding can build on top of it, not replace it? If so then this code/logic should continue on indefinitely.
> Even if the network can handle it (which I think we will test soon), this will increase the cost of running a full node by some amount. I currently run mine on 1 TB and for me this will probably mean pruning all the time or upgrading the disk. So I think it isn't a stupid idea to say we only want to use this if there's actually some economic value in it.
I agree with you that the cost of running a node should be of critical importance, and it is why I'm generally against 4844 because it increases operational costs which are already an absolutely massive problem for Ethereum. However, if we ignore that (which seems like what people want to do), then we should at least design things under the assumption that whatever feature we build will be used in the worst way possible. This is historically how we have tried to design everything (e.g., opcode pricing) and I think it is a very wise strategy. We shouldn't design it under the assumption that "people probably won't use all of that disk space".
IIUC, your argument is basically "if people use this extra space for L2 scaling, then it is worth it to burn 150GB of every operator's disk on it, but if people are using it to transiently store GIFs then we should not burn 150GB of disk". Unfortunately, we cannot control how people use this and we should not assume that the most valuable use of this feature will be L2 scaling. We should design things under the assumption that storing GIFs (or whatever worse thing you can think of) is the most valuable use of this feature and if we don't like that, we should either change the design or drop the feature.
We can always just tune the fee adjustment algorithm such that it rises quickly and falls slowly or some other more advanced mechanism that can detect the rate of change and adjust itself accordingly.
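A tiny illustrative sketch of the "rises quickly and falls slowly" variant (the constants are made up, not from any EIP):

```python
UP_FACTOR = 1.25     # hypothetical: fast increase while blocks are over target
DOWN_FACTOR = 1.06   # hypothetical: slow decrease while blocks are under target

def asymmetric_update(price: float, blob_gas_used: int, target: int) -> float:
    if blob_gas_used > target:
        return price * UP_FACTOR
    if blob_gas_used < target:
        return price / DOWN_FACTOR
    return price
```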
I think we have to agree to disagree on a few things here:
> Every Ethereum Hard Fork should be designed such that the system is indefinitely stable if that hard fork is the last hard fork ever.
Clearly not true or we would never have had a difficulty bomb. Please do not start the discussion on false premises. "I am against it" does not make it a fundamental design principle for Ethereum.
> That being said, I thought 4844 was designed specifically so that future sharding can build on top of it, not replace it? If so then this code/logic should continue on indefinitely.
4844 is designed so that sharding can be built on top of it with absolutely minimal changes to consensus (but fairly extensive changes to networking). It is very reasonable to expect that constants will be tuned in this process, and one of these constants can be the minimum price. Sharding will make the cost of verifying availability and storing the data much cheaper, as it does not fall on each individual node anymore, so it would probably make sense to lower the min price as well as increase throughput.
> We shouldn't design it under the assumption that "people probably won't use all of that disk space".
I have not claimed that. I am simply saying that if there is no economically interesting use of this space, then we might as well not use it for a while, keeping the full node cost lower, and the min price is a pragmatic mechanism to do this.
> but if people are using it to transiently store GIFs then we should not burn 150GB of disk
If you read carefully you would see that this is not what I'm saying.
If you want to argue against 4844, then this is probably not the right pull request to do this so please keep those arguments out of here.
> Every Ethereum Hard Fork should be designed such that the system is indefinitely stable if that hard fork is the last hard fork ever.
Agreed that this can't be said absolutely in all cases, or that every decision is made with this as the 100% dominant factor, but I do agree that this can and should be a guiding principle given the uncertainty in timelines, governance, and anything else that lies in the future.
I lean toward leaving this value at its minimum to allow the system to effectively price itself dynamically in any uncertain future.
With the throughput reduction for the initial Shanghai target (see https://github.com/ethereum/EIPs/pull/5863), I would be in favor of leaving the minimum data gasprice at 1 for now. At a max total blob size of 0.5 MB per block, avoiding / shortening the rare sustained load situation during ramp-up is less of a priority. We can always revisit the min gasprice for a future fork once we see real-world mainnet behavior, especially if those forks also raise the blob throughput.
I will recommend on https://github.com/ethereum/pm/issues/670 to close this PR for now.
Closing as per https://github.com/ethereum/pm/issues/670.