
Feature Gate: round down compute-unit-price to the nearest 1_000 microlamports

Open tao-stones opened this issue 1 year ago • 43 comments

Description

Problem

Data from mainnet-beta shows that compute_unit_price is too granular:

  1. values of compute_unit_price scatter across the range [0, 200_000+] in a rather random fashion. Ideally, users should set the value based on recent blocks' and/or accounts' min priority fee (available via RPC), and only when needed;
  2. the priority_fee, calculated by multiplying compute_unit_price and compute_unit_limit, is too small; a side effect is that users request a lot more CU than their transactions actually need.

Proposed solution

Regulate compute_unit_price by rounding it down to the nearest 1_000 microlamports. The effect is that users should set compute_unit_price in increments of 1_000. A transaction with a compute_unit_price of less than 1_000 will have no priority and will not be charged a priority fee.

Examples: For a transaction with default 200_000 CU compute_unit_limit:

| compute_unit_price set (microlamports) | tx currently prioritized by value | current priority fee (lamports) | rounded down to nearest 1_000 microlamports | tx WOULD BE prioritized by value | WOULD BE priority fee (lamports) |
|---|---|---|---|---|---|
| 1 | 1 | 1 | 0 | 0 (i.e. no priority) | 0 |
| 999 | 999 | 200 | 0 | 0 (i.e. no priority) | 0 |
| 1_000 | 1_000 | 200 | 1_000 | 1_000 | 200 |
| 1_999 | 1_999 | 400 | 1_000 | 1_000 | 200 |
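
A minimal Rust sketch of the rounding and fee math illustrated above (hypothetical helper names, not the actual runtime code; assumes the fee is rounded up to whole lamports):

```rust
/// Round a requested compute-unit price down to the nearest 1_000 microlamports,
/// as proposed above; anything below 1_000 becomes 0, i.e. no priority.
fn round_down_cu_price(micro_lamports_per_cu: u64) -> u64 {
    (micro_lamports_per_cu / 1_000) * 1_000
}

/// Priority fee in lamports = rounded price * CU limit, converted from
/// microlamports and rounded up to whole lamports (hypothetical helper).
fn priority_fee_lamports(cu_price_micro_lamports: u64, cu_limit: u64) -> u64 {
    let micro_lamports = round_down_cu_price(cu_price_micro_lamports) * cu_limit;
    (micro_lamports + 999_999) / 1_000_000
}

fn main() {
    // Matches the "WOULD BE" columns above for a 200_000 CU limit:
    assert_eq!(priority_fee_lamports(999, 200_000), 0);     // rounds down to 0: no priority
    assert_eq!(priority_fee_lamports(1_000, 200_000), 200);
    assert_eq!(priority_fee_lamports(1_999, 200_000), 200); // rounds down to 1_000
}
```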

The proposed change needs to be feature gated because it changes the actual priority fee charged to the payer account.

Feature ID

6J6GS57v5q4CnPrLgDMsyZLFfpPEBPZ66h8efDEpesPk

Activation Method

Single Core Contributor

Minimum Beta Version

1.16.0

Minimum Stable Version

No response

Testnet Activation Epoch

No response

Devnet Activation Epoch

No response

Mainnet-Beta Activation Epoch

No response

tao-stones avatar May 02 '23 21:05 tao-stones

hi @taozhu-chicago, i don't understand why this is a good solution:

values of compute_unit_price scatter across the range [0, 200_000+] in a rather random fashion. Ideally, users should set the value based on recent blocks' and/or accounts' min priority fee (available via RPC)

Most people I know use 1-10,000 microlamports. 1,000,000+ sounds like a severe congestion case on-chain, or wallets / UIs just blindly setting a fixed value. Do you have more data? I think it's important to look at users in more differentiated segments:

  1. <10tx/week occasional user
  2. <100tx/week power user
  3. <1000tx/week bot running every hour or absolute power user
  4. <10,000 tx/week bot running every few minutes
  5. ≥10,000 tx/week HFT (around 100B CU/week, 1 microlamport = 0.0001 SOL/week)
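
For reference, the arithmetic behind the last bucket, taking the ~100B CU/week figure at face value:

$$100\,\text{B CU/week} \times 1\ \mu\text{lamport/CU} = 10^{11}\ \mu\text{lamports} = 10^{5}\ \text{lamports} = 10^{-4}\ \text{SOL/week}$$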

the priority_fee, calculated by multiplying compute_unit_price and compute_unit_limit, is too small; a side effect is that users request a lot more CU than their transactions actually need.

Most CLOB workloads are dependent on account state, e.g. if you order $1M of any asset on openbook you are going to match with more than 1 party, hence there's a dynamic component that is very hard to limit by a fixed number. As there's no way for programs to terminate a transaction before it hits the gas limit gracefully, people are just forced to over-estimate for the worst-case. I don't see how this fee change improves the actual issue and gives developers the tools to actually estimate the CU usage of their transactions.

mschneider avatar May 08 '23 11:05 mschneider

Hey @mschneider, good to have your input. The easy one first: this doesn't aim to fix that, but to encourage requesting accurate, or at least more reasonable, CU limits. I'd guess it'll take multiple approaches and much work, but eventually the requested CU limits need to be at a reasonable level.

i think it's important to look more differentiated at users in segments

You mean the priority fee should be charged at different levels? Also, for the HFT case in your example, don't you think 0.0001 SOL/week is too insignificant?

tao-stones avatar May 09 '23 01:05 tao-stones

i think the right scale is somewhere in between!

1 microlamport in priority fee = 1e-5 SOL / week
1 lamport in priority fee = 10 SOL / week

probably e-2 is a good reference value

mschneider avatar May 09 '23 17:05 mschneider

Yea, an in-between value does sound safe / comfortable, but replacing one fractional unit with another fractional unit doesn't sound like a final solution, tbh.

I'd argue that 10 SOL/week has a better chance than 0.01 SOL/week of incentivizing devs to dig in and refine, or even redesign, their transactions.

tao-stones avatar May 09 '23 22:05 tao-stones

If the goal is to allow users to set accurate compute limits I believe the best way would be to expose the CU consumed in a sysvar. This way users can decide to stop consuming CU before reaching the limit
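
A rough sketch of how a program might use that, assuming a hypothetical remaining_compute_units() query that does not exist today (all names below are placeholders):

```rust
const ESTIMATED_CU_PER_ORDER: u64 = 20_000; // placeholder per-instruction estimate

struct Order; // placeholder order type

// Placeholder for the proposed runtime query (sysvar or syscall); not a real API.
fn remaining_compute_units() -> u64 {
    200_000
}

// Placeholder for the program's order-placement logic.
fn place_order(_order: &Order) {}

fn process_orders(orders: &[Order]) {
    for order in orders {
        // Stop gracefully before the transaction would blow past its CU limit,
        // instead of aborting the whole tx with a compute-budget error.
        if remaining_compute_units() < ESTIMATED_CU_PER_ORDER {
            break;
        }
        place_order(order);
    }
}
```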

mschneider avatar May 09 '23 22:05 mschneider

I'd argue that 10 SOL/week has a better chance than 0.01 SOL/week of incentivizing devs to dig in and refine, or even redesign, their transactions.

By introducing way too large buckets, you will prevent real price discovery. We see the same with order and price lots on orderbooks: the ideal increment is usually 1 bps, or 1e-4. The best example would be volume on SOL/USD on Serum, which we fixed with the openbook launch.

mschneider avatar May 09 '23 23:05 mschneider

Sample TX CU: 50,000
$1e-4 in lamports: ~5,000
Min increment: 0.1 lamports / CU

mschneider avatar May 10 '23 17:05 mschneider

Hey, Jayant from Pyth here. I understand the desire to have people set CU limits accurately, but I don't think this proposal is going to solve that problem. As @mschneider points out, you have to set the CU limit to the maximum that the transaction could possibly consume. In Pyth's case, we have an expensive aggregation operation that needs to be performed once per slot. Most price update transactions use very few CUs, except for the one that happens to trigger aggregation. However, since we can't tell up front which transactions will trigger aggregation, we have no choice but to set the CU limit conservatively (as if every transaction triggers aggregation).

jayantk avatar May 15 '23 17:05 jayantk

In Pyth's case, we have an expensive aggregation operation that needs to be performed once per slot. Most price update transactions use very few CUs, except for the one that happens to trigger aggregation. However, since we can't tell up front which transactions will trigger aggregation, we have no choice but to set the CU limit conservatively (as if every transaction triggers aggregation).

Longer term, would the functionality proposed in https://github.com/solana-foundation/solana-improvement-documents/pull/16 help with that? The Pyth program could potentially rebate fees when aggregation is not triggered.

mvines avatar May 15 '23 23:05 mvines

Hi guys,

Sorry to jump in like this, maybe a quick intro first before I comment: I do market-making/HFT on Solana, currently have the most volume on all exchanges on Solana, and most of the liquidity on all exchanges except one. I'd guess a good percentage of all TXs on Solana are directly/indirectly related to my system. It's a small ecosystem right now, a lot of the other MMs left but I'm doing my best to keep supporting different protocols even when it's not profitable.

Based on my own usage, 1_000_000 increments is extremely high. You can see my TXs on openbook for example, the priority fee is 50k-200k usually.

IMO I'd increase to 1_000-microlamport increments (i.e. millilamports), but definitely not 1 full lamport, because then it won't be sustainable and I'll stop paying fees completely.

There's a good analogy for this on the orderbook where the "tick size" (price increments) needs to be granular enough to allow market makers to quote efficiently (~0.5 bps of the price is good), but not too granular/small that it leads to abuse (you can bid 0.0001 bps higher than the previous best price to get the lead) which defeats the time-priority part of the orderbook.

I appreciate that this is a public/open debate, a big reason why I got into DeFi is the fact there's no gatekeeping and everyone can take part.

I hope my input was useful.

Thank you spacemonkey

SpaceMonkeyForever avatar May 24 '23 14:05 SpaceMonkeyForever

Hi @SpaceMonkeyForever, thanks for contributing!

You can see my TXs on openbook for example, the priority fee is 50k-200k usually.

Can you provide a few examples of your txs? I'm assuming by "priority fee" you mean a compute_unit_price of 50k-200k microlamports?

There's a good analogy for this on the orderbook where the "tick size" (price increments) needs to be granular enough to allow market makers to quote efficiently (~0.5 bps of the price is good), but not too granular/small that it leads to abuse

Yes, identifying the right increment is the key. However, keep in mind that one doesn't always have to pay a prioritization fee for every transaction, only when congested.

tao-stones avatar May 24 '23 14:05 tao-stones

Hey, thanks for replying!

Yes, compute_unit_price. I don't want to link my wallets publicly, but if you want to see example TXs, it will be easy to find my accounts if you check openbook makers (openserum.io is good).

Yes, I understand it's only used when needed, but 1_000_000 is still a large increment IMO. When doing HFT, this doesn't help much, because I cannot wait for my TXs to start timing out before adding a fee. Market makers have to be quoting all the time, every second, or else you start losing money and you stop providing liquidity (you can still trade as a taker, i.e. toxic flow, which will hurt the protocol even more than pulling out liquidity).

IMHO I would rather keep it the same as it is now or increase to 1_000 increments or maybe 10_000 increments at most.

SpaceMonkeyForever avatar May 24 '23 14:05 SpaceMonkeyForever

If compute_unit_price is set to 50K - 200K microlamports, then increasing its unit to 1 lamport is a x20 - x5 increase on compute-unit-price alone; at the same time, if compute_unit_limit can be reduced by x3 (spot-checked a few txs, looks possible), we are looking at a x7 - x2 prioritization fee increase, assuming every transaction still pays a prioritization fee.

By utilizing the RPC (https://docs.solana.com/api/http#getrecentprioritizationfees) to determine when and how much prioritization fee might be needed before sending transactions, there is more room to further reduce overall fees.
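
A rough client-side sketch of that flow, assuming the solana-client / solana-sdk Rust crates (the max-of-recent-fees heuristic is purely illustrative):

```rust
use solana_client::rpc_client::RpcClient;
use solana_sdk::{compute_budget::ComputeBudgetInstruction, instruction::Instruction, pubkey::Pubkey};

// Ask the RPC node for recent prioritization fees on the accounts this tx writes
// to, then attach compute-budget instructions sized to what the tx actually needs.
fn build_compute_budget_ixs(
    rpc: &RpcClient,
    writable_accounts: &[Pubkey],
    cu_limit: u32, // a tight estimate of the CUs this transaction really needs
) -> Result<Vec<Instruction>, Box<dyn std::error::Error>> {
    let recent = rpc.get_recent_prioritization_fees(writable_accounts)?;
    // Use the max fee seen over recent slots; a real client might pick a percentile.
    let cu_price = recent.iter().map(|f| f.prioritization_fee).max().unwrap_or(0);

    let mut ixs = vec![ComputeBudgetInstruction::set_compute_unit_limit(cu_limit)];
    if cu_price > 0 {
        // Only pay for priority when recent fees suggest there is contention.
        ixs.push(ComputeBudgetInstruction::set_compute_unit_price(cu_price));
    }
    Ok(ixs)
}
```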

It would be great if you could explore possibilities for reducing the requested compute_unit_limit of your transactions, and opportunities to not always set priority.

tao-stones avatar May 24 '23 15:05 tao-stones

@taozhu-chicago

Right, I know about the RPC endpoint, it's useful for UI apps for sure. The compute limit is already set to a minimum to maximise chance of delivery and minimise the fees.

I think you are focused on the general use-case; I'm talking about a market-maker use-case. The best I can do is estimate what priority fee might be needed.

Also, it seems your overall goal is to increase the fees, which would kill all orderbook liquidity on the chain and only AMMs would remain (what Ethereum has). Aside from killing all orderbook protocols like openbook, I can go into detail on why that's a very bad idea in general, but it's better to stay on topic, and we can all agree that we need orderbooks and HFT (not CEX-level but still good) on Solana; it's what makes it better than all other chains.

If you want to cut down on compute, IMHO there are 2 ideas I can think of:

  • Increasing CU limit without adding a priority fee does not increase the flat fee at the moment, maybe this is something you can get rid of.
  • Maybe you can decrease the default compute limit because for most of the protocols I interact with, I need a lot less than that. Some protocols like Zeta and Drift do require a high limit though, but that shouldn't be the default IMHO.

SpaceMonkeyForever avatar May 24 '23 18:05 SpaceMonkeyForever

+1 to exposing CU consumption if we want users to request tighter CU limits

crispheaney avatar May 25 '23 14:05 crispheaney

+1 to exposing CU consumption if we want users to request tighter CU limits

Agreed. Simulations or careful design help in some use cases, but may not be enough for others. SIMD #49 also helps better utilize the requested CU limit. Any suggestions are welcome.

tao-stones avatar May 25 '23 14:05 tao-stones

Also, it seems your overall goal is to increase the fees,

The overall goal is to continuously improve the network so it serves everyone better. To that end, the motivation of this proposal is to set the prioritization fee at a reasonably meaningful level, as described in SIMD #50. The ideal outcome of adjusting the compute_unit_price unit from microlamports to lamports is not to increase fees (TXs not using priority are not impacted anyway), but to incentivize:

  1. paying for priority only when necessary;
  2. requesting CU limits that match what is actually needed.

Open to suggestions ofc.

Increasing CU limit without adding a priority fee does not increase the flat fee at the moment, maybe this is something you can get rid of.

Charging the base fee based on CU limits is coming (SIMD 19). That makes it more relevant to address item 2 above as early as possible.

Maybe you can decrease the default compute limit because for most of the protocols I interact with, I need a lot less than that. Some protocols like Zeta and Drift do require a high limit though, but that shouldn't be the default IMHO.

Only ~6% of non-vote transactions explicitly set CU limits right now. IMHO, this needs to increase to (close to) 100% before adjusting the default value.

tao-stones avatar May 25 '23 15:05 tao-stones

Can you elaborate on why higher CU limits are problematic? Will make it easier to brainstorm suggestions!

crispheaney avatar May 25 '23 15:05 crispheaney

paying for priority only when necessary;

I imagine you'd end up getting less fees overall. That sounds bad to me, but maybe I'm not getting it.

requesting CU limits that match what is actually needed.

We already do this when setting a priority fee, AFAIK, to reduce costs; I definitely do. However, the limit needs to be high enough that the TX will not fail even in the worst-case scenario, as others already mentioned.

So, as far as I can see, the two goal points are already covered. IMHO making the network better for everyone includes keeping liquidity providers on the chain, and especially orderbook exchanges, which are what make Solana unique. Right now, most MMs have left the chain, which is one reason why I can have the majority of all orderbook liquidity on Solana.

Charging the base fee based on CU limits is coming (https://github.com/solana-foundation/solana-improvement-documents/pull/19). That makes it more relevant to address item 2 above as early as possible.

Right, I didn't know you were making the base fee dynamic; honestly, that too sounds bad for market-making on Solana if it ends up increasing the fee too much, especially since market-making involves packing multiple instructions into one TX. I meant that, right now, increasing the compute limit (without adding a priority fee) beyond the default still costs the same default fee, which shouldn't be the case.

Only ~6% of non-vote transactions explicitly set CU limits right now. IMHO, this needs to increase to (close to) 100% before adjusting the default value.

makes sense.

I've asked other people, cris being one, to add their opinion too. Hopefully we will get more of the perspective of exchange owners and traders since those are the most active apps on Solana.

SpaceMonkeyForever avatar May 25 '23 16:05 SpaceMonkeyForever

Can you elaborate on why higher CU limits are problematic? Will make it easier to brainstorm suggestions!

  • requesting more CU than actually needed wastes block space (a block has CU limits), and therefore lowers throughput (rough numbers below);
  • with CU-based fees (SIMD 19, SIMD 16, and SIMD 50), higher CU limits mean higher fees.
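
To put rough numbers on the first point, assuming the ~48M CU per-block limit and a tx that requests the default 200_000 CU but, say, only uses ~100_000: packing by requested CU halves the block's effective capacity.

$$\frac{48{,}000{,}000\ \text{CU}}{200{,}000\ \text{CU requested/tx}} = 240\ \text{tx/block} \quad\text{vs.}\quad \frac{48{,}000{,}000\ \text{CU}}{100{,}000\ \text{CU used/tx}} = 480\ \text{tx/block}$$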

tao-stones avatar May 25 '23 16:05 tao-stones

are block CU limits based on CU used or CU requested? i was under the impression it's the former, but could be wrong.

eugene-chen avatar May 25 '23 16:05 eugene-chen

If we're in a market regime where blockspace demand is low enough that the price lets people spam funny logs or drastically over-request CUs, I don't think this is an issue.

The market's response--if there is actually more demand than is being serviced--is to increase the price by the appropriate amount. If folks (users, bots, etc.) sending transactions aren't getting them included at a satisfactory rate, they need to increase the priority fee.

note: the UX with this type of priority fee will always be quite poor, since you will always be guessing how much priority fee you need to add. A 1559-like mechanism should make a lot more sense.

eugene-chen avatar May 25 '23 16:05 eugene-chen

I imagine you'd end up getting less fees overall.

In a general sense, leaders do extra work for TXs with priority, and the prioritization fee compensates for that extra work. Less prioritization, less fee.

As for CU limits, I think there is room for improvement. Picking a tx at random, it only used half of its requested CU. It is understood that a TX has to cover the worst-case scenario, but we can work out better ideas, such as SIMD 49.

tao-stones avatar May 25 '23 16:05 tao-stones

are block CU limits based on CU used or CU requested? i was under the impression it's the former, but could be wrong.

Currently there is QoS adjustment logic (issue #31379) to pack blocks with the actually used CU. But the main reason this extra logic exists is that requested CU are way off. In the case of a bankless leader, it will have to rely fully on requested CU to make packing decisions.

tao-stones avatar May 25 '23 16:05 tao-stones

Picking a tx at random, it only used half of its requested CU

Exactly what you said: you have to cover the worst-case scenario. Most of the time, 150k is used, but TXs do hit over 250k sometimes, and you'll lose money when that happens. SIMD 49, as far as I understand, would make it so that I place orders up to the remaining CU, i.e. some orders at the end might not make it to the book. It's more of a trade-off than a solution IMO, because it does remove liquidity from the book, especially the backstop orders at the end, which are important for liquidations to work properly on margin protocols like Mango.

SpaceMonkeyForever avatar May 25 '23 17:05 SpaceMonkeyForever

Without understanding what it takes to trade successfully, and with full respect, I'm wondering if it's possible for such a transaction to be broken up into smaller TXs, coordinated by signals generated off-chain? Akin to how a traditional trading/MM-ing app builds order books locally via feeds, then sends orders based on signals generated from the local order books.

tao-stones avatar May 25 '23 18:05 tao-stones

We don't see a ton of this today, but having lots of logic on-chain (so state can be read at execution time) creates a difference between CU requested (max CU used) and actual CU used.

For example, I may have a smart contract where I ping it with what I think the fair price is. The contract's logic decides whether it should cancel orders, place new orders, etc., depending on the input fair price + the current state of the market, sometimes using a lot of CUs and sometimes using not that many.

eugene-chen avatar May 25 '23 23:05 eugene-chen

I'm wondering if it's possible for such a transaction to be broken up into smaller TXs

I'm not sure if I understand correctly. My point above was to comment on the proposal (SIMD 49) that would allow the TX to specify "process instructions up to compute limit N" rather than running out of compute; maybe I misunderstood it though. What I was trying to say is that if the compute requirement suddenly jumps (since it's dynamic, I cannot know for sure that a TX will require only 150k and not 250k), and I use that new "process ixs up to" feature, then some orders will not be placed because their ixs will be omitted, especially the ones at the end with bigger sizes, which are needed for liquidations to work properly on e.g. Mango (liquidations usually require big orders because there's a big market order that needs to be filled).

About splitting orders over multiple TXs in general though: one of the best features of Solana relative to everywhere else I trade(d) is that I can replace all orders in a single TX atomically; this is extremely useful. Most CEXs don't support this, Binance being one big exception, but it's still not as smooth as Solana (it's modify, not cancel+place, so you need to get order ids right). It is possible to split up order updates over e.g. 2 TXs, but then isn't that creating more congestion for you? Because, again, I don't know when I should or shouldn't split, so I'll just always split, and now my TX rate is doubled everywhere, which will be a significant increase in total TXs sent to the chain every second.

SpaceMonkeyForever avatar May 26 '23 00:05 SpaceMonkeyForever

Right, this model is common and creates large variance in requested CU limits. It would be great if the decision could be made off-chain: you'd have one small, static tx to frequently pull data from the chain to build a local cache; then ping it with prices locally and, based on the decision, send new/update/cancel orders, which are pretty much single-function transactions that'd be constant in requested CU.

Not sure if it is feasible in all cases, but maybe there are success stories out there?

tao-stones avatar May 26 '23 00:05 tao-stones

Yes, the problem is you really care about the state at run time, not at tx send time, especially in an HFT setting. Extra troubling when you don't even know when your tx is going to land.

eugene-chen avatar May 26 '23 00:05 eugene-chen