Cranking (consume events) FAQ
Hello,
I want to ask some questions and touch base a bit about what the cranking process is and what the best practices are for doing it.
What I know from examining the code:
- Cranking is done through the `consume_events` instruction.
- Each invocation of the `consume_events` instruction needs to know how many items in the queue it should read and process (through the `max_iterations` parameter).
- Each invocation of the `consume_events` instruction needs to provide the DEX program with the `open_orders_accounts` of all the users referenced by the events about to be processed. This can be done by loading the `event_queue` and extracting each `open_orders_account` from the `Event` by the following rule: if it's an `EventFill`, it's taken from `event_fill.makerCallbackInfo.slice(0, 32)`; if it's an `EventOut`, it's taken from `event_out.callbackInfo.slice(0, 32)`.
- A `taker` doesn't need to have his events consumed, because the `new_order` instruction already takes care of updating his `open_orders_account` when there was at least a partial fill for his order.
- A `maker` has to have his events consumed by a crank in order to get his `open_orders_account` updated after his orders have been matched.
- `accumulated_royalties` and `accumulated_fees` belonging to the market are updated by consuming events.
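To make the extraction rule above concrete, here is a small TypeScript sketch. The `EventFill`/`EventOut` shapes and the `extractOpenOrdersKeys` helper are illustrative assumptions, not the real dex-v4 layouts; the only fact carried over from the rule above is that the first 32 bytes of the callback info hold the owner's `open_orders_account` key. Deduplicating the keys before passing them to `consume_events` is also an assumption on my part, since the same account can appear in several events.

```typescript
// Illustrative event shapes: field names mirror the rule above, but the
// real dex-v4 structs differ. The first 32 bytes of the callback info are
// assumed to be the owner's open_orders_account pubkey.
type EventFill = { kind: "fill"; makerCallbackInfo: Uint8Array };
type EventOut = { kind: "out"; callbackInfo: Uint8Array };
type QueueEvent = EventFill | EventOut;

// Collect the unique open_orders keys referenced by a slice of the event
// queue, in the order they first appear.
function extractOpenOrdersKeys(events: QueueEvent[]): Uint8Array[] {
  const seen = new Set<string>();
  const keys: Uint8Array[] = [];
  for (const ev of events) {
    const info = ev.kind === "fill" ? ev.makerCallbackInfo : ev.callbackInfo;
    const key = info.slice(0, 32); // rule above: first 32 bytes = pubkey
    const tag = key.join(","); // cheap value-equality tag for the Set
    if (!seen.has(tag)) {
      seen.add(tag);
      keys.push(key);
    }
  }
  return keys;
}
```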
What I am not sure about and would like to figure out:
- I thought about ways of embedding the `consume_events` instruction in different transactions. For example:
  - Transaction(`new_order`, `consume_events`): after a taker gets his order matched, he also consumes the events that would update the `open_orders_account` of the maker, making the latter able to settle his funds.
  - Transaction(`consume_events`, `settle`): before a maker tries to settle his funds, he makes sure he consumed the events that will update his `open_orders_account`, so he actually has something to settle.
However, I don't think these solutions are good. For example:
- For the first bullet above: it seems that events pushed to the queue in the case of a match are not "seen" by a `consume_events` instruction belonging to the same transaction. But if I create a second transaction with just the `consume_events` instruction and execute it after the `new_order`, this works.
- For the second bullet above: this is too fragile, because there could be multiple events waiting in the queue that don't belong to the maker trying to get his `open_orders_account` updated. Given that he needs to specify a `max_iterations`, he could consume the next X events that belong to other users and were not consumed yet, so he would end up cranking the market for others without consuming enough events to reach his own.

All in all, I think trying to embed the `consume_events` instruction in users' trading transactions is a dead end. This leads to point 2) below.
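A bit of arithmetic shows why the second bullet is fragile. With a FIFO queue and a fixed `max_iterations`, the number of `consume_events` calls a maker would have to pay for before his own event is drained grows with the event's position in the queue. The helper below is hypothetical, purely to illustrate that point:

```typescript
// With a FIFO queue, each consume_events call drains at most maxIterations
// of the oldest events. An event sitting at 0-based `position` therefore
// only leaves the queue after this many calls:
function cranksUntilConsumed(position: number, maxIterations: number): number {
  return Math.floor(position / maxIterations) + 1;
}
```

So a maker whose fill event sits at position 8 with `max_iterations = 5` pays for two cranks, consuming nine other users' events along the way.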
- There is a `cranker` folder in the repo. The code looks like something that needs to be run as a background job by the market admin at a particular frequency. The frequency can be set according to the liquidity in the market, I guess.
Here are the questions:
- Is the background job the way to go regarding `cranking`/`consuming events` of a market?
- If it is, any suggestions about the number of iterations it should consume at each run? Is there a limit on how many events can be consumed?
Hi @mihneacalugaru, thanks for your questions.
In the general case it's essentially impossible to bundle a new_order instruction with a consume_events and expect that the new order will be consistently cranked. This is due to the fact that you can't reliably predict what order you will be matched against. In practice, for smaller markets in which one maker dominates, predictions might be more successful though.
This is why the recommended approach is indeed to run a cranking daemon on a third-party server. This can be done by the market admin or anyone else for that matter. When creating a market we do recommend a creator-run cranking service though.
With regards to the number of iterations to consume, the idea is that you want that number to be as high as possible while not exceeding the compute budget. In theory, the cranking server could self-optimize based on metrics such as orderbook depth, but this isn't something we have worked on ourselves. In the meantime, a value of 10 should be safe enough.
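As a sketch of what such a daemon's batching loop could look like, here is a minimal simulation: it drains an in-memory queue in batches of `maxIterations` (10 being the conservative value suggested above). In a real cranker the `consume` callback would build and send a `consume_events` transaction; everything here is an assumption-level skeleton, not dex-v4's actual cranker code.

```typescript
// Minimal batching loop for a cranking daemon. The queue is a plain array
// standing in for the on-chain event queue; `consume` stands in for sending
// one consume_events transaction covering the batch.
function crankUntilEmpty(
  queue: string[],
  maxIterations: number,
  consume: (batch: string[]) => void,
): number {
  let txCount = 0;
  while (queue.length > 0) {
    // FIFO: always take from the front of the queue.
    const batch = queue.splice(0, maxIterations);
    consume(batch);
    txCount += 1;
  }
  return txCount;
}
```

A real daemon would run one such pass per tick (e.g. on a timer), re-reading the queue each time and skipping the transaction entirely when the queue is empty.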
Hi @ellttBen, thanks for your answer, as always!
Understood everything you said and also implemented it. Works well!
More as a FAQ-style note, I wanted to talk a bit about the AAOB's event queue model. As can be inferred from my previous comment, I initially wondered why we even need an event queue to pop from and process in the caller program, instead of consuming each event directly rather than pushing it to the queue.
But I think I understand now:
- `AAOB` was not made for `dex-v4`; `dex-v4` is just a client that calls `AAOB`.
- `AAOB` (as its name implies) doesn't care what underlying asset it facilitates trading for. This means it doesn't know what to do when an order match happens; it just knows it happened and lets the calling program decide what to do about it via the callback information mechanism.
- Also, we can't consume the event in `dex-v4`'s `new_order` instruction right after `AAOB`'s `new_order` instruction returns, because the queue is FIFO: we don't know what we would be consuming, and there is no guarantee that among the events we consume there is the event in which the maker for this order gets his funds.
Correct me if I'm wrong about any of my findings and understanding above.
Thank you very much for your time and for working with me on building this early version of FAQ!