Mainnet Execution Simulation
This feature allows developers to fork the Mainnet state in Clarinet Simnet.
Use cases:
- When Clarinet projects rely on deployed contracts, such as DeFi protocols or oracles, it can be challenging to populate these protocols with real-world data. This feature enables the use of Mainnet data in unit tests or the Clarinet console.
- It also enables the simulation of a Mainnet transaction before actually running it.
This feature aims to be data-efficient, fetching only the necessary MARF data and metadata.
This specification outlines the work required in the following three main areas:
- Clarinet
- Stacks RPC API
- DevOps for API keys and caching
Clarinet
This feature impacts the Clarinet datastore, which implements the ClarityBackingStore trait. When the Simnet attempts to read data from the store, it first checks if it is available locally. If not, it fetches the data from the network and writes the result locally. The desired chaintip (or block hash) will be set in the project manifest, allowing for consistent data retrieval from the network with efficient caching.
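The read-through behavior described above can be sketched as follows. This is an illustrative Python sketch, not Clarinet's actual Rust implementation: the class and method names are hypothetical, and `fetch_remote` stands in for an HTTP call to the new RPC endpoints.

```python
from typing import Callable, Dict, Optional

# Hypothetical sketch of the simnet datastore's read-through logic:
# check the local store first; on a miss, fetch from the network pinned
# to the manifest's chaintip, then cache the result locally.
class ForkingDatastore:
    def __init__(self, fetch_remote: Callable[[str, str], Optional[str]], chaintip: str):
        self.local: Dict[str, str] = {}
        self.fetch_remote = fetch_remote  # stand-in for a network request
        self.chaintip = chaintip          # fixed tip => consistent, cacheable reads

    def get(self, key: str) -> Optional[str]:
        if key in self.local:
            return self.local[key]
        value = self.fetch_remote(key, self.chaintip)
        if value is not None:
            self.local[key] = value  # cache so later reads stay local
        return value
```

Pinning every remote read to a single chaintip is what makes the local cache safe: the same key always resolves to the same value for the lifetime of the project configuration.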
Clarinet will rely on an API key or auth token set in ~/.clarinet/clarinetrc.toml or an environment variable. This can also be set using the clarinet login command and integration with the Platform. However, this can be done at a later stage.
New manifest settings (`Clarinet.toml`), under a `[simnet]` section (naming to be confirmed):
- `fork_network`: boolean (default: `false`)
- `network_api_url`: string (default: `https://api.hiro.so`)
- `chaintip`: number (default: none)
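For illustration, the settings might look like this in `Clarinet.toml` (field names are the proposed, unconfirmed ones from this spec; the block height is an arbitrary example):

```toml
[simnet]
fork_network = true
network_api_url = "https://api.hiro.so"
chaintip = 163000  # example height; pins all network reads to this tip
```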
New command: `clarinet simnet enable-mainnet-fork`, to configure the simnet settings and set the chaintip to the current height.
New Stacks RPC endpoints
To enable the simnet Datastore to read data from a specific Stacks network, the Stacks nodes need to expose two new RPC endpoints:
- `/v2/clarity_marf_value/:clarity_marf_key`
  - Takes an optional `tip` query parameter
  - Takes an optional `proof` query parameter (defaults to `false` in this context)
- `/v2/clarity_metadata/:principal/:contract_name/:clarity_metadata_key`
  - Does not take a `tip` query parameter
  - Does not take a `proof` query parameter
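As a sketch of how a client might build requests against these two endpoints: the paths and query parameters follow the spec above, while the helper function names and base URL handling are illustrative assumptions, not part of any real API.

```python
from typing import Optional
from urllib.parse import urlencode

# Assumed default base URL, per this spec.
BASE = "https://api.hiro.so"

def marf_value_url(marf_key: str, tip: Optional[str] = None, proof: bool = False) -> str:
    """Build a URL for /v2/clarity_marf_value/:clarity_marf_key.

    `tip` and `proof` are optional query parameters; proof defaults to
    false in this context. MARF keys are shown unescaped in the spec's
    examples, so they are embedded as-is here.
    """
    query = {"proof": "true" if proof else "false"}
    if tip is not None:
        query["tip"] = tip
    return f"{BASE}/v2/clarity_marf_value/{marf_key}?{urlencode(query)}"

def metadata_url(principal: str, contract_name: str, metadata_key: str) -> str:
    """Build a URL for /v2/clarity_metadata/:principal/:contract_name/:key.

    This endpoint takes no tip or proof query parameters.
    """
    return f"{BASE}/v2/clarity_metadata/{principal}/{contract_name}/{metadata_key}"
```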
Stacks Hiro API rate limiting and caching
By default, Clarinet will attempt to fetch data from Hiro's infrastructure (https://api.hiro.so/v2, https://api.testnet.hiro.so).
Note: It will be possible to configure the simnet to use a different API endpoint.
Question: When RPC endpoints are added, are any actions required on Hiro Stacks API?
API key
To fetch these endpoints, it is recommended to provide an API key. The API key should work both locally and in CI. In order of priority, Clarinet will attempt to get an API key or token from:
- Environment variable `HIRO_API_KEY`
- Clarinet config file `~/.clarinet/clarinetrc.toml`, under the property `hiro_api_key`
- In the future, users could log in (set this token) with the command `clarinet login`
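The lookup order above can be sketched as a pure function. This is an assumption-laden illustration: the environment and parsed `clarinetrc.toml` are passed in as plain dicts, and the function name is hypothetical.

```python
from typing import Optional

def resolve_api_key(env: dict, clarinetrc: dict) -> Optional[str]:
    """Resolve an API key using the priority order from the spec:
    1. the HIRO_API_KEY environment variable,
    2. the hiro_api_key property in ~/.clarinet/clarinetrc.toml.
    Returns None if neither is set."""
    key = env.get("HIRO_API_KEY")
    if key:
        return key
    return clarinetrc.get("hiro_api_key")
```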
If no API key is available, Clarinet will recommend using one. It will still be possible to perform queries, but at a lower rate limit. Can we and should we set higher rates for these endpoints?
Caching
Since the fetched data is immutable, the caching strategy can be aggressive.
Examples:
- `/v2/clarity_metadata/<addr>/hello-world/vm-metadata::9::contract-size` — returns immutable data and can be cached forever
- `/v2/clarity_marf_value/vm::<addr>.hello-world::1::bar?tip=<tip>&proof=false` — returns immutable data and can be cached forever
- `/v2/clarity_marf_value/vm::<addr>.hello-world::1::bar?proof=false` — cannot be efficiently cached if `tip` is not set (it fetches current-tip data). In practice, Clarinet will almost always set the `tip`.
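The cacheability rule in these examples reduces to a simple predicate: metadata reads and tip-pinned MARF reads are immutable, while a MARF read without a `tip` follows the chain. A hypothetical helper, for illustration only:

```python
from urllib.parse import urlparse, parse_qs

def cache_forever(url: str) -> bool:
    """Return True if a response for this URL is immutable per the spec:
    metadata lookups always, MARF reads only when pinned to a tip."""
    parsed = urlparse(url)
    if parsed.path.startswith("/v2/clarity_metadata/"):
        return True  # contract metadata is immutable
    if parsed.path.startswith("/v2/clarity_marf_value/"):
        return "tip" in parse_qs(parsed.query)  # immutable only when pinned
    return False
```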
Stacks Hiro API rate limiting and caching
> Question: When RPC endpoints are added, are any actions required on Hiro Stacks API?

For rate-limiting, no actions are needed. Rate-limiting defaults and API key processing get configured by default on any service or endpoint we expose.
API key
The API key should work both locally and in CI.
We can provide a special API key for CI with a higher rate limit if needed, encrypted via GitHub Secrets. Additionally, it may be best if CI strictly tests against our staging environment for better, safer separation.
Caching
These are great examples, thanks for putting thought into this area; it really helps our infra stay healthy. When the final implementation is just about ready, I would suggest submitting a new devops issue with the specific caching needs on each type of endpoint for us to set up.
@CharlieC3
API key
This is for users to use, both locally and in their CI. We would expect them to use it against a mainnet node.
Rate limiting
Could we use specific rate limiting rules for these 2 endpoints?
I also need to test against real-world use cases to determine what a reasonable limit would be.
> This is for users to use, both locally and in their CI. We would expect them to use it against a mainnet node.

Gotcha, they should be able to use their own API key(s) in that case.

> Could we use specific rate limiting rules for these 2 endpoints?

Yup we can do that. Do you know roughly how many calls/minute would be made to these endpoints? Are we talking hundred(s), thousand(s), tens of thousand(s), etc?
> When RPC endpoints are added, are any actions required on Hiro Stacks API?

No action needed.
Is the team working on this feature, and how long will it take them? As security researchers, we are in desperate need of this to write PoCs.
@neogranicen Yes we are currently working on it. It's a bit early to communicate on timelines. Please subscribe to this issue to get future updates
any news @hugocaillard ??
@neogranicen This has been paused but is back in progress.