
`forge deploy` feature wishlist

Open tynes opened this issue 3 years ago • 8 comments

Component

Forge

Describe the feature you would like

It would be nice to have a higher-level command on top of forge script that is specifically designed to facilitate the management and deployment of complex smart contract systems. hardhat-deploy is used by over 20k repos on GitHub, but it only receives a few commits per month and is missing certain features that would make it much easier to use.

The main feature forge script is missing that would make it ideal for deployments is a well-defined way of maintaining the deployment artifacts.

Desired features:

  • There is a single artifact file per deployed contract
  • This file contains all of the information required to interact with the contract as well as verify it
  • Any language could have a library that reads the artifact and builds a Contract object/struct, so that it's easy to interact with the contracts, including from forge script
  • hardhat deploy does not work well with hardware wallets, forcing insane hacks at deploy time to get it to work. Should be able to use hardware wallets at any derivation path
  • Idempotency, don't redeploy things if the code in the local workspace matches the code referenced in the deploy artifact (allows easy resumes of deployments)
  • Be able to reference deployment artifacts in remote locations (filesystem, URI), to be able to interact with those contracts (could be done via cheatcode)
  • The artifacts should be namespaced by network, something like deployments/{mainnet,goerli}
  • Automatically use chain id to know which network's artifacts to use
  • If the namespacing is canonical, then a network of deployment artifacts will make it really easy to interact with different contracts from different projects across chains
  • A way to manage a contract dependency graph at deploy time
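As a sketch of the chain-id namespacing described above, here is how a reader library might resolve and load a per-contract artifact under deployments/{network}. Everything here is hypothetical: NETWORKS, artifact_path, and load_artifact are illustrative names, not an existing forge API.

```python
import json
from pathlib import Path

# Hypothetical chain-id -> namespace mapping, so the chain id alone
# selects which network's artifacts to use.
NETWORKS = {1: "mainnet", 5: "goerli"}

def artifact_path(root: Path, chain_id: int, contract: str) -> Path:
    """Resolve deployments/<network>/<Contract>.json from the chain id."""
    return root / "deployments" / NETWORKS[chain_id] / f"{contract}.json"

def load_artifact(root: Path, chain_id: int, contract: str) -> dict:
    """One artifact file per deployed contract: address, ABI, and
    whatever verification metadata the deploy recorded."""
    return json.loads(artifact_path(root, chain_id, contract).read_text())
```

With a canonical layout like this, any language (or a cheatcode) could implement the same two functions and interoperate with artifacts published by other projects.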

Additional context

No response

tynes avatar Dec 19 '22 17:12 tynes

+1 for some sort of higher level structure. I'm currently using bash scripts to orchestrate multiple forge script runs. My main issue is that there is no easy way to persist contracts through the deploy. For example, if I have 2 dependencies that need to be deployed first, I need to run each of them individually, use jq to pull contract info from the broadcast dir, then set environment variables for the subsequent scripts. This is a bit clunky, and I would prefer some more standardized framework to orchestrate this.

I'm not familiar with hardhat-deploy, but I need something along the lines of:

vm.export("MY_CONTRACT_LABEL", contractAddress)

Which can be imported in a subsequent script by:

vm.import("MY_CONTRACT_LABEL")
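The vm.export/vm.import pair proposed here (they are not existing cheatcodes) boils down to a small label-to-address store persisted between separate script runs. A minimal Python sketch of those semantics; the DeployStore name and file format are mine:

```python
import json
from pathlib import Path

class DeployStore:
    """Persists label -> address mappings across script runs via a JSON file,
    mirroring the proposed vm.export / vm.import semantics."""

    def __init__(self, path: Path):
        self.path = path

    def export(self, label: str, address: str) -> None:
        """Record a labeled address (vm.export analogue)."""
        data = json.loads(self.path.read_text()) if self.path.exists() else {}
        data[label] = address
        self.path.write_text(json.dumps(data, indent=2))

    def import_(self, label: str) -> str:
        """Look up a labeled address in a later run (vm.import analogue).
        Trailing underscore only because `import` is a Python keyword."""
        return json.loads(self.path.read_text())[label]
```

One script run would call export(), and a subsequent run against the same file would call import_() instead of scraping the broadcast dir with jq.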

hexonaut avatar Dec 19 '22 18:12 hexonaut

This list looks good to support mostly everything we need at Maker, but I would add two more points:

  • There should be a standard way to pass arbitrary configuration parameters from the deploy context to the script context. I think the structure of having some global configuration json file which can then be easily modified and passed into the child script contexts would be ideal. Perhaps env vars are sufficient for the translation between deploy and script, but just wanted to make sure this point is considered in the structure.
  • Scripts should be able to export arbitrary data to the parent deploy process similar to the previous post I made. It's possible there are multiple deploys of the same contract, and I would like a way to specify which is which with a friendly label.

hexonaut avatar Jan 03 '23 16:01 hexonaut

There should be a standard way to pass arbitrary configuration parameters

I definitely agree with this. hardhat handles this for scripts by allowing you to define arbitrary CLI params, but there is no way to do this with hardhat-deploy, so we created a hardhat plugin for managing dynamic deploy config. The plugin lets you define a JSON file containing the dynamic deploy config, and the values can then be accessed during deployment execution.

tynes avatar Jan 03 '23 18:01 tynes

FYI, here's how we do it at Mangrove. It's still hackish and in need of cleanup but can provide ideas. There's no idempotency and so far it has not been a problem. The flow is:

  1. You call forge script --fork-url myNetwork MyDeployment
  2. During script execution, all current deployments on myNetwork are accessible by name through fork.get(name)
  3. You register any new/replacement deploy with fork.set(name,address)
  4. A JSON file with all known deployments is written to disk. If WRITE_DEPLOY is true, the old JSON file is replaced. Otherwise you have both files available.

Some more details:

There's a Toy ENS contract that maintains a name=>address mapping. Scripts inherit the Deployer contract, which detects the current fork and loads addresses from a network-dependent JSON file into the ToyENS (if the remote node has its own ToyENS at 0xdecaf0..., those mappings are prioritized). Scripts call fork.set(name,address) to register any newly deployed contract, then end with a call to their outputDeployment() method, which dumps all the Toy ENS mappings to a JSON file. By default, a timestamped JSON file is created. If WRITE_DEPLOY is true, the "latest" file is updated. All those files are keyed to the current network.

Misc additional stuff:

  • There are 2 Toy ENS contracts. There's "context", which cannot be written to, and "deployed" which can. That's so scripts can't overwrite e.g. uniswap's address by mistake, but can fetch it by name.
  • Scripting discipline requires:
    1. run() ends with outputDeployment()
    2. Deploys are followed by fork.set(name,address)
    3. There's an innerRun function called by run(), so that scripts can call each other through innerRun() while run() reads environment variables to get its arguments. That's way more convenient than giving a function signature as an argument to forge script.
  • There's extra broadcaster-selection logic to handle the "I want to simulate as if calling from a contract". In that case we use --skip-simulation.

Pending improvements:

  • The code contains additional complexity because we often don't want to deploy but just generate deployment calldata to pass to a contract.
  • The fork-selection logic should be updated to use mappings instead of an if-then-else.
  • writeJson should be used instead of manual string-wrangling to write the JSON files.
  • Add local overrides for mappings using either a local JSON file or environment vars e.g. OVERRIDE_UNISWAP=....
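The read-only "context" versus writable "deployed" split described above can be sketched as two instances of one tiny registry. This is an illustrative Python stand-in for Mangrove's Solidity Toy ENS, with invented names and dummy addresses:

```python
class ToyENS:
    """In-memory name => address registry, mirroring the Toy ENS idea.
    A read-only instance holds context addresses (e.g. third-party
    contracts) so scripts can fetch them by name but never clobber them."""

    def __init__(self, mapping=None, writable=True):
        self._map = dict(mapping or {})
        self._writable = writable

    def get(self, name: str) -> str:
        return self._map[name]

    def set(self, name: str, address: str) -> None:
        if not self._writable:
            raise PermissionError(f"context registry is read-only: {name}")
        self._map[name] = address

# Dummy address purely for illustration.
context = ToyENS({"SomeDex": "0x" + "00" * 19 + "01"}, writable=False)
deployed = ToyENS()  # scripts register new deploys here via set()
```

A deploy script would then read third-party addresses from context, write its own deploys into deployed, and dump deployed to JSON at the end, as outputDeployment() does.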

adhusson avatar Jan 22 '23 11:01 adhusson

This looks great @adhusson .

hexonaut avatar Jan 22 '23 14:01 hexonaut

Following up here from Discord:

I'll run over our current structure and say why we chose this path and where the pain points are. This is the PR this is being implemented for Maker: https://github.com/makerdao/dss-direct-deposit/pull/92

  1. We define json configuration files in script/input/<CHAINID> such as here: https://github.com/makerdao/dss-direct-deposit/tree/deploy-scripts/script/input/1
  2. Scripts are run which will read in the configuration json and output with user-defined labels to https://github.com/makerdao/dss-direct-deposit/tree/deploy-scripts/script/output/1. Please note we cannot use the broadcast json files because it may be that a contract was not deployed from an EOA, is behind a proxy, etc. We need to label these ourselves in the script. Example #1 and #2 of deploy scripts in this repo. Examples of loadConfig and exportContract.
  3. In Maker we need to deploy contracts and then execute them in a separate operation (by the administrative spell instead of the deployer EOA). For the "core deploy" example above we have a dependency loading function here.

The pain points:

  1. The overall structure above is cumbersome for passing json files in and out. The only way to read files is from pre-defined folders that have read-write access allowed in foundry.toml. This is problematic if, for example, I am installing dss-direct-deposit as a library in some parent deploy repository. In that case I want to be able to build inside the lib/dss-direct-deposit directory, but pass in a file from outside. Currently the only way to do this (that I know of) is to put the json inside an environment variable, which is kind of unintuitive. It would be nice to be able to pass in a configuration by CLI arg such as forge script .... --user-arg-config /path/to/my/config.json, which would automatically allow that file to be read.
  2. Dependency management is non-existent, and we don't really want to roll our own. Currently we are just combining previously generated json output files into environment vars such as here. This is messy, and I would like to not have to do manual json manipulation in bash scripts. :)
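Pain point 2 above, merging previously generated output files and exposing them to child scripts, can be sketched without jq. This is a hypothetical Python stand-in for the bash glue; the file names and env-var prefix are assumptions, not part of the Maker repo:

```python
import json
import os
from pathlib import Path

def load_dependencies(output_dir: Path, names: list) -> dict:
    """Merge script/output/<chainid>/<name>.json files into one
    label -> address map, later files overriding earlier ones."""
    merged = {}
    for name in names:
        merged.update(json.loads((output_dir / f"{name}.json").read_text()))
    return merged

def export_env(deps: dict) -> None:
    """Expose the merged addresses to a child forge script run via
    environment variables (DEPLOY_<LABEL> is an invented convention)."""
    for label, address in deps.items():
        os.environ[f"DEPLOY_{label.upper()}"] = address
```

This is exactly the manual json-to-env-var shuffle the comment complains about; a forge deploy command would absorb it.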

Proposed solution:

I think what I would like to see is the higher level forge deploy command as suggested in OP that would take a json configuration file that glues all of this together.

An example of what this could look like (feel free to maybe move the cmd stuff into a json root object or something):

{
	"core": {
		"cmd": "forge script script/D3MCoreDeploy.s.sol:D3MCoreDeployScript --use solc:0.8.14 --rpc-url $ETH_RPC_URL --sender $ETH_FROM --broadcast --verify",
		"config": "./script/input/$FOUNDRY_CHAINID/core.json",
		"outputDir": "./script/output/$FOUNDRY_CHAINID/",
		"outputName": "core"
	},
	"core-init": {
		"dependencies": ["core"],
		"cmd": "forge script script/D3MCoreInit.s.sol:D3MCoreInitScript --use solc:0.8.14 --rpc-url $ETH_RPC_URL --broadcast --unlocked --sender $MCD_PAUSE_PROXY",
		"config": "./script/input/$FOUNDRY_CHAINID/core.json"
	},
	"aave": {
		"dependencies": ["core"],
		"cmd": "forge script script/D3MDeploy.s.sol:D3MDeployScript --use solc:0.8.14 --rpc-url $ETH_RPC_URL --sender $ETH_FROM --broadcast --verify",
		"config": "./script/input/$FOUNDRY_CHAINID/aave.json",
		"outputDir": "./script/output/$FOUNDRY_CHAINID/",
		"outputName": "aave"
	},
	"aave-init": {
		"dependencies": ["core", "d3m:aave"],
		"cmd": "forge script script/D3MInit.s.sol:D3MInitScript --use solc:0.8.14 --rpc-url $ETH_RPC_URL --broadcast --unlocked --sender $MCD_PAUSE_PROXY",
		"config": "./script/input/$FOUNDRY_CHAINID/aave.json"
	}
}

Running this would then be as simple as forge deploy aave, which would check whether the core output exists: if it does, that step is skipped; if it doesn't, that step is run. After the core step completes, it would move on to aave and run that.

Scripts could access dependencies by using vm.readDependency("XYZ") or similar. In the "aave-init" example above, d3m:aave remaps the aave target onto a dependency labelled "d3m". This is so the D3MInit script can call vm.readDependency("d3m") and have it resolve to aave or compound or whatever the higher-level script determines.

The config should also be able to specify a dependency inside a forge install ... directory added to lib/XYZ/... so that higher level deploy coordination repos can orchestrate amongst a bunch of smaller repos.

How does that look to everyone? More should be added, but this is the biggest piece that is missing for us imo.
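Under the config format sketched above, the proposed forge deploy runner would amount to a depth-first walk that runs dependencies first and skips any step whose output file already exists. Here is a hypothetical Python sketch of that resolution logic; run_target and the dict shape mirror the proposal, and nothing here is an existing forge feature:

```python
import subprocess
from pathlib import Path

def run_target(targets: dict, name: str, output_dir: Path, done=None) -> None:
    """Depth-first walk over the target graph: dependencies run first,
    and a step is skipped when its output file already exists on disk."""
    done = set() if done is None else done
    if name in done:
        return
    spec = targets[name]
    for dep in spec.get("dependencies", []):
        # A "label:target" entry such as "d3m:aave" names the actual
        # target after the colon; the label is for vm.readDependency.
        run_target(targets, dep.split(":")[-1], output_dir, done)
    out = spec.get("outputName")
    if out is not None and (output_dir / f"{out}.json").exists():
        done.add(name)  # idempotent resume: output already present, skip
        return
    subprocess.run(spec["cmd"], shell=True, check=True)
    done.add(name)
```

Calling run_target(targets, "aave", out_dir) on the JSON above would run core first (or skip it if core.json exists), then aave, which is the forge deploy aave behaviour described in the proposal.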

hexonaut avatar Jan 27 '23 14:01 hexonaut

> +1 for some sort of higher level structure. I'm currently using bash scripts to orchestrate multiple forge script runs. My main issue is there is no easy way to persist contracts through the deploy. For example if I have 2 dependencies that need to be deployed first I need to run each of them individually, use jq to pull contract info from the broadcast dir then set environment variables for the subsequent scripts. This is a bit clunky, and I would prefer some more standardized framework to orchestrate this.

Your use case is probably different but you can use StdJson to parse JSON files in Solidity. That's what I've used for the ERC-5164 bridge interface: https://github.com/pooltogether/ERC5164/blob/main/script/helpers/DeployedContracts.sol

Basically, I deploy the contracts on both chains and then use the script to retrieve the deployed contracts and interact with them: https://github.com/pooltogether/ERC5164/blob/main/script/deploy/DeployToOptimism.s.sol#L39 Of course, it requires a lot of forge commands, but with NPM you can easily create one single command that handles the deployment: https://github.com/pooltogether/ERC5164/blob/main/package.json

PierrickGT avatar Mar 17 '23 23:03 PierrickGT

I don't think this is necessary.

As mentioned, forge script already is a superset of the proposed forge deploy; it is just the name that is different.

This would introduce bloat to the commands, and I think it's best not to add it.

daweth avatar Aug 16 '23 01:08 daweth

Hi all, I feel like the current version of forge script has largely achieved what is defined in the wishlist. In order to make this ticket and its leftover points actionable, I would like to propose marking it as resolved and handling follow-ups in individual tickets. If there are any concrete features you are still missing in your workflow, please open a feature request or join the conversation on an existing one. Thanks!

zerosnacks avatar Jun 12 '25 15:06 zerosnacks