
feat(`config`): add chain dependent `solc` config and support in multichain scripts

sakulstra opened this issue 10 months ago • 16 comments

Component

Forge

Describe the feature you would like

At BGD we rely quite a lot on multichain scripts, which worked fine as long as Aave was only deployed on Shanghai-compatible blockchains. With the recent expansion to Linea (london) and zkSync (zksolc) we're reaching a point where our workarounds no longer work.

Deploying to different chains is something we currently try to solve with profiles:

[profile.bnb]
evm_version = 'shanghai'

[profile.linea]
evm_version = 'london'

One issue we face here is that whenever the evm version is switched a complete recompile is needed. It would be great if the cache were per evm_version, so things only have to be recompiled once.
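
A workaround in this direction (also mentioned further down in this thread) is to give each profile its own cache and output directory, so switching profiles does not throw away the other profile's artifacts; the exact paths below are just an example:

[profile.bnb]
evm_version = 'shanghai'
cache_path = 'cache/shanghai'
out = 'out/shanghai'

[profile.linea]
evm_version = 'london'
cache_path = 'cache/london'
out = 'out/london'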


While the above issue is annoying, a more problematic thing is that multichain scripts (a single script switching between networks) will always run with a single solc & evm_version. This leads to problematic situations:

  • if you select london (with the goal of uniting on the lowest version across chains), the script might fail with NotActivated when touching something deployed with shanghai or cancun
  • if you select cancun you will end up deploying non-executable code on chains that are stuck on london
  • another caveat is that when you rely on address prediction based on create2, addresses might be different than expected

Currently I don't see any solution for this problem; the workaround we currently use is to no longer use multichain scripts... which of course is painful. It would be great if one could configure solc per chainId and dynamically switch based on which network is currently forked. No idea how this could work, but I guess for anyone using forge for multichain work this would be a must.

Additional context

No response

sakulstra avatar Feb 07 '25 12:02 sakulstra

@sakulstra you have the option to pass the --evm-version arg to forge script. For tests that's supported inline, like:

    pragma solidity ^0.8.13;

    import {Test} from "forge-std/Test.sol";
    // assuming MyContract lives at src/MyContract.sol; adjust to your project
    import {MyContract} from "src/MyContract.sol";

    contract MyContractTest is Test {
        MyContract myContract;

        function setUp() public {
            myContract = new MyContract();
        }

        /// forge-config: default.evm_version = "berlin"
        function test_berlin() public {
            myContract.doSomething();
        }

        /// forge-config: default.evm_version = "cancun"
        function test_cancun() public {
            myContract.doSomething();
        }
    }

does this work for your use case?

grandizzy avatar Feb 07 '25 13:02 grandizzy

@grandizzy I think no.

This only really helps when I want to have a single evm_version for the full script, right? But the point of multichain scripts is to do things on multiple chains in the same script.

E.g. in this example I'd like to switch the evm_version on every build<Network>Payload (as what we do under the hood is switch to that network and check if the payload was deployed and registered via create2). But this now no longer works with Linea: if I deployed the payload with london but execute the script via shanghai, the addresses will be different, and if I run the script with london, things on other networks might not work.

sakulstra avatar Feb 07 '25 13:02 sakulstra

Sorry, I misunderstood your use case. Switching evm version in same script / test is something we're looking to add (through a setEvmVersion cheatcode or by automatically figuring it out when a fork is selected). Automatically picking it up is a little bit more cumbersome as we'd have to maintain hardfork histories (see https://github.com/foundry-rs/foundry/issues/6440), but at a minimum we could offer a similar config per fork / rpc_endpoint as in https://hardhat.org/hardhat-network/docs/guides/forking-other-networks#using-a-custom-hardfork-history
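
Purely to illustrate the idea, a minimal sketch of how such a cheatcode might be used inside one multichain script (assuming mainnet and linea aliases are defined under [rpc_endpoints]); vm.setEvmVersion is only a proposal and does not exist today, so it is left commented out:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.13;

    import {Script} from "forge-std/Script.sol";

    contract MultiChainSketch is Script {
        function run() external {
            // Mainnet leg: selecting the fork already works today.
            vm.createSelectFork("mainnet");
            // Hypothetical: switch the interpreter to this chain's hardfork rules.
            // vm.setEvmVersion("cancun"); // proposed cheatcode, not available today

            // Linea leg: still on london, so newer opcodes must be avoided here.
            vm.createSelectFork("linea");
            // vm.setEvmVersion("london"); // proposed cheatcode, not available today
        }
    }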

grandizzy avatar Feb 07 '25 13:02 grandizzy

Switching evm version in same script / test is something we're looking to add (through a setEvmVersion cheatcode or by automatically figuring it out when a fork is selected)

Having some per-chain config would actually be great. Working with per-chain profiles is quite painful, especially when working with external teams.

It would be so much more pleasant if in foundry.toml we could just say: "use cancun & 0.8.27 on optimism, shanghai & 0.8.23 on mantle, london & 0.8.22 on linea" and then things would just work automatically based on the selected fork. It could be especially interesting once Pectra is live and one might want to use EOF and via-ir on some chains, but not on others.
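
As a purely hypothetical sketch of that wish (nothing like this is supported today, and the [chain.*] section name is just made up for illustration), it could look roughly like:

[chain.optimism]
evm_version = 'cancun'
solc = '0.8.27'

[chain.mantle]
evm_version = 'shanghai'
solc = '0.8.23'

[chain.linea]
evm_version = 'london'
solc = '0.8.22'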

sakulstra avatar Feb 07 '25 13:02 sakulstra

Just to add to this: on Sonic, WETH was deployed with cancun, so for multichain scripts touching WETH (which sadly is quite common for us) we'd now need to bump to cancun, breaking several of the other chains.

sakulstra avatar Feb 19 '25 07:02 sakulstra

One issue we've encountered with our large codebase (500+ files) is this:

One issue we face here is that whenever the evm version is switched a complete recompile is needed. It would be great if the cache were per evm_version, so things only have to be recompiled once.

We have to remember to do things in the right order; if not, we end up recompiling our contracts all the time, even just to run our test suite.

efecarranza avatar Mar 26 '25 17:03 efecarranza

@grandizzy sorry for the ping, but is there any progress on / anywhere I can track:

but at a minimum we could offer a similar config per fork / rpc_endpoint as in https://hardhat.org/hardhat-network/docs/guides/forking-other-networks#using-a-custom-hardfork-history

The situation is becoming more painful every day. E.g. on Aave, rlUSD was recently added, which was compiled against cancun, so e2e tests on mainnet now have to run on cancun, but multichain scripts have to run on paris to satisfy Linea/Mantle... so we can no longer run the e2e suites in scripts, which is very painful.

sakulstra avatar Apr 28 '25 11:04 sakulstra

Would it perhaps be possible to introduce an inline config à la: /// forge-config: profile = 'myprofile'?

This way, at least for tests, we could work around the lack of this feature. Currently, if you have multiple tests that rely on different profiles (e.g. different libraries, different evm_version), it's almost impossible to have a holistic forge test run that does not randomly fail. Manually specifying compiler settings everywhere only gets you so far.

E.g. currently I'm working on tests that need different libraries linked on each network. As far as I can see from the docs, there is no way to define libraries in code or via inline config - so I did it via profiles, but then I need to run every test manually by selecting the correct profile.
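
To make the proposal concrete, a hypothetical example of what such an inline config could look like; the profile key is not supported by forge-config today, and the linea / mainnet profiles are assumed to be defined in foundry.toml:

    pragma solidity ^0.8.13;

    import {Test} from "forge-std/Test.sol";

    contract InlineProfileSketch is Test {
        /// forge-config: profile = 'linea'
        function test_lineaPayload() public {
            // would compile and run with the linea profile (london, linea libraries)
        }

        /// forge-config: profile = 'mainnet'
        function test_mainnetPayload() public {
            // would compile and run with the mainnet profile (cancun, mainnet libraries)
        }
    }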

sakulstra avatar Jun 12 '25 08:06 sakulstra

@grandizzy on the latest nightly I think there's a rather weird "bug" that is helping us, but I thought it makes sense to report it anyway 😓

If you run: https://github.com/bgd-labs/aave-proposals-v3/blob/main/src/20250610_Multi_UpgradeAaveInstancesToV34/UpgradeAaveInstancesToV34_20250610.s.sol via

forge script  src/20250610_Multi_UpgradeAaveInstancesToV34/UpgradeAaveInstancesToV34_20250610.s.sol:CreateProposal --rpc-url mainnet --sender 0x73AF3bcf944a6559933396c1577B257e2054D935 -vvvv

on nightly it will work.

On stable it will fail with:

    ├─ [106] 0x0846C28Dd54DEA4Fd7Fb31bcc5EB81673D68c695::getPayloadsCount() [staticcall]
    │   └─ ← [NotActivated] EvmError: NotActivated
    └─ ← [Revert] EvmError: Revert

For us this is good, but it's extremely weird that it works / it should not work.

The evm_version on the default profile is london. We then switch to Sonic on L425 and do a call to the payloadsController. The payloadsController on Sonic is deployed with shanghai, so the call reverting with NotActivated is expected. We don't have a proper workaround for this (hence this issue).

To me it seems like a bug that the latest nightly no longer errors. I couldn't find any related commit in the history.

sakulstra avatar Jun 28 '25 19:06 sakulstra


Thanks, the latest nightly has revm updated so it could be due to that, will check. CC @zerosnacks

grandizzy avatar Jun 29 '25 03:06 grandizzy

@sakulstra re https://x.com/sakulstra/status/1975830082255266278 I am still trying to figure out if we could reuse existing compiler profiles. Could you please clone https://github.com/grandizzy/foundry-9840-solc and then run forge build - you will see different profiles / artifacts in the out/ dir:

├── bnb
│   └── Counter.sol
│       └── Counter.json
├── linea
│   └── Counter.sol
│       └── Counter.json

and in the cache file (cache/solidity-files-cache.json):

  "profiles": {
    "bnb": {
      "solc": {
        "optimizer": {
          "enabled": false,
          "runs": 200
        },
        ...
        "evmVersion": "shanghai",
        "viaIR": false,
        "libraries": {}
      },
      ...
    },
    "default": {
      "solc": {
        "optimizer": {
          "enabled": false,
          "runs": 200
        },
        ...
        "evmVersion": "prague",
        "viaIR": false,
        "libraries": {}
      },
      ...
    },
    "linea": {
      "solc": {
        "optimizer": {
          "enabled": false,
          "runs": 200
        },
        ...
        "evmVersion": "london",
        "viaIR": false,
        "libraries": {}
      },
      ...
    }
  }

we can look into extending this with inline config if needed.

grandizzy avatar Oct 08 '25 11:10 grandizzy

@grandizzy I don't know if I correctly understand the example - in the end you duplicate the contracts and it works because of this compiler restriction, right?

How would that work without duplicating the files?


One thing I find a bit weird (which might be because I lack the bigger picture) is that the caching is "per profile". Wouldn't it be more reasonable to have the caching per set of settings? If linea & bnb are the same (at least with regard to the compiler), why compile it twice?

Also with https://getfoundry.sh/reference/forge-std/config#configuration-file-format things imo become a bit weird, as the config is split in two parts:

  1. the compiler config via profile
  2. the chain config (that might implicitly put constraints on the compiler config)

Like in the test scenario in the docs:

contract MultiChainTest is Test, Config {
    function setUp() public {
        _loadConfigAndForks("./config.toml", false);
    }
 
    function test_readValues() public {
        // Switch to mainnet fork
        vm.selectFork(forkOf[1]);
 
        // Read mainnet WETH address
        address wethMainnet = config.get("weth").toAddress();
 
        // Switch to optimism fork
        vm.selectFork(forkOf[10]);
 
        // Read optimism WETH address
        address wethOptimism = config.get("weth").toAddress();
 
        // Values are chain-specific
        assertEq(wethMainnet, 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2);
        assertEq(wethOptimism, 0x4200000000000000000000000000000000000006);
    }
}

What would happen if it runs against shanghai, but then you switch to Sonic, where WETH is compiled with cancun iirc. Do you then have to somehow switch the profile in the script?

sakulstra avatar Oct 08 '25 14:10 sakulstra

@grandizzy I don't know if I correctly understand the example - in the end you duplicate the contracts and it works because of this compiler restriction, right?

How would that work without duplicating the files?

Yeah, I pointed to the way it works and built the artifacts just to check the approach; the final impl won't require any source code duplication, only defining multiple profiles with different evm / solc versions, which would produce the same result as in my comment.

One thing I find a bit weird (which might be because I lack the bigger picture) is that the caching is "per profile". Wouldn't it be more reasonable to have the caching per set of settings? If linea & bnb are the same (at least with regard to the compiler), why compile it twice?

Yep, good point - again, that was only to provide some details; the final impl should use the same artifact (mind that we now provide the extends feature for profiles, so you could have linea and bnb extend a cancun profile while arbitrum and sepolia extend a prague one).
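
A rough sketch of that layout (the exact extends syntax here is an assumption, and the profile names are only examples):

[profile.cancun]
evm_version = 'cancun'

[profile.bnb]
extends = 'cancun'

[profile.linea]
extends = 'cancun'

[profile.prague]
evm_version = 'prague'

[profile.arbitrum]
extends = 'prague'

[profile.sepolia]
extends = 'prague'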

Also with https://getfoundry.sh/reference/forge-std/config#configuration-file-format things imo become a bit weird, as the config is split in two parts:

1. the compiler config via profile

2. the chain config (that might implicitly put constraints on the compiler config)

What would happen if it runs against shanghai, but then you switch to Sonic, where WETH is compiled with cancun iirc. Do you then have to somehow switch the profile in the script?

I think in the end they should be unified, with the proper config applied to both the compiler and tests / scripts (the first at build time, the second when you select the fork), so you would only have to configure something like:

[profile.bnb]
endpoint_url = ".."
evm_version = 'shanghai'

[profile.linea]
endpoint_url = ".."
evm_version = 'london'

grandizzy avatar Oct 08 '25 15:10 grandizzy

Yep, good point - again, that was only to provide some details; the final impl should use the same artifact (mind that we now provide the extends feature for profiles, so you could have linea and bnb extend a cancun profile while arbitrum and sepolia extend a prague one).

Actually, does it perhaps make sense to be able to specify a profile on the chain config? So I could do something like:

[mainnet]
endpoint_url = "${MAINNET_RPC}"
profile = "cancun"

sakulstra avatar Oct 09 '25 07:10 sakulstra

@sakulstra actually you can already do that, with a config like

[profile.default]
solc_version = "0.8.13"
src = "src"
out = "out/default"
libs = ["lib"]

[profile.chain1]
solc_version = "0.8.13"
src = "src"
out = "out/chain1"
libs = ["lib"]

[profile.chain2]
solc_version = "0.8.26"
src = "src"
out = "out/chain2"
libs = ["lib"]

[profile.chain3]
solc_version = "0.8.30"
src = "src"
out = "out/chain3"
libs = ["lib"]

building with different profiles will result in proper artifacts placed in

out/
├── chain1
├── chain2
├── chain3
└── default

the only issue here is that artifacts are rebuilt when profiles are switched, e.g.

$ FOUNDRY_PROFILE=chain2 forge build
[⠊] Compiling...
[⠒] Compiling 2 files with Solc 0.8.26
[⠑] Solc 0.8.26 finished in 435.43ms
Compiler run successful!

$ FOUNDRY_PROFILE=chain2 forge build
[⠊] Compiling...
No files changed, compilation skipped

$ FOUNDRY_PROFILE=chain1 forge build
[⠊] Compiling...
[⠔] Compiling 23 files with Solc 0.8.13
[⠑] Solc 0.8.13 finished in 473.53ms
Compiler run successful!

$ FOUNDRY_PROFILE=chain1 forge build
[⠊] Compiling...
No files changed, compilation skipped

will check why that happens

grandizzy avatar Oct 15 '25 06:10 grandizzy

@grandizzy, this is the workaround we have been using for a while:

[profile.linea]
evm_version = 'london'
cache_path = 'cache/london'

But it does not help with multichain scripts, as there is no way within a multichain script to switch the profile based on the current chain. That's why I thought of having a profile prop:

[mainnet]
endpoint_url = "${MAINNET_RPC}"
profile = "cancun"

On the chain config, this would be the most straightforward solution to the problem.

sakulstra avatar Oct 15 '25 06:10 sakulstra