openzeppelin-upgrades
`forceImport` fails with `The following deployment clashes with an existing one...`
I am deploying some proxies from a factory contract. I want to do some checks of the storage layout before upgrading these proxies, so in preparation for that I was hoping to import the "before" implementations into openzeppelin-upgrades using `forceImport`.
At the time of calling `forceImport` I do not have any `.openzeppelin` folder, nor (obviously) any manifest file within that folder. Yet I get the following error when importing the layout for a contract at (say) 0x123:
`Error: The following deployment clashes with an existing one at 0x123`
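For context, the error refers to the per-network manifest file (`.openzeppelin/<network>.json`) that the plugin maintains. Its rough shape is something like the following (fields abbreviated; the structure is from memory, so treat it as approximate):

```
{
  "manifestVersion": "3.2",
  "proxies": [
    { "address": "0x123", "kind": "transparent" }
  ],
  "impls": {
    "<bytecode-version-hash>": {
      "address": "0x456",
      "layout": { "storage": [], "types": {} }
    }
  }
}
```

The clash error is raised when an address being registered already appears in this file (or has already been registered earlier in the same run).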
Debugging a little, it seems that `checkForAddressClash` is being called twice, with the second invocation failing (because the first succeeded?). The two call stacks (from `console.trace()`) are:
```
at checkForAddressClash (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:168:7)
at /Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:59:17
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Manifest.lockedRun (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/upgrades-core/src/manifest.ts:123:6)
at async fetchOrDeployGeneric (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:43:24)
at async simulateDeployImpl (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/hardhat-upgrades/src/utils/simulate-deploy.ts:39:3)
at async addImplToManifest (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:94:3)
at async importProxyToManifest (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:68:3)
at async Proxy.forceImport (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:44:7)
```

```
at checkForAddressClash (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:178:6)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async /Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:59:11
at async Manifest.lockedRun (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/upgrades-core/src/manifest.ts:123:6)
at async fetchOrDeployGeneric (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:43:24)
at async simulateDeployImpl (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/hardhat-upgrades/src/utils/simulate-deploy.ts:39:3)
at async addImplToManifest (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:94:3)
at async importProxyToManifest (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:68:3)
at async Proxy.forceImport (/Users/neil/dev/voltz/voltz-core/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:44:7)
```
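For intuition, the check that produces this error can be sketched as follows. This is a simplified model, not the actual `upgrades-core` code; the `ToyManifest` type and the `registerImpl` helper are made up for illustration:

```
// Toy model of the manifest's address-clash check: registering an address
// that the manifest already contains throws, so registering the same
// implementation address twice in one run fails on the second call.
type ToyManifest = { impls: Record<string, { address: string }> };

function checkForAddressClash(manifest: ToyManifest, address: string): void {
  const clash = Object.values(manifest.impls).some(d => d.address === address);
  if (clash) {
    throw new Error(`The following deployment clashes with an existing one at ${address}`);
  }
}

function registerImpl(manifest: ToyManifest, version: string, address: string): void {
  checkForAddressClash(manifest, address);
  manifest.impls[version] = { address };
}

const manifest: ToyManifest = { impls: {} };
registerImpl(manifest, "v1", "0x123"); // first registration succeeds
let secondFailed = false;
try {
  registerImpl(manifest, "v1-again", "0x123"); // same address again: clash
} catch {
  secondFailed = true;
}
console.log(secondFailed); // true
```

So if anything in the import path registers the same address a second time, the second call is guaranteed to throw even though the manifest started out empty.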
In case it is relevant, I am also using the `hardhat-deploy` plugin.
Can you share your script that calls `forceImport`? Are you using a specific network or just the Hardhat development network?
Can you rename the `.openzeppelin` folder to something else (so that it will be re-created by the plugin), run `export DEBUG=@openzeppelin:upgrades:*` to enable debug logging, then run your script again and provide the output?
Any solution to this? I ran into a similar issue as well.
@Hampton-Black Would you be able to share a minimal GitHub repo that can be used to reproduce this issue?
@ericglau I'm working out of a private repo in GitLab, but the basics of it are very similar to the above scenario:
- I'm using the `hardhat-deploy` plugin
- I'm using `forceImport` from `openzeppelin-upgrades` in order to update the deployment artifact that was previously created with the newly deployed proxy
- I'm specifically using beacon upgradeable proxies
Using the debug mode you mentioned above, I'm attaching some of the log output.
Thanks for the quick response, btw!
Thanks for providing that info. Please note that the Hardhat Upgrades plugin is not currently integrated with `hardhat-deploy`.
I haven't been able to reproduce this yet, but I've set up an example repository here of deploying a proxy using `hardhat-deploy`, then running `forceImport` and `upgradeProxy` using the plugin. See the readme of that repo for the steps to run this example.
Also keep in mind that when you call `forceImport`, the second argument must be the contract factory of the current implementation contract version that is being used, not the version that you are planning to upgrade to.
Hope this example helps. (And if you are still encountering the issue in your project, could you modify that example repo to be more similar to your scenario so that the issue can be reproduced? Or provide an example of what your Hardhat script looks like? Thanks!)
Thanks for the feedback.
A little more background: now that I think about it, the use of `forceImport` may not be necessary for our case.
Our deploy script does the following:
- Deploys the Implementation contract
- Deploys the Beacon using `openzeppelin-upgrades`
- Deploys a custom NFT factory contract we've created
- Calls the deployed factory contract to create a new collection (this process creates a new Beacon proxy, using the above deployed Beacon and Implementation addresses)
- Then we would use `forceImport` to store the newly deployed Beacon proxy in the `.openzeppelin/$(network).json` file that is created
- Lastly, from `hardhat-deploy`, we get the artifact associated with the original Implementation contract and `save` the new Beacon proxy address to it, to use in additional scripts
But reading the documentation again, is `forceImport` even needed to accomplish our purpose? I may have misread it. When passing the option `kind: 'beacon'`, does it try to force-import the Beacon or the Beacon proxy address?
Hope this helps.
Thanks
I am having a similar issue. While going over it, I came up with a question: how does UUPS decide what the implementation address will be (i.e., where it will deploy the implementation)? When deploying two contracts with the same bytecode, their implementations seem to collide.
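On the address question: as far as I understand, the plugin does not choose the implementation address itself; what collides is the *manifest entry*, because implementations are keyed by a hash of their bytecode, so two contracts with identical bytecode map to the same entry and the previously stored address is reused. A toy sketch of that keying (using Node's sha256 purely as a stand-in for the plugin's real hashing scheme; `fetchOrDeploy` here is a made-up simplification, not the real function):

```
import { createHash } from "crypto";

// Toy illustration: implementations are deduplicated by a hash of their
// bytecode. Identical bytecode -> same key -> the stored address is reused
// and no second deployment happens.
const implsByBytecode = new Map<string, string>();

function bytecodeKey(bytecode: string): string {
  return createHash("sha256").update(bytecode).digest("hex");
}

function fetchOrDeploy(bytecode: string, deploy: () => string): string {
  const key = bytecodeKey(bytecode);
  const existing = implsByBytecode.get(key);
  if (existing !== undefined) return existing; // reuse, no new deployment
  const address = deploy();
  implsByBytecode.set(key, address);
  return address;
}

const a = fetchOrDeploy("0x6080aabb", () => "0xAAA"); // deploys, stores 0xAAA
const b = fetchOrDeploy("0x6080aabb", () => "0xBBB"); // identical bytecode: reuses 0xAAA
console.log(a === b); // true
```

That would explain why two contracts with the same bytecode appear to "collide": from the manifest's point of view they are the same implementation.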
In my case I need a custom proxy implementation (I need to override the `receive()` function on the proxy itself).
So I'm doing the following:
- Deploy the implementation with `upgrades.deployImplementation(...)` <- this adds the implementation to the manifest
- Deploy the custom proxy manually with ethers
- Run `upgrades.forceImport(proxyAddress, ImplementationFactory, { kind: 'transparent' })`

`forceImport` is where things go wrong. It seems that it wants to re-add the implementation to the manifest, and it fails because it's already there.
The problem is here:
```
at checkForAddressClash (redacted/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:240:15)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async redacted/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:77:11
at async Manifest.lockedRun (redacted/node_modules/@openzeppelin/upgrades-core/src/manifest.ts:265:14)
at async fetchOrDeployGeneric (redacted/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:52:24)
at async fetchOrDeploy (redacted/node_modules/@openzeppelin/upgrades-core/src/impl-store.ts:138:11)
at async simulateDeployImpl (redacted/node_modules/@openzeppelin/hardhat-upgrades/src/utils/simulate-deploy.ts:38:3)
at async addImplToManifest (redacted/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:100:3)
at async importProxyToManifest (redacted/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:74:3)
at async Proxy.forceImport (redacted/node_modules/@openzeppelin/hardhat-upgrades/src/force-import.ts:46:7)
```
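The three steps above can be modeled with a toy manifest to show where the clash comes from. Again, this is a sketch of the reported behavior, not the plugin's actual code; the `toy*` names are hypothetical:

```
// Toy model of the reported sequence: deployImplementation records the
// implementation address in the manifest; forceImport then tries to record
// the same address again instead of reusing the entry, and the clash
// check rejects it.
const manifestAddresses = new Set<string>();

function toyDeployImplementation(address: string): string {
  manifestAddresses.add(address); // step 1: implementation enters the manifest
  return address;
}

function toyForceImport(implAddress: string): void {
  // step 3: re-registration of an address already in the manifest -> clash
  if (manifestAddresses.has(implAddress)) {
    throw new Error(`The following deployment clashes with an existing one at ${implAddress}`);
  }
  manifestAddresses.add(implAddress);
}

const impl = toyDeployImplementation("0xImpl"); // upgrades.deployImplementation(...)
// step 2 (deploying the custom proxy with ethers) does not touch the manifest
let importFailed = false;
try {
  toyForceImport(impl); // upgrades.forceImport(...) fails here
} catch {
  importFailed = true;
}
console.log(importFailed); // true
```

If this model is right, skipping the explicit `deployImplementation` call (or importing against a manifest that does not yet contain the address) would avoid the clash, though that is speculation on my part.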
Hi community, I found that if you want to use the `forceImport` function, you must have kept your old-version contract file, which doesn't make sense.
So I wrote a simple TS script to help you upgrade a contract using Hardhat, without the cache from the deployment phase.
https://gist.github.com/wyf-ACCEPT/3a4a96ef5dfb7c73c537ed7ee629d49e
I'll also leave the solution here:
- Fill in the `.env` file:

```
USDC_ADDRESS=""
PROXY_ADMIN=""
```
- Add a `contracts/proxy-utils.sol` file (just to compile the `ProxyAdmin` contract):

```
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/proxy/transparent/ProxyAdmin.sol";
```
- Add a `scripts/upgradeManual.ts` file:

```
import "dotenv/config"
import { ethers } from "hardhat"

async function main() {
  // Addresses from .env: the proxy to upgrade and its ProxyAdmin
  const proxyAddress = process.env.USDC_ADDRESS!
  const proxyAdminAddress = process.env.PROXY_ADMIN!
  const proxyAdmin = await ethers.getContractAt("ProxyAdmin", proxyAdminAddress)

  // Deploy the new implementation, then point the proxy at it
  // ("0x" means no initializer call during the upgrade)
  const newImpl = await ethers.deployContract("UpgradeableUSDC", [])
  await proxyAdmin.upgradeAndCall(proxyAddress, await newImpl.getAddress(), "0x")
  console.log(`\x1b[0m${"UpgradeableUSDC"}(upgradeable) upgraded to: \x1b[32m${proxyAddress}`)
}

main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });
```
- Run the script with:

```
npx hardhat run ./scripts/upgradeManual.ts --network <your_network>
```