Can't deploy a worker to routes that are assigned to another worker.
When I publish my worker, I get this error:
✘ [ERROR] A request to the Cloudflare API (/accounts/***/workers/scripts/my-api-production/routes) failed.
workers.api.error.duplicate_route [code: 10020]
When I comment out the route in my wrangler.toml, I get:
✘ [ERROR] Can't deploy a worker to routes that are assigned to another worker.
"my-api" is already assigned to routes:
- mydomain.com
Unassign other workers from the routes you want to deploy to, and then try again.
Visit https://dash.cloudflare.com/***/workers/overview to unassign a worker from a route.
The only way I've found to make it actually deploy is to go to https://dash.cloudflare.com/myAccountId/mySiteDomain/workers and manually remove the route before each deployment.
Is there a way to make this work automatically? I don't want to do this by hand every time.
Versions: wrangler-action v3.4.1, wrangler v3.30.1
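One way I could imagine automating the dashboard workaround (untested sketch; ZONE_ID and ROUTE_PATTERN are placeholder names I made up, and the API token would need Workers Routes edit permission) is to delete the conflicting route through the Cloudflare REST API in a step that runs before wrangler-action:

// delete-route.ts -- untested sketch: free up a route before deploying so wrangler can claim it.
// Assumes Node 18+ (global fetch), a CLOUDFLARE_API_TOKEN with Workers Routes edit permission,
// and ZONE_ID / ROUTE_PATTERN env vars (placeholder names, not part of wrangler-action).

const API = "https://api.cloudflare.com/client/v4";
const token = process.env.CLOUDFLARE_API_TOKEN!;
const zoneId = process.env.ZONE_ID!;
const pattern = process.env.ROUTE_PATTERN ?? "mydomain.com";

const headers = { Authorization: `Bearer ${token}`, "Content-Type": "application/json" };

async function main() {
  // List the zone's worker routes and look for the one we want to free up.
  const listRes = await fetch(`${API}/zones/${zoneId}/workers/routes`, { headers });
  const list = (await listRes.json()) as {
    result: { id: string; pattern: string; script?: string }[];
  };

  const route = list.result.find((r) => r.pattern === pattern);
  if (!route) {
    console.log(`No existing route for ${pattern}, nothing to do.`);
    return;
  }

  // Delete it so the next deploy can re-create it for the new worker.
  await fetch(`${API}/zones/${zoneId}/workers/routes/${route.id}`, {
    method: "DELETE",
    headers,
  });
  console.log(`Removed route ${pattern} (was assigned to ${route.script ?? "unknown"}).`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

The downside is that the route is briefly unassigned between the delete and the deploy, so there is a short window where no worker serves it.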
Here's my workflow file:
name: Deploy
on:
  workflow_dispatch:
  push:
    branches: [main]
jobs:
  build_and_deploy:
    name: Build and deploy
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [21.x]
    env:
      ENVIRONMENT: production
      APP_SECRET: ${{ secrets.APP_SECRET }}
      DATABASE_URL: ${{ secrets.DATABASE_URL }}
      CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
    steps:
      - uses: actions/checkout@v4
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - uses: c-hive/gha-yarn-cache@v2
      - name: Get version
        id: version
        run: echo "::set-output name=version::$(date +'%Y-%m-%dT%H:%M:%S')-${{ github.sha }}"
      - name: Install dependencies
        run: yarn --frozen-lockfile
      - name: 🔨📦 Build and deploy
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          environment: 'production'
          wranglerVersion: "3.30.1"
          secrets: |
            APP_SECRET
            DATABASE_URL
And my wrangler.toml:
name = "my-api"
account_id = "id"
main = "./server/worker.ts"
route = "mydomain.com"
compatibility_date = "2022-01-11"
node_compat = true
dev.port = 3010
kv_namespaces = [
  { binding = "SESSION", id = "542345234" },
  { binding = "SELECTED_PROFILE", id = "24523434" }
]

[triggers]
crons = ["0 */3 * * *"]

[env.development.triggers]
crons = ["* * * * *"]

[durable_objects]
bindings = [
  { name = "Cart", class_name = "Cart" },
]

[[migrations]]
tag = "v1"
new_classes = ["Cart"]

[[migrations]]
tag = "v2"
new_classes = ["ProductDurableObject"]

[[migrations]]
tag = "v3"
deleted_classes = ["ProductDurableObject"]

# Override values for `--env production` usage
[env.production]
route = "mydomain.com"
kv_namespaces = [
  { binding = "SESSION", id = "123" },
  { binding = "SELECTED_PROFILE", id = "213423" }
]
We use a single wrangler.toml to manage roughly 20 workers as separate environments.
We run into this when we want to move a route from worker env A to worker env B. We would love to be able to overwrite the assignment instead of the three-PR dance we do today: add another route with lower precedence to worker B, remove the route from A, then add the original route to B and remove the temporary route.
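In the meantime, a single API call can re-point an existing route at a different worker, which might avoid the temp-route shuffle (untested sketch; ROUTE_ID comes from listing the zone's routes, and all the env var names here are placeholders):

// reassign-route.ts -- untested sketch: point an existing route at a different worker
// via the Cloudflare API instead of shuffling wrangler.toml across several PRs.
// Assumes Node 18+ (global fetch); ZONE_ID, ROUTE_ID, ROUTE_PATTERN and TARGET_SCRIPT
// are placeholder env var names.

const API = "https://api.cloudflare.com/client/v4";
const token = process.env.CLOUDFLARE_API_TOKEN!;
const zoneId = process.env.ZONE_ID!;
const routeId = process.env.ROUTE_ID!;      // from GET /zones/{zone_id}/workers/routes
const pattern = process.env.ROUTE_PATTERN!; // e.g. "mydomain.com/*"
const script = process.env.TARGET_SCRIPT!;  // worker name for env B

async function main() {
  // Update the route in place so it starts serving from the new worker.
  const res = await fetch(`${API}/zones/${zoneId}/workers/routes/${routeId}`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ pattern, script }),
  });
  const body = (await res.json()) as { success: boolean; errors: unknown[] };
  if (!body.success) throw new Error(JSON.stringify(body.errors));
  console.log(`Route ${pattern} now points at ${script}.`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

It still isn't driven by wrangler.toml, so the config would need a follow-up PR to match, but it avoids leaving the route unassigned in between.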
+1
I'm not sure exactly what your use case is, but it might be useful to look into Versions & Deployments: https://developers.cloudflare.com/workers/configuration/versions-and-deployments/
If that doesn't help, please file an issue in the workers-sdk repo, since this would require a new wrangler command: https://github.com/cloudflare/workers-sdk/issues