Autodeploy using old workspace?
Hi, I'm new to Autodeploy so I might be holding it wrong, but I can't for the life of me get it to work.
At this stage my best guess is that it's due to a project being partially removed. Here's what happened:
1. I created my app, `10ten-life`, and deployed it manually. No problems.
2. I set up autodeploy and tried to deploy, but it failed for unrelated reasons (the type check on the generated workflow file was silently failing, see #47).
3. I decided to update SST to see if that would fix 2.
4. The latest SST requires that app names not start with a number, so I renamed the app to `tenten-life`.
5. I tried deploying `tenten-life` manually, but it told me that the app name had changed so I needed to remove the old one first.
6. I temporarily downgraded SST, renamed the app back to `10ten-life`, ran `sst remove`, renamed it back to `tenten-life`, upgraded SST, and deployed manually. Everything worked.
7. In the console I could still see `10ten-life`, so I removed it. (I think. It's been a couple of days, but I'm pretty sure I had to manually drop it from the console.)
8. I tried to deploy `tenten-life` via autodeploy but I get the following error:
```
sdk-v2/provider2.go:515: sdk.helper_schema: creating CloudFront Distribution: operation error CloudFront: CreateDistributionWithTags, https response error StatusCode: 409, RequestID: f4445184-a939-4775-b331-efb259d29f82, CNAMEAlreadyExists: One or more of the CNAMEs you provided are already associated with a different resource.: [email protected]
```
This is weird because it works when I deploy locally (even on a different computer), and I've definitely removed `10ten-life` from AWS.
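For completeness, one way I can double-check that nothing in the account still claims the CNAME is to query CloudFront by alias (CloudFront aliases are global, not regional). A rough sketch as a one-off workflow step; the domain below is a placeholder for the redacted one in the error:

```ts
// Hypothetical diagnostic: list any distribution that still holds the alias.
// 'example.10ten.life' is a placeholder for the redacted CNAME above.
await $`aws cloudfront list-distributions --query "DistributionList.Items[?Aliases.Items && contains(Aliases.Items, 'example.10ten.life')].{Id:Id,Domain:DomainName}"`;
```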
If I run `aws sts get-caller-identity` in the workflow I can see that it's using the correct AWS account.
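Concretely, that check was just a temporary step at the top of the workflow callback:

```ts
// Temporary diagnostic in the autodeploy workflow: print which AWS
// identity the runner is using before anything else happens.
await $`aws sts get-caller-identity`;
```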
If I run `sst diff` in the workflow I hit an error about missing secrets, despite the fact that they are shown in the resources and I have definitely deployed locally after re-setting them.
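In case it's useful, here is a sketch of the diagnostic step I can add to the workflow to see what the runner sees, assuming `sst secret list` behaves the same in the runner as it does locally:

```ts
// Hypothetical diagnostic: list the secrets the runner can resolve for
// this stage; nothrow so a failure here doesn't abort the whole run.
await $`pnpm sst secret list`.nothrow();
```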
Curiously, if I press the up-down arrow next to the app, I see `10ten-life` is still presented as an option, despite having removed it from the console and from AWS.
So at this stage my best guess is that something is being cached: when it goes to autodeploy `tenten-life`, it's trying to deploy it as `10ten-life` and hitting conflicts. I've tried doing a force deploy but it doesn't help.
My console config is as follows:
```ts
console: {
  autodeploy: {
    async workflow({ $, event }) {
      await $`npm i -g pnpm`;
      await $`pnpm i`;
      if (event.action === 'removed') {
        await $`pnpm sst remove`;
      } else {
        // We have to wrap all these commands up and check their exit code
        // explicitly due to https://github.com/sst/console/issues/47
        console.log('Installing SST');
        await $`pnpm sst install`;
        console.log('Checking Astro site');
        {
          const { exitCode } = await $`pnpm astro check`.nothrow();
          if (exitCode !== 0) {
            throw new Error('`astro check` failed');
          }
        }
        console.log('Checking TypeScript');
        {
          const { exitCode } = await $`pnpm tsc --noEmit`.nothrow();
          if (exitCode !== 0) {
            throw new Error('`tsc` failed');
          }
        }
        console.log('Actually doing deploy');
        {
          const { exitCode } = await $`pnpm sst deploy`.nothrow();
          if (exitCode !== 0) {
            throw new Error('`sst deploy` failed');
          }
        }
      }
    },
    target(event) {
      if (
        event.type === 'branch' &&
        event.branch === 'main' &&
        event.action === 'pushed'
      ) {
        return { stage: 'dev' };
      }
    },
  },
}
```
Even if I drop the workflow altogether the problem persists (i.e. the default workflow has the same problem).
I've spent hours and hours on this but I'm stuck.
Yeah this isn't quite Autodeploy related but let's start here, can you try going to your workspace settings and rescanning your AWS account?
Sorry for the delay (your message came through on a Saturday morning here). I tried rescanning but I get the same error.
Curiously, though, it now appears to be trying to deploy to us-east-1.
Prior to rescanning I only had the entry for us-west-2 (which is where I intend to deploy it, and where I've been successfully deploying it manually).
I tried explicitly specifying the region as us-west-2 in `sst.config.ts` but it still appears as though it's trying to deploy to us-east-1.
(For what it's worth, I've confirmed that the SST console stack is deployed in us-east-1.)
Do you have multiple AWS accounts or just one?
@jayair I have multiple AWS accounts, but I've only ever connected one to the SST console, and I've verified that SST autodeploy is using the same account as when I deploy locally (by making the autodeploy workflow call `aws sts get-caller-identity`).
@jayair Anything else I can try here?
It really seems like when I deploy locally it gets the region as us-west-2 and everything is great, but autodeploy ends up with region us-east-1.
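To make the region the runner resolves visible, a temporary step like this could go before the deploy (assuming the runner exposes the usual AWS environment variables):

```ts
// Hypothetical diagnostic: print whatever region hints the runner has.
await $`echo "AWS_REGION=$AWS_REGION AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION"`;
await $`aws configure get region`.nothrow();
```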
In my config I have:
```ts
app(input) {
  return {
    name: 'tenten-life',
    removal: input?.stage === 'production' ? 'retain' : 'remove',
    home: 'aws',
    region: 'us-west-2',
  };
},
```
If I try to specify the provider region to be us-west-2 like so:
```diff
 app(input) {
   return {
     name: 'tenten-life',
     removal: input?.stage === 'production' ? 'retain' : 'remove',
     home: 'aws',
+    providers: {
+      aws: {
+        region: 'us-west-2',
+      },
+    },
     region: 'us-west-2',
   };
 },
```
I get errors like `Failed to create runner: CodeBuild is not authorized to perform: sts:AssumeRole on service role`.
The SST console stack is deployed in us-east-1, which I believe is correct.
Actually, never mind, I might have fixed this. I'll fill in details later but just wanted to update this ticket so you don't waste time debugging it.
Great, yeah report back.
Closing for now.