tasking-manager
Deployment fixes and infrastructure maintenance
The major reason the deployment failed last week was a networking issue: connecting over IPv4 to S3 resources for the CloudWatch utilities and deployment-signaling software. We switched to the dual-stack endpoints for downloading those packages. I also took the liberty of fixing some long-standing problems with the infrastructure and deployment:
- Replaced the LaunchConfiguration with a LaunchTemplate
- Made the frontend S3 bucket private and only accessible through CloudFront
- Created a development environment in the CI config and removed some redundant builds to deployment branches
Huge thanks to @eternaltyro for helping with the NAT64 config on the VPC side as well. A lot of AWS resources still don't support dual-stack connectivity, despite Amazon's recent push away from IPv4.
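For anyone unfamiliar with the fix above: S3 dual-stack endpoints publish both A (IPv4) and AAAA (IPv6) DNS records, so clients behind NAT64/IPv6-only networking can still reach them. A minimal sketch of the endpoint change (the bucket, region, and key below are placeholders, not the actual resources we download):

```shell
# Placeholders for illustration only:
BUCKET="example-bucket"
REGION="us-east-1"
KEY="cloudwatch-agent.rpm"

# Legacy IPv4-only endpoint form:
#   https://${BUCKET}.s3.${REGION}.amazonaws.com/${KEY}
# Dual-stack endpoint form (resolves over IPv4 and IPv6):
URL="https://${BUCKET}.s3.dualstack.${REGION}.amazonaws.com/${KEY}"
echo "$URL"
# curl -fsSL "$URL" -o "$KEY"   # then download as usual
```

The only change needed on the instance side is swapping the hostname; the bucket and key stay the same.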
Some more improvements could be made, but I don't want to hold up the deployment any longer:
- We should set proper response headers with a working Content Security Policy
- Instead of using `deployment/*` branches, we should deploy to production directly from releases
For devs: you can now test new features on the dev server. All you need to do is replace one line in the `.circleci/config.yml` file:
- equal: [ {YOUR-BRANCH}, << pipeline.git.branch >> ] # change this to the branch you wish to test
Just change `{YOUR-BRANCH}` to the branch you wish to test the deployment on. It must be a branch within the hotosm tasking-manager repository. Note that whenever the `develop` branch is updated, those changes will be overridden. We may need to make it clearer which branch or version is currently on the dev server, perhaps under `release` in the heartbeat API response.
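For context, that `equal` line sits inside a CircleCI `when` condition on a workflow; a simplified sketch of how such a branch filter typically looks (the workflow and job names here are illustrative, not our actual config):

```yaml
workflows:
  dev-deploy:            # illustrative workflow name
    when:
      # Run this workflow only when the pipeline branch matches:
      equal: [ my-feature-branch, << pipeline.git.branch >> ]
    jobs:
      - deploy-dev       # illustrative job name
```

So editing that one line re-points the dev deployment at your branch without touching the rest of the pipeline.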
Once merged, tasks-stage.hotosm.org will deploy off the main branch, which still needs to be created. This will line up more closely with FMTM's deployment scheme.
The deployment "failed" on CircleCI but was actually successful. It looks like CircleCI fails the job after 10 minutes with no output, but the instance deployment took 11 minutes:
After review, it would be fine to push despite the error. I will see if we can extend the timeout. @eternaltyro it is odd that instantiating EC2s now takes longer than before; I am not sure what is happening. It used to be around 6-8 minutes. It may not be worth investigating, though, since we have other priorities for containerisation that will make it irrelevant.
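On extending the timeout: CircleCI `run` steps support a `no_output_timeout` key (the default is 10 minutes, which matches the failure we saw). A sketch of the likely fix, assuming the step name and command shown here rather than the real ones:

```yaml
- run:
    name: Wait for EC2 instance deployment   # illustrative step name
    # Default is 10m; the deploy took ~11m, so give it headroom:
    no_output_timeout: 20m
    command: ./scripts/wait-for-deploy.sh    # illustrative command
```

Alternatively, having the wait script print periodic progress output would also keep the step alive without changing the timeout.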
I am fine with the branching strategy. Requesting @eternaltyro for the CloudFormation checks. cc: @dakotabenjamin
Quality Gate passed
- Issues: 1 new issue, 0 accepted issues
- Measures: 0 security hotspots, no data about coverage, 0.0% duplication on new code