Cloud Based API Instance for Developers
We need to have a cloud-based API instance for developers to test against. It must:
- Be integrated into our CI/CD pipeline so that the app is rebuilt with every newly merged PR.
- Initialize the database to a known working state, restarting the appropriate services when this happens (there are existing API calls for this):
  - with each CI/CD update
  - every 24 hours (see the scheduled-workflow sketch below this list)
- Use a talawa.io sub-domain as the endpoint
- Have a rate-limiting mechanism to reduce the risk of abuse
- Preferably use a free service
- Use a Palisadoes role-based account for management
- Be easily switchable between branches as the code source. We will be migrating to master as the default branch soon.
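For example, the 24-hour reset could be driven by a scheduled GitHub Actions workflow. A minimal sketch, assuming a hypothetical reset endpoint and secret name, since the exact API call isn't specified here:

```yaml
# Hypothetical .github/workflows/reset-demo-db.yml
name: Reset demo database
on:
  schedule:
    - cron: "0 0 * * *" # once every 24 hours, at midnight UTC
  workflow_dispatch: # also allow manual resets from the Actions tab
jobs:
  reset:
    runs-on: ubuntu-latest
    steps:
      - name: Call the existing database-reset API
        # The endpoint path and secret name below are placeholders.
        run: |
          curl --fail -X POST "https://demo.talawa.io/reset" \
            -H "Authorization: Bearer ${{ secrets.DEMO_RESET_TOKEN }}"
```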
Please coordinate these efforts with @noman2002 and @kb-0311. Ask to be assigned this task by them.
@noman2002 @kb-0311 I can give this issue a try
There is a file named push-documentation.yml in the .github/workflows folder. Do we need to make changes there or create a new file?
- We used to have this working with Heroku using this file: https://github.com/PalisadoesFoundation/talawa-api/blob/develop/.github/workflows/ci.yml.archive
- There was some setup required on the Heroku side to make it work correctly whenever we did a merge. We decided to stop using Heroku due to the cost of keeping the instance running. We would like to use a freemium service.
@kirtanchandak Please proceed with this. You can use that file; you just need to make some minor changes. But I would suggest checking other solutions, like render.com, instead of Heroku.
Hi @noman2002 @kirtanchandak, I have been working on a related issue, containerizing talawa-api using Docker, here: https://github.com/PalisadoesFoundation/talawa-api/pull/1418. That PR was merged yesterday, and I was working on developing a CI/CD pipeline for this exact use case. @kirtanchandak, can I work on this issue if you haven't made any significant progress yet?
One issue per person
Okay
This issue did not get any activity in the past 10 days and will be closed in 180 days if no update occurs. Please check if the develop branch has fixed it and report again or close the issue.
@kirtanchandak Are you still working on this?
@palisadoes I can work on this if @kirtanchandak is not working on this
Assigning to @vasujain275. Please work with @noman2002 and @kb-0311 for comments, advice, and reviewing the PR.
Hi @noman2002 and @kb-0311, I did some research and found that the MongoDB Atlas free tier is good enough for our use case. I also looked into the Redis free tier at render.com; it has 25 MB of memory with a 50-connection limit and no persistent storage. We could also host our API on render.com, but it will be shut down after 5 minutes of inactivity and restarted when called again. Will render.com's inactivity delay or the Redis free tier be a problem?
@vasujain275
- Instead of render are there other free tier cloud instances that we can use to deploy our application?
- Well, in production the Redis server was to be used as an in-memory cache, not a global cache, so we also need the option to access the file system/services of the server where the API is deployed.
- Render and the like are fine, but if possible can you also explore free-credit use of GCP and/or Azure? Do let me know if those are feasible.
@kb-0311 We can use the AWS free tier, which gives a t2.micro EC2 instance free for 12 months per account (a credit card is needed to claim this 12-month free tier). We can use the Dockerfile and docker-compose file I created in a recent issue, and set up a CI/CD pipeline to push our image to Docker Hub on each push and then pull it onto EC2 using GitHub Actions. We can then run docker-compose on EC2, also via GitHub CI/CD, to run containers for both Redis and MongoDB; although I think it will be best to run MongoDB on Atlas, as it has a good free tier and doing so will reduce the load on our t2.micro EC2 instance. We could also use Azure/GCP; they have this same kind of 12-month free tier that requires a credit card on signup.
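A rough sketch of what that pipeline could look like (the secret names, image name, and server paths are assumptions, not settled values):

```yaml
# Hypothetical .github/workflows/deploy.yml
name: Build and deploy demo API
on:
  push:
    branches: [develop]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build the API image and push it to Docker Hub
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: palisadoes/talawa-api:latest # placeholder image name
      - name: Pull and restart the containers on EC2
        uses: appleboy/ssh-action@v1.0.0 # community action for running remote commands
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/talawa-api # assumed location of the compose file on the server
            docker compose pull
            docker compose up -d
```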
@vasujain275 Ideally we also want:
- A single account to manage all the required resources.
- The ability to have trusted contributors update the cloud service parameters to make sure the CI/CD functions
- Ensure that only one person has the right to purchase new services or upgrade any existing ones. This will help reduce long-term costs and potential abuse.
Other:
- @kb-0311 @noman2002 Do you have anything else to add?
- Is the most recent comment from @vasujain275 in keeping with the desired outcome?
@palisadoes
- Only a single AWS account is required to manage this cloud instance.
- We only need to add the cloud service parameters as GitHub secrets in this talawa-api repo. The secrets can then be safely and easily used in our GitHub Actions workflows via environment variables (see the snippet after this list).
- Adding GitHub secrets can be done by any trusted contributor.
- Only the person who signs up for the AWS instance will have the right to purchase new services or upgrade any existing ones.
- No one will need to SSH directly into the instance to make changes, as changes will be made via the CI/CD pipeline.
- As for MongoDB Atlas, we will only need to add the MongoDB Atlas cluster URL as a GitHub secret in this repo.
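For instance, a secret could be surfaced to a workflow step like this (the secret name and script are illustrative only):

```yaml
# Snippet from a hypothetical workflow job
- name: Run a step that needs the Atlas connection string
  env:
    MONGO_DB_URL: ${{ secrets.MONGO_DB_URL }} # added as a repo secret by a trusted contributor
  run: ./scripts/check-db-connection.sh # hypothetical script reading MONGO_DB_URL
```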
Q. How can GitHub Actions make all the changes on our AWS instance? A. Docker containers. We will never install talawa-api directly on the AWS instance; instead, we will run a Docker container of our API that can be rebuilt and restarted every time we push changes to the repo.
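A minimal docker-compose sketch of that setup, assuming MongoDB stays on Atlas so only the API and Redis run on the instance (the image name, port, and variable names are assumptions):

```yaml
# Hypothetical docker-compose.yml for the demo instance
version: "3.8"
services:
  api:
    image: palisadoes/talawa-api:latest # placeholder image name
    restart: always
    ports:
      - "4000:4000" # assumed API port
    environment:
      MONGO_DB_URL: ${MONGO_DB_URL} # Atlas connection string, injected from a GitHub secret
      REDIS_HOST: redis # assumed variable name; points at the container below
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
    restart: always
```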
My proposed solution:
- First, I will create all the GitHub Actions files and docker-compose files required to completely set up a talawa-api instance on AWS via CI/CD.
- I will then test all the GitHub Actions and compose files on my personal AWS EC2 instance to make sure everything works as intended. This way I will not need direct access to the actual talawa-api production EC2 instance.
- Once all the GitHub Actions are complete and tested by me, one of the trusted contributors can sign up for the AWS instance and MongoDB Atlas. They will then set up the required environment variables as GitHub secrets in this repo.
@noman2002 @kb-0311, I'd appreciate your feedback on possible improvements to this approach.
- Based on your answer we will need to manage two cloud accounts. Can this be done using only one account?
- We need to make sure that more than one person can access the AWS dashboard for purposes of troubleshooting. Can this be done without them having access to the credit card information to add new services?
- If we use a MongoDB Atlas account, how can we guarantee the same AWS access requirement above?
@vasujain275
- Yes your solution and flow make sense to me.
- At this stage it would be easier to showcase the CI/CD pipeline using your forked talawa-api repo as the base for now and your free-tier AWS EC2 instance as the deployment server.
- Create a new branch and configure the docker-compose file as necessary. Since you are the owner of the forked repo, you can also add the GitHub secrets for Atlas and AWS.
- Then make demo PR merges to that branch and demonstrate how the CI/CD pipeline will work using a video.
- This will help us understand potential problems that could occur at a given point more precisely.
@palisadoes
- Can you elaborate on the two accounts? Are you referring to one account for AWS and another for MongoDB Atlas, or something else?
- In AWS and MongoDB I think there are certain ABAC and RBAC features for this purpose. Maybe we can take a look at those? @vasujain275
@palisadoes
- By two accounts, if you are referring to one account for AWS and another for MongoDB Atlas, then in my opinion it's better to keep them separate, as that will decrease the load on our EC2 machine.
- As @kb-0311 suggests, there is role-based access in both AWS and MongoDB Atlas that we can use to give another person limited access for troubleshooting purposes.
@kb-0311 I will start working on this by creating a new branch "deployment" in my forked repo and will follow the process you mentioned in the previous comment.
- Yes, one in AWS and the other with Mongo.
- Remember, this will be solely for demo purposes, not a heavily used system; there will be only a few transactions per hour for developers to test against. Keep this in mind with your final recommendation.
@palisadoes Okay I will keep that in mind
What's the status on this? It's an important part of our 2023 road map. We need this up and running by very early January.
@palisadoes
Sorry for the delay. I have programmed deploy.yml locally, but I'm having some issues with my credit card on AWS, so I'm not able to test it. I will definitely resolve this within a couple of days.
@vasujain275 If AWS is a difficulty, what would be the minimal Ubuntu VPS required with GoDaddy to get a test instance running? We could provide access via a public/private SSH key for security.
We really need to get this going.
@palisadoes I got the AWS EC2 instance working. I only had debit cards and AWS was not accepting them; I found a credit card and it worked. I have been testing the code for the past couple of hours and will create a PR in a couple of minutes.
Please answer my question on the GoDaddy VPS.
With that solution we would have a fixed cost and not have to worry about unexpected cloud charges.
Let me know
@palisadoes
- Yes, we can do a basic GoDaddy VPS and that would work fine, because we just need an Ubuntu server.
- As for the AWS free-tier machine, it is really slow and could cause issues in the long run. Build times are around 15 minutes on it.
- A GoDaddy VPS would be a much better option than the AWS free tier, as we wouldn't have to worry about unexpected charges, as you mentioned, and would get better performance.
- Regarding MongoDB hosted on a different cloud: I agree that having one cloud is the better option, and from my testing it will not affect our performance. We should go with one cloud instance only.
- I am improving the docker-compose setup with more layer caching so that we can reduce build time and make the overall process more efficient (see the sketch below).
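One way layer caching could be wired into the Actions build step is the GitHub Actions cache backend for BuildKit; this is a guess at the approach, not a confirmed choice:

```yaml
# Snippet from the hypothetical deploy workflow
- uses: docker/setup-buildx-action@v3 # BuildKit is required for the gha cache backend
- name: Build and push with layer caching
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: palisadoes/talawa-api:latest # placeholder image name
    cache-from: type=gha # reuse layers cached by earlier workflow runs
    cache-to: type=gha,mode=max # cache all intermediate layers, not just the final ones
```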
If we had a VPS we could host mongo on the local server for simplicity.
What would your minimum be?
- https://www.godaddy.com/hosting/vps-hosting
Remember, we will be resetting the DB every 24 hours. So we don't need a lot of disk.
@palisadoes I believe that hosting MongoDB, Redis, and talawa-api on the same VPS is feasible. In this scenario, I think the basic plan (1 vCPU / 2 GB RAM / 40 GB storage) would be sufficient.
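If everything lives on the one VPS, the earlier compose sketch could gain a local MongoDB service, roughly like this (image tags, port, and variable names are still assumptions):

```yaml
# Hypothetical docker-compose.yml for the single-VPS setup
version: "3.8"
services:
  api:
    image: palisadoes/talawa-api:latest # placeholder image name
    restart: always
    ports:
      - "4000:4000" # assumed API port
    environment:
      MONGO_DB_URL: mongodb://mongo:27017/talawa # local MongoDB instead of Atlas
      REDIS_HOST: redis
    depends_on:
      - mongo
      - redis
  mongo:
    image: mongo:6
    restart: always
    volumes:
      - mongo-data:/data/db # survives restarts; wiped by the daily reset job
  redis:
    image: redis:7-alpine
    restart: always
volumes:
  mongo-data:
```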
@vasujain275
- Please DM me your SSH public key in Slack so I can add you to the server.
- What directory structure would you propose to get this set up?
- Under what user would this all run? I'd prefer everything to run as a dedicated unprivileged account.
@kb-0311
- Any thoughts?