Database Resource Provider
Issue #, if available: In support of #456; #651
Description of changes: Addition of an extensible RDS resource provider - currently only supports Aurora, but will be adding additional support in the coming months as demand changes.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
@5herlocked would love to see this extend to RDS, as an addon I'm working on (Apache Airflow) needs RDS for its DB.

Will start working on it this week
As requested, RDS Instance provider has been built.
Will be adding in test cases promptly.
@youngjeong46 ~~Give this a try. It does successfully create an RDS instance and provides a secret as an ISecret. It should be pretty straightforward to integrate into a full deployment.~~
Looks like I need to update the RP for the newest version of the VPC RP.
@shapirov103 @youngjeong46 Please leave feedback as needed. Will fill out additional documentation by EoW.
Absolutely, we can use crossplane or ACK. Though it would require a fair bit of setup to ensure the RDS instance/cluster's lifecycle would match that of the EKS cluster.
Right now, if I understand it correctly, the process to set up an RDS cluster/instance supporting an EKS cluster is one of the following:
- EKS Blueprints -> ACK -> Argo Managed RDS YAMLs
- EKS Blueprints -> ACK -> Pre-Existing RDS YAMLs (probably still managed through argo)
With this, we can bypass ACK and tie the lifecycle of an RDS instance directly to that of the blueprinted cluster it supports, making the flow:
- EKS Blueprints -> RDS, with the same configuration surface you get from the CDK constructs `DatabaseInstance` and `DatabaseCluster`
I believe this more closely matches the spirit of the repository than using ACK, as it treats the RDS instance as part of the infrastructure required by the cluster and provides the ability to create and delete it as part of the infra pipeline.
Will add the examples and create the documentation before EoW.
@5herlocked First of all, thank you for the PR and the response. Managing RDS and its lifecycle strictly as part of cluster creation is a strict no-no. The lifecycle of RDS should ideally be maintained outside the lifecycle of EKS cluster creation, so that even if the cluster is deleted, RDS and the data persisted in it remain. S3 is a different use case, where you might need to create a bucket for temporary purposes, like the Airflow use case. But for RDS, it is a strict anti-pattern to tie its creation to EKS cluster creation. The reason I suggested ACK or Crossplane is multi-cluster management scenarios, where a management cluster creates child clusters and RDS databases. So, bottom line: an RDS provider does not align with the spirit of ResourceProvider, and I would question why we need this. What is your motivation and use case? If the answer is RDS creation alongside the cluster, I wouldn't necessarily agree. Using ACK or Crossplane with Argo or Flux promotes GitOps, which would be the right approach in your case.
@elamaran11 and I discussed it. It should be made very clear in the documentation that this addon is coupled to the lifecycle of the cluster (which is the case for any resource, tbh). It can only be used with a so-called management cluster that governs the platform. It can also be used for illustrative purposes; however, it makes sense to add a resource retention control to avoid accidental data loss when the stack is reprovisioned or dropped. In the management cluster pattern, this resource provider is not that different from resources provisioned with ACK or Crossplane: removing the resource from the management cluster will, by default, drop those resources as well. I can see Ela's perspective as more generalizing, e.g. what if we now add a bunch of different database providers and customers misuse them. We need to define an overall strategy for resource providers, with the goal of addressing real customer use cases and improving the customer experience as best we can.
@shapirov103 @elamaran11 Would it make sense to set the retention policy to snapshot-on-delete by default, as a way of safeguarding customer data?
@5herlocked let's resolve the conflicts and I agree with the direction of applying a snapshot or any more conservative retention policy to avoid inadvertent data loss. We will need a doc under docs/resource-providers (index.md and the rds.md).
@5herlocked Did you get a chance to close out these PR comments?
Yes, it was part of the last commit. Please feel free to look it over.
This PR has been automatically marked as stale because it has been open 60 days with no activity. Remove stale label or comment or this PR will be closed in 10 days
@elamaran11 feel free to close/merge. It is complete and the tests are in place.
Hey @5herlocked, how would this work in terms of network connectivity? I don't see any security group defined or imported here.
/do-e2e-tests
@sanyer Networking configuration is entirely inherited from the VPC Resource Provider.
Additional configuration can also be passed transparently to the underlying resources; note the interface CreateRDSInstanceProviderProps.rdsProps. For additional reference: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_rds.InstanceProps.html and https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_rds.DatabaseClusterProps.html
@shapirov103 Will include deployment screenshots and necessary e2e tests ASAP.
@5herlocked whenever you get a chance, let's address the above and merge it. The functionality looks really good; I assume there will be more requests around this capability once customers start using it.
This PR has been automatically marked as stale because it has been open 60 days with no activity. Remove stale label or comment or this PR will be closed in 10 days
@5herlocked please merge main to avoid MD link failure
Underlying Database Cluster CDK constructs changed. This is no longer functional.
--- Reverting to draft to work on it ---