Request Infrastructure for OpenEBS project
Please fill out the details below to file a request for access to the CNCF Community Infrastructure Lab. Please note that access is targeted to people working on specific open source projects; this is not designed just to get your feet wet. The most important answer is the URL of the project you'll be working with. If you're looking to learn Kubernetes and related technologies, please try out Katacoda.
First and Last Name
- Aman Gupta
- Abhilash Shetty
Company/Organization
MayaData / OpenEBS
Job Title
Software Development Engineer 2
Project Title (i.e., a summary of what do you want to do, not what is the name of the open source project you're working with)
- OpenEBS e2e
- Host Gitlab (using as CI/CD tool for e2e)
Briefly describe the project (i.e., what is the detail of what you're planning to do with these servers?)
- OpenEBS e2e: The e2e test pipelines for the various OpenEBS storage engines currently run on on-premise servers hosted in the MayaData office lab in Bangalore, India. Since OpenEBS is a vendor-neutral CNCF sandbox project, we would like to move the execution workflow onto CNCF-owned infrastructure. Each pipeline will create an on-demand Kubernetes cluster with the required spec, run the test cases on that cluster, and dispose of/delete the cluster at the end.
- Hosting GitLab: OpenEBS e2e uses GitLab as its CI/CD tool, which is also hosted in the same on-premise setup mentioned above. The machines hosting GitLab need to be always-running, persistent servers.
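The per-pipeline cluster lifecycle described above can be sketched roughly as below. The `create_cluster`, `run_e2e`, and `delete_cluster` functions are hypothetical placeholders standing in for our provisioning scripts; they are not existing tooling:

```shell
#!/bin/sh
# Sketch of the intended pipeline lifecycle (names are assumptions,
# not real tooling): provision a cluster, run tests, always dispose.
set -eu

create_cluster() {   # placeholder: would provision machines and bootstrap k8s
  echo "provisioning ${NODES}-node cluster (${OS}, k8s ${K8S_VERSION})"
}
run_e2e() {          # placeholder: would run the engine's e2e test suite
  echo "running e2e tests"
}
delete_cluster() {   # placeholder: would release the machines back
  echo "disposing cluster"
}

NODES=4 OS=ubuntu-20.04 K8S_VERSION=1.21

create_cluster
trap delete_cluster EXIT   # dispose the cluster even if tests fail
run_e2e
```

The `trap ... EXIT` ensures cleanup runs whether the tests pass or fail, which matches the on-demand, disposable usage pattern requested below.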
Is the code that you’re going to run 100% open source? If so, what is the URL or URLs where it is located? What is your association with that project?
The code is 100% open source; OpenEBS is a CNCF sandbox project.
- Test cases live in the respective storage engines' GitHub repositories, all of which can be found under the openebs GitHub organization: https://github.com/openebs/
- Pipeline scripts that run those test cases are under the mayadata-io GitHub organization: https://github.com/mayadata-io/
What kind of machines and how many do you expect to use (see: https://metal.equinix.com/product/servers/)?
We are looking for:
- On-demand and disposable:
  - one 4-node Kubernetes cluster (c3.small x86, 1 master + 3 workers, Kubernetes version: {selectable}, OS: Ubuntu {selectable version})
  - one 8-node Kubernetes cluster (c3.small x86, 3 masters + 5 workers, Kubernetes version: {selectable}, OS: RHEL {selectable version})
  - one 6-node Kubernetes cluster (c3.small x86, 1 master + 5 workers, Kubernetes version: {selectable}, OS: CentOS {selectable version})
- Always-running:
  - one 8-node Kubernetes cluster (c3.small x86, 1 master + 7 workers, Kubernetes version: 1.21, OS: Ubuntu 20.04)
What operating system and networking are you planning to use?
- Same as mentioned in the above point.
Any other relevant details we should know about?
- We would like access in a form that lets us create clusters on those machines via scripts, and dispose of the clusters or clean up resources at the end of each pipeline execution.
- We will also require access to attach additional disks to the nodes in order to create disk pools.