Extend QUADS to manage sharing of reserved instances (VM or Bare Metal) in the Public Cloud
Is your feature request related to a problem? Please describe.
We can reserve instances, VM or Bare Metal, in the cloud to get discounts. However, there is no way to manage allocation of those reserved instances like QUADS manages sharing of Bare Metal machines within an on-prem environment.
We often run out of capacity in our internal shared lab environments which leads to long waiting times for allocation requests.
Describe the solution you'd like
Extend QUADS to include managing allocation of reserved instances using cloud specific APIs. Essentially, if internal labs are running full or do not have resources that are being requested, QUADS should be able to allocate them from the pool of reserved instances in the public cloud.
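The fallback behavior described above (use internal lab capacity first, then draw from the reserved cloud pool) could be sketched roughly as follows. This is a minimal in-memory model, not QUADS code; all class and function names here are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class HostPool:
    """A named pool of free hosts available for scheduling (hypothetical model)."""
    name: str
    free_hosts: list = field(default_factory=list)

    def allocate(self, count):
        """Remove and return `count` hosts, or None if the pool is short."""
        if len(self.free_hosts) < count:
            return None
        taken, self.free_hosts = self.free_hosts[:count], self.free_hosts[count:]
        return taken


def allocate_with_cloud_fallback(request_count, internal, cloud_reserved):
    """Prefer internal lab hosts; fall back to reserved cloud instances."""
    hosts = internal.allocate(request_count)
    if hosts is not None:
        return internal.name, hosts
    hosts = cloud_reserved.allocate(request_count)
    if hosts is not None:
        return cloud_reserved.name, hosts
    raise RuntimeError("no capacity in internal lab or reserved cloud pool")
```

For example, a request for 3 hosts against an internal pool holding only 2 would be satisfied entirely from the reserved cloud pool.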
Describe alternatives you've considered
None
Additional context
Thanks for filing this. Most of the heavy lifting here will need to be on the Foreman side using compute resource plugins:
https://theforeman.org/manuals/2.3/index.html#5.2ComputeResources
This is because QUADS needs to be handed hosts (from somewhere) to manage, and there's some really good integration already for most public cloud providers at the Foreman level, theoretically allowing us to manage them with QUADS as if they were normal hosts.
We'd want to experiment with this once we upgrade/migrate our Foreman server to new hardware and versions which should occur in the next few months. Great suggestion @ashishkamra
Hello, checking in to see if this is being prioritized, as we are starting to use more bare-metal instances (reserved for months and years) in the IBM Cloud.
Hey @ashishkamra here's where we are with this right now, having discussed this at some level of implementation depth but also just recently receiving IBM Cloud credentials so we can start poking at things.
I would classify this at pending proof of concept / early architecture phase but it's been discussed and will be something we'll keep on the radar.
More Details
We have both an ideal longer-term, cross-departmental development goal and a short-term proof-of-concept integration plan discussed thus far:
Ideal Long-Term / Product Enhancement Integration
- We would normally rely on a Foreman Compute Resource plugin (or develop one with the Foreman developers), assuming IBM Cloud exposes bare-metal APIs we can interact with. This is the best approach longer-term, as it also opens up Red Hat Satellite customers to utilizing IBM Cloud through RH Satellite.
- This is worth discussing with the product management folks and I would be ecstatic to get the ball rolling with the downstream/upstream product owners in the RH Satellite and Foreman space to see what our collaboration options might be here. I will take ownership of this as I know the right folks to speak with externally and upstream.
- Perhaps someone else has already discussed this given the large percentage of subscription revenue RH Satellite affords and the level of hybrid cloud integration that already exists for many other cloud/infra providers and APIs (AWS, Azure, GCE, Libvirt, oVirt, RHEV, OpenStack, Rackspace).
Potentially Shorter Term / Integration without Foreman Development
- We have brainstormed other implementation options to pursue without a Foreman Compute Resource plugin that would make IBM Cloud BM systems appear as normal Shared Lab / QUADS-managed systems.
- Spin up a permanent, lightweight bastion-type host which can serve out a persistent point-to-point OpenVPN tunnel from our Foreman (or establish an eVPN to our internal network infrastructure if we wanted to mix local + remote resources, though latency and WAN speeds would be a limitation - similar to an AWS VPC, though an option like this may already exist or be in the works)
- OpenVPN between Foreman <-> IBM Cloud would allow us to manage the resources with QUADS but it'd still be isolated and not routed with the rest of the Shared Lab unless we routed it through Foreman
- eVPN between Shared Lab Edge/TOR/10K8 <-> IBM Cloud would let us treat this like a complete extension of local resources, reachable internally and with a proper FQDN / DNS / domain name but would likely require working with IT Networking and Infosec teams for both guidance and approval respectively.
- Expose an RFC1918 subnet to our Foreman e.g. 10.1.x.x that won't conflict with internal corporate IT network namespaces
- Assign this as a "Subnet" to Foreman so we can provide the same level of PXE/DHCP/TFTP as we would local Shared Lab systems
- Manage sets of systems as if they were local bare-metal assets.
- The challenge here is leveraging whatever APIs might be exposed from IBM cloud for creating/destroying BM resources to only use what we need at any given time, ideally another modular QUADS library similar to what we have for JIRA, Foreman, Juniper and other third-party infrastructure resources e.g.
ibm_cloud.py
- Virtual would be much easier as likely we can simply utilize the Libvirt/oVirt Foreman compute resource after some persistent VPN solution is in place.
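One small, concrete piece of the subnet step above can be illustrated with the stdlib `ipaddress` module: picking an RFC1918 range that does not conflict with internal corporate IT namespaces. The corporate subnet values below are purely illustrative placeholders.

```python
import ipaddress

# Hypothetical ranges already used by corporate IT; illustrative values only.
CORPORATE_SUBNETS = [
    ipaddress.ip_network("10.0.0.0/16"),
    ipaddress.ip_network("10.12.0.0/16"),
    ipaddress.ip_network("192.168.0.0/16"),
]


def pick_free_rfc1918_subnet(candidates, used=CORPORATE_SUBNETS):
    """Return the first candidate subnet that is private (RFC1918) and does
    not overlap any already-used corporate range."""
    for cidr in candidates:
        net = ipaddress.ip_network(cidr)
        if not net.is_private:
            continue  # only RFC1918 space is acceptable for this use
        if all(not net.overlaps(u) for u in used):
            return net
    raise ValueError("no non-conflicting RFC1918 subnet available")
```

A subnet chosen this way could then be assigned as a "Subnet" in Foreman to back PXE/DHCP/TFTP for the remote systems.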
For awareness: @grafuls @kambiz-aghaiepour @abondvt89 @briordan1 @radez
Related work-in-progress patchset for the Potentially Shorter Term / Integration without Foreman Development approach mentioned above.
https://review.gerrithub.io/c/redhat-performance/quads/+/522359
Related RFE for upstream Foreman and IBM cloud compute resource plugin: https://projects.theforeman.org/issues/33489
Putting this back on the radar since QUADS 2.0 has been released; this will fit nicely into our providers
framework, which lets us seamlessly extend QUADS-managed resources into the public cloud. We'll need to purchase a Juniper SRX-tier device that supports native multi-vendor VPC, but we can tackle that internally; I've already started discussions to include this in a future capital purchase. cc: @natashba
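The modular library mentioned earlier (an `ibm_cloud.py` alongside the JIRA/Foreman/Juniper helpers) could take a shape like the sketch below. This is an assumption about the interface, not actual QUADS code; a real implementation would call the IBM Cloud API rather than the in-memory inventory used here, and every name is hypothetical.

```python
# ibm_cloud.py -- hypothetical sketch of a QUADS provider-style module.

class IBMCloudProvider:
    def __init__(self, reserved_inventory):
        # Map of hostname -> assignment state for instances we have reserved.
        self._inventory = {h: {"in_use": False} for h in reserved_inventory}

    def available(self):
        """Hostnames of reserved instances not currently assigned."""
        return sorted(h for h, s in self._inventory.items() if not s["in_use"])

    def checkout(self, hostname):
        """Mark a reserved instance as assigned to a QUADS cloud/assignment."""
        state = self._inventory[hostname]
        if state["in_use"]:
            raise RuntimeError(f"{hostname} already assigned")
        state["in_use"] = True

    def release(self, hostname):
        """Return an instance to the free pool; it stays reserved (no cost
        change), we simply stop scheduling work on it."""
        self._inventory[hostname]["in_use"] = False
```

The design choice worth noting is that `release` never tears down the reservation itself: reserved instances are paid for regardless, so the provider only tracks whether QUADS is currently scheduling work onto them.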