[SPEC] Remote service deployments
Overview
Operators may be in the following situations:
- They may have additional resources on another machine that a service instance could use
- They may want to delegate an instance to a cloud provider (AWS, GCP)
How can we support these setups in the manager?
Imagine a scenario where a GPU-heavy blueprint is requested, and the operator has multiple machines at their disposal, only one of which has GPUs. How could they ensure that the service ends up on that machine?
We'll need a remote VM deployer for each of AWS, GCP, Azure, and Akash.
The operator/Blueprint Manager will have the relevant XYZ_DEPLOYER_SECRET set to authenticate these deployments, and possibly a flag/setting that indicates instances should be deployed there.
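As a rough sketch of that configuration (all names below are placeholders; none of this exists in the manager yet), the settings could be read at startup along these lines:

```rust
use std::env;

/// Hypothetical deployment targets the operator can point an instance at.
#[derive(Debug, Clone, Copy)]
enum DeployTarget {
    Local,
    Aws,
    Gcp,
    Akash,
}

/// Hypothetical settings read by the Blueprint Manager at startup.
#[derive(Debug)]
struct RemoteDeployConfig {
    target: DeployTarget,
    /// The provider credential, i.e. whatever XYZ_DEPLOYER_SECRET holds.
    deployer_secret: Option<String>,
}

impl RemoteDeployConfig {
    /// Read the (assumed) env vars; fall back to local deployment when unset.
    fn from_env() -> Self {
        let target = match env::var("BLUEPRINT_DEPLOY_TARGET").as_deref() {
            Ok("aws") => DeployTarget::Aws,
            Ok("gcp") => DeployTarget::Gcp,
            Ok("akash") => DeployTarget::Akash,
            _ => DeployTarget::Local,
        };
        Self {
            target,
            deployer_secret: env::var("XYZ_DEPLOYER_SECRET").ok(),
        }
    }
}
```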
Order of operations
There are 2 orders of operations that I can think of.
- The first is that I want to literally spawn a cloud instance to run this service instance; the cloud VM doesn't exist up to this point. I use my API/secret keys to create it through the AWS, GCP, or Akash cloud SDKs (a sketch of this flow follows the list).
- The second is that I want to deploy into an existing system, maybe a bare-metal machine that is already running. What needs to be done here? Would the manager need to be able to SSH in?
We should break this task into these 2 flows and tackle them separately.
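To make the first flow concrete, here is a minimal sketch of what spawning a fresh VM could look like with a recent version of the aws-sdk-ec2 crate; the AMI ID and instance type are placeholders, and GCP/Akash would follow the same shape with their own SDKs:

```rust
use aws_sdk_ec2::{types::InstanceType, Client};

/// Sketch of flow 1: provision a brand-new VM for a blueprint instance.
/// Returns once the RunInstances call is accepted; waiting for the
/// instance to become reachable is left out.
async fn spawn_blueprint_vm() -> Result<(), aws_sdk_ec2::Error> {
    // Credentials/region come from the standard AWS env vars or profile,
    // i.e. whatever the operator's deployer secret ultimately maps to.
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    let resp = client
        .run_instances()
        .image_id("ami-xxxxxxxx")                // placeholder AMI
        .instance_type(InstanceType::G4dnXlarge) // GPU-capable type, as an example
        .min_count(1)
        .max_count(1)
        .send()
        .await?;

    // The response carries the new instance IDs; the manager would persist
    // them so it can track and later terminate the VM.
    println!("run_instances response: {resp:?}");
    Ok(())
}
```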
1 · Talk to the hypervisor directly (libvirt/KVM)
| What lives on the bare-metal box | How the Manager reaches it | When to choose | Rust toolbox |
|---|---|---|---|
| libvirtd (manages KVM/QEMU VMs). Enable its remote API and either TLS on TCP 16514, or an SSH tunnel (`qemu+ssh://host/system`). | No interactive shell required. The Manager sends libvirt RPCs (XML descriptions of domains, pools, networks). | You need raw VM control but don’t want the exposure of logging in with a general-purpose shell. | The `virt` (libvirt-rs) crate gives safe Rust bindings (docs.rs), e.g. `Connect::open("qemu+tls://bm01/system")` followed by defining the domain from XML (see the sketch below). |
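Expanding the one-liner from the table, a rough sketch of defining and booting a domain on the remote host, assuming the `virt` crate's `Connect`/`Domain` API (the host name and domain XML are placeholders):

```rust
use virt::connect::Connect;
use virt::domain::Domain;

/// Sketch: define and boot a blueprint VM on a remote bare-metal host
/// over libvirt. `bm01` and `domain_xml` are placeholders.
fn deploy_blueprint_vm(domain_xml: &str) -> Result<(), virt::error::Error> {
    // TLS-secured remote libvirt connection; no interactive shell involved.
    // An SSH tunnel would use "qemu+ssh://bm01/system" instead.
    let conn = Connect::open("qemu+tls://bm01/system")?;

    // Persistently define the domain from its XML description...
    let dom = Domain::define_xml(&conn, domain_xml)?;

    // ...then start it.
    dom.create()?;
    Ok(())
}
```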
Final notes on security & professionalism
- Separate networks – the mgmt interface of the metal host should sit on a private VLAN or WireGuard tunnel; never expose libvirtd or Firecracker sockets on the public WAN.
- Immutable logs – have the Blueprint Manager emit an on-chain hash of every VM definition for auditability (see the hashing sketch below).
- Rotate credentials automatically (SSH CA or short-lived mTLS certs issued by HashiCorp Vault).
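For the immutable-logs point, a minimal sketch of producing the hash to emit, assuming the sha2 crate (how the hash is then submitted on-chain is out of scope here):

```rust
use sha2::{Digest, Sha256};

/// Sketch: hash a VM's libvirt domain XML so the Blueprint Manager can
/// emit it on-chain for auditability.
fn vm_definition_hash(domain_xml: &str) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(domain_xml.as_bytes());
    hasher.finalize().into()
}
```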
With these patterns your Blueprint Manager can deterministically, safely and programmatically materialise Blueprints on any remote bare-metal box – all in Rust, with crates that are already production-hardened.