Request: configurable support for Kubernetes vs Consul
I totally get that refactoring to be agnostic to discovery mechanisms would be a significant time investment. On that front, I'd be happy to contribute the Kubernetes part if you decide to go that route.
With that said, it's fairly straightforward to use the Kubernetes API: an HTTP request is made to https://kubernetes.default.svc.cluster.local/api/v1/namespaces/<namespace>/endpoints?labelSelector=<name-defined-in-k8s-config>. The response deserializes into something like the following, assuming serde for serialization:
use serde::{Deserialize, Serialize};

// Top-level shape of the response (an EndpointsList).
#[derive(Serialize, Deserialize, Debug)]
struct K8sEndpoint {
    kind: String,
    #[serde(rename = "apiVersion")]
    api_version: String,
    metadata: Metadata,
    items: Vec<Items>,
}

// List-level metadata.
#[derive(Serialize, Deserialize, Debug)]
struct Metadata {
    #[serde(rename = "selfLink")]
    self_link: String,
    #[serde(rename = "resourceVersion")]
    resource_version: String,
}

// One Endpoints object in the list.
#[derive(Serialize, Deserialize, Debug)]
struct Items {
    metadata: Metadata1,
    subsets: Vec<Subsets>,
}

// Object-level metadata for a single Endpoints item.
#[derive(Serialize, Deserialize, Debug)]
struct Metadata1 {
    name: String,
    namespace: String,
    #[serde(rename = "selfLink")]
    self_link: String,
    uid: String,
    #[serde(rename = "resourceVersion")]
    resource_version: String,
    #[serde(rename = "creationTimestamp")]
    creation_timestamp: String,
    labels: Labels,
}

#[derive(Serialize, Deserialize, Debug)]
struct Labels {
    app: String,
}

// A subset pairs a set of ready addresses with the ports they serve on.
#[derive(Serialize, Deserialize, Debug)]
struct Subsets {
    addresses: Vec<Addresses>,
    ports: Vec<Ports>,
}

#[derive(Serialize, Deserialize, Debug)]
struct Addresses {
    ip: String,
    #[serde(rename = "nodeName")]
    node_name: String,
    #[serde(rename = "targetRef")]
    target_ref: TargetRef,
}

// Reference back to the pod behind an address.
#[derive(Serialize, Deserialize, Debug)]
struct TargetRef {
    kind: String,
    namespace: String,
    name: String,
    uid: String,
    #[serde(rename = "resourceVersion")]
    resource_version: String,
}

#[derive(Serialize, Deserialize, Debug)]
struct Ports {
    name: String,
    port: i64,
    protocol: String,
}
Retrieving the IP addresses is then as simple as:
let mut list_of_nodes = Vec::new();
for item in endpoints.items {
    for subset in item.subsets {
        for address in subset.addresses {
            list_of_nodes.push(address.ip);
        }
    }
}
Per #19, if leader election is desired, Kubernetes attaches a unique value called resourceVersion to each API object. Here, each Addresses entry has a TargetRef field, which in turn carries a resource_version field, so a leader can be chosen by taking the min/max of the resource versions. Kubernetes can also expose the pod name to the container via an environment variable, so any Toshi node can know its own Kubernetes identifier.
Abstraction is the next step. We want to get a working model functioning first, but after that point, boxing the discovery mechanisms into the abstraction should be a pretty straightforward process. ZooKeeper is probably another candidate for an implementation.
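One shape that abstraction could take, as a sketch only (the trait and type names below are illustrative, not anything in Toshi), is a small discovery trait that each backend implements, reusing the fetch_endpoints sketch from above:

// Each discovery backend (Kubernetes, Consul, ZooKeeper, ...) implements
// a common trait; the rest of the code only sees the trait.
trait NodeDiscovery {
    // Returns the addresses of the currently known Toshi nodes.
    fn nodes(&self) -> Result<Vec<String>, Box<dyn std::error::Error>>;
}

struct KubernetesDiscovery {
    namespace: String,
    label_selector: String,
}

impl NodeDiscovery for KubernetesDiscovery {
    fn nodes(&self) -> Result<Vec<String>, Box<dyn std::error::Error>> {
        let endpoints = fetch_endpoints(&self.namespace, &self.label_selector)?;
        Ok(endpoints
            .items
            .into_iter()
            .flat_map(|item| item.subsets)
            .flat_map(|subset| subset.addresses)
            .map(|address| address.ip)
            .collect())
    }
}

// Call sites hold a boxed trait object, so swapping in Consul later
// becomes a configuration choice:
// let discovery: Box<dyn NodeDiscovery> = Box::new(KubernetesDiscovery { ... });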