Add way to supply CA certificates for kafka/es connections in kubernetes
To support Kafka over SSL in k8s, we need to somehow provide a k8s ConfigMap containing certs to all worker and ex pods.
See:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap
As a workaround, for now, users can specify volumes in their jobs that contain the certificates, like this:

```json
"volumes": [
    {
        "name": "teraslice-connector-certs",
        "path": "/app/config/certs"
    }
],
```
And set their `ssl*location` properties in their connector definitions to match those paths.
Referencing a dynamically provisioned `{release name}-certs` ConfigMap could follow the pattern already used for the `{release name}-worker` configs:
https://github.com/terascope/teraslice/blob/a83eca00919408c06b955beffa23a9fde97441e0/packages/teraslice/lib/cluster/services/cluster/backends/kubernetes/k8sResource.js#L59-L63
Maybe the most desirable thing to do, instead of a ConfigMap, would be to use a Secret. Something along these lines, where it could contain any number of certs:
https://kubernetes.io/docs/concepts/storage/volumes/#example-pod-with-multiple-secrets-with-a-non-default-permission-mode-set
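A hedged sketch of how such a Secret might be wired into the pod resources, mirroring the existing ConfigMap handling in k8sResource.js. The secret name, mount path, default mode, and helper shape below are assumptions for illustration, not the actual implementation:

```typescript
// Hypothetical sketch: attach a "{release name}-certs" Secret to worker/ex pods,
// mirroring how the "{release name}-worker" ConfigMap volume is added today.
interface K8sVolume {
    name: string;
    secret?: { secretName: string; defaultMode?: number };
}

interface K8sVolumeMount {
    name: string;
    mountPath: string;
}

function addCertSecretVolume(
    volumes: K8sVolume[],
    volumeMounts: K8sVolumeMount[],
    releaseName: string
): void {
    // Secret name, mount path, and mode are assumptions for illustration.
    volumes.push({
        name: 'connector-certs',
        secret: { secretName: `${releaseName}-certs`, defaultMode: 0o444 },
    });
    volumeMounts.push({
        name: 'connector-certs',
        mountPath: '/app/config/certs',
    });
}
```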
This would also have a use case with S3, since the S3 Terafoundation connector has an option to enable SSL and specify a cert to use with the connector.
I think this issue will need to be implemented in each individual asset's API package, but there should probably be a shared base class in terafoundation that has common properties (all the SSL properties, for instance). But this would be a bit of a re-architecture that implies a fair amount of work in Teraslice and in Spaces, so we'll hold off on doing the broader change right now.
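If that shared base ever materializes, it might look roughly like this; the names here are hypothetical and only meant to illustrate the idea of common SSL properties living in terafoundation:

```typescript
// Hypothetical sketch only: common SSL-related properties shared by connectors.
interface SSLConnectorConfig {
    sslEnabled?: boolean;
    certLocation?: string;
}

abstract class BaseConnectorAPI<T extends SSLConnectorConfig> {
    constructor(protected readonly config: T) {}

    // Each asset's API package would still create its own client,
    // but shared SSL handling could live in this base.
    protected abstract createClient(): unknown;
}
```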
For the case of the S3 connector, and to temporarily provide a better way of supplying the CA certificate when necessary, perhaps we can add a multi-line string called `caCertificate` that contains the contents of the certificate, and remove `certLocation`.
So, this ...
```yaml
terafoundation:
    connectors:
        s3:
            s3_gen1:
                endpoint: "https://localhost:9000"
                accessKeyId: "yourId"
                secretAccessKey: "yourPassword"
                forcePathStyle: true
                sslEnabled: true
                certLocation: "/app/config/certs/rootca.pem"
```
would become this:
```yaml
terafoundation:
    connectors:
        s3:
            s3_gen1:
                endpoint: "https://localhost:9000"
                accessKeyId: "yourId"
                secretAccessKey: "yourPassword"
                forcePathStyle: true
                sslEnabled: true
                caCertificate: |
                    -----BEGIN CERTIFICATE-----
                    MIICGTCCAZ+gAwIBAgIQCeCTZaz32ci5PhwLBCou8zAKBggqhkjOPQQDAzBOMQsw
                    ...
                    DXZDjC5Ty3zfDBeWUA==
                    -----END CERTIFICATE-----
```
Supplying both `certLocation` and `caCertificate` should be an error. Docs should indicate `certLocation` as deprecated.
- Edit 1: Clarify the handling of conflicting properties.
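A minimal sketch of that conflicting-property check, assuming hypothetical property names rather than the actual terafoundation schema:

```typescript
// Hypothetical sketch: reject connector configs that set both properties.
interface S3CertConfig {
    sslEnabled?: boolean;
    certLocation?: string;   // deprecated
    caCertificate?: string;  // multi-line PEM contents
}

function validateCertConfig(config: S3CertConfig): void {
    if (config.certLocation && config.caCertificate) {
        throw new Error(
            'Only one of "certLocation" or "caCertificate" may be set; '
            + '"certLocation" is deprecated, prefer "caCertificate".'
        );
    }
}
```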
In addition to the S3-connector-specific addition of `caCertificate`, let's add a top-level property in terafoundation called `globalCaCertificate`, which is also a multi-line string meant to contain a certificate that will be used by all connectors that use SSL.
This means we need to change how we handle these certificates, including the file in `/etc/ssl`. We basically need to concatenate all of these together in the order shown below:
```
// caCertificate - connector specific
// globalCaCertificate
// read from file in /etc/ssl
```
If either `caCertificate` or `globalCaCertificate` is omitted, just omit it from the concatenated result. The file in `/etc/ssl` should always be present in the encoded cert given to the client. I think the client takes an array; it might be OK to add these as separate elements of that array, but please check what the documentation says about the ordering of that array.
Instead of using the default certificates from the operating system, we will use those built into Node.js at `tls.rootCertificates`.
```
// certLocation OR caCertificate. If both are provided it will throw an error
// global_ca_certificate - terafoundation uses snake case
// tls.rootCertificates
```
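A minimal sketch of how the combined CA list might be built and handed to an HTTPS agent, under the assumption of hypothetical config property names and wiring (the real connector code may differ):

```typescript
import { readFileSync } from 'node:fs';
import { rootCertificates } from 'node:tls';
import { Agent } from 'node:https';

// Hypothetical config shape; property names are assumptions for illustration.
interface CertOptions {
    certLocation?: string;        // deprecated path-based option
    caCertificate?: string;       // connector-specific multi-line PEM
    globalCaCertificate?: string; // top-level terafoundation PEM
}

// Build the CA list in the order described above, using the certificates
// bundled with Node.js (tls.rootCertificates) instead of the OS trust store.
// Assumes the certLocation/caCertificate conflict was already rejected.
function buildCAList(opts: CertOptions): string[] {
    const cas: string[] = [];
    const connectorCert = opts.caCertificate
        ?? (opts.certLocation ? readFileSync(opts.certLocation, 'utf8') : undefined);
    if (connectorCert) cas.push(connectorCert);
    if (opts.globalCaCertificate) cas.push(opts.globalCaCertificate);
    return [...cas, ...rootCertificates];
}

// An https.Agent (as used for an SSL-enabled S3 endpoint) accepts an array
// of CA certificates via its "ca" option.
const agent = new Agent({ ca: buildCAList({ caCertificate: '-----BEGIN CERTIFICATE-----\n...' }) });
```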
For now we are going to de-scope `globalCaCertificate`. It's not strictly needed, and implementing it was getting messy.
This has been completed for the `s3Connector` case; other cases remain unimplemented.