Running cloudsql-proxy as Kubernetes DaemonSet
I'd like to run the cloudsql-proxy container as a DaemonSet instead of a sidecar in my pods, because I have multiple (different) pods on a node, which all need to connect to a Cloud SQL instance. So instead of:
volumes:
- emptyDir: {}
  name: cloudsql-sockets
I use:
volumes:
- hostPath:
    path: /cloudsql
  name: cloudsql-sockets
This way, the other pods can simply mount the hostPath /cloudsql/ (read-only) to access the UNIX sockets.
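For illustration, a minimal sketch of what a consuming pod could look like (the pod and image names here are placeholders of mine):

# Illustrative sketch only; pod and image names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder name
spec:
  containers:
  - name: app
    image: my-app:latest  # placeholder image
    volumeMounts:
    - name: cloudsql-sockets
      mountPath: /cloudsql
      readOnly: true      # the app only needs to connect to the sockets
  volumes:
  - name: cloudsql-sockets
    hostPath:
      path: /cloudsql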
However, when I try to start the cloudsql-proxy container, it gives me this error:
Error syncing pod, skipping: failed to "StartContainer" for "cloudsql-proxy" with RunContainerError: "runContainer: Error response from daemon: mkdir /cloudsql: read-only file system"
According to the Kubernetes docs, when using hostPath, only root can write to it. So containers that want to write to the mounted hostPath should also run as root. Is this not the case for the cloudsql-proxy container?
One solution would be to use TCP sockets, but I prefer UNIX sockets.
May I ask why you prefer UNIX sockets over TCP sockets?
Here is how I would have done that:
- Create a sqlproxy pod (potentially serving multiple TCP ports/instances)
- Expose multiple ports from your pod (with the instance name in the port name, as seen below)
- Create one service per instance with the same port (or one service with multiple ports):
apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-service-INSTANCENAME
spec:
  ports:
  - name: sqlport
    port: 3306
    targetPort: sqlproxy-port-INSTANCENAME
  selector:
    app: sqlproxy
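For what it's worth, a rough sketch of what the matching pod could look like, with one named port per instance. The project, instance, and port names are placeholders of mine, and the image tag is borrowed from a later comment in this thread. Note that Kubernetes port names are limited to 15 lowercase characters, so instance-derived names may need abbreviating:

# Illustrative sketch only; instance and port names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: sqlproxy
  labels:
    app: sqlproxy
spec:
  containers:
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.25.0
    command: ["/cloud_sql_proxy"]
    args:
    - -instances=PROJECT:REGION:INSTANCE-A=tcp:0.0.0.0:3306,PROJECT:REGION:INSTANCE-B=tcp:0.0.0.0:3307
    ports:
    - name: sql-instance-a   # referenced by a Service's targetPort
      containerPort: 3306
    - name: sql-instance-b
      containerPort: 3307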
Then, simply connect to the SQL proxy of your choice:
mysql -h sqlproxy-service-INSTANCENAME ...
You end up with only one sqlproxy pod running in the cluster on one node (it doesn't matter which), and of course this kind of service is not accessible from the outside. You can scale the sqlproxy by increasing the number of replicas.
On the other hand, if you absolutely want to use UNIX sockets and the DaemonSet, then this documentation shows you how to run the container in privileged mode: http://kubernetes.io/docs/user-guide/security-context/. I'll try to get the documentation of HostPath updated to point to this URL.
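For reference, a minimal sketch of what that could look like on the proxy container (my reading of the linked page, not a tested configuration; the volume name is a placeholder):

# Sketch: letting the proxy container write sockets into the
# root-owned hostPath directory. Names are illustrative.
containers:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.25.0
  securityContext:
    privileged: true
    # Running as root (runAsUser: 0) may be sufficient instead of
    # full privileged mode.
  volumeMounts:
  - name: cloudsql-sockets
    mountPath: /cloudsql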
I prefer UNIX sockets because I don't like managing (duplicate) ports. However, the solution you suggest - using named ports and attaching a service to them - would solve this problem. Thanks, I will close this issue.
@apelisse I like your idea of having one service for all instances. Could you also post the YAML file for the pod? I'm currently trying to figure out how to expose different ports for the different instances ... thanks!
To me, running one service and exposing the ports over TCP means you are talking plain text inside your Kubernetes cluster to the cloudsql proxy service. If anyone were inside that network, they would be able to see your usernames/passwords as you send them to the cloudsql proxy service (the proxy then connects to your database over SSL).
I think using UNIX sockets could actually be a way to avoid sending plain-text traffic around inside your Kubernetes cluster. I just wanted to mention it here and see if that makes sense, or if it is not something worth worrying about.
Has anybody been successful at running it with UNIX sockets as a Kubernetes DaemonSet? It seems like a volume needs to be mounted on each pod and the socket needs to be placed in that volume.
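Something like the following untested sketch is what I have in mind; the project/instance names are placeholders, and the image tag is borrowed from the TCP example later in this thread:

# Untested sketch: a proxy DaemonSet that writes UNIX sockets into a
# hostPath directory; application pods on the same node mount that
# directory read-only. Instance names are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloudsql-proxy
spec:
  selector:
    matchLabels:
      name: cloudsql-proxy
  template:
    metadata:
      labels:
        name: cloudsql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.25.0
        command: ["/cloud_sql_proxy"]
        args:
        - -dir=/cloudsql                      # sockets are created in this directory
        - -instances=PROJECT:REGION:INSTANCE  # placeholder instance
        securityContext:
          runAsUser: 0   # root, so the proxy can write to the hostPath
        volumeMounts:
        - name: cloudsql-sockets
          mountPath: /cloudsql
      volumes:
      - name: cloudsql-sockets
        hostPath:
          path: /cloudsql
          type: DirectoryOrCreate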
Given the number of comments on this thread, I'm going to re-open it as a docs issue. I don't know how to run the proxy with DaemonSets off the top of my head and will work on adding an example, either here or in the examples directory. For now, though, there are some higher priorities, so this work will probably take a while.
@enocom This is how I'm doing it with TCP. Curious how to do it with Unix sockets.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-sql-proxy
  namespace: default
  labels:
    app: cloud-sql-proxy
spec:
  selector:
    matchLabels:
      name: cloud-sql-proxy
  template:
    metadata:
      labels:
        name: cloud-sql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: 'gcr.io/cloudsql-docker/gce-proxy:1.25.0'
        command:
        - /cloud_sql_proxy
        args:
        - -instances={address}=tcp:0.0.0.0:5432
        - -credential_file=/secrets/cloudsql/credentials
        ports:
        - name: cloudsql-port
          containerPort: 5432
          hostPort: 5432
        livenessProbe:
          tcpSocket:
            port: cloudsql-port
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          tcpSocket:
            port: cloudsql-port
          initialDelaySeconds: 5
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 120m
            memory: 150Mi
          requests:
            cpu: 80m
            memory: 100Mi
        volumeMounts:
        - name: secret-name
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: secret-name
        secret:
          secretName: secret-name
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: default
data:
  credentials: {base64-of-service-account-key}
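One note on consuming this: because the proxy is published on hostPort 5432 of every node, application pods would connect to their own node's IP rather than to a Service. A sketch of how a container could pick that address up via the downward API (the env names and image are placeholders of mine):

# Sketch: hand the node IP to the application so it can reach the
# per-node proxy on hostPort 5432. Names are illustrative.
containers:
- name: app
  image: my-app:latest   # placeholder image
  env:
  - name: DB_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP   # IP of the node this pod runs on
  - name: DB_PORT
    value: "5432"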
We've recently released a Kubernetes operator for the Proxy. So while it's not an exact match for the request here, I'd suggest looking at https://github.com/googlecloudplatform/cloud-sql-proxy-operator as a better option.