permission-manager
Helm Chart support
I can see the chart in the source, but it does not appear to be published anywhere. As a test I tried https://sighupio.github.io for your chart repo, with no luck.
Do you have plans to publish it and document it?
I'm bumping this thread as I would also like a follow-up on this. I can see that a Helm chart is present within the code.
I cloned the project and tried installing the chart (I defined the control plane endpoint), but it keeps returning the following error: "Could not install release Error: create: failed to create: Request entity too large: limit is 3145728"
This is odd because the files are nowhere near large enough to reach that limit.
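The 3145728-byte limit is 3 MiB, the Kubernetes API server's cap on a single request body. Helm stores every file in the chart directory inside the release object, so if the cloned tree contains extras not filtered by `.helmignore` (a `.git` directory, vendored assets, previously packaged `.tgz` files), the release can blow past the limit even though the templates themselves are tiny. A `.helmignore` in the chart directory, roughly like the sketch below (the patterns are illustrative, not taken from this project), usually fixes it; alternatively, run `helm package` on the chart and inspect the resulting `.tgz` to see exactly what is being included.

```
# .helmignore — patterns excluded when Helm packages the chart,
# keeping the stored release well under the 3 MiB request limit.
# These entries are examples; adjust to what is actually in the tree.
.git/
node_modules/
*.tgz
docs/
tests/
```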
As a side note, I was able to install and use permission-manager via the installation guide; I just thought it would be cleaner to use a Helm chart.
A dry run returns the following:
NAME: dev-permission-manager
LAST DEPLOYED: Thu Sep 2 09:27:11 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
USER-SUPPLIED VALUES:
config:
  basicAuthPassword: XXXXXXX
  clusterName: dev-v2
  controlePlaneAddress: XXXXXXX
replicaCount: 1

COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: false
  maxReplicas: 100
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
config:
  basicAuthPassword: XXXXX
  clusterName: dev-v2
  controlePlaneAddress: XXXXX
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: quay.io/sighup/permission-manager
  tag: v1.7.0-rc3
imagePullSecrets: []
ingress:
  annotations: null
  enabled: false
  hosts:
  - host: domain.com
    paths: []
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
tolerations: []
HOOKS:
---
# Source: permission-manager/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "dev-permission-manager-test-connection"
  labels:
    helm.sh/chart: permission-manager-1.7.0-rc3
    app.kubernetes.io/name: permission-manager
    app.kubernetes.io/instance: dev-permission-manager
    app.kubernetes.io/version: "1.7.0-rc3"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['dev-permission-manager:80']
  restartPolicy: Never
MANIFEST:
---
# Source: permission-manager/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev-permission-manager
  labels:
    helm.sh/chart: permission-manager-1.7.0-rc3
    app.kubernetes.io/name: permission-manager
    app.kubernetes.io/instance: dev-permission-manager
    app.kubernetes.io/version: "1.7.0-rc3"
    app.kubernetes.io/managed-by: Helm
---
# Source: permission-manager/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: dev-permission-manager
  labels:
    helm.sh/chart: permission-manager-1.7.0-rc3
    app.kubernetes.io/name: permission-manager
    app.kubernetes.io/instance: dev-permission-manager
    app.kubernetes.io/version: "1.7.0-rc3"
    app.kubernetes.io/managed-by: Helm
type: Opaque
stringData:
  PORT: "4000" # port where server is exposed
  CLUSTER_NAME: dev-v2
  CONTROL_PLANE_ADDRESS: XXXXXX
  BASIC_AUTH_PASSWORD: XXXXX
---
# Source: permission-manager/templates/crd.yml
apiVersion: "apiextensions.k8s.io/v1beta1"
kind: "CustomResourceDefinition"
metadata:
  name: "permissionmanagerusers.permissionmanager.user"
spec:
  group: "permissionmanager.user"
  version: "v1alpha1"
  scope: "Cluster"
  names:
    plural: "permissionmanagerusers"
    singular: "permissionmanageruser"
    kind: "Permissionmanageruser"
  validation:
    openAPIV3Schema:
      required: ["spec"]
      properties:
        spec:
          required: ["name"]
          properties:
            name:
              type: "string"
              minimum: 2
---
# Source: permission-manager/templates/ClusterRole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dev-permission-manager
  labels:
    helm.sh/chart: permission-manager-1.7.0-rc3
    app.kubernetes.io/name: permission-manager
    app.kubernetes.io/instance: dev-permission-manager
    app.kubernetes.io/version: "1.7.0-rc3"
    app.kubernetes.io/managed-by: Helm
rules:
  # Allow full management of all the Permission Manager resources
  - apiGroups: [ "permissionmanager.user" ]
    resources:
      - "*"
    verbs: [ "get", "list", "create", "update", "delete", "watch" ]
  # Allow full management of the RBAC resources
  - apiGroups:
      - "rbac.authorization.k8s.io"
    resources:
      - "clusterrolebindings"
      - "clusterroles"
      - "rolebindings"
      - "roles"
    verbs: [ "get", "list", "create", "update", "delete", "bind", "watch" ]
  - apiGroups: [""]
    resources:
      - "serviceaccounts"
      - "secrets"
    verbs: [ "get", "list", "create", "update", "delete", "watch" ]
  # Allow full management of certificates CSR, including their approval
  - apiGroups: [ "certificates.k8s.io" ]
    resources:
      - "certificatesigningrequests"
      - "certificatesigningrequests/approval"
    verbs: [ "get", "list", "create", "update", "delete", "watch" ]
  # Support legacy versions, before signerName was added
  # (see https://github.com/kubernetes/kubernetes/pull/88246)
  - apiGroups: [ "certificates.k8s.io" ]
    resources:
      - "signers"
    resourceNames:
      - "kubernetes.io/legacy-unknown"
      - "kubernetes.io/kube-apiserver-client"
    verbs: [ "approve" ]
  # Allow to get and list Namespaces
  - apiGroups: [ "" ]
    resources:
      - "namespaces"
    verbs: [ "get", "list" ]
---
# Source: permission-manager/templates/seed.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: template-namespaced-resources___operation
rules:
  - apiGroups:
      - "*"
    resources:
      - "*"
    verbs:
      - "*"
---
# Source: permission-manager/templates/seed.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: template-namespaced-resources___developer
rules:
  - apiGroups:
      - "*"
    resources:
      - "configmaps"
      - "endpoints"
      - "persistentvolumeclaims"
      - "pods"
      - "pods/log"
      - "pods/portforward"
      - "podtemplates"
      - "replicationcontrollers"
      - "resourcequotas"
      - "secrets"
      - "services"
      - "events"
      - "daemonsets"
      - "deployments"
      - "replicasets"
      - "ingresses"
      - "networkpolicies"
      - "poddisruptionbudgets"
    verbs:
      - "*"
---
# Source: permission-manager/templates/seed.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: template-cluster-resources___read-only
rules:
  - apiGroups:
      - "*"
    resources:
      - "componentstatuses"
      - "namespaces"
      - "nodes"
      - "persistentvolumes"
      - "mutatingwebhookconfigurations"
      - "validatingwebhookconfigurations"
      - "customresourcedefinitions"
      - "apiservices"
      - "tokenreviews"
      - "selfsubjectaccessreviews"
      - "selfsubjectrulesreviews"
      - "subjectaccessreviews"
      - "certificatesigningrequests"
      - "runtimeclasses"
      - "podsecuritypolicies"
      - "clusterrolebindings"
      - "clusterroles"
      - "priorityclasses"
      - "csidrivers"
      - "csinodes"
      - "storageclasses"
      - "volumeattachment"
    verbs: ["get", "list", "watch"]
---
# Source: permission-manager/templates/seed.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: template-cluster-resources___admin
rules:
  - apiGroups:
      - "*"
    resources:
      - "componentstatuses"
      - "namespaces"
      - "nodes"
      - "persistentvolumes"
      - "mutatingwebhookconfigurations"
      - "validatingwebhookconfigurations"
      - "customresourcedefinitions"
      - "apiservices"
      - "tokenreviews"
      - "selfsubjectaccessreviews"
      - "selfsubjectrulesreviews"
      - "subjectaccessreviews"
      - "certificatesigningrequests"
      - "runtimeclasses"
      - "podsecuritypolicies"
      - "clusterrolebindings"
      - "clusterroles"
      - "priorityclasses"
      - "csidrivers"
      - "csinodes"
      - "storageclasses"
      - "volumeattachment"
    verbs: ["*"]
---
# Source: permission-manager/templates/ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dev-permission-manager
roleRef:
  kind: ClusterRole
  name: dev-permission-manager
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: dev-permission-manager
    namespace: default
---
# Source: permission-manager/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: dev-permission-manager
  labels:
    helm.sh/chart: permission-manager-1.7.0-rc3
    app.kubernetes.io/name: permission-manager
    app.kubernetes.io/instance: dev-permission-manager
    app.kubernetes.io/version: "1.7.0-rc3"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 4000
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: permission-manager
    app.kubernetes.io/instance: dev-permission-manager
---
# Source: permission-manager/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-permission-manager
  labels:
    helm.sh/chart: permission-manager-1.7.0-rc3
    app.kubernetes.io/name: permission-manager
    app.kubernetes.io/instance: dev-permission-manager
    app.kubernetes.io/version: "1.7.0-rc3"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: permission-manager
      app.kubernetes.io/instance: dev-permission-manager
  template:
    metadata:
      labels:
        app.kubernetes.io/name: permission-manager
        app.kubernetes.io/instance: dev-permission-manager
    spec:
      serviceAccountName: dev-permission-manager
      securityContext:
        {}
      containers:
        - name: permission-manager
          securityContext:
            {}
          image: "quay.io/sighup/permission-manager:v1.7.0-rc3"
          imagePullPolicy: IfNotPresent
          envFrom:
            - secretRef:
                name: dev-permission-manager
          ports:
            - name: http
              containerPort: 4000
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 4000
          readinessProbe:
            tcpSocket:
              port: 4000
          resources:
            {}
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=permission-manager,app.kubernetes.io/instance=dev-permission-manager" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
@mcgarrah @Joseph94m I had the same problem, so I added the chart to my own Helm repo: https://devopstales.github.io/helm-charts
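For anyone landing here later, installing from a third-party repo like the one above generally looks like the sketch below. The chart name inside that repo is an assumption; check what `helm search repo` actually returns. The `config.*` keys match the values shown in the dry-run output earlier in this thread (including the chart's `controlePlaneAddress` spelling), and the placeholders in angle brackets must be filled in for your cluster.

```
helm repo add devopstales https://devopstales.github.io/helm-charts
helm repo update
helm search repo devopstales
# Chart name below is illustrative; use whatever the search returns.
helm install permission-manager devopstales/permission-manager \
  --set config.clusterName=dev-v2 \
  --set config.controlePlaneAddress=https://<api-server>:6443 \
  --set config.basicAuthPassword=<password>
```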