jitsi-deployment
Installation Issue
I cloned the code with $ git clone https://github.com/hpi-schul-cloud/jitsi-deployment.git
and ran $ kustomize build . | kubectl apply -f -
in overlays/development, but I get errors like this:
"Error: accumulating resources: accumulateFile "accumulating resources from 'ops/': '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/overlays/development/ops' must resolve to a file", accumulateDirector: "recursed accumulation of path '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/overlays/development/ops': accumulating resources: accumulateFile "accumulating resources from '../../../base/ops': '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/base/ops' must resolve to a file", accumulateDirector: "recursed accumulation of path '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/base/ops': accumulating resources: accumulateFile \"accumulating resources from 'monitoring/': '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/base/ops/monitoring' must resolve to a file\", accumulateDirector: \"recursed accumulation of path '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/base/ops/monitoring': accumulating resources: accumulateFile \\\"accumulating resources from 'https://github.com/coreos/kube-prometheus?ref=master': YAML file [https://github.com/coreos/kube-prometheus?ref=master] encounters a format error.\\\\nerror converting YAML to JSON: yaml: line 31: mapping values are not allowed in this context\\\\n\\\", accumulateDirector: \\\"couldn't make target for path '/tmp/kustomize-756266654/repo': unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory '/tmp/kustomize-756266654/repo'\\\"\"""
error: no objects passed to apply
k8smaster@k8smaster:~/Documents/hp"
Where is the problem? And thank you for sharing this.
It was a warning about the Kustomize version. I need to install kustomize/v3.5.4.
I read the requirements part carelessly. Thank you.
Hello again. I read up on Kustomize and installed it, but I'm still getting some errors: the $ kustomize build . | kubectl apply -f - command still fails here.
And some containers remain in a pending state.
Additionally, I was wondering: could I run jicofo, jvb, prosody, and web together in one pod and scale that, with all replicas behind a proxy? That would be simpler than your current setup. Thank you for your reply.
Change the base64 placeholders to your real base64-encoded tokens in:
overlays/production/ops/bbb-basic-auth-secret.yaml
base/jitsi/jitsi-secret.yaml
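The secret values in these files must be base64-encoded. One way to produce them is a small Python sketch like the following (the key names follow jitsi-secret.yaml; the plaintext values are invented placeholders):

```python
import base64

# Invented placeholder secrets -- substitute your own values.
plaintext_secrets = {
    "JICOFO_COMPONENT_SECRET": "s3cr3t-component",
    "JICOFO_AUTH_PASSWORD": "s3cr3t-auth",
    "JVB_AUTH_PASSWORD": "s3cr3t-jvb",
}

# Kubernetes Secret `data` fields must hold base64-encoded strings.
for key, value in plaintext_secrets.items():
    encoded = base64.b64encode(value.encode()).decode()
    print(f"{key}: {encoded}")
```

Paste the printed KEY: value pairs into the data: section of the secret files; `echo -n 'value' | base64` on the command line gives the same result.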
@congthang1 is right, you need to change the secrets to your own secrets; the placeholder values will not work. By the way, everything regarding "bbb" (which means Big Blue Button) is related to a different project we have deployed; we are only using this setup to monitor the Big Blue Button instances as well. Therefore you can safely skip/delete the BBB-related YAMLs. I recommend forking this repo.
Yes, I know it's for BBB, but it will help the deployment succeed :)
Hello @simoncolincap @janrenz @mvakert @wolfm89
Everything goes well apart from the error below at deployment time.
For your information, I have also replaced the placeholders for the base64 credentials in
/overlays/production/ops/bbb-basic-auth-secret.yaml
/base/jitsi/jitsi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: jitsi
  name: jitsi-config
type: Opaque
data:
  JICOFO_COMPONENT_SECRET:6290f3e910afc1b0ded51a9ff5825e08
  JICOFO_AUTH_PASSWORD:b8cb3a744c3a83b9107f6e4115c0bad8
  JVB_AUTH_PASSWORD:119d7ffe32bb2964337068852d6ad078
  JVB_STUN_SERVERS:meet-jit-si-turnrelay.jitsi.net:443
  TURNCREDENTIALS_SECRET:119d7ffe32bb2964337068852d6ad675
  TURN_HOST:52.172.170.79
  STUN_PORT:4000
  TURN_PORT:4001
  TURNS_PORT:4002
error: error validating "STDIN": error validating data: ValidationError(Secret.data): invalid type for io.k8s.api.core.v1.Secret.data: got "string", expected "map"; if you choose to ignore these errors, turn validation off with --validate=false
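For reference, this validation error appears when YAML is missing the space after each colon: without it, the lines under data: are parsed as one plain scalar string rather than a map, which is exactly "got "string", expected "map"". In addition, every value under data: must be base64-encoded; Kubernetes also accepts a stringData: section that takes plaintext and encodes it for you. A hedged sketch with invented placeholder values (not real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: jitsi
  name: jitsi-config
type: Opaque
stringData:  # stringData accepts plaintext; Kubernetes base64-encodes it on write
  JICOFO_COMPONENT_SECRET: "replace-me"
  JICOFO_AUTH_PASSWORD: "replace-me"
  JVB_AUTH_PASSWORD: "replace-me"
  JVB_STUN_SERVERS: "meet-jit-si-turnrelay.jitsi.net:443"
```

Note the space after every colon and the quoting of values that themselves contain a colon.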
A swift response would be a lifesaver.
Thanks in advance!
@sunilkumarjena21 Did you successfully deploy this setup? Can you share your deployment experience?
We have this running in the IONOS Cloud. Our DevOps team might not check issues here; I will ask them to do so.
I am also having the same issue; the full log is as follows. Unfortunately I'm new to Kubernetes etc., which is making my life difficult.
namespace/jitsi unchanged
configmap/jvb-entrypoint unchanged
configmap/jvb-shutdown unchanged
configmap/prosody unchanged
configmap/web unchanged
service/shard-0-prosody unchanged
service/shard-1-prosody unchanged
service/web unchanged
deployment.apps/shard-0-jicofo unchanged
deployment.apps/shard-0-prosody unchanged
deployment.apps/shard-0-web unchanged
deployment.apps/shard-1-jicofo unchanged
deployment.apps/shard-1-prosody unchanged
deployment.apps/shard-1-web unchanged
statefulset.apps/shard-0-jvb unchanged
statefulset.apps/shard-1-jvb unchanged
horizontalpodautoscaler.autoscaling/shard-0-jvb-hpa configured
horizontalpodautoscaler.autoscaling/shard-1-jvb-hpa configured
unable to recognize "STDIN": no matches for kind "DecoratorController" in version "metacontroller.k8s.io/v1alpha1"
Error from server (BadRequest): error when creating "STDIN": Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data: decode base64: illegal base64 data at input byte 0, error found in #10 byte of ...|ret\u003e","JICOFO_C|..., bigger context ...|"JICOFO_AUTH_PASSWORD":"\u003cbase64-secret\u003e","JICOFO_COMPONENT_SECRET":"\u003cbase64-secret\u0|...
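The "decode base64: illegal base64 data at input byte 0" part of this log shows that the &lt;base64-secret&gt; placeholders were never replaced: kubectl is trying to base64-decode the literal placeholder text. A small Python check (a hypothetical helper, not part of this repo) can verify candidate values before applying:

```python
import base64
import binascii

def is_valid_b64(value: str) -> bool:
    """Return True if `value` decodes as strict base64."""
    try:
        base64.b64decode(value, validate=True)
        return True
    except binascii.Error:
        return False

# The unreplaced placeholder from the error log is not valid base64:
print(is_valid_b64("<base64-secret>"))                      # False
# A properly encoded value passes:
print(is_valid_b64(base64.b64encode(b"hunter2").decode()))  # True
```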
kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
jitsi shard-0-jicofo-7cd859bd58-lmvv9 0/1 Pending 0 14h
jitsi shard-0-jvb-0 0/2 Pending 0 14h
jitsi shard-0-prosody-545bd4c886-ms5xj 0/1 Pending 0 14h
jitsi shard-0-web-594487f6b-n64c8 0/1 Pending 0 14h
jitsi shard-1-jicofo-86bdcbb46c-zn9r9 0/1 Pending 0 14h
jitsi shard-1-jvb-0 0/2 Pending 0 14h
jitsi shard-1-prosody-56d78f454b-5bds2 0/1 Pending 0 14h
jitsi shard-1-web-5c95899656-zj5fj 0/1 Pending 0 14h
kube-system coredns-74ff55c5b-2tmgp 1/1 Running 2 14h
kube-system etcd-minikube 1/1 Running 2 14h
kube-system kube-apiserver-minikube 1/1 Running 2 14h
kube-system kube-controller-manager-minikube 1/1 Running 2 14h
kube-system kube-proxy-jwm4s 1/1 Running 2 14h
kube-system kube-scheduler-minikube 1/1 Running 2 14h
kube-system storage-provisioner 1/1 Running 5 14h