networkingana

Results: 20 comments of networkingana

I have the same issue:
```
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=192.168.139.112 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class com.payten.dar.Main local:///opt/spark-jars/DataAnalyticsReporting.jar '--kafka 10.99.39.20:9092'
```
...

I'm wondering if it's maybe the same issue. I have written a gluster exporter which executes `gluster volume status`, for example, but I cannot access this command from my sidecar container. Any help?...
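Since the gluster CLI typically exists only in the main glusterd container, one workaround is to have the sidecar drive it through the Kubernetes API instead of invoking the binary directly. A minimal sketch, assuming hypothetical pod/container names (`glusterfs-server-0`, `glusterd`) and a service account allowed to exec; the helper only assembles the `kubectl exec` command line:

```shell
#!/bin/sh
# Builds the kubectl exec command a sidecar could run to execute the
# gluster CLI inside the main container of the same pod.
# Pod and container names are hypothetical placeholders.
build_gluster_cmd() {
  pod="$1"
  container="$2"
  printf 'kubectl exec %s -c %s -- gluster volume status\n' "$pod" "$container"
}

build_gluster_cmd glusterfs-server-0 glusterd
# prints: kubectl exec glusterfs-server-0 -c glusterd -- gluster volume status
```

Other common alternatives are setting `shareProcessNamespace: true` on the pod or baking the gluster client into the exporter image; which one fits depends on cluster policy.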

Hi @adejanovski, thank you for your answer. I'm not sure I understand correctly. I'm aware of the prefix setting for the k8ssandra-operator, but the backup that was created with k8ssandra 1.5.1...

This is my medusa config in K8ssandraCluster:
```
medusa:
  storageProperties:
    bucketName: k8ssandra-medusa
    concurrentTransfers: 1
    host: minio.minio.svc.cluster.local
    maxBackupAge: 0
    maxBackupCount: 0
    multiPartUploadThreshold: 104857600
    port: 9000
    prefix: k8ssandra
    secure: false
    storageProvider: s3_compatible
```
...

I edited the K8ssandraCluster CR and removed the prefix from there, then tried a medusa sync task, but it seems the config was not applied. How can I be sure that the configuration...
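One way to confirm whether the operator actually propagated the change is to compare the CR spec against the config rendered inside the medusa container. A hedged sketch; the cluster name, namespace, pod name, and config-file path below are assumptions, not taken from this thread, so adjust them to the actual cluster:

```shell
#!/bin/sh
# Prints the inspection commands rather than running them, so the sketch
# stays copy-paste friendly; all resource names are placeholders.
print_medusa_checks() {
  cat <<'EOF'
kubectl get k8ssandracluster my-cluster -n k8ssandra -o jsonpath='{.spec.medusa.storageProperties.prefix}'
kubectl exec my-cluster-dc1-default-sts-0 -n k8ssandra -c medusa -- cat /etc/medusa/medusa.ini
EOF
}

print_medusa_checks
```

If the spec and the rendered file disagree, a rolling restart of the pods (or checking the operator logs for reconcile errors) is usually the next step.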

My config now looks like this:
```
medusa:
  storageProperties:
    bucketName: k8ssandra-medusa
    concurrentTransfers: 1
    host: minio.minio.svc.cluster.local
    maxBackupAge: 0
    maxBackupCount: 0
    multiPartUploadThreshold: 104857600
    port: 9000
    prefix: ""
    secure: false
    storageProvider: s3_compatible
    storageSecretRef:
```
...

UPDATE: Now my configuration looks like this:
```
medusa:
  storageProperties:
    bucketName: k8ssandra-medusa
    concurrentTransfers: 1
    host: minio.minio.svc.cluster.local
    maxBackupAge: 0
    maxBackupCount: 0
    multiPartUploadThreshold: 104857600
    port: 9000
    prefix: ' '
    secure: false
    storageProvider:
```
...
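Note that `prefix: ""` and `prefix: ' '` are not equivalent: an empty string means "no prefix", while a single space is a non-empty prefix and ends up embedded in the object keys. A toy sketch of how a prefix typically composes into object-store paths (illustrative only; Medusa's actual key layout is an assumption here):

```shell
#!/bin/sh
# Compose a hypothetical backup object key from a prefix; an empty prefix
# is skipped, but any non-empty string (even a lone space) is prepended.
backup_key() {
  prefix="$1"; node="$2"; backup="$3"
  if [ -n "$prefix" ]; then
    printf '%s/%s/%s\n' "$prefix" "$node" "$backup"
  else
    printf '%s/%s\n' "$node" "$backup"
  fi
}

backup_key 'k8ssandra' dc1-node0 backup-2023   # prints: k8ssandra/dc1-node0/backup-2023
backup_key '' dc1-node0 backup-2023            # prints: dc1-node0/backup-2023
backup_key ' ' dc1-node0 backup-2023           # prints: " /dc1-node0/backup-2023" (space kept)
```

So a space-prefix creates a different, space-named "directory" in the bucket rather than restoring the unprefixed layout.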

I ran into an issue with the restore process: 4/5 finished successfully according to the logs from the medusa-restore container, but rack 1 has an error and the pod is...

So I decided to upgrade from k8ssandra to k8ssandra-operator; that will probably help with this issue. Can I somehow stop the current restore, even if all data is gone?

I migrated the source cluster to k8ssandra-operator, and this is the result from the restore:
```
INFO  [SSTableBatchOpen:1] 2023-08-18 21:21:50,473 SSTableReaderBuilder.java:351 - Opening /var/lib/cassandra/data/system/repairs-a3d277d1cfaf36f5a2a738d5eea9ad6a/nb-19-big (1.875KiB)
INFO  [SSTableBatchOpen:1] 2023-08-18 21:21:50,474 SSTableReaderBuilder.java:351
```
...