Louis Koo

Results 132 comments of Louis Koo

https://github.com/argoproj/argo-workflows/blob/5aac5a8f61f4e8273d04509dffe7d80123ff67f5/workflow/controller/taskresult.go#L67 If the pod's status is Error, I think the task result should be considered completed.
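To make the argument concrete, here is a minimal sketch (not the actual controller code at the linked line; the type and function names are hypothetical) of the rule being proposed: a pod in any terminal phase, including Failed/Error, can produce no further task results, so its task result should be marked completed.

```go
package main

import "fmt"

// PodPhase mirrors the Kubernetes pod phase strings.
type PodPhase string

const (
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed" // an errored pod reports phase Failed
	PodRunning   PodPhase = "Running"
)

// taskResultCompleted is a hypothetical helper expressing the proposal:
// once a pod reaches a terminal phase it cannot emit more task results,
// so the controller should treat the task result as completed.
func taskResultCompleted(phase PodPhase) bool {
	switch phase {
	case PodSucceeded, PodFailed:
		return true
	default:
		return false
	}
}

func main() {
	// A failed (errored) pod: its task result is final.
	fmt.Println(taskResultCompleted(PodFailed)) // prints "true"
	// A running pod may still report results.
	fmt.Println(taskResultCompleted(PodRunning)) // prints "false"
}
```

Without such a rule, a workflow whose pod errored can wait forever on a task result that will never arrive.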

https://github.com/argoproj/argo-workflows/pull/13332 It cannot fix the issue when upgrading Argo from v3.4.9 to v3.5.8: the pod status is Completed, but under the workflow the task's taskResultsCompletionStatus is...

Another issue: the pod is Completed, but the task's status in the workflow is `false`, and the workflow's status is Running:

```
taskResultsCompletionStatus:
  prod--prod--lt-filter-1-1-95--2692e7e1-30e5-44a5-977a-d5c32dmh5-2722797653: false
root@10-16-10-122:/home/devops# kubectl get pods...
```

```
S:disk/by-dname/sda
S:disk/by-id/scsi-35000cca27a410dac
S:disk/by-id/scsi-SHGST_HUH721212AL5200_2AH4T2TY
S:disk/by-id/wwn-0x5000cca27a410dac
S:disk/by-path/pci-0000:61:00.0-sas-0x5000cca27a410dad-lun-0
W:7054
I:3612
E:ID_BUS=scsi
E:ID_FS_TYPE=
E:ID_MODEL=HUH721212AL5200
E:ID_MODEL_ENC=HUH721212AL5200\x20
E:ID_PATH=pci-0000:61:00.0-sas-0x5000cca27a410dad-lun-0
E:ID_PATH_TAG=pci-0000_61_00_0-sas-0x5000cca27a410dad-lun-0
E:ID_REVISION=A3S0
E:ID_SCSI=1
E:ID_SCSI_INQUIRY=1
E:ID_SCSI_SERIAL=2AH4T2TY
E:ID_SERIAL=35000cca27a410dac
E:ID_SERIAL_SHORT=5000cca27a410dac
E:ID_TYPE=disk
E:ID_VENDOR=HGST
E:ID_VENDOR_ENC=HGST\x20\x20\x20\x20
E:ID_WWN=0x5000cca27a410dac
E:ID_WWN_WITH_EXTENSION=0x5000cca27a410dac
E:MPATH_SBIN_PATH=/sbin
E:SCSI_IDENT_LUN_NAA_REG=5000cca27a410dac
E:SCSI_IDENT_PORT_NAA_REG=5000cca27a410dad
E:SCSI_IDENT_PORT_RELATIVE=1
E:SCSI_IDENT_SERIAL=2AH4T2TY
...
```

@vadlakiran you need to deploy an NTP server to keep the clocks in sync.

For example: https://github.com/fabianlee/docker-chrony-alpine/blob/main/k8s-chrony-alpine.yaml

Is there any new progress? @jameshearttech @thotz @divaspathak @subhamkrai @parth-gr

@travisn In an NVMe cluster, the metadata store and data store use the same device; but in an HDD cluster, we want to partition one or more NVMe devices as the metadata...

@travisn Yes, we want to use multiple metadataDevices. For example, we have two NVMe devices and 6 HDDs, and we need to partition each NVMe device into three partitions as the...
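A sketch of the layout we mean, using Rook's per-device `config.metadataDevice` option (node, device, and partition names here are hypothetical; the point is mapping three HDDs to three partitions of each NVMe device):

```yaml
# Fragment of a Rook CephCluster storage spec (illustrative only).
storage:
  useAllNodes: false
  nodes:
    - name: "node-1"
      devices:
        # Three HDDs share nvme0n1, one partition each for metadata (RocksDB/WAL).
        - name: "sda"
          config:
            metadataDevice: "nvme0n1p1"
        - name: "sdb"
          config:
            metadataDevice: "nvme0n1p2"
        - name: "sdc"
          config:
            metadataDevice: "nvme0n1p3"
        # The other three HDDs share nvme1n1 the same way.
        - name: "sdd"
          config:
            metadataDevice: "nvme1n1p1"
        - name: "sde"
          config:
            metadataDevice: "nvme1n1p2"
        - name: "sdf"
          config:
            metadataDevice: "nvme1n1p3"
```

This keeps each HDD OSD's metadata on fast flash while the two NVMe devices are shared evenly across the six data disks.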