glusterd2
Validation for cluster.brick-multiplex cluster option.
Updates #1367
Add validation for setting the cluster.brick-multiplex cluster option. Discard all garbage strings and allow only "on", "yes", "true", "enable", "1" or "off", "no", "false", "disable", "0".
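The accepted values above could be checked with a simple string validator. A minimal sketch in Go (the function name and error wording here are hypothetical, not glusterd2's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// validateBoolString accepts only the boolean-like strings listed above,
// case-insensitively (hypothetical helper; the real validator may differ).
func validateBoolString(value string) error {
	switch strings.ToLower(value) {
	case "on", "yes", "true", "enable", "1",
		"off", "no", "false", "disable", "0":
		return nil
	}
	return fmt.Errorf("invalid value %q: expected on/yes/true/enable/1 or off/no/false/disable/0", value)
}

func main() {
	fmt.Println(validateBoolString("on"))      // <nil>
	fmt.Println(validateBoolString("garbage")) // non-nil error
}
```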
Signed-off-by: Vishal Pandey [email protected]
Can one of the admins verify this patch?
add to whitelist
@atinmu The validations are happening through that infra; the only difference is that we call RegisterClusterOpValidationFunc() to register these validation functions. We could add the validations directly in clusterOptMap, but since we already have a method for registering validation functions, I'd rather keep the framework as generalised as possible and use the functions already available to register the option validator. Even if we added the validation functions in clusterOptMap, their structure would have to be the same as what's happening now.
@atinmu As per our discussion in the morning, I tried using ValidateBool(), but it's not possible: the validation function for cluster options is expected to take string arguments, while ValidateBool() takes arguments of different types from what the cluster-option validation function expects.
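For illustration, one hypothetical way around such a signature mismatch is a small adapter that parses the string first and then delegates to a bool-typed validator. The type name and both signatures below are assumptions for the sketch, not glusterd2's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed shape of a cluster-option validator: string value in, error out.
type clusterOptValidator func(value string) error

// adaptBool wraps a validator that wants a bool into the string-based
// signature the cluster-option framework expects (hypothetical adapter).
func adaptBool(validate func(bool) error) clusterOptValidator {
	return func(value string) error {
		var b bool
		switch strings.ToLower(value) {
		case "on", "yes", "true", "enable", "1":
			b = true
		case "off", "no", "false", "disable", "0":
			b = false
		default:
			return fmt.Errorf("not a boolean value: %q", value)
		}
		return validate(b)
	}
}

func main() {
	// Trivial bool validator, just to exercise the adapter.
	v := adaptBool(func(b bool) error { return nil })
	fmt.Println(v("enable")) // <nil>
	fmt.Println(v("maybe"))  // not a boolean value: "maybe"
}
```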
@vpandey-RH Can you please address the comment?
@atinmu The shd test is failing because max-bricks-per-process is still not merged. In all the shd tests I kill one of the bricks of the volume, so if brick multiplexing is on, killing one brick takes all bricks offline, and that's why the CI fails with "transport endpoint is not connected".
@vpandey-RH You should be testing shd in a multi-node-cluster-like environment; running a replica config on a single node isn't ideal, since such behaviour can't be tested there. Agree?
@vpandey-RH CI still fails.
retest this please
11:40:11 --- FAIL: TestVolume/Statedump (1.03s)
11:40:11 require.go:157:
11:40:11 Error Trace: volume_ops_test.go:530
11:40:11 Error: Not equal:
11:40:11 expected: 2
11:40:11 actual : 8
11:40:11 Test: TestVolume/Statedump
11:40:11 --- PASS: TestVolume/Stop (0.66s)
11:40:11 --- PASS: TestVolume/List (0.04s)
11:40:11 --- PASS: TestVolume/Info (0.01s)
11:40:11 --- PASS: TestVolume/Edit (0.05s)
11:40:11 --- PASS: TestVolume/VolumeFlags (0.71s)
11:40:11 --- PASS: TestVolume/Delete (0.02s)
11:40:11 --- PASS: TestVolume/Disperse (0.47s)
11:40:11 --- PASS: TestVolume/DisperseMount (0.14s)
11:40:11 --- PASS: TestVolume/DisperseDelete (0.26s)
11:40:11 --- PASS: TestVolume/testShdOnVolumeStartAndStop (1.90s)
11:40:11 --- PASS: TestVolume/testArbiterVolumeCreate (0.88s)
11:40:11 --- FAIL: TestVolume/SelfHeal (0.62s)
11:40:11 require.go:765:
11:40:11 Error Trace: glustershd_test.go:94
11:40:11 utils_test.go:33
11:40:11 Error: Expected nil, but got: &os.PathError{Op:"open", Path:"/tmp/gd2_func_test/TestVolume/SelfHeal/mnt020307401/file1.txt", Err:0x6b}
11:40:11 Test: TestVolume/SelfHeal
11:40:11 Messages: failed to open file: open /tmp/gd2_func_test/TestVolume/SelfHeal/mnt020307401/file1.txt: transport endpoint is not connected
11:40:11 --- FAIL: TestVolume/GranularEntryHeal (0.01s)
11:40:11 require.go:765:
11:40:11 Error Trace: glustershd_test.go:195
11:40:11 utils_test.go:33
11:40:11 Error: Expected nil, but got: &errors.errorString{s:"volume already exists"}
11:40:11 Test: TestVolume/GranularEntryHeal
11:40:11 --- FAIL: TestVolume/SelfHeal#01 (0.01s)
11:40:11 require.go:765:
11:40:11 Error Trace: glustershd_test.go:53
11:40:11 utils_test.go:33
11:40:11 Error: Expected nil, but got: &errors.errorString{s:"volume already exists"}
11:40:11 Test: TestVolume/SelfHeal#01
11:40:11 --- FAIL: TestVolume/SplitBrainOperations (0.49s)
11:40:11 require.go:347:
11:40:11 Error Trace: glustershd_test.go:309
11:40:11 utils_test.go:33
11:40:11 Error: Should be false
11:40:11 Test: TestVolume/SplitBrainOperations
11:40:11 Messages: glustershd is still running
11:40:11 --- PASS: TestVolume/VolumeProfile (1.04s)
11:40:11 === RUN TestVolumeOptions
@vpandey-RH CI failure PTAL
@vpandey-RH any update on this PR?
@atinmu Still working on this. Stuck in an issue regarding local mount by glfsheal binary because of which the self heal tests are failing.
@vpandey-RH Did we manage to figure out the root cause?
@atinmu No. I have asked @aravindavk for some help.