
test: ./tests/bugs/posix/bug-1651445.t is failing while running test suite

Open mohit84 opened this issue 2 years ago • 5 comments

The test ./tests/bugs/posix/bug-1651445.t fails consistently while running the test suite. It fails after the brick starts returning an ENOSPC error: even after cleanup, the attempt to create a file still fails, because the disk_space_full flag is only reset every 5s by the posix_ctx_disk_thread_proc thread.

Solution: After cleaning up data, wait 5s for the flag to reset. The test case originally did this, but the wait was removed by patch #3637. Fixes: #3695 Signed-off-by: Mohit Agrawal [email protected]

Change-Id: Ifa0310ba9266651557e29480f5ea476016726e41
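The timing issue can be reproduced without gluster at all. The sketch below is a minimal, self-contained simulation (the file name and variables are hypothetical, not from the actual test): a background job stands in for posix_ctx_disk_thread_proc and clears a "disk_space_full" marker after its 5s cycle, so a write attempted immediately after cleanup still sees the flag, while one attempted after a full cycle does not.

```shell
# Minimal simulation of the race; no gluster required.
flag="/tmp/disk_space_full.$$"
touch "$flag"                       # brick still believes it is out of space
( sleep 5; rm -f "$flag" ) &        # periodic disk-space checker resets the flag

before="ok"
[ -e "$flag" ] && before="ENOSPC"   # create right after cleanup -> still fails

sleep 6                             # the fix: wait > 5s for one checker cycle

after="ENOSPC"
[ -e "$flag" ] || after="ok"        # create after the reset -> succeeds
wait
echo "before=$before after=$after"
```

This mirrors the proposed fix: the test script simply has to sleep past one checker interval after deleting the large file before attempting the next create.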

mohit84 avatar Aug 09 '22 02:08 mohit84

/run regression

mohit84 avatar Aug 09 '22 02:08 mohit84

It did not work for me:

[ykaul@ykaul glusterfs]$ sudo prove -vf tests/bugs/posix/bug-1651445.t
tests/bugs/posix/bug-1651445.t .. 
1..19
losetup: /d/dev/loop*: failed to use device: No such device
losetup: /d/dev/loop*: failed to use device: No such device
ok   1 [   2168/      8] <   9> 'verify_lvm_version'
ok   2 [      8/   1044] <  10> 'glusterd'
ok   3 [      9/      9] <  11> 'pidof glusterd'
ok   4 [      9/      1] <  12> 'init_n_bricks 3'
ok   5 [      9/   2210] <  13> 'setup_lvm 3'
ok   6 [      8/    183] <  15> 'gluster --mode=script --wignore volume create patchy replica 3 ykaul.tlv.redhat.com:/d/backends/1/patchy_snap_ykaul_mnt ykaul.tlv.redhat.com:/d/backends/2/patchy_snap_ykaul_mnt ykaul.tlv.redhat.com:/d/backends/3/patchy_snap_ykaul_mnt'
ok   7 [     17/   1263] <  16> 'gluster --mode=script --wignore volume start patchy'
ok   8 [      9/     42] <  18> 'glusterfs --volfile-id=/patchy --volfile-server=ykaul.tlv.redhat.com /mnt/glusterfs/0'
ok   9 [     11/    136] <  21> 'gluster --mode=script --wignore volume set patchy storage.reserve 40MB'
ok  10 [      8/    198] <  23> 'dd if=/dev/zero of=/mnt/glusterfs/0/a bs=90M count=1'
dd: error writing '/mnt/glusterfs/0/b': No space left on device
dd: closing output file '/mnt/glusterfs/0/b': No space left on device
not ok  11 [      8/   2391] <  24> 'dd if=/dev/zero of=/mnt/glusterfs/0/b bs=10M count=1' -> ''
ok  12 [     23/   3025] <  28> '! dd if=/dev/zero of=/mnt/glusterfs/0/c bs=5M count=1'
ok  13 [      8/    183] <  29> 'dd if=/dev/urandom of=/mnt/glusterfs/0/a bs=1022 count=1 oflag=seek_bytes,sync seek=102 conv=notrunc'
ok  14 [     40/    132] <  34> 'gluster --mode=script --wignore volume set patchy storage.reserve 40'
ok  15 [   5024/    142] <  39> 'dd if=/dev/zero of=/mnt/glusterfs/0/a bs=70M count=1'
dd: error writing '/mnt/glusterfs/0/b': No space left on device
dd: closing output file '/mnt/glusterfs/0/b': No space left on device
not ok  16 [      7/   4196] <  40> 'dd if=/dev/zero of=/mnt/glusterfs/0/b bs=10M count=1' -> ''
ok  17 [     15/   4035] <  42> '! dd if=/dev/zero of=/mnt/glusterfs/0/c bs=5M count=1'
ok  18 [     15/   3151] <  44> 'gluster --mode=script --wignore volume stop patchy'
ok  19 [     20/    935] <  45> 'gluster --mode=script --wignore volume delete patchy'
  Logical volume "brick_lvm" successfully removed
  Logical volume "thinpool" successfully removed
  Volume group "patchy_snap_ykaul_vg_3" successfully removed
  Logical volume "brick_lvm" successfully removed
  Logical volume "thinpool" successfully removed
  Volume group "patchy_snap_ykaul_vg_2" successfully removed
  Logical volume "brick_lvm" successfully removed
  Logical volume "thinpool" successfully removed
  Volume group "patchy_snap_ykaul_vg_1" successfully removed
losetup: /d/dev/loop*: failed to use device: No such device
losetup: /d/dev/loop*: failed to use device: No such device
Failed 2/19 subtests 

Test Summary Report
-------------------
tests/bugs/posix/bug-1651445.t (Wstat: 0 Tests: 19 Failed: 2)
  Failed tests:  11, 16
Files=1, Tests=19, 37 wallclock secs ( 0.02 usr  0.00 sys +  0.63 cusr  0.94 csys =  1.59 CPU)
Result: FAIL

mykaul avatar Aug 09 '22 08:08 mykaul

The test case is failing on your laptop because LVM reserves more space there than on CentOS 7. In the CI environment the test did not fail at this point; it passed here but failed later because the space_check thread had set the flag, the flag was not reset after cleanup, and the dd command therefore failed. It will not fail in the CI environment (CentOS 7).

mohit84 avatar Aug 09 '22 12:08 mohit84

LGTM

The explanation sounds fine. But still, it passed with the previous patch and is now failing in some cases (as @mykaul mentioned). It would be good to analyze it again.

Done

mohit84 avatar Aug 10 '22 08:08 mohit84

/run regression

mohit84 avatar Aug 10 '22 08:08 mohit84

@mykaul Can you please confirm whether the test case passes in your environment?

mohit84 avatar Aug 11 '22 04:08 mohit84

@mykaul Can you please confirm whether the test case passes in your environment?

Not yet:

./tests/bugs/posix/bug-1651445.t .. 
1..19
losetup: /d/dev/loop*: failed to use device: No such device
losetup: /d/dev/loop*: failed to use device: No such device
ok   1 [   2233/      9] <   9> 'verify_lvm_version'
ok   2 [     10/   1015] <  10> 'glusterd'
ok   3 [     18/     15] <  11> 'pidof glusterd'
ok   4 [     12/      2] <  12> 'init_n_bricks 3'
ok   5 [     14/   2929] <  13> 'setup_lvm 3'
ok   6 [     19/    193] <  15> 'gluster --mode=script --wignore volume create patchy replica 3 ykaul.tlv.redhat.com:/d/backends/1/patchy_snap_ykaul_mnt ykaul.tlv.redhat.com:/d/backends/2/patchy_snap_ykaul_mnt ykaul.tlv.redhat.com:/d/backends/3/patchy_snap_ykaul_mnt'
ok   7 [     18/   1271] <  16> 'gluster --mode=script --wignore volume start patchy'
ok   8 [     22/     29] <  18> 'glusterfs --volfile-id=/patchy --volfile-server=ykaul.tlv.redhat.com /mnt/glusterfs/0'
ok   9 [      7/    148] <  21> 'gluster --mode=script --wignore volume set patchy storage.reserve 40MB'
ok  10 [     20/    134] <  24> 'dd if=/dev/zero of=/mnt/glusterfs/0/a bs=90M count=1'
dd: error writing '/mnt/glusterfs/0/b': No space left on device
dd: closing output file '/mnt/glusterfs/0/b': No space left on device
not ok  11 [      7/    212] <  36> 'dd if=/dev/zero of=/mnt/glusterfs/0/b bs=4M count=1' -> ''
ok  12 [     13/    143] <  40> '! dd if=/dev/zero of=/mnt/glusterfs/0/c bs=5M count=1'
ok  13 [     20/    234] <  41> 'dd if=/dev/urandom of=/mnt/glusterfs/0/a bs=1022 count=1 oflag=seek_bytes,sync seek=102 conv=notrunc'
ok  14 [     61/    154] <  46> 'gluster --mode=script --wignore volume set patchy storage.reserve 40'
ok  15 [   5025/    112] <  51> 'dd if=/dev/zero of=/mnt/glusterfs/0/a bs=70M count=1'
ok  16 [      7/      9] <  52> 'dd if=/dev/zero of=/mnt/glusterfs/0/b bs=4M count=1'
ok  17 [      8/    185] <  54> '! dd if=/dev/zero of=/mnt/glusterfs/0/c bs=5M count=1'
ok  18 [     11/   3155] <  56> 'gluster --mode=script --wignore volume stop patchy'
ok  19 [     13/    960] <  57> 'gluster --mode=script --wignore volume delete patchy'
  Logical volume "brick_lvm" successfully removed
  Logical volume "thinpool" successfully removed
  Volume group "patchy_snap_ykaul_vg_3" successfully removed
  Logical volume "brick_lvm" successfully removed
  Logical volume "thinpool" successfully removed
  Volume group "patchy_snap_ykaul_vg_2" successfully removed
  Logical volume "brick_lvm" successfully removed
  Logical volume "thinpool" successfully removed
  Volume group "patchy_snap_ykaul_vg_1" successfully removed
losetup: /d/dev/loop*: failed to use device: No such device
losetup: /d/dev/loop*: failed to use device: No such device
Failed 1/19 subtests 

mykaul avatar Aug 13 '22 17:08 mykaul

What is the os version on your laptop?

mohit84 avatar Aug 14 '22 02:08 mohit84

What is the os version on your laptop?

Fedora 36, latest.

mykaul avatar Aug 15 '22 05:08 mykaul

Can you please confirm the output of df -klh after setup_lvm has run in your environment?

mohit84 avatar Aug 16 '22 05:08 mohit84

/dev/mapper/patchy_snap_ykaul_vg_1-brick_lvm  145M  9.1M  136M   7% /d/backends/1/patchy_snap_ykaul_mnt
/dev/mapper/patchy_snap_ykaul_vg_2-brick_lvm  145M  9.1M  136M   7% /d/backends/2/patchy_snap_ykaul_mnt
/dev/mapper/patchy_snap_ykaul_vg_3-brick_lvm  145M  9.1M  136M   7% /d/backends/3/patchy_snap_ykaul_mnt
ok   6 [     10/    189] <  16> 'gluster --mode=script --wignore volume create patchy replica 3 ykaul.tlv.redhat.com:/d/backends/1/patchy_snap_ykaul_mnt ykaul.tlv.redhat.com:/d/backends/2/patchy_snap_ykaul_mnt ykaul.tlv.redhat.com:/d/backends/3/patchy_snap_ykaul_mnt'
ok   7 [     18/   1261] <  17> 'gluster --mode=script --wignore volume start patchy'
ok   8 [      9/     25] <  19> 'glusterfs --volfile-id=/patchy --volfile-server=ykaul.tlv.redhat.com /mnt/glusterfs/0'
ok   9 [      8/    137] <  22> 'gluster --mode=script --wignore volume set patchy storage.reserve 40MB'
/dev/mapper/patchy_snap_ykaul_vg_1-brick_lvm  145M  9.4M  136M   7% /d/backends/1/patchy_snap_ykaul_mnt
/dev/mapper/patchy_snap_ykaul_vg_2-brick_lvm  145M  9.4M  136M   7% /d/backends/2/patchy_snap_ykaul_mnt
/dev/mapper/patchy_snap_ykaul_vg_3-brick_lvm  145M  9.4M  136M   7% /d/backends/3/patchy_snap_ykaul_mnt
ok  10 [     10/    198] <  26> 'dd if=/dev/zero of=/mnt/glusterfs/0/a bs=90M count=1'
/dev/mapper/patchy_snap_ykaul_vg_1-brick_lvm  145M  100M   46M  69% /d/backends/1/patchy_snap_ykaul_mnt
/dev/mapper/patchy_snap_ykaul_vg_2-brick_lvm  145M  100M   46M  69% /d/backends/2/patchy_snap_ykaul_mnt
/dev/mapper/patchy_snap_ykaul_vg_3-brick_lvm  145M  100M   46M  69% /d/backends/3/patchy_snap_ykaul_mnt
dd: error writing '/mnt/glusterfs/0/b': No space left on device
dd: closing output file '/mnt/glusterfs/0/b': No space left on device

mykaul avatar Aug 16 '22 07:08 mykaul

136M

I think there is some issue in your environment: initially the brick has 136M available, so even after reserving 40M the first dd command should not fail, because the required space is still available on the backend (136 - 40 = 96M). I installed Fedora 36 on a beaker node but was not able to reproduce it, even after running the test in a loop 10 times.
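The arithmetic above can be checked mechanically. The 136M, 40M, and 90M figures come from the df output and the test script; nothing gluster-specific is assumed here:

```shell
# Sanity check of the space math: 136M free on the brick, 40M reserved
# via storage.reserve, so a 90M write should still fit on a healthy setup.
avail_mb=136
reserve_mb=40
write_mb=90
usable_mb=$((avail_mb - reserve_mb))
if [ "$write_mb" -le "$usable_mb" ]; then
    echo "write of ${write_mb}M should succeed (usable=${usable_mb}M)"
else
    echo "write of ${write_mb}M should fail with ENOSPC"
fi
```

If the first dd nevertheless hits ENOSPC, the brick filesystem is offering less space than df reports, which points at the local LVM setup rather than the test.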

mohit84 avatar Aug 16 '22 09:08 mohit84

@amarts Can you please approve the patch? I will merge it.

mohit84 avatar Aug 17 '22 05:08 mohit84