
Small improvements to content population (for quick read)

mykaul opened this issue 2 years ago • 4 comments

  • No need to CALLOC the whole array; MALLOC is fine, since we are going to read into it anyway.
  • Skip zero-length files.
  • Open the file with O_NOATIME; I don't think there's a need to update the access time over this read.
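The three points above could look roughly like the sketch below. This is a hypothetical standalone helper, not the actual posix xlator code (which uses GF_MALLOC/GF_CALLOC and its own error reporting); the O_NOATIME fallback reflects the flag's documented behavior of failing with EPERM when the caller doesn't own the file and lacks CAP_FOWNER.

```c
#define _GNU_SOURCE /* for O_NOATIME (Linux-specific) */
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Read a small file into a freshly allocated buffer.
 * Returns the buffer (caller frees) and sets *len, or NULL on
 * error or for zero-length files (which are skipped entirely). */
char *
read_small_file(const char *path, size_t *len)
{
    /* O_NOATIME: don't update atime for this internal read.
     * It requires file ownership (or CAP_FOWNER), so fall back
     * to a plain open() on EPERM. */
    int fd = open(path, O_RDONLY | O_NOATIME);
    if (fd < 0 && errno == EPERM)
        fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) {
        /* zero-length file: nothing to populate, skip it */
        close(fd);
        return NULL;
    }

    /* malloc, not calloc: read() overwrites the buffer anyway,
     * so zero-filling it first is wasted work */
    char *buf = malloc(st.st_size);
    if (!buf) {
        close(fd);
        return NULL;
    }

    ssize_t nread = read(fd, buf, st.st_size);
    close(fd);
    if (nread <= 0) {
        free(buf);
        return NULL;
    }
    *len = (size_t)nread;
    return buf;
}
```

On filesystems mounted with relatime the practical benefit of O_NOATIME is smaller, but it still avoids an inode update on classic atime mounts.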

Updates: #1000
Signed-off-by: Yaniv Kaul [email protected]

mykaul avatar Sep 07 '22 18:09 mykaul

CLANG-FORMAT FAILURE: Before merging the patch, this diff needs to be considered for passing clang-format

index afba522c3..f75919a06 100644
--- a/xlators/storage/posix/src/posix-helpers.c
+++ b/xlators/storage/posix/src/posix-helpers.c
@@ -477,8 +477,7 @@ _posix_xattr_get_set(dict_t *xattr_req, char *key, data_t *data,
                 goto err;
             }
 
-            ret = dict_set_bin(filler->xattr, key, databuf,
-                               read_len);
+            ret = dict_set_bin(filler->xattr, key, databuf, read_len);
             if (ret < 0) {
                 gf_msg(filler->this->name, GF_LOG_ERROR, 0,
                        P_MSG_XDATA_GETXATTR,

gluster-ant avatar Sep 07 '22 18:09 gluster-ant

/run regression

mykaul avatar Sep 13 '22 09:09 mykaul

1 test(s) failed ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t

0 test(s) generated core

1 test(s) needed retry ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t https://build.gluster.org/job/gh_centos7-regression/2849/

gluster-ant avatar Sep 13 '22 10:09 gluster-ant

/run regression

mykaul avatar Sep 13 '22 15:09 mykaul

/run regression

mykaul avatar Sep 30 '22 07:09 mykaul

1 test(s) failed ./tests/basic/fuse/active-io-graph-switch.t

0 test(s) generated core

6 test(s) needed retry
./tests/000-flaky/basic_changelog_changelog-snapshot.t
./tests/000-flaky/basic_distribute_rebal-all-nodes-migrate.t
./tests/000-flaky/bugs_distribute_bug-1117851.t
./tests/000-flaky/bugs_glusterd_bug-857330/normal.t
./tests/000-flaky/bugs_glusterd_bug-857330/xml.t
./tests/basic/fuse/active-io-graph-switch.t

5 flaky test(s) marked as success even though they failed
./tests/000-flaky/basic_changelog_changelog-snapshot.t
./tests/000-flaky/basic_distribute_rebal-all-nodes-migrate.t
./tests/000-flaky/bugs_distribute_bug-1117851.t
./tests/000-flaky/bugs_glusterd_bug-857330/normal.t
./tests/000-flaky/bugs_glusterd_bug-857330/xml.t
https://build.gluster.org/job/gh_centos7-regression/2941/

gluster-ant avatar Sep 30 '22 10:09 gluster-ant

It inconsistently fails on my laptop, so I assume there's something not working well yet:

[ykaul@ykaul glusterfs]$ sudo prove -vf tests/basic/quick-read-with-upcall.t 
tests/basic/quick-read-with-upcall.t .. 
1..33
losetup: /d/dev/loop*: failed to use device: No such device
ok   1 [    234/    901] <   9> 'glusterd'
ok   2 [     15/      8] <  10> 'pidof glusterd'
No volumes present
ok   3 [      8/     41] <  11> 'gluster --mode=script --wignore volume info'
ok   4 [      6/    176] <  14> 'gluster --mode=script --wignore volume create patchy 10.100.102.15:/d/backends/patchy1 10.100.102.15:/d/backends/patchy2'
ok   5 [     12/    224] <  15> 'gluster --mode=script --wignore volume start patchy'
ok   6 [      7/     15] <  18> 'glusterfs -s 10.100.102.15 --volfile-id patchy --direct-io-mode=enable /mnt/glusterfs/0'
ok   7 [      7/     15] <  19> 'glusterfs -s 10.100.102.15 --volfile-id patchy --direct-io-mode=enable /mnt/glusterfs/1'
ok   8 [      8/      2] <  32> 'write_to /mnt/glusterfs/0/test.txt test-message0'
ok   9 [      7/      2] <  33> 'test-message0 cat /mnt/glusterfs/0/test.txt'
ok  10 [      6/      2] <  34> 'test-message0 cat /mnt/glusterfs/1/test.txt'
ok  11 [      7/      1] <  36> 'write_to /mnt/glusterfs/0/test.txt test-message1'
ok  12 [      7/      2] <  37> 'test-message1 cat /mnt/glusterfs/0/test.txt'
ok  13 [      7/      2] <  38> 'test-message0 cat /mnt/glusterfs/1/test.txt'
ok  14 [   1014/      7] <  43> 'test-message1 cat /mnt/glusterfs/1/test.txt'
ok  15 [     13/    122] <  45> 'gluster --mode=script --wignore volume set patchy features.cache-invalidation on'
ok  16 [     15/    110] <  46> 'gluster --mode=script --wignore volume set patchy performance.quick-read-cache-timeout 15'
ok  17 [     10/    115] <  47> 'gluster --mode=script --wignore volume set patchy performance.md-cache-timeout 15'
ok  18 [     14/      3] <  49> 'write_to /mnt/glusterfs/0/test1.txt test-message0'
ok  19 [      9/      2] <  50> 'test-message0 cat /mnt/glusterfs/0/test1.txt'
ok  20 [      7/      3] <  51> 'test-message0 cat /mnt/glusterfs/1/test1.txt'
ok  21 [      7/      1] <  53> 'write_to /mnt/glusterfs/0/test1.txt test-message1'
ok  22 [      7/      2] <  54> 'test-message1 cat /mnt/glusterfs/0/test1.txt'
ok  23 [      7/      2] <  55> 'test-message0 cat /mnt/glusterfs/1/test1.txt'
not ok  24 [   1014/      6] <  58> 'test-message0 cat /mnt/glusterfs/1/test1.txt' -> 'Got "test-message1" instead of "test-message0"'
ok  25 [  30018/      6] <  61> 'test-message1 cat /mnt/glusterfs/1/test1.txt'
ok  26 [     10/    111] <  63> 'gluster --mode=script --wignore volume set patchy performance.quick-read-cache-invalidation on'
ok  27 [     12/    113] <  64> 'gluster --mode=script --wignore volume set patchy performance.cache-invalidation on'
ok  28 [     13/      3] <  66> 'write_to /mnt/glusterfs/0/test2.txt test-message0'
ok  29 [      9/      5] <  67> 'test-message0 cat /mnt/glusterfs/0/test2.txt'
ok  30 [      9/      3] <  68> 'test-message0 cat /mnt/glusterfs/1/test2.txt'
ok  31 [      7/      2] <  70> 'write_to /mnt/glusterfs/0/test2.txt test-message1'
ok  32 [      7/      4] <  71> 'test-message1 cat /mnt/glusterfs/0/test2.txt'
ok  33 [      7/      3] <  72> 'test-message1 cat /mnt/glusterfs/1/test2.txt'
Failed 1/33 subtests 

(sometimes it passes!)
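For a test that only fails intermittently like this, a small helper loop (a sketch, not part of the glusterfs test harness) can estimate the failure rate by re-running it:

```shell
# flake_check: run a command N times and print "<failures>/<runs>".
# POSIX sh; usage: flake_check <runs> <command> [args...]
flake_check() {
    runs=$1
    shift
    fail=0
    i=1
    while [ "$i" -le "$runs" ]; do
        "$@" >/dev/null 2>&1 || fail=$((fail + 1))
        i=$((i + 1))
    done
    echo "$fail/$runs"
}
```

For example, `flake_check 20 sudo prove -f tests/basic/quick-read-with-upcall.t` would report how many of 20 runs failed.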

mykaul avatar Oct 22 '22 08:10 mykaul

Ah, the test fails on devel branch as well sometimes...

mykaul avatar Oct 22 '22 09:10 mykaul

/run regression

mykaul avatar Nov 04 '22 11:11 mykaul

Thank you for your contributions. We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.

stale[bot] avatar Jun 10 '23 03:06 stale[bot]

Closing this issue as there has been no update since my last comment. If this issue is still valid, feel free to reopen it.

stale[bot] avatar Aug 12 '23 02:08 stale[bot]