
Reducing the number of calls to THIS

Open rkothiya opened this issue 3 years ago • 7 comments

The THIS macro expands to a function call. This change modifies functions that invoke THIS multiple times, saving its return value once and reusing it.

Updates: #1683

Change-Id: I91328eb4af905ee4f8c9ff993dd0e734ade0183a Signed-off-by: Rinku Kothiya [email protected]

rkothiya avatar Jun 16 '21 07:06 rkothiya

As mentioned in this issue, there are about 150 files where we need to make this change. Instead of changing everything at once, I am changing a few files at a time, which will make the review process easier.

I have not changed some functions (for example inode_grep()). Since "THIS" is called only once there, it would still be called once even after the change, so there was no point and I skipped it.

rkothiya avatar Jun 16 '21 07:06 rkothiya

CLANG-FORMAT FAILURE: Before merging the patch, this diff needs to be considered for passing clang-format

index 12374003a..5e700a6eb 100644
--- a/api/src/glfs-fops.c
+++ b/api/src/glfs-fops.c
@@ -104,7 +104,8 @@ glfd_set_state_bind(struct glfs_fd *glfd)
  */
 static int
 glfs_get_upcall_cache_invalidation(struct gf_upcall *to_up_data,
-                                   struct gf_upcall *from_up_data, xlator_t *this)
+                                   struct gf_upcall *from_up_data,
+                                   xlator_t *this)
 {
     struct gf_upcall_cache_invalidation *ca_data = NULL;
     struct gf_upcall_cache_invalidation *f_ca_data = NULL;
@@ -141,8 +142,6 @@ glfs_get_upcall_lease(struct gf_upcall *to_up_data,
     struct gf_upcall_recall_lease *f_ca_data = NULL;
     int ret = -1;
 
-
-
     f_ca_data = from_up_data->data;
 
     ca_data = GF_CALLOC(1, sizeof(*ca_data), glfs_mt_upcall_entry_t);
@@ -5392,7 +5391,6 @@ pub_glfs_fd_set_lkowner(struct glfs_fd *glfd, void *data, int len)
         goto invalid_fs;
     }
 
-
     if ((len <= 0) || (len > GFAPI_MAX_LOCK_OWNER_LEN)) {
         errno = EINVAL;
         gf_smsg(this->name, GF_LOG_ERROR, errno, API_MSG_INVALID_ARG,
@@ -5467,7 +5465,8 @@ invalid_fs:
 }
 
 static void
-glfs_enqueue_upcall_data(struct glfs *fs, struct gf_upcall *upcall_data, xlator_t *this)
+glfs_enqueue_upcall_data(struct glfs *fs, struct gf_upcall *upcall_data,
+                         xlator_t *this)
 {
     int ret = -1;
     upcall_entry *u_list = NULL;
@@ -5494,7 +5493,8 @@ glfs_enqueue_upcall_data(struct glfs *fs, struct gf_upcall *upcall_data, xlator_
                                                      upcall_data, this);
             break;
         case GF_UPCALL_RECALL_LEASE:
-            ret = glfs_get_upcall_lease(&u_list->upcall_data, upcall_data, this);
+            ret = glfs_get_upcall_lease(&u_list->upcall_data, upcall_data,
+                                        this);
             break;
         default:
             break;
@@ -6228,7 +6228,6 @@ pub_glfs_xreaddirplus_r(struct glfs_fd *glfd, uint32_t flags,
 
     GF_REF_GET(glfd);
 
-
     errno = 0;
 
     if (ext)

gluster-ant avatar Jun 18 '21 13:06 gluster-ant

/recheck smoke

rkothiya avatar Jun 18 '21 13:06 rkothiya

/run regression

rkothiya avatar Jun 18 '21 16:06 rkothiya

1 test(s) failed ./tests/line-coverage/cli-negative-case-and-function-coverage.t

0 test(s) generated core

2 test(s) needed retry ./tests/basic/afr/split-brain-healing.t ./tests/line-coverage/cli-negative-case-and-function-coverage.t https://build.gluster.org/job/gh_centos7-regression/1360/

gluster-ant avatar Jun 18 '21 20:06 gluster-ant

Thank you for your contributions. We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.

stale[bot] avatar Jan 18 '22 23:01 stale[bot]

Thank you for your contributions. We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.

stale[bot] avatar Sep 20 '22 19:09 stale[bot]

Closing this issue, as there has been no update since my last comment. If this issue is still valid, feel free to reopen it.

stale[bot] avatar Oct 16 '22 03:10 stale[bot]