
readdir core: access to freed memory

Open feizai131 opened this issue 2 years ago • 2 comments

Description of problem:

(gdb) bt
#0  0x00007f4df0770195 in glfd_entry_next (glfd=0x7f4118089290, plus=1) at glfs-fops.c:2898
#1  0x00007f4df0774937 in pub_glfs_xreaddirplus_r (glfd=0x7f4118089290, flags=3, xstat_p=0x7f4c65ee8800, ext=0x7f4c65ee86e0, res=0x7f4c65ee8838) at glfs-fops.c:4831
#2  0x0000000000a9bbd5 in cfs_glfs_xreaddirplus_r (glfd=0x7f4118089290, flags=3, xstat_p=0x7f4c65ee8800, ext=0x7f4c65ee86e0, res=0x7f4c65ee8838) at /usr/src/debug/cfs_v2_nasserver-2.5.5.34/nfs-ganesha-2.5.5-stable/src/support/cfs_gfapi.c:1080

(gdb) f 0
#0  0x00007f4df0770195 in glfd_entry_next (glfd=0x7f4118089290, plus=1) at glfs-fops.c:2898
2898    in glfs-fops.c
(gdb) p glfd
$1 = (struct glfs_fd *) 0x7f4118089290
(gdb) p *glfd
$2 = {openfds = {next = 0x49d4a88, prev = 0x49d4a88}, _ref = {cnt = 3, release = 0x7f4df076390a <glfs_fd_destroy>,
      data = 0x7f4118089290}, fs = 0x49d4980, state = GLFD_OPEN, offset = 45824, fd = 0x7f4118014730,
      entries = {next = 0x7f4db40ebf70, prev = 0x7f4db4695b10}, next = 0xdeadc0de00, readdirbuf = 0x0,
      lk_owner = {len = 0, data = '\000' <repeats 1023 times>}}
(gdb) p glfd->next
$3 = (gf_dirent_t *) 0xdeadc0de00
(gdb) p ret
$4 = -1

Note the value of glfd->next: 0xdeadc0de is the poison pattern GlusterFS's memory accounting writes into freed allocations, which is consistent with the cursor pointing into memory that has already been released.

The exact command to reproduce the issue:
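This section is left blank in the report. Purely as a hypothetical illustration (this is not the reporter's test program), a gfapi loop of the following shape exercises the glfs_xreaddirplus_r() path seen in frame #1; the volume name "testvol", host "localhost", and path "/testdir" are invented placeholders:

/* Hypothetical sketch, not the actual reproducer: iterates a directory via
 * glfs_xreaddirplus_r(), the call path from the backtrace above.
 * Build (assuming gfapi headers/libs are installed): gcc repro.c -lgfapi */
#include <stdio.h>
#include <dirent.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    struct glfs *fs = glfs_new("testvol");              /* placeholder volume */
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
    if (glfs_init(fs) < 0)
        return 1;

    struct glfs_fd *fd = glfs_opendir(fs, "/testdir");  /* placeholder path */
    if (!fd)
        return 1;

    struct dirent ent, *res = NULL;
    struct glfs_xreaddirp_stat *xstat = NULL;

    /* flags = 3 in the backtrace, i.e. GFAPI_XREADDIRP_STAT | GFAPI_XREADDIRP_HANDLE;
     * loop ends when res is NULL (end of stream) or the call returns -1 */
    while (glfs_xreaddirplus_r(fd, GFAPI_XREADDIRP_STAT | GFAPI_XREADDIRP_HANDLE,
                               &xstat, &ent, &res) >= 0 && res) {
        printf("%s\n", res->d_name);
        if (xstat) {
            glfs_free(xstat);
            xstat = NULL;
        }
    }

    glfs_closedir(fd);
    glfs_fini(fs);
    return 0;
}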

The relevant code in glfd_entry_refresh() (glfs-fops.c), shown with the proposed fix as its last line:

if (ret >= 0) {
    if (plus) {
        /**
         * Set inode_needs_lookup flag before linking the
         * inode. Doing it later post linkage might lead
         * to a race where a fop comes after inode link
         * but before setting need_lookup flag.
         */
        list_for_each_entry (entry, &entries.list, list) {
            if (entry->inode)
                inode_set_need_lookup (entry->inode, THIS);
            else if (!IA_ISDIR (entry->d_stat.ia_type)) {
                /* entry->inode for directories will be
                 * always set to null to force a lookup
                 * on the dentry. Also we will have
                 * proper stat if directory present on
                 * hashed subvolume.
                 */
                gf_fill_iatt_for_dirent (entry, fd->inode,
                                         subvol, data_in);
            }
        }

        gf_link_inodes_from_dirent (THIS, fd->inode, &entries);
    }

    list_splice_init (&glfd->entries, &old.list);
    list_splice_init (&entries.list, &glfd->entries);

    /* spurious errno is dangerous for glfd_entry_next() */
    errno = 0;
}
glfd->next = NULL;   // Add a new line to solve this problem

Possible cause: the line glfd->next = NULL; should be added to reset the pointer. glfd->next still holds the address of a dirent from the previous batch of entries, but those entries are spliced onto old and later freed, so the pointer dangles.
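As a standalone illustration of the same pattern (toy code, not GlusterFS source; every name in it is invented for the demo): a cached cursor into a list must be cleared before the list's nodes are freed, and re-pointed only when the refill actually produced entries.

/* Toy demo of the stale-cursor fix pattern; not GlusterFS code.
 * Build: gcc cursor_demo.c -o cursor_demo */
#include <stdio.h>
#include <stdlib.h>

struct entry {
    char name[32];
    struct entry *next;
};

struct dirstream {
    struct entry *entries;  /* current batch of entries */
    struct entry *cursor;   /* next entry to hand out, like glfd->next */
};

static void free_entries(struct entry *e)
{
    while (e) {
        struct entry *n = e->next;
        free(e);
        e = n;
    }
}

/* Refill the batch; nread == 0 models a readdir that returned nothing. */
static void refresh(struct dirstream *ds, int nread)
{
    struct entry *old = ds->entries;
    ds->entries = NULL;

    for (int i = nread - 1; i >= 0; i--) {
        struct entry *e = calloc(1, sizeof(*e));
        if (!e)
            break;
        snprintf(e->name, sizeof(e->name), "entry-%d", i);
        e->next = ds->entries;
        ds->entries = e;
    }

    /* The fix: drop the stale cursor unconditionally... */
    ds->cursor = NULL;
    /* ...and re-point it only when there is something to point at. */
    if (nread > 0)
        ds->cursor = ds->entries;

    free_entries(old);  /* without the reset above, cursor would now dangle */
}

int main(void)
{
    struct dirstream ds = {0};

    refresh(&ds, 2);                 /* first batch: cursor -> "entry-0" */
    printf("%s\n", ds.cursor->name);

    refresh(&ds, 0);                 /* empty refill: cursor must be NULL */
    printf("cursor is %s\n", ds.cursor ? ds.cursor->name : "NULL");

    free_entries(ds.entries);
    return 0;
}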

In glfd_entry_refresh() the cursor is only reassigned when new entries were actually read:

if (ret > 0)
        glfd->next = list_entry (glfd->entries.next, gf_dirent_t, list);

If the condition (ret > 0) is not satisfied, glfd->next is not reassigned in some scenarios and still holds the previous pointer. glfs_closedir() then frees the memory that pointer refers to:

int
pub_glfs_closedir(struct glfs_fd *glfd, glfs_context_t *context)
{
    int ret = -1;

    DECLARE_OLD_THIS;
    __GLFS_ENTRY_VALIDATE_FD(glfd, invalid_fs);

    gf_dirent_free(list_entry(&glfd->entries, gf_dirent_t, list));
    glfs_mark_glfd_for_deletion(glfd);

    __GLFS_EXIT_FS;
    ret = 0;

invalid_fs:
    return ret;
}

The full output of the command that failed:

Expected results:

Mandatory info:

- The output of the gluster volume info command:

- The output of the gluster volume status command:

- The output of the gluster volume heal command:

- Provide logs present on following locations of client and server nodes: /var/log/glusterfs/

- Is there any crash? Provide the backtrace and coredump

Additional info:

- The operating system / glusterfs version:

Note: Please hide any confidential data which you don't want to share in public like IP address, file name, hostname or any other configuration

feizai131 avatar Jan 13 '23 03:01 feizai131

Hi @feizai131. I think you are right. I'll send a patch for this. Can you provide the test code that triggered this bug?

xhernandez avatar Jan 26 '23 08:01 xhernandez

Thank you for your contributions. This issue has not had any activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.

stale[bot] avatar Sep 17 '23 06:09 stale[bot]