glusterfs
Glusterfs fails to build with -Werror=stringop-overread on F35
Description of problem: Glusterfs fails to build on Fedora 35 because of a warning treated as an error:
/home/jenkins/root/workspace/gh_fedora-smoke/libglusterfs/src/globals.c: In function ‘gf_thread_needs_cleanup’:
12:08:38 /home/jenkins/root/workspace/gh_fedora-smoke/libglusterfs/src/globals.c:314:11: error: ‘pthread_setspecific’ expecting 1 byte in a region of size 0 [-Werror=stringop-overread]
12:08:38 314 | (void)pthread_setspecific(free_key, (void *)1);
12:08:38 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
12:08:38 In file included from /home/jenkins/root/workspace/gh_fedora-smoke/libglusterfs/src/globals.c:11:
12:08:38 /usr/include/pthread.h:1308:12: note: in a call to function ‘pthread_setspecific’ declared with attribute ‘access (none, 2)’
12:08:38 1308 | extern int pthread_setspecific (pthread_key_t __key,
12:08:38 | ^~~~~~~~~~~~~~~~~~~
It seems stringop-overread is the warning that triggers the error. I found this while trying to change the glusterfs smoke test to run on F35, cf. https://github.com/gluster/build-jobs/pull/116
As @mykaul pointed out, this seems to be a GCC issue: https://www.mail-archive.com/[email protected]/msg273180.html
So gcc is the same on F34 and F35, minus a bunch of patches (so not exactly the same), but it works on F34 and not on F35. I guess what has changed is likely the underlying glibc, as I understand from https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102329#c7
There is a fix in GCC as of 4 days ago (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101751), but it has not yet been backported.
GCC 12 is due in spring 2022, so Fedora 36 at best. I guess that if the fix is backported, it will appear directly in F35, depending on the timing.
Thank you for your contributions. We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.
If the bot wants me to comment, sure :)
Closing this issue as there has been no update since my last comment. If this issue is still valid, feel free to reopen it.