zephyr
Generated linker scripts break when ZEPHYR_BASE and ZEPHYR_MODULES share structure that contains symlinks
Describe the bug My zephyr workspace consists of symlinks to read-only copies of modules which exist in a programmatically managed filesystem. The filesystem looks something like this.
/store
|-- zephyr-base
|-- hal-nxp
|-- cmsis
`-- zcbor
/workspace
|-- zephyr -> /store/zephyr-base
|-- modules/hal/nxp -> /store/hal-nxp
`-- modules/hal/cmsis -> /store/cmsis
This command builds successfully:
source /store/zephyr-base/zephyr-env.sh
cmake $ZEPHYR_BASE/samples/hello_world \
  -Bbuild -DBOARD=frdm_k64f \
  -DZEPHYR_MODULES="/store/hal-nxp;/store/cmsis"
make -Cbuild
This command fails to build:
source /workspace/zephyr/zephyr-env.sh
cmake $ZEPHYR_BASE/samples/hello_world \
  -Bbuild -DBOARD=frdm_k64f \
  -DZEPHYR_MODULES="/workspace/modules/hal/nxp;/workspace/modules/hal/cmsis"
make -Cbuild
The reason is that the generated linker scripts in build/zephyr/include/generated/
use paths relative to $ZEPHYR_BASE.
When the build runs, it resolves paths from wherever it really is in the filesystem, i.e. /store/zephyr-base instead of /workspace/zephyr.
The generated include looks like
/* Sort key: "default" */#include "../../../modules/hal/nxp/mcux/quick_access_code.ld"
But starting from /store/zephyr-base/ there is no ../../../modules. If any component crossed by the "../" hops was a symlink at configure time, the build will fail.
If these files embedded absolute paths instead, the symlinks would be resolved correctly at both configure and build time. Is there any reason to prefer relative paths over absolute paths?
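The mismatch can be reproduced in a few lines of Python. The layout below mirrors the tree at the top of this report; all paths and file names are illustrative, not the actual Zephyr sources.

```python
# Minimal Python reproduction of the configure-vs-build path mismatch.
import os
import tempfile

root = tempfile.mkdtemp()
# Read-only "store" holding the real copies.
os.makedirs(os.path.join(root, "store", "zephyr-base", "include"))
os.makedirs(os.path.join(root, "store", "hal-nxp", "mcux"))
open(os.path.join(root, "store", "hal-nxp", "mcux",
                  "quick_access_code.ld"), "w").close()
# Workspace made of symlinks into the store.
os.makedirs(os.path.join(root, "workspace", "modules", "hal"))
os.symlink(os.path.join(root, "store", "zephyr-base"),
           os.path.join(root, "workspace", "zephyr"))
os.symlink(os.path.join(root, "store", "hal-nxp"),
           os.path.join(root, "workspace", "modules", "hal", "nxp"))

zephyr_base = os.path.join(root, "workspace", "zephyr")
module_ld = os.path.join(root, "workspace", "modules", "hal", "nxp",
                         "mcux", "quick_access_code.ld")

# Configure time: a purely textual relative path, like file(RELATIVE_PATH).
relpath = os.path.relpath(module_ld, os.path.join(zephyr_base, "include"))
print(relpath)  # ../../modules/hal/nxp/mcux/quick_access_code.ld

# Through the symlinked workspace the "../" hops still land somewhere real...
via_symlink = os.path.normpath(os.path.join(zephyr_base, "include", relpath))
print(os.path.exists(via_symlink))  # True

# ...but build time starts from the *resolved* include dir in the store,
# where "../../modules" simply does not exist.
resolved = os.path.realpath(os.path.join(zephyr_base, "include"))
print(os.path.exists(os.path.normpath(os.path.join(resolved, relpath))))  # False
```

The textual relative path is only meaningful from the symlinked vantage point where it was computed; once anything resolves the symlinks, it dangles.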
I see what you mean, you have the zephyr folder also symlinked, which is the cause.
Yes, I imagine it is an easy fix, but I’m not competent enough with python to determine what part actually comes up with those paths.
https://github.com/zephyrproject-rtos/zephyr/pull/16368/commits/62d611a3f8645c74f56f375a8e06b0143b4c34e7 It turns out this is not the way it has always been. The desire to move to relative paths was, ironically, in the name of reproducibility.
I am working on integrating the zephyr build system into a functional deployment model which addresses some concerns brought up here.
I can easily undo this patch in my build system so I guess it is up to the project on if this is desired behavior.
When the build runs, it resolves paths from wherever it really is in the filesystem, i.e. /store/zephyr-base instead of /workspace/zephyr.
It depends. The build system is made of a very large collection of parts and different parts have different opinions.
But starting from /store/zephyr/ there is no ../../../modules.
Yes, symbolic links break relative paths, this has always been the case. Symbolic links are "quick hacks", they cause countless other issues: https://lwn.net/Articles/899543/
I frequently use symbolic links to avoid spending time solving real problems and I have even submitted small symlink fixes when possible (1c8632cfaad3, f7414ab85958, https://github.com/zephyrproject-rtos/west/pull/313, ...) but I keep my symlink expectations low.
It turns out this is not the way it has always been (62d611a3f8645c, 2019). The desire to move to relative paths was, ironically, in the name of reproducibility.
I miss the irony sorry. Any connection between symlinks and reproducibility?
EDIT: I missed you were working on reproducibility.
On a somewhat related note, this ZEPHYR_BASE discussion:
- https://github.com/zephyrproject-rtos/zephyr/discussions/33521
cc: @aborisovich
In the particular case of #include "../../../modules/hal/nxp/mcux/quick_access_code.ld"
maybe something like this could work:
#include "modules/hal/nxp/mcux/quick_access_code.ld"
$linker_cmd -L zephyr/include/generated/
Worth a try.
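As a sketch of that idea: the generated script would include a path with no "../" hops, and the tool would be pointed at a search root, the way cpp -I or ld -L work. The resolver function and the directory layout below are hypothetical, just to show the mechanism.

```python
# Sketch: resolve a quoted include against search dirs instead of "../" hops.
import os
import tempfile

def resolve_include(name, search_dirs):
    """Mimic how a preprocessor resolves a quoted include against -I dirs."""
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(name)

root = tempfile.mkdtemp()
gen = os.path.join(root, "zephyr", "include", "generated")
os.makedirs(os.path.join(gen, "modules", "hal", "nxp", "mcux"))
ld = os.path.join(gen, "modules", "hal", "nxp", "mcux", "quick_access_code.ld")
open(ld, "w").close()

# Resolution is anchored to the search dir, not to where the including
# script happens to live, so symlinks on the caller's side stop mattering.
found = resolve_include("modules/hal/nxp/mcux/quick_access_code.ld", [gen])
print(found == ld)  # True
```

The price is that someone has to make the module scripts reachable under the search root (by copying or linking them there at configure time).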
Symbolic links are "quick hacks", they cause countless other issues
In this case it is not a quick hack. The functional deployment model as described in the paper I referenced above introduces a global filesystem keyed by the cryptographic hashes of a package's inputs. For example the file /gnu/store/some-hash-zephyr-base.3.1.99 is always the result of a specific computation, i.e. pulling the git repo and checking out a specific commit, applying whatever patches, etc. It is then linked symbolically from the read-only store into a given environment. Everything is built in isolation with only its explicit dependencies available; in this case explicit means other items in the store which were created in the same way.
I miss the irony sorry. Any connection between symlinks and reproducibility?
Not directly. I only discovered this issue because the build system was only working inside its isolated container, which is very unusual, the complete opposite of "it works on my machine". When I tried to create development profiles the builds started failing.
- Reproducible computations
- Reproducible profiles
- tarballs, the ultimate container format
The symlinks just make the reproducibility manageable.
Nothing needs to be done to the project's use of relative paths. I understand this use-case is unusual. The bug is that the relative path used at configure time is not the same as the relative path used at build time. The symlinks need to be resolved before the relative path is determined in both cases.
I don't think it should be a bug to use symlinks in the workspace; perhaps it would be appropriate to have a configure-time variable to control this behavior.
Symbolic links are "quick hacks", they cause countless other issues: https://lwn.net/Articles/899543/
In this case it is not a quick hack.
Did you have a look at this article? I restored the URL. It explains why symbolic links are "broken by design". How you use them is mitigation at best. It is from file system experts who spent years trying to fix them.
"All problems in computer science can be solved by another level of indirection" "...except for the problem of too many layers of indirection."
I did read the article and it highlights the problems of treating the filesystem as unstructured memory. I did not want to go so far into the weeds because it seemed unrelated but chapter 3 of that PDF discusses this issue as it relates to this deployment model and the parallels between memory management in languages like c and c++ and the file system.
I think "broken by design" is a bit of an overstatement.
The problem comes from the fact that file paths are analogous to void *: they contain an address in the file system with no guarantees about the contents. Relative paths are the same as pointer arithmetic: given a starting address we can create new addresses and dereference them with varying degrees of success.
What the build system does now is stage code (the linker script) which embeds a pointer obtained by taking its own location (path) and doing math with it. Except it's not doing the math itself; it's staging the math to be done in a different function with a different address.
The dangers of the filesystem changing under you are exactly the same as functions with bad pointers changing memory that doesn't belong to them, intentionally or not. It can be argued that using pointers is bad and many languages do not expose such low level memory manipulation to code written in them.
Under the hood you are of course still using pointers to memory and all the same dangers apply, but the build system is written in python for a reason: memory management is hard and often unrelated to the problem at hand. But this doesn't mean that pointer arithmetic is "broken by design". There is a time and place for it and with the correct tools and understanding the dangers can be managed.
For the same reason Python does not allow pointer manipulation, Nix imposes the same kind of discipline on the store. It is mounted in a different namespace that not even root can write to without first remounting it. Users can interact with it by sending instructions to the daemon but cannot manipulate it directly. There is no danger of the contents of the symlinks changing, but this is a policy question not fit for a build system to answer. If the store is changing maliciously the system is compromised anyway.
Right now the design of the build system, with ZEPHYR_MODULES and ZEPHYR_BASE being used the way they are, implies a certain flexibility with regards to the filesystem. Essentially it is asking me for pointers to some data, but recording them as relative to a stack pointer.
The justification of not leaking file paths does not make sense as long as ZEPHYR_MODULES and ZEPHYR_BASE work the way they do. If ZEPHYR_BASE=$HOME/workspace/zephyr and ZEPHYR_MODULES=/modules/nxp_hal then the linker script will encode your home path in the relative path anyway. The only way to ensure you don't leak path names (to whom by the way?) is to ensure your workspace is located in /workspace or a similar location which can be replicated on every machine.
If relative paths must be used, there needs to be some steps taken to resolve symlinks.
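A minimal sketch of that resolution step: resolve symlinks on both ends before computing the relative path, so the configure-time answer stays valid once everything is viewed through resolved paths at build time. The layout and names are illustrative; in CMake the rough analogue would be get_filename_component(... REALPATH) before file(RELATIVE_PATH).

```python
# Sketch: compute the relative path over fully resolved paths.
import os
import tempfile

def stable_relpath(path, start):
    """relpath over resolved (symlink-free) paths instead of textual ones."""
    return os.path.relpath(os.path.realpath(path), os.path.realpath(start))

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "store", "zephyr-base", "include"))
os.makedirs(os.path.join(root, "store", "hal-nxp"))
open(os.path.join(root, "store", "hal-nxp", "quick.ld"), "w").close()
os.makedirs(os.path.join(root, "workspace"))
os.symlink(os.path.join(root, "store", "zephyr-base"),
           os.path.join(root, "workspace", "zephyr"))
os.symlink(os.path.join(root, "store", "hal-nxp"),
           os.path.join(root, "workspace", "nxp"))

base_inc = os.path.join(root, "workspace", "zephyr", "include")
module_ld = os.path.join(root, "workspace", "nxp", "quick.ld")

rel = stable_relpath(module_ld, base_inc)
# The result is valid from the *resolved* include dir, i.e. at build time:
print(os.path.exists(os.path.join(os.path.realpath(base_inc), rel)))  # True
```

The path is still relative, so nothing machine-specific is embedded, but it no longer depends on symlinks that only exist in the configure-time view.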
The dangers of the filesystem changing under you are exactly the same as functions with bad pointers changing memory that doesn't belong to them, intentionally or not. It can be argued that using pointers is bad and many languages do not expose such low level memory manipulation to code written in them.
Interesting analogy. Similarly, the Zephyr build system could (conditional, I do not speak on behalf of "the build system"), make a similar decision not to bother supporting symbolic links because they are generally dangerous - even when some users are using them very safely. Less hypothetically, reaching "good symlink test coverage" would be impossible because the possibilities are literally infinite. Also, they are rarely ever used on Windows.
Of course "not supported" does NOT imply "reject all small and harmless fixes for specific symlink situations", far from it.
Right now the design of the build system with ZEPHYR_MODULES and ZEPHYR_BASE being used the way they are implies a certain flexibility with regards to the filesystem. Essentially asking me for pointers to some data, but recording them as relative to a stack pointer. The justification of not leaking filepaths does not make sense as long as ZEPHYR_MODULES and ZEPHYR_BASE work the way they do.
BTW I've been building Zephyr a million times without defining either ZEPHYR_MODULES or ZEPHYR_BASE. This even involved a few symbolic links (don't tell anyone) but in an obviously simpler setup than yours.
The only way to ensure you don't leak path names (to whom by the way?) is to ensure your workspace is located in /workspace or a similar location which can be replicated on every machine.
It's been a very long time but when I was working on https://github.com/zephyrproject-rtos/zephyr/pull/14593 that was not correct. In other words I had reached a point where all linker scripts and all binary outputs were strictly identical between /workspace and /workspace2. Only caveat: this required turning off -g, which was incompatible with some (optional) build features.
you don't leak path names (to whom by the way?)
Strange question... to anyone you're sharing build outputs with?
"not supported" does NOT imply "reject all small and harmless fixes for specific symlink situations", far from it.
Of course I don't expect the project to bend over backwards for my use-cases, there is no change which doesn't break someone's workflow after all. When I opened this bug I had no idea where to begin sniffing out where these paths were resolved, I stumbled upon that commit almost by chance. I have a patch which changes the behavior for the zephyr base which ends up in the store so my problem has been solved as far as I can tell.
I only felt that the discrepancy between configure time and build time was probably unintentional.
@marc-hb would you be acceptable to having https://github.com/zephyrproject-rtos/zephyr/commit/62d611a3f8645c74f56f375a8e06b0143b4c34e7 reverted? This is a cmake generated file and will be regenerated by the build system if any configuration changes or if it is moved to a different machine so does not require having a relative path.
I'm confused, is this revert the "patch" @paperclip4465 mentioned there?
I have a patch which changes the behavior for the zephyr base which ends up in the store so my problem has been solved as far as I can tell.
This change was made for a reason. When trying to root cause a reproducibility regression, you want as many generated files as possible to be identical. Imagine for instance that part of the linker script is generated from a randomly ordered dictionary (as they used to be in Python). With relative paths we can compare your generated script with mine and find the issue instantly. With absolute paths the diff will be drowned in noise.
Absolute paths are really not the way forward: https://reproducible-builds.org/docs/build-path/
It looks like the real issue here is not that the path is relative, it's the ../../../ prefix. Have you tried something like https://github.com/zephyrproject-rtos/zephyr/issues/50284#issuecomment-1253969848?
It does not change the output at all. From generating a build in zephyr as it is today:
4a922fd65ec0d335c53d27257c28c6ed zephyr.bin
adc971c91963357b02728fd895c0a365 zephyr.dts
1f624fb43a45dede8252d9552926a471 zephyr.dts.d
ef164c67ee8f80a8aff5e2df83ce42f9 zephyr.dts.pre
783c2dfd907b91a8af49d1bd3f86fc0a zephyr.elf
53e05f2c506eb24c9851f207e65fef80 zephyr_final.map
54a3e04a543ff798ac848123e9daf0e6 zephyr.hex
6f91912553c5469fb6efc040014c046a zephyr.lst
53e05f2c506eb24c9851f207e65fef80 zephyr.map
020905dcc15023a5a607aebe5576738d zephyr_pre0.elf
bdb952b466084fbf7f1bcb3382bbedda zephyr_pre0.map
30bb392881b06f810412f224bc87957d zephyr_pre1.elf
3ae687e5226af530b78e4ebf68f36142 zephyr_pre1.map
acdaaab141c5f09c2c280043541e9659 zephyr.stat
With the following change made:
diff --git a/cmake/modules/extensions.cmake b/cmake/modules/extensions.cmake
index dae67fff09..05b59e14e6 100644
--- a/cmake/modules/extensions.cmake
+++ b/cmake/modules/extensions.cmake
@@ -1267,10 +1267,11 @@ function(zephyr_linker_sources location)
endif()
- # Find the relative path to the linker file from the include folder.
- file(RELATIVE_PATH relpath ${ZEPHYR_BASE}/include ${path})
-
# Create strings to be written into the file
- set (include_str "/* Sort key: \"${SORT_KEY}\" */#include \"${relpath}\"")
+ set (include_str "/* Sort key: \"${SORT_KEY}\" */#include \"${path}\"")
Gives the following output files:
4a922fd65ec0d335c53d27257c28c6ed zephyr/zephyr.bin
adc971c91963357b02728fd895c0a365 zephyr/zephyr.dts
1f624fb43a45dede8252d9552926a471 zephyr/zephyr.dts.d
ef164c67ee8f80a8aff5e2df83ce42f9 zephyr/zephyr.dts.pre
783c2dfd907b91a8af49d1bd3f86fc0a zephyr/zephyr.elf
53e05f2c506eb24c9851f207e65fef80 zephyr/zephyr_final.map
54a3e04a543ff798ac848123e9daf0e6 zephyr/zephyr.hex
6f91912553c5469fb6efc040014c046a zephyr/zephyr.lst
53e05f2c506eb24c9851f207e65fef80 zephyr/zephyr.map
020905dcc15023a5a607aebe5576738d zephyr/zephyr_pre0.elf
bdb952b466084fbf7f1bcb3382bbedda zephyr/zephyr_pre0.map
30bb392881b06f810412f224bc87957d zephyr/zephyr_pre1.elf
3ae687e5226af530b78e4ebf68f36142 zephyr/zephyr_pre1.map
acdaaab141c5f09c2c280043541e9659 zephyr/zephyr.stat
These match the files from the unpatched build. The intermediary build files might not match up, but that does not matter, the build output is the same regardless of the type of path used.
The intermediary build files might not match up, but that does not matter,
You missed the point. Intermediate files matter very much because they're how you root-cause non-determinism in a reasonable time. You have to imagine a regression, for instance something in the .ld file starts to be non-deterministic. Then good luck root-causing that when all intermediate files were always different anyway. Identical intermediate files are how I found and fixed all these reproducibility issues in a reasonable time:
- #14593
Absolute paths are really not the way forward: https://reproducible-builds.org/docs/build-path/
I don't think there should be any expectation that builds run from different points in the filesystem should be identical, let alone builds run on different machines. Only that builds with the same inputs in the same environment should result in the same outputs. Inputs in this case being all of the explicit dependencies/build instructions, and the environment being everything else.
A build run from /tmp/workspaceA should not be expected to be identical to a build run in /tmp/workspaceB. However a build run from /tmp/workspaceA twice should result in identical, deterministic, intermediate build artifacts.
Replicating identical environments across different machines is a way more involved process than just getting the paths correct, and depending on the relative paths between ZEPHYR_BASE and ZEPHYR_MODULES a large amount of the local filesystem could get encoded into the build artifacts anyway. But for the same ZEPHYR_BASE and ZEPHYR_MODULES we should get the same build artifacts.
Relative paths can still be used, there just needs to be some acknowledgement that there could be symlinks in the path at configure time when the scripts are generated which are not seen at build time. Using absolute paths avoids this issue at the cost of intermediate build artifacts needing to be evaluated on the machine that created them (or one that looks very similar). I think this is the usual use-case anyway. I don't think there would be any value in moving generated configure artifacts to another machine to be built.
I found this snippet, which suggests a "canonical build path" be used when testing for reproducibility.
https://reproducible-builds.org/docs/history/
Giving up on build paths

Initially we thought that variations happening when building the package from different build paths should be eliminated. This has proven difficult. The main problem that has been identified is that full paths to source files are written in the debug symbols of ELF files.

The first attempt used the -fdebug-prefix-map option which allows mapping the current directory to a canonical one in what gets recorded. But compiler options get written to the debug file as well, so it has to be doubled with -gno-record-gcc-switches to be used for reproducibility. The first large scale rebuild has proven that it was also hard to determine accurately what the actual build path had been.

The second attempt used debugedit, which is used by Fedora and others to change the source paths to a canonical location after the build. Unfortunately, gcc writes debug strings in a hashtable; debugedit will not reorder the table after patching the strings, so the result is still unreproducible. Adding this feature to debugedit looked difficult. We can still make the approach work by passing -fno-merge-debug-strings but this is space expensive. The second large scale rebuild used the latter approach. It was still difficult to guess the initial build path properly.

Stéphane Glondu was the first to suggest using a canonical build path to solve the issue. During discussions at DebConf14, we revisited the idea, and felt it was indeed appropriate to decide on a canonical build path. It has an added benefit of making it easier to use debug packages: one simply has to unpack the source in the right place, no extra configuration required. Finally, it was agreed to add a Build-Path field to .buildinfo as it made it easier to reproduce the initial build if the canonical build location would change.
https://hal.inria.fr/hal-01161771/en
A build run from /tmp/workspaceA should not be expected to be identical to a build run in /tmp/workspaceB.
At #14593 time I achieved that and this is what allowed me to fix all the issues listed there in a reasonable time.
Initially we though that variations happening when building the package from different build path should be eliminated. This has proven difficult. The main problem that has been identified is that full path to source files are written in debug symbols of ELF files.
The most important keyword here is debug[*]. Remove -g to switch to a release build and that's it: you can often instantly pinpoint the root-cause of any reproducibility issue thanks to a simple recursive diff on all build outputs. This was possible at #14593 time and it would be a serious regression to start losing that. The test script wasn't perfect but it did not even have any fancy dependency, anyone could run it on pretty much any Linux distribution (or CI).
Only that builds with the same inputs in the same environment should result in the same outputs.
Of course both people need to be trying to build the same thing, but asking them to use the same user name and same /home/johndoe/zephyrproject directory is really too demanding and unnecessary.
I don't think there would be any value in moving generated configure artifacts to another machine to be built.
That's not the point, the point is to quickly understand why two different systems build different things when they shouldn't. This is a typical problem when trying to reproduce someone else's elusive bug on the field and it can be highly dependent on the build configuration which means CI can never catch all reproducibility issues in advance (it should still catch the most common ones of course)
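The workflow being described, hashing every artifact in two build trees and diffing to find the suspects, can be sketched like this. The exclusion list (CMake bookkeeping) is illustrative, in the spirit of the #14593 script, not a copy of it.

```python
# Sketch: find non-deterministic files by hashing two build trees.
import hashlib
import os

def tree_hashes(root, exclude=("CMakeFiles", "CMakeCache.txt")):
    """Map relative path -> md5 for every file under root, minus exclusions."""
    hashes = {}
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so os.walk skips them.
        dirnames[:] = [d for d in dirnames if d not in exclude]
        for name in filenames:
            if name in exclude:
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            hashes[os.path.relpath(path, root)] = digest
    return hashes

def nondeterministic_files(build_a, build_b):
    """Files present in both trees whose contents differ: the suspects."""
    a, b = tree_hashes(build_a), tree_hashes(build_b)
    return sorted(rel for rel in a.keys() & b.keys() if a[rel] != b[rel])
```

With identical intermediate files as the baseline, the first entry this returns is usually the root cause, or one step away from it.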
While we're debating and trying to find the exact definition of "reproducibility" (which is impossible, there is no well-defined line in the sand), has anyone tried to simply remove the ../../../ from the relative path using command line flags?
I don't really have time to resume #14593 right now, but if trying to remove this ../../../ myself is the price to avoid theoretical discussions then I might just spend part of a weekend doing it.
[*] BTW: https://www.bing.com/search?q=debug+path+mapping
I performed a quick test of getting current main of zephyr in one directory and tag v3.2.0-rc1 in another directory, then building hello_world for the same board, and diff'd both build directories. The build directories are absolutely nothing alike, both have tonnes of absolute paths. A quick check of the diff file in nano shows it has 18,681 lines. Searching backwards in the file, the last file with an absolute path difference is on line 16288.
Of course both people need to be trying to build the same thing but asking them to use the same user name and same /home/johndoe/zephyrproject directory is really too demanding and unnecessary.
Well from what I can see not only would you need to ask them to do that, you must ask them to do that, otherwise you're going to have nothing to analyse.
Of course both people need to be trying to build the same thing but asking them to use the same user name and same /home/johndoe/zephyrproject directory is really too demanding and unnecessary
The build should work from anywhere, and the build system should be flexible enough to fit into most development environments, but when talking about reproducibility I was thinking of something more like /tmp/zephyrproject, a location that can easily exist on most machines. I think release builds should be built in a location like this by a robot anyway, not in someone's home directory, but people will do what they want.
I performed a quick test of getting current main of zephyr in one directory and tag v3.2.0-rc1 in another directory then building hello_world for the same board, and diff'd both build directories. The build directories are absolutely nothing alike, both have tonnes of absolute paths
Did you set -g0 as mentioned above?
EDIT: you also need to filter out CMake files of course, see https://github.com/zephyrproject-rtos/zephyr/pull/14593/files. Only the actual build artefacts matter.
Different files excluding cmake:
[53] => _AA/zephyr/include/generated/devicetree_generated.h
[54] => _AA/zephyr/include/generated/libc/minimal/strerror_table.h
[58] => _AA/zephyr/kconfig/sources.txt
[69] => _AA/zephyr/misc/generated/syscalls_subdirs.txt
[93] => _AA/zephyr/zephyr.dts.d
[94] => _AA/zephyr/zephyr.dts.pre
As per:
# Absolute paths there too, including a couple of build*/**/generated/
GREP_generated='-e /zephyr/misc/generated/syscalls_subdirs.txt$ -e /zephyr/kconfig/sources.txt$
Manual exclusions don't seem like a good sign for a system that, apparently, should produce the same build files.
Grrrr again... Linux world problems. @paperclip4465, do you realize that the Zephyr project and many other dependent projects use the west tool to perform most operations? Do you realize that west by design works on hardcoded structures of files and folders? You can't just "make yourself a symlink" because you feel like it "in your environment". The project is not supposed to be flexible. It is supposed to be very inflexible. What I mean by that is: every developer that clones the project clones it into the very same directory structure and builds it the very same way. And when somebody smart tries to do something hacky, like symlinking some new zephyr application from a directory where it should not be, the build system tells you ERROR DON'T. Thanks to this we avoid many bugs and time-consuming issues.
References to support my claim that "west tool by design works on hardcoded structures of files and folders":
- https://docs.zephyrproject.org/latest/develop/west/manifest.html#self - the path property describes where a project should be
- https://docs.zephyrproject.org/latest/develop/west/manifest.html#self - the projects section describes by path where to find other projects
- https://docs.zephyrproject.org/latest/develop/west/manifest.html#west-manifest-import-bool - importing other projects is based on searching for a file with the hardcoded name "west.yml"
- https://docs.zephyrproject.org/latest/develop/west/manifest.html#option-4-sequence - submanifests are imported from the "submanifests" directory, files with the '.yml' extension, alphabetically
Please also note what is written on the Zephyr project Getting Started page:
So this project works also on Windows and MacOS. So please stop "symlinks flexibility" talk, this is not how we do things in Zephyr.
do you realize that Zephyr project and many other dependant projects use west tool perform most of the operations? Do you realize, that west tool by design works on hardcoded structures of files and folders?
This has nothing to do with west, it is a cmake problem. Using cmake without west is supported. I am working on an alternate build tool which uses a different strategy than west to ensure reproducibility and help with my own sanity during deployments.
If west build fails because it's being run in a workspace that isn't managed by west, that clearly isn't the west project's problem.
The cmake scripts have a bug where the configure-time path is not the same as the build-time path, because they do not resolve the symlinks before creating the relative paths, while the build side effectively does. West happens to avoid this bug by insisting on a hardcoded file structure.
Using absolute paths fixes this issue; a simple cmake variable to control this behavior should be enough to make everyone happy, and it does not break Windows or macOS builds, so I'm really not sure what the problem is.
As nordicjm showed, it is a 2 line patch and it is enough for me to continue along happily. It is not a big effort to apply this patch so I closed the issue.
Symbolic links, when the fun never stops:
$ cmake -S . -B build/ -L > /dev/null
CMake Error: The source "/home/me/project/app/CMakeLists.txt" does not
match the source "/real/home/me/project/app/CMakeLists.txt" used to generate cache.
Re-run cmake with a different source directory.
Different files excluding cmake:
Yes, but these comments in .h files do not affect binaries.
Did you set -g0 as mentioned above?
OK that required a little bit more work, see short patch below. @tejlmand any faster or better way to turn off all debug symbols for reproducibility purposes?
After applying this diff on two very different Linux systems (different OS, different home path, etc.) I got strictly identical .obj and .elf files all across the board when building west build -b qemu_x86 samples/hello_world. The ability to super quickly point the finger at guilty .obj files is invaluable; only thanks to this ability could I fix the many reproducibility issues listed in #14593 back in the day.
It is also useful to pinpoint weird OS differences and other strange toolchain or build system issues, see for instance https://github.com/zephyrproject-rtos/zephyr/pull/52671#issuecomment-1340184391
I don't think there should be any expectation that builds run from different points in the filesystem should be identical, let alone builds run on different machines.
eppur si muove ("and yet it moves")
--- a/cmake/bintools/gnu/target_bintools.cmake
+++ b/cmake/bintools/gnu/target_bintools.cmake
@@ -66,7 +66,7 @@ set_property(TARGET bintools PROPERTY elfconvert_flag_outfile "")
set_property(TARGET bintools PROPERTY disassembly_command ${CMAKE_OBJDUMP})
set_property(TARGET bintools PROPERTY disassembly_flag -d)
set_property(TARGET bintools PROPERTY disassembly_flag_final "")
-set_property(TARGET bintools PROPERTY disassembly_flag_inline_source -S)
+set_property(TARGET bintools PROPERTY disassembly_flag_inline_source "")
set_property(TARGET bintools PROPERTY disassembly_flag_all -SDz)
set_property(TARGET bintools PROPERTY disassembly_flag_infile "")
diff --git a/cmake/compiler/gcc/compiler_flags.cmake b/cmake/compiler/gcc/compiler_flags.cmake
index 2dbb1e3e08ea..1216dcac8251 100644
--- a/cmake/compiler/gcc/compiler_flags.cmake
+++ b/cmake/compiler/gcc/compiler_flags.cmake
@@ -168,11 +168,11 @@ check_set_compiler_property(APPEND PROPERTY hosted -fno-freestanding)
check_set_compiler_property(PROPERTY freestanding -ffreestanding)
# Flag to enable debugging
-set_compiler_property(PROPERTY debug -g)
+set_compiler_property(PROPERTY debug -g0)
# GCC 11 by default emits DWARF version 5 which cannot be parsed by
# pyelftools. Can be removed once pyelftools supports v5.
-check_set_compiler_property(APPEND PROPERTY debug -gdwarf-4)
+# check_set_compiler_property(APPEND PROPERTY debug -gdwarf-4)
set_compiler_property(PROPERTY no_common -fno-common)