Default constructor isn't optimized out during mid-end LLVM passes.
If you compile the following code:
struct Foo {
  int get_num_field(int v) { return v; }
};

int get_num(int a) {
  return Foo().get_num_field(a);
}
with: clang -O3 -fclangir -S -emit-llvm
The output for get_num looks like:
; Function Attrs: nounwind
define dso_local i32 @_Z7get_numi(i32 returned %0) local_unnamed_addr #0 !dbg !7 {
%2 = alloca %struct.Foo, align 1, !dbg !8
call void @_ZN3FooC1Ev(ptr nonnull %2), !dbg !9
ret i32 %0, !dbg !8
}
While without -fclangir it looks like this:
; Function Attrs: mustprogress nofree norecurse nosync nounwind willreturn memory(none) uwtable
define dso_local noundef i32 @_Z8get_num2i(i32 noundef returned %a) local_unnamed_addr #0 {
entry:
ret i32 %a
}
Notice that the call to the constructor is not optimized out in ClangIR mode.
Here is a Godbolt link that reproduces the issue: https://godbolt.org/z/bTos9qx4d
I think this happens because, in ClangIR mode, the default constructor is only declared and not defined.
Is it a conscious decision or a bug?
> Is it a conscious decision or a bug?
Both: we consciously map it, but we should optimize it out in LoweringPrepare when it's trivial. Let me put this on my queue! Thanks
Can I ask the reason for having different behavior than current Clang? Or is it not different?
> Can I ask the reason for having different behavior than current Clang?
We want to track the source-level semantics of when objects get constructed and destroyed, which is more interesting for some static analyses, but for later codegen purposes it should be removed. So it's not different in the sense that you should expect the same LLVM IR you get through OG clang (it's a bug right now).
I see, thanks for the explanation!