Target features need to be propagated to LLVM
See https://godbolt.org/z/3n85TK4ab for an example. CodeGen adds the `"target-features"` attribute to LLVM functions to propagate `-march`. CIRGen needs to match this so that we can, e.g., use feature-specific intrinsics end-to-end.
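For reference, a minimal sketch of what OG CodeGen effectively does per function (this is not the actual clang code; `addTargetAttrs` is a hypothetical helper, and clang computes `Features` from the `-march`/`-target-feature` flags):

```cpp
#include "clang/Basic/TargetOptions.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

// Sketch only: the feature list derived from -march ends up joined into a
// comma-separated "target-features" string attribute on every function.
static void addTargetAttrs(llvm::Function &fn,
                           const clang::TargetOptions &opts) {
  llvm::AttrBuilder attrs(fn.getContext());
  if (!opts.CPU.empty())
    attrs.addAttribute("target-cpu", opts.CPU);
  if (!opts.Features.empty())
    attrs.addAttribute("target-features", llvm::join(opts.Features, ","));
  fn.addFnAttrs(attrs);
  // Resulting IR (abridged):
  //   attributes #0 = { "target-cpu"="x86-64" "target-features"="+avx2,..." }
}
```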
@seven-mile from your target triple explorations, do you have a hint on what we are missing, or whether we can map this as part of the MLIR data layout?
@bcardosolopes Clang `TargetInfo` and Flang treat these in parallel with the target triple, which I believe is the right abstraction. It sounds reasonable to also have `cir.target_features`, `cir.tune_cpu`, `cir.target_cpu`, ... as optional `StringAttr`s.
https://github.com/llvm/clangir/blob/41078e9dfb3c64a111182bcc1fdd28c57fed95dc/flang/include/flang/Optimizer/CodeGen/CGPasses.td#L72-L80
Note that CodeGen adds these as per-function attributes (presumably for LTO purposes), so we'll want to follow suit.
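Even so, a module-level carrier in CIR can feed that per-function fan-out. For illustration, a minimal sketch of the module-level encoding — attribute names follow the suggestion above but are not final, and `setCIRTargetAttrs` is a hypothetical helper:

```cpp
#include "llvm/ADT/StringRef.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinOps.h"

// Sketch only: attach the proposed optional StringAttrs to the ClangIR
// module; a later lowering step fans them out per function.
static void setCIRTargetAttrs(mlir::ModuleOp module, llvm::StringRef cpu,
                              llvm::StringRef tuneCPU,
                              llvm::StringRef features) {
  mlir::MLIRContext *ctx = module.getContext();
  if (!cpu.empty())
    module->setAttr("cir.target_cpu", mlir::StringAttr::get(ctx, cpu));
  if (!tuneCPU.empty())
    module->setAttr("cir.tune_cpu", mlir::StringAttr::get(ctx, tuneCPU));
  if (!features.empty())
    module->setAttr("cir.target_features",
                    mlir::StringAttr::get(ctx, features));
  // Resulting textual form (hypothetical):
  //   module attributes {cir.target_cpu = "x86-64",
  //                      cir.target_features = "+avx2,+fma"} { ... }
}
```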
Thanks @seven-mile for the good direction. Here's what I'm thinking after looking at Flang's approach a bit. Flang makes these target-related attributes part of its LLVM lowering pass options, and it appears to pass them as ModuleOp attributes when converting FIR into the LLVM dialect; (without having verified it myself, as I'm not familiar with Flang) that seems to be enough to produce the proper function attributes in the generated IR. If my understanding is right, this seems like the best approach, since OG basically adds them as function attributes during codegen, which would be totally unnecessary in MLIR.
Therefore, I'm thinking of experimenting on the CIR side: during LoweringPrepare, add the needed target-feature info as ModuleOp string attributes (we should already have that info during LoweringPrepare, so there's no need to pass it as a pass option the way Flang does). Then, during LLVM lowering, pick them up and pass them into the `LLVM::ModuleOp`, and this might just work. Let me know how this idea sounds to you. If it looks reasonable, I'll experiment with it next week and report back here on whether the approach is successful.
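To make the pickup step concrete, here is a rough sketch under the assumptions above (`propagateTargetAttrs` and the `cir.*` attribute names are hypothetical). It uses the LLVM dialect's `passthrough` list, which the LLVM IR translation forwards verbatim as function attributes; newer MLIR also has dedicated `target_cpu`/`target_features` attributes on `llvm.func` that could serve the same purpose:

```cpp
#include "llvm/ADT/SmallVector.h"
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"
#include "mlir/IR/BuiltinOps.h"

// Sketch: after conversion to the LLVM dialect, replicate the module-level
// target info onto every llvm.func, so the translated LLVM IR ends up with
// the same per-function attributes OG CodeGen emits (and LTO expects).
static void propagateTargetAttrs(mlir::ModuleOp module) {
  auto features =
      module->getAttrOfType<mlir::StringAttr>("cir.target_features");
  if (!features)
    return;
  mlir::MLIRContext *ctx = module.getContext();
  module.walk([&](mlir::LLVM::LLVMFuncOp fn) {
    // "passthrough" entries of the form ["name", "value"] become plain LLVM
    // function attributes during translation; "target-cpu"/"tune-cpu" could
    // be handled the same way.
    llvm::SmallVector<mlir::Attribute> entries;
    if (mlir::ArrayAttr existing = fn.getPassthroughAttr())
      entries.append(existing.begin(), existing.end());
    entries.push_back(mlir::ArrayAttr::get(
        ctx, {mlir::StringAttr::get(ctx, "target-features"), features}));
    fn.setPassthroughAttr(mlir::ArrayAttr::get(ctx, entries));
  });
}
```

Whether the fan-out happens via `passthrough` or via the dedicated attributes is a lowering detail; the key point is that the module-level strings survive into per-function LLVM IR attributes.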
@ghehg Your plan sounds nice ; ) Given that a ClangIR module corresponds to a TU, a module-level attribute should not lose any expressiveness.
As for the information we need, it basically resides in `clang::TargetOptions`. ~~If `LoweringPrepare` is your preference, you might have to consider the extra invocation path from `cir-opt`.~~ (Upd: no, mistaken. Just passing `TargetOptions` down should be enough for you <3 )
An earlier timepoint also seems okay to me, e.g. at the same time we set the triple for the module.
Sounds good. Setting them at the same time as the triple may have better readability, on top of avoiding command-line options: one can just read the CIR file to know what target features are being used for the TU. I'll experiment with both approaches soon.