AITemplate
fix bf16 lowering
Summary:
- When bf16 is enabled, `torch.ops.fbgemm.generic_histogram_binning_calibration_by_feature` in submod1 does not accept bf16 inputs, so we need to cast its input to fp32.
- `nan_to_num` can handle bf16 now.
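The fix described above follows a common pattern: upcast bf16 tensors to fp32 before handing them to an op that lacks bf16 support. A minimal sketch of that pattern (the helper name is hypothetical, not from this diff, and the fbgemm op itself is stood in for by the cast alone):

```python
import torch

def cast_bf16_to_fp32(t: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper illustrating the fix: ops such as
    # fbgemm's generic_histogram_binning_calibration_by_feature
    # do not accept bf16, so upcast those inputs to fp32 first.
    if t.dtype == torch.bfloat16:
        return t.float()
    return t

x = torch.ones(4, dtype=torch.bfloat16)
y = cast_bf16_to_fp32(x)
assert y.dtype == torch.float32
```

Non-bf16 inputs pass through unchanged, so the cast only pays a conversion cost on the bf16 path.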
Differential Revision: D45421503
This pull request was exported from Phabricator.