Ilya V

Results 7 comments of Ilya V

Looks like it is ready to merge. @ptillet, could you please take a look and merge if everything is OK?

> Essentially, this patch enables `triton::FPToFP` operation to cast fp16 to fp32 and back, @joviliast is it correct?

This patch disables casting. Did you mean "pass"?
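For context, a minimal sketch of the kind of pattern being discussed: loading fp16 data, casting it to fp32 and back, which is the round trip that lowers through `triton::FPToFP`. This is not the PR's test; the kernel and tensor names are illustrative only.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def cast_roundtrip_kernel(x_ptr, y_ptr, n_elements, BLOCK: tl.constexpr):
    # Illustrative kernel: fp16 -> fp32 -> fp16 round trip on a 1D tensor.
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)   # fp16 input
    y = x.to(tl.float32).to(tl.float16)    # cast fp16 to fp32 and back
    tl.store(y_ptr + offs, y, mask=mask)


x = torch.randn(1024, device="cuda", dtype=torch.float16)
y = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 256),)
cast_roundtrip_kernel[grid](x, y, x.numel(), BLOCK=256)
```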

> @joviliast @binarman Can you elaborate on the motivation of this PR? Why does converting between the same type cause FMA failures?

Because the case of the same internal types...

> can you make a minimized lit test out of this failure. That will help everybody understand (and will prevent regressions)

@ThomasRaoux, done. Added a commit containing the lit test...

> lit tests under amd dir is not tested

I don't quite catch your point. I can see that it passed in the CI logs:

![screen](https://github.com/openai/triton/assets/152324710/8959ae6b-07d0-408c-95b5-3ed5962d7332)

Result for 03-matrix-multiplication.py:

```
python ./python/tutorials/03-matrix-multiplication.py
triton_output_with_fp16_inputs=tensor([[-35.5625, -14.6719, -16.1875,  ..., -21.8438,  24.1562, -12.2266],
        [ 12.6172,   2.1699,  19.5312,  ..., -22.1250, -26.2812,  12.6641],
        [-12.3906,   6.5508,  11.4531,  ...,  17.9219, -59.5000, -12.0469],
        ...,
        [...
```
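For reference, a hedged sketch of how the tutorial's output is typically verified: comparing the Triton matmul result against `torch.matmul` within a tolerance. The exact tolerances used by 03-matrix-multiplication.py may differ; the function name here is illustrative.

```python
import torch


def check_against_torch(triton_output: torch.Tensor, torch_output: torch.Tensor) -> None:
    # Loose absolute tolerance, since fp16 accumulation differs between backends.
    if torch.allclose(triton_output, torch_output, atol=1e-2, rtol=0):
        print("Triton and Torch match")
    else:
        print("Triton and Torch differ")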

LGTM. Thank you for this PR. Have you run `test_dot` locally on Navi?
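A possible way to run the dot tests locally, as a sketch: the test file path below is assumed from the upstream Triton source layout and may differ on this branch.

```python
import pytest

# Run only the dot-product unit tests, verbosely; adjust the path if needed.
raise SystemExit(pytest.main(["python/test/unit/language/test_core.py", "-k", "test_dot", "-v"]))
```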