
Update config.py to fix the dimension mismatch issue for FP8 support on AMD GPUs

Open · kailashg26 opened this issue 1 month ago · 9 comments

Update the config.py file to fix the dimension mismatch issue that arises when enabling FP8 (8-bit floating point) precision support on AMD GPUs.

Error: [screenshot of the scaled_mm dimension mismatch error]

kailashg26 · Oct 24 '25 21:10

:link: Helpful Links

:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3246

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot[bot] · Oct 24 '25 21:10

Hi @kailashg26!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

meta-cla[bot] · Oct 24 '25 21:10

Could you share how this is related to AMD GPUs? From the screenshot in the PR summary, it looks like the shapes being fed through the network do not adhere to the requirements of scaled_mm, which seems like a GPU-independent problem.

vkuzo · Oct 27 '25 13:10
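For context on the shape requirement mentioned above: torch._scaled_mm, which torchao's float8 path dispatches to, requires the matmul's inner dimension to be a multiple of 16. The helper below is an illustrative sketch of what pad_inner_dim-style padding does, not torchao's actual implementation:

import torch

def pad_inner_dim_to_multiple(t: torch.Tensor, multiple: int = 16) -> torch.Tensor:
    # Zero-pad the last dimension of `t` up to a multiple of `multiple`,
    # e.g. K=100 becomes K=112. This mirrors the kind of padding torchao's
    # pad_inner_dim flag applies so that scaled_mm accepts the operand shapes.
    k = t.shape[-1]
    pad = (-k) % multiple
    return torch.nn.functional.pad(t, (0, pad)) if pad else t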

@vkuzo I agree, this is a GPU-independent problem. I just tried on MI300 and MI355, and the problem persists. But when I enable padding, it works fine! The issue related to this PR is https://github.com/meta-pytorch/torchtune/issues/2833#issuecomment-3008413564

kailashg26 · Oct 27 '25 15:10

Got it. Can we just enable padding at the call site for your use case instead of changing the default?

vkuzo · Oct 27 '25 18:10

@vkuzo I'm not sure how we would do that at the call site. Do you mean running something like this before my script?

find /opt/venv/lib/python3.10/site-packages/torchao/float8/config.py -type f -print0 | xargs -0 sed -i 's/pad_inner_dim: bool = False/pad_inner_dim: bool = True/g'

I was just wondering if this might be too hacky.

kailashg26 · Oct 27 '25 18:10

Usually the user creates a Float8LinearConfig, like so:

config = Float8LinearConfig(...)

In the place where that happens in torchtune, you could set the padding flag to True, or make it user-configurable. Would that work?

vkuzo · Oct 27 '25 19:10
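Concretely, that suggestion amounts to something like the sketch below. The pad_inner_dim field name comes from the sed command earlier in the thread; the model here is a stand-in for whatever torchtune builds.

import torch.nn as nn
from torchao.float8 import Float8LinearConfig, convert_to_float8_training

# Enable inner-dim padding at the call site instead of changing the default.
config = Float8LinearConfig(pad_inner_dim=True)

# Stand-in model; in torchtune this would be the recipe's model.
model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 16))
convert_to_float8_training(model, config=config)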

But if we use upstream torchtune, we would have to submit a PR to the upstream repo, right? I'm not sure if they are actively accepting PRs.

kailashg26 · Oct 27 '25 19:10

How about just patching this function (https://github.com/meta-pytorch/torchtune/blob/67ab86b94de9e7ac7dd9850113ebe69e2bbd307c/torchtune/training/quantization.py#L232) at your call site instead?

def patched_torchtune_convert_to_float8_training(...):
    ... modify config as needed ...

# put this somewhere after torchtune imports but before you run float8 conversion
torchtune.training.quantization.convert_to_float8_training = patched_torchtune_convert_to_float8_training

vkuzo · Oct 29 '25 15:10
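A fuller version of that sketch, assuming the wrapper can simply bypass torchtune's own config plumbing and call torchao directly (the wrapper's signature is a guess; check the linked function for the real one):

import torchtune.training.quantization as ttq
from torchao.float8 import Float8LinearConfig, convert_to_float8_training

def patched_convert_to_float8_training(model, *args, **kwargs):
    # Assumed replacement: ignore torchtune's own arguments and call
    # torchao directly with inner-dim padding enabled, so scaled_mm's
    # shape requirements are met without editing torchao's defaults.
    config = Float8LinearConfig(pad_inner_dim=True)
    return convert_to_float8_training(model, config=config)

# Apply after torchtune imports but before running float8 conversion.
ttq.convert_to_float8_training = patched_convert_to_float8_training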