Change torchao quantization types from int to size_t and preface vars with "preferred_"
Summary:
- Change the types of activation_data_alignment and weight_data_alignment from int to size_t.
- Change the return types of activation_data_size_fn_type and weight_data_size_fn_type from int to size_t.
- Rename activation_data_alignment to preferred_activation_data_alignment.
- Rename weight_data_alignment to preferred_weight_data_alignment.
Differential Revision: D63873383
Helpful Links
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1041
- Preview Python docs built from this PR
Note: Links to docs will display an error until the docs builds have been completed.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D63873383
Ready for another look @metascroy, thanks!
Changes look good! Did you rerun the tests after making them?
Yes, everything compiled and passed when running:

```sh
sh torchao/experimental/ops/benchmarks/build_and_run_benchmarks.sh
sh torchao/experimental/ops/linear_8bit_act_xbit_weight/examples/build_and_run_examples.sh stateful_class_wrapper
sh torchao/experimental/ops/linear_8bit_act_xbit_weight/examples/build_and_run_examples.sh separate_function_wrappers
sh torchao/experimental/build_torchao_ops.sh aten
```