chore: patch huggingface pytorch 1.13 image
GitHub Issue #, if available:
Note:
- If merging this PR should also close the associated Issue, please also add that Issue # to the Linked Issues section on the right.
- All PRs are checked weekly for staleness. This PR will be closed if not updated in 30 days.
Description
Tests run
NOTE: By default, docker builds are disabled. In order to build your container, please update `dlc_developer_config.toml` and specify the framework to build in `build_frameworks`.
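For reference, a minimal sketch of what that entry might look like (the `[build]` section header and the `"huggingface_pytorch"` framework name are assumptions; confirm against the file in your branch):

```toml
# dlc_developer_config.toml -- hypothetical excerpt; section and framework name assumed
[build]
# Frameworks to build for this PR; an empty list keeps docker builds disabled
build_frameworks = ["huggingface_pytorch"]
```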
- [ ] I have run builds/tests on commit <INSERT COMMIT ID> for my changes.
Confused about how to run tests? Try using the helper utility...
Assuming your remote is called `origin` (you can find out more with `git remote -v`)...
- Run default builds and tests for a particular buildspec (also commits and pushes changes to remote). Example:
  `python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -cp origin`
- Enable specific tests for a buildspec or set of buildspecs (also commits and pushes changes to remote). Example:
  `python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -t sanity_tests -cp origin`
- Restore the TOML file when ready to merge:
  `python src/prepare_dlc_dev_environment.py -rcp origin`
NOTE: If you are creating a PR for a new framework version, please ensure success of the standard, rc, and efa sagemaker remote tests by updating the dlc_developer_config.toml file:
- [ ] `sagemaker_remote_tests = true`
- [ ] `sagemaker_efa_tests = true`
- [ ] `sagemaker_rc_tests = true`
Additionally, please run the sagemaker local tests in at least one revision:
- [ ] `sagemaker_local_tests = true`
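Taken together, a sketch of what those toggles might look like in `dlc_developer_config.toml` (the `[test]` section placement is an assumption):

```toml
# dlc_developer_config.toml -- hypothetical excerpt; section placement assumed
[test]
sagemaker_remote_tests = true
sagemaker_efa_tests = true
sagemaker_rc_tests = true
# Run the local tests in at least one revision as well
sagemaker_local_tests = true
```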
Formatting
- [ ] I have run `black -l 100` on my code (formatting tool: https://black.readthedocs.io/en/stable/getting_started.html)
DLC image/dockerfile
Builds to Execute
Fill out the template and check the boxes for the builds you'd like to execute
Note: Replace <X.Y> with the major.minor framework version (e.g., 2.2) you would like to start.
- [ ] build_pytorch_training_<X.Y>_sm
- [ ] build_pytorch_training_<X.Y>_ec2
- [ ] build_pytorch_inference_<X.Y>_sm
- [ ] build_pytorch_inference_<X.Y>_ec2
- [ ] build_pytorch_inference_<X.Y>_graviton
- [ ] build_tensorflow_training_<X.Y>_sm
- [ ] build_tensorflow_training_<X.Y>_ec2
- [ ] build_tensorflow_inference_<X.Y>_sm
- [ ] build_tensorflow_inference_<X.Y>_ec2
- [ ] build_tensorflow_inference_<X.Y>_graviton
Additional context
PR Checklist
- [ ] I've prepended the PR title with a tag for the frameworks/jobs this applies to: [mxnet, tensorflow, pytorch] | [ei/neuron/graviton] | [build] | [test] | [benchmark] | [ec2, ecs, eks, sagemaker]
- [ ] If the PR changes affect SM tests, I've modified `dlc_developer_config.toml` in my PR branch by setting `sagemaker_tests = true` and `efa_tests = true`
- [ ] If this PR changes existing code, the change is fully backward compatible with pre-existing code. (Non-backward-compatible changes need special approval.)
- [ ] (If applicable) I've documented below the DLC image/dockerfile this relates to
- [ ] (If applicable) I've documented below the tests I've run on the DLC image
- [ ] (If applicable) I've reviewed the licenses of updated and new binaries and their dependencies to make sure all licenses are on the Apache Software Foundation Third Party License Policy Category A or Category B license list. See https://www.apache.org/legal/resolved.html.
- [ ] (If applicable) I've scanned the updated and new binaries to make sure they do not have vulnerabilities associated with them.
NEURON/GRAVITON Testing Checklist
- When creating a PR:
- [ ] I've modified `dlc_developer_config.toml` in my PR branch by setting `neuron_mode = true` or `graviton_mode = true`
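A sketch of the corresponding TOML change, enabling exactly one of the two (the `[dev]` section placement is an assumption):

```toml
# dlc_developer_config.toml -- hypothetical excerpt; section placement assumed
[dev]
neuron_mode = true
# graviton_mode = true  # set this instead when targeting Graviton images
```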
Benchmark Testing Checklist
- When creating a PR:
- [ ] I've modified `dlc_developer_config.toml` in my PR branch by setting `ec2_benchmark_tests = true` or `sagemaker_benchmark_tests = true`
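Likewise, a sketch for benchmark runs, enabling whichever platform applies (the `[test]` section placement is an assumption):

```toml
# dlc_developer_config.toml -- hypothetical excerpt; section placement assumed
[test]
ec2_benchmark_tests = true
# sagemaker_benchmark_tests = true  # set this instead for SageMaker benchmarks
```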
Pytest Marker Checklist
- [ ] (If applicable) I have added the marker `@pytest.mark.model("<model-type>")` to the new tests which I have added, to specify the Deep Learning model that is used in the test (use `"N/A"` if the test doesn't use a model)
- [ ] (If applicable) I have added the marker `@pytest.mark.integration("<feature-being-tested>")` to the new tests which I have added, to specify the feature that will be tested
- [ ] (If applicable) I have added the marker `@pytest.mark.multinode(<integer-num-nodes>)` to the new tests which I have added, to specify the number of nodes used on a multi-node test
- [ ] (If applicable) I have added the marker `@pytest.mark.processor(<"cpu"/"gpu"/"eia"/"neuron">)` to the new tests which I have added, if a test is specifically applicable to only one processor type
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.