The patch PyTorch-1.10.0_fix-fp16-quantization-without-fbgemm.patch no longer applies, even though pytorch/pytorch#84750 was merged upstream: that merge only landed in PyTorch 2.0, and the patch fails to apply because the code was reformatted in pytorch/pytorch@e60fd10.
This PR adds an updated version of that patch which applies to all PyTorch 1.11-1.13 versions so far.
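Whether a patch still applies after upstream reformatting can be checked without touching the tree using `patch --dry-run`. A minimal self-contained sketch (toy files standing in for the PyTorch sources, which are not part of this PR):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Fake "source file" plus an edited copy, to generate a patch from
printf 'old line\n' > code.txt
cp code.txt code.orig
printf 'new line\n' > code.txt
diff -u code.orig code.txt > fix.patch || true  # diff exits 1 when files differ

# Restore the original and check (without modifying anything) that the patch applies
cp code.orig code.txt
patch --dry-run code.txt < fix.patch && echo "patch applies"
```

If upstream had reformatted `code.txt` in the meantime, the dry run would report failed hunks instead, which is exactly the failure mode the original patch hit.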
Test report by @Flamefire FAILED → SUCCESS (see below)
Build succeeded for 0 out of 1 (1 easyconfigs in total)
taurusi8026 - Linux CentOS Linux 7.9.2009, x86_64, AMD EPYC 7352 24-Core Processor (zen2), 8 x NVIDIA A100-SXM4-40GB, 470.57.02, Python 2.7.5
See https://gist.github.com/Flamefire/fa230960cd0bb0137bc803ace4d779dd for a full test report.
Only a single failure: distributed/test_distributed_spawn. We exclude this test in other easyconfigs because it times out on Ampere GPUs, which matches what I see here, so I excluded it in this one as well. As all other tests succeed, I won't rerun the build.
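For context, PyTorch easyconfigs in EasyBuild skip known-problematic tests via the `excluded_tests` parameter, a dict mapping an architecture name (empty string = all architectures) to a list of test names. A minimal sketch of what the exclusion added here would look like (fragment for illustration, not the full easyconfig):

```python
# Excerpt of a PyTorch easyconfig: skip tests known to fail or time out.
# The '' key applies the exclusion on every architecture.
excluded_tests = {
    '': [
        # times out on Ampere GPUs (A100), as observed in the test report above
        'distributed/test_distributed_spawn',
    ],
}
```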
Test report by @branfosj SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
bear-pg0203u29a.bear.cluster - Linux RHEL 8.6, x86_64, Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz (icelake), 1 x NVIDIA A100-SXM4-80GB, 520.61.05, Python 3.6.8
See https://gist.github.com/branfosj/1f41e5f624b64e9593b2ff6d4c798103 for a full test report.
Test report by @casparvl SUCCESS
Build succeeded for 1 out of 1 (1 easyconfigs in total)
gcn3.local.snellius.surf.nl - Linux RHEL 8.6, x86_64, Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz, 4 x NVIDIA A100-SXM4-40GB, 520.61.05, Python 3.6.8
See https://gist.github.com/casparvl/7326feb83763409121944330a8bee114 for a full test report.
boegel changed the title from "Fix quantization failure in PyTorch 1.11.0 on POWER" to "add patch to fix quantization failure in PyTorch 1.11.0 on POWER" on Aug 15, 2023
(created using eb --new-pr)