[QNN EP] Adjust tolerance for Clip and Transpose tests due to FP16 default in QNN HTP
Description
This PR updates the tolerance thresholds for the Clip and Transpose tests in QnnHTPBackendTests. The adjustment accounts for minor accuracy differences introduced by the change in default floating-point precision in QNN HTP starting from version 2.35.
Motivation and Context
Since QNN 2.35, the default floating-point precision in QNN HTP has changed from FP32 to FP16. Additionally, the configuration option QNN_HTP_GRAPH_CONFIG_OPTION_PRECISION has been deprecated.
This precision change can introduce expected accuracy loss, especially when graph inputs and outputs are defined as FP32 but internal computations are performed in FP16 (i.e., FP32 → FP16 → FP32 round trips). To accommodate this, the tolerance thresholds for the affected tests have been increased so that they do not fail spuriously due to precision differences alone.
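To put a rough number on the expected loss, the following NumPy sketch (illustrative only, not taken from this PR's test code) measures the FP32 → FP16 → FP32 round-trip error. It shows why per-element tolerances need to be on the order of 1e-3 to 1e-2 for values of moderate magnitude once intermediate math runs in FP16, rather than the much tighter thresholds appropriate for pure FP32 execution.

```python
import numpy as np

# FP16 has a 10-bit mantissa, so the relative rounding error of a single
# FP32 -> FP16 conversion is at most 2**-11 (~4.9e-4) for values in the
# normal FP16 range.
rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, size=10_000).astype(np.float32)

# Simulate the FP32 -> FP16 -> FP32 round trip that occurs when graph I/O
# stays FP32 but the HTP graph computes in FP16.
roundtrip = x.astype(np.float16).astype(np.float32)

abs_err = np.abs(x - roundtrip)
rel_err = abs_err / np.maximum(np.abs(x), np.finfo(np.float32).tiny)

print(f"max abs error: {abs_err.max():.3e}")  # ~5e-3 for values up to 10
print(f"max rel error: {rel_err.max():.3e}")  # ~4.9e-4
```

Note that this measures only a single conversion round trip; errors can accumulate across several FP16 operations in a real graph, so the test tolerances may reasonably be somewhat looser than this single-conversion figure.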
@microsoft-github-policy-service agree company="Qualcomm"
/azp run Linux QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows ARM64 QNN CI Pipeline, Windows GPU CUDA CI Pipeline, Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows OpenVINO CI Pipeline, Windows x64 QNN CI Pipeline
Azure Pipelines successfully started running 4 pipeline(s).
Closing and reopening to restart the CI pipelines.