
ONNX Runtime and PyTorch for Multi-AMD-GPU Runs

zixianwang2022 opened this issue · 3 comments

I modified pytorch_SUT.py, onnxruntime_SUT.py, squad_QSL.py, and run.py for BERT inference so that they can run on multiple AMD GPUs, provided the right environment is set up.
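As a rough illustration of the multi-GPU idea (not the actual patch), one way to spread the QSL's sample indices across GPUs is a round-robin shard, where each worker then binds to its own device. The names `shard_samples` and `num_gpus` below are hypothetical:

```python
# Hypothetical sketch: shard SQuAD sample indices across GPUs round-robin.
# The function name and parameters are illustrative, not from the patch.

def shard_samples(sample_indices, num_gpus):
    """Assign each sample index to a GPU id via round-robin."""
    shards = {gpu: [] for gpu in range(num_gpus)}
    for i, idx in enumerate(sample_indices):
        shards[i % num_gpus].append(idx)
    return shards

# Each worker process would then pin itself to its GPU -- e.g. by setting
# HIP_VISIBLE_DEVICES before launch, or calling torch.cuda.set_device()
# on a ROCm build of PyTorch -- and run only its shard through the SUT.

shards = shard_samples(list(range(10)), num_gpus=4)
print(shards[0])  # -> [0, 4, 8]
```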

Credit to my AMD mentor Miro Hodak for guiding and directing me in optimizing the code, and to Khai Vu of the UCSD Student Cluster Competition team at SC23 for helping me set up the ONNX Runtime environment for AMD ROCm.

zixianwang2022 · Nov 12 '23