
Replace ONNX Runtime inference with PyTorch

Steffenhir opened this issue 3 months ago • 1 comment

Summary

  • add a torch-based inference helper that converts ONNX models via onnx2torch and caches them per device
  • update deconvolution, denoising, and background extraction to run through the new torch pipeline and select the best available device
  • drop the onnxruntime provider helper and add the required torch/onnx dependencies
  • update the build workflow to install torch-directml on Windows and drop the onnxruntime-specific packages
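The helper described above can be sketched roughly as follows. This is an illustrative outline, not GraXpert's actual code: the names `pick_device`, `load_model`, and `infer` are hypothetical, and the DirectML probe is stubbed out. It shows the two ideas in the summary: converting an ONNX graph to a `torch.nn.Module` via `onnx2torch` and caching one converted model per (path, device) pair, with a CUDA > MPS > DirectML > CPU preference order.

```python
# Hypothetical sketch of the torch inference helper described in this PR.
# Function names are illustrative; they are not GraXpert's real API.
from functools import lru_cache


def pick_device(cuda: bool, mps: bool, directml: bool) -> str:
    """Pure preference order: CUDA > MPS (Apple Silicon) > DirectML > CPU."""
    if cuda:
        return "cuda"
    if mps:
        return "mps"
    if directml:
        return "dml"
    return "cpu"


@lru_cache(maxsize=None)  # cache one converted model per (path, device) pair
def load_model(onnx_path: str, device: str):
    import torch                    # heavy imports deferred so the module
    from onnx2torch import convert  # loads even without torch installed
    # convert() turns an ONNX file into an equivalent torch.nn.Module
    return convert(onnx_path).to(device).eval()


def infer(onnx_path: str, x):
    import torch
    device = pick_device(
        torch.cuda.is_available(),
        torch.backends.mps.is_available(),
        False,  # DirectML probe omitted; Windows builds would check torch-directml here
    )
    model = load_model(onnx_path, device)
    with torch.no_grad():
        return model(x.to(device)).cpu()
```

Because `pick_device` prefers MPS when CUDA is absent, a design like this would run inference on the GPU of Apple Silicon machines through PyTorch's MPS backend.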

Testing

  • pytest tests/test_AstroImage.py

https://chatgpt.com/codex/tasks/task_e_68e0e878a98c8326a90d26da16e7d0cd

Steffenhir · Oct 04 '25 09:10

Would this enable hardware acceleration for denoising on Apple Silicon?

Domdron · Nov 06 '25 15:11