GraXpert
Replace ONNX Runtime inference with PyTorch
Summary
- add a torch-based inference helper that converts ONNX models via onnx2torch and caches them per device
- update deconvolution, denoising, and background extraction to run through the new torch pipeline and select the best available device
- drop the onnxruntime provider helper and add the required torch/onnx dependencies
- update the build workflow to install torch-directml on Windows and drop the onnxruntime-specific packages
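The per-device caching described in the first bullet could look roughly like the sketch below. The names `get_model` and `convert_fn` are hypothetical; in the actual change the converter would presumably wrap `onnx2torch.convert` on the model path and move the result to the target device with `.to(device)`. The converter is injectable here so the caching logic stands on its own:

```python
# Sketch of a per-(model, device) cache for converted ONNX models.
# Assumption: conversion is expensive, so each model is converted at
# most once per device and reused for later inference calls.

_model_cache: dict = {}

def get_model(model_path: str, device: str, convert_fn):
    """Return a cached model for (model_path, device), converting on a miss.

    convert_fn(model_path, device) stands in for the real conversion,
    e.g. onnx2torch.convert(model_path).to(device).
    """
    key = (model_path, device)
    if key not in _model_cache:
        _model_cache[key] = convert_fn(model_path, device)
    return _model_cache[key]
```

Repeated calls with the same path and device then return the same converted model object, while a new device triggers a fresh conversion.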
Testing
- pytest tests/test_AstroImage.py
https://chatgpt.com/codex/tasks/task_e_68e0e878a98c8326a90d26da16e7d0cd
Would this enable hardware acceleration for denoising on Apple Silicon?
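If the "select the best available device" step probes PyTorch's MPS backend, denoising could in principle run on the Apple Silicon GPU via Metal Performance Shaders. A minimal sketch of such a probe, assuming a hypothetical helper name and a CPU fallback when torch or an accelerator is unavailable:

```python
def pick_device() -> str:
    """Return the best available torch device string (hypothetical helper).

    Preference order: CUDA, then MPS (Apple Silicon), then CPU.
    Falls back to "cpu" when torch is not installed.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"  # Apple Silicon GPU via Metal Performance Shaders
    return "cpu"
```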