Sai Aakash
I see. Thanks, @zhengziqiang!
I have another question. The inference time does not decrease when I increase the `iou_threshold`; it just stays the same. From what you said, the inference speed should increase...
Thanks for all the suggestions! I was able to run CoralSCOP by making some changes along the lines of what @D-Barradas and @taiamiti suggested. For me, things were slightly...
@Balandat yes. I was able to fit the model without any problems after undoing the changes from #2527.
A potential fix is to route the code inside `BatchedMultiOutputGPyTorchModel`'s `posterior` method differently. When `trace_mode` is on, it could be just

```python
if self._num_outputs > 1:
    mvn...
```
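To make the idea concrete, here is a minimal, hypothetical sketch of that routing, written as a standalone helper rather than the actual `posterior` method. The use of `gpytorch.settings.trace_mode` as the gate, the helper's name, and the indexing of the batched MVN are my assumptions, not BoTorch's real code:

```python
# Hypothetical sketch of the routing idea; not the actual BoTorch implementation.
from gpytorch import settings
from gpytorch.distributions import MultitaskMultivariateNormal, MultivariateNormal


def to_multitask(mvn: MultivariateNormal, num_outputs: int) -> MultitaskMultivariateNormal:
    """Turn a batched single-output MVN (outputs along the last batch dim)
    into a multitask MVN, avoiding `from_batch_mvn` while tracing."""
    if settings.trace_mode.on():
        # Trace-friendly path: assemble independent per-output MVNs and
        # combine them, sidestepping the op assumed to break torchscript export.
        mvns = [
            MultivariateNormal(
                mvn.mean[..., t, :], mvn.lazy_covariance_matrix[..., t, :, :]
            )
            for t in range(num_outputs)
        ]
        return MultitaskMultivariateNormal.from_independent_mvns(mvns=mvns)
    # Default path: reshape the batched MVN directly into a multitask one.
    return MultitaskMultivariateNormal.from_batch_mvn(mvn, task_dim=-1)
```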
I have put up a PR that provides a temporary fix, which at least enables exporting the model to torchscript. It uses the `from_batch_mvn` operation only when the posterior method...
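For reference, this is roughly the kind of export the fix is meant to enable; a sketch assuming a toy two-output `SingleTaskGP` and gpytorch's `trace_mode` context, not a verified reproduction of the PR:

```python
# Sketch of torchscript export for a multi-output model; shapes and the
# trace_mode usage are illustrative assumptions.
import torch
from botorch.models import SingleTaskGP
from gpytorch import settings

train_X = torch.rand(20, 3, dtype=torch.double)
train_Y = torch.rand(20, 2, dtype=torch.double)  # two outputs -> batched MVN path
model = SingleTaskGP(train_X, train_Y).eval()

test_X = torch.rand(5, 3, dtype=torch.double)
with torch.no_grad(), settings.trace_mode():
    # Trace only the posterior mean computation; this is the call that
    # previously failed when from_batch_mvn was hit during tracing.
    traced = torch.jit.trace(lambda X: model.posterior(X).mean, test_X)

print(traced(torch.rand(4, 3, dtype=torch.double)).shape)  # torch.Size([4, 2])
```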