Ma, Guokai
@tjruwase we plan to add a new workflow to validate installation for CPU and run some inference-related tests. If there is anything we should know before adding a new workflow...
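As a rough illustration of what such a validation step could run, here is a minimal sketch of an install smoke test. The helper name `validate_install` is hypothetical, not part of DeepSpeed; a real workflow would likely also run `ds_report` and the inference tests mentioned above.

```python
# Minimal install smoke test a CPU CI workflow might run (a sketch,
# not the actual workflow). Checks that the package imports cleanly
# and exposes a version string.
import importlib


def validate_install(package="deepspeed"):
    """Return True if `package` imports and exposes a string __version__."""
    mod = importlib.import_module(package)
    return isinstance(getattr(mod, "__version__", None), str)
```

In a workflow, a non-zero exit from a check like this would fail the job early, before any inference tests run.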
> > > @tjruwase we plan to add a new workflow to validate installation for CPU and run some inference related test. If there is anything we should know before...
> @delock, @mrwyattii,
>
> I notice the cpu-inference CI is failing for the following reason:
>
> Is it possible to have a filter to exclude this CI based...
The Intel Extension for PyTorch we use here is a hotfix for AVX2 instruction set detection. In this particular run it seems the public wheel was downloaded from PyPI. We...
Hi @tjruwase, I need your input about UT data formats. Currently, on the PyTorch CPU backend we only support BF16 training and inference. This will break UTs that use the FP16 data format. We...
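One way to keep FP16 UTs from failing on a BF16-only backend is to gate them on a capability table. This is only a sketch; the `SUPPORTED_DTYPES` table and `dtype_supported` helper are hypothetical, not DeepSpeed's actual API.

```python
# Hypothetical per-backend dtype capability table: per the discussion,
# the CPU backend supports BF16 (and FP32) but not FP16.
SUPPORTED_DTYPES = {
    "cpu": {"bf16", "fp32"},
    "cuda": {"fp16", "bf16", "fp32"},
}


def dtype_supported(dtype, backend):
    """Return True if `backend` implements `dtype` for training/inference."""
    return dtype in SUPPORTED_DTYPES.get(backend, set())
```

A pytest fixture could call `pytest.skip(...)` when `dtype_supported` returns False, so FP16 cases are skipped rather than reported as failures on CPU.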
> @tjruwase The difference between training and inference mainly comes from software priority. Sometimes a backend has the forward OPs for a data type implemented first, then implements the backward of...
From my understanding, the challenge is mostly engineering effort: implementing backward OPs in the primitive library, then integrating these primitives into PyTorch, then doing extensive testing to...
> @delock, thanks for the discussion. Perhaps we should handle datatype support and OP support differently in the UTs.
>
> 1. For datatype support, let us use the proposed...
We will have a new release of Intel Extension for PyTorch with the AVX2 detection fix very soon, and will update the install link to fix the CI workflow for CPU. A reference to...
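For context on what the AVX2 detection fix is about, here is a sketch of the kind of ISA check involved: parsing CPU feature flags (as in `/proc/cpuinfo`) and selecting the widest supported vector ISA. The function names are illustrative, not the extension's actual implementation.

```python
# Sketch of ISA detection from /proc/cpuinfo-style text (assumption:
# this mirrors the kind of check the AVX2 hotfix addresses).
def cpu_flags(cpuinfo_text):
    """Extract the feature-flag set from the first 'flags' line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()


def best_isa(flags):
    """Pick the widest vector ISA the flag set advertises."""
    if "avx512f" in flags:
        return "avx512"
    if "avx2" in flags:
        return "avx2"
    return "generic"
```

A bug in a check like this (e.g. only probing for AVX-512 and falling back to generic kernels on AVX2-only machines) is exactly the class of issue a detection hotfix would target.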