torch-dct
The DCT results don't seem to match
I just tried the dct_2d function, and its result differs considerably from cv2's dct(). I computed the DC coefficient by hand: my manual result matches cv2.dct(), but differs significantly from the DC coefficient of dct_2d(). I haven't verified the other coefficients yet. Have you run into this?
I ran into it too, which is why I asked. It feels like something goes wrong in the rfft-to-DCT conversion?
The assert in the README also fails for me; I get an error of 2e-5.
#13 Try this:

```python
import cv2
import torch
import torch_dct

im_gray = cv2.imread('lena.jpg', cv2.IMREAD_GRAYSCALE)
a = cv2.dct(im_gray.astype("float"))
b = torch_dct.dct_2d(torch.from_numpy(im_gray.astype("float")), norm='ortho')
error = (torch.abs(torch.from_numpy(a) - b)).sum()
```
I got an error of 6.0581e-08.
It also differs a lot from MATLAB's dct2.
It also doesn't match what scipy.fftpack.dct computes.
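A likely source of confusion in these comparisons is the normalization convention: scipy.fftpack.dct defaults to an unnormalized DCT-II, whereas cv2.dct and MATLAB's dct2 compute the orthonormal variant (`norm='ortho'`). The two differ only by known per-coefficient scale factors. A quick sketch of the relation (pure scipy/numpy, no torch-dct required):

```python
import numpy as np
from scipy.fftpack import dct

x = np.random.randn(8)
a = dct(x)                # DCT-II, unnormalized (scipy default)
b = dct(x, norm='ortho')  # orthonormal DCT-II

# The two outputs differ only by per-coefficient scale factors:
# 2*sqrt(N) for the DC term, sqrt(2N) for all others.
n = len(x)
scale = np.full(n, np.sqrt(2.0 * n))
scale[0] = 2.0 * np.sqrt(n)
assert np.allclose(a, b * scale)
```

So before concluding that two DCT implementations disagree, it is worth checking that they use the same normalization.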
I tried both options with the program below, using torch 1.7.1 + torchaudio 0.7.2 (the latest versions at the time):
```python
import torch
import torch_dct as dct
import scipy.fftpack

x = torch.randn(200)
X = dct.dct(x)    # DCT-II along the last dimension
y = dct.idct(X)   # scaled DCT-III along the last dimension
print(torch.abs(x - y))

XX = torch.Tensor(scipy.fftpack.dct(x.numpy()))
print(torch.abs(XX - X))
```
According to the results, most of the absolute differences between this DCT implementation and scipy's are in the range [1e-7, 1e-6], occasionally reaching 1e-5. I don't know whether that observation says much about the correctness of this implementation, but I have been using it as a fixed DCT module in my experiments for some time, and so far I have not observed anything weird.
I ran a quick test with the code above and indeed found discrepancies: not the 1e-10 claimed by the assertion in the test module, but roughly between 1e-7 and 1e-6. That said, I have used this module in my experiments for a while and haven't noticed anything abnormal so far.
@zh217 So to recap: apart from updating the module itself, I think we need to revise the criteria in the test module. Do you have any ideas on that? For example, maybe loosening the threshold from 1e-10 to a larger value?
I just tried it: torch-dct's results match cv2's after dividing by 1000.
@adaxidedakaonang Yes, after scaling they do match. So I think this part is harmless and the tolerance can be loosened a bit. @zh217 your thoughts?
Is the DCT here a global DCT, or is it applied on 8×8 blocks?
Depends on your needs. JPEG applies it on 8×8 blocks.
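For reference, a JPEG-style blockwise 2-D DCT can be sketched with scipy alone; `blockwise_dct2` is a hypothetical helper written for illustration, not part of torch-dct:

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct2(img, bs=8):
    """JPEG-style DCT: orthonormal 2-D DCT-II applied to each bs x bs block.

    Assumes the image dimensions are exact multiples of bs.
    """
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            block = img[i:i + bs, j:j + bs].astype(np.float64)
            # separable 2-D DCT: transform rows, then columns
            out[i:i + bs, j:j + bs] = dct(dct(block, axis=0, norm='ortho'),
                                          axis=1, norm='ortho')
    return out
```

A global DCT would instead transform the whole image at once; the blockwise variant localizes the frequency analysis, which is what JPEG relies on.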
What do you mean by "torch-dct's results match cv2's after dividing by 1000"?
I just looked into it a bit; the large error seems to be because PyTorch's default floating-point type is float32?
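One way to sanity-check the float32 hypothesis without torch-dct is to compare DCT round-trip errors at both precisions using scipy. The exact magnitudes will vary from run to run, but float64 should come out several orders of magnitude more accurate:

```python
import numpy as np
from scipy.fftpack import dct, idct

rng = np.random.default_rng(0)
x = rng.standard_normal(500)

for dtype in (np.float32, np.float64):
    xd = x.astype(dtype)
    # an orthonormal DCT-II followed by its inverse should reproduce the input
    y = idct(dct(xd, norm='ortho'), norm='ortho')
    print(dtype.__name__, np.abs(xd - y).sum())
```

If the assertion in the test module was tuned for float64 accuracy, a float32 run would naturally blow past a 1e-10 threshold.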
> It also doesn't match what scipy.fftpack.dct computes.

Have you solved this problem?
> It also differs a lot from MATLAB's dct2.

MATLAB's dct gives the same result as scipy.fftpack.dct. Have you solved this problem?
> I just looked into it a bit; the large error seems to be because PyTorch's default floating-point type is float32?

I haven't used it in a long time, so I don't know whether the code has been updated. Back then, the author's results differed from cv2's by a constant factor of 1000, so dividing the pytorch dct results by 1000 made them match cv2. That is what my earlier comment meant.
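A speculative explanation for the "divide by 1000" observation: if one implementation returns the unnormalized 2-D DCT-II and the other (like cv2.dct) the orthonormal one, their DC coefficients differ by a scale of (2·sqrt(N))·(2·sqrt(M)), which is 1024 for a 256 × 256 image, suspiciously close to 1000. Note that this scale is exact only for the DC term; the other coefficients carry smaller factors, so a uniform division by 1000 can only be an approximation. A scipy-only sketch of the DC relation:

```python
import numpy as np
from scipy.fftpack import dct

def dct2(img, norm=None):
    # separable 2-D DCT-II: transform rows, then columns
    return dct(dct(img, axis=0, norm=norm), axis=1, norm=norm)

N = M = 256
rng = np.random.default_rng(0)
img = rng.standard_normal((N, M))

unnorm = dct2(img)               # scipy default: unnormalized DCT-II
ortho = dct2(img, norm='ortho')  # orthonormal variant, as in cv2.dct

dc_scale = (2 * np.sqrt(N)) * (2 * np.sqrt(M))  # 1024 for 256 x 256
assert np.isclose(unnorm[0, 0], ortho[0, 0] * dc_scale)
```

If this is what happened, the fix is to pick one normalization consistently rather than to rescale by an empirical constant.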
I tried to set the default type to float64 and the assertion passed.
```python
import torch
import torch_dct as dct

# This line is crucial for accuracy
torch.set_default_dtype(torch.float64)

x = torch.randn(500)
X = dct.dct(x)    # DCT-II along the last dimension
y = dct.idct(X)   # scaled DCT-III along the last dimension
assert (torch.abs(x - y)).sum() < 1e-10, (torch.abs(x - y)).sum()  # x == y within numerical tolerance
```
The error margin is around 1e-13.