wangxin-fighting

Results: 7 issues by wangxin-fighting

Hello, I am running the train_ script. When generating with this code, I found that only one CKPT file was generated under log_gen, but it is not the...

ModuleNotFoundError: No module named 'pointops_cuda'
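For context, pointops_cuda is normally a compiled CUDA extension rather than a plain pip package, so the import only succeeds after the extension has been built. Below is a minimal sketch of an import check; the lib/pointops path and the setup.py build step are assumptions about a typical layout, not confirmed details of this repository.

```python
# Minimal sketch: check whether the compiled pointops CUDA extension is importable.
# The build path below (lib/pointops) is a hypothetical example of a common layout,
# not a confirmed detail of this repository.
try:
    import pointops_cuda  # compiled C++/CUDA extension, not installable via pip alone
    print("pointops_cuda is available")
except ModuleNotFoundError:
    print("pointops_cuda is missing; it usually has to be compiled from source, e.g.:")
    print("  cd lib/pointops && python setup.py install   # hypothetical path")
```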

Hello, I would like to ask why I cannot find a decoder section in the model part of the code. Is there no decoder? And I see that the paper is...

Hello, after I replaced the dataset, I got abnormal results. For example, the Pixel AUC increased from 0.72 to 0.8, but the Image AUC decreased from 0.9 to 0.8....
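Such opposite movements are possible because image-level and pixel-level AUROC are computed over different populations. The sketch below uses made-up data and standard scikit-learn calls (none of the names come from this repository) to show how the two metrics are typically derived from the same anomaly maps.

```python
# Minimal sketch, not the authors' evaluation code: pixel-level AUROC scores every pixel,
# while image-level AUROC scores one value per image (here the max pixel score), so the
# two metrics can move in opposite directions when the dataset changes.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
anomaly_maps = rng.random((10, 64, 64))            # per-pixel anomaly scores (illustrative)
pixel_labels = np.zeros((10, 64, 64), dtype=bool)
pixel_labels[:5, 20:30, 20:30] = True              # first 5 images contain a defect region

pixel_auc = roc_auc_score(pixel_labels.ravel(), anomaly_maps.ravel())
image_labels = pixel_labels.reshape(10, -1).any(axis=1)
image_scores = anomaly_maps.reshape(10, -1).max(axis=1)
image_auc = roc_auc_score(image_labels, image_scores)
print(f"Pixel AUC: {pixel_auc:.3f}, Image AUC: {image_auc:.3f}")
```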

Hello, excuse me, do I need to install all 179 libraries listed in the requirements.txt file of the code you published? Because I failed to install all 179 libraries...

![model](https://github.com/user-attachments/assets/2d027209-e144-48de-a76d-594c7632bed0) Hello, I have two questions. 1: In the model overview in Figure 2 of the paper, what is the difference between X and X_{k-1}? 2: How are the different transformer layers connected? For example, I see that your code only has an encoder and no decoder. What is the output of the first Transformer (encoder) layer, and how is it passed to the second layer as its input?
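On the second question, an encoder-only model usually chains its layers by feeding each layer's output directly into the next layer. The sketch below is a generic PyTorch illustration of that pattern (the dimensions and the use of nn.TransformerEncoderLayer are my assumptions, not the repository's actual modules): X_k is simply the k-th encoder layer applied to X_{k-1}.

```python
# Minimal sketch (not the FOD code): an encoder-only stack where the output of layer k-1
# is passed directly as the input of layer k, so no separate decoder is required.
import torch
import torch.nn as nn

layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True) for _ in range(3)
)

x = torch.randn(2, 196, 256)        # (batch, num_patches, feature_dim); sizes are illustrative
feats = [x]                         # feats[k] plays the role of X_k
for layer in layers:
    feats.append(layer(feats[-1]))  # X_k = EncoderLayer_k(X_{k-1})
print([tuple(f.shape) for f in feats])
```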

Hello, while reading your paper, I could not accurately translate this sentence: "In this paper, we propose a novel AD framework: FOcus-the-Discrepancy (FOD), which can simultaneously spot the patch-wise, intra- and inter-discrepancies of anomalies." In particular, I have thought for a long time about the term "patch-wise" and still do not understand what it means. Could you explain its correct meaning? Thank you very much.
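For what it is worth, "patch-wise" is usually read as "computed separately for each patch (local region or token position) of the image or feature map". A minimal sketch of that reading, using a generic per-patch discrepancy in my own notation, which is not necessarily the paper's exact formulation:

```latex
% One discrepancy score per patch position i (illustrative notation, not the paper's)
d_i = \left\lVert x_i - \hat{x}_i \right\rVert_2^2, \qquad i = 1, \dots, N
```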