VGCN-PyTorch
PyTorch Implementation of TCSVT 2020 "Blind Omnidirectional Image Quality Assessment with Viewport Oriented Graph Convolutional Networks"
A question about pretraining
Hello, after reading the experiments section of the VGCN paper I have a question: during the pretraining stage that precedes joint optimization, how should I verify that the pretrained parameters I obtain are "correct"? Should I also judge them by SROCC and PLCC?
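A common way to sanity-check a pretrained branch is to compute SROCC and PLCC between its predicted scores and the ground-truth MOS on a held-out split. A minimal sketch, assuming hypothetical `preds` and `mos` arrays of predicted and subjective quality scores:

```python
# Sketch: checking a pretrained branch with SROCC and PLCC.
# `preds` and `mos` are placeholder arrays, not values from the repo.
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate(preds, mos):
    srocc = spearmanr(preds, mos).correlation  # rank correlation
    plcc = pearsonr(preds, mos)[0]             # linear correlation
    return srocc, plcc

preds = np.array([3.1, 4.0, 2.2, 4.8, 1.5])
mos = np.array([3.0, 4.2, 2.0, 5.0, 1.8])
srocc, plcc = evaluate(preds, mos)
```

If both correlations on the validation set are reasonably high and stable across epochs, the pretrained weights are usually a good starting point for joint optimization.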
Hello, I cannot open the links in Prepare Data: Obtain [cviqd_local_epoch.pth](https://drive.google.com/file/d/1ROT4InmAEKUisfNbMHwWpWb0nvlDhoSe/view?usp=sharing), [cviqd_global_epoch.pth](https://drive.google.com/file/d/1ggxGi2uvmL3n0BtYLC-HCrWbhna2TkFQ/view?usp=sharing), and [cviqd_model.pth](https://drive.google.com/file/d/19WJHBkogveax0b3IgpWeRco5xXgKQvFl/view?usp=sharing). Why is this?
Dear sir, thank you for your wonderful work. I have a misunderstanding about how the output of the GCN layer is calculated, for example, "GraphConvolution(512, 256)". (In case I want to change...
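For a Kipf & Welling style layer, `GraphConvolution(512, 256)` maps each node's 512-dim feature to 256 dims via a learned weight matrix, then aggregates over the normalized adjacency. A minimal sketch, assuming a re-implementation of that standard formulation (the class name mirrors the repo's layer, but the body here is illustrative, not the repo's exact code):

```python
# Sketch of a Kipf & Welling graph convolution: out = adj @ (x @ W).
# GraphConvolution(512, 256) thus turns (N, 512) features into (N, 256).
import torch
import torch.nn as nn

class GraphConvolution(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, adj):
        # x: (N, in_features) node features; adj: (N, N) normalized adjacency
        support = x @ self.weight  # linear projection -> (N, out_features)
        return adj @ support       # aggregate features from neighboring nodes

N = 20                       # e.g. 20 viewport nodes (assumed count)
layer = GraphConvolution(512, 256)
x = torch.randn(N, 512)
adj = torch.eye(N)           # placeholder adjacency for illustration
out = layer(x, adj)          # shape (N, 256)
```

Changing the second argument changes only the output feature dimension; the number of nodes N is preserved.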
Hello, I hope this message finds you well. I recently came across your open-source code repository for the paper Blind Omnidirectional Image Quality Assessment with Viewport Oriented Graph Convolutional Networks....
During my reproduction I strictly followed the methods and steps in the paper, changing only the local path of the dataset (to fit my environment); all other settings were left unmodified. However, my experimental results differ greatly from those reported in the paper. I have carefully checked every part of the code implementation to ensure it matches the paper's description, but I still cannot obtain the expected results. Looking forward to your reply.