Guanhua Feng

16 comments by Guanhua Feng

> Thank you! That has worked.
>
> However, I am getting another error now. Can't understand what this means...
>
> (siamese_nested_unet) PS G:\NeuroPixel\segmentation\unet_plus_plus\siamese_nested_unet\Siam-NestedUNet-master> python train.py INFO:root:GPU AVAILABLE? True...

Hi, could you share a copy of the paper with me? My email address is [email protected]. Thanks!

A question about how k is usually chosen: in the original paper's MS-Celeb-1M experiment (MS-Celeb-1M [11] is a large-scale face recognition dataset consisting of 100K identities, and each identity has about 100 facial images), k is set to 80, which is close to the number of images per identity, but the DeepFashion experiment uses k = 5, while your infomap code uses k = 400.

> > A question about how k is usually chosen: in the original paper's MS-Celeb-1M experiment (MS-Celeb-1M [11] is a large-scale face recognition dataset consisting of 100K identities, and each identity has about 100 facial images), k is set to 80, which is close to the number of images per identity, but the DeepFashion experiment uses k = 5, while your infomap code uses k = 400.
>
> Infomap tries to link every edge that satisfies the similarity threshold. The k value in this code is only used to build the kNN graph quickly with faiss; k is not a parameter of Infomap itself. Just adjust it according to your actual data.

Hello, may I ask how the experiments on more than 1M samples in the GCN-V and GCN-E paper were run? A single GPU goes OOM immediately...
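To illustrate the point in the reply above, here is a minimal numpy sketch of the role k plays: it only bounds how many neighbours are considered per node (what faiss would return), while the graph Infomap actually clusters is defined by the similarity threshold. The function name and the brute-force search are my own illustration; the repo uses faiss for speed, not this loop.

```python
import numpy as np

def knn_edges(features, k, sim_threshold):
    """Build a kNN graph and keep only edges above a similarity threshold.

    k bounds how many candidate neighbours each node considers; the edge
    set handed to Infomap is determined by sim_threshold, not by k itself.
    """
    # cosine similarity via normalised dot products
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-edges
    edges = []
    for i in range(len(f)):
        # indices of the k most similar neighbours of node i
        nbrs = np.argsort(-sims[i])[:k]
        for j in nbrs:
            if sims[i, j] >= sim_threshold:
                edges.append((i, int(j), float(sims[i, j])))
    return edges
```

With a permissive threshold every node contributes k edges; with a strict one the graph thins out regardless of k, which is why k = 400 can coexist with much smaller values in the GCN papers.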

```
z = F.fold(z, kernel_size=s, output_size=(H1, W1), stride=s)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 3860, in fold
    return torch._C._nn.col2im(input, _pair(output_size), _pair(kernel_size),
RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to...
```
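For context, the allocation that fails here is the dense output tensor that `F.fold` (col2im) must materialise at the full `(H1, W1)` resolution. A back-of-envelope estimate (a sketch; the shapes below are hypothetical, not taken from the traceback) shows how quickly this grows:

```python
def fold_output_bytes(batch, channels, h_out, w_out, dtype_bytes=4):
    """Rough size of the dense output torch.nn.functional.fold allocates.

    col2im materialises a full (batch, channels, h_out, w_out) tensor, so
    the allocation scales with output resolution, independent of the
    patch size used during unfolding.
    """
    return batch * channels * h_out * w_out * dtype_bytes

# hypothetical shapes: 8 x 512 x 2048 x 2048 float32
print(fold_output_bytes(8, 512, 2048, 2048) / 2**30, "GiB")  # → 64.0 GiB
```

An estimate like this can tell you in advance whether a given resolution will fit, before the allocator fails.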

> I did this as a workaround:
>
> ```
> embedding_vectors = test_outputs['layer1']
> # randomly select d dimension
> condition = idx < embedding_vectors.shape[1]
> idx_1 = idx[condition]...
> ```

> @Youskrpig
>
> > In addition, randomly choosing feature dimensions is not ideal. For details, refer to the latest paper:
>
> "Semi-orthogonal Embedding for Efficient Unsupervised Anomaly...
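The workaround quoted above can be sketched as a small numpy helper: indices drawn for a larger embedding are filtered down to the dimensions that actually exist in the current layer. The function name and shapes are illustrative, not from the repo.

```python
import numpy as np

def select_dims(embedding_vectors, idx):
    """Keep only the randomly chosen dimensions valid for this layer.

    `idx` may have been sampled for a larger concatenated embedding, so
    indices beyond this layer's channel count are dropped first.
    """
    idx = np.asarray(idx)
    valid = idx[idx < embedding_vectors.shape[1]]  # drop out-of-range dims
    return embedding_vectors[:, valid]
```

This keeps the random-projection step from indexing past the layer's channel dimension, which is what the `condition` mask in the quoted snippet guards against.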

> > For the LEVIR_CD dataset, I directly resized to 256*256 and the accuracy is around 49. What could be the reason?
>
> Resizing has a large impact on the images. An extreme example: a changed region that was originally a house covering 50 pixels may shrink to a single pixel after such an operation, which makes detection much harder.

But your change label is also resized to 256*256.
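The shrinking effect described above is easy to demonstrate: under downsampling, a small changed region loses pixels roughly quadratically with the scale factor, even if the label is resized consistently with the image. A minimal numpy sketch (nearest-neighbour subsampling as a stand-in for a real resize):

```python
import numpy as np

def downsample_nearest(mask, factor):
    """Nearest-neighbour downsample of a binary change mask."""
    return mask[::factor, ::factor]

# a 1024x1024 label with a small 8x8 changed region (64 pixels)
mask = np.zeros((1024, 1024), dtype=np.uint8)
mask[100:108, 200:208] = 1

small = downsample_nearest(mask, 4)  # -> 256x256
print(mask.sum(), small.sum())  # → 64 4
```

The region survives, but with far fewer pixels and no boundary detail, so the detector sees a much harder target even though image and label were resized together.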

Hello, I ran the test experiments with the provided pretrained_gcn_v_fashion.pth, but the clustering results do not seem to line up with the paper's results. Here is the result: ![image](https://user-images.githubusercontent.com/51428142/99938586-14fd5d00-2da3-11eb-9aa7-fcd638021eb6.png) ![image](https://user-images.githubusercontent.com/51428142/99938591-17f84d80-2da3-11eb-851f-dcbad2bfd651.png) ![image](https://user-images.githubusercontent.com/51428142/99938664-44ac6500-2da3-11eb-905c-aca2766d757a.png)

Thanks for your reply. I have two other questions: 1. Does the number of training epochs in your experiments (unlabeled clustering on MS-Celeb-1M at different sizes: 584k, 1.74M, 2.89M) remain the same, or does it increase?...