
CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes

67 CSRNet-pytorch issues

Because image sizes in the ShanghaiTech Part A dataset are not all the same, how can I set `batch_size > 1` when training the code?
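One common workaround for batching variable-size images (not part of this repo; `pad_collate` is a hypothetical name) is a custom `collate_fn` that zero-pads every image and target in the batch to the largest height/width before stacking. Zero-padding does not change the head count, since the density map integrates to the count. A minimal sketch:

```python
import torch

def pad_collate(batch):
    """Pad variable-size (img, target) pairs to a common size so they stack.

    Assumes each item is (img, target) with img shaped (C, H, W) and
    target shaped (h, w). Zero padding leaves the density sums unchanged.
    """
    imgs, targets = zip(*batch)
    H = max(im.shape[1] for im in imgs)
    W = max(im.shape[2] for im in imgs)
    h = max(t.shape[0] for t in targets)
    w = max(t.shape[1] for t in targets)

    padded_imgs = torch.zeros(len(imgs), imgs[0].shape[0], H, W)
    padded_tgts = torch.zeros(len(targets), h, w)
    for i, (im, t) in enumerate(zip(imgs, targets)):
        padded_imgs[i, :, :im.shape[1], :im.shape[2]] = im
        padded_tgts[i, :t.shape[0], :t.shape[1]] = t
    return padded_imgs, padded_tgts
```

Passing this as `DataLoader(..., collate_fn=pad_collate)` lets `batch_size > 1` work; random cropping to a fixed size is another common alternative.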

First: I didn't change any parameters in this code: `args.original_lr = 1e-6`, `args.lr = 1e-7`, `args.batch_size = 1`, `args.momentum = 0.95`, `args.decay = 5*1e-4`, `args.start_epoch = ...`

Hi leeyeehoo, thank you for the released code. In image.py, I don't understand the meaning of the line `target = cv2.resize(target,(target.shape[1]/8,target.shape[0]/8),interpolation = cv2.INTER_CUBIC)*64`. Maybe the reason is that the resize scale is 1/8, but...
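The intuition behind the `*64`: interpolation approximates a local average, so shrinking each side by 8 reduces the pixel sum (the head count) by roughly 8*8 = 64, and multiplying by 64 restores it. A numpy-only sketch using 8x8 average pooling as a stand-in for the bicubic resize (note also that in Python 3, `target.shape[1]/8` is a float, so `cv2.resize` would need integer division `//`):

```python
import numpy as np

# Toy stand-in for the 1/8 resize: average pooling over 8x8 blocks.
# The local average shrinks the pixel SUM of the map by 8*8 = 64 while
# the head count must stay fixed -- hence the trailing "* 64" in image.py.
target = np.ones((256, 256), dtype=np.float32)          # sum == 65536

small = target.reshape(32, 8, 32, 8).mean(axis=(1, 3))  # 32x32 map
restored_sum = small.sum() * 64                          # sum restored
print(target.sum(), restored_sum)
```

With a real `cv2.INTER_CUBIC` resize the sums match only approximately, but the factor-64 relationship is the same.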

The data augmentation in the code is not the same as in the paper. There is no real data augmentation in the code; it just uses the same image four times.

~~~
Traceback (most recent call last):
  File "train.py", line 230, in <module>
    main()
  File "train.py", line 58, in main
    train_list = json.load(outfile)
  File "/anaconda/envs/azureml_py36/lib/python3.6/json/__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File...
~~~

In density map generation, the gaussian_filter_density function leads to slight variation. So I changed the code from `density += scipy.ndimage.filters.gaussian_filter(pt2d, sigma, mode='constant')` to `map = scipy.ndimage.filters.gaussian_filter(pt2d, sigma, mode='constant')`, `map =...`
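The variation comes from heads near the image border: with `mode='constant'`, part of the Gaussian mass falls outside the image, so the map's sum drifts slightly below the true count. A sketch of the renormalization the issue seems to suggest (`head_density` is a hypothetical name, not from this repo):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def head_density(shape, points, sigma=4.0):
    """Build a density map where each annotated head contributes exactly 1.

    points are (col, row) head annotations. Each head is a single 1-valued
    pixel blurred by a Gaussian; dividing by the blurred response's own sum
    restores the mass lost past the border, so density.sum() == len(points).
    """
    density = np.zeros(shape, dtype=np.float64)
    for x, y in points:
        pt2d = np.zeros(shape, dtype=np.float64)
        pt2d[y, x] = 1.0
        blurred = gaussian_filter(pt2d, sigma, mode='constant')
        density += blurred / blurred.sum()  # renormalize: each head counts as 1
    return density
```

This is slower than blurring all heads at once, but makes the ground-truth count exact.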

What does the statement `target = cv2.resize(target,(target.shape[1]/8,target.shape[0]/8),interpolation = cv2.INTER_CUBIC)*64` do?

I use this model to automatically count the total number of objects. When the image is 4000x4000, training fails with an 'out of memory' error. I don't want to downscale the image. What should I do? ![3](https://user-images.githubusercontent.com/51109294/58455151-5469cf80-8153-11e9-9206-2b9c81961363.png)
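One workaround that avoids downscaling (not part of this repo; `count_by_tiles` is a hypothetical helper) is tile-based inference: run the model on non-overlapping tiles and add up the per-tile counts. Because the count is the integral of the density map, counts over a partition of the image sum to the total; heads cut by a tile border introduce a small error, which overlap-and-blend schemes can reduce. A minimal sketch:

```python
import torch

@torch.no_grad()
def count_by_tiles(model, img, tile=1024):
    """Sum per-tile density-map counts for a (1, 3, H, W) image tensor.

    Each tile is small enough to fit in GPU memory; the per-tile counts
    add up to (approximately) the full-image count.
    """
    _, _, H, W = img.shape
    total = 0.0
    for top in range(0, H, tile):
        for left in range(0, W, tile):
            patch = img[:, :, top:top + tile, left:left + tile]
            total += float(model(patch).sum())
    return total
```

For training (rather than inference), random fixed-size crops are the usual way to keep memory bounded.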

Hello, thanks a lot for the very easy-to-follow instructions and the code. I am having difficulty reproducing your results. I am using your pretrained model 'PartAmodel_best.pth.tar'. And I am using...

The author's network outputs a density map whose width and height are both 1/8 of the original image. Taking a 256*256 image as an example, the predicted density map is 32*32; but relative to the ground-truth 256*256 density map, the sums of their pixel values (the head counts) should differ by a factor of 64. When computing MAE, shouldn't the predicted density map's sum be multiplied by 64 before comparing it with the sum of the original density map? Why does val.ipynb compare the network output directly against the original label?
~~~python
# code that computes MAE
mae += abs(output.detach().cpu().sum().numpy()-np.sum(groundtruth))
# code that generates the ground-truth density map
k = np.zeros((img.shape[0],img.shape[1]))  # density map has the same size as the original image
gt = mat["image_info"][0,0][0,0][0]
for i in range(0,len(gt)):
    if int(gt[i][1])
~~~