deep-high-resolution-net.pytorch
RuntimeError: storage has wrong size: expected 0 got 128
Hi, I wanted to try using the pre-trained models with a different data set. I am using Azure ML to run the code. I tried the following command:
python tools/test.py \
    --cfg experiments/coco/hrnet/w32_256x192_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth \
    TEST.USE_GT_BBOX False
but I receive the error:
=> creating output/coco/pose_hrnet/w32_256x192_adam_lr1e-3
=> creating log/coco/pose_hrnet/w32_256x192_adam_lr1e-3_2020-11-19-08-09
Namespace(cfg='experiments/coco/hrnet/w32_256x192_adam_lr1e-3.yaml', dataDir='', logDir='', modelDir='', opts=['TEST.MODEL_FILE', 'models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth', 'TEST.USE_GT_BBOX', 'False'], prevModelDir='')
AUTO_RESUME: True
CUDNN:
  BENCHMARK: True
  DETERMINISTIC: False
  ENABLED: True
DATASET:
  COLOR_RGB: True
  DATASET: coco
  DATA_FORMAT: jpg
  FLIP: True
  HYBRID_JOINTS_TYPE:
  NUM_JOINTS_HALF_BODY: 8
  PROB_HALF_BODY: 0.3
  ROOT: data/coco/
  ROT_FACTOR: 45
  SCALE_FACTOR: 0.35
  SELECT_DATA: False
  TEST_SET: val2017
  TRAIN_SET: train2017
DATA_DIR:
DEBUG:
  DEBUG: True
  SAVE_BATCH_IMAGES_GT: True
  SAVE_BATCH_IMAGES_PRED: True
  SAVE_HEATMAPS_GT: True
  SAVE_HEATMAPS_PRED: True
GPUS: (0, 1, 2, 3)
LOG_DIR: log
LOSS:
  TOPK: 8
  USE_DIFFERENT_JOINTS_WEIGHT: False
  USE_OHKM: False
  USE_TARGET_WEIGHT: True
MODEL:
  EXTRA:
    FINAL_CONV_KERNEL: 1
    PRETRAINED_LAYERS: ['conv1', 'bn1', 'conv2', 'bn2', 'layer1', 'transition1', 'stage2', 'transition2', 'stage3', 'transition3', 'stage4']
    STAGE2:
      BLOCK: BASIC
      FUSE_METHOD: SUM
      NUM_BLOCKS: [4, 4]
      NUM_BRANCHES: 2
      NUM_CHANNELS: [32, 64]
      NUM_MODULES: 1
    STAGE3:
      BLOCK: BASIC
      FUSE_METHOD: SUM
      NUM_BLOCKS: [4, 4, 4]
      NUM_BRANCHES: 3
      NUM_CHANNELS: [32, 64, 128]
      NUM_MODULES: 4
    STAGE4:
      BLOCK: BASIC
      FUSE_METHOD: SUM
      NUM_BLOCKS: [4, 4, 4, 4]
      NUM_BRANCHES: 4
      NUM_CHANNELS: [32, 64, 128, 256]
      NUM_MODULES: 3
  HEATMAP_SIZE: [48, 64]
  IMAGE_SIZE: [192, 256]
  INIT_WEIGHTS: True
  NAME: pose_hrnet
  NUM_JOINTS: 17
  PRETRAINED: models/pytorch/imagenet/hrnet_w32-36af842e.pth
  SIGMA: 2
  TAG_PER_JOINT: True
  TARGET_TYPE: gaussian
OUTPUT_DIR: output
PIN_MEMORY: True
PRINT_FREQ: 100
RANK: 0
TEST:
  BATCH_SIZE_PER_GPU: 32
  BBOX_THRE: 1.0
  COCO_BBOX_FILE: data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json
  FLIP_TEST: True
  IMAGE_THRE: 0.0
  IN_VIS_THRE: 0.2
  MODEL_FILE: models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth
  NMS_THRE: 1.0
  OKS_THRE: 0.9
  POST_PROCESS: True
  SHIFT_HEATMAP: True
  SOFT_NMS: False
  USE_GT_BBOX: False
TRAIN:
  BATCH_SIZE_PER_GPU: 32
  BEGIN_EPOCH: 0
  CHECKPOINT:
  END_EPOCH: 210
  GAMMA1: 0.99
  GAMMA2: 0.0
  LR: 0.001
  LR_FACTOR: 0.1
  LR_STEP: [170, 200]
  MOMENTUM: 0.9
  NESTEROV: False
  OPTIMIZER: adam
  RESUME: False
  SHUFFLE: True
  WD: 0.0001
  WORKERS: 24
=> loading model from models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth
Traceback (most recent call last):
  File "tools/test.py", line 131, in <module>
    main()
  File "tools/test.py", line 90, in main
    model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False)
  File "/anaconda/envs/azureml_py36/lib/python3.6/site-packages/torch/serialization.py", line 585, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/anaconda/envs/azureml_py36/lib/python3.6/site-packages/torch/serialization.py", line 772, in _legacy_load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: storage has wrong size: expected 0 got 128
Any idea how to solve this error? Thanks.
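For what it's worth, the traceback shows the failure happens inside torch.load itself, before any weights reach the model, which usually points to a truncated or corrupted .pth file (for example, an interrupted download, or a Git LFS pointer file fetched instead of the real binary). Here is a minimal sketch, assuming that is the cause, for checking the checkpoint in isolation:

```python
# Sanity check: load the .pth file by itself, independent of the HRNet code.
import os
import torch

ckpt_path = "models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth"

# A broken download (e.g., an LFS pointer or an HTML error page) is only a
# few hundred bytes, while a real HRNet-W32 checkpoint is on the order of
# 100 MB. A tiny size means the file never downloaded completely.
print(f"checkpoint size: {os.path.getsize(ckpt_path) / 1e6:.1f} MB")

# If the file is intact, this returns an OrderedDict of weight tensors;
# a truncated file raises the same "storage has wrong size" error here.
state_dict = torch.load(ckpt_path, map_location="cpu")
print(f"loaded {len(state_dict)} tensors")
```

If this standalone load raises the same error, re-downloading the model file and comparing its size against the published checkpoint is worth trying before changing any code.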
I'm trying to do the same thing, though in a slightly different way, and I'm facing the exact same error. Did you ever find a way to solve the problem? If so, could you tell me how you did it, please? If you still remember, of course! Thank you in advance.