Scene-Graph-Benchmark.pytorch
Model fails (does not start) when classifying a custom image
❓ Questions and Help
Here's my system: a Docker image with GPU support, Ubuntu 18.04.
(base) root@43a59b70d445:/app/scene-graph-benchmark# nvidia-smi
Thu Sep 21 11:57:45 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3080 On | 00000000:08:00.0 On | N/A |
| 53% 27C P3 90W / 340W | 895MiB / 10240MiB | 14% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
(base) root@43a59b70d445:/app/scene-graph-benchmark# conda list | grep 'cudatoolkit\|cudnn'
(base) root@43a59b70d445:/app/scene-graph-benchmark# python -c "import torch; print(torch.__version__)"
1.4.0
(base) root@43a59b70d445:/app/scene-graph-benchmark# python -c "import torch; print(torch.cuda.is_available())"
True
(base) root@43a59b70d445:/app/scene-graph-benchmark# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
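In case it helps, here is a quick sanity check of what the PyTorch build itself reports (a minimal sketch using standard torch attributes; the exact output depends on the build):

import torch

# CUDA toolkit this PyTorch wheel was built against (10.1 for the torch 1.4.0 wheel)
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)

# cuDNN version bundled with / visible to this build, if any
print("cuDNN:", torch.backends.cudnn.version())

# Compute capability of the visible GPU (an RTX 3080 reports (8, 6), i.e. sm_86)
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))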
This is what I get when trying to run either SGDet, Original, MOTIFS Model, SUM Fusion or SGDet, Causal TDE, MOTIFS Model, SUM Fusion.
Error:
(base) root@43a59b70d445:/app/scene-graph-benchmark# CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10027 --nproc_per_node=1 tools/relation_test_net.py --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml" MODEL.ROI_RELATION_HEAD.USE_GT_BOX False MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False MODEL.ROI_RELATION_HEAD.PREDICTOR CausalAnalysisPredictor MODEL.ROI_RELATION_HEAD.CAUSAL.EFFECT_TYPE none MODEL.ROI_RELATION_HEAD.CAUSAL.FUSION_TYPE sum MODEL.ROI_RELATION_HEAD.CAUSAL.CONTEXT_LAYER motifs TEST.IMS_PER_BATCH 1 DTYPE "float16" GLOVE_DIR /app/scene-graph-benchmark/glove MODEL.PRETRAINED_DETECTOR_CKPT /home/kaihua/checkpoints/causal-motifs-sgdet OUTPUT_DIR /app/scene-graph-benchmark/upload_causal_motif_sgdet TEST.CUSTUM_EVAL True TEST.CUSTUM_PATH /app/scene-graph-benchmark/custom_images DETECTED_SGG_DIR /app/scene-graph-benchmark/custom_images
2023-09-21 11:58:35,785 maskrcnn_benchmark INFO: Using 1 GPUs
2023-09-21 11:58:35,785 maskrcnn_benchmark INFO: AMP_VERBOSE: False
DATALOADER:
ASPECT_RATIO_GROUPING: True
NUM_WORKERS: 4
SIZE_DIVISIBILITY: 32
DATASETS:
TEST: ('VG_stanford_filtered_with_attribute_test',)
TO_TEST: None
TRAIN: ('VG_stanford_filtered_with_attribute_train',)
VAL: ('VG_stanford_filtered_with_attribute_val',)
DETECTED_SGG_DIR: /app/scene-graph-benchmark/custom_images
DTYPE: float16
GLOVE_DIR: /app/scene-graph-benchmark/glove
INPUT:
BRIGHTNESS: 0.0
CONTRAST: 0.0
HUE: 0.0
MAX_SIZE_TEST: 1000
MAX_SIZE_TRAIN: 1000
MIN_SIZE_TEST: 600
MIN_SIZE_TRAIN: (600,)
PIXEL_MEAN: [102.9801, 115.9465, 122.7717]
PIXEL_STD: [1.0, 1.0, 1.0]
SATURATION: 0.0
TO_BGR255: True
VERTICAL_FLIP_PROB_TRAIN: 0.0
MODEL:
ATTRIBUTE_ON: False
BACKBONE:
CONV_BODY: R-101-FPN
FREEZE_CONV_BODY_AT: 2
CLS_AGNOSTIC_BBOX_REG: False
DEVICE: cuda
FBNET:
ARCH: default
ARCH_DEF:
BN_TYPE: bn
DET_HEAD_BLOCKS: []
DET_HEAD_LAST_SCALE: 1.0
DET_HEAD_STRIDE: 0
DW_CONV_SKIP_BN: True
DW_CONV_SKIP_RELU: True
KPTS_HEAD_BLOCKS: []
KPTS_HEAD_LAST_SCALE: 0.0
KPTS_HEAD_STRIDE: 0
MASK_HEAD_BLOCKS: []
MASK_HEAD_LAST_SCALE: 0.0
MASK_HEAD_STRIDE: 0
RPN_BN_TYPE:
RPN_HEAD_BLOCKS: 0
SCALE_FACTOR: 1.0
WIDTH_DIVISOR: 1
FLIP_AUG: False
FPN:
USE_GN: False
USE_RELU: False
GROUP_NORM:
DIM_PER_GP: -1
EPSILON: 1e-05
NUM_GROUPS: 32
KEYPOINT_ON: False
MASK_ON: False
META_ARCHITECTURE: GeneralizedRCNN
PRETRAINED_DETECTOR_CKPT: /home/kaihua/checkpoints/causal-motifs-sgdet
RELATION_ON: True
RESNETS:
BACKBONE_OUT_CHANNELS: 256
DEFORMABLE_GROUPS: 1
NUM_GROUPS: 32
RES2_OUT_CHANNELS: 256
RES5_DILATION: 1
STAGE_WITH_DCN: (False, False, False, False)
STEM_FUNC: StemWithFixedBatchNorm
STEM_OUT_CHANNELS: 64
STRIDE_IN_1X1: False
TRANS_FUNC: BottleneckWithFixedBatchNorm
WIDTH_PER_GROUP: 8
WITH_MODULATED_DCN: False
RETINANET:
ANCHOR_SIZES: (32, 64, 128, 256, 512)
ANCHOR_STRIDES: (8, 16, 32, 64, 128)
ASPECT_RATIOS: (0.5, 1.0, 2.0)
BBOX_REG_BETA: 0.11
BBOX_REG_WEIGHT: 4.0
BG_IOU_THRESHOLD: 0.4
FG_IOU_THRESHOLD: 0.5
INFERENCE_TH: 0.05
LOSS_ALPHA: 0.25
LOSS_GAMMA: 2.0
NMS_TH: 0.4
NUM_CLASSES: 81
NUM_CONVS: 4
OCTAVE: 2.0
PRE_NMS_TOP_N: 1000
PRIOR_PROB: 0.01
SCALES_PER_OCTAVE: 3
STRADDLE_THRESH: 0
USE_C5: True
RETINANET_ON: False
ROI_ATTRIBUTE_HEAD:
ATTRIBUTE_BGFG_RATIO: 3
ATTRIBUTE_BGFG_SAMPLE: True
ATTRIBUTE_LOSS_WEIGHT: 1.0
FEATURE_EXTRACTOR: FPN2MLPFeatureExtractor
MAX_ATTRIBUTES: 10
NUM_ATTRIBUTES: 201
POS_WEIGHT: 50.0
PREDICTOR: FPNPredictor
SHARE_BOX_FEATURE_EXTRACTOR: True
USE_BINARY_LOSS: True
ROI_BOX_HEAD:
CONV_HEAD_DIM: 256
DILATION: 1
FEATURE_EXTRACTOR: FPN2MLPFeatureExtractor
MLP_HEAD_DIM: 4096
NUM_CLASSES: 151
NUM_STACKED_CONVS: 4
POOLER_RESOLUTION: 7
POOLER_SAMPLING_RATIO: 2
POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)
PREDICTOR: FPNPredictor
USE_GN: False
ROI_HEADS:
BATCH_SIZE_PER_IMAGE: 256
BBOX_REG_WEIGHTS: (10.0, 10.0, 5.0, 5.0)
BG_IOU_THRESHOLD: 0.3
DETECTIONS_PER_IMG: 80
FG_IOU_THRESHOLD: 0.5
NMS: 0.3
NMS_FILTER_DUPLICATES: True
POSITIVE_FRACTION: 0.5
POST_NMS_PER_CLS_TOPN: 300
SCORE_THRESH: 0.01
USE_FPN: True
ROI_KEYPOINT_HEAD:
CONV_LAYERS: (512, 512, 512, 512, 512, 512, 512, 512)
FEATURE_EXTRACTOR: KeypointRCNNFeatureExtractor
MLP_HEAD_DIM: 1024
NUM_CLASSES: 17
POOLER_RESOLUTION: 14
POOLER_SAMPLING_RATIO: 0
POOLER_SCALES: (0.0625,)
PREDICTOR: KeypointRCNNPredictor
RESOLUTION: 14
SHARE_BOX_FEATURE_EXTRACTOR: True
ROI_MASK_HEAD:
CONV_LAYERS: (256, 256, 256, 256)
DILATION: 1
FEATURE_EXTRACTOR: ResNet50Conv5ROIFeatureExtractor
MLP_HEAD_DIM: 1024
POOLER_RESOLUTION: 14
POOLER_SAMPLING_RATIO: 0
POOLER_SCALES: (0.0625,)
POSTPROCESS_MASKS: False
POSTPROCESS_MASKS_THRESHOLD: 0.5
PREDICTOR: MaskRCNNC4Predictor
RESOLUTION: 14
SHARE_BOX_FEATURE_EXTRACTOR: True
USE_GN: False
ROI_RELATION_HEAD:
ADD_GTBOX_TO_PROPOSAL_IN_TRAIN: True
BATCH_SIZE_PER_IMAGE: 1024
CAUSAL:
CONTEXT_LAYER: motifs
EFFECT_ANALYSIS: True
EFFECT_TYPE: none
FUSION_TYPE: sum
SEPARATE_SPATIAL: False
SPATIAL_FOR_VISION: True
CONTEXT_DROPOUT_RATE: 0.2
CONTEXT_HIDDEN_DIM: 512
CONTEXT_OBJ_LAYER: 1
CONTEXT_POOLING_DIM: 4096
CONTEXT_REL_LAYER: 1
EMBED_DIM: 200
FEATURE_EXTRACTOR: RelationFeatureExtractor
LABEL_SMOOTHING_LOSS: False
NUM_CLASSES: 51
NUM_SAMPLE_PER_GT_REL: 4
POOLING_ALL_LEVELS: True
POSITIVE_FRACTION: 0.25
PREDICTOR: CausalAnalysisPredictor
PREDICT_USE_BIAS: True
PREDICT_USE_VISION: True
REL_PROP: [0.01858, 0.00057, 0.00051, 0.00109, 0.0015, 0.00489, 0.00432, 0.02913, 0.00245, 0.00121, 0.00404, 0.0011, 0.00132, 0.00172, 5e-05, 0.00242, 0.0005, 0.00048, 0.00208, 0.15608, 0.0265, 0.06091, 0.009, 0.00183, 0.00225, 0.0009, 0.00028, 0.00077, 0.04844, 0.08645, 0.31621, 0.00088, 0.00301, 0.00042, 0.00186, 0.001, 0.00027, 0.01012, 0.0001, 0.01286, 0.00647, 0.00084, 0.01077, 0.00132, 0.00069, 0.00376, 0.00214, 0.11424, 0.01205, 0.02958]
REQUIRE_BOX_OVERLAP: False
TRANSFORMER:
DROPOUT_RATE: 0.1
INNER_DIM: 2048
KEY_DIM: 64
NUM_HEAD: 8
OBJ_LAYER: 4
REL_LAYER: 2
VAL_DIM: 64
USE_GT_BOX: False
USE_GT_OBJECT_LABEL: False
RPN:
ANCHOR_SIZES: (32, 64, 128, 256, 512)
ANCHOR_STRIDE: (4, 8, 16, 32, 64)
ASPECT_RATIOS: (0.23232838, 0.63365731, 1.28478321, 3.15089189)
BATCH_SIZE_PER_IMAGE: 256
BG_IOU_THRESHOLD: 0.3
FG_IOU_THRESHOLD: 0.7
FPN_POST_NMS_PER_BATCH: False
FPN_POST_NMS_TOP_N_TEST: 1000
FPN_POST_NMS_TOP_N_TRAIN: 1000
MIN_SIZE: 0
NMS_THRESH: 0.7
POSITIVE_FRACTION: 0.5
POST_NMS_TOP_N_TEST: 1000
POST_NMS_TOP_N_TRAIN: 1000
PRE_NMS_TOP_N_TEST: 6000
PRE_NMS_TOP_N_TRAIN: 6000
RPN_HEAD: SingleConvRPNHead
RPN_MID_CHANNEL: 256
STRADDLE_THRESH: 0
USE_FPN: True
RPN_ONLY: False
VGG:
VGG16_OUT_CHANNELS: 512
WEIGHT: catalog://ImageNetPretrained/FAIR/20171220/X-101-32x8d
OUTPUT_DIR: /app/scene-graph-benchmark/upload_causal_motif_sgdet
PATHS_CATALOG: /app/scene-graph-benchmark/maskrcnn_benchmark/config/paths_catalog.py
PATHS_DATA: /app/scene-graph-benchmark/maskrcnn_benchmark/config/../data/datasets
SOLVER:
BASE_LR: 0.01
BIAS_LR_FACTOR: 1
CHECKPOINT_PERIOD: 2000
CLIP_NORM: 5.0
GAMMA: 0.1
GRAD_NORM_CLIP: 5.0
IMS_PER_BATCH: 16
MAX_ITER: 40000
MOMENTUM: 0.9
PRE_VAL: True
PRINT_GRAD_FREQ: 4000
SCHEDULE:
COOLDOWN: 0
FACTOR: 0.1
MAX_DECAY_STEP: 3
PATIENCE: 2
THRESHOLD: 0.001
TYPE: WarmupReduceLROnPlateau
STEPS: (10000, 16000)
TO_VAL: True
UPDATE_SCHEDULE_DURING_LOAD: False
VAL_PERIOD: 2000
WARMUP_FACTOR: 0.1
WARMUP_ITERS: 500
WARMUP_METHOD: linear
WEIGHT_DECAY: 0.0001
WEIGHT_DECAY_BIAS: 0.0
TEST:
ALLOW_LOAD_FROM_CACHE: False
BBOX_AUG:
ENABLED: False
H_FLIP: False
MAX_SIZE: 4000
SCALES: ()
SCALE_H_FLIP: False
CUSTUM_EVAL: True
CUSTUM_PATH: /app/scene-graph-benchmark/custom_images
DETECTIONS_PER_IMG: 100
EXPECTED_RESULTS: []
EXPECTED_RESULTS_SIGMA_TOL: 4
IMS_PER_BATCH: 1
RELATION:
IOU_THRESHOLD: 0.5
LATER_NMS_PREDICTION_THRES: 0.5
MULTIPLE_PREDS: False
REQUIRE_OVERLAP: False
SYNC_GATHER: True
SAVE_PROPOSALS: False
2023-09-21 11:58:35,785 maskrcnn_benchmark INFO: Collecting env info (might take some time)
2023-09-21 11:58:37,647 maskrcnn_benchmark INFO:
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.6 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.10.2
Python version: 3.8
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.20.1
[pip3] numpydoc==1.1.0
[pip3] torch==1.4.0
[pip3] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py38h27cfd23_1
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] torch 1.4.0 pypi_0 pypi
[conda] torchvision 0.5.0 pypi_0 pypi
Pillow (8.2.0)
2023-09-21 11:58:39,524 maskrcnn_benchmark.data.build INFO: ----------------------------------------------------------------------------------------------------
2023-09-21 11:58:39,524 maskrcnn_benchmark.data.build INFO: get dataset statistics...
2023-09-21 11:58:39,524 maskrcnn_benchmark.data.build INFO: Loading data statistics from: /app/scene-graph-benchmark/upload_causal_motif_sgdet/VG_stanford_filtered_with_attribute_train_statistics.cache
2023-09-21 11:58:39,524 maskrcnn_benchmark.data.build INFO: ----------------------------------------------------------------------------------------------------
loading word vectors from /app/scene-graph-benchmark/glove/glove.6B.200d.pt
__background__ -> __background__
fail on __background__
loading word vectors from /app/scene-graph-benchmark/glove/glove.6B.200d.pt
__background__ -> __background__
fail on __background__
INIT SAVE DIR /app/scene-graph-benchmark/upload_causal_motif_sgdet
get_checkpoint_file /app/scene-graph-benchmark/upload_causal_motif_sgdet/last_checkpoint
last_saved /app/scene-graph-benchmark/upload_causal_motif_sgdet/model_0028000.pth
2023-09-21 11:58:41,861 maskrcnn_benchmark.utils.checkpoint INFO: Loading checkpoint from /app/scene-graph-benchmark/upload_causal_motif_sgdet/model_0028000.pth
50%|█████████████████████████████████████████████████████████████████▌ | 1/2 [00:00<00:00, 9.66it/s]Skipping non-image file: custom_data_info.json
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 19.29it/s]
=====> /app/scene-graph-benchmark/custom_images/custom_data_info.json SAVED !
2023-09-21 11:58:42,846 maskrcnn_benchmark.inference INFO: Start evaluation on VG_stanford_filtered_with_attribute_test dataset(1 images).
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "tools/relation_test_net.py", line 123, in <module>
main()
File "tools/relation_test_net.py", line 107, in main
inference(
File "/app/scene-graph-benchmark/maskrcnn_benchmark/engine/inference.py", line 110, in inference
predictions = compute_on_dataset(model, data_loader, device, synchronize_gather=cfg.TEST.RELATION.SYNC_GATHER, timer=inference_timer)
File "/app/scene-graph-benchmark/maskrcnn_benchmark/engine/inference.py", line 34, in compute_on_dataset
output = model(images.to(device), targets)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/app/scene-graph-benchmark/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py", line 49, in forward
features = self.backbone(images.tensors)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/app/scene-graph-benchmark/maskrcnn_benchmark/modeling/backbone/resnet.py", line 149, in forward
x = getattr(self, stage_name)(x)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/app/scene-graph-benchmark/maskrcnn_benchmark/modeling/backbone/resnet.py", line 331, in forward
out = self.conv2(out)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/app/scene-graph-benchmark/maskrcnn_benchmark/layers/misc.py", line 33, in forward
return super(Conv2d, self).forward(x)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 341, in conv2d_forward
return F.conv2d(input, weight, self.bias, self.stride,
File "/opt/conda/lib/python3.8/site-packages/apex/amp/wrap.py", line 28, in wrapper
return orig_fn(*new_args, **kwargs)
RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 258, in main
raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'tools/relation_test_net.py', '--local_rank=0', '--config-file', 'configs/e2e_relation_X_101_32_8_FPN_1x.yaml', 'MODEL.ROI_RELATION_HEAD.USE_GT_BOX', 'False', 'MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL', 'False', 'MODEL.ROI_RELATION_HEAD.PREDICTOR', 'CausalAnalysisPredictor', 'MODEL.ROI_RELATION_HEAD.CAUSAL.EFFECT_TYPE', 'none', 'MODEL.ROI_RELATION_HEAD.CAUSAL.FUSION_TYPE', 'sum', 'MODEL.ROI_RELATION_HEAD.CAUSAL.CONTEXT_LAYER', 'motifs', 'TEST.IMS_PER_BATCH', '1', 'DTYPE', 'float16', 'GLOVE_DIR', '/app/scene-graph-benchmark/glove', 'MODEL.PRETRAINED_DETECTOR_CKPT', '/home/kaihua/checkpoints/causal-motifs-sgdet', 'OUTPUT_DIR', '/app/scene-graph-benchmark/upload_causal_motif_sgdet', 'TEST.CUSTUM_EVAL', 'True', 'TEST.CUSTUM_PATH', '/app/scene-graph-benchmark/custom_images', 'DETECTED_SGG_DIR', '/app/scene-graph-benchmark/custom_images']' returned non-zero exit status 1.
Can someone point me in the right direction?
It's probably that your CUDA version doesn't match: the project code uses CUDA 10.1, and a 3080 most likely isn't compatible with it.
Have you solved this? I'm running into the same problem.
Look at the main error here: cuDNN error: CUDNN_STATUS_MAPPING_ERROR. This is the primary error; it indicates a mapping error in the CUDA Deep Neural Network (cuDNN) library. cuDNN is NVIDIA's library for deep neural networks, and this error is usually related to GPU resource allocation or compatibility. It is most likely caused by a cuDNN build that is incompatible with your CUDA version: you are probably using a 30-series or newer card, but this project's code runs on an older CUDA version, and 30-series and newer cards only support CUDA 11 and above. I use a 2080 Ti, which avoids this problem.
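If you want to confirm the mismatch yourself, a check along these lines should show it (a rough sketch; torch.cuda.get_arch_list() only exists in newer PyTorch releases, so it is guarded here):

import torch

# An RTX 3080 has compute capability (8, 6), i.e. sm_86.
# CUDA 10.1 (and the torch 1.4.0 wheels built against it) predate sm_86,
# so the bundled cuDNN kernels cannot be mapped onto this GPU.
major, minor = torch.cuda.get_device_capability(0)
print("GPU architecture: sm_%d%d" % (major, minor))

# Newer PyTorch builds can list the architectures they were compiled for.
if hasattr(torch.cuda, "get_arch_list"):
    print("architectures in this build:", torch.cuda.get_arch_list())
else:
    print("torch", torch.__version__, "built against CUDA", torch.version.cuda)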
Hi, could we exchange contact info and discuss this in more detail? I'm running into quite a few setup problems.
Hi, the downloaded pretrained file VG_stanford_filtered_with_attribute_train_statistics.cache fails to load ("unable to load"). Do you know how to fix this?
I ran into this problem before too. Check whether your configured paths are correct. For example, my VG_stanford_filtered_with_attribute_train_statistics.cache is under /root/autodl-tmp/checkpoints/relation_motif_SGDet, so on the command line I have to specify OUTPUT_DIR /root/autodl-tmp/checkpoints/relation_motif_SGDet. Also check whether DATA_DIR of the DatasetCatalog class in maskrcnn_benchmark/config/paths_catalog.py is configured correctly.
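Before re-running the full command, you can also check that the cache file is really where OUTPUT_DIR points and that it loads. A minimal sketch (the output_dir below is my own path; replace it with yours):

import os
import torch

# Replace with whatever you pass as OUTPUT_DIR on the command line.
output_dir = "/root/autodl-tmp/checkpoints/relation_motif_SGDet"
cache = os.path.join(output_dir, "VG_stanford_filtered_with_attribute_train_statistics.cache")

print("exists:", os.path.exists(cache))
if os.path.exists(cache):
    # The statistics cache is normally written with torch.save, so torch.load should read it back.
    stats = torch.load(cache, map_location="cpu")
    print("type:", type(stats))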
Thank you very much for your help. After solving that problem, though, I now get: AttributeError: 'tqdm' object has no attribute 'disable'
That's a problem with the tqdm library; you may have installed the wrong version.
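A quick way to see which tqdm is actually installed in the environment (a trivial sketch):

import tqdm

# If this is a different version from what the project expects,
# reinstalling or pinning tqdm may clear the 'disable' AttributeError.
print(tqdm.__version__)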
Thanks a lot. Now I get:
File "/home/handofgod/anaconda3/envs/scene_graph/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/handofgod/Scene-Graph-Benchmark.pytorch-master/maskrcnn_benchmark/modeling/rpn/inference.py", line 142, in forward
sampled_boxes.append(self.forward_for_single_feature_map(a, o, b))
File "/home/handofgod/Scene-Graph-Benchmark.pytorch-master/maskrcnn_benchmark/modeling/rpn/inference.py", line 122, in forward_for_single_feature_map
score_field="objectness",
File "/home/handofgod/Scene-Graph-Benchmark.pytorch-master/maskrcnn_benchmark/structures/boxlist_ops.py", line 28, in boxlist_nms
keep = _box_nms(boxes, score, nms_thresh)
File "/home/handofgod/anaconda3/envs/scene_graph/lib/python3.6/site-packages/apex-0.1-py3.6.egg/apex/amp/amp.py", line 22, in wrapper
return orig_fn(*args, **kwargs)
RuntimeError: Not compiled with GPU support
Actually, you could just use Google.
Sorry, I've only just started learning SGG, so I'm not sure exactly how to go about that.
From your error, it looks like the compiled AMP/apex extension doesn't match your GPU, or the PyTorch build or apex was installed without GPU support. This project's code is quite old.
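One way to check whether the compiled extension actually has CUDA support is to call the NMS op from your traceback directly on a couple of GPU boxes. A rough sketch, assuming the maskrcnn_benchmark layers built by python setup.py build develop; if it fails with the same error, rebuild the project (and apex) in an environment where torch.cuda.is_available() is True and CUDA_HOME points at the toolkit:

import torch
from maskrcnn_benchmark.layers import nms as _box_nms  # same op boxlist_ops.py calls

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")

# On a GPU-enabled build this returns the indices of the kept boxes;
# on a CPU-only build it raises "Not compiled with GPU support" for CUDA tensors.
keep = _box_nms(boxes, scores, 0.7)
print("kept indices:", keep)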
Hi, are there any newer SGG projects you would recommend?
KaiHua's framework is probably the best-organized one out there. You can search online; someone on CSDN has deployed this project on a 3080, which should solve your problem: http://t.csdnimg.cn/aUaqv
Thank you very much!!!