Object detection API mAP values are 0 although my training data is good
1. The entire URL of the file you are using
https://github.com/tensorflow/models/tree/master/research/object_detection
2. Describe the bug
My training loss is very good, almost too good in places, but I accept that because the pictures are very similar. Unfortunately, during evaluation all my mAP and the other metrics come out equal to 0.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
W0912 15:59:00.009776 140088353853824 model_lib_v2.py:1089] Forced number of epochs for all eval validations to be 1.
INFO:tensorflow:Maybe overwriting sample_1_of_n_eval_examples: None
I0912 15:59:00.009941 140088353853824 config_util.py:552] Maybe overwriting sample_1_of_n_eval_examples: None
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0912 15:59:00.010002 140088353853824 config_util.py:552] Maybe overwriting use_bfloat16: False
INFO:tensorflow:Maybe overwriting eval_num_epochs: 1
I0912 15:59:00.010061 140088353853824 config_util.py:552] Maybe overwriting eval_num_epochs: 1
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1.
W0912 15:59:00.010136 140088353853824 model_lib_v2.py:1106] Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1.
2022-09-12 15:59:00.028241: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-09-12 15:59:00.962258: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7405 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1
2022-09-12 15:59:00.963072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 7078 MB memory: -> device: 1, name: NVIDIA GeForce GTX 1070, pci bus id: 0000:02:00.0, compute capability: 6.1
INFO:tensorflow:Reading unweighted datasets: ['./training_outlook_action_ctx/data/val.records']
I0912 15:59:01.027812 140088353853824 dataset_builder.py:162] Reading unweighted datasets: ['./training_outlook_action_ctx/data/val.records']
INFO:tensorflow:Reading record datasets for input file: ['./training_outlook_action_ctx/data/val.records']
I0912 15:59:01.027991 140088353853824 dataset_builder.py:79] Reading record datasets for input file: ['./training_outlook_action_ctx/data/val.records']
INFO:tensorflow:Number of filenames to read: 1
I0912 15:59:01.028057 140088353853824 dataset_builder.py:80] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0912 15:59:01.028110 140088353853824 dataset_builder.py:86] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE) instead. If sloppy execution is desired, use tf.data.Options.deterministic.
W0912 15:59:01.029739 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE) instead. If sloppy execution is desired, use tf.data.Options.deterministic.
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.map()
W0912 15:59:01.047504 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.map()
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a tf.sparse.SparseTensor and use tf.sparse.to_dense instead.
W0912 15:59:04.354701 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a tf.sparse.SparseTensor and use tf.sparse.to_dense instead.
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
W0912 15:59:05.460015 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
INFO:tensorflow:Waiting for new checkpoint at ./training_outlook_action_ctx/training_1
I0912 15:59:07.470988 140088353853824 checkpoint_utils.py:136] Waiting for new checkpoint at ./training_outlook_action_ctx/training_1
INFO:tensorflow:Found new checkpoint at ./training_outlook_action_ctx/training_1/ckpt-2
I0912 15:59:07.471778 140088353853824 checkpoint_utils.py:145] Found new checkpoint at ./training_outlook_action_ctx/training_1/ckpt-2
/home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/keras/backend.py:450: UserWarning: tf.keras.backend.set_learning_phase is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the training argument of the __call__ method of your layer or model.
warnings.warn('tf.keras.backend.set_learning_phase is deprecated and '
INFO:tensorflow:depth of additional conv before box predictor: 0
I0912 15:59:14.530708 140088353853824 convolutional_keras_box_predictor.py:152] depth of additional conv before box predictor: 0
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version.
Instructions for updating:
box_ind is deprecated, use box_indices instead
W0912 15:59:19.416570 140088353853824 deprecation.py:554] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version.
Instructions for updating:
box_ind is deprecated, use box_indices instead
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py:459: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
W0912 15:59:19.930708 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py:459: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
W0912 15:59:23.301795 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: batch_gather (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2017-10-25.
Instructions for updating:
tf.batch_gather is deprecated, please use tf.gather with batch_dims=tf.rank(indices) - 1 instead.
W0912 15:59:28.054638 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: batch_gather (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2017-10-25.
Instructions for updating:
tf.batch_gather is deprecated, please use tf.gather with batch_dims=tf.rank(indices) - 1 instead.
2022-09-12 15:59:36.943932: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8401
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
W0912 15:59:39.123106 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
INFO:tensorflow:Finished eval step 0
I0912 15:59:39.147901 140088353853824 model_lib_v2.py:966] Finished eval step 0
WARNING:tensorflow:From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py:459: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.
W0912 15:59:39.260100 140088353853824 deprecation.py:350] From /home/robotiq-c3po/anaconda3/envs/tf_2/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py:459: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.
INFO:tensorflow:Performing evaluation on 168 images.
I0912 16:00:17.393700 140088353853824 coco_evaluation.py:293] Performing evaluation on 168 images.
creating index...
index created!
INFO:tensorflow:Loading and preparing annotation results...
I0912 16:00:17.397668 140088353853824 coco_tools.py:116] Loading and preparing annotation results...
INFO:tensorflow:DONE (t=0.02s)
I0912 16:00:17.413546 140088353853824 coco_tools.py:138] DONE (t=0.02s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.71s).
Accumulating evaluation results...
DONE (t=0.14s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
INFO:tensorflow:Eval metrics at step 1000
I0912 16:00:18.290414 140088353853824 model_lib_v2.py:1015] Eval metrics at step 1000
INFO:tensorflow: + DetectionBoxes_Precision/mAP: 0.000000
I0912 16:00:18.291919 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Precision/mAP: 0.000000
INFO:tensorflow: + DetectionBoxes_Precision/mAP@.50IOU: 0.000000
I0912 16:00:18.292804 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Precision/mAP@.50IOU: 0.000000
INFO:tensorflow: + DetectionBoxes_Precision/mAP@.75IOU: 0.000000
I0912 16:00:18.293548 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Precision/mAP@.75IOU: 0.000000
INFO:tensorflow: + DetectionBoxes_Precision/mAP (small): 0.000000
I0912 16:00:18.294231 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Precision/mAP (small): 0.000000
INFO:tensorflow: + DetectionBoxes_Precision/mAP (medium): 0.000000
I0912 16:00:18.294910 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Precision/mAP (medium): 0.000000
INFO:tensorflow: + DetectionBoxes_Precision/mAP (large): 0.000000
I0912 16:00:18.295593 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Precision/mAP (large): 0.000000
INFO:tensorflow: + DetectionBoxes_Recall/AR@1: 0.000000
I0912 16:00:18.296273 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Recall/AR@1: 0.000000
INFO:tensorflow: + DetectionBoxes_Recall/AR@10: 0.000000
I0912 16:00:18.296947 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Recall/AR@10: 0.000000
INFO:tensorflow: + DetectionBoxes_Recall/AR@100: 0.000000
I0912 16:00:18.297632 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Recall/AR@100: 0.000000
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (small): 0.000000
I0912 16:00:18.298316 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Recall/AR@100 (small): 0.000000
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (medium): 0.000000
I0912 16:00:18.299822 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Recall/AR@100 (medium): 0.000000
INFO:tensorflow: + DetectionBoxes_Recall/AR@100 (large): 0.000000
I0912 16:00:18.301264 140088353853824 model_lib_v2.py:1018] + DetectionBoxes_Recall/AR@100 (large): 0.000000
INFO:tensorflow: + Loss/RPNLoss/localization_loss: 0.862616
I0912 16:00:18.301923 140088353853824 model_lib_v2.py:1018] + Loss/RPNLoss/localization_loss: 0.862616
INFO:tensorflow: + Loss/RPNLoss/objectness_loss: 0.017968
I0912 16:00:18.302563 140088353853824 model_lib_v2.py:1018] + Loss/RPNLoss/objectness_loss: 0.017968
INFO:tensorflow: + Loss/BoxClassifierLoss/localization_loss: 0.000000
I0912 16:00:18.303211 140088353853824 model_lib_v2.py:1018] + Loss/BoxClassifierLoss/localization_loss: 0.000000
INFO:tensorflow: + Loss/BoxClassifierLoss/classification_loss: 0.002990
I0912 16:00:18.303851 140088353853824 model_lib_v2.py:1018] + Loss/BoxClassifierLoss/classification_loss: 0.002990
INFO:tensorflow: + Loss/regularization_loss: 0.000000
I0912 16:00:18.304493 140088353853824 model_lib_v2.py:1018] + Loss/regularization_loss: 0.000000
INFO:tensorflow: + Loss/total_loss: 0.883575
I0912 16:00:18.305129 140088353853824 model_lib_v2.py:1018] + Loss/total_loss: 0.883575
INFO:tensorflow:Waiting for new checkpoint at ./training_outlook_action_ctx/training_1
3. Steps to reproduce
I start training with the command:
python model_main_tf2.py --pipeline_config_path=./training_outlook_action_ctx/training_1/pipeline.config --model_dir=./training_outlook_action_ctx/training_1 --alsologtostderr
After stopping training, I start evaluation with:
python model_main_tf2.py --pipeline_config_path=./training_outlook_action_ctx/training_1/pipeline.config --model_dir=./training_outlook_action_ctx/training_1 --checkpoint_dir=./training_outlook_action_ctx/training_1 --alsologtostderr
Values during training:
I0912 15:37:09.102276 140574688362880 model_lib_v2.py:708] {'Loss/BoxClassifierLoss/classification_loss': 0.114061415, 'Loss/BoxClassifierLoss/localization_loss': 0.09612781, 'Loss/RPNLoss/localization_loss': 0.16708645, 'Loss/RPNLoss/objectness_loss': 0.0012185145, 'Loss/regularization_loss': 0.0, 'Loss/total_loss': 0.3784942, 'learning_rate': 0.0026}
4. Expected behavior
I expect at least some non-zero values, even if they are very small.
5. Additional context
Here is my pipeline.config file:
```
# Faster R-CNN with Resnet
# Sync-trained on COCO (8 GPUs), initialized from Imagenet classification checkpoint
# TF2-Compatible, Not TPU-Compatible

model {
  faster_rcnn {
    num_classes: 7
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 800
        max_dimension: 1333
        pad_to_max_dimension: true
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet101_keras'
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer { l2_regularizer { weight: 0.0 } }
      initializer { truncated_normal_initializer { stddev: 0.01 } }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer { l2_regularizer { weight: 0.0 } }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}

train_config: {
  batch_size: 2
  num_steps: 200000
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: 0.01
          total_steps: 200000
          warmup_learning_rate: 0.0
          warmup_steps: 5000
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint_version: V2
  fine_tune_checkpoint: "/media/robotiq-c3po/HARD1T/Tensorflow2/models/research/object_detection/pretrained_models/faster_rcnn_resnet101_v1_800x1333_coco17_gpu-8/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
  data_augmentation_options { random_horizontal_flip { } }
  data_augmentation_options { random_adjust_hue { } }
  data_augmentation_options { random_adjust_contrast { } }
  data_augmentation_options { random_adjust_saturation { } }
  data_augmentation_options {
    random_square_crop_by_scale {
      scale_min: 0.6
      scale_max: 1.3
    }
  }
}

train_input_reader: {
  label_map_path: "./training_outlook_action_ctx/data/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "./training_outlook_action_ctx/data/train.records"
  }
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
  batch_size: 2;
}

eval_input_reader: {
  label_map_path: "./training_outlook_action_ctx/data/label_map.pbtxt"
  shuffle: false
  num_epochs: 2
  tf_record_input_reader {
    input_path: "./training_outlook_action_ctx/data/val.records"
  }
}
```
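An all-zero COCO mAP/AR with a low eval loss usually means the detections never match the ground truth at the required IoU, which can be a model issue (too little training, as the comments below suggest) or a data issue (wrong boxes or class ids in the eval record). A quick way to rule out the data side is to peek into val.records; below is a minimal sketch, assuming the records were written with the standard Object Detection API feature keys (image/object/bbox/xmin, image/object/class/label); a custom conversion script may use different keys.

```python
# Sanity-check sketch: print the number of ground-truth boxes and the class
# labels stored in the first few examples of the evaluation TFRecord.
# Assumes the standard TF Object Detection API feature keys.
import tensorflow as tf

VAL_RECORDS = "./training_outlook_action_ctx/data/val.records"

for i, raw in enumerate(tf.data.TFRecordDataset(VAL_RECORDS).take(5)):
    example = tf.train.Example()
    example.ParseFromString(raw.numpy())
    feats = example.features.feature
    num_boxes = len(feats["image/object/bbox/xmin"].float_list.value)
    labels = list(feats["image/object/class/label"].int64_list.value)
    print(f"example {i}: {num_boxes} ground-truth boxes, labels={labels}")
```

If the boxes are present and the labels match the ids in label_map.pbtxt, the eval data can probably be ruled out.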
6. System information
- OS Platform and Distribution: Debian GNU/Linux 11 (bullseye)
- TensorFlow installed from (source or binary): https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html
- TensorFlow version: v2.9.0-18-gd8ce9f9c301 2.9.1
- Python version: 3.9.12
- CUDA/cuDNN version: CUDA 11.7, cuDNN 8.4.1 (per the log above)
- GPU model and memory: 2x NVIDIA GeForce GTX 1070 (GP104), 8 GB each
Thanks in advance
I am suffering from similar issues
+1 from me on Faster R-CNN. With a similar setup and data, SSD works, but with Faster R-CNN I'm getting all zeros.
Please don't post all this information, it doesn't help us. Post only the specific parts necessary to guess where the issue is.
Hi, did you find any solution? @samchapman94 @Edi2410
I am also having the same issue. When I train my model with YOLOv8 it shows the correct mAP, but when I use Keras I always get 0.
@tamaratoma @Edi2410 @samchapman94 Hi, I found the solution. I just increased the number of training steps: it was around 2k and I changed it to 10k. Now I'm getting mAP@50 values of about 0.5, which is still not great, but better than 0.
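For anyone looking for where that change goes: the step count lives under train_config in pipeline.config (num_steps in the config posted above), and the sample configs usually keep total_steps of the cosine-decay schedule equal to it. A rough sketch of the edit, with the exact numbers obviously depending on your own dataset and schedule:

```
train_config: {
  batch_size: 2
  num_steps: 10000            # e.g. raised from 2000
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: 0.01
          total_steps: 10000  # usually kept in sync with num_steps
          warmup_learning_rate: 0.0
          warmup_steps: 1000  # keep warmup shorter than total_steps
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  # (rest of train_config unchanged)
}
```

If I remember correctly, model_main_tf2.py also accepts a --num_train_steps flag that overrides num_steps from the command line.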