
no accuracy metrics while training

Open cpoptic opened this issue 5 years ago • 58 comments

In model.py we see the Keras model compile call:

        self.keras_model.compile(
            optimizer=optimizer,
            loss=[None] * len(self.keras_model.outputs),
            metrics=['accuracy'])

And then the Mask R-CNN loss metrics are added:

        # Add metrics for losses
        for name in loss_names:
            if name in self.keras_model.metrics_names:
                continue
            layer = self.keras_model.get_layer(name)
            self.keras_model.metrics_names.append(name)
            loss = (
                tf.reduce_mean(layer.output, keepdims=True)
                * self.config.LOSS_WEIGHTS.get(name, 1.))
            self.keras_model.metrics_tensors.append(loss)
        print("metrics: ", self.keras_model.metrics_names)

But when training the model, neither train_accuracy nor val_accuracy is reported. How can these metrics be added and reported each epoch, alongside the mrcnn loss metrics?


cpoptic avatar Oct 31 '19 15:10 cpoptic

Try using custom_callbacks. Computing accuracy takes a lot of time, so don't do it every epoch.

    mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(
        model, model_inference, dataset_val, calculate_map_at_every_X_epoch=5, verbose=1)

    model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE,
                epochs=100, layers='heads',
                custom_callbacks=[mean_average_precision_callback])

VtlNmnk avatar Nov 04 '19 16:11 VtlNmnk

Also add this somewhere in model.py, for example at the end of the file:

############################################################
#  Custom Callbacks
############################################################

class MeanAveragePrecisionCallback(Callback):
    def __init__(self, train_model: MaskRCNN, inference_model: MaskRCNN, dataset: Dataset,
                 calculate_map_at_every_X_epoch=5, dataset_limit=None,
                 verbose=1):
        super().__init__()
        self.train_model = train_model
        self.inference_model = inference_model
        self.dataset = dataset
        self.calculate_map_at_every_X_epoch = calculate_map_at_every_X_epoch
        self.dataset_limit = len(self.dataset.image_ids)
        if dataset_limit is not None:
            self.dataset_limit = dataset_limit
        self.dataset_image_ids = self.dataset.image_ids.copy()

        if inference_model.config.BATCH_SIZE != 1:
            raise ValueError("This callback only works with the batch size of 1")

        self._verbose_print = print if verbose > 0 else lambda *a, **k: None

    def on_epoch_end(self, epoch, logs=None):

        if epoch > 2 and (epoch+1)%self.calculate_map_at_every_X_epoch == 0:
            self._verbose_print("Calculating mAP...")
            self._load_weights_for_model()

            mAPs = self._calculate_mean_average_precision()
            mAP = np.mean(mAPs)

            if logs is not None:
                logs["val_mean_average_precision"] = mAP

            self._verbose_print("mAP at epoch {0} is: {1}".format(epoch+1, mAP))

        super().on_epoch_end(epoch, logs)

    def _load_weights_for_model(self):
        last_weights_path = self.train_model.find_last()
        self._verbose_print("Loaded weights for the inference model (last checkpoint of the train model): {0}".format(
            last_weights_path))
        self.inference_model.load_weights(last_weights_path,
                                          by_name=True)

    def _calculate_mean_average_precision(self):
        mAPs = []

        # Use a random subset of the data when a limit is defined
        np.random.shuffle(self.dataset_image_ids)

        for image_id in self.dataset_image_ids[:self.dataset_limit]:
            image, image_meta, gt_class_id, gt_bbox, gt_mask = load_image_gt(self.dataset, self.inference_model.config,
                                                                             image_id, use_mini_mask=False)
            molded_images = np.expand_dims(mold_image(image, self.inference_model.config), 0)
            results = self.inference_model.detect(molded_images, verbose=0)
            r = results[0]
            # Compute mAP - VOC uses IoU 0.5
            AP, _, _, _ = utils.compute_ap(gt_bbox, gt_class_id, gt_mask, r["rois"],
                                           r["class_ids"], r["scores"], r['masks'])
            mAPs.append(AP)

        return np.array(mAPs)
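One thing this class assumes: it lives inside model.py, where np, MaskRCNN, load_image_gt, mold_image, and utils are already in scope. Two names are presumably not defined there and need imports near the top of the file; several NameError reports further down this thread ("Callback is not defined", "Dataset not defined") trace back to exactly this:

    # Assumed imports for the callback above (package layout as in the
    # matterport repo, where model.py sits inside the mrcnn package):
    from keras.callbacks import Callback
    from mrcnn.utils import Dataset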

VtlNmnk avatar Nov 04 '19 16:11 VtlNmnk

Hello guys, thanks for sharing the code for accuracy metrics. I tried to implement it, but I have errors :( I pasted the MeanAveragePrecisionCallback class at the end of the model.py code.

[screenshot: model_py]

Then, in train.py I defined the variable like you did @VtlNmnk: mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(model, model_inference, dataset_val, calculate_map_at_every_X_epoch=5, verbose=1)

[screenshot: train_py]

But when I launch my training, I have this error:

    Traceback (most recent call last):
      File "Taraudage.py", line 372, in <module>
        train(model)
      File "Taraudage.py", line 196, in train
        mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(train_model=model, inference_model=model_inference,
    NameError: name 'model_inference' is not defined

What variable do you expect for model_inference?

Thanks for your time. Regards, Antoine

nayzen avatar Jan 31 '20 16:01 nayzen

Then, in train.py I defined the variable like you did @VtlNmnk: mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(model, model_inference, dataset_val, calculate_map_at_every_X_epoch=5, verbose=1)

[screenshot: train_py]

I did not tell you to change anything in train.py :) This line needs to be added to the script from which you call train.py. Personally, I use the Google Colab from the guys, but with many of my own changes.
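For reference, a minimal sketch of that calling script, assuming names like config, MODEL_DIR, and the datasets are already set up (this mirrors the snippets posted later in the thread):

    import mrcnn.model as modellib

    # Two models over the same weights: one to train, one for the callback
    # to run detection with. The inference config must give BATCH_SIZE == 1.
    model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR)
    model_inference = modellib.MaskRCNN(mode="inference", config=config, model_dir=MODEL_DIR)

    mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(
        model, model_inference, dataset_val, calculate_map_at_every_X_epoch=5, verbose=1)

    model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE,
                epochs=100, layers='heads',
                custom_callbacks=[mean_average_precision_callback])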

VtlNmnk avatar Feb 03 '20 08:02 VtlNmnk

[screenshots] Here is an example of how this works for me.

VtlNmnk avatar Feb 03 '20 09:02 VtlNmnk

Hi,

I am sorry, but when I try to implement your code, the `dataset: Dataset` annotation in the callback's `__init__` comes up as "Dataset not defined".

Thank you for any suggestions :)

ben975 avatar Apr 04 '20 12:04 ben975

Hi @ben975,

You can use the class below to calculate mAP, precision, and recall for each image:

    import numpy as np
    import mrcnn.model as modellib
    from mrcnn.utils import compute_ap

    class EvalImage():
        def __init__(self, dataset, model, cfg):
            self.dataset = dataset
            self.model = model
            self.cfg = cfg

        def evaluate_model(self, limit=50):
            APs = list()
            precisions_dict = {}
            recall_dict = {}
            for index, image_id in enumerate(self.dataset.image_ids):
                if index > limit:
                    break
                # load image, bounding boxes and masks for the image id
                image, image_meta, gt_class_id, gt_bbox, gt_mask = modellib.load_image_gt(
                    self.dataset, self.cfg, image_id, use_mini_mask=False)
                # convert image into one sample
                sample = np.expand_dims(image, 0)
                # make prediction
                yhat = self.model.detect(sample, verbose=1)
                # extract results for first sample
                r = yhat[0]
                # calculate statistics, including AP
                AP, precisions, recalls, _ = compute_ap(gt_bbox, gt_class_id, gt_mask,
                                                        r["rois"], r["class_ids"],
                                                        r["scores"], r['masks'])
                precisions_dict[image_id] = np.mean(precisions)
                recall_dict[image_id] = np.mean(recalls)
                # store
                APs.append(AP)

            # calculate the mean AP across all images
            mAP = np.mean(APs)
            return mAP, precisions_dict, recall_dict

Usage:

Prepare the data set:

    dataset_val = Dataset()
    dataset_val.load_data_set("dataset name", "val_data")
    dataset_val.prepare()

Create a config object and load the model:

    config = DataConfig()
    eval = EvalImage(dataset_val, model, config)

Call eval.evaluate_model() and you will get the results.
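A hedged sketch of the setup this usage assumes: model must be a MaskRCNN built in inference mode with weights loaded (MODEL_DIR and the DataConfig/Dataset subclasses are placeholders from the snippets above):

    import mrcnn.model as modellib

    # Inference-mode model with the last trained checkpoint loaded.
    model = modellib.MaskRCNN(mode="inference", config=config, model_dir=MODEL_DIR)
    model.load_weights(model.find_last(), by_name=True)

    eval = EvalImage(dataset_val, model, config)
    mAP, precisions_dict, recall_dict = eval.evaluate_model()
    print("mAP:", mAP)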

mhtarora39 avatar Apr 04 '20 13:04 mhtarora39

Wow, thanks for the really fast reply! I am currently training, but it will be done soon and I will have a go! I have searched for ages for this question to be answered. Will report back shortly! :)

ben975 avatar Apr 04 '20 14:04 ben975

Hi @ben975,

You can use below class to calculate MAP, precision, recall for each image

[… class and usage quoted verbatim from the previous comment …]

Hi @mhtarora39, I'm sorry, but when I run it I seem to get:

File "", line 1, in eval.evaluate_model()

File "", line 28, in evaluate_model AP, precisions, recalls, _ = compute_ap(gt_bbox, gt_class_id, gt_mask, r["rois"], r["class_ids"], r["scores"], r['masks'])

NameError: name 'compute_ap' is not defined

Thanks again for your help; I am new to Mask R-CNNs.

ben975 avatar Apr 04 '20 14:04 ben975

Sorry, I didn't mention it: please import compute_ap with `from mrcnn.utils import compute_ap`. I am also updating the code above with the import; let me know if you encounter any other issue and I will update the code accordingly.

mhtarora39 avatar Apr 04 '20 15:04 mhtarora39

Seems to be running well, thank you so much!

ben975 avatar Apr 04 '20 15:04 ben975

[quoting @VtlNmnk's example screenshots above]

Hi @VtlNmnk, I have made the changes you mentioned and was able to run the code without any errors, but when I start training, mAP is not printed. Training command: python custom.py train --dataset=customImages --weights=coco

Please find the output: [screenshot: mAP]

hardikmanek avatar Apr 05 '20 02:04 hardikmanek

Hi, @hardikmanek! Can you show the code where you initialize the callback? Without additional information, I can't help you.

VtlNmnk avatar Apr 06 '20 13:04 VtlNmnk

Hi @VtlNmnk, please find the attached screenshot. [screenshot: mAP_error1]

I am passing the training command as a command-line argument. [screenshot: mAP_error2]

This is the initial code of the function, copied into the last part of model.py. [screenshot: mAP_error3]

By the way, the complete execution command for the program is: python custom.py train --dataset=customImages --weights=coco

I got the mAP value after the 9th epoch, and it is 0.3; not sure what's wrong. [screenshot: mAP_error5]

Thank you so much.

hardikmanek avatar Apr 06 '20 14:04 hardikmanek

Can anyone in this thread explain how to get the loss output logging that we see in hardikmanek's screenshot? I'm talking about the various losses logged to stdout; I'm only seeing the total loss and none of the rest (like rpn_class_loss, rpn_bbox_loss, etc.).

dreichCSL avatar Apr 15 '20 08:04 dreichCSL

Hi @VtlNmnk, I used your code but got this error:

    Traceback (most recent call last):
      File "D:/study/Mask_RCNN-2.0/train_test.py", line 16, in <module>
        import model as modellib
      File "D:\study\Mask_RCNN-2.0\model.py", line 2769, in <module>
        import train_test
      File "D:\study\Mask_RCNN-2.0\train_test.py", line 225, in <module>
        mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(model,
    AttributeError: module 'model' has no attribute 'MeanAveragePrecisionCallback'

model.py:

    from keras.callbacks import Callback
    import train_test
    Dataset = train_test.DrugDataset()

    class MeanAveragePrecisionCallback(Callback):
        def __init__(self, train_model: MaskRCNN, inference_model=MaskRCNN, dataset=Dataset,
                     calculate_map_at_every_X_epoch=1, dataset_limit=None,
                     verbose=1):
            super().__init__()
            self.train_model = train_model
            self.inference_model = inference_model
            self.dataset = dataset
            self.calculate_map_at_every_X_epoch = calculate_map_at_every_X_epoch
            self.dataset_limit = len(self.dataset.image_ids)
            if dataset_limit is not None:
                self.dataset_limit = dataset_limit
            self.dataset_image_ids = self.dataset.image_ids.copy()

            if inference_model.config.BATCH_SIZE != 1:
                raise ValueError("This callback only works with the batch size of 1")

            self._verbose_print = print if verbose > 0 else lambda *a, **k: None

        # ... on_epoch_end, _load_weights_for_model and
        # _calculate_mean_average_precision are identical to the version
        # posted above ...

my train.py:

    model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR)
    model_inference = modellib.MaskRCNN(mode="inference", config=config, model_dir=MODEL_DIR)
    mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(
        model, model_inference, dataset=dataset_val,
        calculate_map_at_every_X_epoch=1, verbose=1)
    model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE,
                epochs=1, layers='heads',
                custom_callbacks=[mean_average_precision_callback])

thanks a lot!!

zrc9627 avatar Apr 30 '20 06:04 zrc9627

how to get the loss output logging that we see in hardikmanek's screenshot?

set verbose = 1

VtlNmnk avatar Apr 30 '20 11:04 VtlNmnk

[quoting the flattened model.py code from above]

You pasted your model.py code in a way that loses the markup, so the indentation is not visible :) I think either an invalid path to the model.py file is specified, or the indentation in the model.py file itself is wrong.

VtlNmnk avatar Apr 30 '20 11:04 VtlNmnk

I got the mAP value after 9th Epoch which is 0.3, not sure what's wrong

Sorry for the late reply. Well, the mAP was calculated, but not after the third epoch. Perhaps there were no saved models to compute it from earlier? Check how often your models are saved. (Note also that the callback's condition, epoch > 2 and (epoch + 1) % X == 0, uses Keras's zero-based epoch index, so the first mAP can legitimately appear a few epochs later than X suggests.)

VtlNmnk avatar Apr 30 '20 11:04 VtlNmnk

I customized the https://github.com/matterport/Mask_RCNN.git repository to train with my own data set for object detection, ignoring the mask segmentation part. Now I am evaluating my results. I can calculate the mAP, but I cannot calculate the F1 score. I have this function, compute_ap, from https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/utils.py, which returns mAP, precisions, recalls, and overlaps for each image. The point is that I cannot apply the F1 score formula directly, because the variables precisions and recalls are lists.


def compute_ap(gt_boxes, gt_class_ids, gt_masks,
               pred_boxes, pred_class_ids, pred_scores, pred_masks,
               iou_threshold=0.5):

    # Get matches and overlaps
    gt_match, pred_match, overlaps = compute_matches(
        gt_boxes, gt_class_ids, gt_masks,
        pred_boxes, pred_class_ids, pred_scores, pred_masks,
        iou_threshold)

    # Compute precision and recall at each prediction box step
    precisions = np.cumsum(pred_match > -1) / (np.arange(len(pred_match)) + 1)
    recalls = np.cumsum(pred_match > -1).astype(np.float32) / len(gt_match)

    # Pad with start and end values to simplify the math
    precisions = np.concatenate([[0], precisions, [0]])
    recalls = np.concatenate([[0], recalls, [1]])

    # Ensure precision values decrease but don't increase. This way, the
    # precision value at each recall threshold is the maximum it can be
    # for all following recall thresholds, as specified by the VOC paper.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = np.maximum(precisions[i], precisions[i + 1])

    # Compute mean AP over recall range
    indices = np.where(recalls[:-1] != recalls[1:])[0] + 1
    mAP = np.sum((recalls[indices] - recalls[indices - 1]) *
                 precisions[indices])

    return mAP, precisions, recalls, overlaps
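Since precisions and recalls are curves over detection ranks rather than scalars, a scalar F1 needs a choice of operating point. A minimal sketch of one common choice, the point-wise maximum F1 along the curve (assuming gt_* and r come from load_image_gt and detect() as in the snippets above):

    import numpy as np
    from mrcnn.utils import compute_ap

    AP, precisions, recalls, overlaps = compute_ap(
        gt_bbox, gt_class_id, gt_mask,
        r["rois"], r["class_ids"], r["scores"], r["masks"])

    # Point-wise F1 along the precision/recall curve; the epsilon guards
    # against 0/0 at the padded endpoints.
    f1_scores = 2 * precisions * recalls / np.maximum(precisions + recalls, 1e-8)
    f1 = f1_scores.max()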

WillianaLeite avatar May 09 '20 19:05 WillianaLeite

Can anyone in this thread explain how to get the loss output logging that we see in hardikmanek's screenshot? I'm talking about the various losses logged to stdout; I'm only seeing the total loss and none of the rest (like rpn_class_loss, rpn_bbox_loss, etc.).

I have tried all the solutions and I'm facing the same issue. Any solution?

rupa1118 avatar May 18 '20 05:05 rupa1118

[quoting @hardikmanek's post and screenshots from above]

Hello, I have tried this and I'm getting the mAP value as 0 after every 3 epochs. Any suggestions why the mAP value is zero?

[screenshot: Screenshot (408)]

rupa1118 avatar May 18 '20 05:05 rupa1118

@rupa1118 I couldn't get the loss outputs either. At this point I'm suspecting it might have something to do with the tensorflow/keras version you're using; maybe some older version prints these values. However, I'm not going to change the version just for printing the loss. You could also try writing a custom callback to print these numbers, I guess. (I haven't tried that; instead I put some tf.Print lines in the code to see the losses for debugging, which is ugly but fast.)
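For what it's worth, the tf.Print trick mentioned above might look roughly like this (TF 1.x API; which loss tensor to wrap and where to hook it into model.py are up to you):

    # Inside model.py, where tf is already imported: wrap a loss tensor so
    # its value is printed to stderr every time it is evaluated.
    rpn_class_loss = tf.Print(rpn_class_loss, [rpn_class_loss],
                              message="rpn_class_loss: ")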

dreichCSL avatar May 18 '20 09:05 dreichCSL

@rupa1118 Does the balloon example work for you? I would try to first add the mAP to a known working example, and add the changes one at a time. Perhaps you have something wrong with the masks, as your val_loss also looks strange.

VtlNmnk avatar May 19 '20 07:05 VtlNmnk

Hello, I'm trying to reproduce your code, @VtlNmnk. When I start training I get the error:

    Using TensorFlow backend.
    Traceback (most recent call last):
      File "/content/Mask_RCNN/samples/metal_blanco/metal_blanco.py", line 45, in <module>
        from mrcnn import model as modellib, utils
      File "<frozen importlib._bootstrap>", line 971, in _find_and_load
      File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
      File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
      File "/usr/local/lib/python3.6/dist-packages/mask_rcnn-2.1-py3.6.egg/mrcnn/model.py", line 2876, in <module>
    NameError: name 'Callback' is not defined

This is my model.py file:

    ############################################################
    #  Custom Callbacks
    ############################################################

    from keras.callbacks import Callback

    class MeanAveragePrecisionCallback(Callback):
        def __init__(self, train_model: MaskRCNN, inference_model: MaskRCNN, dataset: Dataset,
                     calculate_map_at_every_X_epoch=3, dataset_limit=None,
                     verbose=1):
            super().__init__()
            self.train_model = train_model
            self.inference_model = inference_model
            self.dataset = dataset
            # ... (rest of the class as posted above) ...

And this is my file that starts training:

    def train(model):
        """Train the model."""
        # Training dataset.
        dataset_train = metal_blancoDataset()
        dataset_train.load_metal_blanco(args.dataset, "train")
        dataset_train.prepare()

        # Validation dataset
        dataset_val = metal_blancoDataset()
        dataset_val.load_metal_blanco(args.dataset, "val")
        dataset_val.prepare()

        model_inference = modellib.MaskRCNN(mode="inference", config=config,
                                            model_dir=args.logs)

        mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(
            model, model_inference, dataset_val,
            calculate_map_at_every_X_epoch=3, verbose=1)

        # *** This training schedule is an example. Update to your needs ***
        # Since we're using a very small dataset, and starting from
        # COCO trained weights, we don't need to train too long. Also,
        # no need to train all layers, just the heads should do it.
        print("Training network heads")
        model.train(dataset_train, dataset_val,
                    learning_rate=config.LEARNING_RATE,
                    epochs=30,
                    layers='heads',
                    custom_callbacks=[mean_average_precision_callback])

Any help will be appreciated

yoya93 avatar Jun 11 '20 18:06 yoya93

[quoting @yoya93's post above, ending in: NameError: name 'Callback' is not defined]

Hi @yoya93, try checking the train function in model.py: if it has custom_callbacks = None, remove that None, as this function will be receiving the callback as a parameter.
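Concretely, the relevant part of MaskRCNN.train in the current matterport model.py looks roughly like this (older forks lack the custom_callbacks parameter entirely, which is presumably the situation being described; body abridged):

    def train(self, train_dataset, val_dataset, learning_rate, epochs, layers,
              augmentation=None, custom_callbacks=None, no_augmentation_sources=None):
        # ... (data generators and layer selection abridged) ...
        callbacks = [
            keras.callbacks.TensorBoard(log_dir=self.log_dir, histogram_freq=0,
                                        write_graph=True, write_images=False),
            keras.callbacks.ModelCheckpoint(self.checkpoint_path, verbose=0,
                                            save_weights_only=True),
        ]
        # Add custom callbacks to the list
        if custom_callbacks:
            callbacks += custom_callbacks
        # ... (compile and fit_generator call abridged) ...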

rupa1118 avatar Jun 12 '20 02:06 rupa1118

Hello @VtlNmnk thank you very much for your response. I am currently facing another problem. Training throws me this error:

Traceback (most recent call last): File "/content/Mask_RCNN/samples/metal_blanco/metal_blanco.py", line 371, in train(model) File "/content/Mask_RCNN/samples/metal_blanco/metal_blanco.py", line 195, in train inference_model=model_inference, dataset=dataset_val, calculate_map_at_every_X_epoch=5, verbose=1) File "/usr/local/lib/python3.6/dist-packages/mask_rcnn-2.1-py3.6.egg/mrcnn/model.py", line 2893, in init ValueError: This callback only works with the bacth size of 1

For this reason I configured my config.py like this:

    class Config(object):
        """Base configuration class. For custom configurations, create a
        sub-class that inherits from this one and override properties
        that need to be changed.
        """
        # Name the configurations. For example, 'COCO', 'Experiment 3', ...etc.
        # Useful if your code needs to do things differently depending on which
        # experiment is running.
        NAME = None  # Override in sub-classes

        # NUMBER OF GPUs to use. When using only a CPU, this needs to be set to 1.
        GPU_COUNT = 1

        # Number of images to train with on each GPU. A 12GB GPU can typically
        # handle 2 images of 1024x1024px.
        # Adjust based on your GPU memory and image sizes. Use the highest
        # number that your GPU can handle for best performance.
        IMAGES_PER_GPU = 1

        # ...

        def __init__(self):
            """Set values of computed attributes."""
            # Effective batch size
            self.BATCH_SIZE = self.IMAGES_PER_GPU * self.GPU_COUNT

This should make the batch size 1, but when I print it on the screen, it is still 2.

And if I hard-code the batch size to 1, like this:

    def __init__(self):
        """Set values of computed attributes."""
        # Effective batch size
        self.BATCH_SIZE = 1

it prints the error:

    ValueError: slice index 1 of dimension 0 out of bounds. for 'ROI/strided_slice_12' (op: 'StridedSlice') with input shapes: [1,261888,4], [1], [1], [1] and with computed input tensors: input[1] = <1>, input[2] = <2>, input[3] = <1>.

If I ignore the line that checks the BATCH_SIZE:

    if inference_model.config.BATCH_SIZE != 1:

the system starts training, but throws another error when calculating the mAP.

Thank you very much in advance. Regards.

yoya93 avatar Jun 12 '20 20:06 yoya93

Try using custom_callbacks. Counting accuracy takes a lot of time, so don't use it every epoch.

mean_average_precision_callback = modellib.MeanAveragePrecisionCallback(model, model_inference, dataset_val, calculate_map_at_every_X_epoch=5, verbose=1)

model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=100, layers='heads', custom_callbacks=[mean_average_precision_callback])

Thank you for sharing this code. I have a small question: why is there only dataset_val in the mean_average_precision_callback?

Juuustin avatar Jun 16 '20 10:06 Juuustin

@yoya93 and other guys, advice for any project: first get to a known working state, then add your changes one at a time, checking after each one whether it still works. To reproduce the Mask R-CNN project from Matterport, first repeat their project as it is and get the same result. Then, once you have a working project, copy it, change the input data to your own, change the number of classes, etc., and after each change check that nothing is broken. Regarding your specific question: the source code of the Config class does not need to be changed; it is enough to subclass it correctly from your own code. Here is an example of my subclass. Naturally, you first need from mrcnn.config import Config.

[screenshot: inference config subclass]
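A hedged reconstruction of the idea shown in that screenshot (the class name _InfConfig comes from the next snippet; the overridden values are illustrative):

    from mrcnn.config import Config

    class _InfConfig(Config):
        NAME = "inference"
        GPU_COUNT = 1
        IMAGES_PER_GPU = 1  # BATCH_SIZE = GPU_COUNT * IMAGES_PER_GPU = 1,
                            # which is what MeanAveragePrecisionCallback requires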

Then you can see the properties of your configuration for inference:

    config = _InfConfig()
    config.display()

[screenshot: config.display() output]

VtlNmnk avatar Jun 17 '20 06:06 VtlNmnk

Hey @Juuustin. I divide the complete dataset into 3 sets: train, test, and val (60%/30%/10%). The first two are used when training the network, and the third is used to gauge accuracy on data the network does not see during training.

VtlNmnk avatar Jun 17 '20 06:06 VtlNmnk