Separate performance results for val_evaluator with multiple validation sets
Hi, in my project I have two validation sets that I need to evaluate separately, i.e. I want to monitor their respective evaluation metrics on the console panel.
I have tried using ConcatDataset to evaluate the trained model on the two datasets simultaneously, but it seems that mmdetection only concatenates the two datasets and does not report their evaluation metrics separately. Is there a way to output them separately? I know that mmdetection 2.x had a separate_eval argument that could do this, but it is no longer available in mmdetection 3.x. Is there a 3.x equivalent of separate_eval?
Below is the code I am currently running:
val_dataloader = dict(
    batch_size=32,
    num_workers=5,
    dataset=dict(
        type='ConcatDataset',
        datasets=[dataset_A_val, dataset_B_val]),
    sampler=dict(type='DefaultSampler', shuffle=False),
)
Did you solve this problem?
Same issue here.
Hi, I also needed a similar feature, and I found that you can provide a list of evaluators in the config. Something like this:
val_evaluator = [
    dict(
        type='CocoMetric',
        ann_file='ann_file1.json',
        metric='bbox',
        classwise=True,
        proposal_nums=[1, 10, 100],
        format_only=False,
        backend_args=backend_args),
    dict(
        type='CocoMetric',
        ann_file='ann_file2.json',
        metric='bbox',
        classwise=True,
        proposal_nums=[1, 10, 100],
        format_only=False,
        backend_args=backend_args),
]
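One detail worth checking: with a plain list of metrics, both evaluators log keys with the same names (e.g. `coco/bbox_mAP`), which can make the console output ambiguous. MMEngine's `BaseMetric` accepts a `prefix` argument, so a sketch like the following (the prefix strings `dataset_A`/`dataset_B` are my own, not from this thread) should keep the two datasets' numbers distinguishable in the logs:

```python
# Hedged sketch: give each CocoMetric a distinct `prefix` so the logged
# metric keys (e.g. dataset_A/bbox_mAP vs. dataset_B/bbox_mAP) do not clash.
val_evaluator = [
    dict(
        type='CocoMetric',
        ann_file='ann_file1.json',
        metric='bbox',
        prefix='dataset_A'),  # assumed prefix name, purely illustrative
    dict(
        type='CocoMetric',
        ann_file='ann_file2.json',
        metric='bbox',
        prefix='dataset_B'),  # assumed prefix name, purely illustrative
]
```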
Hope it will be helpful to someone.
Note: you may get an error along the lines of "Results do not belong to the COCO dataset" (I can't remember the exact message), caused by clashing image_id fields across the datasets you concatenated. The fix, as far as I remember, is to make the image_ids unique across the datasets.
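In case it helps, a minimal sketch of that image_id fix, assuming standard COCO-format annotation JSON: shift every id in one dataset by an offset larger than the highest image id in the other, so the concatenated ids are disjoint. The function name and the offset value are my own choices, not from mmdetection:

```python
import json


def offset_image_ids(coco: dict, offset: int) -> dict:
    """Add `offset` to every image id in a COCO-format annotation dict.

    Shifts both the `id` field of each entry in `images` and the
    `image_id` field of each entry in `annotations`, so the ids of two
    concatenated datasets can be made disjoint.
    """
    for img in coco.get("images", []):
        img["id"] += offset
    for ann in coco.get("annotations", []):
        ann["image_id"] += offset
    return coco


# Illustrative usage (file names are placeholders):
#   with open("ann_file2.json") as f:
#       coco_b = json.load(f)
#   coco_b = offset_image_ids(coco_b, offset=1_000_000)
#   with open("ann_file2_offset.json", "w") as f:
#       json.dump(coco_b, f)
```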