Error when running mmdetection image_demo.py: unknown argument {'texts', 'custom_entities'} for `preprocess`, `forward`, `visualize` and `postprocess`
I downloaded the config file and the weights file in advance.
Command used: python image_demo.py demo.jpg rtmdet_tiny_8xb32-300e_coco.py --weights rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show --device cpu
I don't know where the problem is. Any help would be appreciated. Thanks!
We recommend using English or English & Chinese for issues so that we can have a broader discussion.
@HePengguang Have you updated your code and configurations to the latest 3.1.0 ?
Thanks, it works. The cause of the error was that both mmdet 3.0.0 and 3.0.1 were installed in the environment; I uninstalled 3.0.0 and the error went away.
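In case anyone else hits this, a quick sanity check to confirm which mmdet install Python actually picks up (a minimal sketch; run it inside the affected environment):

```python
# Confirm the active mmdet version and where it is imported from.
# If the path points at a stale install, uninstall it as described above.
import mmdet

print(mmdet.__version__)  # expect a single, up-to-date version (e.g. 3.1.0)
print(mmdet.__file__)     # reveals which of the duplicate installs wins
```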
Can I ask what you updated? I have the same issue and don't know how to update it. I'm working on a dance project in the mmpose repository, running everything from the Jupyter notebook demo code. Here's the error:
ValueError                                Traceback (most recent call last)
/Users/bahk_insung/Documents/Github/just_dance/just_dance_demo.ipynb Cell 6 line 4
      2 student_poses, teacher_poses = [], []
      3 for frame in VideoReader(student_video):
----> 4     student_poses.append(get_keypoints_from_frame(frame, pose_estimator))
      5 for frame in VideoReader(teacher_video):
      6     teacher_poses.append(get_keypoints_from_frame(frame, pose_estimator))

File ~/miniconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

/Users/bahk_insung/Documents/Github/just_dance/just_dance_demo.ipynb Cell 6 line 5
      1 @torch.no_grad()
      2 def get_keypoints_from_frame(image, pose_estimator):
      3     """Extract keypoints from a single video frame."""
----> 5     det_results = pose_estimator.detector(
      6         image, return_datasample=True)['predictions']
      7     pred_instance = det_results[0].pred_instances.numpy()
      9     if len(pred_instance) == 0 or pred_instance.scores[0] < 0.2:

File ~/Documents/Github/mmdetection/mmdet/apis/det_inferencer.py:361, in DetInferencer.__call__(self, inputs, batch_size, return_vis, show, wait_time, no_save_vis, draw_pred, pred_score_thr, return_datasamples, print_result, no_save_pred, out_dir, texts, stuff_texts, custom_entities, **kwargs)
    298 def __call__(
    299     self,
    300     inputs: InputsType,
   (...)
    317     custom_entities: bool = False,
    318     **kwargs) -> dict:
    319     """Call the inferencer.
    320
    321     Args:
   (...)
    354         dict: Inference and visualization results.
    355     """
    356     (
    357         preprocess_kwargs,
    358         forward_kwargs,
    359         visualize_kwargs,
    360         postprocess_kwargs,
--> 361     ) = self._dispatch_kwargs(**kwargs)
    363     ori_inputs = self._inputs_to_list(inputs)
    365     if texts is not None and isinstance(texts, str):

File ~/miniconda3/lib/python3.10/site-packages/mmengine/infer/infer.py:611, in BaseInferencer._dispatch_kwargs(self, **kwargs)
    609 if union_kwargs != method_kwargs:
    610     unknown_kwargs = union_kwargs - method_kwargs
--> 611     raise ValueError(
    612         f'unknown argument {unknown_kwargs} for `preprocess`, '
    613         '`forward`, `visualize` and `postprocess`')
    615 preprocess_kwargs = {}
    616 forward_kwargs = {}

ValueError: unknown argument {'return_datasample'} for `preprocess`, `forward`, `visualize` and `postprocess`
Try deleting the code related to `return_datasample` or `custom_entities` in image_demo.py.
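For the notebook traceback above, note that the `DetInferencer.__call__` signature it prints accepts `return_datasamples` (plural), so renaming the keyword in the notebook cell should also resolve the error; a minimal sketch of the adjusted call, assuming the rest of `get_keypoints_from_frame` is unchanged:

```python
# `return_datasample` appears to have been renamed to `return_datasamples`
# in the DetInferencer signature shown in the traceback, so pass the plural
# form to avoid the _dispatch_kwargs ValueError:
det_results = pose_estimator.detector(
    image, return_datasamples=True)['predictions']
```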