
File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\utils\tal.py", line 255, in make_anchors _, _, h, w = feats[i].shape ValueError: not enough values to unpack (expected 4, got 2)

Zing-desire opened this issue 9 months ago • 2 comments

Search before asking

  • [X] I have searched the YOLOv8 issues and discussions and found no similar questions.

Question

Why does this error occur? Training ran without problems before.

Starting training for 5 epochs...

  Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
    1/5      14.6G      3.738      24.26      2.339         74        640: 100%|██████████| 243/243 [03:33<00:00,  1.14it/s]
             Class     Images  Instances      Box(P          R      mAP50  mAP50-95):   0%|          | 0/25 [00:00<?, ?it/s]

Traceback (most recent call last):
  File "C:\Users\zengying\.conda\envs\zengying\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\zengying\.conda\envs\zengying\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\zengying\.conda\envs\zengying\Scripts\yolo.exe\__main__.py", line 7, in <module>
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\cfg\__init__.py", line 444, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\engine\model.py", line 341, in train
    self.trainer.train()
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\engine\trainer.py", line 192, in train
    self._do_train(world_size)
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\engine\trainer.py", line 385, in _do_train
    self.metrics, self.fitness = self.validate()
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\engine\trainer.py", line 489, in validate
    metrics = self.validator(self)
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\engine\validator.py", line 173, in __call__
    self.loss += model.loss(batch, preds)[1]
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\nn\tasks.py", line 217, in loss
    return self.criterion(preds, batch)
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\utils\loss.py", line 177, in __call__
    anchor_points, stride_tensor = make_anchors(feats, self.stride, 0.5)
  File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\utils\tal.py", line 255, in make_anchors
    _, _, h, w = feats[i].shape
ValueError: not enough values to unpack (expected 4, got 2)

(zengying) D:\zengying\code\YOLOv8\ultralytics-main\ultralytics>

Additional

Plotting labels to runs\detect\train143\labels.jpg...
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically...
optimizer: AdamW(lr=0.002, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\detect\train143
Starting training for 5 epochs...


Zing-desire · May 07 '24 19:05

👋 Hello @zeng215217, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

github-actions[bot] · May 07 '24 19:05

It looks like the error arises because feats[i].shape is expected to unpack into four values (batch size, channels, height, width), but it is only providing two. This usually means the input was not properly batched before being passed to the model.

Double-check how your data is processed and make sure it conforms to the expected shape. Here's a quick check you can adapt:

import torch

feats = torch.randn(1, 3, 640, 640)  # simulating a batch of 1, 3 channels, 640x640

# Verify the tensor is 4-D (batch, channels, height, width) BEFORE unpacking;
# unpacking a 2-D tensor into four names raises exactly the ValueError seen above
if feats.ndim != 4:
    raise ValueError(f"Features do not have the correct shape. Expected 4 dimensions, got {feats.ndim}.")

batch_size, channels, height, width = feats.shape
print(f'Batch Size: {batch_size}, Channels: {channels}, Height: {height}, Width: {width}')

Make sure the data reaching make_anchors matches this expected format; if it does not, reshape it or adjust your preprocessing so that the expected dimensions are passed. If you are using a custom dataset or input pipeline, verify that it is correctly batched.
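If the input turns out to be a single unbatched image, a minimal sketch of the fix (plain PyTorch, with hypothetical variable names, not part of the Ultralytics API) is to add the missing batch dimension before the forward pass:

import torch

image = torch.randn(3, 640, 640)  # hypothetical single image: channels, height, width, no batch dim

# Add the missing leading batch dimension so downstream code can unpack (B, C, H, W)
if image.ndim == 3:
    image = image.unsqueeze(0)

print(tuple(image.shape))  # (1, 3, 640, 640)

The same idea applies to any custom dataloader: every batch handed to the model should already carry the leading batch dimension.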

glenn-jocher · May 08 '24 00:05

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

  • Docs: https://docs.ultralytics.com
  • HUB: https://hub.ultralytics.com
  • Community: https://community.ultralytics.com

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

github-actions[bot] · Jun 08 '24 00:06

For others running into this... In my case, I was running yolo detect train instead of yolo obb train.

keesschollaart81 · Jul 01 '24 09:07

Hi @keesschollaart81,

Thank you for sharing your experience! It's great to hear that you identified the root cause of the issue. Indeed, using the correct task-specific command, such as yolo obb train for oriented bounding boxes, is crucial for ensuring the model processes the data correctly.
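As a rough sketch of the distinction in the Python API (assuming an OBB-format dataset; my_obb_data.yaml below is a placeholder for your own data config, not a file shipped with the package):

from ultralytics import YOLO

# Load an OBB checkpoint rather than a plain detection checkpoint
model = YOLO("yolov8n-obb.pt")

# Train on an OBB-format dataset; "my_obb_data.yaml" is a placeholder for your own data config
model.train(data="my_obb_data.yaml", epochs=5, imgsz=640)

On the CLI this corresponds to yolo obb train rather than yolo detect train, which expects axis-aligned box labels.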

For others encountering similar issues, always double-check that you are using the appropriate command for your specific task. If you continue to experience issues, please ensure you are using the latest version of the Ultralytics package and provide a minimum reproducible example to help us diagnose the problem more effectively. You can find more details on creating a reproducible example here.

If you have any further questions or run into other issues, feel free to ask. We're here to help! 😊

glenn-jocher · Jul 01 '24 15:07