ultralytics
File "C:\Users\zengying\.conda\envs\zengying\lib\site-packages\ultralytics\utils\tal.py", line 255, in make_anchors
    _, _, h, w = feats[i].shape
ValueError: not enough values to unpack (expected 4, got 2)
Search before asking
- [X] I have searched the YOLOv8 issues and discussions and found no similar questions.
Question
Why does this error occur? Training ran successfully before.
Starting training for 5 epochs...
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
1/5 14.6G 3.738 24.26 2.339 74 640: 100%|██████████| 243/243 [03:33<00:00, 1.14it/s]
Class Images Instances Box(P R mAP50 mAP50-95): 0%| | 0/25 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\zengying\.conda\envs\zengying\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
File "C:\Users\zengying\.conda\envs\zengying\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
File "C:\Users\zengying\.conda\envs\zengying\Scripts\yolo.exe\__main__.py", line 7, in <module>
(zengying) D:\zengying\code\YOLOv8\ultralytics-main\ultralytics>
Additional
Plotting labels to runs\detect\train143\labels.jpg...
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically...
optimizer: AdamW(lr=0.002, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\detect\train143
Starting training for 5 epochs...
👋 Hello @zeng215217, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Install
Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.
pip install ultralytics
Environments
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
- Notebooks with free GPU
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
It looks like the error arises because feats[i].shape is expected to unpack into four values (batch size, channels, height, width), but it is only providing two. This could be due to the input not being properly batched before being passed to the model.
Double-check how the data is processed and ensure that it conforms to the expected shape. Here's a quick check using the example:
import torch

feats = torch.randn(1, 3, 640, 640)  # Simulating one sample, 3 channels, 640x640 size

# Validate the dimensionality BEFORE unpacking; otherwise the unpack itself
# raises the ValueError you are seeing, and this check is never reached.
if feats.dim() != 4:
    raise ValueError("Features do not have the correct shape. Expected 4 dimensions (B, C, H, W).")

batch_size, channels, height, width = feats.shape
print(f'Batch Size: {batch_size}, Channels: {channels}, Height: {height}, Width: {width}')
Make sure the data arriving at make_anchors matches this expected format; if not, reshape or adjust your preprocessing so the expected dimensions are passed. If you're using custom datasets or input pipelines, verify that they are correctly batched.
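If a tensor is arriving with fewer than four dimensions, a minimal sketch of the fix is to prepend singleton dimensions until the (B, C, H, W) layout holds. The `ensure_4d` helper below is hypothetical, not part of the Ultralytics API, and shown only to illustrate the idea:

```python
import torch

def ensure_4d(t: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: prepend singleton dims until the tensor is 4-D (B, C, H, W)."""
    while t.dim() < 4:
        t = t.unsqueeze(0)
    return t

feat = torch.randn(80, 80)   # a 2-D tensor would break the 4-value unpack
feat = ensure_4d(feat)       # now shaped (1, 1, 80, 80)
b, c, h, w = feat.shape      # unpacks cleanly
```

Note this only papers over the symptom; the real fix is usually upstream, in how the batch is assembled before the model sees it.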
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
- Docs: https://docs.ultralytics.com
- HUB: https://hub.ultralytics.com
- Community: https://community.ultralytics.com
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
For others running into this... In my case, I was running yolo detect train instead of yolo obb train.
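For anyone comparing, the two invocations differ only in the task keyword. A sketch of the distinction, where the dataset YAML and epoch count are placeholders rather than values from this thread:

```shell
# Detection: axis-aligned boxes -- the wrong task for OBB-labelled data
yolo detect train data=my_dataset.yaml model=yolov8n.pt epochs=5

# Oriented bounding boxes -- matches OBB labels
yolo obb train data=my_dataset.yaml model=yolov8n-obb.pt epochs=5
```

The label formats differ (OBB annotations carry four corner points per box), so training an OBB dataset through the detect pipeline can surface shape mismatches like the one above.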
Hi @keesschollaart81,
Thank you for sharing your experience! It's great to hear that you identified the root cause of the issue. Indeed, using the correct task-specific command, such as yolo obb train for oriented bounding boxes, is crucial for ensuring the model processes the data correctly.
For others encountering similar issues, always double-check that you are using the appropriate command for your specific task. If you continue to experience issues, please ensure you are using the latest version of the Ultralytics package and provide a minimum reproducible example to help us diagnose the problem more effectively. You can find more details on creating a reproducible example here.
If you have any further questions or run into other issues, feel free to ask. We're here to help! 😊