fatih akyon
Started a new training with v8.0.1 :+1:
v8.0.1 gives nan, now trying with v8.0.2. Edit: v8.0.4 also gives nan.
`cls_loss` increases rapidly at the start of training. Then, at about 1% of the training dataset, `cls_loss` becomes `inf`. Since the number of target classes is 1, this seems odd. After 10% of the iterations...
@AyushExel can this be related to the newly introduced losses?
This issue is not present on the COCO dataset; it happens when fine-tuning on a custom dataset.
My divergence issue seems to be fixed (10% into the 2nd epoch with ultralytics==8.0.17, no nan).
@AyushExel you can keep it as a generator by removing `return` and keeping `yield`. Then implement the `__len__()` method of the `Result` class so that `len(result)` returns the true length although...
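For illustration, here is a minimal sketch of what I mean (hypothetical names, not the actual ultralytics code): the container knows its length from the input source, but the per-frame predictions are only produced lazily via `yield`.

```python
# Hypothetical sketch, not the real ultralytics Result class.
class Result:
    def __init__(self, frames):
        self._frames = frames            # e.g. image paths or video frames

    def __len__(self):
        return len(self._frames)         # true length, known without running inference

    def __iter__(self):
        for frame in self._frames:
            # yield keeps this lazy: one frame is processed per iteration step
            yield {"frame": frame, "boxes": []}   # placeholder for real model output


result = Result(["img0.jpg", "img1.jpg"])
print(len(result))       # 2, no inference has happened yet
for pred in result:      # inference would run here, frame by frame
    print(pred["frame"])
```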
I see, then you may have to implement two classes, `StreamingResult` and `Result`, for streamed and fully materialized results. When `stream=True`, predict will return a `StreamingResult`. You can implement a custom `__len__()` method...
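Roughly something like this (again a hypothetical sketch, not the actual ultralytics API): `stream=True` returns the lazy `StreamingResult`, while `stream=False` collects everything into a list-backed `Result`.

```python
# Hypothetical sketch of the StreamingResult / Result split.
from typing import Iterator, List


class StreamingResult:
    """Lazily yields one prediction per frame; length is known up front."""

    def __init__(self, frames: List[str]):
        self._frames = frames

    def __len__(self) -> int:
        return len(self._frames)          # custom __len__ without materializing results

    def __iter__(self) -> Iterator[dict]:
        for frame in self._frames:
            yield {"frame": frame, "boxes": []}   # placeholder inference


class Result(list):
    """Fully materialized predictions with normal list semantics."""


def predict(frames: List[str], stream: bool = False):
    streaming = StreamingResult(frames)
    return streaming if stream else Result(streaming)


preds = predict(["a.jpg", "b.jpg"], stream=True)    # lazy StreamingResult
full = predict(["a.jpg", "b.jpg"], stream=False)    # everything in memory
print(len(preds), len(full))                        # 2 2
```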
Is there any ETA for this feature? Is it going to be backward compatible with the v8.0.6 predict inputs/outputs? I am asking because my packages rely on the ultralytics package :)
@ahthserhsluk what is your imgsz during training?