accuracy/top1 does not meet expectations, and the predicted results are in the same category
Branch
main branch (mmpretrain version)
Describe the bug
When training with MobileNet-V3 and ConvNeXt_v2, modifying only num_classes and topk and otherwise keeping the default parameters, the evaluation result stays at accuracy/top1: 51.9751 and all samples are predicted as the same class. This does not happen with networks such as ResNet and ResNeXt.
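For reference, the kind of override described above would look roughly like the sketch below; the _base_ file names and the config file name are placeholders, not the exact configs used.

```python
# my_mobilenet_v3_4cls.py -- hypothetical config name.
# Sketch of overriding only the class count and topk on top of a default
# mmpretrain config; the _base_ paths below are placeholders.
_base_ = [
    '../_base_/models/mobilenet_v3_small.py',
    '../_base_/datasets/imagenet_bs32.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

# 4 classes; top-5 accuracy is dropped because it is undefined with
# fewer than 5 classes.
model = dict(head=dict(num_classes=4, topk=(1,)))

val_evaluator = dict(type='Accuracy', topk=(1,))
test_evaluator = val_evaluator
```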
The confusion matrix is shown in the attached image.
The dataset images are 224×224, and there are 4 classes in total.
I tried changing the loss function to CrossEntropyLoss, simplifying the data preprocessing in train_pipeline, and removing the augments operation during training, but none of this solved the problem. Any advice would be appreciated.
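Those attempts looked roughly like the following sketch; the concrete transform parameters and the use of `_delete_` to drop the batch augments are assumptions rather than the original settings.

```python
# Sketch of the simplifications described above (values are assumptions).
model = dict(
    head=dict(
        # Plain cross-entropy loss instead of the default loss of the config.
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
    ),
    # Drop batch augmentations such as Mixup/CutMix that some default
    # configs enable under train_cfg.augments.
    train_cfg=dict(_delete_=True),
)

# Minimal training pipeline without random augmentation.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=224),
    dict(type='PackInputs'),
]
train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
```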
Environment information
{'sys.platform': 'linux', 'Python': '3.8.0 (default, Nov 6 2019, 21:49:08) [GCC 7.3.0]', 'CUDA available': True, 'numpy_random_seed': 2147483648, 'GPU 0': 'NVIDIA GeForce RTX 4090', 'CUDA_HOME': '/usr/local/cuda-11.1', 'NVCC': 'Cuda compilation tools, release 11.1, V11.1.105', 'GCC': 'gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0', 'PyTorch': '1.10.0', 'TorchVision': '0.11.1', 'OpenCV': '4.7.0', 'MMEngine': '0.7.0', 'MMCV': '2.0.1', 'MMPreTrain': '1.0.0rc7+'}
Other information
No response
Hello, could you please provide more information, including the detailed training logs of both the normal and the abnormal models you tested? When this happens, it is usually worth checking how the dataset and the classification head are constructed.
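As a starting point, something like the sketch below can be used to check how the dataset and the classification head are actually built; the config path is a placeholder, and the registration helper name is used to the best of my knowledge, so treat the exact names as assumptions.

```python
# Sanity-check sketch: build the dataset and the model from the config and
# inspect the class list and the classification head.
from mmengine.config import Config
from mmpretrain.registry import DATASETS, MODELS
from mmpretrain.utils import register_all_modules

register_all_modules()  # make sure mmpretrain datasets/models are registered

cfg = Config.fromfile('configs/my_convnext_v2_4cls.py')  # hypothetical path

dataset = DATASETS.build(cfg.train_dataloader.dataset)
print('num samples:', len(dataset))
print('classes:', dataset.metainfo.get('classes'))  # expect exactly 4 classes
print('first sample:', dataset.get_data_info(0))    # check img_path / gt_label

model = MODELS.build(cfg.model)
print(model.head)  # check num_classes and the loss used by the head
```

If the class list, the sample labels, or the head configuration does not match what you expect, that usually points to the dataset annotation or the head override rather than the backbone.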