ml-mobileone
This repository contains the official implementation of the research paper "MobileOne: An Improved One millisecond Mobile Backbone".
```python
model = mobileone(num_classes=1000, inference_mode=True, variant="s4")
x = torch.rand(1, 3, 224, 224)
addr = 'mobileonev4_demo.onnx'
torch.onnx.export(model, x, addr, export_params=True, do_constant_folding=False,
                  training=False, opset_version=11, verbose=True)
```

error printed: `f"'mode' should be a torch.onnx.TrainingMode...`
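The error comes from passing a plain boolean to the `training` argument: `torch.onnx.export` expects a `torch.onnx.TrainingMode` enum value there. A minimal sketch of the corrected call, keeping the same model and dummy input (`torch.onnx.TrainingMode.EVAL` is the enum replacement for `training=False`):

```python
import torch
from mobileone import mobileone  # model definition from this repo

# Build the reparameterized (inference-mode) model and a dummy input.
model = mobileone(num_classes=1000, inference_mode=True, variant="s4")
model.eval()
x = torch.rand(1, 3, 224, 224)

# `training` must be a torch.onnx.TrainingMode enum, not a bool.
torch.onnx.export(
    model, x, 'mobileonev4_demo.onnx',
    export_params=True,
    do_constant_folding=False,
    training=torch.onnx.TrainingMode.EVAL,
    opset_version=11,
    verbose=True,
)
```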
Just found this out. Any intuition behind it? The Core ML models provided are the non-reparameterized ones, right?
I have reproduced the training accuracy of the paper based on [**MMClassification 1.x**](https://github.com/open-mmlab/mmclassification/tree/dev-1.x). For all the training configs, logs and checkpoints, please refer to [**this page**](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobileone#results-and-models). **tricks I use...**
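For anyone who wants to sanity-check those released checkpoints, here is a minimal sketch assuming the MMClassification 1.x Python API (`init_model`/`inference_model`); the config and checkpoint paths are placeholders, substitute the ones from the linked page:

```python
from mmcls.apis import init_model, inference_model

# Placeholder paths: take the real config/checkpoint from the results page above.
config = 'configs/mobileone/mobileone-s0_8xb32_in1k.py'
checkpoint = 'mobileone-s0_in1k.pth'

# Build the model from the config and load the trained weights.
model = init_model(config, checkpoint, device='cuda:0')

# Run inference on a single image and print the prediction.
result = inference_model(model, 'demo/demo.JPEG')
print(result)
```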
Thank you so much for your awesome work! I have tested the given models on the ImageNet-1k test dataset using the test transform of [RepVGG](https://github.com/DingXiaoH/RepVGG/blob/5c2e359a144726b9d14cba1e455bf540eaa54afc/utils.py#L175), but the resulting accuracy...
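For reference, the transform linked above is the standard ImageNet evaluation pipeline. A torchvision sketch of that pipeline (the interpolation mode is an assumption here; check the linked `utils.py` for the exact settings):

```python
from torchvision import transforms

# Standard ImageNet eval pipeline: resize shorter side to 256, center-crop to 224.
test_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```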
I am trying to use MobileOne-S0 as a backbone for semantic segmentation. However, the unfused checkpoint is very big. How do I use the fused S0 model instead of the unfused one?
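The repo exposes `reparameterize_model` for fusing the training-time branches into plain convolutions at inference time. A minimal sketch, assuming the unfused checkpoint filename from the release page:

```python
import torch
from mobileone import mobileone, reparameterize_model

# Load the unfused (training-time) weights; filename is an assumption.
model = mobileone(variant='s0')
checkpoint = torch.load('mobileone_s0_unfused.pth.tar', map_location='cpu')
model.load_state_dict(checkpoint)

# Fuse the over-parameterized branches into single conv layers for inference.
model.eval()
model = reparameterize_model(model)

# Optionally save the much smaller fused weights for the segmentation backbone.
torch.save(model.state_dict(), 'mobileone_s0_fused.pth.tar')
```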
My device: Mac mini M2 with 16GB RAM, macOS Ventura 13.6, Xcode 15.2. I'm testing the latency of MobileOne-S0 using an iPhone 12 Pro simulator, but the experimental results differ...
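If it helps to cross-check the simulator numbers, one way to time the model on the Mac host is coremltools; a sketch where the model filename and the input name are assumptions (inspect `model.get_spec()` for the real input name, and pass a `PIL.Image` instead if the input is an image type):

```python
import time
import numpy as np
import coremltools as ct

# Placeholder filename: substitute the released MobileOne-S0 Core ML model.
model = ct.models.MLModel('MobileOneS0.mlmodel')

# Input name 'input' is an assumption; check model.get_spec().
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = {'input': x}

# Warm up, then average over repeated predictions.
for _ in range(10):
    model.predict(inputs)
n = 100
start = time.perf_counter()
for _ in range(n):
    model.predict(inputs)
print(f"avg latency: {(time.perf_counter() - start) / n * 1000:.2f} ms")
```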
Are there any plans to bring the MobileOne architecture to the MediaPipe platform?