ml-mobileone

This repository contains the official implementation of the research paper, "An Improved One millisecond Mobile Backbone".

19 ml-mobileone issues

```python
model = mobileone(num_classes=1000, inference_mode=True, variant="s4")
x = torch.rand(1, 3, 224, 224)
addr = 'mobileonev4_demo.onnx'
torch.onnx.export(model, x, addr, export_params=True, do_constant_folding=False,
                  training=False, opset_version=11, verbose=True)
```

Error printed: `'mode' should be a torch.onnx.TrainingMode...`
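For reference, a minimal export sketch that avoids this error: `torch.onnx.export` expects its `training` argument to be a `torch.onnx.TrainingMode` value rather than a bool, so passing `TrainingMode.EVAL` (or simply omitting the argument) should get past the message above. The output file name below is arbitrary.

```python
import torch
from mobileone import mobileone  # model factory from this repository's mobileone.py

model = mobileone(num_classes=1000, inference_mode=True, variant="s4")
model.eval()

x = torch.rand(1, 3, 224, 224)
torch.onnx.export(
    model,
    x,
    "mobileone_s4_demo.onnx",               # arbitrary output path
    export_params=True,
    do_constant_folding=False,
    training=torch.onnx.TrainingMode.EVAL,  # enum, not a bool
    opset_version=11,
)
```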

Just found out that... Any intuition behind that? The Core ML models provided are the non-reparameterized ones, right?
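In case it helps to compare against the released Core ML models, here is a rough sketch of converting a reparameterized model yourself with coremltools; `reparameterize_model` is the fusion helper in this repository's mobileone.py, while the checkpoint name, input name, and output path are illustrative assumptions.

```python
import torch
import coremltools as ct
from mobileone import mobileone, reparameterize_model

# Build the training-time (branched) model, load unfused weights, then fuse the branches.
model = mobileone(num_classes=1000, variant="s0")
# model.load_state_dict(torch.load("mobileone_s0_unfused.pth.tar"))  # illustrative name
model.eval()
model = reparameterize_model(model)

# Trace and convert; the input name and shape are chosen for illustration.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="image", shape=example.shape)],
)
mlmodel.save("MobileOneS0.mlpackage")
```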

I have reproduced the training accuracy of the paper based on [**MMClassification 1.x**](https://github.com/open-mmlab/mmclassification/tree/dev-1.x). For all the training configs, logs and checkpoints, please refer to [**this page**](https://github.com/open-mmlab/mmclassification/tree/1.x/configs/mobileone#results-and-models). ![image](https://user-images.githubusercontent.com/18586273/204779548-afae9f21-66cd-4785-bc86-d9ae1dd6fce0.png) **tricks I use...**
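As a rough sketch of how one of those checkpoints could be evaluated (assuming MMClassification 1.x still exposes the `init_model` and `inference_model` helpers in `mmcls.apis`; the config, checkpoint, and image paths are placeholders for the files listed on the linked page):

```python
from mmcls.apis import inference_model, init_model

# Placeholder paths; substitute the config and checkpoint from the linked results page.
config = "path/to/mobileone_config.py"
checkpoint = "path/to/mobileone_checkpoint.pth"

model = init_model(config, checkpoint, device="cuda:0")
result = inference_model(model, "path/to/sample_image.jpg")
print(result)
```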

Thank you so much for your awesome work! I have tested the given models on the ImageNet-1k test dataset using the test transform from [RepVGG](https://github.com/DingXiaoH/RepVGG/blob/5c2e359a144726b9d14cba1e455bf540eaa54afc/utils.py#L175), but the resulting accuracy...
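For context, a typical ImageNet evaluation pipeline of the kind the linked RepVGG utility builds is sketched below (resize the short side to 256, center-crop to 224, normalize with the usual ImageNet statistics). The exact sizes and normalization constants here are assumptions rather than a copy of that file; small differences in this transform are a common source of the kind of accuracy gap described above.

```python
import torch
from torchvision import datasets, transforms

# Assumed evaluation transform: resize short side to 256, center-crop 224,
# normalize with the usual ImageNet statistics.
val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

val_set = datasets.ImageFolder("/path/to/imagenet/val", transform=val_transform)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=256, num_workers=8)

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    # Count correct top-1 predictions over the whole loader.
    model.eval().to(device)
    correct = total = 0
    for images, targets in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return correct / total
```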

I am trying to use MobileOne-S0 as a backbone for semantic segmentation. However, the unfused checkpoint is very large. How do I use the fused S0 model instead of the unfused one?
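One possible route, sketched under the assumption that the segmentation backbone is built from this repository's mobileone.py (which also provides the `reparameterize_model` helper): keep the unfused model for training, then fuse the branches afterwards so the deployed backbone and its checkpoint shrink to the inference-time size. The checkpoint names are illustrative.

```python
import copy
import torch
from mobileone import mobileone, reparameterize_model

# Training-time (unfused, multi-branch) S0 backbone.
backbone = mobileone(variant="s0")
# backbone.load_state_dict(torch.load("mobileone_s0_unfused.pth.tar"))  # illustrative name

# After training the segmentation model, fuse the branches for inference.
backbone.eval()
deploy_backbone = reparameterize_model(copy.deepcopy(backbone))

# The fused backbone has far fewer parameters, so its checkpoint is much smaller.
fused_params = sum(p.numel() for p in deploy_backbone.parameters())
print(f"Fused S0 backbone parameters: {fused_params / 1e6:.2f} M")
torch.save(deploy_backbone.state_dict(), "mobileone_s0_fused_backbone.pth")
```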

My device: Mac mini M2 with 16GB RAM, macOS Ventura 13.6, Xcode 15.2. I'm testing the latency of MobileOne-S0 using an iPhone 12 Pro simulator, but the experimental results differ...
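For a rough sanity check outside the simulator, coremltools can time predictions directly on the Mac, as sketched below with an illustrative model path and input name. Note that neither this nor the iPhone simulator exercises a phone's Apple Neural Engine, so the numbers are expected to differ from latencies measured on a physical iPhone 12 as in the paper.

```python
import time
import numpy as np
import coremltools as ct

# Illustrative model path and input name; adjust to the actual converted model.
model = ct.models.MLModel("MobileOneS0.mlpackage", compute_units=ct.ComputeUnit.ALL)
x = {"image": np.random.rand(1, 3, 224, 224).astype(np.float32)}

# Warm up, then time repeated predictions on the host Mac.
for _ in range(10):
    model.predict(x)
start = time.perf_counter()
for _ in range(100):
    model.predict(x)
print(f"Mean latency on this Mac: {(time.perf_counter() - start) / 100 * 1000:.2f} ms")
```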

Are there any plans to bring the MobileOne architecture to the MediaPipe platform?