Sacha
Well, that's why I originally left it empty: we cannot automatically work out what hardware it will be targeting.
No, there is no issue on ARM because we can just target v8 or v7 and be done. The problem is when you cross-compile to, for example, the iOS x86-64 simulator. I figure...
If it is iOS and x86, it has to be the simulator. Android is a bit more tricky; even so, they likely want to target all supported x86 Android devices.
How about this: if we detect iOS and x86_64, we use a sane default? They likely won't care so much about performance on the simulator, and the simulator probably has a minimum spec...
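Something like this, as a rough sketch (the function name, the `core2` default, and the arch strings are all made up for illustration, not taken from the actual build code):

```Python
def default_target_cpu(target_os, arch):
    """Hypothetical helper: pick a sane default CPU when none was given."""
    if target_os == "ios" and arch == "x86_64":
        # iOS + x86_64 can only be the simulator, so a conservative
        # baseline is acceptable: performance matters less there.
        return "core2"  # assumption: a plausible minimum simulator spec
    if arch in ("armv7", "armv8"):
        # On ARM we can just target v7 or v8 directly and be done.
        return arch
    # Otherwise we cannot work out the hardware automatically: leave unset.
    return None
```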
I believe I am also running into this issue. Is there a workaround? Can I get it to map just that one op to the CPU? What is special...
I suspect my issue happens during quantization, because the result gets worse the more images I put in the calibration dataset I use for quantization.
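If the toolchain here is TensorFlow Lite post-training quantization (an assumption on my part, since the thread doesn't name it), the calibration set is wired in roughly like this; the saved-model path and input shape are placeholders:

```Python
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Activation ranges are calibrated from exactly these samples, so
    # unrepresentative images can skew the quantization parameters and
    # make accuracy worse as more of them are added.
    for _ in range(100):  # stand-in for iterating over real calibration images
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
tflite_model = converter.convert()
```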
You can load the weights with torch.jit.load and copy them into the state_dict of a real model (from BACKBONE_DICT) in order to support finetuning.
@bobo0810 Should be like this:

```Python
import torch

# AttentionNet_IRSE_92 is the matching architecture (from BACKBONE_DICT)
mymodel = AttentionNet_IRSE_92()
mymodel.load_state_dict(
    torch.jit.load('AttentionNet_IRSE_92_torchscript.pt').state_dict()
)
```

See: https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule.state_dict
> @cavalleria Hello, according to the data augmentation results in model_zoo.md, the baseline is the best, so we do not need to do any data augmentation?

When trained for the same amount of time,...
@FelixZhang7 AttNet-56-IR has similar inference speed to ResNet-50 on GPU and better accuracy.