Improve the coverage of `gen_inputs()` interface
We added the `gen_inputs()` interface and would like to increase its coverage toward 100%.

`gen_inputs(num_batches) -> Tuple[Generator, Optional[int]]` returns a generator and an optional int. The int indicates how many batches the generator can return in total. For example, if the model uses the coco128 dataset with batch size 1 and `num_batches` 1, the length of the generator is 128. It can also be `None` if the model uses randomly generated data instead of a mini-dataset, in which case the length of the generator is infinite.

Each call to the generator should return a list whose length is `num_batches`, or smaller for the last batch. Each element in the list should work as an input to the model, similar to the `get_module()` interface.
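As a rough illustration of the contract above, here is a minimal sketch of both the finite (mini-dataset) and infinite (random data) cases. The class name, dataset size, and tensor shapes are illustrative assumptions, not the actual torchbenchmark API:

```python
from typing import Generator, Optional, Tuple

import torch


class ExampleModel:
    """Hypothetical TorchBench-style model wrapper (names are assumptions)."""

    DATASET_SIZE = 128  # e.g. a coco128-style mini-dataset
    batch_size = 1

    def gen_inputs(self, num_batches: int = 1) -> Tuple[Generator, Optional[int]]:
        """Finite case: the model iterates over a mini-dataset."""
        total_batches = self.DATASET_SIZE // self.batch_size

        def _gen() -> Generator:
            remaining = total_batches
            while remaining > 0:
                n = min(num_batches, remaining)  # the last chunk may be smaller
                # Each list element must work as a model input, similar to the
                # example inputs used by get_module().
                yield [(torch.randn(self.batch_size, 3, 224, 224),)
                       for _ in range(n)]
                remaining -= n

        return _gen(), total_batches

    def gen_inputs_random(self, num_batches: int = 1) -> Tuple[Generator, Optional[int]]:
        """Infinite case: randomly generated data, so the length is None."""

        def _gen() -> Generator:
            while True:
                yield [(torch.randn(self.batch_size, 3, 224, 224),)
                       for _ in range(num_batches)]

        return _gen(), None
```

With `num_batches=1` the finite generator above yields 128 lists of one input each, matching the coco128 example; the random variant reports `None` and never stops.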
- [ ] Bert_pytorch
- [ ] Background_Matting
- [ ] LearningToPaint
- [ ] Super_SloMo
- [x] alexnet
- [ ] attention_is_all_you_need_pytorch
- [ ] dcgan
- [ ] demucs
- [x] densenet121
- [ ] detectron2_maskrcnn
- [ ] dlrm
- [ ] drq
- [ ] fambench_dlrm
- [ ] fambench_xlmr
- [ ] fastNLP_Bert
- [ ] hf_Albert
- [ ] hf_Bart
- [ ] hf_Bert
- [ ] hf_BigBird
- [ ] hf_DistilBert
- [ ] hf_GPT2
- [ ] hf_Longformer
- [ ] hf_Reformer
- [ ] hf_T5
- [ ] maml
- [ ] maml_omniglot
- [x] mnasnet1_0
- [x] mobilenet_v2
- [x] mobilenet_v2_quantized_qat
- [x] mobilenet_v3_large
- [ ] moco
- [ ] nvidia_deeprecommender
- [ ] opacus_cifar10
- [x] pplbench_beanmachine
- [ ] pyhpc_equation_of_state
- [ ] pyhpc_isoneutral_mixing
- [ ] pyhpc_turbulent_kinetic_energy
- [ ] pytorch_CycleGAN_and_pix2pix
- [ ] pytorch_stargan
- [ ] pytorch_struct
- [ ] pytorch_unet
- [x] resnet18
- [x] resnet50
- [x] resnet50_quantized_qat
- [x] resnext50_32x4d
- [x] shufflenet_v2_x1_0
- [ ] soft_actor_critic
- [ ] speech_transformer
- [x] squeezenet1_1
- [ ] tacotron2
- [x] timm_efficientdet
- [x] timm_efficientnet
- [x] timm_nfnet
- [x] timm_regnet
- [x] timm_resnest
- [x] timm_vision_transformer
- [x] timm_vovnet
- [ ] tts_angular
- [x] vgg16
- [ ] vision_maskrcnn
- [ ] yolov3
Note that you don't need to finish all models for this task; adding coverage for even 5-10 models is very much appreciated.