Google Brain AutoML

Results: 132 automl issues

When I train my model on a new dataset (18 classes) with transfer learning, I get 406 classes or 181 classes in the output. So how can I get...
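
A class-count mismatch like this usually means the class count was never set in the hparams before fine-tuning. Below is a minimal sketch, assuming the `efficientdet` hparams API in this repo (`hparams_config.get_efficientdet_config` and `Config.override`); the exact relationship between `num_classes` and the head's output size may differ from what is shown:

```python
# Sketch: pin the class count in the hparams before building/fine-tuning the model,
# otherwise the detection head keeps the checkpoint's original class count.
import hparams_config  # from google/automl/efficientdet

config = hparams_config.get_efficientdet_config('efficientdet-d0')
config.override(dict(num_classes=18))  # 18 classes in this example dataset
print(config.num_classes)              # expect 18 before training starts
```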

I tried to reproduce the V2-S and V2-M training on TPUs (running main.py with the default configurations), but the accuracy was worse than the [pre-trained checkpoints](https://github.com/google/automl/tree/master/efficientnetv2#2-pretrained-efficientnetv2-checkpoints). ## EfficientNetV2-S Evaluation Result ``` Saving dict for global step...

If I only have one GPU to train on, should '--strategy' still be set to 'gpus', or to None?
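
For context, a rough sketch of how such a strategy flag is commonly mapped to `tf.distribute` (assumed here, not copied from this repo's main.py): 'gpus' typically means `MirroredStrategy`, which also works with a single GPU, while leaving the flag empty falls back to TensorFlow's default placement:

```python
import tensorflow as tf

def make_strategy(strategy_flag):
    if strategy_flag == 'gpus':
        # Mirrors across all visible GPUs; with one GPU it simply uses that device.
        return tf.distribute.MirroredStrategy()
    # No flag: fall back to the default strategy (single GPU or CPU).
    return tf.distribute.get_strategy()

strategy = make_strategy('gpus')
print('replicas:', strategy.num_replicas_in_sync)  # 1 on a single-GPU machine
```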

I'm now training an EfficientDet model on my custom dataset, but the mAP is too low. I use an RTX 3090 with tf-nightly 2.8, cudatoolkit 11.2, and cuDNN 8.2. Here are my configs: image_size: 768x768, num_classes: 35...
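
One debugging step that often helps with low mAP on a custom dataset is confirming that the label ids in the TFRecords actually line up with `num_classes`. A hedged sketch, assuming the TF object-detection feature key (`image/object/class/label`) used by this repo's dataset scripts and a hypothetical shard name:

```python
# Scan a training shard and report which label ids it contains.
import tensorflow as tf

def label_ids(tfrecord_path, max_records=1000):
    keys = {'image/object/class/label': tf.io.VarLenFeature(tf.int64)}
    ids = set()
    for raw in tf.data.TFRecordDataset(tfrecord_path).take(max_records):
        parsed = tf.io.parse_single_example(raw, keys)
        ids.update(tf.sparse.to_dense(parsed['image/object/class/label']).numpy().tolist())
    return sorted(ids)

print(label_ids('train-00000-of-00100.tfrecord'))  # hypothetical shard name
```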

/usr/src/tensorrt/bin/trtexec --onnx=models/efficidet/efficientdet-lite0/efficientdet-lite0-bs1.onnx --explicitBatch --int8 --workspace=1024 --sparsity=enable
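
Before spending time on the INT8 trtexec run above, a quick pre-flight check can rule out a broken export. A sketch, assuming the `onnx` package is installed and using the same file path as the command:

```python
# Verify the exported EfficientDet-Lite0 ONNX file loads and passes the checker.
import onnx

model = onnx.load('models/efficidet/efficientdet-lite0/efficientdet-lite0-bs1.onnx')
onnx.checker.check_model(model)
print([i.name for i in model.graph.input])   # confirm the expected input tensor
print([o.name for o in model.graph.output])  # and output tensors
```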

I have only one NVIDIA GPU, and I was training with TensorFlow 2.5.2 because of the bug with GPU and multiprocessing. - TF 2.8 and no child process => works, but memory...
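
Not a fix for the multiprocessing bug itself, but a common mitigation sketch for single-GPU memory pressure is enabling memory growth so TensorFlow does not grab the whole GPU up front:

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all at process start.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```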

I generate the TF-Lite model using the script below: `!python model_inspect.py --runmode=saved_model --model_name=efficientdet-d1 \ --ckpt_path=AUTO_ML-50-30k --saved_model_dir=savedmodeldir-50-30k \ --tflite_path=efficientdet-d1-TF-50EP-30k.tflite \ --hparams=voc_config.yaml` I trained EfficientDet-D1 (640x640) with --num_epochs=50. In some...
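
As a cross-check for the conversion step only, here is a sketch that converts the exported SavedModel directly with the TF Lite converter (assuming the `--saved_model_dir` export above succeeded), so its output can be compared with the `--tflite_path` file produced by model_inspect.py:

```python
import tensorflow as tf

# Convert the exported SavedModel to TF Lite using the stock converter.
converter = tf.lite.TFLiteConverter.from_saved_model('savedmodeldir-50-30k')
tflite_model = converter.convert()
with open('efficientdet-d1-TF-50EP-30k.tflite', 'wb') as f:
    f.write(tflite_model)
```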

This change is proposed to improve the overall accessibility of the repo. It is often confusing for newcomers who are experimenting with toy datasets; the training fails with hard-to-debug...

`parse_image_size()` in `efficientdet/utils.py` swaps the height and width of the image when you pass a string as the argument. This PR fixes that and makes it possible to pass a `dict`. >...
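
An illustrative sketch of the property the PR is about (this is not the repo's implementation, and it assumes the string form is "HEIGHTxWIDTH"): every accepted input shape should normalize to the same `(height, width)` tuple, including the new `dict` form:

```python
def parse_image_size_sketch(image_size):
    if isinstance(image_size, int):
        return (image_size, image_size)
    if isinstance(image_size, str):
        height, width = (int(v) for v in image_size.lower().split('x'))
        return (height, width)
    if isinstance(image_size, dict):
        return (image_size['height'], image_size['width'])
    return tuple(image_size)  # assume it is already (height, width)

assert parse_image_size_sketch('720x1280') == (720, 1280)
assert parse_image_size_sketch({'height': 720, 'width': 1280}) == (720, 1280)
```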

The file COCO_evaluation.ipynb contains everything needed to run the evaluation over the COCO validation dataset; the rest has been removed to keep the notebook focused on that purpose.
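
For reference, a condensed sketch of the evaluation loop such a notebook typically wraps, using the pycocotools API (the file names here are hypothetical placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('instances_val2017.json')      # ground-truth annotations
coco_dt = coco_gt.loadRes('detections.json')  # model detections in COCO format
coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                         # prints the standard AP/AR table
```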
