TheBeastCoding

9 comments by TheBeastCoding

Even at a loss of 12, that should still be a decent model. Here is a potential workaround if you are getting an mAP of 0.0 at the evaluation step...

When you run the training step, a folder named cache is created in your model workspace. Before testing, delete the entire cache folder; testing will automatically recreate the...
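A minimal sketch of that cleanup, assuming the cache folder sits inside the data directory you pass to ImageAI (the folder name below is a placeholder):

```python
import shutil
from pathlib import Path

# Placeholder workspace -- use the same folder you pass to ImageAI as the data directory.
data_directory = Path("hololens_dataset")
cache_dir = data_directory / "cache"   # created automatically during training

# Delete the stale cache so the evaluation step rebuilds it from scratch.
if cache_dir.exists():
    shutil.rmtree(cache_dir)
```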

Here is a potential workaround if you are getting an mAP of 0.0 at the evaluation step (and have verified that the h5 model and the training/evaluation data are not corrupted):...

Here is a potential workaround if you are getting an mAP of 0.0 at the evaluation step (and have verified that the h5 model and the training/evaluation data are not corrupted):...

> This is unusual. After installing TF 2.4.0, restart the runtime and run the training again.

I restarted the runtime and noticed no changes in performance. I changed my dataset...

> This is unusual. After installing TF 2.4.0, restart the runtime and run the training again.

UPDATE: Reverting to TF-GPU==1.13.1, Keras==2.2.4, and imageai==2.1.0 FIXED the issue. I am now...
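For reference, a Colab-style install cell pinning that combination (versions taken from the comment above; restart the runtime after installing so the new versions are picked up):

```
# Pin the versions reported to work together, then restart the runtime.
!pip install tensorflow-gpu==1.13.1 keras==2.2.4 imageai==2.1.0
```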

Try the following and let me know if it fixes the issue: after training a model, delete the workspace cache (e.g. !rm -r ...//cache/). During the evaluation step, ImageAI will recreate this...
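A sketch of the evaluation step that recreates the cache, assuming ImageAI's documented custom YOLOv3 API (DetectionModelTrainer) and placeholder paths:

```python
from imageai.Detection.Custom import DetectionModelTrainer

# Placeholder workspace; the cache folder should already have been deleted as above.
trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory="hololens_dataset")
trainer.evaluateModel(
    model_path="hololens_dataset/models",                     # or a single .h5 file
    json_path="hololens_dataset/json/detection_config.json",
    iou_threshold=0.5,
    object_threshold=0.3,
    nms_threshold=0.5,
)
```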

Also, 50 experiments for a training size of 50 taking over a day is far too long. Consider using the free GPU resources provided by Google Colab: https://colab.research.google.com/, which...
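If you switch to Colab, a quick way to confirm the GPU runtime is actually active (assuming the TF 1.x setup mentioned above; on TF 2.x, tf.config.list_physical_devices('GPU') is the equivalent check):

```python
import tensorflow as tf

# Should print True once Runtime -> Change runtime type -> GPU is selected.
print(tf.test.is_gpu_available())
```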

This is a reminder that if you are a glaucoma dataset author, please post your dataset link and associated publication here so that we can feature your dataset in this...