Juan Acevedo
I'm late to the party, but I think prepare_data.py has a bug on line 73. For flipped images it uses the same equation: ``` layout[:,:,l_list[ltype]+flip_idx[ltype][i]-1]=guassian_2d(x_mean, y_mean) ``` I think it...
The HDF5 file creation is optional, but you still have to calculate the inception moments.
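In case it helps, here's a rough sketch of what calculating the inception moments amounts to: the mean and covariance of pooled Inception-v3 features over your dataset (the same statistics FID uses). The feature extractor and `dataloader` below are illustrative assumptions, not the repo's actual script:
```
import numpy as np
import torch
from torchvision.models import inception_v3

# Pretrained Inception-v3 with the final classifier removed, so the forward
# pass returns the 2048-d pooled features in eval mode.
model = inception_v3(pretrained=True, transform_input=False)
model.fc = torch.nn.Identity()
model.eval()

feats = []
with torch.no_grad():
    for images, _ in dataloader:  # your DataLoader of 299x299 normalized images
        feats.append(model(images).numpy())
feats = np.concatenate(feats, axis=0)

# "Moments" = per-feature mean and covariance over the whole dataset.
mu, sigma = feats.mean(axis=0), np.cov(feats, rowvar=False)
np.savez('inception_moments.npz', mu=mu, sigma=sigma)
```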
I was having this same issue with my datasets and opened a GCP ticket and everything. Eventually it came down to my instances not being large enough, even though I...
@zion-b take a look at this notebook on deploying TF Hub models to Vertex AI endpoints and calling them from the API and curl. It might have what you are looking for: https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/vertex_endpoints/tf_hub_obj_detection/deploy_tfhub_object_detection_on_vertex_endpoints.ipynb
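If you'd rather call it from Python than curl, a minimal sketch looks like the following; the project, region, endpoint ID, and instance payload key are placeholders that depend on how you deployed the model:
```
from google.cloud import aiplatform

aiplatform.init(project="YOUR_PROJECT", location="us-central1")
endpoint = aiplatform.Endpoint("YOUR_ENDPOINT_ID")

# The instance format depends on the serving signature of the TF Hub model you
# exported; a base64-encoded image under "image_bytes" is a common pattern.
response = endpoint.predict(instances=[{"image_bytes": {"b64": "<base64 image>"}}])
print(response.predictions)
```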
Any updates on this? Also, will there be any breaking changes?
Perhaps if you re-train with an OSS model you'd be able to use it for commercial purposes, but the 2nd-stage data was collected using ChatGPT, so how would that work? Isn't OpenAI's TOS...
Thank you for your reply. Unfortunately it didn't work. I also tried the following, but it didn't work either. ``` nnabla_cli convert -I NNP -O NNB -b 1 resnet_result.nnp output.nnb...
@StateGovernment please post the error message. Is there a reason you want to use an A100? TPU trains really fast and the model weights can be easily converted to PyTorch weights...
@StateGovernment that's because you need to set the accelerator count to a minimum of 8, so if you set the accelerator count to 8 with TPU, it should work.
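A minimal sketch of the worker pool spec I mean, assuming a Vertex AI CustomJob via the Python SDK (the image URI, bucket, and TPU type are placeholders):
```
from google.cloud import aiplatform

aiplatform.init(project="YOUR_PROJECT", location="us-central1")

# TPUs are requested in slices of 8 cores, so accelerator_count must be at least 8.
worker_pool_specs = [{
    "machine_spec": {
        "machine_type": "cloud-tpu",
        "accelerator_type": "TPU_V2",
        "accelerator_count": 8,
    },
    "replica_count": 1,
    "container_spec": {"image_uri": "gcr.io/YOUR_PROJECT/your-training-image"},
}]

job = aiplatform.CustomJob(
    display_name="tpu-training",
    worker_pool_specs=worker_pool_specs,
    staging_bucket="gs://YOUR_BUCKET",
)
job.run()
```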
@StateGovernment I forgot to add another step: the container doesn't install CUDA drivers, so it won't use the GPU and will be extremely slow. You'll need to change [this line](https://github.com/entrpn/serving-model-cards/blob/main/training-dreambooth/Dockerfile#L1)...
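A quick sanity check you can run inside the container once it's built (assuming a PyTorch image); if it prints False, the CUDA drivers aren't visible and training will silently fall back to CPU:
```
import torch

# True and a non-zero device count mean the GPU is usable from inside the container.
print(torch.cuda.is_available(), torch.cuda.device_count())
```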