AVSL
How to export onnx?
It seems that the code snippet provided focuses on a manager object rather than exposing the model itself.
Understanding GeDML can be quite challenging, especially when trying to locate the specific model within the codebase.
So how can I export the full model to ONNX?
Thanks for your help.
Hello, friend! I apologize for the complexity of the GeDML package structure that may have caused confusion. We are currently working on releasing a new version of the AVSL code, which will have reduced dependency on the GeDML package. Additionally, we are taking into consideration the issue of model exportation and will address it accordingly.
Well. Looking forward to your new version code. Thanks again for your help
Hi! Sorry for the late response. We recently added code to src/gedml/recorder/base_recorder.py so that you can choose whether to save models as .onnx files or as .ckpt files.
Here are the detailed steps and code.
- First, download GeDML's source code and remove the dependency on the faiss-gpu library (along with the code that uses it).
- Open src/gedml/recorder/base_recorder.py and replace lines 398~417 (the save_models function) with the code below:
def deal_with_trun(self, device):
    # Dummy input for the trunk (backbone): a batch of 180 RGB 224x224 images.
    dummy_data = torch.randn(180, 3, 224, 224, device=device)
    dummy_input = dummy_data
    input_names = ['data']
    output_names = ["output_layer0", "output_layer1", "output_layer2"]
    return dummy_input, input_names, output_names

def deal_with_embe(self, device):
    # Dummy inputs for the embedder: three feature maps from the trunk.
    dummy_layer2 = torch.randn(180, 512, 28, 28, device=device)
    dummy_layer3 = torch.randn(180, 1024, 14, 14, device=device)
    dummy_layer4 = torch.randn(180, 2048, 7, 7, device=device)
    dummy_input = [dummy_layer2, dummy_layer3, dummy_layer4]
    input_names = ['input_layer0', 'input_layer1', 'input_layer2']
    output_names = ["embed_list0", "embed_list1", "embed_list2",
                    "certainty_list0", "certainty_list1", "certainty_list2"]
    return dummy_input, input_names, output_names

def deal_with_coll(self, device):
    # Dummy inputs for the collector: embeddings and certainties per layer.
    dummy_embed_list = [torch.randn(180, 512, device=device) for _ in range(3)]
    dummy_certainty_list = [torch.randn(180, 512, device=device) for _ in range(3)]
    dummy_input = (dummy_embed_list, dummy_certainty_list)
    input_names = ["embed_list0", "embed_list1", "embed_list2",
                   "certainty_list0", "certainty_list1", "certainty_list2"]
    output_names = ["metric_mat"]
    return dummy_input, input_names, output_names

def deal_with_loss(self, device):
    # Dummy inputs for the loss: metric matrix plus row/column labels.
    dummy_metric_mat = torch.randn(180, 100, device=device)
    dummy_row_labels = torch.randn(180, 1, device=device)
    dummy_col_labels = torch.randn(1, 100, device=device)
    dummy_input = (dummy_metric_mat, dummy_row_labels, dummy_col_labels)
    input_names = ["metric_mat", "row_labels", "col_labels"]
    output_names = ["total_loss"]
    return dummy_input, input_names, output_names

def save_models(self, obj, step, best=False, delete_old=True, use_onnx=False):
    if delete_old:
        self.delete_models()
    to_save_list = getattr(obj, TO_SAVE_LIST, {})
    if use_onnx and step >= 10:
        device = getattr(obj, DEVICE, torch.device('cpu'))
        for to_save_name in to_save_list:
            # Optimizers and schedulers have no forward pass to trace.
            if to_save_name in {'optimizers', 'schedulers'}:
                continue
            to_save_item = getattr(obj, to_save_name, {})
            assert isinstance(to_save_item, dict), "{} must be a dictionary!".format(to_save_name)
            for k, v in to_save_item.items():
                curr_model_name = to_save_name + "_" + k + '_{}.onnx'.format(int(step))
                curr_model_path = os.path.join(self.model_path, curr_model_name)
                # Pick the dummy-input builder from the model key's first
                # four characters (trun/embe/coll/loss).
                dummy_input, input_names, output_names = getattr(self, 'deal_with_' + k[:4])(device)
                torch.onnx.export(
                    v, dummy_input, curr_model_path,
                    export_params=True, verbose=True, do_constant_folding=False,
                    training=torch.onnx.TrainingMode.EVAL,
                    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
                    input_names=input_names, output_names=output_names)
                logging.info('{} is saved in {}'.format(curr_model_name, curr_model_path))
                if best:
                    best_model_name = to_save_name + "_" + k + '_best.onnx'
                    best_model_path = os.path.join(self.model_path, best_model_name)
                    torch.onnx.export(
                        v, dummy_input, best_model_path,
                        export_params=True, verbose=True, do_constant_folding=False,
                        training=torch.onnx.TrainingMode.EVAL,
                        operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
                        input_names=input_names, output_names=output_names)
                    logging.info('{} is saved in {}'.format(best_model_name, best_model_path))
    else:
        for to_save_name in to_save_list:
            to_save_item = getattr(obj, to_save_name, {})
            assert isinstance(to_save_item, dict), "{} must be a dictionary!".format(to_save_name)
            for k, v in to_save_item.items():
                curr_model_name = to_save_name + "_" + k + STEP_MODEL_SUFFIX.format(int(step))
                curr_model_path = os.path.join(self.model_path, curr_model_name)
                torch.save(v.state_dict(), curr_model_path)
                logging.info('{} is saved in {}'.format(curr_model_name, curr_model_path))
                if best:
                    best_model_name = to_save_name + "_" + k + BEST_MODEL_SUFFIX
                    best_model_path = os.path.join(self.model_path, best_model_name)
                    torch.save(v.state_dict(), best_model_path)
                    logging.info('{} is saved in {}'.format(best_model_name, best_model_path))
You can set the use_onnx argument to True or False to choose whether to export the model in ONNX format. Because the ONNX export takes quite a long time, you can also change the step threshold in the condition `if use_onnx and step >= 10:` to decide when exporting starts, which saves some time.
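One subtlety in save_models: the dummy-input builder is resolved from the first four characters of each model key (`getattr(self, 'deal_with_' + k[:4])`), so the keys in the to-save dictionaries must begin with trun/embe/coll/loss. A tiny sketch of that naming convention (the example keys are guesses matching the method names above, not confirmed AVSL keys):

```python
# The recorder picks a dummy-input builder by the model key's first four
# characters. Example keys ("trunk", "collector", ...) are illustrative.
KNOWN_PREFIXES = {"trun", "embe", "coll", "loss"}

def builder_name(model_key):
    prefix = model_key[:4]
    if prefix not in KNOWN_PREFIXES:
        raise KeyError("no deal_with_ builder for {}".format(model_key))
    return "deal_with_" + prefix

print(builder_name("trunk"))      # deal_with_trun
print(builder_name("collector"))  # deal_with_coll
```

Any model whose key does not start with one of those prefixes would need its own deal_with_* method, or the getattr lookup fails.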
- Also, you may encounter an error if you run AVSL's code directly: in AVSL's avsl_collector, the model is set to eval (not training) when exporting to ONNX, so self.split_num is still None and triggers an AssertionError. You may need to add the following before the assert:
if self.split_num is None:
    self.split_num = 3
Hope this can help you!