Visual-Template-Free-Form-Parsing
Testing the model for one image
Thank you for your effort on this difficult problem.
I am trying to test the model on a single image for detection. I followed the previously mentioned steps, as below:
```python
from model import yolo_box_detector
import json
import torch

config = json.load(open("/content/drive/MyDrive/Colab_Notebooks/AI_MX/ICS619/Free-Template/Visual-Template-Free-Form-Parsing-master/cf_detector.json"))
checkpoint = torch.load('/content/drive/MyDrive/Colab_Notebooks/AI_MX/ICS619/Free-Template/checkpoint-latest (1).pth')

# Build the architecture from the config stored inside the checkpoint,
# then load the trained weights.
model = eval(config['arch'])(checkpoint['config']['model'])
model.load_state_dict(checkpoint['state_dict'])
```
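As a side note on checkpoint loading in general (this uses a tiny stand-in module, not the repo's detector): passing `map_location='cpu'` to `torch.load` lets a GPU-trained checkpoint load on a CPU-only machine such as a free Colab runtime, and calling `model.eval()` before inference disables dropout and batch-norm updates. A minimal sketch:

```python
import tempfile
import torch

# Minimal stand-in model; your real model comes from the repo's config.
net = torch.nn.Linear(4, 2)

# Save and reload a checkpoint the way the repo does (a dict holding 'state_dict').
with tempfile.NamedTemporaryFile(suffix='.pth', delete=False) as f:
    path = f.name
torch.save({'state_dict': net.state_dict()}, path)

# map_location='cpu' remaps GPU tensors so the load works without CUDA.
checkpoint = torch.load(path, map_location='cpu')
net.load_state_dict(checkpoint['state_dict'])
net.eval()  # switch to inference mode before running images through
print(net.training)  # False
```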
```python
import cv2
import numpy as np
import torch

np_img = cv2.imread(imagePath, 0)  # read as grayscale: shape (H, W)
# np_img = cv2.resize(np_img, new_size)  # just be sure the size is roughly in the range it's been trained to see
img = np_img[None, None, ...]  # correct shape: (batch, ch, row, col)
img = img.astype(np.float32)
img = torch.from_numpy(img)
img = 1.0 - img / 128.0
```
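The preprocessing steps above can be wrapped in a small function and checked against a synthetic image (a random array standing in for `cv2.imread(imagePath, 0)`, so no image file is needed):

```python
import numpy as np
import torch

def preprocess(gray: np.ndarray) -> torch.Tensor:
    """Turn a (H, W) uint8 grayscale image into the (1, 1, H, W) float tensor the model expects."""
    img = gray[None, None, ...].astype(np.float32)  # add batch and channel dims
    img = 1.0 - img / 128.0  # same normalization as above: 0 -> 1.0, 255 -> ~-0.99
    return torch.from_numpy(img)

# Synthetic stand-in for a grayscale page image; any uint8 (H, W) array works.
fake_page = np.random.randint(0, 256, size=(256, 197), dtype=np.uint8)
batch = preprocess(fake_page)
print(batch.shape, batch.dtype)  # torch.Size([1, 1, 256, 197]) torch.float32
```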
Then just run the image through the model:

```python
bbPred, bbOut, t1, t2, t3, t4 = model(img)  # my model is "detect" and returns 6 values
```
Note that I trained the model, but did not run eval.py.
Are my steps correct? If yes, can you please help me interpret the model's output and how to visualize it?
Thanks
The output shapes are as follows: `bbPred.shape = torch.Size([1, 4800, 9])` and `bbOut.shape = torch.Size([1, 25, 16, 12, 9])`.
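One way to start making sense of a `(1, 4800, 9)` prediction tensor (note `4800 = 25 × 16 × 12`, matching the flattened grid/anchor dimensions of `bbOut`) is to keep only high-confidence rows. This sketch assumes channel 0 of each prediction is a confidence logit, which is an assumption on my part; check the repo's `yolo_box_detector` code for the real channel layout before trusting it. It runs on a random stand-in tensor:

```python
import numpy as np

def confident_boxes(bbPred: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Keep predictions whose (assumed) confidence channel exceeds thresh."""
    conf = 1.0 / (1.0 + np.exp(-bbPred[0, :, 0]))  # sigmoid over assumed confidence logits
    return bbPred[0, conf > thresh]  # rows of surviving predictions, 9 channels each

# Stand-in tensor with the shape reported above: (1, 4800, 9).
fake_pred = np.random.randn(1, 4800, 9).astype(np.float32)
kept = confident_boxes(fake_pred)
print(kept.shape[1])  # 9 channels per surviving prediction
```

Once you know which channels hold the box geometry, you can draw the surviving boxes on the input image with `cv2.rectangle` (or a rotated-rectangle call if the model predicts oriented boxes).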
Sorry I never saw this!
You can use the new `run.py` script with the `-d` flag to run detection on a single image now.