
How to save trained models?

kapil1027 opened this issue Jul 06 '19 · 3 comments

This may sound like a stupid question, but I cannot find a way to save my trained model as a .t7 file for use with OpenCV. Can you tell me how you did it from the start, or point me to where I can learn it?

kapil1027 · Jul 06 '19 13:07

I'm not sure that these models are usable with OpenCV? They are Torch7 models meant for use with fast-neural-style: https://github.com/jcjohnson/fast-neural-style

ProGamerGov · Jul 18 '19 00:07
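For what it's worth, OpenCV's dnn module does ship a Torch7 importer, `cv2.dnn.readNetFromTorch`, so the pretrained .t7 files from fast-neural-style can be loaded directly. Here is a minimal single-image sketch, assuming a pretrained model and an input image are on disk (both file names are placeholders), using the same mean values as the full webcam script in the next comment:

```python
import cv2

# load one of the pretrained fast-neural-style models
# ("candy.t7" and "input.jpg" are placeholder file names)
net = cv2.dnn.readNetFromTorch("candy.t7")
image = cv2.imread("input.jpg")
(h, w) = image.shape[:2]

# forward pass; same mean values as the full script below
blob = cv2.dnn.blobFromImage(image, 1.0, (w, h),
    (103.939, 116.779, 123.680), swapRB=False, crop=False)
net.setInput(blob)
output = net.forward()

# undo the mean subtraction and rescale to [0, 1] for display
output = output.reshape((3, output.shape[2], output.shape[3]))
output[0] += 103.939
output[1] += 116.779
output[2] += 123.680
output = (output / 255.0).transpose(1, 2, 0)

cv2.imshow("Stylized", output)
cv2.waitKey(0)
```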

It is easy to use your Torch7 models in OpenCV; here is the code if you want it. I found this code somewhere else (I do not remember where) and modified it to run the style transfer models.

```python
from imutils.video import VideoStream
from imutils import paths
import itertools
import imutils
import time
import cv2

# hard-coded arguments in place of an argparse parser
class Args:
    # directory containing the .t7 style transfer models
    models = r"models save place DO NOT DELETE\models\instance_norm"

args = Args()

# grab the paths to all neural style transfer models in our 'models'
# directory, provided all models end with the '.t7' file extension
modelPaths = paths.list_files(args.models, validExts=(".t7",))
modelPaths = sorted(list(modelPaths))

# generate unique IDs for each of the model paths, then combine the
# two lists together
models = list(zip(range(0, len(modelPaths)), modelPaths))

# use the cycle function of itertools that can loop over all model
# paths, and then when the end is reached, restart again
modelIter = itertools.cycle(models)
(modelID, modelPath) = next(modelIter)

# load the neural style transfer model from disk
print("[INFO] loading style transfer model...")
net = cv2.dnn.readNetFromTorch(modelPath)

# initialize the video stream, then allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(2.0)
print("[INFO] {}. {}".format(modelID + 1, modelPath))

# loop over frames from the video file stream
while True:
    # grab the frame from the threaded video stream
    frame = vs.read()

    # resize the frame to have a width of 600 pixels (while
    # maintaining the aspect ratio), and then grab the image
    # dimensions
    frame = imutils.resize(frame, width=600)
    orig = frame.copy()
    (h, w) = frame.shape[:2]

    # construct a blob from the frame, set the input, and then perform a
    # forward pass of the network
    blob = cv2.dnn.blobFromImage(frame, 1.0, (w, h),
        (103.939, 116.779, 123.680), swapRB=False, crop=False)
    net.setInput(blob)
    output = net.forward()

    # reshape the output tensor, add back in the mean subtraction, and
    # then swap the channel ordering
    output = output.reshape((3, output.shape[2], output.shape[3]))
    output[0] += 103.939
    output[1] += 116.779
    output[2] += 123.680
    output /= 255.0
    output = output.transpose(1, 2, 0)

    # show the original frame along with the output neural style
    # transfer
    cv2.imshow("Input", frame)
    cv2.imshow("Output", output)
    key = cv2.waitKey(1) & 0xFF

    # if the `n` key is pressed (for "next"), load the next neural
    # style transfer model
    if key == ord("n"):
        # grab the next neural style transfer model and load it
        (modelID, modelPath) = next(modelIter)
        print("[INFO] {}. {}".format(modelID + 1, modelPath))
        net = cv2.dnn.readNetFromTorch(modelPath)

    # otherwise, if the `q` key was pressed, break from the loop
    elif key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
```
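One usage note on the script above: pressing n cycles to the next .t7 model and q quits. If you also want to write a stylized frame to disk, the float output has to be converted back to 8-bit first; a small helper sketch (the function name and output path are my own, not part of the original script):

```python
import cv2
import numpy as np

# hypothetical helper, not part of the original script
def save_stylized(output, path="stylized.png"):
    # convert the [0, 1] float output back to 8-bit BGR before writing
    frame8 = np.clip(output * 255.0, 0, 255).astype("uint8")
    cv2.imwrite(path, frame8)
```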

kapil1027 · Jul 18 '19 06:07

Thanks for the reply, but the link you provided works with Lua, and I am using Python 3.6 in a Jupyter notebook.

kapil1027 · Jul 18 '19 06:07