
Running only the saved models

NarenBabuR opened this issue on Oct 08 '18 · 8 comments

Can you please tell me what changes need to be made in main.py to run only the trained model?

NarenBabuR avatar Oct 08 '18 10:10 NarenBabuR

You can use the forward.py script provided in pyffe.

python pyffe/forward.py

usage: forward.py [-h] [-mf MEAN_FILE] [-mp MEAN_PIXEL] [--nogpu]
                  [-rf ROOT_FOLDER]
                  deploy_file caffemodel image_list output_file
[...]
positional arguments:
  deploy_file           Path to the deploy file
  caffemodel            Path to a .caffemodel
  image_list            Path to an image list
  output_file           Name of output file

Ignore the mean_file and mean_pixel arguments (they are not used in the deep-parking experiments). You just need to provide:

  • the deploy.prototxt file
  • the trained model (caffemodel)
  • a text file containing the paths of the images to analyze, one per row (image_list)
  • a name for the output file (output_file); the output is a numpy file (.npy)

Example:

python pyffe/forward.py path/to/deploy.prototxt path/to/snapshot_iter_xxx.caffemodel images.txt predictions.npy

where an example of images.txt is:

/path/to/image1.png
/path/to/image2.png
...
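
For completeness, a minimal sketch of how the resulting .npy file could be inspected (this is an assumption-laden example: the array shape and the free/busy class order depend on the deploy file and on how the model was trained):

import numpy as np

# Load the array written by pyffe/forward.py
preds = np.load('predictions.npy')            # assumed shape: (num_images, num_classes)

with open('images.txt') as f:
    paths = [line.strip() for line in f if line.strip()]

# Assumption: index 0 = free, index 1 = busy (verify against your training labels)
for path, scores in zip(paths, preds):
    state = 'busy' if scores.argmax() == 1 else 'free'
    print('{}\t{}\t{}'.format(path, state, scores))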

fabiocarrara avatar Oct 08 '18 11:10 fabiocarrara

Thank you very much for the DETAILED reply.

  1. Can you just give an example for the above?

  2. Mainly, I need to work with a video file as input (as in your YouTube video sample). Can you please tell me how to proceed with this?

Since I'm new to deep learning, I don't know much about it. Thanks in advance.

NarenBabuR avatar Oct 08 '18 12:10 NarenBabuR

I updated the first answer with an example. About videos, our model only works on pre-extracted image patches. The visualization you see on YouTube uses our model and is implemented in Java + OpenCV. Unfortunately, we were not responsible for that part, and we do not have any code to share. However, I think you can easily reimplement it with newer versions of OpenCV (>= 3.3), which added support for Caffe models in the DNN module.

Some guides for Python:

  • https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.html#basic-ops
  • https://docs.opencv.org/3.0-beta/modules/videoio/doc/reading_and_writing_video.html
  • https://www.pyimagesearch.com/2017/08/21/deep-learning-with-opencv/
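
A rough Python sketch of that approach (assuming OpenCV >= 3.3; the video filename, patch coordinates, input size, and class order below are placeholders to adapt to your setup, not values taken from the deep-parking code):

import cv2

# Load the trained Caffe model with OpenCV's DNN module
net = cv2.dnn.readNetFromCaffe('path/to/deploy.prototxt',
                               'path/to/snapshot_iter_xxx.caffemodel')

cap = cv2.VideoCapture('parking_lot.mp4')     # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Crop one parking-space patch (coordinates are placeholders)
    patch = frame[100:324, 200:424]
    # The input size and any mean subtraction must match the deploy.prototxt
    blob = cv2.dnn.blobFromImage(patch, 1.0, (224, 224))
    net.setInput(blob)
    scores = net.forward()                     # e.g. [[score_free, score_busy]]
    print('busy' if scores[0].argmax() == 1 else 'free')
cap.release()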

fabiocarrara avatar Oct 08 '18 13:10 fabiocarrara

Can you just give the exact command for testing the images with the pretrained model?

Also, can you give me an example of an image_list file?

Sorry for the trouble and for wasting your time! This is my last query :))

NarenBabuR avatar Oct 08 '18 13:10 NarenBabuR

I ran pyffe/forward.py as follows:

python3 pyffe/forward.py ~/Downloads/CNRPark+EXT_Trained_Models_mAlexNet/mAlexNet-on-UFPR05/deploy.prototxt ~/Downloads/CNRPark+EXT_Trained_Models_mAlexNet/mAlexNet-on-UFPR05/snapshot_iter_16170.caffemodel images.txt prediction.npy

Here is the content of images.txt:

Here is the output of prediction.npy:

The output is not what I expected; the predictions look wrong. Any help on this, @fabiocarrara, please?

ahadafzal avatar Mar 22 '20 20:03 ahadafzal

Did you solve your problem, @ahadafzal? I'm also having the same issue.

nikola310 avatar Dec 31 '20 11:12 nikola310

@nikola310 Nope, I didn't use this later; I opted for a VGG16 model instead, and recently published a paper in an IEEE/Scopus-indexed venue. 🙂

ahadafzal avatar Dec 31 '20 11:12 ahadafzal

@ahadafzal I see. I'll have to check it out then :smiley:

In case anyone stumbles upon this problem: since I was trying to test on the same datasets used during training, my solution was to use the appropriate patch images for each model. So if you're trying to run a model trained on CNRPark, you have to use the CNRPark patch images.

nikola310 avatar Dec 31 '20 14:12 nikola310