mtcnn
Memory Leak
I read 100,000 images from the hard drive and run mtcnn on each image to detect faces. I do nothing else, yet I notice that RAM (16 GB) utilization reaches 100%. This means there is a memory leak. How can I solve this problem?
Are you closing the file descriptors of your images after prediction? Can you supply demo code for this bug?
```python
import cv2
from mtcnn import MTCNN

detector = MTCNN()
for file_path in file_paths:
    file_name = file_path.split('/')[-1]
    folder_name = file_path.split('/')[-2]
    try:
        image = cv2.cvtColor(cv2.imread(file_path), cv2.COLOR_BGR2RGB)
        bbs_all, points_all = detector.detect_faces(image)
    except Exception:
        print('Read error.')
```
How do I close the file descriptors of the images after prediction?
I used memory_profiler to verify, and yes, it is a memory leak problem in mtcnn. My images are larger than 1000x1000 pixels; for smaller 300x300 images the leak is negligible.
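For anyone who wants to reproduce this kind of measurement, here is a minimal sketch using memory_profiler's `@profile` decorator; `file_paths` is the same hypothetical list of image paths as in the demo code above, and the script name is made up.

```python
# Minimal profiling sketch (assumes memory_profiler is installed).
# Run with: python -m memory_profiler detect_script.py
# memory_profiler then prints line-by-line memory usage, so growth across
# detect_faces calls shows up directly.
import cv2
from memory_profiler import profile
from mtcnn import MTCNN

detector = MTCNN()

@profile
def detect_all(file_paths):
    for file_path in file_paths:
        image = cv2.cvtColor(cv2.imread(file_path), cv2.COLOR_BGR2RGB)
        detector.detect_faces(image)

if __name__ == "__main__":
    detect_all(["example.jpg"])  # hypothetical image path for illustration
```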
I am facing this issue as well. Is there any way to resolve it?
@jokerisgod Instead of MTCNN I now use RetinaFace, which is better than MTCNN not only in memory usage but also in accuracy.
Thank you @fisakhan
Can you point me to the repo which you're using?
Hi, solution: upgrade your TensorFlow version to 2.4.0.
When I used TensorFlow 2.2.0, I got the same memory leak issue. After I upgraded TensorFlow, the problem was solved.
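Not authoritative, but a quick way to confirm which TensorFlow build the interpreter is actually picking up after the upgrade:

```python
import tensorflow as tf

# The comments above report the leak disappearing after upgrading;
# verify the runtime really sees the new version (expect 2.4.0 or newer here).
print(tf.__version__)
```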
> I read 100,000 images from the hard drive and run mtcnn on each image to detect faces. I do nothing else, yet I notice that RAM (16 GB) utilization reaches 100%. This means there is a memory leak. How can I solve this problem?
Did you find any solution regarding this? I am also facing the same issue.
Hi, it's about the TensorFlow version. I updated my TensorFlow version and the problem was solved for me! My TensorFlow version: 2.5.0-dev20210223
I faced some problems with TensorFlow versions. I didn't check, but @cevdetcvr might be right.
I am not able to change `org.tensorflow:tensorflow-android:+` to version 2.4:
`Could not find org.tensorflow:tensorflow-android:2.4.0.`
> @jokerisgod Instead of MTCNN I now use RetinaFace, which is better than MTCNN not only in memory usage but also in accuracy.
Can you attach the link to the RetinaFace implementation you are using?
@himanshu-doi https://github.com/peteryuX/retinaface-tf2
https://github.com/timesler/facenet-pytorch/issues/57#issuecomment-667614153

^ This comment helped me resolve the memory spike issue for my API using the mtcnn module. Try to limit the batch size for the `rnet` and `onet` models. I can suggest these two methods (a sketch follows the list):

- Increase the confidence thresholds for the `pnet` and `rnet` models in mtcnn so that less confident predictions are ignored.
- Reduce or limit the input image resolution. High-resolution images cause `pnet` to create a large number of box proposals, which inflates memory usage and increases prediction latency.
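A minimal sketch of both mitigations, assuming the ipazc/mtcnn package, whose `MTCNN` constructor (in the versions discussed in this thread) accepts `min_face_size` and `steps_threshold` (the [pnet, rnet, onet] confidence thresholds); the `MAX_SIDE` cap and the threshold values are arbitrary choices for illustration.

```python
import cv2
from mtcnn import MTCNN

MAX_SIDE = 1000  # hypothetical cap on the longest image side

# Raise the pnet/rnet/onet thresholds above the library defaults (~[0.6, 0.7, 0.7])
# so low-confidence proposals are discarded earlier in the cascade.
detector = MTCNN(min_face_size=40, steps_threshold=[0.7, 0.8, 0.8])

def detect_downscaled(image_path):
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    h, w = image.shape[:2]
    scale = MAX_SIDE / max(h, w)
    if scale < 1.0:  # only shrink large images, never enlarge
        image = cv2.resize(image, (int(w * scale), int(h * scale)),
                           interpolation=cv2.INTER_AREA)
    # A smaller input means fewer pnet box proposals, which keeps the
    # rnet/onet batches (and memory) small.
    return detector.detect_faces(image)
```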
Just in case anyone still has this issue: adding `torch.cuda.empty_cache()` after the `detect` call solved the problem for me (using MTCNN).
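For reference, a minimal sketch of that workaround, assuming the facenet-pytorch `MTCNN` (the implementation with a `detect` method and CUDA support); `file_paths` is the same hypothetical list of image paths as earlier in the thread.

```python
import torch
from PIL import Image
from facenet_pytorch import MTCNN

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
detector = MTCNN(keep_all=True, device=device)

for file_path in file_paths:  # hypothetical list of image paths
    img = Image.open(file_path).convert('RGB')
    boxes, probs = detector.detect(img)
    if device.type == 'cuda':
        torch.cuda.empty_cache()  # release cached CUDA blocks after each detect call
```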
I'm currently working with TensorFlow 2.10. I instantiated the detector and iteratively process images. Unfortunately, both GPU and system memory increase continuously until the process is killed by the system.
This is an example of code producing the problem:

```python
import cv2
from mtcnn import MTCNN

image_path = "/home/..../0001_01.jpg"
img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
face_detector = MTCNN()
while True:
    faces = face_detector.detect_faces(img)
    print("a")
```

Do you have any suggestions to solve this issue?
To fix the memory leak, update mtcnn.py, changing

```python
out = self._pnet.predict(img_y)
out = self._rnet.predict(tempimg1)
out = self._onet.predict(tempimg1)
```

to

```python
out = self._pnet(img_y)
out = self._rnet(tempimg1)
out = self._onet(tempimg1)
```

https://stackoverflow.com/questions/64199384/tf-keras-model-predict-results-in-memory-leak
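To illustrate the pattern behind that change (not the library's actual code), here is a small standalone sketch of the `model.predict()`-in-a-loop issue described in the Stack Overflow link; the toy model and loop count are made up for demonstration.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model; the real pnet/rnet/onet are Keras models inside mtcnn.py.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
x = np.random.rand(1, 8).astype("float32")

for _ in range(10_000):
    # out = model.predict(x)          # per-call overhead can accumulate in a tight loop
    out = model(x, training=False)    # direct call avoids that; returns a tf.Tensor
    out = out.numpy()                 # convert back to NumPy where array ops are expected
```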
> I'm currently working with TensorFlow 2.10. I instantiated the detector and iteratively process images. Unfortunately, both GPU and system memory increase continuously until the process is killed by the system. [...] Do you have any suggestions to solve this issue?
Hello, did you find any solution to this?