Memory issue when continuously calling the lambda function
Hi, when I call the Lambda function repeatedly, memory usage increases with each call.
This is how I'm debugging the code:
console.log("imgToTensor: memory before: " + JSON.stringify(tf.memory()));
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]))
console.log("imgToTensor: memory after: " + JSON.stringify(tf.memory()));
The first time I call the function I get this:
imgToTensor: memory before: {"unreliable":true,"numTensors":263,"numDataBuffers":263,"numBytes":47349088}
imgToTensor: memory after: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
The second time I call the function I get the following:
imgToTensor: memory before: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
imgToTensor: memory after: {"unreliable":true,"numTensors":265,"numDataBuffers":265,"numBytes":105978208}
The statement
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]))
seems to be the culprit: if you look at the "numTensors" property, it increases by one after each function call.
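As a side note on why this happens (my reading of the TensorFlow.js docs, not something established elsewhere in this thread): tf.tidy() only disposes intermediate tensors created inside its callback, while the tensor it returns stays allocated until it is disposed explicitly. That matches the numbers above, where numBytes grows by exactly 29,314,560 bytes per call, i.e. the size of one float32 tensor of shape [height, width, 3]. A minimal sketch:

// Sketch: tf.tidy() keeps the tensor it returns alive, so the caller must dispose it.
const tensor = tf.tidy(() => tf.tensor3d(values, [height, width, 3]));
// ... run the prediction with `tensor` ...
tensor.dispose(); // releases the buffer, so numTensors no longer grows call after call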
After 5 Lambda executions, my function fails with:
Error: Runtime exited with error: signal: killed
Is there a way to clean up the resources left over from the previous Lambda invocation?
Thanks!
I solved it by adding
tensor.dispose();
after running the prediction:
const { scores, boxes } = await predict(tfModel, tensor)
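Put together, a rough sketch of the fixed flow (imgToTensor and predict stand in for this repo's helpers; I haven't checked whether predict returns tensors or plain arrays, so the last line is only a guess at what may also be needed):

const tensor = await imgToTensor(imgBuffer);              // builds the tensor3d shown above
const { scores, boxes } = await predict(tfModel, tensor);
tensor.dispose();                                          // free the input tensor once the prediction is done
// If scores/boxes are tensors rather than plain arrays, dispose them as well:
// tf.dispose([scores, boxes]);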
Hi @jogando,
This is an amazing finding, thanks for reporting 🙏 Would you mind creating a PR to update this repository?
Sure! I'm still investigating another issue: Jimp is not properly releasing memory, so each Lambda execution increases the total memory consumed by about 10 MB.
This is the statement causing the issue:
const image = await Jimp.read(imgBuffer)
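One thing I'm trying as a workaround (just a sketch, not confirmed to fix the leak; imgToValues is a made-up helper name): keep the Jimp image scoped inside a small function and copy out only the raw pixel values, so nothing holds a reference to the decoded bitmap after the call returns.

// Sketch: copy the RGB values out of the Jimp bitmap, then let the image be garbage-collected.
async function imgToValues(imgBuffer) {
  const image = await Jimp.read(imgBuffer);
  const { width, height, data } = image.bitmap;  // `data` is an RGBA buffer
  const values = new Int32Array(width * height * 3);
  let j = 0;
  for (let i = 0; i < data.length; i += 4) {
    values[j++] = data[i];      // R
    values[j++] = data[i + 1];  // G
    values[j++] = data[i + 2];  // B
  }
  return { values, width, height };              // `image` goes out of scope here
}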