deep_sort_yolov3

Has anyone run this on an NVIDIA Jetson Nano? Looking for a tutorial.

Open albertyou2 opened this issue 5 years ago • 8 comments

Has anyone run this on an NVIDIA Jetson Nano? Looking for a tutorial. Thanks!

albertyou2 · May 24 '19 07:05

I'm trying to run this on a Jetson Nano but I get stuck here:

# run YOLO inference for one frame; on the Nano this call never returns
out_boxes, out_scores, out_classes = self.sess.run(
    [self.boxes, self.scores, self.classes],
    feed_dict={
        self.yolo_model.input: image_data,
        self.input_image_shape: [image.size[1], image.size[0]],
        K.learning_phase(): 0
    })
return_boxs = []

It hangs at this point forever. What could be causing it?

c4b4d4 · May 27 '19 18:05

After leaving it for a while, it displays the following message:

kthreadd: page allocation stalls for 10004ms, order:2, mode:0x27080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK)
kthreadd invoked oom-killer: gfp_mask=0x27080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK), nodemask=0, order=2, oom_score_adj=0
Out of memory: Kill process 7248 (python3) score 266 or sacrifice child

c4b4d4 · May 27 '19 18:05

It looks like the memory allocation failed. Could you be running out of RAM / GPU memory?

albertyou2 · May 28 '19 02:05

Memory is hitting the limit; I just verified it with tegrastats.

I'm adding a swapfile to relieve the memory pressure and will let you know if that fixes it.

c4b4d4 · May 28 '19 02:05
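For a cross-check from inside the Python process itself, a small sketch using psutil (not part of this repo; the helper name is made up) can log RAM and swap usage right before the sess.run call that hangs:

import psutil

# Hypothetical helper, not part of deep_sort_yolov3: report how close the
# Nano is to exhausting RAM and how much swap is already in use.
def log_memory(tag=""):
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print(f"[{tag}] RAM {vm.used / 2**20:.0f}/{vm.total / 2**20:.0f} MiB "
          f"({vm.percent}%), swap {sw.used / 2**20:.0f} MiB")

log_memory("before sess.run")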

@kzka90 Thank you very much!

albertyou2 · May 28 '19 02:05

TensorFlow part, in generate_detections:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.1  # limit the GPU to 10%
self.session = tf.Session(config=config)

Keras part, in yolo:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.2  # limit the GPU to 20%
set_session(tf.Session(config=config))

huachao2017 · Jul 07 '19 10:07
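For reference, here is the same idea expanded into a self-contained sketch for the TF 1.x / standalone-Keras stack this repo targets. The fractions and the allow_growth flag are illustrative choices, not values from the repo; tune them for the Nano's 4 GB of shared memory.

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

# Cap how much GPU memory TensorFlow may reserve; by default it tries to grab
# almost everything, which on the Nano competes with the CPU for the same
# 4 GB of shared RAM and can trigger the OOM killer.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.2  # illustrative: ~20% for YOLO
config.gpu_options.allow_growth = True                    # allocate lazily instead of up front

# Keras/YOLO side: register the capped session before the model is built.
set_session(tf.Session(config=config))

# deep_sort feature-encoder side: give its own tf.Session the same treatment.
encoder_config = tf.ConfigProto()
encoder_config.gpu_options.per_process_gpu_memory_fraction = 0.1  # illustrative: ~10%
encoder_session = tf.Session(config=encoder_config)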

@kzka90 Did you solve the issue with a swapfile? I made a 6 GB swapfile, but it still doesn't run well on my Nano.

linspace100 · Dec 05 '19 05:12

This gets stuck on the following error for me:

tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,512,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Mul]

8 GB is allocated for memory; should a swapfile help?

Also: would this work in a live setup, pulling images from the camera?

JanEveraertEHB · Jan 05 '21 14:01
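On the live-camera question: the repo's demo.py reads frames from a video file with OpenCV, so pointing the capture at a camera is the usual adaptation. A minimal sketch is below; the detect_image call and the PIL conversion follow my reading of demo.py, so treat the exact names as assumptions and verify them against your checkout, and expect low frame rates on a Nano with the full YOLOv3 weights (YOLOv3-tiny is the common workaround).

import cv2
from PIL import Image
from yolo import YOLO   # this repo's YOLO wrapper

yolo = YOLO()
cap = cv2.VideoCapture(0)          # 0 = first attached USB/CSI camera
while True:
    ok, frame = cap.read()         # frame is a BGR numpy array
    if not ok:
        break
    image = Image.fromarray(frame[..., ::-1])   # BGR -> RGB PIL image
    boxs = yolo.detect_image(image)             # person boxes, as in demo.py
    # ...hand boxs to the deep_sort encoder/tracker exactly as demo.py does...
    cv2.imshow("result", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()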