LinkToPast1990
same problem here...
Same problem. I added config.gpu_options.allow_growth = True, but it still raises ResourceExhaustedError... How do I fix this? My GPU is a GTX 950M. 2017-07-02 14:08:35.353748: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative...
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[25088,4096]
    [[Node: gradients/fc6/fc6/MatMul_grad/MatMul_1 = MatMul[T=DT_FLOAT, transpose_a=true, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](fc6/Reshape, gradients/fc6/fc6_grad/ReluGrad)]]
    [[Node: Momentum/update/_204 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_1822_Momentum/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
    def train_net(network, imdb, roidb, output_dir, log_dir,
                  pretrained_model=None, max_iters=40000, restore=False):
        """Train a Fast R-CNN network."""
        config = tf.ConfigProto(allow_soft_placement=True)
        config.gpu_options.allocator_type = 'BFC'
        # config.gpu_options.per_process_gpu_memory_fraction = 0.40
        config.gpu_options.allow_growth = True
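For reference, a minimal sketch of how such a config object is typically handed to the session (the session body below is illustrative, not this repo's actual training loop):

    import tensorflow as tf

    config = tf.ConfigProto(allow_soft_placement=True)
    config.gpu_options.allow_growth = True                      # allocate GPU memory on demand
    # config.gpu_options.per_process_gpu_memory_fraction = 0.4  # or hard-cap the fraction used

    with tf.Session(config=config) as sess:
        # ... build the graph and run training ops with this session ...
        pass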
Thanks. Should I use something like a batch generator to avoid running out of GPU or RAM memory?

You can use the batch generator that TensorFlow provides: first feed the data into a queue, then you need a TensorFlow reader (TF has several kinds of readers), and then read the data from the queue, ...
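A minimal sketch of that queue-based pipeline using the TF 1.x API (the TFRecord file name and the feature names/shapes here are hypothetical, just for illustration):

    import tensorflow as tf

    # Hypothetical input file; in practice point this at your own TFRecords.
    filename_queue = tf.train.string_input_producer(['train.tfrecords'])

    # One of the several TF readers: TFRecordReader reads one serialized
    # example at a time from the filename queue.
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)

    # Feature names and shapes are made up for this example.
    features = tf.parse_single_example(serialized, features={
        'image': tf.FixedLenFeature([224 * 224 * 3], tf.float32),
        'label': tf.FixedLenFeature([], tf.int64),
    })

    # Batch examples from the queue so only batch_size examples are
    # materialized at a time, instead of loading the whole dataset.
    image_batch, label_batch = tf.train.batch(
        [features['image'], features['label']], batch_size=32, capacity=512)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        # ... training steps consuming image_batch / label_batch ...
        coord.request_stop()
        coord.join(threads)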
What is bfc?
@AuroraLHT Sad... I tried capping it at 3.95 GB × 0.8 (memory fraction 0.8), same problem...
@AuroraLHT I think this implementation may read all the data at once, which is why it runs out of memory. But the error seems to happen during computation? 'OOM when allocating tensor with shape[25088,4096]'
https://stackoverflow.com/questions/39076388/tensorflow-deep-mnist-resource-exhausted-oom-when-allocating-tensor-with-shape
@AuroraLHT Thanks... but I really want to solve this memory problem. 1. TensorFlow MNIST example: step 19900, training accuracy 1 2017-07-02 15:57:33.230281: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to...