S2VT

How about GPU?

Open Airotong opened this issue 7 years ago • 5 comments

I could run this code with CPU TensorFlow, but the training time is quite long. So I installed GPU TensorFlow and tried to run model_RGB.py again, but there were many problems. The biggest one is a ResourceExhaustedError: OOM when allocating tensor with shape [3000,4000]. I want to know whether this code is meant for CPU only, and whether it cannot simply be run in a GPU environment. Thank you for your reply! I am new to video description.

Airotong avatar Jun 29 '17 02:06 Airotong

Hi @Airotong, I have encountered a similar situation. Has this problem been solved?

siyilingting avatar Oct 25 '17 01:10 siyilingting

A ResourceExhaustedError means you don't have enough GPU memory. You can try using a smaller data shape or upgrading your GPU.

knwng avatar Nov 17 '17 12:11 knwng
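
As a back-of-envelope check on the advice above: a single tensor of shape [3000, 4000] is only about 46 MB on its own, so the OOM likely comes from many such tensors (activations plus gradients) being held at once rather than from one allocation. A quick sketch of the arithmetic, assuming float32 (4 bytes per element; the dtype isn't shown in the error message):

```python
def tensor_megabytes(shape, bytes_per_element=4):
    """Rough memory footprint of a dense tensor (float32 by default)."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_element / (1024 ** 2)

print(tensor_megabytes([3000, 4000]))  # ~45.8 MB per tensor
```

Dozens of tensors like this kept alive during gradient aggregation add up to gigabytes, which is why the aggregation_method suggestion later in this thread targets exactly that.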

How can I set the data size? I used a smaller batch_size, but it led to the same problem.

Airotong avatar Dec 25 '17 06:12 Airotong

@Airotong You can try this:

```python
train_op = tf.train.AdamOptimizer(learning_rate).minimize(
    tf_loss, aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
```

or

```python
train_op = tf.train.AdamOptimizer(learning_rate).minimize(
    tf_loss, aggregation_method=tf.AggregationMethod.EXPERIMENTAL_TREE)
```

Maybe that can solve your problem.

siyilingting avatar Dec 25 '17 07:12 siyilingting

Thank you for your reply! But I still hit the same problem even though I used

```python
train_op = tf.train.AdamOptimizer(learning_rate).minimize(
    tf_loss, aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
```

as well as

```python
train_op = tf.train.AdamOptimizer(learning_rate).minimize(
    tf_loss, aggregation_method=tf.AggregationMethod.EXPERIMENTAL_TREE)
```

Airotong avatar Dec 25 '17 09:12 Airotong
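
One more option worth a try when changing aggregation_method alone doesn't help (a hedged session-config sketch, not something suggested in this thread): by default TensorFlow 1.x reserves nearly all GPU memory at startup. The ConfigProto options below are standard TF 1.x API for allocating memory on demand instead; whether they resolve this particular OOM depends on the model's actual footprint.

```python
import tensorflow as tf

# TF 1.x session configuration: allocate GPU memory on demand
# instead of reserving it all at startup (allow_growth), and
# optionally cap the fraction of GPU memory TensorFlow may use.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# config.gpu_options.per_process_gpu_memory_fraction = 0.8  # optional cap

sess = tf.Session(config=config)
```

If the model genuinely needs more memory than the GPU has, this won't fix the OOM by itself, but it makes the session's memory behavior explicit and leaves room for reducing the model's hidden sizes or vocabulary as a next step.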