DL4MT
data prep fail warning
I get this warning interspersed with the normal log output (e.g. between "training the a -> b -> a loop.", "training the b -> a -> b loop.", and "training on bitext"):
WARNING:root:WARNING: data prep failed (_x_prep is None). ignoring...
I briefly started tracing through the code, but I am not sure whether this is caused by a few malformed sentences, by one of the data sources being bad, or by a bug in the code.
I have attached my full log: run_train_dual.py.out.txt
This was using commit ac5ac719272c34011bde6773a5eb793279b41dc5.
We're looking at this issue now.
@khayrallah @noisychannel Hi, please help: when I run ./run_train_dual.py sample_config.py, I get this error: GpuArrayException: cuMemAlloc: CUDA_ERROR_OUT_OF_MEMORY: out of memory. What should I do? My setup: GPU GTX 960, CUDA 9.0, cuDNN 6.0.21, Theano 1.0.3, pygpu 0.7.6.
I have attached my log: run_train_dual.py.output.txt
Looking forward to your reply. Thanks!
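Not a maintainer, but a common first mitigation for this error: the GTX 960 has only 2-4 GB of memory, so the model may simply not fit. A minimal configuration sketch, assuming Theano's gpuarray backend (the gpuarray.preallocate flag is from Theano 1.0; the suggestion that sample_config.py exposes a batch-size setting is an assumption about that file):

```shell
# Sketch: mitigations for CUDA_ERROR_OUT_OF_MEMORY with Theano's gpuarray
# backend (Theano 1.0 flag names; sample_config.py contents are an assumption).

# Cap Theano's upfront allocation at 80% of the card's memory, so it
# reserves one contiguous block at startup instead of fragmenting memory
# with on-demand cuMemAlloc calls:
THEANO_FLAGS="device=cuda,floatX=float32,gpuarray.preallocate=0.8" \
    ./run_train_dual.py sample_config.py

# If it still runs out of memory, reduce the batch size (and/or the maximum
# sentence length) in sample_config.py before retrying.
```

Preallocation only helps with fragmentation; if the model itself exceeds the card's memory, shrinking the batch size is the only option short of a larger GPU.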