DynamicLSTM
Training a language model on another language dataset
How can we use your model to train a language model on a different language dataset? When we try to run RL_train.py we get the error shown in the log below. We are using data with a vocab size of 169449.
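For context, this is roughly how we convert our corpus to word ids before calling RL_train.py. It is only a simplified sketch of our own preprocessing; the file names, the min_count parameter, and the <unk> handling are placeholders, not the repo's actual code.

```python
import io
from collections import Counter

def build_vocab(corpus_path, min_count=1):
    # Count whitespace-separated tokens in the raw corpus.
    counts = Counter()
    with io.open(corpus_path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    # id 0 is reserved for <unk>; real words get dense, contiguous ids after it.
    word_to_id = {"<unk>": 0}
    for word, count in counts.most_common():
        if count >= min_count:
            word_to_id[word] = len(word_to_id)
    return word_to_id

def corpus_to_ids(corpus_path, word_to_id):
    # Out-of-vocabulary words map to <unk> (id 0), so no id should exceed len(word_to_id) - 1.
    ids = []
    with io.open(corpus_path, encoding="utf-8") as f:
        for line in f:
            ids.extend(word_to_id.get(word, 0) for word in line.split())
    return ids

word_to_id = build_vocab("train.txt")
print("vocab size:", len(word_to_id))  # the number we pass to the model (169449 in our case)
```

The output of the run follows: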
-----Initialized all dataset.-----
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
-----Constructing training graph.-----
-----Constructed training graph.-----
-----Constructing validating graph.-----
-----Constructed validating graph.-----
-----Constructing testing graph.-----
-----Constructed testing graph.-----
---Created and initialized fresh model. Size: 230862154
-----Start training model-----
W tensorflow/core/framework/op_kernel.cc:993] Invalid argument: indices[0,12] = 191814 is not in [0, 169449)
[[Node: Model/Embedding/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@Model/Embedding/word_embedding"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](Model/Embedding/word_embedding/read, _recv_Model/input_0)]]
W tensorflow/core/kernels/queue_base.cc:294] _0_TrainInput/input_producer/fraction_of_32_full/fraction_of_32_full: Skipping cancelled enqueue attempt with queue not closed
W tensorflow/core/kernels/queue_base.cc:294] _1_TestInput/input_producer/fraction_of_32_full/fraction_of_32_full: Skipping cancelled enqueue attempt with queue not closed
W tensorflow/core/kernels/queue_base.cc:294] _2_ValidInput/input_producer/fraction_of_32_full/fraction_of_32_full: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
File "RL_train.py", line 321, in
Caused by op u'Model/Embedding/embedding_lookup', defined at:
File "RL_train.py", line 321, in
InvalidArgumentError (see above for traceback): indices[0,12] = 191814 is not in [0, 169449) [[Node: Model/Embedding/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@Model/Embedding/word_embedding"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](Model/Embedding/word_embedding/read, _recv_Model/input_0)]]
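Since the failing index (191814) is larger than the vocab size we pass when constructing the graph (169449), we suspect the id-converted data and the embedding size disagree, e.g. the ids were produced with a different (larger) vocabulary than the one given to RL_train.py. Below is the minimal check we plan to run on our side; "train.ids" is a placeholder name for whatever id file actually feeds the model input, not a file from the repo.

```python
import io

# Sanity check (our own sketch, not repo code): confirm that every word id in the
# converted data stays below the vocab size the embedding matrix was built with.
vocab_size = 169449  # the size we construct the graph with

max_id = 0
with io.open("train.ids", encoding="utf-8") as f:
    for line in f:
        for tok in line.split():
            max_id = max(max_id, int(tok))

print("max word id: %d, configured vocab size: %d" % (max_id, vocab_size))
if max_id >= vocab_size:
    print("mismatch: id %d falls outside [0, %d), which matches the "
          "InvalidArgumentError above" % (max_id, vocab_size))
```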
Could you please help us?