im2latex-tensorflow
output the whole sequence within TF
Thanks very much for the excellent code.
In the current predict step, I see that it gets the output indices one by one (maybe for visualization purposes). attention.py, line 86:

for i in xrange(1,160):
    inp_seqs[:,i] = sess.run(predictions, feed_dict={X: imgs, input_seqs: inp_seqs[:,:i]})
In my test this takes quite a while on my GPU machine. My understanding is that it goes back and forth between TF and Python on every step. Would it be more efficient to have a single TF op output the whole sequence? I would be happy to work on it if you can offer some guidance.
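To make the idea concrete, here is a rough sketch of what I have in mind: an in-graph greedy decode loop built with tf.while_loop (TF 1.x style). decode_step below is a hypothetical stand-in for one step of the attention LSTM, and the vocabulary/embedding sizes are made up; only the loop structure matters:

import tensorflow as tf

def decode_step(prev_tokens, vocab_size=503, emb_dim=80):
    # Hypothetical one-step decoder: previous token ids (B,) -> logits (B, V).
    # A stand-in for one step of the attention LSTM, not the repo's actual op.
    emb = tf.get_variable('emb', [vocab_size, emb_dim])
    proj = tf.get_variable('proj', [emb_dim, vocab_size])
    return tf.matmul(tf.nn.embedding_lookup(emb, prev_tokens), proj)

def greedy_decode(batch_size, max_len=160):
    tokens0 = tf.zeros([batch_size], dtype=tf.int32)   # <start> token ids
    outputs0 = tf.TensorArray(tf.int32, size=max_len)

    def body(i, prev, outputs):
        with tf.variable_scope('decoder', reuse=tf.AUTO_REUSE):
            logits = decode_step(prev)                 # (B, V)
        nxt = tf.cast(tf.argmax(logits, axis=1), tf.int32)
        return i + 1, nxt, outputs.write(i, nxt)

    _, _, outputs = tf.while_loop(lambda i, p, o: i < max_len,
                                  body,
                                  [tf.constant(0), tokens0, outputs0])
    return tf.transpose(outputs.stack())               # (B, max_len)

A single sess.run on the returned tensor would then produce the whole sequence, instead of 160 round trips through feed_dict.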
I already had this code; I forgot to push it.
Check if it works now and let me know.
Thanks very much for the quick fix. I just tried the latest code from GitHub and got this error:
$ python attention.py
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
Traceback (most recent call last):
File "attention.py", line 38, in
out,state = tflib.ops.FreeRunIm2LatexAttention('AttLSTM',emb_seqs,ctx,EMB_DIM,ENC_DIM,DEC_DIM,D,H,W)
File "/home/mc/im2latex-tensorflow/tflib/ops.py", line 625, in FreeRunIm2LatexAttention
V = tf.transpose(ctx,[0,2,3,1]) # (B, H, W, D)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1285, in transpose
ret = gen_array_ops.transpose(a, perm, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 3658, in transpose
result = _op_def_lib.apply_op("Transpose", x=x, perm=perm, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2508, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1873, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1823, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
debug_python_shape_fn, require_shape_fn)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 676, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Dimension must be 3 but is 4 for 'transpose' (op: 'Transpose') with input shapes: [?,?,80], [4]
When I run "python attention.py" I hit the same problem. Could you tell me how to solve it? Thank you very much!
Traceback (most recent call last):
File "attention.py", line 37, in
out,state = tflib.ops.FreeRunIm2LatexAttention('AttLSTM',emb_seqs,ctx,EMB_DIM,ENC_DIM,DEC_DIM,D,H,W)
File "/home/wll/im2latex/im2latex0123/tflib/ops.py", line 625, in FreeRunIm2LatexAttention
V = tf.transpose(ctx,[0,2,3,1]) # (B, H, W, D)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1336, in transpose
ret = gen_array_ops.transpose(a, perm, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 5694, in transpose
"Transpose", x=x, perm=perm, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2958, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2209, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2159, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
require_shape_fn)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Dimension must be 3 but is 4 for 'transpose' (op: 'Transpose') with input shapes: [?,?,80], [4].
@moezlinlin @mingchen62 I'm sorry, I'm quite busy and don't have time to fix bugs every time TensorFlow updates its version. Please try fixing it yourself; I'm happy to accept your pull requests.
I hit the same problem. After going through the code, I find that the definition in tflib/ops.py is FreeRunIm2LatexAttention(name, ctx, input_dim, output_dim, ENC_DIM, DEC_DIM, D, H, W), where ctx comes right after name, but attention.py calls it with emb_seqs in that position: out,state = tflib.ops.FreeRunIm2LatexAttention('AttLSTM',emb_seqs,ctx,EMB_DIM,ENC_DIM,DEC_DIM,D,H,W). I think the problem has nothing to do with the TF version; it is a call-signature mismatch. But I don't know how to fix it :(
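Side by side (excerpting the signature and call quoted above), the mismatch looks like this; emb_seqs is the 3-D tensor [?, ?, 80] from the error message:

# Definition in tflib/ops.py: ctx is the second parameter
def FreeRunIm2LatexAttention(name, ctx, input_dim, output_dim,
                             ENC_DIM, DEC_DIM, D, H, W):
    ...
    V = tf.transpose(ctx, [0,2,3,1])  # line 625: expects 4-D ctx (B, D, H, W)

# Call in attention.py: emb_seqs lands in the ctx slot
out, state = tflib.ops.FreeRunIm2LatexAttention(
    'AttLSTM', emb_seqs, ctx, EMB_DIM, ENC_DIM, DEC_DIM, D, H, W)

Since emb_seqs is 3-D, the 4-element perm raises exactly the "Dimension must be 3 but is 4" ValueError.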
@moezlinlin @mingchen62 @ritheshkumar95 OK, I figured out how to solve the problem: replace FreeRunIm2LatexAttention with im2latexAttention. It seems the attention function is misused.
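Concretely, assuming im2latexAttention takes the same argument order that attention.py already passes, the call becomes:

out, state = tflib.ops.im2latexAttention(
    'AttLSTM', emb_seqs, ctx, EMB_DIM, ENC_DIM, DEC_DIM, D, H, W)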
@wwjwhen When I replace FreeRunIm2LatexAttention with im2latexAttention I get the error below:

InvalidArgumentError (see above for traceback): assertion failed: [Expected shape for Tensor rnn/sequence_length:0 is ] [20] [ but saw shape: ] [8]
[[node rnn/Assert/Assert (defined at /root/im2latex-tensorflow/tflib/ops.py:533) = Assert[T=[DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rnn/All/_99, rnn/Assert/Assert/data_0, rnn/stack/_101, rnn/Assert/Assert/data_2, rnn/Shape_1/_103)]]
[[{{node rnn/while/PyFunc/_278}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_2593_rnn/while/PyFunc", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
Can you help me, please?
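The assertion message itself points at a batch-size mismatch: the graph's rnn/sequence_length tensor was built with shape [20], but the batch actually fed contained 8 examples. If that reading is correct, one option is to derive the length tensor from the runtime batch instead of a hard-coded constant; the names emb_seqs, NUM_STEPS, and BATCH_SIZE below are placeholders for whatever the script really uses:

# Instead of a constant built from a fixed BATCH_SIZE, e.g.
#   seq_len = tf.constant(NUM_STEPS, shape=[BATCH_SIZE])
# derive both sizes from the tensor actually fed:
batch_size = tf.shape(emb_seqs)[0]
num_steps = tf.shape(emb_seqs)[1]
seq_len = tf.fill([batch_size], num_steps)

The simpler workaround is to feed batches whose size matches the BATCH_SIZE the graph was built with (20 in this trace).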
@moezlinlin @mingchen62 I'm sorry, I'm quite busy and don't have time to fix bugs every time TensorFlow updates its version. Please try fixing it yourself; I'm happy to accept your pull requests.
You sucker!!!!!
Reporting and blocking this user for bad language.