Aziz
The issue is solved: I was working on a very large TSV file, so I split it into parts and the error went away.
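A minimal sketch of the splitting step, assuming a plain TSV with a header row; the function name `split_tsv`, the part-file naming scheme, and the chunk size are illustrative assumptions, not the exact script used above:

```python
# Split a large TSV into fixed-size parts so each part fits in memory.
from itertools import islice

def split_tsv(path, lines_per_part=100_000, header=True):
    """Write path as path.part0.tsv, path.part1.tsv, ..., each with at most
    lines_per_part data rows; the header (if any) is repeated in each part."""
    parts = []
    with open(path, encoding="utf-8") as src:
        head = src.readline() if header else ""
        part = 0
        while True:
            # Read the next chunk lazily instead of loading the whole file.
            chunk = list(islice(src, lines_per_part))
            if not chunk:
                break
            out_path = f"{path}.part{part}.tsv"
            with open(out_path, "w", encoding="utf-8") as dst:
                if head:
                    dst.write(head)
                dst.writelines(chunk)
            parts.append(out_path)
            part += 1
    return parts
```

Each part can then be processed independently, which avoids loading the full file at once.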
Hi! I have seen Mr. Attardi's comment about the meaning of both the accuracy and the errors; he said: > Don't worry about those numbers. You should get usable embeddings anyway.
Maybe use `tf.reset_default_graph()`?
Hey @alvarobartt, thanks a lot for the hints. I am using the above notebook and your suggestion solved my memory issue on google colab.
I have also experienced very slow transcription. Did you come up with a solution to this problem? Thanks.
It seems you're correct. Try using the output as follows:

```python
prediction = tf.nn.softmax(output)

# Define loss and optimizer (for example)
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=output, labels=Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
...
```
Thanks @shubhamugare. Not really. I was just wondering how Syncode would find the most probable next token when we have prefix and suffix parts alongside the grammar rules. Right now, testing...