RNNForward -> Feature Not Implemented
When I test a BiLSTM model with CNTK on the CPU (the model was trained on a GPU), an error occurs:
Inside File: Source/Math/Matrix.cpp Line: 4423 Function: RNNForward -> Feature Not Implemented.
Closed Kaldi writer.
When I test the same model on the GPU instead, the error does not occur.
Has anyone encountered the same error?
This is because the RNN node used is based on cuDNN and is only implemented on the GPU.
Thanks,
Dong
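As a workaround sketch (assuming a CUDA-capable GPU is present; the model filename below is hypothetical), evaluation can be pinned to the GPU, where the cuDNN-based node is implemented:

import cntk as C

# Pin evaluation to the GPU, since the cuDNN-based RNN node has no
# CPU implementation. 'bilstm.model' is a hypothetical model file.
if not C.device.try_set_default_device(C.device.gpu(0)):
    raise RuntimeError('No GPU available; cuDNN-based RNN ops cannot run on the CPU.')
model = C.load_model('bilstm.model', device=C.device.gpu(0))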
Our Python APIs support both CPU- and GPU-based RNNs. See the Language Understanding tutorial for more information.
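For illustration, here is a minimal sketch of a CPU-friendly bidirectional recurrence along the lines of that tutorial (the embedding, hidden, and label sizes are illustrative placeholders):

import cntk as C

num_labels = 10  # placeholder label count

# Run the input through a forward and a backward recurrence and
# splice the two hidden sequences together.
def BiRecurrence(fwd, bwd):
    F = C.layers.Recurrence(fwd)
    G = C.layers.Recurrence(bwd, go_backwards=True)
    x = C.placeholder()
    return C.splice(F(x), G(x))

model = C.layers.Sequential([
    C.layers.Embedding(150),
    BiRecurrence(C.layers.LSTM(150), C.layers.LSTM(150)),
    C.layers.Dense(num_labels)
])

Because this uses CNTK's generic recurrence primitives rather than the cuDNN node, it evaluates on CPU as well as GPU.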
I am closing this issue for now. Please feel free to re-open it if you have further comments.
I got this error when doing model evaluation with the C++ API function void CNTK::Function::Evaluate() on the CPU. Is it still not implemented on the CPU?
I am still seeing this issue with CNTK 2.4 on Windows. When will it be fixed?
I encountered the issue running code in Module 03 of Microsoft's DEV287x course on edX.org. If you look within the CNTK ops module, the method causing the problem (for me) resides at the following package location and has the following comment (in part): ...\site-packages\cntk\ops\__init__.py
def optimized_rnnstack(operand, weights, hidden_size, num_layers,
                       bidirectional=False, recurrent_op='lstm', name=''):
    '''
    An RNN implementation that uses the primitives in cuDNN.
    If cuDNN is not available it fails. You can use :class:`~cntk.misc.optimized_rnnstack_converter.convert_optimized_rnnstack`
    to convert a model to GEMM-based implementation when no cuDNN.
    ...
    '''
Where required, I passed the original C.optimized_rnnstack call as the argument to the suggested converter, C.misc.optimized_rnnstack_converter.convert_optimized_rnnstack, which produces the GEMM-based implementation, i.e.:

return C.misc.optimized_rnnstack_converter.convert_optimized_rnnstack(
    C.optimized_rnnstack(operand, weights=W, hidden_size=hidden_size,
                         num_layers=num_layers, bidirectional=True,
                         recurrent_op='lstm'))
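For context, a more complete sketch of this pattern (the dimensions and the weight parameter W below are illustrative assumptions; the full weight shape is inferred when the op is applied, as in the op's own docstring example):

import cntk as C

input_dim, hidden_size, num_layers = 40, 512, 3   # made-up sizes

operand = C.sequence.input_variable(input_dim)
# cuDNN packs all layer weights into a single parameter whose full
# shape is inferred when the op is applied.
W = C.parameter((C.InferredDimension, input_dim), init=C.glorot_uniform())

cudnn_rnn = C.optimized_rnnstack(operand, weights=W,
                                 hidden_size=hidden_size,
                                 num_layers=num_layers,
                                 bidirectional=True,
                                 recurrent_op='lstm')
# Rewrite the cuDNN-based node into an equivalent GEMM-based graph
# that also runs on CPU-only machines.
cpu_rnn = C.misc.optimized_rnnstack_converter.convert_optimized_rnnstack(cudnn_rnn)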
Although fitting the BLSTM acoustic model proceeded at a very pokey pace on my CPU-only Windows 10 machine (roughly 2.5 hours per epoch), fitting over 10 epochs gave approximately as good a fit as a feed-forward DNN trained over 100 epochs (and I didn't have to buy a machine with an NVIDIA GPU!).