Running the bot under Flask produces errors
When launching flask_service_bot.py there were various path-related problems:
Traceback (most recent call last):
File "flask_service_bot.py", line 87, in <module>
init_chatbot()
File "flask_service_bot.py", line 40, in init_chatbot
machine.load_models(w2v_folder, models_folder) # was models_folder, w2v_folder, @joomler 10.09.2019
TypeError: load_models() missing 1 required positional argument: 'w2v_folder'
I fixed this on line 40 by adding data_folder:
machine.load_models(data_folder, models_folder, w2v_folder)
Also, on line 67, I removed the `..`:
parser.add_argument('--tmp_folder', type=str, default='/tmp')
After that Flask started the server and loaded the greeting, but after I typed some text and submitted it, an error came up:
```
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1741, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot_service/routes.py", line 60, in index
bot.push_phrase(user_id, utterance)
File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/bot_personality.py", line 42, in push_phrase
self.engine.push_phrase(self, user_id, question)
File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/simple_answering_machine.py", line 291, in push_phrase
interpreted_phrase = self.interpret_phrase(bot, session, question)
File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/simple_answering_machine.py", line 208, in interpret_phrase
self.word_embeddings):
File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/nn_req_interpretation.py", line 59, in require_interpretation
y_pred = self.model.predict(x=X_batch, verbose=0)
File "/home/joo/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1164, in predict
self._make_predict_function()
File "/home/joo/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 554, in _make_predict_function
**kwargs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2744, in function
return Function(inputs, outputs, updates=updates, **kwargs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2546, in __init__
with tf.control_dependencies(self.outputs):
File "/home/joo/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 5004, in control_dependencies
return get_default_graph().control_dependencies(control_inputs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 4543, in control_dependencies
c = self.as_graph_element(c)
File "/home/joo/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3490, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/home/joo/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3569, in _as_graph_element_locked
raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("output_1/Softmax:0", shape=(?, 2), dtype=float32) is not an element of this graph.
```
I didn't touch anything in the models, so it must be something on the Flask side, something isn't being loaded correctly.
Thanks for pointing this out, I've fixed the flask_service_bot.py source. Unfortunately, the Flask service refuses to work properly for me: it crashes with strange TensorFlow errors when the neural network models are used, so I can't check it more thoroughly.
Are the errors the same as mine, or different?
> Are the errors the same as mine, or different?

A different error, in one of the neural network models:
ValueError: Tensor Tensor("output_2/Sigmoid:0", shape=(?, 1), dtype=float32) is not an element of this graph.
Moreover, the same model with the same code works without errors in the bot's console service. For now I don't even know where to dig.
A different but similar error. Here are 2 solutions: https://kobkrit.com/tensor-something-is-not-an-element-of-this-graph-error-in-keras-on-flask-web-server-4173a8fe15e1 and https://github.com/tensorflow/tensorflow/issues/14356. Neither works for me, probably because I'm adding the code in the wrong place. I tried:
with graph.as_default():
    machine.load_models(data_folder, models_folder, w2v_folder)
and I also tried putting it before init_chatbot().
It looks like it needs to be inserted right before the prediction call.
> It looks like it needs to be inserted right before the prediction call.

Yes, thanks for the recipe, it helped. I added it before the predict call for three models and it works now! It's a crutch, of course: the chatbot code is now nailed firmly to the TF backend.
I've committed the changes to the three source files.
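For reference, the pattern being discussed is roughly the following minimal sketch; the `predict_safely` wrapper and the model file name are illustrative, the actual wrapper classes in the repo look different:

```python
import tensorflow as tf
from keras.models import load_model

# Load the model and capture the default TF graph once, at startup time
# (e.g. inside init_chatbot()), in the main thread.
model = load_model('nn_req_interpretation.model')  # hypothetical path
graph = tf.get_default_graph()

def predict_safely(X_batch):
    # Flask serves each request in a worker thread where the graph the Keras
    # model was built in is no longer the default, so re-enter it explicitly
    # right before calling predict().
    with graph.as_default():
        return model.predict(x=X_batch, verbose=0)
```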
One more error, in https://github.com/Koziev/chatbot/blob/master/PyModels/bot_service/routes.py :
from bot_service.dialog_phrase import DialogPhrase
I have another question: how is sessioning implemented? There is no user registration and no cookies are sent, so how does the model know which user it is talking to? And given that the user's answers are remembered, how does it choose which user gets which answer from the database, by IP?
Does the bot run successfully under Gunicorn?
I'm hitting yet another error, specifically when running through Flask; in the console the bot works.
tensorflow/stream_executor/cuda/cuda_dnn.cc:373] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
After some googling, this error led me to https://github.com/tensorflow/tensorflow/issues/24828 , where there is a long discussion and several proposed solutions. In particular, they suggest this: https://github.com/tensorflow/tensorflow/issues/24828#issuecomment-464960819
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
But it's not entirely clear how to apply this to your code?
Full error trace:
```
2019-05-14 17:18:51,936 Loading greetings from ../data/smalltalk_opening.txt
2019-05-14 17:18:51,947 127.0.0.1 - - [14/May/2019 17:18:51] "GET /index HTTP/1.1" 200 -
2019-05-14 17:18:52,074 127.0.0.1 - - [14/May/2019 17:18:52] "GET /static/img/bot.png HTTP/1.1" 200 -
2019-05-14 17:18:56,078 BotScripting::start_conversation
2019-05-14 17:18:56,078 Loading greetings from ../data/smalltalk_opening.txt
2019-05-14 17:18:56,079 127.0.0.1 - - [14/May/2019 17:18:56] "GET /index HTTP/1.1" 200 -
2019-05-14 17:19:01.124269: E tensorflow/stream_executor/cuda/cuda_dnn.cc:373] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
[the same cuDNN error line repeated 23 more times, timestamps up to 17:19:01.146457]
2019-05-14 17:19:01,230 127.0.0.1 - - [14/May/2019 17:19:01] "POST /index HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
    response = self.handle_exception(e)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1741, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
    raise value
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
    raise value
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/joo/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot_service/routes.py", line 60, in index
    bot.push_phrase(user_id, utterance)
  File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/bot_personality.py", line 42, in push_phrase
    self.engine.push_phrase(self, user_id, question)
  File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/simple_answering_machine.py", line 417, in push_phrase
    answers = self.build_answers(bot, interlocutor, interpreted_phrase)
  File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/simple_answering_machine.py", line 538, in build_answers
    answers, answer_confidenses = self.build_answers0(bot, interlocutor, interpreted_phrase)
  File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/simple_answering_machine.py", line 464, in build_answers0
    word_embeddings=self.word_embeddings)
  File "/home/joo/Документы/LocalRepository/chatbot-koziev-master/PyModels/bot/nn_enough_premises_model.py", line 91, in is_enough
    y = self.model.predict(x=self.inputs)[0]
  File "/home/joo/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1169, in predict
    steps=steps)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/keras/engine/training_arrays.py", line 294, in predict_loop
    batch_outs = f(ins_batch)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "/home/joo/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
  [[{{node shared_conv_1_4/convolution/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](shared_conv_1_4/convolution/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, shared_conv_1_4/convolution/ExpandDims_1)]]
  [[{{node output_2/Sigmoid/_759}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_2481_output_2/Sigmoid", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
2019-05-14 17:19:01,331 127.0.0.1 - - [14/May/2019 17:19:01] "GET /index?__debugger__=yes&cmd=resource&f=debugger.js HTTP/1.1" 200 -
```
> I'm hitting yet another error, specifically when running through Flask; in the console the bot works.
> tensorflow/stream_executor/cuda/cuda_dnn.cc:373] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

Looks like flask+tensorflow has stopped being the simple option :( Right now I don't know how to use the workaround either, all the more so since I've never run into this one myself.
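For what it's worth, the way that suggestion is usually wired into Keras 2.x / TF 1.x code is sketched below; the assumption that it belongs in flask_service_bot.py right before init_chatbot() has not been verified against the repo.

```python
import tensorflow as tf
from keras import backend as K

# Let TF allocate GPU memory on demand instead of grabbing it all up front;
# this is the usual remedy for CUDNN_STATUS_INTERNAL_ERROR on memory-starved GPUs.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

# Make this the session Keras uses, *before* any of the bot's models are built.
K.set_session(sess)
```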
> I have another question: how is sessioning implemented? There is no user registration and no cookies are sent, so how does the model know which user it is talking to? And given that the user's answers are remembered, how does it choose which user gets which answer from the database, by IP?

Honestly, it isn't implemented at all. What's in https://github.com/Koziev/chatbot/blob/master/PyModels/flask_service_bot.py is just a rough draft. The user id is hardcoded on line 18 of https://github.com/Koziev/chatbot/blob/master/PyModels/bot_service/routes.py. I was planning to build a proper frontend in Angular, with registration and session history stored in sqlite or postgres, but haven't gotten around to it yet.
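A minimal way to get per-user sessions in the current Flask draft would be to hand each browser a random id via Flask's signed session cookie and pass that id into push_phrase() instead of the hardcoded value. This is only a sketch under that assumption; the app object and view wiring here are illustrative, not taken from routes.py:

```python
import uuid
from flask import Flask, session

app = Flask(__name__)
app.secret_key = 'replace-with-a-real-secret'  # required for signed session cookies

def get_user_id():
    # Give each browser a random id on its first request and keep it in the
    # session cookie, so bot.push_phrase(user_id, utterance) can tell users apart.
    if 'user_id' not in session:
        session['user_id'] = uuid.uuid4().hex
    return session['user_id']
```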
> Does the bot run successfully under Gunicorn?

I don't remember whether I ever ran the service under gunicorn, but I left a small piece of code in for that case. Although, in light of the latest series of TF errors, the prospect of running the bot service on Flask looks rather murky.