stanford-tensorflow-tutorials

Why are BOT responses blank?

Open gopinathankm opened this issue 7 years ago • 26 comments

When I run the following, why do I get blank responses from the bot?

(my_env) ubuntu@ip-172-31-16-139:~/chatbot/annbot$ python3 chatbot.py --mode chat
Preparing data to be model-ready ...
Data ready!
Initialize new model
Create placeholders
Create inference
Creating loss...
It might take a couple of minutes depending on how many buckets you have.
WARNING:tensorflow:From /home/ubuntu/anaconda3/envs/my_env/lib/python3.6/site-packages/tensorflow/python/ops/nn_impl.py:1310: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version. Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

Time: 27.975615978240967
Create optimizer...
It might take a couple of minutes depending on how many buckets you have.
Loading parameters for the Chatbot
Welcome to TensorBro. Say something. Enter to exit. Max length is 60

> hi
[[-0.584015    6.833303   -0.7068891  ... -0.23560944 -0.49890977 -0.4948687 ]]
.
> hello
[[-0.58008343  7.1182957  -0.69766235 ... -0.22473697 -0.4460981  -0.4717362 ]]
?
> Hello
[[-0.58008343  7.1182957  -0.69766235 ... -0.22473697 -0.4460981  -0.4717362 ]]
?
> How are you?
[[-0.45013425  6.5066013  -0.71693504 ... -0.17606334 -0.6236066  -0.541961  ]]
.

==================================================================
Also, why are the BOT responses in output_convo.txt blank?

(my_env) ubuntu@ip-172-31-16-139:~/chatbot/annbot/processed$ nano output_convo.txt
HUMAN ++++ hi
BOT ++++ .
HUMAN ++++ hello
BOT ++++ ?
HUMAN ++++ Hello
BOT ++++ ?
HUMAN ++++ How are you?
BOT ++++ .
=============================================
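For reference, chat mode picks each output word greedily from those printed logit arrays. Here is a minimal sketch of that decoding step (not the repo's exact code; the token ids and tiny vocabulary below are illustrative assumptions), which shows how a reply collapses to a single "." when one token dominates the logits:

```python
import numpy as np

def construct_response(output_logits, inv_vocab, eos_id=2):
    """Greedy-decode a reply: output_logits is one (1, vocab_size) array per
    decoder step; the reply is truncated at the end-of-sentence token."""
    outputs = [int(np.argmax(logit, axis=1)) for logit in output_logits]
    if eos_id in outputs:
        outputs = outputs[:outputs.index(eos_id)]  # cut the reply at <eos>
    return ' '.join(inv_vocab[i] for i in outputs)

# If '.' has the largest logit at every step, the whole reply is just
# punctuation -- which is exactly the symptom in the transcript above.
inv_vocab = {0: '<pad>', 1: '.', 2: '<eos>', 3: 'hello'}
step1 = np.array([[-0.58, 6.83, -0.70, -0.49]])  # argmax -> '.'
step2 = np.array([[-0.60, 0.10, 5.00, -0.50]])   # argmax -> <eos>
print(construct_response([step1, step2], inv_vocab))  # -> .
```

An undertrained (or collapsed) model assigns most probability mass to frequent tokens like "." and "?", so greedy decoding returns nothing else.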

gopinathankm avatar Mar 23 '18 15:03 gopinathankm

I have run into the same issue. The test loss is high. Did you solve it?

wsb5680 avatar Mar 24 '18 01:03 wsb5680

Not yet.

gopinathankm avatar Mar 24 '18 03:03 gopinathankm

Train it more.

MartinAbilev avatar Mar 26 '18 07:03 MartinAbilev

How long do we need to iterate? I have run more than 14000 iterations in 10 hours. Is that not enough?

gopinathankm avatar Mar 26 '18 10:03 gopinathankm

Enough is when you are satisfied with the output. Sometimes something is wrong and training does not go well at all, and you can train forever without success.

You need to play with your data and settings. Choose smaller params first, then move to bigger ones, to see whether it works at all.
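As one way to do that, a scaled-down config.py for a quick sanity run might look like this (the values are illustrative starting points, not recommendations from the repo; the variable names match the config quoted later in this thread):

```python
# Scaled-down settings for a fast sanity check: if the loss does not drop
# clearly with a small model on a small dataset, something else is broken.
BUCKETS = [(8, 10)]   # one short (encoder, decoder) bucket keeps the graph small
NUM_LAYERS = 1        # a single RNN layer trains much faster than 3
HIDDEN_SIZE = 128     # half the hidden size quoted below
BATCH_SIZE = 32
LR = 0.5
MAX_GRAD_NORM = 5.0   # gradient clipping threshold
```

Once this small configuration produces non-trivial replies, scale the layers, hidden size, and buckets back up.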

MartinAbilev avatar Mar 26 '18 11:03 MartinAbilev

I trained it for 73000 iterations but I am still getting the same results, like ? and .

zukerrrr avatar Apr 17 '18 09:04 zukerrrr

These are the settings in my config.py; the last iteration is 62800:

BUCKETS = [(16, 19)]

NUM_LAYERS = 3
HIDDEN_SIZE = 256
BATCH_SIZE = 64

LR = 0.5
MAX_GRAD_NORM = 5.0

MartinAbilev avatar Apr 18 '18 07:04 MartinAbilev

And the result:

> i dont know agent unk
[[ -0.58301872  10.56363678  -0.75994956 ...,  -0.98926491  -1.0604918
   -0.550138  ]]
oh , yeah .
> yeah
[[-0.48157236  9.12944603 -0.45098591 ..., -0.6943714  -0.65536427
  -0.37269545]]
.
> stop doting
[[ -0.68257785  11.46051979  -0.91514087 ...,  -0.51534909  -0.68435407
   -0.14121974]]
- -
> no and stripes i dont like even
[[-0.30254534  7.19619989 -0.30324209 ..., -0.46949279 -0.61815649
  -0.2052134 ]]
you ' re lying .
> no i not
[[ -0.74903452  10.35870457  -0.86198902 ...,  -0.66905749  -0.53297275
   -0.56081152]]
nothing .
> philosophy
[[ -0.57804114  11.67677402  -0.7756232  ...,  -0.4330861   -0.4938038
   -0.44071352]]
.
> how much legs cat hawe ?
[[-0.41352031  8.16804886 -0.73054808 ..., -0.73398674 -0.83470362
  -0.72415113]]
with the <unk> .
> yeah
[[-0.48157236  9.12944603 -0.45098591 ..., -0.6943714  -0.65536427
  -0.37269545]]
.
> tell me 
[[-0.63063455  6.57353258 -0.46259886 ..., -0.70763153 -0.37358218
  -0.24181336]]
don ' t you tell me about it .
> no i dont know it
[[ -0.61053139  10.25772285  -0.99005854 ...,  -0.82599133  -1.04334843
   -0.67073309]]
and you do everything .
> no i dont do everything
[[-0.81464332  8.16336727 -1.04276168 ..., -0.75862557 -1.03167498
  -1.02897918]]
that ' s all .
> ok
[[-0.617365    9.04081535 -0.75897199 ..., -0.75397456 -0.4791148
  -0.37771353]]
don ' t have much .
> bye
[[-0.46397874  8.31635571 -0.79642689 ..., -0.59941679 -0.60873872
  -0.34941944]]
you ' re not in the mood .
> 

MartinAbilev avatar Apr 18 '18 07:04 MartinAbilev

What is the test loss? @MartinAbilev

sinanatra avatar Apr 18 '18 10:04 sinanatra

~/dev/tenzorflow-chat-testdrive$ python chatbot.py                                                         
Data ready!                                                                                                                  
Bucketing conversation number 9999                                                                                           
Bucketing conversation number 19999                                                                                          
Bucketing conversation number 9999                                                                                           
Bucketing conversation number 19999                                                                                          
Bucketing conversation number 29999                                                                                          
Bucketing conversation number 39999                                                                                          
Bucketing conversation number 49999                                                                                          
Bucketing conversation number 59999                                                                                          
Bucketing conversation number 69999                                                                                          
Bucketing conversation number 79999                                                                                          
Bucketing conversation number 89999                                                                                          
Bucketing conversation number 99999                                                                                          
Bucketing conversation number 109999                                                                                         
Bucketing conversation number 119999                                                                                         
Bucketing conversation number 129999                                                                                         
Bucketing conversation number 139999                                                                                         
Bucketing conversation number 149999                                                                                         
Bucketing conversation number 159999                                                                                         
Bucketing conversation number 169999                                                                                         
Bucketing conversation number 179999                                                                                         
Bucketing conversation number 189999                                                                                         
Number of samples in each bucket:
 [103198]
Bucket scale:
 [1.0]
Initialize new model
Create placeholders
Create inference
Creating loss... 
It might take a couple of minutes depending on how many buckets you have.
Time: 2.9458863735198975
Create optimizer... 
It might take a couple of minutes depending on how many buckets you have.
Creating opt for bucket 0 took 6.980544805526733 seconds
Running session
Loading parameters for the Chatbot
Iter 62900: loss 1.7389020776748658, time 1.6152856349945068
Iter 63000: loss 1.713471269607544, time 1.4531621932983398
Test bucket 0: loss 3.626643180847168, time 2.8442370891571045
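For context, a train loss near 1.71 against a test loss near 3.63 is a large generalization gap; converting cross-entropy loss to perplexity makes the gap concrete (a rough back-of-envelope, using the numbers from the log above):

```python
import math

train_loss = 1.713471269607544  # Iter 63000, from the log above
test_loss = 3.626643180847168   # Test bucket 0, from the log above

# Cross-entropy in nats -> perplexity: the model's effective branching factor.
train_ppl = math.exp(train_loss)
test_ppl = math.exp(test_loss)
print(round(train_ppl, 1), round(test_ppl, 1))  # -> 5.5 37.6
```

A test perplexity nearly seven times the train perplexity suggests the model is memorizing the training conversations rather than learning to generalize.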

MartinAbilev avatar Apr 18 '18 11:04 MartinAbilev

she is fun as ever :D

Running session
Loading parameters for the Chatbot
Iter 62900: loss 1.7389020776748658, time 1.6152856349945068
Iter 63000: loss 1.713471269607544, time 1.4531621932983398
Test bucket 0: loss 3.626643180847168, time 2.8442370891571045
Iter 63100: loss 1.7354186856746674, time 1.5451974868774414
Iter 63200: loss 1.7274975204467773, time 1.4273159503936768
Iter 63300: loss 1.7350391232967377, time 1.4423978328704834
^CTraceback (most recent call last):
  File "chatbot.py", line 262, in <module>
    main()
  File "chatbot.py", line 257, in main
    train()
  File "chatbot.py", line 156, in train
    _, step_loss, _ = run_step(sess, model, encoder_inputs, decoder_inputs, decoder_masks, bucket_id, False)
  File "chatbot.py", line 82, in run_step
    outputs = sess.run(output_feed, input_feed)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1323, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
    status, run_metadata)
KeyboardInterrupt
~/dev/tenzorflow-chat-testdrive$ python3 chatbot.py --mode chat
Data ready!
Initialize new model
Create placeholders
Create inference
Creating loss... 
It might take a couple of minutes depending on how many buckets you have.
Time: 2.344371795654297
Create optimizer... 
It might take a couple of minutes depending on how many buckets you have.
Loading parameters for the Chatbot
Welcome to TensorBro. Say something. Enter to exit. Max length is 16
> hello
[[ -0.42601886  11.76952934  -0.96129787 ...,  -0.38945305  -0.63972712
   -0.52184105]]
come on .
> why you want me to do this ?
[[-0.84985662  7.54191875 -0.66726327 ..., -0.46486002 -0.69005805
  -0.38926086]]
throw me a little while .
> i trow some chocolate to you
[[-0.63454562  7.31223917 -0.68008763 ..., -0.3172918  -0.62918943
  -0.36875695]]
i ' m serious .
> me to !
[[ -1.19059336  10.17427731  -1.53483987 ...,  -0.87535918  -1.28455663
   -0.89420712]]
way to my room !
> ohh no
[[-0.62382394  8.92223644 -0.51953173 ..., -0.67145741 -0.60165089
  -0.46206495]]
no .
> i don`t want to get to bed with you now .
[[-0.61282045  7.70583582 -0.86621624 ..., -1.14901936 -1.03707504
  -0.30115816]]
where ' s the money ?
> i don`t tell you !
[[-0.9617241   9.48097229 -1.16938531 ..., -0.74119282 -1.03512502
  -0.71569276]]
what ?
> nothing
[[ -0.5954656   11.44416332  -0.8853721  ...,  -0.87170106  -0.29809356
   -0.50806785]]
- -
> bye
[[-0.35947308  7.6860323  -0.81516069 ..., -0.51186019 -0.61428398
  -0.26302782]]
i need to be .
> me too
[[-0.63649333  8.02639294 -1.06983578 ..., -0.93391991 -0.56139541
  -0.53440243]]
don ' t you ' re going to be here .
> no i go away. i go home .
[[-0.90490222  9.08857822 -0.83062935 ..., -0.40304869 -1.07223058
  -1.00504088]]
you will not . . . but you are .
> i am same as you 
[[-1.13782763  8.06291008 -1.10772276 ..., -0.78122187 -1.07012486
  -0.79459012]]
if you please leave .
> yes i leave. bye.    
[[-0.99189955  9.18696594 -0.97211885 ..., -0.34530568 -1.24671197
  -0.69110519]]
good .
> 

MartinAbilev avatar Apr 18 '18 11:04 MartinAbilev

I even have the same settings in config.py as yours, and my last iteration was 117000, but still only exclamation marks and dots!!! @MartinAbilev

akhil2910c avatar Apr 22 '18 08:04 akhil2910c

It looks like something is wrong with the code or the training data. It is hard to tell what exactly. You can create a repo with your code and drop a link.

MartinAbilev avatar Apr 23 '18 07:04 MartinAbilev

Or you can check my copy and compare it with your code. It is public.

MartinAbilev avatar Apr 23 '18 07:04 MartinAbilev

I have the same problem. @akhil2910c have you solved it?

EthanPhan avatar Jun 20 '18 01:06 EthanPhan

No I didn't @EthanPhan!!

akhil2910c avatar Jun 20 '18 05:06 akhil2910c

Ok. I'm gonna check out MartinAbilev's code, see if it works, and see what the difference is. I'll update if I find anything.

EthanPhan avatar Jun 20 '18 06:06 EthanPhan

Unfortunately I have the same blank results. Something isn't working right with the code out of the box. Adjusting parameters made it worse.

J-Fo-S avatar Jul 04 '18 05:07 J-Fo-S

I also have that problem. Is there any practical solution?

yuewang1402 avatar Aug 17 '18 02:08 yuewang1402

Any one found a solution?

I'm still getting blank answers

OmarMAmin avatar Oct 04 '18 13:10 OmarMAmin

If you're not wed to using this particular model, this one works better than any other I've seen out in the open: https://github.com/pender/chatbot-rnn

It can be insulting though, as it was trained on reddit data, so be forewarned.

J-Fo-S avatar Oct 04 '18 13:10 J-Fo-S

@J-Fo-S thank you for sharing the link. Is this your project?

KINGdotNET avatar Oct 04 '18 13:10 KINGdotNET

You're welcome but no, it is not my project.

J-Fo-S avatar Oct 04 '18 13:10 J-Fo-S

Train your model and check. You can train with your own small dataset of line-by-line dialog:

===
Hi
Hello, How are you?
I am fine. Thanks.
What are you doing?
I am just chatting with you.
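One way to turn such a line-by-line dialog into training pairs, where each line answers the previous one, is sketched below (the output file names train.enc/train.dec are assumptions about the encoder/decoder file format, not necessarily what this repo's data pipeline expects):

```python
# Hypothetical helper: split a line-by-line dialog into an encoder file
# (questions) and a decoder file (answers), one pair per adjacent line pair.
dialog = [
    "Hi",
    "Hello, How are you?",
    "I am fine. Thanks.",
    "What are you doing?",
    "I am just chatting with you.",
]

pairs = list(zip(dialog[:-1], dialog[1:]))  # each line answers the previous one
with open("train.enc", "w") as enc, open("train.dec", "w") as dec:
    for question, answer in pairs:
        enc.write(question + "\n")
        dec.write(answer + "\n")
```

With a toy dataset this small, the model should overfit quickly and echo the answers back; if it cannot even do that, the pipeline itself is broken.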

neeraj26jan avatar Jan 18 '19 05:01 neeraj26jan

Is it related to the number of buckets?

wz111 avatar Apr 22 '19 10:04 wz111

I think the model is too simple, so its capacity is not enough to fit so much data. When I use 189999 samples, the final loss is around 2.2. With 110000 samples, the corresponding loss is around 2.0. With fewer samples, the loss keeps decreasing! Hope this helps.

zzh-ecnu avatar Dec 16 '19 14:12 zzh-ecnu