seq2seq-fingerprint

Fetching fingerprint error

Open · phquanta opened this issue 6 years ago • 0 comments

When running decode.py in sample mode, I'm getting the following error:

KeyError: "The name 'model_with_buckets/embedding_attention_seq2seq_1/rnn/rnn/embedding_wrapper/embedding_wrapper/multi_rnn_cell/cell_0/cell_0/gru_cell/add_59:0' refers to a Tensor which does not exist. The operation, 'model_with_buckets/embedding_attention_seq2seq_1/rnn/rnn/embedding_wrapper/embedding_wrapper/multi_rnn_cell/cell_0/cell_0/gru_cell/add_59', does not exist in the graph."
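Presumably this is raised when the generated encoder-state names are looked up with `Graph.get_tensor_by_name` (I have not traced the exact call site). A quick way to see which names actually exist is to enumerate the graph's operations, roughly like this:

```python
import tensorflow as tf

# Sketch only: assumes the decoder's model graph has already been restored
# into the default graph. Lists every GRU-cell "add" op under the encoder so
# the available prefixes (cell_0/... vs cell_0/cell_0/...) can be compared.
graph = tf.get_default_graph()
for op in graph.get_operations():
    if "embedding_attention_seq2seq_1" in op.name and "gru_cell/add" in op.name:
        print(op.name)
```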

I've looked all over the computational graph for a tensor named .../cell_0/cell_0/... and could not find one; tensors named .../cell_0/..., however, are in abundance. In that case, if I change the prefix so that one cell_id level is dropped (commenting out the second cell_id):
```python
encoder_state_names = [
    # "%s/cell_%d/cell_%d/%s/add%s:0" % (  # original pattern with two cell_%d levels
    "%s/cell_%d/%s/add%s:0" % (
        cell_prefix,
        cell_id,
        # cell_id,                         # second cell_id commented out
        "gru_cell",  # In the future, we might have LSTM support.
        "_%d" % n if n > 0 else ""
    ) for cell_id in xrange(self.num_layers)]
```

This fix seems to work with certain buckets but not with all of them.
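For reference, a quick way to check which of the generated names resolve for a given bucket is something like this (a sketch, assuming the graph is already restored and `encoder_state_names` is the list built above):

```python
# Try to fetch each candidate tensor and report the missing ones;
# get_tensor_by_name raises the same KeyError shown above for unknown names.
graph = tf.get_default_graph()
for name in encoder_state_names:
    try:
        graph.get_tensor_by_name(name)
    except KeyError:
        print("missing:", name)
```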

Additionally, the paper mentions a bottleneck layer for extracting fingerprints; in the code, however, it looks like the encoder's context plus hidden states make up the fingerprints, not the output of a bottleneck layer.
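To make the distinction concrete, here is a rough illustration of the two readings (not the repo's actual code; the function names and fp_size are made up):

```python
import tensorflow as tf

def fingerprint_from_states(encoder_states):
    # Reading 1 (what the code appears to do): concatenate the encoder's
    # per-layer hidden states directly and use that as the fingerprint.
    return tf.concat(encoder_states, axis=1)

def fingerprint_from_bottleneck(encoder_states, fp_size=128):
    # Reading 2 (what the paper describes): feed the concatenated states
    # through a smaller bottleneck dense layer and use its output instead.
    concatenated = tf.concat(encoder_states, axis=1)
    return tf.layers.dense(concatenated, fp_size, activation=tf.nn.tanh)
```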

Is this a bug?

phquanta · Oct 09 '19 18:10