monkdou0
```python
bow_indices.append([tokenizer.encode(word.strip(), add_prefix_space=True, add_special_tokens=False) for word in words])
```
I tried to run this code, and every word gets encoded into more than one token. I suspect this is caused by `add_prefix_space=True`....
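A minimal sketch to check this, assuming a GPT-2 style tokenizer from `transformers` (the exact splits depend on the vocabulary and library version, so the comments below describe typical behavior, not guaranteed output):

```python
# Compare GPT-2 BPE tokenization with and without add_prefix_space.
# Assumption: the "gpt2" checkpoint; any GPT-2 style tokenizer behaves similarly.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

for word in ["happiness", "joy"]:
    # With the prefix space, the word is encoded as " word", matching how
    # it usually appears mid-sentence in GPT-2's vocabulary.
    with_space = tokenizer.encode(word, add_prefix_space=True, add_special_tokens=False)
    # Without it, the word is encoded as it would appear at the start of a
    # string, which can split into a different (often longer) token sequence.
    without_space = tokenizer.encode(word, add_special_tokens=False)
    print(word, "->", len(with_space), "token(s) with prefix space,",
          len(without_space), "token(s) without")
```

Printing the two lengths side by side should show whether `add_prefix_space=True` is actually what splits the bag-of-words entries into multiple tokens.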
If it is possible, what is the input to the discriminator: the encoder hidden state, the decoder hidden state, or both?
Same question.