沧海桑田
For unsupervised text clustering, the key issue is the initial embedding for the text. If we want to use https://github.com/facebookresearch/deepcluster for text, the problem is how to get the...
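One way to get such an initial embedding (a minimal sketch, not something DeepCluster itself provides; the scikit-learn pipeline and all parameters below are assumptions) is to turn each document into TF-IDF features, reduce them with truncated SVD, and then run the k-means step that DeepCluster alternates with network training:

```python
# A minimal sketch of one way to get initial text embeddings before a
# DeepCluster-style k-means step: TF-IDF features reduced with truncated SVD.
# The scikit-learn pipeline and every parameter here are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

texts = [
    "deep clustering for unsupervised learning of visual features",
    "language models generate text token by token",
    "k-means assigns pseudo-labels that a network can be trained on",
]

# Sparse TF-IDF features -> dense low-dimensional embeddings.
tfidf = TfidfVectorizer(max_features=20000)
X = tfidf.fit_transform(texts)
svd = TruncatedSVD(n_components=2, random_state=0)  # tiny, only for this toy example
embeddings = svd.fit_transform(X)

# DeepCluster alternates k-means pseudo-labels with network training;
# here we only show the k-means step on the initial embeddings.
pseudo_labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(embeddings)
print(pseudo_labels)
```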
I read from [here](https://github.com/songrotek/DRL-FlappyBird/blob/master/BrainDQN_NIPS.py#L89-L108). Why does the program use only the current state and the next state? Why is using only these two states enough to work? Thank you @songrotek
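As I understand it, the reason is that the one-step Q-learning target is bootstrapped: given a transition (state, action, reward, next_state, terminal), the target for Q(state, action) depends only on the reward and the next state, so no longer history is needed. A minimal sketch (not the repository's exact code; the variable names and the discount value are assumptions):

```python
# One-step TD target used in DQN: the target for Q(state, action) is
# r + gamma * max_a' Q(next_state, a'), or just r at a terminal step,
# so each stored transition only needs the current and the next state.
import numpy as np

GAMMA = 0.99  # discount factor; the value here is an assumption

def td_target(reward, next_q_values, terminal):
    """Return r + GAMMA * max_a' Q(s', a'), or just r if the episode ended."""
    if terminal:
        return reward
    return reward + GAMMA * np.max(next_q_values)

# Example: Q-values predicted for the next state (one entry per action: flap / no-op).
next_q_values = np.array([0.2, 1.5])
print(td_target(reward=0.1, next_q_values=next_q_values, terminal=False))
print(td_target(reward=-1.0, next_q_values=next_q_values, terminal=True))
```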
I see from [here](https://github.com/songrotek/DRL-FlappyBird/blob/master/BrainDQN_NIPS.py#L88) that all the rewards are added to the deque. We need to sample the +1 and -1 rewards from the deque to use them. So...
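For context, this is the usual experience-replay pattern: every transition, whatever its reward, goes into a bounded deque, and training draws a uniform random minibatch from it, so the occasional +1 / -1 reward transitions are sampled along with the common small-reward ones. A minimal sketch (not the repository's code; the names and capacities are assumptions):

```python
# Experience replay: append every transition to a bounded deque and sample
# uniform random minibatches from it for training.
import random
from collections import deque

REPLAY_MEMORY = 50000   # capacity; value is an assumption
BATCH_SIZE = 32

replay_memory = deque(maxlen=REPLAY_MEMORY)

def store_transition(state, action, reward, next_state, terminal):
    replay_memory.append((state, action, reward, next_state, terminal))

def sample_minibatch():
    # Uniform sampling; rare +1 / -1 rewards appear in proportion to how often they occur.
    return random.sample(replay_memory, min(BATCH_SIZE, len(replay_memory)))
```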
Thank you very much!
First, I'm not sure whether the model contains the encoder during training. EOS means end-of-sentence. The encoder and decoder are parts of the transformer network. Without the encoder, at training time: ``` target: [E,...
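To make the without-encoder case concrete, here is a minimal sketch (assumed, not the exact setup discussed in the thread) of decoder-only training: the decoder is fed the sequence and the target is the same sequence shifted left by one position, ending in EOS, so no separate encoder is needed at training time.

```python
# Decoder-only training targets: input is the sequence prefixed with BOS,
# the target is the same sequence shifted left by one, ending in EOS.
BOS, EOS = "<bos>", "<eos>"
tokens = ["the", "bird", "flaps"]

decoder_input = [BOS] + tokens           # what the decoder is fed
target        = tokens + [EOS]           # what it must predict at each position

for inp, tgt in zip(decoder_input, target):
    print(f"given {inp!r:>10}  predict {tgt!r}")
```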
Based on my understanding, GPT and GPT-2 use a language-model loss to train and generate text, which does not involve a GAN. So which is better: GPT vs RelGAN/LeakGAN/SeqGAN/TextGAN? I...
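By "language-model loss" I mean plain next-token cross-entropy, with no discriminator or adversarial term. A minimal sketch (assumed, not GPT's actual code; shapes and names are placeholders):

```python
# Language-model objective: cross-entropy between next-token logits and the
# shifted targets. No discriminator, unlike the GAN-based generators above.
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch = 100, 8, 4
logits  = torch.randn(batch, seq_len, vocab_size)          # stand-in for model output
targets = torch.randint(0, vocab_size, (batch, seq_len))   # shifted-by-one token ids

lm_loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(lm_loss.item())
```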
Thank you very much!