
7 comments

But based on Figure 1 in the paper, the conv output size (3) is different from the embedding size (5). Is "n" the number of conv filters and "m" the embedding size? Or...

@skaae Hi, for the two bugs reported by James, I modified `hid_init=lasagne.layers.InputLayer((BATCH_SIZE, REC_NUM_UNITS), hid1_init_sym)` and `train_out, l_rec1_hid_out, l_rec2_hid_out = lasagne.layers.get_output([l_out, l_rec1, l_rec2], {l_inp: sym_x}, deterministic=False)`...

Could we build the language model at the sequence level? That is, each batch would be composed of whole sequences, and mask_input could be used to discard the blank (padded) outputs?
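The idea behind masking can be sketched without the framework: pad every sequence in the batch to the same length, and zero out the loss at padded positions so they contribute nothing to training. A minimal illustration in plain Python (the `masked_nll` helper and the toy log-probability dicts are hypothetical stand-ins, not part of the original code):

```python
import math

def masked_nll(log_probs, targets, mask):
    """Average negative log-likelihood over non-padded time steps.

    log_probs: per-step dicts mapping token -> log probability (a toy
    stand-in for the network's softmax output); targets: gold tokens;
    mask: 1 for real steps, 0 for padding. Only steps with mask == 1
    contribute to the loss, so padded "blank" outputs are discarded.
    """
    total, count = 0.0, 0
    for lp, t, m in zip(log_probs, targets, mask):
        if m:
            total -= lp[t]
            count += 1
    return total / max(count, 1)

# A two-step sequence plus one padded step; the padded step is ignored.
log_probs = [{"a": math.log(0.5)}, {"b": math.log(0.25)}, {"a": math.log(0.1)}]
loss = masked_nll(log_probs, ["a", "b", "a"], [1, 1, 0])
```

In Lasagne the same effect is achieved by passing a mask layer via the recurrent layers' `mask_input` argument and multiplying the per-step loss by the mask before averaging.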

@f0k Hey Jan, the input was as follows: `l_inp = lasagne.layers.InputLayer((BATCH_SIZE, MODEL_SEQ_LEN), sym_x)`. The two get_output calls were modified as below; it's working now, but is this what you meant? Thank you very much...

How about a sparse autoencoder (KL divergence) here? Is there a paper on that?
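For reference, the KL-divergence penalty mentioned here is the standard sparse-autoencoder regularizer: it measures the divergence between a target Bernoulli activation rate rho and each hidden unit's average activation rho_hat, and is added to the reconstruction loss. A small sketch in plain Python (the function name and the sample activation values are illustrative, not from the original discussion):

```python
import math

def kl_sparsity(rho, rho_hat):
    """KL divergence between Bernoulli(rho) and Bernoulli(rho_hat).

    Zero when rho_hat == rho, and growing as the unit's average
    activation drifts away from the sparsity target rho.
    """
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))

# Penalty summed over hidden units; the unit averaging 0.5 dominates
# because it is far from the 0.05 target.
penalty = sum(kl_sparsity(0.05, r) for r in [0.04, 0.06, 0.5])
```

In practice this sum is scaled by a sparsity weight (often written beta) and added to the autoencoder's reconstruction cost.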

@shicai Do you have any contact information? Thanks.