gaceladri

26 comments by gaceladri

Change it to:

```python
loss = tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=prediction,
    logits=tf.reshape(shifted, [-1, self.quantization_channels]))
```
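For context, `softmax_cross_entropy_with_logits_v2` computes the cross-entropy between the label distribution and a softmax over the logits, per row. A minimal NumPy sketch of that computation (an illustration of the math, not the TF call itself):

```python
import numpy as np

def softmax_cross_entropy(labels, logits):
    """Row-wise cross-entropy between `labels` (a probability
    distribution per row) and softmax(logits)."""
    # Subtract the row max before exponentiating for numerical stability.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -(labels * log_softmax).sum(axis=-1)

labels = np.array([[1.0, 0.0, 0.0]])  # one-hot target
logits = np.array([[2.0, 1.0, 0.5]])
loss = softmax_cross_entropy(labels, logits)
```

Here `loss[0]` is `-log(softmax(logits)[0])` for the target class, about 0.464 for these numbers.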

Same. I started with vue-hackernews-2.0 and ran `npm install vue-particles --save`, then `npm install vue-particles --save-dev` after the first attempt didn't work. `ERROR in ./node_modules/babel-loader/lib!./node_modules/vue-loader/lib/selector.js?type=script&index=0&bustCache!./node_modules/vue-particles/src/vue-particles/vue-particles.vue Module build failed: ReferenceError: Unknown...

I have seen that comment, but I was not sure which version it referred to, because I think in v0.7.x you can save the schema by doing: `workflow.transform(dataset).to_parquet("./schema", num_partitions=1)` or...

Perfect. My concern was about reproducing a more complicated example than the ones that are up to date (v0.7.1) in the examples. I was trying to reproduce [the example on the...

Thanks for your answers @gabrielspmoreira & @rnyak. Looking forward to seeing that PR and to finding a solution for when you have different source dataframes and are merging them into one...

@dsanjit Download the pix2pix version from before it was updated to TF 1.4. I had the same issue, and it was fixed by training the model with the previous version of pix2pix since...

You're a machine! Haha, I am working on an implementation of Transformer-XL with adaptive softmax and dynamic evaluation, and I found this awesome repo when I was looking for...

I only skimmed the paper, but my intuition says that they got better results just by adding more computation in one way or another... I have not read the...

@lucidrains One thing that I would do is first add a bottleneck after the embedding, like in the MobileBERT paper, and then add the linear attention to the Trans-XL. It would...

Well, thanks for your answer! I am not looking for longer sequences, just efficiency and some performance. Do you think it can work with a good performance/efficiency trade-off in...