Milan Straka
@gowthamkpr The issue was originally reported as a TF issue, but we were redirected to post it here. If you know any more details (why it is a TF issue and not...
Adding the author of the commit that removed the documented functionality: @jonycgn.
Alternatively, some variant of `write_scalar_summaries` could be moved to the `on_train_batch_end` method of the `TensorBoard` callback.
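A minimal sketch of that idea, using only public APIs (the class name, log directory, and `batch_` prefix are just illustrative; this is an approximation, not the removed `write_scalar_summaries` itself):

```python
import tensorflow as tf

class BatchScalarLogger(tf.keras.callbacks.Callback):
    """Writes the per-batch `logs` dict as scalar summaries."""

    def __init__(self, log_dir="logs/batch"):  # illustrative path
        super().__init__()
        self._writer = tf.summary.create_file_writer(log_dir)
        self._step = 0

    def on_train_batch_end(self, batch, logs=None):
        self._step += 1
        if not logs:
            return
        with self._writer.as_default():
            for name, value in logs.items():
                # Prefix distinguishes these from the epoch-level scalars.
                tf.summary.scalar("batch_" + name, value, step=self._step)
```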
Hi, thanks for the answer! The commit actually did remove functionality from the `TensorBoard` callback -- the callback just sets the correct `tf.summary.record_if` in https://github.com/keras-team/keras/blob/46121eed08d0feef743eacef3b66206df45cf656/keras/callbacks.py#L2362-L2373 but it relied on `write_scalar_summaries` being executed in...
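For readers following along: `tf.summary.record_if` only gates whether summary ops that actually run get recorded, so if nothing emits the summaries anymore, setting the condition alone writes nothing. A minimal sketch of the gating (path and values illustrative):

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/logs")  # illustrative path
step = tf.Variable(0, dtype=tf.int64)

with writer.as_default():
    # The condition is re-evaluated whenever a summary op executes;
    # the scalar below is recorded only when step % 10 == 0.
    with tf.summary.record_if(lambda: step % 10 == 0):
        tf.summary.scalar("loss", 0.5, step=step)
```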
> One thing to note is that these logs were _not_ actual batch-level summaries, which was a reason to remove them. They are accumulated from the beginning of the...
> I disagree on the cumulative summaries front – I find it useful to have instantaneous batch summaries to get a sense of how much the loss function fluctuates. You...
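To make the distinction concrete: the progress-bar values are running means over the epoch so far, so (assuming equally weighted batches) the instantaneous value of batch `n` can be recovered as `n * cum_n - (n - 1) * cum_{n-1}`. A quick sketch:

```python
def instantaneous(cumulative_means):
    """Recover per-batch values from cumulative running means,
    assuming every batch contributes with equal weight."""
    inst, prev = [], 0.0
    for n, cum in enumerate(cumulative_means, start=1):
        inst.append(n * cum - (n - 1) * prev)
        prev = cum
    return inst

print(instantaneous([4.0, 3.0, 2.0]))  # batch losses were [4.0, 2.0, 0.0]
```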
Adding @pedro-r-marques who wrote the code.
On second thought, I opened an issue in the TensorFlow repository https://github.com/tensorflow/tensorflow/issues/55475 to discuss the problem with `tf.map_fn` on `RaggedTensor`s -- `RaggedTensor`s are supported according to the documentation, so...
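For reference, this is the documented usage I have in mind (in the spirit of the `tf.map_fn` docs; the actual failing case from the issue is not reproduced here):

```python
import tensorflow as tf

rt = tf.ragged.constant([[1, 2, 3], [], [4, 5]])
# Per the docs, `fn` is called once per row of the RaggedTensor; with a
# scalar output per row, the result is a regular dense tensor.
sums = tf.map_fn(tf.reduce_sum, rt, fn_output_signature=tf.int32)
print(sums)  # tf.Tensor([6 0 9], shape=(3,), dtype=int32)
```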
@divyashreepathihalli Thanks for pointing it out -- I have closed my report in the TensorFlow repository as a duplicate of it. This also means we need to go through the action...
I think there really must be a bug in the decoder. If you run the original Colab with beam size `K=3`, the sequence `[0, 1, 0]` is returned with logprob...
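To illustrate what I would expect, here is a toy reference (the per-step table is hypothetical and history-independent, so the best sequence is simply the per-step argmax, which gives an easy ground truth; this is not the model from the Colab):

```python
import numpy as np

def beam_search(step_logprobs, k):
    """Toy beam search over a fixed per-step log-probability table;
    returns the top-k sequences with their total log-probabilities."""
    beams = [((), 0.0)]
    for logprobs in step_logprobs:
        candidates = [
            (seq + (tok,), score + lp)
            for seq, score in beams
            for tok, lp in enumerate(logprobs)
        ]
        beams = sorted(candidates, key=lambda c: -c[1])[:k]
    return beams

# Hypothetical 3-step, 2-token table.
table = [np.log([0.6, 0.4]), np.log([0.5, 0.5]), np.log([0.7, 0.3])]
for seq, score in beam_search(table, k=3):
    print(seq, float(score))
```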