bhack
How does it scale in terms of training/inference resolution?
I think you can also check how DALI is handling this :wink: https://github.com/NVIDIA/DALI
@tanzhenyu What is our (sustainability) policy about brand new papers?
We had quite a long discussion with @LukeWood and @innat at https://github.com/keras-team/keras-cv/discussions/52#discussioncomment-2058663
I think that, especially for a backbone, the test of time is how many papers are using that specific backbone (in this specific case it was published just a few days...
Yes, I think that my points are general enough and not related to an "algorithm" for inclusion. My points are more focused on the general sustainability of the library...
Also, it seems that we were not really using the claimed within-batch augmentation: https://github.com/keras-team/keras-cv/issues/1382#issuecomment-1424536428
Yes, I think it was basically wrong:

```python
import tensorflow as tf

def test_map(input):
    print("Call to test")
    return tf.constant([1])

def test_vectorized_map(input):
    print("Call to test vectorized")
    return tf.constant([1])

input_shape = (5,...
```
Ok, so probably it is internally wrapped in graph mode in `tf.vectorized_map`. We are never fully in eager mode with `vectorized_map`.

```python
import tensorflow as tf

def test_map(input):
    return input["input"] + tf.random.uniform([])...
```
> replace print with a tf.print() and you’ll get your expected result.

No, you will not get the expected result with `tf.print`. That was the point of the thread....
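A minimal sketch of the tracing behavior discussed above (assuming TensorFlow 2.x; the function and variable names here are illustrative, not from the thread): a Python-side `print` or any other Python side effect inside the function passed to `tf.vectorized_map` fires only while the function is being traced, not once per batch element, because `vectorized_map` compiles the function in graph mode.

```python
import tensorflow as tf

# Count how many times the Python body actually executes.
trace_calls = []

def augment(x):
    trace_calls.append(1)  # Python side effect: runs at trace time only
    return x + tf.random.uniform([])

out = tf.vectorized_map(augment, tf.zeros([5]))

# The Python body ran once (during tracing), not 5 times,
# even though all 5 elements were processed.
print(len(trace_calls))
print(out.shape)
```

This is why neither `print` nor `tf.print` shows the per-element calls one might expect from eager execution: the per-element work happens inside the vectorized graph, not in the Python function body.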