Ömer Faruk Özdemir
We had a similar situation in Gradio and resolved it via this kind of [approach](https://github.com/gradio-app/gradio/blob/1ba956af74a6b505199ecc2d6d78224050b876cb/gradio/blocks.py#L554). Wanted to share it to support the issue.
@seppeon I just ran into this issue; perhaps you missed the answers, so bumping it after a while :D
Hello @bradyz; https://github.com/bradyz/carla_project/blob/ac791fcf7e59ad80b6908dadc00eb4f26147c065/src/image_model.py#L132 This is the part I am talking about: it seems to mix the output generated by the privileged agent with its own output. However, it did not make sense...
Looks like a visual bug to me because it jumps from 11 to 1 really fast. It will probably be solved with the new queue.
@abidlabs why is there a `batch_fn`? Isn't it unnecessary, and doesn't it make things more complex? Sending an arbitrary-length list of inputs to `fn` seems fine to me and had...
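To illustrate what I mean (a minimal sketch, not Gradio's actual API; `predict` and its inputs are hypothetical): the event function could itself accept a list of arbitrary length and return one output per input, which would make a separate `batch_fn` unnecessary.

```python
from typing import List

def predict(prompts: List[str]) -> List[str]:
    # Hypothetical batched handler: receives however many inputs
    # the queue collected and returns one output per input.
    return [p.upper() for p in prompts]

# The framework could call the same function with any batch size:
print(predict(["hello"]))        # single-item batch
print(predict(["a", "b", "c"]))  # larger batch, same function
```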
I have not been able to find an example of this on the web since then. WDYT about this currently, @abidlabs, @dawoodkhan82, @pngwn, @gary149? Shall we put effort into this? Using the image...
Ok then, instead of closing this issue I would like to remove the milestone and keep it at low priority.
Thx for the suggestion! Hmm, it would not be that critical, but nice to have, I presume. I can't think of a way of doing that immediately; do you guys have...
@abidlabs do you know why sharing is the [default](https://github.com/gradio-app/gradio/blob/7e796a3e171191a2a0a0ab48fe19b02cde0d9777/gradio/blocks.py#L636) in Colab notebooks? Is it because it is handier?
What I actually meant was that we could have share off in Colab by default as well, like in the local case.
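For context, a minimal sketch of the kind of default resolution I mean (hypothetical helpers, not the exact code in `blocks.py`): detect Colab and decide the `share` default from that, which could just as well be `False` there too.

```python
import sys
from typing import Optional

def in_colab() -> bool:
    # Colab injects the google.colab module into the runtime,
    # so its presence in sys.modules is a reasonable signal.
    return "google.colab" in sys.modules

def resolve_share(share: Optional[bool]) -> bool:
    # Hypothetical default resolution: today sharing is effectively
    # on by default in Colab; returning False here instead would
    # match the local default.
    if share is None:
        return in_colab()
    return share
```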