Hey, just checking: is there any plan to release this sooner? We are exploring the [tag header based routing](https://github.com/kserve/kserve/tree/master/docs/samples/v1beta1/rollout#tag-based-routing) feature, and it looks like with that feature enabled, we are adding...
The code was single-GPU only at that time. For the model, we found at training time that batch size 1 gave the best performance (perhaps restricted by dataset...
You may have to restructure model.py to place the generators/discriminators on different GPUs; see https://www.tensorflow.org/guide/using_gpu.
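In case it helps, here's a minimal sketch of that kind of restructuring, assuming TensorFlow's `tf.device()` context manager (the function and variable names below are hypothetical placeholders, not SG-GAN's actual code):

```python
import tensorflow as tf

# Fall back to CPU gracefully if a requested GPU is not present.
tf.config.set_soft_device_placement(True)

def build_model():
    # Pin the generator subgraph to the first GPU...
    with tf.device("/gpu:0"):
        generator_vars = tf.Variable(tf.zeros([1]), name="generator_stub")
    # ...and the discriminator subgraph to the second GPU.
    with tf.device("/gpu:1"):
        discriminator_vars = tf.Variable(tf.zeros([1]), name="discriminator_stub")
    return generator_vars, discriminator_vars
```

In model.py this would mean wrapping the generator/discriminator construction calls in separate `tf.device()` blocks, so each subgraph's variables and ops are placed on its own device.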
Hi @dleam , thanks for your interest! Here it is: https://1drv.ms/u/s!AmVzKjduHCxdqSQrUlK7nxz2wvZj
Hi @dleam , per this feed_dict setting (https://github.com/Peilun-Li/SG-GAN/blob/master/model.py#L215-L231) you can see that fake_B_sample is adapted from real_A, and mask_B_sample is actually mask_A. Since we want to keep the semantic information from being changed...
@yuzisun Great thanks! Looking forward to that feature.
Hey @yuzisun, has that feature been supported yet (to customize those constant prefixes/suffixes), or is there a timeline? Thanks!
Hey @yuzisun, just checking: any updates or plan/workaround on this? Thanks!
Thanks @psschwei , yes we do want to have the activator in the path (even when the target already has some capacity), as that turns out to be helpful with long-tail...
In case it helps, here's the plot of `activator_request_concurrency` reported by one activator pod during the example attack. Looks like it's ever-growing without any cap at 10k or so.