bhack
I don't remember which checkpoint we used: https://huggingface.co/CompVis/stable-diffusion For the calibration you can probably pass prompt samples from "laion-improved-aesthetics" or "laion-aesthetics v2 5+": https://laion.ai/blog/laion-aesthetics/
@artemZholus Do you plan to fine-tune more than 48 frames?
> Hi @bhack, I plan to work on this, but not as part of Google. This will likely take a couple of weeks. @artemZholus Any news on this? I've tried...
@artemZholus Thanks, keep us updated. I think a minimum of usability in TAPNext is really needed. State-query workarounds applied only at inference time are really overcomplicated, with a...
Oh ok, so it is more on the tensor parallelism side (DTensor)
@jonas-eschmann What is your scope with `s += np.sum(a[:10, :10])`?
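For reference, a minimal sketch of what that expression computes, assuming `a` is a 2-D NumPy array (the original `a` from the question is not shown, so a hypothetical one is used here):

```python
import numpy as np

# Hypothetical example array; the original `a` in the question is not shown.
a = np.arange(400).reshape(20, 20)

s = 0
# `a[:10, :10]` selects the top-left 10x10 block; np.sum reduces it to a scalar.
s += np.sum(a[:10, :10])
print(s)  # 9450 for this particular array
```

The slice creates a view, so the accumulation only touches the selected block, not the whole array.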
Do you want to apply transformations to the dataset elements?
What was the logic behind exposing this in `model.compile` instead?
I suppose that if we already let the user choose `jit_compile` or not in the `model.compile` API, we don't want to automatically compile layers without any user control. https://github.com/keras-team/keras/blob/39ad2c1cb22b231baf05a0218322328c13654bda/keras/engine/training.py#L532
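To illustrate the user-controlled path being discussed, here is a minimal sketch (assuming a TF >= 2.8 Keras API): XLA compilation is opted into explicitly via the `jit_compile` flag of `model.compile`, rather than layers being jit-compiled automatically behind the user's back.

```python
import tensorflow as tf

# Hypothetical toy model; the point is only the compile-time flag below.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)

# The user explicitly opts into XLA here; omitting jit_compile (or passing
# False) keeps the default non-XLA execution path.
model.compile(optimizer="sgd", loss="mse", jit_compile=True)
```

Keeping the flag at the model level means a single, visible switch for the whole training step, instead of per-layer compilation the user cannot inspect or disable.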
/cc @qlzh727 @LukeWood I suppose that we will have a small "explosion" of XLA `jit_compile` failures when we enable XLA compilation. And they will be more fatal than...