Dylan Prins
Hi! During training of the ARS model we obtain valid actions, but during evaluation we obtain actions of 0. The evaluation and training environments are the same except for...
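For context, this is roughly how the evaluation is set up on my side; a minimal sketch using sb3-contrib's ARS, where the environment id and the `deterministic=True` flag are stand-ins for my actual setup:

```python
import gymnasium as gym
from sb3_contrib import ARS
from stable_baselines3.common.evaluation import evaluate_policy

# Train ARS on one environment instance.
train_env = gym.make("Pendulum-v1")
model = ARS("MlpPolicy", train_env, verbose=1)
model.learn(total_timesteps=10_000)

# Evaluate on a freshly constructed environment. If the training env carries
# wrappers such as VecNormalize, the evaluation env needs the same wrappers
# and statistics, otherwise the policy sees out-of-distribution observations.
eval_env = gym.make("Pendulum-v1")
mean_reward, std_reward = evaluate_policy(
    model, eval_env, n_eval_episodes=5, deterministic=True
)
print(mean_reward, std_reward)
```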
Hi! I'm trying to figure out how I can obtain a subset of tracks using a list of genres. I picked a couple of genres. Using a list like ["genre1",...
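To illustrate what I'm after, here is a minimal sketch of the kind of filtering I have in mind; the column names and example data are placeholders, not the dataset's actual schema:

```python
import pandas as pd

# Toy track table; "track_id" and "genres" are placeholder column names.
tracks = pd.DataFrame({
    "track_id": [1, 2, 3, 4],
    "genres": [["Rock"], ["Jazz", "Blues"], ["Electronic"], ["Rock", "Pop"]],
})

# Keep tracks whose genre list overlaps the requested genres.
wanted = {"Rock", "Jazz"}
subset = tracks[tracks["genres"].apply(lambda gs: bool(wanted.intersection(gs)))]
print(subset)
```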
Hi! Do you have the adapted VQ-VAE somewhere? I would love to experiment with that implementation. Otherwise, could you explain to me which VQ-VAE you used and what you did...
Hi! In sd3_train_network.py we have the following function:

```python
def post_process_network(self, args, accelerator, network, text_encoders, unet):
    # check t5xxl is trained or not
    self.train_t5xxl = network.train_t5xxl
```

But network has no...
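For what it's worth, this is the kind of fallback I tried locally; it is my own sketch, not code from sd-scripts, and it assumes that defaulting to False is acceptable when the attribute is missing:

```python
def post_process_network(self, args, accelerator, network, text_encoders, unet):
    # Check whether t5xxl is trained; fall back to False when the network
    # module does not expose a train_t5xxl attribute.
    self.train_t5xxl = getattr(network, "train_t5xxl", False)
```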
### Describe the bug

When using `_sage_qk_int8_pv_fp8_cuda_sm90` as the attention backend on WAN2.2 I2V I notice that the output is broken. It works fine with `_flash_3_hub` and `_sage_qk_int8_pv_fp16_cuda`. Does it...
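For reference, I switch backends roughly as below; the model id, the use of two transformers, and the `set_attention_backend` calls reflect my understanding of the current diffusers attention-dispatch API and my local script, so treat them as assumptions rather than an exact repro:

```python
import torch
from diffusers import WanImageToVideoPipeline

# Assumed model id for the WAN2.2 I2V MoE checkpoint in Diffusers format.
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

# Broken output with the fp8 SageAttention kernel on sm90:
pipe.transformer.set_attention_backend("_sage_qk_int8_pv_fp8_cuda_sm90")
pipe.transformer_2.set_attention_backend("_sage_qk_int8_pv_fp8_cuda_sm90")

# These backends work for me instead:
# pipe.transformer.set_attention_backend("_flash_3_hub")
# pipe.transformer.set_attention_backend("_sage_qk_int8_pv_fp16_cuda")
```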
### Description

I'm comparing the Diffusers implementation by the authors with your implementation for WAN2.2, i.e. the non-distilled version compared to the original...
Hi, in the ComfyUI workflows lots of people use a KSampler to automatically determine the split between the high- and low-noise models. Is there something similar in Lightx2v? How can...
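To show what I mean by "the split", here is a rough sketch of how I understand it; the 0.875 boundary ratio, the 1000 training timesteps, and the linear schedule are my assumptions, not values taken from Lightx2v:

```python
import numpy as np

# Assumed values; not taken from Lightx2v.
num_train_timesteps = 1000
boundary_ratio = 0.875
boundary = boundary_ratio * num_train_timesteps  # switch point in timestep units

# Example inference schedule: 30 steps, linearly spaced from 1000 down toward 0.
timesteps = np.linspace(num_train_timesteps, 0, 30, endpoint=False)

# Steps at or above the boundary go to the high-noise model,
# the remainder to the low-noise model.
high_noise_steps = [int(t) for t in timesteps if t >= boundary]
low_noise_steps = [int(t) for t in timesteps if t < boundary]
print(f"high-noise: {len(high_noise_steps)} steps, low-noise: {len(low_noise_steps)} steps")
```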
Hi! I have a couple of questions regarding T2V for WAN2.2. Looking at the Hugging Face repo I only see new distilled models for I2V WAN2.2. I do see the wan...