Tae-Young Kim
In torch 1.10.0, I wrote some ConvTranspose2d LoRA code like:

```python
class ConvTransposeLoRA(nn.Module, lora.LoRALayer):
    def __init__(self, conv_module, in_channels, out_channels, kernel_size,
                 r=0, lora_alpha=1, lora_dropout=0., merge_weights=True, **kwargs):
        super(ConvTransposeLoRA, self).__init__()
        self.conv = conv_module(in_channels, out_channels, ...
```
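For reference, here is a minimal self-contained sketch of the idea (my own simplification, not the `lora.LoRALayer` mixin above): it wraps `nn.ConvTranspose2d` directly and shapes the low-rank update to match `ConvTranspose2d`'s weight layout `(in_channels, out_channels // groups, kH, kW)`, which differs from `Conv2d`'s. All names and shapes below are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvTransposeLoRA(nn.Module):
    """Hypothetical LoRA wrapper for nn.ConvTranspose2d (sketch, not the
    official loralib implementation)."""
    def __init__(self, in_channels, out_channels, kernel_size,
                 r=0, lora_alpha=1, lora_dropout=0.0, **kwargs):
        super().__init__()
        self.conv = nn.ConvTranspose2d(in_channels, out_channels,
                                       kernel_size, **kwargs)
        self.r = r
        self.scaling = lora_alpha / r if r > 0 else 1.0
        self.dropout = (nn.Dropout(lora_dropout)
                        if lora_dropout > 0 else nn.Identity())
        if r > 0:
            kh, kw = self.conv.kernel_size
            # B @ A reshapes to ConvTranspose2d's weight layout:
            # (in_channels, out_channels // groups, kH, kW)
            self.lora_A = nn.Parameter(
                torch.zeros(r, (out_channels // self.conv.groups) * kh * kw))
            self.lora_B = nn.Parameter(torch.zeros(in_channels, r))
            nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
            # lora_B starts at zero, so the wrapper matches the frozen
            # base conv exactly at initialization
            self.conv.weight.requires_grad_(False)

    def forward(self, x):
        if self.r > 0:
            delta = (self.lora_B @ self.lora_A).view(
                self.conv.weight.shape) * self.scaling
            return F.conv_transpose2d(
                self.dropout(x), self.conv.weight + delta, self.conv.bias,
                stride=self.conv.stride, padding=self.conv.padding,
                output_padding=self.conv.output_padding,
                groups=self.conv.groups, dilation=self.conv.dilation)
        return self.conv(x)
```

Because `lora_B` is zero-initialized, the wrapped layer's output equals the frozen base convolution's output before any LoRA training step.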
Thanks for your response! My nerfstudio version is 1.1.3, my gsplat version is 1.0.0, and the script is:

```shell
ns-train desplat --output-dir "/mnt/ssd1/taeyoung/desplat/outputs/bra" \
  --viewer.quit-on-train-completion True --steps_per_save 200000 \
  --max_num_iterations 200000 --pipeline.model.stop_split_at 100000 \
  --pipeline.model.enable_appearance...
```
My temporary solution was to add `continue_cull_post_densification: bool = False` to the desplat config.
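For anyone hitting the same error, the change amounts to declaring the missing field on the model config dataclass. The sketch below uses a hypothetical stand-in class (`DeSplatModelConfig` and its other fields are assumptions, not the real desplat source); the point is only where the field goes and its default.

```python
from dataclasses import dataclass

@dataclass
class DeSplatModelConfig:
    """Hypothetical stand-in for the real desplat model config."""
    stop_split_at: int = 100000        # existing field (illustrative)
    # Workaround: field expected by gsplat 1.0.0 / nerfstudio 1.1.3 but
    # missing from the desplat config; default False disables culling
    # after densification stops.
    continue_cull_post_densification: bool = False
```

With the field declared, `ns-train desplat` can read it like any other config option instead of raising an attribute error.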