Description
python3 main.py --text "a hamburger" --workspace trial -O --backbone grid_taichi
Running this command produces the following error:
[Taichi] version 1.6.0, llvm 15.0.1, commit f1c6fbbd, win, python 3.9.13
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2'
Warning:
Unable to load the following plugins:
filter_func.dll: filter_func.dll does not seem to be a Qt Plugin.
Cannot load library C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pymeshlab\lib\plugins\filter_func.dll: The specified module could not be found.
filter_mesh_booleans.dll: filter_mesh_booleans.dll does not seem to be a Qt Plugin.
Cannot load library C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pymeshlab\lib\plugins\filter_mesh_booleans.dll: The specified module could not be found.
filter_sketchfab.dll: filter_sketchfab.dll does not seem to be a Qt Plugin.
Cannot load library C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pymeshlab\lib\plugins\filter_sketchfab.dll: The specified module could not be found.
io_3ds.dll: io_3ds.dll does not seem to be a Qt Plugin.
Cannot load library C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pymeshlab\lib\plugins\io_3ds.dll: The specified module could not be found.
io_e57.dll: io_e57.dll does not seem to be a Qt Plugin.
Cannot load library C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pymeshlab\lib\plugins\io_e57.dll: The specified module could not be found.
io_u3d.dll: io_u3d.dll does not seem to be a Qt Plugin.
Cannot load library C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pymeshlab\lib\plugins\io_u3d.dll: The specified module could not be found.
[Taichi] Starting on arch=cuda
Namespace(file=None, text='a hamburger', negative='', O=True, O2=False, test=False, six_views=False, eval_interval=1, test_interval=100, workspace='trial', seed=None, image=None, image_config=None, known_view_interval=4, IF=False, guidance=['SD'], guidance_scale=100, save_mesh=False, mcubes_resolution=256, decimate_target=50000.0, dmtet=False, tet_grid_size=128, init_with='', lock_geo=False, perpneg=False, negative_w=-2, front_decay_factor=2, side_decay_factor=10, iters=10000, lr=0.001, ckpt='latest', cuda_ray=False, taichi_ray=True, max_steps=1024, num_steps=64, upsample_steps=32, update_extra_interval=16, max_ray_batch=4096, latent_iter_ratio=0.2, albedo_iter_ratio=0, min_ambient_ratio=0.1, textureless_ratio=0.2, jitter_pose=False, jitter_center=0.2, jitter_target=0.2, jitter_up=0.02, uniform_sphere_rate=0, grad_clip=-1, grad_clip_rgb=-1, bg_radius=1.4, density_activation='exp', density_thresh=10, blob_density=5, blob_radius=0.2, backbone='grid_taichi', optim='adan', sd_version='2.1', hf_key=None, fp16=True, vram_O=False, w=64, h=64, known_view_scale=1.5, known_view_noise_scale=0.002, dmtet_reso_scale=8, batch_size=1, bound=1, dt_gamma=0, min_near=0.01, radius_range=[3.0, 3.5], theta_range=[45, 105], phi_range=[-180, 180], fovy_range=[10, 30], default_radius=3.2, default_polar=90, default_azimuth=0, default_fovy=20, progressive_view=False, progressive_view_init_ratio=0.2, progressive_level=False, angle_overhead=30, angle_front=60, t_range=[0.02, 0.98], dont_override_stuff=False, lambda_entropy=0.001, lambda_opacity=0, lambda_orient=0.01, lambda_tv=0, lambda_wd=0, lambda_mesh_normal=0.5, lambda_mesh_laplacian=0.5, lambda_guidance=1, lambda_rgb=1000, lambda_mask=500, lambda_normal=0, lambda_depth=10, lambda_2d_normal_smooth=0, lambda_3d_normal_smooth=0, save_guidance=False, save_guidance_interval=10, gui=False, W=800, H=800, radius=5, fovy=20, light_theta=60, light_phi=0, max_spp=1, zero123_config='./pretrained/zero123/sd-objaverse-finetune-c_concat-256.yaml', zero123_ckpt='./pretrained/zero123/105000.ckpt', zero123_grad_scale='angle', dataset_size_train=100, dataset_size_valid=8, dataset_size_test=100, exp_start_iter=0, exp_end_iter=10000, images=None, ref_radii=[], ref_polars=[], ref_azimuths=[], zero123_ws=[], default_zero123_w=1)
per_level_scale: 1.3195079565048218
offset_: 5722520
total_hash_size: 11445040
NeRFNetwork(
(ray_marching): RayMarcherTaichi()
(volume_render): VolumeRendererTaichi()
(encoder): HashEncoderTaichi()
(sigma_net): MLP(
(net): ModuleList(
(0): Linear(in_features=32, out_features=32, bias=True)
(1): Linear(in_features=32, out_features=4, bias=True)
)
)
(encoder_bg): FreqEncoder_torch()
(bg_net): MLP(
(net): ModuleList(
(0): Linear(in_features=27, out_features=16, bias=True)
(1): Linear(in_features=16, out_features=3, bias=True)
)
)
)
[INFO] loading stable diffusion...
vae\diffusion_pytorch_model.safetensors not found
Pipelines loaded with `torch_dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for `float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
(the warning above is printed three times)
[INFO] loaded stable diffusion!
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\Holly\Desktop\pifuhd-main\f\stable-dreamfusion\main.py:396 in <module> │
│ │
│ 393 │ │ │ from guidance.clip_utils import CLIP │
│ 394 │ │ │ guidance['clip'] = CLIP(device) │
│ 395 │ │ │
│ ❱ 396 │ │ trainer = Trainer(' '.join(sys.argv), 'df', opt, model, guidance, device=device, │
│ 397 │ │ │
│ 398 │ │ trainer.default_view_data = train_loader._data.get_default_view_data() │
│ 399 │
│ │
│ C:\Users\Holly\Desktop\pifuhd-main\f\stable-dreamfusion\nerf\utils.py:263 in __init__ │
│ │
│ 260 │ │ │ │ for p in self.guidance[key].parameters(): │
│ 261 │ │ │ │ │ p.requires_grad = False │
│ 262 │ │ │ │ self.embeddings[key] = {} │
│ ❱ 263 │ │ │ self.prepare_embeddings() │
│ 264 │ │ │
│ 265 │ │ if isinstance(criterion, nn.Module): │
│ 266 │ │ │ criterion.to(self.device) │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\utils\_contextlib.py:115 in decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ C:\Users\Holly\Desktop\pifuhd-main\f\stable-dreamfusion\nerf\utils.py:359 in prepare_embeddings │
│ │
│ 356 │ │ if self.opt.text is not None: │
│ 357 │ │ │ │
│ 358 │ │ │ if 'SD' in self.guidance: │
│ ❱ 359 │ │ │ │ self.embeddings['SD']['default'] = self.guidance['SD'].get_text_embeds([ │
│ 360 │ │ │ │ self.embeddings['SD']['uncond'] = self.guidance['SD'].get_text_embeds([s │
│ 361 │ │ │ │ │
│ 362 │ │ │ │ for d in ['front', 'side', 'back']: │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\utils\_contextlib.py:115 in decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ C:\Users\Holly\Desktop\pifuhd-main\f\stable-dreamfusion\guidance\sd_utils.py:95 in │
│ get_text_embeds │
│ │
│ 92 │ │ # prompt: [str] │
│ 93 │ │ │
│ 94 │ │ inputs = self.tokenizer(prompt, padding='max_length', max_length=self.tokenizer. │
│ ❱ 95 │ │ embeddings = self.text_encoder(inputs.input_ids.to(self.device))[0] │
│ 96 │ │ │
│ 97 │ │ return embeddings │
│ 98 │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\nn\modules\module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\transformers\models\clip\modeling_clip.py:822 in │
│ forward │
│ │
│ 819 │ │ ```""" │
│ 820 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 821 │ │ │
│ ❱ 822 │ │ return self.text_model( │
│ 823 │ │ │ input_ids=input_ids, │
│ 824 │ │ │ attention_mask=attention_mask, │
│ 825 │ │ │ position_ids=position_ids, │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\nn\modules\module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\transformers\models\clip\modeling_clip.py:740 in │
│ forward │
│ │
│ 737 │ │ │ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] │
│ 738 │ │ │ attention_mask = _expand_mask(attention_mask, hidden_states.dtype) │
│ 739 │ │ │
│ ❱ 740 │ │ encoder_outputs = self.encoder( │
│ 741 │ │ │ inputs_embeds=hidden_states, │
│ 742 │ │ │ attention_mask=attention_mask, │
│ 743 │ │ │ causal_attention_mask=causal_attention_mask, │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\nn\modules\module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\transformers\models\clip\modeling_clip.py:654 in │
│ forward │
│ │
│ 651 │ │ │ │ │ causal_attention_mask, │
│ 652 │ │ │ │ ) │
│ 653 │ │ │ else: │
│ ❱ 654 │ │ │ │ layer_outputs = encoder_layer( │
│ 655 │ │ │ │ │ hidden_states, │
│ 656 │ │ │ │ │ attention_mask, │
│ 657 │ │ │ │ │ causal_attention_mask, │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\nn\modules\module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\transformers\models\clip\modeling_clip.py:382 in │
│ forward │
│ │
│ 379 │ │ """ │
│ 380 │ │ residual = hidden_states │
│ 381 │ │ │
│ ❱ 382 │ │ hidden_states = self.layer_norm1(hidden_states) │
│ 383 │ │ hidden_states, attn_weights = self.self_attn( │
│ 384 │ │ │ hidden_states=hidden_states, │
│ 385 │ │ │ attention_mask=attention_mask, │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\nn\modules\module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\nn\modules\normalization.py:190 in forward │
│ │
│ 187 │ │ │ init.zeros_(self.bias) │
│ 188 │ │
│ 189 │ def forward(self, input: Tensor) -> Tensor: │
│ ❱ 190 │ │ return F.layer_norm( │
│ 191 │ │ │ input, self.normalized_shape, self.weight, self.bias, self.eps) │
│ 192 │ │
│ 193 │ def extra_repr(self) -> str: │
│ │
│ C:\Users\Holly\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCac │
│ he\local-packages\Python39\site-packages\torch\nn\functional.py:2515 in layer_norm │
│ │
│ 2512 │ │ return handle_torch_function( │
│ 2513 │ │ │ layer_norm, (input, weight, bias), input, normalized_shape, weight=weight, b │
│ 2514 │ │ ) │
│ ❱ 2515 │ return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.c │
│ 2516 │
│ 2517 │
│ 2518 def group_norm( │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
Exception ignored in: <function Trainer.__del__ at 0x00000249000BDEE0>
Traceback (most recent call last):
File "C:\Users\Holly\Desktop\pifuhd-main\f\stable-dreamfusion\nerf\utils.py", line 424, in del
if self.log_ptr:
AttributeError: 'Trainer' object has no attribute 'log_ptr'
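Note the secondary AttributeError at the very end: `Trainer.__init__` raised before `self.log_ptr` was assigned, so `__del__` then touches a missing attribute. A minimal defensive sketch for that method in `nerf/utils.py` (my suggestion, not the project's current code) that would silence this follow-up error:

```python
def __del__(self):
    # getattr() tolerates the case where __init__ raised before
    # self.log_ptr was ever assigned.
    if getattr(self, 'log_ptr', None):
        self.log_ptr.close()
```

This only cleans up the exit path; the underlying LayerNorm failure is unaffected.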
Steps to Reproduce
Run python3 main.py --text "a hamburger" --workspace trial -O --backbone grid_taichi
Expected Behavior
The command runs to completion and training starts without errors.
Environment
CUDA 12.2.0
Windows, Python 3.9.13 (Microsoft Store install)
Taichi 1.6.0
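For what it's worth, the `No CUDA runtime is found` line together with the fp16 LayerNorm failure suggests that PyTorch itself is running CPU-only: Taichi sees the GPU, but the Stable Diffusion pipeline silently falls back to CPU, where half-precision LayerNorm has no kernel. A quick check, assuming a standard PyTorch install:

```python
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel
print(torch.cuda.is_available())  # False would explain the CPU fallback
print(torch.version.cuda)         # None on CPU-only builds
```

If `torch.cuda.is_available()` is False, installing a CUDA-enabled wheel (for example `pip install torch --index-url https://download.pytorch.org/whl/cu121`; the exact index depends on the torch/CUDA combination) should keep the pipeline on the GPU in fp16. Failing that, loading the pipeline with `torch_dtype=torch.float32` avoids the `'Half'` error on CPU, at a large speed cost.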