GeneFacePlusPlus

Experts, please help me solve this!

Open · jakeytan opened this issue 1 year ago · 0 comments

~/GeneFacePlusPlus$ export PYTHONPATH=./ CUDA_VISIBLE_DEVICES=0 python inference/app_genefacepp.py --a2m_ckpt=checkpoints/audio2motion_vae --head_ckpt= --torso_ckpt=checkpoints/motion2video_nerf/may_torso
| WARN: egs/egs_bases/audio2motion/vae.yaml not exist.
| WARN: checkpoints/th1kh_512_audio2motion/base.yaml not exist.

RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
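In case it is relevant, this is a minimal check I can run (just a sketch, not code from the repo) to see which CUDA toolkit my PyTorch wheel was built against and whether the GPU is visible at all, since this error usually points at a mismatch between the wheel's CUDA toolkit and the installed NVIDIA driver:

```python
# Minimal diagnostic sketch (not part of GeneFacePlusPlus): print the CUDA toolkit this
# PyTorch build was compiled with plus basic device info. If the wheel's CUDA version is
# newer than the installed driver supports, "PTX ... unsupported toolchain" can appear.
import torch

print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)        # toolkit baked into the wheel
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
```

The driver version itself comes from `nvidia-smi`, and can be compared against the CUDA version printed above.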

[screenshot of the error attached]

The full output is as follows:

~/GeneFacePlusPlus$ export PYTHONPATH=./ CUDA_VISIBLE_DEVICES=0-1 python inference/app_genefacepp.py --a2m_ckpt=checkpoints/audio2motion_vae --head_ckpt= --torso_ckpt=checkpoints/motion2video_nerf/may_torso
| WARN: egs/egs_bases/audio2motion/vae.yaml not exist.
| WARN: checkpoints/th1kh_512_audio2motion/base.yaml not exist.
| Hparams: { "accumulate_grad_batches": 1, "amp": false, "audio_type": "hubert", "base_config": [ "egs/egs_bases/audio2motion/vae.yaml", "../th1kh_512_audio2motion/base.yaml" ], "batch_size": 4, "binarization_args": { "with_coeff": true, "with_hubert": true, "with_mel": true }, "binary_data_dir": "data/binary/voxceleb2_audio2motion", "blink_mode": "blink_unit", "clip_grad_norm": 1, "clip_grad_value": 0, "debug": false, "ds_name": "TH1KH_512", "eval_max_batches": 10, "exp_name": "", "gen_dir_name": "", "hidden_size": 256, "infer": false, "infer_audio_source_name": "", "infer_ckpt_steps": 40000, "infer_out_npy_name": "", "init_from_ckpt": "", "init_method": "tcp", "lambda_kl": 0.02, "lambda_kl_t1": 2000, "lambda_kl_t2": 2000, "lambda_l2_reg_exp": 0.1, "lambda_mse_exp": 1.0, "lambda_mse_lm2d": 0.0, "lambda_mse_lm3d": 0.0, "load_ckpt": "", "load_db_to_memory": false, "lr": 0.0005, "max_sentences_per_batch": 512, "max_tokens_per_batch": 20000, "max_updates": 400000, "motion_type": "exp", "num_ckpt_keep": 100, "num_sanity_val_steps": 5, "num_valid_plots": 1, "num_workers": 4, "optimizer_adam_beta1": 0.9, "optimizer_adam_beta2": 0.999, "print_nan_grads": false, "process_id": 0, "raw_data_dir": "/home/tiger/datasets/raw/TH1KH_512", "ref_id_mode": "first_frame", "resume_from_checkpoint": 0, "sample_min_length": 32, "save_best": false, "save_codes": [ "tasks", "modules", "egs" ], "save_gt": true, "scheduler": "exponential", "seed": 9999, "smo_win_size": 5, "split_seed": 999, "start_rank": 0, "syncnet_ckpt_dir": "checkpoints/0904_syncnet/syncnet_hubert_vox2", "task_cls": "tasks.os_avatar.audio2secc_task.Audio2SECCTask", "tb_log_interval": 100, "total_process": 1, "use_eye_amp_embed": false, "use_flow": true, "use_fork": true, "use_kv_dataset": true, "use_mouth_amp_embed": true, "use_pitch": true, "val_check_interval": 2000, "valid_infer_interval": 2000, "valid_monitor_key": "val_loss", "valid_monitor_mode": "min", "validate": false, "warmup_updates": 1000, "weight_decay": 0, "work_dir": "", "world_size": -1 }
| load 'model' from 'checkpoints/audio2motion_vae/model_ckpt_steps_400000.ckpt', strict=True
| WARN: egs\egs_bases\radnerf\lm3d_radnerf.yaml not exist.
| Hparams chains: ['checkpoints/motion2video_nerf/may_torso/lm3d_radnerf_torso.yaml', 'checkpoints/motion2video_nerf/may_torso/config.yaml']
| load 'model' from 'checkpoints/motion2video_nerf/may_torso/model_ckpt_steps_250000.ckpt', strict=True
Traceback (most recent call last):
  File "/home/jk/GeneFacePlusPlus/inference/app_genefacepp.py", line 229, in <module>
    demo = genefacepp_demo(
  File "/home/jk/GeneFacePlusPlus/inference/app_genefacepp.py", line 132, in genefacepp_demo
    infer_obj = Inferer(
  File "/home/jk/GeneFacePlusPlus/inference/genefacepp_infer.py", line 127, in __init__
    self.secc2video_model = self.load_secc2video(head_model_dir, torso_model_dir)
  File "/home/jk/GeneFacePlusPlus/inference/genefacepp_infer.py", line 183, in load_secc2video
    self.dataset = self.dataset_cls('trainval', training=False)
  File "/home/jk/GeneFacePlusPlus/tasks/radnerfs/dataset_utils.py", line 199, in __init__
    self.bg_img_512 = convert_to_tensor(bg_img).cuda()
RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
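As the error message suggests, the failing call can also be reproduced synchronously so the stack trace points at the exact offending line. A minimal sketch of that (the dummy tensor below is a placeholder, not code from the repo):

```python
# Minimal sketch: set CUDA_LAUNCH_BLOCKING before any CUDA work so errors surface at the
# exact call site, then attempt the same kind of .cuda() transfer that fails for me in
# tasks/radnerfs/dataset_utils.py (self.bg_img_512 = convert_to_tensor(bg_img).cuda()).
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before the first CUDA call

import torch

x = torch.zeros(512, 512, 3)  # dummy stand-in for the background image tensor
x = x.cuda()                  # if the driver/toolkit mismatch is real, this raises too
print(x.device)
```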

jakeytan · Mar 19 '24