qian2020
According to the KinectFusion paper, raw depth images alone are sufficient, so I don't understand what the RGB images here are used for?


```python
import numpy as np
import torch
from tqdm import tqdm
from rouge import Rouge

rouge = Rouge()

def test_loop(dataloader, model):
    preds, labels = [], []
    model.eval()
    for batch_data in tqdm(dataloader):
        batch_data = batch_data.to(device)
        with torch.no_grad():
            generated_tokens...
```
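For context, ROUGE-L (what `rouge.get_scores` reports as `rouge-l`) is an LCS-based F-measure between a prediction and a reference. Below is a minimal, self-contained sketch of that computation using a hand-rolled longest-common-subsequence, not the `rouge` package itself; the tokenization (whitespace split) and the F1 weighting are simplifying assumptions:

```python
def lcs_len(a, b):
    # Classic O(len(a) * len(b)) dynamic-programming LCS over token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(pred, ref):
    # ROUGE-L precision/recall from LCS length, combined as plain F1.
    p_tok, r_tok = pred.split(), ref.split()
    lcs = lcs_len(p_tok, r_tok)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(p_tok), lcs / len(r_tok)
    return 2 * precision * recall / (precision + recall)

print(rouge_l_f1("the cat sat on the mat", "the cat is on the mat"))  # 0.8333...
```

The real `rouge` package differs in details (sentence splitting, a recall-weighted F with beta > 1), but the LCS core is the same idea.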
```
root@VM-11-20-ubuntu:/home/jerry/ChatGLM-Finetuning# deepspeed predict_pt.py --model_dir /home/jerry/ChatGLM-Finetuning/output_dir_pt_20/global_step-3600/
[2023-08-17 20:52:59,552] [WARNING] [runner.py:186:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-08-17 20:52:59,561] [INFO] [runner.py:548:main] cmd = /usr/bin/python3 -u -m...
```
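The `Unable to find hostfile` warning in the log above is harmless on a single machine: DeepSpeed only needs a hostfile for multi-node launches and otherwise falls back to local resources, as the message says. For reference, a multi-node hostfile is a plain-text file (looked up at `/job/hostfile` by default, or passed via `--hostfile`) with one line per node; the hostnames below are placeholders:

```
# one node per line: <hostname> slots=<number of GPUs on that node>
worker-1 slots=8
worker-2 slots=8
```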