Neural-Texture-Extraction-Distribution
Generates abnormal images during inference
I can generate correct images with demo.py, but I get the images below when I use inference.py.
There are also problems in the training stage, in Neural-Texture-Extraction-Distribution/util/visualization/linear_attention.py:
    def attn2image(source_softmax, seg_query, input_image):
        target_list = []
        image_size = input_image.shape[2:]
        for softmax, query in zip(source_softmax, seg_query):
            b, num_label, h, w = query.shape
            input_resize = F.interpolate(input_image, (h, w))
            input_resize = input_resize.view(b, -1, h*w)
            extracted = torch.einsum('bkm,bvm->bvk', softmax, input_resize)
            query = F.softmax(query.view(b, num_label, -1), 1)
            estimated_target = torch.einsum('bkm,bvk->bvm', query, extracted)
            estimated_target = estimated_target.view(b, -1, h, w)
            target_list.append(F.interpolate(estimated_target, image_size))
        target_gen = torch.cat(target_list, 3)
        return target_gen
The source_softmax is an empty list.
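As a stopgap for the crash itself, here is a minimal sketch (my own assumption, not a fix from the repository): attn2image calls torch.cat on an empty target_list when source_softmax is empty, which raises a RuntimeError, so the visualization could fall back to the source image until the missing attention maps are traced upstream.

```python
def attn2image_safe(source_softmax, seg_query, input_image):
    # Hypothetical guard: torch.cat() on the empty target_list raises a
    # RuntimeError when source_softmax is [], so skip the reconstruction
    # and just show the source image in the visualization for now.
    if not source_softmax:
        return input_image
    return attn2image(source_softmax, seg_query, input_image)
```

This only keeps the visualization from crashing; the underlying question of why the attention softmax maps are never collected still needs to be answered.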
Hi! Was the checkpoint loaded successfully during inference? Can you provide the messages printed when you run inference.py?
Hi, I am using the original code. It looks like inference did not load the model successfully; there was no error message, and it directly generated the images above.
It looks like you ran inference.py without loading a trained model. You may need to load the checkpoint by adding --which_iter, for example:
python -m torch.distributed.launch \
--nproc_per_node=1 \
--master_port 12345 inference.py \
--config ./config/fashion_512.yaml \
--name fashion_512 \
--no_resume \
--output_dir ./result/fashion_512/inference \
--which_iter 495400
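If you are not sure which iteration numbers are available, here is a quick sketch to list the saved checkpoints before choosing --which_iter (an assumption on my part: that checkpoints are stored as .pt files under ./result/fashion_512; adjust the directory to your own --checkpoints_dir and --name):

```python
import glob
import os

# List checkpoint files so a valid --which_iter can be picked.
# The directory below is an assumption; point it at your own output folder.
ckpt_dir = "./result/fashion_512"
ckpts = sorted(glob.glob(os.path.join(ckpt_dir, "*.pt")))
if not ckpts:
    print(f"No checkpoints found in {ckpt_dir}; inference would start from untrained weights.")
for path in ckpts:
    print(os.path.basename(path))
```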
Hello, have you solved this problem?
I hit the same problem in the training stage: the source_softmax passed to attn2image in Neural-Texture-Extraction-Distribution/util/visualization/linear_attention.py (quoted above) is an empty list. Have you solved this issue?
Hi @Blackkinggg, how did you generate the pose keypoints?