DavideHe
You implement another channel, but I think it is wrong, or maybe I don't understand it. At [L104 you add a new channel with r --> radius](https://github.com/mkocabas/CoordConv-pytorch/blob/master/CoordConv.py#L104), but this xx_channel is not in the range 0-1; so...
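For reference, a minimal sketch (not the repo's code) of what I mean: if the coordinate channels are kept in [0, 1], then r = sqrt((xx - 0.5)^2 + (yy - 0.5)^2) measures distance from the image center.

```python
import torch

def coord_channels(batch, height, width):
    # Sketch, assuming coordinate channels normalized to [0, 1]:
    # the radius channel then measures distance from the image center.
    yy = torch.linspace(0, 1, height).view(1, 1, height, 1).expand(batch, 1, height, width)
    xx = torch.linspace(0, 1, width).view(1, 1, 1, width).expand(batch, 1, height, width)
    rr = torch.sqrt((xx - 0.5) ** 2 + (yy - 0.5) ** 2)
    return torch.cat([xx, yy, rr], dim=1)

channels = coord_channels(2, 8, 8)
print(channels.shape)  # torch.Size([2, 3, 8, 8])
```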
Code at https://github.com/howardyclo/clcc-cvpr21/blob/master/util/raw2raw.py#L92:
```
M_AC = (1 - w[:,None,None]) * I + w[:,None,None] * M_AB
M_BC = (1 - w[:,None,None]) * I + w[:,None,None] * M_BA
```
if Ill_c be...
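To illustrate the broadcast in the snippet above, here is a minimal standalone sketch (shapes are assumed, not the repo's code): each sample's 3x3 matrix is blended toward the identity with weight w, so w = 0 keeps the identity and w = 1 applies the full transform.

```python
import numpy as np

batch = 4
w = np.random.rand(batch)                  # assumed: one blend weight per sample
I = np.eye(3)[None].repeat(batch, axis=0)  # (batch, 3, 3) identity matrices
M_AB = np.random.rand(batch, 3, 3)         # assumed: per-sample 3x3 color transform

M_AC = (1 - w[:, None, None]) * I + w[:, None, None] * M_AB
print(M_AC.shape)  # (4, 3, 3)
```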
The only loss defined for training is the angular loss; there is no other constraint on the mask. Is this sufficient for the confidence? I ran some raw images with the cwp file...
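For reference, a minimal sketch of the angular loss I am referring to (the standard angular error between predicted and ground-truth illuminant vectors; not necessarily the repo's exact implementation):

```python
import torch

def angular_loss(pred_illum, gt_illum, eps=1e-7):
    # Standard angular error, in degrees, between two illuminant vectors.
    pred = torch.nn.functional.normalize(pred_illum, dim=-1)
    gt = torch.nn.functional.normalize(gt_illum, dim=-1)
    cos = (pred * gt).sum(dim=-1).clamp(-1 + eps, 1 - eps)
    return torch.rad2deg(torch.acos(cos)).mean()

print(angular_loss(torch.tensor([[1.0, 1.0, 1.0]]), torch.tensor([[1.0, 0.9, 1.1]])))
```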
In api/data/export_kitti_depth_benchmark.py, some syntax is not supported in Python 3.6, and numpy should be updated to 1.21.x, but Python 3.6 does not support that. Maybe there are other errors caused by the library or Python versions, so...
In the code at https://github.com/huawei-noah/Efficient-AI-Backbones/blob/master/ghostnet_pytorch/validate.py#L61, the validation data is preprocessed with transforms.Resize(256) followed by transforms.CenterCrop(224). First, this test protocol is not easy to align with, because the test images it produces are different each time; I usually resize the original image directly to 224, then normalize, and feed that into the network for testing. With that preprocessing, testing GhostNet 1.0 loaded with the pretrained weights you provide (state_dict_73.98.pth) gives top1: 0.7082 and top5: 0.8979, while the paper reports top1: 73.9 and top5: 91.4.
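For clarity, the two preprocessing pipelines being compared look roughly like this (a sketch; the ImageNet normalization constants are assumed):

```python
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Pipeline in validate.py: resize the short side to 256, then center-crop 224.
resize_then_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

# Pipeline I use: resize the whole image directly to 224x224.
direct_resize = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    normalize,
])
```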
mobilenetv2 1.0 224
- epochs: 200
- batch size: 512
- lr-decay: cos
- lr: 0.2
- wd: 4e-5
- warm up: 0-0.4, first 5 epochs
- dropout: 0.2
- top1: 0.685

but in your...
As [run_streaming_llama.py#L61](https://github.com/mit-han-lab/streaming-llm/blob/main/examples/run_streaming_llama.py#L61) shows, prompts must be sent to the model one by one, which makes GPU usage time high. Is there a way to run prompts in parallel?
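For example, a generic batched-generation sketch with HuggingFace transformers (not streaming-llm's API; the model name is only a placeholder): several prompts are left-padded and generated in a single batch instead of one by one.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # placeholder model name
tokenizer = AutoTokenizer.from_pretrained(name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # pad on the left so generation continues from the prompt end
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

prompts = ["What is KV cache?", "Explain attention sinks in one sentence."]
batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
out = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```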
### System Info

```Shell
- `Accelerate` version: 0.28.0
- Platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Numpy version: 1.21.0
- PyTorch version (GPU?): 2.2.0+cu118 (True)
- PyTorch XPU available: False...
```

Python code:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B", trust_remote_code=True)
text = "What is an activated complex?\n\n\n"
input_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(tokens)
## result ## ['', '', 'What', 'Ġis', 'Ġan', 'Ġactivated', 'Ġcomplex',...
```