ztfmars
I ran into the same problem; did you ever solve it? After debugging, I found that `num_boxes = np.sum(np.where((np.minimum(tmpw, tmph) >= delete_size), ones, zeros))` returns `num_boxes = 0` here, which causes the error above. This seems to be an issue mainly hit by people running on Windows 10. It can be fixed in `detect_rnet(self, im, dets)` in `MtcnnDetector.py`: just add `if num_boxes == 0: return None, None, None` right below the line above (see the sketch below). Hope this helps anyone who hits it later.
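A minimal sketch of that patch, assuming the surrounding names (`tmpw`, `tmph`, `delete_size`, `ones`, `zeros`) already exist in `detect_rnet` as in the original file:

```
import numpy as np

# Inside MtcnnDetector.detect_rnet(self, im, dets), right after num_boxes is computed:
num_boxes = np.sum(np.where((np.minimum(tmpw, tmph) >= delete_size), ones, zeros))

# Guard added by the fix above: if every candidate box was filtered out,
# return early instead of indexing into empty arrays further down.
if num_boxes == 0:
    return None, None, None
```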
> Has this PR already resolved this problem?

```
There's 2 ops are not supported yet
=========== fake_channel_wise_quantize_dequantize_abs_max ===========
=========== fake_quantize_dequantize_moving_average_abs_max ===========
```
> Just turn that plugin off; comment out everything related to it.

Where exactly do I turn it off? And did you get the problem resolved?
+1. I only see multi-GPU deployment documented; there is nothing on parallel training.
> @ztfmars Hi
>
> This issue is caused by a mismatch between the versions of transformers and peft.
>
> This PR https://github.com/huggingface/peft/pull/1368/files adds layer_replication support to LoraConfig, so...
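For reference, a minimal sketch of what the `layer_replication` field looks like on a `LoraConfig` once a peft release that includes that PR is installed; the rank, target modules, and layer ranges below are illustrative placeholders, not values from this issue:

```
from peft import LoraConfig

# Illustrative only: layer_replication takes (start, end) layer ranges that are
# stacked to build a larger model before the LoRA adapters are attached.
lora_config = LoraConfig(
    r=16,                                  # placeholder rank
    lora_alpha=32,                         # placeholder scaling
    target_modules=["q_proj", "v_proj"],   # placeholder target modules
    layer_replication=[(0, 16), (8, 24)],  # placeholder layer ranges
)
```

An older peft will not recognize the `layer_replication` argument at all, which matches the version-mismatch explanation quoted above.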
@Jason8Kang hi, the official xtuner/llava-llama3 can be used with lmdeploy and works well. I hit the same question about converting llava-llama3 to the HuggingFace format and just followed your changed code. This...
(1) I have fixed the file `examples/utils.py` as follows:
```
device_map = {
    'vit': 0,
    'vision_proj': 0,
    'model.tok_embeddings': 0,
    'plora_glb_GN': num_gpus - 1,
    'plora_sub_GN': num_gpus - 1,
    'model.norm': num_gpus - 1,
    ...
```
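For context, a hypothetical sketch of how such a map is usually completed: the remaining decoder layers are spread across the GPUs in a loop, and the resulting dict is then passed as `device_map=` to `from_pretrained`. The function name, the layer count, and the `output` key below are assumptions, not code from the repo:

```
def build_device_map(num_gpus: int, num_layers: int = 32) -> dict:
    # Pin the vision tower, projector and embeddings to GPU 0, the norm and the
    # extra PLoRA tensors to the last GPU, and spread the decoder layers evenly.
    device_map = {
        'vit': 0,
        'vision_proj': 0,
        'model.tok_embeddings': 0,
        'plora_glb_GN': num_gpus - 1,
        'plora_sub_GN': num_gpus - 1,
        'model.norm': num_gpus - 1,
        'output': num_gpus - 1,  # assumed name of the LM head in this checkpoint
    }
    per_gpu = (num_layers + num_gpus - 1) // num_gpus
    for i in range(num_layers):
        device_map[f'model.layers.{i}'] = i // per_gpu
    return device_map
```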
> build_mlp.py
>
> openai/clip-vit-large-patch14-336 -> AI-ModelScope/clip-vit-large-patch14-336

Thanks, it works after fixing the download URL in `~/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-4khd-7b/build_mlp.py`.
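If the same download failure shows up, one way to sanity-check the replacement id before editing `build_mlp.py` is to resolve it through ModelScope directly; a small sketch, assuming the modelscope package is installed (the variable name is mine, not from the repo):

```
from modelscope import snapshot_download

# Resolve the ModelScope mirror of the CLIP vision tower; the returned local path
# (or the id 'AI-ModelScope/clip-vit-large-patch14-336' itself) is what replaces
# the hard-coded 'openai/clip-vit-large-patch14-336' string in build_mlp.py.
local_dir = snapshot_download('AI-ModelScope/clip-vit-large-patch14-336')
print(local_dir)
```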
It works, thanks! The Chinese README may need a fix regarding the version: https://github.com/InternLM/InternLM-XComposer/blob/main/docs/install_CN.md