ztfmars

12 issues opened by ztfmars

origin: **confidence_loss = object_mask * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True) + (1-object_mask) * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True) * ignore_mask** I think it means the different cross-entropy values for labels = 1 and...
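A NumPy sketch of what that one-liner computes, under the usual YOLOv3 reading: `object_mask` plays the role of the binary label, positive anchors always contribute their BCE term, and negative anchors contribute only where `ignore_mask` is 1 (the function and variable names below mirror the snippet; the example values are made up):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_crossentropy_from_logits(labels, logits):
    # Element-wise BCE on raw logits, matching the intent of
    # K.binary_crossentropy(..., from_logits=True).
    p = sigmoid(logits)
    return -(labels * np.log(p) + (1 - labels) * np.log(1 - p))

def confidence_loss(object_mask, raw_logits, ignore_mask):
    bce = binary_crossentropy_from_logits(object_mask, raw_logits)
    # Positive anchors (object_mask == 1) always contribute;
    # negative anchors contribute only where ignore_mask == 1,
    # i.e. their IoU with every ground-truth box is below the threshold.
    return object_mask * bce + (1 - object_mask) * bce * ignore_mask

object_mask = np.array([1.0, 0.0, 0.0])   # anchor 0 is a positive
raw_logits  = np.array([2.0, -1.0, 3.0])  # raw_pred[..., 4:5] equivalent
ignore_mask = np.array([1.0, 1.0, 0.0])   # anchor 2 is ignored entirely
loss = confidence_loss(object_mask, raw_logits, ignore_mask)
```

So the two BCE calls are identical; the masks just route each anchor into exactly one of the "label = 1", "label = 0", or "ignored" cases.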

**Please fill in the information below so we can resolve the issue quickly, thanks!** **Problem description** (as per the title) **Error screenshot** λ b8dcbef621aa /workspace/PaddleSeg {release/2.3} paddle2onnx --model_dir output_quant_infer \ > --model_filename model.pdmodel \ > --params_filename model.pdiparams \ > --opset_version 11 \ > --save_file output_onnx/deeplabv3_int.onnx grep: warning: GREP_OPTIONS...

Operator(New)

### board type: Maix Go, ### os version: maixpy_v0.5.0_95_g05c4e70_with_lvgl, ### error info: shown below, even after restarting many times ![XJ`JHYSZCVJ6S%46L0BW@~N](https://user-images.githubusercontent.com/35566228/85123023-12c62e00-b25a-11ea-9999-971d63ad911f.jpg) I just used the official demo from your [github link](https://github.com/sipeed/MaixPy_scripts/blob/master/network/demo_http.py), but it can't run...

### ./train/models.py line 166 ![image](https://user-images.githubusercontent.com/35566228/79728052-e5bfd280-831f-11ea-974d-101342b035ed.png) When computing the face classification loss, the code uses ### P1 = y\*log(p(y)) (which is normally used for multi-class problems), but the paper says this is a binary classification problem, which should use ### P2 = y\*log(p(y)) + (1-y)\*log(1-p(y)). These two cross-entropy functions are not the same, so they may behave differently during optimization. For example, with p(y) = 0.6: when y = 1 both give log(0.6), but when y = 0, P1 contributes nothing while P2 = log(0.4). (The effect on optimization is not fully established; results remain to be verified.) What do you think?
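The difference the issue points at can be checked numerically. Below is a small sketch (the helper names `positive_only_ce` and `binary_ce` are mine, standing in for the P1 and P2 formulas above): for a positive sample the two agree, but for a negative sample P1 is identically zero, so it provides no penalty or gradient, while P2 still does.

```python
import numpy as np

def positive_only_ce(y, p):
    # P1: only the positive term, as in the multi-class form.
    return y * np.log(p)

def binary_ce(y, p):
    # P2: full binary cross-entropy (log-likelihood form).
    return y * np.log(p) + (1 - y) * np.log(1 - p)

p = 0.6
# Positive sample (y = 1): the two losses agree, both log(0.6).
same = positive_only_ce(1, p) == binary_ce(1, p)
# Negative sample (y = 0): P1 is exactly 0 (no gradient),
# while P2 = log(1 - p) still penalizes the wrong confidence.
p1_neg = positive_only_ce(0, p)
p2_neg = binary_ce(0, p)
```

This is why, for a two-class problem with a single sigmoid output, dropping the `(1-y)*log(1-p)` term effectively ignores all negative samples.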

I didn't see the SSA & SCA modules in your code. Maybe you just added a generic attention module at the end of the backbone (ResNet-101)? That's totally different from...

- description: I followed the tutorial for Llama3 fine-tuning: [https://github.com/SmartFlowAI/Llama3-Tutorial/blob/main/docs/llava.md](https://github.com/SmartFlowAI/Llama3-Tutorial/blob/main/docs/llava.md). I used `llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_lora_e1_finetune.py` to fine-tune on my dataset, hoping to get a llava-llama3-8b multimodal model on my data. After...

- cmd: `xtuner chat LLM-Research/Meta-Llama-3-8B-Instruct \ --visual-encoder ./clip-vit-large-patch14-336 \ --llava ./LLM-Research/llava-llama-3-8b \ --prompt-template llama3_chat \ --image ./test001.png` - question: the trained multimodal model can only take **one image** as input at a time...

1. **support plan** When will the version supporting llava-llama3-70b be released? Meanwhile, will you consider supporting unofficial variants, e.g. an LLM such as llama3-120b? huggingface link: [mlabonne/Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct) 2....

I used modelscope to download the model weights. The test demo for 'Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b' works fine, but 'Shanghai_AI_Laboratory/internlm-xcomposer2-4khd-7b' fails with an HTTP error. **code**...

I have 4 RTX 3090 GPUs and installed the environment per the instructions. I used modelscope to download the pretrained weights, but running `example/example_chat.py` fails. The code...