Jason

Results: 6 issues by Jason

I followed the demo, but I get an error: Type "help", "copyright", "credits" or "license" for more information. >>> from __future__ import print_function, division, unicode_literals >>> from psy import Irt, data...

[Enter steps to reproduce below:]
1. ...
2. ...

**Atom Version**: 1.11.2
**Electron Version**: 0.37.8
**System**: Unknown Windows Version
**Thrown From**: [qiniu-uploader](https://github.com/knightli/qiniu-uploader) package, v0.0.3

### Stack Trace

Uncaught EvalError: Refused...

In the project https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336, there is an example of how to convert a llava-llama3 model to HF format: `python ./convert_xtuner_weights_to_hf.py --text_model_id ./iter_39620_xtuner --vision_model_id ./iter_39620_visual_encoder --projector_weight ./iter_39620_xtuner/projector/model.safetensors --save_path ./iter_39620_llava`. I followed it...
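As a quick sanity check after running the script, a checkpoint written in standard HF LLaVA layout should load with plain transformers. The sketch below is an assumption that the conversion completed and wrote to `./iter_39620_llava`; the test image and prompt are illustrative, not from the issue:

```python
# Minimal sketch: load the converted checkpoint with plain transformers.
# Assumes ./iter_39620_llava is a standard HF LLaVA checkpoint produced by
# convert_xtuner_weights_to_hf.py; image path and prompt are illustrative only.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained("./iter_39620_llava")
processor = AutoProcessor.from_pretrained("./iter_39620_llava")

image = Image.open("example.jpg")  # any local test image
prompt = "<image>\nDescribe this picture."
inputs = processor(images=image, text=prompt, return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

If this loads and generates without shape or vocabulary errors, the weight conversion itself is likely fine and any remaining issue is in the serving/prompting side.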

As shown in the figure, there were originally 30k+ samples, but in the end only about 4k remain. How can I locate the cause of this problem? The script is as follows: `rm -rf llama3_finetune_pth/* output_dir=llama3_finetune_pth config_py=xtuner/configs/llama/llama3_8b_instruct/llama3_8b_instruct_qlora_alpaca_e3.py CUDA_VISIBLE_DEVICES=0,1 NPROC_PER_NODE=2 xtuner train ${config_py} --work-dir ${output_dir} --deepspeed deepspeed_zero2 --seed 1024`

### 📚 The doc issue

Thanks for your work. Following the official documentation at https://github.com/InternLM/lmdeploy/blob/main/docs/zh_cn/multi_modal/vl_pipeline.md, I can get it running. The code is as follows:

```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

from lmdeploy import pipeline, ChatTemplateConfig
from lmdeploy.vl import load_image

pipe = pipeline('**', chat_template_config=ChatTemplateConfig(model_name='llama3'))
image = load_image('**')
response =...
```

I'd like to replace the ViT vision encoder (openai/clip-vit-large-patch14-336) with Swin Transformer V2 (microsoft/swinv2-base-patch4-window8-256). I modified the code in the config file, replacing 'from transformers import CLIPImageProcessor, CLIPVisionModel' with 'from transformers...
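For reference, a minimal sketch of the kind of swap being described, using the transformers classes for Swin V2. How these are wired into the xtuner LLaVA config is an assumption on my part, not the project's actual code; only the class names and checkpoint ID come from the issue or the transformers library:

```python
# Hypothetical sketch of swapping the CLIP vision tower for Swin V2.
# AutoImageProcessor / Swinv2Model are real transformers classes; their wiring
# into the xtuner config below is assumed, not taken from the repository.
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swinv2Model

swin_name = "microsoft/swinv2-base-patch4-window8-256"
image_processor = AutoImageProcessor.from_pretrained(swin_name)
vision_model = Swinv2Model.from_pretrained(swin_name)

# Swin V2 base ends with hidden size 1024 (same as CLIP ViT-L/14), but it
# produces a different number of patch tokens and no CLS token, so the
# projector's expected input shape needs to be checked against this output.
pixel_values = image_processor(
    images=Image.new("RGB", (256, 256)),  # dummy image just to probe shapes
    return_tensors="pt",
).pixel_values
with torch.no_grad():
    feats = vision_model(pixel_values).last_hidden_state
print(feats.shape)  # e.g. (1, 64, 1024) for a 256x256 input
```

The shape probe at the end is the main point: if the projector was built for CLIP's token count and feature layout, it will need to be adapted to whatever this prints.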