Longleaves

Results: 7 issues by Longleaves

Hi, I found that doccano currently only supports annotating text classification tasks with a single label per document. Is there any way to do multi-label text classification?
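For reference, a minimal sketch of the JSONL import format that multi-label text classification projects typically accept. The `label` field carrying a list follows doccano's documented JSONL import format, but the example labels are made up and the exact field name should be verified against the docs for your doccano version:

```python
import json

# Each record carries a LIST of labels rather than a single one
# (hypothetical labels; verify the "label" field name for your version).
records = [
    {"text": "The match ended in a draw.", "label": ["sports"]},
    {"text": "Senators debated the sports funding bill.", "label": ["sports", "politics"]},
]

# One JSON object per line, ready to import as a .jsonl file.
jsonl = "\n".join(json.dumps(r, ensure_ascii=False) for r in records)
print(jsonl)
```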

The command I ran is: `python image_demo.py configs/pretrain/yolo_world_s_dual_vlpan_l2norm_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py models/yolo_world_seg_m_dual_vlpan_2e-4_80e_8gpus_allmodules_finetune_lvis-ca465825.pth data/images/ 'person,dog,cat' --topk 100 --threshold 0.005 --output-dir demo_outputs/` The error is: ModuleNotFoundError: No module named 'mmcv._ext' ImportError: Failed to import yolo_world You should set `PYTHONPATH` to make...
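`No module named 'mmcv._ext'` usually means the installed mmcv wheel was built without the compiled C++/CUDA ops (or was built against a different torch/CUDA version). A stdlib-only diagnostic sketch, not YOLO-World's official tooling:

```python
import importlib


def module_available(name: str) -> bool:
    """Return True if `name` (e.g. 'mmcv._ext') can be imported."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:  # covers ModuleNotFoundError too
        return False


# If this prints False, reinstall mmcv with a wheel that matches your
# exact torch/CUDA versions (the openmim installer handles the matching).
print(module_available("mmcv._ext"))
```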

I have already set up CHECKPOINTS_PATH and the cards, so why does it always download the tokenizer of seamlessM4T_v2_large when I run `python app.py`? Please help, thanks. ![image](https://github.com/facebookresearch/seamless_communication/assets/58903935/93968114-0e9a-4162-bd04-8123f786c1c2) ![image](https://github.com/facebookresearch/seamless_communication/assets/58903935/51d2c752-8d39-4562-8eb6-1c931941b239) ![image](https://github.com/facebookresearch/seamless_communication/assets/58903935/fdd38e99-c40c-4d65-a3c8-b8f8488d5006)

Hi, may I ask where I can set the maximum input length for translation? Also, can the model detect the language automatically, without setting the target language? Thanks!
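Until an official max-length setting is confirmed, a common workaround is to split long inputs into chunks before translating and then join the translated pieces. A generic word-count chunker, purely illustrative (the limit and the whitespace splitting rule are assumptions; sentence-boundary splitting would preserve meaning better):

```python
def chunk_text(text: str, max_words: int = 200) -> list[str]:
    """Split text into consecutive chunks of at most max_words words."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]


chunks = chunk_text("one two three four five", max_words=2)
print(chunks)  # ['one two', 'three four', 'five']
```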

- System Environment: Linux
- Version: Paddle-2.7
- Command: this runs normally: `paddleocr --image_dir ./imgs/11.jpg --use_angle_cls true --use_gpu false`, but after changing it to `paddleocr --image_dir ./imgs/11.jpg --use_angle_cls true` or `paddleocr --image_dir ./imgs/11.jpg --use_angle_cls true --use_gpu true`...
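One thing worth checking before passing `--use_gpu true`: whether the installed wheel is the GPU build (`paddlepaddle-gpu`) at all, since the CPU-only `paddlepaddle` wheel cannot use CUDA no matter what flag is passed. A small sketch, assuming the standard `paddle.device.is_compiled_with_cuda()` API:

```python
def paddle_gpu_available() -> bool:
    """True only if paddle is installed AND was compiled with CUDA."""
    try:
        import paddle  # third-party; absent -> GPU clearly unavailable
    except ImportError:
        return False
    # Reports how the installed wheel was built, not whether a GPU exists.
    return bool(paddle.device.is_compiled_with_cuda())


print(paddle_gpu_available())
```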

documentation

After running `python startup.py -a`, I found that when several people query the AI model at the same time, the output stutters and is slow. I am running a 13B model. Hardware utilization: 1. GPU usage is less than half (two 48 GB cards, each using under 20 GB). 2. CPU usage is only about two cores (220%) out of 16; how can I make use of the remaining cores? I would like to ask how to serve the AI API (port 7861, I believe) with multiple processes or threads. Should I modify the `uvicorn.run(app, host=host, port=port, log_level=log_level.lower())` call inside `run_model_worker` in startup.py? I changed the following parts of startup.py: `args.gpus = "0,1"` (GPU ids; with more GPUs this could be "0,1,2,3"), `args.max_gpu_memory = "40GiB"`, `args.num_gpus = 2` (the model worker splits the model across this many GPUs). I am still learning, so any advice is sincerely appreciated, thanks!
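On the multi-process part of the question: uvicorn can spawn several worker processes itself, but only when the app is passed as a `"module:attribute"` import string rather than an object. A sketch under that assumption; `server.api:app` is a hypothetical import path to replace with wherever the FastAPI app actually lives (note that extra web workers only parallelize the HTTP layer, not GPU-bound model inference):

```python
def serve(app_path: str = "server.api:app",  # hypothetical import path
          host: str = "0.0.0.0",
          port: int = 7861,
          workers: int = 4) -> None:
    """Run the API with several uvicorn worker processes."""
    import uvicorn  # third-party; imported lazily so this sketch stays importable

    # workers > 1 requires the import-string form of the app, not an object.
    uvicorn.run(app_path, host=host, port=port, workers=workers)
```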

bug

I would like to ask: which port is the /chat/chat endpoint used for conversations actually tied to? /chat/chat is defined as: `app.post("/chat/chat", tags=["Chat"], summary="与llm模型对话(通过LLMChain)", )(chat)` (the summary means "Chat with the LLM model (via LLMChain)"). Is it tied to port 20002, started by `run_model_worker` in startup.py? How is the /chat/chat endpoint wired to it? Could someone point me to the relevant code? Many thanks!
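To make the question concrete, here is a guess at the typical wiring: the API server's /chat/chat handler builds an OpenAI-style chat request and forwards it to the model worker (port 20002 in startup.py). Everything below is an assumption about that protocol, not this project's confirmed internals; the URL path and model name are illustrative:

```python
import json
import urllib.request


def build_chat_payload(prompt: str, model: str = "chatglm2-6b") -> dict:
    # OpenAI-compatible chat format (assumed; check the worker's actual API).
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def call_model_worker(prompt: str,
                      url: str = "http://127.0.0.1:20002/v1/chat/completions"):
    """Forward a prompt to a (hypothetical) OpenAI-compatible model worker."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires the worker running
        return json.load(resp)
```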

bug