xiangxinhello

Showing 11 issues by xiangxinhello

How do I set the parameters and prompt so that only the person changes while the background stays the same? I changed the character in the prompt, but the result was...

Looking forward to your reply, thank you.

![2024-05-22 12-31-20屏幕截图](https://github.com/Zj-BinXia/SSL/assets/169245314/128d7a4c-5a77-4256-b078-70d9d73a13d1) Changing this scale has no effect.

### System Info
A100-PCIe-40GB, TensorRT-LLM version: 0.11.0
### Who can help?
@Tracin
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [...

bug

### System Info
NVIDIA A100 PCIe 40 GB, TensorRT-LLM version: 0.12.0.dev2024070200

```
python convert_checkpoint.py --model_dir ./tmp/Qwen/7B/ --output_dir ./tllm_checkpoint_1gpu_fp16 --dtype float16
trtllm-build --checkpoint_dir ./tllm_checkpoint_1gpu_fp16 --output_dir ./tmp/qwen/7B/trt_engines/fp16/1-gpu --gemm_plugin float16 --max_batch_size 1 --max_input_len 1 --max_seq_len...
```

question

Excuse me, how can we solve these problems? Thanks! How do I create the flow_root referenced in the configs file?
https://github.com/sczhou/ProPainter/assets/169245314/cac64fb8-5b19-4ea5-97d4-135cf75731d5
https://github.com/sczhou/ProPainter/assets/169245314/14e34c5e-6812-47ef-89d4-9b90095ff8a6

How should the parameters be set for 1.5-7b-chat? My requirement is: 1. For the same input question, the answer must be identical every time.
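For identical answers on identical inputs, the usual approach is greedy decoding: set the sampling temperature to 0 (in some frameworks this is expressed as `do_sample=False` or `top_k=1`) and fix any random seed. The exact parameter names depend on the serving framework, so treat them as assumptions to verify against its documentation. A framework-agnostic toy sketch of why temperature 0 removes run-to-run variation:

```python
import math
import random

def pick_token(logits, temperature, rng):
    """Toy next-token selection.
    temperature == 0 -> greedy argmax (fully deterministic);
    temperature > 0  -> softmax sampling (can vary between runs)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # softmax numerator, shifted for stability
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [1.0, 3.0, 2.0]

# Greedy: the same token every time, regardless of seed.
greedy = [pick_token(logits, 0, random.Random(seed)) for seed in range(5)]
print(greedy)  # [1, 1, 1, 1, 1]

# Sampling at temperature 1.0 can pick different tokens across runs.
sampled = [pick_token(logits, 1.0, random.Random(seed)) for seed in range(5)]
print(sampled)
```

Note that greedy decoding pins down the sampler only; nondeterministic GPU kernels or changing batch composition can still cause small output differences in real deployments.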

inactive

### Your current environment

```
from PIL import Image
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info

MODEL_PATH = '/workspace/mnt/storage/trt-llama/Qwen2-VL-7B-Instruct'
IMAGE_PATH = '/workspace/mnt/storage/llm_storge/vllm/examples/demo.jpeg'
llm = LLM(...
```

bug

I've noticed that the GPU utilization is very low during model inference, with a maximum of only 80%, but I want to increase the GPU utilization to 99%. How can...
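Low GPU utilization during inference usually means the GPU is sitting idle between kernel launches: batch size 1, synchronous request handling, or host-side preprocessing dominating the loop. Increasing the batch size or the number of concurrent in-flight requests amortizes the fixed per-call overhead; the exact knobs depend on the serving stack, so the counter below is only a framework-agnostic toy illustrating why batching raises utilization:

```python
calls = {"count": 0}

def model_forward(batch):
    """Stand-in for one GPU inference call: each call pays a fixed
    launch/transfer overhead regardless of how many items it carries."""
    calls["count"] += 1
    return [x * 2 for x in batch]

items = list(range(32))

# Unbatched: one call per item -> 32x the fixed overhead, GPU mostly idle.
unbatched = [model_forward([x])[0] for x in items]
calls_unbatched = calls["count"]

# Batched: a single call carries all 32 items -> overhead paid once.
calls["count"] = 0
batched = model_forward(items)
calls_batched = calls["count"]

print(calls_unbatched, calls_batched)  # 32 1
```

The same results are produced either way; only the number of overhead-bearing calls differs, which is why serving frameworks favor large or dynamically merged batches.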