HuXinjing
> Hi, we found that you were using Qwen(1.0) code to finetune Qwen1.5 models, which is incompatible. To finetune Qwen1.5 models, please refer to the README. Hello, is it okay to add special tokens directly? It looks like LLaMA-Factory adds special tokens to the tokenizer object. hello, Is...
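Adding special tokens through the tokenizer object (as LLaMA-Factory appears to do) matters because the vocab, the token ids, and the model's embedding table have to stay in sync. A toy sketch of that invariant (this is an illustration, not the real Hugging Face internals; `ToyTokenizer` is made up for the example):

```python
# Toy illustration of why special tokens go through the tokenizer
# object instead of being spliced into the vocab file by hand: the
# add method assigns fresh ids and reports how many tokens were
# added, so the model's embedding table can be resized to match.

class ToyTokenizer:
    def __init__(self, vocab):
        self.vocab = {tok: i for i, tok in enumerate(vocab)}
        self.special_tokens = set()

    def add_special_tokens(self, tokens):
        """Register tokens as special, assigning fresh ids; returns the
        number of tokens actually added (mirrors the shape of the
        transformers API)."""
        added = 0
        for tok in tokens:
            if tok not in self.vocab:
                self.vocab[tok] = len(self.vocab)
                added += 1
            self.special_tokens.add(tok)
        return added

    def __len__(self):
        return len(self.vocab)

tok = ToyTokenizer(["hello", "world"])
added = tok.add_special_tokens(["<|sep|>", "<|pad|>"])
# After adding, the embedding table must be resized to len(tok) --
# in transformers this is model.resize_token_embeddings(len(tokenizer)).
```

The key point is the return value: real tokenizers report how many tokens were actually new, so you only grow the embedding matrix by that amount.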
Have there been any developments on the "_official FoT large scale continual pre-training (FoT finetuning) code_"?
I'm running into the same problem. Did you manage to solve it by configuring the environment?
That is to say, "twin" here means it is the sibling of OLMo-7b? I misunderstood it.
It seems I misunderstood the CUDA version maga_transformer needs: my nvcc version is CUDA 11.8, but the runtime is 12.2. So, will maga_transformer-0.1.9+cuda118-cp310-cp310-manylinux1_x86_64.whl become available?
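The `+cuda118` part of that wheel name is a local-version tag encoding the CUDA toolkit the wheel was built against. A small sketch of decoding it (this assumes the last digit is the minor version, e.g. `cuda118` → 11.8 and `cuda122` → 12.2, which matches the wheels discussed here but is not a general guarantee):

```python
import re

def wheel_cuda_version(wheel_name):
    """Extract the CUDA toolkit version from a wheel's local-version
    tag, e.g. '+cuda118' -> '11.8'. Returns None if no tag is found.
    Assumes the final digit is the minor version."""
    m = re.search(r"\+cuda(\d+)", wheel_name)
    if not m:
        return None
    digits = m.group(1)
    return f"{digits[:-1]}.{digits[-1]}"

# Example with the wheel mentioned above:
wheel_cuda_version("maga_transformer-0.1.9+cuda118-cp310-cp310-manylinux1_x86_64.whl")
```

The wheel's CUDA tag should match the toolkit the extension was compiled against (roughly what `nvcc --version` reports), not necessarily the driver's runtime version shown by `nvidia-smi`.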
I have tried both the cuda11 and cuda12 images in Docker, but `sudo sh ./create_container.sh rtp registry.cn-hangzhou.aliyuncs.com/havenask/rtp_llm:deploy_image_cuda12` (or 11) gave me `docker: Error response from daemon: could not select device driver...`
But I got:

```
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/dist-packages/maga_transformer/start_server.py", line 82, ...
```
But I got another one!

```
[root][07/21/2024 08:51:36][start_server.py:local_rank_start():34][ERROR] start server error: module 'torch' has no attribute 'uint32', trace:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
    result ...
```
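`module 'torch' has no attribute 'uint32'` usually means the installed PyTorch predates the unsigned dtypes (`torch.uint16/uint32/uint64`), which, if I recall correctly, landed in PyTorch 2.3. A hedged pure-Python version gate (no torch import needed; the 2.3 cutoff is an assumption worth double-checking against your build):

```python
def supports_uint32(torch_version):
    """Heuristic check: torch.uint32 is assumed to exist from
    PyTorch 2.3 onward (an assumption, not verified against every
    build). Accepts version strings like '2.1.2' or '2.3.0+cu118'."""
    major, minor = (int(x) for x in torch_version.split(".")[:2])
    return (major, minor) >= (2, 3)
```

At runtime, `hasattr(torch, "uint32")` is the direct test; the string check above is only useful when deciding which torch to install in the image.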