xxll88

Results: 17 issues from xxll88

The local knowledge base loads successfully the first time, but after asking a question it can no longer be reloaded. The log shows: ERROR 2023-05-16 07:06:58,642-1d: 'ascii' codec can't encode characters in position 14-21: ordinal not in range(128)
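The logged error is Python's ASCII codec failing on non-ASCII (e.g. Chinese) characters. A minimal sketch reproducing it, with the usual fix of encoding explicitly as UTF-8 (the string here is illustrative, not the project's actual data):

```python
text = "本地知识库"  # any non-ASCII string triggers the error

# Reproduce: the ascii codec cannot represent code points above 127
try:
    text.encode("ascii")
except UnicodeEncodeError as e:
    print(e)  # 'ascii' codec can't encode characters ...: ordinal not in range(128)

# Fix: encode/decode explicitly as UTF-8 instead of relying on a default codec
data = text.encode("utf-8")
assert data.decode("utf-8") == text
```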

bug

**功能描述 / Feature Description** Describe the desired feature in a clear and concise manner. 2. The IP address 8.8.8.8 is reduced to a single "8" after tokenization. 3. Make PDF processing selectable between text mode and OCR mode in configs/model_config.py. 4. Provide an example of loading an entire knowledge base directory. 5. Have display_answer show which page the answer comes from.
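The IP-address complaint (item 2) is typical of tokenizers that split on punctuation. A hedged sketch of the problem and one possible regex pre-pass workaround (illustrative only, not the project's actual tokenizer):

```python
import re

text = "DNS server 8.8.8.8 timeout"

# Naive tokenization splits on '.', so the IP collapses into four bare "8" tokens
naive = [t for t in re.split(r"[^\w]+", text) if t]
# naive == ['DNS', 'server', '8', '8', '8', '8', 'timeout']

# Workaround: match IPv4 addresses as whole tokens before falling back to words
tokens = re.findall(r"\d{1,3}(?:\.\d{1,3}){3}|\w+", text)
# tokens == ['DNS', 'server', '8.8.8.8', 'timeout']
```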

enhancement

As per the title, this adds Freeze fine-tuning and the GaLore optimizer from https://github.com/jiaweizzhao/GaLore
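Freeze fine-tuning trains only the last few blocks of a model while everything else stays frozen. A minimal PyTorch sketch on a toy stack of layers, assuming a simple sequential block structure (not the PR's actual implementation):

```python
import torch.nn as nn

# Toy stand-in for a transformer's block stack
model = nn.Sequential(*[nn.Linear(8, 8) for _ in range(6)])

def freeze_except_last(blocks: nn.Sequential, n_trainable: int) -> None:
    """Disable gradients for all but the last n_trainable blocks."""
    for block in list(blocks)[:-n_trainable]:
        for p in block.parameters():
            p.requires_grad = False

freeze_except_last(model, n_trainable=2)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
# 2 of 6 blocks remain trainable: 144 of 432 parameters
```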

feature request

I use a local LLM in an Android app. Whether the API host is set to http://domain:port or https://domain:port, chatting connects to the remote API host, but 0 tokens are returned I...

Running train.py with chatglm-6b shows: trainable params: 14680064 || all params: 3368640512 || trainable%: 0.4357860076700283. Why is the total parameter count not 6B? After fine-tuning, running infer_lora_finetuning.py shows: trainable params: 6172532736 || all params: 6187212800 || trainable%: 99.7627354274933. The total count is now correct, but why are 99% of the parameters trainable under LoRA?
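The trainable% line is simply trainable / all × 100, the same formula PEFT-style print_trainable_parameters utilities use (an assumption here). Checking the first reported figure:

```python
trainable, total = 14_680_064, 3_368_640_512
pct = 100 * trainable / total
# pct matches the logged 0.4357860076700283, so the ratio itself is consistent.
# The "all params" figure counts the tensor elements the wrapper actually sees;
# quantized or packed weight tensors can report fewer elements than the nominal
# 6B (an assumption, not verified for this particular run).
```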

I adjusted the parameters per args.md but none of them helped; quantization_bit 8/4 cannot be enabled and GPU memory is exhausted immediately. From args.md (ptuning v2): global_args = { "load_in_8bit": False, # lora: can be enabled if the GPU supports int8; requires pip install bitsandbytes "num_layers_freeze": -1, # non-lora, non-p-tuning mode,

### Reminder
- [X] I have read the README and searched the existing issues.

### Reproduction
# model
model_name_or_path: /home/ubuntu/Meta-Llama-3-8B
# method
stage: sft
do_train: true
finetuning_type: freeze
template: llama3...

### Do you need to file an issue? - [ ] I have searched the existing issues and this bug is not already filed. - [ ] My model is...

bug
triage

Although json_clean_up repairs faulty JSON responses, it increases index time and global search time by 70-80%. Some timing comparisons: v0.2.0 index time 35 min vs. v0.2.1 62 min; v0.2.0 global search...
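To check whether the repair pass itself explains the slowdown, one can micro-benchmark parsing with and without a cleanup step. The clean_up below is an illustrative stand-in (stripping markdown fences an LLM may wrap around JSON), not graphrag's actual json_clean_up:

```python
import json
import timeit

def clean_up(text: str) -> str:
    # Illustrative repair pass: strip markdown code fences around a JSON payload
    text = text.strip()
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json").strip()
    return text

payload = '{"nodes": [1, 2, 3], "edges": [[1, 2]]}'
t_plain = timeit.timeit(lambda: json.loads(payload), number=10_000)
t_clean = timeit.timeit(lambda: json.loads(clean_up(payload)), number=10_000)
# Comparing t_plain and t_clean isolates the overhead the extra pass adds
```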

bug
triage

Only repair broken responses #834 can't resolve the time problem; json_clean_up may not be the main reason for the increased time in 0.2.1 over 0.2.0.

bug
triage