MagicSource

902 issues by MagicSource

An unrelated question: the LoRA model saved with this repo's config (pytorch_model.bin) is very small, only about 17 MB. But a LoRA trained with the Chinese-llama repo is very large: ![image](https://github.com/Facico/Chinese-Vicuna/assets/21303438/3966ef3f-de1d-44e8-a73e-3d41e87b2419) The difference in configuration is:

```
lora_rank 8
lora_alpha 32
modules_to_save "embed_tokens,lm_head"
```

Is this because extra parameters are being saved? And is that setting actually necessary?
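For scale, here is a back-of-the-envelope estimate of the two checkpoint sizes. The model dimensions below (LLaMA-7B-style: hidden size 4096, vocab 32000, 32 layers, LoRA on q_proj/v_proj) are assumptions for illustration, not taken from the issue:

```python
# Assumed LLaMA-7B-style dimensions (illustrative, not from the issue)
hidden, vocab, layers, rank = 4096, 32000, 32, 8
bytes_per_param = 4  # fp32

# LoRA adapter only: per adapted module, A is (rank x hidden) and B is (hidden x rank).
# Assuming q_proj and v_proj are adapted in each layer:
lora_params = layers * 2 * (rank * hidden + hidden * rank)
lora_mb = lora_params * bytes_per_param / 2**20
print(f"LoRA-only checkpoint: ~{lora_mb:.0f} MB")   # ~16 MB, close to the 17M file

# modules_to_save="embed_tokens,lm_head" additionally stores FULL copies
# of the input embedding and the output head, each (vocab x hidden):
full_params = 2 * vocab * hidden
full_mb = full_params * bytes_per_param / 2**20
print(f"embed_tokens + lm_head on top: ~{full_mb:.0f} MB extra")  # ~1000 MB extra
```

So yes, the size gap is explained by `modules_to_save` storing full weight copies. Whether it is necessary depends on the setup: if the vocabulary was extended (as Chinese-llama does), the resized embedding and head must be saved; otherwise it can usually be omitted.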

```
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
        Missing key(s) in state_dict: "base_model.model.model.embed_tokens.weight",
        "base_model.model.model.layers.0.self_attn.q_proj.weight",
        "base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight", ...
```
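A missing-key error like this usually means an adapter-only checkpoint (which contains just the LoRA weights) is being loaded into the full model with strict key checking; the usual remedy is to load the adapter through PEFT's own loader (e.g. `PeftModel.from_pretrained`) rather than `load_state_dict` on the whole model. A minimal plain-Python sketch of the strict vs. non-strict key matching involved (the key names are from the traceback; the function is a stand-in, not the real torch API):

```python
def load_state_dict(model_keys, checkpoint, strict=True):
    """Mimic strict/non-strict state_dict key matching with plain dicts."""
    missing = [k for k in model_keys if k not in checkpoint]
    unexpected = [k for k in checkpoint if k not in model_keys]
    if strict and (missing or unexpected):
        raise RuntimeError(f"Missing key(s) in state_dict: {missing}")
    return {k: checkpoint[k] for k in model_keys if k in checkpoint}

model_keys = [
    "base_model.model.model.embed_tokens.weight",
    "base_model.model.model.layers.0.self_attn.q_proj.weight",
    "base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight",
]
# An adapter checkpoint holds only the LoRA weights, not the base model:
adapter = {
    "base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight": "tensor",
}

try:
    load_state_dict(model_keys, adapter, strict=True)  # reproduces the error above
except RuntimeError as e:
    print("strict load fails:", e)

loaded = load_state_dict(model_keys, adapter, strict=False)  # adapter keys only
print(len(loaded))  # 1
```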

It seems BELLE uses a vocab size of 250k while LLaMA uses 32k. Is there any experiment on how vocab size affects final model performance?
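One directly quantifiable effect is parameter budget: the embedding table and (untied) output head each scale linearly with vocab size. The hidden size below is an assumption for illustration; neither model's dims are stated in the issue:

```python
hidden = 4096  # assumed hidden size; not stated in the issue

def embedding_params(vocab_size, hidden_size=hidden):
    # Input embedding plus an untied output head, each (vocab x hidden)
    return 2 * vocab_size * hidden_size

llama = embedding_params(32_000)    # ~0.26B params
belle = embedding_params(250_000)   # ~2.0B params
print(f"{belle / llama:.1f}x more parameters spent on the vocabulary")
```

A larger vocabulary shortens tokenized sequences for the covered languages but spends a much larger share of the parameter budget on embeddings; how that trades off against task performance is exactly the kind of ablation the question is asking for.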

Does deformable conv support export to ONNX?

enhancement
help wanted
module: ops
module: onnx

How does the CPU performance compare with other SOTA models such as YOLOv8-s?

### MediaPipe Solution (you are using)

MediaPipe T-Pose skeleton?

### Programming language

_No response_

### Are you willing to contribute it

No

### Describe the feature and the current behaviour/state

...

type:feature
legacy:pose
stat:awaiting googler

Windows build failed

```
/pycocotools/coco.py", line 96, in createIndex
    anns[ann['id']] = ann
KeyError: 'id'
```
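This `KeyError` means some entries in the `annotations` list of the COCO JSON lack the `id` field that `createIndex` indexes on. A hedged sketch of a pre-processing fix (the field names follow the COCO annotation format; the patching logic itself is an assumption, not part of pycocotools):

```python
def ensure_annotation_ids(coco_dict):
    """Assign sequential 'id's to COCO annotations that lack one."""
    anns = coco_dict.get("annotations", [])
    # Start above any id already present to avoid collisions
    next_id = 1 + max((a.get("id", 0) for a in anns), default=0)
    for ann in anns:
        if "id" not in ann:
            ann["id"] = next_id
            next_id += 1
    return coco_dict

data = {"annotations": [
    {"image_id": 1, "bbox": [0, 0, 10, 10]},  # missing 'id' -> would raise KeyError
    {"id": 5, "image_id": 2},
]}
fixed = ensure_annotation_ids(data)
print([a["id"] for a in fixed["annotations"]])  # [6, 5]
```

Run this on the loaded JSON (e.g. after `json.load`) before handing the file to `COCO()`, or fix the annotation export so every annotation carries a unique `id`.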

```
Compiler message:
lib/src/favorites.g.dart:137:8: Error: The return type of the method
'$FavoritesTable.validateIntegrity' is 'bool', which does not match the
return type, 'VerificationContext', of the overridden method,
'TableInfo.validateIntegrity'.
 - 'VerificationContext' is ...
```