pppppM

Search results: 16 issues by pppppM

## Motivation Currently, the distiller can only get the output of the PyTorch module, so MMRazor cannot reproduce some more complex distillation algorithms, such as #17. This PR designs...

## Motivation As stated in the title. ## BC-breaking (Optional) Does the modification introduce changes that break the backward compatibility of the downstream repositories? If so, please describe how it breaks the...

Bug:P2

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand...

WIP

Thanks for your excellent work! I'm trying to reproduce the paper, but I have some confusion about the search phase. Could you please further describe the training details of the...

## Motivation The related pr in `MMRazor` is https://github.com/open-mmlab/mmrazor/pull/365 MMRazor is developing quantization algorithms, including PTQ and QAT. This PR is a draft code to deploy `MMRazor` quantization model in...

Model loading & chat example: `xtuner/model/auto.py`

Training on alpaca:

```
# Convert the alpaca dataset to OpenAI-format JSON
python xtuner/tools/convert_dataset.py tatsu-lab/alpaca alpaca --save-dir converted_alpaca
xtuner train xtuner/configs/internlm/internlm2_chat_1_8b/example.py
```
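The conversion step above turns alpaca records into OpenAI-format chat JSON. A minimal sketch of what that mapping looks like, assuming the public tatsu-lab/alpaca schema (`instruction`/`input`/`output` fields) and a plain user/assistant message pair; `alpaca_to_openai` is a hypothetical helper for illustration, not the actual `convert_dataset.py` implementation:

```python
def alpaca_to_openai(record: dict) -> dict:
    """Map one alpaca-style record to an OpenAI-format chat sample.

    The instruction (plus the optional input) becomes the user turn,
    and the output becomes the assistant turn.
    """
    user_content = record["instruction"]
    if record.get("input"):
        # Append the optional input context on a new line.
        user_content += "\n" + record["input"]
    return {
        "messages": [
            {"role": "user", "content": user_content},
            {"role": "assistant", "content": record["output"]},
        ]
    }


# Example alpaca record and its converted form.
sample = {
    "instruction": "Translate to French.",
    "input": "Hello",
    "output": "Bonjour",
}
converted = alpaca_to_openai(sample)
```

Writing each sample as a `messages` list keeps multi-turn and single-turn data in one uniform format, which is what makes the hybrid dataset pipeline simpler downstream.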

Mixtral 8x7B ZeRO3: 110GB
Mistral 7B ZeRO3: 15GB

Inference example: `xtuner/chat/conversation.py`
Dataset example: `xtuner/dataset/hybrid/dataset.py`
LLaVA example: `xtuner/configs/internlm/internlm2_chat_1_8b/hybrid/internlm2_chat_1_8b_llava_sft.py`