MOSS
An open-source tool-augmented conversational language model from Fudan University
Does MOSS support an M2 MacBook? If so, what CPU and memory configuration is required? My current machine is a 12-core CPU, 38-core GPU, and 16-core Neural Engine with 32 GB of unified memory. I currently get the error "Torch not compiled with CUDA enabled". Thanks.
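The "Torch not compiled with CUDA enabled" error means the loading code calls `.cuda()`, which Apple Silicon cannot satisfy. A minimal sketch of picking a backend that falls back to Apple's MPS backend or the CPU (the helper name `pick_device` is hypothetical, not part of the MOSS repo):

```python
import torch

def pick_device() -> str:
    """Return the best available torch backend: CUDA, Apple MPS, else CPU."""
    if torch.cuda.is_available():
        return "cuda"
    # On an M2 MacBook, PyTorch exposes the GPU through the MPS backend.
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

device = pick_device()
```

A model loaded with `AutoModelForCausalLM.from_pretrained(...)` can then be moved with `model.to(device)` instead of `.cuda()`. Note that float16 is safest on GPU-like backends only, and whether the int4/int8 quantized kernels run on MPS is not guaranteed; CPU may still be required.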
Enterprise API application
How can an enterprise apply for API access?
https://huggingface.co/datasets/fnlp/moss-002-sft-data/tree/main
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

int4_model = "/data-ssd-1t/hf_model/moss-moon-003-sft-int4"
tokenizer = AutoTokenizer.from_pretrained(int4_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(int4_model, trust_remote_code=True).half().cuda()
model = model.eval()
meta_instruction = "You are an AI assistant whose name is MOSS.\n-...
```
Using auto-gptq simplifies the code and the quantization process; with it, users can run inference with the quantized model whether or not triton is installed, and can even run on CPU.
When using moss-moon-003-sft-int4 for single-GPU inference, GPU memory gradually fills up as inference proceeds. How can I configure it to release GPU memory after each question is answered?
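Steadily growing GPU memory across turns usually comes from retained tensor references and the CUDA caching allocator. A minimal sketch, assuming PyTorch with CUDA (the helper name `free_gpu_memory` is hypothetical): generate under `torch.no_grad()`, drop references to the outputs, then return cached blocks to the driver.

```python
import gc
import torch

def free_gpu_memory() -> None:
    """Drop unreachable Python objects, then release cached CUDA allocations."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

# Typical per-question loop (sketch):
# with torch.no_grad():
#     outputs = model.generate(**inputs)
# del outputs, inputs   # drop tensor references before clearing the cache
# free_gpu_memory()
```

Note that `empty_cache()` only frees memory no tensor still references, so deleting (or overwriting) the generation outputs each turn is the part that actually stops the growth.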
- Add `Moss` using `jittor`
- Add a CLI demo `moss_cli_demo_jittor`
- Update `README.md` and `README_en.md`
Thank you for your wonderful work! Do you have plans to publish a paper about this repo? I hope you can include more details in the paper and show more experimental...
Training data
Could you please release the training data used for pre-training?
Thanks for open-sourcing MOSS! I tried the model and the results are great! Will you later support SFT with adaptation methods such as LoRA? Full-parameter SFT is a bit too expensive (for those of us on a budget).