mRASP
I converted the fairseq model to a transformers model.
https://huggingface.co/thehonestbob/mrasp
Hello, when I use the model following the code you provided, I get the following error:

OSError: Can't load config for 'thehonestbob/mrasp'. Make sure that: 'thehonestbob/mrasp' is a correct model identifier listed on 'https://huggingface.co/models' or 'thehonestbob/mrasp' is the correct path to a directory containing a config.json file

The code I used is:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_path = 'thehonestbob/mrasp'
model = AutoModelForSeq2SeqLM.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path)
input_text = ["Welcome to download and use!"]
inputs = tokenizer(input_text, return_tensors="pt", padding=True, max_length=300, truncation=True)
result = model.generate(**inputs)
result = tokenizer.batch_decode(result, skip_special_tokens=True)
result = [pre.strip() for pre in result]
```

Is this a problem with the model?
First download the entire model repo to your local machine, then set model_path to the path of the downloaded directory.
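For example, the whole repo can be fetched with `snapshot_download` from the `huggingface_hub` package. This is just a sketch; it assumes `huggingface_hub` is available (it is normally installed alongside `transformers`):

```python
# Sketch: download every file in the repo first, then load from the
# resulting local directory instead of the hub id.
from huggingface_hub import snapshot_download

REPO_ID = "thehonestbob/mrasp"

def download_mrasp(target_dir: str = "./mrasp") -> str:
    """Fetch the full model repo and return the local directory path."""
    return snapshot_download(repo_id=REPO_ID, local_dir=target_dir)

# Then point from_pretrained at the downloaded folder, e.g.:
# model_path = download_mrasp()
# model = AutoModelForSeq2SeqLM.from_pretrained(model_path, trust_remote_code=True)
```

Cloning the repo with `git lfs` and passing that folder as `model_path` works the same way.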
I got it running successfully. Thank you very much for your work!
How do I translate in the Chinese-to-English direction?
You can look at the tokenization_bat.py file; the main idea is to prepend a start token that indicates the language of the current text.
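The start-token idea can be sketched roughly as below. The token names LANG_TOK_ZH and LANG_TOK_EN are assumptions following the mRASP convention; check tokenization_bat.py for the exact tokens this checkpoint expects:

```python
def add_lang_token(text: str, lang_token: str) -> str:
    """Prepend a start token that tells the model which language the text is in."""
    return f"{lang_token} {text}"

# Hypothetical Chinese -> English setup: tag the source text as Chinese
# before passing it to the tokenizer.
src = add_lang_token("欢迎下载使用!", "LANG_TOK_ZH")
print(src)  # LANG_TOK_ZH 欢迎下载使用!
```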
I did that and it is not working. Could you help a little more?