
What's the path to download the model?

Open ciaoyizhen opened this issue 1 year ago • 0 comments

My server is offline.

After starting the server, I tried to launch the chatglm3-32k model. It ran and then reported that the download failed, which is of course because there is no internet access. So naturally I thought to place the chatglm3-6b-32k model files in the cache directory, but that failed too. I then printed the path in the code: the `legacy_cache_path` in the `cache` function in `llm_family.py` prints out `chatglm3-32k-pytorch-6b-none/model.bin`. I don't know what this file is supposed to be, because the chatglm3 weights on Hugging Face are split into multiple `model.bin` shards, and the folder names don't match either.
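For context, that directory name looks like it is assembled from the model spec's attributes (model name, format, size in billions, quantization). Here is a minimal sketch of how such a legacy cache path might be derived; the function name and field names are my assumptions for illustration, not Xinference's actual implementation:

```python
import os

def build_legacy_cache_path(cache_dir, model_name, model_format,
                            size_in_billions, quantization):
    # Hypothetical reconstruction: join the spec fields with dashes,
    # falling back to "none" when no quantization is set.
    dirname = "{}-{}-{}b-{}".format(
        model_name, model_format, size_in_billions, quantization or "none"
    )
    return os.path.join(cache_dir, dirname, "model.bin")

# Reproduces the path printed in my case:
path = build_legacy_cache_path(".", "chatglm3-32k", "pytorch", 6, None)
print(path)  # ./chatglm3-32k-pytorch-6b-none/model.bin
```

If the naming really works this way, the downloaded shards would need to live under a folder matching that dashed scheme rather than the original Hugging Face repo name.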

So I would like to ask: where should I put my model files, and what should the directory and file names look like?

ciaoyizhen avatar May 09 '24 07:05 ciaoyizhen