Running the server gives an error when using a Hugging Face model
```
    850                 return issubclass(cls.__origin__, self.__origin__)
    851         if not isinstance(cls, _GenericAlias):
--> 852             return issubclass(cls, self.__origin__)
    853         return super().__subclasscheck__(cls)

TypeError: issubclass() arg 1 must be a class
```
The manual package works with the model, not the Hugging Face one.
https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/resolve/main/ggml-vicuna-13b-4bit-rev1.bin
Are you saying that you're trying to load the model from Hugging Face, or that you downloaded it from Hugging Face?
I ask because that file is not in the Hugging Face format; it's GGML. What specifically are you running that's giving you the TypeError? This reminds me of an error I was getting when I didn't have the correct pydantic version.
https://github.com/abetlen/llama-cpp-python/pull/29#issuecomment-1498248609
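If it helps, you can check which format the downloaded file actually is by looking at its first four bytes. A minimal sketch, assuming a GGML-family file; the magic values below are taken from llama.cpp's loader and may differ for your version:

```
# Dump the first 4 bytes of the model file; GGML-family files start with a magic.
xxd -l 4 ./models/7B/ggml-model.bin
# Expected output (assumption, per llama.cpp):
#   6c6d 6767  "lmgg" -> unversioned ggml
#   666d 6767  "fmgg" -> ggmf
#   746a 6767  "tjgg" -> ggjt (newer files such as the -rev1 ones)
```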
Downloaded from Hugging Face.
```
pip install llama-cpp-python[server]
export MODEL=./models/7B/ggml-model.bin
python3 -m llama_cpp.server
```
Is the solution to install pydantic==1.10.7? I didn't get a chance to try it. I thought that's what this comment said: https://github.com/abetlen/llama-cpp-python/pull/29#issuecomment-1498248609.
I'm not sure what the solution is. Did you try it and get it to work? I had a similar issue, and I think it did fix mine.
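If anyone wants to try it, this is roughly what I ran. A sketch, assuming the version pin from the linked comment is still what's needed:

```
# Pin pydantic to the version mentioned in the linked comment, then relaunch the server.
pip install pydantic==1.10.7
export MODEL=./models/7B/ggml-model.bin
python3 -m llama_cpp.server
```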
Can this issue be closed?
No, I just left it. I didn't have much time to keep playing around with it.
Please reopen when you have time.