
Not supported on old graphics cards?

Open jsrdcht opened this issue 5 months ago • 7 comments

I get an error when running MindSearch on a 3090 and a 2080 Ti. My driver and CUDA versions are:

NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0

mindsearch-backend   | Traceback (most recent call last):
mindsearch-backend   |   File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
mindsearch-backend   |     self.run()
mindsearch-backend   |   File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
mindsearch-backend   |     self._target(*self._args, **self._kwargs)
mindsearch-backend   |   File "/opt/lmdeploy/lmdeploy/serve/openai/api_server.py", line 1285, in serve
mindsearch-backend   |     VariableInterface.async_engine = pipeline_class(
mindsearch-backend   |   File "/opt/lmdeploy/lmdeploy/serve/async_engine.py", line 190, in __init__
mindsearch-backend   |     self._build_turbomind(model_path=model_path,
mindsearch-backend   |   File "/opt/lmdeploy/lmdeploy/serve/async_engine.py", line 235, in _build_turbomind
mindsearch-backend   |     self.engine = tm.TurboMind.from_pretrained(
mindsearch-backend   |   File "/opt/lmdeploy/lmdeploy/turbomind/turbomind.py", line 340, in from_pretrained
mindsearch-backend   |     return cls(model_path=pretrained_model_name_or_path,
mindsearch-backend   |   File "/opt/lmdeploy/lmdeploy/turbomind/turbomind.py", line 144, in __init__
mindsearch-backend   |     self.model_comm = self._from_hf(model_source=model_source,
mindsearch-backend   |   File "/opt/lmdeploy/lmdeploy/turbomind/turbomind.py", line 230, in _from_hf
mindsearch-backend   |     output_model_name, cfg = get_output_model_registered_name_and_config(
mindsearch-backend   |   File "/opt/lmdeploy/lmdeploy/turbomind/deploy/converter.py", line 123, in get_output_model_registered_name_and_config
mindsearch-backend   |     if not torch.cuda.is_bf16_supported():
mindsearch-backend   |   File "/opt/py3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 128, in is_bf16_supported
mindsearch-backend   |     device = torch.cuda.current_device()
mindsearch-backend   |   File "/opt/py3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 778, in current_device
mindsearch-backend   |     _lazy_init()
mindsearch-backend   |   File "/opt/py3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
mindsearch-backend   |     torch._C._cuda_init()
mindsearch-backend   | RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW
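CUDA Error 804 ("forward compatibility was attempted on non supported HW") usually means the CUDA runtime inside the container is newer than what the host driver (here 525.105.17, which supports up to CUDA 12.0) can serve, and NVIDIA's forward-compatibility package only works on data-center GPUs, not GeForce cards like the 3090 or 2080 Ti. A minimal sketch of the version check involved — the `MIN_DRIVER` table values are taken from NVIDIA's CUDA release notes and should be verified against your toolkit version, and `driver_ok` is a hypothetical helper, not part of lmdeploy or MindSearch:

```python
# Hypothetical helper: check whether the host driver meets the minimum
# Linux driver version required by the CUDA toolkit inside the container.
# Minimum-driver values below are assumed from NVIDIA's release notes;
# verify them for your exact toolkit version.
MIN_DRIVER = {
    "12.0": (525, 60, 13),
    "12.1": (530, 30, 2),
    "12.2": (535, 54, 3),
}

def driver_ok(driver: str, cuda: str) -> bool:
    """Return True if host driver `driver` satisfies CUDA version `cuda`."""
    have = tuple(int(p) for p in driver.split("."))
    have = have + (0,) * (3 - len(have))  # pad short version strings
    return have >= MIN_DRIVER[cuda]

# Driver 525.105.17 is fine for a CUDA 12.0 container,
# but too old for a container built against CUDA 12.2:
print(driver_ok("525.105.17", "12.0"))  # True
print(driver_ok("525.105.17", "12.2"))  # False
```

If the check fails on a GeForce card, the fix is to either upgrade the host driver or use a container image built against a CUDA version the driver supports, since forward compatibility cannot bridge the gap on this hardware.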

jsrdcht avatar Aug 27 '24 18:08 jsrdcht