
Error SSL: CERTIFICATE_VERIFY_FAILED while installing model

CapitaineThug opened this issue 1 year ago

/!\ This text was translated from French by DeepL and may contain errors /!\

Expected Behavior

When I install a model, the model should be downloaded from the Internet and then installed on the system.

Current Behavior

With the "AutoGPTQ" binding, I install the "Wizard-Vicuna-7B-Uncensored-GPTQ" model, and the following error is displayed in the console:

```
Traceback (most recent call last):
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 1348, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\http\client.py", line 1283, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\http\client.py", line 1329, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\http\client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\http\client.py", line 1038, in _send_output
    self.send(msg)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\http\client.py", line 976, in send
    self.connect()
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\http\client.py", line 1455, in connect
    self.sock = self._context.wrap_socket(self.sock,
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\ssl.py", line 513, in wrap_socket
    return self.sslsocket_class._create(
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\ssl.py", line 1071, in _create
    self.do_handshake()
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\ssl.py", line 1342, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:1007)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "S:\Lollms_Webui\lollms-webui\api\__init__.py", line 230, in install_model_
    "total_size":self.binding.get_file_size(model_path),
  File "S:\Lollms_Webui\lollms-webui\lollms-data\bindings_zoo\gptq\__init__.py", line 503, in get_file_size
    response = urllib.request.urlopen(filename)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 519, in open
    response = self._open(req, data)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 536, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 496, in _call_chain
    result = func(*args)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 1391, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 1351, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:1007)>
```
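Before suspecting the download servers, it can help to check where this Python environment looks for CA certificates, since conda environments on Windows sometimes end up without a usable CA bundle, which produces exactly this failure. A minimal diagnostic sketch using only the standard library (not lollms code):

```python
import ssl

# Show where this interpreter expects its CA bundle to live.
# If cafile/capath are both None or point at missing files, HTTPS
# certificate verification will fail like in the traceback above.
paths = ssl.get_default_verify_paths()
print("cafile:", paths.cafile)
print("capath:", paths.capath)
print("env overrides:", paths.openssl_cafile_env, paths.openssl_capath_env)

# The default context requires verification; urllib uses this by default.
ctx = ssl.create_default_context()
print("verify mode:", ctx.verify_mode)
```

Running this inside the `lollms_env` environment and comparing with a system Python would show whether the environment's certificate store is the problem.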

Model installation stops immediately. I tried with other bindings and the error is the same. For GPT4All, I manually downloaded the model "orca-mini-3b.ggmlv3.q4_0.bin", moved it to the binding folder, and the following error appeared while trying to load it:

```
Couldn't load model: [Unable to instantiate model]
Traceback (most recent call last):
  File "S:\Lollms_Webui\lollms-webui\app.py", line 779, in update_setting
    self.model = self.binding.build_model()
  File "S:\Lollms_Webui\lollms-webui\lollms-data\bindings_zoo\gpt_4all\__init__.py", line 84, in build_model
    self.model = GPT4All(
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\site-packages\gpt4all\gpt4all.py", line 98, in __init__
    self.model.load_model(self.config["path"])
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\site-packages\gpt4all\pyllmodel.py", line 267, in load_model
    raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
```
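For the manually downloaded GPT4All model, one common cause of "Unable to instantiate model" is a truncated or corrupted download, for example an HTML error page saved under a `.bin` name. A small sanity check, with a hypothetical path to adjust for your setup:

```python
import os

def looks_like_html(head: bytes) -> bool:
    """Heuristic: an HTML error page saved as .bin starts with a tag."""
    return head.lstrip().startswith(b"<")

def check_model_file(path: str) -> None:
    # Reports file size and whether the content looks like HTML.
    if not os.path.exists(path):
        print("file not found:", path)
        return
    print(os.path.getsize(path), "bytes")
    with open(path, "rb") as f:
        print("looks like HTML error page:", looks_like_html(f.read(16)))

# Hypothetical location; use the actual binding models folder.
check_model_file("models/gpt_4all/orca-mini-3b.ggmlv3.q4_0.bin")
```

If the size doesn't match the size shown on the download page, re-downloading the file is the first thing to try.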
The "Unable to instantiate model" error did not appear when downloading the model "ggml-all-MiniLM-L6-v2-f16.bin", but I got this one instead:
```
Traceback (most recent call last):
  File "S:\Lollms_Webui\lollms-webui\api\__init__.py", line 230, in install_model_
    "total_size":self.binding.get_file_size(model_path),
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\site-packages\lollms\binding.py", line 118, in get_file_size
    response = urllib.request.urlopen(url)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 525, in open
    response = meth(req, response)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 634, in http_response
    response = self.parent.error(
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 563, in error
    return self._call_chain(*args)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 496, in _call_chain
    result = func(*args)
  File "S:\Lollms_Webui\installer_files\lollms_env\lib\urllib\request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
```
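A 403 here may also come from the server rejecting urllib's default `Python-urllib/x.y` User-Agent, which some hosts block for automated clients. A hedged sketch (the URL is a placeholder, not the real model link) that builds the same kind of size-probing request as `get_file_size`, but with a browser-like User-Agent:

```python
import urllib.request

# Placeholder URL; substitute the actual model file link.
url = "https://huggingface.co/SomeUser/SomeModel/resolve/main/model.bin"

req = urllib.request.Request(
    url,
    method="HEAD",  # headers only: enough to read Content-Length
    headers={"User-Agent": "Mozilla/5.0"},
)
print(req.get_method(), req.get_header("User-agent"))

# To actually probe the server (requires network access):
# with urllib.request.urlopen(req) as resp:
#     print("size:", resp.headers.get("Content-Length"))
```

If the request succeeds with the custom User-Agent but fails without it, the fix belongs in the binding's download code rather than in the environment.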

Possible Solution

I think the problem is either with the model download servers (when I try to download the file manually, I get an "Entry not found" error at the following link: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-GPTQ/resolve/main/Wizard-Vicuna-30B-Superhot-8K-GPTQ), or with the Python versions installed.
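To confirm whether the certificate chain is really the culprit (as opposed to the URL being wrong), one can temporarily disable verification for a single test request. This is insecure and meant purely as a diagnostic, never as a fix:

```python
import ssl

# INSECURE: disables certificate and hostname checks. Diagnostic only.
# check_hostname must be disabled before setting verify_mode to CERT_NONE.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
print("verification disabled:", ctx.verify_mode == ssl.CERT_NONE)

# If urllib.request.urlopen(url, context=ctx) now succeeds where the
# default context failed, the problem is the local CA setup, not the URL.
```

A success here combined with the "Entry not found" page would point at two separate issues: a broken local certificate store and a dead download link.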

Context

This is my working environment:

OS: Windows Server 2022 Datacenter 21H2
Components: Intel Xeon E5620 | 48 GB RAM | Nvidia GeForce GTX 650, GeForce GT 630, GeForce GT 730 (all cards are detected by lollms-webui) | 500 GB SSD
Python: Miniconda3, installed automatically by win_install.bat
Visual Studio: Visual Studio 2022 Community + "Desktop development with C++" workload
CUDA Toolkit: NVIDIA CUDA Toolkit 12.2
Nvidia driver: 474.44 Game Ready driver (latest update for these GPUs...)
lollms-webui version: Lollms 5.1.0 | Lollms webui 6.1
Network: static IP, no VPN, no proxy, the website doesn't use a reverse proxy, nothing is virtualized, public DNS server

For you

Sorry for sending you such a long message, but I want you to have as much information as possible to help me. Thank you for your help and for your program!

CapitaineThug · Sep 03 '23 13:09