Keonwoo Roh
In torchnlp/download.py

```python
def _download_file_from_drive(filename, url):  # pragma: no cover
    """ Download filename from google drive unless it's already in directory.

    Args:
        filename (str): Name of the file to download...
```
Hello, here is PR #95 for the Korean localization update. If there are any problems, let me know and I'll make the changes.
> Hello, I have no experience with LLVMs, but just messing around, I stumbled across the same issue.
>
> ```
> Loading model state-spaces/mamba-130m
> Special tokens have been...
@Emerald01 I have the same issue, so has the core dump problem not been solved?
In my case, I deployed each agent locally and it works. If you deploy locally, you need to add the controller's port number in the RESTful API server.
The error is caused by a PyTorch version compatibility issue; downgrade to a version below 2.2.
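A minimal sketch of how to check whether an installed PyTorch falls in the affected range (the 2.2 cutoff comes from the comment above; the helper name `needs_downgrade` is just illustrative):

```python
def needs_downgrade(torch_version: str) -> bool:
    """Return True if this PyTorch version hits the compatibility issue (>= 2.2)."""
    # Strip any local build suffix like "+cu118", then compare major.minor only
    major, minor = (int(p) for p in torch_version.split("+")[0].split(".")[:2])
    return (major, minor) >= (2, 2)

print(needs_downgrade("2.2.1"))  # True -> downgrade, e.g. pip install "torch<2.2"
print(needs_downgrade("2.1.2"))  # False -> already compatible
```

In practice you would pass in `torch.__version__`; the downgrade itself is the usual `pip install "torch<2.2"`.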
Oh, I see. I deployed each agent to fastchat separately.
1. Run the controller (default port 21001):
   ```
   python3 -m fastchat.serve.controller
   ```
2. Run the openai_api_server:
   ```
   python3 -m fastchat.serve.openai_api_server --host localhost --port 8000 --controller-address "http://localhost:21001"
   ```
3. Deploy each agent as in https://github.com/zjunlp/AutoAct/blob/main/Scripts/model_bash/single_model.sh

I...
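Once the server from step 2 is up, it exposes FastChat's OpenAI-compatible REST endpoint at `http://localhost:8000/v1/chat/completions`. A minimal sketch of building a request payload for it (the model name `"my-agent"` is a placeholder; use whatever worker name you registered with the controller):

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    # Minimal OpenAI-compatible chat-completions payload for the server above
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("my-agent", "Hello")
print(json.dumps(payload))
```

You would POST this JSON body with a `Content-Type: application/json` header to the `/v1/chat/completions` route started in step 2.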