elven2016

Results 37 comments of elven2016

I got the same error. Have you found a solution?

> add `with torch.autocast("cuda"):` and it will work.
>
> ```
> with torch.autocast("cuda"):
>     inputs = tokenizer(prompt, return_tensors="pt")
>     input_ids = inputs["input_ids"].to("cuda")
>
>     with torch.no_grad():
>         generation_output =...
> ```

> ```
> python test.py
> Traceback (most recent call last):
>   File "/Users/xxxxx/gpt4free/gpt4free/forefront/__init__.py", line 44, in create
>     trace_token = response.json()['response']['id']
> KeyError: 'response'
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most...
> ```

I have the same issue. My models are ggml-gpt4all-j.bin and vicuna-13b-v1.1. The API container starts up successfully, but the API does not seem to work:

```
curl http://localhost:8080/v1/models
curl: (52) Empty reply from server
```
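A quick way to tell "container up" apart from "API actually answering" is to probe the endpoint and treat an empty reply or refused connection as down. A minimal stdlib sketch (the helper name `is_api_up` is hypothetical, and the port/path assume the container's default `http://localhost:8080/v1/models`):

```python
import urllib.error
import urllib.request


def is_api_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with any HTTP response at all."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 500
    except urllib.error.HTTPError:
        return True   # the server answered, just with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, reset, or empty reply
```

Once the models are actually loaded and serving, `is_api_up("http://localhost:8080/v1/models")` should flip from False to True.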

Starting vicuna from a local model works fine on a single GPU, but with two GPUs I get: ValueError: The device_map provided does not give any device for the following parameters: model.embed_tokens.weight, model.layers.0.self_attn.q_proj.weight, ...

> At startup, if torch.cuda.is_available() returns True and more than one GPU is present, the model is automatically distributed across the GPUs.
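For what it's worth, an Accelerate-style `device_map` has to assign a device to every top-level module of the model, or exactly this ValueError is raised. A minimal sketch of an explicit two-GPU map for a llama-style vicuna checkpoint (the module names match the error message above, but the 40-layer count and the split point are assumptions for illustration):

```python
# Hypothetical explicit device_map for a 40-layer llama-style model.
# Every top-level module must be mapped, otherwise accelerate raises
# "The device_map provided does not give any device for ...".
device_map = {
    "model.embed_tokens": 0,
    **{f"model.layers.{i}": 0 for i in range(20)},      # first half on GPU 0
    **{f"model.layers.{i}": 1 for i in range(20, 40)},  # second half on GPU 1
    "model.norm": 1,
    "lm_head": 1,
}
```

Passing `device_map="auto"` to `from_pretrained` asks Accelerate to build such a map itself, which is what the multi-GPU code path here appears to rely on.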

Brother, have you solved this problem? Same here: a single GPU works, but with two GPUs vicuna fails to load.

> Hello, lichao. This is because only the GPT-3 analysis results for the stock AAPL are provided in advance in this demo. If you need results for other stocks, change the `source` parameter in `agent_args` to "openai" and replace `OPEN_AI_TOKEN` in `/demos/chatgpt-trading-v2/openai_token/token_.py` with your own OpenAI token; the run will then call the OpenAI GPT-3 API (see the OpenAI website for pricing). Also, since network instability may cause the API calls to fail and interrupt the run, you can follow the code under `demos/chatgpt-trading-v2/get_gpt_sentiment_results` to save the sentiment-analysis results in advance before doing the subsequent calculations.

It is probably unrelated to the model; it may be tokenizer-related. I ran into this too and have not solved it yet.

> If you can provide a video of the error occurring, it will be easier for us to debug and reproduce the issue. Thank you.

The following are the steps I performed...