nameless0704

Results: 17 comments by nameless0704

The same happens with the BBANDS function... I don't think NaN values are interfering, since the last 3 input values are non-NaN, but I'm still getting NaN returns.
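For context, a minimal sketch of how TA-Lib's BBANDS can keep returning NaN even when the tail of the input is clean. The input array here is made up, and the behavior it shows (a single NaN poisoning all later outputs) is a common TA-Lib pitfall rather than a confirmed diagnosis of this case:

```python
import numpy as np
import talib

# Made-up series: two NaNs at the start, clean values afterwards.
close = np.array([np.nan, np.nan, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])

# TA-Lib updates its moving averages incrementally, so a NaN that enters
# the running state never leaves it: every output from the first NaN on
# stays NaN, even where the inputs themselves are non-NaN.
upper, middle, lower = talib.BBANDS(close, timeperiod=5)
print(middle)  # all NaN despite the clean tail

# Dropping the NaNs before calling BBANDS restores normal output.
print(talib.BBANDS(close[2:], timeperiod=5)[1])
```

If that matches what you're seeing, trimming or filling NaNs before calling the function is the usual workaround.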

May I ask: if we set the plotting aside and only consider the calculation part, does this library have problems? If the calculations are buggy too, could you point me to a usable project? ^ ^ Thanks

I hit the same error in basic_language_model_gpt2_ml.py:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_14936\15076559.py in
      1 import numpy as np
----> 2 from bert4keras.models import build_transformer_model
      3 from bert4keras.tokenizers import Tokenizer
      4 from bert4keras.snippets...
```
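Not a confirmed fix for this particular AttributeError, but a common cause of import-time failures with bert4keras is a Keras/TensorFlow mismatch; bert4keras documents a TF_KERAS switch that makes it use tf.keras instead of standalone Keras. A sketch of that workaround, assuming TF 2.x is installed:

```python
import os

# bert4keras reads TF_KERAS at import time; "1" tells it to back onto
# tf.keras rather than standalone Keras (which must otherwise match your
# TensorFlow version exactly). Set it before any bert4keras import.
os.environ["TF_KERAS"] = "1"

from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer
```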

After switching to single-machine multi-GPU inference and adding device_map='auto' to AutoModel, I get an error saying the tensors are not on the same device. But the embeddings loaded through sentence-transformers and langchain's UnstructuredFileLoader apparently can't do multi-GPU (yet)... so there's no way to get everything onto one device?
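For reference, a minimal sketch of the device_map='auto' loading path being described; the model id and prompt are placeholders, and the key point is that a sharded model still expects its inputs on the first shard's device:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "THUDM/chatglm-6b"  # placeholder hub id, not from the thread
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# device_map="auto" shards the layers across all visible GPUs; inputs must
# then be moved to the device holding the first shard (usually cuda:0),
# otherwise you get "Expected all tensors to be on the same device" errors.
model = AutoModel.from_pretrained(
    model_name, trust_remote_code=True, device_map="auto"
).eval()

inputs = tokenizer("hello", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```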

> Apparently this is a text2text model and not an autoregressive model. So it's more like FLAN than GPT-J or other currently supported models. I tried mt0 and some encoder-decoder...
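To make that distinction concrete, a small sketch (the model ids are illustrative, not from the thread): text2text models like mt0 and FLAN load through the encoder-decoder class, while autoregressive models like GPT-J load through the causal-LM class, and the two are not interchangeable:

```python
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

# Encoder-decoder (text2text): the input is encoded once and the output
# sequence is decoded from scratch, conditioned on that encoding.
t2t = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-small")

# Decoder-only (autoregressive): the output is a token-by-token
# continuation of the input sequence itself.
ar = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
```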

I'm still getting this error on version 0.1.4 with the open-source Qwen-14B-Chat model. Is this still a known issue?

I'd also like to know which master address/port I should use to avoid `system error: 10049`.
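For what it's worth, Windows socket error 10049 (WSAEADDRNOTAVAIL) usually means the process tried to bind to a master address that isn't local to the machine. A minimal sketch, assuming single-node torch.distributed (the port value is arbitrary; any free port works):

```python
import os
import torch.distributed as dist

# 127.0.0.1 is always bindable locally, which sidesteps error 10049;
# on a real multi-node setup this must be the rank-0 node's reachable IP.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")  # any free port

# gloo is the backend that works on Windows; nccl is Linux-only.
dist.init_process_group(backend="gloo", rank=0, world_size=1)
print(dist.get_rank(), dist.get_world_size())
dist.destroy_process_group()
```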

> Hi @candowu, thanks for raising this issue. This is arising because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is...
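A sketch of the usual workaround for that casing mismatch, assuming a transformers version that already ships the Llama classes (4.28 or later): the hub config names `LLaMATokenizer`, but the library class is `LlamaTokenizer`, so loading it explicitly bypasses AutoTokenizer's class lookup:

```python
from transformers import LlamaTokenizer

# Load the tokenizer class directly instead of going through AutoTokenizer,
# which would try (and fail) to resolve the "LLaMATokenizer" name written
# in the hub's tokenizer_config.json.
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
print(tokenizer.tokenize("Hello world"))
```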

At a glance this looks like a transformers version mismatch; as far as I can tell, the ChatGLM family is only supported from around 4.26.1 onward.
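A quick sanity check along those lines; note the 4.26.1 floor is the guess from this comment, not a documented minimum:

```python
import transformers
from packaging import version

required = "4.26.1"  # assumption from the comment above
if version.parse(transformers.__version__) < version.parse(required):
    raise RuntimeError(
        f"transformers {transformers.__version__} is likely too old for ChatGLM; "
        f"try: pip install -U 'transformers>={required}'"
    )
```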