hwang136
When I tried to execute `tokenizer = AutoTokenizer.from_pretrained("/root/app/CustomLLM/InternVL-Chat-V1-5/", trust_remote_code=True)`, it failed with:

> File "/usr/local/python38/lib/python3.8/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
>   return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
> RuntimeError: Internal: could not parse ModelProto from...
FAILED: /tmp/pip-install-xa71jyjv/flash-attn_7b0f168e91914b648e389dd311d289cf/build/temp.linux-x86_64-3.10/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.o
/usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output /tmp/pip-install-xa71jyjv/flash-attn_7b0f168e91914b648e389dd311d289cf/build/temp.linux-x86_64-3.10/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.o.d -I/tmp/pip-install-xa71jyjv/flash-attn_7b0f168e91914b648e389dd311d289cf/csrc/flash_attn -I/tmp/pip-install-xa71jyjv/flash-attn_7b0f168e91914b648e389dd311d289cf/csrc/flash_attn/src -I/tmp/pip-install-xa71jyjv/flash-attn_7b0f168e91914b648e389dd311d289cf/csrc/cutlass/include -I/home/jovyan/.local/lib/python3.10/site-packages/torch/include -I/home/jovyan/.local/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/jovyan/.local/lib/python3.10/site-packages/torch/include/TH -I/home/jovyan/.local/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.10 -c -c /tmp/pip-install-xa71jyjv/flash-attn_7b0f168e91914b648e389dd311d289cf/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.cu -o /tmp/pip-install-xa71jyjv/flash-attn_7b0f168e91914b648e389dd311d289cf/build/temp.linux-x86_64-3.10/csrc/flash_attn/src/flash_bwd_hdim128_bf16_sm80.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -std=c++17 -U__CUDA_NO_HALF_OPERATORS__...
Is there a mapping that tells me which of these special tokens corresponds to which language?

"additional_special_tokens": [
  "ace_Arab",
  "ace_Latn",
  "acm_Arab",
  "acq_Arab",
  "aeb_Arab",
  "afr_Latn",
  ...
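These look like FLORES-200 style codes, which combine an ISO 639-3 language code with an ISO 15924 script code. A minimal sketch that splits a code into its two parts, plus a small illustrative name table (the names below are my reading of the FLORES-200 list; verify against the official language-code documentation):

```python
def split_nllb_code(code: str) -> tuple[str, str]:
    """Split e.g. 'ace_Arab' into ('ace', 'Arab'): language code + script code."""
    lang, script = code.rsplit("_", 1)
    return lang, script

# Partial, illustrative mapping (assumption: FLORES-200 naming; not exhaustive).
NLLB_NAMES = {
    "ace_Arab": "Acehnese (Arabic script)",
    "ace_Latn": "Acehnese (Latin script)",
    "afr_Latn": "Afrikaans (Latin script)",
}

def describe(code: str) -> str:
    return NLLB_NAMES.get(code, f"unknown ({split_nllb_code(code)[0]})")
```

The key point is that the suffix (`Arab`, `Latn`, etc.) is the writing system, so the same language can appear twice with different scripts.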
InsightFace does not use the local models even when the local path to the exact same models is specified
I have already downloaded the models and placed them directly on the server (`app = FaceAnalysis(name="buffalo_l", root="/root", providers=['CPUExecutionProvider'])`), but it still tries to access `github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip` to download that zip file.
### 🔎 Search before asking - [x] I have searched the PaddleOCR [Docs](https://paddlepaddle.github.io/PaddleOCR/) and found no similar bug report. - [x] I have searched the PaddleOCR [Issues](https://github.com/PaddlePaddle/PaddleOCR/issues) and found no...
Value error, Model architectures ['PaddleOCRVLForConditionalGeneration'] are not supported for now.