JamieVC
Thanks for the good idea of `del llm_model`, but I have another question. `create_model()` is decorated with `@st.cache_resource`, as in the source code below. In my understanding, the function `create_model()` just runs...
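For context, a minimal sketch of the pattern being discussed, assuming a Streamlit app; the model ID and loader body are hypothetical placeholders, not the original source:

```python
import streamlit as st
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # hypothetical placeholder

@st.cache_resource
def create_model():
    # Runs once per Streamlit server process; subsequent script reruns
    # receive the same cached objects instead of reloading the model.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    return tokenizer, model

tokenizer, llm_model = create_model()
# `del llm_model` only drops this local reference; the cached model
# persists until create_model.clear() or st.cache_resource.clear() is called.
```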
> Hi @JamieVC , what's the version of `transformers` in your environment? Did you run `./setup.sh` as described in the [readme](https://github.com/intel/ai-reference-models/tree/main/models_v2/pytorch/llama/training/cpu#inference)? Hi @huiyan2021 , in my environment, the version of `transformers` is:...
> Thanks for reporting this. We suspect it's due to a model `state_dict` definition conflict among different `transformers` versions. Our dev peer is on the issue. Thanks for your...
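To make the suspected conflict concrete, here is a hedged diagnostic sketch (the model ID and checkpoint path are hypothetical, not from this thread) that compares the keys the installed `transformers` version expects against the keys actually stored in a checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical placeholders; substitute the actual model and checkpoint.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
checkpoint = torch.load("checkpoint.bin", map_location="cpu")

model_keys = set(model.state_dict().keys())
ckpt_keys = set(checkpoint.keys())

# Keys that exist on only one side indicate a state_dict definition mismatch.
print("only in checkpoint:", sorted(ckpt_keys - model_keys)[:10])
print("only in model:", sorted(model_keys - ckpt_keys)[:10])
```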
Hi @huiyan2021 and @shiyang-weng, sorry, I may need your support. I'm not sure how to modify the code myself. Could you please integrate the possible workaround into my finetune.py? **The possible workaround:** https://github.com/tloen/alpaca-lora/issues/319...
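I can't restate the exact patch without the thread, but here is a hedged sketch of the change that issue is commonly cited for, i.e. disabling the legacy `state_dict` monkey-patch in alpaca-lora's finetune.py so newer `peft` versions save a non-empty adapter (verify against the linked issue before applying):

```python
# Fragment for finetune.py, not a standalone script. The override below
# (present in the original finetune.py) is commented out as the workaround,
# letting peft handle the adapter state_dict during save_pretrained().

# old_state_dict = model.state_dict
# model.state_dict = (
#     lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
# ).__get__(model, type(model))

model.save_pretrained(output_dir)  # should now write a non-empty adapter_model.bin
```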
Thanks for the quick response! Yes, the code excludes the NPU. In the following sample, RAG starts to run the LLM on the NPU with openvino.genai, so I just wonder whether the LLM ReAct...
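For reference, a minimal sketch of running an LLM on the NPU with OpenVINO GenAI, along the lines of that sample; the model directory is a hypothetical placeholder and must already contain a model exported to OpenVINO IR (e.g. via `optimum-cli export openvino`):

```python
import openvino_genai as ov_genai

# Hypothetical path to a model previously converted to OpenVINO IR format.
model_dir = "TinyLlama-1.1B-Chat-ov"

# Passing "NPU" as the device runs generation on the NPU instead of CPU/GPU.
pipe = ov_genai.LLMPipeline(model_dir, "NPU")
print(pipe.generate("What is retrieval-augmented generation?", max_new_tokens=100))
```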
Similar issue: local search in GraphRAG doesn't answer with content from the input text files.