Darren Oberst
@nithamitabh - Hi Amitabh - thanks for the thoughtful feedback and recommendation. We agree with you. We have implemented a **very lightweight** (but functional) API inference server in llmware -...
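As a rough, hypothetical sketch of what a very lightweight inference server can look like (illustrative stdlib code only, not llmware's actual implementation; the JSON schema, handler, and `make_server` helper are assumptions for this example):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class InferenceHandler(BaseHTTPRequestHandler):
    """Toy handler: echoes the prompt back; a real server would call the model."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # hypothetical response schema - llmware's actual server may differ
        body = json.dumps({"llm_response": "echo: " + payload.get("prompt", "")}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # silence default per-request logging
        pass


def make_server(port: int = 0) -> HTTPServer:
    # port=0 lets the OS pick a free port (handy for testing)
    return HTTPServer(("127.0.0.1", port), InferenceHandler)
```

Running `make_server(8080).serve_forever()` gives a server that clients can POST `{"prompt": "..."}` to and receive JSON back.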
@nithamitabh - we have added this in version 0.2.15 (released yesterday) and in the main branch - a new option to dynamically pass an api_endpoint in the Model load process -...
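As a hypothetical sketch of the idea only (toy code, not llmware's actual API surface): passing an `api_endpoint` at load time lets the same loaded-model object route inference to a remote server instead of running locally.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LoadedModel:
    # toy stand-in for a loaded model object (not llmware's real class)
    name: str
    api_endpoint: Optional[str] = None

    def inference(self, prompt: str) -> str:
        if self.api_endpoint:
            # remote path: would POST the prompt to the endpoint (omitted here)
            return f"[remote:{self.api_endpoint}] {prompt}"
        # local path: would run the model in-process (omitted here)
        return f"[local] {prompt}"


def load_model(name: str, api_endpoint: Optional[str] = None) -> LoadedModel:
    # api_endpoint is optional: None means load and run the model locally
    return LoadedModel(name=name, api_endpoint=api_endpoint)
```

The design point: the caller's inference code stays identical whether the model is local or served remotely; only the load call changes.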
@geverl - thanks for raising this - and I appreciate the nice, clear, simple example (sometimes the simplest example produces the bug that you missed!) .... I am running the...
@geverl - thanks for highlighting this - it is a good catch. It has been fixed in the main branch if you pull the latest code, and will be in...
@arcontechnologies - could you please try the following to set a new home path:

```python
from llmware.configs import LLMWareConfig

# check current home path
home = LLMWareConfig().get_home()
print("home: ", home)
```
...
@wissamharoun and @CodeWithChetan2 - could you please explain the issue you are seeing in more detail - the code/example, and details on your platform/Python/llmware versions?
@arcontechnologies - if I am following your question, llmware does not delete the sqlite_llmware.db when you change the home paths. The beauty of SQLite is also its simplicity - it...
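To illustrate the point with plain `sqlite3` (the paths here are temporary placeholders, not llmware's real layout): switching to a new home path just means opening a database file at a different location - the old file is left exactly where it was, with its contents intact.

```python
import os
import sqlite3
import tempfile

old_home = tempfile.mkdtemp()   # stand-in for the original llmware home path
new_home = tempfile.mkdtemp()   # stand-in for the new home path

# write a table into the "old" home's database file
con = sqlite3.connect(os.path.join(old_home, "sqlite_llmware.db"))
con.execute("CREATE TABLE library (name TEXT)")
con.execute("INSERT INTO library VALUES ('my_library')")
con.commit()
con.close()

# "switching" home paths just opens (or creates) a db at the new path -
# the old database file and its contents are untouched
con = sqlite3.connect(os.path.join(new_home, "sqlite_llmware.db"))
con.close()

old_db_still_exists = os.path.exists(os.path.join(old_home, "sqlite_llmware.db"))
```

Because each SQLite database is just a single file on disk, nothing is deleted unless you remove the file yourself.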
@JBatUN - thanks - this is a great question. For GGUF models from HF, llmware pulls a snapshot of the HF model repo using the huggingface_hub api, and then places the...
@bhugueney - thanks for the suggestion - it is a good idea - let me look into it ... If you have some specific ideas around integrating or would like to...
@limcheekin & @mallahyari - thanks for raising this - the general prompt format for a SLIM model looks like this: `full_prompt = ": " + {{context}} + "\n" + "` ...
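The format above is cut off, so as a generic illustration only (the `<human>:` / `<bot>:` wrapper tags below are placeholders, not the actual SLIM tokens, which are not shown in the truncated comment), prompt assembly of this style looks like:

```python
def build_slim_style_prompt(context: str, instruction: str) -> str:
    # "<human>:" and "<bot>:" are placeholder tags for illustration -
    # substitute the real SLIM prompt tokens for the model you are using
    return "<human>: " + context + "\n" + instruction + "\n" + "<bot>:"
```

The key point is that the context passage is concatenated directly into the prompt, followed by the instruction, before the model's response tag.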