Xavier
I am currently training Alpaca-LoRA as-is, but I am already wondering whether anyone has tried to speed it up with DyLoRA. Paper: https://neurips2022-enlsp.github.io/papers/paper_37.pdf Code to replace LoRA with DyLoRA: https://github.com/huawei-noah/KD-NLP/tree/main/DyLoRA
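For context, the core idea of DyLoRA is to sample a rank b ≤ r at each training step and truncate the LoRA factors to that rank, so the adapter ends up usable at every rank up to r. This is a minimal pure-Python sketch of that truncation (toy nested-list matrices, not the authors' PyTorch implementation; function and variable names are illustrative):

```python
import random

def matmul(A, B):
    """Naive matrix multiply for plain nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def dylora_update(A, B, max_rank, rank=None):
    """DyLoRA-style low-rank update: instead of always using the full
    rank r, sample a rank b <= r (or take a fixed one for inference)
    and truncate the LoRA factors to the first b components.
    A: d x r down-projection, B: r x k up-projection (toy shapes)."""
    b = rank if rank is not None else random.randint(1, max_rank)
    A_b = [row[:b] for row in A]   # keep the first b columns of A
    B_b = B[:b]                    # keep the first b rows of B
    return matmul(A_b, B_b)        # d x k update of rank at most b
```

At inference time the same adapter can then be deployed at any rank b ≤ r by passing `rank=b`, which is the property the paper trains for.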
I set different goals and asked different questions: there is no final answer, summary, action plan, or clear progression toward the goal. It seems more like good infinite brainstorming than...
Add tag support: add(doc_url, tags=["wikipedia", "subject1"]) and query(tags_to_include=["wikipedia"])
### 🚀 The feature Enable setting tags on resources added via .add(..., tags=[]), so that when querying via .query(..., include_tags=[], exclude_tags=[]) we can include or exclude specific tags, or fall back to a global search when...
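To make the requested semantics concrete, here is a toy in-memory sketch of the proposed behavior (class and parameter names are illustrative, not the project's actual interface):

```python
class TaggedStore:
    """Toy sketch of tag-aware add/query, for illustration only."""

    def __init__(self):
        self._docs = []  # list of (doc_url, set_of_tags)

    def add(self, doc_url, tags=None):
        self._docs.append((doc_url, set(tags or [])))

    def query(self, include_tags=None, exclude_tags=None):
        include = set(include_tags or [])
        exclude = set(exclude_tags or [])
        results = []
        for url, tags in self._docs:
            if include and not include & tags:
                continue  # must carry at least one included tag
            if exclude & tags:
                continue  # skip docs carrying an excluded tag
            results.append(url)
        return results  # empty filters => global search over everything
```

The key design question is visible in the two `continue` branches: whether `include_tags` means "any of" (as here) or "all of", and whether `exclude_tags` wins over `include_tags` on conflict.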
In a setup with the Pi3 built-in WiFi interface and >= 2 USB WiFi adapters used to connect multiple clients, OP will either show only 1 WiFi client interface, or...
An LMQL query with proper scripting (inside and outside the query) could simulate an LLM/GPT-based (semi-)autonomous agent (e.g. Auto-GPT, BabyAGI). What could not be covered by LMQL? LMQL can handle...
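The kind of agent loop meant here is roughly the following, shown LMQL-independently as plain Python (`llm` is any prompt→text callable and the tool names are hypothetical stubs; the question is which parts of this loop LMQL's scripting can or cannot express):

```python
def run_agent(llm, tools, goal, max_steps=5):
    """Minimal plan-act-observe loop in the Auto-GPT/BabyAGI style.
    The model replies either "tool_name: argument" or "finish: answer";
    observations are fed back into the prompt for the next step."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = llm("\n".join(history))
        action, _, arg = decision.partition(": ")
        if action == "finish":
            return arg                    # agent declares it is done
        observation = tools[action](arg)  # execute the chosen tool
        history.append(f"Action: {decision}")
        history.append(f"Observation: {observation}")
    return None  # step budget exhausted without an answer
```

Parsing `decision` is exactly where constrained generation helps: a decoder that only admits valid `tool_name: ...` continuations removes the fragile string parsing.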
Tree of Thoughts (https://arxiv.org/abs/2305.10601) is a powerful generalization of chain-of-thought. Were you able to implement it in LMQL (e.g. https://github.com/princeton-nlp/tree-of-thought-llm)? If so, this kind of approach...
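For reference, the breadth-first variant of Tree of Thoughts reduces to a small beam search; this sketch keeps only the control flow, with `expand` and `score` standing in for what would normally be LLM proposal and evaluation calls:

```python
def tree_of_thoughts(root, expand, score, beam_width=2, depth=3):
    """Breadth-first Tree-of-Thoughts skeleton: expand(state) proposes
    candidate next thoughts, score(state) rates a partial solution, and
    only the top `beam_width` states survive each level.
    `expand` and `score` are arbitrary callables here, not LLM calls."""
    frontier = [root]
    for _ in range(depth):
        candidates = [s for state in frontier for s in expand(state)]
        if not candidates:
            break  # no state could be extended further
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam_width]  # prune to the beam
    return max(frontier, key=score)
```

Whether LMQL can express this cleanly comes down to whether branching generations (one `expand` call fanning out into several scored continuations) fit its scripting model.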
Could you please explain in your README, or here, what the difference and added value are compared to LangChain's LLM caching?