thepok
Many TODOs, but it's already workable. It's now easy to load big open-source models: I was able to load GPT-NeoX-20B and Facebook's 30B model on my laptop with 64 GB of RAM...
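A sketch of what loading a model like that could look like, assuming LangChain's HuggingFacePipeline wrapper is the route being used here; the model id and kwargs are illustrative, not the exact setup from this comment:
```
from langchain.llms import HuggingFacePipeline

# Model id and kwargs are illustrative; a 20B model in fp32 needs
# substantial RAM, which is why the 64 GB laptop matters here.
llm = HuggingFacePipeline.from_model_id(
    model_id="EleutherAI/gpt-neox-20b",
    task="text-generation",
    model_kwargs={"max_length": 64},
)
print(llm("Hello"))
```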
The API seems easy: https://wolframalpha.readthedocs.io/en/latest/?badge=latest
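For example, based on the package's documented usage (the app id is a placeholder):
```
import wolframalpha

# "YOUR_APP_ID" is a placeholder; a real WolframAlpha App ID is required.
client = wolframalpha.Client("YOUR_APP_ID")
res = client.query("temperature in Berlin")
print(next(res.results).text)
```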
```
import langchain
from langchain.cache import SQLiteCache
from langchain.llms import OpenAI

langchain.llm_cache = SQLiteCache(".langchain.db")

# The first call populates the cache; the second is served from it.
OpenAI(temperature=0, max_tokens=-1)("hallo")
OpenAI(temperature=0, max_tokens=-1)("hallo")
```
The second, cached call raises: ValueError...
Generate a lot of text from different perspectives with an LLM, and you will find the right answer in it most of the time! Without using DPR/Google, it achieved SoTA on...
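A minimal sketch of that generate-then-read idea, assuming LangChain's OpenAI wrapper; the prompts and the majority vote are my own illustration, not the exact method from the paper:
```
from collections import Counter

from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)  # sampling temperature chosen for diverse generations

question = "When was the Eiffel Tower completed?"
answers = []
for _ in range(5):
    # "Generate" background text with the model instead of retrieving it.
    passage = llm(f"Write a short background passage about: {question}")
    # Then "read" the generated passage to extract an answer.
    answer = llm(f"{passage}\n\nUsing the passage above, answer briefly: {question}")
    answers.append(answer.strip())

# Majority vote across the sampled perspectives.
print(Counter(answers).most_common(1)[0][0])
```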
When a request was fulfilled from the cache, an empty call to the LLM was still made... this fixes the bug, but I am not sure how to make the lint errors...
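For reference, a toy model of the bug and the shape of the fix; the class and cache layout here are made up for illustration, not LangChain's actual internals:
```
# Toy illustration of the cache bug described above.
class ToyCachedLLM:
    def __init__(self, llm):
        self.llm = llm    # the underlying model, any str -> str callable
        self.cache = {}   # prompt -> completion

    def __call__(self, prompt):
        if prompt in self.cache:
            # The fix: return the cached completion immediately.
            # The buggy version fell through and still called the LLM
            # with nothing left to generate, which raised the ValueError.
            return self.cache[prompt]
        completion = self.llm(prompt)
        self.cache[prompt] = completion
        return completion
```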
I would like to have one chain produce some text, and another chain refine that text.
Lol, why isn't that a thing yet ;D
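For what it's worth, a hedged sketch of one way to wire that up today with LLMChain and SimpleSequentialChain (the prompts are made up):
```
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

# First chain produces a draft.
produce = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["topic"],
        template="Write a rough paragraph about {topic}.",
    ),
)

# Second chain refines whatever the first one produced.
refine = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["draft"],
        template="Rewrite the following paragraph to be clearer:\n\n{draft}",
    ),
)

pipeline = SimpleSequentialChain(chains=[produce, refine])
print(pipeline.run("LLM caching"))
```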