thepok
I made a fix: https://github.com/hwchase17/langchain/pull/777. I hope somebody can clean it up to the repository's standards.
I have a working prototype with GPU support: https://github.com/hwchase17/langchain/pull/410
```
from langchain.llms import Accelerate

model_name = "facebook/opt-30b"
FastLLM = Accelerate.from_model_name(model_name=model_name)
print(FastLLM("Hello World"))
```
ChatGPT could have solved that one! "No module named 'accelerate'" means you have to install accelerate.
Mr. Wolfram is thinking about something like that himself: https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/
Maybe tell GPT in the prompt which libraries are available.
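A minimal sketch of that idea, purely hypothetical (the whitelist and the prompt wording are my assumptions, not anything from LangChain): prepend the list of usable libraries to the prompt so the model only suggests code the environment can actually run.

```python
# Hypothetical library whitelist -- assumption, not a real LangChain feature.
AVAILABLE_LIBRARIES = ["transformers", "accelerate", "torch"]

def build_prompt(task: str) -> str:
    """Prepend the allowed-library list to the task description."""
    libs = ", ".join(AVAILABLE_LIBRARIES)
    return (
        f"You may only use these Python libraries: {libs}.\n"
        f"Task: {task}"
    )

print(build_prompt("load facebook/opt-30b and generate text"))
```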
@delip Thank you for taking the time to review this. I have no interest in implementing this further at the moment. Feel free to implement your suggestions yourself. It could be...
Nope, but I think Hugging Face's Transformers library can do it by itself now, and Transformers is already in LangChain :)
That seems like a spam attack vector: create a self-calling loop without an LLM and watch your server spin freely until you restart it.
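One way to defuse that kind of loop, as a hypothetical sketch (the `CallBudget` name and the budget size are my inventions): cap how many times the handler may be invoked, so a self-referential request chain fails fast instead of spinning the server forever.

```python
# Hypothetical guard against self-calling loops -- not from any real library.
class CallBudget:
    def __init__(self, max_calls: int = 10):
        self.max_calls = max_calls
        self.calls = 0

    def check(self) -> None:
        """Raise once the handler has been entered too many times."""
        self.calls += 1
        if self.calls > self.max_calls:
            raise RuntimeError("call budget exceeded: possible self-calling loop")

budget = CallBudget(max_calls=3)

def handle(request: str) -> str:
    budget.check()
    # Deliberately call ourselves to simulate the attack described above.
    return handle(request)

try:
    handle("spam")
except RuntimeError as err:
    print(err)  # the guard stops the loop after max_calls entries
```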