Onkar Pandit
Hi Qi, thanks for the reply. That's true, I can run your system, but there are some memory constraints on my machine, and the size of the complete YAGO is 168 G!!...
Thanks for the help!
Hi, were you able to do it? I have a similar requirement. Thanks, Onkar
> Only thing I've seen so far is this:
>
> ```python
> llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
> ```
>
> Found [here](https://python.langchain.com/en/latest/modules/models/llms/integrations/self_hosted_examples.html)
>
> It looks...
Hi, I am trying to do a similar thing but still getting an error.

```python
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh

def get_pipeline(model_id =...
```
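For reference, here is a minimal sketch of how the self-hosted setup might be wired together, based on the snippet quoted above and the linked LangChain page. The Runhouse cluster name, instance type, and prompt are assumptions for illustration, not the exact code from the original comment.

```python
# Minimal sketch, assuming a Runhouse GPU cluster and the LangChain
# SelfHostedHuggingFaceLLM wrapper shown in the quoted example above.
# The cluster name/instance type below are illustrative assumptions.
import runhouse as rh
from langchain.llms import SelfHostedHuggingFaceLLM

# Provision (or attach to) a GPU box via Runhouse.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# Let the wrapper load the model on the remote hardware by model id,
# mirroring the quoted docs snippet.
llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2",
    hardware=gpu,
    model_reqs=["pip:./", "transformers", "torch"],
)

print(llm("What is the capital of France?"))
```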
Hello, I had the same question, and it took me some time to work it out. It is the coreference clusters present in the actual gold files. The key "clusters" in the jsonlines...
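As a rough illustration, here is a minimal sketch of what one line of such a jsonlines gold file typically looks like in coreference preprocessing pipelines. Field names other than "clusters" (e.g. "doc_key", "sentences") and the example tokens are assumptions based on common coref formats, not taken from the original comment.

```python
# Sketch of one jsonlines record from a coreference gold file (assumed format).
import json

line = json.dumps({
    "doc_key": "bn/abc/00/abc_0001",          # assumed document identifier
    "sentences": [["John", "said", "he", "would", "come", "."]],
    # Each cluster is a list of [start, end] token spans (inclusive,
    # over the flattened document tokens) that refer to the same entity;
    # here "John" and "he" corefer.
    "clusters": [[[0, 0], [2, 2]]],
})

record = json.loads(line)
tokens = [tok for sent in record["sentences"] for tok in sent]
for cluster in record["clusters"]:
    mentions = [" ".join(tokens[s:e + 1]) for s, e in cluster]
    print(mentions)  # ['John', 'he']
```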