Chris Mattmann
@MBoustani ^^ please see @allenpope question above
handled in #18
@sujen1412 please review
man, I have an 8GB NVIDIA RTX 2080 and can't run it locally, even the 7B model. Options are basically to try to get a cloud VM, I suppose. Sigh.
What's crazy is that I just went through all the hoops to get a working Python version with torch (I had 3.6 before and had to upgrade to 3.9 to...
update, I was able to run the CPU version on my Mac but it took 35 seconds to load the model and 30+ minutes to run the example, but at...
> @chrismattmann Why not try out the new 4bit fork? https://github.com/qwopqwop200/GPTQ-for-LLaMa Thank you, I will give this a shot. I have 32GB on my Kubuntu Focus Gen 2, with NVIDIA...
> > It's unclear to me the exact steps from reading the README > > I was able to get the 4bit version kind of working on 8G 2060 SUPER...
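The thread's numbers line up with a back-of-the-envelope weight-memory estimate: parameters times bytes per parameter. A minimal sketch (the `weight_memory_gb` helper is hypothetical, not from the thread) showing why a 7B model in fp16 won't fit on an 8GB card but the 4-bit GPTQ version can:

```python
# Rough estimate of GPU memory needed just for model weights
# (ignores activations, KV cache, and framework overhead).
def weight_memory_gb(n_params_billion: float, bits_per_param: int) -> float:
    """Weights-only memory in GB: params * bits / 8 bits-per-byte."""
    bytes_total = n_params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# 7B model in fp16 (16 bits/param): ~14 GB -- too big for an 8 GB GPU.
print(round(weight_memory_gb(7, 16), 1))  # → 14.0

# Same model quantized to 4 bits: ~3.5 GB -- plausible on an 8G 2060 SUPER.
print(round(weight_memory_gb(7, 4), 1))   # → 3.5
```

This is only the weights; real runs need extra headroom for activations and the KV cache, which is why the 4-bit version was described as only "kind of working" on 8GB.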
Great Mazi
Use Tika-Python