Jim Zieleman
Okay, made the changes and fixed the README too. I think this is getting close to ready to serve as the basis of localgptV2.
Wow, yeah, this is awesome. I think this is all really darn good! Since ingest is also separate from the localgpt class, most of these changes won't be too big...
I agree that we should go with the route @teleprint-me has designed.
If you run the `nvidia-smi` command, what is your VRAM usage?
You can try upgrading to a better sentence-transformer model, increasing the chunk size for your documents, and increasing the number of source documents to recall.
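For what it's worth, the two retrieval knobs mentioned above (chunk size and number of recalled source documents) can be sketched like this. This is a minimal stdlib illustration with hypothetical names, not localGPT's actual configuration, and the word-overlap scoring stands in for real embedding similarity:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping character chunks.

    Raising chunk_size gives the model more context per retrieved chunk.
    """
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]


def recall(query: str, chunks: list[str], top_k: int = 4) -> list[str]:
    """Return the top_k chunks by naive word overlap with the query.

    Raising top_k recalls more source documents per question.
    A real pipeline would rank by embedding similarity instead.
    """
    query_words = set(query.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(query_words & set(c.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]
```

So tuning here means passing a larger `chunk_size` at ingest time and a larger `top_k` at query time.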
You will need to create a queue in the API to manage incoming requests, so you wait for a request to complete before serving the next one. If...
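One way to sketch that queue is a single worker thread draining a FIFO, so only one generation runs at a time while the API layer waits on a `Future`. This is a hypothetical illustration, not localGPT's actual code; `handle_request` stands in for whatever does the model call:

```python
import queue
import threading
from concurrent.futures import Future

# FIFO of (prompt, future) pairs; the API enqueues, the worker drains.
request_queue: "queue.Queue[tuple[str, Future]]" = queue.Queue()


def worker(handle_request) -> None:
    """Serve queued requests one at a time, in arrival order."""
    while True:
        prompt, fut = request_queue.get()
        if prompt is None:  # sentinel value used to shut the worker down
            fut.set_result(None)
            break
        fut.set_result(handle_request(prompt))
        request_queue.task_done()


def submit(prompt: str) -> Future:
    """Called by the API handler: enqueue a request and return a Future.

    The caller blocks on fut.result(), so later requests simply wait
    in the queue until the worker reaches them.
    """
    fut: Future = Future()
    request_queue.put((prompt, fut))
    return fut
```

The API endpoint would call `submit(prompt).result()` instead of invoking the model directly, which serializes concurrent clients through the queue.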
Is there a way to do S3-to-S3 sync?
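Yes, the AWS CLI's `aws s3 sync` accepts S3 URIs on both sides, so it can copy directly between buckets (bucket names below are placeholders):

```shell
# Sync one bucket (or prefix) to another; only changed objects are copied.
aws s3 sync s3://source-bucket/prefix s3://dest-bucket/prefix
```

Add `--dryrun` first to preview what would be transferred.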