jlonge4
@plsnotracking Ahhh I see. Perhaps you are onto something with the Python version. I have been thinking of making this a Docker container, so maybe that would resolve it for...
@kkose @plsnotracking what do you two think about making this project use an in-memory document store? Pros: none of these issues. Cons: your document index would only persist for...
@plsnotracking Done! The latest version uses a persisted doc store
Hey there @bilgeyucel @anakin87! Is this still a thing? I just toyed around with the API and got good results. Would be happy to knock this out if you guys...
@anakin87 I think it's pretty cool. Do you think the existing LinkContentFetcher/Web Search components have too much overlap in functionality with it?
@anakin87 I agree! Would passing modes at init time to a single component make sense? Like `reader = JinaReader(mode="read")` or something to designate which endpoint to use.
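The init-time mode idea above could look something like this. A minimal sketch, not the actual Haystack component API: the class name, `run` signature, and endpoint mapping are assumptions for illustration (the Jina read/search endpoints shown are the publicly documented `r.jina.ai` / `s.jina.ai` URLs).

```python
# Hypothetical sketch of a single component that picks its Jina
# endpoint from an init-time `mode` argument, as suggested above.
class JinaReader:
    ENDPOINTS = {
        "read": "https://r.jina.ai/",    # fetch + convert a URL
        "search": "https://s.jina.ai/",  # web search endpoint
    }

    def __init__(self, mode: str = "read"):
        if mode not in self.ENDPOINTS:
            raise ValueError(f"unknown mode: {mode!r}")
        self.mode = mode
        self.endpoint = self.ENDPOINTS[mode]

    def run(self, query: str) -> str:
        # A real component would issue an HTTP request here;
        # this sketch just returns the URL it would fetch.
        return self.endpoint + query


reader = JinaReader(mode="read")
```

One upside of this design: pipelines only ever deal with one component type, and switching endpoints is a config change rather than a component swap.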
@anakin87 looks great, I'll get it cooking asap! @bilgeyucel you have a great point, it definitely is more of a converter or fetcher vs a reader.
@anakin87 you've got it, no problem 😎
@anakin87 https://github.com/deepset-ai/haystack-integrations/pull/288
@lonngxiang I think even with CPU offloading 24GB VRAM wouldn't be enough to get you there for inference without CUDA OOM. 16.5GB total - 4.5 for text encoder = 12...
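The back-of-envelope VRAM math in that comment can be written out explicitly. The figures are the comment's own estimates, not measured values:

```python
# VRAM estimate from the comment above: offloading the text encoder
# to CPU only removes its share of the total model footprint.
total_model_gb = 16.5    # estimated full model weights
text_encoder_gb = 4.5    # portion offloadable to CPU
remaining_gb = total_model_gb - text_encoder_gb  # still GPU-resident
print(remaining_gb)
```

That 12 GB is just the weights that stay on the GPU; activations and KV/working buffers during inference come on top of it, which is why the comment expects CUDA OOM even on a 24 GB card.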