Denis Lapchev
Hi Mike, > In order for that to work it would be necessary to run risk calculations at the same frequency as the sim clock (as you suggested) since the...
Hi, sorry, I am not aware of a formula that allows calculating it easily. I would start by monitoring GPU usage (on Linux, you can use the nvidia-smi tool) and optimise...
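As a minimal sketch of the kind of monitoring meant here (not part of the package), one could poll nvidia-smi from Python; the query fields below are standard nvidia-smi options, and the interval/sample counts are arbitrary:

```python
import subprocess
import time

def sample_gpu_usage(interval_s: float = 1.0, samples: int = 10) -> None:
    """Print GPU utilisation and memory use every `interval_s` seconds."""
    for _ in range(samples):
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu,memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        # e.g. "87, 10240, 24576" -> utilisation %, used MiB, total MiB
        print(out)
        time.sleep(interval_s)

if __name__ == "__main__":
    sample_gpu_usage()
```

If utilisation stays low while memory is nearly full, that usually points at batch size or context length as the thing to tune first.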
Hi @Hisma, thanks for the suggestion. Unstructured.io is already installed as a requirement and supported in the package as an alternative back-end in case native parsing isn't available...
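For illustration only (this is not the package's actual integration code), a fallback to Unstructured for PDFs could look roughly like the sketch below; the file path is hypothetical:

```python
# Minimal sketch: parse a PDF with Unstructured when native parsing isn't available.
from unstructured.partition.pdf import partition_pdf

elements = partition_pdf(filename="sample_docs/report.pdf")  # hypothetical path
text = "\n\n".join(el.text for el in elements if el.text)
print(text[:500])
```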
Sorry for the delay. I watched the course and tried some of the approaches mentioned there. Advanced, model-based methods for PDF parsing are definitely an improvement, especially for documents with...
Great! Happy to look into other approaches; agreed, let's leave it open. Quality parsing of complex PDFs remains a holy grail of RAG.
Closing this for now, as the integrations with gmft and Azure Document Intelligence address this need.
Hi @EmmaWebGH Thanks for your interest in the project. There are no videos, but there is a CLI-based demo using Google Colab. It is also linked in the README - https://githubtocolab.com/snexus/llm-search/blob/main/notebooks/llmsearch_google_colab_demo.ipynb You...
Hi @amscosta When you open the notebook, you can click on the left pane, then right-click -> Create folder, as shown in the screenshot below. Name the folder...
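As an alternative to clicking through the UI, the folder can also be created from a Colab code cell; the folder name below is assumed from the demo notebook and may differ in your setup:

```python
import os

# Create the documents folder in the Colab runtime if it doesn't exist yet.
os.makedirs("sample_docs", exist_ok=True)
```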
Hi @amscosta > sample_data folder is equivalent to sample_docs folder (you mentioned in the colab)? `sample_data` is the default folder that Google Colab created for you. The package expects...
Hi, in the offline version I use it with a 500 MB to 1 GB knowledge base (combined PDF and Markdown files). I don't think it will scale well beyond a few...