PRADEEP T
@ilham-bintang For me also, the given code snippet seems to hang: `from bert_serving.client import BertClient; bc = BertClient(ip='127.0.0.1')`. The versions I am using are bert-serving-client==1.10.0 and bert-serving-server==1.10.0. And running the...
 @ilham-bintang Here is the output of the running bert server console.
 @ilham-bintang Hope this is what you are looking for.
@ilham-bintang It worked for me. I made the change like this: `from bert_serving.client import BertClient; bc = BertClient(ip='127.0.0.1', port=5555)`. Initially the port number I had given was incorrect; that's why...
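For reference, a minimal sketch of the working client setup described above. The server start command shown in the comments is an assumption about how the server was launched, and the model directory is a placeholder:
```python
from bert_serving.client import BertClient

# The client's port must match the -port flag the server was started with, e.g.:
#   bert-serving-start -model_dir /path/to/bert_model -num_worker=1 -port 5555
# (the model path is a placeholder). With a mismatched port the client blocks
# indefinitely waiting for a reply, which looks like a hang.
bc = BertClient(ip='127.0.0.1', port=5555)
vec = bc.encode(['hello world'])
print(vec.shape)  # (1, 768) for a BERT-base model
```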
So multi-hop questions are not possible with load_qa_chain, RetrievalQA, VectorstoreIndexCreator, and ConversationalRetrievalChain? Then how does the proposed pipeline (VectorStoreInfo, VectorStoreToolkit, and vectorstore_agent) make it happen internally? Please explain the internal flow of...
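For context, a minimal sketch of the pipeline being asked about, assuming the legacy LangChain agent-toolkit API, an already-built vector store named `vectorstore`, and an OpenAI key in the environment; the tool name and question are placeholders:
```python
from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreToolkit,
    create_vectorstore_agent,
)
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Wrap an existing vector store so the agent can use it as a tool
vectorstore_info = VectorStoreInfo(
    name="docs",
    description="project documentation",
    vectorstore=vectorstore,  # assumed built already (e.g. Chroma/FAISS)
)
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info, llm=llm)

# Unlike the single-shot chains, the agent loop can call the retrieval
# tool several times and reason over intermediate results, which is what
# makes multi-hop questions feasible
agent = create_vectorstore_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("a multi-hop question over the documents")
```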
@sankar-forrester No, it doesn't.
@zhaoxin-jia-tfs @devstein Any update on this?
@frankh077 Were you able to run the docker command ```docker run --gpus device=1 --shm-size=2g --net=host --ulimit memlock=-1 --rm -it -v ${PWD}:/workspace -w /workspace -v ${PWD}/results:/results nvcr.io/ea-bignlp/ga-participants/nemofw-training:23.08.03 bash``` specified in the...
@frankh077 Is this container setup mandatory? Can we do the fine-tuning directly in the Python console without using the container?
The ```/data/onnx_model``` directory contains the ONNX model file and tokenizer files. I then tried this code: ``` from deepsparse import Pipeline # download onnx from sparsezoo and compile with batch size...
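For comparison, a minimal sketch of loading that local ONNX directory with a DeepSparse `Pipeline`. The task name is an assumption; the original snippet's comment mentions downloading from SparseZoo, but `model_path` can also point at a local directory:
```python
from deepsparse import Pipeline

# model_path points at the local directory holding model.onnx plus the
# tokenizer files; the task is assumed to be text classification here
pipeline = Pipeline.create(
    task="text-classification",
    model_path="/data/onnx_model",
    batch_size=1,
)

print(pipeline(sequences=["sample input text"]))
```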