PRADEEP T

Results 20 comments of PRADEEP T

@ilham-bintang For me also, the given code snippet seems to hang:

```
from bert_serving.client import BertClient
bc = BertClient(ip='127.0.0.1')
```

The versions I am using are bert-serving-client==1.10.0 and bert-serving-server==1.10.0. And running the...

![Screenshot from 2021-06-18 11-35-31](https://user-images.githubusercontent.com/41164884/122514288-57c99f80-d029-11eb-8b4b-ada1fba92e3c.png) @ilham-bintang Here is the output of the running BERT server console.

![Screenshot from 2021-06-18 14-53-33](https://user-images.githubusercontent.com/41164884/122540058-e1876600-d045-11eb-9a83-a7e07f1a6008.png) @ilham-bintang Hope this is what you are looking for.

@ilham-bintang It worked for me. I made the changes like this:

```
from bert_serving.client import BertClient
bc = BertClient(ip='127.0.0.1', port=5555)
```

Initially the port number I had given was incorrect, that's why...
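For anyone hitting the same hang, a minimal end-to-end sketch (the port values here are illustrative and must match the `-port`/`-port_out` options the server was started with; 5555/5556 are the bert-serving-server defaults):

```
from bert_serving.client import BertClient

# Ports must match the -port / -port_out values passed to
# bert-serving-start; a mismatch makes the client block forever.
bc = BertClient(ip='127.0.0.1', port=5555, port_out=5556)

# Encode a batch of sentences into fixed-size vectors.
vectors = bc.encode(['First example sentence.', 'Second example sentence.'])
print(vectors.shape)  # e.g. (2, 768) for a BERT-base model
```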

So multi-hop questions are not possible in load_qa_chain, RetrievalQA, VectorstoreIndexCreator, and ConversationalRetrievalChain? Then how does the proposed pipeline (VectorStoreInfo, VectorStoreToolkit, and vectorstore_agent) make it happen internally? Please explain the internal flow of...
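For context, a hedged sketch of the pipeline being asked about, assuming the LangChain agent-toolkit API of that era (the FAISS store, the documents, and the OpenAI LLM are illustrative, not from the original thread):

```
from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreToolkit,
    create_vectorstore_agent,
)
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Illustrative vector store built from a couple of documents.
store = FAISS.from_texts(
    ["Doc A is about topic X.", "Doc B is about topic Y."],
    OpenAIEmbeddings(),
)

# Describe the store so the agent knows when to query it.
info = VectorStoreInfo(
    name="example_docs",
    description="Small example document collection",
    vectorstore=store,
)
toolkit = VectorStoreToolkit(vectorstore_info=info)

# Unlike a single-shot retrieval chain, the agent can issue several
# retrieval calls per question, which is what enables multi-hop answers.
agent = create_vectorstore_agent(llm=OpenAI(temperature=0), toolkit=toolkit)
agent.run("What topic does Doc A cover, and how does it relate to Doc B?")
```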

@frankh077 Were you able to run the docker command

```
docker run --gpus device=1 --shm-size=2g --net=host --ulimit memlock=-1 --rm -it -v ${PWD}:/workspace -w /workspace -v ${PWD}/results:/results nvcr.io/ea-bignlp/ga-participants/nemofw-training:23.08.03 bash
```

specified in the...

@frankh077 Is this container setup mandatory? Can we do the fine-tuning directly in the Python console without using the container?

The `/data/onnx_model` directory contains the ONNX model file and tokenizer files. Then I tried this code:

```
from deepsparse import Pipeline
# download onnx from sparsezoo and compile with batch size...
```
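For completeness, a hedged sketch of how a local ONNX directory like this is typically loaded with DeepSparse (the task name is an assumption; use whatever task the model was trained for, and note that `model_path` also accepts a SparseZoo stub in place of a local directory):

```
from deepsparse import Pipeline

# Point model_path at the local directory containing model.onnx
# plus the tokenizer/config files, instead of a SparseZoo stub.
pipeline = Pipeline.create(
    task="text-classification",  # assumed task for this model
    model_path="/data/onnx_model",
    batch_size=1,
)

# Run inference on a batch of inputs.
prediction = pipeline(["An example input sentence."])
print(prediction)
```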