V.Prasanna kumar

Results: 11 issues by V.Prasanna kumar

I was working on a Kaggle competition, and after some time the code server stopped and all the files I had created were gone. Is there any way to save the files in...
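One common workaround (assuming the standard Kaggle notebook layout, where files under `/kaggle/working` are preserved as notebook output) is to copy each generated file into that directory as soon as it is produced, rather than leaving it in a scratch location. A minimal sketch, with the output directory made configurable; `checkpoint` is a hypothetical helper, not a Kaggle API:

```python
import shutil
from pathlib import Path

def checkpoint(src: str, out_dir: str = "/kaggle/working") -> Path:
    """Copy a freshly created file into a persistent output directory
    so it survives the session. out_dir is a parameter so the helper
    can be exercised outside Kaggle as well."""
    dest = Path(out_dir) / Path(src).name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)  # copy2 preserves timestamps/metadata
    return dest
```

Calling `checkpoint("model.bin")` right after saving a model keeps a copy in the persisted directory even if the session later dies.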

I have created the Dockerfile and updated the port in the main file.

Is it possible to load AWQ-quantized models into CPU memory instead of GPU memory? Similar issue: #999

Is the `LLM` class from vLLM asynchronous by nature? I am asking because the [slides](https://docs.google.com/presentation/d/1QL-XPFXiFpDBh86DbEegFXBXFXjix4v032GhShbKf3s/edit#slide=id.g24ad94a0065_0_84) from the first meetup mention that `LLM` is synchronous rather...
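For context: the vLLM docs describe `LLM` as the synchronous, offline-inference entry point, while `AsyncLLMEngine` is the asynchronous engine used for serving. The behavioral difference can be sketched generically, without vLLM itself (the `generate_*` functions below are stand-ins, not vLLM APIs):

```python
import asyncio

def generate_sync(prompts):
    # Synchronous style (like vllm.LLM.generate): the call blocks and
    # returns only after every prompt has a completion.
    return [f"completion:{p}" for p in prompts]

async def generate_async(prompts):
    # Asynchronous style (like AsyncLLMEngine): results are yielded one
    # by one, so a server can stream each as soon as it is ready.
    for p in prompts:
        await asyncio.sleep(0)  # hand control back to the event loop
        yield f"completion:{p}"

async def collect(prompts):
    # Helper to drain the async generator into a list.
    return [r async for r in generate_async(prompts)]
```

The synchronous path is simpler for batch evaluation; the async path matters when overlapping requests or streaming tokens to clients.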

Added a `ValueError` for `DirectoryLoader` when the given path is a valid path but does not contain any files.

area: doc loader
auto:bug
size:XS
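A minimal sketch of the check this PR describes — a hypothetical `validate_directory` helper, not LangChain's actual implementation: the empty-but-valid directory case raises `ValueError`, distinct from a missing path.

```python
from pathlib import Path

def validate_directory(path: str) -> list[Path]:
    """Return the files under `path`, raising ValueError when the
    directory exists but is empty (the case this PR guards against)."""
    p = Path(path)
    if not p.is_dir():
        raise FileNotFoundError(f"Directory not found: {path}")
    files = [f for f in p.rglob("*") if f.is_file()]
    if not files:
        raise ValueError(f"No files found in directory: {path}")
    return files
```

Separating the two error types lets callers distinguish a typo in the path from a directory that simply has nothing to load.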

I have created classification code that can handle text-classification tasks with TPU enabled, running on multiple TPU pods with a distributed sampler.

Error while following the [example](https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/docs/user_guides/langchain/chain-with-guardrails/chain-with-guardrails.ipynb) in the repo. ![Screenshot from 2024-02-19 18-42-59](https://github.com/NVIDIA/NeMo-Guardrails/assets/30804112/b1f47adb-b426-4774-9f09-69a1dc95b934)

question
status: waiting confirmation

Hi team, is there any way to calculate the total number of tokens spent for models consumed via ctransformers, like how OpenAI reports it for their models...
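OpenAI returns a usage object per request; for a local model a similar running total can be kept by tokenizing prompts and completions yourself. A minimal sketch with a pluggable `tokenize` callable (whether ctransformers exposes a tokenize method on its model objects is an assumption to verify against its docs; any function mapping text to a token list works):

```python
class TokenCounter:
    """Accumulate prompt/completion token counts across calls,
    mimicking OpenAI-style usage reporting for a local model."""

    def __init__(self, tokenize):
        self.tokenize = tokenize  # callable: str -> list of tokens/ids
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, prompt: str, completion: str) -> None:
        # Count tokens on both sides of one generation call.
        self.prompt_tokens += len(self.tokenize(prompt))
        self.completion_tokens += len(self.tokenize(completion))

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens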

```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function *
    outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:533 train_step **
    y, y_pred, sample_weight,...
```

Let's say I am using a Llama 2 model to evaluate my RAG applications. Does ragas take care of formatting the prompt the way Llama 2 understands? Like wrapping the prompt in something like...

question
stale
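For reference, Llama 2 chat models expect prompts wrapped in the `[INST]`/`<<SYS>>` template published by Meta; whether ragas applies it automatically is the open question above, but the wrapping itself can be sketched as:

```python
def llama2_prompt(user_msg: str,
                  system_msg: str = "You are a helpful assistant.") -> str:
    """Wrap a message in the Llama 2 chat template
    ([INST] / <<SYS>> markers, single-turn form)."""
    return (
        f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )
```

If the evaluation framework sends raw, unwrapped prompts, applying a template like this yourself before the model call is a common fallback.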