Surya Kant Sahu

Results: 9 issues by Surya Kant Sahu

Hello, I am trying out RWKV with an audio modality, and when I set T_MAX >> 1000 it throws this error: ``` Emitting ninja build file /root/.cache/torch_extensions/py39_cu116/timex/build.ninja... Building extension module timex... Allowing ninja...
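For context, the build log suggests the `timex` extension is compiled at import time with `torch.utils.cpp_extension.load`, with `T_MAX` baked in as a compile-time define. A minimal sketch of that pattern follows; the source file names and the `-DTmax` flag are assumptions for illustration, not necessarily the repo's exact values:

```python
from torch.utils.cpp_extension import load

# T_MAX is fixed at compile time; kernels like this typically size shared-memory
# buffers from it, so very large values can exceed per-block GPU limits and make
# the build or kernel launch fail.
T_MAX = 1024  # the failing case in the issue uses T_MAX >> 1000

# Source paths and flag names below are illustrative assumptions.
timex = load(
    name="timex",
    sources=["cuda/timex_op.cpp", "cuda/timex_cuda.cu"],
    verbose=True,
    extra_cuda_cflags=["--use_fast_math", f"-DTmax={T_MAX}"],
)
```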

Hi, great work, and thanks for the code! I was wondering if the following is possible. I have a system of ODEs (two ODEs): `x_state = f(x, t, theta)` `0...
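For reference, a minimal sketch of how a two-equation system `x' = f(x, t, theta)` is usually written for a generic solver; `scipy.integrate.solve_ivp` is used here as a stand-in, and `f` and `theta` are placeholders, not the system from the issue:

```python
import numpy as np
from scipy.integrate import solve_ivp

theta = (0.5, 1.2)  # placeholder parameters

def f(t, x, theta):
    # Placeholder right-hand side for a two-state system.
    a, b = theta
    dx0 = -a * x[0] + x[1]
    dx1 = -b * x[1] + np.sin(t)
    return [dx0, dx1]

sol = solve_ivp(f, t_span=(0.0, 10.0), y0=[1.0, 0.0],
                args=(theta,), t_eval=np.linspace(0.0, 10.0, 101))
print(sol.y.shape)  # (2, 101): trajectories of both states
```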

The total Karma computed by the script exceeds the Karma shown in the user's profile. The Karma algorithm has probably changed since this script was written. Please update if possible.
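If it helps to narrow this down, here is a hedged sketch of the two quantities being compared, assuming the script targets Reddit karma via PRAW; the library choice, credentials, and username are assumptions, not taken from the script:

```python
import praw

# Assumption: "Karma" here means Reddit karma. Profile karma is not a plain sum of
# item scores (Reddit weights and rounds it), so the two numbers can legitimately diverge.
reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="karma-check")  # placeholder credentials
user = reddit.redditor("some_username")  # placeholder

profile_karma = user.link_karma + user.comment_karma  # what the profile reports

summed_karma = (
    sum(s.score for s in user.submissions.new(limit=None))
    + sum(c.score for c in user.comments.new(limit=None))
)  # naive per-item sum; the API also caps listings at roughly the 1000 most recent items

print(profile_karma, summed_karma)  # these are expected to differ
```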

Paper: https://arxiv.org/pdf/2111.09543.pdf The authors compare XLMR with [mDeBERTa-v3](https://huggingface.co/microsoft/mdeberta-v3-base) and show that mDeBERTa-v3 is significantly better than XLMR (the previous SOTA) on the XNLI dataset. Changing from XLMR to mDeBERTa-v3 should be trivially...

feature
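As the mDeBERTa-v3 issue above suggests, the swap is mostly a checkpoint-name change in huggingface transformers. A minimal sketch, with the classification head and label count as placeholders for an XNLI-style setup:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The Auto classes resolve the architecture (XLM-R vs. DeBERTa-v3) from the checkpoint
# config, so switching backbones is mostly a name change.
model_name = "microsoft/mdeberta-v3-base"  # previously e.g. "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)  # 3 NLI labels

inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.",
                   return_tensors="pt")
logits = model(**inputs).logits  # head is randomly initialised until fine-tuned on XNLI
```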

For getting structured outputs from custom-finetuned LLMs, extensive use of [constrained decoding](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.DisjunctiveConstraint) is standard. Is there a plan to add support for DisjunctiveConstraint (and others) to vLLM in the near...

good first issue
feature request
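For reference, this is roughly what the requested behaviour looks like with DisjunctiveConstraint in huggingface transformers today (the model and candidate phrases are placeholders); the issue asks for an equivalent in vLLM:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, DisjunctiveConstraint

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

phrases = [" positive", " negative", " neutral"]  # leading space matters for GPT-2 BPE
nested_ids = [tokenizer(p, add_special_tokens=False).input_ids for p in phrases]
constraint = DisjunctiveConstraint(nested_ids)  # output must contain at least one of the phrases

inputs = tokenizer("Overall the movie was", return_tensors="pt")
out = model.generate(**inputs, constraints=[constraint],
                     num_beams=4, max_new_tokens=10)  # constrained decoding requires beam search
print(tokenizer.decode(out[0], skip_special_tokens=True))
```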

Hello, I want to use the N-best hypotheses for each audio file in VoxCeleb. Is this supported out of the box, or are code changes required?
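Whether the repo exposes this directly is the question; as an illustration of the general mechanism only, beam search can return several hypotheses per utterance, e.g. with a Whisper checkpoint in huggingface transformers (the model choice and the dummy audio are assumptions, not the repo's API):

```python
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

audio = np.zeros(16000, dtype=np.float32)  # placeholder for one VoxCeleb utterance at 16 kHz
features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features

n_best = 5
ids = model.generate(features, num_beams=n_best, num_return_sequences=n_best)
hypotheses = processor.batch_decode(ids, skip_special_tokens=True)  # N-best list for this file
```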

The error: ```UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf0 in position 0: invalid continuation byte``` The stack trace: ``` [00:05:34] Pre-processing sequences 93583 / 93583 [00:00:04] Tokenize words ...

Is it possible to use pretrained weights for predicting codes in a chunk-wise fashion (streaming input audio)?
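Whether pretrained weights behave well on short chunks depends on how they were trained; as a sketch of the streaming loop itself, where the `codec.encode` call is a hypothetical stand-in for the actual API:

```python
import torch

def encode_streaming(codec, waveform: torch.Tensor, chunk_size: int, overlap: int = 0):
    """Encode a long waveform chunk by chunk. `codec.encode` is hypothetical;
    models trained on full utterances may degrade at chunk boundaries."""
    codes = []
    step = chunk_size - overlap
    for start in range(0, waveform.shape[-1], step):
        chunk = waveform[..., start:start + chunk_size]
        with torch.no_grad():
            codes.append(codec.encode(chunk))  # hypothetical call
    return codes
```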

Inspired by [this paper](https://arxiv.org/abs/2405.14862), we're exploring ways to bootstrap a bidirectional-context LLM from a decoder-only causal LLM (e.g. llama-3). This is very easy to do in huggingface transformers by passing...

feature request
stale
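One hedged sketch of what "passing" might refer to: recent transformers releases accept a custom 4D additive attention mask, and an all-zeros mask removes the causal restriction so every token can attend to every other. The mask convention differs across transformers versions and the checkpoint name is a placeholder, so treat this as an assumption to verify, not the paper's or the library's documented recipe:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "meta-llama/Meta-Llama-3-8B"  # placeholder decoder-only checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torch_dtype=torch.bfloat16)

inputs = tokenizer("bidirectional context from a causal LM", return_tensors="pt")
seq_len = inputs.input_ids.shape[1]

# 4D additive mask of shape (batch, 1, query_len, key_len); all zeros = nothing is masked,
# so attention becomes bidirectional instead of causal. Assumes this version's 4D-mask convention.
full_mask = torch.zeros(1, 1, seq_len, seq_len, dtype=model.dtype)

hidden = model(input_ids=inputs.input_ids, attention_mask=full_mask).last_hidden_state
```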