M. Ali

5 comments by M. Ali

> Hey @romitjain - we're working on integrating the OpenAI Whisper algorithm into Transformers, which will provide more support for these fine-grained decoding parameters! _c.f._ #27492

Are contributions allowed here?...

I'll be working on [grounding_dino](https://github.com/huggingface/transformers/blob/main/src/transformers/models/grounding_dino/modeling_grounding_dino.py) and hopefully will have a PR up soon.

@amyeroberts Aha, thanks for letting me know. I'd like to work on [swin2sr](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin2sr/modeling_swin2sr.py) then, since I've already allocated time this week.

Have you checked the minimum required CUDA version / NVIDIA driver version for the latest ggml? You can also check the ggml and llama.cpp repos for more help with this issue: https://github.com/ggerganov/ggml...
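As a sketch of that driver check (the function name is mine; the `--query-gpu` flags are standard `nvidia-smi` options), something like this reads the installed driver version, or returns `None` on machines without an NVIDIA GPU:

```python
import shutil
import subprocess

def cuda_driver_version():
    """Return the NVIDIA driver version string via nvidia-smi, or None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA driver/tooling on this machine
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return out.stdout.strip() or None

print(cuda_driver_version())
```

You would then compare the reported version against the minimum listed in the ggml/llama.cpp build docs for your CUDA toolkit.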

Looks like a classic OOM (out-of-memory) error. I'd advise using a smaller model or a more heavily quantized one. Without more context, it's hard to pinpoint the issue.
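As a rough back-of-the-envelope illustration of why a smaller or more quantized model helps (the 7B figure and function name are illustrative, not from this issue): weight memory scales with parameter count times bits per weight, so quantizing from fp16 to 4-bit cuts it to a quarter.

```python
def model_memory_gb(n_params_billion, bits_per_weight):
    """Approximate memory for model weights alone, in GB (ignores KV cache and activations)."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(model_memory_gb(7, 16))  # fp16 7B model → 14.0 GB
print(model_memory_gb(7, 4))   # 4-bit quantized 7B model → 3.5 GB
```

If the estimate exceeds your available VRAM (plus headroom for the KV cache), an OOM is expected, and dropping to a smaller or more quantized model is the usual fix.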