Nicolas Patry

978 comments by Nicolas Patry

Small differences in numbers can be explained by hardware, torch version, etc. Nothing can be done about that. As for the difference in output, the API uses a different default from...

Hi @Rishabh-Choudhry. This is impossible to do with `whisper`. Whisper simply doesn't work that way: it outputs "timestamp" tokens roughly when it feels like it. And that's all...

There's a PR open for it: https://github.com/huggingface/transformers/pull/21427 If you look at it, it actually uncovered some issues with Whisper itself (in non-timestamp mode, the default in `transformers`, not the...
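For reference, once timestamp support is in `transformers`, chunk-level timestamps can be requested through the ASR pipeline. A minimal sketch, assuming a recent `transformers` install; `openai/whisper-tiny` and `sample.wav` are illustrative choices:

```python
from transformers import pipeline

# Assumes a transformers version with Whisper timestamp support merged.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",  # checkpoint chosen purely for illustration
)

# return_timestamps=True asks the pipeline to decode the timestamp tokens
# Whisper emits and attach start/end times to each chunk of text.
result = asr("sample.wav", return_timestamps=True)
print(result["text"])
print(result["chunks"])  # [{"timestamp": (start, end), "text": ...}, ...]
```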

This change was necessary to get some tests running, namely testing that OneFormer and the like are actually working. These models **do not** have a feature extractor, only an `ImageProcessor`....
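For context, the preprocessor for such a model is loaded through the image-processor API rather than a feature extractor. A minimal sketch; the `shi-labs/oneformer_ade20k_swin_tiny` checkpoint is only an illustrative choice:

```python
from transformers import AutoImageProcessor

# Checkpoint picked purely for illustration; other OneFormer checkpoints behave the same.
processor = AutoImageProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
print(type(processor).__name__)  # an image processor class, not a feature extractor
```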

> Also cc @Narsil - any ideas here?

Yes, by default added tokens always use `lstrip/rstrip=True`, which swallows prefix/suffix spaces (it's a convenience so you don't have to worry...
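If that stripping is not what you want, it can be controlled per token via `AddedToken`. A minimal sketch, assuming a `gpt2` tokenizer; `<ent>` is a hypothetical token:

```python
from transformers import AutoTokenizer, AddedToken

tok = AutoTokenizer.from_pretrained("gpt2")

# Explicitly keep surrounding spaces instead of letting the added token swallow them.
tok.add_tokens([AddedToken("<ent>", lstrip=False, rstrip=False)])

# The spaces around <ent> are preserved in the tokenization.
print(tok.tokenize("hello <ent> world"))
```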

Hi, why do you want that option? Sorry, but we try to limit the number of parameters available (for simplicity). This is also not available in the `transformers` pipeline...

Special tokens are meant to be non-readable. If you want to use readable tokens, couldn't you use regular added tokens? (`tokenizer.add_tokens` vs `tokenizer.add_special_tokens`, IIRC) Special tokens are special mostly...
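To illustrate the distinction, here is a hedged sketch assuming a `gpt2` tokenizer; `<entity>` and `<marker>` are hypothetical tokens:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

# Regular added token: treated as ordinary vocabulary, survives decoding.
tok.add_tokens(["<entity>"])

# Special token: can be skipped entirely when decoding.
tok.add_special_tokens({"additional_special_tokens": ["<marker>"]})

ids = tok("hello <entity> <marker> world")["input_ids"]
print(tok.decode(ids, skip_special_tokens=False))  # keeps both tokens
print(tok.decode(ids, skip_special_tokens=True))   # drops <marker>, keeps <entity>
```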

Hi @Matthieu-Tinycoaching This is linked to: huggingface/api-inference-community#26

Community images do not implement:
- private models
- GPU inference
- acceleration

So what you are seeing is quite normal and is...

Hi, you can test `transformers`-based models with all the API features, but not `sentence-transformers` models at the moment. Also, `feature-extraction` even in `transformers` does not have every optimization enabled by default....
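For reference, querying a `transformers`-based model on the Inference API for `feature-extraction` looks roughly like the sketch below; the model id is an illustrative choice and the token is a placeholder:

```python
import requests

# Illustrative model id; any transformers-based model exposing feature-extraction works.
API_URL = "https://api-inference.huggingface.co/models/bert-base-uncased"
headers = {"Authorization": "Bearer <YOUR_HF_TOKEN>"}  # placeholder token

response = requests.post(API_URL, headers=headers, json={"inputs": "Hello world"})
embeddings = response.json()  # nested lists of floats (per-token vectors)
print(type(embeddings), len(embeddings))
```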

Anything specific on the machines you are running regarding AVX/SIMD? What are the CPUs? https://github.com/BurntSushi/memchr/issues/57 might offer some insights.
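To answer the CPU question, something like the sketch below (Linux only, reading `/proc/cpuinfo`) lists which SIMD extensions the CPU advertises:

```python
# Linux-only sketch: list SIMD-related flags advertised by the CPU.
with open("/proc/cpuinfo") as f:
    flags_line = next(line for line in f if line.startswith("flags"))

flags = set(flags_line.split(":", 1)[1].split())
print(sorted(flag for flag in flags if flag.startswith(("sse", "ssse", "avx"))))
```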