Arthur


Jumping in here: the error `RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'` just means that `Half` (fp16) is only supported on GPU and should not be used on CPU 😉
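Not from the original issue, just a hedged sketch of the failure and the usual workarounds (the layer/tensor names are made up):

```python
import torch

layer = torch.nn.Linear(4, 4).half()          # fp16 weights
x = torch.randn(2, 4, dtype=torch.float16)

# On CPU (at least with older torch builds) the matmul raises:
#   RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
# layer(x)

# Workaround 1: stay in fp32 on CPU
out_cpu = layer.float()(x.float())

# Workaround 2: only cast to half once the module and tensors live on GPU
if torch.cuda.is_available():
    out_gpu = layer.cuda().half()(x.cuda().half())
```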

Hey, can you provide a reproducing script?

Okay, this does not really need a reproduction script, and I agree with you: the expected behaviour is that if you pre-compute the `inputs_embeds`, the output of the model should...
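For context, a minimal sketch of the check I have in mind; gpt2 is just a stand-in checkpoint and the tolerance is arbitrary:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello world", return_tensors="pt")
# Pre-compute the embeddings from the same input ids
embeds = model.get_input_embeddings()(inputs["input_ids"])

# Forwarding the pre-computed embeddings should match forwarding the ids
logits_from_ids = model(input_ids=inputs["input_ids"]).logits
logits_from_embeds = model(inputs_embeds=embeds).logits
print(torch.allclose(logits_from_ids, logits_from_embeds, atol=1e-5))
```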

Before diving a bit deeper, I don't really understand why you are using `convert_ids_to_tokens` instead of just using the `tokenizer.batch_decode` method? Did you try it?
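To illustrate the difference (gpt2 is just an example tokenizer here):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
ids = tokenizer("Hello there!")["input_ids"]

# convert_ids_to_tokens returns raw sub-word tokens, BPE artifacts included
print(tokenizer.convert_ids_to_tokens(ids))                     # ['Hello', 'Ġthere', '!']

# batch_decode (or decode for a single sequence) gives readable text back
print(tokenizer.batch_decode([ids], skip_special_tokens=True))  # ['Hello there!']
```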

There does indeed seem to be a bug! When I use the `generate()` function, I get the correct output:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
...
```
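The snippet above is truncated; as a rough sketch of that kind of check (gpt2 as a stand-in checkpoint, prompt made up):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```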

Added the [WIP] label to prevent the bot from coming back 😉

Hey!
1. With regards to testing, you should indeed add a tester. While the `FeatureExtractor`s were replaced with `ImageProcessor`s, I think we are still using the `test_feature_extraction_...`
2. I think...

I think you are just using the wrong checkpoint. Using `"facebook/mbart-large-50-many-to-many-mmt"` I obtain the following:
```
યુનાઇટેડ સ્ટેટ્સ ઓફ અમેરિકાના પ્રાંતિકારી کہتے हैं कि सीरिया में कोई सैन्य समाधान...
```

> The pretrained checkpoint should also be able to give output in the target language if we force the BOS token to the target language

I think this depends on...
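For reference, a sketch of what forcing the BOS token to the target language looks like with that checkpoint (the input sentence and the `hi_IN` target are placeholders):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

tokenizer.src_lang = "en_XX"
inputs = tokenizer("There is no military solution in Syria.", return_tensors="pt")

# Force the first generated token to be the target language code
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```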