Janus
It is very slow: `use_fast` is set to `True`, but the image processor class does not have a fast version
I manually set `use_fast=True`, but it is still using the slow version.
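For reference, this is roughly how I'm loading the image processor; the checkpoint id below is only a placeholder for the actual Janus model I'm using:

```python
from transformers import AutoImageProcessor

# Explicitly request the fast image processor.
# The checkpoint id is a placeholder for the actual Janus repo/local path.
image_processor = AutoImageProcessor.from_pretrained(
    "deepseek-ai/Janus-Pro-7B",
    use_fast=True,
)

# Still reports the slow image processor class and emits the warning below.
print(type(image_processor))
```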
G:\xxxxxxxxxxxx\venv\lib\site-packages\transformers\models\auto\image_processing_auto.py:593: FutureWarning: The `image_processor_class` argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
warnings.warn(
loading preprocessor_config.json
`use_fast` is set to `True` but the image processor class does not have a fast version. Falling back to the slow version.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
I don't understand why it is falling back to the slow image processor. Any help is greatly appreciated.
Thanks