Janus
The Gradio UI loads but the model does not output anything (SOLVED: VRAM memory issue)
```
Microsoft Windows [Version 10.0.19045.5371]
(c) Microsoft Corporation. All rights reserved.

D:\Janus\Janus>myenv\Scripts\activate

(myenv) D:\Janus\Janus>python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
D:\Janus\Janus\myenv\Lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.06s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: ignore_id, add_special_token, image_tag, num_image_tokens, mask_prompt, sft_format.
INFO:     Could not find files for the given pattern(s).
* Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
```
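Since the title points at VRAM as the culprit, a quick back-of-envelope check helps confirm whether the model weights even fit on the GPU. This is a rough sketch, not Janus code: the 7e9 parameter count is an approximation for a 7B-class model, and the estimate covers weights only (activations, the KV cache, and CUDA context overhead add several more GiB on top).

```python
# Rough VRAM needed just to hold model weights at different precisions.
# Does NOT include activations, KV cache, or CUDA context overhead.
def weight_vram_gib(num_params: float, bytes_per_param: int) -> float:
    """Approximate GiB required to store the weights alone."""
    return num_params * bytes_per_param / 1024**3

params_7b = 7e9  # approximate parameter count of a 7B-class model

for dtype, nbytes in [("float32", 4), ("float16/bfloat16", 2), ("int8", 1)]:
    print(f"{dtype}: ~{weight_vram_gib(params_7b, nbytes):.1f} GiB")
# float32 needs ~26 GiB, float16 ~13 GiB, int8 ~6.5 GiB
```

If the card has less VRAM than the float32 figure, loading the model in half precision (or letting it spill to CPU) is usually the difference between "Gradio loads but nothing happens" and actual output.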