
Running the Janus-Pro 1B model on macOS, image-to-text works but text-to-image throws an error. Please help me resolve it.

[Open] packyzhou opened this issue 11 months ago • 3 comments

(janus_pro) zhoujingwei@MrJoe-Macbook-pro2022 Janus % python demo/app_januspro.py
/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/transformers/models/auto/image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use slow_image_processor_class, or fast_image_processor_class instead
  warnings.warn(
Using a slow image processor as use_fast is unset and a slow processor was saved with this model. use_fast=True will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with use_fast=False.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: sft_format, ignore_id, image_tag, add_special_token, mask_prompt, num_image_tokens.
Running on local URL: http://127.0.0.1:7860

Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.

Traceback (most recent call last):
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/gradio/queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/gradio/route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/gradio/blocks.py", line 1935, in process_api
    result = await self.call_function(
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/gradio/blocks.py", line 1520, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/gradio/utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/demo/app_januspro.py", line 160, in generate_image
    output, patches = generate(input_ids,
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/demo/app_januspro.py", line 118, in generate
    patches = vl_gpt.gen_vision_model.decode_code(generated_tokens.to(dtype=torch.int),
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/janus/models/vq_model.py", line 507, in decode_code
    dec = self.decode(quant_b)
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/janus/models/vq_model.py", line 502, in decode
    dec = self.decoder(quant)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/janus/models/vq_model.py", line 208, in forward
    h = block.upsample(h)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/janus/models/vq_model.py", line 426, in forward
    x = self.conv(x)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 554, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
    return F.conv2d(
RuntimeError: Input type (c10::BFloat16) and bias type (c10::Half) should be the same
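For context, the final RuntimeError is a plain dtype mismatch inside conv2d: the activations reaching the VQ decoder are bfloat16 while the decoder's convolution weights and bias are float16. A minimal standalone illustration (not Janus code, just a sketch of the failure mode and of keeping everything in one dtype):

import torch
import torch.nn as nn

# A float16 conv layer fed a bfloat16 input: the dtypes of input and
# weight/bias disagree, so PyTorch raises a RuntimeError comparable to
# the one in the traceback above.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1).to(torch.float16)
x = torch.randn(1, 3, 16, 16, dtype=torch.bfloat16)

try:
    conv(x)
except RuntimeError as e:
    print("dtype mismatch:", e)

# Keeping model and input in a single dtype avoids the error; float32 is
# the safest choice for CPU-only runs on macOS.
out = conv.float()(x.float())
print(out.dtype)  # torch.float32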

packyzhou · Feb 17 '25 07:02

You could try changing the Python dependency versions:

torch==2.3.1
transformers==4.43.1
tokenizers==0.19.1
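
For example, inside the janus_pro conda env (assuming these packages are managed with pip), that would be roughly:

pip install torch==2.3.1 transformers==4.43.1 tokenizers==0.19.1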

hail100 · Feb 20 '25 03:02

> You could try changing the Python dependency versions:
>
> torch==2.3.1
> transformers==4.43.1
> tokenizers==0.19.1

That made it worse; now the demo won't even start:

Traceback (most recent call last):
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/demo/app_januspro.py", line 4, in <module>
    from janus.models import MultiModalityCausalLM, VLChatProcessor
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/janus/models/__init__.py", line 20, in <module>
    from .image_processing_vlm import VLMImageProcessor
  File "/Users/zhoujingwei/Desktop/worke/project/Janus/janus/models/image_processing_vlm.py", line 24, in <module>
    import torchvision
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torchvision/__init__.py", line 10, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils  # usort:skip
  File "/Users/zhoujingwei/anaconda3/envs/janus_pro/lib/python3.9/site-packages/torchvision/_meta_registrations.py", line 163, in <module>
    @torch.library.register_fake("torchvision::nms")
AttributeError: module 'torch.library' has no attribute 'register_fake'
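
For context, torch.library.register_fake only exists in PyTorch 2.4 and later, so this error most likely means the torchvision build left in the environment is newer than the downgraded torch 2.3.1. If going the downgrade route, torchvision probably needs a matching pin as well, for example (pairing assumed from the usual torch/torchvision release matching, please verify):

torchvision==0.18.1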

packyzhou · Feb 20 '25 08:02

Try commenting out lines 24 and 25 in app_januspro.py:

if torch.cuda.is_available():
    vl_gpt = vl_gpt.to(torch.bfloat16).cuda()
# else:
#     vl_gpt = vl_gpt.to(torch.float16)

My guess is that the Mac doesn't support this cast, or that it simply isn't needed there.
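
If it helps to confirm what the edit changes, here is a small dtype check. This is only a sketch: it assumes the vl_gpt variable from app_januspro.py and the gen_vision_model / language_model attributes that appear in the traceback above.

# Print the parameter dtypes of the two sub-models after loading. A mixed
# setup (e.g. bfloat16 in one place, float16 in another) is the kind of
# inconsistency that produces the conv2d dtype error in the original
# traceback; after the edit the dtypes should come out consistent.
for name, module in [("language_model", vl_gpt.language_model),
                     ("gen_vision_model", vl_gpt.gen_vision_model)]:
    dtypes = {p.dtype for p in module.parameters()}
    print(name, "parameter dtypes:", dtypes)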

fnzi · Mar 08 '25 13:03