
Copying the Space https://huggingface.co/spaces/TencentARC/PhotoMaker is not working.

atonamy opened this issue on Jan 20, 2024 · 4 comments

When I copy the Space https://huggingface.co/spaces/TencentARC/PhotoMaker, it doesn't work. I use Nvidia T4 medium hardware for my Space. It just throws this error:

```
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Start inference...
[Debug] Prompt: cinematic photo [img] warrior . 35mm photograph, film, bokeh, professional, 4k, highly detailed, 
[Debug] Neg Prompt: drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
10
Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/blocks.py", line 1561, in process_api
    result = await self.call_function(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/blocks.py", line 1179, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/utils.py", line 678, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/utils.py", line 678, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/app.py", line 74, in generate_image
    images = pipe(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/app/pipeline.py", line 331, in __call__
    ) = self.encode_prompt(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 415, in encode_prompt
    prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 798, in forward
    return self.text_model(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 703, in forward
    encoder_outputs = self.encoder(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 630, in forward
    layer_outputs = encoder_layer(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 372, in forward
    hidden_states, attn_weights = self.self_attn(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 261, in forward
    query_states = self.q_proj(hidden_states) * self.scale
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [25,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
```

atonamy · Jan 20 '24
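A side note on the log's own hint: `CUDA_LAUNCH_BLOCKING=1` makes CUDA errors surface synchronously at the failing call, which usually gives a more accurate stack trace for device-side asserts like the `indexSelectLargeIndex` one above. A minimal sketch; the variable must be set before PyTorch initializes CUDA:

```python
import os

# Must be set before torch touches CUDA, so put it at the very top of
# app.py (or set it in the Space's environment variables instead).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # imported after the env var on purpose
```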

This is a GPU memory issue.

Paper99 · Jan 20 '24
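If memory pressure is the cause, diffusers pipelines expose standard memory-saving switches, and the traceback shows the Space's pipeline reuses diffusers' SDXL `encode_prompt`, so these should apply to it as well. A hedged sketch with a stand-in SDXL pipeline (the model name here is an assumption, not necessarily what the Space loads):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Stand-in pipeline for illustration; the same methods exist on
# diffusers SDXL-derived pipelines such as the one in the Space's app.py.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # move submodules to the GPU only when needed
pipe.enable_vae_slicing()        # decode the VAE in slices to cap peak memory
```

Note that `enable_model_cpu_offload()` replaces a plain `pipe.to("cuda")` and requires `accelerate`; it trades some speed for a much smaller peak footprint.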

Then what hardware is suitable?

atonamy · Jan 20 '24

An A10 is suitable.

Paper99 · Jan 20 '24
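For anyone checking whether their card clears the requirement mentioned in the next comment (~15 GB), a quick sketch; a T4 reports 16 GB total but loses some to runtime overhead, while an A10 reports 24 GB:

```python
import torch

# Print the GPU's total memory and compare it against PhotoMaker's
# stated ~15 GB requirement (see the note below).
props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1024**3
print(f"{props.name}: {total_gb:.1f} GB VRAM")
if total_gb < 15:
    print("Below the ~15 GB requirement; expect OOM errors or asserts.")
```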

An important note: for GPUs that do not support bfloat16, please change this line to `torch_dtype = torch.float16`; the speed will be greatly improved (1 min/img before vs. 14 s/img after on a V100). The minimum GPU memory requirement for PhotoMaker is 15 GB.

Paper99 · Jan 23 '24
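A minimal sketch of the dtype switch described above, using `torch.cuda.is_bf16_supported()` so the fallback is automatic rather than hard-coded:

```python
import torch

# bfloat16 needs compute capability >= 8.0 (e.g. A10, A100); older cards
# such as the T4 and V100 fall back to float16, per the note above.
torch_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
```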