[Bug]: Running Fooocus with an AMD CPU results in the CPU utilization maxing out
Checklist
- [ ] The issue has not been resolved by following the troubleshooting guide
- [ ] The issue exists on a clean installation of Fooocus
- [ ] The issue exists in the current version of Fooocus
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
When I run Fooocus on an A100-80G-12C instance (Intel CPU), CPU utilization fluctuates normally, but when I run it on an A100-80-12 instance (AMD CPU), CPU utilization maxes out. What could be causing this?
Steps to reproduce the problem
1. Run Fooocus on an A100-80-12 instance (AMD CPU).
2. Call the API `/v1/generation/text-to-image` in a loop (see the sketch below).
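A minimal sketch of the reproduction loop in Python: the endpoint path is taken from the report, while the host, port, and payload fields are assumptions about a typical Fooocus-API deployment and may need adjusting.

```python
# Sketch of the reproduction loop. Host/port and payload fields are
# assumptions; only the endpoint path comes from the report.
import time
import requests

URL = "http://127.0.0.1:8888/v1/generation/text-to-image"  # assumed host/port

while True:
    resp = requests.post(
        URL,
        json={"prompt": "a photo of a cat"},  # illustrative payload
        timeout=600,
    )
    resp.raise_for_status()
    print(resp.status_code, len(resp.content))
    time.sleep(1)  # small pause between requests
```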
What should have happened?
CPU utilization should fluctuate normally, as it does on the Intel CPU instance.
What browsers do you use to access Fooocus?
No response
Where are you running Fooocus?
Locally with virtualization (e.g. Docker)
What operating system are you using?
Linux
Console logs
[2024-06-06 13:43:13] INFO [Task Queue] Task queue start task, job_id=xxxxxxxxxx
[2024-06-06 13:43:13] INFO [Parameters] Adaptive CFG = 7.0
[2024-06-06 13:43:13] INFO [Parameters] Sharpness = 2.0
[2024-06-06 13:43:13] INFO [Parameters] ControlNet Softness = 0.25
[2024-06-06 13:43:13] INFO [Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[2024-06-06 13:43:13] INFO [Parameters] CFG = 4.0
[2024-06-06 13:43:13] INFO [Parameters] Seed = 1761225031969009902
[2024-06-06 13:43:13] INFO [Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[2024-06-06 13:43:13] INFO [Parameters] Steps = 30 - 15
[2024-06-06 13:43:13] INFO [Fooocus--01] Initializing ...
[2024-06-06 13:43:13] INFO [Fooocus--01] Loading models ...
[2024-06-06 13:43:13] Refiner unloaded.
[2024-06-06 13:43:13] INFO [Fooocus--01] Processing prompts ...
[2024-06-06 13:43:13] INFO [Fooocus--01] Encoding positive #1 ...
[2024-06-06 13:43:13] INFO [Fooocus--01] Encoding negative #1 ...
[2024-06-06 13:43:14] INFO [Parameters] Denoising Strength = 1.0
[2024-06-06 13:43:14] INFO [Parameters] Initial Latent shape: Image Space (1024, 1024)
[2024-06-06 13:43:14] INFO [Fooocus] Preparation time: 0.12 seconds
[2024-06-06 13:43:14] [Sampler] refiner_swap_method = joint
[2024-06-06 13:43:14] [Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
[2024-06-06 13:43:18]
0%| | 0/30 [00:00<?, ?it/s]
3%|▎ | 1/30 [00:00<00:05, 5.66it/s]
7%|▋ | 2/30 [00:00<00:04, 6.22it/s]
10%|█ | 3/30 [00:00<00:04, 6.44it/s]
13%|█▎ | 4/30 [00:00<00:03, 6.54it/s]
17%|█▋ | 5/30 [00:00<00:03, 6.57it/s]
20%|██ | 6/30 [00:00<00:03, 6.58it/s]
23%|██▎ | 7/30 [00:01<00:03, 6.58it/s]
27%|██▋ | 8/30 [00:01<00:03, 6.56it/s]
30%|███ | 9/30 [00:01<00:03, 6.53it/s]
33%|███▎ | 10/30 [00:01<00:03, 6.43it/s]
37%|███▋ | 11/30 [00:01<00:02, 6.48it/s]
40%|████ | 12/30 [00:01<00:02, 6.52it/s]
43%|████▎ | 13/30 [00:02<00:02, 6.54it/s]
47%|████▋ | 14/30 [00:02<00:02, 6.57it/s]
50%|█████ | 15/30 [00:02<00:02, 6.59it/s]
53%|█████▎ | 16/30 [00:02<00:02, 6.60it/s]
57%|█████▋ | 17/30 [00:02<00:01, 6.62it/s]
60%|██████ | 18/30 [00:02<00:01, 6.62it/s]
63%|██████▎ | 19/30 [00:02<00:01, 6.64it/s]
67%|██████▋ | 20/30 [00:03<00:01, 6.59it/s]
70%|███████ | 21/30 [00:03<00:01, 6.61it/s]
73%|███████▎ | 22/30 [00:03<00:01, 6.61it/s]
77%|███████▋ | 23/30 [00:03<00:01, 6.62it/s]
80%|████████ | 24/30 [00:03<00:00, 6.63it/s]
83%|████████▎ | 25/30 [00:03<00:00, 6.64it/s]
87%|████████▋ | 26/30 [00:03<00:00, 6.65it/s]
90%|█████████ | 27/30 [00:04<00:00, 6.63it/s]
93%|█████████▎| 28/30 [00:04<00:00, 6.65it/s]
97%|█████████▋| 29/30 [00:04<00:00, 6.67it/s]
100%|██████████| 30/30 [00:04<00:00, 6.48it/s]
100%|██████████| 30/30 [00:04<00:00, 6.56it/s]
[2024-06-06 13:43:18] INFO [Fooocus--01] Checking for NSFW content ...
[2024-06-06 13:43:21] INFO [Fooocus] Generating and saving time: 7.69 seconds
[2024-06-06 13:43:24] INFO [Task Queue] Finish task,
Additional information
No response
I cannot test this locally with an AMD CPU, but are you certain you've set up both instances the same and have sufficient RAM/swap available?
I set up two identical instances in the cloud, both with A100-40G-8C resources, and the RAM and swap space are sufficient. Using `py-spy top --pid 123` for analysis, I found that `_save` (PIL/ImageFile.py) shows high CPU utilization.
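As a rough sanity check that PNG encoding in PIL's `_save` can be CPU-heavy on its own, here is a hypothetical micro-benchmark (not part of Fooocus): the image size matches the 1024x1024 output in the log, while the output path and loop count are arbitrary.

```python
# Hypothetical micro-check: time PIL's PNG encoding, the code path py-spy
# pointed at (_save in PIL/ImageFile.py). Not taken from Fooocus itself.
import os
import time

from PIL import Image

# Random 1024x1024 RGB image so the PNG encoder has real work to do
# (a flat color would compress almost instantly).
img = Image.frombytes("RGB", (1024, 1024), os.urandom(1024 * 1024 * 3))

start = time.perf_counter()
for i in range(10):
    img.save(f"/tmp/pil_save_test_{i}.png")  # arbitrary output path
elapsed = time.perf_counter() - start
print(f"10 PNG saves: {elapsed:.2f}s total, {elapsed / 10:.2f}s per image")
```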
Closing as stale.