stable-diffusion-webui
[Bug]: Model hash is slow to compute, model weights are slow to read from disk, and the RAM model cache is not shared between instances
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
1. When switching models for the first time, most of the time is spent calculating the model hash or reading the checkpoint from disk.
2. When two AUTOMATIC1111 instances are started on the same server, the models cached in RAM are not shared between them.
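For scale, hashing a multi-gigabyte .ckpt with SHA-256 in a single pass takes tens of seconds even on an SSD, which matches the `calculate hash: 23.9s` entry in the console log below. A minimal sketch to measure this yourself (the chunked-read helper is illustrative, not the hashing code webui uses; the path is the one from the log):

```python
import hashlib
import time

# Path taken from the console log below; substitute your own checkpoint.
CKPT = "/opt/project/stable/models/Stable-diffusion/sd-v1-5-pruned-emaonly.ckpt"

def sha256_of_file(path, chunk_size=1024 * 1024):
    """Hash a large file in fixed-size chunks so it never has to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

start = time.time()
digest = sha256_of_file(CKPT)
# The short hash in the log ([cc6cb27103]) appears to be the first 10 hex
# characters of this digest.
print(f"sha256 {digest[:10]} computed in {time.time() - start:.1f}s")
```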
Steps to reproduce the problem
Switch to a model for the first time, so that its weights are not already cached in RAM.
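One way to make sure the load really is cold (assuming a Linux host, which the paths in the log suggest) is to drop the OS page cache before switching models; this requires root and is only a reproduction aid, not part of webui:

```python
import subprocess

# Flush dirty pages, then ask the kernel to drop the page cache, dentries and
# inodes, so the next checkpoint load has to come from disk instead of RAM.
# Requires root; Linux only.
subprocess.run(["sync"], check=True)
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")
```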
What should have happened?
1. Model hashes should be computed quickly, and weights should be read from disk quickly.
2. The model cache in RAM should be shared between instances on the same machine (see the sketch below).
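On the second point, a hedged illustration of one possible direction: if each instance read the checkpoint through a memory map, the file's pages would live in the shared OS page cache, so a second instance on the same machine would mostly hit RAM instead of re-reading the file from disk. This is only a sketch of the idea, not how webui currently loads .ckpt files:

```python
import mmap

# Hypothetical direct use of the checkpoint file; real loading would still
# need to parse it (e.g. torch.load / safetensors).
CKPT = "/opt/project/stable/models/Stable-diffusion/sd-v1-5-pruned-emaonly.ckpt"

with open(CKPT, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Pages touched here are backed by the OS page cache, which every
        # process mapping the same file shares, so the bytes are loaded into
        # physical RAM only once per machine.
        print(f"mapped {len(m)} bytes, first bytes: {m[:4]!r}")
```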
Commit where the problem happens
https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/e8a41df49fadd2cf9f23b1f02d75a4947bec5646
What platforms do you use to access the UI ?
No response
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
--api --disable-safe-unpickle --enable-insecure-extension-access --no-half --no-half-vae --xformers
List of extensions
None
Console logs
Loading weights [cc6cb27103] from /opt/project/stable/models/Stable-diffusion/sd-v1-5-pruned-emaonly.ckpt
Applying xformers cross attention optimization.
Weights loaded in 62.5s (load weights from disk: 61.1s, apply weights to model: 0.3s, move model to device: 1.1s).
...
Loading weights [cc6cb27103] from /opt/project/stable/models/Stable-diffusion/sd-v1-5-pruned-emaonly.ckpt
Applying xformers cross attention optimization.
Weights loaded in 26.9s (calculate hash: 23.9s, load weights from disk: 1.8s, apply weights to model: 0.2s, move model to device: 1.0s).
Additional information
The disk is an SSD and the GPU is a Tesla V100.