feat: add web ui for core ml stable diffusion (#56)

issue: #56
Thank you for your interest in contributing to Core ML Stable Diffusion! Please review CONTRIBUTING.md. If you would like to proceed with this pull request, please indicate your agreement to the terms outlined in CONTRIBUTING.md by checking the box below.
We appreciate your interest in the project!
- [x] I agree to the terms outlined in CONTRIBUTING.md
"I agree that all information entered is original and owned by me, and I hereby provide an irrevocable, royalty-free license to Apple to use, modify, copy, publish, prepare derivate works of, distribute (including under the Apple Sample Code License), such information and all intellectual property therein in whole or part, in perpetuity and worldwide, without any attribution."
@atiorh
Thanks for the PR @soulteary, I tested this UI and it is pretty cool! Some notes:
- Adding `gradio` to the requirements did not work out of the box for me, had to install `altair` separately. Is this expected?
- Including `gradio` in the requirements increased the total number of packages being installed by `pip install -e .` by quite a lot. Instead of adding a requirement, could you please create a try-except block around `import gradio as gr` and warn the user to install it in the case of `ModuleNotFoundError`? (See the sketch after this list.)
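A minimal sketch of that optional-import guard, assuming it sits at the top of the Web UI module (the exact message wording is illustrative):

```python
import sys

# Only Web UI users need gradio; fail with a helpful hint instead of a raw traceback.
try:
    import gradio as gr
except ModuleNotFoundError:
    sys.exit("The Web UI requires gradio. Please run `pip install gradio` and try again.")
```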
I reinitialized the environment and printed out the versions of the dependencies, @atiorh.
When we execute `pip install gradio`, its dependencies are installed automatically:
```
accelerate==0.15.0
  - numpy [required: >=1.17, installed: 1.23.5]
  - packaging [required: >=20.0, installed: 22.0]
  - psutil [required: Any, installed: 5.9.4]
  - pyyaml [required: Any, installed: 6.0]
  - torch [required: >=1.4.0, installed: 1.13.0]
  - typing-extensions [required: Any, installed: 4.4.0]
coremltools==6.1
  - numpy [required: >=1.14.5, installed: 1.23.5]
  - packaging [required: Any, installed: 22.0]
  - protobuf [required: >=3.1.0,<=4.0.0, installed: 3.20.3]
  - sympy [required: Any, installed: 1.11.1]
    - mpmath [required: >=0.19, installed: 1.2.1]
  - tqdm [required: Any, installed: 4.64.1]
diffusers==0.10.2
  - filelock [required: Any, installed: 3.8.2]
  - huggingface-hub [required: >=0.10.0, installed: 0.11.1]
    - filelock [required: Any, installed: 3.8.2]
    - packaging [required: >=20.9, installed: 22.0]
    - pyyaml [required: >=5.1, installed: 6.0]
    - requests [required: Any, installed: 2.28.1]
      - certifi [required: >=2017.4.17, installed: 2022.9.24]
      - charset-normalizer [required: >=2,<3, installed: 2.1.1]
      - idna [required: >=2.5,<4, installed: 3.4]
      - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.13]
    - tqdm [required: Any, installed: 4.64.1]
    - typing-extensions [required: >=3.7.4.3, installed: 4.4.0]
  - importlib-metadata [required: Any, installed: 5.1.0]
    - zipp [required: >=0.5, installed: 3.11.0]
  - numpy [required: Any, installed: 1.23.5]
  - Pillow [required: Any, installed: 9.3.0]
  - regex [required: !=2019.12.17, installed: 2022.10.31]
  - requests [required: Any, installed: 2.28.1]
    - certifi [required: >=2017.4.17, installed: 2022.9.24]
    - charset-normalizer [required: >=2,<3, installed: 2.1.1]
    - idna [required: >=2.5,<4, installed: 3.4]
    - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.13]
gradio==3.13.2
  - aiohttp [required: Any, installed: 3.8.3]
    - aiosignal [required: >=1.1.2, installed: 1.3.1]
      - frozenlist [required: >=1.1.0, installed: 1.3.3]
    - async-timeout [required: >=4.0.0a3,<5.0, installed: 4.0.2]
    - attrs [required: >=17.3.0, installed: 22.1.0]
    - charset-normalizer [required: >=2.0,<3.0, installed: 2.1.1]
    - frozenlist [required: >=1.1.1, installed: 1.3.3]
    - multidict [required: >=4.5,<7.0, installed: 6.0.3]
    - yarl [required: >=1.0,<2.0, installed: 1.8.2]
      - idna [required: >=2.0, installed: 3.4]
      - multidict [required: >=4.0, installed: 6.0.3]
```
Could we consider pinning specific versions in requirements.txt, so that the cost of reproducing the environment is lower for users?
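For illustration, pinned entries could look like the following (versions copied from the tree above; whether gradio belongs in the requirements at all is the open question here):

```
# Hypothetical pinned versions, taken from the environment printed above
accelerate==0.15.0
coremltools==6.1
diffusers==0.10.2
gradio==3.13.2
```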
My test env:

```
conda create -n coreml_stable_diffusion python=3.8 -y
conda activate coreml_stable_diffusion
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker -o ./models
pip install gradio
python -m python_coreml_stable_diffusion.web -i ./models --compute-unit ALL
```
@soulteary We are keen to minimize the number of requirements for maintainability, so my next recommendation would be to append a note to this README section saying: "In order to use the Python image generation pipeline through a Web UI and avoid the model load time in between image generation calls, please use the following command". Then I would mention `pip install gradio` as a pre-step and share an example command to launch the Web UI in a syntax consistent with the other example commands. Please let me know what you think. This Web UI looks really useful and I appreciate your work!
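A rough sketch of what that README addition might look like (the `python_coreml_stable_diffusion.web` entry point and flags simply mirror the test commands above and may not match the final module name):

```
In order to use the Python image generation pipeline through a Web UI and avoid
the model load time in between image generation calls, first install gradio:

pip install gradio

then launch the Web UI with a command such as:

python -m python_coreml_stable_diffusion.web -i ./models --compute-unit ALL
```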
Hi @atiorh, I am back with a patch that installs the gradio dependency automatically.
I think that in this project, although gradio is used as the framework for the Web UI, it is not necessary for those who only want to run the pipeline and never touch the Web UI.
So, perhaps people who need the Web UI could have the gradio dependency downloaded automatically when they start the program.
For Python users, it may be better not to introduce extra instructions, but to automate the trivial things instead. (life is short :D)
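A minimal sketch of that on-demand installation idea, assuming the check runs before the Web UI imports gradio (the helper name and messages are illustrative, not the actual patch):

```python
import importlib
import subprocess
import sys


def ensure_gradio():
    """Install gradio on demand so only Web UI users pay for the extra dependency."""
    try:
        importlib.import_module("gradio")
    except ModuleNotFoundError:
        print("gradio is not installed; installing it now for the Web UI ...")
        subprocess.check_call([sys.executable, "-m", "pip", "install", "gradio"])


ensure_gradio()
import gradio as gr  # either already present or installed by the call above
```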
Could use noob-level instructions on getting the web env working.
I tried this, and then for the 3rd line I get:
```
(coreml_stable_diffusion) radfaraf@macmini~ % python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker -o ./models
/Users/radfaraf/miniconda3/envs/coreml_stable_diffusion/bin/python: Error while finding module specification for 'python_coreml_stable_diffusion.torch2coreml' (ModuleNotFoundError: No module named 'python_coreml_stable_diffusion')
```
Messed around a bit with the command, thinking it needs to point to where I cloned the ml-stable-diffusion repository, but I am not able to get it to work.
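That error usually means the package has not been installed into the active conda environment; a hedged guess at the missing step, assuming the repository was cloned to `~/ml-stable-diffusion`, is:

```
cd ~/ml-stable-diffusion        # or wherever the repository was cloned
pip install -e .                # installs the python_coreml_stable_diffusion package
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet \
    --convert-text-encoder --convert-vae-decoder --convert-safety-checker -o ./models
```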