Ahsen Khaliq
@ben-hayes thanks, when trying extract_f0_with_pyin I get this error: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/usr/local/lib/python3.7/dist-packages/flask/app.py", line 1952, in full_dispatch_request rv =...
@ben-hayes also adding the extract_f0_with_pyin method to the colab as an option alongside crepe
@rawandahmad698 the update should work now; it uses a session token, so there is a queue and wait time
@unixpickle sounds good, also a community member released a demo today that includes image -> point cloud https://huggingface.co/spaces/anzorq/point-e_demo
this PR should fix it: https://github.com/modelscope/modelscope/pull/173
awesome, also there is a Gradio Space hosted on Hugging Face here: https://huggingface.co/spaces/hysts/Shap-E; if there is interest in also adding a badge to the readme, see badges: https://huggingface.co/datasets/huggingface/badges
> torch==1.8.0

can confirm: using torch==1.8.0 and restarting the runtime in Colab works
@kbrodt thanks for adding the demo, although it uses the Streamlit SDK. Have you tried Gradio (https://gradio.app/)? It has some nice features like queueing and concurrency, and is used in...
@kbrodt yes, you can request GPU access like this: https://huggingface.co/spaces/THUDM/CogVideo/discussions/2, usually it is a T4 GPU. For Gradio, examples are cached on Spaces with the setting cache_examples=True, I...
@kbrodt awesome, thanks, I will also follow up with the team on the GPU request