Radamés Ajna
As addressed in #57, we can modify `wrapper.py` to enable the prompt in `img2img` mode and add an optional `engine_dir` argument, which is useful when running on Docker: one can re-use the...
I've made a live demo for both global and local modals. Here's a link to the project
This PR brings:
* Option to enable fast inference with LCM
* ControlNet Pose example
* Script to download the models: `download_models.py`
* `enhance_face_region` brought from Spaces to the demo here
## Problem

Currently, there is no straightforward method to integrate a custom frontend experience that connects to a Gradio API backend via `@gradio/client`. More specifically, there's no easy way to...
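As a rough sketch of what a custom frontend has to do today without `@gradio/client`: Gradio apps expose a documented REST protocol at `/call/<api_name>` (POST to start a job, then GET the event stream). The base URL and API name below are placeholders, not part of the original issue.

```javascript
// Minimal sketch of calling a Gradio backend directly from a custom frontend,
// using Gradio's documented /call/<api_name> REST protocol.

// Gradio expects positional inputs under a "data" key.
function buildGradioPayload(inputs) {
  return JSON.stringify({ data: inputs });
}

// POST kicks off the job and returns an event_id; the result then streams
// back as server-sent events on a GET with that id.
async function predict(baseUrl, apiName, inputs) {
  const res = await fetch(`${baseUrl}/call/${apiName}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildGradioPayload(inputs),
  });
  const { event_id } = await res.json();
  const stream = await fetch(`${baseUrl}/call/${apiName}/${event_id}`);
  return await stream.text(); // raw SSE; parse "data:" lines as needed
}
```

This is exactly the boilerplate a first-class client integration would hide.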
Add a "try example" on the front: https://huggingface.co/spaces/Xenova/whisper-web/discussions/2. Not sure if we should also trigger the transcription? WDYT?
Hi @antiboredom, amazing project! It looks like the issue with ffmpeg wasm MT on Chrome was solved with the latest `@ffmpeg/[email protected]`; it's worth trying! https://github.com/antiboredom/ffmpeg-explorer/blob/568390ba23b5565dd5ce623eed1d22ea4c7b3b85/src/App.svelte#L21-L22 BTW, if you ever need...
Considering that TGI now supports the Messages API, compatible with the OpenAI API specs, it would be great to have native support in the Inference package.

```bash
curl localhost:3000/v1/chat/completions \ -X...
```
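To illustrate what native support could wrap, here is a hedged sketch of a plain `fetch` call against TGI's OpenAI-compatible `/v1/chat/completions` endpoint. The host/port and the `"tgi"` model name are placeholders (TGI serves a single model, so the name is conventional), not part of the original request.

```javascript
// Sketch: Messages API request against a local TGI server.

function buildChatRequest(messages, stream = false) {
  // OpenAI-style body; TGI accepts "tgi" as the model name for a
  // single-model server.
  return { model: "tgi", messages, stream };
}

async function chatCompletion(baseUrl, messages) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(messages)),
  });
  return res.json(); // OpenAI-style { choices: [{ message: ... }] }
}
```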
Hi @coyotte508, I was trying to use the `@huggingface/inference` custom Request/StreamingRequest as a client for Ollama. It almost works, but it needs a couple of custom arguments and requires...
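For context, a hedged sketch of the target the custom Request helpers almost reach: Ollama's documented `POST /api/chat` endpoint on its default port. One source of friction for a generic streaming client is that Ollama streams NDJSON (one JSON object per line) rather than SSE. The model name `"llama3"` is a placeholder for whatever is pulled locally.

```javascript
// Sketch: minimal client for Ollama's /api/chat endpoint.

function buildOllamaRequest(model, messages, stream = false) {
  // Ollama expects { model, messages, stream }.
  return { model, messages, stream };
}

// Each streamed line is a standalone JSON object with a "message" field
// carrying the partial assistant content.
function parseOllamaStreamChunk(line) {
  const obj = JSON.parse(line);
  return obj.message ? obj.message.content : "";
}

async function ollamaChat(messages, model = "llama3") {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOllamaRequest(model, messages)),
  });
  return res.json(); // non-streaming: one object with a "message" field
}
```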
Hi @andresprados, amazing work! I've built a demo on Hugging Face: https://huggingface.co/spaces/radames/SPIGA-face-alignment-headpose-estimator. Would you like to add a link to your repo? Also, we have a Papers interface now; I hope...
Add two examples for SvelteKit client + server. Docs with a walkthrough.