Matthew Waller

15 issues by Matthew Waller

It would be wonderful if DeepSpeech models could be converted to Core ML for offline use in apps. Here is Apple's documentation for doing just that: https://developer.apple.com/documentation/coreml/converting_trained_models_to_core_ml Thanks!

enhancement
Priority: P4
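
For anyone exploring this, a minimal sketch of what the linked coremltools workflow could look like, assuming the DeepSpeech acoustic model has been exported as a TensorFlow SavedModel (the path and export format here are hypothetical):

```
import coremltools as ct

# Hypothetical path: a DeepSpeech acoustic model exported as a TF SavedModel.
mlmodel = ct.convert(
    "deepspeech_saved_model",   # SavedModel directory
    source="tensorflow",
)
mlmodel.save("DeepSpeech.mlmodel")
```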

Hello! First off, thank you @taylorlu for all your work here. I'm working to get a handle on speaker diarization and wanted to know if you had an idea of...

## ❓ Question

I have a large converted model that I would like to run on a phone. The model takes up too much memory right now. So at the expense...

question
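
One common answer to the memory trade-off hinted at here is weight quantization in coremltools; a minimal sketch, assuming a converted model at a hypothetical path:

```
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

# Hypothetical path to the large converted model.
model = ct.models.MLModel("ConvertedModel.mlmodel")

# Linearly quantize 32-bit float weights down to 8 bits: a smaller memory
# footprint at the expense of some accuracy.
quantized = quantization_utils.quantize_weights(model, nbits=8)
quantized.save("ConvertedModel_8bit.mlmodel")
```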

Hi folks, This is to [address this issue](https://github.com/huggingface/diffusers/issues/669). I converted this CrossAttention portion with coremltools, and it does in fact remove about four reshape operations and a few transposes, getting...
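
A hedged sketch of the kind of conversion described above, using torch.jit.trace plus coremltools; the dimensions and the CrossAttention import path are illustrative assumptions (the module's location varies across diffusers versions):

```
import torch
import coremltools as ct
from diffusers.models.attention import CrossAttention  # path varies by diffusers version

# Illustrative dimensions only; the issue's exact configuration isn't shown.
attn = CrossAttention(query_dim=320, cross_attention_dim=768, heads=8, dim_head=40).eval()

hidden_states = torch.randn(1, 4096, 320)
context = torch.randn(1, 77, 768)

traced = torch.jit.trace(attn, (hidden_states, context))
mlmodel = ct.convert(
    traced,
    inputs=[
        ct.TensorType(name="hidden_states", shape=hidden_states.shape),
        ct.TensorType(name="context", shape=context.shape),
    ],
)
```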

**Is your feature request related to a problem? Please describe.**
I'm trying to convert portions of the UNet into Core ML. However, CrossAttention fails to compile to the Apple Neural Engine.

**Describe...
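
One known workaround, from Apple's "Deploying Transformers on the Apple Neural Engine" guidance rather than from this issue, is to restructure linear projections as 1x1 convolutions on a (batch, channels, 1, sequence) layout, which avoids many of the reshapes and transposes that keep attention off the ANE. A minimal sketch:

```
import torch
import torch.nn as nn

class ANEFriendlyProjection(nn.Module):
    """Replaces nn.Linear with a 1x1 nn.Conv2d on a (B, C, 1, S) layout."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.proj = nn.Conv2d(dim_in, dim_out, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim_in, 1, seq_len) rather than (batch, seq_len, dim_in)
        return self.proj(x)

x = torch.randn(1, 320, 1, 4096)
y = ANEFriendlyProjection(320, 320)(x)  # -> (1, 320, 1, 4096)
```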

### Describe the bug

I'm using the following code:

```
!pip install diffusers
!pip install transformers scipy ftfy

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True,
)
prompt = "Pineapple on a white...
```

bug

What would it take to convert an entire pipeline to a Core ML model? For instance, I have saved the stable-diffusion checkpoint, and several of the models have their own configs,...
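
A hedged sketch of one workable approach: since Core ML has no container for a multi-model pipeline, each sub-model (text encoder, UNet, VAE decoder) is traced and converted on its own. The wrapper and shapes below are illustrative assumptions for SD v1 at 512x512, not a confirmed recipe:

```
import torch
import coremltools as ct
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

class UNetWrapper(torch.nn.Module):
    """Unwraps the UNet's dataclass output so torch.jit.trace can handle it."""

    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states).sample

wrapper = UNetWrapper(pipe.unet.eval())
sample = torch.randn(1, 4, 64, 64)
timestep = torch.tensor([1.0])
encoder_hidden_states = torch.randn(1, 77, 768)

traced = torch.jit.trace(wrapper, (sample, timestep, encoder_hidden_states))
mlmodel = ct.convert(
    traced,
    inputs=[
        ct.TensorType(name="sample", shape=sample.shape),
        ct.TensorType(name="timestep", shape=timestep.shape),
        ct.TensorType(name="encoder_hidden_states", shape=encoder_hidden_states.shape),
    ],
)
# The text encoder and VAE decoder would be traced and converted the same way.
```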

**Is your feature request related to a problem? Please describe.**
I'm looking for ways to speed up the process and save memory.

**Describe the solution you'd like**
I wondered if it...

stale
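
The request above is truncated, so this may not be what was proposed, but one common lever in diffusers for exactly this trade-off is attention slicing, which computes attention in chunks to lower peak memory at a small speed cost. A minimal sketch (the prompt is a placeholder):

```
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Compute attention in slices instead of all at once to cut peak memory.
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse").images[0]
```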

- [x] I have updated Purchases SDK to the latest version
- [x] I have read the [Contribution Guidelines](https://github.com/RevenueCat/purchases-ios/blob/main/Contributing/CONTRIBUTING.md)
- [x] I have searched the [Community](https://community.revenuecat.com)
- ...

bug

In the LLMEval project, generation stops after reaching a token limit. Is there a way to configure it to stop when it finds a special token? I tried to look...
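
LLMEval itself is Swift (from mlx-swift-examples), but the usual mechanism reads the same in any language: compare each sampled token against a stop set before emitting it. In this sketch, `model.sample_next` and the stop id are hypothetical stand-ins, not LLMEval's actual API:

```
STOP_TOKEN_IDS = {2}  # hypothetical: e.g. the tokenizer's EOS id

def generate(model, prompt_ids, max_tokens=256):
    tokens = list(prompt_ids)
    for _ in range(max_tokens):
        next_id = model.sample_next(tokens)  # hypothetical sampling call
        if next_id in STOP_TOKEN_IDS:
            break  # stop on the special token instead of the length limit
        tokens.append(next_id)
    return tokens
```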