brandonkoch3
Thank you for the reply (and for marking as a feature request). Both ARKit and Vision can now handle this and I would love to see ARCore support this; it...
I am experiencing this same issue when trying to generate an image on an iPhone 14 Pro Max, as well (iOS 16.2, RC at the time of this posting). I...
> Interestingly, when I moved all the resources to ODR. Both iPhone 12 Pro and iPad Pro works. @legolasW Can you clarify what ODR is? This sounds intriguing, but I...
> Are you using an iPad with 16 or 8GB of RAM? M1 or M2? My iPad is 16GB, so basically, that is what I have found at this point...
I was able to use a local model that I cloned from Hugging Face by passing the local path to the model folder in place of the `--model-version` argument. For example,...
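A minimal sketch of what this might look like, assuming the `torch2coreml` conversion script from apple/ml-stable-diffusion and a hypothetical local clone path (substitute your own model folder):

```shell
# Clone a diffusers-format model locally (example repo; use whichever model you need).
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 ./stable-diffusion-v1-4

# Pass the local folder path where a Hugging Face model ID would normally go.
python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-unet --convert-text-encoder --convert-vae-decoder \
  --model-version ./stable-diffusion-v1-4 \
  -o ./coreml-output
```

The key point is only that `--model-version` accepts a filesystem path in place of a hub model ID; the exact set of `--convert-*` flags depends on which components you need.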
> Local doesn't seem to work for me -- wondering if maybe miniconda3 is not supported and I need the larger package? It also seems to expect to find a model_index.json file when...
My general understanding (which is very cursory) is that the Core ML tools are looking for more than just the `.ckpt` file; they expect the entire folder structure (the...
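To make the distinction concrete, here is a small sketch that checks whether a folder looks like a diffusers-format model rather than a bare checkpoint. The entry names (`model_index.json`, `unet`, `text_encoder`, etc.) are assumptions based on the standard Hugging Face diffusers layout, not anything specific to this converter:

```python
import os
import tempfile

# Entries found in a standard diffusers model folder (assumed layout).
# A bare .ckpt export has none of these, which is why the tools reject it.
EXPECTED = ["model_index.json", "unet", "text_encoder", "vae", "tokenizer", "scheduler"]

def missing_diffusers_entries(path):
    """Return the expected entries that are absent from `path`."""
    return [name for name in EXPECTED if not os.path.exists(os.path.join(path, name))]

# Example: a folder containing only a .ckpt file is missing everything expected.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "model.ckpt"), "w").close()
    print(missing_diffusers_entries(d))  # every expected entry is reported missing
```

So the practical fix is usually to convert the `.ckpt` into the full diffusers folder first (as the linked wiki describes), then point the Core ML tools at that folder.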
> I've been working on converting `ckpt` files to Core ML as well. I found a process that "sort of" works but the generated images are still funky. > https://github.com/godly-devotion/mochi-diffusion/wiki/How-to-convert-ckpt-files-to-Core-ML...
> @alelordelo If you want to convert a model to mps from a .ckpt file, you can do so following these steps: > > **Step One:** First prepare to send...
> Ah interesting, thank you! It looks like "Personal development teams do not support the Extended Virtual Addressing capability." unfortunately; did you find that that was required? I'm not entirely...