CoderTillAITakesOver
I am using the OpenAI API and can confirm that even over the API calls {{user}} is not getting resolved. Will attach logs by end of my day (IST).
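For anyone debugging this in the meantime, the macro substitution can be done client-side before the request goes out. A minimal sketch, assuming plain string placeholders; `resolve_macros` is a hypothetical helper of my own, not part of any library:

```python
# Minimal sketch: resolve {{user}}/{{char}} placeholders locally before
# sending the prompt to the OpenAI API. resolve_macros is a hypothetical
# helper, not from any library.
def resolve_macros(prompt: str, user: str, char: str) -> str:
    resolved = prompt.replace("{{user}}", user)
    return resolved.replace("{{char}}", char)

print(resolve_macros("{{char}} waves at {{user}}.", user="Alice", char="Bob"))
# prints "Bob waves at Alice."
```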
> It's weird that {{char}} and {{user}} don't get the same treatment. And what you say makes more sense. However, I haven't seen a model mistake the user for the AI,...
> I got this working as well! Inference time seems to increase more than linearly with prompt size
>
> * 3 seconds of audio: 10 seconds of generation
> ...

That matches what I am getting: 2 s of audio takes 11 seconds and 6 s of audio takes 36 seconds.
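Those figures are easier to compare as a realtime factor (seconds of generation per second of audio). A quick sanity check using only the numbers quoted above:

```python
# Realtime factor = generation time / audio length, from the numbers above.
timings = {3: 10, 2: 11, 6: 36}  # audio seconds -> generation seconds
factors = {audio_s: gen_s / audio_s for audio_s, gen_s in timings.items()}
for audio_s, factor in factors.items():
    print(f"{audio_s}s of audio: {factor:.2f}x realtime")
```

The factor growing from 5.5x at 2 s to 6.0x at 6 s is consistent with the more-than-linear scaling mentioned above.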
> I'm using an M1 Ultra (64 GB), and dpmpp_sde_gpu/normal works fine with SDXL and other models. No change in speed or anything. So you just upgraded? No downgrade of...
> M1 MacBook Pro, macOS 14.4, with torchvision==0.16.2: most checkpoints work, with a few exceptions. This took 100 seconds, which seems normal for SDXL. So we need to wait for a...
Is it possible to load local Core ML models once we convert them? I think right now we need to download them from the Hub?
> I did use the Qwen model. What can I do? You were not facing this before, I guess? It is only after an upgrade? If so, then please revert...
I have the following set, and it goes to GPU:

```
import numpy as np
from scipy.io import wavfile
import os

os.environ["SUNO_ENABLE_MPS"] = "True"
os.environ["SUNO_USE_SMALL_MODELS"] = "False"
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["SUNO_OFFLOAD_CPU"] = ...
```
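Once generation works, saving the result is just a `scipy.io.wavfile` call, which is presumably why `wavfile` is imported above. A small self-contained sketch; the sine wave stands in for the model's output array, and 24000 Hz is an assumption (Bark's documented output sample rate):

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 24000  # assumption: Bark's output sample rate

# Stand-in for the generated audio: one second of a 440 Hz tone as float32.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t).astype(np.float32)

# wavfile.write accepts float32 arrays in the range [-1.0, 1.0].
wavfile.write("output.wav", SAMPLE_RATE, audio)
```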