adi-lb-phoenix

Results: 7 issues by adi-lb-phoenix

As mentioned in this Jupyter notebook: https://github.com/cloneofsimo/lora/blob/master/scripts/merge_lora_with_lora.ipynb

```python
from lora_diffusion import monkeypatch_lora, tune_lora_scale, monkeypatch_add_lora

monkeypatch_lora(pipe.unet, torch.load("../lora_kiriko.pt"))
monkeypatch_lora(
    pipe.text_encoder,
    torch.load("../lora_kiriko.text_encoder.pt"),
    target_replace_module=["CLIPAttention"],
)
tune_lora_scale(pipe.unet, 1.00)

torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, guidance_scale=7).images[0]
image.save("../contents/lora_with_clip.jpg")
image
```
...

Documentation for installing AdaptiveCpp on macbook Pro M1 with reference to https://github.com/AdaptiveCpp/AdaptiveCpp/issues/1433

### Describe the bug

Thank you for building and maintaining this repo; it will help me in learning parallel programming. I followed the guide for installing dpc++ from...

bug

I have tried to convert a Llama 2 model from .gguf to .bin:

```
~/llm_inferences/llama.cpp/models/meta$ ls
llama-2-7b.Q4_K_M.gguf

$ python3 export.py llama2_7b.bin --meta-llama /home/####/llm_inferences/llama.cpp/models
Traceback (most recent call last):
  File "/home/aadithya.bhat/llm_inferences/llama2.c/export.py", line 559,...
```

I started a server with the command `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ./ollama serve`. We then opened 4 terminals and executed `./ollama run codellama` in each, after which the model loaded. So now...
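The four-terminal setup can also be exercised programmatically through ollama's REST API. A minimal sketch, assuming the server started with the command above is running on the default port 11434 and that `codellama` is already pulled; the `ask`/`ask_all` helpers and the prompt strings are illustrative, not from the issue:

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # /api/generate takes a JSON body; stream=False returns a single
    # JSON object instead of a stream of chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(prompt: str, model: str = "codellama") -> str:
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def ask_all(prompts):
    # Keep up to 4 requests in flight, matching OLLAMA_NUM_PARALLEL=4.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(ask, prompts))

# ask_all(["fizzbuzz in C", "reverse a list", "binary search", "quicksort"])
# (requires the ollama server above to actually be running)
```

With `OLLAMA_MAX_LOADED_MODELS=4`, the same harness can target different `model` names and keep several models resident at once.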

user issue

I was trying to fine-tune whisper-large following the tutorial https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Whisper.ipynb#scrollTo=y7rOo10YkEqf. I made a small change to the code, namely `dtype = torch.float32` for the loaded model. ``` model, tokenizer...

fixed - pending confirmation
unsure bug?

So we have a Matrix server into which we have integrated WhatsApp through the mautrix-whatsapp bridge. I have used matrix-neo to fetch the chats and store them in a file....