Simon Lund
### Question
In the README, you report finetuning with LoRA on 8xA100 40GB. However, the batch size (16) specified in `scripts/v1.5/lora_finetune_task.sh` is too large for 40GB VRAM.
The DATASET.md file of Seed-Bench now contains instructions for both versions of Seed-Bench (v1 and v2). In the LLaVA-1.5 paper, you reference the paper presenting Seed-Bench-1, so this link as...
### Is there an existing issue for this?
- [X] I have searched the existing issues

### Describe the bug
With the code below, the bp middleware runs before the...
https://github.com/ahopkins/sanic-session/blob/551de4b503ab1a595b3b2b07cbe08806508a043e/sanic_session/base.py#L164-L169 Hello, I was reading your code and stumbled upon the linked lines. I don't understand why **line 167** doesn't throw an error, because if `if not req[self.session_name]`...
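For context, a minimal sketch of why that subscript looks surprising (hypothetical classes, not sanic-session's actual request object): in plain Python, `obj[key]` on a missing key raises `KeyError` before `not` is even evaluated, so `if not req[self.session_name]` only stays silent if `__getitem__` guards the lookup.

```python
# Hypothetical stand-ins for a request-like object; not sanic-session code.

class PlainRequest:
    """Bare mapping: a missing key raises KeyError."""
    def __init__(self):
        self._store = {}

    def __getitem__(self, key):
        return self._store[key]  # missing key -> KeyError


class GuardedRequest(PlainRequest):
    """Guarded lookup: a missing key yields None, which is falsy."""
    def __getitem__(self, key):
        return self._store.get(key)  # missing key -> None


plain = PlainRequest()
try:
    if not plain["session"]:
        pass
except KeyError:
    print("plain lookup raised KeyError")

guarded = GuardedRequest()
if not guarded["session"]:
    print("guarded lookup returned a falsy value")
```

So the question comes down to whether the real object's `__getitem__` (or the surrounding code) guarantees the key exists or swallows the miss.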
I would like to implement a feature similar to GitHub Copilot for text editing. That is, the AI can make text suggestions at the current cursor position, which can then...
https://learn.svelte.dev/tutorial/dom-event-forwarding In this chapter, it says that you can also forward DOM events. However, the previous chapter starts with “Unlike DOM events, component events don't bubble.” So, I wonder why...
**The bug**
The program terminates with the simplest setup. I also tried llama.cpp alone, which works fine. Output:
```bash
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:...
```