Arthur Wu

Results: 26 issues by Arthur Wu

I looked at the llama source code and there is an intermediate layer that PaLM doesn't have, or did I just miss it? Can you point it out clearly? Or how...

Or, if I want to use memory-efficient attention, must I call scaled_dot_product_attention? PyTorch 2.0 includes an optimized and memory-efficient attention implementation through the [torch.nn.functional.scaled_dot_product_attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) function.
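For what it's worth, a minimal sketch of forcing the memory-efficient backend in PyTorch 2.0 (the shapes are arbitrary; `torch.backends.cuda.sdp_kernel` is the 2.0-era context manager, later superseded by `torch.nn.attention.sdpa_kernel`):

```python
import torch
import torch.nn.functional as F

# arbitrary example shapes: (batch, heads, seq_len, head_dim); needs a CUDA device
q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)

# restrict SDPA to the memory-efficient kernel only (PyTorch 2.0 API)
with torch.backends.cuda.sdp_kernel(
    enable_flash=False, enable_math=False, enable_mem_efficient=True
):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```

Outside the context manager, scaled_dot_product_attention picks a backend automatically, so an explicit call like this is only needed to pin a specific kernel.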

Can't train the reward model with a batch:

```python
seq, prompt_mask, labels = next(train_loader)
loss = reward_model(seq, prompt_mask = prompt_mask, labels = labels)
accelerator.backward(loss / GRADIENT_ACCUMULATE_EVERY)
```

I set this up but I get an error...
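For comparison, a minimal standalone reward-model step along the lines of the PaLM-rlhf-pytorch README, without accelerate (the model sizes and mock data are illustrative assumptions):

```python
import torch
from palm_rlhf_pytorch import PaLM, RewardModel

# small PaLM backbone; sizes are illustrative assumptions
palm = PaLM(num_tokens=20000, dim=512, depth=12, causal=False)
reward_model = RewardModel(palm, num_binned_output=5)  # e.g. ratings 1-5

# mock batch: token ids, a mask marking the prompt portion, and rating labels
seq = torch.randint(0, 20000, (4, 1024))
prompt_mask = torch.zeros(4, 1024).bool()
labels = torch.randint(0, 5, (4,))

loss = reward_model(seq, prompt_mask=prompt_mask, labels=labels)
loss.backward()
```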

After the upload starts in the sample code, the dialog box stops responding. How can both work at the same time? Should a new process be started?
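If this is a blocking upload freezing the Chainlit chat, one minimal sketch is to push the blocking call onto a worker thread with cl.make_async so the event loop stays responsive (slow_upload and its argument are hypothetical placeholders, not from the sample code):

```python
import asyncio
import chainlit as cl

def slow_upload(path: str) -> None:
    """Placeholder for the blocking upload in the sample code (hypothetical)."""
    ...

@cl.on_message
async def main(message: cl.Message):
    # cl.make_async wraps the sync function so it runs in a thread pool,
    # leaving the chat usable while the upload proceeds
    task = asyncio.create_task(cl.make_async(slow_upload)("data.bin"))
    await cl.Message(content="Upload running in the background...").send()
    await task  # wait for the upload to finish before ending the handler
```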

needs-triage

![image](https://github.com/Chainlit/chainlit/assets/5139426/44b88b97-eb37-4105-9d74-c2a019acfd84)

needs-triage

While logging in, I get "Unable to sign in." and my code is the same as the example:

```python
import chainlit as cl

@cl.password_auth_callback
def auth_callback(username: str, password: str):
    # Fetch the user matching username from...
```
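For reference, a complete callback looks roughly like the sketch below; note that Chainlit's password auth also requires the CHAINLIT_AUTH_SECRET environment variable to be set (the hard-coded credentials here are placeholders):

```python
import chainlit as cl

@cl.password_auth_callback
def auth_callback(username: str, password: str):
    # Replace this check with a real lookup against your user store;
    # the hard-coded credentials are placeholders only
    if (username, password) == ("admin", "admin"):
        return cl.User(identifier="admin", metadata={"role": "admin"})
    return None  # returning None rejects the login
```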

bug

I want to build a login page. How do I set the CSS style to match Chainlit's? Any example?
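Not a full answer, but recent Chainlit versions can load a custom stylesheet served from the public/ directory via config.toml; a minimal sketch (the file name is an assumption):

```toml
# .chainlit/config.toml
[UI]
# point the UI at your own stylesheet (file name is an assumption)
custom_css = "/public/custom.css"
```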

enhancement

Try running `./train_all_data_at_once.sh Transformer`, then get this error:

```
Traceback (most recent call last):
  File "run.py", line 352, in <module>
    main()
  File "run.py", line 329, in main
    train(args, model, tokenizer, device,...
```

```python
payload = {
    "text": text,
    "reference_text": None,
    "reference_audio": None,
    "max_new_tokens": 0,
    "chunk_length": 30,
    "top_k": 0,
    "top_p": 0.7,
    "repetition_penalty": 1.5,
    "temperature": 0.7,
    "speaker": "纳西妲",
    "format": "wav",
}
```

With this request I still get different speakers' voices. What I need: pass no reference audio, pass only the text, and pin the output to one specific speaker (a female voice).
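For context, a sketch of how such a payload would typically be sent to a local TTS server (the URL and the /v1/tts path are assumptions for illustration, not confirmed from the issue):

```python
import requests

# endpoint URL and path are assumptions for illustration only
resp = requests.post("http://127.0.0.1:8000/v1/tts", json=payload)
resp.raise_for_status()

with open("out.wav", "wb") as f:
    f.write(resp.content)  # assuming raw wav bytes back, per "format": "wav"
```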

bug

Loaded openchat-3.5-0106-gemma-Q5_K_M.gguf, then got an error. It's the newest version of llama.cpp.