OpenChatKit
## Error when I run `bash training/finetune_llama-2-7b-32k-mqa.sh`
Running the command returns: `NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
## How to fix
Re-install datasets...
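A possible workaround, sketched below: this error usually comes from a version mismatch between `datasets` and `fsspec`, so re-installing a newer `datasets` (or pinning an older `fsspec`) tends to resolve it. The exact version pins are assumptions, not values confirmed by the maintainers:

```bash
# Sketch only: version pins are assumptions, adjust to what resolves in your env.
pip uninstall -y datasets
pip install -U "datasets>=2.14.6"      # newer datasets handles streaming caches on a LocalFileSystem

# Alternatively, keep datasets as-is and pin fsspec to an older release:
# pip install "fsspec==2023.9.2"
```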
I'm trying to run inference with an API key. I'm using Streamlit to host it, but it's not working: it shows "invalid API key provided" even though the correct API key is supplied.
I tried to initialize the env by following the [guide](https://github.com/togethercomputer/OpenChatKit#requirements), but some dependencies cannot be resolved.
```
Could not solve for environment specs
The following packages are incompatible
├─ pyarrow...
```
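One way to work around this kind of solver conflict, as a rough sketch: recreate the environment with mamba (a faster, often more permissive solver), and if the solve still fails on pyarrow, relax its pin in the spec file. The spec file name follows the guide's `conda env create` step; the relaxed pyarrow range is an assumption, not a tested requirement:

```bash
# Sketch: install mamba and retry the environment solve with it.
conda install -n base -c conda-forge mamba
mamba env create -f environment.yml    # same spec file referenced by the OpenChatKit guide

# If pyarrow still conflicts, loosen its entry in environment.yml, e.g.:
#   - pyarrow>=11.0        # assumed-compatible floor instead of a strict ==pin
```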
How did you train [Fine-tuning Llama-2-7B-32K-beta](https://github.com/togethercomputer/OpenChatKit/blob/main/README.md#fine-tuning-llama-2-7b-32k-beta)?
[output_and_error.log](https://github.com/togethercomputer/OpenChatKit/files/13232768/output_and_error.log)
A basic question: how many GPU-days does it take to fine-tune Llama-2-7B-32K-beta? (GPU model? number of GPUs? days?)
I ran `bash training/finetune_RedPajama-INCITE-Chat-3B-v1.sh` with my configuration changes as below:
```
--lr 1e-5 --seq-length 2048 --batch-size 8 --micro-batch-size 1 --gradient-accumulate-step 1 \
--num-layers 2 --embedding-dim 2560 \
--world-size 1 --pipeline-group-size 1 --data-group-size 1 \
...
```
Hi, I'm curious about Together AI's reproduction of the experiment from the paper "Lost in the Middle". It seems that Llama-2 and Llama-2-32K do not show the "U-shaped performance" in the long-context...
First of all, many thanks for the release of Llama-2-7B-32K and your valuable contributions! It's appreciated that you provide example scripts for fine-tuning; however, the (for me)...
Environment fails to build because of fastparquet's strict versioning requirements
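A minimal sketch of how such a conflict can be unblocked, assuming the strict pin lives in the environment spec: install the rest of the environment first, then add fastparquet separately with a looser constraint so the resolver can pick a compatible build. The version floor below is an assumption, not a tested requirement:

```bash
# Sketch only: relax the fastparquet constraint and install it after the main env.
# The ">=2023.4" floor is assumed, not verified against the repo's pins.
pip install "fastparquet>=2023.4"
```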