
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.

Results: 1557 Open-Assistant issues, sorted by recently updated

Bumps [typescript](https://github.com/Microsoft/TypeScript) from 4.9.5 to 5.2.2. Release notes (sourced from typescript's releases): TypeScript 5.2. For release notes, check out the release announcement. For the complete list of fixed issues, check...

dependencies

Is there any plan to release the dataset in cycles? Compared to the V1 dataset, it should have grown quite a bit by now.

data

Hello, as someone who has contributed to the data of Open Assistant, I would really appreciate it if I (and others) could make use of it locally on mobile. To...

**Description:** While using Open Assistant for chat-based interactions, I encountered an issue with changing plugins while an answer is loading. This seems to result in errors and extended...

```
from transformers import AutoTokenizer

AutoTokenizer.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319").padding_side
>> 'left'
AutoTokenizer.from_pretrained("TheBloke/Llama-2-13B-fp16").padding_side
>> 'left'
AutoTokenizer.from_pretrained("mosaicml/mpt-7b").padding_side
>> 'right'
AutoTokenizer.from_pretrained("huggyllama/llama-7b").padding_side
>> 'left'
AutoTokenizer.from_pretrained("OpenAssistant/llama-30b-sft-v8.2-2.4k-steps-system").padding_side
>> 'left'
```

Since LLaMA models use left padding, the supervised...
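A minimal sketch of one possible workaround, assuming the fine-tuning code expects right padding and can simply override the tokenizer attribute (the model ID is taken from the snippet above):

```
from transformers import AutoTokenizer

# padding_side is a plain attribute on the tokenizer and can be reassigned,
# overriding whatever the checkpoint's tokenizer_config.json specifies.
tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319")
tokenizer.padding_side = "right"
```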

bug
ml

If the model's num_embeddings is 10000 but we change the tokenizer to 10007 tokens, then after SFT training the model's num_embeddings will be 10016. That is because, in model/model_training/utils/utils.py, get_model(conf, tokenizer, pad_vocab_size_to_multiple_of=16, check_freeze_layer=True) has...
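For reference, a minimal sketch of the rounding that pad_vocab_size_to_multiple_of=16 implies (the helper name is hypothetical; only the multiple-of-16 behavior comes from the issue above):

```
import math

def pad_vocab_size(n_tokens, multiple=16):
    # Hypothetical helper: round the vocabulary size up to the next
    # multiple of `multiple`, which is how 10007 tokenizer entries
    # become 10016 embedding rows.
    return int(math.ceil(n_tokens / multiple)) * multiple

print(pad_vocab_size(10007))  # 10016
print(pad_vocab_size(10000))  # 10000 (already a multiple of 16)
```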

Edit sft Doc

I am trying to run pretraining of LLaMA 30B, and here is my command:

```
deepspeed trainer_sft.py --configs defaults llama-30b-pretrain pretrain --cache_dir $DATA_PATH --output_dir $MODEL_PATH/llama-30b-pre --deepspeed
```

And after...

ml

Hi, I've seen the Vicuna model [here](https://lmsys.org/blog/2023-03-30-vicuna/). It seems like a pretty good model and supports lots of other languages (like Persian) out of the box. It speaks Farsi but not...